path | concatenated_notebook
---|---
examples/1_Basics.ipynb
|
###Markdown
Example 1: Basics Begin by importing AutoMPC.
###Code
import autompc as ampc
import numpy as np
###Output
Loading AutoMPC...
Finished loading AutoMPC
###Markdown
SystemsLet's begin by showing how to define a System. In AutoMPC, a System defines the variables of control and observation for a particular robot. Here we define `simple_sys`, which has two observation variables (x and y) and one control variable (u). Optionally, the system can also include the time step at which data is sampled. Here we define the time step as 0.05 s.
###Code
simple_sys = ampc.System(["x", "y"], ["u"], dt=0.05)
###Output
_____no_output_____
###Markdown
Given a system, we can access its properties as follows
###Code
print("Observation Dimension: ", simple_sys.obs_dim)
print("Observation Variables: ", simple_sys.observations)
print("Control Dimension: ", simple_sys.ctrl_dim)
print("Control Variables: ", simple_sys.controls)
###Output
Observation Dimension: 2
Observation Variables: ['x', 'y']
Control Dimension: 1
Control Variables: ['u']
###Markdown
TrajectoriesThe Trajectory class stores a sequence of controls and observations. Trajectories are defined with respect to a particular system. Here we define a zero trajectory for `simple_sys` with 10 time steps.
###Code
traj = ampc.zeros(simple_sys, 10)
###Output
_____no_output_____
###Markdown
There are a couple different ways to set trajectory values. We demonstrate a few below:
###Code
traj[0, "x"] = 1.0 # Set x to 1 at timestep 0
traj[1, "u"] = 2.0 # Set u to 2 at timestep 1
traj[2].obs[:] = np.array([3.0, 4.0]) # Set the observation (x and y) to [3,4] at timestep 2
traj[3].ctrl[:] = np.array([5.0]) # Set the control (u) to [5] at timestep 3
###Output
_____no_output_____
###Markdown
Similarly, there are a number of ways to read trajectory values.
###Code
print("Value of y at timestep 2: ", traj[2, "y"])
print("Observation at timestep 0: ", traj[0].obs)
print("Control at timestep 1: ", traj[1].ctrl)
###Output
Value of y at timestep 2: 4.0
Observation at timestep 0: [1. 0.]
Control at timestep 1: [2.]
###Markdown
We can also access the entire set of observations and controls for a trajectory as numpy arrays:
###Code
print("Observations")
print("------------")
print(traj.obs)
print("")
print("Controls")
print("--------")
print(traj.ctrls)
###Output
Observations
------------
[[1. 0.]
[0. 0.]
[3. 4.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]]
Controls
--------
[[0.]
[2.]
[0.]
[5.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]]
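###Markdown
Because `traj.obs` and `traj.ctrls` are ordinary NumPy arrays, standard NumPy operations apply to them directly. As a small illustrative sketch (not part of the original example), the squared control effort at each timestep could be computed like this:
###Code
# Hypothetical example: per-timestep squared control effort, using only the arrays shown above
effort = np.sum(traj.ctrls**2, axis=1)
print("Squared control effort per timestep:", effort)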
|
7 QUORA INSINCERE QUESTIONN/text-pre-processing-techniques.ipynb
|
###Markdown
Text Pre-processing TechniquesThese techniques may or may not be useful for this competition. Given that this is a text competition, I thought it would be a good opportunity to present them. I have used them before in two papers: [A Comparison of Pre-processing Techniques for Twitter Sentiment Analysis](https://link.springer.com/chapter/10.1007/978-3-319-67008-9_31) and [A comparative evaluation of pre-processing techniques and their interactions for twitter sentiment analysis](https://www.sciencedirect.com/science/article/pii/S0957417418303683). The full code is in this [Github repository](https://github.com/Deffro/text-preprocessing-techniques) with some extra techniques.
###Code
import pandas as pd
import numpy as np
import re
###Output
_____no_output_____
###Markdown
Load Dataset and print some questions
###Code
train_df = pd.read_csv("../input/train.csv")
X_train = train_df["question_text"].fillna("dieter").values
test_df = pd.read_csv("../input/test.csv")
X_test = test_df["question_text"].fillna("dieter").values
y = train_df["target"]
text = train_df['question_text']
for row in text[:10]:
print(row)
###Output
_____no_output_____
###Markdown
1. Remove Numbers**Example:** Which is best powerbank for iPhone 7 in India? -> Which is best powerbank for iPhone in India?
###Code
def removeNumbers(text):
""" Removes integers """
text = ''.join([i for i in text if not i.isdigit()])
return text
text_removeNumbers = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_removeNumbers['TextBefore'] = text.copy()
for index, row in text_removeNumbers.iterrows():
row['TextAfter'] = removeNumbers(row['TextBefore'])
text_removeNumbers['Changed'] = np.where(text_removeNumbers['TextBefore']==text_removeNumbers['TextAfter'], 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_removeNumbers[text_removeNumbers['Changed']=='yes']), len(text_removeNumbers), 100*len(text_removeNumbers[text_removeNumbers['Changed']=='yes'])/len(text_removeNumbers)))
for index, row in text_removeNumbers[text_removeNumbers['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
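###Markdown
As a quick sanity check (not in the original notebook), the function can be called on the example question quoted above:
###Code
# Hypothetical check on the example question quoted earlier
print(removeNumbers("Which is best powerbank for iPhone 7 in India?"))
# The digit "7" is dropped; note that the surrounding spaces remain, leaving a double space.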
###Markdown
2. Replace Repetitions of PunctuationThis technique: - replaces repetitions of exclamation marks with the tag "multiExclamation" - replaces repetitions of question marks with the tag "multiQuestion" - replaces repetitions of stop marks with the tag "multiStop" **Example:** How do I overcome the fear of facing an interview? It's killing me inside..what should I do? -> How do I overcome the fear of facing an interview? It's killing me inside multiStop what should I do?
###Code
def replaceMultiExclamationMark(text):
""" Replaces repetitions of exlamation marks """
text = re.sub(r"(\!)\1+", ' multiExclamation ', text)
return text
def replaceMultiQuestionMark(text):
""" Replaces repetitions of question marks """
text = re.sub(r"(\?)\1+", ' multiQuestion ', text)
return text
def replaceMultiStopMark(text):
""" Replaces repetitions of stop marks """
text = re.sub(r"(\.)\1+", ' multiStop ', text)
return text
text_replaceRepOfPunct = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_replaceRepOfPunct['TextBefore'] = text.copy()
for index, row in text_replaceRepOfPunct.iterrows():
row['TextAfter'] = replaceMultiExclamationMark(row['TextBefore'])
row['TextAfter'] = replaceMultiQuestionMark(row['TextAfter']) # chain on the previous result so all three replacements apply
row['TextAfter'] = replaceMultiStopMark(row['TextAfter'])
text_replaceRepOfPunct['Changed'] = np.where(text_replaceRepOfPunct['TextBefore']==text_replaceRepOfPunct['TextAfter'], 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_replaceRepOfPunct[text_replaceRepOfPunct['Changed']=='yes']), len(text_replaceRepOfPunct), 100*len(text_replaceRepOfPunct[text_replaceRepOfPunct['Changed']=='yes'])/len(text_replaceRepOfPunct)))
for index, row in text_replaceRepOfPunct[text_replaceRepOfPunct['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
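###Markdown
A minimal check (not in the original notebook) that chains all three replacement functions on the example question quoted above:
###Code
# Hypothetical check: apply the three replacements in sequence so each operates on the previous result
s = "How do I overcome the fear of facing an interview? It's killing me inside..what should I do?"
s = replaceMultiExclamationMark(s)
s = replaceMultiQuestionMark(s)
s = replaceMultiStopMark(s)
print(s)  # the ".." is replaced by " multiStop "; single "?" marks are left untouched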
###Markdown
3. Remove Punctuation**Example:** Why haven't two democracies never ever went for a full fledged war? What stops them? -> Why havent two democracies never ever went for a full fledged war What stops them
###Code
import string
translator = str.maketrans('', '', string.punctuation)
text_removePunctuation = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_removePunctuation['TextBefore'] = text.copy()
for index, row in text_removePunctuation.iterrows():
row['TextAfter'] = row['TextBefore'].translate(translator)
text_removePunctuation['Changed'] = np.where(text_removePunctuation['TextBefore']==text_removePunctuation['TextAfter'], 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_removePunctuation[text_removePunctuation['Changed']=='yes']), len(text_removePunctuation), 100*len(text_removePunctuation[text_removePunctuation['Changed']=='yes'])/len(text_removePunctuation)))
for index, row in text_removePunctuation[text_removePunctuation['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
###Markdown
Hmm, I expected everything to change, because they are questions with "?". Let's see the ones that didn't change.
###Code
for index, row in text_removePunctuation[text_removePunctuation['Changed']=='no'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
###Markdown
4. Replace ContractionsThis technique replaces contractions with their expanded equivalents.**Example:** What's the scariest thing that ever happened to anyone? -> What is the scariest thing that ever happened to anyone?
###Code
contraction_patterns = [ (r'won\'t', 'will not'), (r'can\'t', 'cannot'), (r'i\'m', 'i am'), (r'ain\'t', 'is not'), (r'(\w+)\'ll', '\g<1> will'), (r'(\w+)n\'t', '\g<1> not'),
(r'(\w+)\'ve', '\g<1> have'), (r'(\w+)\'s', '\g<1> is'), (r'(\w+)\'re', '\g<1> are'), (r'(\w+)\'d', '\g<1> would'), (r'&', 'and'), (r'dammit', 'damn it'), (r'dont', 'do not'), (r'wont', 'will not') ]
def replaceContraction(text):
patterns = [(re.compile(regex), repl) for (regex, repl) in contraction_patterns]
for (pattern, repl) in patterns:
(text, count) = re.subn(pattern, repl, text)
return text
text_replaceContractions = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_replaceContractions['TextBefore'] = text.copy()
for index, row in text_replaceContractions.iterrows():
row['TextAfter'] = replaceContraction(row['TextBefore'])
text_replaceContractions['Changed'] = np.where(text_replaceContractions['TextBefore']==text_replaceContractions['TextAfter'], 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_replaceContractions[text_replaceContractions['Changed']=='yes']), len(text_replaceContractions), 100*len(text_replaceContractions[text_replaceContractions['Changed']=='yes'])/len(text_replaceContractions)))
for index, row in text_replaceContractions[text_replaceContractions['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
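###Markdown
A short check (not in the original notebook) of the contraction patterns on the example from above:
###Code
# Hypothetical check: "What's" should expand via the (\w+)\'s pattern to "What is"
print(replaceContraction("What's the scariest thing that ever happened to anyone?"))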
###Markdown
5. Lowercase**Example:** What do you know about Bram Fischer and the Rivonia Trial? -> what do you know about bram fischer and the rivonia trial?
###Code
text_lowercase = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_lowercase['TextBefore'] = text.copy()
for index, row in text_lowercase.iterrows():
row['TextAfter'] = row['TextBefore'].lower()
text_lowercase['Changed'] = np.where(text_lowercase['TextBefore']==text_lowercase['TextAfter'], 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_lowercase[text_lowercase['Changed']=='yes']), len(text_lowercase), 100*len(text_lowercase[text_lowercase['Changed']=='yes'])/len(text_lowercase)))
for index, row in text_lowercase[text_lowercase['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
###Markdown
Some questions are already written entirely in lowercase. This happens, for example, when they start with a number.
###Code
for index, row in text_lowercase[text_lowercase['Changed']=='no'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
###Markdown
6. Replace Negations with Antonyms**Example:** Why are humans not able to be evolved developing resistance against diseases? -> Why are humans unable to be evolved developing resistance against diseases ?
###Code
import nltk
from nltk.corpus import wordnet
def replace(word, pos=None):
""" Creates a set of all antonyms for the word and if there is only one antonym, it returns it """
antonyms = set()
for syn in wordnet.synsets(word, pos=pos):
for lemma in syn.lemmas():
for antonym in lemma.antonyms():
antonyms.add(antonym.name())
if len(antonyms) == 1:
return antonyms.pop()
else:
return None
def replaceNegations(text):
""" Finds "not" and antonym for the next word and if found, replaces not and the next word with the antonym """
i, l = 0, len(text)
words = []
while i < l:
word = text[i]
if word == 'not' and i+1 < l:
ant = replace(text[i+1])
if ant:
words.append(ant)
i += 2
continue
words.append(word)
i += 1
return words
def tokenize1(text):
tokens = nltk.word_tokenize(text)
tokens = replaceNegations(tokens)
text = " ".join(tokens)
return text
text_replaceNegations = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_replaceNegations['TextBefore'] = text.copy()
for index, row in text_replaceNegations.iterrows():
row['TextAfter'] = tokenize1(row['TextBefore'])
text_replaceNegations['Changed'] = np.where(text_replaceNegations['TextBefore'].str.replace(" ","")==text_replaceNegations['TextAfter'].str.replace(" ","").str.replace("``",'"').str.replace("''",'"'), 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_replaceNegations[text_replaceNegations['Changed']=='yes']), len(text_replaceNegations), 100*len(text_replaceNegations[text_replaceNegations['Changed']=='yes'])/len(text_replaceNegations)))
for index, row in text_replaceNegations[text_replaceNegations['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
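###Markdown
A small check (not in the original notebook) of the negation replacement on the example sentence from above. Note that `tokenize1` re-joins the NLTK tokens with single spaces, so punctuation ends up separated from the words:
###Code
# Hypothetical check: "not able" should collapse to the single WordNet antonym "unable"
print(tokenize1("Why are humans not able to be evolved developing resistance against diseases?"))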
###Markdown
7. Handle Capitalized Words**Example:** Which is better to use, Avro or ORC? -> Which is better to use , Avro or ALL_CAPS_ORC ?
###Code
def addCapTag(word):
""" Finds a word with at least 3 characters capitalized and adds the tag ALL_CAPS_ """
if(len(re.findall("[A-Z]{3,}", word))):
word = word.replace('\\', '' )
transformed = re.sub("[A-Z]{3,}", "ALL_CAPS_"+word, word)
return transformed
else:
return word
def tokenize2(text):
finalTokens = []
tokens = nltk.word_tokenize(text)
for w in tokens:
finalTokens.append(addCapTag(w))
text = " ".join(finalTokens)
return text
text_handleCapWords = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_handleCapWords['TextBefore'] = text.copy()
for index, row in text_handleCapWords.iterrows():
row['TextAfter'] = tokenize2(row['TextBefore'])
text_handleCapWords['Changed'] = np.where(text_handleCapWords['TextBefore'].str.replace(" ","")==text_handleCapWords['TextAfter'].str.replace(" ","").str.replace("``",'"').str.replace("''",'"'), 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_handleCapWords[text_handleCapWords['Changed']=='yes']), len(text_handleCapWords), 100*len(text_handleCapWords[text_handleCapWords['Changed']=='yes'])/len(text_handleCapWords)))
for index, row in text_handleCapWords[text_handleCapWords['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
###Markdown
8. Remove Stopwords**Example:** How I know whether a girl had done sex before sex with me? -> How I know whether girl done sex sex ?
###Code
from nltk.corpus import stopwords
stoplist = stopwords.words('english')
def tokenize(text):
finalTokens = []
tokens = nltk.word_tokenize(text)
for w in tokens:
if (w not in stoplist):
finalTokens.append(w)
text = " ".join(finalTokens)
return text
text_removeStopwords = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_removeStopwords['TextBefore'] = text.copy()
for index, row in text_removeStopwords.iterrows():
row['TextAfter'] = tokenize(row['TextBefore'])
text_removeStopwords['Changed'] = np.where(text_removeStopwords['TextBefore'].str.replace(" ","")==text_removeStopwords['TextAfter'].str.replace(" ","").str.replace("``",'"').str.replace("''",'"'), 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_removeStopwords[text_removeStopwords['Changed']=='yes']), len(text_removeStopwords), 100*len(text_removeStopwords[text_removeStopwords['Changed']=='yes'])/len(text_removeStopwords)))
for index, row in text_removeStopwords[text_removeStopwords['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
###Markdown
9. Replace Elongated WordsThis technique replaces an elongated word with its basic form, unless the word exists in the lexicon.**Example:** Game of Thrones, what does Arya find out about Littlefinger? -> Game of Thrones , what does Arya find out about Litlefinger ?
###Code
def replaceElongated(word):
""" Replaces an elongated word with its basic form, unless the word exists in the lexicon """
repeat_regexp = re.compile(r'(\w*)(\w)\2(\w*)')
repl = r'\1\2\3'
if wordnet.synsets(word):
return word
repl_word = repeat_regexp.sub(repl, word)
if repl_word != word:
return replaceElongated(repl_word)
else:
return repl_word
def tokenize(text):
finalTokens = []
tokens = nltk.word_tokenize(text)
for w in tokens:
finalTokens.append(replaceElongated(w))
text = " ".join(finalTokens)
return text
text_removeElWords = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_removeElWords['TextBefore'] = text.copy()
for index, row in text_removeElWords.iterrows():
row['TextAfter'] = tokenize(row['TextBefore'])
text_removeElWords['Changed'] = np.where(text_removeElWords['TextBefore'].str.replace(" ","")==text_removeElWords['TextAfter'].str.replace(" ","").str.replace("``",'"').str.replace("''",'"'), 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_removeElWords[text_removeElWords['Changed']=='yes']), len(text_removeElWords), 100*len(text_removeElWords[text_removeElWords['Changed']=='yes'])/len(text_removeElWords)))
for index, row in text_removeElWords[text_removeElWords['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
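###Markdown
A quick check (not in the original notebook) of the elongation removal on a single word taken from the example above:
###Code
# Hypothetical check: "Littlefinger" is not in WordNet, so the repeated "t" is collapsed
print(replaceElongated("Littlefinger"))  # expected (per the example above): "Litlefinger"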
###Markdown
10. Stemming/Lemmatizing**Example:** How do modern military submarines reduce noise to achieve stealth? -> how do modern militari submarin reduc nois to achiev stealth ?
###Code
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer() #set stemmer
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer() # set lemmatizer
def tokenize(text):
finalTokens = []
tokens = nltk.word_tokenize(text)
for w in tokens:
finalTokens.append(stemmer.stem(w)) # change this to lemmatizer.lemmatize(w) for Lemmatizing
text = " ".join(finalTokens)
return text
text_stemming = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_stemming['TextBefore'] = text.copy()
for index, row in text_stemming.iterrows():
row['TextAfter'] = tokenize(row['TextBefore'])
text_stemming['Changed'] = np.where(text_stemming['TextBefore'].str.replace(" ","")==text_stemming['TextAfter'].str.replace(" ","").str.replace("``",'"').str.replace("''",'"'), 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_stemming[text_stemming['Changed']=='yes']), len(text_stemming), 100*len(text_stemming[text_stemming['Changed']=='yes'])/len(text_stemming)))
for index, row in text_stemming[text_stemming['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
###Markdown
CombosOf course we can use more than one technique at the same time. The order is essential here.**Example:** What are the recommended 2D game engines for a beginning Python programmer? -> what recommend d game engin begin python programm
###Code
def tokenize(text):
finalTokens = []
tokens = nltk.word_tokenize(text)
for w in tokens:
if (w not in stoplist):
w = addCapTag(w) # Handle Capitalized Words
w = w.lower() # Lowercase
w = replaceElongated(w) # Replace Elongated Words
w = stemmer.stem(w) # Stemming
finalTokens.append(w)
text = " ".join(finalTokens)
return text
text_combos = pd.DataFrame(columns=['TextBefore', 'TextAfter', 'Changed'])
text_combos['TextBefore'] = text.copy()
for index, row in text_combos.iterrows():
row['TextAfter'] = replaceContraction(row['TextBefore']) # Replace Contractions
row['TextAfter'] = removeNumbers(row['TextAfter']) # Remove Integers
row['TextAfter'] = replaceMultiExclamationMark(row['TextAfter']) # Replace Multi Exclamation Marks
row['TextAfter'] = replaceMultiQuestionMark(row['TextAfter']) # Replace Multi Question Marks
row['TextAfter'] = replaceMultiStopMark(row['TextAfter']) # Replace Multi Stop Marks
row['TextAfter'] = row['TextAfter'].translate(translator) # Remove Punctuation
row['TextAfter'] = tokenize(row['TextAfter'])
text_combos['Changed'] = np.where(text_combos['TextBefore'].str.replace(" ","")==text_combos['TextAfter'].str.replace(" ","").str.replace("``",'"').str.replace("''",'"'), 'no', 'yes')
print("{} of {} ({:.4f}%) questions have been changed.".format(len(text_combos[text_combos['Changed']=='yes']), len(text_combos), 100*len(text_combos[text_combos['Changed']=='yes'])/len(text_combos)))
for index, row in text_combos[text_combos['Changed']=='yes'].head().iterrows():
print(row['TextBefore'],'->',row['TextAfter'])
###Output
_____no_output_____
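###Markdown
For reference, a compact sketch (not in the original notebook) that applies the same sequence of steps to the single example question from above:
###Code
# Hypothetical end-to-end check on one question, mirroring the combo cell above
s = "What are the recommended 2D game engines for a beginning Python programmer?"
s = replaceContraction(s)
s = removeNumbers(s)
s = replaceMultiExclamationMark(s)
s = replaceMultiQuestionMark(s)
s = replaceMultiStopMark(s)
s = s.translate(translator)
print(tokenize(s))  # expected (per the example above): "what recommend d game engin begin python programm"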
|
projects/dummy_notebook.ipynb
|
###Markdown
Interesting Data Analysis
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 20, 1)
y = np.arange(5, 10, .25) + np.random.normal(size=20)
fig, ax = plt.subplots()
ax.plot(x, y, linewidth=2.0)
ax.set(xlim=(0, 20), xticks=np.arange(0, 20),
ylim=(3, 12), yticks=np.arange(3, 12))
plt.title('Data Analysis Chart')
plt.show()
###Output
_____no_output_____
|
cs231n_assignments/assignment3/.ipynb_checkpoints/LSTM_Captioning-checkpoint.ipynb
|
###Markdown
Image Captioning with LSTMsIn the previous exercise you implemented a vanilla RNN and applied it to image captioning. In this notebook you will implement the LSTM update rule and use it for image captioning.
###Code
# As usual, a bit of setup
import time, os, json
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import CaptioningSolver
from cs231n.classifiers.rnn import CaptioningRNN
from cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions
from cs231n.image_utils import image_from_url
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
###Output
_____no_output_____
###Markdown
Load MS-COCO dataAs in the previous notebook, we will use the Microsoft COCO dataset for captioning.
###Code
# Load COCO data from disk; this returns a dictionary
# We'll work with dimensionality-reduced features for this notebook, but feel
# free to experiment with the original features by changing the flag below.
data = load_coco_data(pca_features=True)
# Print out all the keys and values from the data dictionary
for k, v in data.items():
if type(v) == np.ndarray:
print(k, type(v), v.shape, v.dtype)
else:
print(k, type(v), len(v))
###Output
train_captions <class 'numpy.ndarray'> (400135, 17) int32
train_image_idxs <class 'numpy.ndarray'> (400135,) int32
val_captions <class 'numpy.ndarray'> (195954, 17) int32
val_image_idxs <class 'numpy.ndarray'> (195954,) int32
train_features <class 'numpy.ndarray'> (82783, 512) float32
val_features <class 'numpy.ndarray'> (40504, 512) float32
idx_to_word <class 'list'> 1004
word_to_idx <class 'dict'> 1004
train_urls <class 'numpy.ndarray'> (82783,) <U63
val_urls <class 'numpy.ndarray'> (40504,) <U63
###Markdown
LSTMIf you read recent papers, you'll see that many people use a variant on the vanilla RNN called Long Short-Term Memory (LSTM) RNNs. Vanilla RNNs can be tough to train on long sequences due to vanishing and exploding gradients caused by repeated matrix multiplication. LSTMs solve this problem by replacing the simple update rule of the vanilla RNN with a gating mechanism as follows.Similar to the vanilla RNN, at each timestep we receive an input $x_t\in\mathbb{R}^D$ and the previous hidden state $h_{t-1}\in\mathbb{R}^H$; the LSTM also maintains an $H$-dimensional *cell state*, so we also receive the previous cell state $c_{t-1}\in\mathbb{R}^H$. The learnable parameters of the LSTM are an *input-to-hidden* matrix $W_x\in\mathbb{R}^{4H\times D}$, a *hidden-to-hidden* matrix $W_h\in\mathbb{R}^{4H\times H}$ and a *bias vector* $b\in\mathbb{R}^{4H}$.At each timestep we first compute an *activation vector* $a\in\mathbb{R}^{4H}$ as $a=W_xx_t + W_hh_{t-1}+b$. We then divide this into four vectors $a_i,a_f,a_o,a_g\in\mathbb{R}^H$ where $a_i$ consists of the first $H$ elements of $a$, $a_f$ is the next $H$ elements of $a$, etc. We then compute the *input gate* $i\in\mathbb{R}^H$, *forget gate* $f\in\mathbb{R}^H$, *output gate* $o\in\mathbb{R}^H$ and *block input* $g\in\mathbb{R}^H$ as$$\begin{align*}i = \sigma(a_i) \hspace{2pc}f = \sigma(a_f) \hspace{2pc}o = \sigma(a_o) \hspace{2pc}g = \tanh(a_g)\end{align*}$$where $\sigma$ is the sigmoid function and $\tanh$ is the hyperbolic tangent, both applied elementwise.Finally we compute the next cell state $c_t$ and next hidden state $h_t$ as$$c_{t} = f\odot c_{t-1} + i\odot g \hspace{4pc}h_t = o\odot\tanh(c_t)$$where $\odot$ is the elementwise product of vectors.In the rest of the notebook we will implement the LSTM update rule and apply it to the image captioning task. In the code, we assume that data is stored in batches so that $X_t \in \mathbb{R}^{N\times D}$, and will work with *transposed* versions of the parameters: $W_x \in \mathbb{R}^{D \times 4H}$, $W_h \in \mathbb{R}^{H\times 4H}$ so that activations $A \in \mathbb{R}^{N\times 4H}$ can be computed efficiently as $A = X_t W_x + H_{t-1} W_h$ LSTM: step forwardImplement the forward pass for a single timestep of an LSTM in the `lstm_step_forward` function in the file `cs231n/rnn_layers.py`. This should be similar to the `rnn_step_forward` function that you implemented above, but using the LSTM update rule instead.Once you are done, run the following to perform a simple test of your implementation. You should see errors on the order of `e-8` or less.
###Code
N, D, H = 3, 4, 5
x = np.linspace(-0.4, 1.2, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.3, 0.7, num=N*H).reshape(N, H)
prev_c = np.linspace(-0.4, 0.9, num=N*H).reshape(N, H)
Wx = np.linspace(-2.1, 1.3, num=4*D*H).reshape(D, 4 * H)
Wh = np.linspace(-0.7, 2.2, num=4*H*H).reshape(H, 4 * H)
b = np.linspace(0.3, 0.7, num=4*H)
next_h, next_c, cache = lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)
expected_next_h = np.asarray([
[ 0.24635157, 0.28610883, 0.32240467, 0.35525807, 0.38474904],
[ 0.49223563, 0.55611431, 0.61507696, 0.66844003, 0.7159181 ],
[ 0.56735664, 0.66310127, 0.74419266, 0.80889665, 0.858299 ]])
expected_next_c = np.asarray([
[ 0.32986176, 0.39145139, 0.451556, 0.51014116, 0.56717407],
[ 0.66382255, 0.76674007, 0.87195994, 0.97902709, 1.08751345],
[ 0.74192008, 0.90592151, 1.07717006, 1.25120233, 1.42395676]])
print('next_h error: ', rel_error(expected_next_h, next_h))
print('next_c error: ', rel_error(expected_next_c, next_c))
###Output
next_h error: 5.705412962326019e-09
next_c error: 5.8143123088804145e-09
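###Markdown
For reference, here is a minimal NumPy sketch of the gated update described above. It only illustrates the equations; it is not the `cs231n/rnn_layers.py` reference implementation, which must also return the cache needed for the backward pass:
###Code
# Illustrative sketch of the LSTM step equations (assumes the transposed parameter shapes used in this notebook)
def sigmoid_sketch(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_sketch(x, prev_h, prev_c, Wx, Wh, b):
    H = prev_h.shape[1]
    a = x.dot(Wx) + prev_h.dot(Wh) + b      # activation vector, shape (N, 4H)
    i = sigmoid_sketch(a[:, :H])            # input gate
    f = sigmoid_sketch(a[:, H:2*H])         # forget gate
    o = sigmoid_sketch(a[:, 2*H:3*H])       # output gate
    g = np.tanh(a[:, 3*H:])                 # block input
    next_c = f * prev_c + i * g             # next cell state
    next_h = o * np.tanh(next_c)            # next hidden state
    return next_h, next_c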
###Markdown
LSTM: step backwardImplement the backward pass for a single LSTM timestep in the function `lstm_step_backward` in the file `cs231n/rnn_layers.py`. Once you are done, run the following to perform numeric gradient checking on your implementation. You should see errors on the order of `e-7` or less.
###Code
np.random.seed(231)
N, D, H = 4, 5, 6
x = np.random.randn(N, D)
prev_h = np.random.randn(N, H)
prev_c = np.random.randn(N, H)
Wx = np.random.randn(D, 4 * H)
Wh = np.random.randn(H, 4 * H)
b = np.random.randn(4 * H)
next_h, next_c, cache = lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)
dnext_h = np.random.randn(*next_h.shape)
dnext_c = np.random.randn(*next_c.shape)
fx_h = lambda x: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fh_h = lambda h: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fc_h = lambda c: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fWx_h = lambda Wx: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fWh_h = lambda Wh: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fb_h = lambda b: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fx_c = lambda x: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fh_c = lambda h: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fc_c = lambda c: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fWx_c = lambda Wx: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fWh_c = lambda Wh: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fb_c = lambda b: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
num_grad = eval_numerical_gradient_array
dx_num = num_grad(fx_h, x, dnext_h) + num_grad(fx_c, x, dnext_c)
dh_num = num_grad(fh_h, prev_h, dnext_h) + num_grad(fh_c, prev_h, dnext_c)
dc_num = num_grad(fc_h, prev_c, dnext_h) + num_grad(fc_c, prev_c, dnext_c)
dWx_num = num_grad(fWx_h, Wx, dnext_h) + num_grad(fWx_c, Wx, dnext_c)
dWh_num = num_grad(fWh_h, Wh, dnext_h) + num_grad(fWh_c, Wh, dnext_c)
db_num = num_grad(fb_h, b, dnext_h) + num_grad(fb_c, b, dnext_c)
dx, dh, dc, dWx, dWh, db = lstm_step_backward(dnext_h, dnext_c, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dh error: ', rel_error(dh_num, dh))
print('dc error: ', rel_error(dc_num, dc))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
###Output
[[0.11562932 0.40252033 0.68319591 0.01477822 0.88849866 0.92828658]
[0.80387128 0.76408623 0.91926022 0.99642801 0.03837388 0.11064462]
[0.10705379 0.15262737 0.03999789 0.00319094 0.56077168 0.64876226]
[0.99784098 0.54332641 0.0229006 0.5318491 0.07121957 0.22598479]]
(4, 24) (5, 24) (4, 5) (6, 24) (4, 6) (24,)
dx error: 1.0
dh error: 1.0
dc error: 1.522158616862235e-10
dWx error: 1.0
dWh error: 1.0
db error: 1.0
###Markdown
LSTM: forwardIn the function `lstm_forward` in the file `cs231n/rnn_layers.py`, implement the `lstm_forward` function to run an LSTM forward on an entire timeseries of data.When you are done, run the following to check your implementation. You should see an error on the order of `e-7` or less.
###Code
N, D, H, T = 2, 5, 4, 3
x = np.linspace(-0.4, 0.6, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.4, 0.8, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.9, num=4*D*H).reshape(D, 4 * H)
Wh = np.linspace(-0.3, 0.6, num=4*H*H).reshape(H, 4 * H)
b = np.linspace(0.2, 0.7, num=4*H)
h, cache = lstm_forward(x, h0, Wx, Wh, b)
expected_h = np.asarray([
[[ 0.01764008, 0.01823233, 0.01882671, 0.0194232 ],
[ 0.11287491, 0.12146228, 0.13018446, 0.13902939],
[ 0.31358768, 0.33338627, 0.35304453, 0.37250975]],
[[ 0.45767879, 0.4761092, 0.4936887, 0.51041945],
[ 0.6704845, 0.69350089, 0.71486014, 0.7346449 ],
[ 0.81733511, 0.83677871, 0.85403753, 0.86935314]]])
print('h error: ', rel_error(expected_h, h))
###Output
_____no_output_____
###Markdown
LSTM: backwardImplement the backward pass for an LSTM over an entire timeseries of data in the function `lstm_backward` in the file `cs231n/rnn_layers.py`. When you are done, run the following to perform numeric gradient checking on your implementation. You should see errors on the order of `e-8` or less. (For `dWh`, it's fine if your error is on the order of `e-6` or less).
###Code
from cs231n.rnn_layers import lstm_forward, lstm_backward
np.random.seed(231)
N, D, T, H = 2, 3, 10, 6
x = np.random.randn(N, T, D)
h0 = np.random.randn(N, H)
Wx = np.random.randn(D, 4 * H)
Wh = np.random.randn(H, 4 * H)
b = np.random.randn(4 * H)
out, cache = lstm_forward(x, h0, Wx, Wh, b)
dout = np.random.randn(*out.shape)
dx, dh0, dWx, dWh, db = lstm_backward(dout, cache)
fx = lambda x: lstm_forward(x, h0, Wx, Wh, b)[0]
fh0 = lambda h0: lstm_forward(x, h0, Wx, Wh, b)[0]
fWx = lambda Wx: lstm_forward(x, h0, Wx, Wh, b)[0]
fWh = lambda Wh: lstm_forward(x, h0, Wx, Wh, b)[0]
fb = lambda b: lstm_forward(x, h0, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dh0_num = eval_numerical_gradient_array(fh0, h0, dout)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dout)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
print('dx error: ', rel_error(dx_num, dx))
print('dh0 error: ', rel_error(dh0_num, dh0))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
###Output
_____no_output_____
###Markdown
INLINE QUESTION Recall that in an LSTM the input gate $i$, forget gate $f$, and output gate $o$ are all outputs of a sigmoid function. Why don't we use the ReLU activation function instead of sigmoid to compute these values? Explain. LSTM captioning modelNow that you have implemented an LSTM, update the implementation of the `loss` method of the `CaptioningRNN` class in the file `cs231n/classifiers/rnn.py` to handle the case where `self.cell_type` is `lstm`. This should require adding less than 10 lines of code.Once you have done so, run the following to check your implementation. You should see a difference on the order of `e-10` or less.
###Code
N, D, W, H = 10, 20, 30, 40
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
V = len(word_to_idx)
T = 13
model = CaptioningRNN(word_to_idx,
input_dim=D,
wordvec_dim=W,
hidden_dim=H,
cell_type='lstm',
dtype=np.float64)
# Set all model parameters to fixed values
for k, v in model.params.items():
model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)
features = np.linspace(-0.5, 1.7, num=N*D).reshape(N, D)
captions = (np.arange(N * T) % V).reshape(N, T)
loss, grads = model.loss(features, captions)
expected_loss = 9.82445935443
print('loss: ', loss)
print('expected loss: ', expected_loss)
print('difference: ', abs(loss - expected_loss))
###Output
_____no_output_____
###Markdown
Overfit LSTM captioning modelRun the following to overfit an LSTM captioning model on the same small dataset as we used for the RNN previously. You should see a final loss less than 0.5.
###Code
np.random.seed(231)
small_data = load_coco_data(max_train=50)
small_lstm_model = CaptioningRNN(
cell_type='lstm',
word_to_idx=data['word_to_idx'],
input_dim=data['train_features'].shape[1],
hidden_dim=512,
wordvec_dim=256,
dtype=np.float32,
)
small_lstm_solver = CaptioningSolver(small_lstm_model, small_data,
update_rule='adam',
num_epochs=50,
batch_size=25,
optim_config={
'learning_rate': 5e-3,
},
lr_decay=0.995,
verbose=True, print_every=10,
)
small_lstm_solver.train()
# Plot the training losses
plt.plot(small_lstm_solver.loss_history)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Training loss history')
plt.show()
###Output
_____no_output_____
###Markdown
LSTM test-time samplingModify the `sample` method of the `CaptioningRNN` class to handle the case where `self.cell_type` is `lstm`. This should take fewer than 10 lines of code.When you are done run the following to sample from your overfit LSTM model on some training and validation set samples. As with the RNN, training results should be very good, and validation results probably won't make a lot of sense (because we're overfitting).
###Code
for split in ['train', 'val']:
minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)
gt_captions, features, urls = minibatch
gt_captions = decode_captions(gt_captions, data['idx_to_word'])
sample_captions = small_lstm_model.sample(features)
sample_captions = decode_captions(sample_captions, data['idx_to_word'])
for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):
plt.imshow(image_from_url(url))
plt.title('%s\n%s\nGT:%s' % (split, sample_caption, gt_caption))
plt.axis('off')
plt.show()
###Output
_____no_output_____
|
pythonUPVX15.ipynb
|
###Markdown
Conditional flow control structures
###Code
a=True
if(a):
print(a)
print('Next instruction')
###Output
_____no_output_____
###Markdown
###Code
a=True
if(a):
print(a)
else:
print("no")
print('Next instruction')
###Output
_____no_output_____
###Markdown
###Code
a=3
if a==4:
print("cuatro")
elif a>2:
print("grt2")
print("siguiente instrucción")
###Output
_____no_output_____
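###Markdown
A small additional example (not in the original notebook) combining `if`, `elif` and `else` in a single block:
###Code
a = 1
if a == 4:
    print("four")
elif a > 2:
    print("greater than two")
else:
    print("two or less")
print("next instruction")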
|
partition/part_data-driven_lr.ipynb
|
###Markdown
IntroductionThis notebook assigns each document to the domain in the data-driven ontology whose archetype has the highest Dice similarity to the document's brain structures and mental function terms. Load the data
###Code
import pandas as pd
import numpy as np
import sys
sys.path.append("..")
import utilities, partition
framework = "data-driven"
clf = "_lr"
###Output
_____no_output_____
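###Markdown
As a small illustration (not part of the original notebook) of the Dice measure used for the document-domain matching below, SciPy's `dice` returns the dissimilarity between two boolean vectors; the corresponding similarity is one minus that value:
###Code
# Hypothetical example of the Dice dissimilarity on two binary vectors
from scipy.spatial.distance import dice
u = [1, 1, 0, 1]
v = [1, 0, 0, 1]
print(dice(u, v))  # 0.2, i.e. a Dice similarity of 0.8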
###Markdown
Brain activation coordinates
###Code
act_bin = utilities.load_coordinates()
print("Document N={}, Structure N={}".format(
act_bin.shape[0], act_bin.shape[1]))
###Output
Document N=18155, Structure N=118
###Markdown
Document-term matrix
###Code
dtm_bin = utilities.load_doc_term_matrix(version=190325, binarize=True)
print("Document N={}, Term N={}".format(
dtm_bin.shape[0], dtm_bin.shape[1]))
###Output
Document N=18155, Term N=4107
###Markdown
Domain archetypes
###Code
from collections import OrderedDict
lists, circuits = utilities.load_framework("{}{}".format(framework, clf))
words = sorted(list(set(lists["TOKEN"])))
structures = sorted(list(set(act_bin.columns)))
domains = list(OrderedDict.fromkeys(lists["DOMAIN"]))
archetypes = pd.DataFrame(0.0, index=words+structures, columns=domains)
for dom in domains:
for word in lists.loc[lists["DOMAIN"] == dom, "TOKEN"]:
archetypes.loc[word, dom] = 1.0
for struct in structures:
archetypes.loc[struct, dom] = circuits.loc[struct, dom]
archetypes[archetypes > 0.0] = 1.0
print("Term & Structure N={}, Domain N={}".format(
archetypes.shape[0], archetypes.shape[1]))
###Output
Term & Structure N=208, Domain N=6
###Markdown
Document splits
###Code
splits = {}
splits["discovery"] = [int(pmid.strip()) for pmid in open("../data/splits/train.txt")]
splits["replication"] = [int(pmid.strip()) for pmid in open("../data/splits/validation.txt")]
splits["replication"] += [int(pmid.strip()) for pmid in open("../data/splits/test.txt")]
for split, pmids in splits.items():
print("{:12s} N={}".format(split.title(), len(pmids)))
###Output
Discovery N=12708
Replication N=5447
###Markdown
Assign documents to domains
###Code
from scipy.spatial.distance import dice, cdist
pmids = sorted(list(dtm_bin.index.intersection(act_bin.index)))
len(pmids)
dtm_words = dtm_bin.loc[pmids, words]
act_structs = act_bin.loc[pmids, structures]
docs = dtm_words.copy()
docs[structures] = act_structs.copy()
docs.head()
archetypes.shape
docs.shape
dom_dists = cdist(docs.values, archetypes.values.T, metric="dice")
dom_dists = pd.DataFrame(dom_dists, index=docs.index, columns=domains)
dom_dists.shape
# Reconstructed step (assumed from the intro): map each document to the domain with the smallest Dice distance, i.e. the highest similarity
doc2dom = dom_dists.idxmin(axis=1).to_dict()
doc2dom_df = pd.Series(doc2dom)
doc2dom_df.to_csv("data/doc2dom_{}_lr.csv".format(framework), header=False)
dom2doc = {dom: [] for dom in domains}
for pmid, dom in doc2dom.items():
dom2doc[dom].append(pmid)
for dom, dom_pmids in dom2doc.items():
n_pmids_dis = len(set(dom_pmids).intersection(set(splits["discovery"])))
n_pmids_rep = len(set(dom_pmids).intersection(set(splits["replication"])))
print("{:16s} {:5d} discovery {:5d} replication".format(dom, n_pmids_dis, n_pmids_rep))
###Output
MEMORY 612 discovery 264 replication
REWARD 557 discovery 216 replication
COGNITION 3090 discovery 1354 replication
VISION 1729 discovery 714 replication
MANIPULATION 5796 discovery 2507 replication
LANGUAGE 924 discovery 392 replication
###Markdown
Plot document distances
###Code
from style import style
%matplotlib inline
for split, split_pmids in splits.items():
print("Processing {} split (N={} documents)".format(split, len(split_pmids)))
print("----- Computing Dice distance between documents")
docs_split = docs.loc[split_pmids]
doc_dists = cdist(docs_split, docs_split, metric="dice")
doc_dists = pd.DataFrame(doc_dists, index=split_pmids, columns=split_pmids)
print("----- Sorting documents by domain assignment")
dom_pmids = []
for dom in domains:
dom_pmids += [pmid for pmid, sys in doc2dom.items() if sys == dom and pmid in split_pmids]
doc_dists = doc_dists[dom_pmids].loc[dom_pmids]
print("----- Locating transition points between domains")
transitions = []
for i, pmid in enumerate(dom_pmids):
if doc2dom[dom_pmids[i-1]] != doc2dom[pmid]:
transitions.append(i)
transitions += [len(split_pmids)]
print("----- Plotting distances between documents sorted by domain")
partition.plot_partition("{}{}".format(framework, clf), doc_dists, transitions,
style.palettes[framework], suffix="_{}".format(split))
###Output
Processing discovery split (N=12708 documents)
----- Computing Dice distance between documents
----- Sorting documents by domain assignment
----- Locating transition points between domains
----- Plotting distances between documents sorted by domain
|
symbolic/angvelxform_dot.ipynb
|
###Markdown
Determine derivative of Jacobian from angular velocity to exponential ratesPeter Corke 2021SymPy code to determine the time derivative of the mapping from angular velocity to exponential coordinate rates.
###Code
from sympy import *
###Output
_____no_output_____
###Markdown
A rotation matrix can be expressed in terms of exponential coordinates (also called Euler vector)$\mathbf{R} = e^{[\varphi]_\times} $where $\mathbf{R} \in SO(3)$ and $\varphi \in \mathbb{R}^3$.The mapping from angular velocity $\omega$ to exponential coordinate rates $\dot{\varphi}$ is$\dot{\varphi} = \mathbf{A} \omega$where $\mathbf{A}$ is given by (2.107) of [Robot Dynamics Lecture Notes, Robotic Systems Lab, ETH Zurich, 2018](https://ethz.ch/content/dam/ethz/special-interest/mavt/robotics-n-intelligent-systems/rsl-dam/documents/RobotDynamics2018/RD_HS2018script.pdf)$\mathbf{A} = I_{3 \times 3} - \frac{1}{2} [v]_\times + [v]^2_\times \frac{1}{\theta^2} \left( 1 - \frac{\theta}{2} \frac{\sin \theta}{1 - \cos \theta} \right)$where $\theta = \| \varphi \|$ and $v = \hat{\varphi}$.We simplify the equation as$\mathbf{A} = I_{3 \times 3} - \frac{1}{2} [v]_\times + [v]^2_\times \Theta$where$\Theta = \frac{1}{\theta^2} \left( 1 - \frac{\theta}{2} \frac{\sin \theta}{1 - \cos \theta} \right)$We want to find the derivative, which we can compute using the chain rule$\dot{\mathbf{A}} = - \frac{1}{2} [\dot{v}]_\times + 2 [v]_\times [\dot{v}]_\times \Theta + [v]^2_\times \dot{\Theta}$We start by defining some symbols
###Code
Theta, theta, theta_dot, t = symbols('Theta theta theta_dot t', real=True)
###Output
_____no_output_____
###Markdown
We start by finding an expression for $\Theta$ which depends on $\theta(t)$
###Code
theta_t = Function(theta)(t)
Theta = 1 / theta_t ** 2 * (1 - theta_t / 2 * sin(theta_t) / (1 - cos(theta_t)))
Theta
###Output
_____no_output_____
###Markdown
and now determine the derivative
###Code
T_dot = Theta.diff(t)
T_dot
###Output
_____no_output_____
###Markdown
which is a somewhat complex expression that depends on $\theta(t)$ and $\dot{\theta}(t)$.We will remove the time dependency and generate code
###Code
T_dot = T_dot.subs([(theta_t.diff(t), theta_dot), (theta_t, theta)])
pycode(T_dot)
###Output
_____no_output_____
###Markdown
In order to evaluate the line above we need an expression for $\theta$ and $\dot{\theta}$. $\theta$ is the norm of $\varphi$ whose elements are functions of time
###Code
phi_names = ('varphi_0', 'varphi_1', 'varphi_2')
phi = [] # names of angles, eg. theta
phi_t = [] # angles as function of time, eg. theta(t)
phi_d = [] # derivative of above, eg. d theta(t) / dt
phi_n = [] # symbol to represent above, eg. theta_dot
for i in phi_names:
phi.append(symbols(i, real=True))
phi_t.append(Function(phi[-1])(t))
phi_d.append(phi_t[-1].diff(t))
phi_n.append(i + '_dot')
###Output
_____no_output_____
###Markdown
Compute the norm
###Code
theta = Matrix(phi_t).norm()
theta
###Output
_____no_output_____
###Markdown
and find its derivative
###Code
theta_dot = theta.diff(t)
theta_dot
###Output
_____no_output_____
###Markdown
and now remove the time dependencies
###Code
theta_dot = theta_dot.subs(a for a in zip(phi_d, phi_n))
theta_dot = theta_dot.subs(a for a in zip(phi_t, phi))
theta_dot
###Output
_____no_output_____
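###Markdown
For reference (not part of the original notebook), assuming real-valued $\varphi$, the expression above should agree with the hand-derived form
$$\dot{\theta} = \frac{d}{dt}\sqrt{\varphi_0^2+\varphi_1^2+\varphi_2^2} = \frac{\varphi_0\dot{\varphi}_0+\varphi_1\dot{\varphi}_1+\varphi_2\dot{\varphi}_2}{\theta} = \frac{\varphi\cdot\dot{\varphi}}{\|\varphi\|}$$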
###Markdown
Determine derivative of Jacobian from angular velocity to exponential ratesPeter Corke 2021SymPy code to determine the time derivative of the mapping from angular velocity to exponential coordinate rates.
###Code
from sympy import *
###Output
_____no_output_____
###Markdown
A rotation matrix can be expressed in terms of exponential coordinates (also called Euler vector)$\mathbf{R} = e^{[\varphi]_\times} $where $\mathbf{R} \in SO(3)$ and $\varphi \in \mathbb{R}^3$.The mapping from angular velocity $\omega$ to exponential coordinate rates $\dot{\varphi}$ is$\dot{\varphi} = \mathbf{A} \omega$where $\mathbf{A}$ is given by (2.107) of [Robot Dynamics Lecture Notes, Robotic Systems Lab, ETH Zurich, 2018](https://ethz.ch/content/dam/ethz/special-interest/mavt/robotics-n-intelligent-systems/rsl-dam/documents/RobotDynamics2018/RD_HS2018script.pdf)$\mathbf{A} = I_{3 \times 3} - \frac{1}{2} [v]_\times + [v]^2_\times \frac{1}{\theta^2} \left( 1 - \frac{\theta}{2} \frac{\sin \theta}{1 - \cos \theta} \right)$where $\theta = \| \varphi \|$ and $v = \hat{\varphi}$We simplify the equation as$\mathbf{A} = I_{3 \times 3} - \frac{1}{2} [v]_\times + [v]^2_\times \Theta$where$\Theta = \frac{1}{\theta^2} \left( 1 - \frac{\theta}{2} \frac{\sin \theta}{1 - \cos \theta} \right)$We can find the derivative using the chain rule$\dot{\mathbf{A}} = - \frac{1}{2} [\dot{v}]_\times + 2 [v]_\times [\dot{v}]_\times \Theta + [v]^2_\times \dot{\Theta}$We start by defining some symbols
###Code
Theta, theta, theta_dot, t = symbols('Theta theta theta_dot t', real=True)
###Output
_____no_output_____
###Markdown
We start by finding an expression for $\Theta$ which depends on $\theta(t)$
###Code
theta_t = Function(theta)(t)
Theta = 1 / theta_t ** 2 * (1 - theta_t / 2 * sin(theta_t) / (1 - cos(theta_t)))
Theta
###Output
_____no_output_____
###Markdown
and now determine the derivative
###Code
T_dot = Theta.diff(t)
T_dot
###Output
_____no_output_____
###Markdown
which is a somewhat complex expression that depends on $\theta(t)$ and $\dot{\theta}(t)$.We will remove the time dependency and generate code
###Code
T_dot = T_dot.subs([(theta_t.diff(t), theta_dot), (theta_t, theta)])
pycode(T_dot)
###Output
_____no_output_____
###Markdown
In order to evaluate the line above we need an expression for $\theta$ and $\dot{\theta}$. $\theta$ is the norm of $\varphi$ whose elements are functions of time
###Code
phi_names = ('varphi_0', 'varphi_1', 'varphi_2')
phi = [] # names of angles, eg. theta
phi_t = [] # angles as function of time, eg. theta(t)
phi_d = [] # derivative of above, eg. d theta(t) / dt
phi_n = [] # symbol to represent above, eg. theta_dot
for i in phi_names:
phi.append(symbols(i, real=True))
phi_t.append(Function(phi[-1])(t))
phi_d.append(phi_t[-1].diff(t))
phi_n.append(i + '_dot')
###Output
_____no_output_____
###Markdown
Compute the norm
###Code
theta = Matrix(phi_t).norm()
theta
###Output
_____no_output_____
###Markdown
and find its derivative
###Code
theta_dot = theta.diff(t)
theta_dot
###Output
_____no_output_____
###Markdown
and now remove the time dependencies
###Code
theta_dot = theta_dot.subs(a for a in zip(phi_d, phi_n))
theta_dot = theta_dot.subs(a for a in zip(phi_t, phi))
theta_dot
###Output
_____no_output_____
|
Vacation Py/VacationPy.ipynb
|
###Markdown
VacationPy---- Note* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
###Output
_____no_output_____
###Markdown
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
###Code
city_data = pd.read_csv("../Weather Py/city_data.csv")  # forward slashes avoid invalid escape sequences in the path
city_data
###Output
_____no_output_____
###Markdown
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
###Code
#Configure gmaps
gmaps.configure(api_key=g_key)
#Customize the map
figure_layout = {
"width": "600px",
"height": "500px",
"border": "1px solid black",
"padding": "1px",
"margin": "0 auto 0 auto"
}
fig = gmaps.figure(layout=figure_layout)
#Latitude and Longitude locations
locations = city_data[["Lat", "Lng"]]
#Declare Humidity
humidity = city_data["Humidity"]
#Create heat layer
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
dissipating=False, max_intensity=100,
point_radius=3)
#Adding heat layer to the map
fig.add_layer(heat_layer)
#Display map
fig
###Output
_____no_output_____
###Markdown
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows with null values.
###Code
#Custom dataframe with max temperature above 70 but below 80 degrees, no clouds in the sky, and wind speed below 10
cities_df = city_data.loc[(city_data["Max Temp"]>70) & (city_data["Max Temp"]<80)
& (city_data["Cloudiness"]==0)
& (city_data["Wind Speed"]<10)].dropna()
cities_df
###Output
_____no_output_____
###Markdown
Hotel Map* Store into variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels within 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
###Code
#Store cities dataframe into the variable hotel dataframe (hotel_df)
hotel_df = cities_df
#Adding column called "Hotel Name" to hotel dataframe
hotel_df["Hotel Name"]= ""
hotel_df
#Set parameters dictionary to search for hotels within 5000 meters
parameters = {"type" : "hotel",
"keyword" : "hotel",
"radius" : 5000,
"key" : g_key}
#Url to be used to call API
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
for index, row in hotel_df.iterrows():
# get city name, lat, lnt from df
lat = row["Lat"]
lng = row["Lng"]
name_of_city = row["City"]
parameters["location"] = f"{lat},{lng}"
# assemble url and make API request
print(f"Retrieving Results for Index {index}: {name_of_city}.")
response = requests.get(base_url, params=parameters).json()
# extract results
results = response['results']
# save the hotel name to dataframe
try:
print(f"Closest hotel in {name_of_city} is {results[0]['name']}.")
hotel_df.loc[index, "Hotel Name"] = results[0]['name']
# if there is no hotel available
except (KeyError, IndexError):
print("No hotel within 5000 radius.....Searching.")
print("------------")
# Print search complete when the search has completed
print("-------Search complete-------")
hotel_df
###Output
_____no_output_____
###Markdown
For two cities (Wagar & Poum), no hotel was found within a 5000 m radius.
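###Markdown
If desired, those rows could be dropped before plotting the markers; a minimal sketch (not part of the original notebook):
###Code
# Hypothetical cleanup: keep only the rows where a hotel name was found
hotel_df = hotel_df[hotel_df["Hotel Name"] != ""]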
###Code
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer
marker_layer = gmaps.marker_layer(locations
,info_box_content=hotel_info)
#Adding marker layer to map
fig.add_layer(marker_layer)
# Display figure
fig
###Output
_____no_output_____
###Markdown
VacationPy---- Note* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
from pprint import pprint
# Import API key
from api_keys import g_key
###Output
_____no_output_____
###Markdown
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
###Code
# Load csv
weather_file = "../WeatherPy/WeatherPY3.csv"
# Read and display csv with Pandas
weather_df = pd.read_csv(weather_file)
weather_df.head()
###Output
_____no_output_____
###Markdown
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
###Code
# Configure gmaps
gmaps.configure(api_key=g_key)
#Determine max Humidity
humidity_max=weather_df['Humidity'].max()
humidity_max
# Store latitude and longitude in locations
locations = weather_df[["Lat", "Lng"]]
# locations
rating = weather_df["Humidity"].astype(float)
# Plot heatmap
fig = gmaps.figure()
# Create heat layer
heat_layer = gmaps.heatmap_layer(locations, weights=rating, dissipating=False, max_intensity=100, point_radius=1)
# Add layer
fig.add_layer(heat_layer)
# Display figure
fig
###Output
_____no_output_____
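###Markdown
Note that `humidity_max` computed above is not actually used. One possible variant (an assumption, not part of the original notebook) is to pass it as the heatmap's `max_intensity` instead of the hard-coded 100:
###Code
# Hypothetical variant: scale the heat intensity to the observed maximum humidity
heat_layer_scaled = gmaps.heatmap_layer(locations, weights=rating, dissipating=False,
                                        max_intensity=humidity_max, point_radius=1)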
###Markdown
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows with null values.
###Code
# Create a dataframe narrowing down cities to fit my very broad definition of ideal weather locations for a vacation.
hotel_df= weather_df[(weather_df["Max Temp"]>60) & (weather_df["Max Temp"]<=90) & (weather_df["Humidity"]>=30) & (weather_df["Humidity"]<=60) & (weather_df["Cloudiness"]<60)]
hotel_df
###Output
_____no_output_____
###Markdown
Hotel Map* Store into variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels within 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
###Code
# Add a "Hotel Name" column to dataframe
hotel_df["Hotel Name"] = ""
hotel_df.head()
# Set parameters to search for a hotel
params = {
"radius": 5000,
"types": "lodging",
"key": g_key
}
# Iterate through
for index, row in hotel_df.iterrows():
# get lat, lng from df
lat = row["Lat"]
lng = row["Lng"]
params["location"] = f"{lat},{lng}"
# Use the search term: "Hotel" and our lat/lng
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
# make request and print url
name_address = requests.get(base_url, params=params)
# convert to json
name_address = name_address.json()
# Grab the first hotel from the results and store the name
try:
hotel_df.loc[index, "Hotel Name"] = name_address["results"][0]["name"]
except (KeyError, IndexError):
print("Missing field/result... skipping.")
hotel_df
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
#Convert "Hotel Name" column from the hotel_df to a list
hotels= hotel_df["Hotel Name"].tolist()
hotels
# Create a map with markers of hotel locations
hotel_layer = gmaps.marker_layer(locations)
fig = gmaps.figure()
fig.add_layer(hotel_layer)
#Display map
fig
# Add marker layer ontop of heat map
fig=gmaps.figure()
fig.add_layer(heat_layer)
fig.add_layer(hotel_layer)
# Display figure
fig
###Output
_____no_output_____
###Markdown
VacationPy---- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
import scipy.stats as st
import json
# Import API key
from api_keys import g_key
gmaps.configure(api_key=g_key)
###Output
_____no_output_____
###Markdown
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
###Code
clean_data_path='../Weather Py/clean_city_data.csv'
wpy_df=pd.read_csv(clean_data_path)
wpy_df.head()
###Output
_____no_output_____
###Markdown
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
###Code
#Configuring gmaps
#Grab lat and lon from wpy_df to use in gmaps. Grab humidity data for location value
wpy_df['lat_lon']=""
lat_lon=wpy_df[['Lat','Lon']]
humidity=wpy_df['Humidity']
lat_lon
#Create google map figure to place heat map
#gmaps.configure(api_key=gkey)
hum_fig=gmaps.figure(zoom_level=2.0,center=(0, 0) )
hum_heat=gmaps.heatmap_layer(lat_lon)
heat_layer = gmaps.heatmap_layer(lat_lon, weights=humidity,
dissipating=False, max_intensity=90,
point_radius=1)
#Add heatmap to figure
hum_fig.add_layer(heat_layer)
hum_fig
###Output
_____no_output_____
###Markdown
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows will null values.
###Code
#Already dropped null values. Used criteria to select ideal weather and filter df for those locations
wpy_df
nice_df=wpy_df[(wpy_df['Humidity']>40) & (wpy_df['Humidity']<60)]
nice_df=nice_df[(nice_df['Max Temp']>65) &(nice_df['Max Temp']<85)]
nice_df=nice_df[nice_df['Cloud Cover']<50]
nice_df=nice_df[nice_df['Wind Speed']<10]
#nice_df=nice_df.dropna(axis=0, how="any")
nice_df
###Output
_____no_output_____
###Markdown
Hotel Map* Store into a variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels within 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
###Code
#Remove unnamed column. Create new column for Hotel Name
hotel_df=nice_df
hotel_df=hotel_df.drop(['Unnamed: 0'], axis=1)
hotel_df['Hotel Name']=''
hotel_df
#view hotel_df
hotel_df
#target_coordinates = (str(hotel_df['Lat'])+", "+str(hotel_df['Lon']))
#print(target_coordinates)
#Creating new lat_lon column to pass the latitude and longitude into the JSON request. Could also use variable=hotel_df[['Lat', 'Lon']] I believe.
hotel_df['lat_lon']=""
xy=[]
for index, row in hotel_df.iterrows():
x=str(row['Lat']) + ', ' + str(row['Lon'])
xy.append(x)
hotel_df['lat_lon']=xy
hotel_df
# set up a parameters dictionary, base url to search, and variable lists to fill
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
target_search = "Hotel"
target_radius = 5000
target_type = "lodging"
hotel_name=[]
country_name=[]
# set up a parameters dictionary
p_dict = {
"keyword": target_search,
"radius": target_radius,
"type": target_type,
"key": g_key
}
# use iterrows to iterate through hotel dataframe
for index, row in hotel_df.iterrows():
# get location from lat_lon
# Help from Hunter Carlisle on try and exception section in for-loop below. Used to skip sections where there is no response within parameters.
# Found city of Yuli had no hotel within 5000 meters based on lat, lon
try:
location = row['lat_lon']
# add keyword to params dict
p_dict['location'] = location
# assemble url and make API request
response = requests.get(base_url, params=p_dict).json()
# extract results
results = response['results']
#print results name. Used this in my troubleshooting
print(results[0]['name'])
hotel=results[0]['name']
hotel_name.append(hotel)
except IndexError as error:
hotel_name.append("")
hotel_df['Hotel Name']=hotel_name
hotel_df
#Remove rows where hotel name = "". This is a byproduct of the try/except statement above: rather than use except: pass, those cells were filled with "" in order to keep the loop order intact.
hotel_df_cleaned = hotel_df[hotel_df["Hotel Name"] != ""].copy()   # .copy() avoids SettingWithCopyWarning when adding columns below
hotel_df_cleaned
#Create new column called City that is equivalent to the name column since that is what info box template coding appears to be requesting below.
hotel_df_cleaned['City'] = hotel_df_cleaned['name']
#View hotel_df_cleaned
hotel_df_cleaned
#create hotel_info list to store data
hotel_info=[]
#create list of variables to run through for-loop in cell below
locations=hotel_df_cleaned[['Lat', 'Lon']]
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df_cleaned.iterrows()]
locations = hotel_df_cleaned[["Lat", "Lon"]]
#Create new gmap marker_layer
new_layer=gmaps.marker_layer(locations, label='Click for more Info', info_box_content=hotel_info)
# Add marker layer ontop of humidity heat map
hum_fig.add_layer(new_layer)
# Display figure
hum_fig
###Output
_____no_output_____
###Markdown
VacationPy---- Note* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
import json
import time
# Import API key
from api_keys import g_key
###Output
_____no_output_____
###Markdown
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
###Code
weather_data = pd.read_csv("../Weather Py/output_data/cities.csv")
weather_data
###Output
_____no_output_____
###Markdown
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
###Code
# gmaps
gmaps.configure(api_key=g_key)
# Store latitude and longitude in locations
locations = weather_data[["Lat", "Lng"]]
# Store Humidity in humidity
humidity = weather_data["Humidity"]
# Plot Heatmap
fig = gmaps.figure(center=(46.0, -5.0), zoom_level=2)
max_intensity = np.max(humidity)
# Add heat layer
heat_layer = gmaps.heatmap_layer(locations, weights = humidity, dissipating=False, max_intensity=100, point_radius=3)
# Add layer
fig.add_layer(heat_layer)
# Display figure
fig
###Output
_____no_output_____
###Markdown
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows will null values.
###Code
# Narrow down the cities with wind speed <= 10 mph, cloudiness = 0 and max temp between 70 and 80
narrow_city_df = weather_data.loc[(weather_data["Wind Speed"] <= 10) & (weather_data["Cloudiness"] == 0) & \
(weather_data["Max Temp"] >= 70) & (weather_data["Max Temp"] <= 80)].dropna()
narrow_city_df
###Output
_____no_output_____
###Markdown
Hotel Map* Store into a variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels within 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
###Code
#hotel df
hotel_df = narrow_city_df.loc[:,["City", "Country", "Lat", "Lng"]]
#add Hotel Name Column
hotel_df["Hotel Name"] = ""
#print
hotel_df
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {"type" : "hotel",
"keyword" : "hotel",
"radius" : 5000,
"key" : g_key}
for index, row in hotel_df.iterrows():
# get city name, lat, lnt from df
lat = row["Lat"]
lng = row["Lng"]
city_name = row["City"]
# add keyword to params dict
params["location"] = f"{lat},{lng}"
# assemble url and make API request
print(f"Retrieving Results for Index {index}: {city_name}.")
response = requests.get(base_url, params=params).json()
# results
results = response['results']
# save the hotel name to dataframe
try:
print(f"Closest hotel in {city_name} is {results[0]['name']}.")
hotel_df.loc[index, "Hotel Name"] = results[0]['name']
# if there is no hotel available, show missing field
except (KeyError, IndexError):
print("Missing field/result... skipping.")
print("------------")
# Wait 1 sec to make another api request to avoid SSL Error
time.sleep(1)
# Print end of search once searching is completed
print("-------End of Search-------")
#print df
hotel_df
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer ontop of heat map
markers = gmaps.marker_layer(locations, info_box_content = hotel_info)
# Add the layer to the map
fig.add_layer(markers)
# Display figure
fig
###Output
_____no_output_____
|
misc/notebooks/tutorials/basic/observables.ipynb
|
###Markdown
Manipulating and measuring observablesThis notebook introduces the Observable class that allows to describe, manipulate and sample observables over quantum states produced by circuits. Defining a new observableWe will take as example a simple observable that counts the number of ones in a quantum state over 5 qubits.This observable can be written as:$$ O = \Sigma_i (1 - \sigma_z^i)/2 $$An observable is initialized with the number of qubits it acts on:
###Code
from qat.core import Observable, Term
nbqbits = 5
one_count = Observable(nbqbits)
###Output
_____no_output_____
###Markdown
New Pauli terms can be added to the observable.First, we need to write our observable $O$ as a sum of weighted Pauli operators:$$ O = N/2 - \Sigma_i \frac{1}{2}\sigma_z^i $$
###Code
# The sigma Z terms:
for i in range(nbqbits):
one_count.add_term(Term(-0.5, "Z", [i]))
# And the constant term:
one_count.constant_coeff += nbqbits/2
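# Equivalent one-shot construction (a sketch; assumes the Observable constructor
# accepts the `pauli_terms` and `constant_coeff` keyword arguments, as in recent
# myQLM releases) -- handy when all the terms are known up front.
one_count_alt = Observable(nbqbits,
                           pauli_terms=[Term(-0.5, "Z", [i]) for i in range(nbqbits)],
                           constant_coeff=nbqbits / 2)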
###Output
_____no_output_____
###Markdown
We can print our observable to check if it is correct
###Code
print(one_count)
###Output
_____no_output_____
###Markdown
Sampling an observable over the final state of a circuitLets build a simple circuit and approximate the expectation of our observable over its final state.
###Code
from qat.lang.AQASM import Program, X, CNOT, RX
prog_2_ones = Program()
qbits = prog_2_ones.qalloc(nbqbits)
prog_2_ones.apply(X, qbits[0])
prog_2_ones.apply(CNOT, qbits[0], qbits[2])
circ_2_ones = prog_2_ones.to_circ()
from qat.qpus import LinAlg
qpu = LinAlg()
job = circ_2_ones.to_job("OBS", observable=one_count, nbshots=30)
print("Number of ones:", qpu.submit(job).value)
###Output
_____no_output_____
###Markdown
Now with a less obvious circuit:
###Code
prog = Program()
qbits = prog.qalloc(5)
for i, qb in enumerate(qbits):
prog.apply(RX(0.324 * i), qb)
circ = prog.to_circ()
job = circ.to_job("OBS", observable=one_count, nbshots=30)
print("Number of ones:", qpu.submit(job).value)
###Output
_____no_output_____
###Markdown
Of course, we can reduce the deviation of this result by increasing the number of samples:
###Code
job = circ.to_job("OBS", observable=one_count, nbshots=1000)
print("Number of ones:", qpu.submit(job).value)
###Output
_____no_output_____
###Markdown
Or even compute the exact value of the observable using an "infinite" number of shots
###Code
job = circ.to_job("OBS", observable=one_count)
print("Exact number of ones:", qpu.submit(job).value)
###Output
_____no_output_____
|
genius_crawler/2. using vaex.ipynb
|
###Markdown
Bag of Words vectorization for song meta data. The Networkx graph is coming out way too small: I have over 31 million edges, but the graph has 393,971 nodes. Why list comprehension? List comprehension worked where numpy vectorization and vaex's built-in string operations failed.
###Code
d = df.meta_data_2.evaluate()
data_strings = [" ".join(i) for i in d]
###Output
_____no_output_____
###Markdown
Build vocabulary. CountVectorizer() turns `'pete-rock'` into `['pete','rock']`, so we need to build a new vocabulary.
###Code
data_vocabulary = list(" ".join(data_strings).split(" "))
len(data_vocabulary)
len(set(data_vocabulary))
###Output
_____no_output_____
###Markdown
9 million tokens condense into 24k nodes; maybe the edgelist wasn't that far off.
###Code
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(
strip_accents=None,
lowercase=True,
preprocessor=None,
tokenizer=None,
stop_words=None,
)
vectorizer.vocabulary =set(data_vocabulary)
X = vectorizer.fit_transform(data_strings)
len(vectorizer.vocabulary_)
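# Alternative sketch (an assumption about the intent, not code from the original
# notebook): split on whitespace only via token_pattern, so hyphenated tokens like
# 'pete-rock' survive intact without overriding the vocabulary by hand.
whitespace_vectorizer = CountVectorizer(lowercase=True, token_pattern=r"\S+")
X_ws = whitespace_vectorizer.fit_transform(data_strings)
len(whitespace_vectorizer.vocabulary_)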
###Output
_____no_output_____
###Markdown
Back to Vaex. This is going to be a very round-about way. This did not work; skip to Truncated SVD for now.
###Code
# import pyarrow
# pyarrow.array(X)
###Output
_____no_output_____
###Markdown
Truncated SVD: used because the truncated form can take sparse matrices.
###Code
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=300, n_iter=30, random_state=42)
%%time
x_svd = svd.fit_transform(X)
x_svd.shape
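# Quick sanity check (a sketch): how much variance do the 300 components retain?
# explained_variance_ratio_ is a standard attribute of a fitted TruncatedSVD.
print("Variance retained:", svd.explained_variance_ratio_.sum())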
###Output
_____no_output_____
###Markdown
Get lyric embeddings
###Code
npl = df.lyrics.evaluate()
import re
from ast import literal_eval
def clean_lyrics(lyrics):
"""
0. Check for type == string, or else it will throw an error. didn't use a plain else incase there were other errors.
1. turn the pickled list into into an actual list
2. rejoin the list into one string
3. regex and other string operations to clean the lyrics
"""
try:
lyrics = literal_eval(lyrics)
l = " ".join(lyrics)
l = re.sub(r'\[(.+?)\]|"', " ", l)
l = (
l.replace("'", "")
.replace(r", ", " ")
.replace("(", "")
.replace(")", "")
.replace("?", "")
.replace(":" "", "")
.strip()
.lower()
)
return l
except:
return " "
df.add_virtual_column("clean_lyrics", df.lyrics.apply(clean_lyrics))
###Output
_____no_output_____
###Markdown
I should definitely have been visualizing the data earlier. There are a few outliers causing problems: songs with no lyrics... I was way too confident in my data.
###Code
df
cl = df.clean_lyrics.evaluate()
df.head(5)
import matplotlib.pyplot
df.plot1d(df.clean_lyrics.str.len())
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
lcl = [z.split(" ") for z in cl]
documents = [TaggedDocument(doc, [i]) for i, doc in enumerate(cl)]
model = Doc2Vec(vector_size=50, min_count=2, epochs=40)
model.build_vocab(documents)
model.train(documents, total_examples=model.corpus_count, epochs=model.epochs)
#INFO:MainThread:gensim.models.base_any2vec:training on a 57955880 raw words (115911720 effective words) took 7759.6s, 14938 effective words/s
x ="Yeah, I'm gonna take my horse to the old town road I'm gonna ride 'til \
I can't no more I'm gonna take my horse to the old town \
road I'm gonna ride 'til I can't no more\
I got the horses in the back\
Horse tack is attached\
Hat is matte black\
Got the boots that's black to match\
Ridin' on a horse, ha\
You can whip your Porsche\
I been in the valley\
You ain't been up off that porch, now"
x = x.split(" ")
y = "Hat down, cross town, livin' like a rockstar\
Spent a lot of money on my brand new guitar\
Baby's got a habit: diamond rings and Fendi sports bras\
Ridin' down Rodeo in my Maserati sports car\
Got no stress, I've been through all that\
I'm like a Marlboro Man so I kick on back\
Wish I could roll on back to that old town road\
I wanna ride 'til I can't no more"
y= y.split(" ")
x = model.infer_vector(x)
model.infer_vector(y)
pj = "Yamborghini chain, rest in peace to my superior\
Hermès link could feed a village in Liberia\
TMZ taking pictures, causin' mad hysteria\
Momma see me on BET and started tearin' up\
I'ma start killin' niggas, how you get that trife?\
I attended Harlem picnics where you risked your life\
Uncle used to skim work, sellin' nicks at night\
I was only 8 years old, watching Nick at Nite\
Uncle Psycho was in that bathroom buggin'!\
Knife to his guts, hope Daddy don't cut him\
Suicidal thoughts brought to me with no advisory\
He was pitchin' dummy, sellin' fiends mad ivory\
Grandma had the arthritis in her hands, bad!\
She was poppin' pills like rappers in society\
I fuck yo bitch for the irony\
I'll send Meechy at yo ho if yo bitch keep eyein' me"
pj = pj.split(" ")
pj = model.infer_vector(pj)
model.docvecs.most_similar([pj],topn=10)
songs = df.song_identifier.evaluate()
dff[964697]
lyrics = dff.lyrics.evaluate()
lyrics[641956]
cl[641956]
documents[641956]
model.save("lyric_doc2vec")
model2=Doc2Vec.load("lyric_doc2vec")
pj1 = "I'ma explain why you probably never seen me\
I'm in a sunken place, no Instagram, I'm watchin' TV\
I think I trade my breakfast, lunch and dinner for some kitty\
Please believe me, I see RiRi, I'ma eat it like panini\
I go dumb up in the broad, hit the walls like graffiti\
Indian burns all up on a nigga wee-wee\
I think I need a foursome, Bella, Kendall, Gigi\
It'd be easy if Reneezy hook it all up on the leezy\
I go crazy in my Yeezy, Kirk Kneezy on the beat\
I told 'em now we finna glow up in the street\
Rappers talk subliminal but they don't talk to me\
Put 'em in a Jersey shore like Pauly D"
pj1 = pj1.split(" ")
pj1 = model2.infer_vector(pj1)
model2.docvecs.most_similar([pj1],topn=10)
z[1532871]
lyrics = df.lyrics.evaluate()
lyrics[1532871]
" ".join(literal_eval(lyrics[1532871]))
###Output
_____no_output_____
###Markdown
Get edgelist
###Code
df.drop(df.meta_data_1,inplace=True)
df.drop(df.header_links,inplace=True)
df.drop(df.side_table,inplace=True)
df.drop(df.index,inplace=True)
def edge_list(song_name,associated_data):
return np.array([(song_name,i) for i in associated_data])
df.add_virtual_column("nodes",df.meta_data_2.apply(len))
song = df.song_identifier.evaluate()
data = df.meta_data_2.evaluate()
df.add_column("edge_list",np.array([edge_list(a,b) for a,b in zip(song,data)]))
associated_data = df.evaluate(df.meta_data_2)
edge_list = df.edge_list.evaluate()
edge_list = np.array([ i for sublist in edge_list for i in sublist ])
type(edge_list)
lyrics
#!pipenv install networkx --dev
import networkx as nx
G = nx.from_edgelist(edge_list)
g = nx.to_scipy_sparse_matrix(G)
#!pipenv install scikit-learn --dev
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=300, n_iter=50, random_state=42)
g_svd = svd.fit_transform(g)
G.size()
###Output
_____no_output_____
|
bindings/python/tutorials/CNTK_101_LogisticRegression.ipynb
|
###Markdown
CNTK 101: Logistic Regression and ML PrimerThis tutorial is targeted to individuals who are new to CNTK and to machine learning. In this tutorial, you will train a simple yet powerful machine learning model that is widely used in industry for a variety of applications. The model trained below scales to massive data sets in the most expeditious manner by harnessing computational scalability leveraging the computational resources you may have (one or more CPU cores, one or more GPUs, a cluster of CPUs or a cluster of GPUs), transparently via the CNTK library.The following notebook users Python APIs. If you are looking for this example in Brainscript, please look [here](https://github.com/Microsoft/CNTK/tree/v2.0.beta1.0/Examples/Tutorials/LogisticRegressionAndMultiClass). Introduction**Problem**:A cancer hospital has provided data and wants us to determine if a patient has a fatal [malignant][] cancer vs. a benign growth. This is known as a classification problem. To help classify each patient, we are given their age and the size of the tumor. Intuitively, one can imagine that younger patients and/or patient with small tumor size are less likely to have malignant cancer. The data set simulates this application where the each observation is a patient represented as a dot (in the plot below) where red color indicates malignant and blue indicates benign disease. Note: This is a toy example for learning, in real life there are large number of features from different tests/examination sources and doctors' experience that play into the diagnosis/treatment decision for a patient.**Goal**:Our goal is to learn a classifier that automatically can label any patient into either benign or malignant category given two features (age and tumor size). In this tutorial, we will create a linear classifier that is a fundamental building-block in deep networks.In the figure above, the green line represents the learnt model from the data and separates the blue dots from the red dots. In this tutorial, we will walk you through the steps to learn the green line. Note: this classifier does make mistakes where couple of blue dots are on the wrong side of the green line. However, there are ways to fix this and we will look into some of the techniques in later tutorials. **Approach**: Any learning algorithm has typically 5 stages namely, Data reading, Data preprocessing, Creating a model, Learning the model parameters and Evaluating (a.k.a. testing/prediction) the model. >1. Data reading: We generate simulated data sets with each sample having two features (plotted below) indicative of the age and tumor size.>2. Data preprocessing: Often the individual features such as size or age needs to be scaled. Typically one would scale the data between 0 and 1. To keep things simple, we are not doing any scaling in this tutorial (for details look here: [feature scaling][]).>3. Model creation: We introduce a basic linear model in this tutorial. >4. Learning the model: This is also known as training. While fitting a linear model can be done in a variety of ways ([linear regression][]), in CNTK we use Stochastic Gradient Descent a.k.a. [SGD][].>5. Evaluation: This is also known as testing where one takes data sets with known labels (a.k.a ground-truth) that was not ever used for training. This allows us to assess how a model would perform in real world (previously unseen) observations. 
Logistic Regression[Logistic regression][] is fundamental machine learning technique that uses a linear weighted combination of features and generates the probability of predicting different classes. In our case the classifer will generate a probability in [0,1] which can then be compared with a threshold (such as 0.5) to produce a binary label (0 or 1). However, the method shown can be extended to multiple classes easily. In the figure above, contributions from different input features are linearly weighted and aggregated. The resulting sum is mapped to a 0-1 range via a [sigmoid][] function. For classifiers with more than two output labels, one can use a [softmax][] function.[malignant]: https://en.wikipedia.org/wiki/Malignancy[feature scaling]: https://en.wikipedia.org/wiki/Feature_scaling[SGD]: https://en.wikipedia.org/wiki/Stochastic_gradient_descent[linear regression]: https://en.wikipedia.org/wiki/Linear_regression[logistic regression]: https://en.wikipedia.org/wiki/Logistic_regression[softmax]: https://en.wikipedia.org/wiki/Multinomial_logistic_regression[sigmoid]: https://en.wikipedia.org/wiki/Sigmoid_function
###Code
# Import the relevant components
import numpy as np
import sys
import os
from cntk import Trainer, cntk_device, StreamConfiguration
from cntk.device import cpu, set_default_device
from cntk.learner import sgd
from cntk.ops import *
###Output
_____no_output_____
###Markdown
Data Generation. Let us generate some synthetic data emulating the cancer example using the `numpy` library. We have two features (represented in two dimensions), and each observation belongs to one of the two classes (benign: blue dot or malignant: red dot). In our example, each observation in the training data has a label (blue or red) corresponding to its set of features (age and size). In this example, we have two classes represented by labels 0 or 1, thus a binary classification task.
###Code
# Define the network
input_dim = 2
num_output_classes = 2
###Output
_____no_output_____
###Markdown
Input and Labels. In this tutorial we are generating synthetic data using the `numpy` library. In real world problems, one would use a [reader][] that would read feature values (`features`: *age* and *tumor size*) corresponding to each observation (patient). The simulated *age* variable is scaled down to have a similar range as the other variable. This is a key aspect of data pre-processing that we will learn more about in later tutorials. Note, each observation can reside in a higher dimension space (when more features are available) and will be represented as a [tensor][] in CNTK. More advanced tutorials shall introduce the handling of high dimensional data.[reader]: https://github.com/Microsoft/CNTK/search?p=1&q=reader&type=Wikis&utf8=%E2%9C%93[tensor]: https://en.wikipedia.org/wiki/Tensor
###Code
# Ensure we always get the same amount of randomness
np.random.seed(0)
# Helper function to generate a random data sample
def generate_random_data_sample(sample_size, feature_dim, num_classes):
# Create synthetic data using NumPy.
Y = np.random.randint(size=(sample_size, 1), low=0, high=num_classes)
# Make sure that the data is separable
X = (np.random.randn(sample_size, feature_dim)+3) * (Y+1)
# Specify the data type to match the input variable used later in the tutorial (default type is double)
X = X.astype(np.float32)
# converting class 0 into the vector "1 0 0",
# class 1 into vector "0 1 0", ...
class_ind = [Y==class_number for class_number in range(num_classes)]
Y = np.asarray(np.hstack(class_ind), dtype=np.float32)
return X, Y
# Create the input variables denoting the features and the label data. Note: the input_variable does not need
# additional info on number of observations (Samples) since CNTK creates only the network topology first
mysamplesize = 32
features, labels = generate_random_data_sample(mysamplesize, input_dim, num_output_classes)
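# Illustrative sketch (not part of the original tutorial): the hstack of boolean masks
# above is equivalent to a one-hot lookup into an identity matrix.
example_classes = np.array([0, 1, 1, 0])
one_hot_example = np.eye(num_output_classes, dtype=np.float32)[example_classes]
print(one_hot_example)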
###Output
_____no_output_____
###Markdown
Let us visualize the input data.**Note**: If the import of `matplotlib.pyplot` fails, please run `conda install matplotlib` which will fix the `pyplot` version dependencies. If you are on a python environment different from Anaconda, then use `pip install`.
###Code
# Plot the data
import matplotlib.pyplot as plt
%matplotlib inline
# given this is a 2-class problem
colors = ['r' if l == 0 else 'b' for l in labels[:,0]]
plt.scatter(features[:,0], features[:,1], c=colors)
plt.xlabel("Scaled age (in yrs)")
plt.ylabel("Tumor size (in cm)")
plt.show()
###Output
_____no_output_____
###Markdown
Model CreationA logistic regression (a.k.a LR) network is the simplest building block but has been powering many ML applications in the past decade. LR is a simple linear model that takes as input, a vector of numbers describing the properties of what we are classifying (also known as a feature vector, $\bf{x}$, the blue nodes in the figure) and emits the *evidence* ($z$) (output of the green node, a.k.a. as activation). Each feature in the input layer is connected with a output node by a corresponding weight w (indicated by the black lines of varying thickness). The first step is to compute the evidence for an observation. $$z = \sum_{i=1}^n w_i \times x_i + b = \textbf{w} \cdot \textbf{x} + b$$ where $\bf{w}$ is the weight vector of length $n$ and $b$ is known as the [bias][] term. Note: we use **bold** notation to denote vectors. The computed evidence is mapped to a 0-1 scale using a [`sigmoid`][] (when the outcome can take one of two values) or a `softmax` function (when the outcome can take one of more than 2 classes value).Network input and output: - **input** variable (a key CNTK concept): >An **input** variable is a user-code facing container where user-provided code fills in different observations (data point or sample, equivalent to a blue/red dot in our example) as inputs to the model function during model learning (a.k.a.training) and model evaluation (a.k.a testing). Thus, the shape of the `input_variable` must match the shape of the data that will be provided. For example, when data are images each of height 10 pixels and width 5 pixels, the input feature dimension will be 2 (representing image height and width). Similarly, in our example the dimensions are age and tumor size, thus `input_dim` = 2). More on data and their dimensions to appear in separate tutorials. [bias]: https://www.quora.com/What-does-the-bias-term-represent-in-logistic-regression[`sigmoid`]: https://en.wikipedia.org/wiki/Sigmoid_function
###Code
input = input_variable(input_dim, np.float32)
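# Tiny NumPy illustration (a sketch, not CNTK code) of the evidence described above:
# z = w.x + b for one observation, squashed to [0, 1] with a sigmoid. The values of
# w_demo, b_demo and x_demo are made up purely for illustration.
w_demo = np.array([0.4, -0.2], dtype=np.float32)   # one weight per feature
b_demo = np.float32(0.1)                           # bias term
x_demo = np.array([3.0, 5.0], dtype=np.float32)    # (scaled age, tumor size)
z_demo = np.dot(w_demo, x_demo) + b_demo
print("evidence z =", z_demo, " sigmoid(z) =", 1.0 / (1.0 + np.exp(-z_demo)))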
###Output
_____no_output_____
###Markdown
Network setupThe `linear_layer` function is a straight forward implementation of the equation above. We perform two operations:0. multiply the weights ($\bf{w}$) with the features ($\bf{x}$) using CNTK `times` operator and add individual features' contribution,1. add the bias term $b$.These CNTK operations are optimized for execution on the available hardware and the implementation hides the complexity away from the user.
###Code
# Define a dictionary to store the model parameters
mydict = {"w":None,"b":None}
def linear_layer(input_var, output_dim):
input_dim = input_var.shape[0]
weight_param = parameter(shape=(input_dim, output_dim))
bias_param = parameter(shape=(output_dim))
mydict['w'], mydict['b'] = weight_param, bias_param
return times(input_var, weight_param) + bias_param
###Output
_____no_output_____
###Markdown
`z` will be used to represent the output of a network.
###Code
output_dim = num_output_classes
z = linear_layer(input, output_dim)
###Output
_____no_output_____
###Markdown
Learning model parametersNow that the network is setup, we would like to learn the parameters $\bf w$ and $b$ for our simple linear layer. To do so we convert, the computed evidence ($z$) into a set of predicted probabilities ($\textbf p$) using a `softmax` function.$$ \textbf{p} = \mathrm{softmax}(z)$$ The `softmax` is an activation function that maps the accumulated evidences to a probability distribution over the classes (Details of the [softmax function][]). Other choices of activation function can be [found here][].[softmax function]: https://www.cntk.ai/pythondocs/cntk.ops.html?highlight=softmaxcntk.ops.softmax[found here]: https://github.com/Microsoft/CNTK/wiki/Activation-Functions TrainingThe output of the `softmax` is a probability of observations belonging to the respective classes. For training the classifier, we need to determine what behavior the model needs to mimic. In other words, we want the generated probabilities to be as close as possible to the observed labels. This function is called the *cost* or *loss* function and shows what is the difference between the learnt model vs. that generated by the training set.[`Cross-entropy`][] is a popular function to measure the loss. It is defined as:$$ H(p) = - \sum_{j=1}^C y_j \log (p_j) $$ where $p$ is our predicted probability from `softmax` function and $y$ represents the label. This label provided with the data for training is also called the ground-truth label. In the two-class example, the `label` variable has dimensions of two (equal to the `num_output_classes` or $C$). Generally speaking, if the task in hand requires classification into $C$ different classes, the label variable will have $C$ elements with 0 everywhere except for the class represented by the data point where it will be 1. Understanding the [details][] of this cross-entropy function is highly recommended.[`cross-entropy`]: http://lsstce08:8000/cntk.ops.htmlcntk.ops.cross_entropy_with_softmax[details]: http://colah.github.io/posts/2015-09-Visual-Information/
###Code
label = input_variable((num_output_classes), np.float32)
loss = cross_entropy_with_softmax(z, label)
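# Worked NumPy example (a sketch, independent of CNTK) of the loss defined above:
# softmax of a small made-up evidence vector and its cross-entropy against a one-hot label.
z_ex = np.array([2.0, 0.5])                     # evidences for the two classes
p_ex = np.exp(z_ex) / np.sum(np.exp(z_ex))      # softmax
y_ex = np.array([1.0, 0.0])                     # ground truth: class 0
print("softmax:", p_ex, " cross-entropy:", -np.sum(y_ex * np.log(p_ex)))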
###Output
_____no_output_____
###Markdown
EvaluationIn order to evaluate the classification, one can compare the output of the network which for each observation emits a vector of evidences (can be converted into probabilities using `softmax` functions) with dimension equal to number of classes.
###Code
eval_error = classification_error(z, label)
###Output
_____no_output_____
###Markdown
Configure trainingThe trainer strives to reduce the `loss` function by different optimization approaches, [Stochastic Gradient Descent][] (`sgd`) being one of the most popular one. Typically, one would start with random initialization of the model parameters. The `sgd` optimizer would calculate the `loss` or error between the predicted label against the corresponding ground-truth label and using [gradient-decent][] generate a new set model parameters in a single iteration. The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observation) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the `loss` or error from that set to update the model parameters. This subset is called a *minibatch*.With minibatches we often sample observation from the larger training dataset. We repeat the process of model parameters update using different combination of training samples and over a period of time minimize the `loss` (and the error). When the incremental error rates are no longer changing significantly or after a preset number of maximum minibatches to train, we claim that our model is trained.One of the key parameter for optimization is called the `learning_rate`. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will be covering more details in later tutorial. With this information, we are ready to create our trainer. [optimization]: https://en.wikipedia.org/wiki/Category:Convex_optimization[Stochastic Gradient Descent]: https://en.wikipedia.org/wiki/Stochastic_gradient_descent[gradient-decent]: http://www.statisticsviews.com/details/feature/5722691/Getting-to-the-Bottom-of-Regression-with-Gradient-Descent.html
###Code
# Instantiate the trainer object to drive the model training
learning_rate = 0.02
learner = sgd(z.parameters, lr=learning_rate)
trainer = Trainer(z, loss, eval_error, [learner])
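# Conceptual sketch of the update rule sgd applies (plain NumPy, not the CNTK
# implementation): each parameter is nudged against its gradient, scaled by the
# learning rate. param_demo and grad_demo are made-up values for illustration only.
param_demo = np.array([0.5, -1.0])
grad_demo = np.array([0.2, -0.4])
param_demo = param_demo - learning_rate * grad_demo   # one SGD step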
###Output
_____no_output_____
###Markdown
First let us create some helper functions that will be needed to visualize different quantities associated with training. Note these convenience functions are for understanding what goes on under the hood.
###Code
from cntk.utils import get_train_eval_criterion, get_train_loss
# Define a utility function to compute the moving average.
# More efficient implementation is possible with np.cumsum() function
def moving_average(a, w=10) :
if len(a) < w:
return a[:]
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
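# Sketch of the np.cumsum() variant mentioned above (same windowing convention:
# values before index w are passed through, afterwards the mean of the previous
# w values is used). Illustration only; the rest of the tutorial keeps moving_average.
def moving_average_cumsum(a, w=10):
    a = np.asarray(a, dtype=float)
    if len(a) < w:
        return a.tolist()
    c = np.concatenate(([0.0], np.cumsum(a)))
    out = a.copy()
    out[w:] = (c[w:-1] - c[:-(w + 1)]) / w   # mean of a[idx-w:idx] for idx >= w
    return out.tolist()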
# Defines a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss, eval_error = "NA", "NA"
if mb % frequency == 0:
training_loss = get_train_loss(trainer)
eval_error = get_train_eval_criterion(trainer)
if verbose: print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}".format(mb, training_loss, eval_error))
return mb, training_loss, eval_error
###Output
_____no_output_____
###Markdown
Run the trainerWe are now ready to train our Logistic Regression model. We want to decide what data we need to feed into the training engine.In this example, each iteration of the optimizer will work on 25 samples (25 dots w.r.t. the plot above) a.k.a `minibatch_size`. We would like to train on say 20000 observations. If the number of samples in the data is only 10000, the trainer will make 2 passes through the data. This is represented by `num_minibatches_to_train`. Note: In real world case, we would be given a certain amount of labeled data (in the context of this example, observation (age, size) and what they mean (benign / malignant)). We would use a large number of observations for training say 70% and set aside the remainder for evaluation of the trained model.With these parameters we can proceed with training our simple feedforward network.
###Code
# Initialize the parameters for the trainer
minibatch_size = 25
num_samples_to_train = 20000
num_minibatches_to_train = int(num_samples_to_train / minibatch_size)
# Run the trainer on and perform model training
training_progress_output_freq = 20
plotdata = {"batchsize":[], "loss":[], "error":[]}
for i in range(0, num_minibatches_to_train):
features, labels = generate_random_data_sample(minibatch_size, input_dim, num_output_classes)
# Specify the mapping of input variables in the model to actual minibatch data to be trained with
trainer.train_minibatch({input : features, label : labels})
batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1)
if not (loss == "NA" or error =="NA"):
plotdata["batchsize"].append(batchsize)
plotdata["loss"].append(loss)
plotdata["error"].append(error)
# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])
#Plot the training loss and the training error
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss ')
plt.show()
plt.subplot(212)
plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error ')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation / Testing Now that we have trained the network. Let us evaluate the trained network on data that hasn't been used for training. This is called **testing**. Let us create some new data and evaluate the average error & loss on this set. This is done using `trainer.test_minibatch`. Note the error on this previously unseen data is comparable to training error. This is a **key** check. Should the error be larger than the training error by a large margin, it indicates that the train model will not perform well on data that it has not seen during training. This is known as [overfitting][]. There are several ways to address overfitting that is beyond the scope of this tutorial but CNTK toolkit provide the necessary components to address overfitting.Note: We are testing on a single minibatch for illustrative purposes. In practice one runs several minibatches of test data and reports the average. **Question** Why is this suggested? Try plotting the test error over several set of generated data sample and plot using plotting functions used for training. Do you see a pattern?[overfitting]: https://en.wikipedia.org/wiki/Overfitting
###Code
# Run the trained model on newly generated dataset
#
test_minibatch_size = 25
features, labels = generate_random_data_sample(test_minibatch_size, input_dim, num_output_classes)
trainer.test_minibatch({input : features, label : labels})
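# Sketch for the question above: average the test error over several freshly generated
# minibatches instead of a single one (test_minibatch returns the average error for
# the batch it is given).
test_errors = []
for _ in range(10):
    f_test, l_test = generate_random_data_sample(test_minibatch_size, input_dim, num_output_classes)
    test_errors.append(trainer.test_minibatch({input : f_test, label : l_test}))
print("Average test error over 10 minibatches:", np.mean(test_errors))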
###Output
_____no_output_____
###Markdown
Checking prediction / evaluation. For evaluation, we map the output of the network to the 0-1 range and convert it into probabilities for the two classes. This suggests the chances of each observation being malignant or benign. We use a softmax function to get the probabilities of each of the classes.
###Code
out = softmax(z)
result = out.eval({input : features})
###Output
_____no_output_____
###Markdown
Let's compare the ground-truth label with the predictions. They should be in agreement.**Question:** - How many predictions were mislabeled? Can you change the code below to identify which observations were misclassified?
###Code
print("Label :", np.argmax(labels[:5],axis=1))
print("Predicted:", np.argmax(result[0,:5,:],axis=1))
###Output
Label : [1 0 0 1 1]
Predicted: [1 0 0 0 0]
###Markdown
Visualization. It is desirable to visualize the results. In this example, the data is conveniently in two dimensions and can be plotted. For data with higher dimensions, visualization can be challenging. There are advanced dimensionality reduction techniques that allow for such visualisations [t-sne][].[t-sne]: https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
###Code
# Model parameters
print(mydict['b'].value)
bias_vector = mydict['b'].value
weight_matrix = mydict['w'].value
# Plot the data
import matplotlib.pyplot as plt
# given this is a 2-class problem
colors = ['r' if l == 0 else 'b' for l in labels[:,0]]
plt.scatter(features[:,0], features[:,1], c=colors)
plt.plot([0, bias_vector[0]/weight_matrix[0][1]], [ bias_vector[1]/weight_matrix[0][0], 0], c = 'g', lw = 3)
plt.xlabel("Scaled age (in yrs)")
plt.ylabel("Tumor size (in cm)")
plt.show()
###Output
[ 7.98766518 -7.988904 ]
###Markdown
CNTK 101: Logistic Regression and ML PrimerThis tutorial is targeted to individuals who are new to CNTK and to machine learning. In this tutorial, you will train a simple yet powerful machine learning model that is widely used in industry for a variety of applications. The model trained below scales to massive data sets in the most expeditious manner by harnessing computational scalability leveraging the computational resources you may have (one or more CPU cores, one or more GPUs, a cluster of CPUs or a cluster of GPUs), transparently via the CNTK library. Introduction**Problem**:A cancer hospital has provided data and wants us to determine if a patient has a fatal [malignant][] cancer vs. a benign growth. This is known as a classification problem. To help classify each patient, we are given their age and the size of the tumor. Intuitively, one can imagine that younger patients and/or patient with small tumor size are less likely to have malignant cancer. The data set simulates this application where the each observation is a patient represented as a dot (in the plot below) where red color indicates malignant and blue indicates benign disease. Note: This is a toy example for learning, in real life there are large number of features from different tests/examination sources and doctors' experience that play into the diagnosis/treatment decision for a patient.**Goal**:Our goal is to learn a classifier that automatically can label any patient into either benign or malignant category given two features (age and tumor size). In this tutorial, we will create a linear classifier that is a fundamental building-block in deep networks.In the figure above, the green line represents the learnt model from the data and separates the blue dots from the red dots. In this tutorial, we will walk you through the steps to learn the green line. Note: this classifier does make mistakes where couple of blue dots are on the wrong side of the green line. However, there are ways to fix this and we will look into some of the techniques in later tutorials. **Approach**: Any learning algorithm has typically 5 stages namely, Data reading, Data preprocessing, Creating a model, Learning the model parameters and Evaluating (a.k.a. testing/prediction) the model. >1. Data reading: We generate simulated data sets with each sample having two features (plotted below) indicative of the age and tumor size.>2. Data preprocessing: Often the individual features such as size or age needs to be scaled. Typically one would scale the data between 0 and 1. To keep things simple, we are not doing any scaling in this tutorial (for details look here: [feature scaling][]).>3. Model creation: We introduce a basic linear model in this tutorial. >4. Learning the model: This is also known as training. While fitting a linear model can be done in a variety of ways ([linear regression][]), in CNTK we use Stochastic Gradient Descent a.k.a. [SGD][].>5. Evaluation: This is also known as testing where one takes data sets with known labels (a.k.a ground-truth) that was not ever used for training. This allows us to assess how a model would perform in real world (previously unseen) observations. Logistic Regression[Logistic regression][] is fundamental machine learning technique that uses a linear weighted combination of features and generates the probability of predicting different classes. In our case the classifer will generate a probability in [0,1] which can then be compared with a threshold (such as 0.5) to produce a binary label (0 or 1). 
However, the method shown can be extended to multiple classes easily. In the figure above, contributions from different input features are linearly weighted and aggregated. The resulting sum is mapped to a 0-1 range via a [sigmoid][] function. For classifiers with more than two output labels, one can use a [softmax][] function.[malignant]: https://en.wikipedia.org/wiki/Malignancy[feature scaling]: https://en.wikipedia.org/wiki/Feature_scaling[SGD]: https://en.wikipedia.org/wiki/Stochastic_gradient_descent[linear regression]: https://en.wikipedia.org/wiki/Linear_regression[logistic regression]: https://en.wikipedia.org/wiki/Logistic_regression[softmax]: https://en.wikipedia.org/wiki/Multinomial_logistic_regression[sigmoid]: https://en.wikipedia.org/wiki/Sigmoid_function
###Code
# Import the relevant components
import numpy as np
import sys
import os
from cntk import Trainer, cntk_device, StreamConfiguration
from cntk.device import cpu, set_default_device
from cntk.learner import sgd
from cntk.ops import input_variable, cross_entropy_with_softmax, combine, classification_error, sigmoid
from cntk.ops import *
###Output
_____no_output_____
###Markdown
Data Generation. Let us generate some synthetic data emulating the cancer example using the `numpy` library. We have two features (represented in two dimensions), and each observation belongs to one of the two classes (benign: blue dot or malignant: red dot). In our example, each observation in the training data has a label (blue or red) corresponding to its set of features (age and size). In this example, we have two classes represented by labels 0 or 1, thus a binary classification task.
###Code
# Define the network
input_dim = 2
num_output_classes = 2
###Output
_____no_output_____
###Markdown
Input and Labels. In this tutorial we are generating synthetic data using the `numpy` library. In real world problems, one would use a [reader][] that would read feature values (`features`: *age* and *tumor size*) corresponding to each observation (patient). Note, each observation can reside in a higher dimension space (when more features are available) and will be represented as a [tensor][] in CNTK. More advanced tutorials shall introduce the handling of high dimensional data.[reader]: https://github.com/Microsoft/CNTK/search?p=1&q=reader&type=Wikis&utf8=%E2%9C%93[tensor]: https://en.wikipedia.org/wiki/Tensor
###Code
# Ensure we always get the same amount of randomness
np.random.seed(0)
# Helper function to generate a random data sample
def generate_random_data_sample(sample_size, feature_dim, num_classes):
# Create synthetic data using NumPy.
Y = np.random.randint(size=(sample_size, 1), low=0, high=num_classes)
# Make sure that the data is separable
X = (np.random.randn(sample_size, feature_dim)+3) * (Y+1)
# Specify the data type to match the input variable used later in the tutorial (default type is double)
X = X.astype(np.float32)
# converting class 0 into the vector "1 0 0",
# class 1 into vector "0 1 0", ...
class_ind = [Y==class_number for class_number in range(num_classes)]
Y = np.asarray(np.hstack(class_ind), dtype=np.float32)
return X, Y
# Create the input variables denoting the features and the label data. Note: the input_variable does not need
# additional info on number of observations (Samples) since CNTK creates only the network topology first
mysamplesize = 64
features, labels = generate_random_data_sample(mysamplesize, input_dim, num_output_classes)
###Output
_____no_output_____
###Markdown
Let us visualize the input data.**Note**: If the import of `matplotlib.pyplot` fails, please run `conda install matplotlib` which will fix the `pyplot` version dependencies. If you are on a python environment different from Anaconda, then use `pip install`.
###Code
# Plot the data
import matplotlib.pyplot as plt
%matplotlib inline
# given this is a 2-class problem
colors = ['r' if l == 0 else 'b' for l in labels[:,0]]
plt.scatter(features[:,0], features[:,1], c=colors)
plt.show()
###Output
_____no_output_____
###Markdown
Model CreationA logistic regression (a.k.a LR) network is the simplest building block but has been powering many ML applications in the past decade. LR is a simple linear model that takes as input, a vector of numbers describing the properties of what we are classifying (also known as a feature vector, $\bf{x}$, the blue nodes in the figure) and emits the *evidence* ($z$) (output of the green node, a.k.a. as activation). Each feature in the input layer is connected with a output node by a corresponding weight w (indicated by the black lines of varying thickness). The first step is to compute the evidence for an observation. $$z = \sum_{i=1}^n w_i \times x_i + b = \textbf{w} \cdot \textbf{x} + b$$ where $\bf{w}$ is the weight vector of length $n$ and $b$ is known as the [bias][] term. Note: we use **bold** notation to denote vectors. The computed evidence is mapped to a 0-1 scale using a [`sigmoid`][] (when the outcome can take one of two values) or a `softmax` function (when the outcome can take one of more than 2 classes value).Network input and output: - **input** variable (a key CNTK concept): >An **input** variable is a user-code facing container where user-provided code fills in different observations (data point or sample, equivalent to a blue/red dot in our example) as inputs to the model function during model learning (a.k.a.training) and model evaluation (a.k.a testing). Thus, the shape of the `input_variable` must match the shape of the data that will be provided. For example, when data are images each of height 10 pixels and width 5 pixels, the input feature dimension will be 2 (representing image height and width). Similarly, in our example the dimensions are age and tumor size, thus `input_dim` = 2). More on data and their dimensions to appear in separate tutorials. [bias]: https://www.quora.com/What-does-the-bias-term-represent-in-logistic-regression[`sigmoid`]: https://en.wikipedia.org/wiki/Sigmoid_function
###Code
input = input_variable(input_dim, np.float32)
###Output
_____no_output_____
###Markdown
Network setupThe `linear_layer` function is a straight forward implementation of the equation above. We perform two operations:0. multiply the weights ($\bf{w}$) with the features ($\bf{x}$) using CNTK `times` operator and add individual features' contribution,1. add the bias term $b$.These CNTK operations are optimized for execution on the available hardware and the implementation hides the complexity away from the user.
###Code
# Define a dictionary to store the model parameters
mydict = {"w":None,"b":None}
def linear_layer(input_var, output_dim):
input_dim = input_var.shape[0]
weight_param = parameter(shape=(input_dim, output_dim))
bias_param = parameter(shape=(output_dim))
mydict['w'], mydict['b'] = weight_param, bias_param
return times(input_var, weight_param) + bias_param
###Output
_____no_output_____
###Markdown
`z` will be used to represent the output of a network.
###Code
output_dim = num_output_classes
z = linear_layer(input, output_dim)
###Output
_____no_output_____
###Markdown
Learning model parametersNow that the network is setup, we would like to learn the parameters $\bf w$ and $b$ for our simple linear layer. To do so we convert, the computed evidence ($z$) into a set of predicted probabilities ($\textbf p$) using a `softmax` function.$$ \textbf{p} = \mathrm{softmax}(z)$$ The `softmax` is an activation function that maps the accumulated evidences to a probability distribution over the classes (Details of the [softmax function][]). Other choices of activation function can be [found here][].[softmax function]: https://www.cntk.ai/pythondocs/cntk.ops.html?highlight=softmaxcntk.ops.softmax[found here]: https://github.com/Microsoft/CNTK/wiki/Activation-Functions TrainingThe output of the `softmax` is a probability of observations belonging to the respective classes. For training the classifier, we need to determine what behavior the model needs to mimic. In other words, we want the generated probabilities to be as close as possible to the observed labels. This function is called the *cost* or *loss* function and shows what is the difference between the learnt model vs. that generated by the training set.[`Cross-entropy`][] is a popular function to measure the loss. It is defined as:$$ H(p) = - \sum_{j=1}^C y_j \log (p_j) $$ where $p$ is our predicted probability from `softmax` function and $y$ represents the label. This label provided with the data for training is also called the ground-truth label. In the two-class example, the `label` variable has dimensions of two (equal to the `num_output_classes` or $C$). Generally speaking, if the task in hand requires classification into $C$ different classes, the label variable will have $C$ elements with 0 everywhere except for the class represented by the data point where it will be 1. Understanding the [details][] of this cross-entropy function is highly recommended.[`cross-entropy`]: http://lsstce08:8000/cntk.ops.htmlcntk.ops.cross_entropy_with_softmax[details]: http://colah.github.io/posts/2015-09-Visual-Information/
###Code
label = input_variable((num_output_classes), np.float32)
loss = cross_entropy_with_softmax(z, label)
###Output
_____no_output_____
###Markdown
EvaluationIn order to evaluate the classification, one can compare the output of the network which for each observation emits a vector of evidences (can be converted into probabilities using `softmax` functions) with dimension equal to number of classes.
###Code
eval_error = classification_error(z, label)
###Output
_____no_output_____
###Markdown
Configure trainingThe trainer strives to reduce the `loss` function by different optimization approaches, [Stochastic Gradient Descent][] (`sgd`) being one of the most popular ones. Typically, one would start with random initialization of the model parameters. The `sgd` optimizer would calculate the `loss` or error between the predicted label and the corresponding ground-truth label, and use [gradient descent][gradient-decent] to generate a new set of model parameters in a single iteration. The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observations) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the `loss` or error from that set to update the model parameters. This subset is called a *minibatch*. With minibatches we often sample observations from the larger training dataset. We repeat the process of updating the model parameters using different combinations of training samples and over a period of time minimize the `loss` (and the error). When the incremental error rates are no longer changing significantly, or after a preset maximum number of minibatches to train, we claim that our model is trained. One of the key parameters for optimization is called the `learning_rate`. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will be covering more details in a later tutorial. With this information, we are ready to create our trainer. [optimization]: https://en.wikipedia.org/wiki/Category:Convex_optimization[Stochastic Gradient Descent]: https://en.wikipedia.org/wiki/Stochastic_gradient_descent[gradient-decent]: http://www.statisticsviews.com/details/feature/5722691/Getting-to-the-Bottom-of-Regression-with-Gradient-Descent.html
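For intuition, a single plain SGD step on one minibatch amounts to the update sketched below (made-up numbers; in this tutorial the update is handled internally by the `sgd` learner and the `Trainer`):

```python
import numpy as np

learning_rate = 0.02
w = np.zeros(2)                    # hypothetical parameter vector
grad_w = np.array([0.3, -0.1])     # hypothetical gradient averaged over one minibatch
w = w - learning_rate * grad_w     # one SGD update step
print(w)
```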
###Code
# Instantiate the trainer object to drive the model training
learning_rate = 0.02
learner = sgd(z.parameters, lr=learning_rate)
trainer = Trainer(z, loss, eval_error, [learner])
###Output
_____no_output_____
###Markdown
First let us create some helper functions that will be needed to visualize different functions associated with training. Note that these convenience functions are for understanding what goes on under the hood.
###Code
from cntk.utils import get_train_eval_criterion, get_train_loss
# Define a utility function to compute the moving average.
# A more efficient implementation is possible with the np.cumsum() function.
def moving_average(a, w=10):
if len(a) < w:
return a[:]
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
# Defines a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss, eval_error = "NA", "NA"
if mb % frequency == 0:
training_loss = get_train_loss(trainer)
eval_error = get_train_eval_criterion(trainer)
if verbose: print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}".format(mb, training_loss, eval_error))
return mb, training_loss, eval_error
###Output
_____no_output_____
###Markdown
Run the trainerWe are now ready to train our Logistic Regression model. We want to decide what data we need to feed into the training engine. In this example, each iteration of the optimizer will work on 25 samples (25 dots w.r.t. the plot above), a.k.a. the `minibatch_size`. We would like to train on, say, 20000 observations. If the number of samples in the data is 10000, the trainer will make multiple passes through the data. Note: in a real-world case, we would be given a certain amount of labeled data (in the context of this example, observations (age, size) and what they mean (benign / malignant)). We would use a large number of observations for training, say 70%, and set aside the remainder for evaluation of the trained model. With these parameters we can proceed with training our simple feedforward network.
###Code
# Initialize the parameters for the trainer
minibatch_size = 25
num_samples_to_train = 20000
num_minibatches_to_train = int(num_samples_to_train / minibatch_size)
# Run the trainer on and perform model training
training_progress_output_freq = 20
plotdata = {"batchsize":[], "loss":[], "error":[]}
for i in range(0, num_minibatches_to_train):
features, labels = generate_random_data_sample(minibatch_size, input_dim, num_output_classes)
# Specify the mapping of input variables in the model to actual minibatch data to be trained with
trainer.train_minibatch({input : features, label : labels})
batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1)
if not (loss == "NA" or error =="NA"):
plotdata["batchsize"].append(batchsize)
plotdata["loss"].append(loss)
plotdata["error"].append(error)
# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])
#Plot the training loss and the training error
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss ')
plt.show()
plt.subplot(212)
plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error ')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation / Testing Now that we have trained the network, let us evaluate the trained network on data that hasn't been used for training. This is called **testing**. Let us create some new data and evaluate the average error & loss on this set. This is done using `trainer.test_minibatch`. Note the error on this previously unseen data is comparable to the training error. This is a **key** check. Should the error be larger than the training error by a large margin, it indicates that the trained model will not perform well on data that it has not seen during training. This is known as [overfitting][]. There are several ways to address overfitting, which are beyond the scope of this tutorial, but the CNTK toolkit provides the necessary components to address overfitting. Note: We are testing on a single minibatch for illustrative purposes. In practice one runs several minibatches of test data and reports the average. **Question** Why is this suggested? Try plotting the test error over several sets of generated data samples using the plotting functions used for training. Do you see a pattern?[overfitting]: https://en.wikipedia.org/wiki/Overfitting
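As a sketch of the practice mentioned above (averaging the test error over several minibatches rather than reporting a single one), one could reuse the helpers already defined in this notebook; the loop below is an illustrative assumption, not part of the original tutorial:

```python
test_errors = []
for _ in range(10):
    features, labels = generate_random_data_sample(25, input_dim, num_output_classes)
    test_errors.append(trainer.test_minibatch({input : features, label : labels}))
print("average test error over 10 minibatches:", sum(test_errors) / len(test_errors))
```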
###Code
# Run the trained model on newly generated dataset
#
test_minibatch_size = 25
features, labels = generate_random_data_sample(test_minibatch_size, input_dim, num_output_classes)
trainer.test_minibatch({input : features, label : labels})
###Output
_____no_output_____
###Markdown
Checking prediction / evaluation For evaluation, we map the output of the network to the 0-1 range and convert it into probabilities for the two classes. These give the chances of each observation being malignant or benign. We use a softmax function to get the probabilities of each class.
###Code
out = softmax(z)
result = out.eval({input : features})
###Output
_____no_output_____
###Markdown
Let's compare the ground-truth labels with the predictions. They should be in agreement. **Question:** - How many predictions were mislabeled? Can you change the code below to identify which observations were misclassified?
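One possible way to answer the question (a sketch, assuming the `labels` and `result` arrays from the cells above) is to compare the argmax indices over the whole test minibatch:

```python
import numpy as np

true_labels = np.argmax(labels, axis=1)
pred_labels = np.argmax(result[0, :, :], axis=1)
mismatches = np.where(true_labels != pred_labels)[0]
print("misclassified observations:", mismatches, "count:", len(mismatches))
```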
###Code
print("Label :", np.argmax(labels[:5],axis=1))
print("Predicted:", np.argmax(result[0,:5,:],axis=1))
###Output
Label : [1 0 0 1 1]
Predicted: [1 0 0 0 0]
###Markdown
VisualizationIt is desirable to visualize the results. In this example, the data is conveniently in two dimensions and can be plotted. For data with higher dimensions, visualization can be challenging. There are advanced dimensionality reduction techniques that allow for such visualizations, e.g. [t-sne][].[t-sne]: https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
###Code
# Model parameters
print(mydict['b'].value)
bias_vector = mydict['b'].value
weight_matrix = mydict['w'].value
# Plot the data
import matplotlib.pyplot as plt
#given this is a 2 class
colors = ['r' if l == 0 else 'b' for l in labels[:,0]]
plt.scatter(features[:,0], features[:,1], c=colors)
plt.plot([0, bias_vector[0]/weight_matrix[0][1]], [ bias_vector[1]/weight_matrix[0][0], 0], c = 'g', lw = 3)
plt.show()
###Output
[ 7.99138641 -7.99262619]
|
Phase1/ROAD_TO_AI_S2/pandas/pandas_classroom.ipynb
|
###Markdown
Road to AI Session 2 Database, Graphs & MathsPANDAS Importing the library
###Code
import numpy as np
import pandas as pd
print(pd.__version__)
###Output
1.1.3
###Markdown
Reading the file
###Code
df = pd.read_csv('apy.csv')
###Output
_____no_output_____
###Markdown
The Basics
###Code
df.head()
df.tail()
#We can see the dimensions of the dataframe using the shape attribute
df.shape
#We can also extract all the column names as a list
df.columns.tolist()
#function to see statistics like mean, min, etc about each column of the dataset
df.describe()
## max() will show you the maximum values of all columns
df.max()
#get the max value for a particular column
df['Area'].max()
## find the mean of the Production score.
df['Production'].mean()
## function to identify the row index of the maximum value
df['Production'].argmax()
###Output
_____no_output_____
###Markdown
**value_counts()** shows how many times each item appears in the column. This particular command shows the number of records for each Area
###Code
df['Area'].value_counts()
###Output
_____no_output_____
###Markdown
Accessing Values
###Code
## get attributes
df.iloc[[df['Production'].argmax()]]
df.iloc[[df['Production'].argmax()]]['Area']
###Output
_____no_output_____
###Markdown
When you see data displayed in the above format, you're dealing with a Pandas **Series** object, not a dataframe object.
###Code
type(df.iloc[[df['Production'].argmax()]]['Area'])
type(df.iloc[[df['Production'].argmax()]])
###Output
_____no_output_____
###Markdown
The other really important function in Pandas is the **loc** function. Contrary to iloc, which is integer-based indexing, loc is a "Purely label-location based indexer for selection by label". Since the rows are labeled from 0 to 145288, iloc and loc are going to be pretty interchangeable in this type of dataset
###Code
df.iloc[:3]
## loc is a "Purely label-location based indexer for selection by label"
df.loc[:3]
###Output
_____no_output_____
###Markdown
Notice the slight difference in that iloc is exclusive of the second number, while loc is inclusive. Below is an example of how you can use loc to achieve the same task as we did previously with iloc
###Code
df.loc[df['Production'].argmax(), 'Area']
###Output
_____no_output_____
###Markdown
A faster version uses the **at()** function. At() is really useful whenever you know the row label and the column label of the particular value that you want to get.
###Code
df.at[df['Production'].argmax(), 'Area']
###Output
_____no_output_____
###Markdown
Sorting
###Code
## sort the dataframe in increasing order
df.sort_values('Area').head()
df.groupby('Area')
###Output
_____no_output_____
###Markdown
Filtering Rows Conditionally
###Code
df[df['Area'] > 500000]
df[(df['Area'] > 5000000) & (df['Area'] < 5555500)]
###Output
_____no_output_____
###Markdown
Grouping
###Code
## allows you to group entries by certain attributes
df.groupby('State_Name')['Area'].mean().head()
df.groupby('State_Name')['Area'].value_counts().head(9)
df.values
## Now, you can simply just access elements like you would in an array.
df.values[0][0]
###Output
_____no_output_____
###Markdown
Extracting Rows and Columns The bracket indexing operator is one way to extract certain columns from a dataframe.
###Code
df[['Production', 'Area']].head()
###Output
_____no_output_____
###Markdown
Notice that you can achieve the same result by using the loc function. Loc is a very versatile function that can help you in a lot of accessing and extracting tasks.
###Code
df.loc[:, ['Production', 'Area']].head()
###Output
_____no_output_____
###Markdown
Note the difference in the return types when you use single brackets and when you use double brackets.
###Code
type(df['Production'])
type(df[['Production']])
###Output
_____no_output_____
###Markdown
You've seen before that you can access columns through df['col name']. You can access rows by using slicing operations.
###Code
df[0:3]
###Output
_____no_output_____
###Markdown
Here's an equivalent using iloc
###Code
df.iloc[0:3,:]
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
## check if there are any missing values in the dataframe, then sum up the total for each column
df.isnull().sum()
###Output
_____no_output_____
|
Memory_Transformer_XL.ipynb
|
###Markdown
###Code
# Memory Transformer-XL
!pip install memory-transformer-xl
!pip install transformers
!pip install mlm-pytorch
import torch
from memory_transformer_xl import MemoryTransformerXL
model = MemoryTransformerXL(
num_tokens = 20000,
dim = 1024,
heads = 8,
depth = 8,
seq_len = 512,
mem_len = 256, # short term memory (the memory from transformer-xl)
lmem_len = 256, # long term memory (memory attention network attending to short term memory and hidden activations)
mem_write_iters = 2, # number of iterations of attention for writing to memory
memory_layers = [6,7,8], # which layers to use memory, only the later layers are actually needed
num_mem_kv = 128, # number of memory key/values, from All-attention paper
).cuda()
x1 = torch.randint(0, 20000, (1, 512)).cuda()
logits1, mem1 = model(x1)
x2 = torch.randint(0, 20000, (1, 512)).cuda()
logits2, mem2 = model(x2, memories = mem1)
mem2
###Output
_____no_output_____
###Markdown
Tokenization. Data characteristics: the data can be used directly for pre-training, language modeling, or language-generation tasks. A small vocabulary dedicated to simplified-Chinese NLP tasks has also been released. Vocabulary overview: the statistics of Google's original Chinese vocabulary and the released small vocabulary are as follows.

| Token Type | Google | CLUE |
| --- | --- | --- |
| Simplified Chinese | 11378 | 5689 |
| Traditional Chinese | 3264 | ✗ |
| English | 3529 | 1320 |
| Japanese | 573 | ✗ |
| Korean | 84 | ✗ |
| Emoji | 56 | ✗ |
| Numbers | 1179 | 140 |
| Special Tokens | 106 | 106 |
| Other Tokens | 959 | 766 |
| Total | 21128 | 8021 |

https://github.com/CLUEbenchmark/CLUEPretrainedModels
###Code
from transformers import AutoTokenizer, AutoModel,BertTokenizer
tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_clue_tiny")
# model = AutoModel.from_pretrained("clue/roberta_chinese_clue_tiny")
tokenizer
tokenizer.vocab_size
dir(tokenizer)
###Output
_____no_output_____
###Markdown
Model test: train using the MLM objective. https://github.com/lucidrains/mlm-pytorch
###Code
# vocab_size
###Output
_____no_output_____
###Markdown
Redefine the MLM model
###Code
import math
from functools import reduce
import torch
from torch import nn
import torch.nn.functional as F
# helpers
def prob_mask_like(t, prob):
return torch.zeros_like(t).float().uniform_(0, 1) < prob
def mask_with_tokens(t, token_ids):
init_no_mask = torch.full_like(t, False, dtype=torch.bool)
mask = reduce(lambda acc, el: acc | (t == el), token_ids, init_no_mask)
return mask
def get_mask_subset_with_prob(mask, prob):
batch, seq_len, device = *mask.shape, mask.device
max_masked = math.ceil(prob * seq_len)
num_tokens = mask.sum(dim=-1, keepdim=True)
mask_excess = (mask.cumsum(dim=-1) > (num_tokens * prob).ceil())
mask_excess = mask_excess[:, :max_masked]
rand = torch.rand((batch, seq_len), device=device).masked_fill(~mask, -1e9)
_, sampled_indices = rand.topk(max_masked, dim=-1)
sampled_indices = (sampled_indices + 1).masked_fill_(mask_excess, 0)
new_mask = torch.zeros((batch, seq_len + 1), device=device)
new_mask.scatter_(-1, sampled_indices, 1)
return new_mask[:, 1:].bool()
# main class
class MLMXL(nn.Module):
def __init__(
self,
transformer,
mask_prob = 0.15,
replace_prob = 0.9,
num_tokens = None,
random_token_prob = 0.,
mask_token_id = 2,
pad_token_id = 0,
mask_ignore_token_ids = []):
super().__init__()
self.transformer = transformer
self.mem=None
# mlm related probabilities
self.mask_prob = mask_prob
self.replace_prob = replace_prob
self.num_tokens = num_tokens
self.random_token_prob = random_token_prob
# token ids
self.pad_token_id = pad_token_id
self.mask_token_id = mask_token_id
self.mask_ignore_token_ids = set([*mask_ignore_token_ids, pad_token_id])
def forward(self, input, **kwargs):
# do not mask [pad] tokens, or any other tokens in the tokens designated to be excluded ([cls], [sep])
# also do not include these special tokens in the tokens chosen at random
no_mask = mask_with_tokens(input, self.mask_ignore_token_ids)
mask = get_mask_subset_with_prob(~no_mask, self.mask_prob)
# get mask indices
mask_indices = torch.nonzero(mask, as_tuple=True)
# mask input with mask tokens with probability of `replace_prob` (keep tokens the same with probability 1 - replace_prob)
masked_input = input.clone().detach()
# if random token probability > 0 for mlm
if self.random_token_prob > 0:
assert self.num_tokens is not None, 'num_tokens keyword must be supplied when instantiating MLM if using random token replacement'
random_token_prob = prob_mask_like(input, self.random_token_prob)
random_tokens = torch.randint(0, self.num_tokens, input.shape, device=input.device)
random_no_mask = mask_with_tokens(random_tokens, self.mask_ignore_token_ids)
random_token_prob &= ~random_no_mask
random_indices = torch.nonzero(random_token_prob, as_tuple=True)
masked_input[random_indices] = random_tokens[random_indices]
# [mask] input
replace_prob = prob_mask_like(input, self.replace_prob)
masked_input = masked_input.masked_fill(mask * replace_prob, self.mask_token_id)
# mask out any tokens to padding tokens that were not originally going to be masked
labels = input.masked_fill(~mask, self.pad_token_id)
if self.mem!=None:
# get generator output and get mlm loss
logits,self.mem = self.transformer(masked_input, memories = self.mem, **kwargs)
else:
logits,self.mem = self.transformer(masked_input, **kwargs)
mlm_loss = F.cross_entropy(
logits.transpose(1, 2),
labels,
ignore_index = self.pad_token_id
)
return mlm_loss
import torch
from memory_transformer_xl import MemoryTransformerXL
import torch
from torch import nn
from torch.optim import Adam
# from mlm_pytorch import MLM
model = MemoryTransformerXL(
num_tokens = tokenizer.vocab_size,
dim = 128,
heads = 8,
depth = 8,
seq_len = 1024,
mem_len = 256, # short term memory (the memory from transformer-xl)
lmem_len = 256, # long term memory (memory attention network attending to short term memory and hidden activations)
mem_write_iters = 2, # number of iterations of attention for writing to memory
memory_layers = [6,7,8], # which layers to use memory, only the later layers are actually needed
num_mem_kv = 128, # number of memory key/values, from All-attention paper
).cuda()
x1 = torch.randint(0, tokenizer.vocab_size, (1, 1024)).cuda()
logits1, mem1 = model(x1)
x2 = torch.randint(0, tokenizer.vocab_size, (1, 1024)).cuda()
logits2, mem2 = model(x2, memories = mem1)
tokenizer
torch.save(model.state_dict(), "model1024.bin")
# plugin the language model into the MLM trainer
trainer = MLMXL(
model,
mask_token_id = tokenizer.mask_token_id, # the token id reserved for masking
pad_token_id = tokenizer.pad_token_id, # the token id for padding
mask_prob = 0.15, # masking probability for masked language modeling
    replace_prob = 0.90, # ~10% probability that a token will not be masked, but still included in the loss, as detailed in the paper
mask_ignore_token_ids = [tokenizer.cls_token_id,tokenizer.sep_token_id] # other tokens to exclude from masking, include the [cls] and [sep] here
).cuda()
# optimizer
opt = Adam(trainer.parameters(), lr=3e-4)
# one training step (do this for many steps in a for loop, getting new `data` each time)
data = torch.randint(0, tokenizer.vocab_size, (2, 1024)).cuda()
loss = trainer(data)
loss.backward()
opt.step()
opt.zero_grad()
# after much training, the model should have improved for downstream tasks
# torch.save(transformer, f'./pretrained-model.pt')
loss
# dir(model)
model
logits2
###Output
_____no_output_____
|
hands_on_ML_ch05_SVM.ipynb
|
###Markdown
###Code
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2,3)]
y = (iris["target"]==2).astype(np.float64)
y
svm_clf = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge"))
])
svm_clf.fit(X,y)
svm_clf.predict([[5.5, 1.7]])
from sklearn import datasets
###Output
_____no_output_____
|
5. Regex/Regex.ipynb
|
###Markdown
RegexRegex stands for Regular Expression. It is used to find whether a string matches a certain pattern.
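Before applying this to the tweets below, here is a minimal sketch (with a made-up string) of the two calls used in this notebook, `re.search` to test for a pattern and `re.sub` to remove it:

```python
import re

text = "Great dividend news #dividendbelasting @SomeUser"
print(bool(re.search(r"#\w+", text)))      # does the string contain a hashtag?
print(re.sub(r"@\w+", "", text).strip())   # remove the mention
```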
###Code
import re
import pandas as pd
dataset = pd.read_csv("Dividend.csv", sep = ";", encoding = "ISO-8859-1")
dataset.head()
# Select tweets
tweets = dataset["bericht tekst"]
# Select sample tweet for hashtag
ex_hashtag = tweets.values[137]
# Select sample tweet for mention
ex_mention = tweets.values[14]
# For hashtags
reg_exp_hashtags = "#{1,1}[1-9-A-z]+|#{1,1}[0-9]+|#{1,1}[A-z]+|\"{1}#{1,1}[A-z]+[0-9]+"
# For mentions
reg_exp_at = "@{1}[A-z-0-9]+"
# Clean hashtags
tweets = tweets.apply(lambda x: re.sub(reg_exp_hashtags, '', x).strip())
# Clean mentions
tweets = tweets.apply(lambda x: re.sub(reg_exp_at, '', x).strip())
print(ex_hashtag)
print("------------------------------")
print(tweets.values[137])
print("==============================")
print(ex_mention)
print("------------------------------")
print(tweets.values[14])
###Output
Wiebes: verder praten over klimaatakkoord, geen verplichte woningisolatie https://t.co/Ajc8LTTjRy via @NOS Maar..., polderen is alleen toepasselijk als de zeebodem en niet de zeespiegel stijgt. Kabinet aan de slag SVP: [#dividendbelasting] niet [afschaff
------------------------------
Wiebes: verder praten over klimaatakkoord, geen verplichte woningisolatie https://t.co/Ajc8LTTjRy via Maar..., polderen is alleen toepasselijk als de zeebodem en niet de zeespiegel stijgt. Kabinet aan de slag SVP: [ niet [afschaff
==============================
Ik stel mij het volgende telefoontje voor: - Hoiii Paul, met Mark. Zeg je geeft toch wel ff de protesten tegen [afschaffen] [#dividendbelasting] de schuld hè? Levert het jullie toch nog wat op. -Spreekt vanzelf Mark pb ligt al klaar. Die miljardjes ga
------------------------------
Ik stel mij het volgende telefoontje voor: - Hoiii Paul, met Mark. Zeg je geeft toch wel ff de protesten tegen [afschaffen] [ de schuld hè? Levert het jullie toch nog wat op. -Spreekt vanzelf Mark pb ligt al klaar. Die miljardjes ga
|
scrape/reccomended.ipynb
|
###Markdown
Let's import the necessary tools for pre-processing and viewing our data
###Code
import pandas as pd
import numpy as np
import turicreate as tc
#ml
from sklearn.preprocessing import LabelEncoder
act = pd.read_csv('//root//Documents//scrape//movie_data.csv')
le = LabelEncoder()
le.fit(act.item_id)
act['item_id'] = le.transform(act.item_id)
act.to_csv('//root//Documents//scrape//movie_data.csv', index = False)
###Output
_____no_output_____
###Markdown
Reading this in as an SFrame so that the turicreate library can read it and perform recommendations on it.
###Code
actor = tc.SFrame.read_csv('//root//Documents//scrape//movie_data.csv')
train_data, validation_data = tc.recommender.util.random_split_by_user(actor, 'user_id', 'item_id')
model = tc.recommender.create(train_data, 'user_id', 'item_id')
result = model.recommend()
result
###Output
_____no_output_____
|
notebooks/nl-be/Communicatie - SMS verzenden.ipynb
|
###Markdown
Requirement:--The online SMS service used here is operated by http://clickatell.com/ and requires an account. Clickatell offers a test account with a (limited) number of free messages. Various other services are possible, possibly even your own mobile operator, but then the code below has to be adapted to that provider's API. After registering for an account, a REST API has to be created, for which Clickatell will generate an Auth Token. This token has to be filled in below:
###Code
TOKEN = "****************************************************************"
DEST = "32475******"
from clickatell.rest import Rest
clickatell = Rest(TOKEN);
response = clickatell.sendMessage([DEST], "Raspi wants to be your BFF forever", extra={'from':'32477550561'})
# the extra['from'] parameter can be used, but then the phone number used has to be registered
# via the Clickatell administration interface
print(response)
for entry in response:
print('destination {}:'.format(entry['destination']))
for key in entry.keys():
print(" {}: {}".format(key, entry[key]))
###Output
_____no_output_____
|
source/ja/grover.ipynb
|
###Markdown
Performing a database search

Here we introduce **Grover's algorithm** {cite}`grover_search,nielsen_chuang_search` and consider the problem of searching an unstructured database with it. After explaining the algorithm, we implement it using Qiskit.

```{contents} Contents
---
local: true
---
```

$\newcommand{\ket}[1]{| #1 \rangle}$
$\newcommand{\bra}[1]{\langle #1 |}$
$\newcommand{\braket}[2]{\langle #1 | #2 \rangle}$

Introduction

For a quantum computer to clearly outperform the computational power of a classical computer, we need algorithms that make good use of the features of quantum computation. One such quantum algorithm is Grover's algorithm. It is suited to **searching an unstructured database**, and it has been proven to obtain the answer with less computation than classical methods. The algorithm is based on a technique called **amplitude amplification** and is also widely used as a subroutine in other quantum algorithms.

(database)=
Searching unstructured data

Suppose we have a list of $N$ elements and we want to find one element $w$ among them. To find $w$, a classical computation needs to examine the list $N$ times in the worst case and $N/2$ times on average. With Grover's algorithm, it is known that $w$ can be found with roughly $\sqrt{N}$ queries, i.e. a **quadratic speedup over classical computation** is possible.

(grover)=
Grover's algorithm

Here we consider $n$ qubits and assume that the list consists of all computational basis states the qubits can represent. That is, with $N=2^n$, the list contains the $N$ elements $\ket{00\cdots00}$, $\ket{00\cdots01}$, $\ket{00\cdots10}$, $\cdots$, $\ket{11\cdots11}$ (in decimal notation, $\ket{0}$, $\ket{1}$, $\cdots$, $\ket{N-1}$).

(grover_phaseoracle)=
Introducing the phase oracle

The key ingredient of Grover's algorithm is a phase oracle that changes the phase of particular states. First, consider a phase oracle given by $U\ket{x}=(-1)^{f(x)}\ket{x}$, i.e. an operation that, acting on a state $\ket{x}$, shifts its phase by $(-1)^{f(x)}$ according to some function $f(x)$. If we take as $f(x)$ a function of the form

$$
f(x) = \bigg\{
\begin{aligned}
&1 \quad \text{if} \; x = w \\
&0 \quad \text{else} \\
\end{aligned}
$$

we obtain the oracle $U_w$ that flips the phase of the desired solution $w$:

$$
U_w:\begin{aligned}
&\ket{w} \to -\ket{w}\\
&\ket{x} \to \ket{x} \quad \forall \; x \neq w
\end{aligned}
$$

One can see that $U_w$ can be written as $U_w=I-2\ket{w}\bra{w}$. Likewise, taking as $f_0(x)$ the function

$$
f_0(x) = \bigg\{
\begin{aligned}
&0 \quad \text{if} \; x = 0 \\
&1 \quad \text{else} \\
\end{aligned}
$$

we obtain a unitary $U_0$ that flips the phase of every state other than 0:

$$
U_0:\begin{aligned}
&\ket{0}^{\otimes n} \to \ket{0}^{\otimes n}\\
&\ket{x} \to -\ket{x} \quad \forall \; x \neq 0
\end{aligned}
$$

In this case $U_0=2\ket{0}\bra{0}^{\otimes n}-I$.

(grover_circuit)=
Structure of the quantum circuit

The quantum circuit implementing Grover's algorithm has the structure shown below. Starting the $n$-qubit circuit from the initial state $\ket{0}$, we apply Hadamard gates to create a superposition, and then repeatedly apply the operation denoted $G$.

```{image} figs/grover.png
:alt: grover
:class: bg-primary mb-1
:width: 600px
:align: center
```

$G$ is a unitary operation also called the **Grover iteration**, consisting of the following four steps.

```{image} figs/grover_iter.png
:alt: grover_iter
:class: bg-primary mb-1
:width: 550px
:align: center
```

$U_w$ and $U_0$ are, respectively, the oracle that flips the phase of the solution $w$ and the oracle that flips the phase of every state other than 0, as explained above. Together with the Hadamard gates at the beginning of the circuit, let us look in detail at the steps up to the first execution of the Grover iteration.

```{image} figs/grover_iter1.png
:alt: grover_iter1
:class: bg-primary mb-1
:width: 600px
:align: center
```

(grover_superposition)=
Creating the superposition state

First, apply Hadamard gates to the initial state $\ket{0}^{\otimes n}$ of the $n$-qubit circuit to create a uniform superposition:

$$
\ket{s} = H^{\otimes n}\ket{0}^{\otimes n} = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}\ket{x}
$$

We call this state $\ket{s}$.

(grover_geometry)=
Geometric representation

Let us represent the state $\ket{s}$ geometrically. Consider the two-dimensional plane spanned by the superposition $\ket{s}$ and the target state $\ket{w}$. The state orthogonal to $\ket{w}$ can be written as $\ket{w^{\perp}}:=\frac{1}{\sqrt{N-1}}\sum_{x \neq w}\ket{x}$, so on this plane it corresponds to the axis orthogonal to $\ket{w}$. For simplicity, on this plane we write $\ket{w^{\perp}}=\begin{bmatrix}1\\0\end{bmatrix}$ and $\ket{w}=\begin{bmatrix}0\\1\end{bmatrix}$. In summary, on this two-dimensional plane $\ket{s}$ can be written as a linear combination of the two vectors ($\ket{w^{\perp}}$, $\ket{w}$):

$$
\begin{aligned}
\ket{s}&=\sqrt{\frac{N-1}{N}}\ket{w^{\perp}}+\frac1{\sqrt{N}}\ket{w}\\
&=: \cos\frac\theta2\ket{w^{\perp}}+\sin\frac\theta2\ket{w}\\
&= \begin{bmatrix}\cos\frac\theta2\\\sin\frac\theta2\end{bmatrix}
\end{aligned}
$$

Since there is a single answer, the amplitude of $\ket{w}$ is $\frac1{\sqrt{N}}$ and the amplitude of $\ket{w^{\perp}}$ is $\sqrt{\frac{N-1}{N}}$. Defining $\theta$ by $\sin\frac\theta2=\frac1{\sqrt{N}}$, we have

$$
\theta=2\arcsin\frac{1}{\sqrt{N}}
$$

Plotting $\ket{s}$ on the ($\ket{w^{\perp}}$, $\ket{w}$) plane gives the figure below.

```{image} figs/grover_rot1.png
:alt: grover_rot1
:class: bg-primary mb-1
:width: 300px
:align: center
```

(grover_oracle)=
Applying the oracle

Next, apply the oracle $U_w$ to $\ket{s}$. On this plane the oracle can be expressed as $U_w=I-2\ket{w}\bra{w}=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$. In other words, $U_w$ corresponds to reflecting $\ket{s}$ about the $\ket{w^{\perp}}$ axis (figure below), and this operation flips the phase of $\ket{w}$.

```{image} figs/grover_rot2.png
:alt: grover_rot2
:class: bg-primary mb-1
:width: 300px
:align: center
```

(grover_diffuser)=
Applying the diffuser

The next step is the application of $H^{\otimes n}U_0H^{\otimes n}$, an operation called the diffuser. Since $U_0=2\ket{0}\bra{0}^{\otimes n}-I$, defining $U_s \equiv H^{\otimes n}U_0H^{\otimes n}$ gives

$$
\begin{aligned}
U_s &\equiv H^{\otimes n}U_0H^{\otimes n}\\
&=2H^{\otimes n}\ket{0}^{\otimes n}\bra{0}^{\otimes n}H^{\otimes n}-H^{\otimes n}H^{\otimes n}\\
&=2\ket{s}\bra{s}-I\\
&=\begin{bmatrix}\cos\theta&\sin\theta\\\sin\theta&-\cos\theta\end{bmatrix}
\end{aligned}
$$

That is, the diffuser $U_s$ corresponds to reflecting $U_w\ket{s}$ about $\ket{s}$ (figure below).

```{image} figs/grover_rot3.png
:alt: grover_rot3
:class: bg-primary mb-1
:width: 300px
:align: center
```

In summary, the Grover iteration $G=U_sU_w$ is

$$
\begin{aligned}
G&=U_sU_w\\
&= \begin{bmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix}
\end{aligned}
$$

so it represents a rotation of $\ket{s}$ toward $\ket{w}$ by an angle $\theta$ (figure below).

```{image} figs/grover_rot4.png
:alt: grover_rot4
:class: bg-primary mb-1
:width: 300px
:align: center
```

If one application of $G$ rotates by $\theta$, then repeating $G$ $r$ times rotates by $r\theta$. The state $\ket{s}$ is then given by

$$
G^r\ket{s}=\begin{bmatrix}\cos\frac{2r+1}{2}\theta\\\sin\frac{2r+1}{2}\theta\end{bmatrix}
$$

In other words, to reach the desired answer $\ket{w}$ we should rotate a number of times $r$ such that $\frac{2r+1}2\theta\approx\frac{\pi}2$. Assuming the per-step rotation angle $\theta$ is small enough that $\sin\frac\theta2=\frac{1}{\sqrt{N}}\approx\frac\theta2$, we obtain $r\approx\frac\pi4\sqrt{N}$. We have thus shown that we can reach the answer $\ket{w}$ in ${\cal O}(\sqrt{N})$ operations, i.e. a quadratic speedup over classical computation.

Let us look a little more at the role of the diffuser. Suppose a state $\ket{\psi}$ is written as a superposition $\ket{\psi}:=\sum_k a_k\ket{k}$ of states $\ket{k}$ with amplitudes $a_k$. Applying the diffuser to this state gives

$$
\begin{aligned}
\left( 2\ket{s}\bra{s} - I \right)\ket{\psi}&=\frac2N\sum_i\ket{i}\cdot\sum_{j,k}a_k\braket{j}{k}-\sum_k a_k\ket{k}\\
&= 2\frac{\sum_i a_i}{N}\sum_k\ket{k}-\sum_k a_k\ket{k}\\
&= \sum_k \left( 2\langle a \rangle-a_k \right)\ket{k}
\end{aligned}
$$

where $\langle a \rangle\equiv\frac{\sum_i a_i}{N}$ is the mean of the amplitudes. This expression is easiest to understand if we write the amplitude $a_k$ of a state $\ket{k}$ as a deviation from the mean, $a_k=\langle a \rangle-\Delta$. Since the amplitude after applying the diffuser is $2\langle a \rangle-a_k=\langle a \rangle+\Delta$, the diffuser can be regarded as an operation that reflects the amplitudes about the mean $\langle a \rangle$.

(grover_amp)=
Visualizing amplitude amplification

Let us actually see how the amplitudes are amplified by Grover's algorithm. First, the initial Hadamard transform creates a superposition in which all computational basis states have equal amplitude (panel 1 of the figure below). The horizontal axis shows the $N$ computational basis states and the vertical axis the magnitude of each amplitude; every basis state has amplitude $\frac{1}{\sqrt{N}}$ (the mean amplitude is shown as a red dashed line). Next, applying the oracle $U_w$ flips the phase of $\ket{w}$, making its amplitude $-\frac{1}{\sqrt{N}}$ (panel 2). The mean amplitude in this state is $\frac{1}{\sqrt{N}}(1-\frac2N)$, lower than in state (1). Finally, applying the diffuser reflects the amplitudes about the mean (panel 3). As a result, the amplitude of $\ket{w}$ is amplified while the amplitudes of all other basis states decrease. The figure also shows that a single Grover iteration amplifies the amplitude of $\ket{w}$ by roughly a factor of three. Since repeating this operation amplifies the amplitude of $\ket{w}$ further, we can expect the probability of obtaining the correct answer to keep increasing.

```{image} figs/grover_amp.png
:alt: grover_amp
:class: bg-primary mb-1
:width: 800px
:align: center
```

(grover_multidata)=
Searching for multiple entries

So far we have considered the case of a single search target; to conclude this section, let us consider searching for multiple entries, for example finding $M$ entries $\{w_i\}\;(i=0,1,\cdots,M-1)$ among $N=2^n$ entries. As before, the same argument can be carried out on the two-dimensional plane spanned by the desired state $\ket{w}$ and the orthogonal state $\ket{w^{\perp}}$,

$$
\begin{aligned}
&\ket{w}:=\frac{1}{\sqrt{M}}\sum_{i=0}^{M-1}\ket{w_i}\\
&\ket{w^{\perp}}:=\frac{1}{\sqrt{N-M}}\sum_{x\notin\{w_0,\cdots,w_{M-1}\}}\ket{x}
\end{aligned}
$$

On this plane, $\ket{s}$ can be expressed as

$$
\begin{aligned}
\ket{s}&=\sqrt{\frac{N-M}{N}}\ket{w^{\perp}}+\sqrt{\frac{M}{N}}\ket{w}\\
&=: \cos\frac\theta2\ket{w^{\perp}}+\sin\frac\theta2\ket{w}\\
\end{aligned}
$$

and if we define the amplitude $\sqrt{\frac{M}{N}}$ of $\ket{w}$ as $\sin\frac\theta2$, the angle becomes $\theta=2\arcsin\sqrt{\frac{M}{N}}$. Compared with the single-answer case, the angle is larger by a factor of $\sqrt{M}$, so one Grover iteration rotates by a larger amount. As a result, the answer can be reached with fewer rotations, $r\approx\frac\pi4\sqrt{\frac{N}{M}}$.

(imp)=
Implementing the algorithm (the case $N=2^6$)

From here on, let us actually implement Grover's algorithm and tackle the database search problem. The problem we consider is to implement Grover's algorithm that finds the single answer "45" in a list of $N=2^6$ elements ($=[0,1,2,\cdots,63]$). (Of course this number can be anything, so feel free to change it later and play around.) In other words, using a 6-qubit quantum circuit, we search for $\ket{45}=\ket{101101}$.

(imp_qiskit)=
Implementation in Qiskit

First we set up the required environment.
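As a quick numerical aside (not part of the original notebook), plugging $N=2^6$ and a single answer into the formulas above gives the rotation angle and iteration count we should expect in this example:

```python
import numpy as np

N = 2**6
theta = 2 * np.arcsin(1 / np.sqrt(N))   # rotation angle per Grover iteration (~0.25 rad)
print(np.pi / 4 * np.sqrt(N))           # r ~ pi/4 * sqrt(N) ~ 6.3 iterations
print(np.sin(3 * theta / 2) ** 2)       # success probability after one iteration (~0.13)
print(np.sin(13 * theta / 2) ** 2)      # success probability after six iterations (~0.997)
```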
###Code
# Tested with python 3.7.9, qiskit 0.23.5, numpy 1.20.1
import matplotlib.pyplot as plt
import numpy as np
# Import the Qiskit packages
from qiskit import IBMQ, Aer, QuantumCircuit, ClassicalRegister, QuantumRegister, execute
from qiskit.providers.ibmq import least_busy
from qiskit.quantum_info import Statevector
from qiskit.visualization import plot_histogram
from qiskit.tools.monitor import job_monitor
###Output
_____no_output_____
###Markdown
We prepare the 6-qubit circuit `grover_circuit`. The quantum circuit that performs one Grover iteration has the structure shown below; write the quantum circuits that implement the parts enclosed in the red boxes (the oracle and the $2\ket{0}\bra{0}-I$ part inside the diffuser).

```{image} figs/grover_6bits_45.png
:alt: grover_6bits_45
:class: bg-primary mb-1
:width: 600px
:align: center
```

After generating the uniform superposition $\ket{s}$, we implement the oracle.
###Code
def initialize_s(qc, qubits):
    """Apply an H gate to each of the given qubits."""
    for q in qubits:
        qc.h(q)
    return qc
n = 6
grover_circuit = QuantumCircuit(n)
grover_circuit = initialize_s(grover_circuit, list(range(n)))
# Create the oracle and add it to the circuit
oracle = QuantumCircuit(n)
##################
### EDIT BELOW ###
##################
#oracle.?
##################
### EDIT ABOVE ###
##################
oracle_gate = oracle.to_gate()
oracle_gate.name = "U_w"
grover_circuit.append(oracle_gate, list(range(n)))
###Output
_____no_output_____
###Markdown
Next, we implement the circuit for the diffuser.
###Code
def diffuser(n):
qc = QuantumCircuit(n)
qc.h(range(n))
##################
### EDIT BELOW ###
##################
#qc.?
##################
### EDIT ABOVE ###
##################
qc.h(range(n))
U_s = qc.to_gate()
U_s.name = "U_s"
return U_s
grover_circuit.append(diffuser(n), list(range(n)))
grover_circuit.measure_all()
grover_circuit.draw('mpl')
###Output
_____no_output_____
###Markdown
(imp_simulator)= Experiment on the simulator. Once the circuit is implemented, we run it on a simulator and plot the results. To make the results easier to read, we convert the measured bit strings to integers before plotting.
###Code
backend = Aer.get_backend('qasm_simulator')
results = execute(grover_circuit, backend=backend, shots=1024).result()
answer = results.get_counts()
# Plot with the horizontal axis as integers
def show_distribution(answer):
n = len(answer)
x = [int(key,2) for key in list(answer.keys())]
y = list(answer.values())
fig, ax = plt.subplots()
rect = ax.bar(x,y)
def autolabel(rects):
for rect in rects:
height = rect.get_height()
ax.annotate('{:.3f}'.format(height/sum(y)),
xy=(rect.get_x()+rect.get_width()/2, height),xytext=(0,0),
textcoords="offset points",ha='center', va='bottom')
autolabel(rect)
plt.ylabel('Probabilities')
plt.show()
show_distribution(answer)
###Output
_____no_output_____
###Markdown
If the circuit is implemented correctly, you should see the state $\ket{101101}=\ket{45}$ measured with high probability. However, as the discussion above shows, in a search over $N=2^6$ elements a single Grover iteration still produces incorrect answers with non-negligible probability. Confirming that repeating the Grover iteration several times yields the correct answer with higher probability is left as an exercise. (imp_qc)= Experiment on a quantum computer. You can run the circuit on a real quantum computer as follows. Check the results.
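As a sketch of the exercise suggested above (repeating the Grover iteration; this reuses `initialize_s`, `oracle_gate`, and `diffuser(n)` defined earlier and is an illustrative assumption, not part of the original notebook), the iteration can simply be appended $r$ times in a loop:

```python
r = 6  # roughly pi/4 * sqrt(64)
grover_r = QuantumCircuit(n)
grover_r = initialize_s(grover_r, list(range(n)))
for _ in range(r):
    grover_r.append(oracle_gate, list(range(n)))
    grover_r.append(diffuser(n), list(range(n)))
grover_r.measure_all()
```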
###Code
# To run on a real quantum computer
IBMQ.enable_account('__paste_your_token_here__')
provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 6 and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run the circuit on the least busy backend. Monitor the job in the queue.
job = execute(grover_circuit, backend=backend, shots=1024, optimization_level=3)
job_monitor(job, interval=2)
# Results
results = job.result()
answer = results.get_counts(grover_circuit)
show_distribution(answer)
# (Hidden cell) set to some dummy dict
answer = {'000000': 21, '000001': 15, '010000': 21, '010001': 10, '010010': 18, '010011': 14, '010100': 22, '010101': 13, '010110': 21, '010111': 11, '011000': 16, '011001': 9, '011010': 15, '011011': 12, '011100': 20, '011101': 13, '011110': 19, '011111': 11, '000010': 14, '100000': 26, '100001': 23, '100010': 20, '100011': 11, '100100': 16, '100101': 12, '100110': 13, '100111': 15, '101000': 19, '101001': 17, '101010': 13, '101011': 14, '101100': 17, '101101': 18, '101110': 23, '101111': 9, '000011': 21, '110000': 19, '110001': 17, '110010': 9, '110011': 16, '110100': 23, '110101': 21, '110110': 13, '110111': 8, '111000': 14, '111001': 20, '111010': 12, '111011': 9, '111100': 13, '111101': 17, '111110': 11, '111111': 8, '000100': 17, '000101': 18, '000110': 24, '000111': 19, '001000': 13, '001001': 15, '001010': 20, '001011': 16, '001100': 20, '001101': 13, '001110': 19, '001111': 18}
show_distribution(answer)
###Output
_____no_output_____
|
.Trash-1000/files/Lab4-DyanmicProgramming.ipynb
|
###Markdown
** Dynamic Programming Lab ** This is the Lab for the Dynamic Programming module of the edX "Reinforcement Learning Explained" course. The lab consists of 4 exercises: - implement Policy Evaluation using the 2 array approach - implement Policy Evaluation using the in-place approach - implement Policy Iteration - implement Value Iteration In each of the 4 code cells below (one for each exercise), make sure you don't change the function signature for the primary function you are implementing, or the call to the tester code that verifies its correctness. When you finish your implementation of each function, execute the code cell and verify that the code passes. If it does, save the printed "passcode" value for when you later submit your results on the course webpage for the lab. If it doesn't pass, correct your code and try again. ** Exercise 1: Policy Evaluation - 2 arrays ** Policy Evaluation calculates the value function for a policy, given the policy and the full definition of the associated Markov Decision Process. The full definition of an MDP is the set of states, the set of available actions for each state, the set of rewards, the discount factor, and the state/reward transition function. Implement the algorithm for Iterative Policy Evaluation using the 2 array approach in the below code cell. In the 2 array approach, one array holds the value estimates for each state computed on the previous iteration, and one array holds the value estimates for the states being computed in the current iteration.
###Code
import tester # required for testing and grading your code
def policy_eval_two_arrays(state_count, gamma, theta, get_policy_actions, get_transitions):
"""
This function uses the two-array approach to evaluate the specified policy for the specified MDP:
'state_count' is the total number of states in the MDP. States are represented as 0-relative numbers.
'gamma' is the MDP discount factor for rewards.
'theta' is the small number threshold to signal convergence of the value function (see Iterative Policy Evaluation algorithm).
    'get_policy_actions' is the stochastic policy function - it takes a state parameter and returns a list of tuples,
    where each tuple is of the form: (action, probability). It represents the policy being evaluated.
    'get_transitions' is the state/reward transition function. It accepts two parameters, state and action, and returns
    a list of tuples, where each tuple is of the form: (next_state, reward, probability).
"""
V = state_count*[0]
# insert code here to evaluate the policy using the 2 array approach
return V
tester.policy_eval_two_arrays_test(policy_eval_two_arrays)
#--- Solutions below - remove all below code cells on the student version of the labs ---
import tester
def eval_formula2(state, action, get_transitions, gamma, V):
trans_sum = 0
trans_tuples = get_transitions(state, action)
for tt in trans_tuples:
next_state = tt[0]
reward = tt[1]
trans_prob = tt[2]
trans_sum += trans_prob * (reward + gamma * V[next_state])
return trans_sum
def eval_formula(state, state_action_tuples, get_transitions, gamma, V):
action_sum = 0
for at in state_action_tuples:
action = at[0]
action_prob = at[1]
action_sum += action_prob * eval_formula2(state, action, get_transitions, gamma, V)
return action_sum
def policy_eval_two_arrays(state_count, gamma, theta, get_policy_actions, get_transitions):
V = state_count*[0]
V_last = state_count*[0]
k = 0
while True:
delta = 0
#print("k=", k, "V=", V)
for s in range(state_count):
v = V_last[s]
state_action_tuples = get_policy_actions(s)
V[s] = eval_formula(s, state_action_tuples, get_transitions, gamma, V_last)
delta = max(delta, abs(v-V[s]))
k += 1
if (delta < theta):
break
V_last = list(V)
print("FINAL k=", k)
#print("FINAL V=", V)
return V
tester.policy_eval_two_arrays_test(policy_eval_two_arrays)
tester.get_equiprobable_policy_actions(0)
tester.get_transitions(2,'l')
for action, prob in tester.get_equiprobable_policy_actions(0):
    print(action, prob)
import numpy as np
def policy_eval_two_arrays(state_count, gamma, theta, get_policy_actions, get_transitions):
#V = np.zeros(env.nS)
V = state_count*[0]
V_last = state_count*[0]
while True:
delta = 0
# For each state, perform a "full backup"
for s in range(state_count):
v = 0
# Look at the possible next actions
for a, action_prob in get_policy_actions(s):
# For each action, look at the possible next states...
for next_state, reward, prob in get_transitions(s,a):
# Calculate the expected value
v += action_prob * prob * (reward + gamma * V_last[next_state])
# How much our value function changed (across any states)
delta = max(delta, np.abs(v - V[s]))
V[s] = v
# Stop evaluating once our value function change is below a threshold
if delta < theta:
break
V_last = list(V)
return V
tester.policy_eval_two_arrays_test(policy_eval_two_arrays)
###Output
Testing: Policy Evaluation (two-arrays)
passed test: return value is list
passed test: length of list = 15
passed test: values of list elements
PASSED: Policy Evaluation (two-arrays) passcode = 9986-145
###Markdown
** Exercise 2: Policy Evaluation - in-place method ** Implement the algorithm for Iterative Policy Evaluation using the in-place approach in the below code cell. In the in-place approach, one array holds the values being estimated for each state and the same array is used for estimates of states needed by the algorithm.
###Code
import tester # required for testing and grading your code
def policy_eval_in_place(state_count, gamma, theta, get_policy_actions, get_transitions):
"""
This function uses the in-place approach to evaluate the specified policy for the specified MDP:
'state_count' is the total number of states in the MDP. States are represented as 0-relative numbers.
'gamma' is the MDP discount factor for rewards.
'theta' is the small number threshold to signal convergence of the value function (see Iterative Policy Evaluation algorithm).
    'get_policy_actions' is the stochastic policy function - it takes a state parameter and returns a list of tuples,
    where each tuple is of the form: (action, probability). It represents the policy being evaluated.
    'get_transitions' is the state/reward transition function. It accepts two parameters, state and action, and returns
    a list of tuples, where each tuple is of the form: (next_state, reward, probability).
"""
V = state_count*[0]
# insert code here to evaluate the policy using the in-place approach
return V
tester.policy_eval_in_place_test(policy_eval_in_place)
#--- Solutions below - remove all below code cells on the student version of the labs ---
import tester
def eval_formula2(state, action, get_transitions, gamma, V):
trans_sum = 0
trans_tuples = get_transitions(state, action)
for tt in trans_tuples:
next_state = tt[0]
reward = tt[1]
trans_prob = tt[2]
trans_sum += trans_prob * (reward + gamma * V[next_state])
return trans_sum
def eval_formula(state, state_action_tuples, get_transitions, gamma, V):
action_sum = 0
for at in state_action_tuples:
action = at[0]
action_prob = at[1]
action_sum += action_prob * eval_formula2(state, action, get_transitions, gamma, V)
return action_sum
def policy_eval_in_place(state_count, gamma, theta, get_policy_actions, get_transitions):
V = state_count*[0]
k = 0
while True:
delta = 0
#print("k=", k, "V=", V)
for s in range(state_count):
v = V[s]
state_action_tuples = get_policy_actions(s)
V[s] = eval_formula(s, state_action_tuples, get_transitions, gamma, V)
delta = max(delta, abs(v-V[s]))
k += 1
if (delta < theta):
break
print("FINAL k=", k)
#print("FINAL V=", V)
return V
tester.policy_eval_in_place_test(policy_eval_in_place)
def policy_eval_in_place(state_count, gamma, theta, get_policy_actions, get_transitions):
#V = np.zeros(env.nS)
V = state_count*[0]
while True:
delta = 0
# For each state, perform a "full backup"
for s in range(state_count):
v = 0
# Look at the possible next actions
for a, action_prob in get_policy_actions(s):
# For each action, look at the possible next states...
for next_state, reward, prob in get_transitions(s,a):
# Calculate the expected value
v += action_prob * prob * (reward + gamma * V[next_state])
# How much our value function changed (across any states)
delta = max(delta, np.abs(v - V[s]))
V[s] = v
# Stop evaluating once our value function change is below a threshold
if delta < theta:
break
return V
tester.policy_eval_in_place_test(policy_eval_in_place)
###Output
Testing: Policy Evaluation (in-place)
passed test: return value is list
passed test: length of list = 15
passed test: values of list elements
PASSED: Policy Evaluation (in-place) passcode = 9991-562
###Markdown
** Exercise 3: Policy Iteration ** Implement the algorithm for Policy Iteration in the code cell below. ** Can I just call "policy_eval_in_place()" for the Policy Evaluation step of this algorithm? ** Note that there is a subtle difference between the algorithm for Policy Evaluation, which assumes the policy is stochastic, and the Policy Evaluation step for the Policy Iteration algorithm, which assumes the policy is deterministic. This means that you cannot directly call your previous code, but you can reuse large pieces of it for the Policy Evaluation step.
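As a reminder, the Policy Evaluation step for a deterministic policy $\pi$ uses

$$V(s) \leftarrow \sum_{s', r} p(s', r \mid s, \pi(s))\,\bigl[r + \gamma V(s')\bigr]$$

and the Policy Improvement step then makes the policy greedy with respect to $V$:

$$\pi(s) \leftarrow \arg\max_{a} \sum_{s', r} p(s', r \mid s, a)\,\bigl[r + \gamma V(s')\bigr]$$

The algorithm stops once an improvement sweep leaves the policy unchanged for every state.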
###Code
import tester # required for testing and grading your code
def policy_iteration(state_count, gamma, theta, get_available_actions, get_transitions):
"""
This function computes the optimal value function and policy for the specified MDP, using the Policy Iteration algorithm.
'state_count' is the total number of states in the MDP. States are represented as 0-relative numbers.
'gamma' is the MDP discount factor for rewards.
'theta' is the small number threshold to signal convergence of the value function (see Iterative Policy Evaluation algorithm).
'get_available_actions' returns a list of the MDP available actions for the specified state parameter.
    'get_transitions' is the MDP state / reward transition function. It accepts two parameters, state and action, and returns
    a list of tuples, where each tuple is of the form: (next_state, reward, probability).
"""
V = state_count*[0] # init all state value estimates to 0
pi = state_count*[0]
# init with a policy with first avail action for each state
for s in range(state_count):
avail_actions = get_available_actions(s)
pi[s] = avail_actions[0][0]
# insert code here to iterate using policy evaluation and policy improvement (see Policy Iteration algorithm)
return (V, pi) # return both the final value function and the final policy
tester.policy_iteration_test(policy_iteration)
###Output
Testing: Policy Iteration
passed test: return value is tuple
passed test: length of tuple = 2
passed test: v is list of length=15
ERROR: v elements don't match expected values: # of mismatches=14
###Markdown
** Exercise 4: Value Iteration ** Implement the algorithm for Value Iteration in the code cell below.
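For reference, Value Iteration folds evaluation and improvement into a single update

$$V(s) \leftarrow \max_{a} \sum_{s', r} p(s', r \mid s, a)\,\bigl[r + \gamma V(s')\bigr]$$

and the optimal policy is then read off greedily from the converged value function.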
###Code
import tester # required for testing and grading your code
def value_iteration(state_count, gamma, theta, get_available_actions, get_transitions):
"""
This function computes the optimal value function and policy for the specified MDP, using the Value Iteration algorithm.
'state_count' is the total number of states in the MDP. States are represented as 0-relative numbers.
'gamma' is the MDP discount factor for rewards.
'theta' is the small number threshold to signal convergence of the value function (see Iterative Policy Evaluation algorithm).
'get_available_actions' returns a list of the MDP available actions for the specified state parameter.
    'get_transitions' is the MDP state / reward transition function. It accepts two parameters, state and action, and returns
    a list of tuples, where each tuple is of the form: (next_state, reward, probability).
"""
V = state_count*[0] # init all state value estimates to 0
pi = state_count*[0]
# (this section of code can be removed when actual implementation is added)
# init with a policy with first avail action for each state
for s in range(state_count):
avail_actions = get_available_actions(s)
pi[s] = avail_actions[0][0]
# insert code here to iterate using policy evaluation and policy improvement (see Policy Iteration algorithm)
return (V, pi) # return both the final value function and the final policy
tester.value_iteration_test(value_iteration)
#--- Solutions below - remove all below code cells on the student version of the labs ---
import tester
def eval_formula2(state, action, get_transitions, gamma, V):
trans_sum = 0
trans_tuples = get_transitions(state, action)
for tt in trans_tuples:
next_state = tt[0]
reward = tt[1]
trans_prob = tt[2]
trans_sum += trans_prob * (reward + gamma * V[next_state])
return trans_sum
def eval_formula(state, state_action_tuples, get_transitions, gamma, V):
action_sum = 0
for at in state_action_tuples:
action = at[0]
action_prob = at[1]
action_sum += action_prob * eval_formula2(state, action, get_transitions, gamma, V)
return action_sum
def policy_eval_in_place(state_count, gamma, theta, get_policy_actions, get_transitions):
V = state_count*[0]
k = 0
while True:
delta = 0
#print("k=", k, "V=", V)
for s in range(state_count):
v = V[s]
state_action_tuples = get_policy_actions(s)
V[s] = eval_formula(s, state_action_tuples, get_transitions, gamma, V)
delta = max(delta, abs(v-V[s]))
k += 1
if (delta < theta):
break
print("FINAL k=", k)
#print("FINAL V=", V)
return V
def policy_eval_two_arrays(state_count, gamma, theta, get_policy_actions, get_transitions):
V = state_count*[0]
V_last = state_count*[0]
k = 0
while True:
delta = 0
#print("k=", k, "V=", V)
for s in range(state_count):
v = V_last[s]
state_action_tuples = get_policy_actions(s)
V[s] = eval_formula(s, state_action_tuples, get_transitions, gamma, V_last)
delta = max(delta, abs(v-V[s]))
k += 1
if (delta < theta):
break
V_last = list(V)
print("FINAL k=", k)
#print("FINAL V=", V)
return V
tester.policy_eval_two_arrays_test(policy_eval_two_arrays)
tester.policy_eval_in_place_test(policy_eval_in_place)
import tester # required for testing and grading your code
def calc_max_action(state, avail_actions, get_transitions, gamma, V):
max_action = avail_actions[0]
max_value = -999999
for action in avail_actions:
value = eval_formula3(state, action, get_transitions, gamma, V)
if (value >= max_value):
max_value = value
max_action = action
#print("avail_actions=", avail_actions, ", max_action=", max_action, ", max_value=", max_value)
return max_action
def eval_formula3(state, action, get_transitions, gamma, V):
trans_sum = 0
trans_tuples = get_transitions(state, action)
for tt in trans_tuples:
next_state = tt[0]
reward = tt[1]
trans_prob = tt[2]
trans_sum += trans_prob * (reward + gamma * V[next_state])
return trans_sum
def deterministic_policy_eval(state_count, gamma, theta, pi, get_transitions):
V = state_count*[0]
k = 0
#print("deterministic_policy_eval: theta=", theta)
while True:
delta = 0
#print("k=", k, "V=", V)
for s in range(state_count):
v = V[s]
at = pi[s]
action = at[0]
V[s] = eval_formula3(s, action, get_transitions, gamma, V)
delta = max(delta, abs(v-V[s]))
k += 1
#print("k=", k, "delta=", delta)
if (delta < theta):
break
#print(" Policy Eval step completed: k=", k)
return V
def policy_iteration(state_count, gamma, theta, get_available_actions, get_transitions):
# step 1 - initialization
V = state_count * [0] # init all state value estimates to 0
pi = state_count * [0]
# init with a policy with first avail action for each state
for s in range(state_count):
avail_actions = get_available_actions(s)
pi[s] = avail_actions[0][0]
iteration = 1
while (True):
print("Iteration: " + str(iteration))
# step 2 - Policy Evaluation
V = deterministic_policy_eval(state_count, gamma, theta, pi, get_transitions)
# step 3 - Policy Improvement
policy_stable = True
for s in range(state_count):
old_action = pi[s]
avail_actions = get_available_actions(s)
pi[s] = calc_max_action(s, avail_actions, get_transitions, gamma, V)
if (old_action != pi[s]):
policy_stable = False
#print(" Policy Improvement step completed")
if policy_stable:
V = deterministic_policy_eval(state_count, gamma, theta, pi, get_transitions)
break
iteration += 1
print("final V=", V)
print("final pi=", pi)
return (V, pi) # return both the final value function and the final policy
tester.policy_iteration_test(policy_iteration)
import tester # required for testing and grading your code
def eval_formula3(state, action, get_transitions, gamma, V):
trans_sum = 0
trans_tuples = get_transitions(state, action)
for tt in trans_tuples:
next_state = tt[0]
reward = tt[1]
trans_prob = tt[2]
trans_sum += trans_prob * (reward + gamma * V[next_state])
return trans_sum
def calc_max_action_value(state, avail_actions, get_transitions, gamma, V):
max_action = avail_actions[0]
max_value = -999999
for action in avail_actions:
value = eval_formula3(state, action, get_transitions, gamma, V)
if (value >= max_value):
max_value = value
max_action = action
#print("avail_actions=", avail_actions, ", max_action=", max_action, ", max_value=", max_value)
return (max_action, max_value)
def value_iteration(state_count, gamma, theta, get_available_actions, get_transitions):
V = state_count * [0] # init all state value estimates to 0
iteration = 1
while (True):
print("Iteration: " + str(iteration))
delta = 0
for s in range(state_count):
v = V[s]
avail_actions = get_available_actions(s)
_, V[s] = calc_max_action_value(s, avail_actions, get_transitions, gamma, V)
delta = max(delta, abs(v - V[s]))
if (delta < theta):
break
iteration += 1
# finally, calculate the optimal policy from the optimal value function V
pi = state_count * [0]
for s in range(state_count):
avail_actions = get_available_actions(s)
pi[s], _ = calc_max_action_value(s, avail_actions, get_transitions, gamma, V)
print("final V=", V)
print("final pi=", pi)
return (V, pi) # return both the final value function and the final policy
tester.value_iteration_test(value_iteration)
###Output
Testing: Value Iteration
Iteration: 1
Iteration: 2
Iteration: 3
Iteration: 4
final V= [0.0, -1.0, -1.999, -2.997001, -1.0, -1.999, -2.997001, -1.999, -1.999, -2.997001, -1.999, -1.0, -2.997001, -1.999, -1.0]
final pi= ['d', 'l', 'l', 'd', 'u', 'u', 'd', 'd', 'u', 'd', 'd', 'd', 'r', 'r', 'r']
passed test: return value is tuple
passed test: length of tuple = 2
passed test: v is list of length=15
passed test: values of v elements
passed test: pi is list of length=15
passed test: values of pi elements
PASSED: Value Iteration passcode = 9990-000
|
jwst_validation_notebooks/source_catalog/jwst_source_catalog_nircam_test/jwst_nircam_imaging_source_catalog.ipynb
|
###Markdown
JWST Pipeline Validation Notebook: NIRCam, calwebb_image3, source_catalog **Instruments Affected**: e.g., FGS, MIRI, NIRCam, NIRISS, NIRSpec Table of Contents [Introduction](intro) [JWST CalWG Algorithm](algorithm) [Defining Terms](terms) [Test Description](description) [Data Description](data_descr) [Set up Temporary Directory](tempdir) [Imports](imports) [Loading the Data](data_load) [Run the Image3Pipeline](pipeline) [Perform Visual Inspection](visualization) [Manually Find Matches](manual) [About This Notebook](about) IntroductionThis is the NIRCam validation notebook for the Source Catalog step, which generates a catalog based on input exposures.* Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/source_catalog/index.html* Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/source_catalog[Top of Page](title_ID) JWST CalWG AlgorithmThis is the NIRCam imaging validation notebook for the Source Catalog step, which uses image combinations or stacks of overlapping images to generate "browse-quality" source catalogs. Having automated source catalogs will help accelerate the science output of JWST. The source catalogs should include both point and "slightly" extended sources at a minimum. The catalog should provide an indication if the source is a point or an extended source. For point sources, the source catalog should include measurements corrected to infinite aperture using aperture corrections provided by a reference file. See: * https://outerspace.stsci.edu/display/JWSTCC/Vanilla+Point+Source+Catalog[Top of Page](title_ID) Defining Terms* JWST: James Webb Space Telescope* NIRCam: Near-Infrared Camera[Top of Page](title_ID) Test DescriptionHere we generate the source catalog and visually inspect a plot of the image with the source catalog overlaid. We also look at some other diagnostic plots and then cross-check the output catalog against Mirage catalog inputs. [Top of Page](title_ID) Data DescriptionThe set of data used in this test were created with the Mirage simulator. The simulator created a NIRCam imaging mode exposures for the short wave NRCA1 detector. [Top of Page](title_ID) Set up Temporary DirectoryThe following cell sets up a temporary directory (using python's `tempfile.TemporaryDirectory()`), and changes the script's active directory into that directory (using python's `os.chdir()`). This is so that, when the notebook is run through, it will download files to (and create output files in) the temporary directory rather than in the notebook's directory. This makes cleanup significantly easier (since all output files are deleted when the notebook is shut down), and also means that different notebooks in the same directory won't interfere with each other when run by the automated webpage generation process.If you want the notebook to generate output in the notebook's directory, simply don't run this cell.If you have a file (or files) that are kept in the notebook's directory, and that the notebook needs to use while running, you can copy that file into the directory (the code to do so is present below, but commented out).
###Code
#****
#
# Set this variable to False to not use the temporary directory
#
#****
use_tempdir = True
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
import shutil
if use_tempdir:
data_dir = TemporaryDirectory()
# If you have files that are in the notebook's directory, but that the notebook will need to use while
# running, copy them into the temporary directory here.
#
# files = ['name_of_file']
# for file_name in files:
# shutil.copy(file_name, os.path.join(data_dir.name, file_name))
# Save original directory
orig_dir = os.getcwd()
# Move to new directory
os.chdir(data_dir.name)
# For info, print out where the script is running
print("Running in {}".format(os.getcwd()))
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID)
###Code
import os
if 'CRDS_CACHE_TYPE' in os.environ:
if os.environ['CRDS_CACHE_TYPE'] == 'local':
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif os.path.isdir(os.environ['CRDS_CACHE_TYPE']):
os.environ['CRDS_PATH'] = os.environ['CRDS_CACHE_TYPE']
print('CRDS cache location: {}'.format(os.environ['CRDS_PATH']))
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) ImportsList the package imports and why they are relevant to this notebook.* astropy for various tools and packages* inspect to get the docstring of our objects.* IPython.display for printing markdown output* jwst.datamodels for JWST Pipeline data models* jwst.module.PipelineStep is the pipeline step being tested* matplotlib.pyplot.plt to generate plot
###Code
# plotting, the inline must come before the matplotlib import
%matplotlib inline
# %matplotlib notebook
# These gymnastics are needed to make the sizes of the figures
# be the same in both the inline and notebook versions
%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['savefig.dpi'] = 80
mpl.rcParams['figure.dpi'] = 80
from matplotlib import pyplot as plt
import matplotlib.patches as patches
params = {'legend.fontsize': 6,
'figure.figsize': (8, 8),
'figure.dpi': 150,
'axes.labelsize': 6,
'axes.titlesize': 6,
'xtick.labelsize':6,
'ytick.labelsize':6}
plt.rcParams.update(params)
# Box download imports
from astropy.utils.data import download_file
from pathlib import Path
from shutil import move
from os.path import splitext
# python general
import os
import sys
import numpy as np
# astropy modules
import astropy
from astropy.io import fits
from astropy.table import QTable, Table, vstack, unique
from astropy.wcs.utils import skycoord_to_pixel
from astropy.coordinates import SkyCoord
from astropy.visualization import simple_norm
from astropy import units as u
import photutils
# jwst
from jwst.pipeline import calwebb_image3
from jwst import datamodels
def create_image(data_2d, xpixel=None, ypixel=None, title=None):
''' Function to generate a 2D image of the data,
with an option to highlight a specific pixel.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
if xpixel and ypixel:
plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_image_with_cat(data_2d, catalog, flux_limit=None, title=None):
''' Function to generate a 2D image of the data,
with sources overlaid.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
for row in catalog:
if flux_limit:
if np.isnan(row['aper_total_flux']):
pass
else:
if row['aper_total_flux'] > flux_limit:
                    plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=3, color='red')
else:
            plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=1, color='red')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_scatterplot(catalog_colx, catalog_coly, title=None):
''' Function to generate a generic scatterplot.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
ax.scatter(catalog_colx,catalog_coly)
plt.xlabel(catalog_colx.name)
plt.ylabel(catalog_coly.name)
if title:
plt.title(title)
def get_input_table(sourcelist):
'''Function to read in and access the simulator source input files.'''
all_source_table = Table()
# point source and galaxy source tables have different headers
# change column headers to match for filtering later
if "point" in sourcelist:
col_names = ["RA", "Dec", "RA_degrees", "Dec_degrees",
"PixelX", "PixelY", "Magnitude",
"counts_sec", "counts_frame"]
elif "galaxy" in sourcelist:
col_names = ["PixelX", "PixelY", "RA", "Dec",
"RA_degrees", "Dec_degrees", "V2", "V3", "radius",
"ellipticity", "pos_angle", "sersic_index",
"Magnitude", "countrate_e_s", "counts_per_frame_e"]
else:
print('Error! Source list column names need to be defined.')
sys.exit(0)
# read in the tables
input_source_table = Table.read(sourcelist,format='ascii')
orig_colnames = input_source_table.colnames
# only grab values for source catalog analysis
short_source_table = Table({'In_RA': input_source_table['RA_degrees'],
'In_Dec': input_source_table['Dec_degrees']},
names=['In_RA', 'In_Dec'])
# combine source lists into one master list
all_source_table = vstack([all_source_table, short_source_table])
# set up columns to track which sources were detected by Photutils
all_source_table['Out_RA'] = np.nan
all_source_table['Out_Dec'] = np.nan
all_source_table['Detected'] = 'N'
all_source_table['RA_Diff'] = np.nan
all_source_table['Dec_Diff'] = np.nan
# filter by RA, Dec (for now)
no_duplicates = unique(all_source_table,keys=['In_RA','In_Dec'])
return no_duplicates
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Loading the Data The simulated exposures used for this test are stored in Box. Grab them.
###Code
def get_box_files(file_list):
for box_url,file_name in file_list:
if 'https' not in box_url:
box_url = 'https://stsci.box.com/shared/static/' + box_url
downloaded_file = download_file(box_url)
if Path(file_name).suffix == '':
ext = splitext(box_url)[1]
file_name += ext
move(downloaded_file, file_name)
file_urls = ['https://stsci.box.com/shared/static/72fds4rfn4ppxv2tuj9qy2vbiao110pc.fits',
'https://stsci.box.com/shared/static/gxwtxoz5abnsx7wriqligyzxacjoz9h3.fits',
'https://stsci.box.com/shared/static/tninaa6a28tsa1z128u3ffzlzxr9p270.fits',
'https://stsci.box.com/shared/static/g4zlkv9qi0vc5brpw2lamekf4ekwcfdn.json',
'https://stsci.box.com/shared/static/kvusxulegx0xfb0uhdecu5dp8jkeluhm.list']
file_names = ['jw00042002001_01101_00004_nrca5_cal.fits',
'jw00042002001_01101_00005_nrca5_cal.fits',
'jw00042002001_01101_00006_nrca5_cal.fits',
'level3_lw_imaging_files_asn.json',
'jw00042002001_01101_00004_nrca5_uncal_galaxySources.list']
box_download_list = [(url,name) for url,name in zip(file_urls,file_names)]
get_box_files(box_download_list)
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Run the Image3PipelineRun calwebb_image3 to get the output source catalog and the final 2D image.
###Code
img3 = calwebb_image3.Image3Pipeline()
img3.assign_mtwcs.skip=True
img3.save_results=True
img3.resample.save_results=True
img3.source_catalog.snr_threshold = 5
img3.source_catalog.save_results=True
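# Note: file_names[3] is the level-3 association (ASN) json downloaded above,
# which lists the *_cal.fits exposures the pipeline should resample and combine.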
img3.run(file_names[3])
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Perform Visual InspectionPerform the visual inspection of the catalog and the final image.
###Code
catalog = Table.read("lw_imaging_cat.ecsv")
combined_image = datamodels.ImageModel("lw_imaging_i2d.fits")
create_image(combined_image.data, title="Final combined NIRCam image")
create_image_with_cat(combined_image.data, catalog, title="Final image w/ catalog overlaid")
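# Optional extra view (not required by the test): the helper above also accepts a
# flux limit, in which case only sources brighter than that limit are overlaid.
# The cutoff below is an arbitrary illustrative value and assumes the catalog's
# aper_total_flux column is in Jy.
create_image_with_cat(combined_image.data, catalog, flux_limit=1e-7,
                      title="Final image w/ sources above an arbitrary flux cut")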
catalog
create_scatterplot(catalog['label'], catalog['aper_total_flux'],title='Total Flux in '+str(catalog['aper_total_flux'].unit))
create_scatterplot(catalog['label'], catalog['aper_total_abmag'],title='Total AB mag')
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Manually Find Matches Since this is a simulated data set, we can compare the output catalog information from the pipeline with the input catalog information used to create the simulation. Grab the input catalog RA, Dec values and the output catalog RA, Dec values.
###Code
test_outputs = get_input_table(file_names[4])
in_ra = test_outputs['In_RA'].data
in_dec = test_outputs['In_Dec'].data
out_ra = catalog['sky_centroid'].ra.deg
out_dec = catalog['sky_centroid'].dec.deg
###Output
_____no_output_____
###Markdown
Set the tolerance and initialize our counters.
###Code
tol = 1.e-3
found_count=0
multiples_count=0
missed_count=0
###Output
_____no_output_____
###Markdown
Below we loop through the input RA, Dec values and compare them to the RA, Dec values in the output catalog. For cases where there are multiple matches for our tolerance level, count those cases.
###Code
for ra,dec,idx in zip(in_ra, in_dec,range(len(test_outputs))):
match = np.where((np.abs(ra-out_ra) < tol) & (np.abs(dec-out_dec) < tol))
if np.size(match) == 1:
found_count +=1
test_outputs['Detected'][idx] = 'Y'
test_outputs['Out_RA'][idx] = out_ra[match]
test_outputs['Out_Dec'][idx] = out_dec[match]
test_outputs['RA_Diff'][idx] = np.abs(ra-out_ra[match])
test_outputs['Dec_Diff'][idx] = np.abs(dec-out_dec[match])
if np.size(match) > 1:
multiples_count +=1
if np.size(match) < 1:
missed_count +=1
###Output
_____no_output_____
###Markdown
Let's see how it did.
###Code
total_percent_found = (found_count/len(test_outputs))*100
print('\n')
print('SNR threshold used for pipeline: ',img3.source_catalog.snr_threshold)
print('Total found:',found_count)
print('Total missed:',missed_count)
print('Number of multiples: ',multiples_count)
print('Total number of input sources:',len(test_outputs))
print('Total number in output catalog:',len(catalog))
print('Total percent found:',total_percent_found)
print('\n')
###Output
_____no_output_____
###Markdown
Use photutils to find catalog matches Photutils includes a package to match sources between catalogs by providing a max separation value. Set that value and compare the two catalogs.
###Code
catalog_in = SkyCoord(ra=in_ra*u.degree, dec=in_dec*u.degree)
catalog_out = SkyCoord(ra=out_ra*u.degree, dec=out_dec*u.degree)
max_sep = 1.0 * u.arcsec
# idx, d2d, d3d = cat_in.match_to_catalog_3d(cat_out)
idx, d2d, d3d = catalog_in.match_to_catalog_sky(catalog_out)
sep_constraint = d2d < max_sep
catalog_in_matches = catalog_in[sep_constraint]
catalog_out_matches = catalog_out[idx[sep_constraint]]
###Output
_____no_output_____
###Markdown
Now, ```catalog_in_matches``` and ```catalog_out_matches``` are the matched sources in ```catalog_in``` and ```catalog_out```, respectively, which are separated less than our ```max_sep``` value.
###Code
print('Number of matched sources using max separation of '+str(max_sep)+': ',len(catalog_out_matches))
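# Optional extra check (not required by the test): quantify how close the
# matched pairs are on the sky.
matched_sep_arcsec = d2d[sep_constraint].arcsec
if len(matched_sep_arcsec) > 0:
    print('Median separation of matched sources: {:.4f} arcsec'.format(np.median(matched_sep_arcsec)))
    print('Max separation of matched sources: {:.4f} arcsec'.format(np.max(matched_sep_arcsec)))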
###Output
_____no_output_____
###Markdown
JWST Pipeline Validation Notebook: NIRCam, calwebb_image3, source_catalog **Instruments Affected**: e.g., FGS, MIRI, NIRCam, NIRISS, NIRSpec Table of Contents [Introduction](intro) [JWST CalWG Algorithm](algorithm) [Defining Terms](terms) [Test Description](description) [Data Description](data_descr) [Set up Temporary Directory](tempdir) [Imports](imports) [Loading the Data](data_load) [Run the Image3Pipeline](pipeline) [Perform Visual Inspection](visualization) [Manually Find Matches](manual) [About This Notebook](about) IntroductionThis is the NIRCam validation notebook for the Source Catalog step, which generates a catalog based on input exposures.* Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/source_catalog/index.html* Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/source_catalog[Top of Page](title_ID) JWST CalWG AlgorithmThis is the NIRCam imaging validation notebook for the Source Catalog step, which uses image combinations or stacks of overlapping images to generate "browse-quality" source catalogs. Having automated source catalogs will help accelerate the science output of JWST. The source catalogs should include both point and "slightly" extended sources at a minimum. The catalog should provide an indication if the source is a point or an extended source. For point sources, the source catalog should include measurements corrected to infinite aperture using aperture corrections provided by a reference file. See: * https://outerspace.stsci.edu/display/JWSTCC/Vanilla+Point+Source+Catalog[Top of Page](title_ID) Defining Terms* JWST: James Webb Space Telescope* NIRCam: Near-Infrared Camera[Top of Page](title_ID) Test DescriptionHere we generate the source catalog and visually inspect a plot of the image with the source catalog overlaid. We also look at some other diagnostic plots and then cross-check the output catalog against Mirage catalog inputs. [Top of Page](title_ID) Data DescriptionThe set of data used in this test were created with the Mirage simulator. The simulator created a NIRCam imaging mode exposures for the short wave NRCA1 detector. [Top of Page](title_ID) Set up Temporary DirectoryThe following cell sets up a temporary directory (using python's `tempfile.TemporaryDirectory()`), and changes the script's active directory into that directory (using python's `os.chdir()`). This is so that, when the notebook is run through, it will download files to (and create output files in) the temporary directory rather than in the notebook's directory. This makes cleanup significantly easier (since all output files are deleted when the notebook is shut down), and also means that different notebooks in the same directory won't interfere with each other when run by the automated webpage generation process.If you want the notebook to generate output in the notebook's directory, simply don't run this cell.If you have a file (or files) that are kept in the notebook's directory, and that the notebook needs to use while running, you can copy that file into the directory (the code to do so is present below, but commented out).
###Code
#****
#
# Set this variable to False to not use the temporary directory
#
#****
use_tempdir = True
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
import shutil
if use_tempdir:
data_dir = TemporaryDirectory()
# If you have files that are in the notebook's directory, but that the notebook will need to use while
# running, copy them into the temporary directory here.
#
# files = ['name_of_file']
# for file_name in files:
# shutil.copy(file_name, os.path.join(data_dir.name, file_name))
# Save original directory
orig_dir = os.getcwd()
# Move to new directory
os.chdir(data_dir.name)
# For info, print out where the script is running
print("Running in {}".format(os.getcwd()))
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) If Desired, set up CRDS to use a local cacheBy default, the notebook template environment sets up its CRDS cache (the "CRDS_PATH" environment variable) in /grp/crds/cache. However, if the notebook is running on a local machine without a fast and reliable connection to central storage, it makes more sense to put the CRDS cache locally. Currently, the cell below offers several options, and will check the supplied boolean variables one at a time until one matches.* if `use_local_crds_cache` is False, then the CRDS cache will be kept in /grp/crds/cache* if `use_local_crds_cache` is True, the CRDS cache will be kept locally * if `crds_cache_tempdir` is True, the CRDS cache will be kept in the temporary directory * if `crds_cache_notebook_dir` is True, the CRDS cache will be kept in the same directory as the notebook. * if `crds_cache_home` is True, the CRDS cache will be kept in $HOME/crds/cache * if `crds_cache_custom_dir` is True, the CRDS cache will be kept in whatever is stored in the `crds_cache_dir_name` variable.If the above cell (creating a temporary directory) is not run, then setting `crds_cache_tempdir` to True will store the CRDS cache in the notebook's directory (the same as setting `crds_cache_notebook_dir` to True).
###Code
import os
# Choose CRDS cache location
use_local_crds_cache = False
crds_cache_tempdir = False
crds_cache_notebook_dir = False
crds_cache_home = False
crds_cache_custom_dir = False
crds_cache_dir_name = ""
if use_local_crds_cache:
if crds_cache_tempdir:
os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
elif crds_cache_notebook_dir:
try:
os.environ['CRDS_PATH'] = os.path.join(orig_dir, "crds")
except Exception as e:
os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
elif crds_cache_home:
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif crds_cache_custom_dir:
os.environ['CRDS_PATH'] = crds_cache_dir_name
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) ImportsList the package imports and why they are relevant to this notebook.* astropy for various tools and packages* inspect to get the docstring of our objects.* IPython.display for printing markdown output* jwst.datamodels for JWST Pipeline data models* jwst.module.PipelineStep is the pipeline step being tested* matplotlib.pyplot.plt to generate plot
###Code
# plotting, the inline must come before the matplotlib import
%matplotlib inline
# %matplotlib notebook
# These gymnastics are needed to make the sizes of the figures
# be the same in both the inline and notebook versions
%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['savefig.dpi'] = 80
mpl.rcParams['figure.dpi'] = 80
from matplotlib import pyplot as plt
import matplotlib.patches as patches
params = {'legend.fontsize': 6,
'figure.figsize': (8, 8),
'figure.dpi': 150,
'axes.labelsize': 6,
'axes.titlesize': 6,
'xtick.labelsize':6,
'ytick.labelsize':6}
plt.rcParams.update(params)
# Box download imports
from astropy.utils.data import download_file
from pathlib import Path
from shutil import move
from os.path import splitext
# python general
import os
import sys
import numpy as np
# astropy modules
import astropy
from astropy.io import fits
from astropy.table import QTable, Table, vstack, unique
from astropy.wcs.utils import skycoord_to_pixel
from astropy.coordinates import SkyCoord
from astropy.visualization import simple_norm
from astropy import units as u
import photutils
# jwst
from jwst.pipeline import calwebb_image3
from jwst import datamodels
def create_image(data_2d, xpixel=None, ypixel=None, title=None):
''' Function to generate a 2D image of the data,
with an option to highlight a specific pixel.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
if xpixel and ypixel:
plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_image_with_cat(data_2d, catalog, flux_limit=None, title=None):
''' Function to generate a 2D image of the data,
with sources overlaid.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
for row in catalog:
if flux_limit:
if np.isnan(row['aper_total_flux']):
pass
else:
if row['aper_total_flux'] > flux_limit:
                    plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=3, color='red')
else:
            plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=1, color='red')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_scatterplot(catalog_colx, catalog_coly, title=None):
''' Function to generate a generic scatterplot.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
ax.scatter(catalog_colx,catalog_coly)
plt.xlabel(catalog_colx.name)
plt.ylabel(catalog_coly.name)
if title:
plt.title(title)
def get_input_table(sourcelist):
'''Function to read in and access the simulator source input files.'''
all_source_table = Table()
# point source and galaxy source tables have different headers
# change column headers to match for filtering later
if "point" in sourcelist:
col_names = ["RA", "Dec", "RA_degrees", "Dec_degrees",
"PixelX", "PixelY", "Magnitude",
"counts_sec", "counts_frame"]
elif "galaxy" in sourcelist:
col_names = ["PixelX", "PixelY", "RA", "Dec",
"RA_degrees", "Dec_degrees", "V2", "V3", "radius",
"ellipticity", "pos_angle", "sersic_index",
"Magnitude", "countrate_e_s", "counts_per_frame_e"]
else:
print('Error! Source list column names need to be defined.')
sys.exit(0)
# read in the tables
input_source_table = Table.read(sourcelist,format='ascii')
orig_colnames = input_source_table.colnames
# only grab values for source catalog analysis
short_source_table = Table({'In_RA': input_source_table['RA_degrees'],
'In_Dec': input_source_table['Dec_degrees']},
names=['In_RA', 'In_Dec'])
# combine source lists into one master list
all_source_table = vstack([all_source_table, short_source_table])
# set up columns to track which sources were detected by Photutils
all_source_table['Out_RA'] = np.nan
all_source_table['Out_Dec'] = np.nan
all_source_table['Detected'] = 'N'
all_source_table['RA_Diff'] = np.nan
all_source_table['Dec_Diff'] = np.nan
# filter by RA, Dec (for now)
no_duplicates = unique(all_source_table,keys=['In_RA','In_Dec'])
return no_duplicates
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Loading the Data The simulated exposures used for this test are stored in Box. Grab them.
###Code
def get_box_files(file_list):
for box_url,file_name in file_list:
if 'https' not in box_url:
box_url = 'https://stsci.box.com/shared/static/' + box_url
downloaded_file = download_file(box_url)
if Path(file_name).suffix == '':
ext = splitext(box_url)[1]
file_name += ext
move(downloaded_file, file_name)
file_urls = ['https://stsci.box.com/shared/static/72fds4rfn4ppxv2tuj9qy2vbiao110pc.fits',
'https://stsci.box.com/shared/static/gxwtxoz5abnsx7wriqligyzxacjoz9h3.fits',
'https://stsci.box.com/shared/static/tninaa6a28tsa1z128u3ffzlzxr9p270.fits',
'https://stsci.box.com/shared/static/g4zlkv9qi0vc5brpw2lamekf4ekwcfdn.json',
'https://stsci.box.com/shared/static/kvusxulegx0xfb0uhdecu5dp8jkeluhm.list']
file_names = ['jw00042002001_01101_00004_nrca5_cal.fits',
'jw00042002001_01101_00005_nrca5_cal.fits',
'jw00042002001_01101_00006_nrca5_cal.fits',
'level3_lw_imaging_files_asn.json',
'jw00042002001_01101_00004_nrca5_uncal_galaxySources.list']
box_download_list = [(url,name) for url,name in zip(file_urls,file_names)]
get_box_files(box_download_list)
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Run the Image3PipelineRun calwebb_image3 to get the output source catalog and the final 2D image.
###Code
img3 = calwebb_image3.Image3Pipeline()
img3.assign_mtwcs.skip=True
img3.save_results=True
img3.resample.save_results=True
img3.source_catalog.snr_threshold = 5
img3.source_catalog.save_results=True
img3.run(file_names[3])
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Perform Visual InspectionPerform the visual inspection of the catalog and the final image.
###Code
catalog = Table.read("lw_imaging_cat.ecsv")
combined_image = datamodels.ImageModel("lw_imaging_i2d.fits")
create_image(combined_image.data, title="Final combined NIRCam image")
create_image_with_cat(combined_image.data, catalog, title="Final image w/ catalog overlaid")
catalog
create_scatterplot(catalog['id'], catalog['aper_total_flux'],title='Total Flux in '+str(catalog['aper_total_flux'].unit))
create_scatterplot(catalog['id'], catalog['aper_total_abmag'],title='Total AB mag')
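# Optional extra diagnostic (not required by the test): a histogram of the
# aperture-corrected AB magnitudes gives a quick sense of the catalog depth.
abmag = np.asarray(catalog['aper_total_abmag'], dtype=float)
plt.figure(figsize=(8, 4))
plt.hist(abmag[np.isfinite(abmag)], bins=30)
plt.xlabel('aper_total_abmag')
plt.ylabel('Number of sources')
plt.title('Distribution of total AB magnitudes')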
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Manually Find Matches Since this is a simulated data set, we can compare the output catalog information from the pipeline with the input catalog information used to create the simulation. Grab the input catalog RA, Dec values and the output catalog RA, Dec values.
###Code
test_outputs = get_input_table(file_names[4])
in_ra = test_outputs['In_RA'].data
in_dec = test_outputs['In_Dec'].data
out_ra = catalog['sky_centroid'].ra.deg
out_dec = catalog['sky_centroid'].dec.deg
###Output
_____no_output_____
###Markdown
Set the tolerance and initialize our counters.
###Code
tol = 1.e-3
found_count=0
multiples_count=0
missed_count=0
###Output
_____no_output_____
###Markdown
Below we loop through the input RA, Dec values and compare them to the RA, Dec values in the output catalog. For cases where there are multiple matches for our tolerance level, count those cases.
###Code
for ra,dec,idx in zip(in_ra, in_dec,range(len(test_outputs))):
match = np.where((np.abs(ra-out_ra) < tol) & (np.abs(dec-out_dec) < tol))
if np.size(match) == 1:
found_count +=1
test_outputs['Detected'][idx] = 'Y'
test_outputs['Out_RA'][idx] = out_ra[match]
test_outputs['Out_Dec'][idx] = out_dec[match]
test_outputs['RA_Diff'][idx] = np.abs(ra-out_ra[match])
test_outputs['Dec_Diff'][idx] = np.abs(dec-out_dec[match])
if np.size(match) > 1:
multiples_count +=1
if np.size(match) < 1:
missed_count +=1
###Output
_____no_output_____
###Markdown
Let's see how it did.
###Code
total_percent_found = (found_count/len(test_outputs))*100
print('\n')
print('SNR threshold used for pipeline: ',img3.source_catalog.snr_threshold)
print('Total found:',found_count)
print('Total missed:',missed_count)
print('Number of multiples: ',multiples_count)
print('Total number of input sources:',len(test_outputs))
print('Total number in output catalog:',len(catalog))
print('Total percent found:',total_percent_found)
print('\n')
###Output
_____no_output_____
###Markdown
Use photutils to find catalog matches Photutils includes a package to match sources between catalogs by providing a max separation value. Set that value and compare the two catalogs.
###Code
catalog_in = SkyCoord(ra=in_ra*u.degree, dec=in_dec*u.degree)
catalog_out = SkyCoord(ra=out_ra*u.degree, dec=out_dec*u.degree)
max_sep = 1.0 * u.arcsec
# idx, d2d, d3d = cat_in.match_to_catalog_3d(cat_out)
idx, d2d, d3d = catalog_in.match_to_catalog_sky(catalog_out)
sep_constraint = d2d < max_sep
catalog_in_matches = catalog_in[sep_constraint]
catalog_out_matches = catalog_out[idx[sep_constraint]]
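# Optional cross-check (not required by the test): the fraction of simulated
# input sources that found a photutils match within max_sep.
match_fraction = len(catalog_in_matches) / len(catalog_in)
print('Fraction of input sources matched within {}: {:.1f}%'.format(max_sep, 100 * match_fraction))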
###Output
_____no_output_____
###Markdown
Now, ```catalog_in_matches``` and ```catalog_out_matches``` are the matched sources in ```catalog_in``` and ```catalog_out```, respectively, which are separated less than our ```max_sep``` value.
###Code
print('Number of matched sources using max separation of '+str(max_sep)+': ',len(catalog_out_matches))
###Output
_____no_output_____
###Markdown
JWST Pipeline Validation Notebook: calwebb_image3, source_catalog **Instruments Affected**: e.g., FGS, MIRI, NIRCam, NIRISS, NIRSpec Table of Contents [Introduction](intro) [JWST CalWG Algorithm](algorithm) [Defining Terms](terms) [Test Description](description) [Data Description](data_descr) [Set up Temporary Directory](tempdir) [Imports](imports) [Loading the Data](data_load) [Run the Image3Pipeline](pipeline) [Perform Visual Inspection](visualization) [Manually Find Matches](manual) [About This Notebook](about) IntroductionThis is the NIRCam validation notebook for the Source Catalog step, which generates a catalog based on input exposures.* Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/source_catalog/index.html* Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/source_catalog[Top of Page](title_ID) JWST CalWG AlgorithmThis is the NIRCam imaging validation notebook for the Source Catalog step, which uses image combinations or stacks of overlapping images to generate "browse-quality" source catalogs. Having automated source catalogs will help accelerate the science output of JWST. The source catalogs should include both point and "slightly" extended sources at a minimum. The catalog should provide an indication if the source is a point or an extended source. For point sources, the source catalog should include measurements corrected to infinite aperture using aperture corrections provided by a reference file. See: * https://outerspace.stsci.edu/display/JWSTCC/Vanilla+Point+Source+Catalog[Top of Page](title_ID) Defining Terms* JWST: James Webb Space Telescope* NIRCam: Near-Infrared Camera[Top of Page](title_ID) Test DescriptionHere we generate the source catalog and visually inspect a plot of the image with the source catalog overlaid. We also look at some other diagnostic plots and then cross-check the output catalog against Mirage catalog inputs. [Top of Page](title_ID) Data DescriptionThe set of data used in this test were created with the Mirage simulator. The simulator created a NIRCam imaging mode exposures for the short wave NRCA1 detector. [Top of Page](title_ID) Set up Temporary DirectoryThe following cell sets up a temporary directory (using python's `tempfile.TemporaryDirectory()`), and changes the script's active directory into that directory (using python's `os.chdir()`). This is so that, when the notebook is run through, it will download files to (and create output files in) the temporary directory rather than in the notebook's directory. This makes cleanup significantly easier (since all output files are deleted when the notebook is shut down), and also means that different notebooks in the same directory won't interfere with each other when run by the automated webpage generation process.If you want the notebook to generate output in the notebook's directory, simply don't run this cell.If you have a file (or files) that are kept in the notebook's directory, and that the notebook needs to use while running, you can copy that file into the directory (the code to do so is present below, but commented out).
###Code
#****
#
# Set this variable to False to not use the temporary directory
#
#****
use_tempdir = True
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
import shutil
if use_tempdir:
data_dir = TemporaryDirectory()
# If you have files that are in the notebook's directory, but that the notebook will need to use while
# running, copy them into the temporary directory here.
#
# files = ['name_of_file']
# for file_name in files:
# shutil.copy(file_name, os.path.join(data_dir.name, file_name))
# Save original directory
orig_dir = os.getcwd()
# Move to new directory
os.chdir(data_dir.name)
# For info, print out where the script is running
print("Running in {}".format(os.getcwd()))
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) If Desired, set up CRDS to use a local cacheBy default, the notebook template environment sets up its CRDS cache (the "CRDS_PATH" environment variable) in /grp/crds/cache. However, if the notebook is running on a local machine without a fast and reliable connection to central storage, it makes more sense to put the CRDS cache locally. Currently, the cell below offers several options, and will check the supplied boolean variables one at a time until one matches.* if `use_local_crds_cache` is False, then the CRDS cache will be kept in /grp/crds/cache* if `use_local_crds_cache` is True, the CRDS cache will be kept locally * if `crds_cache_tempdir` is True, the CRDS cache will be kept in the temporary directory * if `crds_cache_notebook_dir` is True, the CRDS cache will be kept in the same directory as the notebook. * if `crds_cache_home` is True, the CRDS cache will be kept in $HOME/crds/cache * if `crds_cache_custom_dir` is True, the CRDS cache will be kept in whatever is stored in the `crds_cache_dir_name` variable.If the above cell (creating a temporary directory) is not run, then setting `crds_cache_tempdir` to True will store the CRDS cache in the notebook's directory (the same as setting `crds_cache_notebook_dir` to True).
###Code
import os
# Choose CRDS cache location
use_local_crds_cache = False
crds_cache_tempdir = False
crds_cache_notebook_dir = False
crds_cache_home = False
crds_cache_custom_dir = False
crds_cache_dir_name = ""
if use_local_crds_cache:
if crds_cache_tempdir:
os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
elif crds_cache_notebook_dir:
try:
os.environ['CRDS_PATH'] = os.path.join(orig_dir, "crds")
except Exception as e:
os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
elif crds_cache_home:
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif crds_cache_custom_dir:
os.environ['CRDS_PATH'] = crds_cache_dir_name
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) ImportsList the package imports and why they are relevant to this notebook.* astropy for various tools and packages* inspect to get the docstring of our objects.* IPython.display for printing markdown output* jwst.datamodels for JWST Pipeline data models* jwst.module.PipelineStep is the pipeline step being tested* matplotlib.pyplot.plt to generate plot
###Code
# plotting, the inline must come before the matplotlib import
%matplotlib inline
# %matplotlib notebook
# These gymnastics are needed to make the sizes of the figures
# be the same in both the inline and notebook versions
%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['savefig.dpi'] = 80
mpl.rcParams['figure.dpi'] = 80
from matplotlib import pyplot as plt
import matplotlib.patches as patches
params = {'legend.fontsize': 6,
'figure.figsize': (8, 8),
'figure.dpi': 150,
'axes.labelsize': 6,
'axes.titlesize': 6,
'xtick.labelsize':6,
'ytick.labelsize':6}
plt.rcParams.update(params)
# Box download imports
from astropy.utils.data import download_file
from pathlib import Path
from shutil import move
from os.path import splitext
# python general
import os
import sys
import numpy as np
# astropy modules
import astropy
from astropy.io import fits
from astropy.table import QTable, Table, vstack, unique
from astropy.wcs.utils import skycoord_to_pixel
from astropy.coordinates import SkyCoord
from astropy.visualization import simple_norm
from astropy import units as u
import photutils
# jwst
from jwst.pipeline import calwebb_image3
from jwst import datamodels
def create_image(data_2d, xpixel=None, ypixel=None, title=None):
''' Function to generate a 2D image of the data,
with an option to highlight a specific pixel.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
if xpixel and ypixel:
plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_image_with_cat(data_2d, catalog, flux_limit=None, title=None):
''' Function to generate a 2D image of the data,
with sources overlaid.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
for row in catalog:
if flux_limit:
if np.isnan(row['aper_total_flux']):
pass
else:
if row['aper_total_flux'] > flux_limit:
                    plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=3, color='red')
else:
            plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=1, color='red')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_scatterplot(catalog_colx, catalog_coly, title=None):
''' Function to generate a generic scatterplot.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
ax.scatter(catalog_colx,catalog_coly)
plt.xlabel(catalog_colx.name)
plt.ylabel(catalog_coly.name)
if title:
plt.title(title)
def get_input_table(sourcelist):
'''Function to read in and access the simulator source input files.'''
all_source_table = Table()
# point source and galaxy source tables have different headers
# change column headers to match for filtering later
if "point" in sourcelist:
col_names = ["RA", "Dec", "RA_degrees", "Dec_degrees",
"PixelX", "PixelY", "Magnitude",
"counts_sec", "counts_frame"]
elif "galaxy" in sourcelist:
col_names = ["PixelX", "PixelY", "RA", "Dec",
"RA_degrees", "Dec_degrees", "V2", "V3", "radius",
"ellipticity", "pos_angle", "sersic_index",
"Magnitude", "countrate_e_s", "counts_per_frame_e"]
else:
print('Error! Source list column names need to be defined.')
sys.exit(0)
# read in the tables
input_source_table = Table.read(sourcelist,format='ascii')
orig_colnames = input_source_table.colnames
# only grab values for source catalog analysis
short_source_table = Table({'In_RA': input_source_table['RA_degrees'],
'In_Dec': input_source_table['Dec_degrees']},
names=['In_RA', 'In_Dec'])
# combine source lists into one master list
all_source_table = vstack([all_source_table, short_source_table])
# set up columns to track which sources were detected by Photutils
all_source_table['Out_RA'] = np.nan
all_source_table['Out_Dec'] = np.nan
all_source_table['Detected'] = 'N'
all_source_table['RA_Diff'] = np.nan
all_source_table['Dec_Diff'] = np.nan
# filter by RA, Dec (for now)
no_duplicates = unique(all_source_table,keys=['In_RA','In_Dec'])
return no_duplicates
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Loading the Data The simulated exposures used for this test are stored in Box. Grab them.
###Code
def get_box_files(file_list):
for box_url,file_name in file_list:
if 'https' not in box_url:
box_url = 'https://stsci.box.com/shared/static/' + box_url
downloaded_file = download_file(box_url)
if Path(file_name).suffix == '':
ext = splitext(box_url)[1]
file_name += ext
move(downloaded_file, file_name)
file_urls = ['https://stsci.box.com/shared/static/72fds4rfn4ppxv2tuj9qy2vbiao110pc.fits',
'https://stsci.box.com/shared/static/gxwtxoz5abnsx7wriqligyzxacjoz9h3.fits',
'https://stsci.box.com/shared/static/tninaa6a28tsa1z128u3ffzlzxr9p270.fits',
'https://stsci.box.com/shared/static/g4zlkv9qi0vc5brpw2lamekf4ekwcfdn.json',
'https://stsci.box.com/shared/static/kvusxulegx0xfb0uhdecu5dp8jkeluhm.list']
file_names = ['jw00042002001_01101_00004_nrca5_cal.fits',
'jw00042002001_01101_00005_nrca5_cal.fits',
'jw00042002001_01101_00006_nrca5_cal.fits',
'level3_lw_imaging_files_asn.json',
'jw00042002001_01101_00004_nrca5_uncal_galaxySources.list']
box_download_list = [(url,name) for url,name in zip(file_urls,file_names)]
get_box_files(box_download_list)
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Run the Image3PipelineRun calwebb_image3 to get the output source catalog and the final 2D image.
###Code
img3 = calwebb_image3.Image3Pipeline()
img3.assign_mtwcs.skip=True
img3.save_results=True
img3.resample.save_results=True
img3.source_catalog.snr_threshold = 5
img3.source_catalog.save_results=True
img3.run(file_names[3])
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Perform Visual InspectionPerform the visual inspection of the catalog and the final image.
###Code
catalog = Table.read("lw_imaging_cat.ecsv")
combined_image = datamodels.ImageModel("lw_imaging_i2d.fits")
create_image(combined_image.data, title="Final combined NIRCam image")
create_image_with_cat(combined_image.data, catalog, title="Final image w/ catalog overlaid")
catalog
create_scatterplot(catalog['id'], catalog['aper_total_flux'],title='Total Flux in '+str(catalog['aper_total_flux'].unit))
create_scatterplot(catalog['id'], catalog['aper_total_abmag'],title='Total AB mag')
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Manually Find Matches Since this is a simulated data set, we can compare the output catalog information from the pipeline with the input catalog information used to create the simulation. Grab the input catalog RA, Dec values and the output catalog RA, Dec values.
###Code
test_outputs = get_input_table(file_names[4])
in_ra = test_outputs['In_RA'].data
in_dec = test_outputs['In_Dec'].data
out_ra = catalog['sky_centroid'].ra.deg
out_dec = catalog['sky_centroid'].dec.deg
###Output
_____no_output_____
###Markdown
Set the tolerance and initialize our counters.
###Code
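# The matching tolerance below is in degrees; 1e-3 deg corresponds to 3.6 arcsec.
# Note that this simple box match does not apply a cos(Dec) factor to the RA difference.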
tol = 1.e-3
found_count=0
multiples_count=0
missed_count=0
###Output
_____no_output_____
###Markdown
Below we loop through the input RA, Dec values and compare them to the RA, Dec values in the output catalog. For cases where there are multiple matches for our tolerance level, count those cases.
###Code
for ra,dec,idx in zip(in_ra, in_dec,range(len(test_outputs))):
match = np.where((np.abs(ra-out_ra) < tol) & (np.abs(dec-out_dec) < tol))
if np.size(match) == 1:
found_count +=1
test_outputs['Detected'][idx] = 'Y'
test_outputs['Out_RA'][idx] = out_ra[match]
test_outputs['Out_Dec'][idx] = out_dec[match]
test_outputs['RA_Diff'][idx] = np.abs(ra-out_ra[match])
test_outputs['Dec_Diff'][idx] = np.abs(dec-out_dec[match])
if np.size(match) > 1:
multiples_count +=1
if np.size(match) < 1:
missed_count +=1
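# Optional summary (not required by the test): typical coordinate offsets for
# the sources flagged as detected above.
detected = test_outputs[test_outputs['Detected'] == 'Y']
if len(detected) > 0:
    print('Median |RA difference| [deg]: ', np.median(detected['RA_Diff']))
    print('Median |Dec difference| [deg]:', np.median(detected['Dec_Diff']))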
###Output
_____no_output_____
###Markdown
Let's see how it did.
###Code
total_percent_found = (found_count/len(test_outputs))*100
print('\n')
print('SNR threshold used for pipeline: ',img3.source_catalog.snr_threshold)
print('Total found:',found_count)
print('Total missed:',missed_count)
print('Number of multiples: ',multiples_count)
print('Total number of input sources:',len(test_outputs))
print('Total number in output catalog:',len(catalog))
print('Total percent found:',total_percent_found)
print('\n')
###Output
_____no_output_____
###Markdown
Use photutils to find catalog matches Photutils includes a package to match sources between catalogs by providing a max separation value. Set that value and compare the two catalogs.
###Code
catalog_in = SkyCoord(ra=in_ra*u.degree, dec=in_dec*u.degree)
catalog_out = SkyCoord(ra=out_ra*u.degree, dec=out_dec*u.degree)
max_sep = 1.0 * u.arcsec
# idx, d2d, d3d = cat_in.match_to_catalog_3d(cat_out)
idx, d2d, d3d = catalog_in.match_to_catalog_sky(catalog_out)
sep_constraint = d2d < max_sep
catalog_in_matches = catalog_in[sep_constraint]
catalog_out_matches = catalog_out[idx[sep_constraint]]
###Output
_____no_output_____
###Markdown
Now, ```catalog_in_matches``` and ```catalog_out_matches``` are the matched sources in ```catalog_in``` and ```catalog_out```, respectively, which are separated less than our ```max_sep``` value.
###Code
print('Number of matched sources using max separation of '+str(max_sep)+': ',len(catalog_out_matches))
###Output
_____no_output_____
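###Markdown
As a final sanity check (an addition, not in the original notebook), we can summarize the on-sky separations of the matched pairs. The sketch below reuses the `d2d` array and `sep_constraint` mask from the matching cell above.
###Code
# Hedged extra check: separation statistics for the matched pairs.
matched_sep = d2d[sep_constraint].to(u.arcsec).value
print('Median separation (arcsec):', np.round(np.median(matched_sep), 3))
print('Max separation (arcsec):   ', np.round(matched_sep.max(), 3))
###Output
_____no_output_____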
###Markdown
JWST Pipeline Validation Notebook: NIRCam, calwebb_image3, source_catalog **Instruments Affected**: e.g., FGS, MIRI, NIRCam, NIRISS, NIRSpec Table of Contents [Introduction](intro) [JWST CalWG Algorithm](algorithm) [Defining Terms](terms) [Test Description](description) [Data Description](data_descr) [Set up Temporary Directory](tempdir) [Imports](imports) [Loading the Data](data_load) [Run the Image3Pipeline](pipeline) [Perform Visual Inspection](visualization) [Manually Find Matches](manual) [About This Notebook](about) IntroductionThis is the NIRCam validation notebook for the Source Catalog step, which generates a catalog based on input exposures.* Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/source_catalog/index.html* Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/source_catalog[Top of Page](title_ID) JWST CalWG AlgorithmThis is the NIRCam imaging validation notebook for the Source Catalog step, which uses image combinations or stacks of overlapping images to generate "browse-quality" source catalogs. Having automated source catalogs will help accelerate the science output of JWST. The source catalogs should include both point and "slightly" extended sources at a minimum. The catalog should provide an indication if the source is a point or an extended source. For point sources, the source catalog should include measurements corrected to infinite aperture using aperture corrections provided by a reference file. See: * https://outerspace.stsci.edu/display/JWSTCC/Vanilla+Point+Source+Catalog[Top of Page](title_ID) Defining Terms* JWST: James Webb Space Telescope* NIRCam: Near-Infrared Camera[Top of Page](title_ID) Test DescriptionHere we generate the source catalog and visually inspect a plot of the image with the source catalog overlaid. We also look at some other diagnostic plots and then cross-check the output catalog against Mirage catalog inputs. [Top of Page](title_ID) Data DescriptionThe set of data used in this test were created with the Mirage simulator. The simulator created a NIRCam imaging mode exposures for the short wave NRCA1 detector. [Top of Page](title_ID) Set up Temporary DirectoryThe following cell sets up a temporary directory (using python's `tempfile.TemporaryDirectory()`), and changes the script's active directory into that directory (using python's `os.chdir()`). This is so that, when the notebook is run through, it will download files to (and create output files in) the temporary directory rather than in the notebook's directory. This makes cleanup significantly easier (since all output files are deleted when the notebook is shut down), and also means that different notebooks in the same directory won't interfere with each other when run by the automated webpage generation process.If you want the notebook to generate output in the notebook's directory, simply don't run this cell.If you have a file (or files) that are kept in the notebook's directory, and that the notebook needs to use while running, you can copy that file into the directory (the code to do so is present below, but commented out).
###Code
#****
#
# Set this variable to False to not use the temporary directory
#
#****
use_tempdir = True
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
import shutil
if use_tempdir:
data_dir = TemporaryDirectory()
# If you have files that are in the notebook's directory, but that the notebook will need to use while
# running, copy them into the temporary directory here.
#
# files = ['name_of_file']
# for file_name in files:
# shutil.copy(file_name, os.path.join(data_dir.name, file_name))
# Save original directory
orig_dir = os.getcwd()
# Move to new directory
os.chdir(data_dir.name)
# For info, print out where the script is running
print("Running in {}".format(os.getcwd()))
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) If Desired, set up CRDS to use a local cacheBy default, the notebook template environment sets up its CRDS cache (the "CRDS_PATH" environment variable) in /grp/crds/cache. However, if the notebook is running on a local machine without a fast and reliable connection to central storage, it makes more sense to put the CRDS cache locally. Currently, the cell below offers several options, and will check the supplied boolean variables one at a time until one matches.* if `use_local_crds_cache` is False, then the CRDS cache will be kept in /grp/crds/cache* if `use_local_crds_cache` is True, the CRDS cache will be kept locally * if `crds_cache_tempdir` is True, the CRDS cache will be kept in the temporary directory * if `crds_cache_notebook_dir` is True, the CRDS cache will be kept in the same directory as the notebook. * if `crds_cache_home` is True, the CRDS cache will be kept in $HOME/crds/cache * if `crds_cache_custom_dir` is True, the CRDS cache will be kept in whatever is stored in the `crds_cache_dir_name` variable.If the above cell (creating a temporary directory) is not run, then setting `crds_cache_tempdir` to True will store the CRDS cache in the notebook's directory (the same as setting `crds_cache_notebook_dir` to True).
###Code
import os
# Choose CRDS cache location
use_local_crds_cache = False
crds_cache_tempdir = False
crds_cache_notebook_dir = False
crds_cache_home = False
crds_cache_custom_dir = False
crds_cache_dir_name = ""
if use_local_crds_cache:
if crds_cache_tempdir:
os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
elif crds_cache_notebook_dir:
try:
os.environ['CRDS_PATH'] = os.path.join(orig_dir, "crds")
except Exception as e:
os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
elif crds_cache_home:
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif crds_cache_custom_dir:
os.environ['CRDS_PATH'] = crds_cache_dir_name
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) ImportsList the package imports and why they are relevant to this notebook.* astropy for various tools and packages* inspect to get the docstring of our objects.* IPython.display for printing markdown output* jwst.datamodels for JWST Pipeline data models* jwst.module.PipelineStep is the pipeline step being tested* matplotlib.pyplot.plt to generate plot
###Code
# plotting, the inline must come before the matplotlib import
%matplotlib inline
# %matplotlib notebook
# These gymnastics are needed to make the sizes of the figures
# be the same in both the inline and notebook versions
%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['savefig.dpi'] = 80
mpl.rcParams['figure.dpi'] = 80
from matplotlib import pyplot as plt
import matplotlib.patches as patches
params = {'legend.fontsize': 6,
'figure.figsize': (8, 8),
'figure.dpi': 150,
'axes.labelsize': 6,
'axes.titlesize': 6,
'xtick.labelsize':6,
'ytick.labelsize':6}
plt.rcParams.update(params)
# Box download imports
from astropy.utils.data import download_file
from pathlib import Path
from shutil import move
from os.path import splitext
# python general
import os
import sys  # used by get_input_table below
import numpy as np
# astropy modules
import astropy
from astropy.io import fits
from astropy.table import QTable, Table, vstack, unique
from astropy.wcs.utils import skycoord_to_pixel
from astropy.coordinates import SkyCoord
from astropy.visualization import simple_norm
from astropy import units as u
import photutils
# jwst
from jwst.pipeline import calwebb_image3
from jwst import datamodels
def create_image(data_2d, xpixel=None, ypixel=None, title=None):
''' Function to generate a 2D image of the data,
with an option to highlight a specific pixel.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
if xpixel and ypixel:
plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_image_with_cat(data_2d, catalog, flux_limit=None, title=None):
''' Function to generate a 2D image of the data,
with sources overlaid.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
norm = simple_norm(data_2d, 'sqrt', percent=99.)
plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')
for row in catalog:
if flux_limit:
if np.isnan(row['aper_total_flux']):
pass
else:
if row['aper_total_flux'] > flux_limit:
plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize='3', color='red')
else:
plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize='1', color='red')
plt.xlabel('Pixel column')
plt.ylabel('Pixel row')
if title:
plt.title(title)
plt.subplots_adjust(left=0.15)
plt.colorbar(label='MJy/sr')
def create_scatterplot(catalog_colx, catalog_coly, title=None):
''' Function to generate a generic scatterplot.
'''
fig = plt.figure(figsize=(8, 8))
ax = plt.subplot()
ax.scatter(catalog_colx,catalog_coly)
plt.xlabel(catalog_colx.name)
plt.ylabel(catalog_coly.name)
if title:
plt.title(title)
def get_input_table(sourcelist):
'''Function to read in and access the simulator source input files.'''
all_source_table = Table()
# point source and galaxy source tables have different headers
# change column headers to match for filtering later
if "point" in sourcelist:
col_names = ["RA", "Dec", "RA_degrees", "Dec_degrees",
"PixelX", "PixelY", "Magnitude",
"counts_sec", "counts_frame"]
elif "galaxy" in sourcelist:
col_names = ["PixelX", "PixelY", "RA", "Dec",
"RA_degrees", "Dec_degrees", "V2", "V3", "radius",
"ellipticity", "pos_angle", "sersic_index",
"Magnitude", "countrate_e_s", "counts_per_frame_e"]
else:
print('Error! Source list column names need to be defined.')
sys.exit(0)
# read in the tables
input_source_table = Table.read(sourcelist,format='ascii')
orig_colnames = input_source_table.colnames
# only grab values for source catalog analysis
short_source_table = Table({'In_RA': input_source_table['RA_degrees'],
'In_Dec': input_source_table['Dec_degrees']},
names=['In_RA', 'In_Dec'])
# combine source lists into one master list
all_source_table = vstack([all_source_table, short_source_table])
# set up columns to track which sources were detected by Photutils
all_source_table['Out_RA'] = np.nan
all_source_table['Out_Dec'] = np.nan
all_source_table['Detected'] = 'N'
all_source_table['RA_Diff'] = np.nan
all_source_table['Dec_Diff'] = np.nan
# filter by RA, Dec (for now)
no_duplicates = unique(all_source_table,keys=['In_RA','In_Dec'])
return no_duplicates
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Loading the Data The simulated exposures used for this test are stored in Box. Grab them.
###Code
def get_box_files(file_list):
for box_url,file_name in file_list:
if 'https' not in box_url:
box_url = 'https://stsci.box.com/shared/static/' + box_url
downloaded_file = download_file(box_url)
if Path(file_name).suffix == '':
ext = splitext(box_url)[1]
file_name += ext
move(downloaded_file, file_name)
file_urls = ['https://stsci.box.com/shared/static/72fds4rfn4ppxv2tuj9qy2vbiao110pc.fits',
'https://stsci.box.com/shared/static/gxwtxoz5abnsx7wriqligyzxacjoz9h3.fits',
'https://stsci.box.com/shared/static/tninaa6a28tsa1z128u3ffzlzxr9p270.fits',
'https://stsci.box.com/shared/static/g4zlkv9qi0vc5brpw2lamekf4ekwcfdn.json',
'https://stsci.box.com/shared/static/kvusxulegx0xfb0uhdecu5dp8jkeluhm.list']
file_names = ['jw00042002001_01101_00004_nrca5_cal.fits',
'jw00042002001_01101_00005_nrca5_cal.fits',
'jw00042002001_01101_00006_nrca5_cal.fits',
'level3_lw_imaging_files_asn.json',
'jw00042002001_01101_00004_nrca5_uncal_galaxySources.list']
box_download_list = [(url,name) for url,name in zip(file_urls,file_names)]
get_box_files(box_download_list)
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Run the Image3PipelineRun calwebb_image3 to get the output source catalog and the final 2D image.
###Code
img3 = calwebb_image3.Image3Pipeline()
img3.assign_mtwcs.skip=True
img3.save_results=True
img3.resample.save_results=True
img3.source_catalog.snr_threshold = 5
img3.source_catalog.save_results=True
img3.run(file_names[3])
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Perform Visual InspectionPerform the visual inspection of the catalog and the final image.
###Code
catalog = Table.read("lw_imaging_cat.ecsv")
combined_image = datamodels.ImageModel("lw_imaging_i2d.fits")
create_image(combined_image.data, title="Final combined NIRCam image")
create_image_with_cat(combined_image.data, catalog, title="Final image w/ catalog overlaid")
catalog
create_scatterplot(catalog['label'], catalog['aper_total_flux'],title='Total Flux in '+str(catalog['aper_total_flux'].unit))
create_scatterplot(catalog['label'], catalog['aper_total_abmag'],title='Total AB mag')
###Output
_____no_output_____
###Markdown
[Top of Page](title_ID) Manually Find Matches Since this is a simulated data set, we can compare the output catalog information from the pipeline with the input catalog information used to create the simulation. Grab the input catalog RA, Dec values and the output catalog RA, Dec values.
###Code
test_outputs = get_input_table(file_names[4])
in_ra = test_outputs['In_RA'].data
in_dec = test_outputs['In_Dec'].data
out_ra = catalog['sky_centroid'].ra.deg
out_dec = catalog['sky_centroid'].dec.deg
###Output
_____no_output_____
###Markdown
Set the tolerance and initialize our counters.
###Code
tol = 1.e-3
found_count=0
multiples_count=0
missed_count=0
###Output
_____no_output_____
###Markdown
Below we loop through the input RA, Dec values and compare them to the RA, Dec values in the output catalog. For cases where there are multiple matches for our tolerance level, count those cases.
###Code
for ra,dec,idx in zip(in_ra, in_dec,range(len(test_outputs))):
match = np.where((np.abs(ra-out_ra) < tol) & (np.abs(dec-out_dec) < tol))
if np.size(match) == 1:
found_count +=1
test_outputs['Detected'][idx] = 'Y'
test_outputs['Out_RA'][idx] = out_ra[match]
test_outputs['Out_Dec'][idx] = out_dec[match]
test_outputs['RA_Diff'][idx] = np.abs(ra-out_ra[match])
test_outputs['Dec_Diff'][idx] = np.abs(dec-out_dec[match])
if np.size(match) > 1:
multiples_count +=1
if np.size(match) < 1:
missed_count +=1
###Output
_____no_output_____
###Markdown
Let's see how it did.
###Code
total_percent_found = (found_count/len(test_outputs))*100
print('\n')
print('SNR threshold used for pipeline: ',img3.source_catalog.snr_threshold)
print('Total found:',found_count)
print('Total missed:',missed_count)
print('Number of multiples: ',multiples_count)
print('Total number of input sources:',len(test_outputs))
print('Total number in output catalog:',len(catalog))
print('Total percent found:',total_percent_found)
print('\n')
###Output
_____no_output_____
###Markdown
Use astropy to find catalog matches Astropy's `SkyCoord` class provides a method (`match_to_catalog_sky`) that matches sources between catalogs; combined with a max separation value, it gives the cross-matched pairs. Set that value and compare the two catalogs.
###Code
catalog_in = SkyCoord(ra=in_ra*u.degree, dec=in_dec*u.degree)
catalog_out = SkyCoord(ra=out_ra*u.degree, dec=out_dec*u.degree)
max_sep = 1.0 * u.arcsec
# idx, d2d, d3d = cat_in.match_to_catalog_3d(cat_out)
idx, d2d, d3d = catalog_in.match_to_catalog_sky(catalog_out)
sep_constraint = d2d < max_sep
catalog_in_matches = catalog_in[sep_constraint]
catalog_out_matches = catalog_out[idx[sep_constraint]]
###Output
_____no_output_____
###Markdown
Now, ```catalog_in_matches``` and ```catalog_out_matches``` are the matched sources in ```catalog_in``` and ```catalog_out```, respectively, which are separated less than our ```max_sep``` value.
###Code
print('Number of matched sources using max separation of '+str(max_sep)+': ',len(catalog_out_matches))
###Output
_____no_output_____
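###Markdown
One more optional comparison (an addition, not in the original notebook): express the sky-matched count as a completeness fraction so it can be compared directly with the manual tolerance loop above. This assumes `catalog_in_matches`, `test_outputs`, and `max_sep` are still defined.
###Code
# Hedged extra check: completeness implied by the SkyCoord matching,
# for comparison with the manual tolerance loop above.
sky_match_percent = 100.0 * len(catalog_in_matches) / len(test_outputs)
print('Percent of input sources with a sky match within ' + str(max_sep) + ':',
      round(sky_match_percent, 2))
###Output
_____no_output_____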
|
Car Crashes - Feature selection and enginering.ipynb
|
###Markdown
Feature selection and feature engineeringThis isn't based on a news article, exactly, it's from a paper. You can see the paper in `/data/`.
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
pd.set_option("display.max_columns", 200)
pd.set_option("display.max_colwidth", 200)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Reading in our data Every single person in a crashWe'll start by reading in the list of people who were involved in an accident.**Call this dataframe `people`.**
###Code
people = pd.read_csv('data/combined-person-data.csv')
###Output
/Users/tbi/.pyenv/versions/3.6.5/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3049: DtypeWarning: Columns (5,15,26) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
How often did each severity of injury show up? (e.g. not injured, non-incapacitating injury, etc)
###Code
people.INJ_SEVER_CODE.value_counts()
###Output
_____no_output_____
###Markdown
We're only interested in fatalities, so let's create a new `had_fatality` column for when people received a fatal injury.**Confirm there were 1681 people with fatal injuries.**
###Code
people['had_fatality'] = np.where(people.INJ_SEVER_CODE == 5, 1, 0)
people.had_fatality.sum()
###Output
_____no_output_____
###Markdown
Working on Features Starting our analysisWe're going to run a regression on the impact of being **male vs female on crash fatalities**. Prepare a dataframe called `train_df` with the appropriate information in it.* **Tip:** What column(s) are your input, and what is your output? Aka independent and dependent variables* **Tip:** You'll need to convert your input column into something numeric, I suggest using `.replace`* **Tip:** We aren't interested in the "Unknown" sex - either filtering or `np.nan` + `.dropna()` might be useful ways to get rid of those columns
###Code
train_df = people[['had_fatality','SEX_CODE']].replace('F', 0).replace('M', 1).replace('U', np.nan).dropna(axis = 0)
###Output
_____no_output_____
###Markdown
Confirm that your `train_df` has two columns and 815,827 rows.> **Tip:** If you have more rows, make sure you dropped all of the rows with Unknown sex.>> **Tip:** If you have more columns, make sure you only have your input and output columns.
###Code
train_df['SEX_CODE'] = train_df.SEX_CODE.astype(int)
train_df.shape
###Output
_____no_output_____
###Markdown
Run your regressionSee the effect of sex on whether the person's injuries are fatal or not. After we train the regression, we can use my ✨favorite technique✨ to display features and their odds ratios:

```python
feature_names = X.columns
coefficients = clf.coef_[0]

pd.DataFrame({
    'feature': feature_names,
    'coefficient (log odds ratio)': coefficients,
    'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
```
###Code
X = train_df.drop(columns="had_fatality")
y = train_df.had_fatality
clf = LogisticRegression(C=1e9, solver='lbfgs')
clf.fit(X,y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Use words to interpret this result
###Code
# If all other variables remain unchanged, a man is about 2 times more likely than a woman to die in a car crash
###Output
_____no_output_____
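###Markdown
To make that odds ratio more concrete, here is a small illustration (an addition, not required by the assignment) that converts the fitted model into predicted fatality probabilities for each sex. It reuses `clf` and the 0 = female, 1 = male encoding from above.
###Code
# Hedged illustration: predicted fatality probability by sex from the fitted model.
example = pd.DataFrame({'SEX_CODE': [0, 1]})
probs = clf.predict_proba(example)[:, 1]
print('Predicted fatality probability, female:', probs[0])
print('Predicted fatality probability, male:  ', probs[1])
###Output
_____no_output_____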
###Markdown
Adding more featuresThe actual crash data has more details - whether it was snowy/icy, whether it was a highway, etc. Read in `combined-crash-data.csv`, calling it **`crashes`**, and merge it with our people dataset. I'll save you a lookup: the `REPORT_NO` is what matches between the two.
###Code
crashes = pd.read_csv('data/combined-crash-data.csv')
crashes.sample(5)
merged = people.merge(crashes, left_on = 'REPORT_NO', right_on = 'REPORT_NO')
###Output
_____no_output_____
###Markdown
Examining more possible featuresHow often was it wet, dry, snowy, icy, etc? **What was the most common surface condition?*** **Tip:** We're interested in surface condition, _not_ road condition, _not_ weather condition
###Code
import re
names = '''
0 Not Applicable
1 Wet
2 Dry
3 Snow
4 Ice
5 Mud
6 Slush
7 Water
8 Sand
9 Oil
88 Other
99 Unknown
'''
keys = [int(x) for x in re.findall('\d+', names)]
names = re.findall('([ A-Za-z]+)\n', names)
dict(zip(keys, names))
merged['SURF_COND_CODE'] = merged.SURF_COND_CODE.replace(dict(zip(keys, names)))
###Output
_____no_output_____
###Markdown
Do you feel that a **Dry** road condition should be the average of **Wet** and **Snow?**
###Code
#Don't think so
###Output
_____no_output_____
###Markdown
The answer to that should be *no*, which means we can't use this data as numeric data. We want a different coefficient for each of these - I want to know the impact of dry, the impact of wet, the impact of snow, all separately.Start by **replacing each code with a proper description.** I'll even include them here:* `00` - Not Applicable* `01` - Wet* `02` - Dry* `03` - Snow* `04` - Ice* `05` - Mud, Dirt, Gravel* `06` - Slush* `07` - Water (standing/moving)* `08` - Sand* `09` - Oil* `88` - Other* `99` - UnknownBut watch out, pandas read the column in as numbers so they might have come through a little differently than their codes. Confirm you have 147,803 wet, and a few codes you can't understand, like `6.03` and `7.01`.
###Code
merged.SURF_COND_CODE.value_counts()
###Output
_____no_output_____
###Markdown
Replace the codes you don't understand with `Other`.
###Code
merged['SURF_COND_CODE'] = merged.SURF_COND_CODE.replace({6.03: 'Other', 7.01: 'Other', 9.88: 'Other', 8.05: 'Other'})
###Output
_____no_output_____
###Markdown
Confirm you have 3,196 'Other'.
###Code
(merged.SURF_COND_CODE == 'Other').sum()
###Output
_____no_output_____
###Markdown
One-hot encodingWe're going to use `pd.get_dummies` to build a variable you'll call `surf_dummies`. Each surface condition should be a `0` or `1` as to whether it was that condition (dry, icy, wet, etc).Use a `prefix=` so we know they are **surface** conditions.You'll want to drop the column you'll use as the reference category.**Before we do this: which column works best as the reference?** Now build your `surf_dummies` variable.
###Code
# drop_first=True drops the first category in sorted order (Dry here),
# so Dry serves as the reference category
surf_dummies = pd.get_dummies(merged.SURF_COND_CODE, prefix = 'surf', drop_first = True)
surf_dummies.head()
###Output
_____no_output_____
###Markdown
Confirm your `surf_dummies` looks roughly like this:

|surface_Ice|surface_Mud, Dirt, Gravel|surface_Not Applicable|...|surface_Wet|
|---|---|---|---|---|
|0|0|0|...|0|
|0|0|0|...|0|
|0|0|1|...|0|
|0|0|1|...|0|
|0|0|0|...|1|

Another regressionLet's run another regression to see the impact of both **sex and surface condition** on fatalities. Build your `train_df`To build your `train_df`, I recommend doing it either of these two ways. They both first select the important columns, then add in the one-hot encoded `surf_dummies` columns.

```python
train_df = pd.DataFrame({
    'col1': merged.col1,
    'col2': merged.col2,
    'col3': merged.col3,
})
train_df = train_df.join(surf_dummies)
train_df = train_df.dropna()
train_df.head()
```

or like this:

```python
train_df = train_df[['col1','col2','col3']].copy()
train_df = train_df.join(surf_dummies)
train_df = train_df.dropna()
train_df.head()
```

The second one is shorter, but the first one makes it easier to use comments to remove columns later.
###Code
train_df = merged[['SEX_CODE', 'had_fatality']].replace('F', 0).replace('M', 1).replace('U', np.nan).copy()
train_df = train_df.join(surf_dummies).dropna()
train_df.head()
###Output
_____no_output_____
###Markdown
Run your regression and check your odds ratiosActually no, wait, first - what kind of surface do you think will have the **highest fatality rate?**
###Code
# Snow
train_df.shape
train_df.head()
###Output
_____no_output_____
###Markdown
Confirm your `train_df` has 815,843 rows and 9 columns.* **Tip:** When you run your regression, if you get an error about not knowing what to do with `U`, it's because you didn't convert your sex to numbers (or if you did, you didn't do it in your original dataframe)
###Code
X = train_df.drop(columns = 'had_fatality')
y = train_df.had_fatality
clf = LogisticRegression(C=1e9, solver='lbfgs')
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
**Is this what you expected?** Why do you think this result might be the case?
###Code
# Might be that wet, ice, and snow conditions can all contribute to muddy road surfaces
###Output
_____no_output_____
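###Markdown
One way to sanity-check the surprising mud/dirt/gravel result (an optional extra step, not part of the assignment) is to look at the raw fatality rate for each surface condition, since the odds ratios above are all relative to the dry-road reference. This reuses the `merged` dataframe and the descriptive `SURF_COND_CODE` labels created earlier.
###Code
# Hedged sanity check: raw fatality rate and count per surface condition.
rates = merged.groupby('SURF_COND_CODE').had_fatality.agg(['mean', 'count'])
rates.sort_values(by='mean', ascending=False)
###Output
_____no_output_____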
###Markdown
More features: VehiclesMaybe whether someone survived is related to the car they were in. Luckily, we have this information - **read in `combined_vehicle_data` as `vehicles`.**
###Code
vehicles = pd.read_csv('data/combined-vehicle-data.csv')
###Output
_____no_output_____
###Markdown
Weights of those carsThe car weights are stored in **another file** since the info had to come from an API. I looked up the VINs - vehicle identification numbers - in a government database to try to get data for each of them.**Read them and build a new dataframe that is both the vehicle data along with their weights.** You can call it `vehicles` since you don't need the original weightless vehicle data any more.
###Code
weight = pd.read_csv('data/vins_and_weights.csv')
vehicles.head(2)
weight.head(2)
vehicles = vehicles.merge(weight, left_on = 'VIN_NO', right_on = 'VIN')
###Output
_____no_output_____
###Markdown
Confirm that your combined `vehicles` dataset should have 534,436 rows and 35 columns. And yes, that's less than we were working with before - you haven't combined it with the people/crashes dataset yet.
###Code
vehicles.shape
###Output
_____no_output_____
###Markdown
Filter your dataWe only want vehicles that are "normal" - somewhere between 1500 and 6000 pounds. Filter your vehicles to only include those in that weight range.
###Code
vehicles = vehicles[(vehicles.weight >= 1500) & (vehicles.weight <= 6000)]
###Output
_____no_output_____
###Markdown
Confirm that you have 532,370 vehicles in the dataset.
###Code
vehicles.shape
###Output
_____no_output_____
###Markdown
Add this vehicle information to your merged dataNow we'll have a dataframe that contains information on:* The people themselves and their injuries* The crash* The vehiclesEvery person came with a `VEHICLE_ID` column that is the vehicle they were in. You'll want to merge on that.
###Code
merged = merged.merge(vehicles, left_on = 'VEHICLE_ID', right_on = 'VEHICLE_ID')
###Output
_____no_output_____
###Markdown
Confirm you have 99 columns and 616,212 rows. **That is a lot of possible features!**
###Code
merged.shape
###Output
_____no_output_____
###Markdown
Another regression, because we can't get enoughBuild another `train_df` and run another regression about **how car weight impacts the chance of fatalities**. You'll want to confirm that your dataset has 616,212 and 2 columns.
###Code
train_df = merged[['had_fatality', 'weight']]
X = train_df.drop(columns = 'had_fatality')
y = merged.had_fatality
clf = LogisticRegression(C = 1e-2, solver = "lbfgs")
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
**Can you translate that into plain English?** Remember weight is in **pounds**.
###Code
# All else equal, each additional pound of vehicle weight barely changes the odds of a fatality in a crash
# (when the weight increases by 1 pound, the odds of death change by a factor of roughly 1, i.e. almost not at all)
###Output
_____no_output_____
###Markdown
I feel like pounds isn't the best measure for something like this. Remember how we had to adjust percentages with AP and life expecntancy, and then change around the way we said things? It sounded like this:> Every 10% increase in unemployment translates to a year and a half loss of life expectancyInstead of every single pound, maybe we could do every... some other number of pounds? One hundred? One thousand?**Run another regression with weight in thousands of pounds.** Get another odds ratio. Give me another sentence English.
###Code
train_df = train_df.copy()
train_df['weight_in_thousand'] = train_df.weight / 1000
X = train_df.drop(columns = ['had_fatality', 'weight'])
y = train_df.had_fatality
clf = LogisticRegression(C = 1e4, solver='lbfgs')
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
# Every additional thousand pounds of vehicle weight translates to roughly a 15% decrease in the odds of a fatality
###Output
_____no_output_____
###Markdown
Two-car accidents, struck and strikerHere's the thing, though: **it isn't just the weight of your car.** It's the weight of both cars! If I'm in a big car and I have a wreck with a smaller car, it's the smaller car that's in trouble.To get that value, we need to do some **feature engineering**, some calculating of *new* variables from our *existing* variables.We need to jump through some hoops to do that. Two-car accidentsFirst we're going to count how many vehicles were in each accident. Since we're looking to compare the weight of two cars hitting each other, **we're only going to want crashes with only two cars.**
###Code
counted = vehicles.REPORT_NO.value_counts()
counted.head(10)
###Output
_____no_output_____
###Markdown
By using `.value_counts` I can see how many cars were in each crash, and now I'm going to filter to get a list of all of the ones with two vehicles.
###Code
two_car_report_nos = counted[counted == 2].index
two_car_report_nos
###Output
_____no_output_____
###Markdown
And now we'll filter my vehicles so we only have those that were in two-vehicle crashes.
###Code
vehicles = vehicles[vehicles.REPORT_NO.isin(two_car_report_nos)]
###Output
_____no_output_____
###Markdown
Struck and strikerTo do the math correctly, we need both the risk of someone dying in the smaller car _and_ the risk of someone dying in the bigger car. To do this we need to separate our cars into two groups:* The 'struck' vehicle: did the person die inside?* The 'striker' vehicle: how much heavier was it than the struck car?But we don't know which car was which, so we have to try out both versions - pretending car A was the striker, then pretending car B was the striker. It's hard to explain, but you can read `Pounds That Kill - The External Costs of Vehicle Weight.pdf` for more details on how it works.
###Code
cars_1 = vehicles.drop_duplicates(subset='REPORT_NO', keep='first')
cars_2 = vehicles.drop_duplicates(subset='REPORT_NO', keep='last')
cars_merged_1 = cars_1.merge(cars_2, on='REPORT_NO', suffixes=['_striker', '_struck'])
cars_merged_2 = cars_2.merge(cars_1, on='REPORT_NO', suffixes=['_striker', '_struck'])
vehicles_complete = pd.concat([cars_merged_1, cars_merged_2])
vehicles_complete.head()
###Output
_____no_output_____
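###Markdown
A quick way to confirm the striker/struck swap worked (an optional check, not in the original notebook) is to verify that every two-car crash now contributes exactly two rows, one for each ordering.
###Code
# Hedged sanity check: each two-car REPORT_NO should appear exactly twice
# in vehicles_complete (once per striker/struck ordering).
counts = vehicles_complete.REPORT_NO.value_counts()
print('All crashes have exactly two rows:', bool((counts == 2).all()))
print('Total paired rows:', len(vehicles_complete))
###Output
_____no_output_____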
###Markdown
Put people in their carsWhich car was each person in? We'll assign that now.
###Code
merged = people.merge(vehicles_complete, left_on='VEHICLE_ID', right_on='VEHICLE_ID_struck')
merged.head(3)
###Output
_____no_output_____
###Markdown
Add the crash detailsYou did this already! I'm going to do it for you. We're merging on `REPORT_NO_x` because there are so many `REPORT_NO` columns duplicated across our files that pandas started giving them weird names.
###Code
merged = merged.merge(crashes, left_on='REPORT_NO_x', right_on='REPORT_NO')
merged.head(3)
###Output
_____no_output_____
###Markdown
FilterWe already filtered out vehicles by weight, so we don't have to do that again. Calculated featuresI'm sure you forgot what all the features are, so we'll bring back whether there was a fatality or not Feature: Accident was fatal
###Code
merged['had_fatality'] = (merged.INJ_SEVER_CODE == 5).astype(int)
merged.had_fatality.value_counts()
###Output
_____no_output_____
###Markdown
Feature: Weight difference**Remove everything missing weights for strikers or struck vehicles.** You might need to `merged.columns` to remind yourself what the column names are.
###Code
merged = merged.dropna(subset= ['weight_striker', 'weight_struck'])
###Output
_____no_output_____
###Markdown
Confirm your dataset has roughly 335,000 rows.
###Code
merged.shape
###Output
_____no_output_____
###Markdown
Create a new feature called `weight_diff` about how much heavier the striking car was compared to the struck car. **Make sure you've done the math correctly!**
###Code
merged['weight_diff'] = merged.weight_striker - merged.weight_struck
###Output
_____no_output_____
###Markdown
Feature adjustmentMake all of your weight columns in **thousands of pounds** instead of just in pounds. It'll help you interpret your results much better.
###Code
merged[['weight_striker', 'weight_struck', 'weight_diff']] = merged[['weight_striker', 'weight_struck', 'weight_diff']] / 1000
merged[['weight_striker', 'weight_struck', 'weight_diff']].head()
###Output
_____no_output_____
###Markdown
Another regression!!!**What is the impact of weight difference on fatality rate?** Create your `train_df`, drop missing values, run your regression, analyze your odds ratios.
###Code
train_df = merged[['weight_diff', 'had_fatality']]
X = train_df.drop(columns = 'had_fatality')
y = train_df.had_fatality
clf = LogisticRegression(C = 1e4, solver = 'lbfgs')
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Please translate your odds ratio into plain English.
###Code
# For every additional 1,000 pounds of weight difference between the striker and the struck car, the odds of death in the crash increase by a factor of 1.6
# All else equal, a 1,000-pound increase in the weight difference between the striker and the struck car makes a fatality in the crash about 1.6 times more likely
###Output
_____no_output_____
###Markdown
Adding in more featuresHow about speed limit? That's important, right? We can add the speed limit of the striking vehicle with `SPEED_LIMIT_striker`.
###Code
train_df = merged[['had_fatality', 'SPEED_LIMIT_striker']].copy()
# Rescale so the coefficient is per 5 mph rather than per 1 mph
train_df['SPEED_LIMIT_striker'] = train_df.SPEED_LIMIT_striker / 5
X = train_df.drop(columns = 'had_fatality')
y = train_df.had_fatality
clf = LogisticRegression(C = 1e1, solver = 'lbfgs')
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Can you translate the speed limit odds ratio into plain English?
###Code
# Every additional 5 mph increase in the striker's speed limit (the column was divided by 5 above) multiplies the odds of a fatality by the fitted odds ratio
###Output
_____no_output_____
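###Markdown
Because the speed limit was rescaled to 5 mph units above, the fitted coefficient is per 5 mph step. The sketch below (an addition, reusing the fitted `clf`) reads off that odds ratio and compounds it for a larger change, for example 20 mph.
###Code
# Hedged illustration: the coefficient is per 5 mph because of the rescaling above.
per_5mph_or = np.exp(clf.coef_[0][0])
print('Odds ratio per 5 mph of striker speed limit:', round(per_5mph_or, 3))
print('Implied odds ratio for a 20 mph difference:', round(per_5mph_or ** 4, 3))
###Output
_____no_output_____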
###Markdown
Feature engineering: Speed limitsHonestly, that's a pretty bad way to go about things. What's more fun is if we **translate speed limits into bins.**First, we'll use `pd.cut` to assign each speed limit a category.
###Code
speed_bins = [-np.inf, 10, 20, 30, 40, 50, np.inf]
merged['speed_bin'] = pd.cut(merged.SPEED_LIMIT_struck, bins=speed_bins)
merged[['SPEED_LIMIT_striker', 'speed_bin']].head(10)
###Output
_____no_output_____
###Markdown
Then we'll one-hot encode around 20-30mph speed limits.
###Code
speed_dummies = pd.get_dummies(merged.speed_bin,
prefix='speed').drop('speed_(20.0, 30.0]', axis=1)
speed_dummies.head()
###Output
_____no_output_____
###Markdown
Running a regressionI like this layout for creating `train_df`, it allows us to easily add dummies and do a little replacing/encoding when we're building binary features like for sex.> If the below gives you an error, it's because `SEX_CODE` is already a number. In that case, just remove `.replace({'M': 1, 'F': 0, 'U': np.nan })`.
###Code
# Start with our normal features
train_df = pd.DataFrame({
'weight_diff': merged.weight_diff,
# 'sex': merged.SEX_CODE,#.replace({'M': 1, 'F': 0, 'U': np.nan }),
'had_fatality': merged.had_fatality,
})
# Add the one-hot encoded features
train_df = train_df.join(speed_dummies)
train_df = train_df.join(surf_dummies)
# Drop missing values
train_df = train_df.dropna()
train_df.head()
X = train_df.drop(columns = 'had_fatality')
y = train_df.had_fatality
clf = LogisticRegression(C = 1e9, solver = 'lbfgs')
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients).round(4)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Describe the impact of the different variables in simple language. What has the largest impact?
###Code
# The 40-50 mph speed-limit bin has the largest impact on crash fatality: relative to the 20-30 mph reference bin, a crash on such a road is about 5.7 times more likely to involve a death.
###Output
_____no_output_____
###Markdown
Feature selection and feature engineeringThis isn't based on a news article, exactly, it's from a paper. You can see the paper in `/data/`.
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
pd.set_option("display.max_columns", 200)
pd.set_option("display.max_colwidth", 200)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Reading in our data Every single person in a crashWe'll start by reading in the list of people who were involved in an accident.**Call this dataframe `people`.**
###Code
people = pd.read_csv('data/combined-person-data.csv')
people.head()
people.dtypes
###Output
_____no_output_____
###Markdown
How often did each severity of injury show up? (e.g. not injured, non-incapacitating injury, etc)
###Code
people.INJ_SEVER_CODE.value_counts()
# 01 No Injury
# 02 Non-incapacitating Injury
# 03 Possible Incapacitating Injury
# 04 Incapacitating/Disabled Injury
# 05 Fatal Injury
people['is_fatality'] = people.INJ_SEVER_CODE == 5
###Output
_____no_output_____
###Markdown
We're only interested in fatalities, so let's create a new `is_fatality` column for when people received a fatal injury.**Confirm there were 1681 people with fatal injuries.**
###Code
people['is_fatality'].value_counts()
###Output
_____no_output_____
###Markdown
Working on Features Starting our analysisWe're going to run a regression on the impact of being **male vs female on crash fatalities**. Prepare a dataframe called `train_df` with the appropriate information in it.* **Tip:** What column(s) are your input, and what is your output? Aka independent and dependent variables* **Tip:** You'll need to convert your input column into something numeric, I suggest using `.replace`* **Tip:** We aren't interested in the "Unknown" sex - either filtering or `np.nan` + `.dropna()` might be useful ways to get rid of those columns
###Code
people.columns
# Encode sex as a numeric is_male flag. Adding it to people keeps the column
# available for the later merges, which reuse the people dataframe.
people['is_male'] = people.SEX_CODE.replace({'F': 0, 'M': 1, 'U': np.nan})
train_df = people[['is_fatality', 'is_male']]
train_df = train_df.dropna()
train_df.shape
###Output
_____no_output_____
###Markdown
Confirm that your `train_df` has two columns and 815,827 rows.> **Tip:** If you have more rows, make sure you dropped all of the rows with Unknown sex.>> **Tip:** If you have more columns, make sure you only have your input and output columns. Run your regressionSee the effect of sex on whether the person's injuries are fatal or not. I want to see a result dataframe that includes:* Feature name* Coefficient* Odds ratio
###Code
X = train_df.drop(columns='is_fatality')
y = train_df.is_fatality
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Use words to interpret this result
###Code
# Males are twice as likely to die from a car crash (2.043798).
###Output
_____no_output_____
###Markdown
Adding more featuresThe actual crash data has more details - whether it was snowy/icy, whether it was a highway, etc. Read in `combined-crash-data.csv` and merge it with our people dataset. I'll save you a lookup: the `REPORT_NO` is what matches between the two.
###Code
crash = pd.read_csv('data/combined-crash-data.csv')
crash.head()
merged = people.merge(crash, on='REPORT_NO')
merged.head(2)
###Output
_____no_output_____
###Markdown
Examining more possible featuresHow often was it wet, dry, snowy, icy, etc? **What was the most common condition?**
###Code
# Surface condition codes (from the assignment):
# 00 Not Applicable
# 01 Wet
# 02 Dry
# 03 Snow
# 04 Ice
# 05 Mud, Dirt, Gravel
# 06 Slush
# 07 Water (standing/moving)
# 08 Sand
# 09 Oil
# 88 Other
# 99 Unknown
merged.SURF_COND_CODE.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Do you feel that a **Dry** road condition should be the average of **Wet** and **Snow?**
###Code
# That doesn't make much sense to me
###Output
_____no_output_____
###Markdown
The answer to that should be *no*, which means we can't use this data as numeric data. We want a different coefficient for each of these - I want to know the impact of dry, the impact of wet, the impact of snow, all separately.Start by **replacing each code with a proper description.** I'll even include them here:* `00` - Not Applicable* `01` - Wet* `02` - Dry* `03` - Snow* `04` - Ice* `05` - Mud, Dirt, Gravel* `06` - Slush* `07` - Water (standing/moving)* `08` - Sand* `09` - Oil* `88` - Other* `99` - UnknownBut watch out, pandas read the column in as numbers so they might have come through a little differently than their codes.
###Code
list(merged.SURF_COND_CODE.unique())
weather_dict = {2.0:'dry', 0.0:'not_applicable',
1.0:'wet',
np.nan:np.nan,
99.0:'unknown', 3.0:'snow',
5.0:'mud_dirt_gravel',
88.0:'other',
7.01:'other',
6.03:'other',
9.88:'other',
4.0:'ice',
8.05:'other'}
merged['surface_cond'] = merged.SURF_COND_CODE.replace(weather_dict)
merged.surface_cond.value_counts()
###Output
_____no_output_____
###Markdown
Confirm you have 147,803 wet, and a few codes you can't understand, like `6.03` and `7.01`.
###Code
#Done
###Output
_____no_output_____
###Markdown
Replace the codes you don't understand with `Other`.
###Code
#Done
###Output
_____no_output_____
###Markdown
Confirm you have 3,196 'Other'.
###Code
#Done
###Output
_____no_output_____
###Markdown
One-hot encodingWe're going to use `pd.get_dummies` to build a variable you'll call `surf_dummies`. Each surface condition should be a `0` or `1` as to whether it was that condition (dry, icy, wet, etc).Use a `prefix=` so we know they are **surface** conditions.You'll want to drop the column you'll use as the reference category.**Before we do this: which column works best as the reference?**
###Code
# set DRY as the reference because that is what we want to compare other
# surface conditions to
surf_dummies = pd.get_dummies(merged.surface_cond, prefix='surface').\
drop('surface_dry', axis=1)
###Output
_____no_output_____
###Markdown
Now build your `surf_dummies` variable.
###Code
surf_dummies.head()
###Output
_____no_output_____
###Markdown
Confirm your `surf_dummies` looks roughly like this:

|surface_Ice|surface_Mud, Dirt, Gravel|surface_Not Applicable|...|surface_Wet|
|---|---|---|---|---|
|0|0|0|...|0|
|0|0|0|...|0|
|0|0|1|...|0|
|0|0|1|...|0|
|0|0|0|...|1|

Another regressionLet's run another regression to see the impact of both **sex and surface condition** on fatalities. Build your `train_df`To build your `train_df`, I recommend doing it either of these two ways:

```python
train_df = pd.DataFrame({
    'col1': merged.col1,
    'col2': merged.col2,
    'col3': merged.col3,
})
train_df = train_df.join(surf_dummies)
train_df = train_df.dropna()
```

or like this:

```python
train_df = train_df[['col1','col2','col3']].copy()
train_df = train_df.join(surf_dummies)
train_df = train_df.dropna()
```

The second one is shorter, but the first one makes it easier to use comments to remove columns later.
###Code
train_df = pd.DataFrame({
'is_male': merged.is_male,
'is_fatality': merged.is_fatality,
})
train_df = train_df.join(surf_dummies)
train_df = train_df.dropna()
###Output
_____no_output_____
###Markdown
Run your regression and check your odds ratiosActually no, wait, first - what kind of surface do you think will have the **highest fatality rate?**
###Code
X = train_df.drop(columns='is_fatality')
y = train_df.is_fatality
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Confirm your `train_df` has 815,843 rows and 9 columns.* **Tip:** When you run your regression, if you get an error about not knowing what to do with `U`, it's because you didn't convert your sex to numbers (or if you did, you didn't do it in your original dataframe)
###Code
train_df.shape
###Output
_____no_output_____
###Markdown
**Is this what you expected?** Why do you think this result might be the case?
###Code
# I didn't expect anything specific, but learning that you are about 3x more likely to die on a muddy surface
# (especially if you are male) is quite surprising
###Output
_____no_output_____
###Markdown
More features: VehiclesMaybe whether someone survived is related to the car they were in. Luckily, we have this information - **read in `combined_vehicle_data` as `vehicles`.**
###Code
vehicles = pd.read_csv('data/combined-vehicle-data.csv')
vehicles.head()
###Output
_____no_output_____
###Markdown
Weights of those carsThe car weights are stored in **another file** since the info had to come from an API. I looked up the VINs - vehicle identification numbers - in a government database to try to get data for each of them.**Read them and build a new dataframe that is both the vehicle data along with their weights.** You can call it `vehicles` since you don't need the original weightless vehicle data any more.
###Code
vehicles_weights = pd.read_csv('data/vins_and_weights.csv')
vehicles_weights.columns
vehicles_weights.head()
###Output
_____no_output_____
###Markdown
Confirm that your combined `vehicles` dataset should have 534,436 rows and 35 columns. And yes, that's less than we were working with before - you haven't combined it with the people/crashes dataset yet.
###Code
vehicles = vehicles.merge(vehicles_weights, left_on='VIN_NO', right_on='VIN')
vehicles.shape
###Output
_____no_output_____
###Markdown
Filter your dataWe only want vehicles that are "normal" - somewhere between 1500 and 6000 pounds. Filter your vehicles to only include those in that weight range.
###Code
vehicles_normal = vehicles[(vehicles.weight > 1500) & (vehicles.weight < 6000)]
###Output
_____no_output_____
###Markdown
Confirm that you have 532,370 vehicles in the dataset.
###Code
vehicles_normal.shape
###Output
_____no_output_____
###Markdown
Add this vehicle information to your merged dataNow we'll have a dataframe that contains information on:* The people themselves and their injuries* The crash* The vehiclesEvery person came with a `VEHICLE_ID` column that is the vehicle they were in. You'll want to merge on that.
###Code
vehicle_merged = merged.merge(vehicles_normal, on='VEHICLE_ID')
vehicle_merged.head()
###Output
_____no_output_____
###Markdown
Confirm you have 99 columns and 616,212 rows. **That is a lot of possible features!**
###Code
vehicle_merged.shape
###Output
_____no_output_____
###Markdown
Another regression, because we can't get enoughBuild another `train_df` and run another regression about **how car weight impacts the chance of fatalities**. You'll want to confirm that your dataset has 616,212 and 2 columns.
###Code
train_df = vehicle_merged.copy()
train_df = train_df[['weight', 'is_fatality']]
train_df.shape
X = train_df.drop(columns='is_fatality')
y = train_df.is_fatality
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
**Can you translate that into plain English?** Remember weight is in **pounds**.
###Code
# Not sure how right I am:
# For each extra pound of vehicle weight, a fatality in a crash is very slightly less likely.
###Output
_____no_output_____
###Markdown
I feel like pounds isn't the best measure for something like this. Remember how we had to adjust percentages with AP and life expecntancy, and then change around the way we said things? It sounded like this:> Every 10% increase in unemployment translates to a year and a half loss of life expectancyInstead of every single pound, maybe we could do every... some other number of pounds? One hundred? One thousand?**Run another regression with weight in thousands of pounds.** Get another odds ratio. Give me another sentence English.
###Code
train_df['weight_1000'] = train_df.weight / 1000
train_df = train_df.drop('weight', axis=1)
X = train_df.drop(columns='is_fatality')
y = train_df.is_fatality
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)
# Every additional thousand pounds of vehicle weight translates to roughly a 15% decrease in the odds of a fatality
###Output
_____no_output_____
###Markdown
Two-car accidents, struck and strikerHere's the thing, though: **it isn't just the weight of your car.** It's the weight of both cars! If I'm in a big car and I have a wreck with a smaller car, it's the smaller car that's in trouble.To get that value, we need to do some **feature engineering**, some calculating of *new* variables from our *existing* variables.We need to jump through some hoops to do that. Two-car accidentsFirst we're going to count how many vehicles were in each accident. Since we're looking to compare the weight of two cars hitting each other, **we're only going to want crashes with only two cars.**
###Code
counted = vehicles.REPORT_NO.value_counts()
counted.head(10)
###Output
_____no_output_____
###Markdown
By using `.value_counts` I can see how many cars were in each crash, and now I'm going to filter to get a list of all of the ones with two vehicles.
###Code
two_car_report_nos = counted[counted == 2].index
two_car_report_nos
###Output
_____no_output_____
###Markdown
And now we'll filter my vehicles so we only have those that were in two-vehicle crashes.
###Code
vehicles = vehicles[vehicles.REPORT_NO.isin(two_car_report_nos)]
###Output
_____no_output_____
###Markdown
Struck and strikerTo do the math correctly, we need both the risk of someone dying in the smaller car _and_ the risk of someone dying in the bigger car. To do this we need to separate our cars into two groups:* The 'struck' vehicle: did the person die inside?* The 'striker' vehicle: how much heavier was it than the struck car?But we don't know which car was which, so we have to try out both versions - pretending car A was the striker, then pretending car B was the striker. It's hard to explain, but you can read `Pounds That Kill - The External Costs of Vehicle Weight.pdf` for more details on how it works.
###Code
cars_1 = vehicles.drop_duplicates(subset='REPORT_NO', keep='first')
cars_2 = vehicles.drop_duplicates(subset='REPORT_NO', keep='last')
cars_merged_1 = cars_1.merge(cars_2, on='REPORT_NO', suffixes=['_striker', '_struck'])
cars_merged_2 = cars_2.merge(cars_1, on='REPORT_NO', suffixes=['_striker', '_struck'])
vehicles_complete = pd.concat([cars_merged_1, cars_merged_2])
vehicles_complete.head()
###Output
_____no_output_____
###Markdown
Put people in their carsWhich car was each person in? We'll assign that now.
###Code
merged = people.merge(vehicles_complete, left_on='VEHICLE_ID', right_on='VEHICLE_ID_struck')
merged.head(3)
###Output
_____no_output_____
###Markdown
Add the crash detailsYou did this already! I'm going to do it for you. We're merging on `REPORT_NO_x` because there are so many `REPORT_NO` columns duplicated across our files that pandas started giving them weird names.
###Code
merged = merged.merge(crash, left_on='REPORT_NO_x', right_on='REPORT_NO')
merged.head(3)
###Output
_____no_output_____
###Markdown
FilterWe already filtered out vehicles by weight, so we don't have to do that again. Calculated featuresI'm sure you forgot what all the features are, so we'll bring back whether there was a fatality or not Feature: Accident was fatal
###Code
merged['had_fatality'] = (merged.INJ_SEVER_CODE == 5).astype(int)
merged.had_fatality.value_counts()
###Output
_____no_output_____
###Markdown
Feature: Weight difference**Remove everything missing weights for strikers or struck vehicles.** You might need to `merged.columns` to remind yourself what the column names are.
###Code
merged = merged.dropna(subset=['weight_struck', 'weight_striker'])
###Output
_____no_output_____
###Markdown
Confirm your dataset has 334,396 rows.
###Code
merged.shape
###Output
_____no_output_____
###Markdown
Create a new feature called `weight_diff` about how much heavier the striking car was compared to the struck car. **Make sure you've done the math correctly!**
###Code
merged['weight_diff'] = merged.weight_striker - merged.weight_struck
###Output
_____no_output_____
###Markdown
Feature adjustmentMake all of your weight columns in **thousands of pounds** instead of just in pounds. It'll help you interpret your results much better.
###Code
merged['weight_striker_1000'] = merged.weight_striker / 1000
merged['weight_struck_1000'] = merged.weight_struck / 1000
merged['weight_diff_1000'] = merged.weight_striker_1000 - merged.weight_struck_1000
###Output
_____no_output_____
###Markdown
Another regression!!!**What is the impact of weight difference on fatality rate?** Create your `train_df`, drop missing values, run your regression, analyze your odds ratios.
###Code
train_df = merged[['is_fatality', 'weight_diff_1000']]
X = train_df.drop(columns='is_fatality')
y = train_df.is_fatality
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Please translate your odds ratio into plain English.
###Code
# For every extra thousand pounds the striking car weighs relative to the struck car, the odds of a fatality increase by a factor of about 1.13
###Output
_____no_output_____
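###Markdown
As a worked example (an addition, using the 1.13 figure quoted in the answer above): odds ratios compound multiplicatively, so a 2,000-pound weight advantage corresponds to roughly 1.13 squared.
###Code
# Hedged worked example: odds ratios compound multiplicatively.
per_1000lb = 1.13  # odds ratio quoted in the answer above
print('Implied odds ratio for a 2,000 lb difference:', round(per_1000lb ** 2, 3))  # ~1.277
###Output
_____no_output_____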
###Markdown
Adding in more featuresHow about speed limit? That's important, right? We can add the speed limit of the striking vehicle with `SPEED_LIMIT_striker`.
###Code
train_df = merged[['is_fatality', 'weight_diff_1000', 'SPEED_LIMIT_striker']]
train_df.SPEED_LIMIT_striker.value_counts()
X = train_df.drop(columns='is_fatality')
y = train_df.is_fatality
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X, y)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
'feature': feature_names,
'coefficient (log odds ratio)': coefficients,
'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)
###Output
_____no_output_____
###Markdown
Can you translate the speed limit odds ratio into plain English? Feature engineering: Speed limitsHonestly, that's a pretty bad way to go about things. What's more fun is if we **translate speed limits into bins.**First, we'll use `pd.cut` to assign each speed limit a category.
###Code
speed_bins = [-np.inf, 10, 20, 30, 40, 50, np.inf]
merged['speed_bin'] = pd.cut(merged.SPEED_LIMIT_striker, bins=speed_bins)
merged[['SPEED_LIMIT_striker', 'speed_bin']].head(10)
###Output
_____no_output_____
###Markdown
Then we'll one-hot encode around 20-30mph speed limits.
###Code
speed_dummies = pd.get_dummies(merged.speed_bin,
prefix='speed').drop('speed_(20.0, 30.0]', axis=1)
speed_dummies.head()
###Output
_____no_output_____
###Markdown
Running a regressionI like this layout for creating `train_df`, it allows us to easily add dummies and do a little replacing/encoding when we're building binary features like for sex.> If the below gives you an error, it's because `SEX_CODE` is already a number. In that case, just remove `.replace({'M': 1, 'F': 0, 'U': np.nan })`.
###Code
# Start with our normal features
train_df = pd.DataFrame({
'weight_diff': merged.weight_diff,
'sex': merged.SEX_CODE,#.replace({'M': 1, 'F': 0, 'U': np.nan }),
'had_fatality': merged.had_fatality,
})
# Add the one-hot encoded features
train_df = train_df.join(speed_dummies)
train_df = train_df.join(surf_dummies)
# Drop missing values
train_df = train_df.dropna()
train_df.head()
###Output
_____no_output_____
###Markdown
Describe the impact of the different variables in simple language. What has the largest impact?
###Code
X = train_df.drop(columns='had_fatality')
y = train_df.had_fatality
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X, y)
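# Same odds-ratio readout as the earlier regressions, added as a sketch so the
# question above can be answered (reuses the exact pattern from the previous cells)
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
    'feature': feature_names,
    'coefficient (log odds ratio)': coefficients,
    'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)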
###Output
_____no_output_____
|
Analysis/FrotaVeiculos/MergingData.ipynb
|
###Markdown
Vehicle Fleets in Brazil Merging the data
###Code
# Import libraries
import pandas as pd
# Read the monthly CSV files
df_frota_032020 = pd.read_csv('datasets/2020-03_frota_de_veiculos.csv', sep=';')
df_frota_042020 = pd.read_csv('datasets/2020-04_frota_de_veiculos.csv', sep=';')
df_frota_052020 = pd.read_csv('datasets/2020-05_frota_de_veiculos.csv', sep=';')
df_frota_062020 = pd.read_csv('datasets/2020-06_frota_de_veiculos.csv', sep=';')
df_frota_072020 = pd.read_csv('datasets/2020-07_frota_de_veiculos.csv', sep=';')
df_frota_082020 = pd.read_csv('datasets/2020-08_frota_de_veiculos.csv', sep=';')
df_frota_092020 = pd.read_csv('datasets/2020-09_frota_de_veiculos.csv', sep=';')
df_frota_102020 = pd.read_csv('datasets/2020-10_frota_de_veiculos.csv', sep=';')
df_frota_112020 = pd.read_csv('datasets/2020-11_frota_de_veiculos.csv', sep=';')
df_frota_032020.info()
df_frota_042020.info()
df_frota_052020.info()
df_frota_062020.info()
df_frota_072020.info()
df_frota_082020.info()
df_frota_092020.info()
df_frota_102020.info()
df_frota_112020.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 65535 entries, 0 to 65534
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 UF 65535 non-null object
1 MUNICIPIO 65535 non-null object
2 TIPO DE VEÍCULO 65535 non-null object
3 QUANTIDADE 65535 non-null int64
dtypes: int64(1), object(3)
memory usage: 2.0+ MB
###Markdown
Preprocessing Renaming columns
###Code
df_frota_042020.rename(columns={'TIPO': 'TIPO DE VEICULO'}, inplace=True)
df_frota_112020.rename(columns={'TIPO DE VEÍCULO': 'TIPO DE VEICULO'}, inplace=True)
###Output
_____no_output_____
###Markdown
Creating a date column
###Code
df_frota_032020['DATE'] = pd.to_datetime('01032020', format='%d%m%Y')
df_frota_042020['DATE'] = pd.to_datetime('01042020', format='%d%m%Y')
df_frota_052020['DATE'] = pd.to_datetime('01052020', format='%d%m%Y')
df_frota_062020['DATE'] = pd.to_datetime('01062020', format='%d%m%Y')
df_frota_072020['DATE'] = pd.to_datetime('01072020', format='%d%m%Y')
df_frota_082020['DATE'] = pd.to_datetime('01082020', format='%d%m%Y')
df_frota_092020['DATE'] = pd.to_datetime('01092020', format='%d%m%Y')
df_frota_102020['DATE'] = pd.to_datetime('01102020', format='%d%m%Y')
df_frota_112020['DATE'] = pd.to_datetime('01112020', format='%d%m%Y')
###Output
_____no_output_____
###Markdown
Merge Datasets
###Code
df_frota_0311_2020 = pd.concat([df_frota_032020, df_frota_042020, df_frota_052020,
df_frota_062020, df_frota_072020, df_frota_082020,
df_frota_092020, df_frota_102020, df_frota_112020])
df_frota_0311_2020.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1001631 entries, 0 to 65534
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 UF 1001631 non-null object
1 MUNICIPIO 1001631 non-null object
2 TIPO DE VEICULO 1001631 non-null object
3 QUANTIDADE 1001631 non-null int64
4 DATE 1001631 non-null datetime64[ns]
dtypes: datetime64[ns](1), int64(1), object(3)
memory usage: 45.9+ MB
###Markdown
Save Dataset
###Code
df_frota_0311_2020.to_csv('datasets/2020-0311_frota_de_veiculos.csv', index=None)
###Output
_____no_output_____
|
Introduction_Regex.ipynb
|
###Markdown
INTRODUCTION TO REGULAR EXPRESSIONS Libs
###Code
import re
import datetime
# Jupyter shortcuts: Tab for auto-completion
# Shift+Tab for a quick docstring tooltip
# Shift + double Tab for the expanded docstring
###Output
_____no_output_____
###Markdown
Raw String and Regular String
###Code
s = 'a\tb' # a-tab-b
print(s)
raw_s = r'a\tb' #use raw string when defining regex patterns in your code
print(raw_s)
###Output
a\tb
###Markdown
re.match - Find the first match
###Code
pattern = r'\d+'
text = '42 is my lucky number'
match = re.match(pattern, text)
if match:
print('Match success')
else:
print('no match')
text
pattern
re.match(pattern, text)
pattern = r'\d+'
text = 'is my lucky number'
match = re.match(pattern, text)
if match:
print('Match success')
else:
print('no match')
###Output
no match
###Markdown
***
###Code
pattern = r'\d+'
text = 'is my lucky number'
match = re.match(pattern, text)
if match:
print('Match success')
else:
print('no match')
###Output
no match
###Markdown
***
###Code
pattern = r'\d+'
text = '42 is my lucky number'
match = re.match(pattern, text)
if match:
print(match.group(0), 'at index:', match.start())
else:
print('no match')
if match:
print(match.group(0))
else:
print('no match')
###Output
42
###Markdown
Match - try to apply the pattern at the start of the string Input Validation
###Code
def is_integer(text):
pattern = r"\d+"
match = re.match(pattern, text)
if match:
return True
else:
return False
is_integer("123")
is_integer("abc")
def test_is_integer():
pass_list = ["123","456","900","0991"]
fail_list = ["a123","124a","1 2 3","1\t2"," 12","45 "]
for text in pass_list:
if not is_integer(text):
print('\tFailed to detect an integer',text)
for text in fail_list:
if is_integer(text):
print('\tIncorrectly classified as an integer',text)
print('Test complete')
test_is_integer()
def is_integer(text):
pattern = r"\d+$" #look for numbers followed by end of string
match = re.match(pattern, text)
if match:
return True
else:
return False
def test_is_integer():
pass_list = ["123","456","900","0991"]
fail_list = ["a123","124a","1 2 3","1\t2"," 12","45 "]
for text in pass_list:
if not is_integer(text):
print('\tFailed to detect an integer',text)
for text in fail_list:
if is_integer(text):
print('\tIncorrectly classified as an integer',text)
print('Test complete')
test_is_integer()
###Output
Test complete
###Markdown
re.search - Find the first match anywhere
###Code
pattern = r"\d+"
text = 'my lucky numbers are 42 and 24'
match = re.search(pattern,text)
if match:
print(match.group(0), 'at index:', match.start())
else:
print('no match')
def is_integer(text):
pattern = r"\d+$" #look for numbers followed by end of string
match = re.search(pattern, text)
if match:
return True
else:
return False
is_integer(text)
###Output
_____no_output_____
###Markdown
re.findall - Find all the matches
###Code
# Find all numbers in the text
pattern = r"\d+"
text = "NY Postal Codes are 10001, 10002, 10003, 10004"
match = re.findall(pattern, text)
print(match)
###Output
['10001', '10002', '10003', '10004']
###Markdown
re.finditer - Iterator
###Code
pattern = r"\d+"
text = "NY Postal Codes are 10001, 10002, 10003, 10004"
match_iter = re.finditer(pattern, text)
for match in match_iter:
print("\t", match.group(0), 'at index:', match.start())
pattern = r"\d+"
text = "NY Postal Codes are 10001, 10002, 10003, 10004"
match_iter = re.finditer(pattern, text)
for match in match_iter:
print("\t", match.group(0), 'at index:', match.start())
i = 0
# Note: match_iter was already consumed by the loop above, so this second loop prints nothing
for match in match_iter:
    print('\t', match.group(0), 'at index:', match.start())
    i += 1
    if i > 1:
        break
###Output
10001 at index: 20
10002 at index: 27
10003 at index: 34
10004 at index: 41
###Markdown
groups - find sub matches
###Code
pattern = r"(\d{4})(\d{2})(\d{2})"
text = "Start Date: 20200920"
match = re.search(pattern, text)
match
match.groups()
match.group(1)
match.group(2)
for idx, value in enumerate(match.groups()):
print(idx, value, idx+1, match.start(idx+1))
pattern = r"(\d{4})(\d{2})(\d{2})"
text = "Start Date: 20200920"
print("Pattern",pattern)
match = re.search(pattern, text)
if match:
print('Found a match', match.group(0), 'at index:', match.start())
print('Groups', match.groups())
for idx, value in enumerate(match.groups()):
print ('\tGroup', idx+1, value, '\tat index', match.start(idx+1))
else:
print("No Match")
###Output
Pattern (\d{4})(\d{2})(\d{2})
Found a match 20200920 at index: 12
Groups ('2020', '09', '20')
Group 1 2020 at index 12
Group 2 09 at index 16
Group 3 20 at index 18
###Markdown
named groups
###Code
# Separate year, month and day
pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})"
text = "Start Date: 20200920"
print("Pattern",pattern)
match = re.search(pattern, text)
if match:
print('Found a match', match.group(0), 'at index:', match.start())
print('\t',match.groupdict())
else:
print("No Match")
pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})"
text = "start date: 19910822"
match = re.search(pattern, text)
match.group(0), match.start()
match.groupdict()
match.groupdict()
###Output
_____no_output_____
###Markdown
re.sub - find and replacetwo patterns: one to find the text and another pattern with replacement text
###Code
pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})"
text = "Start Date: 20200920, End Date: 20210920"
replacement_pattern = r"\g<month>-\g<day>-\g<year>"
print(text)
new_text = re.sub(pattern, replacement_pattern, text)
print(new_text)
def format_date(match):
in_date = match.groupdict()
year = int(in_date['year'])
month = int(in_date['month'])
day = int(in_date['day'])
#https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
return datetime.date(year,month,day).strftime('%b-%d-%Y')
# Format date
pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})"
text = "Start Date: 20200920, End Date: 20210920"
print ('original text\t', text)
print()
# find and replace
new_text= re.sub(pattern, format_date, text)
print('new text\t', new_text)
###Output
original text Start Date: 20200920, End Date: 20210920
new text Start Date: Sep-20-2020, End Date: Sep-20-2021
###Markdown
re.split - split text based on specified pattern
###Code
pattern = r','
text = 'today, is, my lucky, day'
re.split(pattern, text)
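# The split pattern can be any regex, not just a literal character. For example
# (illustrative), split on a comma or semicolon with optional surrounding spaces:
pattern = r'\s*[,;]\s*'
text = 'today, is ;my lucky,day'
re.split(pattern, text)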
###Output
_____no_output_____
|
_notebooks/2020-03-05-the-pandas-reference.ipynb
|
###Markdown
The Pandas Reference > A tutorial on how to write clean pandas code to perform data analysis.- toc: false - badges: true- comments: true- categories: [pandas, python]- image: images/chart-preview.png AboutMuch of data exists in rectangular format with rows and columns. Different terms can be used to describe this kind of data 1. Table 2. Data frame 3. Structured data 4. Spreadsheets Pandas is one of the most widely used data manipulation libraries in Python for structured datasets. Below is a summary of the key operations that are part of any essential data analysis project (SQL equivalents). 1. Select column references 2. Select scalar expression 3. Where 4. Group By5. Select aggregation 6. Order By 7. Window functions 8. Join When I started using pandas, I realized that there are multiple ways to perform the same operation. Also, the code I was writing was not as elegant as SQL queries and was hard to debug. In this blog post I will share examples of how to perform the above-mentioned SQL operations in pandas and write pandas code that is readable and easy to maintain.
###Code
import pandas as pd
import numpy as np
df = pd.read_csv("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/tips.csv")
pd.options.display.max_rows = 20
df.head(5)
###Output
_____no_output_____
###Markdown
Select columns Use loc with a list of column names to perform selection of columns. I would recommend using this syntax as it offers more flexibility in your data analysis task ```python.loc[:,['col1','col2']] ``` Select the tip and sex columns from the data. Note: we are using method chaining to perform operations one after another
###Code
(df
.loc[:,['tip','sex']]
.head()
)
###Output
_____no_output_____
###Markdown
Select only columns starting with the letter 't'. Using this simple and readable syntax enables one to perform complex select operations in pandas
###Code
(df
.loc[:,[col for col in df.columns if col.startswith('t')]]
.head()
)
###Output
_____no_output_____
###Markdown
Select columns manipulation Use the assign statement to add new columns and update existing columns ```python.assign(new_col=1).assign(new_col=lambda x:x['col']+1).assign(old_col=lambda x:x['old_col']+1)```
###Code
(df
.loc[:,['total_bill','tip','sex','day','time']]
.assign(percentage_tip=lambda x:x['tip']/x['total_bill']) #add new column
.assign(tip=lambda x:x['tip']+1) # update existing column
.assign(count=1) #add constant value
.head()
)
###Output
_____no_output_____
###Markdown
Filter rows (where)Use query to perform filtering of rows in pandas ```pythonval=10.query("col1>='10'").query("col1>=@val").query(f"col1>='{val}'").query("col1.isin(['a','b'])",engine='python')```
###Code
#filter only transaction with more than 15% in tips
(df
.loc[:,['total_bill','tip','sex','day','time']]
.assign(percentage_tip=lambda x:x['tip']/x['total_bill'])
.query("percentage_tip>.15")
.head()
)
per_tip=.15
#using @ within query to refer a variable in the filter
print("")
display(df
.loc[:,['total_bill','tip','sex','day','time']]
.assign(percentage_tip=lambda x:x['tip']/x['total_bill'])
.query("percentage_tip>@per_tip")
.head()
)
#using f-string to perform filtering
display(df
.loc[:,['total_bill','tip','sex','day','time']]
.assign(percentage_tip=lambda x:x['tip']/x['total_bill'])
.query(f"percentage_tip>{per_tip}")
.head()
)
#Filter only transactions happend on Sunday and Monday
(df
.loc[:,['total_bill','tip','sex','day','time']]
.query("day.isin(['Sun','Mon'])",engine='python')
.head()
)
###Output
_____no_output_____
###Markdown
Group By and Aggregation Use groupby with named aggs to perform any type of aggregation. Aggregation functions are flexible enough that we can pass in lambda functions and numpy functions to perform aggregations.
###Code
#By day get average and total bill
(df
.groupby(['day'])
.agg(avg_bill=('total_bill','mean')
,total_bill=('total_bill','sum')) #multiple column aggregations supported
.reset_index()
)
#By day get average of total bill using : functions, lambda functions, numpy functions
(df
.groupby(['day'])
.agg(avg_bill_mean=('total_bill','mean')
,avg_bill_lambda=('total_bill',lambda x:x.mean()) #using lambda functions
,avg_bill_np=('total_bill',np.mean)) #using numpy functions
.reset_index()
)
###Output
_____no_output_____
###Markdown
Ordering rowsMost data analysis tasks require sorting as a preprocessing step or as a last step to display output. This can be done in pandas by using the sort_values function Use sort_values to order a pandas data frame along the column/axis specified ```python.sort_values(['col1','col2'],ascending=[True,False])```
###Code
#By day get average and total bill.Sort the output by total_bill
(df
.groupby(['day'])
.agg(avg_bill=('total_bill','mean')
,total_bill=('total_bill','sum'))
.reset_index()
.sort_values(['total_bill']) #Default in ascending
)
#By day get average and total bill.Sort the output by total_bill
(df
.groupby(['day'])
.agg(avg_bill=('total_bill','mean')
,total_bill=('total_bill','sum'))
.reset_index()
.sort_values(['total_bill'],ascending=[False]) #By descending order
)
#By day get average and total bill.Sort the output by total_bill and avg_bill
(df
.groupby(['day'])
.agg(avg_bill=('total_bill','mean')
,total_bill=('total_bill','sum'))
.reset_index()
.sort_values(['total_bill','avg_bill'],ascending=[False,True]) #By multiple columns one by asc and other by desc
)
###Output
_____no_output_____
###Markdown
Window function Window functions are very powerful in the SQL world. Here we will learn how to use the following functions: row_number(), Lead()/Lag(), Running sum within each group (partition)
###Code
#Equivalent of row_number() over(partition by day order by total_bill asc) as row_number
(df
.assign(row_number=lambda x:x.sort_values(['total_bill'],ascending=[True]).groupby(['day']).cumcount()+1)
.sort_values(['row_number'])
.head()
)
#Equivalent of lag(total_bill) over(partition by day order by total_bill asc) as previous_bill
(df
.assign(row_number=lambda x:x.sort_values(['total_bill'],ascending=[True]).groupby(['day']).cumcount()+1)
.assign(prev_bill=lambda x:x.sort_values(['total_bill'],ascending=[True]).groupby(['day'])['total_bill'].shift(1))
.sort_values(['row_number'])
.head()
)
#Equivalent of lead(total_bill) over(partition by day order by total_bill asc) as previous_bill
(df
.assign(row_number=lambda x:x.sort_values(['total_bill'],ascending=[True]).groupby(['day']).cumcount()+1)
.assign(next_bill=lambda x:x.sort_values(['total_bill'],ascending=[True]).groupby(['day'])['total_bill'].shift(-1))
.sort_values(['row_number'])
.head()
)
#Equivalent of sum(total_bill) over(partition by day) as sum_bill_day
#Equivalent of sum(tip) over(partition by day order by total bill asc) as cum_tip_day
#Equivalent of sum(tip) over(partition by day order by total_bill rows between 1 preceding and current row) as rolling_3d_sum (rolling window of 2 rows)
(df
.assign(sum_bill_day=lambda x:x.groupby(['day'])['total_bill'].transform('sum'))
.assign(cum_tip_day=lambda x:x.sort_values(['total_bill']).groupby(['day'])['tip'].cumsum())
 .assign(rolling_3d_sum=lambda x:x.sort_values(['total_bill']).groupby(['day'])['tip'].rolling(2,min_periods=1).sum().reset_index(drop=True, level=0))
.query("day=='Sat'")
.sort_values(['total_bill'])
.head()
)
###Output
_____no_output_____
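###Markdown
Join The list of key operations at the top also mentions joins, which are not shown above. Here is a minimal sketch of a SQL-style inner join in pandas using `merge`, keeping the same chaining style. The `day_names` lookup table below is made up for illustration and is not part of the tips dataset.
###Code
#Illustrative lookup table: map the day abbreviations used in the tips data to full names
day_names = pd.DataFrame({'day': ['Thur', 'Fri', 'Sat', 'Sun'],
                          'day_name': ['Thursday', 'Friday', 'Saturday', 'Sunday']})
#Equivalent of: select ... from tips t inner join day_names d on t.day = d.day
(df
 .merge(day_names, on='day', how='inner')
 .loc[:,['total_bill','tip','day','day_name']]
 .head()
)
###Output
_____no_output_____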
###Markdown
The Pandas Reference> A tutorial on how to write clean pandas code to perform data analysis.- toc: false - badges: true- comments: true- categories: [pandas, python]- image: images/chart-preview.png AboutMuch of data exists in rectangular format with rows and columns. Different terms can be used to describe this kind of data 1. Table 2. Data frame 3. Structured data 4. Spreadsheets Pandas is one of the most widely used data manipulation libraries in Python for structured datasets. Below is a summary of the key operations essential for performing a data analysis project (SQL equivalents). 1. Select column references 2. Select scalar expression 3. Where 4. Group By5. Select aggregation 6. Order By 7. Window functionsWhen I started using pandas, I realized that there are multiple ways to perform the same operations. Also, the code I was writing was not as elegant as SQL queries and was hard to debug. In this blog post I will share examples of how to perform the above-mentioned SQL operations in pandas and write pandas code that is readable and easy to maintain.
###Code
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/tips.csv")
pd.options.display.max_rows = 20
df.head(5)
###Output
_____no_output_____
###Markdown
Select columns Use loc with a list of column names to perform selection of columns ```python.loc[:,['col1','col2']] ``` Select the tip and sex columns from the data. Note: we are using chaining to perform operations one after another
###Code
(df
.loc[:,['tip','sex']]
.head()
)
###Output
_____no_output_____
###Markdown
Select columns manipulation Use the assign statement to add new columns and update existing columns ```python.assign(new_col=1).assign(new_col=lambda x:x['col']+1)```
###Code
(df
.loc[:,['total_bill','tip','sex','day','time']]
.assign(percentage_tip=lambda x:x['tip']/x['total_bill'])
.head()
)
###Output
_____no_output_____
###Markdown
Filter rows (where)Use query to perform filtering of rows in pandas ```python.query("col1>='10'")```
###Code
#filter only transaction with more than 15% in tips
(df
.loc[:,['total_bill','tip','sex','day','time']]
.assign(percentage_tip=lambda x:x['tip']/x['total_bill'])
.query("percentage_tip>.15")
.head()
)
per_tip=.15
#using @ within query to refer a variable in the filter
(df
.loc[:,['total_bill','tip','sex','day','time']]
.assign(percentage_tip=lambda x:x['tip']/x['total_bill'])
.query("percentage_tip>@per_tip")
.head()
)
###Output
_____no_output_____
###Markdown
Group By and Aggregation Use groupby with named aggs to perform any type of aggregations
###Code
#By day get average and total bill
(df
.groupby(['day'])
.agg(avg_bill=('total_bill','mean')
,total_bill=('total_bill','sum'))
.reset_index()
)
###Output
_____no_output_____
###Markdown
Ordering rowsUse sort_values to order rows by the specified columns ```python.sort_values(['col1','col2'],ascending=[True,False])```
###Code
#By day get average and total bill.Sort the output by total_bill
(df
.groupby(['day'])
.agg(avg_bill=('total_bill','mean')
,total_bill=('total_bill','sum'))
.reset_index()
.sort_values(['total_bill'])
)
###Output
_____no_output_____
|
Assignment_9_Sabio.ipynb
|
###Markdown
Lab 4 - Plotting Vectors using NumPy and MatPlotLib In this laboratory we will be discussing the basics of numerical and scientific programming by working with Vectors using NumPy and MatPlotLib. ObjectivesAt the end of this activity you will be able to:1. Be familiar with the libraries in Python for numerical and scientific programming.2. Visualize vectors through Python programming.3. Perform simple vector operations through code. Discussion NumPy NumPy, or Numerical Python, is mainly used for matrix and vector operations. It is capable of declaring, computing, and representing matrices. Most Python scientific programming libraries use NumPy as their base. Scalars \\Represent magnitude or a single valueVectors \\Represent magnitude with direction Representing Vectors Now that you know how to represent vectors using their component and matrix forms, we can hard-code them in Python. Let's say that you have the vectors: $$ A = 4\hat{x} + 5\hat{y} \\B = 1\hat{x} - 4\hat{y}\\C = 5a_x + 4a_y - 3a_z \\D = 2\hat{i} - 2\hat{j} + 4\hat{k}$$ whose matrix equivalents are: $$ A = \begin{bmatrix} 4 \\ 5\end{bmatrix} , B = \begin{bmatrix} 1 \\ -4\end{bmatrix} , C = \begin{bmatrix} 5 \\ 4 \\ -3 \end{bmatrix}, D = \begin{bmatrix} 2 \\ -2 \\ 4\end{bmatrix}$$$$ A = \begin{bmatrix} 4 & 5\end{bmatrix} , B = \begin{bmatrix} 1 & -4\end{bmatrix} , C = \begin{bmatrix} 5 & 4 & -3\end{bmatrix} , D = \begin{bmatrix} 2 & -2 & 4\end{bmatrix} $$ We can then start writing NumPy code for these as follows:
###Code
## Importing necessary libraries
import numpy as np ## 'np' here is short-hand name of the library (numpy) or a nickname.
A = np.array([4, 5])
B = np.array([1, -4])
C = np.array([
[5],
[4],
[-3]
])
D = np.array ([[2],
[-2],
[4]])
print('Vector A is ', A)
print('Vector B is ', B)
print('Vector C is ', C)
print('Vector D is ', D)
###Output
Vector A is [4 5]
Vector B is [ 1 -4]
Vector C is [[ 5]
[ 4]
[-3]]
Vector D is [[ 2]
[-2]
[ 4]]
###Markdown
Describing vectors in NumPy Describing vectors is very important if we want to perform basic to advanced operations with them. The fundamental ways in describing vectors are knowing their shape, size and dimensions.
###Code
### Checking shapes
### Shapes tells us how many elements are there on each row and column
nixxnoxx = np.array([
[5, 8, 6],
[3,-12, 28]
])
print("\n",A.shape,"\n",nixxnoxx.shape, "\n",C.shape)
### Checking size
### Array/Vector sizes tells us many total number of elements are there in the vector
print("\n",A.size,"\n",B.size,"\n",C.size,"\n",D.size)
### Checking dimensions
### The dimensions or rank of a vector tells us how many dimensions are there for the vector.
D.ndim
print ("\n",A.ndim,"\n",B.ndim,"\n",C.ndim,"\n",D.ndim)
nixxnoxx.ndim
###Output
_____no_output_____
###Markdown
Great! Now let's try exploring operations with these vectors. Addition The addition rule is simple: we just need to add the elements of the matrices according to their index. So in this case if we add vector $A$ and vector $B$ we will have a resulting vector: $$R = 5\hat{x}+1\hat{y} \\ \\or \\ \\ R = \begin{bmatrix} 5 \\ 1\end{bmatrix} $$ So let's try to do that in NumPy in several ways:
###Code
## this is the functional method usisng the numpy library
cute = np.add(A, B)
ko = np.add(C, D)
print ("\n",cute,"\n\n",ko)
cute = A + B ## this is the explicit method, since Python does a value-reference so it can
## know that these variables would need to do array operations.
ko= C+D
print ("\n",cute,"\n\n",ko)
o1 = np.array([1,2,3])
o2 = np.array([8,12,-9])
o3 = np.array([2,-3,-1])
o4 = np.array([5,-3,4])
nyx1 = o1+o2+o3+o4
nyx1
nyx2 = o1-o2-o3-o4
nyx2
nyx3= np.multiply(o3,o4)
nyx3
nyx4= np.divide(o3,o4)
nyx4
###Output
_____no_output_____
###Markdown
Try for yourself! Try to implement subtraction, multiplication, and division with vectors $E$ and $F$!
###Code
### Try out you code here!
E = np.array([2,4,6])
F = np.array([-1,3,-5])
Sub= E-F
Sub
Mul= np.multiply(E,F)
Mul
Div= np.divide(E,F)
Div
###Output
_____no_output_____
###Markdown
Scaling Scaling or scalar multiplication takes a scalar value and performs multiplication with a vector. Let's take the example below: $$S = 5 \cdot A$$ We can do this in numpy through:
###Code
#S = 5 * A
S = np.multiply(5,A)
S
###Output
_____no_output_____
###Markdown
Try to implement scaling with two vectors.
###Code
uwu= 7* B
uwu
rawr= np.multiply(8,C)
rawr
###Output
_____no_output_____
###Markdown
MatPlotLib MatPlotLib, or the MATLab Plotting library, is Python's take on MATLAB's plotting features. MatPlotLib can be used for a wide range of tasks, from graphing values to visualizing several dimensions of data. Visualizing Data It's not enough to just solve these vectors; we might need to visualize them too. So we'll use MatPlotLib for that. We'll need to import it first.
###Code
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
ony = [2, -12]
dana = [4, -3]
plt.scatter(ony[0], ony[1], label='ony', c='blue')
plt.scatter(dana[0], dana[1], label='dana', c='orange')
plt.grid()
plt.legend()
plt.show()
N = np.array([2, -12])
D = np.array([4, -3])
R = N + D
Magnitude = np.sqrt(np.sum(R**2))
plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude))
plt.xlim(-15, 15)
plt.ylim(-20, 15)
plt.quiver(0, 0, N[0], N[1], angles='xy', scale_units='xy', scale=1, color='blue')
plt.quiver(N[0], N[1], D[0], D[1], angles='xy', scale_units='xy', scale=1, color='orange')
plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='black')
plt.grid()
plt.show()
print(R)
print(Magnitude)
Slope = R[1]/R[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
n = N.shape[0]
plt.xlim(-15, 15)
plt.ylim(-20, 15)
plt.quiver(0,0, N[0], N[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(N[0],N[1], D[0], D[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1)
plt.show()
###Output
_____no_output_____
###Markdown
Try plotting Three Vectors and show the Resultant Vector as a result.Use Head to Tail Method.
###Code
G= np.array([2, -10])
H = np.array([5, -3])
I = np.array([5, 12])
R = G + H + I
Magnitude = np.sqrt(np.sum(R**2))
plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude))
plt.xlim(-15, 15)
plt.ylim(-20, 15)
plt.quiver(0, 0, G[0], G[1], angles='xy', scale_units='xy', scale=1, color='blue')
plt.quiver(G[0], G[1], H[0], H[1], angles='xy', scale_units='xy', scale=1, color='orange')
plt.quiver(7,-13, I[0], I[1], angles='xy', scale_units='xy', scale=1, color='yellow')
plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='black')
plt.grid()
plt.show()
print(R)
print(Magnitude)
Slope = R[1]/R[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
###Output
_____no_output_____
|
notebooks/Table 1.ipynb
|
###Markdown
Stats used in Table 1 (ablation study)
###Code
!pip install --quiet bootstrapped
import os
import numpy as np
import re
from glob import glob
import yaml
import pickle
import pandas as pd
import bootstrapped.bootstrap as bs
import bootstrapped.stats_functions as bs_stats
def read_data(dirname, strategy):
eval_file = os.path.join(dirname, "eval.pkl")
config_file = os.path.join(dirname, ".hydra", "config.yaml")
with open(config_file, "r") as f:
config = yaml.safe_load(f)
with open(eval_file, "rb") as f:
data = pickle.load(f)
# read setting
arr_setting = [strategy]
rows = []
header = None
for i, (ins, res, info) in enumerate(data):
if header is None:
header = ["strategy", "ins", "num_agents"]
header = header + list(res.get_dict_wo_paths().keys()) + list(info.keys())
rows.append(arr_setting + [i, ins.num_agents] + list(res.get_dict_wo_paths().values()) + list(info.values()))
return pd.DataFrame(rows, columns=header)
DATADIR = "/data/exp/ctrm_sampling_ablation/"
subdirs = [re.split("/", x)[-2] for x in sorted(glob(f"{DATADIR}/*/stats.txt"))]
df_normal = read_data(os.path.join(DATADIR, subdirs[0]), "normal")
df_wo_comm = read_data(os.path.join(DATADIR, subdirs[1]), "wo_comm")
df_wo_ind = read_data(os.path.join(DATADIR, subdirs[2]), "wo_ind")
df_wo_random_walk = read_data(os.path.join(DATADIR, subdirs[3]), "wo-random-walk")
df = pd.concat([
df_normal,
df_wo_comm,
df_wo_ind,
df_wo_random_walk,
])
df
# without wo_rw
df = pd.concat([
df_normal,
df_wo_comm,
df_wo_ind,
])
df
num_strategies = len(df.groupby(["strategy"]))
df_sub = df.query(f"solved == 1")
print("**success rate**")
display(df_sub.groupby("strategy")["solved"].count())
print()
all_success_indexes = tuple(df_sub.groupby("ins")["name_planner"].count().loc[lambda x: x >= num_strategies].index)
df_all_success = df_sub.query(f"ins in {all_success_indexes}")
print(f"instances succeeded over all strategies: {len(all_success_indexes)} / 100\n")
print("\n**sum-of-costs**")
for strategy, res in df_all_success.groupby(["strategy"]):
samples = np.array(res["sum_of_costs"] / res["num_agents"])
print(strategy, bs.bootstrap(samples, stat_func=bs_stats.mean))
print("\n**expanded nodes**")
for strategy, res in df_all_success.groupby(["strategy"]):
cnt = np.array(res["lowlevel_expanded"] / res["num_agents"])
print(strategy, bs.bootstrap(cnt, stat_func=bs_stats.mean))
###Output
**success rate**
|
SII/ML/2-Feature_Selection.ipynb
|
###Markdown
Feature Selection* Selecting features from the dataset* Improve estimator's accuracy* Boost performance for high dimensional datasets
###Code
from sklearn import feature_selection
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
VarianceThreshold* Drop the columns whose variance is below the configured level* This method is unsupervised, i.e. the target is not taken into account* Intuition: columns whose values are pretty much the same won't have much impact on the target
###Code
df = pd.DataFrame({'A':['m','f','m','m','m','m','m','m'],
'B':[1,2,3,1,2,1,1,1],
'C':[1,2,3,1,2,1,1,1]})
df
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['A'] = le.fit_transform(df.A)
df
vt = feature_selection.VarianceThreshold(threshold=.2)
vt.fit_transform(df)
vt.variances_
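# Which columns survive the 0.2 threshold? B and C do; the encoded A column
# has variance of about 0.11 and is dropped
df.columns[vt.get_support()]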
###Output
_____no_output_____
###Markdown
Chi-Square for Non-negative feature & class* Feature data should be booleans or counts* Supervised technique for feature selection* Target should be discrete* A higher chi value means a more important feature for the target
###Code
df = pd.read_csv('datasets/tennis.csv')
df.head()
for col in df.columns:
le = LabelEncoder()
df[col] = le.fit_transform(df[col])
df
df.drop('play',axis=1)
chi2, pval = feature_selection.chi2(df.drop('play',axis=1),df.play)
chi2
pval
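# Pair each feature with its chi-square score to see which is most related to 'play'
# (higher chi2 = more important, as noted above)
sorted(zip(df.drop('play', axis=1).columns, chi2), key=lambda t: t[1], reverse=True)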
###Output
_____no_output_____
###Markdown
4. ANOVA using f_classif* For feature variables continuous in nature* And, target variable discrete in nature* Internally, this method uses the ratio of variation within a column & variation across columns
###Code
from sklearn.datasets import load_breast_cancer
cancer_data = load_breast_cancer()
X = cancer_data.data
Y = cancer_data.target
print(X.shape)
F, pval = feature_selection.f_classif(X,Y)
print(pval)
print(F)
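# As a quick sketch, rank the breast-cancer features by F score (higher = more discriminative)
print(cancer_data.feature_names[np.argsort(F)[::-1]][:5])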
###Output
[6.46981021e+02 1.18096059e+02 6.97235272e+02 5.73060747e+02
8.36511234e+01 3.13233079e+02 5.33793126e+02 8.61676020e+02
6.95274435e+01 9.34592949e-02 2.68840327e+02 3.90947023e-02
2.53897392e+02 2.43651586e+02 2.55796780e+00 5.32473391e+01
3.90144816e+01 1.13262760e+02 2.41174067e-02 3.46827476e+00
8.60781707e+02 1.49596905e+02 8.97944219e+02 6.61600206e+02
1.22472880e+02 3.04341063e+02 4.36691939e+02 9.64385393e+02
1.18860232e+02 6.64439606e+01]
###Markdown
* Each value represents the importance of a feature Univariate Regression Test using f_regression* Linear model for testing the individual effect of each of many regressors.* Correlation between each feature & the target is calculated* F-test captures linear dependency
###Code
from sklearn.datasets import california_housing
house_data = california_housing.fetch_california_housing()
X,Y = house_data.data, house_data.target
print(X.shape,Y.shape)
F, pval = feature_selection.f_regression(X,Y)
F
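# Pair each housing feature with its F value; columns with the top F values are the ones to keep
sorted(zip(house_data.feature_names, F), key=lambda t: t[1], reverse=True)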
###Output
_____no_output_____
###Markdown
* Columns with top F values are the selected features F score versus Mutual Information
###Code
np.random.seed(0)
X = np.random.rand(1000, 3)
y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * np.random.randn(1000)
plt.scatter(X[:,0],y,s=10)
plt.scatter(X[:,1],y,s=10)
F, pval = feature_selection.f_regression(X,y)
print(F)
###Output
[187.42118421 52.52357392 0.47268298]
###Markdown
Mutual Information for regression using mutual_info_regression* Returns dependency on a scale of 0 to 1 between feature & target* Captures any kind of dependency even if non-linear* Target is continuous in nature
###Code
feature_selection.mutual_info_regression(X,y)
###Output
_____no_output_____
###Markdown
Mutual Information for classification using mutual_info_classif* Returns dependency on a scale of 0 to 1 between feature & target* Captures any kind of dependency even if non-linear* Target is discrete in nature
###Code
cols = ['age','workclass','fnlwgt','education','education-num','marital-status','occupation','relationship'
,'race','sex','capital-gain','capital-loss','hours-per-week','native-country','Salary']
adult_data = pd.read_csv('https://raw.githubusercontent.com/zekelabs/data-science-complete-tutorial/master/Data/adult.data.txt', names=cols)
adult_data.head()
cat_cols = list(adult_data.select_dtypes('object').columns)
cat_cols.remove('Salary')
len(cat_cols)
from sklearn.preprocessing import LabelEncoder
for col in cat_cols:
le = LabelEncoder()
adult_data[col] = le.fit_transform(adult_data[col])
X = adult_data.drop(columns=['Salary'])
y = le.fit_transform(adult_data.Salary)
firep = feature_selection.mutual_info_classif(X, y)
X.columns
X.columns[np.argsort(firep)[::-1]]
###Output
_____no_output_____
###Markdown
SelectKBest* SelectKBest returns the K most important features based on the above techniques* Based on configuration, it can use mutual information, ANOVA, or regression-based techniques
###Code
adult_data.head()
adult_data.shape
selector = feature_selection.SelectKBest(k=7, score_func=feature_selection.f_classif)
data = selector.fit_transform(adult_data.drop('Salary',axis=1),adult_data.Salary)
data.shape
selector.scores_
selector = feature_selection.SelectKBest(k=7, score_func=feature_selection.mutual_info_classif)
data = selector.fit_transform(adult_data.drop('Salary',axis=1),adult_data.Salary)
data.shape
selector.scores_
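# The 7 columns actually kept by SelectKBest (based on the mutual information scores above)
adult_data.drop('Salary', axis=1).columns[selector.get_support()]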
###Output
_____no_output_____
###Markdown
SelectPercentile* Selects the top features whose importances fall within the configured percentile* Default is the top 10 percent
###Code
selector = feature_selection.SelectPercentile(percentile=20, score_func=feature_selection.mutual_info_classif)
data = selector.fit_transform(adult_data.drop('Salary',axis=1),adult_data.Salary)
data.shape
###Output
_____no_output_____
###Markdown
SelectFromModel* Selects important features from model weights* The estimator should expose feature importances through `feature_importances_` or `coef_`
###Code
from sklearn.datasets import load_boston
boston = load_boston()
boston.data.shape
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
sfm = feature_selection.SelectFromModel(clf, threshold=0.25)
clf = LinearRegression()
sfm.fit_transform(boston.data, boston.target).shape
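# As a sketch, list which Boston features have |coefficient| above the 0.25 threshold
np.array(boston.feature_names)[sfm.get_support()]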
###Output
_____no_output_____
###Markdown
Recursive Feature Elimination* Uses an external estimator to calculate weights of features* First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute. * Then, the least important features are pruned from current set of features. * That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached.
###Code
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
X, y = make_regression(n_samples=50, n_features=10, random_state=0)
estimator = SVR(kernel="linear")
selector = RFE(estimator, 5, step=1)
data = selector.fit_transform(X, y)
X.shape
data.shape
selector.ranking_
###Output
_____no_output_____
|
docs/tutorials/Tutorial05_Tuning-curves-and-decoding.ipynb
|
###Markdown
Tutorial 05 - Tuning curves and decoding Goals- Learn to estimate and plot 2D tuning curves- Implement a Bayesian decoding algorithm- Compare the decoded and actual positions by computing the decoding error Compute the tuning curves
###Code
# import necessary packages
%matplotlib inline
import os
import sys
import numpy as np
import nept
import matplotlib.pyplot as plt
# define where your data folder is located
data_path = os.path.join(os.path.abspath('.'), 'data')
data_folder = os.path.join(data_path, 'R042-2013-08-18')
# load the info file, which contains experiment-specific information\
sys.path.append(data_folder)
import r042d3 as info
# Load position (.nvt) from this experiment
position = nept.load_position(os.path.join(data_folder, info.position_filename), info.pxl_to_cm)
# Plot the position
plt.plot(position.x, position.y, 'k.', ms=1)
plt.show()
# Load spikes (.t and ._t) from this experiment
spikes = nept.load_spikes(data_folder)
# Plot the spikes
for idx, spiketrain in enumerate(spikes):
plt.plot(spiketrain.time, np.ones(len(spiketrain.time))+idx, '|')
plt.show()
# limit position and spikes to task times
task_start = info.task_times['task'].start
task_stop = info.task_times['task'].stop
task_position = position.time_slice(task_start, task_stop)
task_spikes = [spiketrain.time_slice(task_start, task_stop) for spiketrain in spikes]
# limit position to those where the rat is running
run_position = task_position[nept.run_threshold(task_position, thresh=1.1, t_smooth=1.0)]
# Plot the running Y position over time
plt.plot(run_position.time, run_position.y, 'b.', ms=1)
plt.show()
# Plot the running position
plt.plot(run_position.x, run_position.y, 'b.', ms=1)
plt.show()
# Plot the task spikes
for idx, spiketrain in enumerate(task_spikes):
plt.plot(spiketrain.time, np.ones(len(spiketrain.time))+idx, '|', color='k')
plt.show()
# Define the X and Y boundaries from the unfiltered position, with 3 cm bins
xedges, yedges = nept.get_xyedges(position, binsize=3)
tuning_curves = nept.tuning_curve_2d(run_position, np.array(task_spikes), xedges, yedges,
occupied_thresh=0.2, gaussian_std=0.1)
# Plot a few of the neuron's tuning curves
xx, yy = np.meshgrid(xedges, yedges)
for i in [7, 33, 41]:
print('neuron:', i)
plt.figure(figsize=(6, 5))
pp = plt.pcolormesh(xx, yy, tuning_curves[i], cmap='bone_r')
plt.colorbar(pp)
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
DecodingNext, let's decode the location of the subject using a Bayesian algorithm.Specifically, this is a method known as "one-step Bayesian decoding" and isillustrated in this figure from van der Meer et al., 2010.
###Code
# Bin the spikes
window_size = 0.0125
window_advance = 0.0125
time_edges = nept.get_edges(run_position.time[0], run_position.time[-1], window_advance, lastbin=True)
counts = nept.bin_spikes(task_spikes,
run_position.time[0],
run_position.time[-1],
dt=window_size,
window=window_advance,
gaussian_std=None,
normalized=True)
# Reshape the 2D tuning curves (essentially flatten them, while keeping the 2D information intact)
tc_shape = tuning_curves.shape
decode_tuning_curves = tuning_curves.reshape(tc_shape[0], tc_shape[1] * tc_shape[2])
# Find the likelihoods - this could take a minute...
likelihood = nept.bayesian_prob(counts, decode_tuning_curves, window_size, min_neurons=2, min_spikes=1)
# Find the center of the position bins
xcenters = (xedges[1:] + xedges[:-1]) / 2.
ycenters = (yedges[1:] + yedges[:-1]) / 2.
xy_centers = nept.cartesian(xcenters, ycenters)
# Based on the likelihoods, find the decoded location
decoded = nept.decode_location(likelihood, xy_centers, counts.time)
nan_idx = np.logical_and(np.isnan(decoded.x), np.isnan(decoded.y))
decoded = decoded[~nan_idx]
# Plot the decoded position
plt.plot(decoded.x, decoded.y, 'r.', ms=1)
plt.show()
###Output
_____no_output_____
###Markdown
Compare the decoded to actual positions
###Code
# Find the actual position for every decoded time point
actual_x = np.interp(decoded.time, run_position.time, run_position.x)
actual_y = np.interp(decoded.time, run_position.time, run_position.y)
actual_position = nept.Position(np.hstack((actual_x[..., np.newaxis],
actual_y[..., np.newaxis])), decoded.time)
# Plot the actual position
plt.plot(actual_position.x, actual_position.y, 'g.', ms=1)
plt.show()
###Output
_____no_output_____
###Markdown
Notice the pedestal is not represented as round, as before. This is because we are interpolating to find an actual position that corresponds to each decoded time.
###Code
# Find the error between actual and decoded positions
errors = actual_position.distance(decoded)
print('Mean error:', np.mean(errors), 'cm')
# Plot the errors
plt.hist(errors)
plt.show()
###Output
_____no_output_____
|
scenic/common_lib/colabs/scenic_playground.ipynb
|
###Markdown
Download and install Scenic
###Code
!rm -rf *
!rm -rf .config
!rm -rf .git
!git clone https://github.com/google-research/scenic.git .
!python -m pip install -q .
###Output
Cloning into '.'...
remote: Enumerating objects: 727, done.[K
remote: Counting objects: 100% (727/727), done.[K
remote: Compressing objects: 100% (467/467), done.[K
remote: Total 727 (delta 392), reused 578 (delta 244), pack-reused 0[K
Receiving objects: 100% (727/727), 8.28 MiB | 1.17 MiB/s, done.
Resolving deltas: 100% (392/392), done.
[33m DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.[0m
Building wheel for scenic (setup.py) ... [?25l[?25hdone
###Markdown
Train [a simple feedforward network on mnist](https://github.com/google-research/scenic/blob/main/scenic/projects/baselines/configs/mnist/mnist_config.py)
###Code
!PYTHONPATH="$(pwd)":"$PYTHON_PATH" python scenic/main.py \
--config=scenic/projects/baselines/configs/mnist/mnist_config.py \
--workdir=./
###Output
I1011 13:37:22.330163 140122186418048 xla_bridge.py:226] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
I1011 13:37:22.470367 140122186418048 xla_bridge.py:226] Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available.
I1011 13:37:22.470726 140122186418048 app.py:80] JAX host: 0 / 1
I1011 13:37:22.470875 140122186418048 app.py:81] JAX devices: [GpuDevice(id=0, process_index=0)]
I1011 13:37:22.471045 140122186418048 local.py:45] Setting task status: host_id: 0, host_count: 1
I1011 13:37:22.474154 140122186418048 local.py:51] Created artifact Workdir of type ArtifactType.DIRECTORY and value ./.
I1011 13:37:23.214558 140122186418048 app.py:91] RNG: [0 0]
I1011 13:37:24.043761 140122186418048 train_utils.py:149] device_count: 1
I1011 13:37:24.044039 140122186418048 train_utils.py:150] num_hosts : 1
I1011 13:37:24.044161 140122186418048 train_utils.py:151] host_id : 0
I1011 13:37:24.045577 140122186418048 datasets.py:91] On-demand import of dataset (mnist) from module (scenic.dataset_lib.mnist_dataset).
I1011 13:37:24.045974 140122186418048 train_utils.py:168] local_batch_size : 128
I1011 13:37:24.046089 140122186418048 train_utils.py:169] device_batch_size : 128
I1011 13:37:24.046446 140122186418048 mnist_dataset.py:73] Loading train split of the MNIST dataset.
I1011 13:37:24.047486 140122186418048 dataset_info.py:375] Load dataset info from /root/tensorflow_datasets/mnist/3.0.1
I1011 13:37:24.049653 140122186418048 dataset_info.py:430] Field info.citation from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.049880 140122186418048 dataset_info.py:430] Field info.splits from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.050011 140122186418048 dataset_info.py:430] Field info.supervised_keys from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.050171 140122186418048 dataset_info.py:430] Field info.module_name from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.050393 140122186418048 dataset_builder.py:352] Reusing dataset mnist (/root/tensorflow_datasets/mnist/3.0.1)
I1011 13:37:24.050624 140122186418048 dataset_utils.py:499] Host 0 data range: from 0 to 60000 (from split train)
I1011 13:37:24.050778 140122186418048 logging_logger.py:36] Constructing tf.data.Dataset mnist for split ReadInstruction('train[0:60000]'), from /root/tensorflow_datasets/mnist/3.0.1
I1011 13:37:24.237263 140122186418048 mnist_dataset.py:90] Loading test split of the MNIST dataset.
I1011 13:37:24.238183 140122186418048 dataset_info.py:375] Load dataset info from /root/tensorflow_datasets/mnist/3.0.1
I1011 13:37:24.240091 140122186418048 dataset_info.py:430] Field info.citation from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.240290 140122186418048 dataset_info.py:430] Field info.splits from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.240410 140122186418048 dataset_info.py:430] Field info.supervised_keys from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.240530 140122186418048 dataset_info.py:430] Field info.module_name from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.240723 140122186418048 dataset_builder.py:352] Reusing dataset mnist (/root/tensorflow_datasets/mnist/3.0.1)
I1011 13:37:24.240959 140122186418048 dataset_utils.py:499] Host 0 data range: from 0 to 10000 (from split test)
I1011 13:37:24.241099 140122186418048 logging_logger.py:36] Constructing tf.data.Dataset mnist for split ReadInstruction('test[0:10000]'), from /root/tensorflow_datasets/mnist/3.0.1
I1011 13:37:24.340181 140122186418048 dataset_info.py:375] Load dataset info from /root/tensorflow_datasets/mnist/3.0.1
I1011 13:37:24.341984 140122186418048 dataset_info.py:430] Field info.citation from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.342200 140122186418048 dataset_info.py:430] Field info.splits from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.342328 140122186418048 dataset_info.py:430] Field info.supervised_keys from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.342463 140122186418048 dataset_info.py:430] Field info.module_name from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.343276 140122186418048 dataset_info.py:375] Load dataset info from /root/tensorflow_datasets/mnist/3.0.1
I1011 13:37:24.344682 140122186418048 dataset_info.py:430] Field info.citation from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.344892 140122186418048 dataset_info.py:430] Field info.splits from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.345018 140122186418048 dataset_info.py:430] Field info.supervised_keys from disk and from code do not match. Keeping the one from code.
I1011 13:37:24.345163 140122186418048 dataset_info.py:430] Field info.module_name from disk and from code do not match. Keeping the one from code.
I1011 13:37:26.517037 140122186418048 parameter_overview.py:257]
+--------------------------+-----------+--------+-----------+--------+
| Name | Shape | Size | Mean | Std |
+--------------------------+-----------+--------+-----------+--------+
| Dense_0/bias | (64,) | 64 | 0.0 | 0.0 |
| Dense_0/kernel | (784, 64) | 50,176 | -0.000102 | 0.0357 |
| Dense_1/bias | (64,) | 64 | 0.0 | 0.0 |
| Dense_1/kernel | (64, 64) | 4,096 | 0.00208 | 0.125 |
| output_projection/bias | (10,) | 10 | 0.0 | 0.0 |
| output_projection/kernel | (64, 10) | 640 | -0.00915 | 0.127 |
+--------------------------+-----------+--------+-----------+--------+
Total: 55,050
I1011 13:37:26.517357 140122186418048 debug_utils.py:68] Total params: 55050
I1011 13:37:26.697683 140122186418048 debug_utils.py:122] GFLOPs 0.000 for input spec: [((-1, 28, 28, 1), <class 'jax._src.numpy.lax_numpy.float32'>)]
I1011 13:37:26.730629 140122186418048 checkpoints.py:249] Found no checkpoint files in .
I1011 13:37:26.751403 140122186418048 classification_trainer.py:314] Starting training loop at step 1.
I1011 13:37:26.751932 140116713731840 logging_writer.py:35] [1] gflops=0.000055, num_trainable_params=55050
/usr/local/lib/python3.7/dist-packages/jax/_src/profiler.py:167: UserWarning: StepTraceContext has been renamed to StepTraceAnnotation. This alias will eventually be removed; please update your code.
"StepTraceContext has been renamed to StepTraceAnnotation. This alias "
2021-10-11 13:37:27.683510: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.1 = f32[128,64]{1,0} custom-call(f32[128,784]{1,0} %bitcast.7, f32[784,64]{1,0} %parameter.13, f32[128,64]{1,0} %broadcast), custom_call_target="__cublas$gemm", metadata={op_type="add" op_name="pmap(<unnamed wrapped function>)/add" source_file="/usr/local/lib/python3.7/dist-packages/flax/linen/linear.py" source_line=181}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":1,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"1\"],\"rhs_contracting_dimensions\":[\"0\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"100352\",\"rhs_stride\":\"50176\"}" failed. Falling back to default algorithm.
2021-10-11 13:37:27.684329: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.3 = f32[128,64]{1,0} custom-call(f32[128,64]{1,0} %maximum.101, f32[64,64]{1,0} %parameter.15, f32[128,64]{1,0} %broadcast.1), custom_call_target="__cublas$gemm", metadata={op_type="add" op_name="pmap(<unnamed wrapped function>)/add" source_file="/usr/local/lib/python3.7/dist-packages/flax/linen/linear.py" source_line=181}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":1,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"1\"],\"rhs_contracting_dimensions\":[\"0\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"8192\",\"rhs_stride\":\"4096\"}" failed. Falling back to default algorithm.
2021-10-11 13:37:27.685097: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.5 = f32[128,10]{1,0} custom-call(f32[128,64]{1,0} %maximum.140, f32[64,10]{1,0} %parameter.17, f32[128,10]{1,0} %broadcast.3), custom_call_target="__cublas$gemm", metadata={op_type="add" op_name="pmap(<unnamed wrapped function>)/add" source_file="/usr/local/lib/python3.7/dist-packages/flax/linen/linear.py" source_line=181}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":1,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"1\"],\"rhs_contracting_dimensions\":[\"0\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"8192\",\"rhs_stride\":\"640\"}" failed. Falling back to default algorithm.
2021-10-11 13:37:27.685657: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.6 = f32[128,64]{1,0} custom-call(f32[128,10]{1,0} %add.354, f32[64,10]{1,0} %parameter.17), custom_call_target="__cublas$gemm", metadata={op_type="dot_general" op_name="pmap(<unnamed wrapped function>)/dot_general[ dimension_numbers=(((1,), (1,)), ((), ()))\n precision=None\n preferred_element_type=None ]" source_file="/usr/local/lib/python3.7/dist-packages/flax/linen/linear.py" source_line=177}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":0,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"1\"],\"rhs_contracting_dimensions\":[\"1\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"1280\",\"rhs_stride\":\"640\"}" failed. Falling back to default algorithm.
2021-10-11 13:37:27.686257: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.7 = f32[128,64]{1,0} custom-call(f32[128,64]{1,0} %select.369, f32[64,64]{1,0} %parameter.15), custom_call_target="__cublas$gemm", metadata={op_type="dot_general" op_name="pmap(<unnamed wrapped function>)/dot_general[ dimension_numbers=(((1,), (1,)), ((), ()))\n precision=None\n preferred_element_type=None ]" source_file="/usr/local/lib/python3.7/dist-packages/flax/linen/linear.py" source_line=177}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":0,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"1\"],\"rhs_contracting_dimensions\":[\"1\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"8192\",\"rhs_stride\":\"4096\"}" failed. Falling back to default algorithm.
2021-10-11 13:37:27.688724: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.9 = f32[784,64]{1,0} custom-call(f32[128,784]{1,0} %bitcast.7, f32[128,64]{1,0} %select.384, f32[784,64]{1,0} %multiply.322), custom_call_target="__cublas$gemm", metadata={op_type="add_any" op_name="pmap(<unnamed wrapped function>)/add_any" source_file="/content/scenic/train_lib/classification_trainer.py" source_line=120}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":1,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"0\"],\"rhs_contracting_dimensions\":[\"0\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"100352\",\"rhs_stride\":\"8192\"}" failed. Falling back to default algorithm.
2021-10-11 13:37:27.690173: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.11 = f32[64,64]{1,0} custom-call(f32[128,64]{1,0} %maximum.101, f32[128,64]{1,0} %select.369, f32[64,64]{1,0} %multiply.320), custom_call_target="__cublas$gemm", metadata={op_type="add_any" op_name="pmap(<unnamed wrapped function>)/add_any" source_file="/content/scenic/train_lib/classification_trainer.py" source_line=120}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":1,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"0\"],\"rhs_contracting_dimensions\":[\"0\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"8192\",\"rhs_stride\":\"8192\"}" failed. Falling back to default algorithm.
2021-10-11 13:37:27.690897: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %custom-call.13 = f32[64,10]{1,0} custom-call(f32[128,64]{1,0} %maximum.140, f32[128,10]{1,0} %add.354, f32[64,10]{1,0} %multiply.318), custom_call_target="__cublas$gemm", metadata={op_type="add_any" op_name="pmap(<unnamed wrapped function>)/add_any" source_file="/content/scenic/train_lib/classification_trainer.py" source_line=120}, backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":1,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"0\"],\"rhs_contracting_dimensions\":[\"0\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"8192\",\"rhs_stride\":\"1280\"}" failed. Falling back to default algorithm.
I1011 13:37:28.338377 140116713731840 logging_writer.py:35] [1] train_accuracy=0.031250, train_loss=2.367167
I1011 13:37:28.338600 140116713731840 logging_writer.py:35] [1] learning_rate=0.10000000149011612
I1011 13:37:29.061676 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.070312, valid_loss=2.317226
I1011 13:37:29.086503 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.078125, valid_loss=2.323468
I1011 13:37:29.096722 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.083333, valid_loss=2.310772
I1011 13:37:29.125261 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.083984, valid_loss=2.306300
I1011 13:37:29.155785 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.079687, valid_loss=2.310255
I1011 13:37:29.186198 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.083333, valid_loss=2.309690
I1011 13:37:29.216058 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.083705, valid_loss=2.307699
I1011 13:37:29.246399 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.090820, valid_loss=2.298871
I1011 13:37:29.280526 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.092014, valid_loss=2.298447
I1011 13:37:29.309807 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.098437, valid_loss=2.296714
I1011 13:37:29.339236 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.098722, valid_loss=2.296815
I1011 13:37:29.363693 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102214, valid_loss=2.295394
I1011 13:37:29.410041 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102764, valid_loss=2.296545
I1011 13:37:29.441876 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102679, valid_loss=2.296509
I1011 13:37:29.469475 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.101042, valid_loss=2.296526
I1011 13:37:29.497415 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.099609, valid_loss=2.298498
I1011 13:37:29.524138 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.101103, valid_loss=2.298337
I1011 13:37:29.547951 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.101562, valid_loss=2.298520
I1011 13:37:29.581416 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102385, valid_loss=2.298767
I1011 13:37:29.608180 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104297, valid_loss=2.297799
I1011 13:37:29.635370 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.109003, valid_loss=2.295632
I1011 13:37:29.660868 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.107244, valid_loss=2.296778
I1011 13:37:29.694727 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.106997, valid_loss=2.297229
I1011 13:37:29.721502 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.106771, valid_loss=2.296281
I1011 13:37:29.751823 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.105937, valid_loss=2.296726
I1011 13:37:29.780170 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104868, valid_loss=2.296996
I1011 13:37:29.806238 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103877, valid_loss=2.296737
I1011 13:37:29.833980 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102958, valid_loss=2.296254
I1011 13:37:29.862789 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.101832, valid_loss=2.297665
I1011 13:37:29.893026 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103125, valid_loss=2.297216
I1011 13:37:29.920401 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103831, valid_loss=2.297017
I1011 13:37:29.945014 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102295, valid_loss=2.298065
I1011 13:37:29.985043 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102509, valid_loss=2.297678
I1011 13:37:30.014860 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103171, valid_loss=2.297110
I1011 13:37:30.037876 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102679, valid_loss=2.297536
I1011 13:37:30.065467 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103082, valid_loss=2.297011
I1011 13:37:30.102349 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103252, valid_loss=2.297233
I1011 13:37:30.135100 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103207, valid_loss=2.297168
I1011 13:37:30.163512 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103165, valid_loss=2.297728
I1011 13:37:30.195191 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103125, valid_loss=2.297764
I1011 13:37:30.214009 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102706, valid_loss=2.297824
I1011 13:37:30.244026 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103237, valid_loss=2.297624
I1011 13:37:30.272275 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102653, valid_loss=2.298252
I1011 13:37:30.305942 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102983, valid_loss=2.297823
I1011 13:37:30.340119 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102951, valid_loss=2.298340
I1011 13:37:30.367739 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102751, valid_loss=2.298397
I1011 13:37:30.412262 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102394, valid_loss=2.298383
I1011 13:37:30.439724 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102539, valid_loss=2.298734
I1011 13:37:30.469190 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103635, valid_loss=2.298282
I1011 13:37:30.496141 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104062, valid_loss=2.298273
I1011 13:37:30.531314 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103401, valid_loss=2.298804
I1011 13:37:30.554804 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103816, valid_loss=2.299096
I1011 13:37:30.592104 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104216, valid_loss=2.298816
I1011 13:37:30.610923 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104167, valid_loss=2.298651
I1011 13:37:30.645015 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.105398, valid_loss=2.298075
I1011 13:37:30.669055 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104911, valid_loss=2.298440
I1011 13:37:30.705884 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104441, valid_loss=2.298548
I1011 13:37:30.729547 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104930, valid_loss=2.298330
I1011 13:37:30.753895 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.105403, valid_loss=2.298217
I1011 13:37:30.786985 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.105599, valid_loss=2.298111
I1011 13:37:30.809854 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.105661, valid_loss=2.297816
I1011 13:37:30.840849 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.105595, valid_loss=2.297686
I1011 13:37:30.875266 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.105035, valid_loss=2.298276
I1011 13:37:30.910569 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104614, valid_loss=2.298463
I1011 13:37:30.936408 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104687, valid_loss=2.298179
I1011 13:37:30.961055 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104877, valid_loss=2.297972
I1011 13:37:30.997040 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104128, valid_loss=2.298407
I1011 13:37:31.029308 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103975, valid_loss=2.298545
I1011 13:37:31.050748 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.104053, valid_loss=2.298754
I1011 13:37:31.080088 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103571, valid_loss=2.298868
I1011 13:37:31.111133 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102663, valid_loss=2.299425
I1011 13:37:31.134297 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102431, valid_loss=2.299564
I1011 13:37:31.169328 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102740, valid_loss=2.299219
I1011 13:37:31.189510 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102829, valid_loss=2.299457
I1011 13:37:31.234695 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102812, valid_loss=2.299351
I1011 13:37:31.257628 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102488, valid_loss=2.299209
I1011 13:37:31.281896 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.102780, valid_loss=2.299004
I1011 13:37:31.312210 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103065, valid_loss=2.299020
I1011 13:37:31.319868 140115538753280 logging_writer.py:35] [1] valid_accuracy=0.103000, valid_loss=2.299067
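Note that the repeated valid_accuracy / valid_loss lines within a single evaluation pass are running averages accumulated batch by batch (e.g. 0.070312 is 9/128 after the first 128-image batch, then 0.078125 = 20/256 after two batches). A minimal sketch of that accumulation is shown below; the helper names are hypothetical and are not Scenic's own API.

# Sketch of the batch-wise running average behind the repeated valid_* lines.
class RunningMean:
    def __init__(self):
        self.total = 0.0
        self.count = 0
    def update(self, batch_sum, batch_count):
        self.total += batch_sum
        self.count += batch_count
        return self.total / self.count

acc = RunningMean()
# e.g. 9 correct out of the first 128 validation images, then 11 of the next 128:
print(acc.update(9, 128))   # 0.0703125 -> logged as valid_accuracy=0.070312
print(acc.update(11, 128))  # 0.078125  -> logged as valid_accuracy=0.078125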
I1011 13:37:37.511965 140122186418048 local.py:51] Created artifact [10] Profile of type ArtifactType.URL and value None.
I1011 13:37:46.857614 140122186418048 checkpoints.py:120] Saving checkpoint at step: 468
I1011 13:37:46.860437 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_468
I1011 13:37:46.864437 140115530360576 logging_writer.py:35] [469] core_hours_Tesla K80=0.005144, core_hours_approx_v3=0.005144, epoch=1.002137, img/sec=3234.817845, img/sec/core=3234.817845
I1011 13:37:47.344448 140115530360576 logging_writer.py:35] [469] train_accuracy=0.907602, train_loss=0.300405
I1011 13:37:47.344725 140115530360576 logging_writer.py:35] [469] learning_rate=0.10000002384185791
I1011 13:37:47.358852 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.968750, valid_loss=0.116311
I1011 13:37:47.364650 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.968750, valid_loss=0.117993
I1011 13:37:47.369975 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.963542, valid_loss=0.121681
I1011 13:37:47.374881 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.957031, valid_loss=0.143569
I1011 13:37:47.380566 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.960937, valid_loss=0.129825
I1011 13:37:47.386048 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.966146, valid_loss=0.117139
I1011 13:37:47.391285 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.963170, valid_loss=0.120103
I1011 13:37:47.396937 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.963867, valid_loss=0.123557
I1011 13:37:47.402757 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.963542, valid_loss=0.127855
I1011 13:37:47.408189 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.964062, valid_loss=0.127633
I1011 13:37:47.413595 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.964489, valid_loss=0.127041
I1011 13:37:47.419074 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.963542, valid_loss=0.128293
I1011 13:37:47.427630 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.962139, valid_loss=0.131549
I1011 13:37:47.435208 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.963728, valid_loss=0.126438
I1011 13:37:47.438154 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.961458, valid_loss=0.131181
I1011 13:37:47.443315 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.959961, valid_loss=0.135321
I1011 13:37:47.449507 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.961397, valid_loss=0.130797
I1011 13:37:47.454724 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.961372, valid_loss=0.130359
I1011 13:37:47.460091 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.960115, valid_loss=0.133501
I1011 13:37:47.465418 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.958594, valid_loss=0.139756
I1011 13:37:47.471124 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.959077, valid_loss=0.138922
I1011 13:37:47.476996 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.957741, valid_loss=0.142356
I1011 13:37:47.482487 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.957201, valid_loss=0.142952
I1011 13:37:47.487885 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.957031, valid_loss=0.142388
I1011 13:37:47.493303 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.956562, valid_loss=0.143729
I1011 13:37:47.499084 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.956130, valid_loss=0.142278
I1011 13:37:47.504204 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.956019, valid_loss=0.141809
I1011 13:37:47.509456 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955636, valid_loss=0.142516
I1011 13:37:47.515101 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955280, valid_loss=0.143011
I1011 13:37:47.520574 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955208, valid_loss=0.144394
I1011 13:37:47.526174 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955393, valid_loss=0.143564
I1011 13:37:47.531577 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954102, valid_loss=0.146055
I1011 13:37:47.537061 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.953362, valid_loss=0.147283
I1011 13:37:47.542196 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.953355, valid_loss=0.147766
I1011 13:37:47.546571 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.953125, valid_loss=0.147623
I1011 13:37:47.553284 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.953125, valid_loss=0.149061
I1011 13:37:47.559092 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.953758, valid_loss=0.148340
I1011 13:37:47.565466 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.953742, valid_loss=0.149155
I1011 13:37:47.570961 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954728, valid_loss=0.147582
I1011 13:37:47.576832 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955469, valid_loss=0.145357
I1011 13:37:47.582243 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955412, valid_loss=0.145500
I1011 13:37:47.587471 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954985, valid_loss=0.145724
I1011 13:37:47.592967 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955850, valid_loss=0.144180
I1011 13:37:47.598690 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955966, valid_loss=0.144060
I1011 13:37:47.604397 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955903, valid_loss=0.143817
I1011 13:37:47.610249 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.956182, valid_loss=0.143977
I1011 13:37:47.615572 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.956117, valid_loss=0.145029
I1011 13:37:47.621106 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.956055, valid_loss=0.144888
I1011 13:37:47.626611 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955995, valid_loss=0.144292
I1011 13:37:47.632327 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955312, valid_loss=0.145765
I1011 13:37:47.637511 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955270, valid_loss=0.146169
I1011 13:37:47.643576 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955228, valid_loss=0.146444
I1011 13:37:47.652644 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955041, valid_loss=0.147016
I1011 13:37:47.659231 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955150, valid_loss=0.146859
I1011 13:37:47.666827 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954972, valid_loss=0.147165
I1011 13:37:47.670330 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955218, valid_loss=0.147250
I1011 13:37:47.675548 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955455, valid_loss=0.148165
I1011 13:37:47.680941 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955011, valid_loss=0.149371
I1011 13:37:47.686234 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954979, valid_loss=0.149376
I1011 13:37:47.691832 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954948, valid_loss=0.149036
I1011 13:37:47.697730 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955302, valid_loss=0.148723
I1011 13:37:47.703434 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955519, valid_loss=0.148442
I1011 13:37:47.709276 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955109, valid_loss=0.148963
I1011 13:37:47.714902 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955566, valid_loss=0.147473
I1011 13:37:47.720798 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955769, valid_loss=0.147133
I1011 13:37:47.726675 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955966, valid_loss=0.146116
I1011 13:37:47.732175 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.956040, valid_loss=0.146471
I1011 13:37:47.738189 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955538, valid_loss=0.147247
I1011 13:37:47.743935 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955503, valid_loss=0.146953
I1011 13:37:47.750199 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955580, valid_loss=0.147137
I1011 13:37:47.755846 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955546, valid_loss=0.147675
I1011 13:37:47.761617 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955512, valid_loss=0.148347
I1011 13:37:47.767352 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955479, valid_loss=0.148210
I1011 13:37:47.773056 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955025, valid_loss=0.148907
I1011 13:37:47.778472 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954792, valid_loss=0.148944
I1011 13:37:47.784536 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954873, valid_loss=0.149048
I1011 13:37:47.789479 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.955053, valid_loss=0.148077
I1011 13:37:47.796241 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954928, valid_loss=0.149086
I1011 13:37:47.802179 140115512289024 logging_writer.py:35] [469] valid_accuracy=0.954900, valid_loss=0.149295
I1011 13:37:49.117255 140122186418048 checkpoints.py:120] Saving checkpoint at step: 936
I1011 13:37:49.120733 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_936
I1011 13:37:49.124776 140115530360576 logging_writer.py:35] [937] core_hours_Tesla K80=0.005508, core_hours_approx_v3=0.005508, epoch=2.002137, img/sec=45679.594868, img/sec/core=45679.594868
I1011 13:37:49.590364 140115530360576 logging_writer.py:35] [937] train_accuracy=0.957849, train_loss=0.136666
I1011 13:37:49.590580 140115530360576 logging_writer.py:35] [937] learning_rate=0.10000002384185791
I1011 13:37:49.603528 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.976562, valid_loss=0.047878
I1011 13:37:49.609658 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.972656, valid_loss=0.075906
I1011 13:37:49.614776 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.971354, valid_loss=0.081514
I1011 13:37:49.620380 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.964844, valid_loss=0.096503
I1011 13:37:49.625241 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965625, valid_loss=0.096622
I1011 13:37:49.630991 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.970052, valid_loss=0.088447
I1011 13:37:49.641466 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.973214, valid_loss=0.086058
I1011 13:37:49.647884 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.970703, valid_loss=0.094274
I1011 13:37:49.653186 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.967014, valid_loss=0.098868
I1011 13:37:49.659262 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.967969, valid_loss=0.093854
I1011 13:37:49.664444 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.968040, valid_loss=0.093558
I1011 13:37:49.672832 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.967448, valid_loss=0.097713
I1011 13:37:49.675593 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966947, valid_loss=0.099546
I1011 13:37:49.682296 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.968750, valid_loss=0.095636
I1011 13:37:49.688216 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.968750, valid_loss=0.095122
I1011 13:37:49.693304 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965820, valid_loss=0.103102
I1011 13:37:49.698425 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966912, valid_loss=0.100470
I1011 13:37:49.703822 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.967448, valid_loss=0.099352
I1011 13:37:49.709121 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.967516, valid_loss=0.100476
I1011 13:37:49.714523 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966406, valid_loss=0.101992
I1011 13:37:49.719710 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966518, valid_loss=0.103210
I1011 13:37:49.729898 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966264, valid_loss=0.102642
I1011 13:37:49.734998 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966712, valid_loss=0.101867
I1011 13:37:49.740329 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966471, valid_loss=0.101938
I1011 13:37:49.745519 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965625, valid_loss=0.104632
I1011 13:37:49.750709 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965445, valid_loss=0.103566
I1011 13:37:49.755815 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965278, valid_loss=0.104602
I1011 13:37:49.761245 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965681, valid_loss=0.103966
I1011 13:37:49.766862 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965248, valid_loss=0.104969
I1011 13:37:49.771949 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965365, valid_loss=0.106450
I1011 13:37:49.778104 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965474, valid_loss=0.106471
I1011 13:37:49.784733 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965332, valid_loss=0.107106
I1011 13:37:49.790340 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.964489, valid_loss=0.109863
I1011 13:37:49.796111 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.964614, valid_loss=0.110676
I1011 13:37:49.801396 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.964732, valid_loss=0.110210
I1011 13:37:49.807048 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.964627, valid_loss=0.111313
I1011 13:37:49.812555 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965372, valid_loss=0.110087
I1011 13:37:49.817957 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.964844, valid_loss=0.111982
I1011 13:37:49.823265 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.964944, valid_loss=0.111162
I1011 13:37:49.828632 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965820, valid_loss=0.109056
I1011 13:37:49.834273 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965701, valid_loss=0.109147
I1011 13:37:49.839479 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965588, valid_loss=0.109325
I1011 13:37:49.844783 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966206, valid_loss=0.107646
I1011 13:37:49.850199 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966264, valid_loss=0.107699
I1011 13:37:49.855256 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966146, valid_loss=0.107139
I1011 13:37:49.866571 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965863, valid_loss=0.107963
I1011 13:37:49.871456 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965592, valid_loss=0.108368
I1011 13:37:49.878300 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965658, valid_loss=0.108143
I1011 13:37:49.883790 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965880, valid_loss=0.108273
I1011 13:37:49.890274 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965469, valid_loss=0.109102
I1011 13:37:49.895750 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965686, valid_loss=0.108804
I1011 13:37:49.901224 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965895, valid_loss=0.108989
I1011 13:37:49.906872 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965949, valid_loss=0.108785
I1011 13:37:49.912834 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966146, valid_loss=0.108333
I1011 13:37:49.918477 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966051, valid_loss=0.108978
I1011 13:37:49.924092 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966099, valid_loss=0.109015
I1011 13:37:49.929715 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966009, valid_loss=0.109834
I1011 13:37:49.935353 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965921, valid_loss=0.110611
I1011 13:37:49.940646 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965704, valid_loss=0.110370
I1011 13:37:49.946998 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965885, valid_loss=0.109936
I1011 13:37:49.952596 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965932, valid_loss=0.109597
I1011 13:37:49.958917 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966104, valid_loss=0.108967
I1011 13:37:49.964379 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966022, valid_loss=0.108784
I1011 13:37:49.969743 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966064, valid_loss=0.108172
I1011 13:37:49.975721 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965986, valid_loss=0.108591
I1011 13:37:49.983355 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966146, valid_loss=0.107977
I1011 13:37:49.987543 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966185, valid_loss=0.107933
I1011 13:37:49.993048 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966108, valid_loss=0.107907
I1011 13:37:49.999556 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966259, valid_loss=0.107676
I1011 13:37:50.007040 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966518, valid_loss=0.107031
I1011 13:37:50.012663 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966219, valid_loss=0.107772
I1011 13:37:50.018155 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966254, valid_loss=0.108513
I1011 13:37:50.022249 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966182, valid_loss=0.108481
I1011 13:37:50.029265 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.966111, valid_loss=0.108782
I1011 13:37:50.032893 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965937, valid_loss=0.108945
I1011 13:37:50.040016 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965666, valid_loss=0.109031
I1011 13:37:50.047675 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965808, valid_loss=0.108192
I1011 13:37:50.051897 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965946, valid_loss=0.108089
I1011 13:37:50.057278 140115538753280 logging_writer.py:35] [937] valid_accuracy=0.965900, valid_loss=0.108384
I1011 13:37:51.397515 140122186418048 checkpoints.py:120] Saving checkpoint at step: 1404
I1011 13:37:51.400723 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_1404
I1011 13:37:51.404632 140115530360576 logging_writer.py:35] [1405] core_hours_Tesla K80=0.005880, core_hours_approx_v3=0.005880, epoch=3.002137, img/sec=44820.327875, img/sec/core=44820.327875
I1011 13:37:51.876979 140115530360576 logging_writer.py:35] [1405] train_accuracy=0.966513, train_loss=0.109176
I1011 13:37:51.877257 140115530360576 logging_writer.py:35] [1405] learning_rate=0.10000002384185791
I1011 13:37:51.891464 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.976562, valid_loss=0.047212
I1011 13:37:51.897446 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.980469, valid_loss=0.067890
I1011 13:37:51.902668 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.973958, valid_loss=0.081291
I1011 13:37:51.908132 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.968750, valid_loss=0.095397
I1011 13:37:51.913722 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.971875, valid_loss=0.096281
I1011 13:37:51.919125 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.973958, valid_loss=0.092314
I1011 13:37:51.926215 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.974330, valid_loss=0.093098
I1011 13:37:51.929651 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.971680, valid_loss=0.104904
I1011 13:37:51.934859 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.969618, valid_loss=0.106936
I1011 13:37:51.939853 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.970312, valid_loss=0.103399
I1011 13:37:51.947282 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.969460, valid_loss=0.103375
I1011 13:37:51.952668 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.968099, valid_loss=0.105362
I1011 13:37:51.957997 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966947, valid_loss=0.108332
I1011 13:37:51.963064 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.967076, valid_loss=0.106784
I1011 13:37:51.970000 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.967187, valid_loss=0.105699
I1011 13:37:51.975131 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965820, valid_loss=0.109543
I1011 13:37:51.980514 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.967371, valid_loss=0.105939
I1011 13:37:51.987952 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.967014, valid_loss=0.106832
I1011 13:37:51.991482 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.967105, valid_loss=0.107521
I1011 13:37:51.997149 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966797, valid_loss=0.107724
I1011 13:37:52.002505 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966518, valid_loss=0.109542
I1011 13:37:52.008096 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966619, valid_loss=0.108260
I1011 13:37:52.018516 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.967052, valid_loss=0.106202
I1011 13:37:52.023424 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965495, valid_loss=0.108470
I1011 13:37:52.029059 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965000, valid_loss=0.111931
I1011 13:37:52.034542 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964844, valid_loss=0.111311
I1011 13:37:52.040118 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964120, valid_loss=0.112175
I1011 13:37:52.043402 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964565, valid_loss=0.111221
I1011 13:37:52.049255 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964440, valid_loss=0.111749
I1011 13:37:52.054611 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964583, valid_loss=0.113564
I1011 13:37:52.060449 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964718, valid_loss=0.113380
I1011 13:37:52.065858 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964844, valid_loss=0.113990
I1011 13:37:52.072009 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.963778, valid_loss=0.116709
I1011 13:37:52.077472 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.963925, valid_loss=0.117006
I1011 13:37:52.082785 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.964509, valid_loss=0.115492
I1011 13:37:52.088794 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965061, valid_loss=0.116033
I1011 13:37:52.094407 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965372, valid_loss=0.116242
I1011 13:37:52.099886 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965461, valid_loss=0.117562
I1011 13:37:52.105360 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965345, valid_loss=0.116478
I1011 13:37:52.110801 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966016, valid_loss=0.114414
I1011 13:37:52.116615 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965892, valid_loss=0.115442
I1011 13:37:52.122121 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965588, valid_loss=0.114829
I1011 13:37:52.127897 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966025, valid_loss=0.113438
I1011 13:37:52.133476 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965732, valid_loss=0.113474
I1011 13:37:52.139017 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966146, valid_loss=0.112518
I1011 13:37:52.144504 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965863, valid_loss=0.112139
I1011 13:37:52.150090 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965758, valid_loss=0.112231
I1011 13:37:52.155342 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965983, valid_loss=0.111950
I1011 13:37:52.161215 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965721, valid_loss=0.111933
I1011 13:37:52.166568 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965625, valid_loss=0.112934
I1011 13:37:52.179476 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965839, valid_loss=0.112683
I1011 13:37:52.185490 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965895, valid_loss=0.112679
I1011 13:37:52.191756 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965802, valid_loss=0.112281
I1011 13:37:52.197847 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965856, valid_loss=0.111644
I1011 13:37:52.205344 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965909, valid_loss=0.111622
I1011 13:37:52.210427 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966099, valid_loss=0.111316
I1011 13:37:52.217742 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965872, valid_loss=0.111893
I1011 13:37:52.222396 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965652, valid_loss=0.112496
I1011 13:37:52.229200 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965440, valid_loss=0.111779
I1011 13:37:52.235358 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965625, valid_loss=0.111130
I1011 13:37:52.240765 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965932, valid_loss=0.110933
I1011 13:37:52.246458 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965852, valid_loss=0.110297
I1011 13:37:52.252342 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965402, valid_loss=0.111024
I1011 13:37:52.257919 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965576, valid_loss=0.110277
I1011 13:37:52.263857 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965144, valid_loss=0.110676
I1011 13:37:52.270178 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965436, valid_loss=0.109824
I1011 13:37:52.276753 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965602, valid_loss=0.109950
I1011 13:37:52.282358 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965648, valid_loss=0.109937
I1011 13:37:52.288483 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965806, valid_loss=0.109384
I1011 13:37:52.294321 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966183, valid_loss=0.108799
I1011 13:37:52.299953 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966109, valid_loss=0.109262
I1011 13:37:52.305844 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966254, valid_loss=0.109835
I1011 13:37:52.311891 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966182, valid_loss=0.110646
I1011 13:37:52.317973 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966216, valid_loss=0.110878
I1011 13:37:52.326849 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965937, valid_loss=0.111331
I1011 13:37:52.335108 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965872, valid_loss=0.111206
I1011 13:37:52.344285 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.966011, valid_loss=0.110488
I1011 13:37:52.349681 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965645, valid_loss=0.110790
I1011 13:37:52.355180 140115512289024 logging_writer.py:35] [1405] valid_accuracy=0.965600, valid_loss=0.110748
I1011 13:37:53.684578 140122186418048 checkpoints.py:120] Saving checkpoint at step: 1872
I1011 13:37:53.687911 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_1872
I1011 13:37:53.688028 140122186418048 checkpoints.py:174] Removing checkpoint at ./checkpoint_468
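The checkpoint lines show a bounded history: once checkpoint_1872 is written, the oldest file (checkpoint_468) is removed. A minimal sketch of that behaviour with flax.training.checkpoints follows, assuming a keep=3 retention policy; the actual value used by this run is not shown in the log.

from flax.training import checkpoints

# Save a dummy pytree at the same steps as the log; with keep=3 only the three
# most recent checkpoint_<step> files survive and older ones are deleted.
for step in (468, 936, 1404, 1872):
    checkpoints.save_checkpoint(ckpt_dir='.', target={'step': step},
                                step=step, prefix='checkpoint_', keep=3)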
I1011 13:37:53.692822 140115530360576 logging_writer.py:35] [1873] core_hours_Tesla K80=0.006248, core_hours_approx_v3=0.006248, epoch=4.002137, img/sec=45152.245821, img/sec/core=45152.245821
I1011 13:37:54.182454 140115530360576 logging_writer.py:35] [1873] train_accuracy=0.971104, train_loss=0.094048
I1011 13:37:54.182729 140115530360576 logging_writer.py:35] [1873] learning_rate=0.10000002384185791
I1011 13:37:54.196289 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.976562, valid_loss=0.076963
I1011 13:37:54.203335 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964844, valid_loss=0.090261
I1011 13:37:54.209955 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.958333, valid_loss=0.099855
I1011 13:37:54.213971 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.958984, valid_loss=0.111538
I1011 13:37:54.218877 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960937, valid_loss=0.105064
I1011 13:37:54.224594 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.967448, valid_loss=0.093241
I1011 13:37:54.231449 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.968750, valid_loss=0.092362
I1011 13:37:54.234731 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.968750, valid_loss=0.095923
I1011 13:37:54.242402 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965278, valid_loss=0.098190
I1011 13:37:54.246476 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965625, valid_loss=0.094957
I1011 13:37:54.252227 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965909, valid_loss=0.093884
I1011 13:37:54.261597 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965495, valid_loss=0.099746
I1011 13:37:54.268741 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965144, valid_loss=0.099629
I1011 13:37:54.275834 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965960, valid_loss=0.095834
I1011 13:37:54.283298 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964062, valid_loss=0.099937
I1011 13:37:54.288768 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961914, valid_loss=0.104771
I1011 13:37:54.294876 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962776, valid_loss=0.102474
I1011 13:37:54.302367 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962674, valid_loss=0.102295
I1011 13:37:54.306643 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962171, valid_loss=0.104261
I1011 13:37:54.312247 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960156, valid_loss=0.109487
I1011 13:37:54.320043 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960565, valid_loss=0.110308
I1011 13:37:54.325851 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960582, valid_loss=0.111957
I1011 13:37:54.330261 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960598, valid_loss=0.112838
I1011 13:37:54.335436 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960612, valid_loss=0.112269
I1011 13:37:54.341583 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960312, valid_loss=0.115078
I1011 13:37:54.346579 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960036, valid_loss=0.115347
I1011 13:37:54.352993 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960648, valid_loss=0.114955
I1011 13:37:54.361747 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960937, valid_loss=0.115059
I1011 13:37:54.369620 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961207, valid_loss=0.114897
I1011 13:37:54.375826 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961198, valid_loss=0.115435
I1011 13:37:54.381249 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962198, valid_loss=0.113333
I1011 13:37:54.387706 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962158, valid_loss=0.114146
I1011 13:37:54.393301 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961648, valid_loss=0.114777
I1011 13:37:54.399476 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.960937, valid_loss=0.115690
I1011 13:37:54.406092 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961161, valid_loss=0.115314
I1011 13:37:54.413255 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961155, valid_loss=0.115581
I1011 13:37:54.419315 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961571, valid_loss=0.115496
I1011 13:37:54.426075 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.961965, valid_loss=0.116786
I1011 13:37:54.431396 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962340, valid_loss=0.115199
I1011 13:37:54.437736 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963281, valid_loss=0.112884
I1011 13:37:54.443224 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963224, valid_loss=0.112870
I1011 13:37:54.448683 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962984, valid_loss=0.113875
I1011 13:37:54.454181 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963481, valid_loss=0.112338
I1011 13:37:54.459419 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963423, valid_loss=0.112316
I1011 13:37:54.464879 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963021, valid_loss=0.112112
I1011 13:37:54.469997 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.962976, valid_loss=0.113404
I1011 13:37:54.475103 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963265, valid_loss=0.113327
I1011 13:37:54.484720 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963542, valid_loss=0.112800
I1011 13:37:54.493990 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963648, valid_loss=0.113037
I1011 13:37:54.499847 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963281, valid_loss=0.113304
I1011 13:37:54.503942 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963542, valid_loss=0.112354
I1011 13:37:54.509688 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963792, valid_loss=0.112070
I1011 13:37:54.516190 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963886, valid_loss=0.112263
I1011 13:37:54.522075 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.963686, valid_loss=0.112204
I1011 13:37:54.527330 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964062, valid_loss=0.111515
I1011 13:37:54.532555 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964146, valid_loss=0.112029
I1011 13:37:54.538170 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964227, valid_loss=0.112714
I1011 13:37:54.543639 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964170, valid_loss=0.113831
I1011 13:37:54.549257 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964380, valid_loss=0.113366
I1011 13:37:54.554711 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964714, valid_loss=0.112955
I1011 13:37:54.562455 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964908, valid_loss=0.112131
I1011 13:37:54.566591 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.964970, valid_loss=0.111856
I1011 13:37:54.571851 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965030, valid_loss=0.112015
I1011 13:37:54.577061 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965210, valid_loss=0.111370
I1011 13:37:54.582843 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965264, valid_loss=0.111397
I1011 13:37:54.589213 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965672, valid_loss=0.110226
I1011 13:37:54.594686 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965835, valid_loss=0.109671
I1011 13:37:54.600072 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965648, valid_loss=0.109893
I1011 13:37:54.606545 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.965806, valid_loss=0.109168
I1011 13:37:54.613009 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966183, valid_loss=0.108644
I1011 13:37:54.619420 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966109, valid_loss=0.109305
I1011 13:37:54.625149 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966254, valid_loss=0.109158
I1011 13:37:54.630772 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966289, valid_loss=0.109210
I1011 13:37:54.636190 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966111, valid_loss=0.109412
I1011 13:37:54.641564 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966146, valid_loss=0.108942
I1011 13:37:54.647296 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966283, valid_loss=0.108743
I1011 13:37:54.652360 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966315, valid_loss=0.108060
I1011 13:37:54.658561 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966046, valid_loss=0.108817
I1011 13:37:54.663917 140115538753280 logging_writer.py:35] [1873] valid_accuracy=0.966000, valid_loss=0.109136
I1011 13:37:56.009868 140122186418048 checkpoints.py:120] Saving checkpoint at step: 2340
I1011 13:37:56.013097 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_2340
I1011 13:37:56.013273 140122186418048 checkpoints.py:174] Removing checkpoint at ./checkpoint_936
I1011 13:37:56.017515 140115530360576 logging_writer.py:35] [2341] core_hours_Tesla K80=0.006621, core_hours_approx_v3=0.006621, epoch=5.002137, img/sec=44651.450410, img/sec/core=44651.450410
I1011 13:37:56.492295 140115530360576 logging_writer.py:35] [2341] train_accuracy=0.973157, train_loss=0.085993
I1011 13:37:56.492524 140115530360576 logging_writer.py:35] [2341] learning_rate=0.10000002384185791
I1011 13:37:56.506742 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.968750, valid_loss=0.066349
I1011 13:37:56.512698 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.960937, valid_loss=0.080514
I1011 13:37:56.517704 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.958333, valid_loss=0.083485
I1011 13:37:56.523003 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.960937, valid_loss=0.091904
I1011 13:37:56.528127 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.960937, valid_loss=0.093689
I1011 13:37:56.532583 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966146, valid_loss=0.083513
I1011 13:37:56.537597 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.969866, valid_loss=0.080828
I1011 13:37:56.544224 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.969727, valid_loss=0.087027
I1011 13:37:56.549202 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.968750, valid_loss=0.090101
I1011 13:37:56.554475 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967969, valid_loss=0.090399
I1011 13:37:56.561301 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967330, valid_loss=0.092584
I1011 13:37:56.566468 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.968099, valid_loss=0.095135
I1011 13:37:56.572551 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967548, valid_loss=0.094757
I1011 13:37:56.578842 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.968192, valid_loss=0.091794
I1011 13:37:56.584013 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967708, valid_loss=0.091742
I1011 13:37:56.589307 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965820, valid_loss=0.098140
I1011 13:37:56.594737 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965993, valid_loss=0.095711
I1011 13:37:56.603116 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965712, valid_loss=0.094805
I1011 13:37:56.609262 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965872, valid_loss=0.098093
I1011 13:37:56.613099 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.963281, valid_loss=0.104271
I1011 13:37:56.617877 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.963542, valid_loss=0.104174
I1011 13:37:56.623847 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964134, valid_loss=0.104527
I1011 13:37:56.630649 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965014, valid_loss=0.102224
I1011 13:37:56.634696 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964518, valid_loss=0.102431
I1011 13:37:56.639773 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964375, valid_loss=0.105616
I1011 13:37:56.644879 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964543, valid_loss=0.104342
I1011 13:37:56.650872 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964410, valid_loss=0.103889
I1011 13:37:56.657687 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964286, valid_loss=0.104603
I1011 13:37:56.661555 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964709, valid_loss=0.105247
I1011 13:37:56.668311 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964844, valid_loss=0.104762
I1011 13:37:56.673635 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965222, valid_loss=0.103639
I1011 13:37:56.677365 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965332, valid_loss=0.104283
I1011 13:37:56.682702 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964489, valid_loss=0.106361
I1011 13:37:56.690272 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964154, valid_loss=0.107185
I1011 13:37:56.694279 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964062, valid_loss=0.106649
I1011 13:37:56.698407 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964193, valid_loss=0.106139
I1011 13:37:56.716202 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964738, valid_loss=0.105248
I1011 13:37:56.722152 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.964638, valid_loss=0.107486
I1011 13:37:56.728629 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965144, valid_loss=0.105863
I1011 13:37:56.734668 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966016, valid_loss=0.103694
I1011 13:37:56.740427 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965892, valid_loss=0.103646
I1011 13:37:56.746775 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965960, valid_loss=0.103636
I1011 13:37:56.752173 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966206, valid_loss=0.102388
I1011 13:37:56.759035 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966442, valid_loss=0.101808
I1011 13:37:56.764965 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965972, valid_loss=0.102390
I1011 13:37:56.770647 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965863, valid_loss=0.102592
I1011 13:37:56.776385 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.965758, valid_loss=0.102987
I1011 13:37:56.782065 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966146, valid_loss=0.102326
I1011 13:37:56.787757 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966358, valid_loss=0.102192
I1011 13:37:56.796289 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966094, valid_loss=0.103621
I1011 13:37:56.800140 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966452, valid_loss=0.103111
I1011 13:37:56.805944 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966496, valid_loss=0.103220
I1011 13:37:56.811625 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966392, valid_loss=0.103226
I1011 13:37:56.821498 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966725, valid_loss=0.102418
I1011 13:37:56.828607 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966761, valid_loss=0.101532
I1011 13:37:56.834700 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966657, valid_loss=0.101955
I1011 13:37:56.842260 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966694, valid_loss=0.102593
I1011 13:37:56.847875 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966595, valid_loss=0.103743
I1011 13:37:56.854095 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966631, valid_loss=0.103784
I1011 13:37:56.859451 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966667, valid_loss=0.103421
I1011 13:37:56.865348 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966957, valid_loss=0.102482
I1011 13:37:56.871028 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966860, valid_loss=0.102218
I1011 13:37:56.876505 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966890, valid_loss=0.102218
I1011 13:37:56.882583 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967041, valid_loss=0.101608
I1011 13:37:56.888560 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.966947, valid_loss=0.101482
I1011 13:37:56.894108 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967211, valid_loss=0.100609
I1011 13:37:56.900089 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967351, valid_loss=0.100578
I1011 13:37:56.905466 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967256, valid_loss=0.100834
I1011 13:37:56.910783 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967618, valid_loss=0.099865
I1011 13:37:56.916261 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967522, valid_loss=0.099801
I1011 13:37:56.921833 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967320, valid_loss=0.100650
I1011 13:37:56.927609 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967339, valid_loss=0.100849
I1011 13:37:56.933133 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967466, valid_loss=0.100730
I1011 13:37:56.940493 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967272, valid_loss=0.100792
I1011 13:37:56.944714 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967396, valid_loss=0.100355
I1011 13:37:56.950650 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967311, valid_loss=0.100464
I1011 13:37:56.956397 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967431, valid_loss=0.099904
I1011 13:37:56.964500 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967548, valid_loss=0.099992
I1011 13:37:56.973144 140115512289024 logging_writer.py:35] [2341] valid_accuracy=0.967500, valid_loss=0.100134
I1011 13:37:58.301074 140122186418048 checkpoints.py:120] Saving checkpoint at step: 2808
I1011 13:37:58.305062 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_2808
I1011 13:37:58.305190 140122186418048 checkpoints.py:174] Removing checkpoint at ./checkpoint_1404
I1011 13:37:58.309476 140115530360576 logging_writer.py:35] [2809] core_hours_Tesla K80=0.006989, core_hours_approx_v3=0.006989, epoch=6.002137, img/sec=45160.044865, img/sec/core=45160.044865
I1011 13:37:58.794518 140115530360576 logging_writer.py:35] [2809] train_accuracy=0.976329, train_loss=0.078176
I1011 13:37:58.795318 140115530360576 logging_writer.py:35] [2809] learning_rate=0.10000002384185791
I1011 13:37:58.809585 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.968750, valid_loss=0.082084
I1011 13:37:58.817958 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.976562, valid_loss=0.074323
I1011 13:37:58.823120 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.979167, valid_loss=0.065421
I1011 13:37:58.826139 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974609, valid_loss=0.090876
I1011 13:37:58.831986 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.971875, valid_loss=0.087949
I1011 13:37:58.839206 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.975260, valid_loss=0.078232
I1011 13:37:58.845018 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.976562, valid_loss=0.076995
I1011 13:37:58.848865 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974609, valid_loss=0.079070
I1011 13:37:58.856001 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973958, valid_loss=0.079639
I1011 13:37:58.858964 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.975000, valid_loss=0.075275
I1011 13:37:58.864884 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974432, valid_loss=0.077012
I1011 13:37:58.872169 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974609, valid_loss=0.076985
I1011 13:37:58.876502 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974760, valid_loss=0.077853
I1011 13:37:58.881582 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.975446, valid_loss=0.077627
I1011 13:37:58.888202 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973958, valid_loss=0.082809
I1011 13:37:58.892370 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973633, valid_loss=0.084897
I1011 13:37:58.897367 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973805, valid_loss=0.082714
I1011 13:37:58.902779 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972656, valid_loss=0.085408
I1011 13:37:58.909762 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973273, valid_loss=0.084314
I1011 13:37:58.913842 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973437, valid_loss=0.085496
I1011 13:37:58.919046 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973214, valid_loss=0.085469
I1011 13:37:58.923876 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973722, valid_loss=0.086373
I1011 13:37:58.929888 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973505, valid_loss=0.086996
I1011 13:37:58.935056 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972982, valid_loss=0.087866
I1011 13:37:58.941602 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972812, valid_loss=0.090532
I1011 13:37:58.947225 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972656, valid_loss=0.089940
I1011 13:37:58.952681 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973090, valid_loss=0.089846
I1011 13:37:58.959593 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972377, valid_loss=0.089911
I1011 13:37:58.963605 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972252, valid_loss=0.089778
I1011 13:37:58.968741 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972396, valid_loss=0.089721
I1011 13:37:58.975955 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973034, valid_loss=0.088724
I1011 13:37:58.981983 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972412, valid_loss=0.090368
I1011 13:37:58.986999 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972301, valid_loss=0.091547
I1011 13:37:58.992595 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972426, valid_loss=0.090843
I1011 13:37:58.999296 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972545, valid_loss=0.089553
I1011 13:37:59.003721 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972439, valid_loss=0.089226
I1011 13:37:59.014349 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972762, valid_loss=0.088485
I1011 13:37:59.023196 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.972656, valid_loss=0.089101
I1011 13:37:59.028627 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973357, valid_loss=0.087591
I1011 13:37:59.034016 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974023, valid_loss=0.085951
I1011 13:37:59.038070 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973895, valid_loss=0.086213
I1011 13:37:59.043286 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973400, valid_loss=0.087229
I1011 13:37:59.048678 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973837, valid_loss=0.085749
I1011 13:37:59.053766 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974077, valid_loss=0.085429
I1011 13:37:59.061197 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973958, valid_loss=0.086426
I1011 13:37:59.065279 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974185, valid_loss=0.086445
I1011 13:37:59.070761 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973903, valid_loss=0.087476
I1011 13:37:59.076267 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974284, valid_loss=0.086990
I1011 13:37:59.081938 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974330, valid_loss=0.086819
I1011 13:37:59.087590 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974531, valid_loss=0.086429
I1011 13:37:59.092964 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974571, valid_loss=0.085937
I1011 13:37:59.098534 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974609, valid_loss=0.085930
I1011 13:37:59.104070 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974499, valid_loss=0.085919
I1011 13:37:59.110643 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974248, valid_loss=0.086191
I1011 13:37:59.115685 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.974290, valid_loss=0.086140
I1011 13:37:59.121527 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973912, valid_loss=0.087401
I1011 13:37:59.127782 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973684, valid_loss=0.088853
I1011 13:37:59.133584 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973060, valid_loss=0.090731
I1011 13:37:59.139030 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973120, valid_loss=0.090704
I1011 13:37:59.144195 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973307, valid_loss=0.090473
I1011 13:37:59.149620 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973489, valid_loss=0.089691
I1011 13:37:59.154768 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973538, valid_loss=0.089338
I1011 13:37:59.159966 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973338, valid_loss=0.089880
I1011 13:37:59.165472 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973633, valid_loss=0.089314
I1011 13:37:59.171000 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973558, valid_loss=0.089393
I1011 13:37:59.176319 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973722, valid_loss=0.088515
I1011 13:37:59.182107 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973647, valid_loss=0.088719
I1011 13:37:59.187619 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973346, valid_loss=0.089468
I1011 13:37:59.192818 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973505, valid_loss=0.089078
I1011 13:37:59.198622 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973772, valid_loss=0.088966
I1011 13:37:59.204018 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973812, valid_loss=0.089161
I1011 13:37:59.210135 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973850, valid_loss=0.089307
I1011 13:37:59.215856 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973673, valid_loss=0.089601
I1011 13:37:59.221472 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973606, valid_loss=0.089486
I1011 13:37:59.226812 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973437, valid_loss=0.089804
I1011 13:37:59.234316 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973273, valid_loss=0.089846
I1011 13:37:59.242124 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973214, valid_loss=0.089485
I1011 13:37:59.246492 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973157, valid_loss=0.090398
I1011 13:37:59.251579 140115538753280 logging_writer.py:35] [2809] valid_accuracy=0.973100, valid_loss=0.090519
I1011 13:38:00.613862 140122186418048 checkpoints.py:120] Saving checkpoint at step: 3276
I1011 13:38:00.618399 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_3276
I1011 13:38:00.618621 140122186418048 checkpoints.py:174] Removing checkpoint at ./checkpoint_1872
I1011 13:38:00.623572 140115530360576 logging_writer.py:35] [3277] core_hours_Tesla K80=0.007367, core_hours_approx_v3=0.007367, epoch=7.002137, img/sec=44081.636642, img/sec/core=44081.636642
I1011 13:38:01.104991 140115530360576 logging_writer.py:35] [3277] train_accuracy=0.976512, train_loss=0.074930
I1011 13:38:01.105236 140115530360576 logging_writer.py:35] [3277] learning_rate=0.10000002384185791
I1011 13:38:01.122182 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.945312, valid_loss=0.117502
I1011 13:38:01.128556 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949219, valid_loss=0.104941
I1011 13:38:01.134241 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.945312, valid_loss=0.140586
I1011 13:38:01.141214 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.943359, valid_loss=0.151152
I1011 13:38:01.146738 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946875, valid_loss=0.143278
I1011 13:38:01.151832 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950521, valid_loss=0.132998
I1011 13:38:01.156847 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.952009, valid_loss=0.128076
I1011 13:38:01.162372 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.952148, valid_loss=0.131496
I1011 13:38:01.168592 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950521, valid_loss=0.136335
I1011 13:38:01.171885 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.951562, valid_loss=0.130860
I1011 13:38:01.177707 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.952415, valid_loss=0.131717
I1011 13:38:01.183322 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949219, valid_loss=0.139747
I1011 13:38:01.189529 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.947115, valid_loss=0.142734
I1011 13:38:01.195325 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948103, valid_loss=0.139563
I1011 13:38:01.203352 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948437, valid_loss=0.137438
I1011 13:38:01.206774 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946777, valid_loss=0.141280
I1011 13:38:01.212934 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948070, valid_loss=0.138849
I1011 13:38:01.219761 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.947917, valid_loss=0.140275
I1011 13:38:01.227852 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.947780, valid_loss=0.141544
I1011 13:38:01.234050 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946875, valid_loss=0.143708
I1011 13:38:01.238238 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.947173, valid_loss=0.143607
I1011 13:38:01.244944 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946023, valid_loss=0.147066
I1011 13:38:01.250746 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946671, valid_loss=0.146161
I1011 13:38:01.258766 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.945964, valid_loss=0.146927
I1011 13:38:01.262576 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946250, valid_loss=0.150263
I1011 13:38:01.268139 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946514, valid_loss=0.148452
I1011 13:38:01.273505 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.946759, valid_loss=0.148187
I1011 13:38:01.279985 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948103, valid_loss=0.146051
I1011 13:38:01.286100 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948815, valid_loss=0.145488
I1011 13:38:01.291868 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948958, valid_loss=0.146259
I1011 13:38:01.299026 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948841, valid_loss=0.145988
I1011 13:38:01.306074 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948975, valid_loss=0.145776
I1011 13:38:01.309904 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948627, valid_loss=0.146638
I1011 13:38:01.315746 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948070, valid_loss=0.146336
I1011 13:38:01.324244 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.947768, valid_loss=0.145837
I1011 13:38:01.333717 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948351, valid_loss=0.144592
I1011 13:38:01.339652 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948480, valid_loss=0.144773
I1011 13:38:01.347903 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948396, valid_loss=0.145861
I1011 13:38:01.353675 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948518, valid_loss=0.145434
I1011 13:38:01.359386 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949023, valid_loss=0.143576
I1011 13:38:01.363189 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948552, valid_loss=0.144303
I1011 13:38:01.369494 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.948661, valid_loss=0.143615
I1011 13:38:01.375206 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949310, valid_loss=0.142400
I1011 13:38:01.384016 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949219, valid_loss=0.143182
I1011 13:38:01.388168 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949306, valid_loss=0.141684
I1011 13:38:01.394303 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949558, valid_loss=0.142223
I1011 13:38:01.400595 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.949801, valid_loss=0.141311
I1011 13:38:01.406728 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950358, valid_loss=0.140924
I1011 13:38:01.412918 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950415, valid_loss=0.141025
I1011 13:38:01.420419 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950625, valid_loss=0.140675
I1011 13:38:01.429324 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950827, valid_loss=0.140251
I1011 13:38:01.435157 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950571, valid_loss=0.141857
I1011 13:38:01.441242 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950767, valid_loss=0.141450
I1011 13:38:01.447011 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950955, valid_loss=0.140608
I1011 13:38:01.453398 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950710, valid_loss=0.140310
I1011 13:38:01.459327 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.951032, valid_loss=0.139768
I1011 13:38:01.465294 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.951069, valid_loss=0.140011
I1011 13:38:01.480877 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.950835, valid_loss=0.140620
I1011 13:38:01.483645 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.951271, valid_loss=0.139568
I1011 13:38:01.490826 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.951432, valid_loss=0.138813
I1011 13:38:01.498445 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.952100, valid_loss=0.137419
I1011 13:38:01.503858 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.952369, valid_loss=0.136700
I1011 13:38:01.509551 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.952629, valid_loss=0.136151
I1011 13:38:01.515087 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953003, valid_loss=0.135443
I1011 13:38:01.521224 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.952764, valid_loss=0.135635
I1011 13:38:01.526774 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953125, valid_loss=0.134722
I1011 13:38:01.533524 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953125, valid_loss=0.134428
I1011 13:38:01.537675 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953470, valid_loss=0.134278
I1011 13:38:01.545023 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953578, valid_loss=0.134113
I1011 13:38:01.548711 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953795, valid_loss=0.133835
I1011 13:38:01.554462 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953895, valid_loss=0.133795
I1011 13:38:01.559242 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953776, valid_loss=0.134294
I1011 13:38:01.566561 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953767, valid_loss=0.134722
I1011 13:38:01.570512 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953864, valid_loss=0.134598
I1011 13:38:01.576580 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953854, valid_loss=0.135166
I1011 13:38:01.584272 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953639, valid_loss=0.135608
I1011 13:38:01.592287 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953734, valid_loss=0.135254
I1011 13:38:01.593919 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953826, valid_loss=0.135286
I1011 13:38:01.604132 140115512289024 logging_writer.py:35] [3277] valid_accuracy=0.953800, valid_loss=0.135605
I1011 13:38:02.934522 140122186418048 checkpoints.py:120] Saving checkpoint at step: 3744
I1011 13:38:02.937907 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_3744
I1011 13:38:02.938031 140122186418048 checkpoints.py:174] Removing checkpoint at ./checkpoint_2340
I1011 13:38:02.942122 140115530360576 logging_writer.py:35] [3745] core_hours_Tesla K80=0.007735, core_hours_approx_v3=0.007735, epoch=8.002137, img/sec=45147.994407, img/sec/core=45147.994407
I1011 13:38:03.417119 140115530360576 logging_writer.py:35] [3745] train_accuracy=0.977481, train_loss=0.071040
I1011 13:38:03.417356 140115530360576 logging_writer.py:35] [3745] learning_rate=0.10000002384185791
I1011 13:38:03.432895 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.968750, valid_loss=0.095306
I1011 13:38:03.438857 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.972656, valid_loss=0.075193
I1011 13:38:03.443779 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.966146, valid_loss=0.093326
I1011 13:38:03.449697 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.964844, valid_loss=0.104934
I1011 13:38:03.454799 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.967187, valid_loss=0.099582
I1011 13:38:03.459833 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.972656, valid_loss=0.086857
I1011 13:38:03.465318 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975446, valid_loss=0.081806
I1011 13:38:03.470464 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974609, valid_loss=0.083552
I1011 13:38:03.475736 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974826, valid_loss=0.083464
I1011 13:38:03.481488 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.977344, valid_loss=0.077886
I1011 13:38:03.487509 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.977273, valid_loss=0.078080
I1011 13:38:03.494909 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.978516, valid_loss=0.077490
I1011 13:38:03.501472 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.979567, valid_loss=0.076337
I1011 13:38:03.510682 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.978237, valid_loss=0.075542
I1011 13:38:03.520038 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.978125, valid_loss=0.075324
I1011 13:38:03.526432 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.977539, valid_loss=0.077991
I1011 13:38:03.533637 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.977482, valid_loss=0.075818
I1011 13:38:03.538853 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.977865, valid_loss=0.074676
I1011 13:38:03.543889 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.977385, valid_loss=0.075325
I1011 13:38:03.549509 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975391, valid_loss=0.082318
I1011 13:38:03.554582 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974702, valid_loss=0.084481
I1011 13:38:03.560405 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974432, valid_loss=0.085803
I1011 13:38:03.566032 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974524, valid_loss=0.085079
I1011 13:38:03.571585 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974284, valid_loss=0.084777
I1011 13:38:03.577282 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974062, valid_loss=0.088246
I1011 13:38:03.584976 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974159, valid_loss=0.088185
I1011 13:38:03.588471 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974826, valid_loss=0.087113
I1011 13:38:03.594026 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974888, valid_loss=0.086380
I1011 13:38:03.599546 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974677, valid_loss=0.088969
I1011 13:38:03.604795 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974479, valid_loss=0.088684
I1011 13:38:03.610224 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974798, valid_loss=0.087262
I1011 13:38:03.615699 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974365, valid_loss=0.087987
I1011 13:38:03.621978 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.973248, valid_loss=0.089137
I1011 13:38:03.627655 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.972886, valid_loss=0.088339
I1011 13:38:03.635535 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.973214, valid_loss=0.087419
I1011 13:38:03.637625 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.973524, valid_loss=0.086857
I1011 13:38:03.644948 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974029, valid_loss=0.085905
I1011 13:38:03.652081 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.973890, valid_loss=0.088305
I1011 13:38:03.659312 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974359, valid_loss=0.087053
I1011 13:38:03.664676 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975000, valid_loss=0.085314
I1011 13:38:03.672773 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974848, valid_loss=0.085849
I1011 13:38:03.680775 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974702, valid_loss=0.086090
I1011 13:38:03.687053 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975109, valid_loss=0.084722
I1011 13:38:03.692812 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975142, valid_loss=0.084729
I1011 13:38:03.698583 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975174, valid_loss=0.083718
I1011 13:38:03.703978 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975374, valid_loss=0.083693
I1011 13:38:03.709794 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975233, valid_loss=0.084600
I1011 13:38:03.715899 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975098, valid_loss=0.084191
I1011 13:38:03.721684 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975287, valid_loss=0.083519
I1011 13:38:03.728486 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975156, valid_loss=0.083955
I1011 13:38:03.734764 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975031, valid_loss=0.084017
I1011 13:38:03.740734 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975210, valid_loss=0.084218
I1011 13:38:03.747100 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975088, valid_loss=0.084256
I1011 13:38:03.754283 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975260, valid_loss=0.083752
I1011 13:38:03.758471 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975284, valid_loss=0.083455
I1011 13:38:03.764793 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975307, valid_loss=0.084144
I1011 13:38:03.770865 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975329, valid_loss=0.084871
I1011 13:38:03.779090 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.974946, valid_loss=0.086632
I1011 13:38:03.784124 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975238, valid_loss=0.086393
I1011 13:38:03.789553 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975260, valid_loss=0.085906
I1011 13:38:03.796031 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975538, valid_loss=0.085013
I1011 13:38:03.802285 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975302, valid_loss=0.084879
I1011 13:38:03.811235 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975198, valid_loss=0.084739
I1011 13:38:03.815358 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975464, valid_loss=0.084123
I1011 13:38:03.828547 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975481, valid_loss=0.083932
I1011 13:38:03.837295 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975616, valid_loss=0.083170
I1011 13:38:03.846782 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975630, valid_loss=0.083244
I1011 13:38:03.852364 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975414, valid_loss=0.083403
I1011 13:38:03.858557 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975543, valid_loss=0.082818
I1011 13:38:03.864268 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975781, valid_loss=0.082401
I1011 13:38:03.870190 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975792, valid_loss=0.082697
I1011 13:38:03.875817 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975911, valid_loss=0.082708
I1011 13:38:03.881703 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975920, valid_loss=0.082868
I1011 13:38:03.890115 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975929, valid_loss=0.082560
I1011 13:38:03.895169 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975833, valid_loss=0.082376
I1011 13:38:03.899173 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975843, valid_loss=0.082452
I1011 13:38:03.906449 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975751, valid_loss=0.082457
I1011 13:38:03.911819 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975561, valid_loss=0.083070
I1011 13:38:03.916988 140115538753280 logging_writer.py:35] [3745] valid_accuracy=0.975500, valid_loss=0.083181
I1011 13:38:05.262078 140122186418048 checkpoints.py:120] Saving checkpoint at step: 4212
I1011 13:38:05.265500 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_4212
I1011 13:38:05.265624 140122186418048 checkpoints.py:174] Removing checkpoint at ./checkpoint_2808
I1011 13:38:05.270427 140115530360576 logging_writer.py:35] [4213] core_hours_Tesla K80=0.008108, core_hours_approx_v3=0.008108, epoch=9.002137, img/sec=44647.102368, img/sec/core=44647.102368
I1011 13:38:05.764020 140115530360576 logging_writer.py:35] [4213] train_accuracy=0.977831, train_loss=0.071539
I1011 13:38:05.764275 140115530360576 logging_writer.py:35] [4213] learning_rate=0.10000002384185791
I1011 13:38:05.778188 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.960937, valid_loss=0.113638
I1011 13:38:05.783784 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.972656, valid_loss=0.090208
I1011 13:38:05.788844 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971354, valid_loss=0.098020
I1011 13:38:05.794041 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970703, valid_loss=0.099812
I1011 13:38:05.799208 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971875, valid_loss=0.091432
I1011 13:38:05.804344 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.975260, valid_loss=0.082986
I1011 13:38:05.809334 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.976562, valid_loss=0.081204
I1011 13:38:05.814722 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.973633, valid_loss=0.086868
I1011 13:38:05.820007 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.973958, valid_loss=0.085287
I1011 13:38:05.825003 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.974219, valid_loss=0.081874
I1011 13:38:05.829881 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.974432, valid_loss=0.080909
I1011 13:38:05.834875 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.973307, valid_loss=0.082818
I1011 13:38:05.842251 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.969952, valid_loss=0.086427
I1011 13:38:05.848839 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970424, valid_loss=0.085319
I1011 13:38:05.854368 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970312, valid_loss=0.085318
I1011 13:38:05.859839 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970215, valid_loss=0.087988
I1011 13:38:05.864885 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971507, valid_loss=0.084751
I1011 13:38:05.871433 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970920, valid_loss=0.085757
I1011 13:38:05.874594 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971628, valid_loss=0.086519
I1011 13:38:05.880397 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.969141, valid_loss=0.090048
I1011 13:38:05.885389 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.968378, valid_loss=0.092801
I1011 13:38:05.890399 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967685, valid_loss=0.094082
I1011 13:38:05.895846 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967391, valid_loss=0.092848
I1011 13:38:05.900855 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967122, valid_loss=0.092793
I1011 13:38:05.906520 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967187, valid_loss=0.095805
I1011 13:38:05.911562 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.966947, valid_loss=0.095011
I1011 13:38:05.916445 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.966725, valid_loss=0.095625
I1011 13:38:05.923708 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967355, valid_loss=0.094464
I1011 13:38:05.927486 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967672, valid_loss=0.095387
I1011 13:38:05.932719 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967187, valid_loss=0.096729
I1011 13:38:05.938282 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967490, valid_loss=0.094832
I1011 13:38:05.943478 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967285, valid_loss=0.095331
I1011 13:38:05.948689 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.966856, valid_loss=0.096386
I1011 13:38:05.954242 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967142, valid_loss=0.095434
I1011 13:38:05.959350 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967634, valid_loss=0.093826
I1011 13:38:05.964190 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.967882, valid_loss=0.093970
I1011 13:38:05.969462 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.968539, valid_loss=0.093090
I1011 13:38:05.974731 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.968750, valid_loss=0.095150
I1011 13:38:05.980073 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.969351, valid_loss=0.093564
I1011 13:38:05.985012 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.969922, valid_loss=0.092102
I1011 13:38:05.990343 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970084, valid_loss=0.092533
I1011 13:38:05.995862 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970052, valid_loss=0.092659
I1011 13:38:06.000868 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970749, valid_loss=0.091031
I1011 13:38:06.006488 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970881, valid_loss=0.090761
I1011 13:38:06.011607 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970833, valid_loss=0.091244
I1011 13:38:06.016931 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970958, valid_loss=0.091209
I1011 13:38:06.022135 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971077, valid_loss=0.091426
I1011 13:38:06.027334 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971029, valid_loss=0.090969
I1011 13:38:06.033073 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970982, valid_loss=0.090910
I1011 13:38:06.038525 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970469, valid_loss=0.091013
I1011 13:38:06.043758 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970435, valid_loss=0.091209
I1011 13:38:06.049290 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970403, valid_loss=0.091271
I1011 13:38:06.062720 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970077, valid_loss=0.091584
I1011 13:38:06.066341 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970197, valid_loss=0.091573
I1011 13:38:06.073789 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970170, valid_loss=0.091077
I1011 13:38:06.081246 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970145, valid_loss=0.092126
I1011 13:38:06.086821 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.969984, valid_loss=0.093272
I1011 13:38:06.092870 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.969962, valid_loss=0.094702
I1011 13:38:06.098466 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970074, valid_loss=0.094425
I1011 13:38:06.103694 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970312, valid_loss=0.094238
I1011 13:38:06.109227 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970543, valid_loss=0.093195
I1011 13:38:06.115069 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970514, valid_loss=0.093044
I1011 13:38:06.120559 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.969990, valid_loss=0.094240
I1011 13:38:06.126372 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970215, valid_loss=0.093394
I1011 13:38:06.132068 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970312, valid_loss=0.093076
I1011 13:38:06.137678 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970644, valid_loss=0.091968
I1011 13:38:06.143204 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970732, valid_loss=0.092254
I1011 13:38:06.148824 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970703, valid_loss=0.092006
I1011 13:38:06.154938 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971014, valid_loss=0.091225
I1011 13:38:06.161688 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971094, valid_loss=0.091146
I1011 13:38:06.167883 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970951, valid_loss=0.091370
I1011 13:38:06.178336 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971137, valid_loss=0.091616
I1011 13:38:06.181853 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.970997, valid_loss=0.091868
I1011 13:38:06.187443 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971178, valid_loss=0.091383
I1011 13:38:06.195272 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971042, valid_loss=0.091168
I1011 13:38:06.201219 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971114, valid_loss=0.091171
I1011 13:38:06.209757 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971287, valid_loss=0.090662
I1011 13:38:06.214083 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971054, valid_loss=0.091685
I1011 13:38:06.219918 140115512289024 logging_writer.py:35] [4213] valid_accuracy=0.971000, valid_loss=0.092086
I1011 13:38:07.554414 140122186418048 local.py:41] Setting work unit notes: 119.3 steps/s, 100.0% (4680/4680), ETA: 0m (0m : 0.2% checkpoint, 17.1% eval)
I1011 13:38:07.554989 140115530360576 logging_writer.py:35] [4680] steps_per_sec=119.307813
I1011 13:38:07.557116 140115530360576 logging_writer.py:35] [4680] core_hours_Tesla K80=0.008478, core_hours_approx_v3=0.008478, epoch=10.000000, img/sec=44893.887851, img/sec/core=44893.887851
I1011 13:38:08.017058 140115530360576 logging_writer.py:35] [4680] train_accuracy=0.977248, train_loss=0.069586
I1011 13:38:08.017334 140115530360576 logging_writer.py:35] [4680] learning_rate=0.10000001639127731
I1011 13:38:08.030735 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.129273
I1011 13:38:08.036700 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.119689
I1011 13:38:08.044379 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.963542, valid_loss=0.110827
I1011 13:38:08.047333 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.960937, valid_loss=0.112190
I1011 13:38:08.052469 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.960937, valid_loss=0.110497
I1011 13:38:08.059280 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.966146, valid_loss=0.097484
I1011 13:38:08.064376 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.096808
I1011 13:38:08.073166 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969727, valid_loss=0.095618
I1011 13:38:08.078387 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970486, valid_loss=0.097969
I1011 13:38:08.083570 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.971094, valid_loss=0.094341
I1011 13:38:08.089126 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970170, valid_loss=0.094370
I1011 13:38:08.094645 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969401, valid_loss=0.095082
I1011 13:38:08.100213 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968149, valid_loss=0.096196
I1011 13:38:08.106033 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.967634, valid_loss=0.093719
I1011 13:38:08.111281 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968229, valid_loss=0.092721
I1011 13:38:08.117331 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.967773, valid_loss=0.097120
I1011 13:38:08.124916 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969210, valid_loss=0.095657
I1011 13:38:08.128839 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969618, valid_loss=0.094599
I1011 13:38:08.135871 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970395, valid_loss=0.093650
I1011 13:38:08.139065 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969922, valid_loss=0.095227
I1011 13:38:08.144762 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969494, valid_loss=0.096057
I1011 13:38:08.150561 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968395, valid_loss=0.096656
I1011 13:38:08.156185 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969090, valid_loss=0.094797
I1011 13:38:08.163647 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968099, valid_loss=0.096582
I1011 13:38:08.167541 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968125, valid_loss=0.098467
I1011 13:38:08.172708 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.967849, valid_loss=0.097492
I1011 13:38:08.178109 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968461, valid_loss=0.096842
I1011 13:38:08.183527 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969029, valid_loss=0.094878
I1011 13:38:08.189237 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969289, valid_loss=0.095417
I1011 13:38:08.194396 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968490, valid_loss=0.096925
I1011 13:38:08.200157 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969002, valid_loss=0.095475
I1011 13:38:08.205457 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968994, valid_loss=0.095443
I1011 13:38:08.211004 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.097123
I1011 13:38:08.216576 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968520, valid_loss=0.097665
I1011 13:38:08.222098 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968527, valid_loss=0.097142
I1011 13:38:08.229448 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968533, valid_loss=0.097674
I1011 13:38:08.232575 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968539, valid_loss=0.097255
I1011 13:38:08.237780 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968339, valid_loss=0.100143
I1011 13:38:08.244685 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.098700
I1011 13:38:08.250061 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969336, valid_loss=0.097067
I1011 13:38:08.255380 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968941, valid_loss=0.098857
I1011 13:38:08.262145 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.099344
I1011 13:38:08.273759 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969295, valid_loss=0.098024
I1011 13:38:08.278054 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969105, valid_loss=0.098355
I1011 13:38:08.286692 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968924, valid_loss=0.098179
I1011 13:38:08.293039 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.098203
I1011 13:38:08.300283 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968584, valid_loss=0.098170
I1011 13:38:08.304478 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968587, valid_loss=0.098142
I1011 13:38:08.310145 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968750, valid_loss=0.098447
I1011 13:38:08.315885 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968281, valid_loss=0.098592
I1011 13:38:08.321719 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968290, valid_loss=0.098455
I1011 13:38:08.329766 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968149, valid_loss=0.099607
I1011 13:38:08.333369 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968160, valid_loss=0.099498
I1011 13:38:08.340982 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968461, valid_loss=0.099198
I1011 13:38:08.345071 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968466, valid_loss=0.098903
I1011 13:38:08.350618 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968890, valid_loss=0.098622
I1011 13:38:08.356155 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968887, valid_loss=0.100246
I1011 13:38:08.361375 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.968885, valid_loss=0.101238
I1011 13:38:08.368014 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969280, valid_loss=0.100312
I1011 13:38:08.373765 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969271, valid_loss=0.100155
I1011 13:38:08.379756 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969262, valid_loss=0.099782
I1011 13:38:08.386999 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969632, valid_loss=0.098748
I1011 13:38:08.391232 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969990, valid_loss=0.098391
I1011 13:38:08.397715 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969849, valid_loss=0.098284
I1011 13:38:08.403527 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969832, valid_loss=0.097746
I1011 13:38:08.410784 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970052, valid_loss=0.096811
I1011 13:38:08.414635 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970033, valid_loss=0.097143
I1011 13:38:08.420496 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.969784, valid_loss=0.097943
I1011 13:38:08.426122 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970222, valid_loss=0.096861
I1011 13:38:08.433858 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970424, valid_loss=0.096205
I1011 13:38:08.440110 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970511, valid_loss=0.095806
I1011 13:38:08.446235 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970703, valid_loss=0.095998
I1011 13:38:08.454037 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970890, valid_loss=0.095648
I1011 13:38:08.457603 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970861, valid_loss=0.095777
I1011 13:38:08.470597 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970833, valid_loss=0.096168
I1011 13:38:08.477122 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970909, valid_loss=0.096685
I1011 13:38:08.482476 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.970982, valid_loss=0.096081
I1011 13:38:08.488964 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.971054, valid_loss=0.095907
I1011 13:38:08.494513 140115512289024 logging_writer.py:35] [4680] valid_accuracy=0.971000, valid_loss=0.096360
I1011 13:38:08.500287 140122186418048 checkpoints.py:120] Saving checkpoint at step: 4680
I1011 13:38:08.504111 140122186418048 checkpoints.py:149] Saved checkpoint at ./checkpoint_4680
I1011 13:38:08.504275 140122186418048 checkpoints.py:174] Removing checkpoint at ./checkpoint_3276
|
CGD_Workshop_Day_3.ipynb
|
###Markdown
CreditsThis notebook is based on this [Article](https://confusedcoders.com/data-science/deep-learning/how-to-apply-deep-learning-on-tabular-data-with-fastai) Prepare Dataset
###Code
## Get the dataset Ready ##
# Install Kaggle and make directory for kaggle
!pip install -U -q kaggle && mkdir -p ~/.kaggle
# move json file from kaggle to the kaggle directory
!cp kaggle.json ~/.kaggle/
# Download the dataset
!kaggle competitions download -c house-prices-advanced-regression-techniques
# Unzip the dataset
!unzip -q house-prices-advanced-regression-techniques -d data/
import pandas as pd
import numpy as np
from fastai import *
from fastai.tabular import *
# The zip was extracted into data/, so read the csv files from there
df_train = pd.read_csv('data/train.csv')
df_train.head()
df_test = pd.read_csv('data/test.csv')
df_test_id = df_test['Id']
display(df_train.head())
display(df_test.head())
# Mean of each numeric column, then use those means to fill missing values in the test set
df_test.mean()
df_test = df_test.fillna(value=df_test.mean())
df_train.info()
df_test.info()
# Target column, plus categorical and continuous feature names inferred from the dtypes
dep_var = 'SalePrice'
cat_names = df_train.select_dtypes(include=['object']).columns.tolist()
cont_names = df_train.select_dtypes(include=[np.number]).drop('SalePrice', axis=1).columns.tolist()
print(cat_names)
print(cont_names)
# Quick manual look at two ways of encoding a categorical column
from sklearn.preprocessing import LabelEncoder
lc = LabelEncoder()
ms_zoning_df = df_train['MSZoning'].copy()
ms_zoning_df.value_counts()
# integer codes produced by LabelEncoder
np.unique(lc.fit_transform(ms_zoning_df))
ms_zoning_df.head()
# one-hot encoding with pandas; fastai's Categorify will handle cat_names for us below
pd.get_dummies(ms_zoning_df)
###Output
_____no_output_____
###Markdown
Packaging the dataset
###Code
print("Categorical columns are : ", cat_names)
print('Continuous numerical columns are :', cont_names)
# Preprocessing steps: fill missing values, encode categoricals, normalise continuous columns
procs = [FillMissing, Categorify, Normalize]
# Test tabularlist
test = TabularList.from_df(df_test, cat_names=cat_names, cont_names=cont_names, procs=procs)
# Train data bunch
data = (TabularList.from_df(df_train, path='.', cat_names=cat_names, cont_names=cont_names, procs=procs)
.split_by_rand_pct(valid_pct = 0.2, seed = 42)
.label_from_df(cols = dep_var, label_cls = FloatList, log = True )
.add_test(test)
.databunch())
# Create deep learning model
learn = tabular_learner(data, layers=[200,100], metrics=[rmse,mae])
# select the appropriate learning rate
learn.lr_find()
# we typically find the point where the slope is steepest
learn.recorder.plot()
# Fit the model based on selected learning rate
learn.fit_one_cycle(15, max_lr=1e-1)
learn.summary()
# Get predictions for the test set; the model predicts log(SalePrice),
# so exponentiate to get back to actual prices
preds, targets = learn.get_preds(DatasetType.Test)
a = preds[0]
np.exp(a[0].data.item())
# get predictions
preds, targets = learn.get_preds(DatasetType.Test)
labels = [np.exp(p[0].data.item()) for p in preds]
labels
# create submission file to submit in Kaggle competition
submission = pd.DataFrame({'Id': df_test_id, 'SalePrice': labels})
submission.to_csv('submission.csv', index=False)
submission.head()
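# (Not in the original notebook.) One possible way to submit straight from the notebook,
# assuming the kaggle CLI configured in the first cell is still authenticated:
!kaggle competitions submit -c house-prices-advanced-regression-techniques -f submission.csv -m "fastai tabular baseline"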
###Output
_____no_output_____
###Markdown
Using machine learning
###Code
display(df_train.head())
# Fraction of missing values per column, sorted from most to least missing
display(df_train.isna().sum().sort_values(ascending=False)/len(df_train))
# display(df_test.isna().sum())
# Most frequent non-missing value of PoolQC (a mostly-missing column)
df_train['PoolQC'].mode()
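# (Not in the original notebook.) A minimal classical-ML sketch to compare against the
# fastai model above: one-hot encode, impute medians and cross-validate a random forest
# on log(SalePrice). Assumes scikit-learn >= 0.22 for the RMSE scorer.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X = pd.get_dummies(df_train.drop(columns=['Id', 'SalePrice']))
X = X.fillna(X.median())
y = np.log(df_train['SalePrice'])

rf = RandomForestRegressor(n_estimators=200, random_state=42)
# scores are negative RMSE on log prices, i.e. roughly the Kaggle leaderboard metric
cv_rmse = -cross_val_score(rf, X, y, cv=5, scoring='neg_root_mean_squared_error')
print(cv_rmse.mean())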
###Output
_____no_output_____
|
docs/examples/driver_examples/Qcodes example with Lakeshore 325.ipynb
|
###Markdown
 QCoDeS Example with Lakeshore 325Provided here is an example session with model 325 of the Lakeshore temperature controller.
###Code
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from qcodes.instrument_drivers.Lakeshore.Model_325 import Model_325
lake = Model_325("lake", "GPIB0::12::INSTR")
###Output
Connected to: LSCI 325 (serial:LSA2251, firmware:1.8/1.1) in 0.15s
###Markdown
Sensor commands
###Code
# Check that the sensor is in the correct status
lake.sensor_A.status()
# What temperature is it reading?
lake.sensor_A.temperature()
lake.sensor_A.temperature.unit
# We can access the sensor objects through the sensor list as well
assert lake.sensor_A is lake.sensor[0]
###Output
_____no_output_____
###Markdown
Heater commands
###Code
# In a closed loop configuration, heater 1 reads from...
lake.heater_1.input_channel()
lake.heater_1.unit()
# Get the PID values
print("P = ", lake.heater_1.P())
print("I = ", lake.heater_1.I())
print("D = ", lake.heater_1.D())
# Is the heater on?
lake.heater_1.output_range()
###Output
_____no_output_____
###Markdown
Loading and updating sensor calibration values
###Code
curve = lake.sensor_A.curve
curve_data = curve.get_data()
curve_data.keys()
fig, ax = plt.subplots()
ax.plot(curve_data["Temperature (K)"], curve_data['log Ohm'], '.')
plt.show()
curve.curve_name()
curve_x = lake.curve[23]
curve_x_data = curve_x.get_data()
curve_x_data.keys()
temp = np.linspace(0, 100, 200)
new_data = {"Temperature (K)": temp, "log Ohm": 1/(temp+1)+2}
fig, ax = plt.subplots()
ax.plot(new_data["Temperature (K)"], new_data["log Ohm"], '.')
plt.show()
curve_x.format("log Ohm/K")
curve_x.set_data(new_data)
curve_x.format()
curve_x_data = curve_x.get_data()
fig, ax = plt.subplots()
ax.plot(curve_x_data["Temperature (K)"], curve_x_data['log Ohm'], '.')
plt.show()
###Output
_____no_output_____
###Markdown
Go to a set point
###Code
import time
import numpy
from IPython.display import display
from ipywidgets import interact, widgets
from matplotlib import pyplot as plt
def live_plot_temperature_reading(channel_to_read, read_period=0.2, n_reads=1000):
"""
Live plot the temperature reading from a Lakeshore sensor channel
Args:
channel_to_read
Lakeshore channel object to read the temperature from
read_period
time in seconds between two reads of the temperature
n_reads
total number of reads to perform
"""
# Make a widget for a text display that is contantly being updated
text = widgets.Text()
display(text)
fig, ax = plt.subplots(1)
line, = ax.plot([], [], '*-')
ax.set_xlabel('Time, s')
ax.set_ylabel(f'Temperature, {channel_to_read.temperature.unit}')
fig.show()
plt.ion()
for i in range(n_reads):
time.sleep(read_period)
# Update the text field
text.value = f'T = {channel_to_read.temperature()}'
# Add new point to the data that is being plotted
line.set_ydata(numpy.append(line.get_ydata(), channel_to_read.temperature()))
line.set_xdata(numpy.arange(0, len(line.get_ydata()), 1)*read_period)
ax.relim() # Recalculate limits
ax.autoscale_view(True, True, True) # Autoscale
fig.canvas.draw() # Redraw
lake.heater_1.control_mode("Manual PID")
lake.heater_1.output_range("Low (2.5W)")
lake.heater_1.input_channel("A")
# The following seem to be good settings for our setup
lake.heater_1.P(400)
lake.heater_1.I(40)
lake.heater_1.D(10)
lake.heater_1.setpoint(15.0) # <- temperature
live_plot_temperature_reading(lake.sensor_a, n_reads=400)
###Output
_____no_output_____
###Markdown
Lakeshore 325 driver exampleHere provided is an example session with model 325 of the Lakeshore temperature controller
###Code
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from qcodes.instrument_drivers.Lakeshore.Model_325 import Model_325
lake = Model_325("lake", "GPIB0::12::INSTR")
###Output
Connected to: LSCI 325 (serial:LSA2251, firmware:1.8/1.1) in 0.15s
###Markdown
Sensor commands
###Code
# Check that the sensor is in the correct status
lake.sensor_A.status()
# What temperature is it reading?
lake.sensor_A.temperature()
lake.sensor_A.temperature.unit
# We can access the sensor objects through the sensor list as well
assert lake.sensor_A is lake.sensor[0]
###Output
_____no_output_____
###Markdown
Heater commands
###Code
# In a closed loop configuration, heater 1 reads from...
lake.heater_1.input_channel()
lake.heater_1.unit()
# Get the PID values
print("P = ", lake.heater_1.P())
print("I = ", lake.heater_1.I())
print("D = ", lake.heater_1.D())
# Is the heater on?
lake.heater_1.output_range()
###Output
_____no_output_____
###Markdown
Loading and updating sensor calibration values
###Code
curve = lake.sensor_A.curve
curve_data = curve.get_data()
curve_data.keys()
fig, ax = plt.subplots()
ax.plot(curve_data["Temperature (K)"], curve_data['log Ohm'], '.')
plt.show()
curve.curve_name()
curve_x = lake.curve[23]
curve_x_data = curve_x.get_data()
curve_x_data.keys()
temp = np.linspace(0, 100, 200)
new_data = {"Temperature (K)": temp, "log Ohm": 1/(temp+1)+2}
fig, ax = plt.subplots()
ax.plot(new_data["Temperature (K)"], new_data["log Ohm"], '.')
plt.show()
curve_x.format("log Ohm/K")
curve_x.set_data(new_data)
curve_x.format()
curve_x_data = curve_x.get_data()
fig, ax = plt.subplots()
ax.plot(curve_x_data["Temperature (K)"], curve_x_data['log Ohm'], '.')
plt.show()
###Output
_____no_output_____
###Markdown
Go to a set point
###Code
import time
import numpy
from IPython.display import display
from ipywidgets import interact, widgets
from matplotlib import pyplot as plt
def live_plot_temperature_reading(channel_to_read, read_period=0.2, n_reads=1000):
"""
Live plot the temperature reading from a Lakeshore sensor channel
Args:
channel_to_read
Lakeshore channel object to read the temperature from
read_period
time in seconds between two reads of the temperature
n_reads
total number of reads to perform
"""
# Make a widget for a text display that is contantly being updated
text = widgets.Text()
display(text)
fig, ax = plt.subplots(1)
line, = ax.plot([], [], '*-')
ax.set_xlabel('Time, s')
ax.set_ylabel(f'Temperature, {channel_to_read.temperature.unit}')
fig.show()
plt.ion()
for i in range(n_reads):
time.sleep(read_period)
# Update the text field
text.value = f'T = {channel_to_read.temperature()}'
# Add new point to the data that is being plotted
line.set_ydata(numpy.append(line.get_ydata(), channel_to_read.temperature()))
line.set_xdata(numpy.arange(0, len(line.get_ydata()), 1)*read_period)
ax.relim() # Recalculate limits
ax.autoscale_view(True, True, True) # Autoscale
fig.canvas.draw() # Redraw
lake.heater_1.control_mode("Manual PID")
lake.heater_1.output_range("Low (2.5W)")
lake.heater_1.input_channel("A")
# The following seem to be good settings for our setup
lake.heater_1.P(400)
lake.heater_1.I(40)
lake.heater_1.D(10)
lake.heater_1.setpoint(15.0) # <- temperature
live_plot_temperature_reading(lake.sensor_a, n_reads=400)
###Output
_____no_output_____
###Markdown
Lakeshore 325 driver exampleHere provided is an example session with model 325 of the Lakeshore temperature controller
###Code
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from qcodes.instrument_drivers.Lakeshore.Model_325 import Model_325
lake = Model_325("lake", "GPIB0::12::INSTR")
###Output
Connected to: LSCI 325 (serial:LSA2251, firmware:1.8/1.1) in 0.15s
###Markdown
Sensor commands
###Code
# Check that the sensor is in the correct status
lake.sensor_A.status()
# What temperature is it reading?
lake.sensor_A.temperature()
lake.sensor_A.temperature.unit
# We can access the sensor objects through the sensor list as well
assert lake.sensor_A is lake.sensor[0]
###Output
_____no_output_____
###Markdown
Heater commands
###Code
# In a closed loop configuration, heater 1 reads from...
lake.heater_1.input_channel()
lake.heater_1.unit()
# Get the PID values
print("P = ", lake.heater_1.P())
print("I = ", lake.heater_1.I())
print("D = ", lake.heater_1.D())
# Is the heater on?
lake.heater_1.output_range()
###Output
_____no_output_____
###Markdown
Loading and updating sensor calibration values
###Code
curve = lake.sensor_A.curve
curve_data = curve.get_data()
curve_data.keys()
fig, ax = plt.subplots()
ax.plot(curve_data["Temperature (K)"], curve_data['log Ohm'], '.')
plt.show()
curve.curve_name()
curve_x = lake.curve[23]
curve_x_data = curve_x.get_data()
curve_x_data.keys()
temp = np.linspace(0, 100, 200)
new_data = {"Temperature (K)": temp, "log Ohm": 1/(temp+1)+2}
fig, ax = plt.subplots()
ax.plot(new_data["Temperature (K)"], new_data["log Ohm"], '.')
plt.show()
curve_x.format("log Ohm/K")
curve_x.set_data(new_data)
curve_x.format()
curve_x_data = curve_x.get_data()
fig, ax = plt.subplots()
ax.plot(curve_x_data["Temperature (K)"], curve_x_data['log Ohm'], '.')
plt.show()
###Output
_____no_output_____
###Markdown
Go to a set point
###Code
import time
import numpy
from IPython.display import display
from ipywidgets import interact, widgets
from matplotlib import pyplot as plt
def live_plot_temperature_reading(channel_to_read, read_period=0.2, n_reads=1000):
"""
Live plot the temperature reading from a Lakeshore sensor channel
Args:
channel_to_read
Lakeshore channel object to read the temperature from
read_period
time in seconds between two reads of the temperature
n_reads
total number of reads to perform
"""
# Make a widget for a text display that is contantly being updated
text = widgets.Text()
display(text)
fig, ax = plt.subplots(1)
line, = ax.plot([], [], '*-')
ax.set_xlabel('Time, s')
ax.set_ylabel(f'Temperature, {channel_to_read.temperature.unit}')
fig.show()
plt.ion()
for i in range(n_reads):
time.sleep(read_period)
# Update the text field
text.value = f'T = {channel_to_read.temperature()}'
# Add new point to the data that is being plotted
line.set_ydata(numpy.append(line.get_ydata(), channel_to_read.temperature()))
line.set_xdata(numpy.arange(0, len(line.get_ydata()), 1)*read_period)
ax.relim() # Recalculate limits
ax.autoscale_view(True, True, True) # Autoscale
fig.canvas.draw() # Redraw
lake.heater_1.control_mode("Manual PID")
lake.heater_1.output_range("Low (2.5W)")
lake.heater_1.input_channel("A")
# The following seem to be good settings for our setup
lake.heater_1.P(400)
lake.heater_1.I(40)
lake.heater_1.D(10)
lake.heater_1.setpoint(15.0) # <- temperature
live_plot_temperature_reading(lake.sensor_a, n_reads=400)
###Output
_____no_output_____
###Markdown
QCoDeS Example with Lakeshore 325. Here provided is an example session with model 325 of the Lakeshore temperature controller
###Code
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from qcodes.instrument_drivers.Lakeshore.Model_325 import Model_325
lake = Model_325("lake", "GPIB0::12::INSTR")
###Output
Connected to: LSCI 325 (serial:LSA2251, firmware:1.8/1.1) in 1.30s
###Markdown
Sensor commands
###Code
# Check that the sensor is in the correct status
lake.sensor_A.status()
# What temperature is it reading?
lake.sensor_A.temperature()
lake.sensor_A.temperature.unit
# We can access the sensor objects through the sensor list as well
assert lake.sensor_A is lake.sensor[0]
###Output
_____no_output_____
###Markdown
Heater commands
###Code
# In a closed loop configuration, heater 1 reads from...
lake.heater_1.input_channel()
lake.heater_1.unit()
# Get the PID values
print("P = ", lake.heater_1.P())
print("I = ", lake.heater_1.I())
print("D = ", lake.heater_1.D())
# Is the heater on?
lake.heater_1.output_range()
###Output
_____no_output_____
###Markdown
Loading and updating sensor calibration values
###Code
curve = lake.sensor_A.curve
curve_data = curve.get_data()
curve_data.keys()
fig, ax = plt.subplots()
ax.plot(curve_data["Temperature (K)"], curve_data['log Ohm'], '.')
plt.show()
curve.curve_name()
curve_x = lake.curve[23]
curve_x_data = curve_x.get_data()
curve_x_data.keys()
temp = np.linspace(0, 100, 200)
new_data = {"Temperature (K)": temp, "log Ohm": 1/(temp+1)+2}
fig, ax = plt.subplots()
ax.plot(new_data["Temperature (K)"], new_data["log Ohm"], '.')
plt.show()
curve_x.format("log Ohm/K")
curve_x.set_data(new_data)
curve_x.format()
curve_x_data = curve_x.get_data()
fig, ax = plt.subplots()
ax.plot(curve_x_data["Temperature (K)"], curve_x_data['log Ohm'], '.')
plt.show()
###Output
_____no_output_____
###Markdown
Go to a set point
###Code
import time
import numpy
from IPython.display import display
from ipywidgets import interact, widgets
from matplotlib import pyplot as plt
def live_plot_temperature_reading(channel_to_read, read_period=0.2, n_reads=1000):
"""
Live plot the temperature reading from a Lakeshore sensor channel
Args:
channel_to_read
Lakeshore channel object to read the temperature from
read_period
time in seconds between two reads of the temperature
n_reads
total number of reads to perform
"""
    # Make a widget for a text display that is constantly being updated
text = widgets.Text()
display(text)
fig, ax = plt.subplots(1)
line, = ax.plot([], [], '*-')
ax.set_xlabel('Time, s')
ax.set_ylabel(f'Temperature, {channel_to_read.temperature.unit}')
fig.show()
plt.ion()
for i in range(n_reads):
time.sleep(read_period)
# Update the text field
text.value = f'T = {channel_to_read.temperature()}'
# Add new point to the data that is being plotted
line.set_ydata(numpy.append(line.get_ydata(), channel_to_read.temperature()))
line.set_xdata(numpy.arange(0, len(line.get_ydata()), 1)*read_period)
ax.relim() # Recalculate limits
ax.autoscale_view(True, True, True) # Autoscale
fig.canvas.draw() # Redraw
lake.heater_1.control_mode("Manual PID")
lake.heater_1.output_range("Low (2.5W)")
lake.heater_1.input_channel("A")
# The following seem to be good settings for our setup
lake.heater_1.P(400)
lake.heater_1.I(40)
lake.heater_1.D(10)
lake.heater_1.setpoint(15.0) # <- temperature
live_plot_temperature_reading(lake.sensor_A, n_reads=400)
###Output
_____no_output_____
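###Markdown
For scripts that need to block until the controller has settled, a small helper can poll the sensor against the heater set point. This is only a sketch: the tolerance, polling period and timeout are arbitrary choices, and it uses only the `setpoint` and `temperature` parameters already shown above.
###Code
import time

def wait_for_setpoint(sensor, heater, tolerance=0.1, poll_period=1.0, timeout=600):
    """Block until the sensor reading is within `tolerance` K of the heater set point."""
    target = heater.setpoint()
    t_start = time.time()
    while abs(sensor.temperature() - target) > tolerance:
        if time.time() - t_start > timeout:
            raise TimeoutError(f"Set point {target} K not reached within {timeout} s")
        time.sleep(poll_period)
    return sensor.temperature()

# example usage (assumes the heater was configured as in the cell above):
# wait_for_setpoint(lake.sensor_A, lake.heater_1)
###Output
_____no_output_____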
###Markdown
Querying the resistance and heater output
###Code
# to get the resistance of the system (25 or 50 Ohm)
lake.heater_1.resistance()
# to set the resistance of the system (25 or 50 Ohm)
lake.heater_1.resistance(50)
lake.heater_1.resistance()
# output in percent (%) of current or power, depending on setting, which can be queried by lake.heater_1.output_metric()
lake.heater_1.heater_output() # in %, 50 means 50%
###Output
_____no_output_____
|
_build/jupyter_execute/contents/tools/decay.ipynb
|
###Markdown
Simulating Mass Budget _(The contents presented in this section were re-developed principally by Dr. P. K. Yadav. The original, spreadsheet-based tool was developed by Prof. Rudolf Liedl.)_ How to use the tool? 1. Go to Binder by clicking the rocket button (top-right of the page). 2. Execute the code cell. 3. Change the values of the different quantities in the box. This tool can also be downloaded and run locally. For that, download the _decay.ipynb_ file and run it in any editor (e.g., JUPYTER notebook, JUPYTER lab) that can read and execute this file type. The code may also be executed on the book page. The codes are licensed under CC BY 4.0 [(use anyways, but acknowledge the original work)](https://creativecommons.org/licenses/by/4.0/deed.en)
###Code
# Used library
import numpy as np # for calculation
import matplotlib.pyplot as plt # for plots
import pandas as pd # for table
import ipywidgets as widgets # for widgets
# The main function
def mass_bal(n_simulation, MA, MB, MC, R_A, R_B):
    A = np.zeros(n_simulation) # create an array with zeros
B = np.zeros(n_simulation)
C = np.zeros(n_simulation)
time = np.arange(n_simulation)
for i in range(0,n_simulation-1):
A[0] = MA # starting input value
B[0] = MB
C[0] = MC
A[i+1] = A[i]-R_A*A[i]
B[i+1] = B[i]+R_A*A[i]-R_B*B[i]
C[i+1] = C[i]+R_B*B[i]
summ = A[i]+B[i]+C[i]
d = {"Mass_A": A, "Mass_B": B, "Mass_C": C, "Total Mass": summ}
df = pd.DataFrame(d) # Generating result table
label = ["Mass A (g)", "Mass B (g)", "Mass C (g)"]
fig = plt.figure(figsize=(6,4))
plt.plot(time, A, time, B, time, C, linewidth=3); # plotting the results
plt.xlabel("Time [Time Unit]"); plt.ylabel("Mass [g]") # placing axis labels
plt.legend(label, loc=0);plt.grid(); plt.xlim([0,n_simulation]); plt.ylim(bottom=0) # legends, grids, x,y limits
plt.show() # display plot
return print(df.round(2))
# Widgets and interactive
N = widgets.BoundedIntText(value=20,min=0,max=100,step=1,description= 'Δ t (day)',disabled=False)
A = widgets.BoundedFloatText(value=100,min=0,max=1000.0,step=1,description='M<sub>A</sub> (kg)',disabled=False)
B = widgets.BoundedFloatText(value=5,min=0,max=1000.0,step=1,description='M<sub>B</sub> (kg)',disabled=False)
C = widgets.BoundedFloatText(value=10,min=0,max=1000,step=0.1,description='M<sub>C</sub> (kg)',disabled=False)
RA = widgets.BoundedFloatText(value=0.2,min=0,max=100,step=0.1,description='R<sub>A</sub> (day<sup>-1 </sup>)',disabled=False)
RB = widgets.BoundedFloatText(value=0.2,min=0,max=100,step=0.1,description='R<sub>B</sub> (day<sup>-1 </sup>)',disabled=False)
interactive_plot = widgets.interactive(mass_bal, n_simulation = N, MA=A, MB=B, MC=C, R_A=RA, R_B=RB,)
output = interactive_plot.children[-1]
#output.layout.height = '350px'
interactive_plot
###Output
_____no_output_____
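###Markdown
For reference, the update rule used in `mass_bal` above, $A_{i+1}=A_i-R_A A_i$, $B_{i+1}=B_i+R_A A_i-R_B B_i$, $C_{i+1}=C_i+R_B B_i$, is a forward-Euler discretisation (with a unit time step) of the sequential decay system $$\frac{dM_A}{dt}=-R_A M_A,\qquad \frac{dM_B}{dt}=R_A M_A-R_B M_B,\qquad \frac{dM_C}{dt}=R_B M_B,$$ whose total mass $M_A+M_B+M_C$ stays constant, which is exactly what the "Total Mass" column is meant to check.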
|
notebooks/tf2-mnist-cnn.ipynb
|
###Markdown
MNIST handwritten digits classification with CNNs. In this notebook, we'll train a convolutional neural network (CNN, ConvNet) to classify MNIST digits using **Tensorflow** (version $\ge$ 2.0 required) with the **Keras API**. This notebook builds on the MNIST-MLP notebook, so the recommended order is to go through the MNIST-MLP notebook before starting with this one. First, the needed imports.
###Code
%matplotlib inline
from pml_utils import show_failures
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model, to_categorical
from distutils.version import LooseVersion as LV
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
print('Using Tensorflow version: {}, and Keras version: {}.'.format(tf.__version__, tf.keras.__version__))
assert(LV(tf.__version__) >= LV("2.0.0"))
from tensorflow.keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
nb_classes = 10
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# one-hot encoding:
Y_train = to_categorical(y_train, nb_classes)
Y_test = to_categorical(y_test, nb_classes)
print()
print('MNIST data loaded: train:',len(X_train),'test:',len(X_test))
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('Y_train:', Y_train.shape)
###Output
_____no_output_____
###Markdown
We'll have to do a bit of tensor manipulation: the images need an explicit single-channel dimension for the convolutional layers.
###Code
# input image dimensions
img_rows, img_cols = 28, 28
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
print('X_train:', X_train.shape)
###Output
_____no_output_____
###Markdown
Initialization. Now we are ready to create a convolutional model. * The `Conv2D` layers operate on 2D matrices so we input the digit images directly to the model. * The `MaxPooling2D` layer reduces the spatial dimensions, that is, makes the image smaller. * The `Flatten` layer flattens the 2D matrices into vectors, so we can then switch to `Dense` layers as in the MLP model. See https://keras.io/layers/convolutional/, https://keras.io/layers/pooling/ for more information.
###Code
# number of convolutional filters to use
nb_filters = 32
# convolution kernel size
kernel_size = (3, 3)
# size of pooling area for max pooling
pool_size = (2, 2)
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(nb_filters, kernel_size,
padding='valid',
activation ='relu')(inputs)
x = layers.Conv2D(nb_filters, kernel_size,
padding='valid',
activation ='relu')(x)
x = layers.MaxPooling2D(pool_size=pool_size)(x)
x = layers.Dropout(0.25)(x)
x = layers.Flatten()(x)
x = layers.Dense(units=128, activation ='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units=nb_classes,
activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs,
name="cnn_model")
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(model.summary())
plot_model(model, show_shapes=True)
###Output
_____no_output_____
###Markdown
Learning. Now let's train the CNN model. This is a relatively complex model, so training is considerably slower than with MLPs.
###Code
%%time
epochs = 5 # one epoch takes about 45 seconds
history = model.fit(X_train,
Y_train,
epochs=epochs,
batch_size=128,
verbose=2)
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['loss'])
plt.title('loss')
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['accuracy'])
plt.title('accuracy');
###Output
_____no_output_____
###Markdown
Inference. With enough training epochs, the test accuracy should exceed 99%. You can compare your result with the state-of-the-art [here](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html). Even more results can be found [here](http://yann.lecun.com/exdb/mnist/).
###Code
%%time
scores = model.evaluate(X_test, Y_test, verbose=2)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
###Output
_____no_output_____
###Markdown
We can now take a closer look at the results using the `show_failures()` helper function. Here are the first 10 test digits the CNN classified to a wrong class:
###Code
predictions = model.predict(X_test)
show_failures(predictions, y_test, X_test)
###Output
_____no_output_____
###Markdown
We can use `show_failures()` to inspect failures in more detail. For example, here are failures in which the true class was "6":
###Code
show_failures(predictions, y_test, X_test, trueclass=6)
###Output
_____no_output_____
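###Markdown
Beyond inspecting individual failures, a confusion matrix gives a quick overview of which digits get mixed up with which. This is an optional addition: it assumes scikit-learn is available and reuses the `predictions` computed above.
###Code
from sklearn.metrics import confusion_matrix

predicted_classes = np.argmax(predictions, axis=1)
cm = confusion_matrix(y_test, predicted_classes)

plt.figure(figsize=(6, 5))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('predicted class')
plt.ylabel('true class')
plt.title('MNIST CNN confusion matrix');
###Output
_____no_output_____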
|
rnns.ipynb
|
###Markdown
Note that a Bidirectional layer doubles the output size of the LSTM layer: if you enter 64 units, you end up with 128 for that layer, since there are 64 going forward and 64 going backward. CNNs and RNNs
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

# vocab_size, embedding_dim and max_length are assumed to be defined earlier
model = Sequential([
    Embedding(vocab_size, embedding_dim, input_length=max_length),
    Conv1D(128, 5, activation='relu'),
    GlobalMaxPooling1D(),
    Dense(24, activation='relu'),
    Dense(1, activation='sigmoid')  # sigmoid (rather than relu) for a binary output
])
###Output
_____no_output_____
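###Markdown
A sketch of how a model like the one above would typically be compiled and trained for a binary text-classification task. The data variables (`training_padded`, `training_labels`, `testing_padded`, `testing_labels`) are placeholders that are not defined in this notebook.
###Code
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# hypothetical padded sequences and binary labels, prepared elsewhere
history = model.fit(training_padded, training_labels,
                    epochs=10,
                    validation_data=(testing_padded, testing_labels),
                    verbose=2)
###Output
_____no_output_____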
###Markdown
Recurrent networks* [Unreasonable effectiveness of RNN](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) (Andrej Karpathy)* [Official PyTorch code](https://github.com/pytorch/examples/tree/master/word_language_model)--- Theory: working with sequential data. For analysing sequential data (sound, music, text, the bitcoin price, sports statistics, chess moves, game states in Dota) we use dedicated architectures that rely on a "memory" to process inputs of arbitrary length. Suppose we have some function of two vector arguments $f(x, h)$ (a neural network with trainable parameters is also such a function) and some sequence of inputs $\{x_1, x_2, \ldots, x_n\}$. We build the sequence $\{h_1, h_2, \ldots, h_n\}$ by the rule $h_t = f(x_t, h_{t-1})$ (with $h_0$ initialised to something). The resulting $h_i$ are then used for something useful. If we unroll all of this, we actually get an ordinary static computational graph with a lot of weight sharing whose output is the $n$ hidden states. Architectures of this kind are called recurrent networks. Vanishing gradients. Deep networks are very hard to train, and a recurrent network is essentially a very deep static network whose input is simply fed in at different depths. Such a network has a very hard time capturing a dependency between items that are far apart. Memory mechanisms were invented to fight this. (Of the two diagrams: the top one is the plain cell, the bottom one, with the memory lane, is the one we want.) Imagine a conveyor belt moving along our sequence: information can jump onto the belt, ride forward and jump off when it is needed. An LSTM (long short-term memory) cell is the block that decides which information should jump on; it lets information be kept until the later time when it is needed. It consists of several "gates", each of which is a trainable matrix. The gates decide what can be forgotten, what can be added, and what in the current input matters right now: they compute masks (vectors passed through a squashing nonlinearity) by which the data are multiplied. We simply use this LSTM cell as $f$. Nothing essential changes: the graph still unrolls into a static one, only a more complicated one. Embeddings. Neural networks do not take raw text as input; they work with vectors. Usually the text is split (*tokenised*) into small pieces (characters, words, subword units), and then each token is naturally replaced by a one-hot vector with the dimensionality of the vocabulary (all zeros except for the position corresponding to the token index). But what happens if we multiply a one-hot vector by a matrix? We simply get one row of the matrix, so instead of performing the multiplication we can insert that row directly; this operation is called an embedding. Now we can associate a trainable vector with every unique token. These vectors are meaningful and have many uses: once trained on a real task they become very informative, synonyms end up with very close vectors, and algebra like "king - man + woman = queen" works with them. That said, the geometric interpretation has not found many practical applications yet. Also check out this game: https://research.google.com/semantris Practice: language models. You need to code the selection-round assignment, only taken to the max.
A reminder: a **Markov process** is a stochastic process whose evolution at each step depends only on its current state and not on the preceding history. Natural language, music and the like can also be modelled as a Markov process where the state is everything generated so far. Language models are approximators of such a Markov process, and one way to implement one is a recurrent network trained to predict the next token from all the previous ones. Language models are used in a great many places; they are one of the central topics in NLP.* You can build dialogue systems with them: feed in the previous message and generate the reply until a stop token.* Google search suggestions are built on roughly the same idea.* Autocorrect and spell checking can be done by finding "strange" tokens.* They can be thrown into an ensemble for machine translation, for example.* Text compression is based on language models (although not neural ones, because those are heavy themselves). If we know the distribution of the data there is a way to compress it efficiently, called arithmetic coding.* The 2018 trend: build cool things out of the hidden layers of a language model (see ELMo, ULMFiT, OpenAI GPT, BERT). Preprocessing. Take some raw data: Wikipedia, "Harry Potter", "Game of Thrones", Tinkov's tweets, whatever you like.
###Code
!cat a1.txt >> source.txt
!cat a2.txt >> source.txt
!cat a3.txt >> source.txt
!apt-get install -y -qq software-properties-common module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive
!google-drive-ocamlfuse drive
raw_text = ''
with open('drive/rnn/source.txt', 'r', encoding='windows-1251') as file:
raw_text = file.read().lower()
print('ok', len(raw_text))
raw_text = raw_text[275:-275]
len(raw_text)
def clean(text):
res = []
was = False
for el in text:
if el.isalpha():
res.append(el)
was = False
else:
if not was:
res.append(' ')
was = True
return res
print(raw_text[:100])
print(''.join(clean(raw_text[:100])))
text = clean(raw_text)
print(set(text))
###Output
ик ротфусс
«имя ветра»
моей матери, которая научила меня любить книги и открыла мне двери в нарнию
ик ротфусс имя ветра моей матери которая научила меня любить книги и открыла мне двери в нарнию
{'i', 'з', 'ё', 'м', 'ш', 'h', 'l', 'л', 'm', 'b', ' ', 'д', 'и', 'w', 'f', 'ъ', 'c', 'в', 'т', 'd', 'p', 'a', 'ф', 't', 'ь', 'с', 'u', 'ч', 'e', 'п', 'n', 'g', 'о', 'р', 'й', 'а', 'г', 'v', 'ю', 'r', 'k', 'б', 'к', 's', 'э', 'ц', 'щ', 'y', 'ж', 'н', 'у', 'я', 'е', 'ы', 'х', 'o'}
###Markdown
Recall how you wrote a language model for the selection round. Do the same tokenisation: assign every distinct character its own index. It is convenient to store this in a plain Python dict (`char2idx`). For generation you will also need the inverse dict (`idx2char`). It is also nice to write a separate class that does the tokenisation and detokenisation.
###Code
class Vocab:
def __init__(self, data):
self.char2idx = {}
self.idx2char = {}
for c in data:
if c not in self.char2idx:
self.char2idx[c] = len(self.idx2char)
self.idx2char[len(self.char2idx) - 1] = c
def tokenize(self, sequence):
res = []
for el in sequence:
dummy = [0] * len(self)
dummy[self.char2idx[el]] = 1
res.append(dummy)
return res
def detokenize(self, sequence):
res = []
for el in sequence:
mx = 0
for i in range(len(el)):
if el[i] > el[mx]:
mx = i
res.append(self.idx2char[mx])
return res
def __len__(self):
return len(self.char2idx)
voc = Vocab(text)
print(''.join(voc.detokenize(voc.tokenize(text[100:150]))))
import sys
sys.version
!pip3 install http://download.pytorch.org/whl/cu92/torch-0.4.1-cp36-cp36m-linux_x86_64.whl
!pip3 install torchvision
import torch
from torch import nn
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt  # import the module itself, not just the plot function
from torch import tensor
import torch.nn.functional as F
from torch.distributions.one_hot_categorical import OneHotCategorical
torch.cuda.device_count()
device = torch.device('cpu' if torch.cuda.device_count() == 0 else 'cuda:0')
#text = text[:60000]
device
len(text)
class TextData(Dataset):
def __init__(self, text, max_length):
print("TextData", len(text), max_length)
self.data = text
self.max_length = max_length
def __getitem__(self, ind):
return (tensor(self.data[ind:ind + self.max_length], dtype=torch.float),
tensor(self.data[ind + 1:ind + self.max_length + 1], dtype=torch.long))
def __len__(self):
return len(self.data) - self.max_length
cpu = torch.device('cpu')
def validate(model):
with torch.no_grad():
good = 0
total = 0
model.eval()
for (x, real) in validator:
x = x.transpose(0, 1)
y = model(x)
target = torch.max(real.transpose(0, 1), dim=2)[1].to(device).reshape(-1)
have = torch.max(y, dim=2)[1].reshape(-1)
#print(target.shape, have.shape)
total += target.shape[0]
#print(type(target == have), (target == have).shape)
good += (target == have).sum()
#print(good, total)
return float(good) / float(total)
###Output
_____no_output_____
###Markdown
The model. Something roughly like this should work:* an embedding* LSTM / GRU* dropout* a linear layer* softmax. Given a prefix, you need to learn to predict the probabilities of the next token, so essentially this is a classification task. You could do it exactly that way, but it is inefficient: with a context of size 50 you spend a lot of computation just to predict one token at the very end. Instead you can predict all 50 tokens at once. To do this, maintain a hidden state, predict the next token from it on the fly and compute the classification loss on that prediction, then feed the ground-truth token into the model as the next input to get a new hidden state. This way you effectively do 50 classifications for roughly the same amount of computation, and the model trains much faster. This trick is called **teacher forcing**. Training* Sample fixed-length sequences from your corpus; you can either pre-cut them or write a generator.* Use teacher forcing.* The target of the model is the one-hot input shifted by one position.* Loss function: cross-entropy.* Do not forget to monitor both the validation and the train metrics.
###Code
class Model(nn.Module):
def __init__(self, **kwarg):
self.embed_length = kwarg['embeding']
self.vocab = kwarg['vocab_size']
self.hidd_size = self.embed_length
super(Model, self).__init__()
self.embed = nn.Linear(self.vocab, self.embed_length)
self.lstm = nn.LSTM(input_size=self.embed_length, hidden_size=self.hidd_size, num_layers=2)
self.final = nn.Linear(self.embed_length, self.vocab)
def get_hidden(self, batch_size):
hidden = torch.zeros(2, batch_size, self.hidd_size, device=device)
cell_states = torch.zeros(2, batch_size, self.hidd_size, device=device)
return (hidden, cell_states)
def forward(self, X):
batch_size = X.shape[1]
length = X.shape[0]
X_D = tensor(X).to(device)
X1 = self.embed(X_D.view(-1, self.vocab))#.to(device)
X1 = X1.view(length, batch_size, -1)
hidden, cell_states = self.get_hidden(batch_size)
res, (_, __) = self.lstm(X1, (hidden, cell_states))
'''for i in range(length):
out, hidden = self.lstm(X1[i].view(1, batch_size, -1), hidden.view(1, -1, 5))
preds.append(out)'''
#res = torch.stack(preds).to(device)
out = self.final(res.view(length, batch_size, -1))
out = out - out.data.max()
return F.softmax(out, dim=2)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
batch_size = 256
sequence_len = 40
learning_rate = 0.01
p = 0.33
START_VAL = int(len(text) * p)
dataset = TextData(voc.tokenize(text[:-START_VAL]), sequence_len)
val_dataset = TextData(voc.tokenize(text[-START_VAL:]), sequence_len)
loader = DataLoader(dataset, batch_size=batch_size)
validator = DataLoader(val_dataset, batch_size=batch_size)
def check_prob(probs):
assert(int(torch.max(probs.view(-1))[0]) <= 1)
from torch.nn.modules.loss import NLLLoss
criterion = NLLLoss()
model = Model(embeding=15, vocab_size=len(voc)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
losses = []
for epoch in range(15):
model.train()
ind = 0
if epoch == 4:
for g in optimizer.param_groups:
g['lr'] = 0.001
for (x, real) in loader:
model.zero_grad()
optimizer.zero_grad()
x = x.transpose(0, 1)
target = torch.max(real.transpose(0, 1), dim=-1)[1].to(device)
#print(x.shape)
y = model(x)
#check_prob(y)
loss = criterion(y.view(-1, len(voc)), target.reshape(-1))
loss.backward()
losses.append(loss.item())
#torch.nn.utils.clip_grad_norm(model.parameters(), 40)
optimizer.step()
ind += 1
if ind % 1000 == 0:
print(ind, len(loader))
print('epoch:', epoch, 'val_acc:', validate(model))
plt.title("Training loss")
plt.xlabel("iteration")
plt.ylabel("loss")
plt.plot(losses, 'b')
plt.show()
losses = []
for epoch in range(15):
model.train()
ind = 0
if epoch == 4:
for g in optimizer.param_groups:
g['lr'] = 0.001
for (x, real) in loader:
model.zero_grad()
optimizer.zero_grad()
x = x.transpose(0, 1)
target = torch.max(real.transpose(0, 1), dim=-1)[1].to(device)
#print(x.shape)
y = model(x)
loss = criterion(y.view(-1, len(voc)), target.reshape(-1))
loss.backward()
losses.append(loss.item())
#torch.nn.utils.clip_grad_norm(model.parameters(), 40)
optimizer.step()
ind += 1
if ind % 1000 == 0:
print(ind, len(loader))
print('epoch:', epoch, 'val_acc:', validate(model))
plt.title("Training loss")
plt.xlabel("iteration")
plt.ylabel("loss")
plt.plot(losses, 'b')
plt.show()
validate(model)
###Output
tensor(60974, device='cuda:0') 394400
###Markdown
Spell checker. You can turn the language model into a simple spell checker: visualise the predicted probabilities at every character. Bonus: you can average the perplexities over words and highlight whole words instead of individual characters.
###Code
from IPython.core.display import display, HTML
def print_colored(sequence, intensities, delimeter=''):
html = delimeter.join([
f'<span style="background: rgb({255}, {255-x}, {255-x})">{c}</span>'
for c, x in zip(sequence, intensities)
])
display(HTML(html))
print_colored('Налейте мне экспрессо'.split(), [0, 0, 100], ' ')
sequence = 'Эту домашку нужно сдать втечении двух недель'
intensities = [0]*len(sequence)
intensities[25] = 50
intensities[26] = 60
intensities[27] = 70
intensities[31] = 150
print_colored(sequence, intensities)
tocheck = ['деманы еще ни проснулись а профессор в у ниверситете уже заснул',
'как пройти в библиотеку',
'норм прогеры кодят нейроинтерфейсы попивая смузи и катаясь на гироскутере по дороге в барбершоп или коворкинг']
tocheck.append(text[2000:2300])
threshold = 0.02
for sent in tocheck[:]:
#print(sent)
prob = model(tensor(voc.tokenize(sent), dtype=torch.float).view(len(sent), 1, -1))
col = [0] * len(sent)
for i in range(len(sent)):
if max(prob[i][0]) - prob[i][0][voc.char2idx[sent[i]]] >= threshold:
col[i] = 100
#print(sent[i], prob[i][0][voc.char2idx[sent[i]]], max(prob[i][0]))
print_colored(sent, col)
print()
###Output
_____no_output_____
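###Markdown
The "bonus" suggested above (averaging the scores over words and highlighting whole words) could be sketched roughly like this. It reuses `model`, `voc`, `tocheck` and `print_colored` from the cells above, follows the same indexing convention as the character-level check, and the surprisal-to-colour scaling factor of 20 is an arbitrary choice.
###Code
def word_level_scores(sent, model, voc):
    # per-character surprisal (-log p) of the character that actually occurs
    with torch.no_grad():
        prob = model(tensor(voc.tokenize(sent), dtype=torch.float).view(len(sent), 1, -1))
    surprisal = [-float(torch.log(prob[i][0][voc.char2idx[sent[i]]] + 1e-9)) for i in range(len(sent))]
    # average the surprisal over each whitespace-separated word
    words, scores, start = [], [], 0
    for i, ch in enumerate(sent + ' '):
        if ch == ' ':
            if i > start:
                words.append(sent[start:i])
                scores.append(float(np.mean(surprisal[start:i])))
            start = i + 1
    return words, scores

words, scores = word_level_scores(tocheck[0], model, voc)
print_colored(words, [min(int(20 * s), 255) for s in scores], ' ')
###Output
_____no_output_____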
###Markdown
Sentence generation* Maintain the hidden state during generation; do not recompute anything more than once.* Add temperature: when sampling, all the logits (the values before the softmax) are divided by some number (1 by default, in which case nothing changes). Temperature lets you trade off diversity against plausibility (see Karpathy's blog for details).* Your implementation must accept a seed string, the prefix the generated string should start with.
###Code
import random as rd
def sample(self, length, temperature=1, seed=[]):
with torch.no_grad():
res = [tensor(el, dtype=torch.float).to(device) for el in seed]
hidden = torch.randn(2, 1, self.hidd_size).to(device)
cell_states = torch.randn(2, 1, self.hidd_size).to(device)
if len(seed) != 0:
seed = self.embed(tensor(seed, dtype=torch.float).to(device))
#print(type(seed))
for i in range(seed.shape[0]):
_, (hidden, cell_states) = self.lstm(seed[i].view(1, 1, -1), (hidden, cell_states))
print(voc.detokenize(res))
out = torch.zeros(1, 1, self.vocab, dtype=torch.float).to(device)
out[0, 0, rd.randint(1, self.vocab) - 1] = 1
for i in range(length):
out, (hidden, cell_states) = self.lstm(self.embed(out).view(1, 1, -1), (hidden, cell_states))
z = F.softmax(self.final(out).view(-1) / temperature, dim=-1)
#print(i, z, z.shape)
out = OneHotCategorical(z).sample()
#print(out, torch.max(out))
res.append(out)
print(voc.detokenize(res))
return res
model.sample = sample
print(model.sample)
print(''.join(voc.detokenize(sample(model, 20, 3, voc.tokenize('в университе')))))
print(''.join(voc.detokenize(sample(model, 20))))
print(''.join(voc.detokenize(sample(model, 20, 1, voc.tokenize('о')))))
import pickle
with open('drive/RNN/cnn-gru-2', 'wb') as file:
pickle.dump(model, file)
with open('drive/RNN/cnn-gru-2', 'rb') as file:
mdodel2 = pickle.load(file)
###Output
_____no_output_____
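###Markdown
To see what the `temperature` argument of `sample()` does, here is a tiny standalone illustration: dividing the logits by a temperature below 1 sharpens the distribution, while a temperature above 1 flattens it.
###Code
logits = torch.tensor([2.0, 1.0, 0.2])
for t in [0.5, 1.0, 2.0]:
    probs = F.softmax(logits / t, dim=-1)
    print('temperature =', t, '->', probs.numpy().round(3))
###Output
_____no_output_____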
###Markdown
RNN architectures exploration
###Code
import torch
from utils import count_params
from rnns import RNN, GRU, LSTM, BLSTM
rnn = RNN(28, 256, 2)
count_params(rnn)
rnn(torch.rand(4, 1, 28, 28).squeeze(1)).shape
for name, param in rnn.named_parameters():
print(f'{param.size()} : {name}')
gru = GRU(28, 256, 2)
count_params(gru)
gru(torch.rand(4, 1, 28, 28).squeeze(1)).shape
for name, param in gru.named_parameters():
print(f'{param.size()} : {name}')
lstm = LSTM(28, 256, 2)
count_params(lstm)
lstm(torch.rand(4, 1, 28, 28).squeeze(1)).shape
for name, param in lstm.named_parameters():
print(f'{param.size()} : {name}')
blstm = BLSTM(28, 256, 2)
count_params(blstm)
blstm(torch.rand(4, 1, 28, 28).squeeze(1)).shape
for name, param in blstm.named_parameters():
print(f'{param.size()} : {name}')
###Output
torch.Size([1024, 28]) : lstm.weight_ih_l0
torch.Size([1024, 256]) : lstm.weight_hh_l0
torch.Size([1024]) : lstm.bias_ih_l0
torch.Size([1024]) : lstm.bias_hh_l0
torch.Size([1024, 28]) : lstm.weight_ih_l0_reverse
torch.Size([1024, 256]) : lstm.weight_hh_l0_reverse
torch.Size([1024]) : lstm.bias_ih_l0_reverse
torch.Size([1024]) : lstm.bias_hh_l0_reverse
torch.Size([1024, 512]) : lstm.weight_ih_l1
torch.Size([1024, 256]) : lstm.weight_hh_l1
torch.Size([1024]) : lstm.bias_ih_l1
torch.Size([1024]) : lstm.bias_hh_l1
torch.Size([1024, 512]) : lstm.weight_ih_l1_reverse
torch.Size([1024, 256]) : lstm.weight_hh_l1_reverse
torch.Size([1024]) : lstm.bias_ih_l1_reverse
torch.Size([1024]) : lstm.bias_hh_l1_reverse
torch.Size([10, 512]) : fc.weight
torch.Size([10]) : fc.bias
|
notebooks/dev/.ipynb_checkpoints/n08_market_simulator_b-checkpoint.ipynb
|
###Markdown
This notebook is to aid in the development of a complete market simulator.
###Code
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
###Output
Populating the interactive namespace from numpy and matplotlib
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Let's first create a quantization function
###Code
levels = [-13.5, -10.0, -1.0, 2.0, 3.0]
real_value = -6.7
temp_list = levels + [real_value]
temp_list
temp_list.sort()
temp_list
sorted_index = temp_list.index(real_value)
if sorted_index == 0:
q_value = levels[0]
elif sorted_index == len(temp_list)-1:
q_value = levels[-1]
else:
q_value = (temp_list[sorted_index-1] + temp_list[sorted_index+1])/2
q_value
def quantize(real_value, levels):
temp_list = levels + [real_value]
temp_list.sort()
sorted_index = temp_list.index(real_value)
if sorted_index == 0:
q_value = levels[0]
elif sorted_index == len(temp_list)-1:
q_value = levels[-1]
else:
q_value = (temp_list[sorted_index-1] + temp_list[sorted_index+1])/2
return q_value
levels
x = arange(-20,20,0.2)
x_df = pd.DataFrame(x, columns=['real_value'])
x_df
len(x_df.values.tolist())
from functools import partial
# x_df.apply(lambda x:print('{} \n {}'.format(x,'-'*20)), axis=1)
x_df['q_value'] = x_df.apply(lambda x: partial(quantize, levels=levels)(x[0]), axis=1)
x_df.head()
plt.plot(x_df['real_value'], x_df['q_value'])
###Output
_____no_output_____
###Markdown
Let's create an Indicator and extract some values
###Code
data_df = pd.read_pickle('../../data/data_df.pkl')
first_date = data_df.index.get_level_values(0)[0]
first_date
one_input_df = data_df.loc[first_date,:]
one_input_df
###Output
_____no_output_____
###Markdown
Normally, the data passed to the extractor will be all the data for one symbol over a period of several days.
###Code
num_days = 50
end_date = data_df.index.get_level_values(0).unique()[num_days-1]
sym_data = data_df['MSFT'].unstack()
sym_data.head()
batch_data = sym_data[first_date:end_date]
batch_data.shape
from recommender.indicator import Indicator
arange(0,1e4,1)
ind1 = Indicator(lambda x: x['Close'].mean(), arange(0,10000,0.1).tolist())
ind1.extract(batch_data)
ind1.q_levels
###Output
_____no_output_____
###Markdown
Another Indicator
###Code
ind2 = Indicator(lambda x: (x['Volume']/x['Close']).max(), arange(0,1e8,1e4).tolist())
ind2.extract(batch_data)
(batch_data['Volume']/batch_data['Close']).max()
ind3 = Indicator(lambda x: x['High'].min(), arange(0,1000,0.1).tolist())
ind3.extract(batch_data)
###Output
_____no_output_____
###Markdown
Let's create a function to enumerate states from a vectorial state.
###Code
indicators = [ind1, ind2, ind3]
vect_state = list(map(lambda x: x.extract(batch_data), indicators))
vect_state
###Output
_____no_output_____
###Markdown
Let's generate the q_values for the q_levels
###Code
len(ind1.q_levels)
q_values = [ind1.q_levels[0]] + ((np.array(ind1.q_levels[1:]) + np.array(ind1.q_levels[:-1])) / 2).tolist() + [ind1.q_levels[-1]]  # midpoints of adjacent levels, plus the two end levels
q_values
len(q_values)
indicators[0].q_levels.index(vect_state[0])
###Output
_____no_output_____
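###Markdown
The markdown above announces a function to enumerate states from a vectorial state, but the cells only compute `vect_state` and look up single indices. A minimal sketch of such an enumeration (a mixed-radix encoding of the per-indicator indices) could look like this; it assumes, as the cell above already does, that each extracted value appears in the corresponding indicator's `q_levels`.
###Code
def enumerate_state(vect_state, indicators):
    """Map a vector of per-indicator values to a single integer state (mixed-radix encoding)."""
    state = 0
    for value, indicator in zip(vect_state, indicators):
        idx = indicator.q_levels.index(value)
        state = state * len(indicator.q_levels) + idx
    return state

enumerate_state(vect_state, indicators)
###Output
_____no_output_____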
|
MCMC multitau.ipynb
|
###Markdown
The conditional (transition) probability for an OU process, $p(x,t|x_{0},0)$, is$$p(x,t|x_{0},0)=\frac{1}{\sqrt{2\pi A(1-B^{2})}}\exp \left(-\frac{(x-Bx_{0})^{2}}{2A(1-B^{2})}\right)$$
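The custom `logp` below applies this density to every transition and the stationary distribution to the first point, so, writing $\mathcal{N}(x;\mu,\sigma^{2})$ for the normal density, the log-likelihood of an observed path $x_{1},\ldots,x_{N}$ is$$\log p(x_{1:N})=\log\mathcal{N}\left(x_{1};0,A\right)+\sum_{i=1}^{N-1}\log\mathcal{N}\left(x_{i+1};Bx_{i},A(1-B^{2})\right).$$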
###Code
class Ornstein_Uhlenbeck(pm.Continuous):
"""
Ornstein-Uhlenbeck Process
Parameters
----------
B : tensor
B > 0, B = exp(-(D/A)*delta_t)
A : tensor
A > 0, amplitude of fluctuation <x**2>=A
delta_t: scalar
delta_t > 0, time step
"""
def __init__(self, A=None, B=None,
*args, **kwargs):
super(Ornstein_Uhlenbeck, self).__init__(*args, **kwargs)
self.A = A
self.B = B
self.mean = 0.
def logp(self, x):
A = self.A
B = self.B
x_im1 = x[:-1]
x_i = x[1:]
ou_like = pm.Normal.dist(mu=x_im1*B, tau=1.0/A/(1-B**2)).logp(x_i)
return pm.Normal.dist(mu=0.0,tau=1.0/A).logp(x[0]) + tt.sum(ou_like)
data = np.load("OUmt_sN05.npy")
data = data[:2]
a_bound = 10
result_df = pd.DataFrame(columns=['dt','A', 'dA','B','dB','s','ds'])
for dataset in data:
delta_t = dataset[0]
ts = dataset[1:]
print(delta_t)
with pm.Model() as model:
B = pm.Beta('B', alpha=1.0,beta=1.0)
A = pm.Uniform('A', lower=0, upper=a_bound)
sigma = pm.Uniform('sigma',lower=0,upper=5)
path = Ornstein_Uhlenbeck('path',A=A, B=B,shape=len(ts))
dataObs = pm.Normal('dataObs',mu=path,sigma=sigma,observed=ts)
trace = pm.sample(2000,cores=4)
a_mean = trace['A'].mean()
b_mean = trace['B'].mean()
a_std = trace['A'].std()
b_std = trace['B'].std()
sigma_mean = trace['sigma'].mean()
sigma_std = trace['sigma'].std()
result_df = result_df.append({'dt':delta_t,
'A':a_mean,
'dA':a_std,
'B':b_mean,
'dB':b_std,
's':sigma_mean,
'ds':sigma_std},ignore_index=True)
tau = -delta_t_list/np.log(result_array.T[2])
dtau = delta_t_list*result_array.T[3]/result_array.T[2]/np.log(result_array.T[2])**2
plt.plot(delta_t_list,result_array.T[6],"o")
plt.xlabel(r'$\Delta t/\tau$')
plt.ylabel(r'$\sigma_{GT-model}$')
plt.errorbar(delta_t_list,result_array.T[0],yerr=result_array.T[1],fmt="o",label="A")
plt.errorbar(delta_t_list,tau,dtau,fmt="o",label=r'$\tau$')
plt.legend(loc="upper left")
plt.errorbar(delta_t_list,result_array.T[4],yerr=result_array.T[5],fmt="o")
plt.xlabel(r'$\Delta t/\tau$')
plt.ylabel(r'$\sigma_{noise}$')
###Output
_____no_output_____
|
Módulo 3/Proyecto/Proyecto_Equipo_15_python.ipynb
|
###Markdown
**TEAM 15**
MEMBERS:
* HURTADO GUTIÉRREZ MARCO ANTONIO
* SALDAÑA CABRERA MIGUEL ANGEL
* VEGA MARTÍNEZ ANGEL
CRIPTO-MEX
A tool to help convert your investment into its equivalent in some cryptocurrency

Problem identification: The lack of accessible conversion tools for knowing how much an investment is worth in a cryptocurrency keeps people from deciding to invest in one; they do not even dare to dig deeper or learn about this field. Key questions: * What information do I need to collect and where do I get it?* Which databases do I need to solve this problem?* Which APIs can help me obtain this information updated in real time?* Which libraries and/or packages are we going to use?* Is it necessary to clean or transform any data? Data collection and API usage: * The libraries used were the following: CmcScraper, requests, json and the Pandas package.* The API from which the relevant cryptocurrency data was collected can be found at the following link: https://coinmarketcap.com/api/documentation/v1/tag/cryptocurrency.* The API used for the currency conversion is the following: https://exchangeratesapi.io/.
###Code
!pip install cryptocmd # To install cryptocmd.
# Import the libraries and the package.
from cryptocmd import CmcScraper
import requests
import pandas as pd
import json
###Output
Requirement already satisfied: cryptocmd in /usr/local/lib/python3.7/dist-packages (0.6.0)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from cryptocmd) (2.23.0)
Requirement already satisfied: tablib in /usr/local/lib/python3.7/dist-packages (from cryptocmd) (3.0.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->cryptocmd) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->cryptocmd) (2020.12.5)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->cryptocmd) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->cryptocmd) (3.0.4)
###Markdown
Exploratory data analysis:
###Code
# Exploratory analysis of the cryptocurrencies.
cripto_moneda = input('¿Qué criptomoneda quieres buscar? ',)
scraper = CmcScraper(cripto_moneda) # fetch the chosen coin (e.g. bitcoin)
headers, data = scraper.get_data() # get all the raw data
btc_json_data = scraper.get_data("json") # convert to json
scraper.export("csv", name="BTC_all_time") # export to csv and rename the file
dfx = scraper.get_dataframe() # turn it into a DataFrame
dfx.head()
# Exploratory analysis of the currencies.
x = input('¿En qué divisa te encuentras? ',)
# Build the API request, concatenating the base currency we want to use
endpoint = 'https://api.exchangeratesapi.io/latest?base='+x.upper()
r = requests.get(endpoint) # Make the request to the endpoint.
r.status_code # Check that there is no error.
json = r.json() # Convert to json format.
json.keys() # Get the keys.
json['rates'] # Currencies relative to the base value.
json['base'] # Check that the base is the one entered above.
json['date'] # Check the current date.
###Output
_____no_output_____
###Markdown
Data cleaning:
###Code
# Data cleaning for the currencies dataframe.
data = json['rates'] # Store the value of rates in a variable called data for cleaning.
# After cleaning the data we can view the following dataframe.
normalized = pd.json_normalize(data)
df = pd.DataFrame.from_dict(normalized)
df.head()
# Function to get the currency names.
def nombre_divisas():
nombre = []
for columna in df:
nombre.append(columna)
return nombre
# Cleaning of the cryptocurrency dataframe.
df_nuevo= dfx.drop(['Open','High','Low','Volume','Market Cap'], axis = 1)
df_nuevo.head()
###Output
_____no_output_____
###Markdown
Data transformation
###Code
# Currency conversion function.
def conversor_de_divisas(cantidad_en_dolares, pais):
moneda = df[pais].loc[0]
return cantidad_en_dolares * moneda
inversion = input("¿Cuánto quieres invertir?", )
inversion = float(inversion)
cantidad_en_dolares = inversion
z = conversor_de_divisas(cantidad_en_dolares, 'USD')
print('Tú inversión equivale a: ', z,'USD')
# Value of the coin on the day it was queried.
hoy = df_nuevo.iloc[0,1]
hoy
# Final conversion.
eq = z/hoy
print(f'Tú inversión en {cripto_moneda} es : {eq}')
###Output
Tú inversión en btc es : 0.008880062681447478
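###Markdown
To make the two steps above reusable, here is a small optional helper that chains the currency conversion and the crypto price. The function name is just a suggestion; it uses the same `df['USD']` rate as `conversor_de_divisas` and the latest close price `hoy` computed above.
###Code
def investment_to_crypto(amount, rates_df, crypto_price_usd):
    """Convert an amount in the user's base currency into units of the chosen cryptocurrency."""
    amount_usd = amount * rates_df['USD'].loc[0]   # base currency -> USD
    return amount_usd / crypto_price_usd           # USD -> crypto units

investment_to_crypto(1000, df, hoy)
###Output
_____no_output_____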
###Markdown
Cripto-Mex:
###Code
print(''' Bienvenido a Cripto-Mex Versión: Beta''')
cliente = input('\n ¿Cuál es tu nombre? ', )
print(f'''\n \n ¡Hola {cliente}!, nosotros te ayudaremos a conocer el valor equivalente de tu inversión
a la criptomoneda que tú elijas''')
print('\n Este es nuestro catálogo de Criptomonedas:')
print(["\n BTC",'ETH','BNB','USDT','ADA','DOT','XRP'])
print('\n Para mas codigos de cryptomonedas consultar la web: https://coinmarketcap.com')
moneda = input('\n ¿En cuál criptomoneda te gustaría invertir? ',)
scraper= CmcScraper(moneda) # fetch the chosen coin (e.g. bitcoin)
headers, data = scraper.get_data() # get all the raw data
btc_json_data = scraper.get_data("json") # convert to json
scraper.export("csv", name="BTC_all_time") # export to csv and rename the file
dfx = scraper.get_dataframe() # turn it into a DataFrame
df_nuevo= dfx.drop(['Open','High','Low','Volume','Market Cap'], axis = 1)
hoy = df_nuevo.iloc[0,1]
print("\n Catálogo de las divisas")
print(['CAD', 'HKD', 'ISK', 'PHP', 'DKK', 'HUF', 'CZK', 'GBP','RON', 'SEK', 'IDR','INR', 'BRL', 'RUB', 'HRK', 'JPY', 'THB', 'CHF', 'EUR', 'MYR','BGN', 'TRY','CNY',
'NOK', 'NZD','ZAR','USD','MXN','SGD','AUD' ,'ILS','KRW','PLN'])
x = input('¿En qué divisa te encuentras? ',)
endpoint = 'https://api.exchangeratesapi.io/latest?base='+x.upper()
r = requests.get(endpoint)
json = r.json()
data = json['rates']
normalized = pd.json_normalize(data)
df = pd.DataFrame.from_dict(normalized)
def nombre_divisas():
nombre = []
for columna in df:
nombre.append(columna)
return nombre
nombre_divisas()
def conversor_de_divisas(cantidad_en_dolares, pais):
moneda = df[pais].loc[0]
return cantidad_en_dolares * moneda
inversion = input(f"\n ¿Cuánto quieres invertir {cliente}? ", )
inversion = float(inversion)
inversion_z = conversor_de_divisas(inversion, 'USD')
print('Tú inversión equivale a: ', inversion_z,'USD')
eq = inversion_z/hoy
print(f'Tú inversión en {moneda} es : {eq}')
###Output
Bienvenido a Cripto-Mex Versión: Beta
¿Cuál es tu nombre? marco
¡Hola marco!, nosotros te ayudaremos a conocer el valor equivalente de tu inversión
a la criptomoneda que tú elijas
Este es nuestro catálogo de Criptomonedas:
['\n BTC', 'ETH', 'BNB', 'USDT', 'ADA', 'DOT', 'XRP']
Para mas codigos de cryptomonedas consultar la web: https://coinmarketcap.com
¿En cuál criptomoneda te gustaría invertir? btc
Catálogo de las divisas
['CAD', 'HKD', 'ISK', 'PHP', 'DKK', 'HUF', 'CZK', 'GBP', 'RON', 'SEK', 'IDR', 'INR', 'BRL', 'RUB', 'HRK', 'JPY', 'THB', 'CHF', 'EUR', 'MYR', 'BGN', 'TRY', 'CNY', 'NOK', 'NZD', 'ZAR', 'USD', 'MXN', 'SGD', 'AUD', 'ILS', 'KRW', 'PLN']
¿En qué divisa te encuentras? mxn
¿Cuánto quieres invertir marco? 1000
Tú inversión equivale a: 46.3952393 USD
Tú inversión en btc es : 0.0008880062681447476
|
notebooks/fgv_classes/professor_hitoshi/aula 4 - Deep Learning - parte I.ipynb
|
###Markdown
Deep Learning. We will use the script from the online book [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/chap1.html), with some modifications
###Code
#### Libraries
# Standard library
import random
# Third-party libraries
import numpy as np
import pandas as pd
#### Miscellaneous functions
def sigmoid(z):
"""The sigmoid function."""
return 1.0/(1.0+np.exp(-z))
def sigmoid_prime(z):
"""Derivative of the sigmoid function."""
return sigmoid(z)*(1-sigmoid(z))
# plots
import seaborn as sns
%matplotlib inline
sns.set_context('paper')
# interactivity
from ipywidgets import interact, interactive, fixed, interact_manual, FloatSlider, IntSlider
import ipywidgets as widgets
# if needed, install ipywidgets
# with pip...
# pip install ipywidgets
# jupyter nbextension enable --py --sys-prefix widgetsnbextension (needed if using a virtualenv)
# with conda...
# conda install -c conda-forge ipywidgets
###Output
_____no_output_____
###Markdown
 importing and exploring MNIST
###Code
import mnist_loader
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
train = list(training_data)
valid = list(validation_data)
teste = list(test_data)
# how many samples are in each dataset?
print(len(train))
print(len(valid))
print(len(teste))
# what is the shape of a sample?
# what is the label of a sample?
print (train[20][0].shape)
print (train[20][1])
print (teste[20][0].shape)
print (teste[20][1])
valid[21][1]
# checking a few samples
def f(x, dataset):
if dataset == 'treino':
d = train
elif dataset == 'teste':
d = teste
elif dataset == 'validacao':
d = valid
    if x == '':  # compare with ==, not `is`, for string equality
sns.heatmap(np.zeros((28,28)), cmap = 'gray_r', vmin = 0, vmax = 1)
else:
amostra = int(x)
print('amostra =', x)
print('label =', d[amostra][1].reshape(10,).argmax() if dataset == 'treino' else d[amostra][1])
sns.heatmap(d[amostra][0].reshape(28,28), cmap = 'gray_r', vmin = 0, vmax = 1)
interact(f, dataset = ['treino', 'validacao', 'teste'],
x = IntSlider(min = 0, max = len(train) - 1, step = 1, continuous_update = False), );
###Output
amostra = 34517
label = 9
###Markdown
network.py
###Code
# %load network.py
"""
network.py
~~~~~~~~~~
IT WORKS
A module to implement the stochastic gradient descent learning
algorithm for a feedforward neural network. Gradients are calculated
using backpropagation. Note that I have focused on making the code
simple, easily readable, and easily modifiable. It is not optimized,
and omits many desirable features.
"""
class Network(object):
def __init__(self, sizes):
"""The list ``sizes`` contains the number of neurons in the
respective layers of the network. For example, if the list
was [2, 3, 1] then it would be a three-layer network, with the
first layer containing 2 neurons, the second layer 3 neurons,
and the third layer 1 neuron. The biases and weights for the
network are initialized randomly, using a Gaussian
distribution with mean 0, and variance 1. Note that the first
layer is assumed to be an input layer, and by convention we
won't set any biases for those neurons, since biases are only
ever used in computing the outputs from later layers."""
self.num_layers = len(sizes)
self.sizes = sizes
self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
self.weights = [np.random.randn(y, x)
for x, y in zip(sizes[:-1], sizes[1:])]
def feedforward(self, a):
"""Return the output of the network if ``a`` is input."""
for b, w in zip(self.biases, self.weights):
a = sigmoid(np.dot(w, a)+b)
return a
def SGD(self, training_data, epochs, mini_batch_size, eta,
test_data=None):
"""Train the neural network using mini-batch stochastic
gradient descent. The ``training_data`` is a list of tuples
``(x, y)`` representing the training inputs and the desired
outputs. The other non-optional parameters are
self-explanatory. If ``test_data`` is provided then the
network will be evaluated against the test data after each
epoch, and partial progress printed out. This is useful for
tracking progress, but slows things down substantially."""
training_data = list(training_data)
n = len(training_data)
if test_data:
test_data = list(test_data)
n_test = len(test_data)
for j in range(epochs):
random.shuffle(training_data)
mini_batches = [
training_data[k:k+mini_batch_size]
for k in range(0, n, mini_batch_size)]
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, eta)
if test_data:
print("Epoch {} : {} / {}".format(j,self.evaluate(test_data),n_test));
else:
print("Epoch {} complete".format(j))
def update_mini_batch(self, mini_batch, eta):
"""Update the network's weights and biases by applying
gradient descent using backpropagation to a single mini batch.
The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
is the learning rate."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [w-(eta/len(mini_batch))*nw
for w, nw in zip(self.weights, nabla_w)]
self.biases = [b-(eta/len(mini_batch))*nb
for b, nb in zip(self.biases, nabla_b)]
def backprop(self, x, y):
"""Return a tuple ``(nabla_b, nabla_w)`` representing the
gradient for the cost function C_x. ``nabla_b`` and
``nabla_w`` are layer-by-layer lists of numpy arrays, similar
to ``self.biases`` and ``self.weights``."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# feedforward
activation = x
activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation)+b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
# backward pass
delta = self.cost_derivative(activations[-1], y) * \
sigmoid_prime(zs[-1])
nabla_b[-1] = delta
nabla_w[-1] = np.dot(delta, activations[-2].transpose())
# Note that the variable l in the loop below is used a little
# differently to the notation in Chapter 2 of the book. Here,
# l = 1 means the last layer of neurons, l = 2 is the
# second-last layer, and so on. It's a renumbering of the
# scheme in the book, used here to take advantage of the fact
# that Python can use negative indices in lists.
for l in range(2, self.num_layers):
z = zs[-l]
sp = sigmoid_prime(z)
delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
nabla_b[-l] = delta
nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
return (nabla_b, nabla_w)
def evaluate(self, test_data):
"""Return the number of test inputs for which the neural
network outputs the correct result. Note that the neural
network's output is assumed to be the index of whichever
neuron in the final layer has the highest activation."""
test_results = [(np.argmax(self.feedforward(x)), y)
for (x, y) in test_data]
return sum(int(x == y) for (x, y) in test_results)
def cost_derivative(self, output_activations, y):
"""Return the vector of partial derivatives \partial C_x /
\partial a for the output activations."""
return (output_activations-y)
###Output
_____no_output_____
###Markdown
 training the model
###Code
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
net = Network([784, 30, 10])
net.SGD(training_data, 15, 10, 3.0, test_data=test_data)
###Output
Epoch 0 : 9048 / 10000
Epoch 1 : 9232 / 10000
Epoch 2 : 9238 / 10000
Epoch 3 : 9329 / 10000
Epoch 4 : 9345 / 10000
Epoch 5 : 9366 / 10000
Epoch 6 : 9369 / 10000
Epoch 7 : 9390 / 10000
Epoch 8 : 9400 / 10000
Epoch 9 : 9410 / 10000
Epoch 10 : 9437 / 10000
Epoch 11 : 9466 / 10000
Epoch 12 : 9427 / 10000
Epoch 13 : 9450 / 10000
Epoch 14 : 9467 / 10000
Epoch 15 : 9464 / 10000
Epoch 16 : 9466 / 10000
Epoch 17 : 9484 / 10000
Epoch 18 : 9481 / 10000
Epoch 19 : 9494 / 10000
Epoch 20 : 9448 / 10000
Epoch 21 : 9474 / 10000
Epoch 22 : 9488 / 10000
Epoch 23 : 9452 / 10000
Epoch 24 : 9503 / 10000
Epoch 25 : 9502 / 10000
Epoch 26 : 9489 / 10000
Epoch 27 : 9492 / 10000
Epoch 28 : 9508 / 10000
Epoch 29 : 9496 / 10000
###Markdown
 testing the model
###Code
def softmax(a):
return np.exp(a) / np.exp(a).sum()
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
train = list(training_data)
valid = list(validation_data)
teste = list(test_data)
interact(f, dataset = ['treino', 'validacao', 'teste'],
x = IntSlider(min = 0, max = len(train) - 1, step = 1, continuous_update = False), );
amostra = 313
imagem = teste[amostra][0]
preds = net.feedforward(imagem)
probs = softmax(preds)
pd.DataFrame(probs, columns = ['probs']).plot(kind = 'barh')
preds
###Output
_____no_output_____
|
scratchpad/voids_paper/notebooks/scratch/rewrite_recon_patches/test_rec_pts.ipynb
|
###Markdown
Segment a sparse 3D image with a single material component The goal of this notebook is to develop a 3D segmentation algorithm that improves segmentation where features are detected.**Data:** AM parts from Xuan Zhang.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os
import h5py
import sys
import time
import seaborn as sns
import pandas as pd
import cupy as cp
from tomo_encoders import Patches
from tomo_encoders.misc import viewer
from tomo_encoders import DataFile
from tomo_encoders.reconstruction.recon import recon_binning, recon_patches_3d, rec_patch, rec_pts
# from tomo_encoders.misc.voxel_processing import cylindrical_mask, normalize_volume_gpu
r_fac = 1.0
ht = 32
wd = 2176
th = 1500
n_sel =int(ht*wd*wd*r_fac)
data = cp.random.normal(0,1,(th, ht, wd), dtype = cp.float32)
theta = cp.linspace(0, np.pi, th, dtype = cp.float32)
center = wd/2
vol = np.ones((ht,wd,wd))
vox_coords = np.where(vol == 1)
pts = np.asarray(vox_coords).T
pts = cp.asarray(pts, dtype = cp.int32)
pts = pts[cp.random.choice(len(pts), n_sel, replace = False),...].copy()
t000 = time.time()
gpts = pts[:,0]*wd*wd+pts[:,1]*wd+pts[:,2]
ind = cp.argsort(gpts)
pts = pts[ind]
t_sort = (time.time()-t000)*1000.0
print(f'sorting overhead: {t_sort:.2f} ms')
for i in range(5):
obj1 = rec_pts(data, theta, center, pts)
obj1 = obj1.reshape(ht,wd,wd)
times = []
for i in range(5):
obj, t_ = rec_patch(data, theta, center, 0, wd, 0, wd, 0, ht, TIMEIT=True)
print(f"time {t_:.2f} ms")
times.append(t_)
print(f"time = {np.median(times):.2f} ms")
obj.size
print(f'is algorithm working fine? {~np.any(obj1-obj)}')
###Output
is algorithm working fine? True
|
notebooks/7. Outlier detection/IQR Rule.ipynb
|
###Markdown
Standard deviation and mean
###Code
print("Mean ",np.mean(distribution))
print("STD ",np.std(distribution))
def select_outliers_std(distribution):
mean = np.mean(distribution)
std = np.std(distribution)
list_outliers = []
for i in distribution:
if i > (mean+(3*std)) or i < (mean-(3*std)):
list_outliers.append(i)
return list_outliers
select_outliers_std(distribution)
###Output
_____no_output_____
###Markdown
IQR Rule
###Code
def select_outliers(distribution):
    # Tukey's fences: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    Q1 = np.percentile(distribution, 25, interpolation="midpoint")
    Q3 = np.percentile(distribution, 75, interpolation="midpoint")
    IQR = Q3 - Q1
    lower_fence = Q1 - (1.5 * IQR)
    upper_fence = Q3 + (1.5 * IQR)
    list_outliers = []
    for v in distribution:
        if v > upper_fence or v < lower_fence:
            list_outliers.append(v)
    return list_outliers
list_res = select_outliers(distribution)
list_res.sort()
list_res
###Output
_____no_output_____
|
intermediate-lessons/cyberinfrastructure/cyberinfrastructure-1.ipynb
|
###Markdown
CyberinfrastructureThis Intermediate lesson on Cyberinfrastructure introduces ...Lesson Developers:
###Code
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience
# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'
from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout
import getpass # This library allows us to get the username (User agent string)
# import package for hourofci project
import sys
sys.path.append('../../supplementary') # relative path (may change depending on the location of the lesson notebook)
import hourofci
# Retrieve the user agent string; it will be passed to the hourofci submit button
agent_js = """
IPython.notebook.kernel.execute("user_agent = " + "'" + navigator.userAgent + "'");
"""
Javascript(agent_js)
# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<style>
.output_prompt{opacity:0;}
</style>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
###Output
_____no_output_____
|
W3-02-ML0101EN-Clas-Decision-Trees-drug.ipynb
|
###Markdown
 Decision TreesEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:- Develop a classification model using the Decision Tree algorithm In this lab exercise, you will learn a popular machine learning algorithm, Decision Tree. You will use this classification algorithm to build a model from historical data of patients, and their response to different medications. Then you will use the trained decision tree to predict the class of an unknown patient, or to find a proper drug for a new patient. Table of contents About the dataset Downloading the Data Pre-processing Setting up the Decision Tree Modeling Prediction Evaluation Visualization Import the Following Libraries: numpy (as np) pandas DecisionTreeClassifier from sklearn.tree
###Code
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
#add code below to show multiple outputs
#https://volderette.de/jupyter-notebook-tip-multiple-outputs/
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
#If you use this you need to end all matplotlib plot.show() lines with a semicolon or they will show additional text
###Output
_____no_output_____
###Markdown
 About the dataset Imagine that you are a medical researcher compiling data for a study. You have collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of 5 medications: Drug A, Drug B, Drug C, Drug X and Drug Y. Part of your job is to build a model to find out which drug might be appropriate for a future patient with the same illness. The feature sets of this dataset are Age, Sex, Blood Pressure, and Cholesterol of patients, and the target is the drug that each patient responded to. It is a sample multiclass classification problem, and you can use the training part of the dataset to build a decision tree, and then use it to predict the class of an unknown patient, or to prescribe a drug to a new patient. Downloading the Data To download the data, we will use !wget to download it from IBM Object Storage.
###Code
# !wget -O drug200.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/drug200.csv
###Output
_____no_output_____
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Now, read data using pandas dataframe:
###Code
my_data = pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/drug200.csv", delimiter=",")
my_data[0:5]
###Output
_____no_output_____
###Markdown
Practice What is the size of data?
###Code
# write your code here
my_data.size
my_data.shape
# https://stackoverflow.com/questions/24524104/pandas-describe-is-not-returning-summary-of-all-columns
my_data.describe(include = [np.number])
my_data.describe(include = ['O'])
###Output
_____no_output_____
###Markdown
Click here for the solution```pythonmy_data.shape``` Pre-processing Using my_data as the Drug.csv data read by pandas, declare the following variables: X as the Feature Matrix (data of my_data) y as the response vector (target) Remove the column containing the target name since it doesn't contain numeric values.
###Code
X = my_data[['Age', 'Sex', 'BP', 'Cholesterol', 'Na_to_K']]#.values
# sklearn now is compatible with pandas so converting to numpy is no longer necessary.
# the code below has been altered to use pandas so it is more human readable, or "literate"
X[0:5]
###Output
_____no_output_____
###Markdown
As you may figure out, some features in this dataset are categorical such as **Sex** or **BP**. Unfortunately, Sklearn Decision Trees do not handle categorical variables. But still we can convert these features to numerical values. **pandas.get_dummies()**Convert categorical variable into dummy/indicator variables.
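As a side note, here is a minimal sketch (not used in this notebook, which applies LabelEncoder below and OneHotEncoder later) of what `pandas.get_dummies` would produce on the same categorical columns; it assumes the `my_data` frame loaded above:

```python
import pandas as pd

# hypothetical illustration only: dummy/indicator columns for the categorical features
X_dummies = pd.get_dummies(my_data[['Age', 'Sex', 'BP', 'Cholesterol', 'Na_to_K']],
                           columns=['Sex', 'BP', 'Cholesterol'],
                           drop_first=True)
X_dummies.head()
```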
###Code
from sklearn import preprocessing
le_sex = preprocessing.LabelEncoder()
le_sex.fit(['F','M'])
#X[:,1] = le_sex.transform(X[:,1])
X = X.assign(Sex=le_sex.transform(X.Sex))
le_BP = preprocessing.LabelEncoder()
le_BP.fit([ 'LOW', 'NORMAL', 'HIGH'])
#X[:,2] = le_BP.transform(X[:,2])
X = X.assign(BP=le_BP.transform(X.BP))
le_Chol = preprocessing.LabelEncoder()
le_Chol.fit([ 'NORMAL', 'HIGH'])
#X[:,3] = le_Chol.transform(X[:,3])
X = X.assign(Cholesterol=le_Chol.transform(X.Cholesterol))
X[0:5]
# I really don't like this because the sklearn documentation clearly says it should not be used for independent variables
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
# but I am struggling to fix it, so I am leaving it, see below...
###Output
_____no_output_____
###Markdown
Now we can fill the target variable.
###Code
y = my_data["Drug"]
y[0:5]
###Output
_____no_output_____
###Markdown
OneHotEncoding!
###Code
X2 = my_data[['Age', 'Sex', 'BP', 'Cholesterol', 'Na_to_K']]#.values
print("Original Data")
X2[0:5]
enc = preprocessing.OneHotEncoder(drop='first')
X3 = pd.DataFrame(enc.fit_transform(X2[['Sex', 'BP', 'Cholesterol']]).toarray(),
columns=enc.get_feature_names(['Sex', 'BP', 'Cholesterol']))
print("Encoded New Data Columns")
X3[0:5]
#https://stackoverflow.com/questions/52430798/onehotencoder-encoding-only-some-of-categorical-variable-columns
X4 = pd.concat((X2,pd.DataFrame(X3)),1)
X4.drop(columns=['Sex', 'BP', 'Cholesterol'], inplace=True)
print("Combined Data")
X4.head()
###Output
Original Data
###Markdown
 Setting up the Decision Tree We will be using train/test split on our decision tree. Let's import train_test_split from sklearn.model_selection.
###Code
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Now train_test_split will return 4 different parameters. We will name them:X_trainset, X_testset, y_trainset, y_testset The train_test_split will need the parameters: X, y, test_size=0.3, and random_state=3. The X and y are the arrays required before the split, the test_size represents the ratio of the testing dataset, and the random_state ensures that we obtain the same splits.
###Code
# NOTE: I changed X to X4 to use my OneHotEncoded data.
X_trainset, X_testset, y_trainset, y_testset = train_test_split(X4, y, test_size=0.3, random_state=3)
###Output
_____no_output_____
###Markdown
PracticePrint the shape of X_trainset and y_trainset. Ensure that the dimensions match
###Code
# your code
X_trainset.shape
y_trainset.shape
X_trainset.shape[0] == y_trainset.shape[0]
###Output
_____no_output_____
###Markdown
Click here for the solution```pythonprint('Shape of X training set {}'.format(X_trainset.shape),'&',' Size of Y training set {}'.format(y_trainset.shape))``` Print the shape of X_testset and y_testset. Ensure that the dimensions match
###Code
# your code
X_testset.shape
y_testset.shape
X_testset.shape[0] == y_testset.shape[0]
###Output
_____no_output_____
###Markdown
Click here for the solution```pythonprint('Shape of X training set {}'.format(X_testset.shape),'&',' Size of Y training set {}'.format(y_testset.shape))``` Modeling We will first create an instance of the DecisionTreeClassifier called drugTree. Inside of the classifier, specify criterion="entropy" so we can see the information gain of each node.
###Code
drugTree = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
drugTree # it shows the default parameters
###Output
_____no_output_____
###Markdown
Next, we will fit the data with the training feature matrix X_trainset and training response vector y_trainset
###Code
drugTree.fit(X_trainset,y_trainset)
###Output
_____no_output_____
###Markdown
Prediction Let's make some predictions on the testing dataset and store it into a variable called predTree.
###Code
predTree = drugTree.predict(X_testset)
###Output
_____no_output_____
###Markdown
You can print out predTree and y_testset if you want to visually compare the prediction to the actual values.
###Code
print (predTree [0:5])
print (y_testset [0:5])
###Output
['drugY' 'drugX' 'drugX' 'drugX' 'drugX']
40 drugY
51 drugX
139 drugX
197 drugX
170 drugX
Name: Drug, dtype: object
###Markdown
Evaluation Next, let's import metrics from sklearn and check the accuracy of our model.
###Code
from sklearn import metrics
import matplotlib.pyplot as plt
print("DecisionTrees's Accuracy: ", metrics.accuracy_score(y_testset, predTree))
###Output
DecisionTrees's Accuracy: 0.9833333333333333
###Markdown
 **Accuracy classification score** computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. In multilabel classification, the function returns the subset accuracy. If the entire set of predicted labels for a sample strictly matches the true set of labels, then the subset accuracy is 1.0; otherwise it is 0.0. Visualization Let's visualize the tree
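As a quick aside before plotting, a minimal sketch of the subset-accuracy behaviour described above, on purely hypothetical labels:

```python
from sklearn.metrics import accuracy_score

# hypothetical labels, for illustration only
y_true = ['drugY', 'drugX', 'drugA', 'drugX']
y_pred = ['drugY', 'drugX', 'drugC', 'drugX']
accuracy_score(y_true, y_pred)  # 3 of the 4 predictions match exactly -> 0.75
```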
###Code
# Notice: You might need to uncomment and install the pydotplus and graphviz libraries if you have not installed these before
# !conda install -c conda-forge pydotplus -y
# !conda install -c conda-forge python-graphviz -y
from io import StringIO
import pydotplus
import matplotlib.image as mpimg
from sklearn import tree
%matplotlib inline
# NOTE: I had to change the featureNames to X4.columns
dot_data = StringIO()
filename = "drugtree.png"
featureNames = X4.columns
targetNames = my_data["Drug"].unique().tolist()
out=tree.export_graphviz(drugTree,feature_names=featureNames, out_file=dot_data, class_names= np.unique(y_trainset), filled=True, special_characters=True,rotate=False)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png(filename)
img = mpimg.imread(filename)
plt.figure(figsize=(100, 200))
plt.imshow(img,interpolation='nearest');
###Output
_____no_output_____
|
compare_fauzi_bins/160601_ANI_improvements--use_percent_coverage.ipynb
|
###Markdown
import sys
###Code
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import aggregate_mummer_results
full_data = pd.read_csv("percent_identities.tsv" ,sep = '\t')
full_data.head(3)
organism_names = full_data['query name'].unique()
organism_names
plot_names = [n for n in organism_names if "Methylotenera" in n] + \
[n for n in organism_names if "Acidovorax" in n]
plot_names
def only_selected_query_and_ref(name_list):
all_data = pd.read_csv("percent_identities.tsv" ,sep = '\t')
all_data = all_data[all_data['query name'].isin(name_list)]
all_data = all_data[all_data['ref name'].isin(name_list)]
print("num rows selected: {}".format(all_data.shape[0]))
return all_data
plot_data = only_selected_query_and_ref(plot_names)
ax = plt.axes()
sns.heatmap(aggregate_mummer_results.pivot_identity_table(plot_data), ax = ax, )
ax.set_title('% identity \n (length-weighted)')
ax.figure.tight_layout()
ax.figure.set_size_inches(w=4, h=6)
ax.figure.savefig('160601_original_percent_identity_measure.pdf')
plot_data.head()
ax = plt.axes()
sns.heatmap(aggregate_mummer_results.pivot_identity_table(plot_data, value_var='estimated % identity'),
ax = ax)
ax.set_title('(% identity)*(fraction aligned))')
ax.figure.tight_layout()
ax.figure.set_size_inches(w=4, h=6)
ax.figure.savefig('160601_original_percent_identity_tims_frac_aligned.pdf')
def subset_given_colnames(name_list):
full_data = pd.read_csv("percent_identities.tsv" ,sep = '\t')
all_names = full_data['query name'].unique()
# build a list of names to pick out.
plot_names = []
for org_name in name_list:
        plot_names += [n for n in all_names if org_name in n]
# reduce to the desired organisms.
selected_data = full_data.copy()
selected_data = selected_data[selected_data['query name'].isin(plot_names)]
selected_data = selected_data[selected_data['ref name'].isin(plot_names)]
print("num rows selected: {}".format(selected_data.shape[0]))
return selected_data
###Output
_____no_output_____
###Markdown
subset_given_colnames(['Acidovorax', 'Methylotenera mobilis'])
###Code
def plot_metrics_as_heatmaps(metric_list, organism_list, figsize=(10, 6),
filename = None):
print(len(metric_list))
fig, axn = plt.subplots(1, len(metric_list),
sharex=True, sharey=True,
figsize=figsize)
cbar_ax = fig.add_axes([.91, .3, .03, .4])
data = subset_given_colnames(name_list = organism_list)
data['% of query aligned'] = data['frac of query aligned']*100
for i, metric in enumerate(metric_list):
# prepare pivoted data
print("i: {}, metric: {}".format(i, metric))
subplot_ax = axn[i]
print('axis: {}'.format(subplot_ax))
subplot_data = aggregate_mummer_results.pivot_identity_table(data,
value_var=metric)
sns.heatmap(subplot_data, ax=axn[i],
cbar=i == 0,
vmin=0, vmax=100,
cbar_ax=None if i else cbar_ax
)
subplot_ax.set_title(metric)
fig.tight_layout(rect=[0, 0, .9, 1])
print(type(fig))
print(type(axn))
if filename is not None:
fig.savefig(filename)
fig.savefig(filename.rstrip('pdf') + 'svg')
###Output
_____no_output_____
###Markdown
mpl.rcParams.update({ 'font.size': 12, 'axes.titlesize': 14, 'axes.labelsize': 12, 'xtick.labelsize': 12, 'ytick.labelsize': 12, 'font.family': 'Lato', 'font.weight': 600, 'axes.labelweight': 300, 'axes.titleweight': 100, 'figure.autolayout': True})
###Code
mpl.rcParams.update({'axes.titleweight': 600})
p = plot_metrics_as_heatmaps(['% identity', '% of query aligned', 'estimated % identity'],
['Methylotenera mobilis', 'Acidovorax'],
figsize=(11, 4),
filename = '160601_ANI_metric_development.pdf')
#p.figure.savefig()
#p.figure.savefig('160601_ANI_metric_development.svg')
p = plot_metrics_as_heatmaps(['% of query aligned', 'estimated % identity'],
organism_names,
figsize=(20, 15),
filename = '160603_all_Fauzi--percent_aligned_an_percent_identity.pdf')
#p.figure.savefig()
#p.figure.savefig('160601_ANI_metric_development.svg')
p = plot_metrics_as_heatmaps(['% of query aligned', 'estimated % identity'],
['Methylophilus methylotrophus'],
figsize=(7, 4.5),
filename = '160603_Fauzi_Methylophilus_methylotrophus--percent_aligned_and_percent_identity.pdf')
#p.figure.savefig()
#p.figure.savefig('160601_ANI_metric_development.svg')
###Output
2
num rows selected: 64
i: 0, metric: % of query aligned
axis: Axes(0.125,0.125;0.352273x0.775)
i: 1, metric: estimated % identity
axis: Axes(0.547727,0.125;0.352273x0.775)
|
Talk.ipynb
|
###Markdown
 This is an interactive notebook to illustrate ideas to make scientific computing more engaging. Key points: - Starting with a more complex idea can be more motivating - Live coding is a helpful tool to deliver programming content - Teach concepts as they are needed - Avoid too many baby steps *Notebook cheats:* To run the currently highlighted cell, hold ⇧ Shift and press ⏎ Enter; to advance the slide show, use Space  Getting ready for the expedition to get to the treasure Then you finally reach the treasure... The great disappointment happens because... Exploring ideas for keeping students engaged How 1. Incentivise through glimpses of the treasures 2. Use a timely teaching environment 3. Choose delivery methods wisely 4. Use formative assessment through in-class exercises 5. Give good motivations for new concepts! What 1. Stay in the STEM context to help with motivation 2. Teach reproducibility and best practices as you go along  Which of the following instructions would be more motivating on day one?  What is a more appealing first lesson overview to a diverse set of students? 
###Code
# imports (assumed here; the original deck may define these in an earlier setup cell)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sbn

# reading data
data = pd.read_csv('data/gravity.csv',sep=' ')
# plotting data
sbn.distplot(data['Student1'], label = 'Student 1', norm_hist=True)
sbn.distplot(data['Student2'], label = 'Student 2', norm_hist=True)
plt.plot(np.ones(2)*9.81,[0,1.9], '--', color='black', alpha=0.5, label='Actual g')
sbn.despine()
# plot Formatting
plt.xlabel('Estimate of g')
plt.ylabel('Density')
plt.legend()
###Output
_____no_output_____
###Markdown
What approach is easier to follow for new complex concepts?  Live coding
###Code
# load the thrombin.csv file
data = np.loadtxt('data/thrombin.csv')
# plot time thrombin data
#Fixme
# compute correlation
r,p = scipy.stats.#Fixme
#print("The correlation is %.2f"%r)
###Output
_____no_output_____
###Markdown
 How can understanding amongst learners be assessed in a motivating way?  Check your understanding and use Tophat to indicate your answerQ. Suppose you have a dataset that consists of 4 columns in a variable called data, how do you plot the first and second column against each other? Try the different options in your notebook if you are unsure.1. `plt.plot(data[:,1], data[:,2])`2. `plt.plot(data[0,:], data[1,:])`3. `plt.plot(data[:,0], data[:,1])`4. `plot.plt(data[:,0], data[:,1])`⚠️ Use your red sticker if you get stuck, so that one of the tutors can help you. Explore concepts without the necessity for any coding: Birge-Sponer in vibrational spectroscopyBirge-Sponer plots allow you to estimate an upper bound for dissociation energies from vibrational transitions.  Adding dataYou can add a set of vibrational transition wave numbers and their corresponding vibrational quantum numbers in the two cells below. An example for HgH would look like this: Observed transitions in cm$^{-1}$: 1203.7, 965.6, 632.4, 172 Vibrational quantum numbers: 0.5, 1.5, 2.5, 3.5
###Code
import Helper
data = Helper.data_input()
display(data)
###Output
_____no_output_____
###Markdown
The data will be read by the program and then plotted against each other, when you execute the next cell.
###Code
Helper.plot_birge_sponer(data)
###Output
_____no_output_____
###Markdown
Extrapolating the dataNow in order to be able to compute the dissociation constant we need to extrapolate the line until it crosses the y-axis at x=0 and the x axis at y=0. The plot below has done this automatically. The Helper module uses a linear regression fit called `linregress` as implemented in `scipy`.
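For the curious, here is a minimal sketch of such a fit outside the helper, using `scipy.stats.linregress` on the example HgH numbers quoted above (the Helper module's internals are not shown here, so this is only an assumption about its approach):

```python
import numpy as np
from scipy.stats import linregress

# example HgH data from the text above (illustration only)
v = np.array([0.5, 1.5, 2.5, 3.5])                  # vibrational quantum numbers
delta_G = np.array([1203.7, 965.6, 632.4, 172.0])   # transition wave numbers / cm^-1

fit = linregress(v, delta_G)                         # linear Birge-Sponer fit
y_intercept = fit.intercept                          # crossing of the y-axis at v = 0
x_intercept = -fit.intercept / fit.slope             # crossing of the v-axis where the transition energy reaches 0
print(y_intercept, x_intercept)
```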
###Code
Helper.plot_extrapolated_birge_sponer(data)
###Output
_____no_output_____
###Markdown
Computing the area under the curveYou can see, that the dashed orange line is the extrapolated curve to where the extrapolation is required. You could now try and read the numbers of the graph, or just compute the area under the curve, which in this case is a right-angle triangle. Remember the area of a triangle is given by:$A = \frac{1}{2}ab$, where a, in this case, is the side of the y-axis and b is the side of the x-axis. Again there is a convenient helper function that will take the data from the curve and compute the area, and conveniently display this result.
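Continuing the sketch above, the triangle area follows directly from the two intercepts (again only an illustration, not the actual Helper implementation):

```python
area = 0.5 * y_intercept * x_intercept   # A = (1/2) a b, in cm^-1
print(f"area under the extrapolated line: {area:.0f} cm^-1")
```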
###Code
Helper.compute_area_under_graph(data)
###Output
_____no_output_____
###Markdown
**Check your understanding**:How is the dissociation energy computed from the wave number that is estimated by the area under the curve?Try it yourself and see if you get the same answer as below.
###Code
Helper.compute_dissociation_energy(data)
###Output
_____no_output_____
###Markdown
⚠️ **Use your green sticker if you got to the same answer, or your red one if you got stuck!** 5 Approaches to help with motivation:1. Show treasures at the start!2. Hide complexity where necessary3. Motivate new concepts with appropriate STEM related examples4. Make use of the Ecosystem, such as interactive Jupyter notebooks5. Use frequent formative assessment and feedback Thank you for your attention! I am happy to take any questions! Data generation for some parts of the notebook
###Code
g1 = np.random.normal(9.81,0.2, 50)
g2 = np.random.normal(9.2,0.6, 50)
g3 = np.random.normal(9.89, 0.2, 50)
np.savetxt('gravity.csv', np.column_stack((g1,g2,g3)))
###Output
_____no_output_____
###Markdown
 Talk `https://tiny.cc/saul-jl-19` IntroductionHello!I am Saul Shanabrook and I work at [Quansight](github.com/quansight/).Thank you for having me here today. I am grateful to be able to meet you all and have this space to share with you. Ground rulesThis is the deal. I have prepared some code and examples to show you. But I don't know you all. I don't know where you are coming from or what you are looking for exactly. So I wanna ask you all to help me out here. Please interact. Interrupt me if I am saying something that doesn't make sense or if I am going too quickly or if you have a question. I would love to learn as well how what I am talking about fits into your day to day lives, so please interject with those types of questions or comments.I also am not an expert here. I know a few things about some tools, but this is a huge space and I am ignorant of lots of it. So please correct me if I am getting things wrong. I also don't know much about your work here, day to day. So excuse my ignorance on most things physics related! ReferenceAll of this material is online at `github.com/saulshanabrook/icalepcs-2019-jupyterlab`. You can run it all on binder. So if you are interested, go to this URL and click the link to follow along. Outline Image
###Code
import ipywidgets
@ipywidgets.interact(i=(0, 5))
def display_diagram(i=0):
return ipywidgets.Image(value=open(f"Talk{i}.png", "rb").read())
###Output
_____no_output_____
###Markdown
Reducing Order for Accelerated Analysis: *One Guy’s Take on SVD, POD, DMD, and Their Use for Nuclear Engineering*Jeremy Roberts, Associate Professor \Alan Levin Department of Mechanical and Nuclear Engineering \Kansas State UniversityWednesday, January 27th, 2021Rothrock Lecture Series \Department of Nuclear Engineering \Texas A&M University I Spotted a Bear that Changed My World Karhunen and Loève said a stochastic process $X_t$ with covariance $K_X(t, t')$ can be represented exactly (or approximately) as an infinite (or a finite), weighted sum of the (time-dependent) eigenvectors of a certain functional of $K_{X}$. Let $\mathbf{x}_j$ be a column of an image (i.e., matrix) $\mathbf{A}$. Then $$\mathbf{C}_{ij} = E[(\mathbf{x}_i-\mathbf{m})^T (\mathbf{x}_j-\mathbf{m})] \, ,$$for $\mathbf{m} = E[\mathbf{x}]$, and eigendecomposition leads to$$\mathbf{C} = \mathbf{W}\boldsymbol{\Lambda}\mathbf{W}^T \, .$$ Define $\mathbf{y} = \mathbf{W}^T\mathbf{x}$ (or $\mathbf{B} = \mathbf{W}^T\mathbf{A}$). By construction, $\mathbf{W}^T$ rotates the columns $\mathbf{x}$ so that the result is decorrelated, i.e., diagonalized. Moreover, the columns of $\mathbf{W}$ with the largest eigenvalues $\lambda$ preserve the most “energy” of the initial system upon the inverse (or a "best" picture). This is (often, but maybe not correctly) called the Karhunen-Loève Transform. (See R.D. Dony, "Karhunen-Loève Transform". *The Transform and Data Compression Handbook*Ed. K. R. Rao and P.C. Yip. Boca Raton, CRC Press LLC, 2001.) Whoa, slow down. Show me the SVD. First, what does "best" mean? Least squares? Minimax? Given $\mathbf{A} \in R^{m\times n}$ of rank $\min(m, n)$, can we find $\tilde{\mathbf{A}} \in R^{m\times n}$ of rank $r < \min(m, n)$ that satisfies $$ \min_{\tilde{A}} \sqrt{\sum_j \sum_i (A_{ij} - \tilde{A}_{ij})^2} \qquad \text{least-square pixel error}$$ Equivalently, where $\mathbf{x}$ is a column of $\mathbf{A}$, find $\tilde{\mathbf{A}}$ that satisfies$$ \min_{\tilde{A}} \sqrt{\sum_j ||\mathbf{x}_i - \tilde{\mathbf{x}}_i||_2^2} \qquad \text{root-mean square column error}$$ Both are useful ways to think of the problem since applications are often about "all the pixels" or "all the columns." The solution, of course, is the singular value decomposition (SVD), or $$ \mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T \, ,$$where $\mathbf{U} \in R^{m, n}$, $\boldsymbol{\Sigma} \in R^{n, n}$ is a diagonal matrix of strictly nonnegative *singular* values $\sigma_i,\, i = 1\ldots n$ such that $\sigma_i \geq \sigma_{i+1}$, and $\mathbf{V} \in R^{n, n}$. Both $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices (i.e., $\mathbf{U}^T \mathbf{U} = \mathbf{I}$). Proof is left to the viewer... but the *truncated* SVD $\mathbf{U}$ yields the approximation $\tilde{\mathbf{A}} = \mathbf{U}_r \boldsymbol{\Sigma}_r \mathbf{V}_r^T$ that uses the first $r$ columns $\mathbf{U}$ and minimizes$$ \sqrt{\sum_j \sum_i (A_{ij} - \tilde{A}_{ij})^2} = || \tilde{\mathbf{A}} - \mathbf{A} ||_F \, .$$among all possible rank-$r$ approximations $\mathbf{A}$.
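A minimal numpy sketch of the decorrelation idea described above, on hypothetical random data (not part of the original lecture material):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))  # toy data with correlated columns
m = A.mean(axis=0)
C = np.cov(A - m, rowvar=False)       # covariance between columns
lam, W = np.linalg.eigh(C)            # C = W diag(lam) W^T
B = (A - m) @ W                       # rotated columns are decorrelated
print(np.round(np.cov(B, rowvar=False), 3))  # ~diagonal, entries ~lam
```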
###Code
import urllib
url = 'https://unsplash.com/photos/f1q4NlVRYSc/download?force=true&w=2400'
urllib.request.urlretrieve(url , 'snake.jpg')
import matplotlib.pyplot as plt
A_rgb = plt.imread('snake.jpg') # 1737x2400 matrix of RGB tuples
plt.imshow(A_rgb)
import numpy as np
A = np.array(A_rgb.reshape((A_rgb.shape[0], A_rgb.shape[1]*A_rgb.shape[2])), dtype='float')
U, sigma, V = np.linalg.svd(A, compute_uv=True) # sigma is a 1-d array
import matplotlib.pyplot as plt
plt.semilogy(sigma, 'go', mfc='w')
plt.semilogy(10, sigma[10], 'rs', 50, sigma[50], 'b^', 100, sigma[100], 'kh', mfc='None', ms=15)
plt.xlabel('$i$'); plt.ylabel('$\sigma_i$'); plt.title("Most information in first ~100 singular values.")
A_r = []
for r in [10, 50, 100]:
Sigma = np.diag(sigma) # make Sigma a diagonal matrix
A_r.append(U[:, :r]@(Sigma[:r, :r]@V[:r, :])) # one can also try sklearn.decomposition.TruncatedSVD
A_r[-1] = np.array(A_r[-1].reshape((1737, 2400, 3)), dtype='i')
A_r[-1][A_r[-1]<0]=0; A_r[-1][A_r[-1]>255]=255;
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10,8))
axes[0, 0].imshow(A_rgb); axes[0,0].set_title('original')
tmpl = 'n={}, {:.1f}% compression'
comp = lambda n: (1-(U.shape[0]+V.shape[0])*n/A.shape[0]/A.shape[1])*100
axes[1, 0].imshow(A_r[0]); axes[1,0].set_title(tmpl.format(10, comp(10)))
axes[0, 1].imshow(A_r[1]); axes[0,1].set_title(tmpl.format(50, comp(50)))
axes[1, 1].imshow(A_r[2]); axes[1,1].set_title(tmpl.format(100, comp(100)))
###Output
_____no_output_____
|
notebooks/edit-run-repeat.ipynb
|
###Markdown
Edit-run-repeat: Stopping the cycle of pain 1. No more docs-guessing
###Code
# imports (assumed here; the original notebook may define these in an earlier setup cell)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("../data/water-pumps.csv", index=0)
df.head(1)
pd.read_csv?
df = pd.read_csv("../data/water-pumps.csv",
index_col=0,
parse_dates="date_recorded")
df.head(1)
###Output
_____no_output_____
###Markdown
2. No more copy pastaDon't repeat yourself.
###Code
plot_data = df['construction_year']
plot_data = plot_data[plot_data != 0]
sns.kdeplot(plot_data, bw=0.1)
plt.show()
plot_data = df['longitude']
plot_data = plot_data[plot_data != 0]
sns.kdeplot(plot_data, bw=0.1)
plt.show()
plot_data = df['amount_tsh']
plot_data = plot_data[plot_data > 20000]
sns.kdeplot(plot_data, bw=0.1)
plt.show()
plot_data = df['latitude']
plot_data = plot_data[plot_data > 20000]
sns.kdeplot(plot_data, bw=0.1)
plt.show()
def kde_plot(dataframe, variable, upper=0.0, lower=0.0, bw=0.1):
plot_data = dataframe[variable]
plot_data = plot_data[(plot_data > lower) & (plot_data < upper)]
sns.kdeplot(plot_data, bw=bw)
plt.show()
kde_plot(df, 'construction_year', upper=2016)
kde_plot(df, 'longitude', upper=42)
kde_plot(df, 'amount_tsh', lower=20000, upper=400000)
###Output
_____no_output_____
###Markdown
 3. No more guess-and-checkUse [pdb](https://docs.python.org/2/library/pdb.html), the Python debugger, to debug inside a notebook. Key commands are: - `p`: Evaluate and print Python code - `w`: Where in the stack trace am I? - `u`: Go up a frame in the stack trace. - `d`: Go down a frame in the stack trace. - `c`: Continue execution - `q`: Stop execution There are two ways to activate the debugger: - `%pdb`: toggles whether or not the debugger will be called on an exception - `%debug`: enters the debugger at the line where this magic is
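Besides the two magics, the debugger can also be entered programmatically; a minimal sketch (the function below is made up purely for illustration):

```python
import pdb

def buggy_mean(values):
    pdb.set_trace()          # execution pauses here; try `p values`, `w`, `c`, `q`
    return sum(values) / len(values)

# buggy_mean([])             # uncomment to step into the ZeroDivisionError interactively
```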
###Code
kde_plot(df, 'date_recorded')
# "1" turns pdb on, "0" turns pdb off
%pdb 1
kde_plot(df, 'date_recorded')
# turn off debugger
%pdb 0
###Output
_____no_output_____
###Markdown
4. No more "Restart & Run All"`assert` is the poor man's unit test: stops execution if condition is `False`, continues silently if `True`
###Code
def gimme_the_mean(series):
return np.mean(series)
assert gimme_the_mean([0.0]*10) == 0.0
data = np.random.normal(0.0, 1.0, 1000000)
assert gimme_the_mean(data) == 0.0
np.testing.assert_almost_equal(gimme_the_mean(data),
0.0,
decimal=1)
###Output
_____no_output_____
|
reddit webscrapping and NLP Project/reddit web scrapper.ipynb
|
###Markdown
 We will scrape data such as posts and comments from Reddit and build a dataset that will later be used for natural language processing.
###Code
from core.selenium_scraper import SeleniumScraper
from core.soup_scraper import SoupScraper
from core.progress_bar import ProgressBar
from core.sql_access import SqlAccess
import time
pip install selenium
reddit_home = 'https://www.reddit.com'
slash = '/r/'
subreddit = 'DataScience'
sort_by = '/hot/'
scroll_n_times = 1000
scrape_comments = True
erase_db_first = True
SQL = SqlAccess()
SelScraper = SeleniumScraper()
BSS = SoupScraper(reddit_home,
slash,
subreddit)
SelScraper.setup_chrome_browser()
# Collect links from subreddit
links = SelScraper.collect_links(page = reddit_home +
slash + subreddit + sort_by,
scroll_n_times = scroll_n_times)
# Find the <script> with id='data' for each link
script_data = BSS.get_scripts(urls = links)
# Transforms each script with data into a Python dict, returned as [{}, {}...]
BSS.data = SelScraper.reddit_data_to_dict(script_data = script_data)
print('Scraping data...')
progress = ProgressBar(len(links))
for i, current_data in enumerate(BSS.data):
progress.update()
BSS.get_url_id_and_url_title(BSS.urls[i],
current_data, i)
BSS.get_title()
BSS.get_upvote_ratio()
BSS.get_score()
BSS.get_posted_time()
BSS.get_author()
BSS.get_flairs()
BSS.get_num_gold()
BSS.get_category()
BSS.get_total_num_comments()
BSS.get_links_from_post()
BSS.get_main_link()
BSS.get_text()
BSS.get_comment_ids()
print('Scraping data...')
start = time.time()
progress = ProgressBar(len(links))
for i, current_data in enumerate(BSS.data):
progress.update()
BSS.get_url_id_and_url_title(BSS.urls[i],
current_data, i)
BSS.get_title()
BSS.get_upvote_ratio()
BSS.get_score()
BSS.get_posted_time()
BSS.get_author()
BSS.get_flairs()
BSS.get_num_gold()
BSS.get_category()
BSS.get_total_num_comments()
BSS.get_links_from_post()
BSS.get_main_link()
BSS.get_text()
BSS.get_comment_ids()
time.sleep(1)
BSS.prepare_data_for_sql(scrape_comments=scrape_comments)
try:
SQL.create_or_connect_db(erase_first=erase_db_first)
# [0] = post, [1] = comment, [2] = link
for i in range(len(BSS.post_data)):
SQL.insert('post', data = BSS.post_data[i])
SQL.insert('link', data = BSS.link_data[i])
if scrape_comments:
SQL.insert('comment', data = BSS.comment_data[i])
except Exception as ex:
print(ex)
finally:
SQL.save_changes()
time.sleep(1)
end = time.time()
print(('\nIt took {0} seconds to scrape and store {1} links').format(round(end - start, 1),
len(links)))
###Output
Scraping data...
Gathering all the scraped data, and scraping ALL comment data (very slow, dependent on number of comments)
It took 2.0 seconds to scrape and store 0 links
###Markdown
 The dataset is saved into the SQL database.
###Code
# run this if you want to insert new data into the database
try:
SQL.create_or_connect_db(erase_first=erase_db_first)
# [0] = post, [1] = comment, [2] = link
for i in range(len(BSS.post_data)):
SQL.insert('post', data = BSS.post_data[i])
SQL.insert('link', data = BSS.link_data[i])
if scrape_comments:
SQL.insert('comment', data = BSS.comment_data[i])
except Exception as ex:
print(ex)
finally:
SQL.save_changes()
# now let's make the dataset
from core.sql_access import SqlAccess
import pandas as pd
#connect to database
SQL = SqlAccess()
SQL.create_or_connect_db()
c = SQL.conn
#retrieve data from database table
all_data = pd.read_sql_query("""
SELECT *
FROM post p
LEFT JOIN comment c
ON p.id = c.post_id
LEFT JOIN link l
ON p.id = l.post_id;
""", c)
#collect posts and comments data
post = pd.read_sql_query("""
SELECT *
FROM post;
""", c)
comment = pd.read_sql_query("""
SELECT *
FROM comment;
""", c)
#save as csv
all_data.to_csv('data/post_comment_link_data_demo.csv', columns=all_data.columns, index=False)
post.to_csv('data/post_data_demo.csv', columns=post.columns, index=False)
comment.to_csv('data/comment_data_demo.csv', columns=comment.columns, index=False)
###Output
_____no_output_____
|
notebooks/CLASSIFICATION_POWERCHRD_(2,3,4 classes)/CNN_MFCCs_all_fx_on-off_PWC_2_CLASSES_5.9.2018.ipynb
|
###Markdown
Importing the required libraries
###Code
# TRAINING ON TESTS OF INCREASING COMPLEXITY
import librosa
import librosa.display
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from matplotlib.pyplot import specgram
import keras
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
#from keras.layers import LSTM
#from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Input, Flatten, Dropout, Activation
from keras.layers import Conv1D, MaxPooling1D, AveragePooling1D
from keras.models import Model
from keras.callbacks import ModelCheckpoint
from sklearn.metrics import confusion_matrix
import pandas as pd
from keras_tqdm import TQDMNotebookCallback
import pickle
import os
###Output
/home/stjepan/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Data
###Code
# Preprocessed data (feature extraction + labels)
# config file only for info on preprocessing details
# https://drive.google.com/open?id=1ARx2M2OnHjUDXFb1Z33BZ7lBhrn0HIpn
features = np.load('/home/stjepan/Documents/soloact/data/processed/training_X_power.npy')
features.shape
#reading labels from csv
labels = pd.read_csv("/home/stjepan/Documents/soloact/data/processed/training_Y_power.csv", delimiter=",")
#labels[:9]
#our labels (2 classes)
labels[["overdrive.gain_db", "reverb.reverberance"]].head(10)
###Output
_____no_output_____
###Markdown
Train/test split + Shuffle
###Code
from sklearn.model_selection import train_test_split
X_train, X_val, Y_train, Y_val = train_test_split(features,labels, shuffle = True, test_size = 0.2, random_state = 44)
Y_train.shape
###Output
_____no_output_____
###Markdown
Filtering labels from dataframe
###Code
Y_train_l = Y_train.filter(regex=r'reverb.rev|gain_db')
Y_val_l = Y_val.filter(regex=r'reverb.rev|gain_db')
# Y_train_l = Y_train.filter(regex=r'chorus.del|phaser.del|reverb.rev|gain_db')
# Y_val_l = Y_val.filter(regex=r'chorus.del|phaser.del|reverb.rev|gain_db')
Y_train_l[:9]
###Output
_____no_output_____
###Markdown
Setting up labels
###Code
Y_train_l2 = Y_train_l.fillna(0)
Y_val_l2 = Y_val_l.fillna(0)
Y_train_l2.columns = ["O","R"]
Y_val_l2.columns = ["O","R"]
Y_train_l2 = Y_train_l2.where(Y_train_l2 == 0).replace(np.nan,1)
Y_val_l2 = Y_val_l2.where(Y_val_l2 == 0).replace(np.nan,1)
Y_val_l2[list("OR")] = Y_val_l2[list("OR")].astype(int)
Y_train_l2[list("OR")] = Y_train_l2[list("OR")].astype(int)
# Getting binary representation of label states (on - off)
Y_val_l2[:9]
###Output
_____no_output_____
###Markdown
 Concatenating the effect name with its state for later comparison of actual vs. predicted labels
###Code
labels_train = Y_train_l2.assign(labels = 'Ovd__' + Y_train_l2["O"].apply(str) + '_Rev__' + Y_train_l2["R"].apply(str))
labels_val = Y_val_l2.assign( labels = 'Ovd__'+ Y_val_l2["O"].map(str) + '_Rev__' + Y_val_l2["R"].map(str))
labels_TR = labels_train["labels"]
labels_VAL = labels_val["labels"]
labels_VAL[:5]
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
lb = LabelEncoder()
Y_train_OCPR = np_utils.to_categorical(lb.fit_transform(labels_TR))
Y_val_OCPR = np_utils.to_categorical(lb.fit_transform(labels_VAL))
Y_train_OCPR[0]
Y_val_OCPR
###Output
_____no_output_____
###Markdown
CNN model
###Code
from keras import layers
model = Sequential()
model.add(Conv1D(128, 5,padding='same',
input_shape=(205,1)))
model.add(Activation('relu'))
model.add(Conv1D(128, 5,padding='same'))
model.add(Activation('relu'))
model.add(Dropout(0.1))
model.add(MaxPooling1D(pool_size=(8)))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Conv1D(128, 5,padding='same',))
model.add(Activation('relu'))
model.add(Flatten())
model2 = Sequential()
model2.add(model)
model2.add(Dense(4, activation = "softmax"))
model1 = Sequential()
model1.add(model)
model1.add(Dense(1, activation = "relu"))
opt = keras.optimizers.Adam(lr=0.00001)
model1.compile(optimizer= opt, loss=['mae'], metrics =["mae"])
model2.compile(optimizer = opt, loss = "categorical_crossentropy", metrics = ["accuracy"])
###Output
WARNING:tensorflow:From /home/stjepan/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:497: calling conv1d (from tensorflow.python.ops.nn_ops) with data_format=NHWC is deprecated and will be removed in a future version.
Instructions for updating:
`NHWC` for data_format is deprecated, use `NWC` instead
###Markdown
Fitting the model
###Code
CLASS_history = model2.fit(X_train, Y_train_OCPR,
batch_size=32,
epochs=10,
validation_data=(X_val, Y_val_OCPR),
verbose=0,
callbacks=[TQDMNotebookCallback()])
model_name = 'guitar_Overdrive_Reverb_on-off_20-70_lr_0.0001_10ep_dropout_0.3_PWC_2_CLASSES_5.9.2018.h5'
save_dir = os.path.join(os.getcwd(), 'saved_models')
# Save model, and history
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model2.save(model_path)
print('Saved trained model at %s ' % model_path)
with open('guitar_Overdrive_Reverb_on-off_20-70_lr_0.0001_10ep_dropout_0.3_PWC_2_CLASSES_5.9.2018', 'wb') as file_pi:
pickle.dump(CLASS_history.history, file_pi)
# Plot history
plt.plot(CLASS_history.history['acc'])
plt.plot(CLASS_history.history['val_acc'])
plt.title('model ACC')
plt.ylabel('ACC')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.ylim(ymax=1)
plt.ylim(ymin=0)
plt.show()
###Output
_____no_output_____
###Markdown
Predictions
###Code
preds = model2.predict(X_val,
batch_size=32,
verbose=1)
type(preds)
preds1=preds.argmax(axis=1)
preds1
abc = preds1.astype(int).flatten()
predictions = (lb.inverse_transform((abc)))
preddf = pd.DataFrame({'predictedvalues': predictions})
preddf[:10]
actual=Y_val_OCPR.argmax(axis=1)
abc123 = actual.astype(int).flatten()
actualvalues = (lb.inverse_transform((abc123)))
actualdf = pd.DataFrame({'actualvalues': actualvalues})
finaldf = actualdf.join(preddf)
np.sum(finaldf.actualvalues != finaldf.predictedvalues)
finaldf[:9]
cnnhistory=model.fit(X_train, Y_train, batch_size=32, epochs=50, validation_data=(X_val, Y_val))
###Output
_____no_output_____
###Markdown
Plotting results
###Code
from pylab import rcParams
rcParams['figure.figsize'] = 15, 9 #setting figure size
plt.plot(cnnhistory.history['mean_absolute_error'])
plt.plot(cnnhistory.history['val_mean_absolute_error'])
plt.title('model MAE')
plt.ylabel('MAE')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.ylim(ymax=20)
plt.ylim(ymin=0)
plt.show()
###Output
_____no_output_____
###Markdown
Saving model
###Code
import os
model_name = 'guitar_dist_gain_0-80_reverberance_0-100_other_on-off_200ep_lr_0.0001_dropout_0.3_PWC.h5'
save_dir = os.path.join(os.getcwd(), 'saved_models')
# Save model and weights
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
###Output
Saved trained model at /home/stjepan/Documents/_Krish_Suchitra_Tristan_PORTFOLIO/DL_audio/saved_models/guitar_dist_gain_0-80_reverberance_0-100_other_on-off_200ep_lr_0.0001_dropout_0.3_PWC.h5
###Markdown
 Predictions
###Code
preds = model.predict(X_val,
batch_size=32,
verbose=1)
preds=list(preds[:,0])
#actual=list(Y_val[:,0])
results = pd.DataFrame({'predicted' : preds, 'actual' : Y_val})
results['diff'] = abs(results['predicted'] - results['actual'])
print(results['diff'].mean())
results.head(15)
results.sort_values(by='actual', ascending=False, inplace = True)
results_p=results[["predicted", "actual"]]
results_p.reset_index(inplace=True, drop = "index")
results_p.plot()
###Output
210/210 [==============================] - 0s 1ms/step
3.3403684885728926
###Markdown
 The baseline model predicts the gain parameter of the guitar distortion effect with an average error of 4.5 (out of a 0-65 range)
###Code
results['diff'].describe()
###Output
_____no_output_____
|
docs/dymos_book/api/phase_api.ipynb
|
###Markdown
The Phase API Options
###Code
om.show_options_table('dymos.phase.Phase')
###Output
_____no_output_____
|
examples/exp-convergence_study_38cells.ipynb
|
###Markdown
Initial condition
###Code
# 2D honeycomb mesh
n_x = 5
n_y = 5
coords = utils.generate_honeycomb_coordinates(n_x, n_y)
# make cell_list for the sheet
sheet = [cl.Cell(i, [x,y], -6.0, True, lambda t: 6 + t) for i, (x, y) in enumerate(coords)]
# delete cells to make it circular
del sheet[24]
del sheet[23]
del sheet[21]
del sheet[20]
del sheet[4]
del sheet[0]
# plot to check what happened
utils.plot_2d_population(sheet)
#prepare consistent initial data
solver_scipy = cbmos.CBModel(ff.PiecewisePolynomial(), scpi.solve_ivp, dim)
t_data_init = [0, 0.0001]
_, initial_sheet = solver_scipy.simulate(sheet, t_data_init, {'muA': 0.21*9.1, 'muR': 9.1, 'rA': rA, 'rR': 1.0/(1.0-np.sqrt(0.21)/3.0), 'n': 1.0, 'p': 1.0}, {}, seed=seed)[-1]
# plot to check what happened
utils.plot_2d_population(initial_sheet)
###Output
_____no_output_____
###Markdown
Convergence study Using parameters fitted to relaxation time
###Code
params_cubic = {"mu": 5.70, "s": s, "rA": rA}
muR = 9.1
ratio = 0.21
params_poly = {'muA': ratio*muR, 'muR': muR, 'rA': rA, 'rR': 1.0/(1.0-np.sqrt(ratio)/3.0), 'n': 1.0, 'p': 1.0}
mu_gls=1.95
params_gls = {'mu': mu_gls, 'a':-2*np.log(0.002/mu_gls)}
params = {'cubic': params_cubic, 'pw. quad.': params_poly, 'GLS': params_gls}
dt_ref = 0.0005
N_ref = int(1/dt_ref*tf)+1
t_data_ref = np.arange(0, tf, dt_ref)
ref_sol_dicts = {}
for solver in solver_names:
print(solver)
solvers = solver_dicts[solver]
ref_traj = {}
for force in force_names:
print('>'+force)
t_data_ref, history = solvers[force].simulate(initial_sheet, t_data_ref, params[force], {'dt': dt_ref}, seed=seed)
ref_traj[force] = {
cell.ID: np.array([cell_list[i].position for cell_list in history])
for i, cell in enumerate(history[0])
}
ref_sol_dicts[solver] = ref_traj
print('Done.')
dt_values = [0.001*1.25**n for n in range(0, 22)]
sol_dicts = {}
sol = {'cubic': [], 'pw. quad.': [], 'GLS': []}
for dt in dt_values:
N = int(1/dt*tf) + 1
print([ N, dt])
#t_data = np.linspace(0,1,N)
t_data = np.arange(0,tf,dt)
for force in force_names:
t_data, history = solver_dicts['EF'][force].simulate(initial_sheet, t_data, params[force], {'dt': dt}, seed=seed)
length = min(len(t_data), len(history))
traj = {
cell.ID: np.array([cell_list[i].position for cell_list in history])
for i, cell in enumerate(history[0])
}
errorx = 0
errory = 0
for ID, tr in traj.items():
interx = np.interp(t_data_ref[:], t_data[:length], tr[:length, 0])
intery = np.interp(t_data_ref[:], t_data[:length], tr[:length, 1])
#splinex = CubicSpline(t_data[1:length], tr[:length, 0])
#spliney = CubicSpline(t_data[1:length], tr[:length, 1])
#interx = splinex(t_data_ref[1:])
#intery = spliney(t_data_ref[1:])
ref = ref_sol_dicts['EF'][force][ID]
refx = ref[:,0]
refy = ref[:,1]
errorx = errorx + np.linalg.norm(interx-refx)/np.linalg.norm(refx)
errory = errory + np.linalg.norm(intery-refy)/np.linalg.norm(refy)
error = np.array([errorx, errory])/len(history[0])
sol[force].append(error)
sol_dicts['EF'] = sol
current_solver = 'midpoint'
sol = {'cubic': [], 'pw. quad.': [], 'GLS': []}
for dt in dt_values:
N = int(1/dt*tf) + 1
#print([ N, dt])
#t_data = np.linspace(0,1,N)
t_data = np.arange(0,tf,dt)
for force in force_names:
t_data, history = solver_dicts[current_solver][force].simulate(initial_sheet, t_data, params[force], {'dt': dt}, seed=seed)
length = min(len(t_data), len(history))
traj = {
cell.ID: np.array([cell_list[i].position for cell_list in history])
for i, cell in enumerate(history[0])
}
errorx = 0
errory = 0
for ID, tr in traj.items():
splinex = CubicSpline(t_data[:length], tr[:length, 0])
spliney = CubicSpline(t_data[:length], tr[:length, 1])
interx = splinex(t_data_ref[:])
intery = spliney(t_data_ref[:])
ref = ref_sol_dicts[current_solver][force][ID]
refx = ref[:,0]
refy = ref[:,1]
errorx = errorx + np.linalg.norm(interx-refx)/np.linalg.norm(refx)
errory = errory + np.linalg.norm(intery-refy)/np.linalg.norm(refy)
error = np.array([errorx, errory])/len(history[0])
sol[force].append(error)
sol_dicts[current_solver] = sol
current_solver = 'AB'
sol = {'cubic': [], 'pw. quad.': [], 'GLS': []}
for dt in dt_values:
N = int(1/dt*tf) + 1
#print([ N, dt])
#t_data = np.linspace(0,1,N)
t_data = np.arange(0,tf,dt)
for force in force_names:
t_data, history = solver_dicts[current_solver][force].simulate(initial_sheet, t_data, params[force], {'dt': dt}, seed=seed)
length = min(len(t_data), len(history))
traj = {
cell.ID: np.array([cell_list[i].position for cell_list in history])
for i, cell in enumerate(history[0])
}
errorx = 0
errory = 0
for ID, tr in traj.items():
splinex = CubicSpline(t_data[:length], tr[:length, 0])
spliney = CubicSpline(t_data[:length], tr[:length, 1])
interx = splinex(t_data_ref[:])
intery = spliney(t_data_ref[:])
ref = ref_sol_dicts[current_solver][force][ID]
refx = ref[:,0]
refy = ref[:,1]
errorx = errorx + np.linalg.norm(interx-refx)/np.linalg.norm(refx)
errory = errory + np.linalg.norm(intery-refy)/np.linalg.norm(refy)
error = np.array([errorx, errory])/len(history[0])
sol[force].append(error)
sol_dicts[current_solver] = sol
params = {'legend.fontsize': 'xx-large',
'figure.figsize': (6.75, 5),
'lines.linewidth': 3.0,
'axes.labelsize': 'xx-large',
'axes.titlesize':'xx-large',
'xtick.labelsize':'xx-large',
'ytick.labelsize':'xx-large',
'legend.fontsize': 'xx-large',
'font.size': 12,
'font.family': 'serif',
"mathtext.fontset": "dejavuserif",
'axes.titlepad': 12,
'axes.labelpad': 12}
plt.rcParams.update(params)
# single figure
fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(19.5, 5), sharey=True, gridspec_kw={'wspace': 0.1})
# ax1
for force in force_names:
ax1.loglog(dt_values, np.sum(np.array(sol_dicts['EF'][force]), axis=1), label=force, color=colors[force], linestyle=linestyles[force])
#plt.loglog(dt_values, np.array(sol_dicts['EF'][force])[:,1], label=force+' y')
ax1.loglog(dt_values[1:-1], np.array(dt_values[1:-1])*0.2, ':', label='$f(\Delta t)= \Delta t$', color='grey')
ax1.legend(loc='lower right', borderaxespad=0.)
#plt.legend()
ax1.set_title(r'$\bf{(g)}$')
ax1.set(ylabel='$\epsilon_{rel}$')
ax1.set_xlim([7*1e-4, 1.5*1e-1])
ax1.set_ylim([5*1e-10, 5*1e-0])
ax1.set_xticklabels([])
#ax1.set_ylim([5*1e-8, 5*1e-0])
ax1.text(0.0001, 0.00000001, 'Monolayer (38 cells)', fontsize=22, rotation='vertical')
#ax2
for force in force_names:
ax2.loglog(dt_values, np.sum(np.array(sol_dicts['midpoint'][force]), axis=1), label=force, color=colors[force], linestyle=linestyles[force])
#plt.loglog(dt_values, np.array(sol_dicts[current_solver][force])[:,1], label=force+' y')
ax2.loglog(dt_values[1:-1], np.array(dt_values[1:-1])**2*0.4, ':', label='$f(\Delta t)= \Delta t^2$', color='grey')
ax2.legend(borderaxespad=0.)
#ax2.set(xlabel='$\Delta t$')
ax2.set_xlim([7*1e-4, 1.5*1e-1])
ax2.set_title(r'$\bf{(h)}$')
ax2.set_xticklabels([])
#ax2.text(0.0025, 150, 'midpoint method', fontsize=22)
#ax3
for force in force_names:
ax3.loglog(dt_values, np.sum(np.array(sol_dicts['AB'][force]), axis=1), label=force, color=colors[force], linestyle=linestyles[force])
#plt.loglog(dt_values, np.array(sol_dicts[current_solver][force])[:,1], label=force+' y')
ax3.loglog(dt_values[1:-1], np.array(dt_values[1:-1])**2*0.4, ':', label='$f(\Delta t)= \Delta t^2$', color='grey')
ax3.legend(borderaxespad=0.)
#ax3.set(xlabel='$\Delta t$')
ax3.set_xlim([7*1e-4, 1.5*1e-1])
ax3.set_title(r'$\bf{(i)}$')
ax3.set_xticklabels([])
#ax3.text(0.001, 150, 'Adams-Bashforth method', fontsize=22)
plt.savefig('Fig13c_combined.pdf', bbox_inches='tight')
###Output
_____no_output_____
|
analysis/cifar10_weights_analysis.ipynb
|
###Markdown
This notebook analyzes the CIFAR-10 model weights and layer outputs in order to apply quantization while maintaining accuracy
###Code
import matplotlib.pyplot as plt
import numpy as np
from utils.weights import get_weights
weights_dict = get_weights()
# plot the weight distribution of convolutional layers
for i in range(4):
conv_name = 'conv2d_%d' % (i + 1)
weights = weights_dict[conv_name]['kernel'].flatten()
plt.figure()
plt.hist(weights, bins=16)
plt.title(conv_name)
plt.show()
# plot the bias distribution of convolutional layers
for i in range(4):
conv_name = 'conv2d_%d' % (i + 1)
weights = weights_dict[conv_name]['bias'].flatten()
plt.figure()
plt.hist(weights, bins=16)
plt.title(conv_name)
plt.show()
# plot the weight distribution of dense layers
for i in range(2):
conv_name = 'dense_%d' % (i + 1)
weights = weights_dict[conv_name]['kernel'].flatten()
print('max:', max(weights), 'min:', min(weights))
plt.figure()
plt.hist(weights, bins=32)
plt.title(conv_name)
plt.show()
import h5py
import os
# load data of output of each layer of 10000 testing samples from cifar10
data_each_layer = h5py.File('data/cifar-10_output.h5', 'r')
layers = data_each_layer['x_test_group'].keys()
print(layers)
useful_layers = ['conv2d_1_input', 'conv2d_1', u'batch_normalization_1', u'conv2d_2', u'batch_normalization_2',
u'conv2d_3', u'batch_normalization_3', u'conv2d_4', u'batch_normalization_4', u'dense_1', u'dense_2']
# plot the output of each layer
for layer in useful_layers:
data = data_each_layer['x_test_group'][layer][0:1000].flatten()
print('max:', max(data), 'min:', min(data))
plt.figure()
plt.hist(data, bins=16)
plt.title(layer)
plt.show()
###Output
_____no_output_____
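###Markdown
A minimal sketch of the kind of uniform quantization these weight/activation histograms are meant to inform. This is illustration only and not part of the original analysis; the helper name and the 8-bit symmetric scheme are assumptions.
###Code
# Hypothetical example: symmetric linear quantization of a weight tensor to n bits.
def quantize_symmetric(w, n_bits=8):
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)  # map the largest magnitude to the int range
    q = np.round(w / scale).astype(np.int8)
    return q, scale

w = weights_dict['conv2d_1']['kernel'].flatten()
q_w, scale = quantize_symmetric(w)
print('scale:', scale, 'max abs quantization error:', np.max(np.abs(q_w * scale - w)))
###Output
_____no_output_____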
|
DataScience/DS-Unit-2-Linear-Models/module2-regression-2/LS_DS_212.ipynb
|
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 2*--- Regression 2- Do train/test split- Use scikit-learn to fit a multiple regression- Understand how ordinary least squares regression minimizes the sum of squared errors- Define overfitting/underfitting and the bias/variance tradeoff SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries:- matplotlib- numpy- pandas- plotly- scikit-learn
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
###Output
_____no_output_____
###Markdown
Do train/test split Overview Predict Elections! 🇺🇸🗳️ How could we try to predict the 2020 US Presidential election? According to Douglas Hibbs, a political science and economics professor, you can [explain elections with just two features, "Bread and Peace":](https://douglas-hibbs.com/background-information-on-bread-and-peace-voting-in-us-presidential-elections/)> Aggregate two-party vote shares going to candidates of the party holding the presidency during the postwar era are well explained by just two fundamental determinants:>> (1) Positively by weighted-average growth of per capita real disposable personal income over the term. > (2) Negatively by cumulative US military fatalities (scaled to population) owing to unprovoked, hostile deployments of American armed forces in foreign wars. Let's look at the data that Hibbs collected and analyzed:
###Code
import pandas as pd
df = pd.read_csv(DATA_PATH+'elections/bread_peace_voting.csv')
df
###Output
_____no_output_____
###Markdown
Data Sources & Definitions- 1952-2012: Douglas Hibbs, [2014 lecture at Deakin University Melbourne](http://www.douglas-hibbs.com/HibbsArticles/HIBBS-PRESVOTE-SLIDES-MELBOURNE-Part1-2014-02-26.pdf), Slide 40- 2016, Vote Share: [The American Presidency Project](https://www.presidency.ucsb.edu/statistics/elections)- 2016, Recent Growth in Personal Incomes: [The 2016 election economy: the "Bread and Peace" model final forecast](https://angrybearblog.com/2016/11/the-2016-election-economy-the-bread-and-peace-model-final-forecast.html)- 2016, US Military Fatalities: Assumption that Afghanistan War fatalities in 2012-16 occured at the same rate as 2008-12> Fatalities denotes the cumulative number of American military fatalities per millions of US population the in Korea, Vietnam, Iraq and Afghanistan wars during the presidential terms preceding the 1952, 1964, 1968, 1976 and 2004, 2008 and 2012 elections. —[Hibbs](http://www.douglas-hibbs.com/HibbsArticles/HIBBS-PRESVOTE-SLIDES-MELBOURNE-Part1-2014-02-26.pdf), Slide 33 Here we have data from the 1952-2016 elections. We could make a model to predict 1952-2016 election outcomes — but do we really care about that? No, not really. We already know what happened, we don't need to predict it. This is explained in [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/), Chapter 2.2, Assessing Model Accuracy:> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? >> Suppose that we are interested in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. >> On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. So, we're really interested in the 2020 election — but we probably don't want to wait until then to evaluate our model.There is a way we can estimate now how well our model will generalize in the future. We can't fast-forward time, but we can rewind it...We can split our data in **two sets.** For example: 1. **Train** a model on elections before 2008.2. **Test** the model on 2008, 2012, 2016. This "backtesting" helps us estimate how well the model will predict the next elections going forward, starting in 2020. 
This is explained in [_Forecasting,_ Chapter 3.4,](https://otexts.com/fpp2/accuracy.html) Evaluating forecast accuracy:> The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.>>When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.>>>>The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The following points should be noted.>>- A model which fits the training data well will not necessarily forecast well.>- A perfect fit can always be obtained by using a model with enough parameters.>- Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.>>Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. Other references call the training set the “in-sample data” and the test set the “out-of-sample data”. We prefer to use “training data” and “test data” in this book. **How should we split: Randomly? Before/after a given date?**I recommend you all read a great blog post, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/), by fast.ai cofounder Rachel Thomas.She gives great examples to answer the question “When is a random subset not good enough?” I’m not as opposed to random splits as Rachel Thomas seems to be. But it’s worth thinking about the trade-offs!Time-based and random splits can both be useful, and you’ll get repeated hands-on practice with both during this unit! (She also talks about the distinction between validation & test sets, which we’ll introduce in the last lesson of this Sprint.) Follow AlongSplit the data in two sets:1. Train on elections before 2008.2. Test on 2008 and after.
###Code
train = df[ df['Year'] < 2008 ]
test = df[ df['Year'] >= 2008 ]
###Output
_____no_output_____
###Markdown
How many observations (rows) are in the train set? In the test set?
###Code
train.shape, test.shape
###Output
_____no_output_____
###Markdown
Note that this volume of data is at least two orders of magnitude smaller than we usually want to work with for predictive modeling.There are other validation techniques that could be used here, such as [time series cross-validation](https://scikit-learn.org/stable/modules/cross_validation.htmltime-series-split), or [leave-one-out cross validation](https://scikit-learn.org/stable/modules/cross_validation.htmlleave-one-out-loo) for small datasets. However, for this module, let's start simpler, with train/test split. Using a tiny dataset is intentional here. It's good for learning because we can see all the data at once. ChallengeIn your assignment, you will do train/test split, based on date. Use scikit-learn to fit a multiple regression OverviewWe've done train/test split, and we're ready to fit a model. We'll proceed in 3 steps. The first 2 are review from the previous module. The 3rd is new.- Begin with baselines (0 features) - Simple regression (1 feature)- Multiple regression (2 features) Follow Along Begin with baselines (0 features) What was the average Incumbent Party Vote Share, in the 1952-2004 elections?
###Code
train['Incumbent Party Vote Share'].mean()
###Output
_____no_output_____
###Markdown
What if we guessed this number for every election? How far off would this be on average?
###Code
# Arrange y target vectors
target = 'Incumbent Party Vote Share'
y_train = train[target]
y_test = test[target]
# Get mean baseline
print('Mean Baseline (using 0 features)')
guess = y_train.mean()
guess # avg val of train set aka baseline
# Train Error
from sklearn.metrics import mean_absolute_error
y_pred = [guess] * len(y_train)
mae = mean_absolute_error(y_train, y_pred)
print(f'Train Error (1952-2004 elections): {mae:.2f} percentage points')
# Test Error
y_pred = [guess] * len(y_test)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test Error (2008-16 elections): {mae:.2f} percentage points')
###Output
Test Error (2008-16 elections): 3.63 percentage points
###Markdown
Simple regression (1 feature) Make a scatterplot of the relationship between 1 feature and the target.We'll use an economic feature: Average Recent Growth in Personal Incomes. ("Bread")
###Code
import pandas as pd
import plotly.express as px
px.scatter(
train,
x='Average Recent Growth in Personal Incomes',
y='Incumbent Party Vote Share',
text='Year',
title='US Presidential Elections, 1952-2004',
trendline='ols', # Ordinary Least Squares
)
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning:
pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
###Markdown
1952 & 1968 are outliers: The incumbent party got fewer votes than predicted by the regression. What do you think could explain those years? We'll come back to this soon, but first... Use scikit-learn to fit the simple regression with one feature.Follow the [5 step process](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API), and refer to [Scikit-Learn LinearRegression documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).
###Code
# 1. Import the appropriate estimator class from Scikit-Learn
from sklearn.linear_model import LinearRegression
# An estimator is an object in scikit-learn that has a fit method, i.e. it
# estimates parameters from the data passed to fit.
# There are 2 types:
# predictors and transformers.
# 2. Instantiate this class
model = LinearRegression()
# 3. Arrange X features matrices (already did y target vectors)
features = ['Average Recent Growth in Personal Incomes']
X_train = train[features]
X_test = test[features]
print(f'Linear Regression, dependent on: {features}')
# 4. Fit the model
model.fit(X_train, y_train)
y_pred = model.predict(X_train)
mae = mean_absolute_error(y_train, y_pred)
print(f'Train Error: {mae:.2f} percentage points')
# 5. Apply the model to new data
y_pred = model.predict(X_test)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test Error: {mae:.2f} percentage points')
###Output
Test Error: 1.80 percentage points
###Markdown
How does the error compare to the baseline? Multiple regression (2 features) Make a scatterplot of the relationship between 2 features and the target.We'll add another feature: US Military Fatalities per Million. ("Peace" or the lack thereof.)Rotate the scatterplot to explore the data. What's different about 1952 & 1968?
###Code
px.scatter_3d(
train,
x='Average Recent Growth in Personal Incomes',
y='US Military Fatalities per Million',
z='Incumbent Party Vote Share',
text='Year',
title='US Presidential Elections, 1952-2004'
)
###Output
_____no_output_____
###Markdown
Use scikit-learn to fit a multiple regression with two features.
###Code
# TODO: Complete this cell
# Re-arrange X features matrices
features = ['Average Recent Growth in Personal Incomes',
'US Military Fatalities per Million']
print(f'Linear Regression, dependent on: {features}')
X_train = train[features]
X_test = test[features]
# TODO: Fit the model
model.fit(X_train, y_train)
# TODO: Apply the model to new data
y_pred = model.predict(X_train)
y_pred
# compare to baseline
mae = mean_absolute_error(y_train, y_pred)
mae
# 5. Apply the model to new data
y_pred = model.predict(X_test)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test Error: {mae:.2f} percentage points')
###Output
Test Error: 1.63 percentage points
###Markdown
How does the error compare to the prior model? Plot the plane of best fit For a regression with 1 feature, we plotted the line of best fit in 2D. (There are many ways to do this. Plotly Express's `scatter` function makes it convenient with its `trendline='ols'` parameter.)For a regression with 2 features, we can plot the plane of best fit in 3D!(Plotly Express has a `scatter_3d` function but it won't plot the plane of best fit for us. But, we can write our own function, with the same "function signature" as the Plotly Express API.)
###Code
import itertools
import numpy as np
import plotly.express as px
import plotly.graph_objs as go
from sklearn.linear_model import LinearRegression
def regression_3d(df, x, y, z, num=100, **kwargs):
"""
Visualize linear regression in 3D: 2 features + 1 target
df : Pandas DataFrame
x : string, feature 1 column in df
y : string, feature 2 column in df
z : string, target column in df
num : integer, number of quantiles for each feature
"""
# Plot data
fig = px.scatter_3d(df, x, y, z, **kwargs)
# Fit Linear Regression
features = [x, y]
target = z
model = LinearRegression()
model.fit(df[features], df[target])
# Define grid of coordinates in the feature space
xmin, xmax = df[x].min(), df[x].max()
ymin, ymax = df[y].min(), df[y].max()
xcoords = np.linspace(xmin, xmax, num)
ycoords = np.linspace(ymin, ymax, num)
coords = list(itertools.product(xcoords, ycoords))
# Make predictions for the grid
predictions = model.predict(coords)
Z = predictions.reshape(num, num).T
# Plot predictions as a 3D surface (plane)
fig.add_trace(go.Surface(x=xcoords, y=ycoords, z=Z))
return fig
regression_3d(
train,
x='Average Recent Growth in Personal Incomes',
y='US Military Fatalities per Million',
z='Incumbent Party Vote Share',
text='Year',
title='US Presidential Elections, 1952-2004'
)
###Output
_____no_output_____
###Markdown
Where are 1952 & 1968 in relation to the plane? Which elections are the biggest outliers now? Roll over points on the plane to see predicted incumbent party vote share (z axis), dependent on personal income growth (x axis) and military fatalities per capita (y axis). Get and interpret coefficients During the previous module, we got the simple regression's coefficient and intercept. We plugged these numbers into an equation for the line of best fit, in slope-intercept form: $y = mx + b$ Let's review this objective, but now for multiple regression. What's the equation for the plane of best fit? $y = \beta_0 + \beta_1x_1 + \beta_2x_2$ Can you relate the intercept and coefficients to what you see in the plot above?
###Code
model.intercept_, model.coef_
# coef slope term for each feature
beta0 = model.intercept_
beta1, beta2 = model.coef_
print(f'y = {beta0} + {beta1}x1 + {beta2}x2')
# This is easier to read
print('Intercept', model.intercept_)
coefficients = pd.Series(model.coef_, features)
print(coefficients.to_string())
# coef = rise/run
# coef = change in Y / change in x
# Example in module 1: sale price of a condo / number of sq ft = $3076 per sq ft
###Output
Intercept 46.25489966153873
Average Recent Growth in Personal Incomes 3.590047
US Military Fatalities per Million -0.053157
###Markdown
One of the coefficients is positive, and the other is negative. What does this mean? Let's look at some scenarios. We'll see that one unit's change in an independent variable results in a coefficient worth of change in the dependent variable. What does the model predict if income growth=0%, fatalities=0
###Code
model.predict([[0, 0]])
###Output
_____no_output_____
###Markdown
Income growth = 1% (fatalities = 0)
###Code
model.predict([[1, 0]])
###Output
_____no_output_____
###Markdown
The difference between these predictions = ?
###Code
model.predict([[1, 0]]) - model.predict([[0, 0]])
# 3.59 increase incumbent party vote share / 1% increase in income growth
###Output
_____no_output_____
###Markdown
What if... income growth = 2% (fatalities = 0)
###Code
model.predict([[2, 0]])
###Output
_____no_output_____
###Markdown
The difference between these predictions = ?
###Code
model.predict([[2, 0]]) - model.predict([[1, 0]])
###Output
_____no_output_____
###Markdown
What if... (income growth=2%) fatalities = 100
###Code
model.predict([[2, 100]])
###Output
_____no_output_____
###Markdown
The difference between these predictions = ?
###Code
model.predict([[2, 100]]) - model.predict([[2, 0]])
###Output
_____no_output_____
###Markdown
What if income growth = 3% (fatalities = 100)
###Code
model.predict([[3, 100]])
###Output
_____no_output_____
###Markdown
The difference between these predictions = ?
###Code
model.predict([[3, 100]]) - model.predict([[2, 100]])
###Output
_____no_output_____
###Markdown
What if (income growth = 3%) fatalities = 200
###Code
model.predict([[3, 200]])
###Output
_____no_output_____
###Markdown
The difference between these predictions = ?
###Code
model.predict([[3, 200]]) - model.predict([[3, 100]])
###Output
_____no_output_____
###Markdown
ChallengeIn your assignment, you'll fit a Linear Regression with at least 2 features. Understand how ordinary least squares regression minimizes the sum of squared errors OverviewSo far, we've evaluated our models by their absolute error. It's an intuitive metric for regression problems. However, ordinary least squares doesn't directly minimize absolute error. Instead, it minimizes squared error. In this section, we'll introduce two new regression metrics: - Squared error- $R^2$ We'll demonstrate two possible methods to minimize squared error:- Guess & check- Linear Algebra Follow Along Guess & CheckThis function visualizes squared errors. We'll go back to simple regression with 1 feature, because it's much easier to visualize. Use the function's m & b parameters to "fit the model" manually. Guess & check what values of m & b minimize squared error.
###Code
from matplotlib.patches import Rectangle
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
def squared_errors(df, feature, target, m, b):
"""
Visualize linear regression, with squared errors,
in 2D: 1 feature + 1 target.
Use the m & b parameters to "fit the model" manually.
df : Pandas DataFrame
feature : string, feature column in df
target : string, target column in df
m : numeric, slope for linear equation
b : numeric, intercept for linear equation
"""
# Plot data
fig = plt.figure(figsize=(7,7))
ax = plt.axes()
df.plot.scatter(feature, target, ax=ax)
# Make predictions
x = df[feature]
y = df[target]
y_pred = m*x + b
# Plot predictions
ax.plot(x, y_pred)
# Plot squared errors
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
scale = (xmax-xmin)/(ymax-ymin)
for x, y1, y2 in zip(x, y, y_pred):
bottom_left = (x, min(y1, y2))
height = abs(y1 - y2)
width = height * scale
ax.add_patch(Rectangle(xy=bottom_left, width=width, height=height, alpha=0.1))
# Print regression metrics
mse = mean_squared_error(y, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y, y_pred)
r2 = r2_score(y, y_pred)
print('Mean Squared Error:', mse)
print('Root Mean Squared Error:', rmse)
print('Mean Absolute Error:', mae)
print('R^2:', r2)
###Output
_____no_output_____
###Markdown
Here's what the mean baseline looks like:
###Code
feature = 'Average Recent Growth in Personal Incomes'
squared_errors(train, feature, target, m=0, b=y_train.mean())
###Output
Mean Squared Error: 31.186940816326533
Root Mean Squared Error: 5.584526910699467
Mean Absolute Error: 4.846938775510204
R^2: 0.0
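###Markdown
As a quick cross-check (not part of the original lesson), the $R^2$ printed above can be reproduced by hand from its definition, using only NumPy and the `y_train` vector defined earlier:
###Code
# R^2 = 1 - SS_res / SS_tot. For the mean baseline, SS_res equals SS_tot, so R^2 = 0.
y_true = y_train.values
y_pred_baseline = np.full_like(y_true, y_true.mean())
ss_res = np.sum((y_true - y_pred_baseline) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print('R^2 =', 1 - ss_res / ss_tot)
###Output
_____no_output_____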
###Markdown
Notice that $R^2$ is exactly zero. [$R^2$ represents the proportion of the variance for a dependent variable that is explained by the independent variable(s).](https://en.wikipedia.org/wiki/Coefficient_of_determination)The mean baseline uses zero independent variables and explains none of the variance in the dependent variable, so its $R^2$ score is zero.The highest possible $R^2$ score is 1. The lowest possible *Train* $R^2$ score with ordinary least squares regression is 0.In this demo, it's possible to get a negative Train $R^2$, if you manually set values of m & b that are worse than the mean baseline. But that wouldn't happen in the real world.However, in the real world, it _is_ possible to get a negative *Test/Validation* $R^2$. It means that your *Test/Validation* predictions are worse than if you'd constantly predicted the mean of the *Test/Validation* set. ---Now that we've visualized the squared errors for the mean baseline, let's guess & check some better values for the m & b parameters:
###Code
squared_errors(train, feature, target, m=3, b=46)
###Output
Mean Squared Error: 13.611378571428576
Root Mean Squared Error: 3.6893601845616235
Mean Absolute Error: 2.742142857142858
R^2: 0.5635551863970272
###Markdown
You can run the function repeatedly, with different values for m & b.How do you interpret each metric you see?- Mean Squared Error- Root Mean Squared Error- Mean Absolute Error- $R^2$Does guess & check really get used in machine learning? Sometimes! Some complex functions are hard to minimize, so we use a sophisticated form of guess & check called "gradient descent", which you'll learn about in Unit 4.Fortunately, we don't need to use guess & check for ordinary least squares regression. We have a solution, using linear algebra! Linear AlgebraThe same result that is found by minimizing the sum of the squared errors can be also found through a linear algebra process known as the "Least Squares Solution:"\begin{align}\hat{\beta} = (X^{T}X)^{-1}X^{T}y\end{align}Before we can work with this equation in its linear algebra form we have to understand how to set up the matrices that are involved in this equation. The $\beta$ vectorThe $\beta$ vector represents all the parameters that we are trying to estimate, our $y$ vector and $X$ matrix values are full of data from our dataset. The $\beta$ vector holds the variables that we are solving for: $\beta_0$ and $\beta_1$Now that we have all of the necessary parts we can set them up in the following equation:\begin{align}y = X \beta + \epsilon\end{align}Since our $\epsilon$ value represents **random** error we can assume that it will equal zero on average.\begin{align}y = X \beta\end{align}The objective now is to isolate the $\beta$ matrix. We can do this by pre-multiplying both sides by "X transpose" $X^{T}$.\begin{align}X^{T}y = X^{T}X \beta\end{align}Since anything times its transpose will result in a square matrix, if that matrix is then an invertible matrix, then we should be able to multiply both sides by its inverse to remove it from the right hand side. (We'll talk tomorrow about situations that could lead to $X^{T}X$ not being invertible.)\begin{align}(X^{T}X)^{-1}X^{T}y = (X^{T}X)^{-1}X^{T}X \beta\end{align}Since any matrix multiplied by its inverse results in the identity matrix, and anything multiplied by the identity matrix is itself, we are left with only $\beta$ on the right hand side:\begin{align}(X^{T}X)^{-1}X^{T}y = \hat{\beta}\end{align}We will now call it "beta hat" $\hat{\beta}$ because it now represents our estimated values for $\beta_0$ and $\beta_1$ Lets calculate our $\beta$ parameters with numpy!
###Code
# This is NOT something you'll be tested on. It's just a demo.
# X is a matrix. Add column of constants for fitting the intercept.
def add_constant(X):
constant = np.ones(shape=(len(X),1))
return np.hstack((constant, X))
X = add_constant(train[features].values)
print('X')
print(X)
# y is a column vector
y = train[target].values[:, np.newaxis]
print('y')
print(y)
# Least squares solution in code
X_transpose = X.T
X_transpose_X = X_transpose @ X
X_transpose_X_inverse = np.linalg.inv(X_transpose_X)
X_transpose_y = X_transpose @ y
beta_hat = X_transpose_X_inverse @ X_transpose_y
print('Beta Hat')
print(beta_hat)
# Scikit-learn gave the exact same results!
model.intercept_, model.coef_
###Output
_____no_output_____
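###Markdown
As an extra sanity check (not part of the original lesson), NumPy's least-squares solver recovers the same $\hat{\beta}$ without explicitly inverting $X^{T}X$, which is also more numerically stable:
###Code
# np.linalg.lstsq minimizes ||X @ beta - y||^2 directly, reusing the X and y built above.
beta_lstsq, residuals, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
print(beta_lstsq)
###Output
_____no_output_____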
###Markdown
Define overfitting/underfitting and the bias/variance tradeoff Overview Read [_Python Data Science Handbook,_ Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.htmlThe-Bias-variance-trade-off). Jake VanderPlas explains overfitting & underfitting:> Fundamentally, the question of "the best model" is about finding a sweet spot in the tradeoff between bias and variance. Consider the following figure, which presents two regression fits to the same dataset:> >>> The model on the left attempts to find a straight-line fit through the data. Because the data are intrinsically more complicated than a straight line, the straight-line model will never be able to describe this dataset well. Such a model is said to _underfit_ the data: that is, it does not have enough model flexibility to suitably account for all the features in the data; another way of saying this is that the model has high _bias_.>> The model on the right attempts to fit a high-order polynomial through the data. Here the model fit has enough flexibility to nearly perfectly account for the fine features in the data, but even though it very accurately describes the training data, its precise form seems to be more reflective of the particular noise properties of the data rather than the intrinsic properties of whatever process generated that data. Such a model is said to _overfit_ the data: that is, it has so much model flexibility that the model ends up accounting for random errors as well as the underlying data distribution; another way of saying this is that the model has high _variance_. VanderPlas goes on to connect these concepts to the "bias/variance tradeoff":> From the scores associated with these two models, we can make an observation that holds more generally:>>- For high-bias models, the performance of the model on the validation set is similar to the performance on the training set.>>- For high-variance models, the performance of the model on the validation set is far worse than the performance on the training set.>> If we imagine that we have some ability to tune the model complexity, we would expect the training score and validation score to behave as illustrated in the following figure:>>>> The diagram shown here is often called a validation curve, and we see the following essential features:>>- The training score is everywhere higher than the validation score. This is generally the case: the model will be a better fit to data it has seen than to data it has not seen.>- For very low model complexity (a high-bias model), the training data is under-fit, which means that the model is a poor predictor both for the training data and for any previously unseen data.>- For very high model complexity (a high-variance model), the training data is over-fit, which means that the model predicts the training data very well, but fails for any previously unseen data.>- For some intermediate value, the validation curve has a maximum. This level of complexity indicates a suitable trade-off between bias and variance.>>The means of tuning the model complexity varies from model to model. So far, our only "means of tuning the model complexity" has been selecting one feature or two features for our linear regression models. But we'll quickly start to select more features, and more complex models, with more "hyperparameters."This is just a first introduction to underfitting & overfitting. We'll continue to learn about this topic all throughout this unit. 
Follow Along Let's make our own Validation Curve, by tuning a new type of model complexity: polynomial degrees in a linear regression. Go back to the the NYC Tribeca condo sales data
###Code
# Read NYC Tribeca condo sales data, from first 4 months of 2019.
# Dataset has 90 rows, 9 columns.
df = pd.read_csv(DATA_PATH+'condos/tribeca.csv')
assert df.shape == (90, 9)
# Arrange X features matrix & y target vector
features = ['GROSS_SQUARE_FEET']
target = 'SALE_PRICE'
X = df[features]
y = df[target]
###Output
_____no_output_____
###Markdown
Do random [train/test split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=11)
###Output
_____no_output_____
###Markdown
Repeatedly fit increasingly complex models, and keep track of the scores
###Code
from IPython.display import display, HTML
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
# Credit for PolynomialRegression: Jake VanderPlas, Python Data Science Handbook, Chapter 5.3
# https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html#Validation-curves-in-Scikit-Learn
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
polynomial_degrees = range(1, 10, 2)
train_r2s = []
test_r2s = []
for degree in polynomial_degrees:
model = PolynomialRegression(degree)
display(HTML(f'Polynomial degree={degree}'))
model.fit(X_train, y_train)
train_r2 = model.score(X_train, y_train)
test_r2 = model.score(X_test, y_test)
display(HTML(f'<b style="color: blue">Train R2 {train_r2:.2f}</b>'))
display(HTML(f'<b style="color: red">Test R2 {test_r2:.2f}</b>'))
plt.scatter(X_train, y_train, color='blue', alpha=0.5)
plt.scatter(X_test, y_test, color='red', alpha=0.5)
plt.xlabel(features)
plt.ylabel(target)
x_domain = np.linspace(X.min(), X.max())
curve = model.predict(x_domain)
plt.plot(x_domain, curve, color='blue')
plt.show()
display(HTML('<hr/>'))
train_r2s.append(train_r2)
test_r2s.append(test_r2)
display(HTML('Validation Curve'))
plt.plot(polynomial_degrees, train_r2s, color='blue', label='Train')
plt.plot(polynomial_degrees, test_r2s, color='red', label='Test')
plt.xlabel('Model Complexity (Polynomial Degree)')
plt.ylabel('R^2 Score')
plt.legend()
plt.show()
###Output
_____no_output_____
|
nb/qit_example.ipynb
|
###Markdown
True distributionFirst, we make a true distribution. For this simple example, it is just a Gaussian
###Code
# true distribution of redshifts
Z_TRUE_MIN, Z_TRUE_MAX = 0., 2.
LOC_TRUE = 0.60
SCALE_TRUE = 0.30
true_dist = qp.Ensemble(qp.stats.norm, data=dict(loc=LOC_TRUE, scale=SCALE_TRUE))
ax_true = true_dist.plot(xlim=(Z_TRUE_MIN, Z_TRUE_MAX), label=r"unnorm")
###Output
_____no_output_____
###Markdown
Implicit priorNow we make the implicit prior. In our case it is similar to the true distribution, but slightly different.
###Code
LOC_PRIOR = 0.65
SCALE_PRIOR = 0.35
implicit_prior = qp.Ensemble(qp.stats.norm, data=dict(loc=LOC_PRIOR, scale=SCALE_PRIOR))
ax_prior = implicit_prior.plot(xlim=(Z_TRUE_MIN, Z_TRUE_MAX), label=r"unnorm")
###Output
_____no_output_____
###Markdown
EstimatorNow we try to model the behavior of a simple estimator. Our simple estimator has a likelihood $p(d | z)$ to return an estimate $d$ for a true value $z$.
###Code
# This represents the "estimator" code, we define 50 bins (in true redshift)
# and in each bin the likelihood p(z_obs) is a Gaussian centered on the bin center
N_EST_BINS = 50
z_bins = np.linspace(Z_TRUE_MIN, Z_TRUE_MAX, N_EST_BINS+1)
z_centers = qp.utils.edge_to_center(z_bins)
z_widths = 0.2 * np.ones(N_EST_BINS)
likelihood = qp.Ensemble(qp.stats.norm, data=dict(loc=np.expand_dims(z_centers, -1), scale=np.expand_dims(z_widths, -1)))
# These are the points at which we evaluate the PDFs
N_OBS_BINS = 300
Z_OBS_MIN, Z_OBS_MAX = -0.5, 2.5
grid_edge = np.linspace(Z_OBS_MIN, Z_OBS_MAX, N_OBS_BINS+1)
grid_cent = qp.utils.edge_to_center(grid_edge)
p_grid = likelihood.pdf(grid_cent)
plot_kwds = dict(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
ylim=(Z_OBS_MIN, Z_OBS_MAX),
xlabel=r'$z_{\rm true}$',
ylabel=r'$d$')
pl_like = qp.plotting.plot_2d_like(p_grid.T, **plot_kwds)
###Output
_____no_output_____
###Markdown
Posterior distributionsOk, now we are going to extract the posterior distributions $p(z|d)$, $p(z|d,\phi^{\dagger})$ and $p(z|d,\phi^{*})$. In our case these correspond to the posteriors assuming a flat prior, assuming the true distribution as the prior and assuming the implicit prior.
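Concretely (assuming the `get_posterior_grid` helper implements the usual Bayesian reweighting), each posterior is the likelihood reweighted by the chosen prior and renormalized over the grid, \begin{align}p(z \mid d, \pi) \propto p(d \mid z)\,\pi(z)\end{align} where $\pi(z)$ is taken to be flat, the true distribution $\phi^{\dagger}$, or the implicit prior $\phi^{*}$ in the three calls below.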
###Code
# Let's flip around the likelihood
z_grid = z_centers
flat_post = qp.Ensemble(qp.stats.hist, data=dict(bins=z_bins, pdfs=p_grid.T))
post_grid = qit.like_funcs.get_posterior_grid(flat_post, z_grid)
est_grid = qit.like_funcs.get_posterior_grid(flat_post, z_grid, implicit_prior)
true_grid = qit.like_funcs.get_posterior_grid(flat_post, z_grid, true_dist)
pl_post = qp.plotting.plot_2d_like(post_grid, **plot_kwds)
pl_est = qp.plotting.plot_2d_like(est_grid, **plot_kwds)
pl_true = qp.plotting.plot_2d_like(true_grid, **plot_kwds)
###Output
_____no_output_____
###Markdown
Sample points from the true distribution
###Code
# Now let's sample points in true z
N_SAMPLES = 10000
N_HIST_BINS = 50
z_true_sample = np.squeeze(true_dist.rvs(size=N_SAMPLES))
fig_sample, ax_sample = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r"$z_{\rm true}$",
ylabel="Counts / %0.2f" % ((Z_TRUE_MAX-Z_TRUE_MIN)/N_HIST_BINS))
hist = ax_sample.hist(z_true_sample, bins=np.linspace(Z_TRUE_MIN, Z_TRUE_MAX, N_HIST_BINS+1))
###Output
_____no_output_____
###Markdown
Create a sample of points in the measured distributionWe do this by sampling a $d$ value from the correct bin for each sampled value in $z_{\rm true}$.
###Code
# Now we create a sample of points in measured z.
N_OBS_HIST_BINS = 75
whichbin = np.searchsorted(z_bins, z_true_sample)-1
mask = (z_true_sample > 0) * (z_true_sample <= 2.0)
mask *= (whichbin < z_centers.size)
whichbin = whichbin[mask]
sampler = qp.Ensemble(qp.stats.norm, data=dict(loc=np.expand_dims(z_centers[whichbin], -1), scale=np.expand_dims(z_widths[whichbin], -1)))
z_meas_sample = np.squeeze(sampler.rvs(1))
fig_hmeas, ax_hmeas = qp.plotting.make_figure_axes(xlim=(Z_OBS_MIN, Z_OBS_MAX),
xlabel=r"$z_{\rm true}$",
ylabel="Counts / %0.2f" % ((Z_OBS_MAX-Z_OBS_MIN)/N_OBS_HIST_BINS))
hist = ax_hmeas.hist(z_meas_sample, bins=np.linspace(Z_OBS_MIN, Z_OBS_MAX, N_OBS_HIST_BINS+1))
# Overplot the scatter plot on the 2-d likelihood plot
pl_true = qp.plotting.plot_2d_like(p_grid.T, **plot_kwds)
ax_like2 = pl_true[1]
sc = ax_like2.scatter(z_true_sample[mask], z_meas_sample, s=1, color='gray')
###Output
_____no_output_____
###Markdown
Profile plotThe previous plot is a bit messy, lets plot the mean and std in slices of x. (This is a "profile" plot in particle physics jargon.)
###Code
N_PROF_BINS = 20
pl_true2 = qp.plotting.plot_2d_like(p_grid.T, **plot_kwds)
ax_prof = pl_true2[1]
x_prof = np.linspace(Z_TRUE_MIN, Z_TRUE_MAX, N_PROF_BINS+1)
x_prof_cent = qp.utils.edge_to_center(x_prof)
prof_vals, prof_errs = qp.utils.profile(z_true_sample[mask], z_meas_sample, x_prof)
sc = ax_prof.errorbar(x_prof_cent, prof_vals, yerr=prof_errs)
###Output
_____no_output_____
###Markdown
Posteriors for the measured valuesNow we get $p(z|d_{j})$, $p(z | d_{j}, \phi^{\dagger})$ and $p(z | d_{j}, \phi^{*})$ for the samples we simulated.
###Code
# Now we get the posteriors for all the measured values
z_meas_bin = np.searchsorted(grid_edge, z_meas_sample)-1
z_meas_mask = (z_meas_bin >= 0) * (z_meas_bin < grid_cent.size)
z_meas_bin = z_meas_bin[z_meas_mask]
post_dict = qp.like_funcs.make_ensemble_for_posterior_interp(post_grid, z_grid, z_meas_bin)
est_dict = qp.like_funcs.make_ensemble_for_posterior_interp(est_grid, z_grid, z_meas_bin)
true_dict = qp.like_funcs.make_ensemble_for_posterior_interp(true_grid, z_grid, z_meas_bin)
which_sample = np.argmax(z_meas_sample[0:100])
fig_x, ax_x = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r"$z_{\rm true}$",
ylabel=r"$p(z)$")
ax_x.plot(z_grid, post_dict['vals'][which_sample], label='implicit=flat')
ax_x.plot(z_grid, est_dict['vals'][which_sample], label='implicit=estimated')
ax_x.plot(z_grid, true_dict['vals'][which_sample], label='implict=true')
ax_x.plot(z_grid, np.squeeze(implicit_prior.pdf(z_grid)), label='implicit prior')
ax_x.plot(z_grid, np.squeeze(true_dist.pdf(z_grid)), label='true')
leg = fig_x.legend()
###Output
_____no_output_____
###Markdown
Check of effect of binning the samplesThis compares a histogram made from the original z_meas values to a histogram made by taking the closest grid point
###Code
fig_check, ax_check = qp.plotting.make_figure_axes(xlim=(Z_OBS_MIN, Z_OBS_MAX),
xlabel=r"$z_{\rm true}$",
ylabel="Counts / %0.02f" % ((Z_OBS_MAX-Z_OBS_MIN)/N_OBS_HIST_BINS))
ax_check.hist(z_meas_sample, bins=np.linspace(Z_OBS_MIN, Z_OBS_MAX, N_OBS_HIST_BINS+1), label='sample', histtype='step')
z_meas_binned = grid_cent[z_meas_bin]
ax_check.hist(z_meas_binned, bins=np.linspace(Z_OBS_MIN, Z_OBS_MAX, N_OBS_HIST_BINS+1), label='check', histtype='step')
leg = fig_check.legend()
###Output
_____no_output_____
###Markdown
Compare the true distribution to the naive "stacking"
###Code
N_FIT_BINS = 4
hist_bins = np.linspace(Z_TRUE_MIN, Z_TRUE_MAX, N_FIT_BINS+1)
fig_stack, ax_stack = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r"$z_{\rm true}$",
ylabel=r"$p(z)$")
ax_stack.hist(z_true_sample[mask], bins=hist_bins, density=True, label=r'$z_{\rm true}$', histtype='step')
ax_stack.plot(z_grid, np.squeeze(true_dist.pdf(z_grid)), label=r'$p(z)$')
ax_stack.plot(z_grid, post_dict['stack'], label=r'$\sum_{j} p(z | d_{j})$')
ax_stack.plot(z_grid, est_dict['stack'], label=r'$\sum_{j} p(z | d_{j} \phi^{*})$')
ax_stack.plot(z_grid, true_dict['stack'], label=r'$\sum_{j} p(z | d_{j} \phi^{\dagger})$')
leg = fig_stack.legend()
###Output
_____no_output_____
###Markdown
Plot some posterior distributions
###Code
fig_1, ax_1 = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r"$z_{\rm true}$",
ylabel=r"$p(z | d)$")
fig_2, ax_2 = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r"$z_{\rm true}$",
ylabel=r"$p(z | d, \phi^{*})$")
post_vals = post_dict['vals']
est_vals = est_dict['vals']
for i in range(10):
ax_1.plot(z_grid, post_vals[i])
ax_2.plot(z_grid, est_vals[i])
###Output
_____no_output_____
###Markdown
Test to make sure we can update the parameters of a distribution for fittingThis is just a software test to make sure that setting the values of a model parameter changes the model.
###Code
model_params = np.ones((1, N_FIT_BINS))
model = qp.Ensemble(qp.stats.hist, data=dict(bins=hist_bins, pdfs=model_params))
fig_model, ax_model = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r"$z_{\rm true}$",
ylabel=r"$p(z)$")
ax_model.plot(z_grid, np.squeeze(model.pdf(z_grid)), label='orig')
new_params = np.ones(N_FIT_BINS)
new_params[1] = 1.4
model.update_objdata(dict(pdfs=np.expand_dims(new_params, 0)))
ax_model.plot(z_grid, np.squeeze(model.pdf(z_grid)), label='new')
leg = fig_model.legend()
###Output
_____no_output_____
###Markdown
Test the likelihood function by evaluating it for a flat distribution and for the true distribution
###Code
N_EVAL_PTS = 201
eval_grid = np.linspace(Z_TRUE_MIN, Z_TRUE_MAX, N_EVAL_PTS)
model_params = np.log(np.ones(N_FIT_BINS))
hist_cents = qp.utils.edge_to_center(hist_bins)
true_vals = np.histogram(z_true_sample, bins=np.linspace(Z_TRUE_MIN, Z_TRUE_MAX, N_FIT_BINS+1))[0]
v_flat = qp.like_funcs.log_hyper_like(model_params, est_dict['ens'], model, implicit_prior, eval_grid)
v_true = qp.like_funcs.log_hyper_like(np.log(true_vals), est_dict['ens'], model, implicit_prior, eval_grid)
print(v_flat, v_true)
###Output
_____no_output_____
###Markdown
Make the objective function for fittingIn this case it is just the log_hyper_like with all of the arguments, except for the logs of the bin heights (i.e. the fitting parameters), already specified.
###Code
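# In effect this binds the fixed arguments of log_hyper_like, roughly (sketch) like:
#   obj_func = lambda log_bin_heights: qp.like_funcs.log_hyper_like(
#       log_bin_heights, est_dict['ens'], model, implicit_prior, eval_grid)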
obj_func = qp.like_funcs.make_log_hyper_obj_func(ensemble=est_dict['ens'],\
model=model, implicit_prior=implicit_prior, grid=eval_grid)
v_flat = obj_func(model_params)
v_true = obj_func(np.log(true_vals))
print(v_flat, v_true)
###Output
_____no_output_____
###Markdown
Fit for the hyper-parameters
###Code
result = minimize(obj_func, model_params)
print(result)
# Check the current value of the objective function
obj_func(result['x'])
# Extract the parameters and convert back to counts (The Jacobian happens to be identical to the fitted values)
fitted_vals = np.exp(result['x'])
fitted_errs = np.sqrt(np.array([result['hess_inv'][i,i] for i in range(4)]))
norm_factor = 2 / fitted_vals.sum()
normed_fit = norm_factor * fitted_vals
jac = fitted_vals
# Convert to PDF, for plotting
normed_errs = norm_factor * jac * fitted_errs
model.update_objdata(dict(pdfs=np.expand_dims(normed_fit, 0)))
model_vals = np.squeeze(model.pdf(z_grid))
fig_result, ax_result = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r"$z_{\rm true}$",
ylabel=r"$p(z)$")
ax_result.hist(z_true_sample[mask], bins=hist_bins, density=True, label=r'$z_{\rm true}$', histtype='step')
ax_result.plot(z_grid, np.squeeze(true_dist.pdf(z_grid)), label=r'$p(z)$')
ax_result.plot(z_grid, post_dict['stack'], label=r'$\sum_j p(z | d_{j}$)')
ax_result.plot(z_grid, est_dict['stack'], label=r'$\sum_j p(z | d_{j}, \phi^{*})$')
ax_result.plot(z_grid, true_dict['stack'], label=r'$\sum_j p(z | d_{j}, \phi^{\dagger})$')
#ax_result.errorbar(hist_cents, normed_fit, yerr=normed_errs, label="result")
ax_result.plot(z_grid, model_vals, label='model')
leg = fig_result.legend()
###Output
_____no_output_____
###Markdown
Fitting in counts space
###Code
N_LIKE_PTS = 301
like_grid = np.linspace(Z_OBS_MIN, Z_OBS_MAX, N_LIKE_PTS)
eval_bins = np.searchsorted(z_bins, eval_grid, side='left')-1
eval_mask = (eval_bins >= 0) * (eval_bins < z_bins.size-1)
eval_grid = eval_grid[eval_mask]
eval_bins = eval_bins[eval_mask]
like_eval = likelihood.pdf(like_grid)[eval_bins]
obs_cts_grid = np.linspace(Z_OBS_MIN, Z_OBS_MAX, 7)
data_cts = np.histogram(z_meas_sample, bins=obs_cts_grid)[0]
obj_func_binned = qp.funcs.make_binnned_loglike_obj_func(model=model, data_cts=data_cts,\
like_eval=like_eval, like_grid=like_grid, model_grid=eval_grid, cts_grid=obs_cts_grid)
flat = 0.5*data_cts.sum()*np.ones(4)
model_flat = qp.funcs.model_counts(np.log(flat), model, like_eval, like_grid, eval_grid, obs_cts_grid)
model_true = qp.funcs.model_counts(np.log(true_vals), model, like_eval, like_grid, eval_grid, obs_cts_grid)
ll_flat = obj_func_binned(np.log(flat))
ll_true = obj_func_binned(np.log(true_vals))
print(ll_flat, ll_true)
result = minimize(obj_func_binned, np.ones(4))
print(result)
model_cts = qp.funcs.model_counts(result['x'], model, like_eval, like_grid, eval_grid, obs_cts_grid)
cts_cent = 0.5 * (obs_cts_grid[1:] + obs_cts_grid[:-1])
fig_fit, ax_fit = qp.plotting.make_figure_axes(xlim=(Z_OBS_MIN, Z_OBS_MAX),
xlabel=r"$d$",
ylabel=r"$n(d)$")
ax_fit.set_yscale('log')
ax_fit.set_ylim(1., 1e4)
ax_fit.scatter(cts_cent, data_cts, label='data')
ax_fit.plot(cts_cent, model_cts, label='fit')
leg = fig_fit.legend()
fit_cts = np.exp(result['x'])
fit_cts *= 2/fit_cts.sum()
pdf_true = true_vals * 2 / true_vals.sum()
fig_fit2, ax_fit2 = qp.plotting.make_figure_axes(xlim=(Z_TRUE_MIN, Z_TRUE_MAX),
xlabel=r'$z_{\rm true}$',
ylabel=r'p(z)')
ax_fit2.hist(z_true_sample[mask], bins=hist_bins, density=True, label=r'$z_{\rm true}$', histtype='step')
ax_fit2.plot(z_grid, np.squeeze(true_dist.pdf(z_grid)), label=r'$p(z)$')
ax_fit2.plot(hist_cents, fit_cts, label="fit")
ax_fit2.plot(z_grid, model_vals, label='model')
leg = fig_fit2.legend()
vals = z_grid
bins = z_grid
edges = bins
widths = edges[1:] - edges[:-1]
np.floor((vals-bins[0])/widths[0]).astype(int)
###Output
_____no_output_____
|
Transfer Learning/Horse Vs Humans/Horse_Vs_Humans_Transfer_Learning_Answer.ipynb
|
###Markdown
###Code
# Import all the necessary files!
import os
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
# Download the inception v3 weights
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
# Create an instance of the inception model from the local pre-trained weights
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(input_shape = (150, 150, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
# Make all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
layer.trainable = False
# Print the model summary
pre_trained_model.summary()
# Expected Output is extremely large, but should end with:
#batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0]
#__________________________________________________________________________________________________
#activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0]
#__________________________________________________________________________________________________
#mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0]
# activation_276[0][0]
#__________________________________________________________________________________________________
#concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0]
# activation_280[0][0]
#__________________________________________________________________________________________________
#activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0]
#__________________________________________________________________________________________________
#mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0]
# mixed9_1[0][0]
# concatenate_5[0][0]
# activation_281[0][0]
#==================================================================================================
#Total params: 21,802,784
#Trainable params: 0
#Non-trainable params: 21,802,784
last_layer = pre_trained_model.get_layer('mixed10')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
# Expected Output:
# ('last layer output shape: ', (None, 7, 7, 768))
# Define a Callback class that stops training once accuracy reaches 99.9%
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>0.999):
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
from tensorflow.keras.optimizers import RMSprop
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.3
x = layers.Dropout(0.3)(x)
# Add a final sigmoid layer for classification
x = layers.Dense (1, activation='sigmoid')(x)
model = Model( pre_trained_model.input, x)
model.compile(optimizer = RMSprop(lr=0.0001),
loss = 'binary_crossentropy',
metrics = ['acc'])
model.summary()
# Expected output will be large. Last few lines should be:
# mixed7 (Concatenate) (None, 7, 7, 768) 0 activation_248[0][0]
# activation_251[0][0]
# activation_256[0][0]
# activation_257[0][0]
# __________________________________________________________________________________________________
# flatten_4 (Flatten) (None, 37632) 0 mixed7[0][0]
# __________________________________________________________________________________________________
# dense_8 (Dense) (None, 1024) 38536192 flatten_4[0][0]
# __________________________________________________________________________________________________
# dropout_4 (Dropout) (None, 1024) 0 dense_8[0][0]
# __________________________________________________________________________________________________
# dense_9 (Dense) (None, 1) 1025 dropout_4[0][0]
# ==================================================================================================
# Total params: 47,512,481
# Trainable params: 38,537,217
# Non-trainable params: 8,975,264
# Get the Horse or Human dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip -O /tmp/horse-or-human.zip
# Get the Horse or Human Validation dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip -O /tmp/validation-horse-or-human.zip
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import zipfile
local_zip = '//tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/training')
zip_ref.close()
local_zip = '//tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
# Define our example directories and files
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
train_horses_dir = os.path.join(train_dir, 'horses') # Directory with our training horse pictures
train_humans_dir = os.path.join(train_dir, 'humans') # Directory with our training human pictures
validation_horses_dir = os.path.join(validation_dir, 'horses') # Directory with our validation horse pictures
validation_humans_dir = os.path.join(validation_dir, 'humans') # Directory with our validation human pictures
train_horses_fnames = os.listdir(train_horses_dir)
train_humans_fnames = os.listdir(train_humans_dir)
validation_horses_fnames = os.listdir(validation_horses_dir)
validation_humans_fnames = os.listdir(validation_humans_dir)
print(len(train_horses_fnames))
print(len(train_humans_fnames))
print(len(validation_horses_fnames))
print(len(validation_humans_fnames))
# Expected Output:
# 500
# 527
# 128
# 128
# Add our data-augmentation parameters to ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255.,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator( rescale = 1.0/255. )
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory( validation_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
# Expected Output:
# Found 1027 images belonging to 2 classes.
# Found 256 images belonging to 2 classes.
# Run this and see how many epochs it should take before the callback
# fires, and stops training at 99.9% accuracy
# (It should take less than 100 epochs)
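# Note: `myCallback` is defined earlier in the full notebook. If this section is run on its own,
# a minimal stand-in with the same behaviour (stop once training accuracy passes 99.9%) could
# look roughly like the sketch below; it is an illustration, not the original definition.
import tensorflow as tf
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # 'acc' matches the metric name compiled above (metrics=['acc'])
        if logs.get('acc') is not None and logs.get('acc') > 0.999:
            print("\nReached 99.9% accuracy so cancelling training!")
            self.model.stop_training = True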
callbacks = myCallback()
history = model.fit_generator(
train_generator,
validation_data = validation_generator,
steps_per_epoch = 100,
epochs = 5,
validation_steps = 50,
verbose = 2,
callbacks=[callbacks])
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend(loc=0)
plt.show()
###Output
_____no_output_____
|
ipython/3_Training_Predicting/prnn_recsys17.ipynb
|
###Markdown
Hyperparameter definitions
###Code
batch_size = 512
acts = ['softmax', 'tanh']
l_sizes = [100, 1000]
lrs = [0.001, 0.01]
###Output
_____no_output_____
###Markdown
Hyperparameter model training
###Code
for act in acts:
for ls in l_sizes:
for lr in lrs:
train_dataset = SessionDataset(train)
loader = SessionDataLoader(train_dataset, batch_size=batch_size)
mapitem = loader.dataset.itemmap
# define model
model, encoder = create_prnn_model(item_count, feature_size, batch_size=batch_size, hidden_units = ls, o_activation = act, lr = lr)
# train model
model = train_prnn(model, lr, loader)
model_name = "recsys17_prnn_a_" + act + "_ls_" + str(ls) + "_lr_" + str(lr) + ".model"
pickle.dump(model, open(model_path_valid + model_name, 'wb'), protocol=4)
print("Stored model in: " + model_path_valid + model_name)
###Output
_____no_output_____
###Markdown
Predict for hyperparameters
###Code
import keras.losses
keras.losses.TOP1 = TOP1
pd.set_option('display.max_colwidth', -1)
train_dataset = SessionDataset(train)
loader = SessionDataLoader(train_dataset, batch_size=batch_size)
def predict_function(sid, test_session, pr, item_idx_map, idx_item_map, cut_off=20,
session_key='session_id', item_key='item_id', time_key='created_at'):
test_session.sort_values([time_key], inplace=True)
# get first and only session_id (as we grouped it before calling this method)
session_id = test_session[session_key].unique()[0]
log_columns = ["session_id", "input_items", "input_count", "position", "remaining_items", "remaining_count", "predictions"]
log_df = pd.DataFrame(columns = log_columns)
session_length = len(test_session)
    il = np.zeros((batch_size, 1, len(item_idx_map)))
    ir = np.zeros((batch_size, 1, 79))
for i in range(session_length -1):
# use current item as reference point (rest is for testing)
current_item_id = test_session[item_key].values[i]
item_vec = np.zeros(len(item_idx_map), dtype=int)
item_idx = item_idx_map[current_item_id]
item_vec[item_idx] = 1
# set vector in batch input
il[i, 0] = item_vec
item_features = item_encodings[current_item_id]
#item_features = item_features.reshape(1,1, len(item_features))
ir[i, 0] = item_features
# do batch prediction
pred = model.predict([il, ir], batch_size=batch_size)
# for every subsession prediction
for i in range(session_length-1):
preds = pred[i]
topn_idx_preds = preds.argsort()[-cut_off:][::-1]
predictions = []
# for every recommended item index
for item_idx in topn_idx_preds:
pred_item = idx_item_map[item_idx]
predictions.append(pred_item)
current_input_set = test_session[item_key].values[:i+1]
remaining_test_set = test_session[item_key].values[i+1:]
position = "MID"
if i == 0:
position = "FIRST"
if len(remaining_test_set) == 1:
position = "LAST"
log_df = log_df.append({
"session_id": sid,
"input_items": ','.join(map(str, current_input_set)),
"input_count": len(current_input_set),
"position": position,
"remaining_items": ','.join(map(str, remaining_test_set)),
"remaining_count": len(remaining_test_set),
"predictions": ','.join(map(str, predictions))
}, ignore_index=True)
log_df['input_count'] = log_df['input_count'].astype(int)
log_df['remaining_count'] = log_df['remaining_count'].astype(int)
return log_df
test_path = '../../data/' + dataset + 'processed/valid_test_14d.csv'
test = pd.read_csv(test_path, sep='\t')[['session_id', 'item_id', 'created_at']]
test_dataset = SessionDataset(test)
test_generator = SessionDataLoader(test_dataset, batch_size=batch_size)
session_groups = test.groupby("session_id")
mapitem = loader.dataset.itemmap
item_idx_map = {}
idx_item_map = {}
for index, row in mapitem.iterrows():
item_id = row["item_id"]
item_idx = row["item_idx"]
item_idx_map[item_id] = item_idx
idx_item_map[item_idx] = item_id
predict_path = "../../data/recsys17/interim/predict/hyperparam/"
for act in acts:
for ls in l_sizes:
for lr in lrs:
model_name = "recsys17_prnn_a_" + act + "_ls_" + str(ls) + "_lr_" + str(lr) + ".model"
model = pickle.load(open(model_path_valid + model_name, 'rb'))
res_list = []
# predict
report_freq = len(session_groups) // 5
count = 0
for sid, session in session_groups:
pred_df = predict_function(sid, session, model, item_idx_map, idx_item_map)
res_list.append(pred_df)
# reset states
model.get_layer('gru_left').reset_states()
model.get_layer('gru_right').reset_states()
# print progress
count += 1
if count % report_freq == 0:
print("Predicted for " + str(count) + " sessions. " + str(len(session_groups) - count) + " sessions to go." )
# concat results
res = pd.concat(res_list)
res = res.reindex(columns = ["session_id", "input_items", "input_count", "position", "remaining_items", "remaining_count", "predictions"])
store_name = model_name.replace("recsys17_", "").replace(".model", "")
res.to_csv(predict_path + "test_14d_" + store_name + ".csv", sep='\t')
print("Stored predictions: " + predict_path + "test_14d_" + store_name + ".csv")
###Output
_____no_output_____
###Markdown
Set data for final training
###Code
# set data
train_path = '../../data/' + dataset + 'processed/train_14d.csv'
train = pd.read_csv(train_path, sep='\t')[['session_id', 'item_id', 'created_at']]
interactions = pd.read_csv('../../data/' + dataset + 'raw/interactions.csv', header=0, sep='\t')
items = pd.read_csv('../../data/' + dataset + 'raw/items.csv', header=0, sep='\t')
view_fields = ["item_id", "career_level", "discipline_id", "industry_id", "country", "is_payed", "region", "employment"]
common_items = items.merge(interactions, on=['item_id'])[view_fields].drop_duplicates()
item_count = len(train['item_id'].unique())
print(item_count)
session_count = len(train['created_at'].unique())
print(len(common_items))
# RecSys17 items need to be converted to dummies
common = common_items
common["country"] = common["country"].astype('str')
common["career_level"] = common["career_level"].astype('str')
common["industry_id"] = common["industry_id"].astype('str')
common["is_payed"] = common["is_payed"].astype('str')
common["region"] = common["region"].astype('str')
common["employment"] = common["employment"].astype('str')
common["discipline_id"] = common["discipline_id"].astype('str')
df2 = pd.DataFrame(index=common.index)
s1 = pd.get_dummies(common["country"].fillna("").str.split(",").apply(pd.Series).stack(), prefix="country").sum(level=0)
df2 = pd.concat([df2, s1], axis=1)
s1 = pd.get_dummies(common["career_level"].fillna("").str.split(",").apply(pd.Series).stack(), prefix="career_level").sum(level=0)
df2 = pd.concat([df2, s1], axis=1)
df2 = df2.drop(["country_", "career_level_"], axis=1, errors="ignore")
s1 = pd.get_dummies(common["industry_id"].fillna("").str.split(",").apply(pd.Series).stack(), prefix="industry_id").sum(level=0)
df2 = pd.concat([df2, s1], axis=1)
s1 = pd.get_dummies(common["is_payed"].fillna("").str.split(",").apply(pd.Series).stack(), prefix="is_payed").sum(level=0)
df2 = pd.concat([df2, s1], axis=1)
df2 = df2.drop(["industry_id_", "is_payed_"], axis=1, errors="ignore")
s1 = pd.get_dummies(common["region"].fillna("").str.split(",").apply(pd.Series).stack(), prefix="region").sum(level=0)
df2 = pd.concat([df2, s1], axis=1)
s1 = pd.get_dummies(common["employment"].fillna("").str.split(",").apply(pd.Series).stack(), prefix="employment").sum(level=0)
df2 = pd.concat([df2, s1], axis=1)
df2 = df2.drop(["region_", "employment_"], axis=1, errors="ignore")
s1 = pd.get_dummies(common["discipline_id"].fillna("").str.split(",").apply(pd.Series).stack(), prefix="discipline_id").sum(level=0)
df2 = pd.concat([df2, s1], axis=1)
df2 = df2.drop(["discipline_id_"], axis=1, errors="ignore")
common = common.drop(["country", "career_level", "industry_id", "is_payed", "region", "employment", "discipline_id"], axis=1)
df2 = pd.concat([common, df2], axis=1)
one_hot = df2
print(one_hot.shape)
# number of content features per item
feature_size = one_hot.shape[1] - 1
item_encodings = {}
for index, row in one_hot.iterrows():
item_id = row["item_id"]
item_encodings[item_id] = row.values[1:]
print(len(item_encodings))
# load data
train_dataset = SessionDataset(train)
loader = SessionDataLoader(train_dataset, batch_size=batch_size)
mapitem = loader.dataset.itemmap
###Output
_____no_output_____
###Markdown
Train final model
###Code
# use best params
ls = 100
act = "tanh"
lr = 0.01
# define model
model, encoder = create_prnn_model(item_count, feature_size, batch_size=batch_size, hidden_units = ls, o_activation = act, lr = lr)
# train model
model = train_prnn(model, lr, loader)
model_name = "recsys17_prnn_a_" + act + "_ls_" + str(ls) + "_lr_" + str(lr) + ".model"
pickle.dump(model, open(model_path + model_name, 'wb'), protocol=4)
print("Stored model in: " + model_path + model_name)
###Output
_____no_output_____
###Markdown
Generate predictions
###Code
import keras.losses
keras.losses.TOP1 = TOP1
train_dataset = SessionDataset(train)
loader = SessionDataLoader(train_dataset, batch_size=batch_size)
test_path = '../../data/' + dataset + 'processed/test_14d.csv'
test = pd.read_csv(test_path, sep='\t')[['session_id', 'item_id', 'created_at']]
test_dataset = SessionDataset(test)
test_generator = SessionDataLoader(test_dataset, batch_size=batch_size)
session_groups = test.groupby("session_id")
mapitem = loader.dataset.itemmap
item_idx_map = {}
idx_item_map = {}
for index, row in mapitem.iterrows():
item_id = row["item_id"]
item_idx = row["item_idx"]
item_idx_map[item_id] = item_idx
idx_item_map[item_idx] = item_id
predict_path = "../../data/recsys17/interim/predict/base/"
model_name = "recsys17_prnn_a_" + act + "_ls_" + str(ls) + "_lr_" + str(lr) + ".model"
model = pickle.load(open(model_path + model_name, 'rb'))
res_list = []
# predict
report_freq = len(session_groups) // 5
count = 0
for sid, session in session_groups:
pred_df = predict_function(sid, session, model, item_idx_map, idx_item_map)
res_list.append(pred_df)
# reset states
model.get_layer('gru_left').reset_states()
model.get_layer('gru_right').reset_states()
# print progress
count += 1
if count % report_freq == 0:
print("Predicted for " + str(count) + " sessions. " + str(len(session_groups) - count) + " sessions to go." )
# concat results
res = pd.concat(res_list)
res = res.reindex(columns = ["session_id", "input_items", "input_count", "position", "remaining_items", "remaining_count", "predictions"])
res.to_csv(predict_path + "test_14d_prnn.csv", sep='\t')
print("Stored predictions: " + predict_path + "test_14d_prnn.csv")
###Output
_____no_output_____
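###Markdown
The stored prediction file keeps the ground truth (`remaining_items`) and the ranked `predictions` side by side as comma-separated strings, so a simple offline metric can be computed straight from the CSV. Below is a rough sketch of recall@20 per row, grouped by session position; it only assumes the comma separators used when the file was written above.
###Code
import pandas as pd
def recall_at_k(row, k=20):
    preds = str(row["predictions"]).split(",")[:k]
    truth = set(str(row["remaining_items"]).split(","))
    return len(truth.intersection(preds)) / max(len(truth), 1)
eval_df = pd.read_csv(predict_path + "test_14d_prnn.csv", sep='\t')
eval_df["recall@20"] = eval_df.apply(recall_at_k, axis=1)
print(eval_df.groupby("position")["recall@20"].mean())
###Output
_____no_output_____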
|
Python-OOPS.ipynb
|
###Markdown
Class attributes and updating those
###Code
class Employee:
employee_id = 111
employee = Employee()
employee.employee_id
Employee.employee_id = 222
employee2 = Employee()
employee2.employee_id
###Output
_____no_output_____
###Markdown
Instance attributes
###Code
employee2.name = 'abc'
employee2.name
employee.name = 'dasd'
employee.name
#Instance attributes are specific to the object, class attributes are specific to the class
employee.employee_id = 444
employee.employee_id
employee2.employee_id
#Python first searches for instance attributes and then if no match comes, it searches for class attributes
#Instance attributes->Class attributes
###Output
_____no_output_____
###Markdown
Understanding Self parameter
###Code
class Employee:
def employeeDetails():
pass
employee = Employee()
#employee.employeeDetails()
#If you run this you will get this error:
#TypeError: employeeDetails() takes 0 positional arguments but 1 was given
#Because Python calls the method like this -> Employee.employeeDetails(employee) -> the error comes from the employee object being passed in automatically
class Employee:
def employeeDetails(self):
self.name = 'Souparna'
print(self.name)
employee = Employee()
employee.employeeDetails()
print('\n')
Employee.employeeDetails(employee)
#If you don't assign the attribute via self (self.attributeName), then it is just a local variable whose lifespan is only inside the enclosing method
class Employee:
def employeeDetails(self):
self.name = 'Souparna'
print(self.name)
age = 30
print(age)
def printEmployeeDetails(self):
print(self.name)
print(age)
employee2 = Employee()
employee2.employeeDetails()
#employee2.printEmployeeDetails()->NameError: name 'age' is not defined
###Output
_____no_output_____
###Markdown
Static methods and instance methods
###Code
#Instance methods are methods of the class that make use of the self parameter
#,to access and modify the instance attributes of the class
#All the methods used above are instance methods
#Static methods donot take the default self parameter
#The question is how it avoids the error which Python would throw if self is not passed
#Using 'DECORATOR', we distinguish between static and instance methods
class Employee:
def employeeDetails(self):
self.name = 'Souparna'
print(self.name)
@staticmethod
def welcomeMessage():
print('Hello World')
employee = Employee()
employee.employeeDetails()
employee.welcomeMessage()
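# Because welcomeMessage does not take self, it can also be called on the class itself,
# without creating an instance; this is what the @staticmethod decorator buys us
Employee.welcomeMessage()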
#We need to have a way to initialize all the attributes of our object/class before they are used
#Python helps in doing that witht the help of a special method called the init method
#Special methods in python start and end with __
class Employee:
def employeeDetails(self):
self.name = 'Souparna'
print(self.name)
def welcomeMessage(self):
print(self.age)
employee = Employee()
#employee.welcomeMessage() ->AttributeError: 'Employee' object has no attribute 'age'
#Lets use __init__ method now
class Employee:
def __init__(self):
self.name = 'Souparna'
def welcomeMessage(self):
print(self.name)
employee = Employee()
employee.welcomeMessage()
#Make sure to initialize all attributes within init method, then the object becomes a fully initialized object
#We need to have a way in which the init method takes in a parameter and assigns the attribute to the parameter
class Employee:
def __init__(self,name):
self.name = name
#self.name implies instance attribute name, name implies the parameter passed in parenthesis
def welcomeMessage(self):
print(self.name)
employeeTwo = Employee('Bose')
employeeTwo.welcomeMessage()
#CLASS ATTRIBUTE->either inside class or classname.attributeName
#INSTANCE ATTRIBBUTE->objectname.attributeName
#Self parameter handling->objectname.methodName() is handled as classname.MethodName(objectName)->This is the self param
#init() method is an INITIALIZER in python, called when an object is instantiated
###Output
_____no_output_____
|
scripts/Create_Model.ipynb
|
###Markdown
Imports
###Code
import os
import sys
import numpy as np
import random
import time
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import PIL
from PIL import Image
from IPython import display
import torch
import torchvision.transforms as transforms
from ImageTransformer import ImageTransformer
from trainer import Trainer
from datasets import InputDataset
###Output
_____no_output_____
###Markdown
Paths & Model
###Code
main_path = "PATH/TO/IMAGE/DIR/"
style_dir = "PATH/TO/STYLE/IMAGE/DIR/"
test_image_path = "/content/Bacchus.jpg"
IDtail = "_Z.pth"
def reload_model():
return ImageTransformer(leak=0,
norm_type='batch',
DWS=True,
DWSFL=False,
outerK=3,
resgroups=1,
filters=[8, 16, 16],
shuffle=False,
blocks=[2, 2, 2, 1, 1],
endgroups=(1, 1),
upkern=3,
bias_ll=True)
###Output
_____no_output_____
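###Markdown
`reload_model` is the single place where the transformer architecture is configured. A quick parameter count is a cheap sanity check before training; the sketch below only assumes that `ImageTransformer` is a regular `torch.nn.Module`, which the calls to `state_dict()` and `eval()` later in this notebook suggest.
###Code
_check = reload_model()
print("trainable parameters:", sum(p.numel() for p in _check.parameters() if p.requires_grad))
del _check
###Output
_____no_output_____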
###Markdown
Functions
###Code
# load device for gpu or cpu running (GPU recommended)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load a dataset of jpgs, pngs, etc (NOTE: Not linked)
contentims_raw = os.listdir(main_path)
contentims = []
for path in contentims_raw:
if path[:1] != ".":
contentims.append(path)
cutoff = 0.85 * len(contentims)
cutoff = int((cutoff // 16) * 16)  # round down to a multiple of 16; slice indices must be integers
contenttrain = contentims[:cutoff]
contentval = contentims[cutoff:]
# load various functions and transformations for image I/O
transformPILtoTensor = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
transformTensortoPIL = transforms.Compose([
transforms.Normalize((-1., -1., -1.), (2., 2., 2.)),
transforms.ToPILImage()
])
def load_img_x(path_to_img, max_dim=512):
# for loading style image
img = Image.open(path_to_img)
shape = img.size
short_dim = min(shape)
scale = max_dim / short_dim
img = img.resize((int(shape[0] * scale), int(shape[1] * scale)))
imgs = transformPILtoTensor(img).unsqueeze(0).to(device, torch.float)
return imgs
def load_img_reshape(path_to_img, max_dim=512):
img = Image.open(path_to_img)
shape = img.size
short_dim = min(shape)
scale = max_dim / short_dim
img = img.resize((int(shape[0] * scale), int(shape[1] * scale)))
new_shape = img.size
os_h = int((new_shape[0] - max_dim) / 2)
os_w = int((new_shape[1] - max_dim) / 2)
img = img.crop((os_h, os_w, os_h + max_dim, os_w + max_dim))
imgs = transformPILtoTensor(img).unsqueeze(0).to(torch.float)
return imgs
def load_prepped_img(path_to_img):
img = Image.open(path_to_img)
imgs = transformPILtoTensor(img).unsqueeze(0).to(torch.float)
return imgs
def load_data(content, resize=False):
if resize:
load_func = load_img_reshape
else:
load_func = load_prepped_img
    x = load_func(main_path + content[0])
    for path in content[1:]:
        x = torch.cat((x, load_func(main_path + path)), 0)
print(x.shape)
return x
def prepandclip(img):
return img.squeeze().data.clamp_(-1, 1).cpu().detach()
def fuse_and_save(model, path):
model.eval()
model.fuse()
torch.save(model.state_dict(), path)
def show_test_image_quality(model, image, device=device):
model_input = image.clone()
image = (image.squeeze(0).permute(1, 2, 0) + 1.) / 2
plt.subplot(121)
plt.imshow(image)
plt.axis('off')
plt.title('input')
with torch.no_grad():
model_input = model_input.to(device)
model_output = model(model_input)
output = prepandclip(model_output)
output = (output.permute(1, 2, 0) + 1.) / 2
plt.subplot(122)
plt.imshow(output)
plt.axis('off')
plt.title('output')
plt.tight_layout()
plt.show()
test_image = load_img_x(test_image_path, max_dim=300)
# create a torch tensor of images that have been cropped with the correct aspect
xtrain = load_data(contenttrain)
xval = load_data(contentval)
def run_trainer(image_transformer,
xtrain,
xval,
content_layers,
style_layers,
style_path,
outfile,
content_style_layers=None,
epochs=300,
patience=5,
style_weight=10,
content_weight=1,
tv_weight=1000,
cs_weight=10,
stable_weight=2000,
color_weight=1000,
pretrained_filename="ae" + IDchoice,
test_image=None):
    # load image transformer and trained AE
if pretrained_filename is not None:
image_transformer.load_state_dict(torch.load(pretrained_filename))
style_image = load_img_x(style_path, max_dim=256)
trainer = Trainer(image_transformer, content_layers, style_layers,
style_image, content_style_layers)
# prep train data
datasettrain = InputDataset(xtrain)
# prep val data
datasetval = InputDataset(xval)
print(torch.cuda.memory_summary(abbreviated=True))
# train
trainer.train(datasettrain,
val=datasetval,
epochs=epochs,
epoch_show=1,
style_weight=style_weight,
content_weight=content_weight,
stable_weight=stable_weight,
tv_weight=tv_weight,
color_weight=color_weight,
cs_weight=cs_weight,
es_patience=patience,
batch_size=8,
equalize_style_layers=True,
best_path="best.pth",
test_image=test_image)
# revert to best and save
image_transformer.load_state_dict(torch.load("best.pth"))
fuse_and_save(image_transformer, outfile)
del trainer
del datasettrain
del datasetval
del image_transformer
torch.cuda.empty_cache()
###Output
_____no_output_____
###Markdown
Train
###Code
content_layers = ['relu_7']
style_layers = ['relu_2', 'relu_4', 'relu_7', 'relu_11', 'relu_15']
style_weights = 0.5
content_style_layers = None
style_path = style_dir + "Kandinsky_Composition_7.jpg"
outfile = "comp7_bench" + IDtail
image_transformer = reload_model()
run_trainer(image_transformer,
xtrain,
xval,
content_layers,
style_layers,
style_path,
outfile,
pretrained_filename=None,
content_style_layers=content_style_layers,
patience=5,
test_image=test_image,
epochs=50,
style_weight=style_weights,
cs_weight=0,
content_weight=1,
tv_weight=1000,
stable_weight=5000,
color_weight=0)
###Output
_____no_output_____
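###Markdown
After `run_trainer` returns, the object bound to `image_transformer` holds the best weights and has been fused for inference, so the helper defined above gives a quick qualitative check on the held-out test image. This is a sketch: it assumes the fused model still supports a normal forward pass, and it moves the model explicitly to the active device first.
###Code
image_transformer.to(device)
show_test_image_quality(image_transformer, test_image, device=device)
###Output
_____no_output_____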
|
notebooks/clean_text_in_employements.ipynb
|
###Markdown
---
###Code
df['responsibilities_tokens'] = df['responsibilities'].fillna('').apply(preprocess_text)
all_words = []
for x in df['responsibilities_tokens'].values:
all_words.extend(x)
lemmatized_tokens = dict()
unique_words = set(all_words)
for word in tqdm(unique_words):
lemmatized_tokens[word] = lemmatizer.parse(word)[0].normal_form
def lemmatize_list(s, rules=None):
    # default to the *current* lemmatized_tokens mapping, so the rebuilt dictionaries below are picked up
    rules = lemmatized_tokens if rules is None else rules
    return ' '.join([rules.get(x, ' ') for x in s])
df['responsibilities'] = df['responsibilities_tokens'].apply(lemmatize_list)
df = df.drop(columns=['responsibilities_tokens'])
df.head()
###Output
_____no_output_____
###Markdown
---
###Code
df['achievements_tokens'] = df['achievements']\
.fillna('')\
.apply(lambda x: preprocess_text(x, 3))
all_words = []
for x in df['achievements_tokens'].values:
all_words.extend(x)
lemmatized_tokens = dict()
unique_words = set(all_words)
for word in tqdm(unique_words):
lemmatized_tokens[word] = lemmatizer.parse(word)[0].normal_form
df['achievements'] = df['achievements_tokens'].apply(lemmatize_list)
df = df.drop(columns=['achievements_tokens'])
df['position_tokens'] = df['position']\
.fillna('')\
.apply(lambda x: preprocess_text(x, 3))
all_words = []
for x in df['position_tokens'].values:
all_words.extend(x)
lemmatized_tokens = dict()
unique_words = set(all_words)
for word in tqdm(unique_words):
lemmatized_tokens[word] = lemmatizer.parse(word)[0].normal_form
df['position_clean'] = df['position_tokens'].apply(lemmatize_list)
df = df.drop(columns=['position_tokens'])
df['employer_tokens'] = df['employer']\
.fillna('')\
.apply(lambda x: preprocess_text(x, 2))
all_words = []
for x in df['employer_tokens'].values:
all_words.extend(x)
lemmatized_tokens = dict()
unique_words = set(all_words)
for word in tqdm(unique_words):
lemmatized_tokens[word] = word #lemmatizer.parse(word)[0].normal_form
df['employer_clean'] = df['employer_tokens'].apply(lemmatize_list)
df = df.drop(columns=['employer_tokens'])
df.to_csv('employements_mult_new.csv', sep=';', index=False)
###Output
_____no_output_____
|
examples/tsi/ex06_parameterized_phi.ipynb
|
###Markdown
Plot the results using the default 'Predictor Corrector' integrator:This is the preferred integrator for most cases, but it does not support adaptive timestepping.
###Code
#unpack and rescale simulation output
t = data['t']; S_t = data['S_t']; I_t = data['I_t']; Ic_t = data['Ic_t']
plt.figure(figsize=(12, 4)); plt.subplot(121)
plt.plot(t,np.sum(S_t,0), color="#348ABD", lw=2, label = 'Susceptible') #all susceptible
plt.plot(t,np.sum(I_t,0), color="#A60628", lw=2, label = 'Infected') #all Infected
plt.plot(t,np.sum(Ic_t[0,:,:],0), color='green', lw=2, label = 'Recovered') #all Recovered
plt.xlabel('time (days)'); plt.xlim(0,Tf); plt.ylim(0,1)
plt.ylabel('Fraction of compartment value'); plt.legend()
plt.subplot(122)
for i in (1 + np.arange(len(subclasses)-1)):
plt.plot(t,np.sum(Ic_t[i,:,:],0), lw=2, label = subclasses[i])
plt.legend(); plt.xlabel('time (days)'); plt.xlabel('time (days)'); plt.xlim(0,Tf); plt.ylim(0)
###Output
_____no_output_____
###Markdown
Repeat the same simulation using the Galerkin discretization and the default integrator (odeint)This integrator supports adaptive timestepping, but it is not recommended for time-dependent contact matrices or non-smooth dynamics more generally.
###Code
parameters['NL'] = 5
model = pyrosstsi.deterministic.Simulator(parameters,'Galerkin')
IC = model.get_IC()
data = model.simulate(IC)#,10**-3,10**-2)# <- error tolerance options
#unpack and rescale simulation output
t = data['t']; S_t = data['S_t']; I_t = data['I_t']; Ic_t = data['Ic_t']
plt.figure(figsize=(12, 4)); plt.subplot(121)
plt.plot(t,np.sum(S_t,0), color="#348ABD", lw=2, label = 'Susceptible') #all susceptible
plt.plot(t,np.sum(I_t,0), color="#A60628", lw=2, label = 'Infected') #all Infected
plt.plot(t,np.sum(Ic_t[0,:,:],0), color='green', lw=2, label = 'Recovered') #all Recovered
plt.xlabel('time (days)'); plt.xlim(0,Tf); plt.ylim(0,1)
plt.ylabel('Fraction of compartment value'); plt.legend();
plt.subplot(122)
for i in (1 + np.arange(len(subclasses)-1)):
plt.plot(t,np.sum(Ic_t[i,:,:],0), lw=2, label = subclasses[i])
plt.legend(); plt.xlabel('time (days)'); plt.xlabel('time (days)'); plt.xlim(0,Tf); plt.ylim(0)
###Output
_____no_output_____
###Markdown
Repeat the same using the Galerkin discretization and the Crank Nicolson integratorThis integrator supports adaptive time-stepping and is preferable to 'odeint' whenever the contact matrix is time-dependent. It is still not recommended for non-smooth dynamics (e.g. a lockdown). When the contact matrix is time-dependent and piecewise smooth, consider using the Hybrid method (see the example notebook on the subject).
###Code
parameters['NL'] = 5
model = pyrosstsi.deterministic.Simulator(parameters,'Galerkin','Crank Nicolson')
IC = model.get_IC()
data = model.simulate(IC,10**-3*4,10**-2*4)# <- error tolerance options
#unpack and rescale simulation output
t = data['t']; S_t = data['S_t']; I_t = data['I_t']; Ic_t = data['Ic_t']
plt.figure(figsize=(12, 4)); plt.subplot(121)
plt.plot(t,np.sum(S_t,0), color="#348ABD", lw=2, label = 'Susceptible') #all susceptible
plt.plot(t,np.sum(I_t,0), color="#A60628", lw=2, label = 'Infected') #all Infected
plt.plot(t,np.sum(Ic_t[0,:,:],0), color='green', lw=2, label = 'Recovered') #all Recovered
plt.xlabel('time (days)'); plt.xlim(0,Tf); plt.ylim(0,1)
plt.ylabel('Fraction of compartment value'); plt.legend();
plt.subplot(122)
for i in (1 + np.arange(len(subclasses)-1)):
plt.plot(t,np.sum(Ic_t[i,:,:],0), lw=2, label = subclasses[i])
plt.legend(); plt.xlabel('time (days)'); plt.xlabel('time (days)'); plt.xlim(0,Tf); plt.ylim(0)
###Output
_____no_output_____
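###Markdown
The same two-panel plotting block is repeated for every integrator above. Wrapping it in a small helper keeps further comparisons short; this is a sketch that only uses the quantities already returned by `model.simulate` and the `numpy`/`matplotlib` imports loaded earlier.
###Code
def plot_epidemic(data, subclasses, Tf):
    """Two-panel summary of a simulation output dictionary (t, S_t, I_t, Ic_t)."""
    t = data['t']; S_t = data['S_t']; I_t = data['I_t']; Ic_t = data['Ic_t']
    plt.figure(figsize=(12, 4)); plt.subplot(121)
    plt.plot(t, np.sum(S_t, 0), color="#348ABD", lw=2, label='Susceptible')
    plt.plot(t, np.sum(I_t, 0), color="#A60628", lw=2, label='Infected')
    plt.plot(t, np.sum(Ic_t[0, :, :], 0), color='green', lw=2, label='Recovered')
    plt.xlabel('time (days)'); plt.xlim(0, Tf); plt.ylim(0, 1)
    plt.ylabel('Fraction of compartment value'); plt.legend()
    plt.subplot(122)
    for i in (1 + np.arange(len(subclasses) - 1)):
        plt.plot(t, np.sum(Ic_t[i, :, :], 0), lw=2, label=subclasses[i])
    plt.legend(); plt.xlabel('time (days)'); plt.xlim(0, Tf); plt.ylim(0)
# e.g. re-plot the last (Crank Nicolson) run:
plot_epidemic(data, subclasses, Tf)
###Output
_____no_output_____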
|
python-for-apis/python-for-apis-spring-2021.ipynb
|
###Markdown
Getting Data from API's with Python **GW Libraries and Academic Innovation**Monday, February 1, 2021 Workshop goalsThis workshop will cover basic use cases for retrieving data from RESTful API's with Python. By the conclusion of this workshop, you will have worked through the following:* Understanding the REST framework for data retrieval* Constructing a query with parameters in Python using the `requests` library* Writing a `for` loop to retrieve multiple sets of results* Parsing a JSON response* Exporting data in CSV format Tips for using this Google Colab notebookWhen working in a Google Colaboratory notebook, `Shift-Return` (`Shift-Enter`) runs the cell you're on. You can also run the cell using the `Play` button at the left edge of the cell.There are many other keyboard shortcuts. You can access the list via the menu bar, at `Tools`-->`Command palette`. In fact, you can even customize your keyboard shortcuts using `Tools`-->`Keyboard shortcuts`.(If you're working in an Anaconda/Jupyter notebook: - `Control-Enter` (`Command-Return`) runs the cell you're on. You can also run the cell using the `Run` button in the toolbar. `Esc`, then `A` inserts a cell above where you are. - `Esc`, then `B` inserts a cell below where you are. - More shortcuts under `Help` --> `Keyboard Shortcuts`)You will probably get some errors in working through this notebook. That's okay, you can just go back and change the cell and re-run it.The notebook auto-saves as you work, just like gmail and most Google apps. Introduction What is an API?An **A**pplication **P**rogramming **I**nterface is a generic term for functionality that allows one computer application to talk to another. In contrast to a graphical user interface (GUI), which allows an end user to interact with an application via visual symbols (*e.g.* icons) and manual operations (*e.g.* mouse clicks), an API allows a user to interact with the application by writing code. You can think of API's as the glue that holds together the various modules and libraries of code that make up a given system, whether we're talking about a single piece of software or the entire World Wide Web.------------------------- What is REST?**R**epresentational **S**tate **T**ransfer refers to a common set of principles implemented by services that communicate via the web. Most RESTful API's use **HTTP** to provide access. Via HTTP and its core methods, your code can communicate with a web service the way your browser does when you visit a web site. We'll see how to write code to do just that in this workshop. SetupWe're going to use a couple of libraries for making API calls and processing the data these calls return. They are not part of the standard Python distribution, but they're pre-installed for Google Colaboratory notebooks. If you're running a Jupyter notebook locally on your computer via the Anaconda distribution of Python, they are pre-installed there as well. If not, you can install them yourself by running these commands inline in your notebook:`!pip install pandas``!pip install requests`You can also install them at the command line by using the above commands *without* the prefixed exclamation point. Using API's to find and retrieve COVID-19 data First we need to import the libraries we're using to work with this data.As a refresher: - `import` loads an external Python library for use in your code. - `as` with `import` allows us to provide a nickname for the library, so that we don't have to type the full name each time.
###Code
import requests
import pandas as pd
###Output
_____no_output_____
###Markdown
A straightforward request with JSON The first data set we'll use is provided by _The Atlantic_'s [Covid Tracking Project](https://covidtracking.com/data/api).Let's take a moment to look at the documentation together. This API is fairly straightforward. We can retrieve the results in either JSON or CSV. We'll be using JSON, primarily to familiarize ourselves with this format, which is quite common for RESTful API's. **J**ava**S**cript **O**bject **N**otation is a data format designed to map readily onto Javascript data types. As it happens, it also maps readily onto Python data types. We'll use the API **endpoint** for "Historic US Values" in JSON format. API documentation will often refer to multiple endpoints, each of which provides access to a different set or view of data. This endpoint provides time series data for COVID-19 cases in the US.
###Code
covid_us_url = 'https://api.covidtracking.com/v1/us/daily.json'
###Output
_____no_output_____
###Markdown
To fetch the data from the endpoint, we use the `requests` library, calling the `get` method and passing as an argument the endpoint URL. `GET` is one of several HTTP "verbs," which correspond to different actions a web server can be asked to perform. `GET` means, _Give me the data stored at this particular URL path_.
###Code
resp = requests.get(covid_us_url)
###Output
_____no_output_____
###Markdown
`requests.get` returns a `Response` object. This Python object has many useful properties. It's important to remember that with HTTP services, there can be many reasons why your request for data might fail. Common issues include the following:- The server might be down.- You might have used an incorrect or defunct URL.- You might not have the right permissions.Because of that, our `Response` object contains more than **just** the data we have requested. It contains a `status_code` property, which lets us know what **kind** of response the server gave. Anything other than `200` means that the request failed.
###Code
resp.status_code
###Output
_____no_output_____
###Markdown
The `Response` object also contains the response **headers** sent by the server. Every web server you visit transmits one or more headers to the client you're using (web browser, etc.). Most of the time you don't need to worry about these, but when programming with API's, you may find them useful.The `Content-Type` header, for instance, lets us confirm that the data we received was in fact formatted as JSON.Note that our `Response` object has converted these headers to a Python dictionary for ease of access.
###Code
resp.headers
###Output
_____no_output_____
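###Markdown
For example, we can pull out a single header by name. `requests` stores the headers in a case-insensitive dictionary, so the capitalization of the key does not matter:
###Code
resp.headers['Content-Type']
###Output
_____no_output_____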
###Markdown
Each HTTP response also has a **body**. This is either the data we have requested, or some type of error message. The data can be formatted in many different ways. Most plain web pages are formatted as `text/html`. This doesn't actually mean much to Python, since Python doesn't have an HTML data type. But you can view the contents of the body as a Python string by evaluating `resp.text`.
###Code
resp.text
###Output
_____no_output_____
###Markdown
Notice the outer quotation marks alerting us that this is a string. A giant string is no fun to work with as data. Fortunately, if the body of the response has been correctly formatted as JSON, we can easily convert it to more useful Python data types.`resp.json()` converts the **body** of the response, which is the data we requested, into native Python types: strings, numeric types, lists, and dictionaries.**Note**: Not all API's return JSON by default or even at all. Many use XML. If you call `.json()` on a `Response` that does not contain JSON-formatted data, Python will raise an exception.
###Code
data_us_daily = resp.json()
###Output
_____no_output_____
###Markdown
Let's look at this data. What Python data types do you see here?
###Code
data_us_daily
###Output
_____no_output_____
###Markdown
We have a Python list of dictionaries, each of which has the same keys. This is a typical way to represent a table of data in Python.The `pandas` library, however, provides the `DataFrame` type, which makes working with tabular data much easier.The `DataFrame.from_records` method takes a list of Python dictionaries and converts it into a table, where the shared keys are the table columns, and the values become the values in each row.
###Code
data_us_daily = pd.DataFrame.from_records(data_us_daily)
###Output
_____no_output_____
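###Markdown
A quick look at the first few rows confirms the conversion, and a single `to_csv` call covers the "exporting data in CSV format" goal from the start of the workshop (the filename here is just an example):
###Code
data_us_daily.head()
# write a local copy for use outside the notebook
data_us_daily.to_csv('covid_us_daily.csv', index=False)
###Output
_____no_output_____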
###Markdown
Now we can really see the tabular nature of this data. From here, we can use `pandas` methods to clean, sort, filter, aggregate, and even plot the data. We can also export it easily to CSV.We'll come back to `pandas` later in the workshop. For now, let's tackle a slightly more complicated API. Making repeated requestsThe `requests` library is great. But because HTTP requests can be complicated, there are certain steps we will usually want to take when making requests -- like checking for status errors, decoding content, etc. -- that can become repetitive if we have to write them out every time. So let's create a Python **function** to handle all of that housekeeping. Our function will take some arguments: - a url- an optional dictionary of URL parameters (to be explained later)- an optional dictionary of HTTP headersIt will return:- The body of the HTTP response, if the request succeeded.- Otherwise, it will raise a Python exception.
###Code
def get_data(url, params=None, headers=None): # We'll talk about these later
'''Accepts a url, which should be a string.
Optionally, accepts a dictionary of URL parameters and a custom HTTP header.'''
try:
# We pass all our arguments to requests.get
resp = requests.get(url, params=params,
headers=headers)
# If the response is anything other than 200, raise_for_status() will raise an exception
resp.raise_for_status()
# Here we can check for a JSON response
# the expression headers.get('Content-Type', '') looks for a key of 'Content-Type' in the headers dictionary.
# If it doesn't find one, it returns the empty string as a default, since some headers may not have Content-Type specified.
if 'application/json' in resp.headers.get('Content-Type', ''):
# If the header says it's JSON, parse it as JSON
data = resp.json()
return data
else:
# Otherwise, just return the response as text
return resp.text
# Here we trap any errors and print a helpful message for the user
except Exception as e: # Here we catch errors
print('Error fetching data from url', url)
print(resp.text)
# This will cause the exception to bubble up in the stack trace, which is helpful for debugging
raise
###Output
_____no_output_____
###Markdown
If you've never used `try` and `except` before, these Python keywords provide ways for us to catch and handle errors gracefully. They are particularly useful when working with HTTP data, since you can't really predict how the web server you're sending requests to will behave. If no errors/exceptions occur in processing the `try` block, Python will skip the `except` block altogether. At the moment, our `except` block just prints an error message to the screen. But in other situations, you might want to log the errors to a file, or take some other action, depending on the type of error. Getting COVID-19 data by countryThe [COVID 19 API](https://covid19api.com/) collects data from various sources and provides it JSON format.This API is a bit more complex, in that we need to specify both a country and a date range when making our requests.We can check out the documentation on Postman:[https://documenter.getpostman.com/view/10808728/SzS8rjbc](https://documenter.getpostman.com/view/10808728/SzS8rjbc) If we consult the documentation for the endpoint **By Country Total**, we see that the URL should contain the name of the country in a specific format called a _slug_. (This is a format that removes spaces, capitalization, and characters that are more difficult to parse when constructing URL's.)How do we find out the slug? There's an API endpoint for that, too. So our first step is to get the list of slugs and find the one for the country we want whose data we want to retrieve.
###Code
countries_url = 'https://api.covid19api.com/countries'
# We can use our new function to get this data
country_metadata = get_data(countries_url)
###Output
_____no_output_____
###Markdown
Note how the country metadata is presented. Again, we have a list of dictionaries, each of which contains the name of a country, its slug, and its ISO code. ExerciseTo get data for a specific country, we can use the following URL:```covid_country_url = 'https://api.covid19api.com/total/country/{country_slug}/status/confirmed'```We need to replace the `country_slug` in curly braces with the actual slug for the country we are interested in.How would you use `country_metadata` to look up the slug for a specific country by name, _e.g._, Germany? Use only Python code. AnswerThere are multiple valid approaches. Here's one handy way.```country_data_dict = {c['Country']: c for c in country_metadata}```This is called a **dictionary comprehension**. It's basically a `for` loop embedded in a Python dictionary expression. You can use comprehensions to create Python dicts, lists, and sets. Here we convert a list of dictionaries into a dictionary of dictionaries. That allows us to look up the metadata for each country by its more standard name.
###Code
country_data_dict = {c['Country']: c for c in country_metadata}
###Output
_____no_output_____
###Markdown
Now we can find the slug like so:
###Code
germany_slug = country_data_dict['Germany']['Slug']
###Output
_____no_output_____
###Markdown
To create the URL for the _By Country Total_ endpoint, we can use string formatting. The part in curly braces will be replaced by whatever value we pass to a keyword argument to the `.format` method where the keyword is the same as the part in curly braces. Note the `.format` is actually a method defined on the string itself. All string objects in Python have this method available.
###Code
covid_country_url = 'https://api.covid19api.com/total/country/{country_slug}/status/confirmed'
germany_url = covid_country_url.format(country_slug=germany_slug)
###Output
_____no_output_____
###Markdown
To get country COVID data for a range of dates, we can supply a `from` and a `to` date as URL parameters. URL parameters are the parts of the URL that follow a question mark. They typically have the form `key=value` where `key` is the parameter name and `value` is the associated value. You can think of them like keywords you enter into a search engine using an Advanced Search form. Constructing a URL with parameters in Python is straightforward with the `requests` library. As we've seen, it takes an optional keyword argument called `params`, which should be a dictionary mapping keys to values. The Covid API documentation indicates that the date value should conform to a particular format. Assuming we want data for each day starting at midnight, we can use string formatting to simplify creation of these parameters.
###Code
date_str = '{date}T00:00:00Z'
params = {'from': date_str.format(date='2020-03-01'),
'to': date_str.format(date='2021-01-31')}
germany_data = get_data(germany_url, params=params)
###Output
_____no_output_____
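###Markdown
Before wrapping this in a reusable function, it is worth peeking at what came back: a list of dictionaries, one per day, with keys such as `Country`, `Date`, and `Cases`, the fields we rely on later.
###Code
print(len(germany_data))
germany_data[0]
###Output
_____no_output_____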
###Markdown
ExerciseCan you write a function that accepts the following:- a country name as a string, e.g., `'Germany'`- a from-date- a to dateand that returns the case data for that country?**Requirements** 1. We want to be able to pass in the standard country names in English, not the slugs.2. We want to pass in the dates as strings of the format YEAR-MONTH-DAY.3. We want to receive the data for the country that we identified.4. **Bonus**: If the user submits a country name that's not in the list, we want to catch it gracefully, printing an error message for the user but not breaking the function **Answer**
###Code
def get_country_data(country, from_date, to_date):
'''First argument should be a Python string.
Second and third arguments should be Python strings of the format YEAR-MONTH-DAY.'''
# Uses the date_str we defined above to create the parameters
params = {'from': date_str.format(date=from_date),
'to': date_str.format(date=to_date)}
try:
# Uses our predefined dictionary to retrieve the slug
# In a try/except block to catch cases where the country name we provided isn't in the dictionary
slug = country_data_dict[country]['Slug']
# If a dictionary doesn't have a certain key, a KeyError is raised
except KeyError:
# Error message for the user
print("Country not found: ", country)
return
# Creates the URL for this country
url = covid_country_url.format(country_slug=slug)
# Calls our predefined function
data = get_data(url, params=params)
# Don't forget to return something!
return data
get_country_data('United Kingdom', '2020-03-01', '2021-01-26')
###Output
_____no_output_____
###Markdown
What if we want to return data for multiple countries at the same time? We can refactor our function using a `for` loop and a list.
###Code
def get_country_data(countries, from_date, to_date):
'''First argument should be a Python list.
Second and third arguments should be Python strings of the format YEAR-MONTH-DAY.'''
# Uses the date_str we defined above to create the parameters
params = {'from': date_str.format(date=from_date),
'to': date_str.format(date=to_date)}
# An empty list to hold the data for all the countries
all_data = []
# Loops through the list of contries
for country in countries:
try:
# Uses our predefined dictionary to retrieve the slug
# In a try/except block to catch cases where the country name we provided isn't in the dictionary
slug = country_data_dict[country]['Slug']
# If a dictionary doesn't have a certain key, a KeyError is raised
except KeyError:
# Error message for the user
print("Country not found: ", country)
# Goes to the next iteration of the loop
continue
# Creates the URL for this country
url = covid_country_url.format(country_slug=slug)
# Calls our predefined function
data = get_data(url, params=params)
# Adds these results to the original set
# Using extend (rather than append) prevents us from getting a list of lists
all_data.extend(data)
# Don't forget to return something!
return all_data
three_countries = get_country_data(['Germany', 'China', 'United States of America'],
from_date='2020-03-01',
to_date='2021-01-26')
###Output
_____no_output_____
###Markdown
Assuming we used `.extend` to build our list, we can create a `DataFrame` with this data, which should be a single list of dictionaries.
###Code
comp_data = pd.DataFrame.from_records(three_countries)
###Output
_____no_output_____
###Markdown
Analyzing COVID-19 country data We can filter our DataFrame and can even graph our data using `pandas` built-in plotting functions, which use `matplotlib` under the hood.Let's look at how we would graph the trend of cases for a single country. Our dataset contains the cumulative total by date for each country. If we want to plot date against case and country, the first step is to convert the date column to a datetime format that Python can recognize. (Datetime values transmitted via JSON will typically be either strings or integers.)`pandas` makes such conversions fairly straightforward. The `pandas.to_datetime` method recognizes strings in a wide variety of standard formats and converts them to Python datetime objects.
###Code
comp_data['Date'] = pd.to_datetime(comp_data['Date'])
###Output
_____no_output_____
###Markdown
We can now use the `DataFrame.loc` property to isolate those rows where the `Country` column contains the name `Germany`.
###Code
germany = comp_data.loc[comp_data['Country'] == 'Germany']
###Output
_____no_output_____
###Markdown
To create a timeseries plot, we can use the `DataFrame.plot` method. In this case, since there are multiple columns, we'll want to supply the `x` and `y` arguments to the `plot` method, indicating which column to use as which axis.
###Code
germany.plot(x='Date', y='Cases')
###Output
_____no_output_____
###Markdown
Our plot could use some better formatting and a title. The `plot` method returns a `matplotlib Axes` object, which can be used to set properties on the plot itself.
###Code
ax = germany.plot(x='Date', y='Cases')
ax.set_title('COVID cases in Germany, March 2020-January 2021')
ax.ticklabel_format(style='plain', axis='y')
###Output
_____no_output_____
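###Markdown
The single-country plot generalizes to a comparison across countries by drawing each group of `comp_data` on a shared axis. A short sketch using only the columns built above:
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
for name, group in comp_data.groupby('Country'):
    group.plot(x='Date', y='Cases', ax=ax, label=name)
ax.set_title('COVID cases by country, March 2020-January 2021')
ax.ticklabel_format(style='plain', axis='y')
ax.legend()
###Output
_____no_output_____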
|
Chapters/Old/06.Bioinspired/Chapter6.ipynb
|
###Markdown
Chapter 6: Bio-inspired optimization*Selected Topics in Mathematical Optimization**2016-2017***Bernard De Baets****Michiel Stock****Raúl Pérez-Fernández** 
###Code
from random import random, choice
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
%matplotlib inline
###Output
_____no_output_____
###Markdown
Introduction and general ideaThe open (or obsessive) mind can find optimization everywhere in the world around him. Ants find the optimal paths from food to their nest, rivers stream to maximize their water flow, plants maximize the amount of sunlight captured with their leaves and many of the laws of physics can be formulated as a minimization of energy. Bio-inspired optimization, or bio-inspired computing in general, borrows ideas from nature to solve complex problems. A central theme among these paradigms is that they use simple, local manipulations from which the general computation is an emergent property. Many also make use of a **population** of candidate solutions which is improved iteratively.Bio-inspired optimization algorithms (BIOAs) are often applied to more complex problems than the ones discussed so far. Many real-world problems are over- or underconstrained, lack detailed information about the target function (e.g. no gradient can be computed) or deal with complex 'structured data'. Examples of such problems which (bio)engineers routinely deal with include designing an antenna, calibrating a forest fire model or creating a new functional peptide (the topic of this project).Contrary to most of the algorithms discussed so far, BIOAs often lack theoretical guarantees, both on their running time and on the quality of the solution. It is not uncommon to let such algorithms run for days or weeks. Since these algorithms do not exploit the structure or the gradient of the target function, only function evaluations are used. This is especially costly when evaluating the target function is expensive, for example when determining the parameters of a large set of ordinary differential equations. Furthermore, most bio-inspired optimization algorithms have some hyperparameters which must be tuned to the problem, otherwise the optimization might run badly.Despite these drawbacks, BIOAs also have many merits. Because they are very flexible, they can be applied to almost any optimization problem. For some problem classes (such as NP-hard problems), there are no efficient exact solvers, making for example the traveling salesman problem unsolvable for moderately large instances using standard techniques. BIOAs on the other hand can often generate reasonable solutions quite quickly. Furthermore, BIOAs work **progressively**, meaning that intermediate solutions can be obtained at any time. Hence, even if the optimization process is prematurely aborted, we still have some result to show for our effort. BIOAs come in [many flavors](https://arxiv.org/pdf/1307.4186.pdf), which is one of the reasons why some researchers dislike them. An important class of algorithms tries to mimic swarm intelligence, for example how ants find their way in their surroundings is implemented in ant colony optimization. In this project we will work with genetic algorithms (GAs), which are based on the most successful optimization algorithm in nature: evolution. GAs use *in silico* evolution to obtain iteratively better solutions for a problem. Most GAs are based on the following four concepts:- **maintenance of a population**- **creation of diversity**- **a natural selection**- **genetic inheritance**  Genetic representation  **genotype**: representation of a candidate solution on which the genetic algorithm will operate- often a discrete representation (e.g. binary encoding of numbers)- ideally covers the space of optimal solutions (prior knowledge)- recombination should have a high probability of generating increasingly better individuals **phenotype**: the candidate solution in a representation of the problem- **decoding**: translating genotype into phenotype- **encoding**: translating the phenotype into genotypeSince the genetic operators of the GA only work on the genotype, only decoding has to be defined. **fitness**: the quality of the solution, which will be used to select individuals for the next generation Selection and reproduction**Selection** is the procedure such that individuals with a higher fitness are more likely to go to the next generation.Usually the population size is fixed. Individuals with high fitness are likely to be selected multiple times, those with low fitness might not be selected. **Roulette wheel selection**A new population of $n$ individuals is chosen by choosing individuals proportional to their fitness:$$p(i) = \frac{f(i)}{\sum_{j=1}^nf(j)}\,,$$with $p(i)$ the probability of choosing individual $i$ and $f(i)$ the fitness of individual $i$.Drawbacks:- only works when fitness is positive- the selection process is dependent on the (nonlinear) scaling of the fitness **Tournament selection**Randomly choose two individuals, retain the individual with the highest fitness for the population of the next generation (pick one at random if fitness is equal). Repeat as many times as there are individuals in the population. Below is an illustration of the two types of selection.
###Code
# generate random initial population
population_fitness = [np.random.randn()**2 for _ in range(1000)]
# roulette wheel selection
population_fitness_roulette = []
finess_sum = np.sum(population_fitness)
while len(population_fitness_roulette) < len(population_fitness):
fitness = choice(population_fitness)
if fitness / finess_sum > random():
population_fitness_roulette.append(fitness)
# tournament selection
population_fitness_tournament = []
while len(population_fitness_tournament) < len(population_fitness):
selected_fitness = max(choice(population_fitness), choice(population_fitness))
population_fitness_tournament.append(selected_fitness)
fig, (ax0, ax1, ax2) = plt.subplots(nrows=3, sharex=True, figsize=(10, 7))
ax0.hist(population_fitness)
ax1.hist(population_fitness_roulette)
ax2.hist(population_fitness_tournament)
ax0.set_title('Fitness inital population')
ax1.set_title('Fitness after roulette selection')
ax2.set_title('Fitness after tournament selection')
###Output
_____no_output_____
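###Markdown
The histograms above make the shift in the fitness distribution visible; printing the mean fitness of each population gives the same comparison as a single number per scheme:
###Code
print("mean fitness, initial population:   %.3f" % np.mean(population_fitness))
print("mean fitness, roulette selection:   %.3f" % np.mean(population_fitness_roulette))
print("mean fitness, tournament selection: %.3f" % np.mean(population_fitness_tournament))
###Output
_____no_output_____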
###Markdown
**Elitism**: after selection and recombination, the individual with the highest fitness is often retained in the population. This way the best fitness of the population will never decrease. Genetic operatorsSelection increases the average quality of the candidate solutions, at the cost of decreasing the diversity of the population. **Recombination** is the process of bringing back diversity into the population *without any regard for fitness*. Mutations- small change to the genotype- mutations operate at the level of the individual- example: flipping a bit in a binary representation- example: adding a normally distributed number to a real value Crossover**Crossover** recombines two individuals (parents) into two children by randomly switching parts of the genotypeTypes of crossover:- **one-point**: randomly select a crossover point on each of the two strings and swap around this point- **multipoint**: the same, but with $n$ crossover points- **uniform**: each element is switched between the parents with a certain probability (usually 50%)- specialised crossovers for graphs, cycles or trees  Algorithms Hill climbing- iterative improvement of a starting point- converges to a local optimum (dependent on the starting position)- usually executed multiple times with different initial conditions ```Hill climbing algorithm choose a random current_solution local := FALSE // assume solution is not in local optimum while local is FALSE: set local to TRUE for all neighbors of current_solution: if neighbor improves current_solution: set local to FALSE current_solution := neighbor return current_solution``` **Advantages**:- easy to implement- only needs the scoring function and a definition of neighborhood to search**Disadvantages**:- converges to a local optimum that is dependent on the starting position- no guarantees on the quality of the solution- no guarantees on the running time Simulated Annealing- instead of scanning the whole neighborhood, one candidate solution is randomly chosen - if the new solution has a higher fitness: accept it - if it has a lower fitness, accept with probability $e^{\Delta f / T}$- the temperature $T$ controls the **exploration** vs **exploitation** trade-off- the temperature is gradually decreased when running the algorithm ```Simulated annealing algorithm(Tmax, Tmin, r, kT) choose random initial point x T := Tmax while T > Tmin repeat kT times randomly choose xn from neighborhood of x if f(xn) > f(x) x := xn else with probability exp( (f(xn) - f(x))/T ) x := xn T := T * r return x ``` **Example of simulated annealing**$$\min_x\ f(x) = |x\cos(x)| + \frac{|x|}{2}\,.$$During each step, a new candidate solution is selected randomly according to$$x_n = x + \Delta x\,,$$with$$\Delta x \sim N(0, \sigma)\,.$$
###Code
from simulated_annealing_demo import plot_SA_example
f_toy_example = lambda x : np.abs(x * np.cos(x)) + 0.5 * np.abs(x)
x0 = 55
hyperparameters = {'Tmax' : 1000, 'Tmin' : 0.1,
'r' : 0.8, 'kT' : 10, 'sigma' : 5}
plot_SA_example(f_toy_example, x0, hyperparameters)
###Output
_____no_output_____
###Markdown
Genetic algorithm ```Genetic algorithm(population size, number of generations, pmut, pcross) initialize a random population repeat for a number of generations use tournament selection to generate a new population recombinate the new population using mutations and crossover apply elitism return best individual of final population``` Project: Designing bio-active peptidesSome peptides have an anti-microbial function. These peptides originate from the innate immune system and are found in nearly all classes of life. These peptides often have a general mode of action and are thus effective against a broad range of microorganisms, and it is quite difficult for these organisms to acquire resistance against them. As such they are an attractive alternative to conventional antibiotics.  In this project we will use genetic algorithms (in the very broad sense) to design and optimize a novel anti-microbial peptide. For this, we have downloaded a fasta file containing the amino acid sequence of over two thousand anti-microbial peptides as a reference set from the [Collection of Anti-Microbial Peptides](http://www.camp.bicnirrh.res.in/) database. Using a machine learning technique called kernel embedding, we have modelled the distribution of these peptides. This model can be used to generate a score between 0 and 1 for a given peptide, representing our belief that the peptide has an anti-microbial function (higher is better). The model is externally developed and is to us a black box. For example:
###Code
from anti_microbial_peptide_model import score_peptide
print(score_peptide('ASRTYUIPUYGRVHHGY')) # a random peptide
print(score_peptide('SKITDILAKLGKVLAHV')) # a peptide from the database
%timeit score_peptide('ASRTYUIPUYGRVHHGY') # time to score a peptide
###Output
_____no_output_____
###Markdown
We will try to find a new peptide with a length of twenty amino acids with the highest possible score according to the given model. To this end, hill climbing, simulated annealing and genetic algorithms will be used.

For the problem setting at hand, we clarify the following terminology:
- The **neighborhood** of a peptide: the set of all peptides which differ by exactly one amino acid from the given peptide.
- When a **mutation** occurs, a residue of a peptide is replaced by a randomly chosen amino acid. In our implementation of simulated annealing and the genetic algorithm, each amino acid in a peptide has a small fixed probability `pmut` to be mutated.
- During a **crossover** event between two peptides, at each position the corresponding residues of the peptides are either switched or remain unchanged with equal probability. Crossovers occur between two randomly selected individuals with a probability `pcross`.

**Assignment 1**

Complete the implementation of the function `hill_climbing` to bring either a given peptide or a randomly generated peptide of a given length to a local optimum. Run the algorithm ten times to generate optimized peptides of length twenty. What scores do you get? Plot the best score as a function of the iterations for each run and describe these plots.
###Code
from protein_sequence_features import amino_acids
amino_acids # the amino acids
def explore_peptide_region(peptide, scoring):
"""
    Searches all neighboring peptides of a given peptide that differ by exactly
    one amino acid and returns the best score together with the corresponding peptide.
    """
    # complete this
    return best_score, best_peptide
def hill_climbing(peptidesize=None, peptide=None, scoring=score_peptide):
"""
Uses hill climbing to find a peptide with a high score for
antimicrobial activity.
Inputs:
        - peptidesize : give size if started from a random peptide
        - peptide : optionally give an initial peptide to improve
- scoring : the scoring function used for the peptides
Outputs:
- peptide : best found peptide
- best_scores : best scores obtained through the iterations
"""
assert peptidesize is not None or peptide is not None
# if no peptide is made, give a random one
if peptide is None:
peptide = ''
for res in range(peptidesize):
peptide += choice(amino_acids)
else:
peptidesize = len(peptide)
best_scores = [scoring(peptide)]
peptides = [peptide]
while True:
new_score, new_peptide = # find
if ... # improvement?
else:
break
return peptide, best_scores
%%time
# make a plot of the running of hill climbing
# for ten runs
###Output
_____no_output_____
###Markdown
COMMENT ON THIS PLOT? HOW ARE THE DIFFERENT RUNS THE SAME AND WHAT DO THEY HAVE IN COMMON?

**Assignment 2**

Hill climbing greedily improves the given peptide until no single change of an amino acid residue increases the score. The solution of hill climbing is likely to be a local optimum (and not necessarily a good one!).

We will try to generate better peptides using simulated annealing (which only uses mutations to generate diversity in the candidate solutions) and a genetic algorithm (which uses both mutations as well as crossover to generate novel peptides).

1. Complete the functions to generate diversity in the peptides. The function `mutate_peptide` takes a peptide as input and returns a new peptide where each amino acid is replaced by a randomly chosen amino acid with a probability `pmut`. The function `crossover_peptides` requires two peptides of equal length as input and outputs the corresponding random crossover peptides.
2. Complete the function `simulated_annealing` to optimize a random peptide of fixed length. Try to find an optimal peptide of length twenty. Discuss how to choose good values for `Tmin`, `Tmax`, `r` and `kT`.
3. Finally, complete the function `genetic_algorithm`. You also have to complete the functions `tournament_selection` and `recombinate`, which will be used in the main algorithm. Try to find an optimal peptide of length twenty as well, using some trial and error to find the hyperparameters.
4. Compare the quality of the solutions obtained with hill climbing, simulated annealing and the genetic algorithm. If you take code complexity and computation time into account, which would you try first for general problems?
###Code
def mutate_peptide(peptide, pmut=0.05):
"""
    Replaces each amino acid of the peptide with an arbitrarily chosen
amino acid with a probability pmut
"""
# complete this
return mutated_peptide
def crossover_peptides(peptide1, peptide2):
"""
Performs crossover for two peptides, each position is switched with equal
probability.
Inputs:
- peptide1, peptide2
Outputs:
- crossed_peptide1, crossed_peptide2
"""
# complete this
return crossed_peptide1, crossed_peptide2
peptide1 = 'AAAAAAAAAAAA'
peptide2 = 'CCCCCCCCCCCC'
print(mutate_peptide(peptide1, pmut=0.1))
print(*crossover_peptides(peptide1, peptide2))
def simulated_annealing(peptidesize, Tmax, Tmin, pmut, r, kT,
scoring=score_peptide):
"""
Uses simulated annealing to find a peptide with a high score for
antimicrobial activity.
Inputs:
- peptidesize : length of the peptide
- Tmax : maximum (starting) temperature
- Tmin : minimum (stopping) temperature
- pmut : probability of mutating an amino acid in the peptide
- r : rate of cooling
        - kT : number of iterations with fixed temperature
- scoring : the scoring function used for the peptides
Outputs:
- peptide : best found peptide
- fbest : best scores obtained through the iterations
- temperatures : temperature during the iterations
"""
# create intial peptide
peptide = ''
for _ in range(peptidesize):
peptide += choice(amino_acids)
temp = Tmax
fstar = scoring(peptide)
fbest = [fstar]
temperatures = [temp]
while temp > Tmin:
for _ in range(kT):
#
if # ...
# ...
fbest.append(fstar) # save best value
temperatures.append(temp) # save best temperature
return peptide, fbest, temperatures
%%time
peptide_SA, fitness, temperature = simulated_annealing(peptidesize=20, # ...
# make a plot for simulated annealing
###Output
_____no_output_____
###Markdown
DESCRIBE THE EFFECT OF THE HYPERPARAMETERS. MAKE A PLOT TO ILLUSTRATE THE BEHAVIOUR BELOW.
###Code
# EXPERIMENT WITH THE HYPERPARAMETERS OF SA HERE
def tournament_selection(scored_peptides):
"""
Apply tournament selection on a list of scored peptides.
Input:
- scored_peptides : a list of scored peptides, each element is a tuple
of the form (score, peptide)
Output:
- selected_peptides : a list of peptides selected from scored_peptides
based on tournament selection (without the score)
"""
# complete this
return selected_peptides
def recombinate(population, pmut, pcross):
"""
Recombinates a population of peptides.
Inputs:
- population : a list of peptides
- pmut : probability of mutating an amino acid
- pcross : probability of two peptides crossing over
Output:
- recombinated_population
"""
recombinated_population = []
    # the population with mutation and crossover applied to it
return recombinated_population
def genetic_algorithm(peptidesize, n_iterations, popsize, pmut, pcross,
scoring=score_peptide):
"""
Uses a genetic algorithm to find a peptide with a high score for
antimicrobial activity.
Inputs:
- peptidesize : length of the peptide
- n_iterations : number of iterations (generations)
- popsize : size of the population
- pmut : probability of mutating an amino acid in the peptide
- pcross : probability of performing a crossover
- scoring : the scoring function used for the peptides
Outputs:
- best_peptide : best found peptide
- best_fitness_iteration : best scores obtained through the iterations
"""
# initialize population
population = []
for _ in range(popsize):
peptide = ''
for _ in range(peptidesize):
peptide += choice(amino_acids)
population.append(peptide)
# score peptides
scored_peptides = [(scoring(peptide), peptide)
for peptide in population]
best_fitness, best_peptide = max(scored_peptides)
best_fitness_iteration = [best_fitness]
for iter in range(n_iterations):
# select population
# recombinate population
# elitism
# score peptides
# select best
best_fitness, best_peptide = max(scored_peptides)
best_fitness_iteration.append(best_fitness)
return best_peptide, best_fitness_iteration
%%time
peptide_GA, best_fitness_iteration = genetic_algorithm(peptidesize=20, n_iterations=1000,
popsize=500, pmut=0.02, pcross=0.8, scoring=score_peptide)
# make a plot for the genetic algorithm
###Output
_____no_output_____
|
00_python_basics/function_introduction.ipynb
|
###Markdown
FunctionsA **function** is a named sequence of statements that performs a computation. - When you define a function, you specify the name and the sequence of statements. - Later, you can “call” the function by name. Function callsWe've already seen a **function call**:
###Code
type(42)
###Output
_____no_output_____
###Markdown
- The name of the function is type. - The expression in parentheses is called the argument of the function. - The result, for this function, is the type of the **argument**.- It is common to say that a function “takes” an argument and “returns” a result. The result is also called the **return value**.
###Code
int('32') # string to int
int('Hello')
float(32) # int / string to float
float('3.14')
str(31) # int / float to string
###Output
_____no_output_____
###Markdown
Math functionsPython has a math module that provides most of the familiar mathematical functions. - A **module** is a file that contains a collection of related functions.- Before we can use the functions in a module, we have to import it with an **import statement**:
###Code
import math
###Output
_____no_output_____
###Markdown
This statement creates a **module object** named math.
###Code
math
###Output
_____no_output_____
###Markdown
The module object contains the functions and variables defined in the module. - To access one of the functions, you have to specify the name of the module and the name of the function, separated by a period. - This format is called **dot notation** Example
###Code
degrees = 45
radians = degrees / 180.0 * math.pi
math.sin(radians)
###Output
_____no_output_____
###Markdown
The expression `math.pi` gets the variable pi from the math module. Its value is a floating-point approximation of $\pi$, accurate to about 15 digits.
###Code
math.pi
###Output
_____no_output_____
###Markdown
New FunctionsA **function definition** specifies the name of a new function and the sequence of statements that run when the function is called.
###Code
def print_lyrics():
print("Hello darkness my old friend")
print("Pink fluffy unicorns!")
###Output
_____no_output_____
###Markdown
`def` is a keyword that indicates that this is a function definition.Defining a function creates a **function object**, which has type function:
###Code
type(print_lyrics)
print_lyrics()
###Output
Hello darkness my old friend
Pink fluffy unicorns!
###Markdown
Once you have defined a function, you can use it inside another function.
###Code
def repeat_lyrics():
print_lyrics()
print_lyrics()
repeat_lyrics()
###Output
Hello darkness my old friend
Pink fluffy unicorns!
Hello darkness my old friend
Pink fluffy unicorns!
###Markdown
Definitions and UsesPulling together the code fragments from the previous section, the whole program looks like this:

```python
def print_lyrics():
    print("Hello darkness my old friend")
    print("Pink fluffy unicorns!")

def repeat_lyrics():
    print_lyrics()
    print_lyrics()

repeat_lyrics()
```

This program contains two function definitions: `print_lyrics` and `repeat_lyrics`. You have to create a function before you can run it. In other words, the function definition has to run before the function gets called. Parameters and argumentsSome of the functions we have seen require arguments. Inside the function, the arguments are assigned to variables called **parameters**. Here is a definition for a function that takes an argument:
###Code
def print_twice(param):
print(param)
print(param)
print_twice('Hello')
print_twice(42)
print_twice(math.pi)
print_twice('Spam ' * 10)
print_twice(math.cos(math.pi))
###Output
-1.0
-1.0
###Markdown
The argument is evaluated before the function is called, so in the examples the expressions `'Spam '*10` and `math.cos(math.pi)` are only evaluated once.
###Code
spam = 'Spam is the king of breakfast!'
print_twice(spam)
###Output
Spam is the king of breakfast!
Spam is the king of breakfast!
###Markdown
Variables and parameters are localWhen you create a variable inside a function, it is **local**, which means that it only exists inside the function. For example:
###Code
def cat_twice(part1, part2):
cat = part1 + part2
print_twice(cat)
###Output
_____no_output_____
###Markdown
This function takes two arguments, concatenates them, and prints the result twice. Here is an example that uses it:
###Code
line1 = 'Hello Darkness! '
line2 = 'Big Fluffy Unicorns!'
cat_twice(line1, line2)
###Output
Hello Darkness! Big Fluffy Unicorns!
Hello Darkness! Big Fluffy Unicorns!
###Markdown
When cat_twice terminates, the variable cat is destroyed. If we try to print it, we get an exception:
###Code
print(cat)
###Output
_____no_output_____
|
TKtalk_jupyter.ipynb
|
###Markdown
Jupyter notebooks in education==========================**Zsolt Elter, Andreas Solders** *TK Talk, 2020 March* Content- Context - Energy physics II with nuclear energy- Brief overview of Jupyter notebooks- Lots of demonstration- Feed-back and Feed-forward- Hopefully a lot of Q&A (we need ideas for notebooks:)) Course where implemented- Energy Physics II with Nuclear Energy, 10.0 c (first part)- Introductory reactor physics - Neutron cross sections - Neutron slowing down - Neutron diffusion - Point kinetics- Large variety of data is involved (cross sections, nubar, spectra)- Reactor physics is driven by computations- Ideal case for Problem-based learning Course where implemented- Energy Physics II with Nuclear Energy, 10.0 c (first part)- Instructions - Traditional lectures - Tutorials - Seminars, computer exercise, study visit, ...- Examination - Home assignments - Oral exam Usage in the course- Tutorial solutions (show plots and equations)- Interactive plots in lectures- Home assignments - Students receive introduction - Data (eg. cross section or measurement) - Instructions - Then they write the exercise in the notebook What is a Jupyter notebook?Browser-based document mixing1. Narrative text written in Markdown - Markdown is a very **simple** and _easy to learn_ markup language, aka ~~difficult~~.2. Equations written in $\LaTeX$3. Live, executable code (eg. `python`)4. Visualizations (eg. matplotlib)5. Hyperlinks: - [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)7. Figures  Might be familiar to Wolfram Mathematica users! Slide shows- Save the Notebook as a slide show (File -> Download as -> Reveal.js slides (.slides.html)) - Run it in a browser of your choice - Passive! - Alternatively, use RISE to actively display your NB in your browser (like this presentation) - Make changes to your slide while presenting - Draw directly in the slide - Draw on a chalk board - Execute code directly in the slide! Python code - first exampleOne describes some problem with equations, then some code.$$f(t)=C\cdot t^3$$
###Code
import matplotlib.pyplot as plt
import numpy as np
C=3
t=np.linspace(-10,10,1000)
plt.figure(figsize=(3, 2))
plt.plot(t,C*t**3)
plt.xlabel('t')
plt.ylabel('f')
plt.show()
###Output
_____no_output_____
|
experiments/tl_3v2/jitter1/oracle.run1.framed-cores/trials/12/trial.ipynb
|
###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed ParametersThese are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
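For reference, a minimal sketch of how such a parameter set is typically injected from outside this template with papermill: it executes a copy of the notebook and overrides the cell tagged "parameters" with the supplied dictionary. The file names and parameter values below are placeholders, not taken from this experiment.

```python
import papermill as pm

# Execute the template with an injected parameter set; paths are hypothetical.
pm.execute_notebook(
    "trial_template.ipynb",            # this notebook (input)
    "trial_output.ipynb",              # executed copy with results
    parameters={"lr": 1e-4, "seed": 500, "n_epoch": 50},
)
```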
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3-jitter1v2:oracle.run1.framed -> cores",
"device": "cuda",
"lr": 0.0001,
"x_shape": [2, 256],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [
"unit_power",
"jitter_256_1",
"lowpass_+/-10MHz",
"take_200",
],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [
"unit_power",
"jitter_256_1",
"take_200",
"resample_20Msps_to_25Msps",
],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 500,
"dataset_seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____
|
experimental/widgets/7_Widget Alignment.ipynb
|
###Markdown
**1.** `VBox(HBox)`
###Code
# Widgets used throughout this notebook (assuming ipywidgets is available)
from ipywidgets import (VBox, HBox, Dropdown, ColorPicker, Button, Textarea, Text,
                        Checkbox, IntSlider, Controller, IntText, FloatRangeSlider,
                        FloatText, FloatProgress, ToggleButton, Valid, Tab)

VBox([HBox([VBox([Dropdown(description='Choice', options=['foo', 'bar']),
ColorPicker(description='Color'),
HBox([Button(), Button()])]),
Textarea(value="Lorem ipsum dolor sit amet, consectetur adipiscing elit,"
"sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. "
"Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris "
"nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in "
"reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla "
"pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa "
"qui officia deserunt mollit anim id est laborum.")]),
HBox([Text(), Checkbox(description='Check box')]),
IntSlider(),
Controller()], background_color='#EEE')
###Output
_____no_output_____
###Markdown
**2.** `HBox(VBox)`
###Code
HBox([VBox([Button(description='Press'), Dropdown(options=['a', 'b']), Button(description='Button')]),
VBox([Button(), Checkbox(), IntText()])], background_color='#EEE')
###Output
_____no_output_____
###Markdown
**3.** `VBox(HBox)` width sliders, range sliders and progress bars
###Code
VBox([HBox([Button(), FloatRangeSlider(), Text(), Button()]),
HBox([Button(), FloatText(),
FloatProgress(value=40), Checkbox(description='Check')]),
HBox([ToggleButton(), IntSlider(description='Foobar'),
Dropdown(options=['foo', 'bar']), Valid()]),
])
###Output
_____no_output_____
###Markdown
**4.** Dropdown resize
###Code
dd = Dropdown(description='Foobar', options=['foo', 'bar'])
dd
dd.layout.width = '148px'
cp = ColorPicker(description='foobar')
###Output
_____no_output_____
###Markdown
**5.** Colorpicker alignment, concise and long version
###Code
VBox([HBox([Dropdown(width='148px', options=['foo', 'bar']),
Button(description='Button')]), cp, HBox([Button(), Button()])])
cp.concise = True
cp.concise = False
cp2 = ColorPicker()
VBox([HBox([Button(), Button()]), cp2])
cp2.concise = True
cp2.concise = False
###Output
_____no_output_____
###Markdown
**6.** Vertical slider and progress bar alignment and resize
###Code
HBox([IntSlider(description='Slider', orientation='vertical', height='200px'),
FloatProgress(description='Progress', value=50, orientation='vertical', height='200px')])
HBox([IntSlider(description='Slider', orientation='vertical'),
FloatProgress(description='Progress', value=50, orientation='vertical')])
###Output
_____no_output_____
###Markdown
**7.** Tabs
###Code
t = Tab(children=[FloatText(), IntSlider()], _titles={0: 'Text', 1: 'Slider'})
t
t.selected_index = 1
###Output
_____no_output_____
|
jupyter_notebooks/dao3d_2d.ipynb
|
###Markdown
3D-DAOSTORM 2D / 2D fixed analysis.This notebook explains how to do 2D or 2D fixed analysis using 3D-DAOSTORM.* In 2D fixed fitting we constrain the Gaussian fitting function with a fixed $\sigma$ value.* In 2D fitting the Gaussian $\sigma$ can vary, but it is forced to be the same in X and Y. Configuring the directoryCreate an empty directory somewhere on your computer and tell Python to go to that directory.
###Code
import os
os.chdir("/home/hbabcock/Data/storm_analysis/jy_testing/")
print(os.getcwd())
###Output
_____no_output_____
###Markdown
Generate sample data for analysis.
###Code
import storm_analysis.jupyter_examples.dao3d_2d as dao3d_2d
dao3d_2d.configure()
###Output
_____no_output_____
###Markdown
Working with analysis parameters.In this example we'll only adjust `threshold` but other important parameters include `sigma`, `roi_size` and `find_max_radius`.
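For orientation, `sigma` is the width (in pixels) of the circularly symmetric Gaussian that is fit to each spot. Schematically (the notation here is ours, not taken from the package documentation):

$$I(x,y) \simeq h\,\exp\!\left(-\frac{(x-x_0)^2+(y-y_0)^2}{2\sigma^2}\right) + b,$$

with peak height $h$, centre $(x_0, y_0)$ and background $b$. In 2D fixed fitting $\sigma$ is held at the supplied value, while in 2D fitting it is a free (but X/Y-symmetric) parameter.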
###Code
import storm_analysis.sa_library.parameters as params
daop = params.ParametersDAO().initFromFile("example.xml")
###Output
_____no_output_____
###Markdown
Getting help with a parameter:
###Code
print(daop.helpAttr("threshold"))
###Output
_____no_output_____
###Markdown
Changing or getting a parameter. Here we are setting `max_frame` to 1 so that 3D-DAOSTORM will only analyze the first frame.
###Code
daop.changeAttr("max_frame", 1)
print("max_frame is", daop.getAttr("max_frame"))
###Output
_____no_output_____
###Markdown
Print out all of the available parameters and their current values.
###Code
daop.prettyPrint()
###Output
_____no_output_____
###Markdown
Testing analysis parameters.
###Code
import os
import storm_analysis.jupyter_examples.overlay_image as overlay_image
import storm_analysis.daostorm_3d.mufit_analysis as mfit
# For this data-set, no localizations will be found if threshold is above 25.0
daop.changeAttr("threshold", 6.0)
daop.changeAttr("find_max_radius", 5) # original value is 5 (pixels)
daop.changeAttr("roi_size", 9) # original value is 9 (pixels)
daop.changeAttr("sigma", 1.5) # original value is 1.5 (pixels)
# Save the changed parameters.
daop.toXMLFile("testing.xml")
###Output
_____no_output_____
###Markdown
Test 3D-DAOSTORM analysis with these parameters. Ideally it should find 120 localizations in the frame.Note that 3D-DAOSTORM will first check for existing analysis so you have to delete the old analysis after changing parameters.
###Code
if os.path.exists("testing.hdf5"):
os.remove("testing.hdf5")
mfit.analyze("test.tif", "testing.hdf5", "testing.xml")
overlay_image.overlayImage("test.tif", "testing.hdf5", 0)
###Output
_____no_output_____
###Markdown
Using VisualizerAn alternative way to visualize the results is to use the visualizer program. This will only work if you are running jupyter locally.
###Code
import inspect
import storm_analysis
vis_path = os.path.dirname(inspect.getfile(storm_analysis)) + "/visualizer/"
vis_cmd = vis_path + "/visualizer.py"
vis_dir = os.getcwd()
%run $vis_cmd $vis_dir
print(vis_path)
###Output
_____no_output_____
###Markdown
Analyzing the whole movie
###Code
# This tells 3D-DAOSTORM to analyze the whole movie.
daop.changeAttr("max_frame", -1)
daop.toXMLFile("final.xml")
# Delete any stale results.
if os.path.exists("final.hdf5"):
os.remove("final.hdf5")
# Run the analysis.
mfit.analyze("test.tif", "final.hdf5", "final.xml")
###Output
_____no_output_____
###Markdown
Creating an image from the analysis
###Code
import matplotlib
import matplotlib.pyplot as pyplot
import storm_analysis.sa_utilities.hdf5_to_image as h5_image
sr_im = h5_image.render2DImage("final.hdf5", scale = 1, sigma = 1)
fig = pyplot.figure(figsize = (8, 8))
ax = fig.add_subplot(1,1,1)
ax.imshow(sr_im)
ax.set_title("SR Image")
pyplot.show()
###Output
_____no_output_____
###Markdown
3D-DAOSTORM 2D / 2D fixed analysis.This notebook explains how to do 2D or 2D fixed analysis using 3D-DAOSTORM.* In 2D fixed fitting we constrain the Gaussian fitting function with a fixed $\sigma$ value.* In 2D fitting the Gaussian $\sigma$ can vary, but it is forced to be the same in X and Y. Configuring the directoryCreate an empty directory somewhere on your computer and tell Python to go to that directory.
###Code
import os
os.chdir("/home/hbabcock/Data/storm_analysis/jy_testing/")
print(os.getcwd())
###Output
_____no_output_____
###Markdown
Generate sample data for analysis.
###Code
import storm_analysis.jupyter_examples.dao3d_2d as dao3d_2d
dao3d_2d.configure()
###Output
_____no_output_____
###Markdown
Working with analysis parameters.In this example we'll only adjust `threshold` but other important parameters include `sigma`, `roi_size` and `find_max_radius`.
###Code
import storm_analysis.sa_library.parameters as params
daop = params.ParametersDAO().initFromFile("example.xml")
###Output
_____no_output_____
###Markdown
Getting help with a parameter:
###Code
print(daop.helpAttr("threshold"))
###Output
_____no_output_____
###Markdown
Changing or getting a parameter. Here we are setting `max_frame` to 1 so that 3D-DAOSTORM will only analyze the first frame.
###Code
daop.changeAttr("max_frame", 1)
print("max_frame is", daop.getAttr("max_frame"))
###Output
_____no_output_____
###Markdown
Print out all of the available parameters and their current values.
###Code
daop.prettyPrint()
###Output
_____no_output_____
###Markdown
Testing analysis parameters.
###Code
import os
import storm_analysis.jupyter_examples.overlay_image as overlay_image
import storm_analysis.daostorm_3d.mufit_analysis as mfit
# For this data-set, no localizations will be found if threshold is above 25.0
daop.changeAttr("threshold", 6.0)
daop.changeAttr("find_max_radius", 5) # original value is 5 (pixels)
daop.changeAttr("roi_size", 9) # original value is 9 (pixels)
daop.changeAttr("sigma", 1.5) # original value is 1.5 (pixels)
# Save the changed parameters.
#
# Using pretty = True will create a more human readable XML file. The default value is False.
#
daop.toXMLFile("testing.xml", pretty = True)
###Output
_____no_output_____
###Markdown
Test 3D-DAOSTORM analysis with these parameters. Ideally it should find 120 localizations in the frame.Note that 3D-DAOSTORM will first check for existing analysis so you have to delete the old analysis after changing parameters.
###Code
if os.path.exists("testing.hdf5"):
os.remove("testing.hdf5")
mfit.analyze("test.tif", "testing.hdf5", "testing.xml")
overlay_image.overlayImage("test.tif", "testing.hdf5", 0)
###Output
_____no_output_____
###Markdown
Using VisualizerAn alternative way to visualize the results is to use the visualizer program. This will only work if you are running jupyter locally.
###Code
import inspect
import storm_analysis
vis_path = os.path.dirname(inspect.getfile(storm_analysis)) + "/visualizer/"
vis_cmd = vis_path + "/visualizer.py"
vis_dir = os.getcwd()
%run $vis_cmd $vis_dir
print(vis_path)
###Output
_____no_output_____
###Markdown
Analyzing the whole movie
###Code
# This tells 3D-DAOSTORM to analyze the whole movie.
daop.changeAttr("max_frame", -1)
daop.toXMLFile("final.xml")
# Delete any stale results.
if os.path.exists("final.hdf5"):
os.remove("final.hdf5")
# Run the analysis.
mfit.analyze("test.tif", "final.hdf5", "final.xml")
###Output
_____no_output_____
###Markdown
Creating an image from the analysis
###Code
import matplotlib
import matplotlib.pyplot as pyplot
import storm_analysis.sa_utilities.hdf5_to_image as h5_image
sr_im = h5_image.render2DImage("final.hdf5", scale = 1, sigma = 1)
fig = pyplot.figure(figsize = (8, 8))
ax = fig.add_subplot(1,1,1)
ax.imshow(sr_im)
ax.set_title("SR Image")
pyplot.show()
###Output
_____no_output_____
|
FNN.ipynb
|
###Markdown
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
from torch.autograd import Variable
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
train_dataloader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=100,
shuffle=True)
test_dataloader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=100,
shuffle=False)
# Parameters: Input_dim, Hidden_dim, Output_dim
class FNN(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super(FNN, self).__init__()
#Linear Function
self.fc1 = nn.Linear(input_dim, hidden_dim)
#non-Linear Function
self.aFun1 = nn.ReLU()
#Linear Function
self.fc2 = nn.Linear(hidden_dim, hidden_dim)
#non-Linear Function
self.aFun2 = nn.ReLU()
#Linear Funcation
self.fc3 = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
# Linear function
out= self.fc1(x)
# Non - linearity
out= self.aFun1(out)
# Linear function
out= self.fc2(out)
# Non - linearity
out= self.aFun2(out)
# Linear function
out= self.fc3(out)
return out
# Instantiate Model Class
input_dim = 28*28
output_dim = 10
# Number of Neurons and Number of activation functions
hidden_dim = 100
model= FNN(input_dim, hidden_dim, output_dim)
# Instantiate Loss function
cert = nn.CrossEntropyLoss()
# Instantiate Optimizer
Optimaztion = torch.optim.SGD(model.parameters(), lr=0.01)
# Model training loop
itr = 0;
epoch = 5;
for epoch in range(epoch):
for i, (images, lables) in enumerate(train_dataloader):
#Input/Lable --> Variable
inputs = Variable(images.view(-1, 28*28))
lables = Variable(lables)
#Clear gradient
Optimaztion.zero_grad()
outputs = model(inputs)
loss = cert(outputs, lables)
loss.backward()
Optimaztion.step()
itr+=1
if itr % 500 == 0 :
total = 0;
correct = 0;
for images, lables in test_dataloader:
inputs = Variable(images.view(-1, 28*28))
outputs= model(inputs)
_, pred = torch.max(outputs.data, 1)
total += lables.size(0)
correct += (pred == lables).sum()
accuracy = 100 * correct / total
print(loss.data, accuracy)
# test data
###Output
_____no_output_____
|
mnist_on_gpu.ipynb
|
###Markdown
TensorFlow-GPU model for MNIST dataset
###Code
# Version 3
#
# From the beginning, install Tensorflow-GPU by
# https://www.thehardwareguy.co.uk/install-tensorflow-gpu
# and look to this comment below article:
# "In tensorflow = 2.1.0, it is not needed to install keras separately
# (pip install keras not needed) and it will be installed (keras =2.2.4-tf)
# as a dependency of tensorflow-gpu with "conda install" itself.
# it's work for me in Win10
# GOOD LUCK :)
# Imports
import os
import tensorflow as tf
# uncomment next 2 lines to start on GPU
# physical_devices = tf.config.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
import pandas as pd
import matplotlib.pyplot as plt
class Data:
def __init__(self):
mnist = tf.keras.datasets.mnist
(self.x_train, self.y_train),(self.x_test, self.y_test) = mnist.load_data(path="mnist.npz")
# normalization
self.x_train = self.x_train / 255.0
self.x_test = self.x_test / 255.0
class Model:
def __init__(self, hidden_layer, learning_rate):
self.name = "mnist"
self.save_folder = os.path.join(os.getcwd(),
"training_mnist_gpu")
if not os.path.exists(self.save_folder):
os.mkdir(self.save_folder)
self.checkpoint_path = os.path.join(self.save_folder,
"cp.ckpt")
self.model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(hidden_layer,
activation='relu'),
tf.keras.layers.Dropout(learning_rate),
tf.keras.layers.Dense(10,
activation='softmax')
])
self.model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
def train(self, data, epochs):
checkpoint_dir = os.path.dirname(self.checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=self.checkpoint_path,
save_weights_only=True,
save_freq=10*len(data.x_train),
verbose=1)
self.model.fit(data.x_train,
data.y_train,
epochs=epochs,
validation_data=(data.x_train, data.y_train),
callbacks=[cp_callback])
return self.model.evaluate(x=data.x_test,
y=data.y_test,
verbose=1)
hidden_layer=128
# Best values (acc=0.9828)
# ep_list = [15]
# lr_list = [0.15]
# Short exploratory
# ep_list = [3, 5, 7, 10, 12, 15]
# lr_list = [0.15, 0.1, 0.05, 0.01]
# Exploratory values - uncomment to train on various parameters
lr_list = [0.2, 0.15, 0.125, 0.1, 0.075, 0.05, 0.025]
ep_list = range(6, 22, 3)
data = Data()
# Visual test for test_data
plt.figure(figsize=(20, 10))
for i in range(50):
plt.subplot(5,10,i+1)
plt.xticks([])
plt.yticks([])
plt.imshow(data.x_test[i], cmap=plt.cm.binary)
plt.xlabel(data.y_test[i], size=20)
plt.grid(False)
plt.show()
results = list()
best_acc = 0
cycle = 1
all_cycles = len(lr_list) * len(ep_list)
for lr in lr_list:
for ep in ep_list:
print(f"\n--- cycle {cycle} of {all_cycles} ---")
print(f"learning_rate={lr}, num_ep={ep}")
M = Model(hidden_layer, lr)
loss, acc = M.train(data, ep)
# save best
if acc > best_acc:
params = f"hl{hidden_layer}_lr{int(lr*10000)}_ep{ep}"
model_filename = f"{M.name}_{int(acc*10000)}({params}).h5"
M.model.save(os.path.join(M.save_folder, model_filename))
print(f"model saved to {model_filename}")
best_acc = acc
results.append({'n': cycle, 'lr': lr, 'ep': ep, 'loss': loss, 'acc': acc})
cycle += 1
# Show me best 5 in table
results_df = pd.DataFrame(columns=['n', 'lr', 'ep', 'loss', 'acc'])
results_df = results_df.append(results, ignore_index=True).sort_values('acc', ascending=False)
print(results_df.head())
# Show me graph
g = results_df[['ep', 'lr', 'acc']]
fig, ax = plt.subplots(figsize=(20,10))
scatter = ax.scatter(x='lr', y='acc',
data=g,
s=list(map(lambda a: a*a, g['ep'])),
c='ep',
alpha=1,
cmap="Spectral")
plt.ylabel('accuracy')
plt.xlabel('learning rate')
ax.legend(*scatter.legend_elements(),
loc="upper left",
title="Epoch's num:")
ax.grid(True)
plt.show()
###Output
_____no_output_____
|
source/02_GLM_Linear_Regression/Class.ipynb
|
###Markdown
Today:* Supervised Learning* Linear Regression * Model * Cost Function * Optimization * Gradient Descent * Feature Scaling* Polynomial Regression * Model * Overfitting and Underfitting * Regularization Resources:* Supervised Learning: https://mcerovic.github.io/notes/SupervisedLearning/index.html* Linear Regression: https://mcerovic.github.io/notes/LinearRegression/index.html* Gradient Descent: https://mcerovic.github.io/notes/GradientDescent/index.html* Feature Scaling: http://sebastianraschka.com/Articles/2014_about_feature_scaling.html#about-standardization Linear regression
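As a quick reminder of what the code below implements (the notation here is ours, not from the linked notes): a linear hypothesis, a mean-squared-error cost, gradient-descent updates with learning rate $\alpha$, and standardized inputs.

$$\hat{y} = w_1 x + w_0, \qquad J(w_0, w_1) = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i\big)^2, \qquad w_j \leftarrow w_j - \alpha\,\frac{\partial J}{\partial w_j}, \qquad x \leftarrow \frac{x - \mu_x}{\sigma_x}.$$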
###Code
# Import necessary libraries
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
# Load dataset as numpy array
y, x = np.loadtxt('../../data/02_LinearRegression/house_price.csv', delimiter=',', unpack=True)
n_samples = len(x)
# Normalize data
x = (x - np.mean(x)) / np.std(x)
y = (y - np.mean(y)) / np.std(y)
print(x[0])
# Graphical preview
fig, ax = plt.subplots()
ax.set_xlabel('Size')
ax.set_ylabel('Price')
ax.scatter(x, y, edgecolors='k', label='Real house price')
ax.grid(True, color='gray', linestyle='dashed')
###Output
_____no_output_____
###Markdown
Model
###Code
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
w1 = tf.Variable(0.0, name='w1')
w0 = tf.Variable(0.0, name='w0')
Y_predicted = tf.add(tf.multiply(X, w1), w0)
###Output
_____no_output_____
###Markdown
Cost function
###Code
cost = tf.reduce_mean(tf.square(Y - Y_predicted), name='cost')
###Output
_____no_output_____
###Markdown
Optimization
###Code
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(cost)
###Output
_____no_output_____
###Markdown
Train
###Code
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(1000):
total_cost = 0
for sample in range(n_samples):
_, l = sess.run([optimizer, cost], feed_dict={X: x[sample], Y: y[sample]})
            total_cost += l  # accumulate the cost over all samples in this epoch
print('Epoch {0}: {1}'.format(i, total_cost))
w, b = sess.run([w1, w0])
# Append hypothesis that we found on the plot
ax.plot(x, x * w + b, color='r', label='Predicted house price')
ax.legend()
fig
# Predict at point 0.5
print(0.5 * w + b)
###Output
0.4344252091832459
|
notebooks/GTO/NRC_GTO_YSO_direct_imaging.ipynb
|
###Markdown
Define Sources and their Reference PSF Stars
###Code
# Various Bandpasses
bp_v = S.ObsBandpass('v')
bp_k = pynrc.bp_2mass('k')
bp_w1 = pynrc.bp_wise('w1')
bp_w2 = pynrc.bp_wise('w2')
# source, dist, age, sptype, vmag kmag W1 W2
args_sources = [('SAO 206462', 135, 10, 'F8V', 8.7, 5.8, 5.0, 4.0),
('TW Hya', 60, 10, 'M0V', 11.0, 7.3, 7.0, 6.9),
('MWC 758', 160, 5, 'A5V', 8.3, 5.7, 4.6, 3.5), # Lazareff et al. (2016)
('HL Tau', 140, 1, 'K5V', 15.1, 7.4, 5.2, 3.3),
('PDS 70', 113, 10, 'K7IV', 12.2, 8.8, 8.0, 7.7)]
# Corresponding reference stars
ref_sources = [('HD 94771', 'G4V', 5.6),
('HD 94771', 'G4V', 5.6),
('HR 1889', 'F5III', 5.4),
('HR 1889', 'F5III', 5.4),
('HR 1889', 'F5III', 5.4)]
# Directory housing VOTables
# http://vizier.u-strasbg.fr/vizier/sed/
votdir = 'votables/'
# Directory to save plots and figures
outdir = 'YSOs/'
# List of filters
args_filter = [('F182M', 'MASK335R', 'CIRCLYOT'),
('F210M', 'MASK335R', 'CIRCLYOT'),
('F250M', 'MASK335R', 'CIRCLYOT'),
('F300M', 'MASK335R', 'CIRCLYOT'),
('F335M', 'MASK335R', 'CIRCLYOT'),
('F444W', 'MASK335R', 'CIRCLYOT')]
#args_filter = [('F335M', 'MASK335R', 'CIRCLYOT'),
# ('F444W', 'MASK335R', 'CIRCLYOT')]
filt_keys = []
for filt,mask,pupil in args_filter:
filt_keys.append(make_key(filt, mask=mask, pupil=pupil))
subsize = 320
# List of filters
args_filter = [('F187N', None, None),
('F200W', None, None),
('F356W', None, None),
('F444W', None, None)]
#args_filter = [('F300M', None, None),
# ('F356W', None, None),
# ('F356W', 'MASK430R', 'CIRCLYOT'),
# ('F430M', None, None),
# ('F444W', None, None),
# ('F444W', 'MASK430R', 'CIRCLYOT')]
filt_keys = []
for filt,mask,pupil in args_filter:
filt_keys.append(make_key(filt, mask=mask, pupil=pupil))
subsize = 400
###Output
_____no_output_____
###Markdown
SAO 206462
###Code
# Fit spectrum to SED photometry
i=0
name_sci, dist_sci, age_sci, spt_sci, vmag_sci, kmag_sci, w1_sci, w2_sci = args_sources[i]
vot = votdir + name_sci.replace(' ' ,'') + '.vot'
mag_sci, bp_sci = vmag_sci, bp_v
args = (name_sci, spt_sci, mag_sci, bp_sci, vot)
src = source_spectrum(*args)
src.fit_SED(use_err=False, robust=False, wlim=[0.5,10], IR_excess=True)
# Final source spectrum
sp_sci = src.sp_model
# Do the same for the reference source
name_ref, spt_ref, kmag_ref = ref_sources[i]
vot = votdir + name_ref.replace(' ' ,'') + '.vot'
mag_ref, bp_ref = kmag_ref, bp_k
args = (name_ref, spt_ref, mag_ref, bp_ref, vot)
ref = nrc_utils.source_spectrum(*args)
ref.fit_SED(use_err=True, robust=True)
# Final reference spectrum
sp_ref = ref.sp_model
# Plot spectra
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
src.plot_SED(ax=axes[0], xr=[0.5,30])
ref.plot_SED(ax=axes[1], xr=[0.5,30])
axes[0].set_title('Science Spectra -- {} ({})'.format(src.name, spt_sci))
axes[1].set_title('Reference Spectra -- {} ({})'.format(ref.name, spt_ref))
#for ax in axes:
# ax.set_xscale('linear')
# ax.xaxis.set_minor_locator(AutoMinorLocator())
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
fig.savefig(outdir+'{}_SEDs.pdf'.format(name_sci.replace(' ','')))
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
bp = pynrc.read_filter(*args_filter[-1])
for sp in [sp_sci, sp_ref]:
w = sp.wave / 1e4
o = S.Observation(sp, bp, binset=bp.wave)
sp.convert('Jy')
f = sp.flux / o.effstim('Jy')
ind = (w>=xr[0]) & (w<=xr[1])
ax.plot(w[ind], f[ind], lw=1, label=sp.name)
ax.set_ylabel('Flux (Jy) normalized over bandpass')
sp.convert('flam')
ax.set_xlim(xr)
ax.set_ylim([0,ax.get_ylim()[1]])
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('{} Spectra'.format(sp_sci.name))
# Overplot Filter Bandpass
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
fig.savefig(outdir+'{}_2SEDs.pdf'.format(name_sci.replace(' ','')))
# Disk model information
# File name, arcsec/pix, dist (pc), wavelength (um), flux units
args_disk = ('example_disk.fits', 0.007, 140.0, 1.6, 'mJy/arcsec^2')
# Create a dictionary that holds the obs_coronagraphy class for each filter
wfe_drift = 0
obs_dict = obs_wfe(wfe_drift, args_filter, sp_sci, dist_sci, sp_ref=sp_ref, args_disk=args_disk,
wind_mode='WINDOW', subsize=subsize, verbose=False)
# # Generate initial observations for each filter(no WFE drift)
# def do_init(args_disk=None, fov_pix=None, verbose=True):
# wfe_ref_drift = 0
# obs_dict = obs_wfe(wfe_ref_drift, args_list, dist_sci, sp_ref=sp_ref,
# args_disk=args_disk, fov_pix=fov_pix, verbose=verbose)
# return obs_dict
# obs_dict = do_init(args_disk=args_disk, fov_pix=401, verbose=False)
# Update detector readout
for key in filt_keys:
obs = obs_dict[key]
if 'none' in key:
pattern, ng, nint_sci, nint_ref = ('RAPID',5,150,150)
elif ('MASK210R' in key) or ('MASKSWB' in key):
pattern, ng, nint_sci, nint_ref = ('BRIGHT2',10,20,20)
else:
pattern, ng, nint_sci, nint_ref = ('MEDIUM8',10,15,15)
obs.update_detectors(read_mode=pattern, ngroup=ng, nint=nint_sci)
obs.nrc_ref.update_detectors(read_mode=pattern, ngroup=ng, nint=nint_ref)
#print(key)
#print(obs.multiaccum_times)
#_ = obs.sensitivity(nsig=5, units='vegamag', verbose=True)
#print('')
###Output
_____no_output_____
###Markdown
Saturation
###Code
# Max Saturation Values
dmax = []
for k in filt_keys:
print('\n{}'.format(k))
obs = obs_dict[k]
dsat_asec = do_sat_levels(obs, satval=0.9, plot=False)
dmax.append(dsat_asec)
###Output
F187N_none_none
SAO 206462
29 saturated pixel at NGROUP=2; Max Well: 23.24
56 saturated pixel at NGROUP=5; Max Well: 58.10
Sat Dist NG=2: 0.13 arcsec
HD 94771
48 saturated pixel at NGROUP=2; Max Well: 39.94
72 saturated pixel at NGROUP=5; Max Well: 99.85
F200W_none_none
SAO 206462
237 saturated pixel at NGROUP=2; Max Well: 443.17
484 saturated pixel at NGROUP=5; Max Well: 1107.91
Sat Dist NG=2: 0.62 arcsec
HD 94771
303 saturated pixel at NGROUP=2; Max Well: 700.83
707 saturated pixel at NGROUP=5; Max Well: 1752.08
F356W_none_none
SAO 206462
253 saturated pixel at NGROUP=2; Max Well: 1010.22
533 saturated pixel at NGROUP=5; Max Well: 2525.56
Sat Dist NG=2: 1.11 arcsec
HD 94771
153 saturated pixel at NGROUP=2; Max Well: 465.12
269 saturated pixel at NGROUP=5; Max Well: 1162.81
F444W_none_none
SAO 206462
298 saturated pixel at NGROUP=2; Max Well: 731.09
529 saturated pixel at NGROUP=5; Max Well: 1827.73
Sat Dist NG=2: 0.92 arcsec
HD 94771
111 saturated pixel at NGROUP=2; Max Well: 218.49
247 saturated pixel at NGROUP=5; Max Well: 546.22
###Markdown
Photon Limit Curves
###Code
nsig = 5
roll = 10
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [0]
curves_photon = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
###Output
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
###Markdown
Reference Reconstruction Curves
###Code
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [10]
curves_recon10 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
wfe_list = [5]
curves_recon5 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
wfe_list = [2]
curves_recon2 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
###Output
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
###Markdown
Basic PSF Subtraction Curves
###Code
for k in filt_keys:
obs_dict[k].sp_ref = sp_ref
wfe_list = [5]
curves_basic = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=True)
###Output
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
###Markdown
Roll Subtraction
###Code
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [0]
curves_photon2 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
nsig = 5
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [10]
curves_noref10 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
wfe_list = [5]
curves_noref5 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
wfe_list = [2]
curves_noref2 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
import matplotlib.patches as patches
from pynrc.obs_nircam import plot_planet_patches
fig, axes = plt.subplots(2,2, figsize=(13,8))
xlim1 = [0,5]
xlim2 = [0,10]
ylim = [25,8]
curves_all = [curves_basic, curves_recon10, curves_recon5, curves_recon2, curves_photon]
labels = ['Basic Ref Sub', 'Opt Sub (10 nm)', 'Opt Sub (5 nm)',
'Opt Sub (2 nm)', 'Opt Sub (0 nm)']
lin_vals = np.linspace(0.2,0.7,len(curves_all))
cb = plt.cm.Blues_r(lin_vals)[::-1]
curves_all2 = [curves_noref10, curves_noref5, curves_noref2, curves_photon2]
labels2 = ['Roll Sub (10 nm)', 'Roll Sub (5 nm)',
'Roll Sub (2 nm)', 'Roll Sub (0 nm)']
lin_vals2 = np.linspace(0.2,0.7,len(curves_all2))
cr = plt.cm.Reds_r(lin_vals2)[::-1]
axes = axes.flatten()
for j, k in enumerate(filt_keys):
for jj, cv in enumerate(curves_all):
curves = cv[k]
rr, contrast, mag_sens = curves[0]
axes[j].plot(rr, mag_sens, color=cb[jj], zorder=1, lw=2, label=labels[jj])
for jj, cv in enumerate(curves_all2):
curves = cv[k]
rr, contrast, mag_sens = curves[0]
axes[j].plot(rr, mag_sens, color=cr[jj], zorder=1, lw=2, label=labels2[jj])
for j, ax in enumerate(axes):
ax.set_xlabel('Distance (arcsec)')
ax.set_ylabel('{}-sigma Sensitivities (mag)'.format(nsig))
if j<2: ax.set_xlim(xlim1)
else: ax.set_xlim(xlim2)
ax.set_ylim(ylim)
obs = obs_dict[filt_keys[j]]
plot_planet_patches(ax, obs, age=age_sci, update_title=True)
ax.legend(ncol=3, loc=1, fontsize=8)
# Saturation levels
for j, ax in enumerate(axes):
dy = ylim[1] - ylim[0]
rect = patches.Rectangle((0, ylim[0]), dmax[j], dy, alpha=0.2,
color='k', zorder=2)
ax.add_patch(rect)
fig.tight_layout()
dist = obs.distance
age_str = 'Age = {:.0f} Myr'.format(age_sci)
dist_str = 'Distance = {:.1f} pc'.format(dist) if dist is not None else ''
fig.suptitle('{} ({}, {})'.format(name_sci,age_str,dist_str), fontsize=16);
#fig.subplots_adjust(top=0.85)
fig.subplots_adjust(top=0.9)
fname = "{}_contrast2_{}.pdf".format(name_sci.replace(" ", ""), obs.mask)
fig.savefig(outdir+fname)
key_F444W = filt_keys[-1]
curves_roll_F444W = [curves_photon2[key_F444W][0], curves_noref2[key_F444W][0],
curves_noref5[key_F444W][0], curves_noref10[key_F444W][0]]
wfe_list = [0,2,5,10]
sat_rad = dmax[-1]
obs = obs_dict[filt_keys[-1]]
age = age_sci
do_plot_contrasts(None, curves_roll_F444W, nsig, wfe_list, obs, age, sat_rad=sat_rad,
yr=[24,8], save_fig=True, outdir=outdir)
###Output
_____no_output_____
###Markdown
PDS 70
###Code
# Fit spectrum to SED photometry
i=4
name_sci, dist_sci, age_sci, spt_sci, vmag_sci, kmag_sci, w1_sci, w2_sci = args_sources[i]
vot = votdir + name_sci.replace(' ' ,'') + '.vot'
mag_sci, bp_sci = vmag_sci, bp_v
args = (name_sci, spt_sci, mag_sci, bp_sci, vot)
src = source_spectrum(*args)
src.fit_SED(use_err=False, robust=False, wlim=[1,10], IR_excess=True)
# Final source spectrum
sp_sci = src.sp_model
# Do the same for the reference source
name_ref, spt_ref, kmag_ref = ref_sources[i]
vot = votdir + name_ref.replace(' ' ,'') + '.vot'
mag_ref, bp_ref = kmag_ref, bp_k
args = (name_ref, spt_ref, mag_ref, bp_ref, vot)
ref = nrc_utils.source_spectrum(*args)
ref.fit_SED(use_err=False, robust=False, wlim=[2,20])
# Final reference spectrum
sp_ref = ref.sp_model
# Plot spectra
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
src.plot_SED(ax=axes[0], xr=[0.5,30])
ref.plot_SED(ax=axes[1], xr=[0.5,30])
axes[0].set_title('Science Spectra -- {} ({})'.format(src.name, spt_sci))
axes[1].set_title('Reference Spectra -- {} ({})'.format(ref.name, spt_ref))
#for ax in axes:
# ax.set_xscale('linear')
# ax.xaxis.set_minor_locator(AutoMinorLocator())
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
fig.savefig(outdir+'{}_SEDs.pdf'.format(name_sci.replace(' ','')))
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
bp = pynrc.read_filter(*args_filter[-1])
for sp in [sp_sci, sp_ref]:
w = sp.wave / 1e4
o = S.Observation(sp, bp, binset=bp.wave)
sp.convert('Jy')
f = sp.flux / o.effstim('Jy')
ind = (w>=xr[0]) & (w<=xr[1])
ax.plot(w[ind], f[ind], lw=1, label=sp.name)
ax.set_ylabel('Flux (Jy) normalized over bandpass')
sp.convert('flam')
ax.set_xlim(xr)
ax.set_ylim([0,ax.get_ylim()[1]])
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('{} Spectra'.format(sp_sci.name))
# Overplot Filter Bandpass
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
fig.savefig(outdir+'{}_2SEDs.pdf'.format(name_sci.replace(' ','')))
# Disk model information
# File name, arcsec/pix, dist (pc), wavelength (um), flux units
args_disk = ('example_disk.fits', 0.007, 140.0, 1.6, 'mJy/arcsec^2')
# Create a dictionary that holds the obs_coronagraphy class for each filter
wfe_drift = 0
obs_dict = obs_wfe(wfe_drift, args_filter, sp_sci, dist_sci, sp_ref=sp_ref, args_disk=args_disk,
wind_mode='WINDOW', subsize=subsize, verbose=False)
# # Generate initial observations for each filter(no WFE drift)
# def do_init(args_disk=None, fov_pix=None, verbose=True):
# wfe_ref_drift = 0
# obs_dict = obs_wfe(wfe_ref_drift, args_list, dist_sci, sp_ref=sp_ref,
# args_disk=args_disk, fov_pix=fov_pix, verbose=verbose)
# return obs_dict
# obs_dict = do_init(args_disk=args_disk, fov_pix=401, verbose=False)
# Update detector readout
for key in filt_keys:
obs = obs_dict[key]
if 'none' in key:
pattern, ng, nint_sci, nint_ref = ('RAPID',5,150,150)
elif ('MASK210R' in key) or ('MASKSWB' in key):
pattern, ng, nint_sci, nint_ref = ('BRIGHT2',10,20,20)
else:
pattern, ng, nint_sci, nint_ref = ('MEDIUM8',10,15,15)
obs.update_detectors(read_mode=pattern, ngroup=ng, nint=nint_sci)
obs.nrc_ref.update_detectors(read_mode=pattern, ngroup=ng, nint=nint_ref)
#print(key)
#print(obs.multiaccum_times)
#_ = obs.sensitivity(nsig=5, units='vegamag', verbose=True)
#print('')
###Output
_____no_output_____
###Markdown
Saturation
###Code
# Max Saturation Values
dmax = []
for k in filt_keys:
print('\n{}'.format(k))
obs = obs_dict[k]
dsat_asec = do_sat_levels(obs, satval=0.9, plot=False)
dmax.append(dsat_asec)
###Output
F187N_none_none
PDS 70
5 saturated pixel at NGROUP=2; Max Well: 2.56
9 saturated pixel at NGROUP=5; Max Well: 6.41
Sat Dist NG=2: 0.05 arcsec
HR 1889
50 saturated pixel at NGROUP=2; Max Well: 52.58
82 saturated pixel at NGROUP=5; Max Well: 131.46
F200W_none_none
PDS 70
65 saturated pixel at NGROUP=2; Max Well: 45.19
110 saturated pixel at NGROUP=5; Max Well: 112.97
Sat Dist NG=2: 0.16 arcsec
HR 1889
390 saturated pixel at NGROUP=2; Max Well: 933.86
900 saturated pixel at NGROUP=5; Max Well: 2334.65
F356W_none_none
PDS 70
45 saturated pixel at NGROUP=2; Max Well: 46.63
72 saturated pixel at NGROUP=5; Max Well: 116.57
Sat Dist NG=2: 0.27 arcsec
HR 1889
183 saturated pixel at NGROUP=2; Max Well: 610.59
325 saturated pixel at NGROUP=5; Max Well: 1526.48
F444W_none_none
PDS 70
50 saturated pixel at NGROUP=2; Max Well: 30.20
81 saturated pixel at NGROUP=5; Max Well: 75.51
Sat Dist NG=2: 0.32 arcsec
HR 1889
143 saturated pixel at NGROUP=2; Max Well: 291.36
292 saturated pixel at NGROUP=5; Max Well: 728.41
###Markdown
Photon Limit Curves
###Code
nsig = 5
roll = 10
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [0]
curves_photon = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
###Output
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
###Markdown
Reference Reconstruction Curves
###Code
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [10]
curves_recon10 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
wfe_list = [5]
curves_recon5 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
wfe_list = [2]
curves_recon2 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=False)
###Output
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
###Markdown
Basic PSF Subtraction Curves
###Code
for k in filt_keys:
obs_dict[k].sp_ref = sp_ref
wfe_list = [5]
curves_basic = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, roll_angle=roll, opt_diff=True)
###Output
F187N_none_none
F200W_none_none
F356W_none_none
F444W_none_none
###Markdown
Roll Subtraction
###Code
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [0]
curves_photon2 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
nsig = 5
for k in filt_keys:
obs_dict[k].sp_ref = sp_sci
wfe_list = [10]
curves_noref10 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
wfe_list = [5]
curves_noref5 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
wfe_list = [2]
curves_noref2 = do_contrast(obs_dict, wfe_list, filt_keys, nsig=nsig, no_ref=True, roll_angle=roll)
import matplotlib.patches as patches
from pynrc.obs_nircam import plot_planet_patches
fig, axes = plt.subplots(2,2, figsize=(13,8))
xlim1 = [0,5]
xlim2 = [0,10]
ylim = [25,8]
curves_all = [curves_basic, curves_recon10, curves_recon5, curves_recon2, curves_photon]
labels = ['Basic Ref Sub', 'Opt Sub (10 nm)', 'Opt Sub (5 nm)',
'Opt Sub (2 nm)', 'Opt Sub (0 nm)']
lin_vals = np.linspace(0.2,0.7,len(curves_all))
cb = plt.cm.Blues_r(lin_vals)[::-1]
curves_all2 = [curves_noref10, curves_noref5, curves_noref2, curves_photon2]
labels2 = ['Roll Sub (10 nm)', 'Roll Sub (5 nm)',
'Roll Sub (2 nm)', 'Roll Sub (0 nm)']
lin_vals2 = np.linspace(0.2,0.7,len(curves_all2))
cr = plt.cm.Reds_r(lin_vals2)[::-1]
axes = axes.flatten()
for j, k in enumerate(filt_keys):
for jj, cv in enumerate(curves_all):
curves = cv[k]
rr, contrast, mag_sens = curves[0]
axes[j].plot(rr, mag_sens, color=cb[jj], zorder=1, lw=2, label=labels[jj])
for jj, cv in enumerate(curves_all2):
curves = cv[k]
rr, contrast, mag_sens = curves[0]
axes[j].plot(rr, mag_sens, color=cr[jj], zorder=1, lw=2, label=labels2[jj])
for j, ax in enumerate(axes):
ax.set_xlabel('Distance (arcsec)')
ax.set_ylabel('{}-sigma Sensitivities (mag)'.format(nsig))
if j<2: ax.set_xlim(xlim1)
else: ax.set_xlim(xlim2)
ax.set_ylim(ylim)
obs = obs_dict[filt_keys[j]]
plot_planet_patches(ax, obs, age=age_sci, update_title=True)
ax.legend(ncol=3, loc=1, fontsize=8)
# Saturation levels
for j, ax in enumerate(axes):
dy = ylim[1] - ylim[0]
rect = patches.Rectangle((0, ylim[0]), dmax[j], dy, alpha=0.2,
color='k', zorder=2)
ax.add_patch(rect)
fig.tight_layout()
dist = obs.distance
age_str = 'Age = {:.0f} Myr'.format(age_sci)
dist_str = 'Distance = {:.1f} pc'.format(dist) if dist is not None else ''
fig.suptitle('{} ({}, {})'.format(name_sci,age_str,dist_str), fontsize=16);
#fig.subplots_adjust(top=0.85)
fig.subplots_adjust(top=0.9)
fname = "{}_contrast2_{}.pdf".format(name_sci.replace(" ", ""), obs.mask)
fig.savefig(outdir+fname)
key_F444W = filt_keys[-1]
curves_roll_F444W = [curves_photon2[key_F444W][0], curves_noref2[key_F444W][0],
curves_noref5[key_F444W][0], curves_noref10[key_F444W][0]]
wfe_list = [0,2,5,10]
sat_rad = dmax[-1]
obs = obs_dict[filt_keys[-1]]
age = age_sci
do_plot_contrasts(None, curves_roll_F444W, nsig, wfe_list, obs, age, sat_rad=sat_rad,
yr=[24,8], save_fig=True, outdir=outdir)
###Output
_____no_output_____
|
SEGUNDO PROYECTO.ipynb
|
###Markdown
PART A - Data Transformation Choose which of the following tasks are appropriate for your dataset and implement the transformations you selected. It is important to justify why you apply them: 1 - Outlier detection and removal; 2 - Encoding; 3 - Imputation of missing values; 4 - Data scaling; 5 - Generation of new predictor variables / dimensionality reduction (SVD/PCA). Retrain the model implemented in Delivery 01 - in particular, the decision tree - on this new, transformed dataset. Evaluate its performance on the dataset obtained after transforming the data. Is there an improvement in performance? Compare with the performance obtained in Project 01. Whatever the answer is, try to explain what causes it.
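Task 5 (dimensionality reduction) is not applied in the cells below; as a reference only, here is a minimal sketch of how PCA could be used on an already-scaled feature matrix (the variable X_scaled is a hypothetical placeholder, not defined elsewhere in this notebook):
###Code
# Illustrative sketch only (not part of the pipeline below): PCA on a scaled feature matrix.
# X_scaled is a hypothetical placeholder standing in for the standardized numeric predictors.
import numpy as np
from sklearn.decomposition import PCA

X_scaled = np.random.rand(100, 5)   # stand-in for the scaled feature matrix
pca = PCA(n_components=0.95)        # keep enough components to explain 95% of the variance
X_reduced = pca.fit_transform(X_scaled)
print(X_reduced.shape, pca.explained_variance_ratio_)
###Output
_____no_output_____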
###Code
# Import libraries:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
import pandas_profiling
from sklearn import preprocessing
from sklearn import model_selection, metrics
import missingno as msno
from sklearn.svm import SVR
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, RepeatedKFold
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
import xgboost as xgb
import multiprocessing
import sys
# First, load the dataset
df=pd.read_csv("DS_Proyecto_01_Datos_Properati.csv")
df.head()
# FILTER MY DATASET TO CAPITAL FEDERAL AND THE PROPERTY TYPES CASA, DEPARTAMENTO AND PH
tipo_propiedad= df.loc[ (df.property_type=="PH")|(df.property_type=="Departamento")|(df.property_type=="Casa") ]
tipo_region=tipo_propiedad.loc[(tipo_propiedad.l2=="Capital Federal")]
columnas=tipo_region.iloc[:, [8,9,10,11,12,13,17]]
columnas.head()
#columnas.shape
###Output
_____no_output_____
###Markdown
MISSING VALUES
###Code
msno.heatmap(columnas,figsize=(6,4))
msno.matrix(columnas,figsize=(8,4))
# ANALYZE THE MISSING VALUES
columnas.isnull().sum()
# ANALYZE THE MOST FREQUENT NUMBER OF BATHROOMS (MODE) PER PROPERTY TYPE
mean_bathrooms=columnas[["property_type","bathrooms"]].groupby("property_type").agg(pd.Series.mode)
mean_bathrooms.to_dict()
columnas=columnas.set_index("property_type")
columnas.bathrooms.fillna(mean_bathrooms.to_dict()["bathrooms"],inplace=True)
columnas.reset_index(inplace=True)
columnas.shape
columnas.surface_total.fillna(columnas.surface_total.median(), inplace = True)
columnas.surface_covered.fillna(columnas.surface_covered.median(), inplace = True)
# CHECK THAT THERE ARE NO MISSING VALUES LEFT
columnas.isnull().sum()
###Output
_____no_output_____
###Markdown
OUTLIER DETECTION
###Code
columnas.describe()
columnas.describe()
columnas.shape
# ANOTHER WAY TO OBTAIN THE OUTLIER BOUNDS FOR EACH VARIABLE, WITHOUT HAVING TO DRAW THE PLOTS
# q25,q75 = np.percentile(columnas.surface_covered.values, [25,75])
# iqr = q75 - q25
# minimo = q25 - 1.5*iqr
# maximo = q75 + 1.5*iqr
# print(q25,q75,iqr, minimo, maximo)
# This plot shows the number of rooms per property type
sns.boxplot(data=columnas, x="property_type", y="rooms")
plt.title("RELACIÓN ENTRE TIPO DE PROPIEDAD Y ESPACIOS")
plt.xlabel("TIPOS DE PROPIEDAD")
plt.ylabel("ESPACIOS")
# This plot shows the number of bedrooms per property type
sns.boxplot(data=columnas, x="property_type", y="bedrooms")
plt.title("RELACIÓN ENTRE TIPO DE PROPIEDAD Y CUARTOS")
plt.xlabel("TIPOS DE PROPIEDAD")
plt.ylabel("HABITACIONES")
# Now we analyze the bathrooms variable to determine whether or not there are outliers
sns.boxplot(data=columnas, x="property_type", y="bathrooms")
plt.title("RELACIÓN ENTRE TIPO DE PROPIEDAD Y BAÑOS")
plt.xlabel("TIPOS DE PROPIEDAD")
plt.ylabel("BAÑOS")
# Now we analyze the surface_total variable to determine whether or not there are outliers
sns.boxplot(data=columnas, x="property_type", y="surface_total")
plt.title("RELACIÓN ENTRE TIPO DE PROPIEDAD Y SUPERFICIE TOTAL")
plt.xlabel("TIPOS DE PROPIEDAD")
plt.ylabel("SUPERFICIE TOTAL")
# Finally, we analyze the surface_covered variable to determine whether or not there are outliers
sns.boxplot(data=columnas, x="property_type", y="surface_covered")
plt.title("RELACIÓN ENTRE TIPO DE PROPIEDAD Y SUPERFICIE CUBIERTA")
plt.xlabel("TIPOS DE PROPIEDAD")
plt.ylabel("SUPERFICIE CUBIERTA")
###Output
_____no_output_____
###Markdown
Based on these plots, we will now filter the variables
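The hard-coded cut-offs in the next cell were read off the boxplots above; an alternative, consistent with the commented-out IQR snippet earlier, is to compute the upper whisker Q3 + 1.5*IQR for every numeric column (this sketch assumes the `columnas` DataFrame built in the previous cells):
###Code
# Sketch: upper whisker (Q3 + 1.5*IQR) per numeric column, as a sanity check for the
# manual cut-offs used in the next cell. Assumes `columnas` from the cells above.
for col in ["rooms", "bedrooms", "bathrooms", "surface_total", "surface_covered"]:
    q25, q75 = np.percentile(columnas[col].values, [25, 75])
    iqr = q75 - q25
    print(col, "upper bound:", q75 + 1.5 * iqr)
###Output
_____no_output_____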
###Code
# We filter the properties as follows:
filtrado_1=columnas.loc[columnas["rooms"]<7]
filtrado_2=filtrado_1.loc[filtrado_1["bedrooms"]<6]
filtrado_3=filtrado_2.loc[filtrado_2["bathrooms"]<4]
filtrado_4=filtrado_3.loc[filtrado_3["surface_total"]<186]
filtrado_5=filtrado_4.loc[filtrado_4["surface_covered"]<158.5]
filtrado_5
###Output
_____no_output_____
###Markdown
ENCODING --> ONE HOT ENCODING
###Code
filtrado_5.value_counts("property_type")
ohe=pd.get_dummies(filtrado_5["property_type"])
filtrado_5=pd.concat([filtrado_5,ohe],axis=1)
filtrado_5=filtrado_5.drop(["property_type"],axis=1)
filtrado_5
###Output
_____no_output_____
###Markdown
DATA SCALING
###Code
base_ml=filtrado_5.copy()
columnas=["rooms","bedrooms","bathrooms","surface_total","surface_covered"]
for col in columnas:
scl=preprocessing.StandardScaler()
X=scl.fit_transform(base_ml[col].values.reshape(-1,1))
base_ml[col]=X
y=base_ml["price"]
X=base_ml.drop("price",axis=1)
base_ml.corr()
# plt.figure(figsize=(8,8))
# sns.heatmap(base_ml, cbar = True, square = True, annot=True, fmt= '.2f',annot_kws={'size': 15},
# xticklabels= base_ml.columns,
# yticklabels= base_ml.columns,
# cmap= 'coolwarm')
# plt.show()
###Output
_____no_output_____
###Markdown
MODEL TRAINING
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y ,test_size=0.30, random_state=42)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
linear_model = LinearRegression()
tree_regressor = DecisionTreeRegressor()
knn_regressor = KNeighborsRegressor()
linear_model.fit(X_train, y_train)
tree_regressor.fit(X_train, y_train)
knn_regressor.fit(X_train, y_train)
modelos = ['Regresión lineal', 'Árbol de Decisión', 'Vecinos más cercanos']
for i, model in enumerate([linear_model, tree_regressor, knn_regressor]):
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
print(f'Modelo: {modelos[i]}')
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))
print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
plt.figure(figsize = (8,4))
plt.subplot(1,2,1)
sns.distplot(y_train - y_train_pred, bins = 20, label = 'train')
sns.distplot(y_test - y_test_pred, bins = 20, label = 'test')
plt.xlabel('errores')
plt.legend()
ax = plt.subplot(1,2,2)
ax.scatter(y_test,y_test_pred, s =2)
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes]
]
ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
plt.xlabel('y (test)')
plt.ylabel('y_pred (test)')
plt.tight_layout()
plt.show()
coeff_df = pd.DataFrame(linear_model.coef_, X.columns, columns=['Coefficient'])
coeff_df
###Output
_____no_output_____
###Markdown
ADVANCED MODEL 1 - LINEAR REGRESSION (POLYNOMIAL FEATURES AND REGULARIZATION)
###Code
# FIRST, WE TRAIN A LINEAR REGRESSION MODEL WITH POLYNOMIAL FEATURES.
poly = PolynomialFeatures()
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.transform(X_test)
reg_poly = LinearRegression()
reg_poly.fit(X_train_poly,y_train)
y_train_pred_poly = reg_poly.predict(X_train_poly)
y_test_pred_poly = reg_poly.predict(X_test_poly)
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred_poly))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred_poly))
print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
# WE REPEAT WHAT WE DID ABOVE WITH A DIFFERENT POLYNOMIAL DEGREE TO SEE WHETHER THE MODEL IMPROVES
# REMEMBER THAT THE SCORING METRIC USED LATER IN CROSS-VALIDATION IS THE NEGATIVE ROOT MEAN SQUARED ERROR.
poly3 = PolynomialFeatures(3)
X_train_poly_3 = poly3.fit_transform(X_train)
X_test_poly_3 = poly3.fit_transform(X_test)
reg_poly3 = LinearRegression()
reg_poly3.fit(X_train_poly_3,y_train)
y_train_pred_poly_3 = reg_poly3.predict(X_train_poly_3)
y_test_pred_poly_3 = reg_poly3.predict(X_test_poly_3)
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred_poly_3))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred_poly_3))
print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
poly4 = PolynomialFeatures(4)
X_train_poly_4 = poly4.fit_transform(X_train)
X_test_poly_4 = poly4.fit_transform(X_test)
reg_poly4 = LinearRegression()
reg_poly4.fit(X_train_poly_4,y_train)
y_train_pred_poly_4 = reg_poly4.predict(X_train_poly_4)
y_test_pred_poly_4 = reg_poly4.predict(X_test_poly_4)
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred_poly_4))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred_poly_4))
print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
# WE COULD ALSO DEFINE A VARIABLE SO THAT CHANGING THE POLYNOMIAL DEGREE RECOMPUTES THE ERROR.
poly_num=5
poly_num = PolynomialFeatures(degree=poly_num)
X_train_poly_num = poly_num.fit_transform(X_train)
X_test_poly_num = poly_num.fit_transform(X_test)
reg_poly_num = LinearRegression()
reg_poly_num.fit(X_train_poly_num,y_train)
y_train_pred_poly_num = reg_poly_num.predict(X_train_poly_num)
y_test_pred_poly_num = reg_poly_num.predict(X_test_poly_num)
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred_poly_num))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred_poly_num))
print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
# SO FAR OUR BEST MODEL USES DEGREE-4 POLYNOMIAL FEATURES, WITH AN ERROR OF 89073.22
# NOW WE RUN CROSS-VALIDATION WITH BOTH RIDGE AND LASSO REGULARIZATION TO SEE WHETHER THE MODEL IMPROVES
reg_ridge = Ridge() # THE DEFAULT VALUE OF ALPHA IS 1.0
ridgecv= cross_val_score(reg_ridge, X_train_poly_4, y_train, scoring = 'neg_root_mean_squared_error', cv=5)
print(ridgecv*-1,ridgecv.mean()*-1)
reg_num=10
reg_ridge_num = Ridge(alpha=reg_num) # A LARGE ALPHA FORCES THE COEFFICIENTS TOWARDS 0
ridgecv_num= cross_val_score(reg_ridge_num, X_train_poly_4, y_train, scoring = 'neg_root_mean_squared_error', cv=5)
print(ridgecv_num*-1,ridgecv_num.mean()*-1)
reg_lasso=Lasso() # THE DEFAULT VALUE OF ALPHA IS 1.0 AND OF MAX_ITER IS 1000
lassocv=cross_val_score(reg_lasso, X_train_poly_4, y_train, scoring = 'neg_root_mean_squared_error', cv=5)
print(lassocv*-1,lassocv.mean()*-1)
alpha_num=0.001
reg_lasso_num=Lasso(alpha=alpha_num) # With max_iter=2000 the model barely improves and takes longer, so it is not set here
lassocv_num=cross_val_score(reg_lasso_num, X_train_poly_4, y_train, scoring = 'neg_root_mean_squared_error', cv=5)
print(lassocv_num*-1,lassocv_num.mean()*-1)
###Output
C:\Users\54351\Miniconda3\envs\datascience\lib\site-packages\sklearn\linear_model\_coordinate_descent.py:529: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Duality gap: 161393728909316.22, tolerance: 78762768963.45775
model = cd_fast.enet_coordinate_descent(
C:\Users\54351\Miniconda3\envs\datascience\lib\site-packages\sklearn\linear_model\_coordinate_descent.py:529: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Duality gap: 160451254801163.2, tolerance: 78424785584.96384
model = cd_fast.enet_coordinate_descent(
C:\Users\54351\Miniconda3\envs\datascience\lib\site-packages\sklearn\linear_model\_coordinate_descent.py:529: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Duality gap: 145276434698124.9, tolerance: 76343828170.41612
model = cd_fast.enet_coordinate_descent(
C:\Users\54351\Miniconda3\envs\datascience\lib\site-packages\sklearn\linear_model\_coordinate_descent.py:529: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Duality gap: 162752517679491.4, tolerance: 79868615465.55113
model = cd_fast.enet_coordinate_descent(
###Markdown
RANDOM FOREST
###Code
# NOW WE TRY AND EVALUATE A SECOND MODEL, RANDOM FOREST, WITH ITS DEFAULT VALUES, AND THEN SEARCH
# FOR THE BEST HYPERPARAMETERS WITH A RANDOMIZED SEARCH.
rf=RandomForestRegressor()
rf.fit(X_train,y_train)
y_train_pred = rf.predict(X_train)
y_test_pred = rf.predict(X_test)
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))
print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
importances = rf.feature_importances_
columns = X_train.columns
indices = np.argsort(importances)[::-1]
plt.figure(figsize = (10,8))
sns.barplot(columns[indices], importances[indices])
plt.title("Importancia de cada variable para predecir")
plt.show()
# DEFINE THE HYPERPARAMETERS WE WANT THE SEARCH TO EXPLORE
param_dist = {"max_features": np.arange(0,9),
"n_estimators": np.arange(100,150, 10)}
model = RandomizedSearchCV(rf, param_dist, scoring= 'neg_root_mean_squared_error', cv=5)
model.fit(X_train, y_train)
print("Mejores parametros: "+str(model.best_params_))
print("Mejor Score: "+str(model.best_score_)+'\n')
# scores = pd.DataFrame(model.cv_results_)
# scores.head()
rf=RandomForestRegressor(n_estimators=130,max_features=3)
rfcv= cross_val_score(rf, X_train, y_train, scoring = 'neg_root_mean_squared_error', cv=5)
print(rfcv*-1,rfcv.mean()*-1)
###Output
[67228.92063296 69590.0303491 87022.99133441 65727.57052251
70144.36625939] 71942.775819674
###Markdown
SVM A support vector regression model could also be added, but it does not perform well (the error is around $135,228.40) and the resources required to run the model are high.
###Code
# svr=SVR()
# svr.fit(X_train,y_train)
# y_train_pred = svr.predict(X_train)
# y_test_pred = svr.predict(X_test)
# rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
# rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))
# print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
# print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
# param_grid = {"gamma": np.arange(0.001,0.1,3),
# "C": np.arange(0.001,0.1,3),
# "kernel":["poly","rbf","sigmoid"]}
# svr=SVR()
# model2 = GridSearchCV(svr, param_grid=param_grid, cv=5)
# svr.get_params()
# model2.fit(X_train, y_train)
# print("Mejores parametros: "+str(model2.best_params_))
# print("Mejor Score: "+str(model2.best_score_)+'\n')
# scores = pd.DataFrame(model2.cv_results_)
# scores.head()
# svr=SVR(kernel="rbf",C=0.001,gamma=0.001)
# svr.fit(X_train,y_train)
# y_train_pred = svr.predict(X_train)
# y_test_pred = svr.predict(X_test)
# rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
# rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))
# print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
# print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
###Output
_____no_output_____
###Markdown
XGBOOST
###Code
xg_reg = xgb.XGBRegressor()
xg_reg.fit(X_train,y_train)
preds=xg_reg.predict(X_test)
y_train_pred = xg_reg.predict(X_train)
y_test_pred = xg_reg.predict(X_test)
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))
print(f'Raíz del error cuadrático medio en Train: {rmse_train}')
print(f'Raíz del error cuadrático medio en Test: {rmse_test}')
importances = xg_reg.feature_importances_
columns = X_train.columns
indices = np.argsort(importances)[::-1]
plt.figure(figsize = (10,8))
sns.barplot(columns[indices], importances[indices])
plt.title("Importancia de cada variable para predecir")
plt.show()
param_dist = {"learning_rate": np.arange(0,1,0.2),
"n_estimators": np.arange(100,150, 10),
"alpha": np.arange(1,10,2)}
model = RandomizedSearchCV(xg_reg, param_dist, scoring= 'neg_root_mean_squared_error', cv=5)
model.fit(X_train, y_train)
print("Mejores parametros: "+str(model.best_params_))
print("Mejor Score: "+str(model.best_score_)+'\n')
xg_reg = xgb.XGBRegressor(n_estimators=140,learning_rate=0.4,alpha=1)
xg_reg= cross_val_score(xg_reg, X_train, y_train, scoring = 'neg_root_mean_squared_error', cv=5)
print(xg_reg*-1,xg_reg.mean()*-1)
###Output
[68979.17233713 71660.20147055 89194.61424884 68400.41761208
73062.3718132 ] 74259.35549635907
###Markdown
COMPARISON OF EVALUATED MODELS
###Code
# Dataframe with the results of models
modelos = {'Models': [ridgecv_num.mean()*-1, rfcv.mean()*-1, xg_reg.mean()*-1]}
modelos_df = pd.DataFrame(modelos, index=["Ridge regression","Random Forest" ,"XG Boost"])
modelos_df
plt.figure(figsize=(50,50))
modelos_df.sort_values(by='Models').plot.bar(legend=None)
plt.xticks(rotation=0, fontsize=15)
plt.title('Comparación de Modelos', fontsize=20)
plt.ylabel('RMSE' , fontsize=20)
###Output
_____no_output_____
|
analise-imagens-medicas.ipynb
|
###Markdown
MBA FIAP Inteligência Artificial & Machine Learning Computer Vision: Medical Image Analysis 1. Introduction Medical imaging technologies are increasingly integrated with computer vision systems, including x-ray images. Modern equipment generates digital images for this type of exam, enabling more complete and less _ad-hoc_ analyses; as a result, some preliminary analyses can be performed by AI-based applications to confirm or suggest diagnoses to the professional responsible for the exam. In the field of x-ray diagnostics, pneumonia is one of the conditions for which this exam is most used to determine the course of treatment. 2. Instructions This final project aims to explore the knowledge acquired in the hands-on classes. Through a guided track, we will build a model capable of classifying x-ray images to determine whether a given person has a condition that requires further care. Given the images available for training and validation, it is up to the group to select the ideal quantities, or even pre-process the images, to obtain the best result on the main performance indicators, such as precision, recall and F1 score. This project may be done by groups of up to 4 people. If this project is a substitute assessment, it must be carried out by a single person. | Member Name | RM | Class || :----------------------- | :------------- | :-----: || ANA RAFAELA GOMES | RM 337382 | `14IA` || ANDERSON DIAS LIMA | RM 338650 | `14IA` || DANILO DA COSTA ALVES | RM 336665 | `14IA` || LUCAS ALVES RODRIGUES | RM 337584 | `14IA` | Since this is a guided project, pay attention to the markers: **Implementation** indicates that some Python implementation is required in the following block, where the inscription ```IMPLEMENTAR``` appears, and **Answer** indicates that an objective answer to a question is expected. **Each group may use in the objective answers any items that enrich their point of view, such as charts, photos and even source-code snippets.** As many blocks as necessary may be used for the implementations or to justify the answers; it is not mandatory to use only the indicated block. At the end, do not forget to upload the project files to each member's GitHub account, or upload them to the group representative's account and have the other members fork the project. The evaluation will place more emphasis on the following development topics: 1. __Pre-Processing__ 2. __Classification__ 3. __Performance__ 4. __Final Conclusions__ 3.1 Problem detail: pneumonia Source: [article](https://drauziovarella.uol.com.br/doencas-e-sintomas/pneumonia) by Dr. Drauzio Varella. Pneumonias are infections that settle in the lungs, paired organs located on each side of the rib cage. They can affect the region of the pulmonary alveoli, where the terminal branches of the bronchi end, and sometimes the interstices (the space between one alveolus and another). Basically, pneumonia is caused by the penetration of an infectious or irritating agent (bacteria, viruses, fungi, or allergic reactions) into the alveolar space, where gas exchange takes place. This area must always be very clean, free of substances that could prevent contact between air and blood. Clinical examination, lung auscultation and chest x-rays are essential resources for diagnosing pneumonia.
3.2 Diagnosis by x-ray The x-ray exam shows differences for each type of diagnosis, considering the following analysis groups: **normal** (or control), where there is no infection, **bacterial pneumonia**, which represents a bacterial infection, and **viral pneumonia**, which indicates a viral infection. Control images are not whiter at the center, which is where the heart sits. In images with pneumonia, on the other hand, white regions can be seen around the lungs, which is how the exam identifies the secretions responsible for the infection. The more white regions around the lung, the more severe the inflammation and the fewer lung details can be observed, the image becoming somewhat faded under this condition. 4.1 Problem Build a classifier using _transfer learning_ to identify the following classes: **control**, **bacterial pneumonia** and **viral pneumonia**. To build this classifier, use the [Kaggle Chest X-Ray Pneumonia](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) dataset and organize the data so that it is split into the classes already defined in the ```raiox``` directory: ```controle``` for the normal images (no inflammation), ```bacteria``` for the bacterial pneumonia images and ```viral``` for the viral pneumonia images. Determine the number of images to be used for training and validation; use at least 100 images per class. Compare the results of at least 3 classifiers, reporting **precision**, **recall** and **F1 score**. In the guide below, the following models are suggested: ResNet50, VGG16 and VGG19. >Important: the number of images chosen must be enough to reach a minimum **precision** of 70%. The model will be built with the Keras framework. **Question**: How many images were selected for each class? **Answer**: 1341 = raiox/train/NORMAL, 2530 = raiox/train/PNEUMONIA_BACTERIA, 1345 = raiox/train/PNEUMONIA_VIRUS 4.2 Required components This project requires the following components to be installed via ```conda install```: * Keras * Tensorflow * Pillow * Matplotlib
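Before loading the data, this sketch shows one possible way to split the Kaggle PNEUMONIA folder into the bacterial and viral classes used later; the source path and the 'bacteria'/'virus' file-name substrings are assumptions about the downloaded dataset, not part of the original template:
###Code
# Illustrative sketch only: copy the Kaggle pneumonia images into per-class folders.
# Paths and file-name substrings are assumptions and may need to be adapted.
import shutil
from pathlib import Path

src = Path("chest_xray/train/PNEUMONIA")                  # assumed Kaggle download location
dst_bacteria = Path("raiox/train/PNEUMONIA_BACTERIA")
dst_virus = Path("raiox/train/PNEUMONIA_VIRUS")
dst_bacteria.mkdir(parents=True, exist_ok=True)
dst_virus.mkdir(parents=True, exist_ok=True)

for img in src.glob("*.jpeg"):
    if "bacteria" in img.name.lower():
        shutil.copy(img, dst_bacteria / img.name)
    elif "virus" in img.name.lower():
        shutil.copy(img, dst_virus / img.name)
###Output
_____no_output_____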
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import ResNet50
from keras.applications import VGG16, VGG19
from keras.applications import Xception
from keras.applications.xception import preprocess_input
from keras.applications.resnet50 import preprocess_input
from keras.applications.vgg16 import preprocess_input
from keras import Model, layers
from keras.models import Sequential, load_model, model_from_json
from tensorflow.keras import optimizers
from tensorflow.keras.layers import Dense, Conv2D, Dropout, Flatten, MaxPooling2D, Activation
import keras.backend as K
###Output
_____no_output_____
###Markdown
4.3 Loading the training and validation images Select the best split between training and validation data. The number must be expressed as a fraction; 5% corresponds to 0.05, for example.
###Code
## IMPLEMENTE
from pathlib import Path
divisao_treino_validacao = 0.2
# Dataset downloaded from: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia
# Samples stored in: raiox/train/
# 1341 = raiox/train/NORMAL
# 2530 = raiox/train/PNEUMONIA_BACTERIA
# 1345 = raiox/train/PNEUMONIA_VIRUS
base_path = "raiox/train"
filename_filter = "*.jpeg"
qtdeTotal = 0
# Check how many samples are available.
for filename in Path(base_path).rglob(filename_filter):
qtdeTotal+=1
print("Quantidade total de imagens: %s" % qtdeTotal)
print("Percentual da base de validação %s" % divisao_treino_validacao)
train_datagen = ImageDataGenerator(validation_split=divisao_treino_validacao)
train_generator = train_datagen.flow_from_directory(
"raiox/train/",
batch_size=32,
class_mode="categorical",
color_mode="rgb",
target_size=(224,224),
subset="training")
val_generator = train_datagen.flow_from_directory(
"raiox/train/",
batch_size=32,
class_mode="categorical",
color_mode="rgb",
target_size=(224,224),
subset="validation")
train_generator.class_indices, val_generator.class_indices
###Output
_____no_output_____
###Markdown
4.4 Transfer learning models Keras already provides specialized classes for the following deep-learning models trained on the [ImageNet](http://www.image-net.org/) dataset: * Xception * VGG16 * VGG19 * ResNet50 * InceptionV3 * InceptionResNetV2 * MobileNet * DenseNet * NASNet * MobileNetV2. For more details, see the [Keras documentation](https://keras.io/applications/). For this study, we will evaluate the following architectures: ResNet50, VGG16 and VGG19. 4.5 Performance indicators Keras does not provide performance indicators such as precision, recall and F1 score by default, so we need to implement them ourselves.
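For reference, the other two architectures can be plugged into the same transfer-learning pattern used for ResNet50 in section 4.5.1; this is a minimal sketch with VGG16 (variable names are illustrative and not used elsewhere):
###Code
# Illustrative sketch: the same frozen-base + small-head pattern, using VGG16 instead of ResNet50.
from keras.applications import VGG16
from keras import Model, layers

vgg_base = VGG16(include_top=False)
for layer in vgg_base.layers:
    layer.trainable = False                              # freeze the pretrained convolutional base
x_vgg = layers.GlobalAveragePooling2D()(vgg_base.output)
x_vgg = layers.Dense(128, activation='relu')(x_vgg)
out_vgg = layers.Dense(3, activation='softmax')(x_vgg)   # 3 classes: control, bacterial, viral
vgg_model = Model(vgg_base.input, out_vgg)
###Output
_____no_output_____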
###Code
def recall_score(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision_score(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1_score(y_true, y_pred):
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
###Output
_____no_output_____
###Markdown
4.5.1 ResNet50 Architecture **Question**: Explain how the ResNet50 architecture is structured. *If necessary, use charts or projects that use this architecture. Also detail its layer topology and show in which situations this architecture tends to succeed and in which scenarios it does not.* **Answer**: Building on Deep Learning concepts, one of the most visible characteristics of the 'ResNet50' architecture is its sets/blocks of "Identity" + "Convolution". ResNet50 has more than 23 million trainable parameters, and the ResNet family can reach more than 150 layers; it is because of this very deep processing that this type of architecture is frequently used in image processing, able to interpret everything from small details (e.g., image edges) up to larger structures/compositions. However, this same benefit could make the network heavy and ineffective; the problem is worked around with an addition operation: the signal produced by the preceding convolutional layers is summed with the signal transmitted directly from the point before those layers, joining a processed signal with a signal from an earlier stage of processing. The illustration below shows this "propagation" through a ReLU activation function. The example below presents a simplified view of this architecture: ***INPUT:*** - a zero-padding block is applied (rows and columns of zeros added on each side, in a (3,3) format); ***STAGE 1 (with ReLU activation):*** - a 2D convolution with 64 filters of shape (7,7), using a stride of (2,2); - batch normalization applied to the channel axis of the input; - max-pooling using a (3,3) window with a stride of (2,2). In stages 2, 3, 4 and 5 the convolutional block uses three sets of filters. ***STAGE 2:*** - two identity blocks using three sets of filters. ***STAGE 3:*** - three identity blocks using three sets of filters. ***STAGE 4:*** - five identity blocks using three sets of filters. ***STAGE 5:*** - two identity blocks using three sets of filters. ***AVERAGE-POOLING*** - pooling layer using a (2,2) window. ***FLATTEN*** - flattening layer with no hyperparameters, responsible for reshaping the feature map so the data can be used by the fully connected layer. ***FULLY CONNECTED + OUTPUT*** - fully connected layer that reduces its input to the number of classes, in this case using a Softmax activation. The transfer learning technique consists of reusing the same model and training it on other images. For that reason, we drop the last layer so we can model the classes we defined, that is, **control**, **bacterial** and **viral**. Set the number of classes to be classified.
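To make the "identity block" idea concrete, here is a minimal sketch of such a block written with the Keras functional API; it is our own simplified illustration, not the exact code used inside keras.applications.ResNet50:
###Code
# Simplified illustration of a ResNet identity block: a stack of convolutions whose output
# is summed with the untouched input (the skip connection) before the final ReLU.
from tensorflow.keras import layers as tfl

def identity_block(x, filters):
    f1, f2, f3 = filters
    shortcut = x                                     # signal passed around the convolutions
    x = tfl.Conv2D(f1, (1, 1), padding='same')(x)
    x = tfl.BatchNormalization()(x)
    x = tfl.Activation('relu')(x)
    x = tfl.Conv2D(f2, (3, 3), padding='same')(x)
    x = tfl.BatchNormalization()(x)
    x = tfl.Activation('relu')(x)
    x = tfl.Conv2D(f3, (1, 1), padding='same')(x)
    x = tfl.BatchNormalization()(x)
    x = tfl.Add()([x, shortcut])                     # sum the processed signal with the earlier signal
    return tfl.Activation('relu')(x)

inp = tfl.Input((56, 56, 256))                       # toy input; channel count must match f3
out = identity_block(inp, (64, 64, 256))
###Output
_____no_output_____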
###Code
## IMPLEMENTE
qtde_classes = 3
conv_base = ResNet50(include_top=False)
for layer in conv_base.layers:
layer.trainable = False
x = conv_base.output
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation='relu')(x)
predictions = layers.Dense(qtde_classes, activation='softmax')(x)
model = Model(conv_base.input, predictions)
model.summary()
optimizer = optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[precision_score, recall_score, f1_score])
###Output
_____no_output_____
###Markdown
The number of epochs defines how many times the model will train and validate the error, adjusting the weights for better convergence. Choose a suitable number of epochs so that we reach at least **70% validation precision**.
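As an optional alternative to hand-tuning the epoch count (an addition of ours, not part of the original template), an EarlyStopping callback can halt training once the validation loss stops improving:
###Code
# Optional sketch: stop training automatically when val_loss stops improving.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# Usage (same call as below, with the callback added):
# history = model.fit_generator(generator=train_generator, epochs=qtde_epocas,
#                               validation_steps=5, steps_per_epoch=5,
#                               validation_data=val_generator, callbacks=[early_stop])
###Output
_____no_output_____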
###Code
## IMPLEMENTE
qtde_epocas = 60
### IMPORTANT ###
# The call below triggers a 'deprecated' warning, but it was kept as in the original template
'''WARNING:tensorflow:From <ipython-input-86-68139f1e6072>:1: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
'''
history = model.fit_generator(generator=train_generator,
epochs=qtde_epocas,
validation_steps=5,
steps_per_epoch=5,
validation_data=val_generator)
###Output
Epoch 1/60
5/5 [==============================] - 17s 3s/step - loss: 1.7355 - precision_score: 0.4125 - recall_score: 0.3812 - f1_score: 0.3958 - val_loss: 1.3464 - val_precision_score: 0.5558 - val_recall_score: 0.5437 - val_f1_score: 0.5495
Epoch 2/60
5/5 [==============================] - 15s 3s/step - loss: 1.0653 - precision_score: 0.6325 - recall_score: 0.6000 - f1_score: 0.6158 - val_loss: 0.9114 - val_precision_score: 0.5392 - val_recall_score: 0.5250 - val_f1_score: 0.5319
Epoch 3/60
5/5 [==============================] - 16s 3s/step - loss: 0.7759 - precision_score: 0.6340 - recall_score: 0.5875 - f1_score: 0.6092 - val_loss: 0.6919 - val_precision_score: 0.6955 - val_recall_score: 0.6812 - val_f1_score: 0.6880
Epoch 4/60
5/5 [==============================] - 17s 3s/step - loss: 0.5532 - precision_score: 0.7647 - recall_score: 0.7312 - f1_score: 0.7476 - val_loss: 0.5787 - val_precision_score: 0.7637 - val_recall_score: 0.7312 - val_f1_score: 0.7469
Epoch 5/60
5/5 [==============================] - 16s 3s/step - loss: 0.5774 - precision_score: 0.7758 - recall_score: 0.7375 - f1_score: 0.7559 - val_loss: 0.6557 - val_precision_score: 0.6893 - val_recall_score: 0.6625 - val_f1_score: 0.6755
Epoch 6/60
5/5 [==============================] - 16s 3s/step - loss: 0.5509 - precision_score: 0.7764 - recall_score: 0.7375 - f1_score: 0.7562 - val_loss: 0.5869 - val_precision_score: 0.7470 - val_recall_score: 0.7188 - val_f1_score: 0.7322
Epoch 7/60
5/5 [==============================] - 15s 3s/step - loss: 0.5256 - precision_score: 0.7955 - recall_score: 0.7688 - f1_score: 0.7817 - val_loss: 0.6743 - val_precision_score: 0.7092 - val_recall_score: 0.6875 - val_f1_score: 0.6980
Epoch 8/60
5/5 [==============================] - 16s 3s/step - loss: 0.5408 - precision_score: 0.7844 - recall_score: 0.7312 - f1_score: 0.7566 - val_loss: 0.5549 - val_precision_score: 0.7908 - val_recall_score: 0.7563 - val_f1_score: 0.7731
Epoch 9/60
5/5 [==============================] - 17s 3s/step - loss: 0.4267 - precision_score: 0.8458 - recall_score: 0.8250 - f1_score: 0.8352 - val_loss: 0.5806 - val_precision_score: 0.7972 - val_recall_score: 0.7625 - val_f1_score: 0.7793
Epoch 10/60
5/5 [==============================] - 16s 3s/step - loss: 0.4978 - precision_score: 0.8219 - recall_score: 0.8062 - f1_score: 0.8139 - val_loss: 0.6434 - val_precision_score: 0.7225 - val_recall_score: 0.6938 - val_f1_score: 0.7077
Epoch 11/60
5/5 [==============================] - 17s 3s/step - loss: 0.5810 - precision_score: 0.8028 - recall_score: 0.7875 - f1_score: 0.7950 - val_loss: 0.7042 - val_precision_score: 0.6931 - val_recall_score: 0.6750 - val_f1_score: 0.6838
Epoch 12/60
5/5 [==============================] - 16s 3s/step - loss: 0.5050 - precision_score: 0.7978 - recall_score: 0.7625 - f1_score: 0.7796 - val_loss: 0.5742 - val_precision_score: 0.7665 - val_recall_score: 0.7375 - val_f1_score: 0.7515
Epoch 13/60
5/5 [==============================] - 16s 3s/step - loss: 0.5204 - precision_score: 0.7833 - recall_score: 0.7437 - f1_score: 0.7628 - val_loss: 0.5332 - val_precision_score: 0.7928 - val_recall_score: 0.7500 - val_f1_score: 0.7705
Epoch 14/60
5/5 [==============================] - 16s 3s/step - loss: 0.4688 - precision_score: 0.8134 - recall_score: 0.7937 - f1_score: 0.8033 - val_loss: 0.6327 - val_precision_score: 0.7724 - val_recall_score: 0.7375 - val_f1_score: 0.7540
Epoch 15/60
5/5 [==============================] - 15s 3s/step - loss: 0.4333 - precision_score: 0.8179 - recall_score: 0.7875 - f1_score: 0.8019 - val_loss: 0.5036 - val_precision_score: 0.8198 - val_recall_score: 0.7625 - val_f1_score: 0.7897
Epoch 16/60
5/5 [==============================] - 15s 3s/step - loss: 0.4289 - precision_score: 0.8490 - recall_score: 0.7875 - f1_score: 0.8168 - val_loss: 0.4675 - val_precision_score: 0.8275 - val_recall_score: 0.8062 - val_f1_score: 0.8165
Epoch 17/60
5/5 [==============================] - 15s 3s/step - loss: 0.5013 - precision_score: 0.7965 - recall_score: 0.7750 - f1_score: 0.7853 - val_loss: 0.5077 - val_precision_score: 0.8216 - val_recall_score: 0.7812 - val_f1_score: 0.8005
Epoch 18/60
5/5 [==============================] - 17s 3s/step - loss: 0.4975 - precision_score: 0.8034 - recall_score: 0.7688 - f1_score: 0.7855 - val_loss: 0.5267 - val_precision_score: 0.7591 - val_recall_score: 0.7437 - val_f1_score: 0.7513
Epoch 19/60
5/5 [==============================] - 15s 3s/step - loss: 0.4943 - precision_score: 0.7835 - recall_score: 0.7688 - f1_score: 0.7760 - val_loss: 0.5222 - val_precision_score: 0.7825 - val_recall_score: 0.7688 - val_f1_score: 0.7755
Epoch 20/60
5/5 [==============================] - 16s 3s/step - loss: 0.4056 - precision_score: 0.8576 - recall_score: 0.8250 - f1_score: 0.8408 - val_loss: 0.4822 - val_precision_score: 0.7847 - val_recall_score: 0.7750 - val_f1_score: 0.7798
Epoch 21/60
5/5 [==============================] - 17s 3s/step - loss: 0.4047 - precision_score: 0.8557 - recall_score: 0.8188 - f1_score: 0.8363 - val_loss: 0.5148 - val_precision_score: 0.7904 - val_recall_score: 0.7812 - val_f1_score: 0.7857
Epoch 22/60
5/5 [==============================] - 15s 3s/step - loss: 0.4979 - precision_score: 0.7724 - recall_score: 0.7625 - f1_score: 0.7674 - val_loss: 0.5138 - val_precision_score: 0.8015 - val_recall_score: 0.7750 - val_f1_score: 0.7878
Epoch 23/60
5/5 [==============================] - 15s 3s/step - loss: 0.4397 - precision_score: 0.8270 - recall_score: 0.8062 - f1_score: 0.8162 - val_loss: 0.5481 - val_precision_score: 0.8044 - val_recall_score: 0.7937 - val_f1_score: 0.7990
Epoch 24/60
5/5 [==============================] - 16s 3s/step - loss: 0.4783 - precision_score: 0.8588 - recall_score: 0.8313 - f1_score: 0.8446 - val_loss: 0.5134 - val_precision_score: 0.7782 - val_recall_score: 0.7688 - val_f1_score: 0.7734
Epoch 25/60
5/5 [==============================] - 16s 3s/step - loss: 0.4345 - precision_score: 0.8048 - recall_score: 0.8000 - f1_score: 0.8024 - val_loss: 0.5843 - val_precision_score: 0.7737 - val_recall_score: 0.7500 - val_f1_score: 0.7616
Epoch 26/60
5/5 [==============================] - 15s 3s/step - loss: 0.4917 - precision_score: 0.7907 - recall_score: 0.7750 - f1_score: 0.7826 - val_loss: 0.5151 - val_precision_score: 0.7667 - val_recall_score: 0.7250 - val_f1_score: 0.7447
Epoch 27/60
5/5 [==============================] - 18s 4s/step - loss: 0.3465 - precision_score: 0.8792 - recall_score: 0.8625 - f1_score: 0.8706 - val_loss: 0.5227 - val_precision_score: 0.7867 - val_recall_score: 0.7625 - val_f1_score: 0.7743
Epoch 28/60
5/5 [==============================] - 17s 3s/step - loss: 0.3854 - precision_score: 0.8447 - recall_score: 0.8125 - f1_score: 0.8280 - val_loss: 0.5291 - val_precision_score: 0.7712 - val_recall_score: 0.7563 - val_f1_score: 0.7636
Epoch 29/60
5/5 [==============================] - 17s 3s/step - loss: 0.5336 - precision_score: 0.7641 - recall_score: 0.7312 - f1_score: 0.7473 - val_loss: 0.5721 - val_precision_score: 0.7412 - val_recall_score: 0.6875 - val_f1_score: 0.7125
Epoch 30/60
5/5 [==============================] - 15s 3s/step - loss: 0.3989 - precision_score: 0.8298 - recall_score: 0.8005 - f1_score: 0.8147 - val_loss: 0.5024 - val_precision_score: 0.7962 - val_recall_score: 0.7750 - val_f1_score: 0.7852
Epoch 31/60
5/5 [==============================] - 16s 3s/step - loss: 0.5027 - precision_score: 0.8178 - recall_score: 0.7875 - f1_score: 0.8019 - val_loss: 0.6072 - val_precision_score: 0.7623 - val_recall_score: 0.7063 - val_f1_score: 0.7325
Epoch 32/60
5/5 [==============================] - 18s 4s/step - loss: 0.4259 - precision_score: 0.8068 - recall_score: 0.7875 - f1_score: 0.7967 - val_loss: 0.5665 - val_precision_score: 0.7319 - val_recall_score: 0.7188 - val_f1_score: 0.7252
Epoch 33/60
5/5 [==============================] - 18s 4s/step - loss: 0.4932 - precision_score: 0.7978 - recall_score: 0.7875 - f1_score: 0.7926 - val_loss: 0.8022 - val_precision_score: 0.6438 - val_recall_score: 0.6313 - val_f1_score: 0.6374
Epoch 34/60
5/5 [==============================] - 20s 4s/step - loss: 0.5344 - precision_score: 0.7812 - recall_score: 0.7812 - f1_score: 0.7812 - val_loss: 0.6196 - val_precision_score: 0.7323 - val_recall_score: 0.7188 - val_f1_score: 0.7254
Epoch 35/60
5/5 [==============================] - 21s 4s/step - loss: 0.4450 - precision_score: 0.8105 - recall_score: 0.8000 - f1_score: 0.8052 - val_loss: 0.6140 - val_precision_score: 0.7349 - val_recall_score: 0.7250 - val_f1_score: 0.7299
Epoch 36/60
5/5 [==============================] - 19s 4s/step - loss: 0.4875 - precision_score: 0.8136 - recall_score: 0.7875 - f1_score: 0.8002 - val_loss: 0.5348 - val_precision_score: 0.7760 - val_recall_score: 0.7375 - val_f1_score: 0.7561
Epoch 37/60
5/5 [==============================] - 18s 4s/step - loss: 0.4783 - precision_score: 0.7642 - recall_score: 0.7500 - f1_score: 0.7569 - val_loss: 0.5511 - val_precision_score: 0.7762 - val_recall_score: 0.7437 - val_f1_score: 0.7595
Epoch 38/60
5/5 [==============================] - 19s 4s/step - loss: 0.3574 - precision_score: 0.8417 - recall_score: 0.8313 - f1_score: 0.8364 - val_loss: 0.4671 - val_precision_score: 0.8127 - val_recall_score: 0.7812 - val_f1_score: 0.7965
Epoch 39/60
5/5 [==============================] - 19s 4s/step - loss: 0.3966 - precision_score: 0.8590 - recall_score: 0.8375 - f1_score: 0.8480 - val_loss: 0.5140 - val_precision_score: 0.8069 - val_recall_score: 0.7875 - val_f1_score: 0.7970
Epoch 40/60
5/5 [==============================] - 19s 4s/step - loss: 0.4900 - precision_score: 0.8082 - recall_score: 0.7937 - f1_score: 0.8008 - val_loss: 0.5024 - val_precision_score: 0.7517 - val_recall_score: 0.7375 - val_f1_score: 0.7444
Epoch 41/60
5/5 [==============================] - 19s 4s/step - loss: 0.4563 - precision_score: 0.8009 - recall_score: 0.7812 - f1_score: 0.7909 - val_loss: 0.5530 - val_precision_score: 0.7530 - val_recall_score: 0.7375 - val_f1_score: 0.7450
Epoch 42/60
5/5 [==============================] - 19s 4s/step - loss: 0.4886 - precision_score: 0.7839 - recall_score: 0.7688 - f1_score: 0.7762 - val_loss: 0.4794 - val_precision_score: 0.8266 - val_recall_score: 0.8000 - val_f1_score: 0.8130
Epoch 43/60
5/5 [==============================] - 16s 3s/step - loss: 0.4208 - precision_score: 0.8236 - recall_score: 0.8188 - f1_score: 0.8211 - val_loss: 0.4752 - val_precision_score: 0.7976 - val_recall_score: 0.7688 - val_f1_score: 0.7826
Epoch 44/60
5/5 [==============================] - 19s 4s/step - loss: 0.4000 - precision_score: 0.8419 - recall_score: 0.8313 - f1_score: 0.8365 - val_loss: 0.4711 - val_precision_score: 0.8024 - val_recall_score: 0.7875 - val_f1_score: 0.7948
Epoch 45/60
5/5 [==============================] - 18s 4s/step - loss: 0.4212 - precision_score: 0.8421 - recall_score: 0.8313 - f1_score: 0.8366 - val_loss: 0.4513 - val_precision_score: 0.8261 - val_recall_score: 0.8062 - val_f1_score: 0.8159
Epoch 46/60
5/5 [==============================] - 18s 4s/step - loss: 0.3809 - precision_score: 0.8603 - recall_score: 0.8500 - f1_score: 0.8551 - val_loss: 0.4620 - val_precision_score: 0.8070 - val_recall_score: 0.7875 - val_f1_score: 0.7970
Epoch 47/60
5/5 [==============================] - 17s 3s/step - loss: 0.4240 - precision_score: 0.8090 - recall_score: 0.7937 - f1_score: 0.8012 - val_loss: 0.5080 - val_precision_score: 0.7671 - val_recall_score: 0.7625 - val_f1_score: 0.7648
Epoch 48/60
5/5 [==============================] - 18s 4s/step - loss: 0.4055 - precision_score: 0.8117 - recall_score: 0.8000 - f1_score: 0.8056 - val_loss: 0.5290 - val_precision_score: 0.8104 - val_recall_score: 0.7812 - val_f1_score: 0.7954
Epoch 49/60
5/5 [==============================] - 17s 3s/step - loss: 0.4683 - precision_score: 0.7802 - recall_score: 0.7750 - f1_score: 0.7776 - val_loss: 0.5492 - val_precision_score: 0.7823 - val_recall_score: 0.7563 - val_f1_score: 0.7688
Epoch 50/60
5/5 [==============================] - 17s 3s/step - loss: 0.4807 - precision_score: 0.8069 - recall_score: 0.7937 - f1_score: 0.8002 - val_loss: 0.5160 - val_precision_score: 0.7582 - val_recall_score: 0.7312 - val_f1_score: 0.7442
Epoch 51/60
5/5 [==============================] - 16s 3s/step - loss: 0.4732 - precision_score: 0.7820 - recall_score: 0.7625 - f1_score: 0.7720 - val_loss: 0.5165 - val_precision_score: 0.7649 - val_recall_score: 0.7563 - val_f1_score: 0.7605
Epoch 52/60
5/5 [==============================] - 15s 3s/step - loss: 0.3487 - precision_score: 0.8375 - recall_score: 0.8375 - f1_score: 0.8375 - val_loss: 0.5261 - val_precision_score: 0.7625 - val_recall_score: 0.7625 - val_f1_score: 0.7625
Epoch 53/60
5/5 [==============================] - 17s 3s/step - loss: 0.3390 - precision_score: 0.8804 - recall_score: 0.8750 - f1_score: 0.8777 - val_loss: 0.5375 - val_precision_score: 0.7726 - val_recall_score: 0.7625 - val_f1_score: 0.7675
Epoch 54/60
5/5 [==============================] - 18s 4s/step - loss: 0.3467 - precision_score: 0.8663 - recall_score: 0.8500 - f1_score: 0.8579 - val_loss: 0.5685 - val_precision_score: 0.7427 - val_recall_score: 0.7125 - val_f1_score: 0.7270
Epoch 55/60
5/5 [==============================] - 16s 3s/step - loss: 0.4065 - precision_score: 0.8215 - recall_score: 0.8000 - f1_score: 0.8105 - val_loss: 0.5135 - val_precision_score: 0.7577 - val_recall_score: 0.7437 - val_f1_score: 0.7506
Epoch 56/60
5/5 [==============================] - 17s 3s/step - loss: 0.3822 - precision_score: 0.8192 - recall_score: 0.8192 - f1_score: 0.8192 - val_loss: 0.5574 - val_precision_score: 0.7096 - val_recall_score: 0.7000 - val_f1_score: 0.7046
Epoch 57/60
5/5 [==============================] - 16s 3s/step - loss: 0.4346 - precision_score: 0.8274 - recall_score: 0.8125 - f1_score: 0.8198 - val_loss: 0.6085 - val_precision_score: 0.7447 - val_recall_score: 0.7250 - val_f1_score: 0.7346
Epoch 58/60
5/5 [==============================] - 16s 3s/step - loss: 0.3149 - precision_score: 0.8546 - recall_score: 0.8438 - f1_score: 0.8490 - val_loss: 0.5516 - val_precision_score: 0.7281 - val_recall_score: 0.7063 - val_f1_score: 0.7168
Epoch 59/60
5/5 [==============================] - 15s 3s/step - loss: 0.3753 - precision_score: 0.8300 - recall_score: 0.8250 - f1_score: 0.8275 - val_loss: 0.5998 - val_precision_score: 0.7544 - val_recall_score: 0.7312 - val_f1_score: 0.7425
Epoch 60/60
5/5 [==============================] - 17s 3s/step - loss: 0.4313 - precision_score: 0.8094 - recall_score: 0.7937 - f1_score: 0.8014 - val_loss: 0.5503 - val_precision_score: 0.7602 - val_recall_score: 0.7125 - val_f1_score: 0.7349
###Markdown
A model that converges well shows a decreasing loss curve and increasing precision, recall and f1 score curves.
###Code
# Display the precision data
plt.plot(history.history['precision_score'])
plt.plot(history.history['val_precision_score'])
plt.title('model precision')
plt.ylabel('precision')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the recall data
plt.plot(history.history['recall_score'])
plt.plot(history.history['val_recall_score'])
plt.title('model recall')
plt.ylabel('recall')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the F1 score data
plt.plot(history.history['f1_score'])
plt.plot(history.history['val_f1_score'])
plt.title('model f1_score')
plt.ylabel('f1_score')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the loss data
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
**Question**: Evaluate the loss, precision, recall and f1 score plots and explain their behaviour with respect to the convergence of the model.

**Answer**: The plots indicate that training keeps converging until roughly epoch 45; after that point the loss starts to increase while precision, recall and f1 score decrease. We also observe that, from epoch 20 onwards, a gap opens between the training and validation data, with epoch 31 being roughly the worst of all. This is a strong indication of overfitting.

**Question**: What are the validation **precision** and **recall** values? *These values are printed during training; use the last output line, for example:* ```Epoch 10/10 [==============================] - 45s 9s/step - loss: 0.1234 - precision_score: 0.9742 - recall_score: 0.9683 - f1_score: 0.9712 - val_loss: 0.8819 - val_precision_score: 0.6912 - val_recall_score: 0.5649 - val_f1_score: 0.6216``` In the example above, the validation precision, recall and f1 score are, respectively, 69.12%, 56.49% and 62.16%.

**Answer**:
- Precision: 76.02%
- Recall: 71.25%

4.5.2 VGG16 architecture

**Question**: Explain how the VGG16 architecture is built. *If necessary, use diagrams and projects that use this architecture. Also detail its layer topology and show in which situations this architecture tends to succeed and in which it does not.*

**Answer**: The VGG16 architecture consists of groups of convolutional layers, each group followed by a max-pooling layer, and so on. At the end it has fully connected dense layers that perform the classification through a softmax activation function, as shown in the image above. The convolutional layers use the smallest convolution window that still preserves the notion of direction (up, down, left, right and centre), namely 3x3 pixels.
Padding is also applied at the image borders so that the input dimensions are preserved at the output. The max-pooling layers use 2x2 windows with no padding. After the image signal has passed through all the convolutional and max-pooling layers, it reaches the dense layers, which classify it through a softmax activation function. The outline below gives a simplified view of this architecture:

***INPUT:***
- The input to the conv1 layer is a fixed-size 224 x 224 RGB image.

***STAGE 1 (with ReLU activation):***
- two convolutional layers with 64 filters of size 3x3 and same padding.
- one max-pooling layer with stride (2,2).

***STAGE 2:***
- two convolutional layers with 128 filters of size 3x3.
- one max-pooling layer with stride (2,2).

***STAGE 3:***
- three convolutional layers with 256 filters of size 3x3 each and same padding.
- one max-pooling layer with stride (2,2).

***STAGE 4:***
- three convolutional layers with 512 filters of size 3x3 each and same padding.
- one max-pooling layer with stride (2,2).

***STAGE 5:***
- three convolutional layers with 512 filters of size 3x3 each and same padding.
- one max-pooling layer with stride (2,2).

***FULLY CONNECTED + OUTPUT (with ReLU activation):***
- three fully connected dense layers, the last one reducing its input to the number of classes, in this case through a softmax activation.
- all hidden layers use ReLU as their activation function.

References:
- https://www.geeksforgeeks.org/vgg-16-cnn-model/
- https://neurohive.io/en/popular-networks/vgg16/
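To tie the stage outline above to code, here is a minimal Keras sketch of this topology. It is an illustration only: it builds the layer structure with random weights and assumes the 3-class head used in this exercise, unlike the pretrained `VGG16(include_top=False)` backbone loaded in the cell below.

```
from tensorflow.keras import Sequential, layers

def vgg16_like(num_classes=3):
    # Stage structure described above: (number of conv layers, filters) per block.
    model = Sequential()
    for n_convs, filters in [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]:
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, (3, 3), padding='same', activation='relu'))
        model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))
    # Fully connected head; the last layer maps to the number of classes.
    model.add(layers.Flatten())
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dense(num_classes, activation='softmax'))
    model.build(input_shape=(None, 224, 224, 3))
    return model

vgg16_like().summary()
```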
###Code
conv_base = VGG16(include_top=False)
for layer in conv_base.layers:
layer.trainable = False
x = conv_base.output
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation='relu')(x)
predictions = layers.Dense(qtde_classes, activation='softmax')(x)
model = Model(conv_base.input, predictions)
model.summary()
optimizer = keras.optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[precision_score, recall_score, f1_score])
history = model.fit_generator(generator=train_generator,
epochs=qtde_epocas,
validation_steps=5,
steps_per_epoch=5,
validation_data=val_generator)
###Output
Epoch 1/60
5/5 [==============================] - 35s 7s/step - loss: 2.5240 - precision_score: 0.4147 - recall_score: 0.3938 - f1_score: 0.4039 - val_loss: 2.6586 - val_precision_score: 0.3710 - val_recall_score: 0.3688 - val_f1_score: 0.3698
Epoch 2/60
5/5 [==============================] - 35s 7s/step - loss: 1.5291 - precision_score: 0.5797 - recall_score: 0.5625 - f1_score: 0.5709 - val_loss: 1.4010 - val_precision_score: 0.6491 - val_recall_score: 0.6313 - val_f1_score: 0.6398
Epoch 3/60
5/5 [==============================] - 36s 7s/step - loss: 0.9406 - precision_score: 0.7424 - recall_score: 0.7188 - f1_score: 0.7303 - val_loss: 0.8281 - val_precision_score: 0.7187 - val_recall_score: 0.7000 - val_f1_score: 0.7091
Epoch 4/60
5/5 [==============================] - 35s 7s/step - loss: 0.9654 - precision_score: 0.7388 - recall_score: 0.7250 - f1_score: 0.7317 - val_loss: 1.2381 - val_precision_score: 0.5805 - val_recall_score: 0.5625 - val_f1_score: 0.5712
Epoch 5/60
5/5 [==============================] - 39s 8s/step - loss: 0.8153 - precision_score: 0.7337 - recall_score: 0.7250 - f1_score: 0.7293 - val_loss: 0.8335 - val_precision_score: 0.7088 - val_recall_score: 0.6812 - val_f1_score: 0.6943
Epoch 6/60
5/5 [==============================] - 38s 8s/step - loss: 0.8959 - precision_score: 0.6917 - recall_score: 0.6625 - f1_score: 0.6767 - val_loss: 0.7925 - val_precision_score: 0.7635 - val_recall_score: 0.7500 - val_f1_score: 0.7566
Epoch 7/60
5/5 [==============================] - 38s 8s/step - loss: 0.7467 - precision_score: 0.7441 - recall_score: 0.7250 - f1_score: 0.7343 - val_loss: 0.7704 - val_precision_score: 0.7904 - val_recall_score: 0.7750 - val_f1_score: 0.7825
Epoch 8/60
5/5 [==============================] - 40s 8s/step - loss: 0.7021 - precision_score: 0.7589 - recall_score: 0.7500 - f1_score: 0.7544 - val_loss: 0.8305 - val_precision_score: 0.6912 - val_recall_score: 0.6750 - val_f1_score: 0.6829
Epoch 9/60
5/5 [==============================] - 40s 8s/step - loss: 0.7203 - precision_score: 0.7298 - recall_score: 0.7000 - f1_score: 0.7144 - val_loss: 0.6570 - val_precision_score: 0.8212 - val_recall_score: 0.8062 - val_f1_score: 0.8136
Epoch 10/60
5/5 [==============================] - 38s 8s/step - loss: 0.6445 - precision_score: 0.7486 - recall_score: 0.7437 - f1_score: 0.7461 - val_loss: 0.7018 - val_precision_score: 0.7118 - val_recall_score: 0.6812 - val_f1_score: 0.6960
Epoch 11/60
5/5 [==============================] - 41s 8s/step - loss: 0.7010 - precision_score: 0.7227 - recall_score: 0.6938 - f1_score: 0.7077 - val_loss: 0.8628 - val_precision_score: 0.7161 - val_recall_score: 0.7125 - val_f1_score: 0.7143
Epoch 12/60
5/5 [==============================] - 38s 8s/step - loss: 0.7225 - precision_score: 0.7472 - recall_score: 0.7375 - f1_score: 0.7423 - val_loss: 0.8780 - val_precision_score: 0.6759 - val_recall_score: 0.6375 - val_f1_score: 0.6553
Epoch 13/60
5/5 [==============================] - 38s 8s/step - loss: 0.8212 - precision_score: 0.6826 - recall_score: 0.6687 - f1_score: 0.6755 - val_loss: 0.8039 - val_precision_score: 0.7905 - val_recall_score: 0.7812 - val_f1_score: 0.7858
Epoch 14/60
5/5 [==============================] - 36s 7s/step - loss: 0.5914 - precision_score: 0.7626 - recall_score: 0.7375 - f1_score: 0.7495 - val_loss: 0.7567 - val_precision_score: 0.7409 - val_recall_score: 0.7312 - val_f1_score: 0.7360
Epoch 15/60
5/5 [==============================] - 36s 7s/step - loss: 0.5007 - precision_score: 0.7945 - recall_score: 0.7750 - f1_score: 0.7844 - val_loss: 0.7128 - val_precision_score: 0.7450 - val_recall_score: 0.7312 - val_f1_score: 0.7380
Epoch 16/60
5/5 [==============================] - 36s 7s/step - loss: 0.6686 - precision_score: 0.7371 - recall_score: 0.7188 - f1_score: 0.7278 - val_loss: 0.9047 - val_precision_score: 0.7144 - val_recall_score: 0.6812 - val_f1_score: 0.6972
Epoch 17/60
5/5 [==============================] - 36s 7s/step - loss: 0.7623 - precision_score: 0.7110 - recall_score: 0.6938 - f1_score: 0.7021 - val_loss: 0.7424 - val_precision_score: 0.7821 - val_recall_score: 0.7625 - val_f1_score: 0.7721
Epoch 18/60
5/5 [==============================] - 36s 7s/step - loss: 0.6745 - precision_score: 0.7488 - recall_score: 0.7437 - f1_score: 0.7462 - val_loss: 0.5166 - val_precision_score: 0.7778 - val_recall_score: 0.7625 - val_f1_score: 0.7700
Epoch 19/60
5/5 [==============================] - 36s 7s/step - loss: 0.4798 - precision_score: 0.8018 - recall_score: 0.7812 - f1_score: 0.7914 - val_loss: 0.6557 - val_precision_score: 0.7575 - val_recall_score: 0.7437 - val_f1_score: 0.7505
Epoch 20/60
5/5 [==============================] - 37s 7s/step - loss: 0.5306 - precision_score: 0.7934 - recall_score: 0.7688 - f1_score: 0.7807 - val_loss: 0.5827 - val_precision_score: 0.7661 - val_recall_score: 0.7563 - val_f1_score: 0.7611
Epoch 21/60
5/5 [==============================] - 37s 7s/step - loss: 0.5703 - precision_score: 0.7727 - recall_score: 0.7437 - f1_score: 0.7579 - val_loss: 0.6859 - val_precision_score: 0.7255 - val_recall_score: 0.7125 - val_f1_score: 0.7188
Epoch 22/60
5/5 [==============================] - 38s 8s/step - loss: 0.5567 - precision_score: 0.7976 - recall_score: 0.7875 - f1_score: 0.7925 - val_loss: 0.6757 - val_precision_score: 0.7505 - val_recall_score: 0.7312 - val_f1_score: 0.7406
Epoch 23/60
5/5 [==============================] - 36s 7s/step - loss: 0.5837 - precision_score: 0.7953 - recall_score: 0.7750 - f1_score: 0.7847 - val_loss: 0.5374 - val_precision_score: 0.7903 - val_recall_score: 0.7750 - val_f1_score: 0.7824
Epoch 24/60
5/5 [==============================] - 37s 7s/step - loss: 0.5543 - precision_score: 0.7846 - recall_score: 0.7688 - f1_score: 0.7765 - val_loss: 0.7367 - val_precision_score: 0.7060 - val_recall_score: 0.6875 - val_f1_score: 0.6962
Epoch 25/60
5/5 [==============================] - 36s 7s/step - loss: 0.5751 - precision_score: 0.7188 - recall_score: 0.7188 - f1_score: 0.7187 - val_loss: 0.6251 - val_precision_score: 0.8077 - val_recall_score: 0.7812 - val_f1_score: 0.7940
Epoch 26/60
5/5 [==============================] - 37s 7s/step - loss: 0.5663 - precision_score: 0.7463 - recall_score: 0.7125 - f1_score: 0.7289 - val_loss: 0.6719 - val_precision_score: 0.6848 - val_recall_score: 0.6500 - val_f1_score: 0.6668
Epoch 27/60
5/5 [==============================] - 38s 8s/step - loss: 0.6112 - precision_score: 0.7482 - recall_score: 0.7437 - f1_score: 0.7459 - val_loss: 0.6309 - val_precision_score: 0.7789 - val_recall_score: 0.7563 - val_f1_score: 0.7672
Epoch 28/60
5/5 [==============================] - 37s 7s/step - loss: 0.6073 - precision_score: 0.7690 - recall_score: 0.7500 - f1_score: 0.7593 - val_loss: 0.6278 - val_precision_score: 0.7444 - val_recall_score: 0.7312 - val_f1_score: 0.7377
Epoch 29/60
5/5 [==============================] - 36s 7s/step - loss: 0.6708 - precision_score: 0.7089 - recall_score: 0.6750 - f1_score: 0.6912 - val_loss: 0.5021 - val_precision_score: 0.8117 - val_recall_score: 0.8062 - val_f1_score: 0.8089
Epoch 30/60
5/5 [==============================] - 36s 7s/step - loss: 0.5131 - precision_score: 0.8201 - recall_score: 0.8000 - f1_score: 0.8098 - val_loss: 0.5807 - val_precision_score: 0.7706 - val_recall_score: 0.7375 - val_f1_score: 0.7536
Epoch 31/60
5/5 [==============================] - 38s 8s/step - loss: 0.5401 - precision_score: 0.7995 - recall_score: 0.7750 - f1_score: 0.7870 - val_loss: 0.6498 - val_precision_score: 0.7328 - val_recall_score: 0.7188 - val_f1_score: 0.7256
Epoch 32/60
5/5 [==============================] - 36s 7s/step - loss: 0.5369 - precision_score: 0.7996 - recall_score: 0.7625 - f1_score: 0.7804 - val_loss: 0.4654 - val_precision_score: 0.8080 - val_recall_score: 0.7875 - val_f1_score: 0.7975
Epoch 33/60
5/5 [==============================] - 36s 7s/step - loss: 0.4665 - precision_score: 0.8334 - recall_score: 0.8062 - f1_score: 0.8195 - val_loss: 0.6450 - val_precision_score: 0.7539 - val_recall_score: 0.7250 - val_f1_score: 0.7390
Epoch 34/60
5/5 [==============================] - 36s 7s/step - loss: 0.5180 - precision_score: 0.7796 - recall_score: 0.7688 - f1_score: 0.7740 - val_loss: 0.6872 - val_precision_score: 0.7136 - val_recall_score: 0.6875 - val_f1_score: 0.7001
Epoch 35/60
5/5 [==============================] - 37s 7s/step - loss: 0.6372 - precision_score: 0.7483 - recall_score: 0.7250 - f1_score: 0.7362 - val_loss: 0.5639 - val_precision_score: 0.7960 - val_recall_score: 0.7812 - val_f1_score: 0.7885
Epoch 36/60
5/5 [==============================] - 36s 7s/step - loss: 0.6380 - precision_score: 0.7603 - recall_score: 0.7500 - f1_score: 0.7551 - val_loss: 0.8511 - val_precision_score: 0.6732 - val_recall_score: 0.6562 - val_f1_score: 0.6646
Epoch 37/60
5/5 [==============================] - 38s 8s/step - loss: 0.4723 - precision_score: 0.8030 - recall_score: 0.7875 - f1_score: 0.7951 - val_loss: 0.7706 - val_precision_score: 0.7228 - val_recall_score: 0.7188 - val_f1_score: 0.7207
Epoch 38/60
5/5 [==============================] - 37s 7s/step - loss: 0.5925 - precision_score: 0.7655 - recall_score: 0.7563 - f1_score: 0.7608 - val_loss: 0.8949 - val_precision_score: 0.6274 - val_recall_score: 0.6187 - val_f1_score: 0.6230
Epoch 39/60
5/5 [==============================] - 37s 7s/step - loss: 0.5472 - precision_score: 0.7503 - recall_score: 0.7188 - f1_score: 0.7340 - val_loss: 0.7887 - val_precision_score: 0.7324 - val_recall_score: 0.7188 - val_f1_score: 0.7254
Epoch 40/60
5/5 [==============================] - 39s 8s/step - loss: 0.5019 - precision_score: 0.7879 - recall_score: 0.7625 - f1_score: 0.7747 - val_loss: 0.5268 - val_precision_score: 0.7478 - val_recall_score: 0.7437 - val_f1_score: 0.7457
Epoch 41/60
5/5 [==============================] - 39s 8s/step - loss: 0.5375 - precision_score: 0.7272 - recall_score: 0.7188 - f1_score: 0.7229 - val_loss: 0.5641 - val_precision_score: 0.7964 - val_recall_score: 0.7875 - val_f1_score: 0.7919
Epoch 42/60
5/5 [==============================] - 39s 8s/step - loss: 0.4742 - precision_score: 0.7854 - recall_score: 0.7750 - f1_score: 0.7800 - val_loss: 0.7429 - val_precision_score: 0.7440 - val_recall_score: 0.7250 - val_f1_score: 0.7343
Epoch 43/60
5/5 [==============================] - 37s 7s/step - loss: 0.4725 - precision_score: 0.8171 - recall_score: 0.8062 - f1_score: 0.8115 - val_loss: 0.6011 - val_precision_score: 0.7820 - val_recall_score: 0.7625 - val_f1_score: 0.7720
Epoch 44/60
5/5 [==============================] - 39s 8s/step - loss: 0.5209 - precision_score: 0.8232 - recall_score: 0.8125 - f1_score: 0.8178 - val_loss: 0.5616 - val_precision_score: 0.7593 - val_recall_score: 0.7500 - val_f1_score: 0.7546
Epoch 45/60
5/5 [==============================] - 38s 8s/step - loss: 0.4644 - precision_score: 0.7830 - recall_score: 0.7625 - f1_score: 0.7725 - val_loss: 0.5373 - val_precision_score: 0.7875 - val_recall_score: 0.7875 - val_f1_score: 0.7875
Epoch 46/60
5/5 [==============================] - 38s 8s/step - loss: 0.5048 - precision_score: 0.7974 - recall_score: 0.7875 - f1_score: 0.7924 - val_loss: 0.8671 - val_precision_score: 0.6754 - val_recall_score: 0.6500 - val_f1_score: 0.6623
Epoch 47/60
5/5 [==============================] - 36s 7s/step - loss: 0.6261 - precision_score: 0.7092 - recall_score: 0.7000 - f1_score: 0.7044 - val_loss: 0.6452 - val_precision_score: 0.7446 - val_recall_score: 0.7312 - val_f1_score: 0.7378
Epoch 48/60
5/5 [==============================] - 36s 7s/step - loss: 0.5248 - precision_score: 0.8000 - recall_score: 0.8000 - f1_score: 0.8000 - val_loss: 0.5995 - val_precision_score: 0.7449 - val_recall_score: 0.7312 - val_f1_score: 0.7379
Epoch 49/60
5/5 [==============================] - 37s 7s/step - loss: 0.5074 - precision_score: 0.7841 - recall_score: 0.7750 - f1_score: 0.7795 - val_loss: 0.7178 - val_precision_score: 0.7327 - val_recall_score: 0.7125 - val_f1_score: 0.7223
Epoch 50/60
5/5 [==============================] - 37s 7s/step - loss: 0.4705 - precision_score: 0.8222 - recall_score: 0.8125 - f1_score: 0.8173 - val_loss: 0.5841 - val_precision_score: 0.7363 - val_recall_score: 0.7312 - val_f1_score: 0.7337
Epoch 51/60
5/5 [==============================] - 37s 7s/step - loss: 0.4737 - precision_score: 0.8120 - recall_score: 0.7812 - f1_score: 0.7962 - val_loss: 0.7791 - val_precision_score: 0.7017 - val_recall_score: 0.6875 - val_f1_score: 0.6944
Epoch 52/60
5/5 [==============================] - 36s 7s/step - loss: 0.4883 - precision_score: 0.7784 - recall_score: 0.7688 - f1_score: 0.7735 - val_loss: 0.6743 - val_precision_score: 0.7044 - val_recall_score: 0.7000 - val_f1_score: 0.7022
Epoch 53/60
5/5 [==============================] - 36s 7s/step - loss: 0.5567 - precision_score: 0.7555 - recall_score: 0.7375 - f1_score: 0.7463 - val_loss: 0.5801 - val_precision_score: 0.7463 - val_recall_score: 0.7375 - val_f1_score: 0.7417
Epoch 54/60
5/5 [==============================] - 38s 8s/step - loss: 0.3698 - precision_score: 0.8294 - recall_score: 0.8188 - f1_score: 0.8240 - val_loss: 0.7155 - val_precision_score: 0.7265 - val_recall_score: 0.7125 - val_f1_score: 0.7193
Epoch 55/60
5/5 [==============================] - 36s 7s/step - loss: 0.4958 - precision_score: 0.7980 - recall_score: 0.7875 - f1_score: 0.7927 - val_loss: 0.5255 - val_precision_score: 0.7829 - val_recall_score: 0.7688 - val_f1_score: 0.7757
Epoch 56/60
5/5 [==============================] - 36s 7s/step - loss: 0.4243 - precision_score: 0.8305 - recall_score: 0.8000 - f1_score: 0.8148 - val_loss: 0.5356 - val_precision_score: 0.7860 - val_recall_score: 0.7500 - val_f1_score: 0.7675
Epoch 57/60
5/5 [==============================] - 36s 7s/step - loss: 0.3317 - precision_score: 0.8869 - recall_score: 0.8813 - f1_score: 0.8840 - val_loss: 0.5310 - val_precision_score: 0.7721 - val_recall_score: 0.7375 - val_f1_score: 0.7542
Epoch 58/60
5/5 [==============================] - 36s 7s/step - loss: 0.4269 - precision_score: 0.8433 - recall_score: 0.8375 - f1_score: 0.8404 - val_loss: 0.6429 - val_precision_score: 0.7363 - val_recall_score: 0.7312 - val_f1_score: 0.7337
Epoch 59/60
5/5 [==============================] - 37s 7s/step - loss: 0.4382 - precision_score: 0.8042 - recall_score: 0.8000 - f1_score: 0.8021 - val_loss: 0.5176 - val_precision_score: 0.8089 - val_recall_score: 0.7937 - val_f1_score: 0.8012
Epoch 60/60
5/5 [==============================] - 34s 7s/step - loss: 0.4150 - precision_score: 0.8448 - recall_score: 0.8255 - f1_score: 0.8348 - val_loss: 0.7182 - val_precision_score: 0.7375 - val_recall_score: 0.7375 - val_f1_score: 0.7375
###Markdown
A model that converges well shows a decreasing loss curve and increasing precision, recall and f1 score curves.
###Code
# Display the precision data
plt.plot(history.history['precision_score'])
plt.plot(history.history['val_precision_score'])
plt.title('model precision')
plt.ylabel('precision')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the recall data
plt.plot(history.history['recall_score'])
plt.plot(history.history['val_recall_score'])
plt.title('model recall')
plt.ylabel('recall')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the F1 score data
plt.plot(history.history['f1_score'])
plt.plot(history.history['val_f1_score'])
plt.title('model f1_score')
plt.ylabel('f1_score')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the loss data
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
**Question**: Evaluate the loss, precision, recall and f1 score plots and explain their behaviour with respect to the convergence of the model.

**Answer**: The precision, recall and f1 score metrics keep improving until roughly epoch 35, after which the oscillations become larger; after epoch 46 the training and test curves start to drift apart, suggesting that the model may no longer converge as well at higher epoch counts. Another interesting point is that precision, recall and f1 score end up with exactly the same value at the end of epoch 60. To keep the model converging well over more epochs, we could add more training images.

**Question**: What are the validation **precision** and **recall** values? *These values are printed during training; use the last output line, for example:* ```Epoch 10/10 [==============================] - 45s 9s/step - loss: 0.1234 - precision_score: 0.9742 - recall_score: 0.9683 - f1_score: 0.9712 - val_loss: 0.8819 - val_precision_score: 0.6912 - val_recall_score: 0.5649 - val_f1_score: 0.6216``` In the example above, the validation precision, recall and f1 score are, respectively, 69.12%, 56.49% and 62.16%.

**Answer**:
- Precision: 73.75%
- Recall: 73.75%

4.5.3 VGG19 architecture

**Question**: Explain how the VGG19 architecture is built. *If necessary, use diagrams and projects that use this architecture. Also detail its layer topology and show in which situations this architecture tends to succeed and in which it does not.*

**Answer**: The VGG19 architecture follows the same idea as VGG16: it uses convolutional layers with the smallest kernel size, places max-pooling layers between groups of convolutional layers to reduce the spatial dimensions of the image, and, after that, uses fully connected dense layers ending in a softmax activation that performs the classification of the image. The main difference is the depth of the two networks: VGG19 has more convolutional layers in its later blocks (16 instead of 13), which tends to improve convergence but comes at a higher computational cost.

Reference:
- http://datahacker.rs/deep-learning-vgg-16-vs-vgg-19/
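To make the depth difference concrete, a small sketch that just counts the convolutional layers of each backbone (illustration only; `weights=None` is passed so the constructors build the topology without downloading the pretrained weights):

```
from tensorflow.keras.applications import VGG16, VGG19

# Count the convolutional layers of each backbone used in this notebook.
for name, ctor in [('VGG16', VGG16), ('VGG19', VGG19)]:
    backbone = ctor(include_top=False, weights=None)
    n_conv = sum(1 for layer in backbone.layers if 'conv' in layer.name)
    print(name, '->', n_conv, 'conv layers,', backbone.count_params(), 'parameters')
```

With `include_top=False` this reports 13 convolutional layers for VGG16 and 16 for VGG19; the model names come from adding the 3 fully connected layers of the original classification head.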
###Code
conv_base = VGG19(include_top=False)
for layer in conv_base.layers:
layer.trainable = False
x = conv_base.output
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation='relu')(x)
predictions = layers.Dense(qtde_classes, activation='softmax')(x)
model = Model(conv_base.input, predictions)
model.summary()
optimizer = keras.optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[precision_score, recall_score, f1_score])
history = model.fit_generator(generator=train_generator,
epochs=qtde_epocas,
validation_steps=5,
steps_per_epoch=5,
validation_data=val_generator)
###Output
Epoch 1/60
5/5 [==============================] - 43s 9s/step - loss: 3.0195 - precision_score: 0.3532 - recall_score: 0.3375 - f1_score: 0.3447 - val_loss: 1.9208 - val_precision_score: 0.4243 - val_recall_score: 0.4125 - val_f1_score: 0.4181
Epoch 2/60
5/5 [==============================] - 42s 8s/step - loss: 1.4678 - precision_score: 0.5395 - recall_score: 0.5312 - f1_score: 0.5353 - val_loss: 1.0889 - val_precision_score: 0.6608 - val_recall_score: 0.6250 - val_f1_score: 0.6422
Epoch 3/60
5/5 [==============================] - 42s 8s/step - loss: 1.2206 - precision_score: 0.6216 - recall_score: 0.6062 - f1_score: 0.6137 - val_loss: 1.2634 - val_precision_score: 0.6277 - val_recall_score: 0.6125 - val_f1_score: 0.6199
Epoch 4/60
5/5 [==============================] - 43s 9s/step - loss: 1.0428 - precision_score: 0.6272 - recall_score: 0.6125 - f1_score: 0.6197 - val_loss: 1.1252 - val_precision_score: 0.6602 - val_recall_score: 0.6438 - val_f1_score: 0.6518
Epoch 5/60
5/5 [==============================] - 43s 9s/step - loss: 0.8972 - precision_score: 0.7198 - recall_score: 0.7063 - f1_score: 0.7129 - val_loss: 0.9213 - val_precision_score: 0.7093 - val_recall_score: 0.7000 - val_f1_score: 0.7046
Epoch 6/60
5/5 [==============================] - 43s 9s/step - loss: 0.7248 - precision_score: 0.7437 - recall_score: 0.7250 - f1_score: 0.7339 - val_loss: 0.8647 - val_precision_score: 0.7630 - val_recall_score: 0.7250 - val_f1_score: 0.7435
Epoch 7/60
5/5 [==============================] - 43s 9s/step - loss: 0.7725 - precision_score: 0.7053 - recall_score: 0.6938 - f1_score: 0.6994 - val_loss: 1.0753 - val_precision_score: 0.6478 - val_recall_score: 0.6438 - val_f1_score: 0.6457
Epoch 8/60
5/5 [==============================] - 43s 9s/step - loss: 0.8259 - precision_score: 0.6843 - recall_score: 0.6687 - f1_score: 0.6763 - val_loss: 0.7983 - val_precision_score: 0.6893 - val_recall_score: 0.6625 - val_f1_score: 0.6755
Epoch 9/60
5/5 [==============================] - 43s 9s/step - loss: 0.6707 - precision_score: 0.7802 - recall_score: 0.7625 - f1_score: 0.7710 - val_loss: 1.2638 - val_precision_score: 0.6798 - val_recall_score: 0.6750 - val_f1_score: 0.6774
Epoch 10/60
5/5 [==============================] - 44s 9s/step - loss: 0.7833 - precision_score: 0.7491 - recall_score: 0.7312 - f1_score: 0.7399 - val_loss: 0.6344 - val_precision_score: 0.7906 - val_recall_score: 0.7625 - val_f1_score: 0.7761
Epoch 11/60
5/5 [==============================] - 43s 9s/step - loss: 0.8707 - precision_score: 0.6769 - recall_score: 0.6625 - f1_score: 0.6695 - val_loss: 0.9999 - val_precision_score: 0.6928 - val_recall_score: 0.6687 - val_f1_score: 0.6804
Epoch 12/60
5/5 [==============================] - 44s 9s/step - loss: 0.6786 - precision_score: 0.6855 - recall_score: 0.6812 - f1_score: 0.6833 - val_loss: 0.8783 - val_precision_score: 0.7215 - val_recall_score: 0.6938 - val_f1_score: 0.7073
Epoch 13/60
5/5 [==============================] - 41s 8s/step - loss: 0.6722 - precision_score: 0.7481 - recall_score: 0.7481 - f1_score: 0.7481 - val_loss: 0.8197 - val_precision_score: 0.6761 - val_recall_score: 0.6375 - val_f1_score: 0.6560
Epoch 14/60
5/5 [==============================] - 45s 9s/step - loss: 0.7135 - precision_score: 0.7149 - recall_score: 0.7000 - f1_score: 0.7072 - val_loss: 0.6222 - val_precision_score: 0.7859 - val_recall_score: 0.7812 - val_f1_score: 0.7835
Epoch 15/60
5/5 [==============================] - 48s 10s/step - loss: 0.6390 - precision_score: 0.7757 - recall_score: 0.7563 - f1_score: 0.7658 - val_loss: 1.0108 - val_precision_score: 0.6125 - val_recall_score: 0.5938 - val_f1_score: 0.6029
Epoch 16/60
5/5 [==============================] - 46s 9s/step - loss: 0.6208 - precision_score: 0.7959 - recall_score: 0.7812 - f1_score: 0.7884 - val_loss: 0.7572 - val_precision_score: 0.7576 - val_recall_score: 0.7375 - val_f1_score: 0.7473
Epoch 17/60
5/5 [==============================] - 44s 9s/step - loss: 0.7323 - precision_score: 0.7085 - recall_score: 0.7000 - f1_score: 0.7042 - val_loss: 0.8082 - val_precision_score: 0.7327 - val_recall_score: 0.7125 - val_f1_score: 0.7222
Epoch 18/60
5/5 [==============================] - 44s 9s/step - loss: 0.7015 - precision_score: 0.7625 - recall_score: 0.7375 - f1_score: 0.7495 - val_loss: 0.7717 - val_precision_score: 0.7495 - val_recall_score: 0.7312 - val_f1_score: 0.7401
Epoch 19/60
5/5 [==============================] - 44s 9s/step - loss: 0.5518 - precision_score: 0.7806 - recall_score: 0.7375 - f1_score: 0.7584 - val_loss: 0.5529 - val_precision_score: 0.8000 - val_recall_score: 0.7750 - val_f1_score: 0.7873
Epoch 20/60
5/5 [==============================] - 44s 9s/step - loss: 0.5815 - precision_score: 0.7463 - recall_score: 0.7375 - f1_score: 0.7417 - val_loss: 0.6630 - val_precision_score: 0.7378 - val_recall_score: 0.7250 - val_f1_score: 0.7312
Epoch 21/60
5/5 [==============================] - 44s 9s/step - loss: 0.5278 - precision_score: 0.7529 - recall_score: 0.7437 - f1_score: 0.7482 - val_loss: 0.6846 - val_precision_score: 0.7077 - val_recall_score: 0.6812 - val_f1_score: 0.6942
Epoch 22/60
5/5 [==============================] - 44s 9s/step - loss: 0.5072 - precision_score: 0.8221 - recall_score: 0.8000 - f1_score: 0.8107 - val_loss: 0.5682 - val_precision_score: 0.8057 - val_recall_score: 0.7688 - val_f1_score: 0.7864
Epoch 23/60
5/5 [==============================] - 45s 9s/step - loss: 0.4899 - precision_score: 0.8764 - recall_score: 0.8596 - f1_score: 0.8676 - val_loss: 0.8026 - val_precision_score: 0.6624 - val_recall_score: 0.6250 - val_f1_score: 0.6429
Epoch 24/60
5/5 [==============================] - 44s 9s/step - loss: 0.4841 - precision_score: 0.8513 - recall_score: 0.8250 - f1_score: 0.8378 - val_loss: 0.6457 - val_precision_score: 0.7337 - val_recall_score: 0.7250 - val_f1_score: 0.7293
Epoch 25/60
5/5 [==============================] - 44s 9s/step - loss: 0.4235 - precision_score: 0.8306 - recall_score: 0.8250 - f1_score: 0.8278 - val_loss: 0.8500 - val_precision_score: 0.6945 - val_recall_score: 0.6750 - val_f1_score: 0.6844
Epoch 26/60
5/5 [==============================] - 44s 9s/step - loss: 0.4386 - precision_score: 0.8390 - recall_score: 0.8188 - f1_score: 0.8286 - val_loss: 0.6115 - val_precision_score: 0.7810 - val_recall_score: 0.7563 - val_f1_score: 0.7683
Epoch 27/60
5/5 [==============================] - 44s 9s/step - loss: 0.6348 - precision_score: 0.7802 - recall_score: 0.7750 - f1_score: 0.7776 - val_loss: 0.5789 - val_precision_score: 0.7958 - val_recall_score: 0.7750 - val_f1_score: 0.7850
Epoch 28/60
5/5 [==============================] - 44s 9s/step - loss: 0.5271 - precision_score: 0.8028 - recall_score: 0.7875 - f1_score: 0.7950 - val_loss: 0.6107 - val_precision_score: 0.7952 - val_recall_score: 0.7750 - val_f1_score: 0.7849
Epoch 29/60
5/5 [==============================] - 44s 9s/step - loss: 0.5273 - precision_score: 0.8368 - recall_score: 0.7937 - f1_score: 0.8144 - val_loss: 0.6593 - val_precision_score: 0.7649 - val_recall_score: 0.7500 - val_f1_score: 0.7573
Epoch 30/60
5/5 [==============================] - 45s 9s/step - loss: 0.5115 - precision_score: 0.7764 - recall_score: 0.7563 - f1_score: 0.7661 - val_loss: 0.7564 - val_precision_score: 0.7136 - val_recall_score: 0.6812 - val_f1_score: 0.6968
Epoch 31/60
5/5 [==============================] - 44s 9s/step - loss: 0.6153 - precision_score: 0.7346 - recall_score: 0.7125 - f1_score: 0.7232 - val_loss: 0.7778 - val_precision_score: 0.7030 - val_recall_score: 0.6938 - val_f1_score: 0.6983
Epoch 32/60
5/5 [==============================] - 44s 9s/step - loss: 0.4803 - precision_score: 0.7705 - recall_score: 0.7563 - f1_score: 0.7630 - val_loss: 0.7566 - val_precision_score: 0.7102 - val_recall_score: 0.6875 - val_f1_score: 0.6986
Epoch 33/60
5/5 [==============================] - 44s 9s/step - loss: 0.5372 - precision_score: 0.7659 - recall_score: 0.7375 - f1_score: 0.7512 - val_loss: 0.4888 - val_precision_score: 0.7840 - val_recall_score: 0.7688 - val_f1_score: 0.7762
Epoch 34/60
5/5 [==============================] - 44s 9s/step - loss: 0.4206 - precision_score: 0.8307 - recall_score: 0.7875 - f1_score: 0.8077 - val_loss: 0.7516 - val_precision_score: 0.7505 - val_recall_score: 0.7312 - val_f1_score: 0.7406
Epoch 35/60
5/5 [==============================] - 44s 9s/step - loss: 0.3421 - precision_score: 0.8607 - recall_score: 0.8500 - f1_score: 0.8553 - val_loss: 0.6659 - val_precision_score: 0.7476 - val_recall_score: 0.7375 - val_f1_score: 0.7425
Epoch 36/60
5/5 [==============================] - 44s 9s/step - loss: 0.4814 - precision_score: 0.7921 - recall_score: 0.7875 - f1_score: 0.7898 - val_loss: 0.6602 - val_precision_score: 0.7706 - val_recall_score: 0.7563 - val_f1_score: 0.7633
Epoch 37/60
5/5 [==============================] - 44s 9s/step - loss: 0.5947 - precision_score: 0.7685 - recall_score: 0.7437 - f1_score: 0.7558 - val_loss: 0.5757 - val_precision_score: 0.7447 - val_recall_score: 0.7250 - val_f1_score: 0.7346
Epoch 38/60
5/5 [==============================] - 44s 9s/step - loss: 0.4072 - precision_score: 0.8266 - recall_score: 0.8062 - f1_score: 0.8162 - val_loss: 0.6710 - val_precision_score: 0.7043 - val_recall_score: 0.6875 - val_f1_score: 0.6957
Epoch 39/60
5/5 [==============================] - 45s 9s/step - loss: 0.4641 - precision_score: 0.8145 - recall_score: 0.7937 - f1_score: 0.8039 - val_loss: 0.7979 - val_precision_score: 0.7065 - val_recall_score: 0.6938 - val_f1_score: 0.7000
Epoch 40/60
5/5 [==============================] - 1529s 306s/step - loss: 0.6247 - precision_score: 0.7552 - recall_score: 0.7500 - f1_score: 0.7526 - val_loss: 0.5872 - val_precision_score: 0.7971 - val_recall_score: 0.7625 - val_f1_score: 0.7793
Epoch 41/60
5/5 [==============================] - 40s 8s/step - loss: 0.4264 - precision_score: 0.8181 - recall_score: 0.8125 - f1_score: 0.8153 - val_loss: 0.6287 - val_precision_score: 0.6970 - val_recall_score: 0.6812 - val_f1_score: 0.6889
Epoch 42/60
5/5 [==============================] - 41s 8s/step - loss: 0.5539 - precision_score: 0.7888 - recall_score: 0.7688 - f1_score: 0.7785 - val_loss: 0.6551 - val_precision_score: 0.7118 - val_recall_score: 0.6938 - val_f1_score: 0.7025
Epoch 43/60
5/5 [==============================] - 42s 8s/step - loss: 0.5720 - precision_score: 0.7596 - recall_score: 0.7375 - f1_score: 0.7481 - val_loss: 0.6224 - val_precision_score: 0.8011 - val_recall_score: 0.7812 - val_f1_score: 0.7909
Epoch 44/60
5/5 [==============================] - 41s 8s/step - loss: 0.4904 - precision_score: 0.7673 - recall_score: 0.7625 - f1_score: 0.7649 - val_loss: 0.5881 - val_precision_score: 0.7376 - val_recall_score: 0.7188 - val_f1_score: 0.7280
Epoch 45/60
5/5 [==============================] - 42s 8s/step - loss: 0.3962 - precision_score: 0.8155 - recall_score: 0.8000 - f1_score: 0.8076 - val_loss: 0.6702 - val_precision_score: 0.7183 - val_recall_score: 0.7000 - val_f1_score: 0.7089
Epoch 46/60
5/5 [==============================] - 42s 8s/step - loss: 0.4141 - precision_score: 0.8022 - recall_score: 0.7812 - f1_score: 0.7915 - val_loss: 0.5774 - val_precision_score: 0.7356 - val_recall_score: 0.7125 - val_f1_score: 0.7238
Epoch 47/60
5/5 [==============================] - 42s 8s/step - loss: 0.5124 - precision_score: 0.7984 - recall_score: 0.7875 - f1_score: 0.7929 - val_loss: 0.5987 - val_precision_score: 0.7974 - val_recall_score: 0.7875 - val_f1_score: 0.7924
Epoch 48/60
5/5 [==============================] - 42s 8s/step - loss: 0.4953 - precision_score: 0.8025 - recall_score: 0.7875 - f1_score: 0.7948 - val_loss: 0.5955 - val_precision_score: 0.7651 - val_recall_score: 0.7500 - val_f1_score: 0.7573
Epoch 49/60
5/5 [==============================] - 43s 9s/step - loss: 0.3979 - precision_score: 0.8338 - recall_score: 0.8125 - f1_score: 0.8229 - val_loss: 0.5645 - val_precision_score: 0.7800 - val_recall_score: 0.7688 - val_f1_score: 0.7742
Epoch 50/60
5/5 [==============================] - 43s 9s/step - loss: 0.4541 - precision_score: 0.8230 - recall_score: 0.8125 - f1_score: 0.8177 - val_loss: 0.6166 - val_precision_score: 0.7583 - val_recall_score: 0.7437 - val_f1_score: 0.7509
Epoch 51/60
5/5 [==============================] - 43s 9s/step - loss: 0.5403 - precision_score: 0.7559 - recall_score: 0.7375 - f1_score: 0.7465 - val_loss: 0.6025 - val_precision_score: 0.7688 - val_recall_score: 0.7688 - val_f1_score: 0.7687
Epoch 52/60
5/5 [==============================] - 43s 9s/step - loss: 0.4298 - precision_score: 0.8068 - recall_score: 0.7812 - f1_score: 0.7937 - val_loss: 0.5187 - val_precision_score: 0.7705 - val_recall_score: 0.7563 - val_f1_score: 0.7632
Epoch 53/60
5/5 [==============================] - 66s 13s/step - loss: 0.4946 - precision_score: 0.8056 - recall_score: 0.7812 - f1_score: 0.7931 - val_loss: 0.7200 - val_precision_score: 0.7346 - val_recall_score: 0.7250 - val_f1_score: 0.7296
Epoch 54/60
5/5 [==============================] - 43s 9s/step - loss: 0.4003 - precision_score: 0.8238 - recall_score: 0.8188 - f1_score: 0.8212 - val_loss: 0.6381 - val_precision_score: 0.7213 - val_recall_score: 0.7063 - val_f1_score: 0.7136
Epoch 55/60
5/5 [==============================] - 44s 9s/step - loss: 0.4457 - precision_score: 0.8280 - recall_score: 0.8125 - f1_score: 0.8200 - val_loss: 0.6974 - val_precision_score: 0.7288 - val_recall_score: 0.7188 - val_f1_score: 0.7237
Epoch 56/60
5/5 [==============================] - 43s 9s/step - loss: 0.4974 - precision_score: 0.8127 - recall_score: 0.8072 - f1_score: 0.8099 - val_loss: 0.7724 - val_precision_score: 0.7645 - val_recall_score: 0.7500 - val_f1_score: 0.7571
Epoch 57/60
5/5 [==============================] - 43s 9s/step - loss: 0.5474 - precision_score: 0.8155 - recall_score: 0.8000 - f1_score: 0.8076 - val_loss: 0.8046 - val_precision_score: 0.7194 - val_recall_score: 0.7063 - val_f1_score: 0.7127
Epoch 58/60
5/5 [==============================] - 44s 9s/step - loss: 0.5038 - precision_score: 0.8062 - recall_score: 0.8062 - f1_score: 0.8062 - val_loss: 0.6603 - val_precision_score: 0.7685 - val_recall_score: 0.7500 - val_f1_score: 0.7588
Epoch 59/60
5/5 [==============================] - 44s 9s/step - loss: 0.5164 - precision_score: 0.8236 - recall_score: 0.8125 - f1_score: 0.8180 - val_loss: 0.7273 - val_precision_score: 0.7341 - val_recall_score: 0.7250 - val_f1_score: 0.7295
Epoch 60/60
5/5 [==============================] - 44s 9s/step - loss: 0.5337 - precision_score: 0.7648 - recall_score: 0.7500 - f1_score: 0.7572 - val_loss: 0.6374 - val_precision_score: 0.7832 - val_recall_score: 0.7688 - val_f1_score: 0.7758
###Markdown
A model that converges well shows a decreasing loss curve and increasing precision, recall and f1 score curves.
###Code
# Display the precision data
plt.plot(history.history['precision_score'])
plt.plot(history.history['val_precision_score'])
plt.title('model precision')
plt.ylabel('precision')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the recall data
plt.plot(history.history['recall_score'])
plt.plot(history.history['val_recall_score'])
plt.title('model recall')
plt.ylabel('recall')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the F1 score data
plt.plot(history.history['f1_score'])
plt.plot(history.history['val_f1_score'])
plt.title('model f1_score')
plt.ylabel('f1_score')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the loss data
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
**Question**: Evaluate the loss, precision, recall and f1 score plots and explain their behaviour with respect to the convergence of the model.

**Answer**: The model was converging very well until roughly epoch 20; after that, all metrics show a gap between the training and test data, with a stronger tendency to drift apart after epoch 50, and the loss also shows a tendency to increase. This may indicate that the model is overfitting the data.

**Question**: What are the validation **precision** and **recall** values? *These values are printed during training; use the last output line, for example:* ```Epoch 10/10 [==============================] - 45s 9s/step - loss: 0.1234 - precision_score: 0.9742 - recall_score: 0.9683 - f1_score: 0.9712 - val_loss: 0.8819 - val_precision_score: 0.6912 - val_recall_score: 0.5649 - val_f1_score: 0.6216``` In the example above, the validation precision, recall and f1 score are, respectively, 69.12%, 56.49% and 62.16%.

**Answer**:
- Precision: 78.32%
- Recall: 76.88%

4.6 Architecture comparison

Fill in the table below with the values of the performance indicators reported above.

_The F1-Score is given by 2 * (Precision * Recall) / (Precision + Recall)._

| Model    | Precision | Recall  | F1-Score |
|----------|-----------|---------|----------|
| ResNet50 | 76.02 %   | 71.25 % | 73.49 %  |
| VGG16    | 73.75 %   | 73.75 % | 73.75 %  |
| VGG19    | 78.32 %   | 76.88 % | 77.58 %  |

4.7 Conclusions

Analyse the results in the architecture comparison table and explain the main reasons why each model obtained its result.

**Answer**: The model that performed best was the VGG19 network: it came out ahead of the other two in all three metrics (precision, recall and f1-score). Looking at the plots of each model, ResNet50 shows the largest gap between training and test data across the epochs, suggesting it is the most prone to overfitting. VGG16 is the most stable of the three in the metrics analysed, showing a fairly faithful fit to the data over the epochs; however, it had some large spikes in which the training and test curves drifted far apart, and in the end it finished with some metrics lower than the other two models and, curiously, with the same value for precision, recall and f1-score. VGG19 achieved the best result at the end of the 60 epochs; even so, over the course of training its fit was not as steady as VGG16's, yet its final result surpassed all the other models.
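As a quick sanity check on the F1-Score column of the table in section 4.6, the sketch below applies the stated formula to the table's precision/recall values. Small differences from the tabulated F1 values are expected: the table takes the `val_f1_score` logged at the last epoch, while the formula here is applied to the already-averaged precision and recall.

```
# Recompute F1 from the precision/recall values in the comparison table.
results = {
    'ResNet50': (0.7602, 0.7125),
    'VGG16':    (0.7375, 0.7375),
    'VGG19':    (0.7832, 0.7688),
}
for model_name, (precision, recall) in results.items():
    f1 = 2 * (precision * recall) / (precision + recall)
    print(f'{model_name}: F1 = {100 * f1:.2f}%')
```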
4.8 Extra approach

Considering the other classifiers, choose another one that has not been used yet and implement it below. At the end, compare the results and explain them.

_Do not forget to use the appropriate imports for each model. The implementation must respect the same conditions, such as the split value and the number of images, so that the models can be compared._

We were not able to run our implementation below because of its high CPU usage; every time we tried, it ended up freezing our computers, and every member of the group tried. Still, so that the effort is not lost, we believe the network implemented below behaves similarly to VGG16, probably with somewhat lower performance, and for that reason we think it is not as relevant as the other models already covered in this notebook.
###Code
# IMPLEMENT
# In this implementation we add the layers manually to build a simpler model than the ones covered in this notebook.
# A few helper functions take care of the initial preprocessing of the images.
# Edge-detection helper
import cv2
def border_detection(img_path):
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.resize(img, (224,224))
img = np.uint8(img)
img = cv2.Canny(img, 50, 80)
img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
return img
# Transform the images into their edge-detected versions
transformed_train_generator = []
img_train_label = []
for img_path, img_label in zip(train_generator.filepaths, train_generator.labels):
transformed_train_generator += [border_detection(img_path)]
img_array_label = [0,0,0]
img_array_label[img_label] = 1
img_train_label += [img_array_label]
transformed_train_generator = np.array(transformed_train_generator, dtype=np.float64)
img_train_label = np.array(img_train_label, dtype=np.uint8)
transformed_val_generator = []
img_val_label = []
for img_path, img_label in zip(val_generator.filepaths, val_generator.labels):
transformed_val_generator += [border_detection(img_path)]
img_array_label = [0,0,0]
img_array_label[img_label] = 1
img_val_label += [img_array_label]
transformed_val_generator = np.array(transformed_val_generator, dtype=np.float64)
img_val_label = np.array(img_val_label, dtype=np.uint8)
model = Sequential()
model.add(Conv2D(64, (3,3), padding='same'))
model.add(Conv2D(64, (3,3), padding='same'))
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(128, (3,3), padding='same'))
model.add(Conv2D(128, (3,3), padding='same'))
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(256, (3,3), padding='same'))
model.add(Conv2D(256, (3,3), padding='same'))
model.add(MaxPooling2D((2,2)))
model.add(layers.GlobalAveragePooling2D())
model.add(Dense(128, activation='relu'))  # ReLU head, matching the Dense(128, activation='relu') used for the other backbones
model.add(Dense(3, activation='softmax'))
model.build(input_shape=(None,224,224,3))
model.summary()
optimizer = optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[precision_score, recall_score, f1_score])
history = model.fit(x=transformed_train_generator, y=img_train_label,
                    validation_data=(transformed_val_generator, img_val_label),  # needed for the val_* curves plotted below
                    epochs=qtde_epocas, steps_per_epoch=5)
# Display the precision data
plt.plot(history.history['precision_score'])
plt.plot(history.history['val_precision_score'])
plt.title('model precision')
plt.ylabel('precision')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the recall data
plt.plot(history.history['recall_score'])
plt.plot(history.history['val_recall_score'])
plt.title('model recall')
plt.ylabel('recall')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the F1 score data
plt.plot(history.history['f1_score'])
plt.plot(history.history['val_f1_score'])
plt.title('model f1_score')
plt.ylabel('f1_score')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Display the loss data
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
###Output
_____no_output_____
|
exploration/analysis.ipynb
|
###Markdown
In this notebook:
* SemEval 2013 Task 13
* Ablation results for the "Word Sense Induction with Neural biLM and Symmetric Patterns" paper
###Code
import pandas as pd
import numpy as np
import sys
sys.path.append("..")
from spwsi.semeval_utils import generate_sem_eval_2013
from collections import Counter, defaultdict
titles_pos = {'j': 'ADJ.', 'n': 'NOUN', 'v': 'VERB'}
target_counts = Counter()
targets_by_pos = defaultdict(set)
print('loading dataset instances statistics...')
for _, _, instance_id in generate_sem_eval_2013('../resources/SemEval-2013-Task-13-test-data/'):
target = instance_id.rsplit('.', 1)[0]
pos = target.split('.')[-1]
targets_by_pos[pos].add(target)
target_counts[target] += 1
to_remove = []
total_instances = 0
for target, count in target_counts.items():
if count < 50:
to_remove.append(target)
print('removing %s from analysis as it has only %d labeled instances - other targets have around 100' % (
target, count))
else:
total_instances += count
print()
print('After removing instances:')
total_targets = 0
for pos, targets_set in targets_by_pos.items():
targets_set -= set(to_remove)
total_targets += len(targets_set)
print('%d targets with part of speech %s' % (len(targets_set), titles_pos[pos]))
print('in total, %d instances from %d targets' % (total_instances, total_targets))
print()
print('Note: this pruning is done only for the part of speech break-down exploration below and isn\'t done during WSI')
from collections import defaultdict
target_senses = defaultdict(set)
with open('../resources/SemEval-2013-Task-13-test-data/keys/gold/all.key', 'r') as fin:
for line in fin:
target, inst, senses = line.strip().split(' ', 2)
if target in to_remove:
continue
senses = [x.split('/')[0] for x in senses.split()]
target_senses[target].update(senses)
rows = []
for target, senses in target_senses.items():
pos = target.split('.')[-1]
rows.append((pos, len(senses)))
dfs = pd.DataFrame(rows, columns=['pos', 'count_senses'])
print('Number of senses per target, by part of speech:')
print()
print('ALL mean:%.2f std:%.2f' % (dfs.count_senses.mean(), dfs.count_senses.std()))
titles_pos = {'j': 'ADJ.', 'n': 'NOUN', 'v': 'VERB'}
print()
for pos, title in titles_pos.items():
print('%s mean:%.2f std:%.2f' % (
title, dfs[dfs.pos == pos].count_senses.mean(), dfs[dfs.pos == pos].count_senses.std()))
###Output
Number of senses per target, by part of speech:
ALL mean:6.94 std:2.71
ADJ. mean:5.90 std:1.37
NOUN mean:7.32 std:2.21
VERB mean:7.11 std:3.54
###Markdown
effect of number of clusters on task score
###Code
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = 'Serif'
dfnc = pd.read_csv('n_clusters.data.csv.gz')
titles_pos = {'all': 'ALL', 'j': 'ADJ.', 'n': 'NOUN', 'v': 'VERB'}
dfnc['pos'] = dfnc.target.apply(lambda x: 'ALL' if x == 'all' else titles_pos[x.split('.')[1]])
dfnc = dfnc.sort_values('pos')
grouped = dfnc.groupby(['n_clusters', 'pos']).mean().unstack().AVG * 100
ax = grouped.plot(figsize=(6, 6), style=['--', '-', ':', '-.'])
# ax.set_title('AVG by number of clusters', fontsize=20)
ax.set_xlabel('Number of clusters', fontsize=16)
ax.set_ylabel('AVG', fontsize=15)
plt.xticks(np.arange(4, 16))
plt.yticks(np.arange(18, 29))
ax.legend(grouped.columns, fontsize=15)
ax.axhline(20.58, color="black", alpha=0.3, linewidth=1)
ax.text(12.3, 20.7, "MCC-S(20.58)", size=12)
plt.show()
###Output
_____no_output_____
###Markdown
Ablation, broken down by part of speech
###Code
import matplotlib.patheffects as PathEffects
from matplotlib.patches import ConnectionPatch
import matplotlib.pyplot as plt
# ablations
dfa = pd.read_csv('ablation.data.csv.gz').query('target == "all"')[
['AVG', 'FBC', 'FNMI', 'disable_lemmatization', 'disable_symmetric_patterns', 'disable_tfidf']]
all_settings = dict(
vanilla='disable_lemmatization == False and disable_symmetric_patterns == False and disable_tfidf == False',
sp='disable_lemmatization == False and disable_symmetric_patterns == True and disable_tfidf == False',
lem='disable_lemmatization == True and disable_symmetric_patterns == False and disable_tfidf == False',
tfidf='disable_lemmatization == False and disable_symmetric_patterns == False and disable_tfidf == True',
sp_lem='disable_lemmatization == True and disable_symmetric_patterns == True and disable_tfidf == False',
all_flags='disable_lemmatization == True and disable_symmetric_patterns == True and disable_tfidf == True',
)
res = []
for settings, query in all_settings.items():
avgs = dfa.query(query)['AVG'] * 100
fnmis = dfa.query(query)['FNMI'] * 100
fbcs = dfa.query(query)['FBC'] * 100
res.append((settings, avgs.mean(), avgs.std(), fnmis.mean(), fnmis.std(), fbcs.mean(), fbcs.std()))
ablations = pd.DataFrame(res,
columns=['removed', 'AVG_mean', 'AVG_std', 'FNMI_mean', 'FNMI_std', 'FBC_mean', 'FBC_std'])
print('Ablation results:')
print()
print(ablations.round(2))
print()
print()
# ablations by pos
dfa = pd.read_csv('ablation.data.csv.gz').query('target != "all"')[
['run_name', 'AVG', 'target', 'disable_lemmatization', 'disable_symmetric_patterns', 'disable_tfidf']]
dfa['pos'] = dfa.target.apply(lambda x: 'all' if x == 'all' else x.split('.')[1])
dfa = dfa[~dfa['target'].isin(to_remove)]
# to make it comparable to all which is a mean, we first mean across run by POS
dfa = dfa.groupby(['run_name', 'pos']).mean().reset_index()
res = []
for pos in dfa['pos'].unique():
for settings, query in all_settings.items():
avgs = dfa[dfa.pos == pos].query(query)['AVG'] * 100
res.append((settings, pos, avgs.mean(), avgs.std()))
ablations_pos = pd.DataFrame(res, columns=['removed', 'pos', 'AVG_mean', 'AVG_std'])
print('Ablation by pos:')
print()
print(ablations_pos.round(2))
print()
print()
ablations_full = pd.concat([ablations, ablations_pos], sort=True).fillna('all')
f, axs = plt.subplots(4, 1, figsize=(7.5, 15), sharey=True)
titles_pos = {'all': 'ALL', 'j': 'ADJ.', 'n': 'NOUN', 'v': 'VERB'}
labels = {'vanilla': 'FULL', 'sp': 'w/o SP', 'lem': 'w/o LEM', 'tfidf': 'w/o TFIDF', 'sp_lem': 'w/o SP, LEM',
'all_flags': 'w/o ALL'}
position = {'vanilla': 5, 'sp': 4, 'lem': 3, 'tfidf': 2, 'sp_lem': 1, 'all_flags': 0}
pallete = {'vanilla': '#444444', 'sp': '#555555', 'lem': '#666666', 'tfidf': '#777777', 'sp_lem': '#888888',
'all_flags': '#999999'}
font = {'fontname': 'DejaVu Serif', 'size': 10}
for idx, (pos, ax) in enumerate(zip(['all', 'j', 'n', 'v'], axs)):
data = ablations_full[ablations_full.pos == pos]
y_laybles = [labels[x] for x in data.removed.values]
colors = [pallete[x] for x in data.removed.values]
vals = data['AVG_mean'].values
stds = data['AVG_std'].values
y_pos = [position[x] for x in data.removed.values]
ax.barh(y_pos, vals, xerr=stds, align='center', color=['#666666'],
ecolor='black', capsize=7)
ax.spines['bottom'].set_visible(idx == 3)
ax.spines['top'].set_visible(idx == 0)
ax.set_title(titles_pos[pos], y=0.02, size=12)
ax.set_yticks(y_pos)
for x in ax.get_xticklabels():
x.set_size(15)
ax.set_xlim(0, 30)
ax.set_ylim(-1.3, 5.5)
ax.set_yticklabels(y_laybles, size=12)
for i in range(6):
ax.text(1, i - 0.15, "%.1f±%.1f" % (vals[5 - i], stds[5 - i]), color="white", size=14)
axs[0].xaxis.tick_top()
for x in axs[0].get_xticklabels():
x.set_size(15)
axs[1].tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
axs[2].tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
axs[-1].set_xlabel('AVG', fontsize=15)
con = ConnectionPatch(xyA=(23.56, -1.3), xyB=(23.56, 5.5), coordsA="data", coordsB="data", alpha=0.5,
axesA=axs[-1], axesB=axs[0], color="k", ls=':', lw=1)
axs[-1].add_artist(con)
axs[-1].text(23.56, -1, 'best reported score (23.56)', rotation=270, size=15, verticalalignment='bottom')
con = ConnectionPatch(xyA=(20.58, -1.3), xyB=(20.58, 5.5), coordsA="data", coordsB="data", alpha=0.5,
axesA=axs[-1], axesB=axs[0], color="k", ls=':', lw=1)
axs[-1].add_artist(con)
txt = axs[-1].text(20.58, -1, '(20.58)', rotation=270, size=15, verticalalignment='bottom')
# txt.set_path_effects([PathEffects.withStroke(linewidth=1, foreground='#333333')])
plt.subplots_adjust(top=0.92, bottom=0.08, left=0.05, right=0.95, hspace=0,
wspace=0)
plt.show()
###Output
Ablation results:
removed AVG_mean AVG_std FNMI_mean FNMI_std FBC_mean FBC_std
0 vanilla 25.43 0.48 11.26 0.43 57.49 0.23
1 sp 23.36 0.48 9.83 0.40 55.51 0.34
2 lem 22.39 0.52 9.54 0.43 52.61 0.25
3 tfidf 23.32 0.65 9.19 0.52 59.21 0.27
4 sp_lem 23.08 0.56 9.59 0.43 55.58 0.44
5 all_flags 23.24 0.60 9.49 0.50 56.92 0.34
Ablation by pos:
removed pos AVG_mean AVG_std
0 vanilla j 27.50 1.60
1 sp j 24.30 1.19
2 lem j 25.68 1.49
3 tfidf j 24.36 1.85
4 sp_lem j 24.08 1.19
5 all_flags j 25.03 1.16
6 vanilla n 22.88 0.82
7 sp n 22.59 0.70
8 lem n 22.33 0.62
9 tfidf n 19.54 1.16
10 sp_lem n 22.37 0.81
11 all_flags n 22.40 0.95
12 vanilla v 22.78 0.65
13 sp v 20.45 0.75
14 lem v 17.01 0.45
15 tfidf v 21.42 0.80
16 sp_lem v 19.86 0.90
17 all_flags v 19.51 0.75
###Markdown
Why symmetric patterns without lemmatization fail on verbs. In the code that follows we show why using symmetric patterns without lemmatization is a bad idea. Since symmetric-pattern substitutes tend to agree on tense with the disambiguated word, differently tensed occurrences of a word, even with the same sense, get different substitutes - each matching the tense of the disambiguated occurrence (e.g. substitutes like "suggests" for one occurrence of the verb suggest and "suggested" for another). We predict substitute representatives as done in the paper, cluster them, and show that the clustering creates uni-tensed groups, disregarding the actual sense. We do this by fitting a logistic regression model on the clusters created and looking at the LR coefficients. Note that these features aren't the best substitutes for the target, but they are good indicators of the separation among the senses of the given target.
###Code
print('verb targets:\n'+' '.join(targets_by_pos['v']))
target_target = 'suggest.v'
# this essentially recreates our method, induce clusters for target_target and train a logistic regression model to
# find the most influential features (words) for the given clusters
from spwsi.spwsi import DEFAULT_PARAMS
from spwsi.semeval_utils import generate_sem_eval_2013
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.cluster import AgglomerativeClustering
from sklearn.pipeline import make_pipeline
from collections import Counter
from sklearn import linear_model
from spwsi.bilm_elmo import BilmElmo
# load dataset for target_target
target_target_insts = dict()
for tokens, target_idx, inst_id in generate_sem_eval_2013('../resources/SemEval-2013-Task-13-test-data'):
target = inst_id.rsplit('.', 1)[0]
if target == target_target:
target_target_insts[inst_id] = (tokens, target_idx)
# create an LM for predicting substitutes
CUDA_DEVICE = 0
elmo_vocab_path = '../resources/vocab-2016-09-10.txt'
BilmElmo.create_lemmatized_vocabulary_if_needed(elmo_vocab_path)
elmo_as_lm = BilmElmo(CUDA_DEVICE, '../resources/elmo_2x4096_512_2048cnn_2xhighway_softmax_weights.hdf5',
elmo_vocab_path,
batch_size=DEFAULT_PARAMS['lm_batch_size'],
cutoff_elmo_vocab=DEFAULT_PARAMS['cutoff_lm_vocab'])
# we'll repeat once with lemmatization and once without
disable_tfidf = False
disable_symmetric_patterns = False
for disable_lemmatization in True, False:
print('disable_lemmatization', disable_lemmatization)
# draw representatives as done in the paper
n_clusters = DEFAULT_PARAMS['n_clusters']
inst_ids_to_representatives_sp_no_lem = elmo_as_lm.predict_sent_substitute_representatives(
target_target_insts, DEFAULT_PARAMS['n_represent'], DEFAULT_PARAMS['n_samples_side'],
disable_symmetric_patterns, disable_lemmatization,
DEFAULT_PARAMS['prediction_cutoff'])
inst_ids_ordered = list(inst_ids_to_representatives_sp_no_lem.keys())
lemma = inst_ids_ordered[0].rsplit('.', 1)[0]
representatives = [y for x in inst_ids_ordered for y in inst_ids_to_representatives_sp_no_lem[x]]
n_represent = len(representatives) // len(inst_ids_ordered)
to_pipeline = [DictVectorizer()]
if not disable_tfidf:
to_pipeline.append(TfidfTransformer())
data_transformer = make_pipeline(*to_pipeline)
transformed = data_transformer.fit_transform(representatives).todense()
clustering = AgglomerativeClustering(n_clusters=n_clusters, linkage='average', affinity='cosine')
clustering.fit(transformed)
senses = {}
for i, inst_id in enumerate(inst_ids_ordered):
inst_id_clusters = Counter(clustering.labels_[i * n_represent:
(i + 1) * n_represent])
senses[inst_id] = inst_id_clusters
# we fit a logistic regression to find indicative words for that sense
clusters_centers = []
for cluster_idx in set(clustering.labels_):
clusters_centers.append(
np.array(np.mean(transformed[np.where(clustering.labels_ == cluster_idx)], 0)).reshape(-1))
logistic = linear_model.LogisticRegression(fit_intercept=False)
logistic.fit(clusters_centers, range(n_clusters))
# we print the results and see senses are grouped by tense when lemmatization isn't done but symmetric patterns is
for cluster_idx in range(n_clusters):
print('cluster', cluster_idx)
for inst_id, senses_inst in senses.items():
best_sense, _ = senses_inst.most_common()[0]
if best_sense == cluster_idx:
tokens = target_target_insts[inst_id][0].copy()
idx_int_tokens = target_target_insts[inst_id][1]
word = tokens[idx_int_tokens]
tokens[idx_int_tokens] = '***' + word + '***'
print(' '.join(tokens))
break
best_features = np.argsort(logistic.coef_[cluster_idx])[-5:]
best_words = [to_pipeline[0].feature_names_[x] for x in best_features]
print(best_words)
print()
print()
print()
###Output
/home/nlp/asafam/miniconda2/envs/py36/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Statistics on how tense correlates to sense. We calculate the mean normalized mutual information (NMI) between tense and sense. Values are between 0 and 1, where 1 means perfect correlation. We expect some correlation between them, as seen in the gold-label mean NMI.
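For reference, a hedged note on the quantity being computed: scikit-learn's `normalized_mutual_info_score` divides the mutual information between the two labelings by a mean of their entropies (whether the geometric or arithmetic mean is used depends on the scikit-learn version), so roughly

$$
\mathrm{NMI}(S, T) \;=\; \frac{I(S;T)}{\operatorname{mean}\left(H(S),\, H(T)\right)},
\qquad
I(S;T) \;=\; \sum_{s,\,t} p(s,t)\,\log\frac{p(s,t)}{p(s)\,p(t)}
$$

where $S$ stands for the sense labels and $T$ for the tense tags of a given target.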
###Code
from collections import defaultdict
from sklearn.metrics.cluster import normalized_mutual_info_score
# find tense in dataset instances:
import spacy
nlp = spacy.load("en", disable=['ner'])
inst_id_to_tense = {}
for tokens, target_idx, inst_id in generate_sem_eval_2013('../resources/SemEval-2013-Task-13-test-data'):
lemma_pos = inst_id.rsplit('.', 1)[0]
pos = lemma_pos.split('.')[-1]
if pos != 'v':
# we only need verbs
continue
doc = spacy.tokens.Doc(nlp.vocab, words=tokens)
nlp.tagger(doc)
inst_id_to_tense[inst_id] = doc[target_idx].tag_
all_tenses_ordered = list(set(inst_id_to_tense.values()))
def get_semeval_key_best_senses(filepath, filterset):
ret = defaultdict(dict)
with open(filepath) as fin:
for line in fin:
target, inst_id, senses = line.strip().split(maxsplit=2)
if not inst_id in filterset:
continue
senses = [x.split('/') for x in senses.split()]
senses = sorted(senses, key=lambda x: int(x[1]))
best_sense = senses[-1][0]
ret[target][inst_id] = best_sense
return ret
for setting_name, key_path in [('w/ symmetric patterns w/ lemmatization', 'sp_lem.key'),
('w/ symmetric patterns w/o lemmatization', 'sp_no_lem.key'),
('w/o symmetric patterns w/o lemmatization', 'no_sp_no_lem.key'),
('w/o tfidf', 'no_tfidf.key'),
('w/o all', 'no_all.key'),
('w/o sp', 'no_sp.key'),
('gold labels', '../resources/SemEval-2013-Task-13-test-data/keys/gold/verbs.key')]:
mis = []
semeval_key = get_semeval_key_best_senses(key_path, inst_id_to_tense)
for target, di in semeval_key.items():
order_of_insts = list(di.keys())
order_of_senses = list(set(di.values()))
X = [order_of_senses.index(di[x]) for x in order_of_insts]
Y = [all_tenses_ordered.index(inst_id_to_tense[x]) for x in order_of_insts]
mis.append(normalized_mutual_info_score(X, Y))
print('%s: mean ± STD NMI: %.2f ± %.2f ' % (setting_name, np.mean(mis), np.std(mis)))
###Output
w/ symmetric patterns w/ lemmatization: mean ± STD NMI: 0.22 ± 0.12
w/ symmetric patterns w/o lemmatization: mean ± STD NMI: 0.67 ± 0.12
w/o symmetric patterns w/o lemmatization: mean ± STD NMI: 0.26 ± 0.09
w/o tfidf: mean ± STD NMI: 0.18 ± 0.07
w/o all: mean ± STD NMI: 0.24 ± 0.08
w/o sp: mean ± STD NMI: 0.19 ± 0.08
gold labels: mean ± STD NMI: 0.15 ± 0.07
|
Sesame Street - Natural Langauge Processing/.ipynb_checkpoints/SongLyric_Webscrape-checkpoint.ipynb
|
###Markdown
Import
###Code
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import requests
from bs4 import BeautifulSoup
import re
import pandas as pd
import os
import time
import datetime
import csv
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Scraping Pages
###Code
#launch url
chromedriver = "/Users/vicky/Downloads/chromedriver" # path to the chromedriver executable
os.environ["webdriver.chrome.driver"] = chromedriver
kidshows = {#'SesameStreet': 'https://www.letssingit.com/sesame-street-3k3zj/lyrics',
# 'Wiggles': 'https://www.letssingit.com/the-wiggles-z5jb5',
# 'YoGabbaGabba': 'https://www.letssingit.com/yo-gabba-gabba-f5cvf/lyrics',
# 'Barney': 'https://www.letssingit.com/barney-6569v',
# 'Dora': 'https://www.letssingit.com/dora-the-explorer-32q46/lyrics',
# 'SpongeBob': 'https://www.letssingit.com/spongebob-squarepants-vqclr',
'SpongeBob' :'http://www.azlyricdb.com/artist/Spongebob-Squarepants-21635'}
spongebob = {}
for key, value in kidshows.items():
print(key)
soup=BeautifulSoup(requests.get(value).text, "lxml")
for link in soup.findAll('a', attrs={'href': re.compile("/lyrics")}):
l = (link.get('href'))
### Getting each of the song links from the main page...
soupsong = BeautifulSoup(requests.get('http://www.azlyricdb.com' + l).text, "lxml")
# s = soupsong.find(id='lrc')
# while getattr(s, 'name') != 'style':
# s = s.next
# spongebob.append(s)
# for line in soupsong.findAll('li'):
# print(line)
# lyric = (line.get('li'))
# spongebob.append(line)
header = soupsong.find('h1').getText()
for items in soupsong.findAll('li'):
if header in spongebob:
spongebob[header].append(items.getText())
else:
spongebob[header] = [items.getText()]  # start a list so .append above works on later iterations
### HOW TO ADD THE TEXT TO THE DICTIONARY.
# spongebob[head] = soupsong.findAll('li').getText()
# if line.find('style'):
# break
# else:
# spongesong.append(line)
# if 'style' not in lyric:
# spongesong.append(lyric)
time.sleep(2)
#songs=[song for song in songlist.find_all('href')]
# <li>The data you want</li>
spongebob
# NOTE: the lines from here down depend on objects defined elsewhere in the project
# (soupresults, horseywinnings, horseyworkouts, and the selenium driver jesustakethewheel),
# so this fragment will not run on its own.
resultstable = soupresults.find_all(class_='table-hover')[1]
rows=[row for row in resultstable.find_all('tr')]
rows=rows[1:5]
horseywins={}
for row in rows:
items=row.find_all('td')
for entries in items:
splitsies = entries.text.split(':')
horseywins[splitsies[0]] = (splitsies[1])
horseywins['Horse Name'] = (key)
horseywinnings.append(horseywins)
time.sleep(1)
python_button_Workouts = jesustakethewheel.find_element_by_id('Hworkouts')
python_button_Workouts.click()
time.sleep(.5)
python_button_seemore = jesustakethewheel.find_element_by_link_text('SEE MORE WORKOUTS')
python_button_seemore.click()
time.sleep(.5)
soupworkout=BeautifulSoup(jesustakethewheel.page_source, "lxml")
workouttable=soupworkout.find(class_='resultTable')
#horseyworkouts=[]
for row in workouttable.find_all('tr')[1:]:
items=row.find_all('td')
Track = items[0].text
Date = items[1].text
Course = items[2].text
Distance = items[3].text
if len(items[4].text) <6:
tp = datetime.datetime.strptime(items[4].text,'%S.%f')
else:
tp = datetime.datetime.strptime(items[4].text,'%M:%S.%f')
Time = tp.second*10+tp.minute*600+tp.microsecond//100000
Note = items[5].text
Rank = items[6].text
rowdict= {'Horse Name': key,'Course' :Course,'Track' :Track,'Date':Date ,'Course' :Course ,'Distance' :Distance,'Time_tenths_second':Time,'Note':Note,'Rank':Rank}
horseyworkouts.append(rowdict)
time.sleep(1)
###Output
_____no_output_____
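The time-parsing step buried in the fragment above is easy to miss, so here is a small self-contained sketch of the same idea; the helper name and the sample strings are illustrative only, not from the original notebook:

```python
import datetime

def time_to_tenths(raw):
    # short times have no minutes component, e.g. "47.60"; longer ones look like "1:13.20"
    if len(raw) < 6:
        tp = datetime.datetime.strptime(raw, '%S.%f')
    else:
        tp = datetime.datetime.strptime(raw, '%M:%S.%f')
    # convert to tenths of a second, as the scraping loop above does
    return tp.second * 10 + tp.minute * 600 + tp.microsecond // 100000

print(time_to_tenths('47.60'))    # 476
print(time_to_tenths('1:13.20'))  # 732
```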
|
assignments/2019/assignment3/StyleTransfer-PyTorch.ipynb
|
###Markdown
Style TransferIn this notebook we will implement the style transfer technique from ["Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2015)](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf).The general idea is to take two images, and produce a new image that reflects the content of one but the artistic "style" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself.The deep network we use as a feature extractor is [SqueezeNet](https://arxiv.org/abs/1602.07360), a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency.Here's an example of the images you'll be able to produce by the end of this notebook: Setup
###Code
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
import PIL
import numpy as np
from scipy.misc import imread
from collections import namedtuple
import matplotlib.pyplot as plt
from cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD
%matplotlib inline
###Output
_____no_output_____
###Markdown
We provide you with some helper functions to deal with images, since for this part of the assignment we're dealing with real JPEGs, not CIFAR-10 data.
###Code
def preprocess(img, size=512):
transform = T.Compose([
T.Resize(size),
T.ToTensor(),
T.Normalize(mean=SQUEEZENET_MEAN.tolist(),
std=SQUEEZENET_STD.tolist()),
T.Lambda(lambda x: x[None]),
])
return transform(img)
def deprocess(img):
transform = T.Compose([
T.Lambda(lambda x: x[0]),
T.Normalize(mean=[0, 0, 0], std=[1.0 / s for s in SQUEEZENET_STD.tolist()]),
T.Normalize(mean=[-m for m in SQUEEZENET_MEAN.tolist()], std=[1, 1, 1]),
T.Lambda(rescale),
T.ToPILImage(),
])
return transform(img)
def rescale(x):
low, high = x.min(), x.max()
x_rescaled = (x - low) / (high - low)
return x_rescaled
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def features_from_img(imgpath, imgsize):
img = preprocess(PIL.Image.open(imgpath), size=imgsize)
img_var = img.type(dtype)
return extract_features(img_var, cnn), img_var
# Older versions of scipy.misc.imresize yield different results
# from newer versions, so we check to make sure scipy is up to date.
def check_scipy():
import scipy
vnum = int(scipy.__version__.split('.')[1])
major_vnum = int(scipy.__version__.split('.')[0])
assert vnum >= 16 or major_vnum >= 1, "You must install SciPy >= 0.16.0 to complete this notebook."
check_scipy()
answers = dict(np.load('style-transfer-checks.npz'))
###Output
_____no_output_____
###Markdown
As in the last assignment, we need to set the dtype to select either the CPU or the GPU
###Code
dtype = torch.FloatTensor
# Uncomment out the following line if you're on a machine with a GPU set up for PyTorch!
#dtype = torch.cuda.FloatTensor
# Load the pre-trained SqueezeNet model.
cnn = torchvision.models.squeezenet1_1(pretrained=True).features
cnn.type(dtype)
# We don't want to train the model any further, so we don't want PyTorch to waste computation
# computing gradients on parameters we're never going to update.
for param in cnn.parameters():
param.requires_grad = False
# We provide this helper code which takes an image, a model (cnn), and returns a list of
# feature maps, one per layer.
def extract_features(x, cnn):
"""
Use the CNN to extract features from the input image x.
Inputs:
- x: A PyTorch Tensor of shape (N, C, H, W) holding a minibatch of images that
will be fed to the CNN.
- cnn: A PyTorch model that we will use to extract features.
Returns:
- features: A list of feature for the input images x extracted using the cnn model.
features[i] is a PyTorch Tensor of shape (N, C_i, H_i, W_i); recall that features
from different layers of the network may have different numbers of channels (C_i) and
spatial dimensions (H_i, W_i).
"""
features = []
prev_feat = x
for i, module in enumerate(cnn._modules.values()):
next_feat = module(prev_feat)
features.append(next_feat)
prev_feat = next_feat
return features
#please disregard warnings about initialization
###Output
_____no_output_____
###Markdown
Computing LossWe're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below. Content lossWe can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent **not on the parameters** of the model, but instead **on the pixel values** of our original image.Let's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\ell$), that has feature maps $A^\ell \in \mathbb{R}^{1 \times C_\ell \times H_\ell \times W_\ell}$. $C_\ell$ is the number of filters/channels in layer $\ell$, $H_\ell$ and $W_\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\ell \in \mathbb{R}^{C_\ell \times M_\ell}$ be the feature map for the current image and $P^\ell \in \mathbb{R}^{C_\ell \times M_\ell}$ be the feature map for the content source image where $M_\ell=H_\ell\times W_\ell$ is the number of elements in each feature map. Each row of $F^\ell$ or $P^\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. Finally, let $w_c$ be the weight of the content loss term in the loss function.Then the content loss is given by:$L_c = w_c \times \sum_{i,j} (F_{ij}^{\ell} - P_{ij}^{\ell})^2$
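For orientation only - the stub below is intentionally left for the reader to fill in - here is a minimal hedged sketch of how the formula above could be written in PyTorch. The function name `content_loss_sketch` is ours, not part of the assignment, and this is just one possible vectorized formulation rather than the official solution:

```python
import torch

def content_loss_sketch(content_weight, content_current, content_original):
    # L_c = w_c * sum over all channels and spatial positions of (F_ij - P_ij)^2
    return content_weight * torch.sum((content_current - content_original) ** 2)
```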
###Code
def content_loss(content_weight, content_current, content_original):
"""
Compute the content loss for style transfer.
Inputs:
- content_weight: Scalar giving the weighting for the content loss.
- content_current: features of the current image; this is a PyTorch Tensor of shape
(1, C_l, H_l, W_l).
- content_target: features of the content image, Tensor with shape (1, C_l, H_l, W_l).
Returns:
- scalar content loss
"""
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your content loss. You should see errors less than 0.001.
###Code
def content_loss_test(correct):
content_image = 'styles/tubingen.jpg'
image_size = 192
content_layer = 3
content_weight = 6e-2
c_feats, content_img_var = features_from_img(content_image, image_size)
bad_img = torch.zeros(*content_img_var.data.size()).type(dtype)
feats = extract_features(bad_img, cnn)
student_output = content_loss(content_weight, c_feats[content_layer], feats[content_layer]).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
content_loss_test(answers['cl_out'])
###Output
_____no_output_____
###Markdown
Style lossNow we can tackle the style loss. For a given layer $\ell$, the style loss is defined as follows:First, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.Given a feature map $F^\ell$ of shape $(C_\ell, M_\ell)$, the Gram matrix has shape $(C_\ell, C_\ell)$ and its elements are given by:$$G_{ij}^\ell = \sum_k F^{\ell}_{ik} F^{\ell}_{jk}$$Assuming $G^\ell$ is the Gram matrix from the feature map of the current image, $A^\ell$ is the Gram Matrix from the feature map of the source style image, and $w_\ell$ a scalar weight term, then the style loss for the layer $\ell$ is simply the weighted Euclidean distance between the two Gram matrices:$$L_s^\ell = w_\ell \sum_{i, j} \left(G^\ell_{ij} - A^\ell_{ij}\right)^2$$In practice we usually compute the style loss at a set of layers $\mathcal{L}$ rather than just a single layer $\ell$; then the total style loss is the sum of style losses at each layer:$$L_s = \sum_{\ell \in \mathcal{L}} L_s^\ell$$Begin by implementing the Gram matrix computation below:
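As a hedged reference (one possible vectorized approach, not necessarily the intended solution), the Gram matrices for a whole batch can be computed with a single batched matrix multiplication; the helper name `gram_matrix_sketch` is ours:

```python
import torch

def gram_matrix_sketch(features, normalize=True):
    # features has shape (N, C, H, W); flatten the spatial dimensions to (N, C, H*W)
    N, C, H, W = features.shape
    F = features.view(N, C, H * W)
    # G_ij = sum_k F_ik * F_jk, computed for each image in the batch
    gram = torch.bmm(F, F.transpose(1, 2))
    if normalize:
        gram = gram / (C * H * W)
    return gram
```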
###Code
def gram_matrix(features, normalize=True):
"""
Compute the Gram matrix from features.
Inputs:
- features: PyTorch Tensor of shape (N, C, H, W) giving features for
a batch of N images.
- normalize: optional, whether to normalize the Gram matrix
If True, divide the Gram matrix by the number of neurons (H * W * C)
Returns:
- gram: PyTorch Tensor of shape (N, C, C) giving the
(optionally normalized) Gram matrices for the N input images.
"""
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your Gram matrix code. You should see errors less than 0.001.
###Code
def gram_matrix_test(correct):
style_image = 'styles/starry_night.jpg'
style_size = 192
feats, _ = features_from_img(style_image, style_size)
student_output = gram_matrix(feats[5].clone()).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
gram_matrix_test(answers['gm_out'])
###Output
_____no_output_____
###Markdown
Next, implement the style loss:
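A hedged sketch of the weighted sum defined in the previous cell, assuming (as this notebook does) that the precomputed style targets are normalized Gram matrices; again the function name is ours and this is only one way to write it:

```python
import torch

def style_loss_sketch(feats, style_layers, style_targets, style_weights):
    # L_s = sum over the chosen layers of w_l * sum_ij (G^l_ij - A^l_ij)^2
    loss = 0.0
    for layer, target, weight in zip(style_layers, style_targets, style_weights):
        N, C, H, W = feats[layer].shape
        F = feats[layer].view(N, C, H * W)
        G = torch.bmm(F, F.transpose(1, 2)) / (C * H * W)  # normalized Gram matrix
        loss = loss + weight * torch.sum((G - target) ** 2)
    return loss
```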
###Code
# Now put it together in the style_loss function...
def style_loss(feats, style_layers, style_targets, style_weights):
"""
Computes the style loss at a set of layers.
Inputs:
- feats: list of the features at every layer of the current image, as produced by
the extract_features function.
- style_layers: List of layer indices into feats giving the layers to include in the
style loss.
- style_targets: List of the same length as style_layers, where style_targets[i] is
a PyTorch Tensor giving the Gram matrix of the source style image computed at
layer style_layers[i].
- style_weights: List of the same length as style_layers, where style_weights[i]
is a scalar giving the weight for the style loss at layer style_layers[i].
Returns:
- style_loss: A PyTorch Tensor holding a scalar giving the style loss.
"""
# Hint: you can do this with one for loop over the style layers, and should
# not be very much code (~5 lines). You will need to use your gram_matrix function.
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your style loss implementation. The error should be less than 0.001.
###Code
def style_loss_test(correct):
content_image = 'styles/tubingen.jpg'
style_image = 'styles/starry_night.jpg'
image_size = 192
style_size = 192
style_layers = [1, 4, 6, 7]
style_weights = [300000, 1000, 15, 3]
c_feats, _ = features_from_img(content_image, image_size)
feats, _ = features_from_img(style_image, style_size)
style_targets = []
for idx in style_layers:
style_targets.append(gram_matrix(feats[idx].clone()))
student_output = style_loss(c_feats, style_layers, style_targets, style_weights).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
style_loss_test(answers['sl_out'])
###Output
_____no_output_____
###Markdown
Total-variation regularization. It turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or "total variation" in the pixel values. You can compute the "total variation" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regularization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$: $L_{tv} = w_t \times \left(\sum_{c=1}^3\sum_{i=1}^{H-1}\sum_{j=1}^{W} (x_{i+1,j,c} - x_{i,j,c})^2 + \sum_{c=1}^3\sum_{i=1}^{H}\sum_{j=1}^{W - 1} (x_{i,j+1,c} - x_{i,j,c})^2\right)$ In the next cell, fill in the definition for the TV loss term. To receive full credit, your implementation should not have any loops.
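One possible loop-free way to express the formula above, offered as a sketch rather than the assignment's official answer (the function name is ours):

```python
import torch

def tv_loss_sketch(img, tv_weight):
    # squared differences between vertically adjacent pixels (rows)...
    h_var = torch.sum((img[:, :, 1:, :] - img[:, :, :-1, :]) ** 2)
    # ...and between horizontally adjacent pixels (columns)
    w_var = torch.sum((img[:, :, :, 1:] - img[:, :, :, :-1]) ** 2)
    return tv_weight * (h_var + w_var)
```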
###Code
def tv_loss(img, tv_weight):
"""
Compute total variation loss.
Inputs:
- img: PyTorch Variable of shape (1, 3, H, W) holding an input image.
- tv_weight: Scalar giving the weight w_t to use for the TV loss.
Returns:
- loss: PyTorch Variable holding a scalar giving the total variation loss
for img weighted by tv_weight.
"""
# Your implementation should be vectorized and not require any loops!
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your TV loss implementation. Error should be less than 0.0001.
###Code
def tv_loss_test(correct):
content_image = 'styles/tubingen.jpg'
image_size = 192
tv_weight = 2e-2
content_img = preprocess(PIL.Image.open(content_image), size=image_size).type(dtype)
student_output = tv_loss(content_img, tv_weight).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
tv_loss_test(answers['tv_out'])
###Output
_____no_output_____
###Markdown
Now we're ready to string it all together (you shouldn't have to modify this function):
###Code
def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,
style_layers, style_weights, tv_weight, init_random = False):
"""
Run style transfer!
Inputs:
- content_image: filename of content image
- style_image: filename of style image
- image_size: size of smallest image dimension (used for content loss and generated image)
- style_size: size of smallest style image dimension
- content_layer: layer to use for content loss
- content_weight: weighting on content loss
- style_layers: list of layers to use for style loss
- style_weights: list of weights to use for each layer in style_layers
- tv_weight: weight of total variation regularization term
- init_random: initialize the starting image to uniform random noise
"""
# Extract features for the content image
content_img = preprocess(PIL.Image.open(content_image), size=image_size).type(dtype)
feats = extract_features(content_img, cnn)
content_target = feats[content_layer].clone()
# Extract features for the style image
style_img = preprocess(PIL.Image.open(style_image), size=style_size).type(dtype)
feats = extract_features(style_img, cnn)
style_targets = []
for idx in style_layers:
style_targets.append(gram_matrix(feats[idx].clone()))
# Initialize output image to content image or noise
if init_random:
img = torch.Tensor(content_img.size()).uniform_(0, 1).type(dtype)
else:
img = content_img.clone().type(dtype)
# We do want the gradient computed on our image!
img.requires_grad_()
# Set up optimization hyperparameters
initial_lr = 3.0
decayed_lr = 0.1
decay_lr_at = 180
# Note that we are optimizing the pixel values of the image by passing
# in the img Torch tensor, whose requires_grad flag is set to True
optimizer = torch.optim.Adam([img], lr=initial_lr)
f, axarr = plt.subplots(1,2)
axarr[0].axis('off')
axarr[1].axis('off')
axarr[0].set_title('Content Source Img.')
axarr[1].set_title('Style Source Img.')
axarr[0].imshow(deprocess(content_img.cpu()))
axarr[1].imshow(deprocess(style_img.cpu()))
plt.show()
plt.figure()
for t in range(200):
if t < 190:
img.data.clamp_(-1.5, 1.5)
optimizer.zero_grad()
feats = extract_features(img, cnn)
# Compute loss
c_loss = content_loss(content_weight, feats[content_layer], content_target)
s_loss = style_loss(feats, style_layers, style_targets, style_weights)
t_loss = tv_loss(img, tv_weight)
loss = c_loss + s_loss + t_loss
loss.backward()
# Perform gradient descent on our image values
if t == decay_lr_at:
optimizer = torch.optim.Adam([img], lr=decayed_lr)
optimizer.step()
if t % 100 == 0:
print('Iteration {}'.format(t))
plt.axis('off')
plt.imshow(deprocess(img.data.cpu()))
plt.show()
print('Iteration {}'.format(t))
plt.axis('off')
plt.imshow(deprocess(img.data.cpu()))
plt.show()
###Output
_____no_output_____
###Markdown
Generate some pretty pictures!Try out `style_transfer` on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook.* The `content_image` is the filename of content image.* The `style_image` is the filename of style image.* The `image_size` is the size of smallest image dimension of the content image (used for content loss and generated image).* The `style_size` is the size of smallest style image dimension.* The `content_layer` specifies which layer to use for content loss.* The `content_weight` gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).* `style_layers` specifies a list of which layers to use for style loss. * `style_weights` specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.* `tv_weight` specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content. Below the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around them and see how the resulting image changes.
###Code
# Composition VII + Tubingen
params1 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/composition_vii.jpg',
'image_size' : 192,
'style_size' : 512,
'content_layer' : 3,
'content_weight' : 5e-2,
'style_layers' : (1, 4, 6, 7),
'style_weights' : (20000, 500, 12, 1),
'tv_weight' : 5e-2
}
style_transfer(**params1)
# Scream + Tubingen
params2 = {
'content_image':'styles/tubingen.jpg',
'style_image':'styles/the_scream.jpg',
'image_size':192,
'style_size':224,
'content_layer':3,
'content_weight':3e-2,
'style_layers':[1, 4, 6, 7],
'style_weights':[200000, 800, 12, 1],
'tv_weight':2e-2
}
style_transfer(**params2)
# Starry Night + Tubingen
params3 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [300000, 1000, 15, 3],
'tv_weight' : 2e-2
}
style_transfer(**params3)
###Output
_____no_output_____
###Markdown
Feature InversionThe code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations).Now, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. You're starting with total noise, but you should end up with something that looks quite a bit like your original image.(Similarly, you could do "texture synthesis" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) Run the following cell to try out feature inversion.[1] Aravindh Mahendran, Andrea Vedaldi, "Understanding Deep Image Representations by Inverting them", CVPR 2015
###Code
# Feature Inversion -- Starry Night + Tubingen
params_inv = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss
'tv_weight' : 2e-2,
'init_random': True # we want to initialize our image to be random
}
style_transfer(**params_inv)
###Output
_____no_output_____
###Markdown
Style TransferIn this notebook we will implement the style transfer technique from ["Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2015)](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf).The general idea is to take two images, and produce a new image that reflects the content of one but the artistic "style" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself.The deep network we use as a feature extractor is [SqueezeNet](https://arxiv.org/abs/1602.07360), a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency.Here's an example of the images you'll be able to produce by the end of this notebook: Setup
###Code
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
import PIL
import numpy as np
from scipy.misc import imread
from collections import namedtuple
import matplotlib.pyplot as plt
from cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD
%matplotlib inline
###Output
_____no_output_____
###Markdown
We provide you with some helper functions to deal with images, since for this part of the assignment we're dealing with real JPEGs, not CIFAR-10 data.
###Code
def preprocess(img, size=512):
transform = T.Compose([
T.Resize(size),
T.ToTensor(),
T.Normalize(mean=SQUEEZENET_MEAN.tolist(),
std=SQUEEZENET_STD.tolist()),
T.Lambda(lambda x: x[None]),
])
return transform(img)
def deprocess(img):
transform = T.Compose([
T.Lambda(lambda x: x[0]),
T.Normalize(mean=[0, 0, 0], std=[1.0 / s for s in SQUEEZENET_STD.tolist()]),
T.Normalize(mean=[-m for m in SQUEEZENET_MEAN.tolist()], std=[1, 1, 1]),
T.Lambda(rescale),
T.ToPILImage(),
])
return transform(img)
def rescale(x):
low, high = x.min(), x.max()
x_rescaled = (x - low) / (high - low)
return x_rescaled
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def features_from_img(imgpath, imgsize):
img = preprocess(PIL.Image.open(imgpath), size=imgsize)
img_var = img.type(dtype)
return extract_features(img_var, cnn), img_var
# Older versions of scipy.misc.imresize yield different results
# from newer versions, so we check to make sure scipy is up to date.
def check_scipy():
import scipy
vnum = int(scipy.__version__.split('.')[1])
major_vnum = int(scipy.__version__.split('.')[0])
assert vnum >= 16 or major_vnum >= 1, "You must install SciPy >= 0.16.0 to complete this notebook."
check_scipy()
answers = dict(np.load('style-transfer-checks.npz'))
###Output
_____no_output_____
###Markdown
As in the last assignment, we need to set the dtype to select either the CPU or the GPU
###Code
dtype = torch.FloatTensor
# Uncomment out the following line if you're on a machine with a GPU set up for PyTorch!
#dtype = torch.cuda.FloatTensor
# Load the pre-trained SqueezeNet model.
cnn = torchvision.models.squeezenet1_1(pretrained=True).features
cnn.type(dtype)
# We don't want to train the model any further, so we don't want PyTorch to waste computation
# computing gradients on parameters we're never going to update.
for param in cnn.parameters():
param.requires_grad = False
# We provide this helper code which takes an image, a model (cnn), and returns a list of
# feature maps, one per layer.
def extract_features(x, cnn):
"""
Use the CNN to extract features from the input image x.
Inputs:
- x: A PyTorch Tensor of shape (N, C, H, W) holding a minibatch of images that
will be fed to the CNN.
- cnn: A PyTorch model that we will use to extract features.
Returns:
- features: A list of feature for the input images x extracted using the cnn model.
features[i] is a PyTorch Tensor of shape (N, C_i, H_i, W_i); recall that features
from different layers of the network may have different numbers of channels (C_i) and
spatial dimensions (H_i, W_i).
"""
features = []
prev_feat = x
for i, module in enumerate(cnn._modules.values()):
next_feat = module(prev_feat)
features.append(next_feat)
prev_feat = next_feat
return features
#please disregard warnings about initialization
###Output
_____no_output_____
###Markdown
Computing LossWe're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below. Content lossWe can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent **not on the parameters** of the model, but instead **on the pixel values** of our original image.Let's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\ell$), that has feature maps $A^\ell \in \mathbb{R}^{1 \times C_\ell \times H_\ell \times W_\ell}$. $C_\ell$ is the number of filters/channels in layer $\ell$, $H_\ell$ and $W_\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\ell \in \mathbb{R}^{C_\ell \times M_\ell}$ be the feature map for the current image and $P^\ell \in \mathbb{R}^{C_\ell \times M_\ell}$ be the feature map for the content source image where $M_\ell=H_\ell\times W_\ell$ is the number of elements in each feature map. Each row of $F^\ell$ or $P^\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. Finally, let $w_c$ be the weight of the content loss term in the loss function.Then the content loss is given by:$L_c = w_c \times \sum_{i,j} (F_{ij}^{\ell} - P_{ij}^{\ell})^2$
###Code
def content_loss(content_weight, content_current, content_original):
"""
Compute the content loss for style transfer.
Inputs:
- content_weight: Scalar giving the weighting for the content loss.
- content_current: features of the current image; this is a PyTorch Tensor of shape
(1, C_l, H_l, W_l).
- content_target: features of the content image, Tensor with shape (1, C_l, H_l, W_l).
Returns:
- scalar content loss
"""
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your content loss. You should see errors less than 0.001.
###Code
def content_loss_test(correct):
content_image = 'styles/tubingen.jpg'
image_size = 192
content_layer = 3
content_weight = 6e-2
c_feats, content_img_var = features_from_img(content_image, image_size)
bad_img = torch.zeros(*content_img_var.data.size()).type(dtype)
feats = extract_features(bad_img, cnn)
student_output = content_loss(content_weight, c_feats[content_layer], feats[content_layer]).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
content_loss_test(answers['cl_out'])
###Output
_____no_output_____
###Markdown
Style lossNow we can tackle the style loss. For a given layer $\ell$, the style loss is defined as follows:First, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.Given a feature map $F^\ell$ of shape $(C_\ell, M_\ell)$, the Gram matrix has shape $(C_\ell, C_\ell)$ and its elements are given by:$$G_{ij}^\ell = \sum_k F^{\ell}_{ik} F^{\ell}_{jk}$$Assuming $G^\ell$ is the Gram matrix from the feature map of the current image, $A^\ell$ is the Gram Matrix from the feature map of the source style image, and $w_\ell$ a scalar weight term, then the style loss for the layer $\ell$ is simply the weighted Euclidean distance between the two Gram matrices:$$L_s^\ell = w_\ell \sum_{i, j} \left(G^\ell_{ij} - A^\ell_{ij}\right)^2$$In practice we usually compute the style loss at a set of layers $\mathcal{L}$ rather than just a single layer $\ell$; then the total style loss is the sum of style losses at each layer:$$L_s = \sum_{\ell \in \mathcal{L}} L_s^\ell$$Begin by implementing the Gram matrix computation below:
###Code
def gram_matrix(features, normalize=True):
"""
Compute the Gram matrix from features.
Inputs:
- features: PyTorch Tensor of shape (N, C, H, W) giving features for
a batch of N images.
- normalize: optional, whether to normalize the Gram matrix
If True, divide the Gram matrix by the number of neurons (H * W * C)
Returns:
- gram: PyTorch Tensor of shape (N, C, C) giving the
(optionally normalized) Gram matrices for the N input images.
"""
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your Gram matrix code. You should see errors less than 0.001.
###Code
def gram_matrix_test(correct):
style_image = 'styles/starry_night.jpg'
style_size = 192
feats, _ = features_from_img(style_image, style_size)
student_output = gram_matrix(feats[5].clone()).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
gram_matrix_test(answers['gm_out'])
###Output
_____no_output_____
###Markdown
Next, implement the style loss:
###Code
# Now put it together in the style_loss function...
def style_loss(feats, style_layers, style_targets, style_weights):
"""
Computes the style loss at a set of layers.
Inputs:
- feats: list of the features at every layer of the current image, as produced by
the extract_features function.
- style_layers: List of layer indices into feats giving the layers to include in the
style loss.
- style_targets: List of the same length as style_layers, where style_targets[i] is
a PyTorch Tensor giving the Gram matrix of the source style image computed at
layer style_layers[i].
- style_weights: List of the same length as style_layers, where style_weights[i]
is a scalar giving the weight for the style loss at layer style_layers[i].
Returns:
- style_loss: A PyTorch Tensor holding a scalar giving the style loss.
"""
# Hint: you can do this with one for loop over the style layers, and should
# not be very much code (~5 lines). You will need to use your gram_matrix function.
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your style loss implementation. The error should be less than 0.001.
###Code
def style_loss_test(correct):
content_image = 'styles/tubingen.jpg'
style_image = 'styles/starry_night.jpg'
image_size = 192
style_size = 192
style_layers = [1, 4, 6, 7]
style_weights = [300000, 1000, 15, 3]
c_feats, _ = features_from_img(content_image, image_size)
feats, _ = features_from_img(style_image, style_size)
style_targets = []
for idx in style_layers:
style_targets.append(gram_matrix(feats[idx].clone()))
student_output = style_loss(c_feats, style_layers, style_targets, style_weights).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
style_loss_test(answers['sl_out'])
###Output
_____no_output_____
###Markdown
Total-variation regularizationIt turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or "total variation" in the pixel values. You can compute the "total variation" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regualarization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$:$L_{tv} = w_t \times \left(\sum_{c=1}^3\sum_{i=1}^{H-1}\sum_{j=1}^{W} (x_{i+1,j,c} - x_{i,j,c})^2 + \sum_{c=1}^3\sum_{i=1}^{H}\sum_{j=1}^{W - 1} (x_{i,j+1,c} - x_{i,j,c})^2\right)$In the next cell, fill in the definition for the TV loss term. To receive full credit, your implementation should not have any loops.
###Code
def tv_loss(img, tv_weight):
"""
Compute total variation loss.
Inputs:
- img: PyTorch Variable of shape (1, 3, H, W) holding an input image.
- tv_weight: Scalar giving the weight w_t to use for the TV loss.
Returns:
- loss: PyTorch Variable holding a scalar giving the total variation loss
for img weighted by tv_weight.
"""
# Your implementation should be vectorized and not require any loops!
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Test your TV loss implementation. Error should be less than 0.0001.
###Code
def tv_loss_test(correct):
content_image = 'styles/tubingen.jpg'
image_size = 192
tv_weight = 2e-2
content_img = preprocess(PIL.Image.open(content_image), size=image_size).type(dtype)
student_output = tv_loss(content_img, tv_weight).cpu().data.numpy()
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
tv_loss_test(answers['tv_out'])
###Output
_____no_output_____
###Markdown
Now we're ready to string it all together (you shouldn't have to modify this function):
###Code
def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,
style_layers, style_weights, tv_weight, init_random = False):
"""
Run style transfer!
Inputs:
- content_image: filename of content image
- style_image: filename of style image
- image_size: size of smallest image dimension (used for content loss and generated image)
- style_size: size of smallest style image dimension
- content_layer: layer to use for content loss
- content_weight: weighting on content loss
- style_layers: list of layers to use for style loss
- style_weights: list of weights to use for each layer in style_layers
- tv_weight: weight of total variation regularization term
- init_random: initialize the starting image to uniform random noise
"""
# Extract features for the content image
content_img = preprocess(PIL.Image.open(content_image), size=image_size).type(dtype)
feats = extract_features(content_img, cnn)
content_target = feats[content_layer].clone()
# Extract features for the style image
style_img = preprocess(PIL.Image.open(style_image), size=style_size).type(dtype)
feats = extract_features(style_img, cnn)
style_targets = []
for idx in style_layers:
style_targets.append(gram_matrix(feats[idx].clone()))
# Initialize output image to content image or nois
if init_random:
img = torch.Tensor(content_img.size()).uniform_(0, 1).type(dtype)
else:
img = content_img.clone().type(dtype)
# We do want the gradient computed on our image!
img.requires_grad_()
# Set up optimization hyperparameters
initial_lr = 3.0
decayed_lr = 0.1
decay_lr_at = 180
# Note that we are optimizing the pixel values of the image by passing
# in the img Torch tensor, whose requires_grad flag is set to True
optimizer = torch.optim.Adam([img], lr=initial_lr)
f, axarr = plt.subplots(1,2)
axarr[0].axis('off')
axarr[1].axis('off')
axarr[0].set_title('Content Source Img.')
axarr[1].set_title('Style Source Img.')
axarr[0].imshow(deprocess(content_img.cpu()))
axarr[1].imshow(deprocess(style_img.cpu()))
plt.show()
plt.figure()
for t in range(200):
if t < 190:
img.data.clamp_(-1.5, 1.5)
optimizer.zero_grad()
feats = extract_features(img, cnn)
# Compute loss
c_loss = content_loss(content_weight, feats[content_layer], content_target)
s_loss = style_loss(feats, style_layers, style_targets, style_weights)
t_loss = tv_loss(img, tv_weight)
loss = c_loss + s_loss + t_loss
loss.backward()
# Perform gradient descents on our image values
if t == decay_lr_at:
optimizer = torch.optim.Adam([img], lr=decayed_lr)
optimizer.step()
if t % 100 == 0:
print('Iteration {}'.format(t))
plt.axis('off')
plt.imshow(deprocess(img.data.cpu()))
plt.show()
print('Iteration {}'.format(t))
plt.axis('off')
plt.imshow(deprocess(img.data.cpu()))
plt.show()
###Output
_____no_output_____
###Markdown
Generate some pretty pictures!Try out `style_transfer` on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook.* The `content_image` is the filename of content image.* The `style_image` is the filename of style image.* The `image_size` is the size of smallest image dimension of the content image (used for content loss and generated image).* The `style_size` is the size of smallest style image dimension.* The `content_layer` specifies which layer to use for content loss.* The `content_weight` gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).* `style_layers` specifies a list of which layers to use for style loss. * `style_weights` specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.* `tv_weight` specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content. Below the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around them and see how the resulting image changes.
###Code
# Composition VII + Tubingen
params1 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/composition_vii.jpg',
'image_size' : 192,
'style_size' : 512,
'content_layer' : 3,
'content_weight' : 5e-2,
'style_layers' : (1, 4, 6, 7),
'style_weights' : (20000, 500, 12, 1),
'tv_weight' : 5e-2
}
style_transfer(**params1)
# Scream + Tubingen
params2 = {
'content_image':'styles/tubingen.jpg',
'style_image':'styles/the_scream.jpg',
'image_size':192,
'style_size':224,
'content_layer':3,
'content_weight':3e-2,
'style_layers':[1, 4, 6, 7],
'style_weights':[200000, 800, 12, 1],
'tv_weight':2e-2
}
style_transfer(**params2)
# Starry Night + Tubingen
params3 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [300000, 1000, 15, 3],
'tv_weight' : 2e-2
}
style_transfer(**params3)
###Output
_____no_output_____
###Markdown
Feature InversionThe code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations).Now, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. You're starting with total noise, but you should end up with something that looks quite a bit like your original image.(Similarly, you could do "texture synthesis" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) Run the following cell to try out feature inversion.[1] Aravindh Mahendran, Andrea Vedaldi, "Understanding Deep Image Representations by Inverting them", CVPR 2015
###Code
# Feature Inversion -- Starry Night + Tubingen
params_inv = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss
'tv_weight' : 2e-2,
'init_random': True # we want to initialize our image to be random
}
style_transfer(**params_inv)
###Output
_____no_output_____
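###Markdown
Although texture synthesis is not required, a minimal sketch is shown below for completeness: setting `content_weight` to 0 and starting from random noise should synthesize a texture from the style image alone. The exact weights are guesses borrowed from the starry night parameters above.
###Code
# Optional texture synthesis sketch (not required): zero content weight plus a
# random initial image leaves only the style (and total variation) terms in the loss.
params_texture = {
    'content_image' : 'styles/tubingen.jpg',   # still used to size the generated image
    'style_image' : 'styles/starry_night.jpg',
    'image_size' : 192,
    'style_size' : 192,
    'content_layer' : 3,
    'content_weight' : 0,                      # drop the content term entirely
    'style_layers' : [1, 4, 6, 7],
    'style_weights' : [300000, 1000, 15, 3],
    'tv_weight' : 2e-2,
    'init_random' : True                       # start from random noise
}
style_transfer(**params_texture)
###Output
_____no_output_____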
|
notebooks/final_notebook.ipynb
|
###Markdown
**Meditation and Neural Activities:** *Replication & Classifier Development***Final Data Science Neuroscience Project**A replication of Brandmeyer & Delorme (2018), with data-driven techniques. Creating new supervised and unsupervised models for classifying expert vs. non-expert labels in meditation, using EEG brain activities.This project is available at https://github.com/yuyang-zhong/EEG-Neural.----**Yuyang Zhong**University of California, BerkeleyDecember 9, 2019----**Cognitive Neuroscience**Jack L. Gallant, Ph.D., *Professor*Manon Ironside, *Graduate Student Instructor*---- Introduction BackgroundMeditation has been claimed to have many physical and mental effects for individuals who actively practice it on a regular basis. To psychologists and neuroscientists, however, the more interesting question is how these effects show up. Past research had focused on whether there were significant changes in subjects' neural activities when engaged in meditation over a period of time. The current research that this project is based on, conducted by Brandmeyer & Delorme in 2016, focuses on whether there was a significant difference in "depth" of neural activity (measured through EEG) for those who practice meditation on a more frequent basis ("expert") compared to those on a less frequent basis ("non-expert"). This project will dive into understanding the published data better, and see if a classifier could be built to label "expert" vs. "non-expert" based on neural activities during meditation. Motivation & SignificanceIn the literature, probes into meditation had led researchers to find evidence of a default-mode network, as well as differences in functional connectivity of brain activities (Berkovich-Ohana et al. 2016; Garrison et al. 2015). The present research (Brandmeyer & Delorme 2016), as well as this project, can potentially provide evidence for whether meditation alters one's default-mode network and functional connectivity, and for attributing those changes to the benefits claimed by individuals practicing meditation as part of their daily lives. Authors of this paper were also trying to probe whether the default-mode network was related to the frequency of mind wandering episodes during meditation, especially those the individual is not aware of (Christoff et al. 2009). This project, however, will not focus as much on mind wandering, but more so on the classification of subjects by their meditation expertise. Method Dataset OverviewThis dataset was made available by the authors of the present research (Brandmeyer & Delorme 2016), at multiple open data repositories. Version 2.0 of the data, published on November 19, 2018, was downloaded from **Zenodo** (https://doi.org/10.5281/zenodo.2536267). Description by AuthorThis meditation experiment contains 24 subjects. Subjects were meditating and were interrupted about every 2 minutes to indicate their level of concentration and mind wandering. Dataset organizationThe dataset is organized in the Brain Imaging Data Structure (BIDS) format. The raw data contains the MATLAB code for the session, sound files for the stimuli, and folders for each subject, with folders for each session the subject participated in. In the session folders the EEG measures and event files are provided. Methods & Techniques of the Original Research(Section referenced and adopted from original research article.)EEG data was collected using a 64-channel BioSemi system and a BioSemi 10-20 head cap montage. 
There are a total of 64 channels (locations of measure), mapped by the `Biosemi64Alpha` montage (not part of the standard `mne` packages; directions to load this custom montage are included below). This measure has very good temporal resolution but poor spatial resolution.A total of 24 participants were in this study. Participants were asked to meditate for 30-90 seconds, and interrupted to rate their mindfulness depth and mind wandering level. This project will solely focus on the onset of that interruption, the period of meditation before that, as well as the short period right after in response to the interruption. Data Analysis Outline of AnalysisThe following table summarizes the methods and techniques used in each section of this project.| Section | Methods | Motivation || ------- | ------- | ---------- || **Data Exploration** | Time frequency analysis | Compute the Time-Frequency Representation (TFR) using Morlet wavelets, and see if I can identify concentrations of epochs to focus on. || **Data Exploration** | Topographic Mapping: All Evoked Response | Looking at average brain activities near an event. || **Data Exploration** | Event-related Spectral Perturbation (ERSP): Onset Evoked Response | This would be important to help us understand whether this question is actually valid to ask - is there a difference in onset evoked response, and activities before that, between the 2 subject groups? | | **Data Cleaning** | - | The data will be shrunk down to an average of all evoked responses, 10 seconds before the onset and 5 after, for each individual subject. This will be used for the classifier. | | **Data Analysis** | Correlation Matrix | Looking at which channels are most and least correlated with each other. || **Data Analysis** | Independent Component Analysis (ICA) & Principal Component Analysis (PCA) | Looking at which channels contribute most significantly to the overall brain activity, and trying to see if I can figure out why. This would help identify components to use for our model. || **Classifier** | Logistic Regression with 5-Fold Cross Validation | The simpler method. | | **Classifier** | Neural Network: Multi-Layer Perceptron Classifier | A fancier method to improve accuracy. || **Classifier** | Random Forest | A fancier method to improve accuracy. | Project Setup & Imports Project DependenciesThis project utilizes the following Python packages: `numpy`, `pandas`, `matplotlib`, `seaborn`, `mne`, and `sklearn`.To install a package within this Jupyter notebook, use the command `!pip install [package-name]`. The `!` will allow you to run command-line prompts within this notebook. Importing Packages
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import mne
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import KFold
from sklearn.decomposition import PCA, FastICA
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
###Output
_____no_output_____
###Markdown
Suppresses WarningsThis is used for exporting the final PDF file without the warning messages. Feel free to comment this out.
###Code
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Data Exploration on Subject 1 (Non-Expert) & 15 (Expert) Importing data
###Code
raw_fname1 = '../rawdata/bidsexport/sub-001/ses-01/eeg/sub-001_ses-01_task-meditation_eeg.bdf'
raw_fname15 = '../rawdata/bidsexport/sub-015/ses-01/eeg/sub-015_ses-01_task-meditation_eeg.bdf'
raw1 = mne.io.read_raw_bdf(raw_fname1, preload=True)
raw15 = mne.io.read_raw_bdf(raw_fname15, preload=True)
###Output
Extracting EDF parameters from /Users/yuyang.zhong/eeg/rawdata/bidsexport/sub-001/ses-01/eeg/sub-001_ses-01_task-meditation_eeg.bdf...
BDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 696575 = 0.000 ... 2720.996 secs...
Extracting EDF parameters from /Users/yuyang.zhong/eeg/rawdata/bidsexport/sub-015/ses-01/eeg/sub-015_ses-01_task-meditation_eeg.bdf...
BDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 695807 = 0.000 ... 2717.996 secs...
###Markdown
For the purpose of this project, we will remove all of the "channels" that are metadata of the subject/experiment. We are focusing only on the 64 EEG channels.
###Code
raw1.drop_channels(['EXG1', 'EXG2', 'EXG3', 'EXG4', 'EXG5', 'EXG6', 'EXG7', 'EXG8',
'GSR1', 'GSR2', 'Erg1', 'Erg2', 'Resp', 'Plet', 'Temp'])
raw15.drop_channels(['EXG1', 'EXG2', 'EXG3', 'EXG4', 'EXG5', 'EXG6', 'EXG7', 'EXG8',
'GSR1', 'GSR2', 'Erg1', 'Erg2', 'Resp', 'Plet', 'Temp'])
###Output
_____no_output_____
###Markdown
Loading & Setting Custom Montage `biosemi64alpha`Since the researchers used an alphabetical (A/B) version of the standard `biosemi64` montage, we will need to load our own montage file to allow appropriate topographical mapping.
###Code
from os.path import abspath
montage = mne.channels.read_montage(abspath("../biosemi64alpha.txt"))
raw1.set_montage(montage);
raw15.set_montage(montage);
###Output
_____no_output_____
###Markdown
Subject InformationLet's print 1 subject's EEG information.
###Code
print(raw1)
print(raw1.info)
###Output
<RawEDF | sub-001_ses-01_task-meditation_eeg.bdf, n_channels x n_times : 65 x 696576 (2721.0 sec), ~345.6 MB, data loaded>
<Info | 17 non-empty fields
bads : list | 0 items
ch_names : list | A1, A2, A3, A4, A5, A6, A7, A8, A9, ...
chs : list | 65 items (EEG: 64, STIM: 1)
comps : list | 0 items
custom_ref_applied : bool | False
dev_head_t : Transform | 3 items
dig : Digitization | 67 items (3 Cardinal, 64 EEG)
events : list | 0 items
highpass : float | 0.0 Hz
hpi_meas : list | 0 items
hpi_results : list | 0 items
lowpass : float | 52.0 Hz
meas_date : tuple | 2014-04-04 19:40:17 GMT
nchan : int | 65
proc_history : list | 0 items
projs : list | 0 items
sfreq : float | 256.0 Hz
acq_pars : NoneType
acq_stim : NoneType
ctf_head_t : NoneType
description : NoneType
dev_ctf_t : NoneType
device_info : NoneType
experimenter : NoneType
file_id : NoneType
gantry_angle : NoneType
helium_info : NoneType
hpi_subsystem : NoneType
kit_system_id : NoneType
line_freq : NoneType
meas_id : NoneType
proj_id : NoneType
proj_name : NoneType
subject_info : NoneType
utc_offset : NoneType
xplotter_layout : NoneType
>
###Markdown
The location of the sensors/channels are shown below. As you can see, the labels of the channels begin with A & B as aligned with the left/right hemispheres, instead of the specific naming of the standard `biosemi64` channel names.
###Code
mne.viz.plot_sensors(raw1.info, ch_type='eeg', show_names=True);
###Output
_____no_output_____
###Markdown
Events & EpochsWhat kind of events are there in this session? This prints the top 5 events found for subject 1.
###Code
events1 = mne.find_events(raw1, stim_channel='Status')
events15 = mne.find_events(raw15, stim_channel='Status')
print(events1[:5])
print(events15[:5])
###Output
Trigger channel has a non-zero initial value of 65536 (consider using initial_event=True to detect this event)
Removing orphaned offset at the beginning of the file.
87 events found
Event IDs: [ 2 4 128]
Trigger channel has a non-zero initial value of 65536 (consider using initial_event=True to detect this event)
Removing orphaned offset at the beginning of the file.
84 events found
Event IDs: [ 2 4 128]
[[18275 0 128]
[19387 0 2]
[20422 0 2]
[32156 0 128]
[46029 0 128]]
[[35569 0 128]
[36506 0 2]
[37545 0 4]
[69254 0 128]
[70197 0 4]]
###Markdown
From the `task-meditation_events.json` file, we found the following ID corresponding the events.
###Code
import json
with open("../rawdata/bidsexport/task-meditation_events.json", "r") as events_file:
events = json.load(events_file)
event_dict = {i: d for d, i in events['value']['Levels'].items()}
print(event_dict)
###Output
{'Response 1 (this may be a response to question 1, 2 or 3)': '2', 'Response 2 (this may be a response to question 1, 2 or 3)': '4', 'Response 3 (this may be a response to question 1, 2 or 3)': '8', 'Indicate involuntary response': '16', 'First question onset (most important marker)': '128'}
###Markdown
Since only 3 events were used in the dataset, we will manually load those using the printout above.
###Code
event_dict = {'Response 1 (this may be a response to question 1, 2 or 3)': 2,
'Response 2 (this may be a response to question 1, 2 or 3)': 4,
'First question onset (most important marker)': 128}
###Output
_____no_output_____
###Markdown
Setting epochs: We are only interested in 10 seconds before the onset and 5 seconds after.
###Code
epochs1 = mne.Epochs(raw1, events1, event_id=event_dict, tmin=-10, tmax=5, preload=True)
epochs15 = mne.Epochs(raw15, events15, event_id=event_dict, tmin=-10, tmax=5, preload=True)
###Output
87 matching events found
Applying baseline correction (mode: mean)
Not setting metadata
0 projection items activated
Loading data for 87 events and 3841 original time points ...
0 bad epochs dropped
84 matching events found
Applying baseline correction (mode: mean)
Not setting metadata
0 projection items activated
Loading data for 84 events and 3841 original time points ...
0 bad epochs dropped
###Markdown
We will then select the 3 conditions we have noted earlier (the 3 events above) and equalize them. Then we will select epochs related to these conditions. Since we only care about the onset, we will visualize the `onset_epoch` at channel `A2`, which in located in the prefrontal cortex.
###Code
conds = ['Response 1 (this may be a response to question 1, 2 or 3)',
'Response 2 (this may be a response to question 1, 2 or 3)',
'First question onset (most important marker)']
epochs1.equalize_event_counts(conds)
epochs15.equalize_event_counts(conds)
r1_epochs1 = epochs1['Response 1 (this may be a response to question 1, 2 or 3)']
r2_epochs1 = epochs1['Response 2 (this may be a response to question 1, 2 or 3)']
onset_epochs1 = epochs1['First question onset (most important marker)']
onset_epochs1.plot_image(picks=['A2']);
r1_epochs15 = epochs15['Response 1 (this may be a response to question 1, 2 or 3)']
r2_epochs15 = epochs15['Response 2 (this may be a response to question 1, 2 or 3)']
onset_epochs15 = epochs15['First question onset (most important marker)']
onset_epochs15.plot_image(picks=['A2']);
###Output
Dropped 33 epochs
Dropped 9 epochs
18 matching events found
No baseline correction applied
Not setting metadata
0 projection items activated
0 bad epochs dropped
###Markdown
We can see that there was an interesting dip around the onset for our expert meditator (second plot), but a small bump for the non-expert (first plot). The non-expert also has more variance in their activity, shown by the labels on the y-axis of the first plot. This may be related to mind wandering episodes, and the non-expert may be invoking executive functions in the prefrontal cortex. It also seems that the two subjects show a roughly inverse relationship for this channel. Let's try another channel, `B17`, located over the parietal lobe.
###Code
onset_epochs1.plot_image(picks=['B17']);
onset_epochs15.plot_image(picks=['B17']);
###Output
18 matching events found
No baseline correction applied
Not setting metadata
0 projection items activated
0 bad epochs dropped
###Markdown
`B17` is located in the parietal lobe, and much research has found that activity in this region is correlated with the default-mode network and the overall functional connectivity of the brain. As you can see, the 2 subjects actually display a similar pattern during meditation, but there is a clear spike of activity in the non-expert (first plot) around the first second after the onset of the interruption. We can also see greater variability in activities before the onset. Does this suggest smoother brain activity, even in response to interruption, for expert meditators? Time Frequency AnalysisSince channel `B17` seems to be showing some interesting stuff, let's run a time frequency analysis for both subjects at this channel.
###Code
frequencies = np.arange(7, 40, 1)
power1 = mne.time_frequency.tfr_morlet(onset_epochs1, n_cycles=3, return_itc=False,
freqs=frequencies, decim=3)
power15 = mne.time_frequency.tfr_morlet(onset_epochs15, n_cycles=3, return_itc=False,
freqs=frequencies, decim=3)
power1.plot(['B17']);
power15.plot(['B17']);
###Output
No baseline correction applied
###Markdown
Well, it doesn't tell us much, but it does confirm what we discussed earlier: there is more variability in activity for the non-expert (first plot) than the expert (second plot) at channel `B17`. Evoked Response Since this is an event-related response, we can compare the evoked responses of the 2 subjects around the onset of the interruption. We only looked at 2 individual channels earlier, but we can also look at the aggregated response from all channels for these 2 subjects.
###Code
onset_evoked1 = onset_epochs1.average()
onset_evoked15 = onset_epochs15.average()
mne.viz.plot_compare_evokeds(dict(Subj1=onset_evoked1, Subj15=onset_evoked15),
legend='upper left', show_sensors='upper right');
###Output
combining channels using "gfp"
combining channels using "gfp"
###Markdown
It seems that even from the aggregated results we can already tell which person is the expert and which is the non-expert. There is a huge amount of variability for Subject 1 (blue line), while Subject 15's activities seem relatively smooth overall. The spikes around 3 seconds after onset (presumably when the questions kick in) are drastically different as well. We can also look at which brain region has the most activity around the onset epochs.
###Code
title1 = 'EEG Average reference (Subject 1)'
onset_evoked1.plot(titles=dict(eeg=title1), time_unit='s')
onset_evoked1.plot_topomap(times=[-5,1,4], size=3., title=title1, time_unit='s');
title15 = 'EEG Average reference (Subject 15)'
onset_evoked15.plot(titles=dict(eeg=title15), time_unit='s')
onset_evoked15.plot_topomap(times=[-5,1,4], size=3., title=title15, time_unit='s');
###Output
_____no_output_____
###Markdown
There are very drastic differences in activity between the 2 subjects! There seems to be more frontal lobe activity for subject 1 (non-expert) and more parietal and occipital lobe activity for subject 15 (expert). Subject 1, again, shows a lot more variance compared to subject 15 in the averaged EEG references. We also see that for both subjects, there are activations near the temporal lobe by 4 seconds after onset, presumably due to the auditory input from the instructions. Given all of the results found here around the onset epochs, we are more motivated to see whether a classifier could be trained to label a meditator's level of experience. With this graph, it's quite convincing that it might work. Data CleaningFor simplicity of the final output, data cleaning is performed via a separate notebook `data_cleaning.ipynb` in the same `notebooks` folder. A `*.csv` export of the cleaned-up data will be used and loaded here; a sketch of the kind of per-subject aggregation involved is shown after the next cell. This is going to take a bit to load. Good time for a tea break. Come back in a few minutes.
###Code
cleaned = pd.read_csv('../final_data.csv').rename(columns={'Unnamed: 0': 'Time'})
###Output
_____no_output_____
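###Markdown
The cleaning notebook itself is not reproduced here, but a minimal sketch of the kind of per-subject aggregation it performs might look like the cell below. The subject subset, the expert-label assignment, and the exact column layout of `final_data.csv` are assumptions for illustration only, not the actual contents of `data_cleaning.ipynb`.
###Code
# Minimal sketch only: subject list, expert labels, and column layout are illustrative
# assumptions. For each subject: load the raw BDF, epoch -10s..+5s around the onset
# marker (event id 128), average the epochs, and stack the averaged time courses.
aux_chs = ['EXG1', 'EXG2', 'EXG3', 'EXG4', 'EXG5', 'EXG6', 'EXG7', 'EXG8',
           'GSR1', 'GSR2', 'Erg1', 'Erg2', 'Resp', 'Plet', 'Temp']
expert_ids = [15]        # hypothetical expert label assignment
frames = []
for sub in [1, 15]:      # hypothetical subset; the real cleaning loops over all 24 subjects
    fname = (f'../rawdata/bidsexport/sub-{sub:03d}/ses-01/eeg/'
             f'sub-{sub:03d}_ses-01_task-meditation_eeg.bdf')
    raw = mne.io.read_raw_bdf(fname, preload=True, verbose=False)
    raw.drop_channels(aux_chs)
    events = mne.find_events(raw, stim_channel='Status', verbose=False)
    picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False)
    epochs = mne.Epochs(raw, events, event_id={'onset': 128}, tmin=-10, tmax=5,
                        picks=picks, preload=True, verbose=False)
    evoked = epochs['onset'].average()                         # average all onset epochs
    df = pd.DataFrame(evoked.data.T, columns=evoked.ch_names)  # rows = time points
    df.insert(0, 'Time', evoked.times)
    df['expert'] = int(sub in expert_ids)
    frames.append(df)
sketch_cleaned = pd.concat(frames, ignore_index=True)
sketch_cleaned.head()
###Output
_____no_output_____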
###Markdown
Data Analysis Separating Data: X, Y, Train, Validation, Test We will be separating the data into an 80% training set and a 20% test set (validation is handled later via cross-validation).
###Code
# Setting Random Seed
np.random.seed(45)
# Partitioning X matrix and y vector
X = cleaned.drop('expert', axis=1)
y = cleaned['expert']
# Splitting into Train and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
###Output
_____no_output_____
###Markdown
Correlation MatrixLet's take a look at which channels are most correlated with each other.
###Code
# Create correlation matrix
corr_matrix = X_train.corr().abs()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr_matrix, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr_matrix, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5});
###Output
_____no_output_____
###Markdown
It seems that many of these channels are loosely correlated with each other, but few are strongly correlated. There are several channels that are essentially uncorrelated (lighter areas), say `B17` and `A2`, which were the 2 we discussed earlier, being in completely different regions of the brain (parietal vs. frontal). Independent Component Analysis (ICA) Let's run ICA on our `X_train` and see what features stand out.
###Code
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
ica = FastICA(n_components=10) # 10 independent components
X_train_transformed = ica.fit_transform(X_train)
###Output
_____no_output_____
###Markdown
This plot will visualize what channels contributed to which ICs the most.
###Code
plt.figure(figsize=(16,2))
plt.matshow(ica.components_, cmap='viridis', fignum=1, aspect='auto')
labs = []
plt.yticks(range(ica.components_.shape[0]),
[f'Comp# {i}' for i in range(1, ica.components_.shape[0]+11)],fontsize=10)
plt.colorbar()
plt.xticks(range(ica.components_.shape[1]), np.array(X.columns),rotation=90,ha='left')
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
plt.rc('axes', labelsize=10)
plt.show();
###Output
_____no_output_____
###Markdown
This ICA component matrix tells us that there are several channels of higher importance, though each explains only a very limited amount of variance. It does seem that time has something to do with the components as well. Principal Component Analysis (PCA) We will try running PCA and see if any of the same channels show up in the results.
###Code
pca = PCA(n_components=.95) # 95% variance explained
X_train_transformed = pca.fit_transform(X_train)
###Output
_____no_output_____
###Markdown
Similarly, this plot will visualize what channels contributed to which PCs the most.
###Code
plt.figure(figsize=(16,6))
plt.matshow(pca.components_, cmap='viridis', fignum=1, aspect='auto')
labs = []
plt.yticks(range(pca.components_.shape[0]),
[f'Comp# {i}' for i in range(1, pca.components_.shape[0]+1)],fontsize=10)
plt.colorbar()
plt.xticks(range(pca.components_.shape[1]), np.array(X.columns),rotation=90,ha='left')
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
plt.rc('axes', labelsize=10)
plt.show();
###Output
_____no_output_____
###Markdown
The PCA yielded a total of 33 principal components, and it seems that there are quite a few channels highly related to some PCs. It seems that time is a huge factor for PCs 10 and 11. We can plot the following scree plot to see whether any PCs stand out from the rest.
###Code
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('PCs')
plt.ylabel('Cumulative explained variance')
plt.title('Scree Plot for Principal Components, 95% Variance Explained')
plt.show()
###Output
_____no_output_____
###Markdown
It seems that the first few PCs explain over 50% of the variance. However, that is probably not enough for our classifier. Classifiers____Please note that the following analyses will take quite some time to run. It's time for a meal break, I'd say! The last 2 methods in total will probably take 45 minutes to an hour to run.____Moving on to building our classifier. Our goal is to see whether we can use these onset-epoch data to create a classifier that labels expert and non-expert meditators solely based on EEG measures. Logistic Regression (L-BFGS) with 5-Fold Cross Validation We can now try to run a simple logistic regression model to predict the labels in the y-vector.
###Code
# Create Logistic Function
logistic = LogisticRegression(solver='lbfgs', multi_class='multinomial', max_iter=500)
# Cross Validation
cross_val_score(logistic, X_train, y_train, cv=5)
###Output
_____no_output_____
###Markdown
Our logistic regression using the L-BFGS solver yielded a ~54.2% accuracy for our model. Logistic Regression (LASSO) with 5-Fold Cross Validation We will introduce an L1 penalty and see if our result improves.
###Code
# Create Logistic Function with L1 Penalty
logistic_l1 = LogisticRegression(penalty='l1', solver='saga', multi_class='multinomial', max_iter=500)
# Cross Validation
cross_val_score(logistic_l1, X_train, y_train, cv=5)
###Output
_____no_output_____
###Markdown
This yielded similar results, also a ~54.2% accuracy. It's bad, but at least it's slightly better than chance (50%). Neural Network: Multi-Layer Perceptron Classifier We will deploy an MLP classifier to see if our result will improve. This will take about 20 minutes to run, if no other tasks are running on your computer.
###Code
mlp = MLPClassifier(activation='logistic', random_state=45)
mlp.fit(X_train, y_train)
mlp.score(X_train, y_train)
mlp.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
It appears that our neural network overfits badly: it yielded 99.27% accuracy on our training set (!!!!!), but only ~50% on our test set. The test-set performance was actually worse than our Logistic Regression CV model (this depends on the random seed; with the seed set to 45 it remains 50% on rerun). This is essentially a chance model that cannot be used. Let's see if random forest performs a little better. Random Forest We will now deploy a random forest classifier on our data to see how well we do. This will take about 25 minutes to run, if no other tasks are running on your computer.
###Code
clf = RandomForestClassifier(n_estimators=20, criterion='entropy', random_state=45)
clf.fit(X_train, y_train)
clf.score(X_train, y_train)
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Similar to our MLP Classifier, we again heavily overfit on the training set, reaching a whopping 99.99% accuracy. But we do see a slight increase in our test-set accuracy, which is at 55.54%, the best of all our methods. Let's take a moment to see which channels were most important in building our random forest. We will select the top 10 features.
###Code
feature_imp = pd.Series(clf.feature_importances_,
index=X.columns).sort_values(ascending=False)[:10]
sns.barplot(x=feature_imp, y=feature_imp.index)
plt.xlabel('Feature Importance Score')
plt.ylabel('Features')
plt.title("Visualizing Important Features")
plt.show();
###Output
_____no_output_____
###Markdown
We can see that time seems to be some sort of determining factor in our analysis, though it is not clear why. We can capture these important features/channels and visualize them on the sensor plot below.
###Code
# Deleting Time as a "channel"
imp_chs = list(np.delete(np.array(feature_imp.index), list(feature_imp.index).index('Time')))
mne.viz.plot_sensors(raw1.info, ch_type='eeg', show_names=imp_chs);
###Output
_____no_output_____
###Markdown
Outline- Background to Project- Code Imports- Import Data- Models - (Models = KNN, LR, RF, AdaBoost, Gradient, XG Boost, NN-tanh, NN-Relu) - Build Pipeline - GridSearch - Parameter Grid - Construct GridSearch - Fit GridSearch - Output Best Accuracy - Save Model's Best Accuracy to Summary Table - (From RF take feature importance) - (Gradient = XG Boost?)- Evaluate Best model - Confusion Matrix for best model- Function to predict next fight- Conclusion - Future Work Code Imports Library Installations
###Code
# !conda install py-xgboost
###Output
_____no_output_____
###Markdown
Import Libraries
###Code
# For Dataframes and arrays
import numpy as np
import pandas as pd
# Visualization libraries
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Preprocessing Data
# Train:Test split
from sklearn.model_selection import train_test_split
# Scaling
from sklearn.preprocessing import StandardScaler
# Feature Extraction
from sklearn.decomposition import PCA
# Modeling
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
import xgboost as xgb
from xgboost import XGBClassifier
# Prevent Kernel Dying
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# Neural Network
import tensorflow as tf
import keras
from keras.layers import Dense, Dropout, Activation, LeakyReLU
from keras.models import Sequential
from keras.optimizers import SGD
# Tuning
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
# Evaluation
from sklearn import metrics
from sklearn.metrics import confusion_matrix
import itertools
# Warnings
import warnings
warnings.filterwarnings("ignore")
# Set random seeds
np.random.seed(123)
tf.set_random_seed(123)
import datetime
import pickle
import random
###Output
_____no_output_____
###Markdown
Local Code Imports - Do not delete
###Code
# DO NOT REMOVE THESE
%load_ext autoreload
%autoreload 2
# DO NOT REMOVE This
%reload_ext autoreload
## DO NOT REMOVE
## import local src module -
## src in this project will contain all your local code
## clean_data.py, model.py, visualize.py, custom.py
from src import make_data as mk
from src import visualize as viz
from src import model as mdl
from src import pandas_operators as po
from src import custom as cm
def test_src():
mk.test_make_data()
viz.test_viz()
mdl.test_model()
po.test_pandas()
return 1
test_src()
###Output
In make_data
In Visualize
In Model
In pandas ops
###Markdown
Project Background UFC Background
###Code
# Credit to National Geographic
cm.ufc_intro_vid()
###Output
_____no_output_____
###Markdown
Project Objective Compare a Variety of Machine Learning Models For this project we are classifying fight outcomes. There are many choices of classification algorithm, each with its own strengths and weaknesses. There is no single classifier that always works best across all scenarios, so we will compare a handful of different learning algorithms to select the best model for our particular problem. Assumptions made- In reality, each bout has 4 possible outcomes: fighter 1 wins, fighter 2 wins, draw, or no contest. For simplicity of modeling, these outcomes have been reduced to simply fighter 1 wins or fighter 2 wins. - The models are trained on fighter statistics that reflect their current statistics, not those at the time of the fight, i.e. a fighter's first bout will feature the same statistics as their latest bout.
###Code
data = pd.read_csv('../data/processed/combined')
data.head(3)
fighters = pd.read_csv('../data/processed/fighters_cleaned')
fighters.drop(labels=['draw'], axis=1, inplace=True)
fighters.head(3)
bouts = pd.read_csv('../data/processed/bouts_cleaned')
bouts.head(3)
###Output
_____no_output_____
###Markdown
Train-Test Split
###Code
X = data.drop(['date', 'fighter1', 'fighter2', 'winner_is_fighter1'],axis=1)
y = data['winner_is_fighter1']
###Output
_____no_output_____
###Markdown
- We no longer require the fighter names
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
print('X_Train: \n\tObservations: {} \tFeatures: {} \t{}% of data'.format(X_train.shape[0], X_train.shape[1], len(X_train)/len(X)*100))
print('X_Test: \n\tObservations: {} \tFeatures: {} \t{}% of data'.format(X_test.shape[0], X_test.shape[1], len(X_test)/len(X)*100))
###Output
X_Train:
Observations: 3199 Features: 41 70.0% of data
X_Test:
Observations: 1371 Features: 41 30.0% of data
###Markdown
- We set the random state so that our results can be reproduced easily- The stratify argument ensures that we maintain the proportion of class labels, i.e. the same proportion of fighter 1 wins to fighter 2 wins, across the original data, y_train and y_test.
###Code
X_train = X_train.astype(float)
X_test = X_test.astype(float)
###Output
_____no_output_____
###Markdown
- Many of the features are type integer. Several of the models we will run prefer to have type float as an input. Scaling - Many of the machine learning and optimization algorithms that we will be using require feature scaling in order to optimize performance. - We will standardize the features using StandardScaler from scikit-learn's preprocessing module. - This will transform the data, resulting in each feature having a mean of 0 and a standard deviation of 1.
###Code
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
- We will use the same scaling parameters to standardize the test set, so that the values in the training and test dataset are comparable to each other. Models **Compare Several Models**- For this project we are classifying bouts. There are many choices of classification algorithm, each with its own strengths and weaknesses. - There is no single classifier that always works best across all scenarios so we will compare a handful of different learning algorithms to select the best model for our particular problem. **Cross Validation**- Train-test splitting does not ensure a 'random' split, which may result in our models overfitting our data. We can use *cross-validation* to mitigate this issue.- There are several varieties of cross validation available in [SK Learn's Model Selection](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.model_selection)- We will be using K-Fold Cross Validation, with a K of 5 - We use each fold as a validation set, with the remaining folds used to train our model - The computational cost will increase with each increase in the number of folds. - Our dataset is relatively small so this should not be a big problem. **Track each model's statistics**- We will keep track of each models statistics and performance in a dataframe
###Code
models_summary = pd.DataFrame()
models_summary.rename_axis('Model', axis='columns', inplace=True)
###Output
_____no_output_____
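###Markdown
As a concrete illustration of the 5-fold scheme described above (separate from the `cm` helper functions used in the rest of this notebook), a minimal sketch might look like the cell below; the `LogisticRegression` estimator is only a placeholder.
###Code
# Minimal 5-fold cross-validation sketch (placeholder estimator, illustrative only).
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

cv_sketch = KFold(n_splits=5, shuffle=True, random_state=123)
cv_scores = cross_val_score(LogisticRegression(), X_train_std, y_train, cv=cv_sketch)
print('Fold accuracies:', np.round(cv_scores, 3))
print('Mean CV accuracy: {:.3f}'.format(cv_scores.mean()))
###Output
_____no_output_____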
###Markdown
**Accuracy is our best metric**- Although there are many metrics to evaluate our models against one another, the *Occam's Razor* principle can often be applied (that is, the simpler explanation is to be preferred). In the case of comparing our models, we are most interested in how many fights we correctly predict, i.e. the accuracy. There is virtually no class imbalance and the costs associated with a false positive and a false negative are identical. Baseline Model - Our Base Rate will be to classify every bout as the most frequent class (a sketch of this computation is included after the next cell)- Our Null Rate will be the accuracy of trivially predicting the most-frequent class
###Code
baseline_accuracy = cm.model_baseline(y_train)
# Save the Baseline Model to our models summary dataframe
models_summary['Baseline'] = [baseline_accuracy]
models_summary.rename({0: 'Accuracy'}, inplace=True)
models_summary
###Output
_____no_output_____
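###Markdown
The internals of `cm.model_baseline` are not shown here; it is assumed to compute something equivalent to the majority-class sketch below: always predict the most frequent class and report the resulting accuracy.
###Code
# Hypothetical equivalent of the baseline helper: majority-class accuracy on y_train.
majority_class = y_train.value_counts().idxmax()
baseline_sketch = (y_train == majority_class).mean()
print('Majority class: {} | baseline accuracy: {:.4f}'.format(majority_class, baseline_sketch))
###Output
_____no_output_____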
###Markdown
K-Nearest Neighbors **Why Use KNN?**- K-Nearest Neighbors (KNN) is an instance-based, nonparametric model. It memorizes the training dataset and adapts immediately as we collect new training data.- The computational complexity for classifying new samples grows linearly with the number of samples in the training dataset, i.e. with every new bout added to the training data, the model becomes slower and slower to run. Our dataset is relatively small, so we are able to use this model.- We are using the default Minkowski distance, which requires our features to be standardized.**Dimensionality Reduction using PCA**- Principal Component Analysis (PCA) is a form of feature extraction, where we transform the data onto a new feature space whilst maintaining most of the relevant information.- Main benefits of PCA: - Improves computational efficiency of our learning algorithm - Reduces the "curse of dimensionality", which can improve the predictive performance
###Code
cm.plot_explained_variance(X_train_std,
features_to_show=30,
title='KNN PCA Total and Explained Variance')
###Output
The top 30 principal components explains 97.91% of the variance
###Markdown
**Choosing the right number of neighbors (k)**- This is critical to avoid over and underfitting our model.- We can do this manually by plotting the accuracy of the KNN model, as we change the k value. - Our classification is binary, so we should choose k to be an odd number to avoid tied votes.
###Code
cm.acc_by_k_value(X_train_std, y_train, X_test_std, y_test)
###Output
Highest Accuracy is 65.86%, when K is 25
###Markdown
- The best K is the one that results in the highest test accuracy score. - The problem with using the chart above is that we are inadvertently using our test set as a training set. We are forcing the model to fit the test set, and causing the model to overfit.- A better approach is to use **k-fold cross validation**. - This will allow us to compare hyperparameters (such as number of neighbors) on a validation set, and then only use the test data to represent how well the model performs on unseen data
###Code
cm.model_knn_grid(X_train, y_train)
knn_accuracy = cm.model_knn(X_train_std, y_train, X_test_std, y_test, n_neighbors=21)
# Add KNN results to models summary dataframe
models_summary['KNN'] = [knn_accuracy]
models_summary
###Output
_____no_output_____
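###Markdown
The grid search itself is wrapped in `cm.model_knn_grid`, whose implementation is not shown; the cell below is a sketch of the kind of scaler, PCA, and KNN pipeline tuned with 5-fold grid search that it is assumed to perform. The component count and candidate k values here are illustrative, not the helper's actual defaults.
###Code
# Illustrative pipeline + grid search (parameter values are guesses, not the cm defaults).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

knn_pipe = Pipeline([('scale', StandardScaler()),
                     ('pca', PCA(n_components=30)),
                     ('knn', KNeighborsClassifier())])
knn_grid = GridSearchCV(knn_pipe,
                        param_grid={'knn__n_neighbors': [11, 15, 21, 25, 31]},  # odd k avoids ties
                        cv=5, scoring='accuracy')
knn_grid.fit(X_train, y_train)
print('Best parameters:', knn_grid.best_params_)
print('Best CV accuracy: {:.3f}'.format(knn_grid.best_score_))
###Output
_____no_output_____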
###Markdown
Logistic Regression
###Code
cm.model_logreg_grid(X_train_std, y_train)
lr_accuracy = cm.model_logreg(C_=0.1, X_train_=X_train_std, y_train_=y_train, X_test_=X_test_std, y_test_=y_test)
# Add Logistic Regression results to models summary dataframe
models_summary['Logistic Regression'] = lr_accuracy
models_summary
###Output
_____no_output_____
###Markdown
Random Forest - Because this is an ensemble algorithm, the model is naturally resistant to noise and variance in the data (which helps reduce overfitting), and generally tends to perform quite well.- As it is an ensemble algorithm, we incur the computational cost of training each constituent model
###Code
X_train_top_features = X_train.copy()
X_train_top_features = cm.rename_top_features(X_train_top_features)
cm.model_rf_grid(X_train, y_train)
rf_accuracy = cm.model_random_forest(X_train_top_features, y_train, X_test, y_test,
max_depth_=5, min_samples_leaf_=0.05,
min_samples_split_=0.05, n_estimators_=120)
# Add Random Forest results to models summary dataframe
models_summary['Random Forest'] = rf_accuracy
models_summary
###Output
_____no_output_____
###Markdown
AdaBoost
###Code
cm.model_ada_boost_grid(X_train, y_train)
ada_accuracy = cm.model_ada_boost(X_train, y_train, X_test, y_test,
learning_rate=0.5, n_estimators=90)
# Add Ada Boost results to models summary dataframe
models_summary['Ada Boost'] = ada_accuracy
models_summary
###Output
_____no_output_____
###Markdown
Gradient Boosting (XG Boost) - eXtreme Gradient Boosting (XG Boost) is a form of gradient boosting which often produces the best performance among gradient boosting implementations (an illustrative configuration is shown after the next cell)- XG Boost is able to parallelize the construction of decision trees across all our computer's CPU cores during the training phase - This can even be done across a cluster of computers
###Code
xgb_accuracy = cm.model_xgboost(X_train, y_train, X_test, y_test)[0]
models_summary['XG Boost'] = xgb_accuracy
models_summary
###Output
_____no_output_____
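###Markdown
The `cm.model_xgboost` wrapper hides its configuration, so the cell below is only an illustrative sketch of a directly configured `XGBClassifier`; `n_jobs=-1` asks xgboost to use all available CPU cores, and the hyperparameter values are guesses rather than the wrapper's actual settings.
###Code
# Illustrative XGBClassifier configuration (values are guesses, not the cm defaults).
from xgboost import XGBClassifier

xgb_sketch = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1,
                           n_jobs=-1,          # parallelize tree construction across CPU cores
                           random_state=123)
xgb_sketch.fit(X_train, y_train)
print('Sketch test accuracy: {:.3f}'.format(xgb_sketch.score(X_test, y_test)))
###Output
_____no_output_____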
###Markdown
Neural Network (tanh)
###Code
model_tanh = cm.model_compile_neural_network(X_train_std, 'tanh')
history_tanh = cm.model_fit_neural_network(X_train_std, y_train, model_tanh, 100)
loss = history_tanh.history['loss']
cm.plot_training_loss(loss)
acc = history_tanh.history['acc']
val_acc = history_tanh.history['val_acc']
cm.training_and_validation_accuracy(acc, val_acc, history_tanh,
title='Neural Network (tanh) Training and Validation Accuracy')
cm.training_and_validation_accuracy(acc, val_acc, history_tanh,
title='Neural Network (tanh) Training and Validation Accuracy',
y_lim=(0.6,0.8), show_mov_avg=True)
nn_tanh_accuracy = cm.model_test_acc_neural_network(X_test_std, y_test, model_tanh)
# Add Score to Models Summary DataFrame
models_summary['Neural Network (tanh)'] = nn_tanh_accuracy
models_summary
###Output
_____no_output_____
###Markdown
Neural Network (Relu)
###Code
model_relu = cm.model_compile_neural_network(X_train_std, 'relu')
history_relu = cm.model_fit_neural_network(X_train_std, y_train, model_relu, 100)
loss = history_relu.history['loss']
cm.plot_training_loss(loss)
acc = history_relu.history['acc']
val_acc = history_relu.history['val_acc']
cm.training_and_validation_accuracy(acc, val_acc, history_relu,
title='Neural Network (tanh) Training and Validation Accuracy')
acc = history_relu.history['acc']
val_acc = history_relu.history['val_acc']
cm.training_and_validation_accuracy(acc, val_acc, history_relu,
title='Neural Network (tanh) Training and Validation Accuracy',
y_lim=(0.4,0.75),
mov_avg_n=10,
show_mov_avg=True)
nn_relu_accuracy = cm.model_test_acc_neural_network(X_test_std, y_test, model_relu)
# Add Score to Models Summary DataFrame
models_summary['Neural Network (relu)'] = nn_relu_accuracy
models_summary
###Output
_____no_output_____
###Markdown
Evaluation of Models
###Code
models_summary = models_summary.T
# Sort models_summary into order
models_summary.sort_values('Accuracy', inplace=True)
models_summary
###Output
_____no_output_____
###Markdown
Barchart to compare Models
###Code
cm.plot_compare_models(models_summary, 'Accuracy')
###Output
_____no_output_____
###Markdown
- We can see that XG Boost has the highest accuracy with 73.7% Confusion Matrix of Best Model
###Code
# XG Boost
y_test = cm.model_xgboost(X_train, y_train, X_test, y_test)[2]
y_test_pred = cm.model_xgboost(X_train, y_train, X_test, y_test)[3]
conf_matrix = confusion_matrix(y_test,y_test_pred)
cm.plot_confusion_matrix(cm=conf_matrix, classes=['Fighter 1 Wins','Fighter 2 Wins'], normalize=True, title='Confusion Matrix for XG Boost Model');
###Output
Accuracy for XGBoost model : 73.7%
Accuracy for XGBoost model : 73.7%
Normalized confusion matrix
###Markdown
Function to Predict a Fight
###Code
# Generate a random list of fighter names
cm.random_fighter_names(fighters, num_fighters=5)
# Generate a list of names that contain a given string
cm.fighter_name_contains('Adam')[:5]
###Output
Names that contain "Adam"
###Markdown
###Code
cm.predict_fight(data, fighters)
###Output
What is the name of Fighter 1? Conor McGregor
What is the name of Fighter 2? Adam Lynn
When will the fight take place? (YYYY-MM-DD) (default = today)
Is this a title fight? (yes/no) no
I think that Conor McGregor will beat Adam Lynn.
I am 98.91% sure of this
###Markdown
Predict a recent bout, not in our dataset - UFC 240 took place on July 27th 2019. The headline fight was between Max Holloway and Frankie Edgar. We will use the model to predict the outcome of the fight
###Code
cm.predict_fight(data, fighters)
###Output
_____no_output_____
|
boston_housing/home/boston_housing.ipynb
|
###Markdown
Machine Learning Engineer Nanodegree Model Evaluation & Validation Project: Predicting Boston Housing PricesWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:- 16 data points have an `'MEDV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` are essential. The remaining **non-relevant features** have been excluded.- The feature `'MEDV'` has been **multiplicatively scaled** to account for 35 years of market inflation.Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.model_selection import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print("Boston housing dataset has {} data points with {} variables each.".format(*data.shape))
###Output
Boston housing dataset has 489 data points with 4 variables each.
###Markdown
Data ExplorationIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MEDV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively. Implementation: Calculate StatisticsFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.In the code cell below, you will need to implement the following:- Calculate the minimum, maximum, mean, median, and standard deviation of `'MEDV'`, which is stored in `prices`. - Store each calculation in their respective variable.
###Code
# TODO: Minimum price of the data
minimum_price = np.min(data['MEDV'])
# TODO: Maximum price of the data
maximum_price = np.max(data['MEDV'])
# TODO: Mean price of the data
mean_price = np.mean(data['MEDV'])
# TODO: Median price of the data
median_price = np.median(data['MEDV'])
# TODO: Standard deviation of prices of the data
std_price = np.std(data['MEDV'])
# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${}".format(minimum_price))
print("Maximum price: ${}".format(maximum_price))
print("Mean price: ${}".format(mean_price))
print("Median price ${}".format(median_price))
print("Standard deviation of prices: ${}".format(std_price))
###Output
Statistics for Boston housing dataset:
Minimum price: $105000.0
Maximum price: $1024800.0
Mean price: $454342.9447852761
Median price $438900.0
Standard deviation of prices: $165171.13154429474
###Markdown
Question 1 - Feature ObservationAs a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):- `'RM'` is the average number of rooms among homes in the neighborhood.- `'LSTAT'` is the percentage of homeowners in the neighborhood considered "lower class" (working poor).- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.** Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an **increase** in the value of `'MEDV'` or a **decrease** in the value of `'MEDV'`? Justify your answer for each.****Hint:** This problem can be phrased using examples like below. * Would you expect a home that has an `'RM'` value(number of rooms) of 6 be worth more or less than a home that has an `'RM'` value of 7?* Would you expect a neighborhood that has an `'LSTAT'` value(percent of lower class workers) of 15 have home prices be worth more or less than a neighborhood that has an `'LSTAT'` value of 20?* Would you expect a neighborhood that has an `'PTRATIO'` value(ratio of students to teachers) of 10 have home prices be worth more or less than a neighborhood that has an `'PTRATIO'` value of 15? **Answer:** A higher value for the feature RM correlates with an increase in MEDV, because homeowners are willing to pay for additional space and flexibility in laying out their space. Some homeowners are constrained and need at least a certain number of rooms to live with their roommates or family, or for remote work. A low number for the feature LSTAT correlates with an increase in the value of MEDV. Homeowners statistically pay more to be in richer areas of town; some of the reasons are a suspicion of worse crime and infrastructure, and a desire to dissociate themselves from poverty. A lower ratio of students per teacher, given by PTRATIO, correlates with an increase in the value of MEDV. As education is extremely important to local and global society, as well as to individuals, students, parents, and others in related roles strive to seek out good education. Therefore the location of educational institutions with good personnel resources is strongly considered. ---- Developing a ModelIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance MetricIt is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R2, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R2 range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. 
A model with an R2 of 0 is no better than a model that always predicts the *mean* of the target variable, whereas a model with an R2 of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. _A model can be given a negative R2 as well, which indicates that the model is **arbitrarily worse** than one that always predicts the mean of the target variable._For the `performance_metric` function in the code cell below, you will need to implement the following:- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.- Assign the performance score to the `score` variable.
###Code
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
###Output
_____no_output_____
###Markdown
Question 2 - Goodness of FitAssume that a dataset contains five data points and a model made the following predictions for the target variable:| True Value | Prediction || :-------------: | :--------: || 3.0 | 2.5 || -0.5 | 0.0 || 2.0 | 2.1 || 7.0 | 7.8 || 4.2 | 5.3 |Run the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination.
###Code
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
###Output
Model has a coefficient of determination, R^2, of 0.923.
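###Markdown
As a quick sanity check on what `r2_score` computes, R2 can be written as 1 - SS_res / SS_tot; the short cell below recomputes it by hand for the five points above and should reproduce the 0.923 reported by `performance_metric`.
###Code
# Manual R^2 = 1 - SS_res / SS_tot for the five points above (should match 0.923).
y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
y_pred = np.array([2.5, 0.0, 2.1, 7.8, 5.3])
ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
print("Manual R^2: {:.3f}".format(1 - ss_res / ss_tot))
print("r2_score:   {:.3f}".format(performance_metric(y_true, y_pred)))
###Output
_____no_output_____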
###Markdown
* Would you consider this model to have successfully captured the variation of the target variable? * Why or why not?** Hint: ** The R2 score is the proportion of the variance in the dependent variable that is predictable from the independent variable. In other words:* R2 score of 0 means that the dependent variable cannot be predicted from the independent variable.* R2 score of 1 means the dependent variable can be predicted from the independent variable.* R2 score between 0 and 1 indicates the extent to which the dependent variable is predictable. An R2 score of 0.40 means that 40 percent of the variance in Y is predictable from X. **Answer:** The performance of a model can be judged by comparing the variance of its predictions with the variance of the dataset. The R2 score ranges from negative infinity to 1: if the regressed model fails, it performs worse than the null hypothesis (the simplest model, e.g. always predicting the dataset's mean), which is indicated by a negative R2 score. A low score in the range 0 to 1 indicates that the model's predictions miss much of the variation; the model can only predict simplistic cases, missing the meaningful aspects of the dataset. An R2 score close to 1 shows that the model's predictions have enough variance to have learned the dataset's structure. When using the R2 score, one also has to consider the biasing aspects of this metric: complexity, selection biases, dependencies among the variables, etc. are not taken into account. This can lead to developing, for example, overcomplex, overfit models, so additional metrics and graphing should be considered, especially when encountering suspiciously high R2 scores like 0.99. The R2 score compares the variances: with an R2 score of 0.923, 92.3% of the data variance is predicted by the model. The score is not so suspiciously high that the model seems overfit, but it is high enough that it captured enough variation of the dataset. I consider an R2 score of 0.923 as one indication of a successful model. Implementation: Shuffle and Split DataYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.For the code cell below, you will need to implement the following:- Use `train_test_split` from `sklearn.model_selection` to shuffle and split the `features` and `prices` data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.
###Code
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(data[['RM', 'LSTAT', 'PTRATIO']],
data['MEDV'],
train_size=0.8,
test_size=0.2,
random_state=42,
shuffle=True)
# Success
print("Training and testing split was successful.")
###Output
Training and testing split was successful.
###Markdown
Question 3 - Training and Testing

* What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?

**Hint:** Think about how overfitting or underfitting is contingent upon how the split of the data is done.

**Answer:** The data is split into a training and a testing set so that the model's performance and generality can be analysed. If too much of the data is kept for training (leaving too little for testing), an overfit model that does not work on novel data can go undetected; if too much of the data is held out for testing, the model is trained on too few examples, underfits, and does not reach its performance target. Choosing the right split is essential for judging how applicable the model is to its use case compared with other methods.

---- Analyzing Model PerformanceIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning CurvesThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R2, the coefficient of determination. Run the code cell below and use these graphs to answer the following question.
###Code
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
###Output
_____no_output_____
###Markdown
Question 4 - Learning the Data

* Choose one of the graphs above and state the maximum depth for the model.
* What happens to the score of the training curve as more training points are added? What about the testing curve?
* Would having more training points benefit the model?

**Hint:** Are the learning curves converging to particular scores? Generally speaking, the more data you have, the better. But if your training and testing curves are converging with a score above your benchmark threshold, would this be necessary? Think about the pros and cons of adding more training points based on whether the training and testing curves are converging.

**Answer:** For the graph with a maximum depth of 3, the training score decreases while the testing score increases as training points are added, and the two curves converge until the training process runs out of new data. Adding more data tends to increase model performance as long as the data is novel, of good quality, and related to the problem; at some point, because of diminishing returns, adding more quantity is no longer worthwhile. The score the training and testing curves converge to also depends on the test/train ratio.

Complexity CurvesThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function. **Run the code cell below and use this graph to answer the following two questions Q5 and Q6.**
###Code
vs.ModelComplexity(X_train, y_train)
###Output
_____no_output_____
###Markdown
Question 5 - Bias-Variance Tradeoff

* When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance?
* How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?

**Hint:** High bias is a sign of underfitting (the model is not complex enough to pick up the nuances in the data) and high variance is a sign of overfitting (the model memorizes the data and cannot generalize well). Think about which model (depth 1 or 10) aligns with which part of the tradeoff.

**Answer:** With a maximum depth of 1 the model suffers from high bias (underfitting), indicated by the low training and validation scores. With a maximum depth of 10 the model is overfitting: it has high variance, which can be seen from the growing gap between the training and validation scores and from the increasing uncertainty of the validation score.

Question 6 - Best-Guess Optimal Model

* Which maximum depth do you think results in a model that best generalizes to unseen data?
* What intuition lead you to this answer?

**Hint:** Look at the graph above Question 5 and see where the validation scores lie for the various depths that have been assigned to the model. Does it get better with increased depth? At what point do we get our best validation score without overcomplicating our model? And remember, Occams Razor states "Among competing hypotheses, the one with the fewest assumptions should be selected."

**Answer:** A maximum depth of 3 seems to be optimal. The difference between the training and validation scores is much smaller than for maximum depths above 4, and the score increase from a depth of 3 to 4 is negligible and does not justify the additional complexity.

----- Evaluating Model PerformanceIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`. Question 7 - Grid Search

* What is the grid search technique?
* How can it be applied to optimize a learning algorithm?

**Hint:** When explaining the Grid Search technique, be sure to touch upon why it is used, what the 'grid' entails and what the end goal of this method is. To solidify your answer, you can also give an example of a parameter in a model that can be optimized using this approach.
**Answer:** Grid search is an exhaustive model search technique that finds the best model from a set of candidate models and hyperparameters, judged by a given score function. It is used because of its simplicity, robustness, flexibility and high autonomy: it automates the manual train-evaluate-tweak loop for finding models and hyperparameters. You typically start by splitting the data into training, validation and testing sets. A set of models, a scoring function and the hyperparameters are specified, with each hyperparameter given a list of discrete values; these values are chosen with the search complexity, range and magnitudes in mind. The grid search function builds a grid with every possible combination of these values, trains every combination on the training set and evaluates the score function on the validation set. Depending on the implementation, grid search returns the best combination together with its score, the trained model, and possibly the whole grid. The returned data structure is then used by the developer to decide whether to test on the testing set, or to refine the grid and continue the search.

As an example: a developer is searching for a precise classifier. The large dataset needs hours to train, so a grid search running over the weekend is a good use of working hours. scikit-learn's grid search works on a single estimator, which the developer specifies as a decision tree with the F0.5 score as criterion. Grid search scales exponentially with the number of hyperparameters, so the search space is kept to {"min_samples_leaf":[2,4,8], "min_samples_split":[2,8,32], "max_depth":[2,8,32]}. Grid search runs all 3x3x3 combinations on the training and validation splits and returns, for example, optimal values of {"min_samples_leaf": 4, "min_samples_split": 8, "max_depth": 32}, the trained model and its score of 0.9. That is good enough for the task, so the developer verifies the model on the testing data.

Question 8 - Cross-Validation

* What is the k-fold cross-validation training technique?
* What benefit does this technique provide for grid search when optimizing a model?

**Hint:** When explaining the k-fold cross validation technique, be sure to touch upon what 'k' is, how the dataset is split into different parts for training and testing and the number of times it is run based on the 'k' value. When thinking about how k-fold cross validation helps grid search, think about the main drawbacks of grid search which are hinged upon **using a particular subset of data for training or testing** and how k-fold cv could help alleviate that. You can refer to the [docs](http://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation) for your answer.

**Answer:** k-fold cross-validation is a technique for using the available data more efficiently. With a single fixed split, a large part of the data sits unused in the validation set. Instead, the data is split into k buckets (folds), and each fold in turn serves as the validation set while the remaining folds are used for training; this way the training and validation sets are rotated, so more of the data contributes to training. The model is evaluated by its average score over the different folds. Besides revealing selection bias caused by a particular train/validation split, this also helps to recognize overfitting. Grid search selects the best model on the given split, so overfitting to that split is likely; through the variation introduced by k-fold cross-validation, a bigger part of the data is used and the effect of overfitting to one split is reduced. (A sketch combining grid search with k-fold cross-validation follows below.)
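To make the worked example in the Question 7 answer concrete, here is a hypothetical sketch. The decision-tree parameter grid and the F0.5 scorer are the illustrative values from the answer above; the `make_classification` toy dataset is only a stand-in and is not part of this project's data.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import make_scorer, fbeta_score
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.tree import DecisionTreeClassifier

# Stand-in dataset, for illustration only
X_demo, y_demo = make_classification(n_samples=500, n_features=10, random_state=0)

# The 3x3x3 search space from the answer above
param_grid = {"min_samples_leaf": [2, 4, 8],
              "min_samples_split": [2, 8, 32],
              "max_depth": [2, 8, 32]}

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid,
    scoring=make_scorer(fbeta_score, beta=0.5),            # F0.5 score as the criterion
    cv=KFold(n_splits=5, shuffle=True, random_state=0))    # k-fold CV from Question 8
search.fit(X_demo, y_demo)

print(search.best_params_)
print(search.best_score_)
```

With k-fold cross-validation passed as `cv`, every parameter combination is scored as the average over the five folds rather than on a single fixed validation split.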
Implementation: Fitting a ModelYour final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.In addition, you will find your implementation is using `ShuffleSplit()` for an alternative form of cross-validation (see the `'cv_sets'` variable). While it is not the K-Fold cross-validation technique you describe in **Question 8**, this type of cross-validation technique is just as useful!. The `ShuffleSplit()` implementation below will create 10 (`'n_splits'`) shuffled sets, and for each shuffle, 20% (`'test_size'`) of the data will be used as the *validation set*. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.For the `fit_model` function in the code cell below, you will need to implement the following:- Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object. - Assign this object to the `'regressor'` variable.- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.- Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object. - Pass the `performance_metric` function as a parameter to the object. - Assign this scoring function to the `'scoring_fnc'` variable.- Use [`GridSearchCV`](http://scikit-learn.org/0.20/modules/generated/sklearn.model_selection.GridSearchCV.html) from `sklearn.model_selection` to create a grid search object. - Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object. - Assign the `GridSearchCV` object to the `'grid'` variable.
###Code
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth':list(range(1,11))}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search cv object --> GridSearchCV()
# Make sure to include the right parameters in the object:
# (estimator, param_grid, scoring, cv) which have values 'regressor', 'params', 'scoring_fnc', and 'cv_sets' respectively.
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
###Output
_____no_output_____
###Markdown
Making PredictionsOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model* What maximum depth does the optimal model have? How does this result compare to your guess in **Question 6**? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
###Code
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
###Output
Parameter 'max_depth' is 4 for the optimal model.
###Markdown
**Hint:** The answer comes from the output of the code snippet above.

**Answer:** The grid search finds an optimal model with a max_depth of 4, one level deeper than my best guess of 3 from the complexity chart in Question 6. For predicting housing prices, I personally prefer erring on the side of generality.

Question 10 - Predicting Selling PricesImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:

| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |

* What price would you recommend each client sell his/her home at?
* Do these prices seem reasonable given the values for the respective features?

**Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response. Of the three clients, client 3 has the biggest house, in the best public school neighborhood with the lowest poverty level; while client 2 has the smallest house, in a neighborhood with a relatively high poverty rate and not the best public schools. Run the code block below to have your optimized model make predictions for each client's home.
###Code
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
###Output
Predicted selling price for Client 1's home: $403,025.00
Predicted selling price for Client 2's home: $237,478.72
Predicted selling price for Client 3's home: $931,636.36
###Markdown
**Answer:** Based on the model I would recommend the following selling prices: $403,025.00 for Client 1's home, $237,478.72 for Client 2's home, and $931,636.36 for Client 3's home. These prices seem reasonable given the respective features: they lie within three standard deviations of the mean price, and their ordering is consistent with what the feature differences suggest (Client 3 has the largest house in the best neighborhood, Client 2 the smallest house in the poorest one). A quick numeric check of the standard-deviation claim is sketched below.

SensitivityAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. **Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with respect to the data it's trained on.**
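Before the sensitivity trials, here is a small sanity-check sketch for the "within three standard deviations" claim; it assumes `prices` is the pandas Series of home values defined in the earlier data-exploration section:

```python
import numpy as np

# The three predicted selling prices from the answer above
predicted = np.array([403025.00, 237478.72, 931636.36])

z_scores = (predicted - prices.mean()) / prices.std()
print(z_scores)  # the answer above expects every value to fall inside +/- 3
```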
###Code
vs.PredictTrials(features, prices, fit_model, client_data)
###Output
Trial 1: $391,183.33
Trial 2: $419,700.00
Trial 3: $415,800.00
Trial 4: $420,622.22
Trial 5: $418,377.27
Trial 6: $411,931.58
Trial 7: $399,663.16
Trial 8: $407,232.00
Trial 9: $351,577.61
Trial 10: $413,700.00
Range in prices: $69,044.61
|
0_HelloWorld.ipynb
|
###Markdown
HelloWorld
###Code
import pandas as pd
pd.__version__
###Output
_____no_output_____
###Markdown
Variables and operations
###Code
a = 2
b = a
b += 10
a
b
s = "Hello World!"
print(s)
s[-1]
###Output
_____no_output_____
###Markdown
Data structures
###Code
l = [1,2,3]
d = {1:"one",2:"two",3:"three"}
l.append(4)
l
d.update({4:"four"})
d
l[2]
# comprehensions
[x*2 for x in l]
# operations on lists
l*2
# arrays
import numpy as np
v = np.array(l)
v
# broadcasting
v*2
###Output
_____no_output_____
###Markdown
Looping
###Code
for x in l:
print(x/2)
n = 0
while n < len(l):
print(l[n])
n += 1
sorted(l, reverse=True)
for n,x in enumerate(l):
if n < 2:
print(x)
###Output
1
2
###Markdown
Functions
###Code
def power(x,p=2):
return x**p
power(2)
power(2,3)
###Output
_____no_output_____
###Markdown
Exercises* Create a list comprehension with an if condition inside.* Create a function that brings a string to lowercase and removes its punctuation. (One possible solution is sketched at the end of this notebook.)
###Code
# your code here
###Output
_____no_output_____
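For reference, one possible solution sketch for the two exercises above (one of many valid answers); it reuses the list `l` defined earlier in this notebook:

```python
import string

# List comprehension with an if condition: keep only the even numbers, doubled
evens_doubled = [x * 2 for x in l if x % 2 == 0]
print(evens_doubled)

def normalize(s):
    """Lowercase a string and strip its punctuation."""
    return s.lower().translate(str.maketrans('', '', string.punctuation))

print(normalize("Hello World!"))  # -> 'hello world'
```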
|
00_section_features_evaluation.ipynb
|
###Markdown
**Section:** Features evaluation
###Code
import os
import itertools as it
import warnings
import datetime
import numpy as np
import pandas as pd
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import cm
import seaborn as sns
import joblib
import pathlib
from sklearn.ensemble import RandomForestClassifier
from sklearn.exceptions import DataConversionWarning
import tqdm
from libs.container import Container
from libs.nearest import nearest
from libs.experiment import WithAnotherExperiment, roc, metrics
from libs.precstar import prec_star
warnings.simplefilter("ignore", category=DataConversionWarning)
PATH = pathlib.Path(os.path.abspath(os.path.dirname("")))
DATA_PATH = PATH / "_data"
COLUMNS_NO_FEATURES = ['id', 'tile', 'cnt', 'ra_k', 'dec_k', 'vs_type', 'vs_catalog', 'cls']
sample = pd.read_pickle("bkp/s20k_scaled.pkl.bz2")
# the features
X_columns = [c for c in sample.columns if c not in COLUMNS_NO_FEATURES]
y_column = "cls"
sample[X_columns] = sample[X_columns].astype(np.float32)
data = Container({k: v for k, v in sample.groupby("tile") if k in ["b234", "b360", "b278", "b261"]})
del sample
data
def make_clf(k, df, X_columns):
X_train = df[X_columns].values
y_train = df.cls.values
clf = RandomForestClassifier(n_estimators=500, criterion="entropy")
clf.fit(X_train, y_train)
return k, clf
def get_clfs(data, X_columns):
print("Creating classifiers with {} features...".format(len(X_columns)))
with joblib.Parallel(n_jobs=-1) as jobs:
clfs = jobs(
joblib.delayed(make_clf)(k, d, X_columns)
for k, d in sorted(tqdm.tqdm(data.items())))
return Container(clfs)
def get_combs(data, X_columns):
combs = []
clfs = get_clfs(data, X_columns)
for train_name, clf in clfs.items():
for test_name in clfs.keys():
if train_name != test_name:
test_sample = data[test_name]
comb = Container({
"idx": len(combs),
"train_name": train_name, "clf": clf,
"test_name": test_name, "test_sample": test_sample,
"X_columns": X_columns, "y_column": y_column})
combs.append(comb)
return combs
def execute_clf(idx, train_name, clf, test_name, test_sample, X_columns, y_column):
X_test = test_sample[X_columns].values
y_test = test_sample[y_column].values
predictions = clf.predict(X_test)
probabilities = clf.predict_proba(X_test)
fpr, tpr, thresholds = metrics.roc_curve(
y_test, 1.-probabilities[:,0], pos_label=1)
prec_rec_curve = metrics.precision_recall_curve(
y_test, 1.- probabilities[:,0], pos_label=1)
roc_auc = metrics.auc(fpr, tpr)
result = Container({
"idx": idx,
"train_name": train_name,
"test_name": test_name,
'fpr': fpr,
'tpr': tpr,
'thresh': thresholds,
'roc_auc': roc_auc,
'prec_rec_curve': prec_rec_curve,
'real_cls': y_test,
'predictions': predictions,
'probabilities': probabilities,
'confusion_matrix': metrics.confusion_matrix(y_test, predictions)})
return result
def train_and_run(data, X_columns):
combs = get_combs(data, X_columns)
print("Combinaciones: {}".format(len(combs)))
print("Launching classifiers for {} features...".format(len(X_columns)))
with joblib.Parallel(n_jobs=-1) as jobs:
results = jobs(
joblib.delayed(execute_clf)(**comb) for comb in tqdm.tqdm(combs))
return results
period_X_columns = [c for c in X_columns if c.startswith("Freq") or c in ["PeriodLS", "Period_fit", "ppmb", "Psi_CS", "Psi_eta"]]
extintion_X_columns = [c for c in X_columns if c.startswith("n09_") or c.startswith("c89_")]
mag_X_columns = [c for c in X_columns if c not in (period_X_columns + extintion_X_columns)]
columns_combs = {
"All Features": X_columns,
"Magnitude + Period": mag_X_columns + period_X_columns,
"Magnitude + Extinction": mag_X_columns + extintion_X_columns,
"Period + Extinction": period_X_columns + extintion_X_columns,
}
fname = "paper_bk/00_all_results.pkl.bz2"
if os.path.exists(fname):
all_results = joblib.load(fname)
else:
all_results = {}
for k, columns in columns_combs.items():
print(f"----- {k} -----")
all_results[k] = train_and_run(data, columns)
joblib.dump(all_results, fname, compress=3)
###Output
_____no_output_____
###Markdown
Analysis
###Code
if not os.path.exists("plots/s_features/"):
os.makedirs("plots/s_features/")
plt.rcParams.update({'font.size': 10})
def as_df(data):
rows, tiles = [], sorted(list(data.keys()))
for rname in tiles:
row = data[rname].copy()
row.update({"Train": rname})
rows.append(row)
df = pd.DataFrame(rows)
df = df.set_index("Train")[tiles]
return df
def heatmap(ax, results, fp, show_recall=True):
cmap = sns.cm.rocket
fix_recall, fix_precs = {}, {}
for r in results:
train_name, test_name = r["train_name"], r["test_name"]
if train_name not in fix_recall:
fix_recall[train_name] = {}
fix_precs[train_name] = {}
precs, recalls, probs = r.prec_rec_curve
idx = nearest(recalls, fp)
fix_recall[train_name][test_name] = recalls[idx]
fix_precs[train_name][test_name] = precs[idx]
fix_precs = as_df(fix_precs)
fix_recall = as_df(fix_recall)
sns.heatmap(fix_precs, annot=True, fmt='.3f', linewidths=.5, ax=ax[0], cmap=cmap, center=.5, vmin=0., vmax=1.)
ax[0].set_xlabel("Test")
ax[0].set_title(u"Precision")
sns.heatmap(fix_recall, annot=True, fmt='.3f', linewidths=.5, ax=ax[1], cmap=cmap, center=.5, vmin=0., vmax=1.)
ax[1].set_xlabel("Test")
ax[1].set_title(u"Recall")
fig, axes = plt.subplots(4, 2, figsize=(8, 2.5*4))
for rname, axs in zip(all_results, axes):
heatmap(axs, all_results[rname], 0.90)
axs[0].set_ylabel(f"{rname}\n{axs[0].get_ylabel()}")
fig.tight_layout()
fig.savefig("plots/s_features/section_features_prec_rec_heatmap.pdf")
plt.rcdefaults()
SMALL_SIZE = 14
MEDIUM_SIZE = 18
BIGGER_SIZE = 22
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=15.5) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
def plot_prec_roc_curve(ax, train_name, test_name, all_results):
for rname, results in all_results.items():
for r in results:
if r.test_name == test_name and r.train_name == train_name:
ax.plot(r.prec_rec_curve[1], r.prec_rec_curve[0], label=rname)
break
ax.set_title("Train {} - Test {}".format(train_name, test_name))
ax.set_xlabel("Recall")
ax.set_ylabel("Precision")
fig, axes = plt.subplots(4, 3, figsize=(18, 20))
axes = list(it.chain(*axes))
last = axes[-1]
axes = iter(axes)
for train_name in data.keys():
for test_name in data.keys():
if train_name == test_name:
continue
ax = next(axes)
plot_prec_roc_curve(ax, train_name, test_name, all_results)
if ax == last:
ax.legend(loc='lower left')
fig.tight_layout()
fig.savefig("plots/s_features/section_features_prec_rec_curve.pdf")
fig, axes = plt.subplots(1, 3, figsize=(18, 5))
last = axes[-1]
axes = iter(axes)
for train_name in data.keys():
if train_name == "b278":
for test_name in data.keys():
if train_name == test_name:
continue
ax = next(axes)
plot_prec_roc_curve(ax, train_name, test_name, all_results)
if ax == last:
ax.legend(loc='lower left')
fig.tight_layout()
fig.savefig("plots/s_features/section_features_body_curve.pdf")
import datetime
datetime.datetime.now()
###Output
_____no_output_____
|
docs/source/tutorials/hpo_quickstart_pytorch/model.ipynb
|
###Markdown
Port PyTorch Quickstart to NNIThis is a modified version of `PyTorch quickstart`_.It can be run directly and will have the exact same result as original version.Furthermore, it enables the ability of auto tuning with an NNI *experiment*, which will be detailed later.It is recommended to run this script directly first to verify the environment.There are 2 key differences from the original version:1. In `Get optimized hyperparameters`_ part, it receives generated hyperparameters.2. In `Train model and report accuracy`_ part, it reports accuracy metrics to NNI.
###Code
import nni
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
###Output
_____no_output_____
###Markdown
Hyperparameters to be tunedThese are the hyperparameters that will be tuned.
###Code
params = {
'features': 512,
'lr': 0.001,
'momentum': 0,
}
###Output
_____no_output_____
###Markdown
Get optimized hyperparametersIf run directly, :func:`nni.get_next_parameter` is a no-op and returns an empty dict.But with an NNI *experiment*, it will receive optimized hyperparameters from tuning algorithm.
###Code
optimized_params = nni.get_next_parameter()
params.update(optimized_params)
print(params)
###Output
_____no_output_____
###Markdown
Load dataset
###Code
training_data = datasets.FashionMNIST(root="data", train=True, download=True, transform=ToTensor())
test_data = datasets.FashionMNIST(root="data", train=False, download=True, transform=ToTensor())
batch_size = 64
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Build model with hyperparameters
###Code
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, params['features']),
nn.ReLU(),
nn.Linear(params['features'], params['features']),
nn.ReLU(),
nn.Linear(params['features'], 10)
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=params['lr'], momentum=params['momentum'])
###Output
_____no_output_____
###Markdown
Define train and test
###Code
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
pred = model(X)
loss = loss_fn(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
return correct
###Output
_____no_output_____
###Markdown
Train model and report accuracyReport accuracy metrics to NNI so the tuning algorithm can suggest better hyperparameters.
###Code
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataloader, model, loss_fn, optimizer)
accuracy = test(test_dataloader, model, loss_fn)
nni.report_intermediate_result(accuracy)
nni.report_final_result(accuracy)
###Output
_____no_output_____
|
ProjectEuler/Project Euler -- Largest prime factor.ipynb
|
###Markdown
Largest Prime FactorThe prime factors of 13195 are 5, 7, 13 and 29.What is the largest prime factor of the number 600851475143 ?
###Code
600851475143 % 2 == 0
# Recall: 2 is the only even prime, so even factors can be divided out separately
import math
def findprime(n):
    # Placeholder for the largest prime factor found so far
    maxprime = -1
    # Divide out all factors of 2 first (2 is the only even prime)
    while (n % 2) == 0:
        n = n // 2
        maxprime = 2
    # Check odd candidates from 3 up to sqrt(n) (inclusive), stepping by 2;
    # only odd numbers can divide what is left after removing the factors of 2
    for i in range(3, int(math.sqrt(n)) + 1, 2):
        # Divide out each odd factor completely
        while (n % i) == 0:
            maxprime = i
            n = n // i
    # Whatever remains greater than 2 is itself a prime factor
    if (n > 2):
        maxprime = n
    return maxprime
findprime(21)
findprime(600851475143)
###Output
_____no_output_____
|
Db2_11.5_JSON_03_Db2_ISO_JSON_Functions.ipynb
|
###Markdown
Db2 JSON Function OverviewUpdated: 2019-10-03

Db2 JSON FunctionsDb2 Version 11.1 Fix pack 4 introduced a subset of the JSON SQL functions defined by ISO and that set is shown in the table below.

| Function | Description |
|:---------|:------------|
| `BSON_TO_JSON` | Convert BSON formatted documents into JSON strings |
| `JSON_TO_BSON` | Convert JSON strings into a BSON document format |
| `JSON_ARRAY` | Creates a JSON array from input key value pairs |
| `JSON_OBJECT` | Creates a JSON object from input key value pairs |
| `JSON_VALUE` | Extract an SQL scalar value from a JSON object |
| `JSON_QUERY` | Extract a JSON object from a JSON object |
| `JSON_TABLE` | Creates a SQL table from a JSON object |
| `JSON_EXISTS` | Determines whether a JSON object contains the desired JSON value |

These functions are all part of the SYSIBM schema, so a user does not require permissions in order to use them for development or general usage. The functions can be categorized into three broad categories:

Conversion functionsThe `BSON_TO_JSON` and `JSON_TO_BSON` functions are used to convert JSON character data into the binary BSON format and vice-versa. Conversion functions are optional and are discussed in the section below. These functions are not actually part of the ISO specifications and are provided simply for your convenience.

Retrieval functionsThe `JSON_VALUE` and `JSON_QUERY` functions are used to retrieve portions of a document as SQL or JSON scalar values, while `JSON_TABLE` can be used to format JSON documents into a table of rows and columns. The `JSON_EXISTS` function can be used in conjunction with the retrieval functions to check for the existence of a field.

Publishing RoutinesThe `JSON_ARRAY` and `JSON_OBJECT` functions are used to create JSON objects from relational data.

Common Db2 JSON ParametersA majority of the Db2 ISO JSON functions depend on two parameters that are supplied at the beginning of a function. These parameters are:
* JSON Expression
* JSON Path Expression

JSON ExpressionThe JSON expression refers to either a column name in a table where the JSON document is stored (either in JSON or BSON format), a JSON or BSON literal string, or a SQL variable containing a JSON or BSON string. The examples below illustrate these options.
* A column name within a Table
```
JSON_VALUE(CUSTOMER.JSON_DOC,…)
```
* Using a character string as the argument
```
JSON_VALUE('{"first":"Thomas","last":"Hronis"}',…)
```
* Using an SQL variable
```
CREATE VARIABLE EXPR VARCHAR(256) DEFAULT('{"first":"Thomas"}')
JSON_VALUE(EXPR,…)
```
The JSON expression can also include a modifier of `FORMAT JSON` or `FORMAT BSON`. The `FORMAT` clause is optional and by default the Db2 functions use the data type of the supplied value to determine how to interpret the contents. In the event that you need to override how the JSON field is interpreted, you must use the `FORMAT` option.

JSON Path ExpressionA JSON path expression is used to navigate to individual values, objects, arrays, or allow for multiple matches within a JSON document. The JSON path expression is based on the syntax that is fully described in the notebook on JSON Path Expressions. The following list gives a summary of how a path expression is created but the details of how the matches occur are documented in the next chapter.
* The top of any path expression is the anchor symbol (`$`)
* Traverse to specific objects at different levels by using the dot operator (`.`)
* Use square brackets `[]` to refer to items in an array with the first item starting at position zero (i.e. the first element in an array is accessed as `arrayname[0]`)
* Use the backslash `\` as an escape character when key names include any of the JSON path characters `(.,*,$,[,])`
* Use the asterisk (`*`) to match any object at the current level
* Use the asterisk (`*`) to match all objects in an array or retrieve only the value fields from an object

The path expression can have an optional name represented by the `AS path-name` clause. The `AS` clause is included for compatibility with the ISO SQL standard but currently does not have any effect on the Db2 JSON functions.

Sample JSON FunctionsThe following SQL demonstrates some of the JSON functions that are available in Db2. The other notebooks will go into more details of each one of these functions.

Load Db2 Extensions and Connect to the DatabaseThe `connection` notebook contains the `CONNECT` statement which allows access to the `SAMPLE` database. If you need to modify the connection information, edit the `connection.ipynb` notebook.
###Code
import os.path
if (os.path.exists('db2.ipynb')):
%run db2.ipynb
%run connection.ipynb
else:
%run ../db2.ipynb
%run ../connection.ipynb
###Output
_____no_output_____
###Markdown
This statement will create a variable named customer which will be used for some of the examples.
###Code
customer = {
"customerid": 100000,
"identity":
{
"firstname": "Jacob",
"lastname": "Hines",
"birthdate": "1982-09-18"
},
"contact":
{
"street": "Main Street North",
"city": "Amherst",
"state": "OH",
"zipcode": "44001",
"email": "[email protected]",
"phone": "813-689-8309"
},
"payment":
{
"card_type": "MCCD",
"card_no": "4742-3005-2829-9227"
},
"purchases":
[
{
"tx_date": "2018-02-14",
"tx_no": 157972,
"product_id": 1860,
"product": "Ugliest Snow Blower",
"quantity": 1,
"item_cost": 51.86
}
]
}
###Output
_____no_output_____
###Markdown
JSON_EXISTSCheck to see if the customer made a __`purchase`__.
###Code
%sql VALUES JSON_EXISTS(:customer,'$.purchases')
###Output
_____no_output_____
###Markdown
JSON_VALUERetrieve the __`customerid`__ field.
###Code
%sql VALUES JSON_VALUE(:customer,'$.customerid')
###Output
_____no_output_____
###Markdown
JSON_QUERYRetrieve the __`identity`__ structure.
###Code
%sql -j VALUES JSON_QUERY(:customer,'$.identity')
###Output
_____no_output_____
###Markdown
JSON_TABLERetrieve all of the personal information as a table.
###Code
%%sql
WITH CUSTOMER(INFO) AS (VALUES :customer)
SELECT T.* FROM CUSTOMER,
JSON_TABLE(INFO, 'strict $'
COLUMNS(
FIRST_NAME VARCHAR(20) PATH '$.identity.firstname',
LAST_NAME VARCHAR(20) PATH '$.identity.lastname',
BIRTHDATE DATE PATH '$.identity.birthdate')
ERROR ON ERROR) AS T;
###Output
_____no_output_____
###Markdown
JSON_OBJECTPublish one record as a JSON object.
###Code
%%sql -j
WITH CUSTOMER(CUSTNO, FIRSTNAME, LASTNAME, BIRTHDATE, INCOME) AS
(
VALUES
(1, 'George', 'Baklarz', '1999-01-01', 50000)
)
SELECT
JSON_OBJECT (
KEY 'customer' VALUE JSON_OBJECT
(
KEY 'id' VALUE CUSTNO,
KEY 'name' VALUE JSON_OBJECT
(
KEY 'first' VALUE FIRSTNAME,
KEY 'last' VALUE LASTNAME
) FORMAT JSON,
KEY 'birthdate' VALUE BIRTHDATE,
KEY 'income' VALUE INCOME
) FORMAT JSON
)
FROM CUSTOMER
###Output
_____no_output_____
###Markdown
JSON_ARRAYPublish one record as a JSON array object.
###Code
%%sql -j
WITH CUSTOMERS(CUSTNO) AS
(
VALUES
10, 20, 33, 55, 77
)
VALUES
JSON_OBJECT (
KEY 'customers' VALUE JSON_ARRAY (SELECT * FROM CUSTOMERS) FORMAT JSON
)
###Output
_____no_output_____
|
keras/cifar10-classification/cifar10_mlp.ipynb
|
###Markdown
Artificial Intelligence Nanodegree Convolutional Neural Networks---In this notebook, we train an MLP to classify images from the CIFAR-10 database. 1. Load CIFAR-10 Database
###Code
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
###Output
Using TensorFlow backend.
###Markdown
2. Visualize the First 36 Training Images
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
###Output
_____no_output_____
###Markdown
3. Rescale the Images by Dividing Every Pixel in Every Image by 255
###Code
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
###Output
_____no_output_____
###Markdown
4. Break Dataset into Training, Testing, and Validation Sets
###Code
from keras.utils import np_utils
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
###Output
x_train shape: (45000, 32, 32, 3)
45000 train samples
10000 test samples
5000 validation samples
###Markdown
5. Define the Model Architecture
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
# define the model
model = Sequential()
model.add(Flatten(input_shape = x_train.shape[1:]))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_1 (Flatten) (None, 3072) 0
_________________________________________________________________
dense_1 (Dense) (None, 1000) 3073000
_________________________________________________________________
dropout_1 (Dropout) (None, 1000) 0
_________________________________________________________________
dense_2 (Dense) (None, 512) 512512
_________________________________________________________________
dropout_2 (Dropout) (None, 512) 0
_________________________________________________________________
dense_3 (Dense) (None, 10) 5130
=================================================================
Total params: 3,590,642.0
Trainable params: 3,590,642.0
Non-trainable params: 0.0
_________________________________________________________________
###Markdown
6. Compile the Model
###Code
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
7. Train the Model
###Code
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='MLP.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=20,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/20
Epoch 00000: val_loss improved from inf to 1.91876, saving model to MLP.weights.best.hdf5
52s - loss: 3.2886 - acc: 0.2462 - val_loss: 1.9188 - val_acc: 0.3000
Epoch 2/20
Epoch 00001: val_loss did not improve
54s - loss: 1.8626 - acc: 0.3242 - val_loss: 1.9204 - val_acc: 0.3082
Epoch 3/20
Epoch 00002: val_loss improved from 1.91876 to 1.78092, saving model to MLP.weights.best.hdf5
52s - loss: 1.8230 - acc: 0.3438 - val_loss: 1.7809 - val_acc: 0.3588
Epoch 4/20
Epoch 00003: val_loss improved from 1.78092 to 1.72077, saving model to MLP.weights.best.hdf5
54s - loss: 1.7887 - acc: 0.3575 - val_loss: 1.7208 - val_acc: 0.3640
Epoch 5/20
Epoch 00004: val_loss did not improve
55s - loss: 1.7777 - acc: 0.3651 - val_loss: 1.7357 - val_acc: 0.3500
Epoch 6/20
Epoch 00005: val_loss improved from 1.72077 to 1.71538, saving model to MLP.weights.best.hdf5
53s - loss: 1.7641 - acc: 0.3675 - val_loss: 1.7154 - val_acc: 0.3818
Epoch 7/20
Epoch 00006: val_loss did not improve
52s - loss: 1.7616 - acc: 0.3700 - val_loss: 1.7708 - val_acc: 0.3670
Epoch 8/20
Epoch 00007: val_loss did not improve
52s - loss: 1.7641 - acc: 0.3729 - val_loss: 1.7766 - val_acc: 0.3738
Epoch 9/20
Epoch 00008: val_loss improved from 1.71538 to 1.70597, saving model to MLP.weights.best.hdf5
52s - loss: 1.7709 - acc: 0.3672 - val_loss: 1.7060 - val_acc: 0.3840
Epoch 10/20
Epoch 00009: val_loss did not improve
51s - loss: 1.7635 - acc: 0.3744 - val_loss: 1.8535 - val_acc: 0.3260
Epoch 11/20
Epoch 00010: val_loss did not improve
54s - loss: 1.7551 - acc: 0.3780 - val_loss: 1.7249 - val_acc: 0.3758
Epoch 12/20
Epoch 00011: val_loss did not improve
55s - loss: 1.7617 - acc: 0.3757 - val_loss: 1.7308 - val_acc: 0.3660
Epoch 13/20
Epoch 00012: val_loss did not improve
52s - loss: 1.7694 - acc: 0.3745 - val_loss: 1.9086 - val_acc: 0.3150
Epoch 14/20
Epoch 00013: val_loss did not improve
53s - loss: 1.7654 - acc: 0.3711 - val_loss: 1.7625 - val_acc: 0.3684
Epoch 15/20
Epoch 00014: val_loss did not improve
52s - loss: 1.7691 - acc: 0.3726 - val_loss: 1.7753 - val_acc: 0.3778
Epoch 16/20
Epoch 00015: val_loss did not improve
52s - loss: 1.7780 - acc: 0.3688 - val_loss: 1.7723 - val_acc: 0.3592
Epoch 17/20
Epoch 00016: val_loss did not improve
52s - loss: 1.7757 - acc: 0.3675 - val_loss: 1.7359 - val_acc: 0.3644
Epoch 18/20
Epoch 00017: val_loss did not improve
54s - loss: 1.7868 - acc: 0.3676 - val_loss: 1.7861 - val_acc: 0.3538
Epoch 19/20
Epoch 00018: val_loss did not improve
53s - loss: 1.7797 - acc: 0.3717 - val_loss: 1.7431 - val_acc: 0.3698
Epoch 20/20
Epoch 00019: val_loss improved from 1.70597 to 1.70173, saving model to MLP.weights.best.hdf5
52s - loss: 1.7857 - acc: 0.3670 - val_loss: 1.7017 - val_acc: 0.3926
###Markdown
8. Load the Model with the Best Classification Accuracy on the Validation Set
###Code
# load the weights that yielded the best validation accuracy
model.load_weights('MLP.weights.best.hdf5')
###Output
_____no_output_____
###Markdown
9. Calculate Classification Accuracy on Test Set
###Code
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
###Output
Test accuracy: 0.4
|
Machine Learning Summer School 2019 (Moscow, Russia)/tutorials/geometric_techniques_in_ml/riemannian_opt_for_ml_task.ipynb
|
###Markdown
This is a tutorial notebook on Riemannian optimization for machine learning, prepared for the Machine Learning Summer School 2019 (MLSS-2019, http://mlss2019.skoltech.ru) in Moscow, Russia, Skoltech (http://skoltech.ru).Copyright 2019 by Alexey Artemov and ADASE 3DDL Team. Special thanks to Alexey Zaytsev for a valuable contribution. Riemannian optimization for machine learning The purpose of this tutorial is to give a gentle introduction into the practice of Riemannian optimization. You will learn to: 1. Reformulate familiar optimization problems in terms of Riemannian optimization on manifolds. 2. Use a Riemannian optimization library `pymanopt`. Index 1. [Recap and the introduction: linear regression](Recap-and-the-introduction:-linear-regression).2. [Introduction into ManOpt and pymanopt](Intoduction-into-ManOpt-package-for-Riemannian-optimization).3. [Learning the shape space of facial landmarks](Learning-the-shape-space-of-facial-landmarks): - [Problem formulation and general reference](Problem-formulation-and-general-reference). - [Procrustes analysis for the alignment of facial landmarks](Procrustes-analysis-for-the-alignment-of-facial-landmarks). - [PCA for learning the shape space](PCA-for-learning-the-shape-space).4. [Analysing the shape space of facial landmarks via MDS](Analysing-the-shape-space-of-facial-landmarks-via-MDS).5. [Learning the Gaussian mixture models for word embeddings](Learning-the-Gaussian-mixture-models-for-word-embeddings). Install the necessary libraries
###Code
!pip install --upgrade git+https://github.com/mlss-skoltech/tutorials.git#subdirectory=geometric_techniques_in_ML
!pip install pymanopt autograd
!pip install scipy==1.2.1 -U
import pkg_resources
DATA_PATH = pkg_resources.resource_filename('riemannianoptimization', 'data/')
###Output
_____no_output_____
###Markdown
Recap and the introduction: linear regression _NB: This section of the notebook is for illustrative purposes only, no code input required_ Recall the maths behind it: We're commonly working with a problem of finding the weights $w \in \mathbb{R}^n$ such that$$||\mathbf{y} - \mathbf{X} \mathbf{w}||^2_2 \to \min_{\mathbf{w}},$$with $\mathbf{x}_i \in \mathbb{R}^n$, i.e. features are vectors of numbers, and $y_i \in \mathbb{R}$.$\mathbf{X} \in \mathbb{R}^{\ell \times n}$ is a matrix with $\ell$ objects and $n$ features.A commonly computed least squares solution is of the form: $$\mathbf{w} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}.$$We could account for the non-zero mean case ($\mathrm{E} \mathbf{y} \neq 0$) by either adding and subtracting the mean, or by using an additional feature in $\mathbf{X}$ set to all ones.The solution could simply be computed via:
###Code
def compute_weights_multivariate(X, y):
"""
Given feature array X [n_samples, 1], target vector y [n_samples],
compute the optimal least squares solution using the formulae above.
For brevity, no bias term!
"""
# Compute the "inverting operator"
R = np.dot(
np.linalg.inv(
np.dot(X.T, X)
), X.T
)
# Compute the actual solution
w = np.dot(R, y)
return w
###Output
_____no_output_____
###Markdown
Recall the gradient descent solution: Let us view$$L(\mathbf{y}, \mathbf{X} \mathbf{w}) = \frac{1}{\ell} ||\mathbf{y} - \mathbf{X} \mathbf{w}||^2_2 \to \min_{\mathbf{w}},$$as a pure unconstrained optimization problem of the type $$f(\mathbf{w}) \to \min\limits_{\mathbf{w} \in \mathbb{R}^n}$$with $f(\mathbf{w}) \equiv L(\mathbf{y}, \mathbf{X} \mathbf{w})$.To use the gradient descent, we must * initialize the weights $\mathbf{w}$ somehow,* find a way of computing the __gradient__ of our quality measure $L(\mathbf{y}, \widehat{\mathbf{y}})$ w.r.t. $\mathbf{w}$,* starting from the initialization, iteratively update weights using the gradient descent: $$\mathbf{w}^{(i+1)} \leftarrow \mathbf{w}^{(i)} - \gamma \nabla_{\mathbf{w}} L,$$where $\gamma$ is the step size.Since we choose $L(\mathbf{y}, \widehat{\mathbf{y}}) \equiv \frac{1}{\ell} ||\mathbf{y} - \mathbf{X} \mathbf{w} ||^2$, our gradient is $\nabla_{\mathbf{w}} L = \frac{2}{\ell} \mathbf{X}^T (\mathbf{X} \mathbf{w} - \mathbf{y})$, which is exactly what `compute_gradient` below evaluates.The solution is coded by:
###Code
from sklearn.metrics import mean_squared_error
def compute_gradient(X, y, w):
"""
Computes the gradient of MSE loss
for multivariate linear regression of X onto y
w.r.t. w, evaluated at the current w.
"""
prediction = np.dot(X, w) # [n_objects, n_features] * [n_features] -> [n_objects]
error = prediction - y # [n_objects]
return 2 * np.dot(error, X) / len(error) # [n_objects] * [n_objects, n_features] -> [n_features]
def gradient_descent(X, y, w_init, iterations=1, gamma=0.01):
"""
Performs the required number of iterations of gradient descent.
Parameters:
X [n_objects, n_features]: matrix of featues
y [n_objects]: responce (dependent) variable
w_init: the value of w used as an initializer
iterations: number of steps for gradient descent to compute
gamma: learning rate (gradient multiplier)
"""
costs, grads, ws = [], [], []
w = w_init
for i in range(iterations):
# Compute our cost in current point (before the gradient step)
costs.append(mean_squared_error(y, np.dot(X, w)) / len(y))
# Remember our weights w in current point
ws.append(w)
# Compute gradient for w
w_grad = compute_gradient(X, y, w)
grads.append(w_grad)
# Update the current weight w using the formula above (see comments)
w = w - gamma * w_grad
# record the last weight
ws.append(w)
return costs, grads, ws
###Output
_____no_output_____
###Markdown
Introduction into ManOpt package for Riemannian optimization `ManOpt` and `pymanopt` The Matlab library `ManOpt` (https://www.manopt.org) and its Python version `pymanopt` (http://pymanopt.github.io) are versatile toolboxes for optimization on manifolds. The two libraries are built so that they separate the _manifolds_, the _solvers_ and the _problem descriptions_. For basic use, one only needs to: * pick a manifold from the library, * describe the cost function (and possible derivatives) on this manifold, and * pass it on to a solver. _NB: The purpose of the following is to get familiar with pymanopt and to serve as a reference point when coding your own optimization problems._

To start working with `pymanopt`, you'll need the following:

1. Import the necessary backend for automatic differentiation
```python
import autograd.numpy as np
```
but theano and TensorFlow backends are supported, too. We will also require importing `pymanopt` itself, along with the necessary submodules:
```python
import pymanopt as opt
import pymanopt.solvers as solvers
import pymanopt.manifolds as manifolds
```

2. Define (or rather, select) the manifold of interest. `pymanopt` provides a [large number](https://pymanopt.github.io/doc/manifolds) of predefined manifold classes (however, a lot less than the [original ManOpt Matlab library](https://www.manopt.org/tutorial.htmlmanifolds)). E.g., to instantiate a manifold $V_{2}(\mathbb {R}^{5}) = \{X \in \mathbb{R}^{5 \times 2} : X^TX = I_2\}$ of orthogonal projection matrices from $\mathbb{R}^5$ to $\mathbb{R}^2$ you will write:
```python
manifold = manifolds.Stiefel(5, 2)
```
Available manifolds include [Stiefel](https://pymanopt.github.io/doc/module-pymanopt.manifolds.stiefel) ([wiki](https://en.wikipedia.org/wiki/Stiefel_manifold)), Rotations or SO(n) ([wiki](https://en.wikipedia.org/wiki/Orthogonal_group)), [Euclidean](https://pymanopt.github.io/doc/module-pymanopt.manifolds.euclidean), [Positive Definite](https://pymanopt.github.io/doc/pymanopt.manifolds.psd.PositiveDefinite) ([wiki](https://en.wikipedia.org/wiki/Definiteness_of_a_matrix)), and [Product](https://pymanopt.github.io/doc/pymanopt.manifolds.product.Product), among many others.

3. Define the **scalar** cost function (here using `autograd.numpy`) to be minimized by the solver:
```python
def cost(X):
    return np.sum(X)
```
Note that the scalar `cost` python function **will have access to objects defined elsewhere in code** (which allows accessing $X$ and $y$ for optimization).

4. Instantiate the `pymanopt` problem
```python
problem = opt.Problem(manifold=manifold, cost=cost, verbosity=2)
```
The keyword `verbosity` controls how much output you get from the system (smaller values mean less output).

5. Instantiate a `pymanopt` solver, e.g.:
```python
solver = solvers.SteepestDescent()
```
The library has a lot of solvers implemented, including SteepestDescent, TrustRegions, ConjugateGradient, and NelderMead objects.

6. Perform the optimization in a single blocking function call, obtaining the optimal value of the desired quantity:
```python
Xopt = solver.solve(problem)
```

Linear regression using `pymanopt`_The purpose of this section is to get the first hands-on experience using `pymanopt`. We compare its output with hand-coded gradient descent and the analytic solution._
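Before turning to the regression exercise below, here is the six-step recipe above assembled into one minimal runnable sketch; it simply reuses the toy cost and the `Stiefel(5, 2)` manifold from the steps and is not specific to the regression task:

```python
import autograd.numpy as np
import pymanopt as opt
import pymanopt.solvers as solvers
import pymanopt.manifolds as manifolds

# Step 2: 5x2 matrices X with orthonormal columns (X^T X = I_2)
manifold = manifolds.Stiefel(5, 2)

# Step 3: scalar cost, differentiated automatically via autograd
def cost(X):
    return np.sum(X)

# Steps 4-6: problem, solver, and the blocking solve() call
problem = opt.Problem(manifold=manifold, cost=cost, verbosity=1)
solver = solvers.SteepestDescent()
Xopt = solver.solve(problem)

print(Xopt.shape)  # (5, 2): a point on the Stiefel manifold
```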
###Code
import pymanopt as opt
import pymanopt.solvers as solvers
import pymanopt.manifolds as manifolds
# Import the differentiable numpy -- this is crucial,
# as `np` conventionally imported will not provide gradients.
# See more at https://github.com/HIPS/autograd
import autograd.numpy as np
# Generate random data
X = np.random.randn(200, 3)
y = np.random.randint(-5, 5, (200))
###Output
_____no_output_____
###Markdown
**Exercise:** program the linear regression using manifold optimization**Hint:** create `Euclidean` manifold and the `SteepestDescent` solver. **Hint:** write down the formula for the cost. Remember it has the access to `X` and `y` defined above.
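One way the blanks in the cell below could be filled in is sketched here (this is only one possible solution, assuming `X`, `y` and the `opt`/`solvers`/`manifolds` imports from the cells above are in scope):

```python
# A possible solution sketch for the exercise (not the only one).
def cost(w):
    # squared error between predictions X @ w and targets y
    return np.sum((np.dot(X, w) - y) ** 2)

solver = solvers.SteepestDescent()       # simple gradient descent on the manifold
manifold = manifolds.Euclidean(3)        # w lives in R^3
problem = opt.Problem(manifold=manifold, cost=cost)
w_opt = solver.solve(problem)
```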
###Code
import autograd.numpy as np # import again to avoid errors
# Cost function is the squared error. Remember, cost is a scalar value!
def cost(w):
return # <your code here>
# A simplest possible solver (gradient descent)
solver = # <your code here>
# R^3
manifold = # <your code here>
# Solve the problem with pymanopt
problem = opt.Problem(manifold=manifold, cost=cost)
wopt = solver.solve(problem)
print('The following regression weights were found to minimise the '
'squared error:')
print(wopt)
###Output
_____no_output_____
###Markdown
Compute the linear regression solution via numerical optimization using steepest descent over the Euclidean manifold $\mathbb{R}^3$, _only using our handcrafted gradient descent_.
###Code
gd_params = dict(w_init=np.random.rand(X.shape[1]),
iterations=20,
gamma=0.1)
costs, grads, ws = gradient_descent(X, y, **gd_params)
print(" iter\t\t cost val\t grad. norm")
for iteration, (cost, grad, w) in enumerate(zip(costs, grads, ws)):
gradnorm = np.linalg.norm(grad)
print("%5d\t%+.16e\t%.8e" % (iteration, cost, gradnorm))
print('\nThe following regression weights were found to minimise the '
'squared error:')
print(w)
###Output
_____no_output_____
###Markdown
Finally, use the analytic formula.
###Code
print('The closed form solution to this regression problem is:')
compute_weights_multivariate(X, y)
###Output
_____no_output_____
###Markdown
Recall that you can always look what's inside by either reading the [developer docs](https://pymanopt.github.io/doc/) or simply examining the code via typing:```pythonsolvers.SteepestDescent??```Compare the code there with our hand-crafted gradient descent. Learning the shape space of facial landmarks Problem formulation and general reference In this part, we will create the shape space of facial landmarks. Building such a shape space is of great interest in computer vision area, where numerous applications such as face detection, facial pose regression, and emotion recognition depend heavily on such models. Here are the basics of what one needs to know to proceed with this tutorial.1. [Active Shape Models](https://en.wikipedia.org/wiki/Active_shape_model) are a class of statistical shape models that can iteratively deform to fit to an example of the object in a image. They are commonly build by analyzing variations in points distributions and _encode plausible variations, allowing one to discriminate them from unlikely ones_.2. One great reference for all ASMs is Tim Cootes' paper: _Cootes, T., Baldock, E. R., & Graham, J. (2000)._ [An introduction to active shape models](https://person.hst.aau.dk/lasse/teaching/IACV/doc/asm_overview.pdf). _Image processing and analysis, 223-248._ It includes motivation, math, and algorithms behind the ASM.3. Nice reference implementations of the Active Shape Model for faces include, e.g., [this Matlab code](https://github.com/johnwmillr/ActiveShapeModels) and [this one, featuring additionally dental image analysis](https://github.com/LennartCockx/Python-Active-shape-model-for-Incisor-Segmentation).4. Production libraries such as [dlib](http://dlib.net) implement their own ASMs of facial landmarks. (image taken from [Neeraj Kumar's page on LPFW](https://neerajkumar.org/databases/lfpw/))We will (1) [look at the data](Obtain-and-view-the-dataset),(2) [align shapes](Procrustes-analysis-for-the-alignment-of-facial-landmarks),and (3) [compute the shape space](PCA-for-learning-the-shape-space). Obtain and view the dataset_The goal of this section is to examine the dataset._
###Code
from riemannianoptimization.tutorial_helpers import load_data, plot_landmarks
landmarks = load_data(DATA_PATH)
###Output
_____no_output_____
###Markdown
View a random subset of the data. Run the cell below multiple times to view different subsets.You can set `draw_landmark_id` and `draw_landmarks` to 0 to turn them off.
###Code
import matplotlib.pyplot as plt
idx = np.random.choice(len(landmarks), size=6) # sample random faces
fig, axs = plt.subplots(ncols=6, nrows=1, figsize=(18, 3))
for ax, image in zip(axs, landmarks[idx]):
plot_landmarks(image, ax=ax, draw_landmark_id=1, draw_landmarks=1)
###Output
_____no_output_____
###Markdown
Procrustes analysis for the alignment of facial landmarks_The purpose of this section is to learn how to use manifold optimization for shape alignment_. One thing to note is that the landmarks are annotated in images with different resolution and are generally **misaligned**. One can easily understand this by observing landmark scatterplots. Subtracting the mean shape or standardizing the points doesn't help.
###Code
fig, (ax1, ax2, ax3) = plt.subplots(figsize=(15, 5), ncols=3)
ax1.scatter(landmarks[:, 0::2], -landmarks[:, 1::2], alpha=.01)
# compute the mean shape
mean_shape = np.mean(landmarks, axis=0)
landmarks_centered = landmarks - mean_shape
ax2.scatter(landmarks_centered[:, 0::2], -landmarks_centered[:, 1::2], alpha=.01)
# compute additionally the standard deviation in shape
std_shape = np.std(landmarks, axis=0)
landmarks_standardized = landmarks_centered / std_shape
ax3.scatter(landmarks_standardized[:, 0::2], -landmarks_standardized[:, 1::2], alpha=.01);
###Output
_____no_output_____
###Markdown
**Q:** Why such variation? Why don't we see separate clusters of "average keypoints", like an average eye1, eye2, etc.? We must _align_ shapes to a _canonical pose_ to proceed with building the ASM.This will be done in a simple way via [Procrustes analysis](https://en.wikipedia.org/wiki/Procrustes_analysis). In its simplest form, Procrustes analysis aligns each shape so that the sum of distances of each shape to the mean $D = \sum\limits_i ||\mathbf{x}_i - \mathbf{\overline{x}}||^2_2$ is minimised:1. Translate each example so that its center of gravity is at the origin.2. Choose one example as an initial estimate of the mean shape and scale.3. Record the first estimate as $\overline{x}_0$ to define the default orientation.4. Align all the shapes with the current estimate of the mean shape.5. Re-estimate the mean from aligned shapes.6. Apply constraints on scale and orientation to the current estimate of the mean by aligning it with $\overline{x}_0$ and scaling so that $|\overline{x}| = 1$.7. If not converged, return to 4.(Convergence is declared if the estimate of the mean does not change significantly after an iteration) 
###Code
# A small helper function we will need
# to center the shape at the origin and scale it to a unit norm.
def standardize(shape):
# shape must have the shape [n_landmarks, 2], e.g. [35, 2]
shape -= np.mean(shape, 0)
shape_norm = np.linalg.norm(shape)
shape /= shape_norm
return shape
# A large helper function that we will employ to align
# the *entire collection* of shapes -- skip for now.
def align_landmarks(landmarks, mean_shape=None, aligner=None, n_iterations=1):
"""
Aligns landmarks to an estimated mean shape.
In this function, `landmarks` are always assumed to be array of shape [n, 35, 2].
aligner: a function getting two arguments (mean_shape and shape), returning
the transformation from shape to mean_shape
"""
# Translate each example so that its center of gravity is at the origin.
landmarks -= np.mean(landmarks, axis=1, keepdims=True)
# Choose one example as an initial estimate of the mean shape and scale
    # so that the mean shape has unit norm, i.e. |x̄| = 1.
mean_shape = np.mean(landmarks, axis=0)
mean_shape = standardize(mean_shape)
# Record the first estimate as x0 to define the default orientation.
mean_shape_0 = mean_shape[:]
def align_to_mean(landmarks, mean_shape, aligner=None):
aligned_landmarks = []
for shape in landmarks:
shape = standardize(shape)
shape = aligner(mean_shape, shape)
aligned_landmarks.append(shape)
return np.array(aligned_landmarks)
print(" iter\t cost val.\t mean diff.")
for iteration in range(n_iterations):
# Align all the shapes with the current estimate of the mean shape.
aligned_landmarks = align_to_mean(landmarks, mean_shape, aligner=aligner)
mean_shape_prev = mean_shape
# Re-estimate the mean from aligned shapes.
mean_shape = np.mean(aligned_landmarks, axis=0)
# Apply constraints on scale and orientation to the current
        # estimate of the mean by aligning it with x0 and scaling so that |x̄| = 1.
mean_shape = aligner(mean_shape_0, mean_shape)
mean_shape /= np.linalg.norm(mean_shape)
cost = np.sum(
np.linalg.norm(aligned_landmarks - mean_shape, axis=(1, 2))
)
mean_shape_diff = np.linalg.norm(mean_shape - mean_shape_prev)
print("%5d\t%+.8e\t%.8e" % (iteration, cost, mean_shape_diff))
# If not converged, return to 4.
# (Convergence is declared if the estimate of the mean does not change significantly after an iteration)
return np.array(aligned_landmarks), mean_shape
landmarks = landmarks.reshape(-1, 35, 2)
###Output
_____no_output_____
###Markdown
One may naturally resort to [scipy.spatial.procrustes](https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.spatial.procrustes.html), which computes an optimal alignment using a scale vector $\mathbf{s}$ and a rotation matrix $\mathbf{R}$, solving [orthogonal Procrustes problem](https://en.wikipedia.org/wiki/Orthogonal_Procrustes_problem). **Exercise:** Using `scipy.spatial.procrustes`, write a default aligner function for our `align_landmarks`. This function must accept two shapes and return the second one aligned to the first one.
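One possible way to fill in the aligner's body is sketched here (a sketch only; `scipy.spatial.procrustes` returns the standardized target, the source aligned to it, and a disparity value):

```python
from scipy.spatial import procrustes

def default_procrustes(target_shape, source_shape):
    # procrustes returns (standardized target, source aligned to it, disparity);
    # for already-standardized shapes the second output is what we need.
    _, aligned_source, _ = procrustes(target_shape, source_shape)
    return aligned_source
```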
###Code
from scipy.spatial import procrustes
def default_procrustes(target_shape, source_shape):
"""Align the source shape to the target shape.
For standardized shapes, can skip translating/scaling
aligned source by target's parameters.
target_shape, source_shape: ndarrays of shape [35, 2]
return ndarray of shape [35, 2]
"""
# <your code here>
# Try aligning a single shape
mean_shape = np.mean(landmarks, axis=0)
mean_shape = standardize(mean_shape)
shape_std = standardize(landmarks[400])
aligned_shape = default_procrustes(mean_shape, shape_std)
fig, (ax1, ax2, ax3) = plt.subplots(figsize=(15, 5), ncols=3)
plot_landmarks(mean_shape, ax=ax1)
ax1.set_title('Mean shape')
# compute the mean shape
plot_landmarks(mean_shape, ax=ax2, color_landmarks='grey', color_contour='grey', alpha=0.5)
plot_landmarks(shape_std, ax=ax2)
ax2.set_title('Another shape, distance = {0:.3f}'.format(np.linalg.norm(mean_shape - shape_std)))
# compute additionally the standard deviation in shape
plot_landmarks(mean_shape, ax=ax3, color_landmarks='grey', color_contour='grey', alpha=0.5)
plot_landmarks(aligned_shape, ax=ax3)
ax3.set_title('Aligned shapes, distance = {0:.3f}'.format(np.linalg.norm(mean_shape - aligned_shape)));
# Align the entire dataset to a mean shape
aligned_landmarks, mean_shape = align_landmarks(landmarks, aligner=default_procrustes, n_iterations=3)
fig, (ax1, ax2) = plt.subplots(figsize=(10, 5), ncols=2)
ax1.scatter(aligned_landmarks[:, :, 0], -aligned_landmarks[:, :, 1], alpha=.01)
ax1.set_title('Aligned landmarks cloud')
# compute the mean shape
plot_landmarks(mean_shape, ax=ax2)
ax2.set_title('Mean landmarks');
###Output
_____no_output_____
###Markdown
But let's do the same using Riemannian optimization! **Q:** Why do we need to optimize anything by hand, if we have procrustes implemented in scipy?
###Code
import pymanopt as opt
import pymanopt.manifolds as manifolds
import pymanopt.solvers as solvers
###Output
_____no_output_____
###Markdown
Recall that the orthogonal Procrustes problem seeks:$$R=\arg \min _{\Omega }\|\Omega A-B\|_{F}\quad \mathrm {subject\ to} \quad \Omega ^{T}\Omega =I,$$i.e. $R$ belongs to the Stiefel manifold. One can optimize that, however, it might be more reasonable to optimize using rotations + scaling.Here, $A$ and $B$ are our shapes, and $\Omega$ is the transform we seek. **Exercise:** program the Procrustes alignment using the following variants: * $R \in \text{Stiefel}(2, 2)$, i.e. we seek a projection matrix using the `Stiefel` object * $R \in \text{SO}(2)$, i.e. we seek a rotation matrix using the `Rotations` object * $R \in \text{SO}(2)$ and $s \in \mathbb{R}^2$, i.e. we seek a rotation + scaling transform using a `Product` of `Rotations` and `Euclidean` manifolds (see example [here](https://github.com/pymanopt/pymanopt/blob/master/examples/regression_offset_autograd.py)) A sketch of one of the variants is given right below.
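As a hedged sketch, the rotation-only variant could look like this (the Stiefel and Product variants follow the same pattern; landmarks are stored as rows, so the transform acts by right multiplication, and the `opt`/`solvers`/`manifolds` imports from above are assumed):

```python
def riemannian_procrustes_rotation(mean_shape, shape):
    def cost(R):
        # Frobenius distance between the rotated shape and the mean shape
        return np.sum((np.dot(shape, R) - mean_shape) ** 2)
    solver = solvers.SteepestDescent()
    manifold = manifolds.Rotations(2)          # R in SO(2)
    problem = opt.Problem(manifold=manifold, cost=cost, verbosity=0)
    R_opt = solver.solve(problem)
    return np.dot(shape, R_opt)
```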
###Code
import autograd.numpy as np # import here to avoid errors
def riemannian_procrustes_projection(mean_shape, shape):
"""Align the source shape to the target shape using projection.
target_shape, source_shape: ndarrays of shape [35, 2]
return ndarray of shape [35, 2]
"""
def cost(R):
return # <your code here>
solver = solvers.SteepestDescent()
manifold = # <your code here>manifolds.Stiefel(2, 2)
problem = opt.Problem(manifold=manifold, cost=cost, verbosity=0)
R_opt = solver.solve(problem)
return # <your code here>
def riemannian_procrustes_rotation(mean_shape, shape):
"""Align the source shape to the target shape using rotation.
target_shape, source_shape: ndarrays of shape [35, 2]
return ndarray of shape [35, 2]
"""
def cost(R):
return # <your code here>
solver = solvers.SteepestDescent()
manifold = # <your code here>
problem = opt.Problem(manifold=manifold, cost=cost, verbosity=0)
R_opt = solver.solve(problem)
return # <your code here>
def riemannian_procrustes_rotation_scaling(mean_shape, shape):
"""Align the source shape to the target shape using a combination rotation and scaling.
target_shape, source_shape: ndarrays of shape [35, 2]
return ndarray of shape [35, 2]
"""
def cost(Rs):
R, s = Rs
return # <your code here>
solver = solvers.SteepestDescent()
manifold = # <your code here>
problem = opt.Problem(manifold=manifold, cost=cost, verbosity=0)
Rs_opt = solver.solve(problem)
R_opt, s_opt = Rs_opt
return # <your code here>
# Stiefel
aligned_landmarks, mean_shape = align_landmarks(landmarks, aligner=riemannian_procrustes_projection, n_iterations=3)
fig, (ax1, ax2) = plt.subplots(figsize=(10, 5), ncols=2)
ax1.scatter(aligned_landmarks[:, :, 0], -aligned_landmarks[:, :, 1], alpha=.01)
ax1.set_title('Aligned landmarks cloud')
# compute the mean shape
plot_landmarks(mean_shape, ax=ax2)
ax2.set_title('Mean landmarks');
# Rotations
aligned_landmarks, mean_shape = align_landmarks(landmarks, aligner=riemannian_procrustes_rotation, n_iterations=3)
fig, (ax1, ax2) = plt.subplots(figsize=(10, 5), ncols=2)
ax1.scatter(aligned_landmarks[:, :, 0], -aligned_landmarks[:, :, 1], alpha=.01)
ax1.set_title('Aligned landmarks cloud')
# compute the mean shape
plot_landmarks(mean_shape, ax=ax2)
ax2.set_title('Mean landmarks');
# Rotations + scale
aligned_landmarks, mean_shape = align_landmarks(landmarks, aligner=riemannian_procrustes_rotation_scaling, n_iterations=3)
fig, (ax1, ax2) = plt.subplots(figsize=(10, 5), ncols=2)
ax1.scatter(aligned_landmarks[:, :, 0], -aligned_landmarks[:, :, 1], alpha=.01)
ax1.set_title('Aligned landmarks cloud')
# compute the mean shape
plot_landmarks(mean_shape, ax=ax2)
ax2.set_title('Mean landmarks');
###Output
_____no_output_____
###Markdown
PCA for learning the shape space_The goal of this section is to learn how to program the simple but powerful PCA linear dimensionality reduction technique using Riemannian optimization._ The typical way of learning the shape space is to find a low-dimensional manifold controlling most of the variability in shapes in a (hopefully) interpretable way. Such a manifold is commonly found using [PCA method](https://en.wikipedia.org/wiki/Principal_component_analysis).We will apply PCA to a matrix $\mathbf{X} \in \mathbb{R}^{n \times 70}$ of aligned shapes.A common way of learning PCA is using SVD implemented in the [`sklearn.decomposition.PCA` class](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html).
###Code
aligned_landmarks = aligned_landmarks.reshape(-1, 70)
from sklearn.decomposition import PCA
pca = PCA(n_components=1)
pca.fit(aligned_landmarks)
d0 = pca.inverse_transform(
pca.transform(aligned_landmarks)
)
data_scaled_vis = d0.reshape((-1, 35, 2))
plt.scatter(data_scaled_vis[:200, :, 0], -data_scaled_vis[:200, :, 1], alpha=.1)
###Output
_____no_output_____
###Markdown
Do the same using Riemannian optimization Recall that PCA finds a low-dimensional linear subspace by searching for a corresponding orthogonal projection. Thus, PCA searches for an orthogonal projection $M$ such that:$$M = \arg \min _{\Omega } \|X - \Omega \Omega^{\intercal} X\|^2_{F} \quad \mathrm {subject\ to} \quad \Omega ^{T}\Omega = I,$$i.e. $\Omega$ belongs to the Stiefel manifold $\mathcal{O}^{d \times r}$. The value $\|X - M M^{\intercal} X\|^2_{F}$ is the reconstruction error from projecting $X$ to $r$-dimensional subspace and restoring back to $d$-dimensional (original) one. **Exercise:** program the PCA by finding an orthogonal projection from 70-dimensional onto 2-dimensional subspace, using `pymanopt`.**Hint:** use `Stiefel(70, 2)` manifold and the reconstruction error cost as described above.
###Code
# Cost function is the reconstruction error
def cost(w):
return # <your code here>
solver = solvers.TrustRegions()
manifold = # <your code here>
problem = opt.Problem(manifold=manifold, cost=cost)
wopt = solver.solve(problem)
print('The following projection matrix was found to minimise '
'the squared reconstruction error: ')
print(wopt)
###Output
_____no_output_____
###Markdown
Now construct a low-dimensional approximation of $X$, by projecting to $r$-dimensional parameter space and back.
###Code
aligned_landmarks_r = np.dot(wopt, np.dot(wopt.T, aligned_landmarks.T)).T
aligned_landmarks_r = aligned_landmarks_r.reshape((-1, 35, 2))
plt.scatter(aligned_landmarks_r[:200, :, 0], -aligned_landmarks_r[:200, :, 1], alpha=.1)
###Output
_____no_output_____
###Markdown
Exploring the lower-dimensional linear manifold parameterizing landmarks_The purpose of this part is to understand how the coordinate values in the lower-dimensional space influence the landmark shape_. Coordinates along principal components _parameterize_ the shape, i.e. a smooth walk along these directions should result in interpolation between shapes. **Exercise:** explore the lower-dimensional linear manifold parameterizing landmarks: * Show samples _from the data_ with different coordinates along PC1 (hint: use `reconstructions_sorted_along_pc` below) * Show _synthetic_ samples obtained by moving in the data manifold along PC1 (hint: modify `reconstructions_sorted_along_pc` below into `vary_on_manifold`)
###Code
def reconstructions_sorted_along_pc(landmarks, w, pc=1, n_shapes=6):
# project to r-dimensional manifold
projected_landmarks = np.dot(w.T, landmarks.T).T
# sort along dimension selected by pc
pc_idx = np.argsort(projected_landmarks[:, pc])
# reconstruct several shapes with varying degree
# of expressiveness in parameter pc
idx = np.linspace(0, len(landmarks), n_shapes).astype(int)
idx[-1] = idx[-1] - 1
shapes_to_reconstruct = projected_landmarks[pc_idx[idx]].T
reconstructions = np.dot(w, shapes_to_reconstruct).T
reconstructions = reconstructions.reshape((-1, 35, 2))
return reconstructions
def plot_variability_along_pc(landmarks, w, pc=1, n_shapes=6):
reconstructions = reconstructions_sorted_along_pc(landmarks, w, pc=pc, n_shapes=n_shapes)
fig, axs = plt.subplots(ncols=6, nrows=1, figsize=(18, 3))
for ax, image in zip(axs, reconstructions):
plot_landmarks(image, ax=ax)
plot_variability_along_pc? # <your code here>
###Output
_____no_output_____
###Markdown
**Q:** Would this variability necessarily be exactly like the PCA?
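One possible way to fill in the three missing lines inside `vary_on_manifold` in the next cell is sketched here (a sketch only; `projected_landmarks`, `pc` and `n_shapes` are the function's local names, and the idea is to sweep the selected component over its observed range):

```python
min_pc_value = projected_landmarks[:, pc].min()
max_pc_value = projected_landmarks[:, pc].max()
pc_values = np.linspace(min_pc_value, max_pc_value, n_shapes)
```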
###Code
# PC2
def vary_on_manifold(landmarks, id, w, pc=1, n_shapes=6):
projected_landmarks = np.dot(w.T, landmarks.T).T
min_pc_value = # <your code here>
max_pc_value = # <your code here>
pc_values = # <your code here>
the_one_projection = projected_landmarks[id][None]
shapes_to_reconstruct = np.tile(the_one_projection, (n_shapes, 1))
shapes_to_reconstruct[:, pc] = pc_values
reconstructions = np.dot(w, shapes_to_reconstruct.T).T
reconstructions = reconstructions.reshape((-1, 35, 2))
fig, axs = plt.subplots(ncols=n_shapes, nrows=1, figsize=(3 * n_shapes, 3))
for ax, image in zip(axs, reconstructions):
plot_landmarks(image, ax=ax)
vary_on_manifold(aligned_landmarks, 0, wopt, pc=1, n_shapes=30)
###Output
_____no_output_____
###Markdown
Analysing the shape space of facial landmarks via MDS Compute embedding of the shape space into 2D, preserving distances between shapes Classic multidimensional scaling (MDS) aims to find an orthogonal mapping $M$ such that:$$M = \arg \min _{\Omega } \sum_i \sum_j (d_X (\mathbf{x}_i, \mathbf{x}_j) - d_Y (\Omega^{\intercal}\mathbf{x}_i, \Omega^{\intercal}\mathbf{x}_j))^2 \quad \mathrm {subject\ to} \quad \Omega ^{T}\Omega = I,$$i.e. $\Omega$ belongs to the Stiefel manifold $\mathcal{O}^{d \times r}$ where $d$ is the dimensionality of the original space, and $r$ is the dimensionality of the compressed space.In other words, consider distances $d_X (\mathbf{x}_i, \mathbf{x}_j)$ between each pair $(i, j)$ of objects in the original space $X$. MDS aims at projecting $\mathbf{x}_i$'s to a linear subspace $Y$ such that each distance $d_Y (M^{\intercal}\mathbf{x}_i, M^{\intercal}\mathbf{x}_j)$ approximates $d_X (\mathbf{x}_i, \mathbf{x}_j)$ as closely as possible.
###Code
aligned_landmarks = aligned_landmarks.reshape((-1, 70))
# a slightly tricky way of computing pairwise distances for [n, d] matrixes of objects,
# see https://stackoverflow.com/questions/28687321/computing-euclidean-distance-for-numpy-in-python
def calculate_pairwise_distances(points):
return ((points[..., None] - points[..., None].T) ** 2).sum(1)
euclidean_distances = calculate_pairwise_distances(aligned_landmarks)
###Output
_____no_output_____
###Markdown
**Exercise:** program MDS dimensionality reduction method using `pymanopt`. Project from 70-dimensional to 2-dimensional space.**Hint:** to compute distances, use `calculate_pairwise_distances` above.**Hint:** use `Stiefel(70, 2)` manifold
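A possible sketch of the missing pieces (assuming `aligned_landmarks` of shape `[n, 70]`, the precomputed `euclidean_distances`, and the `calculate_pairwise_distances` helper from the cell above):

```python
def cost(w):
    # pairwise (squared) distances after projecting onto the 2-D subspace
    projected = np.dot(aligned_landmarks, w)
    projected_distances = calculate_pairwise_distances(projected)
    return np.sum((euclidean_distances - projected_distances) ** 2)

manifold = manifolds.Stiefel(70, 2)
```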
###Code
import autograd.numpy as np
def cost(w):
# <your code here>
solver = solvers.TrustRegions()
manifold = # <your code here>
problem = opt.Problem(manifold=manifold, cost=cost)
wopt = solver.solve(problem)
print('The following projection matrix was found to minimise '
'the squared reconstruction error: ')
print(wopt)
projected_shapes = np.dot(wopt.T, aligned_landmarks.T).T
from riemannianoptimization.tutorial_helpers import prepare_html_for_visualization
from IPython.display import HTML
HTML(prepare_html_for_visualization(projected_shapes, aligned_landmarks, scatterplot_size=[700, 700],
annotation_size=[100, 100], floating_annotation=True))
###Output
_____no_output_____
|
09 lettura dati e pandas.ipynb
|
###Markdown
Example 3 The last example we cover is a dataset of works of art. The main characteristic of the file is that the delimiter is the semicolon. A second characteristic is that some alphanumeric fields are enclosed in quotes (`"`) because they themselves contain semicolons that must be interpreted as punctuation, not as delimiters. In this case pandas reads the file correctly, but this situation is often problematic and must be handled carefully, using the `quoting`, `doublequote`, `quotechar` and `escapechar` options.
###Code
arte = pd.read_csv("data/arte.csv.gz",
delimiter = ';')
arte
###Output
_____no_output_____
###Markdown
Pandas Pandas is an open source library that provides two fundamental capabilities: 1. the ability to read a structured data file (for example, in CSV format); 2. the ability to handle data in tabular format (DataFrame or Series). Anaconda includes Pandas, but it must be imported. Therefore the first instruction will be:
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Now the pandas library is available as `pd`. File formats Some file formats are more common to find: * CSV (comma separated values). The data are separated by commas or by other characters (space, semicolon) * XLS. The data are stored in an Excel spreadsheet * JSON. It is a format designed to exchange data between computers. It is not easy to read directly. For each of these formats we will have a specific instruction that allows reading it. An example of a CSV file can be found at [https://github.com/gdv/foundationsCS-2018/raw/master/ex-data/f1-db/results.csv](https://github.com/gdv/foundationsCS-2018/raw/master/ex-data/f1-db/results.csv) and is shown below.```resultId,raceId,driverId,constructorId,number,grid,position,positionText,positionOrder,points,laps,time,milliseconds,fastestLap,rank,fastestLapTime,fastestLapSpeed,statusId1,18,1,1,22,1,1,1,1,10,58,34:50.6,5690616,39,2,01:27.5,218.3,12,18,2,2,3,5,2,2,2,8,58,5.478,5696094,41,3,01:27.7,217.586,13,18,3,3,7,7,3,3,3,6,58,8.163,5698779,41,5,01:28.1,216.719,1```In this case the comma is the separator between different fields and the decimal point is used to separate the integer part from the fractional part. Moreover, the first row contains the names of the various fields. JSON files JSON is a textual data format used mainly for data exchange. While the CSV format is designed to represent tabular data, the JSON format allows representing hierarchical data with a flexible schema. The following example of a JSON file is adapted from [Wikipedia](https://it.wikipedia.org/wiki/JavaScript_Object_Notation) and contains the data of two people:```{ "name": "Mario", "surname": "Rossi", "birthday": { "day": 1, "month": 1, "year": 2000 },}{ "name": "Giovanna", "surname": "Verdi",}``` Reading JSON files To read a JSON file and import the data into a DataFrame you must use the `read_json` instruction, which requires as argument the name of the file to read, or the http(s) address (*URL*) of the file, in case it is available for download. Let us now read the data
###Code
incidenti = pd.read_json("https://git.io/fhmXn")
incidenti
###Output
_____no_output_____
###Markdown
It is good practice to check visually, even roughly, that the DataFrame has been imported correctly. To see the DataFrame it is sufficient to write the name of the DataFrame as the only instruction. Reading data The case just shown downloaded the data from the indicated URL. Another possibility is to provide the full path of the file to read. The path can be either absolute or relative. The two instructions that follow have the same effect as the previous instruction (the first uses an absolute path, the second a relative one).
###Code
incidenti = pd.read_json("/home/utente/python/data/incidenti.json")
incidenti = pd.read_json("data/incidenti.json")
###Output
_____no_output_____
###Markdown
The two paths are to be understood as indicative, since the real path depends on the folder in which the data were saved and from which Jupyter was started. Moreover, how to specify the path depends on the operating system used. In particular, although Windows normally requires the `\` character (backslash) to separate folders, in Jupyter you must use the `/` character (slash). Reading a spreadsheet To read an Excel spreadsheet, you must use the `read_excel` instruction. Exactly like `read_json`, you must provide an argument that is the path of the file or the URL of the file to import.
###Code
lavoro = pd.read_excel("http://www2.census.gov/prod2/statcomp/usac/excel/CLF01.xls")
lavoro
###Output
_____no_output_____
###Markdown
In this case it is expected that the first row of the xls file contains the column names. Reading CSV files A large share of datasets are distributed in CSV formats because they are simple to produce and to read and they lend themselves to compression. The instruction to use is `read_csv`, of which we can see an example.
###Code
f1 = pd.read_csv("https://git.io/fpdnm")
f1
###Output
_____no_output_____
###Markdown
`Read_csv` in detail `read_csv` `read_csv` is a central instruction in pandas: it will be the main way we read the data to import into a DataFrame. For this reason we will devote space to describing the various options available. Every time we want to read a new data file, we must understand which options we need to use.
###Code
nani = pd.read_csv("data/7-nani.csv")
nani
###Output
_____no_output_____
###Markdown
The data file `7-nani.csv` does not contain a row with column names and contains only the names of the dwarfs. Consequently `read_csv` without options does not read the data correctly: in fact, in the `nani` DataFrame the column name becomes `Brontolo` (which should instead be a data value), and `nani` is a DataFrame even though it contains only one column of data (so it should be a Series). `names` The `names` option allows specifying the names of the columns to read. The option takes the list of names to use. Moreover, it assumes that the first row of the file contains data to insert into the DataFrame.
###Code
nani = pd.read_csv("data/7-nani.csv",
names = ['Nome'])
nani
###Output
_____no_output_____
###Markdown
`squeeze` To obtain a Series from the `7-nani.csv` file we must use the `squeeze` option, which is dedicated to this purpose: if the data have a single column, the result is a Series.
###Code
nani = pd.read_csv("data/7-nani.csv",
names = ['Nome'], squeeze = True)
nani
###Output
_____no_output_____
###Markdown
`delimiter` The comma is the character most commonly used to separate fields, but it is not the only one. Another frequently used character is the semicolon (`;`), especially for files obtained by exporting from Excel. This delimiter is the default in Italy, because the comma is used to separate the integer part of a number from the fractional part (for example *12,345*). The `delimiter` option (or the equivalent `sep`) allows specifying the character to use as separator. Let's see an example.
###Code
iscritti = pd.read_csv("data/2009-2013_iscritti.csv", delimiter = ';')
iscritti
###Output
_____no_output_____
###Markdown
In rarer cases it is also possible to specify separator strings instead of single characters. A particular case occurs when the separator consists of a sequence of spaces and/or tabs: this corresponds to the `delim_whitespace` option. `skiprows` In some data files the first rows are used to write comments (typically a description of the data). The `skiprows` option allows indicating how many rows of the file contain comments and must be skipped when reading. This option is normally used together with the `names` option to indicate the column names.
###Code
kidney = pd.read_csv("data/kidney.txt",
delim_whitespace = True,
skiprows = 17,
names = ['paziente', 'tempo', 'genere', 'età', 'tipo', 'diagnosi'])
kidney
###Output
_____no_output_____
###Markdown
An alternative, in case the comment contains the column names, is the `header` option, which allows specifying the number of the row that contains the column names. European data We mentioned earlier that in Italy (and in Europe) the comma is preferred to separate the integer part of a number from the fractional part. Another difference between the American and European standards is the thousands separator: in Europe a space or a period is used (*67.891.123*), while the American standard is the comma (*67,891,123*). To handle both of these cases, we have respectively the `decimal` and `thousands` options. For the latter option, the default value is the empty string: consequently the `thousands` option must be used even for numbers that use the comma as the thousands separator. A small illustration of combining the two options follows below.
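As a hedged illustration (the file name here is hypothetical and not part of the course data), European-style numbers such as *1.234.567,89* need both options at once:

```python
# Hypothetical file used only to illustrate decimal + thousands together.
esempio_eu = pd.read_csv("data/esempio_europeo.csv",
                         delimiter=';',
                         decimal=',',
                         thousands='.')
```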
###Code
gettito = pd.read_csv("data/2009-2013_gettito_contribuzione.csv", delimiter = ';', decimal = ',')
gettito
###Output
_____no_output_____
###Markdown
Large files Large data files present two distinct problems: 1. computation times become longer; 2. the procedures used by pandas to infer the organization of the data (in particular the column types) are not precise. Once again, we have some pandas options to limit these problems: 1. `nrows`: specifies the number of rows of the file to be read; 2. `low_memory = False`: allows pandas to use a larger amount of memory to infer the organization of the data. Another characteristic of pandas that makes it suitable for handling large data is that it can read compressed files. Let's look at reading a large file without using any of the indicated options, and see the warning that signals a possible problem in inferring the data types.
###Code
bandi = pd.read_csv("data/scpbandinew.csv.bz2")
###Output
/home/gianluca/.miniconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3020: DtypeWarning: Columns (15,19,34) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Large files (2) Let's now see the reading of the same file, using the `low_memory` option.
###Code
bandi = pd.read_csv("data/scpbandinew.csv.bz2", low_memory = False)
bandi
###Output
_____no_output_____
###Markdown
With `nrows`
###Code
bandi = pd.read_csv("data/scpbandinew.csv.bz2", nrows=100)
bandi
###Output
_____no_output_____
###Markdown
The `nrows` option is very useful in the initial development phase, because it allows checking the correctness of the code on easily manageable data. However, it cannot be used to perform data analysis. The best strategy is to use it only in the early stages of development and only if the data are too difficult to handle because of their size. More on `read_csv` Handling dates and times Reading dates and times presents several issues. In particular: 1. Besides the usual concepts of date and time, we also have the concept of *timestamp* (also called time instant or datetime), which is essentially made up of a date and a time (hours, minutes, seconds and often also a fraction of a second). 2. The date can be in European format (day/month/year) or American format (month/day/year). 3. The separator between day, month and year is not unique. Normally `/` or `-` is used, but other characters are sometimes used (for example `.`). 4. The indication of a timestamp is precise only when the reference time zone is also specified (if the file is obtained as the output of a program, the default time zone is `UTC`, which corresponds to the Greenwich time zone). All these aspects contribute to making the reading of dates and times particularly difficult. `parse_dates` The `parse_dates` option makes explicit which columns contain dates (or timestamps).
###Code
strutture = pd.read_csv("data/2009-2013_strutture.csv", delimiter = ';',
parse_dates = [0] )
strutture
###Output
_____no_output_____
###Markdown
The default behavior, however, is to read dates in American format. For the European format we must use the `dayfirst` option. So the correct instruction becomes:
###Code
strutture = pd.read_csv("data/2009-2013_strutture.csv", delimiter = ';',
parse_dates = [0], dayfirst = True )
strutture
###Output
_____no_output_____
###Markdown
Missing values The use of specific values, called *sentinels*, to represent missing values is rather frequent, but it is context-dependent. In these cases it is appropriate to use some options when reading: 1. `na_values`: a list of strings that are interpreted as missing values; 2. `keep_default_na`: a boolean indicating whether to keep interpreting the strings `NaN`, `nan`, `N/A`, `null` as missing values. When these options are used, it is appropriate to also add `verbose = True`: in this way the number of missing values in the non-numeric columns is computed. This allows checking that the strings encoding the missing values have not introduced other problems. The presence of missing values is a fundamental aspect of reading data files: unfortunately this is often underestimated. A small illustration of these options follows below. Encoding Encoding is a technical matter that indicates how characters and symbols are represented by the computer as binary numbers (sequences of 0s and 1s). Unfortunately, the most widely used encoding (ASCII) does not allow representing accented letters and other characters used in languages other than English. This causes some problems in reading data files with accented characters, which can be overcome only by specifying the encoding used when the file was created: as far as files with Italian text are concerned, the encodings normally used are `iso-8859-1` and `utf-8`. When you try to read a file using the wrong encoding, you get an error that is immediately reported by pandas with a `can't decode byte` or `UnicodeDecodeError` message. Let's now see a case in which it is necessary to specify the encoding.
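For the missing-values options just described, a hedged illustration could look like this (the file name and sentinel strings are hypothetical, not part of the course data):

```python
# Hypothetical file where "-" and "9999" mark missing values.
esempio_na = pd.read_csv("data/esempio_na.csv",
                         na_values=["-", "9999"],
                         keep_default_na=True,
                         verbose=True)
```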
###Code
num_interventi = pd.read_csv("data/2009-2013_numero_interventi.csv", delimiter = ';')
num_interventi = pd.read_csv("data/2009-2013_numero_interventi.csv", delimiter = ';', encoding = 'iso-8859-1')
###Output
_____no_output_____
###Markdown
Example In the file `data/latest_bid.csv`, the columns `decreto_data` and `decreto_data_datetime` contain dates.
###Code
bid = pd.read_csv("data/latest_bid.csv",
parse_dates = ['decreto_data', 'decreto_data_datetime'])
bid
###Output
_____no_output_____
###Markdown
Example 2 The file `data/farmaci.csv.gz` has several characteristics that must be handled appropriately in `read_csv`: * the delimiter is the semicolon; * the encoding is `iso-8859-1`; * the file is large; * some columns contain a date, and the date `9999-12-31` represents a missing value
###Code
farmaci = pd.read_csv("data/farmaci.csv.gz",
delimiter = ';',
encoding = 'iso-8859-1',
low_memory = False,
na_values = '9999-12-31',
parse_dates = ['INIZIO_VALIDITA',
'FINE_VALIDITA',
'DATAFINE_COMMERCIO'])
farmaci
###Output
_____no_output_____
|
TrainTheModel.ipynb
|
###Markdown
Load Training Dataset
###Code
X_train = np.load("GITHUB/XtrainWindowSize"
+ str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
y_train = np.load("GITHUB/ytrainWindowSize"
+ str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
# Reshape into (number of samples, channels, height, width)
X_train = np.reshape(X_train, (X_train.shape[0],X_train.shape[3], X_train.shape[1], X_train.shape[2]))
# convert class labels to one-hot encoding
y_train = np_utils.to_categorical(y_train)
# Define the input shape
input_shape= X_train[0].shape
print(input_shape)
# number of filters
C1 = 3*numPCAcomponents
# Define the model
model = Sequential()
model.add(Conv2D(C1, (3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(3*C1, (3, 3), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(6*numPCAcomponents, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(16, activation='softmax'))
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, epochs=15)
import h5py
from keras.models import load_model
model.save('my_model.h5')
###Output
_____no_output_____
###Markdown
Load Training Dataset
###Code
X_train = np.load("/home/deeplearning/Desktop/GITHUB/XtrainWindowSize"
+ str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
y_train = np.load("/home/deeplearning/Desktop/GITHUB/ytrainWindowSize"
+ str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
# Reshape into (number of samples, channels, height, width)
X_train = np.reshape(X_train, (X_train.shape[0],X_train.shape[3], X_train.shape[1], X_train.shape[2]))
# convert class labels to one-hot encoding
y_train = np_utils.to_categorical(y_train)
# Define the input shape
input_shape= X_train[0].shape
print(input_shape)
# number of filters
C1 = 3*numPCAcomponents
# Define the model
model = Sequential()
model.add(Conv2D(C1, (3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(3*C1, (3, 3), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(6*numPCAcomponents, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(16, activation='softmax'))
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, epochs=15)
import h5py
from keras.models import load_model
model.save('my_model.h5')
###Output
_____no_output_____
###Markdown
Load Training Dataset
###Code
X_train = np.load("X_trainPatches_" + str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
y_train = np.load("y_trainPatches_" + str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
# Reshape into (numberofsamples, channels, height, width)
X_train = np.reshape(X_train, (X_train.shape[0],X_train.shape[3], X_train.shape[1], X_train.shape[2]))
# convert class labels to one-hot encoding
y_train = np_utils.to_categorical(y_train)
# Define the input shape
input_shape= X_train[0].shape
print(input_shape)
# number of filters
C1 = 3*numPCAcomponents
# Define the model
model = Sequential()
model.add(Conv2D(C1, (3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(3*C1, (3, 3), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(6*numPCAcomponents, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(16, activation='softmax'))
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, epochs=15)
import h5py
from keras.models import load_model
model.save('my_model' + str(windowSize) + 'PCA' + str(numPCAcomponents) + "testRatio" + str(testRatio) + '.h5')
###Output
_____no_output_____
|
.ipynb_checkpoints/Diseases Instructor-checkpoint.ipynb
|
###Markdown
Diseases and spreading :/The SIR model is one of the simplest compartmental models, and many models are derivations of this basic form. The model consists of three compartments: S for the number susceptible, I for the number of infectious, and R for the number recovered (or immune). This model is reasonably predictive for infectious diseases which are transmitted from human to human, and where recovery confers lasting resistance, such as measles, mumps and rubella. For more info on network diffusion models in Python, check out NDlib - Network Diffusion Library: https://github.com/GiulioRossetti/ndlib
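For reference, the classic mean-field form of the model (a standard textbook formulation, not taken from NDlib's documentation) is

$$\frac{dS}{dt} = -\beta S I,\qquad \frac{dI}{dt} = \beta S I - \gamma I,\qquad \frac{dR}{dt} = \gamma I,$$

where $\beta$ is the infection rate and $\gamma$ the recovery rate, the same two parameters configured for the network simulation below.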
###Code
import networkx as nx
import ndlib.models.epidemics.SIRModel as sir
import ndlib.models.ModelConfig as mc
from ndlib.viz.mpl.DiffusionTrend import DiffusionTrend
# Network Definition
G = nx.erdos_renyi_graph(400, 0.1)
# Model Selection
model = sir.SIRModel(G)
# Model Configuration
config = mc.Configuration()
config.add_model_parameter('beta', 0.001)
config.add_model_parameter('gamma', 0.01)
config.add_model_parameter("percentage_infected", 0.1)
model.set_initial_status(config)
# Simulation
iterations = model.iteration_bunch(200)
trends = model.build_trends(iterations)
viz = DiffusionTrend(model, trends)
p = viz.plot()
###Output
_____no_output_____
###Markdown
What happens if we change the structure of this network?
###Code
G = nx.erdos_renyi_graph(400, 0.7)
model = sir.SIRModel(G)
config = mc.Configuration()
config.add_model_parameter('beta', 0.001)
config.add_model_parameter('gamma', 0.01)
config.add_model_parameter("percentage_infected", 0.1)
model.set_initial_status(config)
iterations = model.iteration_bunch(200)
trends = model.build_trends(iterations)
viz = DiffusionTrend(model, trends)
p = viz.plot()
G = nx.barabasi_albert_graph(400, 40)
model = sir.SIRModel(G)
config = mc.Configuration()
config.add_model_parameter('beta', 0.001)
config.add_model_parameter('gamma', 0.01)
config.add_model_parameter("percentage_infected", 0.4)
model.set_initial_status(config)
iterations = model.iteration_bunch(200)
trends = model.build_trends(iterations)
viz = DiffusionTrend(model, trends)
p = viz.plot()
###Output
_____no_output_____
###Markdown
Does this also hold true for various other diffusion processes like computer viruses? Sure! why not? Exercise Let's take a dataset of autonomous systems, i.e. the to-be future IoT networks. The graph of routers comprising the Internet can be organized into sub-graphs called Autonomous Systems (AS). Each AS exchanges traffic flows with some neighbors (peers). We can construct a communication network of who-talks-to-whom from the BGP (Border Gateway Protocol) logs.source: http://snap.stanford.edu/data/as.html Create a graph, run the SIR model on it, and plot the diffusion trend curve. Play around with various parameters :) Also look at the degree distribution of this network; what can we infer from it? (A possible sketch for the degree-distribution part is given below.)
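Once `G` has been built as in the following cell, one possible sketch for the degree-distribution part of the exercise is (a sketch only, not the only valid answer):

```python
import matplotlib.pyplot as plt

# Degree of every node (networkx 2.x returns (node, degree) pairs)
degrees = [d for _, d in G.degree()]
plt.hist(degrees, bins=50)
plt.xlabel('degree')
plt.ylabel('number of nodes')
plt.show()
```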
###Code
import pandas as pd
G = nx.Graph()
for row in pd.read_csv('autosys.txt', delimiter='\t').iterrows():
G.add_edge(row[1][0], row[1][1])
model = sir.SIRModel(G)
config = mc.Configuration()
config.add_model_parameter('beta', 0.01)
config.add_model_parameter('gamma', 0.01)
config.add_model_parameter("percentage_infected", 0.3)
model.set_initial_status(config)
iterations = model.iteration_bunch(500)
trends = model.build_trends(iterations)
viz = DiffusionTrend(model, trends)
p = viz.plot()
import matplotlib.pyplot as plt
plt.hist(list(nx.pagerank(G).values()))
plt.show()
sorted(dict(nx.degree(G)).values(), reverse=True)
###Output
_____no_output_____
|
Regression_Analysis_hyperparameter_tuning.ipynb
|
###Markdown
Regression with scikit-learn using Soccer Dataset Prepared by: Shadab Sayeed Import Libraries
###Code
import sqlite3
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from math import sqrt
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
!pip install fastai==0.7.0
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor
from IPython.display import display
from sklearn import metrics
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import GradientBoostingRegressor
from matplotlib import rcParams
from matplotlib.cm import rainbow
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Read Data from the Database into pandas
###Code
# Create your connection.
cnx = sqlite3.connect('/content/drive/My Drive/CSV files/database.sqlite')
df = pd.read_sql_query("SELECT * FROM Player_Attributes", cnx)
df.head()
print(df.shape)
df.isna().sum()
print(df.shape)
print(df.dtypes)
df.columns
###Output
(183978, 42)
id int64
player_fifa_api_id int64
player_api_id int64
date object
overall_rating float64
potential float64
preferred_foot object
attacking_work_rate object
defensive_work_rate object
crossing float64
finishing float64
heading_accuracy float64
short_passing float64
volleys float64
dribbling float64
curve float64
free_kick_accuracy float64
long_passing float64
ball_control float64
acceleration float64
sprint_speed float64
agility float64
reactions float64
balance float64
shot_power float64
jumping float64
stamina float64
strength float64
long_shots float64
aggression float64
interceptions float64
positioning float64
vision float64
penalties float64
marking float64
standing_tackle float64
sliding_tackle float64
gk_diving float64
gk_handling float64
gk_kicking float64
gk_positioning float64
gk_reflexes float64
dtype: object
###Markdown
Declaring the Columns we want to Use as Features
###Code
features = [
'potential', 'crossing', 'finishing', 'heading_accuracy',
'short_passing', 'volleys', 'dribbling', 'curve', 'free_kick_accuracy',
'long_passing', 'ball_control', 'acceleration', 'sprint_speed',
'agility', 'reactions', 'balance', 'shot_power', 'jumping', 'stamina',
'strength', 'long_shots', 'aggression', 'interceptions', 'positioning',
'vision', 'penalties', 'marking', 'standing_tackle', 'sliding_tackle',
'gk_diving', 'gk_handling', 'gk_kicking', 'gk_positioning',
'gk_reflexes']
###Output
_____no_output_____
###Markdown
Specifying the Prediction Target
###Code
target = ['overall_rating']
###Output
_____no_output_____
###Markdown
Cleaning the Data
###Code
df = df.dropna()
print(df.shape)
df.head()
X = df[features]
y = df[target]
print(X.shape)
print(y.shape)
X.head()
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Plotting a heatmap to see the correlation.
###Code
plt.figure(1,figsize=(24,15))
sns.heatmap(X.corr(),annot=True,cmap="YlGnBu")
plt.show()
###Output
_____no_output_____
###Markdown
Let us look at a typical row from our features:
###Code
X.iloc[2]
###Output
_____no_output_____
###Markdown
Displaying target values:
###Code
y.head()
###Output
_____no_output_____
###Markdown
Split the Dataset into Training and Test Datasets
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324,shuffle=True)
y_test.mean()
###Output
_____no_output_____
###Markdown
(1) Linear Regression: Fitting a model to the training set Performing Prediction using Linear Regression Model
###Code
model1= LinearRegression(n_jobs=-1)
model1.fit(X_train, y_train)
y_pred1=model1.predict(X_test)
RMSE1 = sqrt(mean_squared_error(y_true = y_test, y_pred = y_pred1))
print(y_test.mean())
print(RMSE1)
len(X.columns)
###Output
_____no_output_____
###Markdown
Performing Prediction using Decision tree Regression Model
###Code
score=[]
for i in range(2,len(X.columns)+1):
model2= DecisionTreeRegressor(max_depth=10+i,max_features=i)
model2.fit(X_train, y_train)
y_pred2=model2.predict(X_test)
RMSE2 = sqrt(mean_squared_error(y_true = y_test, y_pred = y_pred2))
score.append(RMSE2)
RMSE2
plt.figure(1,figsize=(24,8))
list1=list(range(2,len(X.columns)+1))
sns.lineplot(x=list1,y=score)
sns.scatterplot(x=list1,y=score,color='red',legend='brief')
for i in range(len(list1)):
plt.text(x =list1[i]+0.05 , y =score[i]+0.05, s =round(score[i],3), size = 10)
plt.xticks([i for i in range(2, len(X.columns) + 1)])
plt.xlabel('Max features')
plt.xlim(1,35)
plt.ylim(1.2,2.8)
plt.ylabel('Scores')
plt.title('Decision Tree Regress RMSE scores for different number of maximum features')
plt.show()
###Output
_____no_output_____
###Markdown
Performing Prediction using Random Forest Regression Model with different numbers of estimators
###Code
score1=[]
estimators=[1,2,4,6,8,10,12,14,18,20]
for i in estimators:
model3= RandomForestRegressor(n_jobs=-1,n_estimators=i,max_depth=5+i)
model3.fit(X_train, y_train.values.ravel())
y_pred3=model3.predict(X_test)
RMSE3 = sqrt(mean_squared_error(y_true = y_test, y_pred = y_pred3))
#print(y_test.mean())
score1.append(RMSE3)
sns.set_style('whitegrid')
plt.figure(1,figsize=(18,8))
sns.lineplot(x=estimators,y=score1,color='yellow')
sns.scatterplot(x=estimators,y=score1,color='blue')
for i in range(len(estimators)):
plt.text(x =estimators[i]+0.05 , y =score1[i]+0.05, s =round(score1[i],3), size = 12)
plt.xticks(estimators)
plt.ylim(0.8,3.1)
#plt.xlim(1,20)
plt.xlabel('n_estimators')
plt.ylabel('Scores')
plt.title('Random Forest Tree Regress RMSE scores for different number of maximum features')
plt.show()
print("Computed Random Forest Scores By tuning")
score1
###Output
Computed Random Forest Scores By tuning
###Markdown
Overall Best Random Forest Regressor Model Let's do some further hyperparameter tuning* n_estimators = 20, max_depth = 25; let's play with max_features = [ 'auto' , 'sqrt' , 'log2' ]* Increasing max_features generally improves the performance of the model, as at each node we now have a higher number of options to consider. However, this is not necessarily true, as it decreases the diversity of the individual trees, which is the USP of a random forest.
###Code
m_fea=['auto','sqrt','log2',28,34]
score_fea=[]
for i in m_fea:
model3_rf= RandomForestRegressor(n_jobs=-1,n_estimators=20,max_depth=25,max_features=i)
model3_rf.fit(X_train, y_train.values.ravel())
y_pred3_rf=model3_rf.predict(X_test)
RMSE3_rf = sqrt(mean_squared_error(y_true = y_test, y_pred = y_pred3_rf))
#print(y_test.mean())
score_fea.append(RMSE3_rf)
print('max_features :'+str(i))
print(RMSE3_rf)
max_feat=pd.DataFrame({'Name':m_fea,'RMSE_Score':score_fea})
max_feat
plt.figure(1,figsize=(14,8))
sns.barplot(x='Name',y='RMSE_Score',data=max_feat)
for i in range(5):
plt.text(x =i-0.1 , y = score_fea[i]+0.002, s =round(score_fea[i],3), size = 14)
plt.ylim(0.9,1.05)
plt.ylabel('RMSE Score')
plt.xlabel('max_features method')
plt.title('Comparing various max_features values on the performance of the model: less the better')
plt.show()
###Output
_____no_output_____
###Markdown
Here is a simple model to explain how Random Forest Regression works. Below is the tree structure to explain the decisions taken
###Code
reg_rf= RandomForestRegressor(n_jobs=-1,n_estimators=1,max_depth=3,bootstrap=False)
reg_rf.fit(X_train, y_train.values.ravel())
draw_tree(reg_rf.estimators_[0], X_train, precision=5)
print(RMSE1)
print(RMSE2)
print(RMSE3)
###Output
2.8053030468552103
1.4292009130459054
1.0159395639400142
###Markdown
KNeighborsRegressor Not that good, as shown by its RMSE error
###Code
model5= KNeighborsRegressor()
model5.fit(X_train, y_train.values.ravel())
y_pred5=model5.predict(X_test)
RMSE5 = sqrt(mean_squared_error(y_true = y_test, y_pred = y_pred5))
print(y_test.mean())
print(RMSE5)
###Output
overall_rating 68.635818
dtype: float64
1.5443597253788965
###Markdown
Gradient Boosting Regressor has an RMSE of 1.777
###Code
estimators_gb=[18,20,24,28,30,32,36,40,60,80,100,120]
score3=[]
for i in estimators_gb:
model6= GradientBoostingRegressor(n_estimators=i)
model6.fit(X_train, y_train.values.ravel())
y_pred6=model6.predict(X_test)
RMSE6 = sqrt(mean_squared_error(y_true = y_test, y_pred = y_pred6))
score3.append(RMSE6)
print("estimators :"+str(i))
print(RMSE6)
sns.set_style('whitegrid')
len(score3)==len(estimators_gb)
plt.figure(1,figsize=(18,8))
sns.lineplot(x=estimators_gb,y=score3,color='orange')
sns.scatterplot(x=estimators_gb,y=score3,color='red')
for i in range(len(estimators_gb)):
plt.text(x =estimators_gb[i]+0.05 , y =score3[i]+0.05, s =round(score3[i],3), size = 12)
#plt.xticks(estimators)
plt.ylim(1.5,3.5)
plt.xlim(1,140)
plt.xlabel('n_estimators')
plt.ylabel('Scores')
plt.title('Gradient boosting Regressor RMSE scores for different number of estimators')
regressor=pd.DataFrame({'Linear regression':[RMSE1],'Descicion Tree Regressor':[RMSE2],'Random Forest Regressor':[RMSE3_rf],'Gradient Boosting Regressor':[RMSE6],'K neighbours Regressor':[RMSE5]})
###Output
_____no_output_____
###Markdown
RMSE of the different regressors, to see which has performed best
###Code
regressor
###Output
_____no_output_____
###Markdown
For comparison: Mean of the expected target value in test set
###Code
y.mean()
###Output
_____no_output_____
###Markdown
Mean is 68.635317
###Code
prediction=pd.DataFrame({'Test':y_test.overall_rating,'DecisionTree':y_pred2,'RandomForest':y_pred3_rf,'GradientBoosting':y_pred6})
prediction=prediction.reset_index(drop=True)
print(prediction.shape)
prediction.head(15)
sns.set_style('darkgrid')
plt.figure(1,figsize=(20,8))
sns.lineplot(x=X_test.potential,y=y_test['overall_rating'],color='red',label='Test data')
sns.lineplot(x=X_test.potential,y=y_pred3,color='green',label='Random Forest Regressor')
sns.lineplot(x=X_test.potential,y=y_pred2,color='blue',label='Decision tree regresssor Predicted data')
plt.title("Plotting Prediction data vs Test data to see deviation.")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
As can be seen above, the prediction is fairly accurate. With an RMSE of less than 0.9452, given a mean of 68.635, it is a fairly good prediction
###Code
###Output
_____no_output_____
|
old_experiments/02_ImageWang_ContrastLearning_20_kornia_80ep_best.ipynb
|
###Markdown
Image网 Submission `128x128` This contains a submission for the Image网 leaderboard in the `128x128` category.In this notebook we:1. Train on 1 pretext task: - Train a network to do image inpainting on Image网's `/train`, `/unsup` and `/val` images. 2. Train on 4 downstream tasks: - We load the pretext weights and train for `5` epochs. - We load the pretext weights and train for `20` epochs. - We load the pretext weights and train for `80` epochs. - We load the pretext weights and train for `200` epochs. Our leaderboard submissions are the accuracies we get on each of the downstream tasks.
###Code
import json
import torch
import numpy as np
from functools import partial
from fastai2.basics import *
from fastai2.vision.all import *
torch.cuda.set_device(3)
# Chosen parameters
lr=2e-2
sqrmom=0.99
mom=0.95
beta=0.
eps=1e-4
bs=64
sa=1
m = xresnet34
act_fn = Mish
pool = MaxPool
nc=20
source = untar_data(URLs.IMAGEWANG_160)
len(get_image_files(source/'unsup')), len(get_image_files(source/'train')), len(get_image_files(source/'val'))
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
m_part = partial(m, c_out=nc, act_cls=torch.nn.ReLU, sa=sa, pool=pool)
model_meta[m_part] = model_meta[xresnet34]
save_name = 'imagewang_contrast_kornia_80ep'
###Output
_____no_output_____
###Markdown
Pretext Task: Contrastive Learning
###Code
#export
from pytorch_metric_learning import losses
class XentLoss(losses.NTXentLoss):
def forward(self, output1, output2):
stacked = torch.cat((output1, output2), dim=0)
labels = torch.arange(output1.shape[0]).repeat(2)
return super().forward(stacked, labels, None)
class ContrastCallback(Callback):
run_before=Recorder
def __init__(self, size=256, aug_targ=None, aug_pos=None, temperature=0.1):
self.aug_targ = ifnone(aug_targ, get_aug_pipe(size))
self.aug_pos = ifnone(aug_pos, get_aug_pipe(size))
self.temperature = temperature
def update_size(self, size):
pipe_update_size(self.aug_targ, size)
pipe_update_size(self.aug_pos, size)
def begin_fit(self):
self.old_lf = self.learn.loss_func
self.old_met = self.learn.metrics
self.learn.metrics = []
self.learn.loss_func = losses.NTXentLoss(self.temperature)
def after_fit(self):
self.learn.loss_func = self.old_lf
self.learn.metrics = self.old_met
def begin_batch(self):
xb, = self.learn.xb
xb_targ = self.aug_targ(xb)
xb_pos = self.aug_pos(xb)
self.learn.xb = torch.cat((xb_targ, xb_pos), dim=0),
self.learn.yb = torch.arange(xb_targ.shape[0]).repeat(2),
#export
def pipe_update_size(pipe, size):
for tf in pipe.fs:
if isinstance(tf, RandomResizedCropGPU):
tf.size = size
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
folders = ['unsup', 'val'] if dogs_only else None
files = get_image_files(source, folders=folders)
tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=0.9)],
[parent_label, Categorize()]]
# dsets = Datasets(files, tfms=tfms, splits=GrandparentSplitter(train_name='unsup', valid_name='val')(files))
dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))
# batch_tfms = [IntToFloatTensor, *aug_transforms(p_lighting=1.0, max_lighting=0.9)]
batch_tfms = [IntToFloatTensor]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
size = 128
bs = 256
dbunch = get_dbunch(160, bs)
len(dbunch.train.dataset)
dbunch.show_batch()
# # xb = TensorImage(torch.randn(1, 3,128,128))
# afn_tfm, lght_tfm = aug_transforms(p_lighting=1.0, max_lighting=0.8, p_affine=1.0)
# # lght_tfm.split_idx = None
# xb.allclose(afn_tfm(xb)), xb.allclose(lght_tfm(xb, split_idx=0))
import kornia
#export
def get_aug_pipe(size, stats=None, s=.7):
stats = ifnone(stats, imagenet_stats)
rrc = kornia.augmentation.RandomResizedCrop((size,size), scale=(0.2, 1.0), ratio=(3/4, 4/3))
rhf = kornia.augmentation.RandomHorizontalFlip()
rcj = kornia.augmentation.ColorJitter(0.8*s, 0.8*s, 0.8*s, 0.2*s)
tfms = [rrc, rhf, rcj, Normalize.from_stats(*stats)]
pipe = Pipeline(tfms)
pipe.split_idx = 0
return pipe
aug = get_aug_pipe(size)
aug2 = get_aug_pipe(size)
cbs = ContrastCallback(size=size, aug_targ=aug, aug_pos=aug2, temperature=0.1)
xb,yb = dbunch.one_batch()
nrm = Normalize.from_stats(*imagenet_stats)
xb_dec = nrm.decodes(aug(xb))
show_images([xb_dec[0], xb[0]])
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[], loss_func=CrossEntropyLossFlat(), cbs=cbs, pretrained=False,
config={'custom_head':ch}
).to_fp16()
learn.unfreeze()
learn.fit_flat_cos(80, 2e-2, wd=1e-2, pct_start=0.5)
torch.save(learn.model[0].state_dict(), f'{save_name}.pth')
# learn.save(save_name)
###Output
_____no_output_____
###Markdown
Downstream Task: Image Classification
###Code
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
if dogs_only:
dog_categories = [f.name for f in (source/'val').ls()]
dog_train = get_image_files(source/'train', folders=dog_categories)
valid = get_image_files(source/'val')
files = dog_train + valid
splits = [range(len(dog_train)), range(len(dog_train), len(dog_train)+len(valid))]
else:
files = get_image_files(source)
splits = GrandparentSplitter(valid_name='val')(files)
item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
tfms = [[PILImage.create, ToTensor, *item_aug],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=splits)
batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
def do_train(size=128, bs=64, lr=1e-2, epochs=5, runs=5, dogs_only=False, save_name=None):
dbunch = get_dbunch(size, bs, dogs_only=dogs_only)
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, normalize=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
# metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
pretrained=False,
config={'custom_head':ch})
if save_name is not None:
state_dict = torch.load(f'{save_name}.pth')
learn.model[0].load_state_dict(state_dict)
# state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth')
# learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2)
###Output
_____no_output_____
###Markdown
5 Epochs
###Code
epochs = 5
runs = 1
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name)
###Output
Run: 0
###Markdown
20 Epochs
###Code
epochs = 20
runs = 1
# LATEST
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name)
###Output
Run: 0
###Markdown
80 epochs
###Code
epochs = 80
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
###Output
Run: 0
###Markdown
Accuracy: **62.18%** 200 epochs
###Code
epochs = 200
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
###Output
Run: 0
|
notebooks/DNNStattus4APIDUMP.ipynb
|
###Markdown
Table of Contents1 Gated CNN1.1 GCNN 1D Time Series1.2 GCNN 1D Residuals
###Code
#####
### Inicio de refatoracao usando principios de design patterns.
# TODO: research the TensorFlow design much more deeply so we can follow it closely.
from enum import Enum
class BlockTypes(Enum):
GCNN2D = 'gcnn2d'
SOFTMAX = 'softmax'
CROSSENTROPY = 'loss-crossentropy'
INPUT = 'signal_in'
REDUCEMEAN = 'reducemean'
class ArchBlocks:
def __add__(self, block):
pass
class Architecture(ArchBlocks):
'''
aa
'''
def __init__(self, **kwargs):
self.modules = {'0':[ArchBlocks.INPUT,0,0,[],[]]}
def __add__(self, archblock1):
pass
#self.modules = archblock1
def _block_specification(self,key):
pass
class Stattus4NeuralNetAPI:
def __init__(self, datatype, datasocket = False, **kwargs):
'''
Input
datatype: Audio, Pressure, Flow.
datasocket: if True tries to fetch data from a database and push output to database
pre-configured through kwargs, if config is not provided then tries to read from datafeeder or dump to local file-system.
Optional
num_of_channels: number of channels for Audio data, if not given get only the first audio and discard the others.
'''
self.datatype = datatype
self.datasocket = datasocket
def setup_hyperparameters(self, architectures):
'''
Input
architectures: Architecture object specifying archetypical structure to be investigated for its hyperparams.
Return a dictionary of the given
'''
pass
#######
#######
####
###
#
# FUTURE: Name/Var Scope handler. This class/template must be used to implement the naming logic
# according to the architecture block connections. E.g. if a block is identical to the previous one in the
# sequence, it must be named as a deeper part of the same architecture block. If it is a ramification
# of the block below but has the same architecture, it must be named with another tag.
# When duplicating some block for architectural use, e.g. if you need to frame your data or features, each
# frame must have a coherent name-scope block, as is ordinarily done today for repeated name-scope blocks,
# but this must be done automatically for identically built blocks of the model, without the need to code it explicitly.
# Names must be given as unique IDs according to the input-output architecture connection; if a block has the same IO as the previous block in the sequence,
# then each identical Input-ArchBlock-Output must have the same name, with the appropriate tag for deepness in the sequence of blocks.
# Automatic update of the name scopes according to the architecture.
# Don't change identical sequential IO name scopes.
## Unused building-blocks, will add in the API in the future, after a throughful refatoration.
#
self.architectures.gcnn1d = self._gcnn1d
self.architectures.residualgcnn1d = self._residualgcnn1d
def _gcnn1d(self, **kwargs):
try:
channels_out = kwargs['channels_out']
filter_size = kwargs['filter_size']
except Exception:
sys.exit('Parameters Not Defined Error')
signal_in = self(**kwargs)
# postfix = self.get_namepostfix('gccn1d',**kwargs)
with self.graph.as_default():
with tf.variable_scope('gccn1d'):#+postfix):
with self.graph.device(_dev_selector(arg1='foo')):
conv_linear = tf.keras.layers.Conv1D( channels_out, filter_size, padding='causal', name='conv_linear', use_bias=True)(signal_in)
with self.graph.device(_dev_selector(arg1='foo')):
conv_gate = tf.sigmoid(tf.keras.layers.Conv1D( channels_out, filter_size, padding='causal', name='conv', use_bias=True )(signal_in),name='conv_sigmoid')
with self.graph.device(_dev_selector(arg1='foo')):
gated_convolutions = tf.multiply(conv_linear,conv_gate,name='gated_convolutions')
def _residualgcnn1d(self, **kwargs):
try:
channels_out = kwargs['channels_out']
filter_size = kwargs['filter_size']
except KeyError:
sys.exit('Parameters Not Defined Error')
signal_in = self(**kwargs)
####
## Keras convolution layers are classes, so they don't behave like plain functions but return Tensors; use their methods to query the filter and bias variables
# postfix = self.get_namepostfix('gccn1d',**kwargs)
with self.graph.as_default():
with tf.variable_scope('residualgccn1d'):#+postfix):
with self.graph.device(_dev_selector(arg1='foo')):
conv_linear = tf.keras.layers.Conv1D( channels_out, filter_size, padding='causal', name='conv_linear', use_bias=True)(signal_in)
with self.graph.device(_dev_selector(arg1='foo')):
conv_gate = tf.sigmoid(tf.keras.layers.Conv1D( channels_out, filter_size, padding='causal', name='conv', use_bias=True )(signal_in),name='conv_sigmoid')
with self.graph.device(_dev_selector(arg1='foo')):
gated_convolutions = tf.multiply(conv_linear,conv_gate,name='gated_convolutions')
# Input channels must be the same size of the convolution channels output (ie number of filters applied)
with self.graph.device(_dev_selector(arg1='foo')):
residual = tf.add(gated_convolutions,signal_in,name='residual')
def _save_trainable_vars(self, blockname):
with self.graph.as_default():
saver = tf.train.Saver(var_list=self.graph.get_collection('trainable_variables') )
tf.global_variables_initializer()
sess = tf.Session(graph=self.graph)
sess.run(self.graph.get_operations()[-1])
saver.save(sess, os.getcwd()+'/'+blockname)
#saver.export_meta_graph(filename=blockname+'.constructor', collection_list='trainable_variables', export_scope=None, strip_default_attrs=False)
self.arch_blocks[blockname] = blockname
def define_block(self,blockname):
self._save_trainable_vars(blockname)
graph = tf.Graph()
with graph.as_default():
sess = tf.Session(graph=graph)
new_saver = tf.train.import_meta_graph(os.getcwd()+'/'+self.arch_blocks[blockname]+'.meta', import_scope = blockname)
new_saver.restore(sess, os.getcwd()+'/'+self.arch_blocks[blockname])
####
## Redefine the signal_in of the blocks used to build this block via the namescope dict
## so that run_cgraph can find the inputs
#
####
## Provisional signal_in handling; create a dedicated module just to deal with signals
#
self.graph = graph
self.signal_in = []
self.signal_in.append(self.graph.get_tensor_by_name(blockname+'/signal_in:0'))
# Build block from name in the top level of the blocks, this is a previous version that take names inside namescope dict and build new block
def from_block(self,archblockname,tag = ""):
with self.graph.as_default():
sess = tf.Session(graph=self.graph)
new_saver = tf.train.import_meta_graph(os.getcwd()+'/'+self.arch_blocks[archblockname]+'.meta', import_scope = archblockname+tag)
new_saver.restore(sess, os.getcwd()+'/'+self.arch_blocks[archblockname])
self.signal_in.append(self.graph.get_tensor_by_name(archblockname+tag+'/signal_in:0'))
self.num_input -= 1
###Output
_____no_output_____
###Markdown
Gated CNN Gated CNN is a doubled CNN in which one of the convolved signals plays the role of opening/closing the network, giving an **Attention Mechanism** to the convolution, since that branch is activated by a sigmoid. The gradient does not vanish, because the product rule of differentiation applies and the gradient also flows through the linearly convolved part. GCNN 1D Time Series In the time-series version, since the desired learning is based on **past** events (you cannot assume access to future data), one has to make sure that the convolution is **causal**, that is\begin{align}y_{n}= \sum_{i} a_{i}x_{n-i}=\sum_{j} a_{n-j}x_{j}\end{align} Hence, if the filter has length k, the input x receives k-1 zeros of left padding.
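A quick standalone check of this padding rule (illustration only; the arrays and names here are hypothetical and independent of the build_graph call below):
###Code
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
a = np.array([0.5, 0.3, 0.2])                     # filter of length k = 3
k = len(a)

x_padded = np.concatenate([np.zeros(k - 1), x])   # k-1 zeros of left padding keep the convolution causal
y = np.array([np.dot(a, x_padded[n:n + k][::-1]) for n in range(len(x))])
print(y)                                          # y[0] depends only on x[0], y[1] on x[0..1], and so on
print(np.allclose(y, np.convolve(x, a)[:len(x)])) # matches the truncated full convolution
###Output
_____no_output_____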
###Code
#build graph
graph = build_graph( (None,l[0],channels), arch = 'gcnn1d',print_ops = True, new_graph=True, show_cgraph = True)
'''
obs: if you already have a graph, it is sometimes necessary to run reset_default_graph more than once to get a new graph
'''
###Output
_____no_output_____
###Markdown
GCNN 1D Residuals
###Code
#build graph
graph = build_graph( (None,l[0],channels), arch = 'residual gcnn1d',print_ops = True,new_graph=True, show_cgraph = True)
###Output
_____no_output_____
###Markdown
Hyper Param Protocol (hpp) Two interfaces implementing the following:A - Space x Selection Algorithm x Architecture- Implementations of this interface specify the search space for the Deep Learning model, including the space size, i.e. the number of points, and the space complexity, that is, objects mapping to the data dict (variability)/structure/number of sources/unique ids (e.g. location). The Space Complex objects should be implemented through their own interface.B - Space Complex- Central to space complexity are the specifications already given above. Once one has implemented the specification, one can implement the Interface itself.
###Code
# HPP Definitions. First DISS implementation by the already made architecture, brute force selection Algol,
# and Space == Space Size.
# TODO - refatorar.
# TODO - full implementation
class Hpp:
'''
Hyperparam tuning protocol.
Input
- space: Space object giving the size and the complexity of the space
- selection_algorithm: function running algorithm that search through space using specific evaluation (also implemented in the algorithm)
- architecture: Architecture object specifying archetipical architecture to be tuned by searching in space.
'''
def __init__(self,space = {},selection_algorithm = lambda x: 'foo',architecture='gcnn2d'):
self.space_size = space.get_size()
self.data_structure = space.get_complex().data.get_structure()
self.source_id = space.get_complex().get_ids()
self.data_dict = space.get_complex().data.get_dict()
self.algorithm = selection_algorithm
self.architecture = architecture
def _parse_config(config_list):
'''
Receives a config and build from dict to perform examples:
config0 = [ {"0_frames": lambda : 4 1 if 10 == 1 else 0,"0_0": lambda :(10,30,40,2), "0_1": lambda :(10,2,2,8), "0_2": lambda :(10,2,1,16), "0_3": lambda :(10,4,1,32), "0_4": lambda :(10,8,1,64), "0_5": lambda :(10,16,1,128), "0_6": lambda :(10,13,1,256)},{"1_labels": lambda :2},{"2_null": lambda :'a'},{"3_labels": lambda :2,"3_learningrate": lambda :0.005}]
config1 = [ {"0_frames":4,"0_0":(10,30,40,2)}, {"1_1":(10,2,2,8), "1_2":(10,2,1,16), "1_3":(10,4,1,32), "1_4":(10,8,1,64), "1_5":(10,8,1,64)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config2 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,2,1,16), "0_3":(10,4,1,32), "0_4":(10,8,1,64) },{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config3 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,2,1,16), "0_3":(10,4,1,32) },{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config4 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,2,1,16)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config5 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config6 = [{"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,6,1,16), "0_3":(10,6,1,32), "0_4":(10,6,1,64), "0_5":(10,10,1,128), "0_6":(10,16,1,256)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config7 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,6,1,16), "0_3":(10,6,1,32), "0_4":(10,6,1,64), "0_5":(10,10,1,128)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config8 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,6,1,16), "0_3":(10,6,1,32), "0_4":(10,6,1,64)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config9 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,6,1,16), "0_3":(10,6,1,32)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config10 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8), "0_2":(10,6,1,16)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
config11 = [ {"0_frames":4,"0_0":(10,30,40,2), "0_1":(10,2,2,8)},{"1_labels":2},{"2_null":a},{"3_labels":2,"3_learningrate":0.005}]
'''
overall_config = []
overall_attr = []
overall_deepness = []
for block_config in config_list:
layer_configs = []
layer_attr = []
layer_deep = 0
# attributes
bc = block_config.items()
for item in bc:
if item[0].split('_')[1].isalnum():
layer_configs.append(item)
layer_deep += 1
else:
layer_attr.append(item)
layer_configs.sort()
layer_attr.sort()
overall_deepness.append(layer_deep)
overall_config.append( dict(layer_configs) )
overall_attr.append( dict(layer_attr) )
return overall_attr,overall_config,overall_deepness
def _is_vec_embed():
pass
def run(self):
'''
try:
assert type(tf.get_default_graph()) == type(tf.Graph())
graph = tf.get_default_graph()
except AssertionError:
graph = tf.Graph()
'''
tf.reset_default_graph()
graph = tf.Graph()
s = 0
for config in range(self.algorithm.size):
s += 1
layer_config_list = self.algorithm()
overall_attr,overall_config,overall_deepness = _parse_config(layer_config_list)
with tf.variable_scope('Config {}'.format(s)):
#TODO Assert that it is a block 0 and its attributes (for this case only nframe)
blockattr = overall_attr.pop(0)
nframe = blockattr.values()[0]
config = overall_config.pop(0)
for blocks in range(len(self.architecture)):
graph = build_graph_module(graph, scope_tensor_name=op.name, arch = self.architecture[blocks], print_ops = True, name_scope = True, show_cgraph = True, filter_size=(k[1],k[2]), channels_out = k[3], deepness = '_d1',num_labels=2,learning_rate=0.005)
op = graph.get_operations()[-1]
try:
blockattr = overall_attr[0]
if blockattr.keys()[0].split('_')[0] == blocks:
blockattr = overall_attr.pop(0)
for attr,v in blockattr:
if attr.split('_')[1] == 'frames':
nframes
except:
pass
for k in layer_config[2:]:
'''
names = []
for op in graph.get_operations():
if op.name.split('/')[1] == 'transpose_1' and op.name.split('/')[0][0:7] == 'softmax':
names.append(op.name)
graph = build_graph_module(graph, scope_tensor_name=names, arch = 'reducemean',print_ops = True, name_scope = True, show_cgraph = True, filter_size=(2,1), channels_out = 64, deepness = '',num_labels=4,learning_rate=0.005, verbose=True)
op = graph.get_operations()[-1]
graph = build_graph_module(graph, scope_tensor_name=op.name, arch = 'loss-crossentropy',print_ops = True, name_scope = True, show_cgraph = True, filter_size=(2,1), channels_out = 64, deepness = '',num_labels=4,learning_rate=0.1, batch=40, verbose=True)
layer_config = self.algorithm()
with tf.variable_scope('secondparam'):
graph = build_graph( (layer_config[0][0],layer_config[0][1],layer_config[0][2],layer_config[0][3]), arch = self.architecture,print_ops = True,new_graph=False, show_cgraph = True, filter_size=(layer_config[1][1],layer_config[1][2]), channels_out = layer_config[1][3])
op = graph.get_operations()[-1]
for k in layer_config[2:]:
graph = build_graph_module(graph, scope_tensor_name=op.name, arch = self.architecture, print_ops = True, name_scope = True, show_cgraph = True, filter_size=(k[1],k[2]), channels_out = k[3], deepness = '_d1',num_labels=2,learning_rate=0.005)
op = graph.get_operations()[-1]
'''
class Space:
'''
Space is built upon space size and complexity.
Input:
- size: an integer, the number of configurations to look for the model.
- spacecomplex: Unique Ids, a hashable type or string specifing source e.g. location. A SigmaComplex object specifying meta-data, i.e. data configurations and
possibly other relevant keys to hyper-param searching such features, diversity mensures and so on.
SigmaComplex must be an interface to be implemented according to the data specification, possibly separating mutable from immutable characters.
'''
def __init__(self, size, spacecomplex):
self.size = size
self.sigmacomplex = spacecomplex
def get_size(self):
return self.size
def get_complex(self):
return self.sigmacomplex
def get_ids(self):
return self.sigmacomplex.get_ids()
def get_sourcenum(self):
return self.sigmacomplex.data.sourcenum
def get_dict(self):
return self.sigmacomplex.data.get_dict()
def get_structure(self):
return self.sigmacomplex.data.get_structure()
class SigmaComplex:
'''
Unique Ids, a hashable type or string specifing source e.g. location. A SigmaComplex object specifying meta-data, i.e. data configurations and
possibly other relevant keys to hyper-param searching such features, diversity mensures and so on.
SigmaComplex must be an interface to be implemented according to the data specification, possibly separating mutable from immutable characters.
Data specification and ids will correspond to the empirical collected data according to the geo-located point in a device that has known inner workings
according to the laws of physics.
'''
def __init__(self, ids, data):
self.ids = ids
self.data = data
def get_ids(self):
return self.ids
def get_dict(self):
return self.data.get_dict()
def get_structure(self):
return self.data.get_structure()
class DataMeta:
'''
Data MaTter specification according to device collector, its inner workings (laws of physics), and other relevant keys.
'''
def __init__(self):
self.structure = "image"
self.data_dict = { "labels" : ["cv","sv"] , "features" : ["spect","framed"], "channel_num": 1 }
def get_dict(self):
return self.data_dict
def get_structure(self):
return self.structure
# Example of 'foo'
spacecomplex = SigmaComplex(["Chala-head-chala"] , DataMeta())
space = Space( 5, spacecomplex)
hyperparam = Hpp(space, lambda x: '飛輪功', "gcnn2d")
# Build search space data structure. To be added to SigmaComplex class/interface/template
# "softmax","reducemean","loss-crossentropy"
config0 = [ {"0_frames":4,"0_0":(10,30,2,2), "0_1":(10,2,2,8), "0_2":(10,2,1,16), "0_3":(10,4,1,32), "0_4":(10,8,1,64), "0_5":(10,16,1,128)}, {"1_labels":2} ]
config1 = [ {"0_frames":4,"0_0":(10,30,2,2), "0_1":(10,2,2,8), "0_2":(10,2,1,16), "0_3":(10,4,1,32)} ]
config2 = [ {"0_frames":4,"0_0":(10,30,2,2), "0_1":(10,2,2,8)} ]
config3 = [ {"0_frames":4,"0_0":(10,30,2,2), "0_1":(10,4,2,8), "0_2":(10,4,1,16), "0_3":(10,4,1,32), "0_4":(10,4,1,64), "0_5":(10,7,1,128), "0_6":(10,7,1,256), "0_7":(10,7,1,512)}, {"1_labels":2}, ]
search_space = ( config0, config1)
# hash( '''( ( {"0_sld":4},{"0_0":(10,30.0,40.0)}),({"1_sld":3},{"1_0":(10,30.0,40.0,40.0)} ) )''' )
# Example exhaustive search (河南科技大学)
# you can use yield if you want to
class search(object):
def __init__(self, listconfig):
self.size = len(listconfig)
self.num = 0
self.config = listconfig
def __iter__(self):
return self
# Python 3 compatibility
def __next__(self):
return self.next()
def next(self):
if self.num < self.size:
cur, self.num = self.config[self.num], self.num+1
return cur
else:
raise StopIteration()
def __call__(self):
return self.next()
spacecomplex = SigmaComplex(["Chala-head-chala"] , DataMeta())
space = Space( 5, spacecomplex)
#TODO implementation of the Space->SigmaComplex->DataMeta()
hyperparam = Hpp(space, search( search_space ), ["gcnn2d","softmax","reducemean","loss-crossentropy"])
###
## Danijar option to define scope with decorators
#
def doublewrap(function):
"""
A decorator decorator, allowing to use the decorator to be used without
parentheses if no arguments are provided. All arguments must be optional.
"""
@functools.wraps(function)
def decorator(*args, **kwargs):
if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
return function(args[0])
else:
return lambda wrapee: function(wrapee, *args, **kwargs)
return decorator
@doublewrap
def define_scope(function, scope=None, *args, **kwargs):
"""
A decorator for functions that define TensorFlow operations. The wrapped
function will only be executed once. Subsequent calls to it will directly
return the result so that operations are added to the graph only once.
The operations added by the function live within a tf.variable_scope(). If
this decorator is used with arguments, they will be forwarded to the
variable scope. The scope name defaults to the name of the wrapped
function.
"""
attribute = '_cache_' + function.__name__
name = scope or function.__name__
@property
@functools.wraps(function)
def decorator(self):
if not hasattr(self, attribute):
with tf.variable_scope(name, *args, **kwargs):
setattr(self, attribute, function(self))
return getattr(self, attribute)
return decorator
#####
####
##
#
def _calculate_lr_alpha(self, step):
self.lalpha = np.abs( (self.lossval[step] + self.lossval[step-1] + self.lossval[step-2] + self.lossval[step-3])/2.0 -self.lossval[step-4] -self.lossval[step-2])
print('l ',self.lalpha)
self.alpha = (self.lossval[step] - self.lossval[step-4])/self.lalpha
print('alpha ', self.alpha)
if len(self.logvarloss) >= 2:
if self.logvarloss[-1] - self.logvarloss[-2] < - 1.0 and self.logvarloss[-1] - self.logvarloss[-2] > - 2.5:
self.optimizer._lr = self.optimizer._lr + self.optimizer._lr/3.0
elif self.logvarloss[-1] - self.logvarloss[-2] > 1.0 and self.logvarloss[-1] - self.logvarloss[-2] < 2.5:
self.optimizer._lr = self.optimizer._lr - self.optimizer._lr/3.0
elif self.logvarloss[-1] - self.logvarloss[-2] < - 2.5:
self.optimizer._lr = self.optimizer._lr + self.optimizer._lr/2.0
elif self.logvarloss[-1] - self.logvarloss[-2] > 2.5:
self.optimizer._lr = self.optimizer._lr - self.optimizer._lr/2.0
elif self.logvarloss[-1] - self.logvarloss[-2] < 1.0 and self.logvarloss[-1] - self.logvarloss[-2] > - 1.0:
self.optimizer._lr = self.optimizer._lr + self.optimizer._lr/2.0
###Output
_____no_output_____
|
docs/datasources/SoFIFA.ipynb
|
###Markdown
SoFIFA
###Code
sofifa = sd.SoFIFA(leagues="ENG-Premier League", seasons=2021)
print(sofifa.__doc__)
###Output
/cw/dtaijupiter/NoCsBack/dtai/pieterr/Projects/soccerdata/soccerdata/_common.py:246: UserWarning: Season id "2021" is ambiguous: interpreting as "20-21"
warnings.warn(msg)
###Markdown
EA Sports FIFA player ratings
###Code
ratings = sofifa.read_ratings()
ratings.head()
###Output
_____no_output_____
###Markdown
SoFIFA
###Code
sofifa = sd.SoFIFA(leagues="ENG-Premier League", seasons=2021)
print(sofifa.__doc__)
###Output
/cw/dtaijupiter/NoCsBack/dtai/pieterr/Projects/soccerdata/soccerdata/_common.py:466: UserWarning: Season id "2021" is ambiguous: interpreting as "20-21"
warnings.warn(msg)
###Markdown
EA Sports FIFA player ratings
###Code
ratings = sofifa.read_ratings()
ratings.head()
###Output
_____no_output_____
|
notebooks/pycaret-diamond-linux-blade.ipynb
|
###Markdown
MLflow integration was introduced in version 2.0, so make sure you are running a recent enough version of pycaret
###Code
import pycaret
print(pycaret.__version__)
# set tracking uri
import mlflow
mlflow.set_tracking_uri(mlflow_uri)
# print(mlflow.get_tracking_uri())
# print(mlflow.get_artifact_uri())
from pycaret.datasets import get_data
dataset_name = 'diamond'
target_var_name = 'Price'
data = get_data(dataset_name)
from pycaret.regression import *
import os
username = os.getenv('uid')
s = setup(data, target = target_var_name, transform_target = True, log_experiment = True, experiment_name = f'pycaret-{dataset_name}-{username}-exp-rhel')
# compare all models
# for pycaret.regression sort default is R2
best = compare_models(sort='MAPE')
plot_model(best, plot = 'feature')
###Output
_____no_output_____
###Markdown
One can view the results at the MLflow tracking URI and load the artifact
###Code
mlflow_artifact_base_path = os.path.dirname(os.path.dirname(mlflow.get_artifact_uri()))
best_run_id = '862de9418a4f4c4d8d874a0ce033a2f7'
# print(f'{mlflow_artifact_base_path}/{best_run_id}/artifacts/model/model')
pipeline = load_model(f'{mlflow_artifact_base_path}/{best_run_id}/artifacts/model/model')
print(pipeline)
copy_data_without_target = data.copy().drop(target_var_name, axis=1)  # inplace=True would return None here
y_pred = predict_model(pipeline, data=copy_data_without_target)
y_pred.head()
###Output
_____no_output_____
|
covid19Reg3Tpot.ipynb
|
###Markdown
###Code
!pip install tpot
# pandas and numpy for data manipulation
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import Binarizer
from sklearn.decomposition import PCA
# Import the tpot regressor
from tpot import TPOTRegressor
import sklearn.model_selection as model_selection
from sklearn.model_selection import train_test_split
data=pd.read_csv("https://covid19.who.int/WHO-COVID-19-global-data.csv")
data.dtypes
data=data.replace(0, np.nan)
data = data.replace(r'^\s+$', np.nan, regex=True)  # treat whitespace-only strings as missing
data.sample(5)
data.columns
#data=data.loc[:,['Date_reported','New_cases', 'Cumulative_cases','Cumulative_deaths','New_deaths']]
#data=data.loc['R6':'R10', 'C':'E']
data2=data.iloc[:,[0,4,5,7,6]]
data=data2
data3=data.loc[(data!=0).any(1)]
data=data3
type(data)
train, test = train_test_split(data, test_size=0.25, random_state=42, shuffle=True)
train.shape
test.shape
data.columns
X_train=train.iloc[:, 1:4]
X_test=test.iloc[:, 1:4]
Y_train=train.iloc[:,-1]
Y_test=test.iloc[:,-1]
X_train.sample(5)
# pandas and numpy for data manipulation
import pandas as pd
import numpy as np
# Import the tpot regressor
from tpot import TPOTRegressor
# Convert to numpy arrays
training_features = np.array(X_train)
testing_features = np.array(X_test)
# Sklearn wants the labels as one-dimensional vectors
training_targets = np.array(Y_train).reshape((-1,))
testing_targets = np.array(Y_test).reshape((-1,))
# Create a tpot object with a few parameters
tpot = TPOTRegressor(scoring = 'neg_mean_absolute_error',
max_time_mins = 120,
n_jobs = -1,
verbosity = 2,
cv = 5)
# Fit the tpot model on the training data
tpot.fit(training_features, training_targets)
###Output
_____no_output_____
|
manuscript_code/classification_JAMES/trainingApproach_climatedata_v2.26.ipynb
|
###Markdown
Exploring Abstention Lossauthor: Elizabeth A. Barnes, Randal J. Barnesdate: January 15, 2021, 0738MST* based on Thulasidasan, S., T. Bhattacharya, J. Bilmes, G. Chennupati, and J. Mohd-Yusof, 2019: Combating Label Noise in Deep Learning Using Abstention. arXiv [stat.ML],.* thesis: https://digital.lib.washington.edu/researchworks/handle/1773/45781* code base is here: https://github.com/thulas/dac-label-noise/blob/master/dac_loss.py
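Since the abstentionloss module imported below is not shown in this notebook, the following is a hypothetical standalone sketch of the per-sample DAC loss as written in the Thulasidasan et al. (2019) reference above; the module used here may differ in details such as the spin-up epochs and the alpha updater.
###Code
import numpy as np

# toy batch: 3 samples, 2 real classes plus 1 abstention output (last column)
logits = np.array([[2.0, 0.1, -1.0], [0.2, 1.5, 0.0], [0.0, 0.0, 3.0]])
y_true = np.array([0, 1, 0])
alpha = 0.5

z = logits - logits.max(axis=1, keepdims=True)          # numerically stable softmax over K+1 outputs
p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
p_abstain = p[:, -1]                                     # probability mass on the abstention class
p_true = p[np.arange(len(y_true)), y_true]               # probability of the (possibly noisy) label
# (1 - p_abstain) times the cross-entropy on the renormalized class probabilities,
# plus an alpha-weighted penalty for abstaining
per_sample = (1.0 - p_abstain) * (-np.log(p_true / (1.0 - p_abstain))) + alpha * (-np.log(1.0 - p_abstain))
print(per_sample.mean())
###Output
_____no_output_____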
###Code
import numpy as np
import time
import sys
import collections
import os
import glob
import pickle
import sklearn
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
import tensorflow as tf
from tensorflow.keras import optimizers
import matplotlib as mpl
import matplotlib.pyplot as plt
import cartopy as ct
import cartopy.crs as ccrs
import abstentionloss
import metrics
import network
import plots
import climatedata
import experiments
import imp
imp.reload(experiments)
imp.reload(abstentionloss)
imp.reload(plots)
imp.reload(climatedata)
import palettable
import pprint
mpl.rcParams['figure.facecolor'] = 'white'
mpl.rcParams['figure.dpi']= 150
dpiFig = 300.
np.warnings.filterwarnings('ignore', category=np.VisibleDeprecationWarning)
tf.print(f"sys.version = {sys.version}", output_stream=sys.stdout)
tf.print(f"tf.version.VERSION = {tf.version.VERSION}", output_stream=sys.stdout)
#--------------------------------------------------------
DATA_NAME = 'tranquilFOO23'#'tranquilFOO0'
SCRIPT_NAME = 'trainingApproach_climatedata_v2.26_cmdA.py'
checkpointDir = '/Users/eabarnes/Data/2021/abstention_loss/checkpoints/'
EXPINFO = experiments.define_experiments(DATA_NAME)
pprint.pprint(EXPINFO, width=60)
#--------------------------------------------------------
NP_SEED = 99
np.random.seed(NP_SEED)
tf.random.set_seed(99)
###Output
_____no_output_____
###Markdown
Internal functions
###Code
def in_ipynb():
try:
from IPython import get_ipython
if 'IPKernelApp' not in get_ipython().config: # pragma: no cover
mpl.use('Agg')
return False
except:
mpl.use('Agg')
return False
return True
def get_exp_name(loss, data_name, extra_text = ''):
# set experiment name
if loss == 'DNN':
EXP_NAME = (
data_name
+ '_DNN'
+ '_prNoise' + str(PR_NOISE)
+ '_networkSeed' + str(NETWORK_SEED)
+ '_npSeed' + str(NP_SEED)
)
else:
EXP_NAME = (
data_name
+ '_' + loss
+ '_' + UPDATER
+ '_abstSetpoint' + str(setpoint)
+ '_prNoise' + str(PR_NOISE)
+ '_networkSeed' + str(NETWORK_SEED)
+ '_npSeed' + str(NP_SEED)
)
return EXP_NAME + extra_text
def make_model(loss_str = 'DNN', updater_str='Colorado', setpoint=.5, spinup_epochs=10, nupd=10):
# Define and train the model
tf.keras.backend.clear_session()
if(loss_str == 'DNN'):
model = network.defineNN(hiddens, input_shape=X_train_std.shape[1], output_shape=NLABEL, ridge_penalty=RIDGE, act_fun='relu', network_seed=NETWORK_SEED)
loss_function = tf.keras.losses.CategoricalCrossentropy()
model.compile(
optimizer=optimizers.SGD(lr=LR_INIT, momentum=0.9, nesterov=True),
loss = loss_function,
metrics=[
metrics.AbstentionFraction(NLABEL),
metrics.PredictionAccuracy(NLABEL)
]
)
else:
model = network.defineNN(hiddens, input_shape=X_train_std.shape[1], output_shape=NLABEL+1, ridge_penalty=RIDGE, act_fun='relu', network_seed=NETWORK_SEED)
updater = getattr(abstentionloss, updater_str)(setpoint=setpoint,
alpha_init=.5,
length=nupd)
loss_function = getattr(abstentionloss, loss_str)(updater=updater,
spinup_epochs=spinup_epochs)
model.compile(
optimizer=optimizers.SGD(lr=LR_INIT, momentum=0.9, nesterov=True),
loss = loss_function,
metrics=[
alpha_value,
metrics.AbstentionFraction(NLABEL),
metrics.PredictionLoss(NLABEL),
metrics.PredictionAccuracy(NLABEL)
]
)
# model.summary()
return model, loss_function
###Output
_____no_output_____
###Markdown
Load the data
###Code
# load the data
if 'SSTrand' not in globals():
try:
SIMPLE_DATA = EXPINFO['simple_data']
except KeyError:
SIMPLE_DATA = False
try:
REGION_NAME = EXPINFO['foo_region']
except KeyError:
REGION_NAME = 'ENSO'
if(SIMPLE_DATA==True):
SSTrand, y, lat, lon = climatedata.load_simpledata(size='15x60')
elif(SIMPLE_DATA==False):
SSTrand, y, lat, lon = climatedata.load_data()
else:
SSTrand, y, lat, lon = climatedata.load_simpledata(size=SIMPLE_DATA)
lat = np.squeeze(lat)
lon = np.squeeze(lon)
print('SST shape = ' + str(np.shape(SSTrand)))
# define the ENSO region
reg_lats, reg_lons = climatedata.get_region(region_name = REGION_NAME)
# plot the data
cmap = palettable.cartocolors.diverging.Geyser_7.mpl_colormap
if in_ipynb():
plt.figure(figsize=(12,2.73*2))
mapProj = ct.crs.EqualEarth(central_longitude = 0.)
ax = plt.subplot(1,2,1,projection=mapProj)
cb, image = plots.drawOnGlobe(ax,
mapProj,
SSTrand[20,:,:],
np.squeeze(lat),
np.squeeze(lon),
cmap = cmap,
vmin = -3,
vmax=3,
cbarBool=True,
fastBool=True,
extent='both'
)
plt.plot([reg_lons[0], reg_lons[0],reg_lons[1],reg_lons[1],reg_lons[0]], [reg_lats[0], reg_lats[1], reg_lats[1], reg_lats[0],reg_lats[0]],
color='white', linestyle='--',
transform=ccrs.PlateCarree(),
)
plt.show()
imp.reload(climatedata)
np.random.seed(NP_SEED)
NLABEL = EXPINFO['numClasses']
NSAMPLES = EXPINFO['nSamples']
PR_NOISE = EXPINFO['prNoise']
CUTOFF = EXPINFO['cutoff']
UNDERSAMPLE = EXPINFO['undersample']
#----------------------------
X, y_cat, tranquil, corrupt, y_perc = climatedata.add_noise(data_name=DATA_NAME,
X=SSTrand[:NSAMPLES],
y=y[:NSAMPLES],
lat=lat,
lon=lon,
pr_noise=PR_NOISE,
nlabel=NLABEL,
cutoff=CUTOFF,
region_name=REGION_NAME,
)
data_train, data_val, data_test = climatedata.split_data(X, y_cat, tranquil, corrupt)
X_train, y_train, tr_train, cr_train = data_train
X_val, y_val, tr_val, cr_val = data_val
print('Train Shape = ' + str(np.shape(X_train)))
print('Validation Shape = ' + str(np.shape(X_val)))
# undersample the data
if UNDERSAMPLE:
print('----Training----')
X_train, y_train, tr_train = climatedata.undersample(X_train, y_train, tr_train) # training data
print('total samples = ' + str(np.shape(X_train)[0]))
print('----Validation----')
X_val, y_val, tr_val = climatedata.undersample(X_val, y_val, tr_val) # validation data
print('total samples = ' + str(np.shape(X_val)[0]))
# process data for training
X_train_std, onehotlabels, X_val_std, onehotlabels_val, xmean, xstd = climatedata.preprocess_data(X_train, y_train, X_val, y_val, NLABEL)
if in_ipynb():
plt.figure(figsize=(6*1.5,3*1.5))
plt.subplot(2,2,1)
plt.hist(y_train,np.arange(0,NLABEL+1))
plt.xlabel('labels')
plt.title('all')
plt.subplot(2,2,4)
plt.hist(y_train[cr_train==1],np.arange(0,NLABEL+1))
plt.xlabel('class')
plt.title('corrupted labels')
plt.subplot(2,2,3)
plt.hist(y_train[tr_train==1],np.arange(0,NLABEL+1))
plt.xlabel('class')
plt.title('tranquil labels')
plt.subplot(2,2,2)
plt.hist(y_train[tr_train==0],np.arange(0,NLABEL+1))
plt.xlabel('class')
plt.title('not tranquil')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Train the model
###Code
def alpha_value(y_true,y_pred):
return loss_function.updater.alpha
def scheduler(epoch, lr):
if epoch < lr_epoch_bound:
return lr
else:
return LR_INIT/2.#lr*tf.math.exp(-0.1)
class EarlyStoppingDAC(tf.keras.callbacks.Callback):
"""Stop training when the loss is at its min, i.e. the loss stops decreasing.
Arguments:
patience: Number of epochs to wait after min has been hit. After this
number of no improvement, training stops.
"""
def __init__(self, patience=0):
super(EarlyStoppingDAC, self).__init__()
self.patience = patience
# best_weights to store the weights at which the minimum loss occurs.
self.best_weights = None
def on_train_begin(self, logs=None):
# The number of epoch it has waited when loss is no longer minimum.
self.wait = 0
# The epoch the training stops at.
self.stopped_epoch = 0
# Initialize the best as zero.
self.best = 0.
self.best_epoch = np.Inf
# initialize best_weights to non-trained model
self.best_weights = self.model.get_weights()
def on_epoch_end(self, epoch, logs=None):
current = logs.get("val_prediction_accuracy")
if np.greater(current, self.best):
abstention_error = np.abs(logs.get("val_abstention_fraction") - setpoint)
if np.less(abstention_error,.1):
self.best = current
self.wait = 0
# Record the best weights if current results is better (greater).
self.best_weights = self.model.get_weights()
self.best_epoch = epoch
else:
self.wait += 1
if self.wait >= self.patience:
self.stopped_epoch = epoch
self.model.stop_training = True
print("Restoring model weights from the end of the best epoch.")
self.model.set_weights(self.best_weights)
def on_train_end(self, logs=None):
if self.stopped_epoch > 0:
print("Early stopping, setting to best_epoch = " + str(self.best_epoch + 1))
LOSS = EXPINFO['loss']
UPDATER = EXPINFO['updater']
REWRITE = False
SAVE_HISTORY = True
EXTRA_TEXT = ''
#---------------------
# Set parameters
NUPD = EXPINFO['nupd']
hiddens = EXPINFO['hiddens']
SPINUP_EPOCHS = EXPINFO['spinup']
BATCH_SIZE = EXPINFO['batch_size']
LR_INIT = EXPINFO['lr_init']
N_EPOCHS = 200
lr_epoch_bound = 10000
RIDGE = 0.
#---------------------
approach_dic = {'DNN':'',
'DAC':'',
# 'DNN-DNN':'_postDNN-DNN',
# 'DAC-DNN':'_postDAC-DNN',
'ORACLE':'_oracle',
# 'SELENE':'_selene'
}
abstain_setpoint = np.around(np.arange(0., 1., .1), 3)
seed_vector = np.arange(0,50)
if in_ipynb():
NETWORK_SEED_LIST = (0,)
else:
NETWORK_SEED_LIST = (int(sys.argv[-1]),)
if(NETWORK_SEED_LIST[0]>np.max(seed_vector)):
sys.exit()
for NETWORK_SEED in NETWORK_SEED_LIST:
for setpoint in abstain_setpoint:
for app in approach_dic.keys():
# skipping rules----
if(setpoint==0):
if((app != 'DNN') and (app != 'ORACLE') and (app != 'SELENE')):
continue
else:
if((app=='DNN') or app=='ORACLE' or app=='SELENE'):
continue
#-------------------
if((app=='DNN') or (app=='ORACLE' or app=='SELENE')):
EXP_NAME = get_exp_name(loss = 'DNN', data_name=DATA_NAME, extra_text=approach_dic[app])
elif(app=='DAC'):
EXP_NAME = get_exp_name(loss = LOSS, data_name=DATA_NAME, extra_text = approach_dic[app])
elif(app=='DNN-DNN' or app=='DAC-DNN'):
EXP_NAME = get_exp_name(loss = 'DNN', data_name=DATA_NAME, extra_text=approach_dic[app])
i = EXP_NAME.find('prNoise')
EXP_NAME = EXP_NAME[:i] + 'abstSetpoint' + str(setpoint) + '_' + EXP_NAME[i:]
else:
raise ValueError('no such approach')
model_name = 'saved_models/model_' + EXP_NAME
if(os.path.exists((model_name + '.h5').format(N_EPOCHS)) and REWRITE==False):
continue
else:
print(EXP_NAME)
#-------------------------------
# Determine indices to grab for training of the different approaches
if((app=='DNN') or (app=='DAC')):
i_train = np.arange(0,np.shape(onehotlabels)[0])
i_val = np.arange(0,np.shape(onehotlabels_val)[0])
elif(app=='ORACLE'):
i_train = np.where(cr_train==0)[0]
i_val = np.where(cr_val==0)[0]
elif(app=='SELENE'):
i_train = np.where(tr_train==1)[0]
i_val = np.where(tr_val==1)[0]
elif(app=='DNN-DNN'):
exp_name_0 = get_exp_name(loss = 'DNN', data_name=DATA_NAME, extra_text='')
model_name_0 = 'saved_models/model_' + exp_name_0 + '.h5'
model0, __ = make_model(loss_str = 'DNN')
model0.load_weights(model_name_0)
y_pred_train_0 = model0.predict(X_train_std)
y_pred_val_0 = model0.predict(X_val_std)
max_logits = np.max(y_pred_train_0,axis=-1)
i_train = np.where(max_logits >= np.percentile(max_logits, 100*setpoint))[0]
max_logits = np.max(y_pred_val_0,axis=-1)
i_val = np.where(max_logits >= np.percentile(max_logits, 100*setpoint))[0]
elif(app=='DAC-DNN'):
exp_name_0 = get_exp_name(loss = LOSS, data_name=DATA_NAME, extra_text='')
model_name_0 = 'saved_models/model_' + exp_name_0 + '.h5'
model0, __ = make_model(loss_str = LOSS)
model0.load_weights(model_name_0)
y_pred_train_0 = model0.predict(X_train_std)
y_pred_val_0 = model0.predict(X_val_std)
i_train = np.where(np.argmax(y_pred_train_0,axis=-1) != NLABEL)[0]
i_val = np.where(np.argmax(y_pred_val_0,axis=-1) != NLABEL)[0]
else:
raise ValueError('no such app')
#-------------------------------
# Get the model
tf.keras.backend.clear_session()
# callbacks
lr_callback = tf.keras.callbacks.LearningRateScheduler(scheduler,verbose=0)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath = checkpointDir + 'model_' + EXP_NAME + '_epoch{epoch:03d}.h5',
verbose=0,
save_weights_only=True,
)
# define the model and loss function
if(app=='DAC'):
es_dac_callback = EarlyStoppingDAC(patience=30)
model, loss_function = make_model(loss_str = LOSS,
updater_str=UPDATER,
setpoint=setpoint,
spinup_epochs=SPINUP_EPOCHS,
nupd=NUPD)
callbacks = [abstentionloss.AlphaUpdaterCallback(), lr_callback, cp_callback, es_dac_callback]
else:
es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_prediction_accuracy', patience=30, mode='max', restore_best_weights=True, verbose=1)
model, loss_function = make_model(loss_str = 'DNN')
callbacks = [lr_callback, cp_callback, es_callback]
#-------------------------------
# Remake onehotencoding
hotlabels = onehotlabels[:,:model.output_shape[-1]] # strip off abstention class if using the DNN
hotlabels_val = onehotlabels_val[:,:model.output_shape[-1]] # strip off abstention class if using the DNN
#-------------------------------
# Train the model
start_time = time.time()
try:
history = model.fit(
X_train_std[i_train],
hotlabels[i_train],
validation_data=(X_val_std[i_val], hotlabels_val[i_val]),
batch_size=BATCH_SIZE,
epochs=N_EPOCHS,
shuffle=True,
verbose=0,
callbacks=callbacks
)
if(SAVE_HISTORY):
# save history data
history_dict = model.history.history
history_file = 'saved_models/history_' + EXP_NAME + '.pickle'
with open(history_file, 'wb') as handle:
pickle.dump(history_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
except ValueError:
continue
stop_time = time.time()
tf.print(f"Elapsed time during fit = {stop_time - start_time:.2f} seconds\n")
model.save_weights(model_name + '.h5')
for f in glob.glob(checkpointDir + 'model_' + EXP_NAME + "_epoch*.h5"):
os.remove(f)
#-------------------------------
# Display the results
exp_info=(LOSS, N_EPOCHS, setpoint, SPINUP_EPOCHS, hiddens, LR_INIT, lr_epoch_bound, BATCH_SIZE, NETWORK_SEED)
plots.plot_results(
EXP_NAME,
history,
exp_info=exp_info,
saveplot=True,
showplot=True
)
if in_ipynb()==False:
print('-----starting new kernel-----')
os.execv(sys.executable, ['python'] + ['/Users/eabarnes/GoogleDrive/WORK/RESEARCH/2021/abstention_networks/' + SCRIPT_NAME] + [str(NETWORK_SEED+1)])
print('-----exiting...')
sys.exit()
# (X_val_std[i_val], hotlabels_val[i_val])
# model.evaluate(x=X_val_std[i_val], y=hotlabels_val[i_val])
###Output
_____no_output_____
|
src/python/pylattice/classes/Track_test.ipynb
|
###Markdown
read in the data
###Code
inputParameters = pd.read_csv('../../_inputParameters.csv',names=['key','value'])
inputParameters.style.set_properties(**{'text-align': 'left'})
#inputParameters
def getInputParameter(inputParametersPandas,key):
#this locates the row, gets the result out of its array form and strips whitespaces away
return (((inputParametersPandas.loc[inputParametersPandas['key'] == key]).values)[0,1]).strip()
outputDataFolder = getInputParameter(inputParameters,"outputDataFolder")
ch0_outputDataFolder = getInputParameter(inputParameters,"ch0_outputDataFolder")
ch0_trackingCsvFilename = getInputParameter(inputParameters,"ch0_trackingCsvFilename")
ch1_outputDataFolder = getInputParameter(inputParameters,"ch1_outputDataFolder")
ch1_trackingCsvFilename = getInputParameter(inputParameters,"ch1_trackingCsvFilename")
#trackColor = getInputParameter(inputParameters,"trackColor")
#trackingBildFilename = getInputParameter(inputParameters,"trackingBildFilename")
#framerate_msec = float(getInputParameter(inputParameters,"framerate_msec"))
#movieLength = float(getInputParameter(inputParameters,"movieLength"))
#print(trackColor)
data0 = pd.read_csv(outputDataFolder+'/'+ch0_outputDataFolder+'/'+ch0_trackingCsvFilename,header=0)
data0.columns = ["trackId", "tracklength", "frameId", "particleId", "x", "y", "z", "A", "noIdea1", "noIdea2", "noIdea3", "noIdea4"]
trackIdsLong0 = (data0[data0['tracklength'] > 10]).drop_duplicates(subset='trackId')['trackId'].values
trackIds0 = data0.drop_duplicates(subset='trackId')['trackId'].values
#data0 = data0.drop_duplicates(subset='trackId')
data1 = pd.read_csv(outputDataFolder+'/'+ch1_outputDataFolder+'/'+ch1_trackingCsvFilename,header=0)
data1.columns = ["trackId", "tracklength", "frameId", "particleId", "x", "y", "z", "A", "noIdea1", "noIdea2", "noIdea3", "noIdea4"]
trackIdsLong1 = (data1[data1['tracklength'] > 10]).drop_duplicates(subset='trackId')['trackId'].values
trackIds1 = data1.drop_duplicates(subset='trackId')['trackId'].values
data1[0:5]
###Output
_____no_output_____
###Markdown
read in tracks from channel 0
###Code
trk.Track
import timeit
start_time = timeit.default_timer()
tracks0 = []
cm0 = []
len0 = []
Amean0 = []
maxDist0 = []
for i in range(0,len(trackIdsLong0)):
if(i %1000 == 0):
print(str(i)+"/"+str(len(trackIdsLong0)))
a = trk.Track(data0[data0['trackId'] == trackIdsLong0[i]])
tracks0.append(a)
cm0.append(a.cm)
len0.append(a.len)
Amean0.append(a.Amean)
maxDist0.append(a.maxDist)
elapsed = timeit.default_timer() - start_time
print('time elapse: '+str(np.round(elapsed,decimals=2))+'s')
cm0 = np.array(cm0)
#plt.figure(dpi=300)
ax = plt.axes(projection='3d')
ax.scatter3D(cm0[:,0], cm0[:,1], cm0[:,2],c=np.log(Amean0),cmap='plasma',s=1,alpha=0.3);
plt.xlabel('x [px]')
plt.ylabel('y [px]')
#plt.xlim(-200,800)
#plt.ylim(-200,800)
ax.set_zlim(-500,500)
###Output
_____no_output_____
###Markdown
read in tracks from channel 1
###Code
import timeit
start_time = timeit.default_timer()
tracks1 = []
cm1 = []
Amean1 = []
Afirst = []
maxDist1 = []
len1= []
for i in range(0,len(trackIdsLong1)):
if(i %1000 == 0):
print(str(i)+"/"+str(len(trackIdsLong1)))
a = trk.Track(data1[data1['trackId'] == trackIdsLong1[i]])
tracks1.append(a)
cm1.append(a.cm)
len1.append(a.len)
Amean1.append(a.Amean)
Afirst.append(a.A[0])
maxDist1.append(a.maxDist)
elapsed = timeit.default_timer() - start_time
print('time elapse: '+str(np.round(elapsed,decimals=2))+'s')
cm1 = np.array(cm1)
#plt.figure(dpi=300)
ax = plt.axes(projection='3d')
ax.scatter3D(cm1[:,0], cm1[:,1], cm1[:,2],c=np.log(Amean1),cmap='plasma',s=1,alpha=0.3);
plt.xlabel('x [px]')
plt.ylabel('y [px]')
#plt.xlim(-200,800)
#plt.ylim(-200,800)
ax.set_zlim(-500,500)
###Output
_____no_output_____
###Markdown
test the write BILD file function
###Code
t0 = tracks0[0]
filename = '_tmp__track_488_'+str(t0.id)+'.bild'
print(filename)
t0.writeBILD(filename,color='green',center=t0.cm-np.array([20,20,20]))
t1 = tracks1[0]
filename = '_tmp__track_560_'+str(t1.id)+'.bild'
print(filename)
t1.writeBILD(filename,color='red',center=t1.cm-np.array([20,20,20]))
###Output
_tmp__track_488_1.bild
_tmp__track_560_1.bild
###Markdown
test the write Tiff box function
###Code
import skimage
import json
class Track:
def __init__(self,pandasTrackData):
tracklength = int((pandasTrackData['tracklength'].values)[0])
#trim the end of the track until you got rid of all the nans
xCoordLastEntry = pandasTrackData[tracklength-1:tracklength]['x'].astype(float).values
#print(xCoordLastEntry,tracklength)
while np.isnan(xCoordLastEntry):
tracklength = tracklength-1
xCoordLastEntry = pandasTrackData[tracklength-1:tracklength]['x'].astype(float).values
#print(xCoordLastEntry,tracklength)
track = pandasTrackData[0:tracklength] # this function kills all the NaNs that come from matlab
self.id = track['trackId'].astype(int).values[0]
self.len = tracklength
self.coords = track[['x','y','z']].astype(float).values
self.cm = np.nanmean(self.coords,axis=0)
self.maxDist = np.linalg.norm(self.coords[0]-self.coords[-1])
self.particleIDs = track['particleId'].astype(int).values
self.A = track['A'].astype(float).values
self.Amean = np.nanmean(self.A)
self.frameIDs = track['frameId'].astype(int).values
def reveal(self):
print('id',self.id)
print('tracklength',self.len)
print('center of mass',self.cm)
print('coords',self.coords)
print('particleIDs',self.particleIDs)
print('A',self.A)
print('frameIDs',self.frameIDs)
def plot(self):
plt.figure(dpi=300)
ax = plt.axes(projection='3d')
ax.plot3D(self.coords[:,0], self.coords[:,1], self.coords[:,2], 'grey')
ax.scatter3D(self.coords[:,0], self.coords[:,1], self.coords[:,2],c=self.A, cmap='plasma',s=100);
plt.xlabel('x [px]')
plt.ylabel('y [px]')
def writeBILD(self,BILDfilename,color='black',center=[]):
filename=BILDfilename
file = open(BILDfilename,'w')
file.write(".transparency 0.5\n")
file.write(".color "+color+"\n")
line = ".comment trackID"+str(self.id)+"\n"
file.write(line)
for i in range(1,self.len):
tzero = self.coords[i-1]
tone = self.coords[i]
if len(center) != 0:
tzero = tzero-center
tone = tone-center
# Data for a three-dimensional line
x0 = float(tzero[0])
y0 = float(tzero[1])
z0 = float(tzero[2])
A0 = float(self.A[i-1])
x1 = float(tone[0])
y1 = float(tone[1])
z1 = float(tone[2])
A1 = float(self.A[i])
if(math.isnan(x0) or math.isnan(y0) or math.isnan(z0) or math.isnan(x1) or math.isnan(y1) or math.isnan(z1)):
line = ".arrow "+str(x0)+" "+str(y0)+" "+str(z0)+" "+str(x1)+" "+str(y1)+" "+str(z1)+"\n" #" "+str(radius)+"\n"
print(line)
file.write(".comment "+line)
continue
line = ".arrow "+str(x0)+" "+str(y0)+" "+str(z0)+" "+str(x1)+" "+str(y1)+" "+str(z1)+"\n" #" "+str(radius)+"\n"
file.write(line)
file.close()
def writeTiffBoxesAroundEveryDetection(self,tiffFilename,tiffImageSize=[40,40,40],center=[]):
centerOfImage = np.array([tiffImageSize[0]//2,tiffImageSize[1]//2,tiffImageSize[2]//2])-np.array([1,1,1])
#### </matlab weirdo> ####
#### warning, i have to invert all the coordinates to get back to tiff coordinates ####
#### this is a problem that comes from the matlab code.. lets see what we can do here
trackCoordsRaw = self.coords
trackCoordsRaw[:,[0, 1,2]] = trackCoordsRaw[:,[2, 1, 0]]
trackCoords = trackCoordsRaw.astype(int)
centerOfMass = np.array([self.cm.astype(int)[2],self.cm.astype(int)[1],self.cm.astype(int)[0]])
#### </matlab weirdo> ####
# recenter all track coordinates to the center of mass of the track
if len(center) == 0:
boxCenters = trackCoords-centerOfMass
else:
boxCenters = trackCoords-np.array(center)
meshIndexes = tp.getCubeMeshIndexes()
counter = 0
for boxCenter in boxCenters:
if(counter%20 ==0):
print(counter)
image = np.zeros(tiffImageSize)
meshIndexesAdjusted = meshIndexes+boxCenter+centerOfImage
for index in meshIndexesAdjusted:
image[index[0],index[1],index[2]]=10
data = image.astype('uint16')
metadata = dict(microscope='joh', shape=data.shape, dtype=data.dtype.str)
metadata = json.dumps(metadata)
skimage.external.tifffile.imsave(tiffFilename+str(counter)+'.tif', data, description=metadata)
counter = counter +1;
print('done')
import timeit
start_time = timeit.default_timer()
tracks1 = []
cm1 = []
Amean1 = []
Afirst = []
maxDist1 = []
len1= []
for i in range(0,10):
if(i %1000 == 0):
print(str(i)+"/"+str(len(trackIdsLong1)))
a = Track(data1[data1['trackId'] == trackIdsLong1[i]])
tracks1.append(a)
cm1.append(a.cm)
len1.append(a.len)
Amean1.append(a.Amean)
Afirst.append(a.A[0])
maxDist1.append(a.maxDist)
elapsed = timeit.default_timer() - start_time
print('time elapse: '+str(np.round(elapsed,decimals=2))+'s')
t1 = tracks1[0]
filename = '_tmp__track_488_tiffTest_'
print(filename)
t1.writeTiffBoxesAroundEveryDetection(filename,tiffImageSize=[40,40,40])
import skimage
skimage
###Output
_____no_output_____
|
Image_Recognition_Resnet.ipynb
|
###Markdown
Image Recognition With ResNet50
###Code
!pip install keras
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
model = ResNet50(weights='imagenet')
img_path = 'vege.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print('Predicted:', decode_predictions(preds, top=3)[0])
img_path = 'sea.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122
img_path = 'sea.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
img_path = 'Des.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
###Output
Predicted: [('n04005630', 'prison', 0.8305863), ('n03032252', 'cinema', 0.06983957), ('n03661043', 'library', 0.050778072)]
|
notebooks/A11_NoSQL_Redis.ipynb
|
###Markdown
RedisREmote DIctionary Service is a key-value database.- [Official docs](https://redis.io/documentation)- [Use cases](https://redislabs.com/solutions/use-cases/)- More about [redis-py](https://github.com/andymccurdy/redis-py) ConceptsRedis is a very simple database conceptually. From a programmer perspective, it's as if you can magically persist - simple values- dictionaries- sets- lists- priority queuesso that they are usable from other programs, possibly residing in other computers. The API is simple to use. And it is an in-memory database, hence extremely fast.A few more concepts relevant for Redis- Transactions- Pipelines- Expiring values- Publish-subscribe model Connect to database
###Code
import redis
###Output
_____no_output_____
###Markdown
Providing access informationIt is common to keep access configuration information to services such as a database or cloud platform in a local file - here we use YAML.**Note**: This file MUST be listed in `.gitignore` - otherwise anyone with access to your repository knows your password!
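An alternative that avoids files on disk altogether is to read the credentials from environment variables. This is just an illustrative sketch, not used in the rest of this notebook; the variable names `REDIS_HOST`, `REDIS_PORT` and `REDIS_PASSWORD` are hypothetical:
```python
import os

# Fall back to the local defaults used in this notebook when the variables are unset
host = os.environ.get('REDIS_HOST', 'localhost')
port = int(os.environ.get('REDIS_PORT', '6379'))
password = os.environ.get('REDIS_PASSWORD')  # None if not set
```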
###Code
%%file redis_auth_config.yaml
# This would normally live on disk and not be in a notebook!
host: 'localhost'
port: 6379
password:
import yaml
with open('redis_auth_config.yaml') as f:
auth = yaml.load(f, Loader=yaml.FullLoader)
auth
r = redis.Redis(
host = auth['host'],
port = auth['port'],
password = auth['password']
)
###Output
_____no_output_____
###Markdown
redis-server
###Code
r.ping()
###Output
_____no_output_____
###Markdown
Clear database
###Code
r.flushdb()
###Output
_____no_output_____
###Markdown
Simple data types Set and get a single value
###Code
r.set('a', 'adenosine')
r.get('a')
###Output
_____no_output_____
###Markdown
Set and get multiple values
###Code
r.mset(dict(c='cytosine', t='thymidine', g='guanosine'))
r.mget(list('tcga'))
###Output
_____no_output_____
###Markdown
Deletion
###Code
r.delete('a')
r.keys()
r.delete('c', 't', 'g')
r.keys()
###Output
_____no_output_____
###Markdown
TransactionsTransactions are achieved by creating and executing a pipeline. This is useful not just for atomicity, but also to reduce communication costs.
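When only the batching matters (and not the MULTI/EXEC atomicity), redis-py can also build a non-transactional pipeline. A minimal sketch, not run in this notebook, assuming the `r` connection created above:
```python
# Send several commands in one round trip, without wrapping them in MULTI/EXEC
batch = r.pipeline(transaction=False)
batch.set('k1', 1)
batch.set('k2', 2)
batch.get('k1')
print(batch.execute())  # e.g. [True, True, b'1']
```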
###Code
pipe = r.pipeline()
(
pipe.set('a', 0).
incr('a').
incr('a').
incr('a').
execute()
)
r.get('a')
###Output
_____no_output_____
###Markdown
Expiring valuesYou can also find the time to expiry with `ttl` (time-to-live) and convert from volatile to permanent with `persist`
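The `persist` call mentioned above is not exercised in the cells below; a minimal sketch of how it would look, assuming the same `r` connection:
```python
r.setex('tmp', 30, 'value')  # volatile key that expires in 30 seconds
print(r.ttl('tmp'))          # remaining time to live, e.g. 30
r.persist('tmp')             # drop the expiry
print(r.ttl('tmp'))          # -1 means the key no longer expires
```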
###Code
import time
r.setex('foo', 3, 'bar')
print('get', r.get('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('get', r.get('foo'))
###Output
get b'bar'
ttl 2
ttl 1
ttl -2
get None
###Markdown
Alternative
###Code
r.set('foo', 'bar')
r.expire('foo', 3)
print(r.get('foo'))
time.sleep(3)
print(r.get('foo'))
###Output
b'bar'
None
###Markdown
Complex data types
###Code
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
###Output
_____no_output_____
###Markdown
Hash
###Code
r.hmset('nuc', dict(a='adenosine', c='cytosine', t='thymidine', g='guanosine'))
r.hget('nuc', 'a')
r.hmget('nuc', list('ctg'))
r.hkeys('nuc')
r.hvals('nuc')
###Output
_____no_output_____
###Markdown
List
###Code
r.rpush('xs', 1, 2, 3)
r.lpush('xs', 4, 5, 6)
r.llen('xs')
r.lrange('xs', 0, r.llen('xs'))
r.lrange('xs', 0, -1)
###Output
_____no_output_____
###Markdown
Using list as a queue
###Code
r.lpush('q', 1, 2, 3)
while r.llen('q'):
print(r.rpop('q'))
###Output
b'1'
b'2'
b'3'
###Markdown
Using list as stack
###Code
r.lpush('q', 1, 2, 3)
while r.llen('q'):
print(r.lpop('q'))
###Output
b'3'
b'2'
b'1'
###Markdown
Transferring values across lists
###Code
r.lpush('l1', 1,2,3)
while r.llen('l1'):
r.rpoplpush('l1', 'l2')
r.llen('l1'), r.llen('l2')
for key in r.scan_iter('l2'):
print(key)
r.lpush('l1', 1,2,3)
###Output
_____no_output_____
###Markdown
Sets
###Code
r.sadd('s1', 1,2,3)
r.sadd('s1', 2,3,4)
r.smembers('s1')
r.sadd('s2', 4,5,6)
r.sdiff(['s1', 's2'])
r.sinter(['s1', 's2'])
r.sunion(['s1', 's2'])
###Output
_____no_output_____
###Markdown
Sorted setsThis is equivalent to a priority queue.
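With Redis 5 or later, the lowest- and highest-scored members can also be popped directly, which is what makes a sorted set usable as a priority queue. A minimal sketch (not run here) using the `jobs` set built below:
```python
print(r.zpopmin('jobs'))  # pop the member with the smallest score, e.g. [(b'job3', 1.0)]
print(r.zpopmax('jobs'))  # pop the member with the largest score
```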
###Code
r.zadd('jobs',
dict(job1=3,
job2=7,
job3=1,
job4=2,
job5=6)
)
r.zincrby('jobs', 2, 'job5')
r.zrange('jobs', 0, -1, withscores=True)
r.zrevrange('jobs', 0, -1, withscores=True)
###Output
_____no_output_____
###Markdown
Union and intersection storeThis just creates new sets from the union and intersection respectively.
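By default the scores of members appearing in both sets are summed; redis-py also accepts per-set weights and a different aggregate function. A minimal sketch, not run in this notebook, using the `c1`/`c2` word-count sets built below:
```python
r.zunionstore('c_weighted', {'c1': 1, 'c2': 2})        # weight c2 twice as heavily
r.zunionstore('c_max', ['c1', 'c2'], aggregate='MAX')  # keep the maximum score instead of the sum
print(r.zrange('c_weighted', 0, -1, withscores=True))
```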
###Code
s1 = 'time flies like an arrow'
s2 = 'fruit flies like a banana'
from collections import Counter
c1 = Counter(s1.split())
c2 = Counter(s2.split())
r.zadd('c1', c1)
r.zadd('c2', c2)
r.zrange('c1', 0, -1, withscores=True)
r.zrange('c2', 0, -1, withscores=True)
r.zunionstore('c3', ['c1', 'c2'])
r.zrange('c3', 0, -1, withscores=True)
r.zinterstore('c4', ['c1', 'c2'])
r.zrange('c4', 0, -1, withscores=True)
###Output
_____no_output_____
###Markdown
Publisher/SubscriberSource: https://making.pusher.com/redis-pubsub-under-the-hood/
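Besides polling with `get_message()` as done below, redis-py can also dispatch messages to callback functions from a background thread. A minimal sketch, not run here; the `handler` function is hypothetical:
```python
def handler(message):
    print('received:', message['data'])

listener = r.pubsub()
listener.subscribe(**{'python': handler})         # register a per-channel message handler
worker = listener.run_in_thread(sleep_time=0.01)  # poll in a background thread
# ... publish messages from elsewhere ...
worker.stop()                                     # stop the background listener
```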
###Code
help(r.pubsub)
p = r.pubsub()
###Output
_____no_output_____
###Markdown
Channels
###Code
p.subscribe('python', 'perl', 'sql')
m = p.get_message()
while m:
print(m)
m = p.get_message()
p.channels
p2 = r.pubsub()
p2.psubscribe('p*')
p2.patterns
###Output
_____no_output_____
###Markdown
Messages From [redis-py](https://github.com/andymccurdy/redis-py)Every message read from a PubSub instance will be a dictionary with the following keys.- type: One of the following: 'subscribe', 'unsubscribe', 'psubscribe', 'punsubscribe', 'message', 'pmessage'- channel: The channel [un]subscribed to or the channel a message was published to- pattern: The pattern that matched a published message's channel. Will be None in all cases except for 'pmessage' types.- data: The message data. With [un]subscribe messages, this value will be the number of channels and patterns the connection is currently subscribed to. With [p]message messages, this value will be the actual published message.
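If the subscription confirmations are not of interest, they can be filtered out when the PubSub object is created. A minimal sketch, not run in this notebook:
```python
p3 = r.pubsub(ignore_subscribe_messages=True)  # skip 'subscribe'/'unsubscribe' confirmations
p3.subscribe('python')
print(p3.get_message())  # None until an actual message is published
```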
###Code
r.publish('python', 'use blank spaces')
r.publish('python', 'no semi-colons')
r.publish('perl', 'use spaceship operator')
r.publish('sql', 'select this')
r.publish('haskell', 'functional is cool')
m = p.get_message()
while m:
print(m)
m = p.get_message()
p.unsubscribe('python')
p.channels
r.publish('python', 'use blank spaces 2')
r.publish('python', 'no semi-colons 2')
r.publish('perl', 'use spaceship operator 2')
r.publish('sql', 'select this 2')
r.publish('haskell', 'functional is cool 2')
m = p.get_message()
while m:
print(m)
m = p.get_message()
m = p2.get_message()
while m:
print(m)
m = p2.get_message()
###Output
{'type': 'psubscribe', 'pattern': None, 'channel': b'p*', 'data': 1}
{'type': 'pmessage', 'pattern': b'p*', 'channel': b'python', 'data': b'use blank spaces'}
{'type': 'pmessage', 'pattern': b'p*', 'channel': b'python', 'data': b'no semi-colons'}
{'type': 'pmessage', 'pattern': b'p*', 'channel': b'perl', 'data': b'use spaceship operator'}
{'type': 'pmessage', 'pattern': b'p*', 'channel': b'python', 'data': b'use blank spaces 2'}
{'type': 'pmessage', 'pattern': b'p*', 'channel': b'python', 'data': b'no semi-colons 2'}
{'type': 'pmessage', 'pattern': b'p*', 'channel': b'perl', 'data': b'use spaceship operator 2'}
###Markdown
Multiple databases
###Code
r2 = redis.Redis(db=1)
r2.flushdb()
for c in ['c1', 'c2', 'c3', 'c4']:
r.move(c, 1)
for key in r2.scan_iter('c?'):
print(r2.zrange(key, 0, -1, withscores=True))
###Output
[(b'flies', 2.0), (b'like', 2.0)]
[(b'a', 1.0), (b'an', 1.0), (b'arrow', 1.0), (b'banana', 1.0), (b'fruit', 1.0), (b'time', 1.0), (b'flies', 2.0), (b'like', 2.0)]
[(b'a', 1.0), (b'banana', 1.0), (b'flies', 1.0), (b'fruit', 1.0), (b'like', 1.0)]
[(b'an', 1.0), (b'arrow', 1.0), (b'flies', 1.0), (b'like', 1.0), (b'time', 1.0)]
###Markdown
Clean upThere is no need to close the connections when we use the `Redis()` object. This is taken care of automatically```pythondef execute_command(self, *args, **options): "Execute a command and return a parsed response" pool = self.connection_pool command_name = args[0] connection = pool.get_connection(command_name, **options) try: connection.send_command(*args) return self.parse_response(connection, command_name, **options) except (ConnectionError, TimeoutError) as e: connection.disconnect() if not connection.retry_on_timeout and isinstance(e, TimeoutError): raise connection.send_command(*args) return self.parse_response(connection, command_name, **options) finally: pool.release(connection) ``` Benchmark redis
###Code
! redis-benchmark --help
%%bash
redis-benchmark -q -n 10000 -c 50
###Output
PING_INLINE: 0.00
PING_INLINE: 38525.90
PING_INLINE: 38610.04 requests per second
PING_BULK: 38370.84
PING_BULK: 38461.54 requests per second
SET: 39017.54
SET: 39215.69 requests per second
GET: 40135.75
GET: 40000.00 requests per second
INCR: 38286.36
INCR: 38314.18 requests per second
LPUSH: 40879.81
LPUSH: 41322.31 requests per second
RPUSH: 38242.99
RPUSH: 36900.37 requests per second
LPOP: 37413.61
LPOP: 37453.18 requests per second
RPOP: 34918.60
RPOP: 34722.22 requests per second
SADD: 37571.43
SADD: 37878.79 requests per second
HSET: 38341.88
HSET: 38759.69 requests per second
SPOP: 40682.24
SPOP: 39370.08 requests per second
LPUSH (needed to benchmark LRANGE): 37742.57
LPUSH (needed to benchmark LRANGE): 37878.79 requests per second
LRANGE_100 (first 100 elements): 33139.54
LRANGE_100 (first 100 elements): 33222.59 requests per second
LRANGE_300 (first 300 elements): 17696.97
LRANGE_300 (first 300 elements): 19696.11
LRANGE_300 (first 300 elements): 19646.37 requests per second
LRANGE_500 (first 450 elements): 14363.64
LRANGE_500 (first 450 elements): 14386.03
LRANGE_500 (first 450 elements): 14543.02
LRANGE_500 (first 450 elements): 14577.26 requests per second
LRANGE_600 (first 600 elements): 12482.35
LRANGE_600 (first 600 elements): 12532.74
LRANGE_600 (first 600 elements): 12505.96
LRANGE_600 (first 600 elements): 12484.39 requests per second
MSET (10 keys): 38314.29
MSET (10 keys): 40160.64 requests per second
###Markdown
RedisREmote DIctionary Service is a key-value database.- [Official docs](https://redis.io/documentation)- [Use cases](https://redislabs.com/solutions/use-cases/)- More about [redis-py](https://github.com/andymccurdy/redis-py) ConceptsRedis is a very simple database conceptually. From a programmer perspective, it's as if you can magically persist - simple values- dictionaries- sets- lists- priority queuesso that they are usable from other programs, possibly residing in other computers. The API is simple to use. And it is an in-memory database, hence extremely fast.A few more concepts relevant for Redis- Transactions- Pipelines- Expiring values- Publish-subscribe model Connect to database
###Code
import redis
###Output
_____no_output_____
###Markdown
Providing access informationIt is common to keep access configuration information to services such as a database or cloud platform in a local file - here we use YAML.**Note**: This file MUST be listed in `.gitignore` - otherwise anyone with access to your repository knows your password!
###Code
%%file redis_auth_config.yaml
# This would normally live on disk and not be in a notebook!
host: 'localhost'
port: 6379
password:
import yaml
with open('redis_auth_config.yaml') as f:
auth = yaml.load(f, Loader=yaml.FullLoader)
auth
r = redis.Redis(
host = auth['host'],
port = auth['port'],
password = auth['password']
)
r.ping()
###Output
_____no_output_____
###Markdown
Clear database
###Code
r.flushdb()
###Output
_____no_output_____
###Markdown
Simple data types Set and get a single value
###Code
r.set('a', 'adenosine')
r.get('a')
###Output
_____no_output_____
###Markdown
Set and get multiple values
###Code
r.mset(dict(c='cytosine', t='thymidine', g='guanosine'))
r.mget(list('tcga'))
###Output
_____no_output_____
###Markdown
Deletion
###Code
r.delete('a')
r.keys()
r.delete('c', 't', 'g')
r.keys()
###Output
_____no_output_____
###Markdown
TransactionsTransactions are achieved by creating and executing a pipeline. This is useful not just for atomicity, but also to reduce communication costs.
###Code
pipe = r.pipeline()
(
pipe.set('a', 0).
incr('a').
incr('a').
incr('a').
execute()
)
r.get('a')
###Output
_____no_output_____
###Markdown
Expiring valuesYou can also find the time to expiry with `ttl` (time-to-live) and convert from volatile to permanent with `persist`
###Code
import time
r.setex('foo', 3, 'bar')
print('get', r.get('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('get', r.get('foo'))
###Output
_____no_output_____
###Markdown
Alternative
###Code
r.set('foo', 'bar')
r.expire('foo', 3)
print(r.get('foo'))
time.sleep(3)
print(r.get('foo'))
###Output
_____no_output_____
###Markdown
Complex data types
###Code
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
###Output
_____no_output_____
###Markdown
Hash
###Code
r.hmset('nuc', dict(a='adenosine', c='cytosine', t='thymidine', g='guanosine'))
r.hget('nuc', 'a')
r.hmget('nuc', list('ctg'))
r.hkeys('nuc')
r.hvals('nuc')
###Output
_____no_output_____
###Markdown
List
###Code
r.rpush('xs', 1, 2, 3)
r.lpush('xs', 4, 5, 6)
r.llen('xs')
r.lrange('xs', 0, r.llen('xs'))
r.lrange('xs', 0, -1)
###Output
_____no_output_____
###Markdown
Using list as a queue
###Code
r.lpush('q', 1, 2, 3)
while r.llen('q'):
print(r.rpop('q'))
###Output
_____no_output_____
###Markdown
Using list as stack
###Code
r.lpush('q', 1, 2, 3)
while r.llen('q'):
print(r.lpop('q'))
###Output
_____no_output_____
###Markdown
Transferring values across lists
###Code
r.lpush('l1', 1,2,3)
while r.llen('l1'):
r.rpoplpush('l1', 'l2')
r.llen('l1'), r.llen('l2')
for key in r.scan_iter('l2'):
print(key)
r.lpush('l1', 1,2,3)
###Output
_____no_output_____
###Markdown
Sets
###Code
r.sadd('s1', 1,2,3)
r.sadd('s1', 2,3,4)
r.smembers('s1')
r.sadd('s2', 4,5,6)
r.sdiff(['s1', 's2'])
r.sinter(['s1', 's2'])
r.sunion(['s1', 's2'])
###Output
_____no_output_____
###Markdown
Sorted setsThis is equivalent to a priority queue.
###Code
r.zadd('jobs',
dict(job1=3,
job2=7,
job3=1,
job4=2,
job5=6)
)
r.zincrby('jobs', 2, 'job5')
r.zrange('jobs', 0, -1, withscores=True)
r.zrevrange('jobs', 0, -1, withscores=True)
###Output
_____no_output_____
###Markdown
Union and intersection storeThis just creates new sets from the union and intersection respectively.
###Code
s1 = 'time flies like an arrow'
s2 = 'fruit flies like a banana'
from collections import Counter
c1 = Counter(s1.split())
c2 = Counter(s2.split())
r.zadd('c1', c1)
r.zadd('c2', c2)
r.zrange('c1', 0, -1, withscores=True)
r.zrange('c2', 0, -1, withscores=True)
r.zunionstore('c3', ['c1', 'c2'])
r.zrange('c3', 0, -1, withscores=True)
r.zinterstore('c4', ['c1', 'c2'])
r.zrange('c4', 0, -1, withscores=True)
###Output
_____no_output_____
###Markdown
Publisher/SubscriberSource: https://making.pusher.com/redis-pubsub-under-the-hood/
###Code
help(r.pubsub)
p = r.pubsub()
###Output
_____no_output_____
###Markdown
Channels
###Code
p.subscribe('python', 'perl', 'sql')
m = p.get_message()
while m:
print(m)
m = p.get_message()
p.channels
p2 = r.pubsub()
p2.psubscribe('p*')
p2.patterns
###Output
_____no_output_____
###Markdown
Messages From [redis-py](https://github.com/andymccurdy/redis-py)Every message read from a PubSub instance will be a dictionary with the following keys.- type: One of the following: 'subscribe', 'unsubscribe', 'psubscribe', 'punsubscribe', 'message', 'pmessage'- channel: The channel [un]subscribed to or the channel a message was published to- pattern: The pattern that matched a published message's channel. Will be None in all cases except for 'pmessage' types.- data: The message data. With [un]subscribe messages, this value will be the number of channels and patterns the connection is currently subscribed to. With [p]message messages, this value will be the actual published message.
###Code
r.publish('python', 'use blank spaces')
r.publish('python', 'no semi-colons')
r.publish('perl', 'use spaceship operator')
r.publish('sql', 'select this')
r.publish('haskell', 'functional is cool')
m = p.get_message()
while m:
print(m)
m = p.get_message()
p.unsubscribe('python')
p.channels
r.publish('python', 'use blank spaces 2')
r.publish('python', 'no semi-colons 2')
r.publish('perl', 'use spaceship operator 2')
r.publish('sql', 'select this 2')
r.publish('haskell', 'functional is cool 2')
m = p.get_message()
while m:
print(m)
m = p.get_message()
m = p2.get_message()
while m:
print(m)
m = p2.get_message()
###Output
_____no_output_____
###Markdown
Multiple databases
###Code
r2 = redis.Redis(db=1)
r2.flushdb()
for c in ['c1', 'c2', 'c3', 'c4']:
r.move(c, 1)
for key in r2.scan_iter('c?'):
print(r2.zrange(key, 0, -1, withscores=True))
###Output
_____no_output_____
###Markdown
Clean upThere is no need to close the connections when we use the `Redis()` object. This is taken care of automatically```pythondef execute_command(self, *args, **options): "Execute a command and return a parsed response" pool = self.connection_pool command_name = args[0] connection = pool.get_connection(command_name, **options) try: connection.send_command(*args) return self.parse_response(connection, command_name, **options) except (ConnectionError, TimeoutError) as e: connection.disconnect() if not connection.retry_on_timeout and isinstance(e, TimeoutError): raise connection.send_command(*args) return self.parse_response(connection, command_name, **options) finally: pool.release(connection) ``` Benchmark redis
###Code
! redis-benchmark --help
%%bash
redis-benchmark -q -n 10000 -c 50
###Output
_____no_output_____
###Markdown
RedisREmote DIctionary Service is a key-value database.- [Official docs](https://redis.io/documentation)- [Use cases](https://redislabs.com/solutions/use-cases/)- More about [redis-py](https://github.com/andymccurdy/redis-py) ConceptsRedis is a very simple database conceptually. From a programmer perspective, it's as if you can magically persist simple values, dictionaries, sets, lists, and priority queues, so that they are usable from other programs, possibly residing in other computers. The API is simple to use. And it is an in-memory database, hence extremely fast.More advanced concepts- Pipelines- Expiring values- Publish-subscribe model Connect to database
###Code
import redis
###Output
_____no_output_____
###Markdown
Providing access informationIt is common to keep access configuration information to services such as a database or cloud platform in a local file - here we use YAML.**Note**: This file MUST be listed in `.gitignore` - otherwise anyone with access to your repository knows your password!
###Code
%%file redis_auth_config.yaml
# This would normally live on disk and not be in a notebook!
host: 'localhost'
port: 6379
password:
import yaml
with open('redis_auth_config.yaml') as f:
auth = yaml.load(f, Loader=yaml.FullLoader)
auth
r = redis.Redis(
host = auth['host'],
port = auth['port'],
password = auth['password']
)
r.ping()
###Output
_____no_output_____
###Markdown
Clear database
###Code
r.flushdb()
###Output
_____no_output_____
###Markdown
Simple data types Set and get a single value
###Code
r.set('a', 'adenosine')
r.get('a')
###Output
_____no_output_____
###Markdown
Set and get multiple values
###Code
r.mset(dict(c='cytosine', t='thymidine', g='guanosine'))
r.mget(list('tcga'))
###Output
_____no_output_____
###Markdown
Deletion
###Code
r.delete('a')
r.keys()
r.delete('c', 't', 'g')
r.keys()
###Output
_____no_output_____
###Markdown
TransactionsTransactions are achieved by creating and executing a pipeline. This is useful not just for atomicity, but also to reduce communication costs.
###Code
pipe = r.pipeline()
(
pipe.set('a', 0).
incr('a').
incr('a').
incr('a').
execute()
)
r.get('a')
###Output
_____no_output_____
###Markdown
Expiring valuesYou can also find the time to expiry with `ttl` (time-to-live) and convert from volatile to permanent with `persist`
###Code
import time
r.setex('foo', 3, 'bar')
print('get', r.get('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('ttl', r.ttl('foo'))
time.sleep(1)
print('get', r.get('foo'))
###Output
_____no_output_____
###Markdown
Alternative
###Code
r.set('foo', 'bar')
r.expire('foo', 3)
print(r.get('foo'))
time.sleep(3)
print(r.get('foo'))
###Output
_____no_output_____
###Markdown
Complex data types
###Code
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
###Output
_____no_output_____
###Markdown
Hash
###Code
r.hmset('nuc', dict(a='adenosine', c='cytosine', t='thymidine', g='guanosine'))
r.hget('nuc', 'a')
r.hmget('nuc', list('ctg'))
r.hkeys('nuc')
r.hvals('nuc')
###Output
_____no_output_____
###Markdown
List
###Code
r.rpush('xs', 1, 2, 3)
r.lpush('xs', 4, 5, 6)
r.llen('xs')
r.lrange('xs', 0, r.llen('xs'))
r.lrange('xs', 0, -1)
###Output
_____no_output_____
###Markdown
Using list as a queue
###Code
r.lpush('q', 1, 2, 3)
while r.llen('q'):
print(r.rpop('q'))
###Output
_____no_output_____
###Markdown
Using list as stack
###Code
r.lpush('q', 1, 2, 3)
while r.llen('q'):
print(r.lpop('q'))
###Output
_____no_output_____
###Markdown
Transferring values across lists
###Code
r.lpush('l1', 1,2,3)
while r.llen('l1'):
r.rpoplpush('l1', 'l2')
r.llen('l1'), r.llen('l2')
for key in r.scan_iter('l2'):
print(key)
r.lpush('l1', 1,2,3)
###Output
_____no_output_____
###Markdown
Sets
###Code
r.sadd('s1', 1,2,3)
r.sadd('s1', 2,3,4)
r.smembers('s1')
r.sadd('s2', 4,5,6)
r.sdiff(['s1', 's2'])
r.sinter(['s1', 's2'])
r.sunion(['s1', 's2'])
###Output
_____no_output_____
###Markdown
Sorted setsThis is equivalent to a priority queue.
###Code
r.zadd('jobs',
dict(job1=3,
job2=7,
job3=1,
job4=2,
job5=6)
)
r.zincrby('jobs', 2, 'job5')
r.zrange('jobs', 0, -1, withscores=True)
r.zrevrange('jobs', 0, -1, withscores=True)
###Output
_____no_output_____
###Markdown
Union and intersection storeThis just creates new sets from the union and intersection respectively.
###Code
s1 = 'time flies like an arrow'
s2 = 'fruit flies like a banana'
from collections import Counter
c1 = Counter(s1.split())
c2 = Counter(s2.split())
r.zadd('c1', c1)
r.zadd('c2', c2)
r.zrange('c1', 0, -1, withscores=True)
r.zrange('c2', 0, -1, withscores=True)
r.zunionstore('c3', ['c1', 'c2'])
r.zrange('c3', 0, -1, withscores=True)
r.zinterstore('c4', ['c1', 'c2'])
r.zrange('c4', 0, -1, withscores=True)
###Output
_____no_output_____
###Markdown
Publisher/SubscriberSource: https://making.pusher.com/redis-pubsub-under-the-hood/
###Code
help(r.pubsub)
p = r.pubsub()
###Output
_____no_output_____
###Markdown
Channels
###Code
p.subscribe('python', 'perl', 'sql')
m = p.get_message()
while m:
print(m)
m = p.get_message()
p.channels
p2 = r.pubsub()
p2.psubscribe('p*')
p2.patterns
###Output
_____no_output_____
###Markdown
Messages From [redis-py](https://github.com/andymccurdy/redis-py)Every message read from a PubSub instance will be a dictionary with the following keys.- type: One of the following: 'subscribe', 'unsubscribe', 'psubscribe', 'punsubscribe', 'message', 'pmessage'- channel: The channel [un]subscribed to or the channel a message was published to- pattern: The pattern that matched a published message's channel. Will be None in all cases except for 'pmessage' types.- data: The message data. With [un]subscribe messages, this value will be the number of channels and patterns the connection is currently subscribed to. With [p]message messages, this value will be the actual published message.
###Code
r.publish('python', 'use blank spaces')
r.publish('python', 'no semi-colons')
r.publish('perl', 'use spaceship operator')
r.publish('sql', 'select this')
r.publish('haskell', 'functional is cool')
m = p.get_message()
while m:
print(m)
m = p.get_message()
p.unsubscribe('python')
p.channels
r.publish('python', 'use blank spaces 2')
r.publish('python', 'no semi-colons 2')
r.publish('perl', 'use spaceship operator 2')
r.publish('sql', 'select this 2')
r.publish('haskell', 'functional is cool 2')
m = p.get_message()
while m:
print(m)
m = p.get_message()
m = p2.get_message()
while m:
print(m)
m = p2.get_message()
###Output
_____no_output_____
###Markdown
Multiple databases
###Code
r2 = redis.Redis(db=1)
r2.flushdb()
for c in ['c1', 'c2', 'c3', 'c4']:
r.move(c, 1)
for key in r2.scan_iter('c?'):
print(r2.zrange(key, 0, -1, withscores=True))
###Output
_____no_output_____
###Markdown
Clean upThere is no need to close the connections when we use the `Redis()` object. This is taken care of automatically```pythondef execute_command(self, *args, **options): "Execute a command and return a parsed response" pool = self.connection_pool command_name = args[0] connection = pool.get_connection(command_name, **options) try: connection.send_command(*args) return self.parse_response(connection, command_name, **options) except (ConnectionError, TimeoutError) as e: connection.disconnect() if not connection.retry_on_timeout and isinstance(e, TimeoutError): raise connection.send_command(*args) return self.parse_response(connection, command_name, **options) finally: pool.release(connection) ``` Benchmark redis
###Code
%%bash
redis-benchmark -q -n 10000 -c 50
###Output
_____no_output_____
|
lecture_05.ipynb
|
###Markdown
Lecture 05:**Linear Regression PyTorch way**There is a rhythm to PyTorch programs:* Model and network - Forward pass* Loss and Optimizer* Training loopWe will use the same linear regression example as before for this lecture and use PyTorch natively for all the coding
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
x_data = torch.Tensor([[1.0], [2.0],[3.0]])
y_data = torch.Tensor([[2.0], [4.0],[6.0]])
## Model network and forward pass
class Model(torch.nn.Module):
def __init__(self):
"""
        In the constructor we instantiate one nn.Linear module.
"""
super(Model, self).__init__()
        self.linear = torch.nn.Linear(1, 1)  # one input feature (x) and one output (y)
def forward(self, x):
"""
        In the forward function we accept the input tensor and return the output tensor.
        We can use the modules defined in the constructor, as well as arbitrary
        operations on the tensor."""
y_pred = self.linear(x)
return y_pred
# Our model
model = Model()
# Loss function and optimizer.
# model.parameters() exposes all of the network's trainable weights to the optimizer
criterion = torch.nn.MSELoss(size_average = False)
optimum = torch.optim.SGD(model.parameters(), lr = 0.01)
# Training loop
for epoch in range (100):
    # Use the forward pass to calculate the prediction
y_pred = model(x_data)
# Compute and print the loss
loss = criterion(y_pred, y_data)
print(f'Epoch: {epoch}, Loss: {loss.item()}')
    # Zero the gradients, then do a backward pass to calculate them
    # and update the weights
optimum.zero_grad()
loss.backward()
optimum.step()
# After training
new_val = torch.Tensor([4.0])
print('Predict (after training)', 4, model.forward(new_val).item())
###Output
Epoch: 0, Loss: 0.000493427854962647
Epoch: 1, Loss: 0.00048634305130690336
Epoch: 2, Loss: 0.00047935411566868424
Epoch: 3, Loss: 0.00047246244503185153
Epoch: 4, Loss: 0.0004656784294638783
Epoch: 5, Loss: 0.0004589696181938052
Epoch: 6, Loss: 0.0004523824609350413
Epoch: 7, Loss: 0.0004458907642401755
Epoch: 8, Loss: 0.0004394740972202271
Epoch: 9, Loss: 0.0004331583040766418
Epoch: 10, Loss: 0.00042693031718954444
Epoch: 11, Loss: 0.00042080465937033296
Epoch: 12, Loss: 0.00041475644684396684
Epoch: 13, Loss: 0.00040878489380702376
Epoch: 14, Loss: 0.00040291156619787216
Epoch: 15, Loss: 0.0003971316618844867
Epoch: 16, Loss: 0.00039141540764831007
Epoch: 17, Loss: 0.0003857953997794539
Epoch: 18, Loss: 0.00038024436798878014
Epoch: 19, Loss: 0.00037479097954928875
Epoch: 20, Loss: 0.00036940042627975345
Epoch: 21, Loss: 0.0003640844370238483
Epoch: 22, Loss: 0.00035885433317162097
Epoch: 23, Loss: 0.0003536948934197426
Epoch: 24, Loss: 0.0003486151108518243
Epoch: 25, Loss: 0.0003436130646150559
Epoch: 26, Loss: 0.0003386674798093736
Epoch: 27, Loss: 0.00033380769309587777
Epoch: 28, Loss: 0.0003290083259344101
Epoch: 29, Loss: 0.00032428139820694923
Epoch: 30, Loss: 0.00031960842898115516
Epoch: 31, Loss: 0.00031501270132139325
Epoch: 32, Loss: 0.00031049229437485337
Epoch: 33, Loss: 0.00030603198683820665
Epoch: 34, Loss: 0.0003016302362084389
Epoch: 35, Loss: 0.00029729443485848606
Epoch: 36, Loss: 0.0002930318296421319
Epoch: 37, Loss: 0.00028881384059786797
Epoch: 38, Loss: 0.0002846646821126342
Epoch: 39, Loss: 0.00028057279996573925
Epoch: 40, Loss: 0.0002765395911410451
Epoch: 41, Loss: 0.00027256819885224104
Epoch: 42, Loss: 0.00026865125983022153
Epoch: 43, Loss: 0.0002647844376042485
Epoch: 44, Loss: 0.0002609850780572742
Epoch: 45, Loss: 0.00025722902501001954
Epoch: 46, Loss: 0.000253535428782925
Epoch: 47, Loss: 0.0002498896501492709
Epoch: 48, Loss: 0.0002462957927491516
Epoch: 49, Loss: 0.00024276424665004015
Epoch: 50, Loss: 0.00023927078291308135
Epoch: 51, Loss: 0.0002358347992412746
Epoch: 52, Loss: 0.00023244312615133822
Epoch: 53, Loss: 0.0002291033451911062
Epoch: 54, Loss: 0.00022580692893825471
Epoch: 55, Loss: 0.00022256042575463653
Epoch: 56, Loss: 0.000219370995182544
Epoch: 57, Loss: 0.0002162097516702488
Epoch: 58, Loss: 0.0002131020009983331
Epoch: 59, Loss: 0.00021004454174544662
Epoch: 60, Loss: 0.00020702675101347268
Epoch: 61, Loss: 0.0002040490653598681
Epoch: 62, Loss: 0.0002011168544413522
Epoch: 63, Loss: 0.00019823206821456552
Epoch: 64, Loss: 0.00019538355991244316
Epoch: 65, Loss: 0.00019257667008787394
Epoch: 66, Loss: 0.00018980208551511168
Epoch: 67, Loss: 0.00018707069102674723
Epoch: 68, Loss: 0.0001843883073888719
Epoch: 69, Loss: 0.00018173549324274063
Epoch: 70, Loss: 0.00017912212933879346
Epoch: 71, Loss: 0.00017654933617450297
Epoch: 72, Loss: 0.000174012006027624
Epoch: 73, Loss: 0.0001715158869046718
Epoch: 74, Loss: 0.0001690446079010144
Epoch: 75, Loss: 0.0001666130410740152
Epoch: 76, Loss: 0.00016422067710664123
Epoch: 77, Loss: 0.00016185821732506156
Epoch: 78, Loss: 0.00015953781257849187
Epoch: 79, Loss: 0.00015724146214779466
Epoch: 80, Loss: 0.00015498421271331608
Epoch: 81, Loss: 0.0001527576387161389
Epoch: 82, Loss: 0.0001505643012933433
Epoch: 83, Loss: 0.00014839674986433238
Epoch: 84, Loss: 0.00014626598567701876
Epoch: 85, Loss: 0.00014416183694265783
Epoch: 86, Loss: 0.0001420860644429922
Epoch: 87, Loss: 0.00014004806871525943
Epoch: 88, Loss: 0.0001380343601340428
Epoch: 89, Loss: 0.00013605151616502553
Epoch: 90, Loss: 0.00013410118117462844
Epoch: 91, Loss: 0.00013217095693107694
Epoch: 92, Loss: 0.00013027130626142025
Epoch: 93, Loss: 0.00012840254930779338
Epoch: 94, Loss: 0.00012654860620386899
Epoch: 95, Loss: 0.00012472760863602161
Epoch: 96, Loss: 0.00012294366024434566
Epoch: 97, Loss: 0.0001211712951771915
Epoch: 98, Loss: 0.00011943148274440318
Epoch: 99, Loss: 0.00011770993296522647
Predict (after training) 4 7.987527847290039
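###Markdown
For inference after training it is common (though not shown in the lecture code above) to switch the model to evaluation mode and disable gradient tracking. A minimal sketch using the `model` trained above:
```python
model.eval()                     # evaluation mode (relevant for dropout/batch-norm layers)
with torch.no_grad():            # no gradient bookkeeping is needed for inference
    pred = model(torch.Tensor([[4.0]]))
print(pred.item())               # should be close to 8.0
```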
###Markdown
isOdd version 1.0
###Code
def isOdd():
print "This function will print a 0 if your number is even and a 1 if your number is odd."
print "Note: this program does not handle 0's properly."
check = input("What number would you like to check? :")
output = check%2
print output
isOdd()
###Output
_____no_output_____
###Markdown
Conditions - back to temperature
###Code
## Convert2.py
def temps():
celsius = input("What is the Celsius temperature? ")
fahrenheit = 9.0 / 5.0 * celsius + 32
print "Temperature is", fahrenheit, "degrees fahrenheit."
if fahrenheit >= 90:
print "It's really hot out there, be careful!"
if fahrenheit <= 30:
print "Brrrrr. Be sure to dress warmly"
temps()
###Output
_____no_output_____
###Markdown
Two-Way Decisions (isOdd)
###Code
## is odd, v2.0
def isOdd():
print "This function will print a 0 if your number is even and a 1 if your number is odd."
print "Note: this program does not handle 0's properly."
check = input("What number would you like to check?: ")
output = check%2
if output == 1:
print "Your number is odd."
else:
print "Your number is even."
isOdd()
###Output
_____no_output_____
###Markdown
Multi-Way Decisions (isOdd)
###Code
## isOdd, v3.0
def isOdd():
print "This function will tell you if your number is even or if it is odd."
print "It will handle 0 differently."
check = input("What number would you like to check?: ")
output = check%2
if check == 0:
print "Your number is zero."
elif output == 1:
print "Your number is odd"
else:
print "Your number is even."
isOdd()
###Output
_____no_output_____
###Markdown
Exception Handling
###Code
## isOdd, v 4.0
## This version handles exceptions
def isOdd():
print "This function will print a 0 if your number is even and a 1 if your number is odd."
try:
check = input("What number would you like to check?: ")
output = check%2
if check == 0:
print "Your number is zero."
elif output == 1:
print "Your number is odd"
else:
print "Your number is even."
except NameError:
print "You must enter a number, not a word. Try again!"
isOdd()
###Output
_____no_output_____
###Markdown
Multi-way Decisions (House or Senate)We will probably not have time to get to this, but in case you want to see code that I wrote for this problem:
###Code
##eligible v1.0
def eligible():
job = raw_input("Would you like to me a member of the Senate or the House?: ")
job = job.lower()
age = input("How old are you?: ")
citizen = input("How many years have you been a citizen of the United States?: ")
if job == "senate":
if age >= 30:
if citizen >= 9:
print "You are eligible to be a Senator!"\
"If for some reason you want to be a Senator."
else:
print"You are ineligible to be a Senator -"
"you must be citizen for 9 years."
else:
print "You can't be Senator - you are too young."\
elif job == "house":
if age >= 25:
if citizen >= 7:
print "You are eligible to be a member of the House! " \
"Although why you would want to be is anyone's guess."
else:
print "You are ineligible to be a member of the House -"
"You must be a citizen for 7 years."
else:
print "You are ineligible to be a member of the House - "\
"You are too young. I know, they act like chidren, but they"
" are technically all over the age of 25."
else:
print "That's not a job - try again!"
eligible()
###Output
_____no_output_____
|
tarea5_daa.ipynb
|
###Markdown
###Code
from time import time
def ejemplo1( n ):
start_time = time()
c = n + 1
d = c * n
e = n * n
total = c + e - d
elapsed_time = time() - start_time
print("Tiempo transcurrido: %0.10f segundos." % elapsed_time)
print(f"total={ total }")
for entrada in range(100, 1100, 100):
ejemplo1(entrada)
# T(n) = 4
from time import time
def ejemplo2( n ):
start_time = time()
contador = 0
for i in range( n ) :
for j in range( n ) :
contador += 1
elapsed_time = time() - start_time
print("Tiempo transcurrido: %0.10f segundos." % elapsed_time)
return contador
for entrada in range(100, 1100, 100):
ejemplo2(entrada)
# T(n) = 1 + n^2
from time import time
def ejemplo3(n):
start_time = time()
x = n * 2
y = 0
for m in range(100):
y = x - n
elapsed_time = time() - start_time
print("Tiempo transcurrido: %0.10f segundos." % elapsed_time)
return y
for entrada in range(100, 1100, 100):
ejemplo3(entrada)
# T(n) = 102
from time import time
def ejemplo4( n ):
start_time = time()
x = 3 * 3.1416 + n
y = x + 3 * 3 - n
z = x + y
elapsed_time = time() - start_time
print("Tiempo transcurrido: %0.10f segundos." % elapsed_time)
return z
for entrada in range(100, 1100, 100):
ejemplo4(entrada)
# T(n) = 3
from time import time
def ejemplo5( x ):
start_time = time()
n = 10
for j in range( 0 , x , 1 ):
n = j + n
elapsed_time = time() - start_time
print("Tiempo transcurrido: %0.10f segundos." % elapsed_time)
return n
for entrada in range(100, 1100, 100):
ejemplo5(entrada)
# T(x) = 1 + x
from time import time
def ejemplo6( n ):
start_time = time()
data=[[[1 for x in range(n)] for x in range(n)]
for x in range(n)]
suma = 0
for d in range(n):
for r in range(n):
for c in range(n):
suma += data[d][r][c]
elapsed_time = time() - start_time
print("Tiempo transcurrido: %0.10f segundos." % elapsed_time)
return suma
for entrada in range(100, 1100, 100):
ejemplo6(entrada)
# T(n) = 2 + n^3
###Output
Tiempo transcurrido: 0.1271543503 segundos.
Tiempo transcurrido: 1.1214296818 segundos.
Tiempo transcurrido: 3.6337397099 segundos.
Tiempo transcurrido: 8.6273136139 segundos.
Tiempo transcurrido: 17.5303323269 segundos.
Tiempo transcurrido: 33.2833490372 segundos.
Tiempo transcurrido: 51.7544102669 segundos.
Tiempo transcurrido: 80.1667358875 segundos.
Tiempo transcurrido: 109.7234559059 segundos.
Tiempo transcurrido: 156.1047496796 segundos.
|
Platypus StableSwap/PlatypusFinance.ipynb
|
###Markdown
Quick Platypus Tokenomics calculation wimwam.eth
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator
# protocol-level constants
MAX_SUPPLY = 3 * 10**8 # 300,000,000
LIQUIDITY_MINING_ALLOCATION = 0.4 # 40% of above
BASE_POOL_ALLOCATION = .3
BOOSTING_POOL_ALLOCATION = .5
AVAX_PTP_POOL = .2
HOURLY_STAKED_PTP_vePTP_YIELD = 0.014
MAX_vePTP_TO_STAKED_PTP_RATIO = 30
# market-level constants
CIRCULATING_MARKET_CAP = 10 ** 7 # assume 10MM MC for now
TVL_TO_CMC_RATIO = 5 # TVL 5x the size of token CMC (curve's TVL is >12x)
TVL = TVL_TO_CMC_RATIO * CIRCULATING_MARKET_CAP
PERCENT_COINS_CIRCULATING = .035 + .035 + LIQUIDITY_MINING_ALLOCATION
PTP_PRICE = CIRCULATING_MARKET_CAP / (MAX_SUPPLY * PERCENT_COINS_CIRCULATING)
FDMC = PTP_PRICE * MAX_SUPPLY
PERCENT_PTP_STAKED = .4
PTP_STAKED = MAX_SUPPLY * PERCENT_COINS_CIRCULATING * PERCENT_PTP_STAKED
print(f"Calculations consider PTP/USD: ${round(PTP_PRICE, 3)}\n" +
f"Reflecting a FDMC of \t${round(FDMC / 10**6)}MM " +
f"({round(PERCENT_COINS_CIRCULATING * 100)}% of coins available)\n" +
f"and implying TVL of \t${round(TVL / 10**6)}MM " +
f"(Mcap/TVL: {round(1 / TVL_TO_CMC_RATIO, 4)})\n" +
f"with {round(PTP_STAKED / 10**6, 2)}MM PTP staked for vePTP ({round(PERCENT_PTP_STAKED * 100)}%)")
# Stablecoin bankroll of 0 to 10,000 USD
STABLES_MIN = 0
STABLES_MAX = 10000
N_STEPS = 100
stable_deposit_range = np.arange(STABLES_MIN, STABLES_MAX,
(STABLES_MAX - STABLES_MIN) / N_STEPS)
# Allocating some percent of bankroll to market-buying PTP for staking
MIN_BANKROLL_PROPORTION_FOR_PTP = 0
MAX_BANKROLL_PROPORTION_FOR_PTP = 0.2 # Max of 20%
ptp_market_buy_bankroll_proportion = np.arange(MIN_BANKROLL_PROPORTION_FOR_PTP,
MAX_BANKROLL_PROPORTION_FOR_PTP,
MAX_BANKROLL_PROPORTION_FOR_PTP / N_STEPS)
def boosted_pool_emission_rate(your_stable_deposit, vePTP_held, other_deposit_weights):
your_boosted_pool_weight = np.sqrt(your_stable_deposit * vePTP_held)
percentage = your_boosted_pool_weight / other_deposit_weights
return percentage
def base_pool_emission_rate(your_stable_deposit, other_stable_deposits):
total_deposits = other_stable_deposits + your_stable_deposit
percentage = your_stable_deposit / total_deposits
return percentage
# testing using the medium cases
print(f'original: {round(boosted_pool_emission_rate(1000, 0, 3*10**5))}%')
print(f'case 1: {round(boosted_pool_emission_rate(1000, 200, 3*10**5) * 100, 3)}%')
print(f'case 2: {round(boosted_pool_emission_rate(1500, 200, 3*10**5) * 100, 3)}%')
print(f'case 3: {round(boosted_pool_emission_rate(900, 800, 3*10**5) * 100, 3)}%')
# define function with vectorize decorator for extensibility
@np.vectorize
def total_emissions_rate(stable_bankroll,
ptp_marketbuy_proportion,
staking_hours = 24,
other_deposits = CIRCULATING_MARKET_CAP / .2 # assume TVL is 5x the market cap (curve's is ~12.5x)
):
'''
stable_bankroll: total USD value of the stables you'd invest in the Platypus protocol
ptp_marketbuy_proportion: proportion of stable_bankroll you'd use to marketbuy PTP for staking to vePTP
staking_hours: how long you'd spend generating vePTP from staked PTP (default: 1 day)
    returns the share of total PTP emissions you'd receive, given the constants defined earlier in the notebook.
'''
n_PTP = (stable_bankroll * ptp_marketbuy_proportion) / PTP_PRICE
n_vePTP = staking_hours * HOURLY_STAKED_PTP_vePTP_YIELD * n_PTP
stable_deposit = stable_bankroll * (1 - ptp_marketbuy_proportion)
# calculating lower bound on total deposit weights:
# assume all other deposits are from one wallet with all other staked PTP
# and it's been staking as long as you have
total_deposit_weights = PTP_STAKED * HOURLY_STAKED_PTP_vePTP_YIELD * staking_hours
boosted = boosted_pool_emission_rate(stable_deposit, n_vePTP, total_deposit_weights)
base = base_pool_emission_rate(stable_deposit, TVL - stable_deposit)
return (BOOSTING_POOL_ALLOCATION * boosted) + (BASE_POOL_ALLOCATION * base)
# Create the mesh
stable_bankroll, ptp_proportion = np.meshgrid(stable_deposit_range, ptp_market_buy_bankroll_proportion)
returns = total_emissions_rate(stable_bankroll, ptp_proportion)
# plotting time
fig, ax = plt.subplots(subplot_kw={"projection": "3d"}, figsize=(18,9))
manifold = ax.plot_surface(stable_bankroll, ptp_proportion, returns,
cmap=cm.coolwarm, linewidth=0, antialiased=False)
fig.colorbar(manifold, shrink=0.5, aspect=5)
plt.show()
###Output
_____no_output_____
|
docs/beta/notebooks/PrototypingWithPython.ipynb
|
###Markdown
Prototyping with Python_This is the manuscript of Andreas Zeller's keynote"Coding Effective Testing Tools Within Minutes" at the TAIC PART 2020 conference._ In our [Fuzzing Book](index.ipynb), we use Python to implement automated testing techniques, and also as the language for most of our test subjects. Why Python? The short answer is> Python made us amazingly _productive_. Most techniques in this book took **2-3 days** to implement. This is about **10-20 times faster** than for "classic" languages like C or Java.A factor of 10–20 in productivity is enormous, almost ridiculous. Why is that so, and which consequences does this have for research and teaching?In this essay, we will explore some of the reasons, prototyping a _symbolic test generator_ from scratch. This normally would be considered a very difficult task, taking months to build. Yet, developing the code in this chapter took less than two hours – and explaining it takes less than 20 minutes.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("IAreRIID9lM")
###Output
_____no_output_____
###Markdown
Python is EasyPython is a high-level language that allows one to focus on the actual _algorithms_ rather than how individual bits and bytes are passed around in memory. For this book, this is important: We want to focus on how individual techniques work, and not so much their optimization. Focusing on algorithms allows you to toy and tinker with them, and quickly develop your own. Once you have found out how to do things, you can still port your approach to some other language or specialized setting. As an example, take the (in)famous _triangle_ program, which classifies a triangle of lengths $a$, $b$, $c$ into one of three categories. It reads like pseudocode; yet, we can easily execute it.
###Code
def triangle(a, b, c):
if a == b:
if b == c:
return 'equilateral'
else:
return 'isosceles #1'
else:
if b == c:
return 'isosceles #2'
else:
if a == c:
return 'isosceles #3'
else:
return 'scalene'
###Output
_____no_output_____
###Markdown
Here's an example of executing the `triangle()` function:
###Code
triangle(2, 3, 4)
###Output
_____no_output_____
###Markdown
For the remainder of this chapter, we will use the `triangle()` function as ongoing example for a program to be tested. Of course, the complexity of `triangle()` is a far cry from large systems, and what we show in this chapter will not apply to, say, an ecosystem of thousands of intertwined microservices. Its point, however, is to show how easy certain techniques can be – if you have the right language and environment. Fuzzing is as Easy as Always If you want to test `triangle()` with random values, that's fairly easy to do. Just bring along one of the Python random number generators and throw them into `triangle()`.
###Code
from random import randrange
for i in range(10):
a = randrange(1, 10)
b = randrange(1, 10)
c = randrange(1, 10)
t = triangle(a, b, c)
print(f"triangle({a}, {b}, {c}) = {repr(t)}")
###Output
triangle(1, 6, 1) = 'isosceles #3'
triangle(2, 1, 3) = 'scalene'
triangle(1, 5, 8) = 'scalene'
triangle(3, 2, 7) = 'scalene'
triangle(2, 6, 3) = 'scalene'
triangle(7, 8, 6) = 'scalene'
triangle(5, 7, 7) = 'isosceles #2'
triangle(3, 8, 7) = 'scalene'
triangle(5, 1, 8) = 'scalene'
triangle(8, 4, 8) = 'isosceles #3'
###Markdown
So far, so good – but that's something you can do in pretty much any programming language. What is it that makes Python special? Dynamic Analysis in Python: So Easy it HurtsDynamic analysis is the ability to track what is happening during program execution. The Python `settrace()` mechanism allows you to track all code lines, all variables, all values, as the program executes – and all this in a handful of lines of code. Our `Coverage` class from [the chapter on coverage](Coverage.ipynb) shows how to capture a trace of all lines executed in five lines of code; such a trace easily converts into sets of lines or branches executed. With two more lines, you can easily track all functions, arguments, variable values, too – see for instance our [chapter on dynamic invariants](DynamicInvariants). And you can even access the source code of individual functions (and print it out, too!) All this takes 10, maybe 20 minutes to implement. Here is a piece of Python that does it all. We track lines executed, and for every line, we print its source codes and the current values of all local variables:
###Code
import sys
import inspect
def traceit(frame, event, arg):
function_code = frame.f_code
function_name = function_code.co_name
lineno = frame.f_lineno
vars = frame.f_locals
source_lines, starting_line_no = inspect.getsourcelines(frame.f_code)
loc = f"{function_name}:{lineno} {source_lines[lineno - starting_line_no].rstrip()}"
vars = ", ".join(f"{name} = {vars[name]}" for name in vars)
print(f"{loc:50} ({vars})")
return traceit
###Output
_____no_output_____
###Markdown
The function `sys.settrace()` registers `traceit()` as a trace function; it will then trace the given invocation of `triangle()`:
###Code
def triangle_traced():
sys.settrace(traceit)
triangle(2, 2, 1)
sys.settrace(None)
triangle_traced()
###Output
triangle:1 def triangle(a, b, c): (a = 2, b = 2, c = 1)
triangle:2 if a == b: (a = 2, b = 2, c = 1)
triangle:3 if b == c: (a = 2, b = 2, c = 1)
triangle:6 return 'isosceles #1' (a = 2, b = 2, c = 1)
triangle:6 return 'isosceles #1' (a = 2, b = 2, c = 1)
###Markdown
In comparison, try to build such a dynamic analysis for, say, C. You can either _instrument_ the code to track all lines executed and record variable values, storing the resulting info in some database. This will take you _weeks,_ if not _months_ to implement. You can also run your code through a debugger (step-print-step-print-step-print); but again, programming the interaction can take days. And once you have the first results, you'll probably realize you need something else or better, so you go back to the drawing board. Not fun. Together with a dynamic analysis such as the one above, you can make fuzzing much smarter. Search-based testing, for instance, evolves a population of inputs towards a particular goal, such as coverage. With a good dynamic analysis, you can quickly implement search-based strategies for arbitrary goals. Static Analysis in Python: Still EasyStatic analysis refers to the ability to analyze _program code_ without actually executing it. Statically analyzing Python code to deduce any property can be a nightmare, because the language is so highly dynamic. (More on that below.)If your static analysis does not have to be _sound_, – for instance, because you only use it to _support_ and _guide_ another technique such as testing – then a static analysis in Python can be very simple. The `ast` module allows you to turn any Python function into an abstract syntax tree (AST), which you then can traverse as you like. Here's the AST for our `triangle()` function:
###Code
from bookutils import rich_output
import ast
if rich_output():
# Normally, this will do
from showast import show_ast
else:
def show_ast(tree):
ast.dump(tree, indent=4)
triangle_source = inspect.getsource(triangle)
triangle_ast = ast.parse(triangle_source)
show_ast(triangle_ast)
###Output
_____no_output_____
###Markdown
Now suppose one wants to identify all `triangle` branches and their conditions using static analysis. You would traverse the AST, searching for `If` nodes, and take their first child (the condition). This is easy as well:
###Code
def collect_conditions(tree):
conditions = []
def traverse(node):
if isinstance(node, ast.If):
cond = ast.unparse(node.test).strip()
conditions.append(cond)
for child in ast.iter_child_nodes(node):
traverse(child)
traverse(tree)
return conditions
###Output
_____no_output_____
###Markdown
Here are the four `if` conditions occurring in the `triangle()` code:
###Code
collect_conditions(triangle_ast)
###Output
_____no_output_____
###Markdown
Not only can we extract individual program elements, we can also change them at will and convert the tree back into source code. Program transformations (say, for instrumentation or mutation analysis) are a breeze. The above code took five minutes to write. Again, try that in Java or C. Symbolic Reasoning in Python: There's a Package for ThatLet's get back to testing. We have shown how to extract conditions from code. To reach a particular location in the `triangle()` function, one needs to find a solution for the _path conditions_ leading to that branch. To reach the last line in `triangle()` (the `'scalene'` branch), we have to find a solution for $$a \ne b \land b \ne c \land a \ne c$$We can make use of a _constraint_ solver for this, such as Microsoft's [_Z3_ solver](https://github.com/Z3Prover/z3):
###Code
import z3
###Output
_____no_output_____
###Markdown
Let us use Z3 to find a solution for the `'scalene'` branch condition:
###Code
a = z3.Int('a')
b = z3.Int('b')
c = z3.Int('c')
s = z3.Solver()
s.add(z3.And(a > 0, b > 0, c > 0)) # Triangle edges are positive
s.add(z3.And(a != b, b != c, a != c)) # Our condition
s.check()
###Output
_____no_output_____
###Markdown
Z3 has shown us that there is a solution ("sat" = "satisfiable"). Let us get one:
###Code
m = s.model()
m
###Output
_____no_output_____
###Markdown
We can use this solution right away for testing the `triangle()` function and find that it indeed covers the `'scalene'` branch. The method `as_long()` converts the Z3 results into numerical values.
###Code
triangle(m[a].as_long(), m[b].as_long(), m[c].as_long())
###Output
_____no_output_____
###Markdown
A Symbolic Test GeneratorWith what we have seen, we can now build a _symbolic test generator_ – a tool that attempts to systematically create test inputs that cover all paths. Let us find all conditions we need to solve, by exploring all paths in the tree. We turn these paths to Z3 format right away:
###Code
def collect_path_conditions(tree):
paths = []
def traverse_if_children(children, context, cond):
old_paths = len(paths)
for child in children:
traverse(child, context + [cond])
if len(paths) == old_paths:
paths.append(context + [cond])
def traverse(node, context):
if isinstance(node, ast.If):
cond = ast.unparse(node.test).strip()
not_cond = "z3.Not(" + cond + ")"
traverse_if_children(node.body, context, cond)
traverse_if_children(node.orelse, context, not_cond)
else:
for child in ast.iter_child_nodes(node):
traverse(child, context)
traverse(tree, [])
return ["z3.And(" + ", ".join(path) + ")" for path in paths]
path_conditions = collect_path_conditions(triangle_ast)
path_conditions
###Output
_____no_output_____
###Markdown
Now all we need to do is to feed these constraints into Z3. We see that we easily cover all branches:
###Code
for path_condition in path_conditions:
s = z3.Solver()
s.add(a > 0, b > 0, c > 0)
eval(f"s.check({path_condition})")
m = s.model()
print(m, triangle(m[a].as_long(), m[b].as_long(), m[c].as_long()))
###Output
[c = 1, a = 1, b = 1] equilateral
[c = 2, a = 1, b = 1] isosceles #1
[c = 2, a = 1, b = 2] isosceles #2
[c = 2, a = 2, b = 1] isosceles #3
[c = 3, a = 1, b = 2] scalene
###Markdown
Success! We have covered all branches of the triangle program! Now, the above is still very limited – and tailored to the capabilities of the `triangle()` code. A full implementation would actually* translate entire Python conditions into Z3 syntax (if possible),* handle more control flow constructs such as returns, assertions, exceptions* and half a million things more (loops, calls, you name it)Some of these may not be supported by the Z3 theories. To make it easier for a constraint solver to find solutions, you could also provide _concrete values_ observed from earlier executions that already are known to reach specific paths in the program. Such concrete values would be gathered from the tracing mechanisms above, and boom: you would have a pretty powerful and scalable concolic (concrete-symbolic) test generator. Now, the above might take you a day or two, and as you expand your test generator beyond `triangle()`, you will add more and more features. The nice part is that every of these features you will invent might actually be a research contribution – something nobody has thought of before. Whatever idea you might have: you can quickly implement it and try it out in a prototype. And again, this will be orders of magnitude faster than for conventional languages. Things that will not workPython has a reputation for being hard to analyze statically, and this is true; its dynamic nature makes it hard for traditional static analysis to exclude specific behaviors. We see Python as a great language for prototyping automated testing and dynamic analysis techniques, and as a good language to illustrate _lightweight_ static and symbolic analysis techniques that would be used to _guide_ and _support_ other techniques (say, generating software tests).But if you want to _prove_ specific properties (or the absence thereof) by static analysis of code only, Python is a challenge, to say the least; and there are areas for which we would definitely _warn_ against using it. (No) Type CheckingUsing Python to demonstrate _static type checking_ will be suboptimal (to say the least) because, well, Python programs typically do not come with type annotations. You _can_, of course, annotate variables with types, as we assume in the [chapter on Symbolic Fuzzing](SymbolicFuzzer.ipynb):
###Code
def typed_triangle(a: int, b: int, c: int) -> str:
return triangle(a, b, c)
###Output
_____no_output_____
###Markdown
Most real-world Python code will not be annotated with types, though. While you can also _retrofit them_, as discussed in [our chapter on dynamic invariants](DynamicInvariants.ipynb), Python simply is not a good domain to illustrate type checking. If you want to show the beauty and usefulness of type checking, use a strongly typed language like Java, ML, or Haskell. (No) Program ProofsPython is a highly dynamic language in which you can change _anything_ at runtime. It is no problem assigning a variable different types, as in
###Code
x = 42
x = "a string"
###Output
_____no_output_____
###Markdown
or change the existence (and scope) of a variable depending on some runtime condition:
###Code
p1, p2 = True, False
if p1:
x = 42
if p2:
del x
# Does x exist at this point?
###Output
_____no_output_____
###Markdown
Prototyping with Python_This is the manuscript of Andreas Zeller's keynote"Coding Effective Testing Tools Within Minutes" at the TAIC PART 2020 conference._ In our [Fuzzing Book](index.ipynb), we use Python to implement automated testing techniques, and also as the language for most of our test subjects. Why Python? The short answer is> Python made us amazingly _productive_. Most techniques in this book took **2-3 days** to implement. This is about **10-20 times faster** than for "classic" languages like C or Java.A factor of 10–20 in productivity is enormous, almost ridiculous. Why is that so, and which consequences does this have for research and teaching?In this essay, we will explore some of the reasons, prototyping a _symbolic test generator_ from scratch. This normally would be considered a very difficult task, taking months to build. Yet, developing the code in this chapter took less than two hours – and explaining it takes less than 20 minutes.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("IAreRIID9lM")
###Output
_____no_output_____
###Markdown
Python is EasyPython is a high-level language that allows one to focus on the actual _algorithms_ rather than how individual bits and bytes are passed around in memory. For this book, this is important: We want to focus on how individual techniques work, and not so much their optimization. Focusing on algorithms allows you to toy and tinker with them, and quickly develop your own. Once you have found out how to do things, you can still port your approach to some other language or specialized setting. As an example, take the (in)famous _triangle_ program, which classifies a triangle of lengths $a$, $b$, $c$ into one of three categories. It reads like pseudocode; yet, we can easily execute it.
###Code
def triangle(a, b, c):
if a == b:
if b == c:
return 'equilateral'
else:
return 'isosceles #1'
else:
if b == c:
return 'isosceles #2'
else:
if a == c:
return 'isosceles #3'
else:
return 'scalene'
###Output
_____no_output_____
###Markdown
Here's an example of executing the `triangle()` function:
###Code
triangle(2, 3, 4)
###Output
_____no_output_____
###Markdown
 For the remainder of this chapter, we will use the `triangle()` function as an ongoing example for a program to be tested. Of course, the complexity of `triangle()` is a far cry from that of large systems, and what we show in this chapter will not apply to, say, an ecosystem of thousands of intertwined microservices. Its point, however, is to show how easy certain techniques can be – if you have the right language and environment. Fuzzing is as Easy as Always If you want to test `triangle()` with random values, that's fairly easy to do. Just bring along one of Python's random number generators and feed its values into `triangle()`.
###Code
from random import randrange
for i in range(10):
a = randrange(1, 10)
b = randrange(1, 10)
c = randrange(1, 10)
t = triangle(a, b, c)
print(f"triangle({a}, {b}, {c}) = {repr(t)}")
###Output
triangle(1, 6, 1) = 'isosceles #3'
triangle(2, 1, 3) = 'scalene'
triangle(1, 5, 8) = 'scalene'
triangle(3, 2, 7) = 'scalene'
triangle(2, 6, 3) = 'scalene'
triangle(7, 8, 6) = 'scalene'
triangle(5, 7, 7) = 'isosceles #2'
triangle(3, 8, 7) = 'scalene'
triangle(5, 1, 8) = 'scalene'
triangle(8, 4, 8) = 'isosceles #3'
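###Markdown
How well does plain random fuzzing exercise the individual branches, though? As a quick, hedged addition to the experiment above (not part of the original text), we can count the outcomes of a few thousand random inputs; branches that require equal values, such as `'equilateral'`, are reached only rarely:
###Code
from collections import Counter
outcomes = Counter(triangle(randrange(1, 10), randrange(1, 10), randrange(1, 10))
                   for i in range(10000))
outcomes
###Output
_____no_output_____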
###Markdown
 So far, so good – but that's something you can do in pretty much any programming language. What is it that makes Python special? Dynamic Analysis in Python: So Easy it Hurts Dynamic analysis is the ability to track what is happening during program execution. The Python `sys.settrace()` mechanism allows you to track all code lines, all variables, all values, as the program executes – and all this in a handful of lines of code. Our `Coverage` class from [the chapter on coverage](Coverage.ipynb) shows how to capture a trace of all lines executed in five lines of code; such a trace easily converts into sets of lines or branches executed. With two more lines, you can easily track all functions, arguments, and variable values, too – see for instance our [chapter on dynamic invariants](DynamicInvariants.ipynb). And you can even access the source code of individual functions (and print it out, too!) All this takes 10, maybe 20 minutes to implement. Here is a piece of Python that does it all. We track lines executed, and for every line, we print its source code and the current values of all local variables:
###Code
import sys
import inspect
def traceit(frame, event, arg):
    # Invoked by the interpreter for every traced event (call, line, return, ...)
    function_code = frame.f_code
    function_name = function_code.co_name
    lineno = frame.f_lineno
    vars = frame.f_locals
    # Look up the source line currently being executed...
    source_lines, starting_line_no = inspect.getsourcelines(frame.f_code)
    loc = f"{function_name}:{lineno} {source_lines[lineno - starting_line_no].rstrip()}"
    # ...and format the current values of all local variables
    vars = ", ".join(f"{name} = {vars[name]}" for name in vars)
    print(f"{loc:50} ({vars})")
    return traceit  # keep tracing by returning the trace function itself
###Output
_____no_output_____
###Markdown
The function `sys.settrace()` registers `traceit()` as a trace function; it will then trace the given invocation of `triangle()`:
###Code
def triangle_traced():
sys.settrace(traceit)
triangle(2, 2, 1)
sys.settrace(None)
triangle_traced()
###Output
triangle:1 def triangle(a, b, c): (c = 1, b = 2, a = 2)
triangle:2 if a == b: (c = 1, b = 2, a = 2)
triangle:3 if b == c: (c = 1, b = 2, a = 2)
triangle:6 return 'isosceles #1' (c = 1, b = 2, a = 2)
triangle:6 return 'isosceles #1' (c = 1, b = 2, a = 2)
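###Markdown
To illustrate the claim above that such a trace "easily converts into sets of lines or branches executed", here is a minimal coverage collector built on the same `sys.settrace()` mechanism. (This is an illustrative sketch added here; it is not the book's `Coverage` class.)
###Code
covered_lines = set()
def coverage_traceit(frame, event, arg):
    # Record a (function name, line number) pair for every executed line
    if event == 'line':
        covered_lines.add((frame.f_code.co_name, frame.f_lineno))
    return coverage_traceit
def triangle_coverage(a, b, c):
    sys.settrace(coverage_traceit)
    triangle(a, b, c)
    sys.settrace(None)
    return covered_lines
triangle_coverage(2, 2, 1)
###Output
_____no_output_____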
###Markdown
 In comparison, try to build such a dynamic analysis for, say, C. You can _instrument_ the code to track all lines executed and record variable values, storing the resulting info in some database. This will take you _weeks_, if not _months_, to implement. You can also run your code through a debugger (step-print-step-print-step-print); but again, programming the interaction can take days. And once you have the first results, you'll probably realize you need something else or better, so you go back to the drawing board. Not fun. Together with a dynamic analysis such as the one above, you can make fuzzing much smarter. Search-based testing, for instance, evolves a population of inputs towards a particular goal, such as coverage. With a good dynamic analysis, you can quickly implement search-based strategies for arbitrary goals. Static Analysis in Python: Still Easy Static analysis refers to the ability to analyze _program code_ without actually executing it. Statically analyzing Python code to deduce any property can be a nightmare, because the language is so highly dynamic. (More on that below.) If your static analysis does not have to be _sound_ – for instance, because you only use it to _support_ and _guide_ another technique such as testing – then a static analysis in Python can be very simple. The `ast` module allows you to turn any Python function into an abstract syntax tree (AST), which you then can traverse as you like. Here's the AST for our `triangle()` function:
###Code
from bookutils import rich_output
import ast
import astor
if rich_output():
# Normally, this will do
from showast import show_ast
else:
    def show_ast(tree):
        print(ast.dump(tree))  # text-only fallback: print the AST structure
triangle_source = inspect.getsource(triangle)
triangle_ast = ast.parse(triangle_source)
show_ast(triangle_ast)
###Output
_____no_output_____
###Markdown
Now suppose one wants to identify all `triangle` branches and their conditions using static analysis. You would traverse the AST, searching for `If` nodes, and take their first child (the condition). This is easy as well:
###Code
def collect_conditions(tree):
conditions = []
def traverse(node):
if isinstance(node, ast.If):
cond = astor.to_source(node.test).strip()
conditions.append(cond)
for child in ast.iter_child_nodes(node):
traverse(child)
traverse(tree)
return conditions
###Output
_____no_output_____
###Markdown
Here are the four `if` conditions occurring in the `triangle()` code:
###Code
collect_conditions(triangle_ast)
###Output
_____no_output_____
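###Markdown
The AST is not only readable, it is also writable. As an illustrative sketch added here (assuming the `triangle_ast` and the `astor` import from above), the following `ast.NodeTransformer` flips every `==` comparison into `!=` and regenerates the source code – the essence of a mutation operator:
###Code
import copy
class FlipEquals(ast.NodeTransformer):
    # Replace every `==` operator with `!=` in all comparisons
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.NotEq() if isinstance(op, ast.Eq) else op for op in node.ops]
        return node
mutated_ast = FlipEquals().visit(copy.deepcopy(triangle_ast))
print(astor.to_source(mutated_ast))
###Output
_____no_output_____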
###Markdown
 Not only can we extract individual program elements, we can also change them at will and convert the tree back into source code. Program transformations (say, for instrumentation or mutation analysis) are a breeze. The condition-collecting code above took five minutes to write. Again, try that in Java or C. Symbolic Reasoning in Python: There's a Package for That Let's get back to testing. We have shown how to extract conditions from code. To reach a particular location in the `triangle()` function, one needs to find a solution for the _path conditions_ leading to that branch. To reach the last line in `triangle()` (the `'scalene'` branch), we have to find a solution for $$a \ne b \land b \ne c \land a \ne c$$ We can make use of a _constraint_ solver for this, such as Microsoft's [_Z3_ solver](https://github.com/Z3Prover/z3):
###Code
import z3
###Output
_____no_output_____
###Markdown
Let us use Z3 to find a solution for the `'scalene'` branch condition:
###Code
a = z3.Int('a')
b = z3.Int('b')
c = z3.Int('c')
s = z3.Solver()
s.add(z3.And(a > 0, b > 0, c > 0)) # Triangle edges are positive
s.add(z3.And(a != b, b != c, a != c)) # Our condition
s.check()
###Output
_____no_output_____
###Markdown
Z3 has shown us that there is a solution ("sat" = "satisfiable"). Let us get one:
###Code
m = s.model()
m
###Output
_____no_output_____
###Markdown
We can use this solution right away for testing the `triangle()` function and find that it indeed covers the `'scalene'` branch. The method `as_long()` converts the Z3 results into numerical values.
###Code
triangle(m[a].as_long(), m[b].as_long(), m[c].as_long())
###Output
_____no_output_____
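###Markdown
One more hedged aside (an addition to the original text) before we build a full test generator below: Z3 reports _unsatisfiable_ conditions just as directly. If we additionally demand `a == b`, which contradicts the scalene condition, the answer is `unsat`, which is exactly what a test generator needs to recognize infeasible paths:
###Code
s_infeasible = z3.Solver()
s_infeasible.add(z3.And(a > 0, b > 0, c > 0))
s_infeasible.add(z3.And(a != b, b != c, a != c))  # the scalene condition from above
s_infeasible.add(a == b)                          # contradicts a != b
s_infeasible.check()
###Output
_____no_output_____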
###Markdown
 A Symbolic Test Generator With what we have seen, we can now build a _symbolic test generator_ – a tool that attempts to systematically create test inputs that cover all paths. Let us find all conditions we need to solve by exploring all paths in the tree. We turn these paths into Z3 format right away:
###Code
def collect_path_conditions(tree):
    paths = []
    def traverse_if_children(children, context, cond):
        # Visit all children under the given branch condition `cond`.
        # If no nested `if` below added a path, we have reached a leaf:
        # record the completed path condition.
        old_paths = len(paths)
        for child in children:
            traverse(child, context + [cond])
        if len(paths) == old_paths:
            paths.append(context + [cond])
    def traverse(node, context):
        if isinstance(node, ast.If):
            cond = astor.to_source(node.test).strip()
            not_cond = "z3.Not" + cond
            # The `then` branch assumes the condition, the `else` branch its negation
            traverse_if_children(node.body, context, cond)
            traverse_if_children(node.orelse, context, not_cond)
        else:
            for child in ast.iter_child_nodes(node):
                traverse(child, context)
    traverse(tree, [])
    # Conjoin the conditions along each path into one Z3 constraint string
    return ["z3.And(" + ", ".join(path) + ")" for path in paths]
path_conditions = collect_path_conditions(triangle_ast)
path_conditions
###Output
_____no_output_____
###Markdown
Now all we need to do is to feed these constraints into Z3. We see that we easily cover all branches:
###Code
for path_condition in path_conditions:
    s = z3.Solver()
    s.add(a > 0, b > 0, c > 0)
    # Each path condition is a string of the form "z3.And(...)" built above;
    # eval() turns it into a Z3 expression and passes it to the solver
    eval(f"s.check({path_condition})")
    m = s.model()
    print(m, triangle(m[a].as_long(), m[b].as_long(), m[c].as_long()))
###Output
[b = 1, a = 1, c = 1] equilateral
[b = 1, a = 1, c = 2] isosceles #1
[b = 2, a = 1, c = 2] isosceles #2
[b = 2, a = 1, c = 1] isosceles #3
[b = 3, a = 1, c = 2] scalene
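###Markdown
As a final hedged sketch (again an addition, not part of the original chapter), the closing discussion below suggests reusing _concrete values_ observed in earlier executions. A first step in that direction: before calling the solver, check whether an already observed input satisfies the path condition, and only solve if none does. The names `observed_inputs` and `solve_or_reuse()` are made up for illustration.
###Code
observed_inputs = [(2, 3, 4), (2, 2, 1), (1, 1, 1)]  # e.g. gathered by the tracer above
def solve_or_reuse(path_condition):
    condition = eval(path_condition)  # turn the string into a Z3 expression, as before
    for va, vb, vc in observed_inputs:
        # Substitute the concrete values and see whether the condition evaluates to true
        concrete = z3.simplify(z3.substitute(condition,
            (a, z3.IntVal(va)), (b, z3.IntVal(vb)), (c, z3.IntVal(vc))))
        if z3.is_true(concrete):
            return va, vb, vc  # reuse the observed input; no solver call needed
    s = z3.Solver()
    s.add(a > 0, b > 0, c > 0, condition)
    s.check()
    m = s.model()
    return m[a].as_long(), m[b].as_long(), m[c].as_long()
for path_condition in path_conditions:
    print(path_condition, "->", solve_or_reuse(path_condition))
###Output
_____no_output_____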
###Markdown
 Success! We have covered all branches of the triangle program! Now, the above is still very limited – and tailored to the capabilities of the `triangle()` code. A full implementation would actually
* translate entire Python conditions into Z3 syntax (if possible),
* handle more control flow constructs such as returns, assertions, exceptions,
* and half a million things more (loops, calls, you name it).
Some of these may not be supported by the Z3 theories. To make it easier for a constraint solver to find solutions, you could also provide _concrete values_ observed from earlier executions that already are known to reach specific paths in the program. Such concrete values would be gathered from the tracing mechanisms above, and boom: you would have a pretty powerful and scalable concolic (concrete-symbolic) test generator. Now, the above might take you a day or two, and as you expand your test generator beyond `triangle()`, you will add more and more features. The nice part is that every one of these features you invent might actually be a research contribution – something nobody has thought of before. Whatever idea you might have: you can quickly implement it and try it out in a prototype. And again, this will be orders of magnitude faster than for conventional languages. Things that will not work Python has a reputation for being hard to analyze statically, and this is true; its dynamic nature makes it hard for traditional static analysis to exclude specific behaviors. We see Python as a great language for prototyping automated testing and dynamic analysis techniques, and as a good language to illustrate _lightweight_ static and symbolic analysis techniques that would be used to _guide_ and _support_ other techniques (say, generating software tests). But if you want to _prove_ specific properties (or the absence thereof) by static analysis of code only, Python is a challenge, to say the least; and there are areas for which we would definitely _warn_ against using it. (No) Type Checking Using Python to demonstrate _static type checking_ will be suboptimal (to say the least) because, well, Python programs typically do not come with type annotations. You _can_, of course, annotate variables with types, as we assume in the [chapter on Symbolic Fuzzing](SymbolicFuzzer.ipynb):
###Code
def typed_triangle(a: int, b: int, c: int) -> str:
return triangle(a, b, c)
###Output
_____no_output_____
###Markdown
 Most real-world Python code will not be annotated with types, though. While you can also _retrofit them_, as discussed in [our chapter on dynamic invariants](DynamicInvariants.ipynb), Python simply is not a good domain to illustrate type checking. If you want to show the beauty and usefulness of type checking, use a strongly typed language like Java, ML, or Haskell. (No) Program Proofs Python is a highly dynamic language in which you can change _anything_ at runtime. It is no problem assigning a variable different types, as in
###Code
x = 42
x = "a string"
###Output
_____no_output_____
###Markdown
or change the existence (and scope) of a variable depending on some runtime condition:
###Code
p1, p2 = True, False
if p1:
x = 42
if p2:
del x
# Does x exist at this point?
###Output
_____no_output_____