# 2 Data Acquisition
In this chapter we will discuss data acquisition and data formatting for four online Assyriological projects: [ORACC](http://oracc.org) (2.1), [ETCSL](https://etcsl.orinst.ox.ac.uk/) (2.2), [CDLI](http://cdli.ucla.edu) (2.3), and [BDTNS](http://bdtns.filol.csic.es/) (2.4).
The data in [CDLI](http://cdli.ucla.edu) and [BDTNS](http://bdtns.filol.csic.es/) are made available in raw-text format, with transliteration only. For instance (ATF text format as used by [CDLI](http://cdli.ucla.edu)):
```{admonition} ATF
:class: tip, dropdown
ATF is short for ASCII Text Format. [ORACC](http://oracc.org) and [CDLI](http://cdli.ucla.edu) use different versions of the ATF format. The various symbols and conventions are explained [here](http://oracc.org/doc/help/editinginatf/cdliatf/).
```
```
&P100001 = AAS 013
#atf: lang sux
@tablet
@obverse
@column 1
$ beginning broken
1'. a2-bi u4 [...] 5(u) 4(disz) 2/3(disz)-kam
2'. 8(gesz2) 3(u) 5(disz) gurusz u4 1(disz)-sze3
3'. si-i3-tum nig2-ka9-ak mu en-mah-gal-an-na ba-hun
4'. 2(asz) 2(barig) sze gur
```
This data format is easy to read for humans (those humans who know Sumerian), but less so for computers. It is necessary to tell the software which data elements belong to the text and which do not (for instance, line numbers and surface labels) and what the various non-textual elements mean. We will see examples of how such data sets may be used in sections 2.3 ([CDLI](http://cdli.ucla.edu)) and 2.4 ([BDTNS](http://bdtns.filol.csic.es/)). Section 2.4 will also demonstrate code for constructing a search engine for [BDTNS](http://bdtns.filol.csic.es/) that ignores sign values - that is, searching for `luh` will also find `sukkal`, etc. The code uses both [BDTNS](http://bdtns.filol.csic.es/) data and the [ORACC Global Sign List](http://oracc.org/ogsl), showing how data from different projects can be combined into a single tool.
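To give a flavor of what such preprocessing involves, here is a minimal sketch (not taken from the Compass code) that separates textual lines from structural markers in the ATF sample above. The prefix conventions (`&`, `#`, `@`, `$`) follow the ATF documentation linked in the tip box; the output structure is simply illustrative.
```
# A minimal sketch: split CDLI-style ATF lines into text lines and metadata.
# Line-type prefixes (&, #, @, $) follow the ATF conventions linked above;
# the output structure is just an illustration, not project code.
atf = """&P100001 = AAS 013
#atf: lang sux
@tablet
@obverse
@column 1
$ beginning broken
1'. a2-bi u4 [...] 5(u) 4(disz) 2/3(disz)-kam
2'. 8(gesz2) 3(u) 5(disz) gurusz u4 1(disz)-sze3"""

text_lines, metadata = [], []
for line in atf.splitlines():
    line = line.strip()
    if line.startswith(('&', '#', '@', '$')):
        metadata.append(line)              # object ID, protocol, surface, comment
    else:
        line_no, _, content = line.partition('. ')
        text_lines.append({'line': line_no, 'signs': content.split()})

print(metadata)
print(text_lines[0])
```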
The data in [ORACC](http://oracc.org) and [ETCSL](https://etcsl.orinst.ox.ac.uk/) are made available in [JSON](http://json.org) and [XML](http://xml.org), respectively. Those formats are very explicit and atomistic. They are less easy to read for humans, but are very flexible for computational usage and allow for multiple levels of annotation (with e.g. lexical, morphological, and graphemic information) at the same time. The data in [ORACC](http://oracc.org) and [ETCSL](https://etcsl.orinst.ox.ac.uk/) include lemmatization, linking each word to an entry in a glossary. The following is an example of a JSON file; one may click on any of the lines with an arrow to expose more or less of the hierarchical structure. The usage of JSON and XML files will be discussed in sections 2.1 and 2.2.
```
import json
import panel as pn
pn.extension()
with open('P100001.json', 'r', encoding='utf8') as p:
    P100001 = json.load(p)
json_object = pn.pane.JSON(P100001, name='P100001', depth=1, height=300, width=500, theme='light')
json_object
```
This represents the same text as the one shown in raw text format above ([P100001 = AAS 13](http://oracc.org/epsd2/P100001)), but in this case provided with lemmatization and explicit information on the various data types.
```{admonition} Full JSON file
:class: tip, dropdown
To see the full JSON file of P100001 click [here](https://github.com/niekveldhuis/compass/blob/master/2_Data_Acquisition/P100001.json)
```
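As a small illustration of how such a structured file can be queried, the sketch below walks the `P100001` object loaded above and collects every dictionary that contains a given key. Using the key `'f'` (where ORACC JSON keeps the lemmatization fields of a word form) is an assumption for illustration; the recursive walker itself is generic.
```
# A generic walker over nested JSON: collect every dict that contains `key`.
# Using 'f' as the key is an assumption about where ORACC stores word-level
# lemmatization data; adjust the key after inspecting the file.
def collect(node, key):
    found = []
    if isinstance(node, dict):
        if key in node:
            found.append(node[key])
        for value in node.values():
            found.extend(collect(value, key))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect(item, key))
    return found

lemmas = collect(P100001, 'f')
print(len(lemmas), 'lemmatized forms found')
print(lemmas[:3])
```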
The Compass project mostly deals with [ORACC](http://oracc.org) data, and much of this chapter will provide code and explanations for how to extract the various types of information that are included in the JSON files. The parsing of the [ETCSL](https://etcsl.orinst.ox.ac.uk/) XML files (section [2.2](2.2)) is, to some extent, redundant, because all of the [ETCSL](https://etcsl.orinst.ox.ac.uk/) data have been incorporated into [epsd2/literary](http://oracc.org/epsd2/literary) and can be parsed with the tools for regular [ORACC](http://oracc.org) projects.
Chapters 3-6 of Compass will work with [ORACC](http://oracc.org) data and will parse that data with the tools demonstrated and explained in section [2.1](2.1). Chapter 2 is not needed to follow along in those chapters. The present chapter is primarily meant for researchers who wish to pursue their own computational projects and need a deeper understanding of how the data is acquired and formatted.
# Text Generation with Neural Networks
Import the necessary packages for preprocessing, model building, etc. We follow the steps described in the theoretical part of this summer school:
0. Define Research Goal (already done)
1. Retrieve Data
2. Prepare Data
3. Explore Data
4. Model Data
5. Evaluate Model
6. Present and Automate Model
```
from keras.callbacks import LambdaCallback
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
from keras.models import load_model
from keras import backend as K
import numpy as np
import random
import sys
import io
```
# 1. Retrieve Data
Load your data! You can pick up data from anywhere, such as plain text, HTML, source code, etc.
You can either download it automatically with the Keras `get_file` function or download it manually and import it into this notebook.
## Example Data Set
[trump.txt](https://raw.githubusercontent.com/harshilkamdar/trump-tweets/master/trump.txt)
```
#path = get_file('trump.txt', origin='https://raw.githubusercontent.com/harshilkamdar/trump-tweets/master/trump.txt')
text = io.open('resources/shakespeare.txt', encoding='utf-8').read().lower()
print('corpus length:', len(text))
```
# 2. Prepare Data
As described in the theoretical part of this workshop, we need to convert our text into a numerical representation (here, one-hot encoded characters) that can be processed by the neural network defined later.
## 2.1. Create Classes
The goal of this step is to have a variable which contains the distinct characters of the text. Characters can be letters, digits, punctuation marks, new lines, spaces, etc.
### Example:
Let's assume we have the following text as input: "hallo. "
After the following step, we want to have all distinct characters, i.e.:
``[ "h", "a", "l", "o", ".", " " ] ``
```
chars = sorted(list(set(text)))
print('total chars:', len(chars))
```
## 2.2. Create Training Set
In the following section we need to create our training set based on our text. The idea is to map a sequence of characters to a class. In this case, a class is one of the distinct characters defined in the previous task. This means that a sequence of characters predicts the next character. This is important for the later model to know which characters come after specific sequences. The sequence length can be chosen freely, so try out different sequence lengths.
### Example:
Our text is still: "hallo. "
Sequence length: 2 (i.e. 2 characters predict the next character)
The result (training set) should be defined as follows:
``
Sequences --> Class
"ha" --> "l"
"al" --> "l"
"ll" --> "o"
"lo" --> "."
"o." --> " "
``
You can read the previous example like this: sequence "ha" predicts the next character "l", sequence "al" predicts the next character "l", and so on.
```
seqlen = 40  # Sequence length parameter
step = 5     # Determines how many characters the window is shifted through the text
sequences = []   # List of sequences
char_class = []  # Corresponding class (next character) of each sequence
for i in range(0, len(text) - seqlen, step):
    sequences.append(text[i: i + seqlen])
    char_class.append(text[i + seqlen])
print('#no sequences:', len(sequences))
```
## 2.3. Check your Data
Now that we have processed our data, it's time to understand what we have built so far.
```
for idx in range(len(sequences[:10])):
    print(sequences[idx], ":", char_class[idx])
# Print the 1st to 10th distinct character
chars[:10]
# Print the 150th to 160th distinct character
chars[150:160]
```
## 2.4. Vectorization of Training Sequences
The following section describes the desired form of our final training set.
Our example text is still: "hallo. "
As defined above, we have a number of sequences mapping to the next appearing character in the text (e.g. "ha" mapping to "l"). But first of all, we transform each sequence into the following one-hot encoded matrix.
**Example:**
sequence "ha" maps to the following matrix
| | h | a | l | o | . | ' ' |
|-----|-----|-----|-----|-----|-----|-----|
| h | 1 | 0 | 0 | 0 | 0 | 0 |
| a | 0 | 1 | 0 | 0 | 0 | 0 |
next sequence "al" maps to the following matrix
| | h | a | l | o | . | ' ' |
|-----|-----|-----|-----|-----|-----|-----|
| a | 0 | 1 | 0 | 0 | 0 | 0 |
| l | 0 | 0 | 1 | 0 | 0 | 0 |
... And so on
## 2.5. Vectorization of Target Classes
We build our target classes similarly to the training set. We need a one-hot encoded vector for each target (which is a character).
**Example:** for target char "l" the vector looks like this
| | h | a | l | o | . | ' ' |
|-----|-----|-----|-----|-----|-----|-----|
| l | 0 | 0 | 1 | 0 | 0 | 0 |
```
# Indexed characters as dictionary
char_indices = dict((c, i) for i, c in enumerate(chars))
# Both matrices are initialized with zeros
training_set = np.zeros((len(sequences), seqlen, len(chars)), dtype=bool)
target_char = np.zeros((len(sequences), len(chars)), dtype=bool)
for i, sequence in enumerate(sequences):
    for t, char in enumerate(sequence):
        training_set[i, t, char_indices[char]] = 1
    target_char[i, char_indices[char_class[i]]] = 1
```
# 3. Explore Data
```
# Let's check the shape of the training_set
training_set.shape
```
Output: (x, y, z)
- x = number of all sequences in the training set
- y = sequence (window) length used to predict the next character
- z = number of distinct characters in the text (for the one-hot encoding)
```
# Let's check the shape of the target_char (act as our target classes)
target_char.shape
```
Output: (x, y)
- x = number of all sequences in the training set
- y = number of distinct characters in the text (one-hot encoding of the next character)
# 4. Model Data
Let's get down to business! Create your model.
Try different model configurations (see [keras doc](https://keras.io/models/about-keras-models/#about-keras-models)).
```
# build the model: a single LSTM layer followed by a dense softmax output
model = Sequential()
model.add(LSTM(128, input_shape=(seqlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()

def getNextCharIdx(preds, temperature=1.0):
    # helper function to sample an index from a probability array
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

# Creation of reverse char index, to get the char for the predicted class
indices_char = dict((i, c) for i, c in enumerate(chars))

def on_epoch_end(epoch, logs):
    # Function invoked at end of each epoch. Prints generated text.
    print()
    print('----- Generating text after Epoch: %d' % epoch)
    start_index = random.randint(0, len(text) - seqlen - 1)
    for diversity in [1, 0.1, 0.5]:
        print('----- diversity:', diversity)
        generated = ''
        sentence = text[start_index: start_index + seqlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)
        for i in range(1000):
            x_pred = np.zeros((1, seqlen, len(chars)))
            for t, char in enumerate(sentence):
                x_pred[0, t, char_indices[char]] = 1.
            preds = model.predict(x_pred, verbose=0)[0]
            next_index = getNextCharIdx(preds, diversity)
            next_char = indices_char[next_index]
            generated += next_char
            sentence = sentence[1:] + next_char
            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
```
# 5. Evaluate Model
We are now at the sweet part of the model. Let's fit our model and see what it prints!
```
model.fit(training_set, target_char,
          batch_size=128,
          epochs=150,
          callbacks=[print_callback])
```
# 6. Present and Automate
Having a model trained for hours is a valuable asset! We now need to store the model and use it to solve the problem we wanted to solve with Machine Learning. Keras has a simple function to save a model to the local file system and also a function to load the model again and have it ready for our task!
```
model.save('shakespeareModel.h5')
model = load_model('shakespeareModel.h5')
```
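As a usage sketch (our addition, not part of the original notebook), the reloaded model can generate text from any seed string: the seed is padded or truncated to `seqlen`, one-hot encoded exactly as during training, and extended one character at a time with `getNextCharIdx`. The seed string and the temperature of 0.5 are arbitrary choices.
```
# A minimal sketch: generate text from the reloaded model with a custom seed.
# Assumes seqlen, chars, char_indices, indices_char and getNextCharIdx
# from the cells above; seed text and temperature are arbitrary.
seed = "shall i compare thee to a summer's day? ".lower()
sentence = seed[-seqlen:].rjust(seqlen)  # pad/truncate to the training window
generated = ''
for _ in range(200):
    x_pred = np.zeros((1, seqlen, len(chars)))
    for t, char in enumerate(sentence):
        if char in char_indices:         # skip characters unseen in training
            x_pred[0, t, char_indices[char]] = 1.
    preds = model.predict(x_pred, verbose=0)[0]
    next_char = indices_char[getNextCharIdx(preds, temperature=0.5)]
    generated += next_char
    sentence = sentence[1:] + next_char
print(seed + generated)
```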
## Variant of the Blocked Input Model in which the stop process decelerates the go process by a rate that varies across trials
```
import numpy
import random
import matplotlib.pyplot as plt
import matplotlib
import seaborn
import pandas
import matplotlib.patches as patches
from matplotlib.ticker import FormatStrFormatter
%matplotlib inline
params = {'mugo': .2,
          'mustop': .8,
          'threshold': 60,
          'nondecisiongo': 50,
          'nondecisionstop': 50,
          'inhibitionParam': 1,
          'ssds': [1, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 3000],
          'nreps': 1000,
          'maxtime': 1000}

def interactiverace(params):
    stopaccumsave = []
    mustopsave = []
    stopsave = []
    meanrtgo = numpy.zeros(len(params['ssds']))
    presp = numpy.zeros(len(params['ssds']))
    for irep in range(params['nreps']):
        for j, ssd in enumerate(params['ssds']):
            stopsignaldelay = ssd
            goaccumulator = 0
            stopaccumulator = 0
            rtgo = 0
            itime = 0
            # stop-process rate varies from trial to trial
            mustop = params['mustop'] + numpy.random.normal(loc=0, scale=.7)
            if mustop < 0:
                mustop = 0
            mustopsave.append(mustop)
            while itime < params['maxtime'] and rtgo == 0:  # single trial
                itime = itime + 1
                # inhibition only kicks in after SSD plus stop nondecision time
                if itime < stopsignaldelay + params['nondecisionstop']:
                    inhibition = 0
                else:
                    inhibition = params['inhibitionParam']
                stopaccumulator = mustop + numpy.random.normal(loc=0, scale=.008)
                if stopaccumulator <= 0:
                    stopaccumulator = 0
                stopaccumsave.append(stopaccumulator)
                if itime >= params['nondecisiongo']:
                    goaccumulator = goaccumulator + params['mugo'] - inhibition*stopaccumulator + numpy.random.normal(loc=0, scale=1)
                    if goaccumulator <= 0:
                        goaccumulator = 0
                if goaccumulator > params['threshold']:
                    if rtgo == 0:
                        rtgo = itime
            meanrtgo[j] += rtgo
            if rtgo > 0:
                presp[j] += 1
    for ssd in range(len(params['ssds'])):
        if presp[ssd] > 0:
            meanrtgo[ssd] = meanrtgo[ssd]/presp[ssd]
        presp[ssd] = presp[ssd]/params['nreps']
    return (meanrtgo, presp, mustopsave, stopaccumsave)

meanrtgo, presp, mustopsave, stopaccumsave = interactiverace(params)
print(meanrtgo)
print(presp)
#print(stopaccumsave)
#print(mustopsave)
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(params['ssds'][:11],meanrtgo[:11] - meanrtgo[11])
plt.plot([params['ssds'][0],params['ssds'][10]],[0,0],'k:')
plt.xlabel('Stop signal delay')
plt.ylabel('Violation (Stop Failure RT - No-Stop RT)')
plt.subplot(1,2,2)
plt.plot(params['ssds'][:11],presp[:11])
plt.xlabel('Stop signal delay')
plt.ylabel('Probability of responding')
plt.axis([params['ssds'][0],params['ssds'][10],0,1])
```
# ML Scripts
So far, we've done everything inside the Jupyter notebooks, but we're now going to move our code into individual Python scripts. We will lay out the code that needs to be inside each script; check out the `API` lesson to see how it all comes together.
<div align="left">
<a href="https://github.com/madewithml/lessons/blob/master/notebooks/03_APIs/02_ML_Scripts/02_PT_ML_Scripts.ipynb" role="button"><img class="notebook-badge-image" src="https://img.shields.io/static/v1?label=&message=View%20On%20GitHub&color=586069&logo=github&labelColor=2f363d"></a>
<a href="https://colab.research.google.com/github/madewithml/lessons/blob/master/notebooks/03_APIs/02_ML_Scripts/02_PT_ML_Scripts.ipynb"><img class="notebook-badge-image" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
# data.py
## Load data
```
import numpy as np
import pandas as pd
import random
import urllib.request
SEED = 1234
DATA_FILE = 'news.csv'
INPUT_FEATURE = 'title'
OUTPUT_FEATURE = 'category'
# Set seed for reproducibility
np.random.seed(SEED)
random.seed(SEED)
# Load data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(DATA_FILE, 'wb') as fp:
    fp.write(html)
# Load data
df = pd.read_csv(DATA_FILE, header=0)
X = df[INPUT_FEATURE].values
y = df[OUTPUT_FEATURE].values
df.head(5)
```
## Preprocessing
```
import re
LOWER = True
FILTERS = r"[!\"'#$%&()*\+,-./:;<=>?@\\\[\]^_`{|}~]"
def preprocess_texts(texts, lower, filters):
    preprocessed_texts = []
    for text in texts:
        if lower:
            text = ' '.join(word.lower() for word in text.split(" "))
        text = re.sub(r"([.,!?])", r" \1 ", text)
        text = re.sub(filters, r"", text)
        text = re.sub(' +', ' ', text)  # remove multiple spaces
        text = text.strip()
        preprocessed_texts.append(text)
    return preprocessed_texts
original_text = X[0]
X = np.array(preprocess_texts(X, lower=LOWER, filters=FILTERS))
print (f"{original_text} → {X[0]}")
```
## Split data
```
import collections
from sklearn.model_selection import train_test_split
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
SHUFFLE = True
def train_val_test_split(X, y, val_size, test_size, shuffle):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, stratify=y, shuffle=shuffle)
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=val_size, stratify=y_train, shuffle=shuffle)
    return X_train, X_val, X_test, y_train, y_val, y_test
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"{X_train[0]} → {y_train[0]}")
print (f"Classes: {class_counts}")
```
# tokenizers.py
## Tokenizer
```
import json
import re
SEPARATOR = ' ' # word level
class Tokenizer(object):
    def __init__(self, separator, pad_token='<PAD>', oov_token='<UNK>',
                 token_to_index={'<PAD>': 0, '<UNK>': 1}):
        self.separator = separator
        self.oov_token = oov_token
        self.token_to_index = token_to_index
        self.index_to_token = {v: k for k, v in self.token_to_index.items()}

    def __len__(self):
        return len(self.token_to_index)

    def __str__(self):
        return f"<Tokenizer(num_tokens={len(self)})>"

    def fit_on_texts(self, texts):
        for text in texts:
            for token in text.split(self.separator):
                if token not in self.token_to_index:
                    index = len(self)
                    self.token_to_index[token] = index
                    self.index_to_token[index] = token
        return self

    def texts_to_sequences(self, texts):
        sequences = []
        for text in texts:
            sequence = []
            for token in text.split(self.separator):
                sequence.append(self.token_to_index.get(
                    token, self.token_to_index[self.oov_token]))
            sequences.append(sequence)
        return sequences

    def sequences_to_texts(self, sequences):
        texts = []
        for sequence in sequences:
            text = []
            for index in sequence:
                text.append(self.index_to_token.get(index, self.oov_token))
            texts.append(self.separator.join([token for token in text]))
        return texts

    def save(self, fp):
        with open(fp, 'w') as fp:
            contents = {
                'separator': self.separator,
                'oov_token': self.oov_token,
                'token_to_index': self.token_to_index
            }
            json.dump(contents, fp, indent=4, sort_keys=False)

    @classmethod
    def load(cls, fp):
        with open(fp, 'r') as fp:
            kwargs = json.load(fp=fp)
        return cls(**kwargs)
# Input vectorizer
X_tokenizer = Tokenizer(separator=SEPARATOR)
X_tokenizer.fit_on_texts(texts=X_train)
vocab_size = len(X_tokenizer)
print (X_tokenizer)
# Convert text to sequence of tokens
original_text = X_train[0]
X_train = np.array(X_tokenizer.texts_to_sequences(X_train))
X_val = np.array(X_tokenizer.texts_to_sequences(X_val))
X_test = np.array(X_tokenizer.texts_to_sequences(X_test))
preprocessed_text = X_tokenizer.sequences_to_texts([X_train[0]])
print (f"{original_text} \n\t→ {preprocessed_text} \n\t→ {X_train[0]}")
# Save tokenizer
X_tokenizer.save(fp='X_tokenizer.json')
# Load tokenizer
X_tokenizer = Tokenizer.load(fp='X_tokenizer.json')
print (X_tokenizer)
```
## Label Encoder
```
class LabelEncoder(object):
    def __init__(self, class_to_index={}):
        self.class_to_index = class_to_index
        self.index_to_class = {v: k for k, v in self.class_to_index.items()}
        self.classes = list(self.class_to_index.keys())

    def __len__(self):
        return len(self.class_to_index)

    def __str__(self):
        return f"<LabelEncoder(num_classes={len(self)})>"

    def fit(self, y_train):
        for i, class_ in enumerate(np.unique(y_train)):
            self.class_to_index[class_] = i
        self.index_to_class = {v: k for k, v in self.class_to_index.items()}
        self.classes = list(self.class_to_index.keys())
        return self

    def transform(self, y):
        return np.array([self.class_to_index[class_] for class_ in y])

    def decode(self, index):
        return self.index_to_class.get(index, None)

    def save(self, fp):
        with open(fp, 'w') as fp:
            contents = {
                'class_to_index': self.class_to_index
            }
            json.dump(contents, fp, indent=4, sort_keys=False)

    @classmethod
    def load(cls, fp):
        with open(fp, 'r') as fp:
            kwargs = json.load(fp=fp)
        return cls(**kwargs)
# Output vectorizer
y_tokenizer = LabelEncoder()
# Fit on train data
y_tokenizer = y_tokenizer.fit(y_train)
print (y_tokenizer)
classes = y_tokenizer.classes
print (f"classes: {classes}")
# Convert labels to tokens
class_ = y_train[0]
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
print (f"{class_} → {y_train[0]}")
# Class weights
counts = np.bincount(y_train)
class_weights = {i: 1.0/count for i, count in enumerate(counts)}
print (f"class counts: {counts},\nclass weights: {class_weights}")
# Save label encoder
y_tokenizer.save(fp='y_tokenizer.json')
# Load label encoder
y_tokenizer = LabelEncoder.load(fp='y_tokenizer.json')
print (y_tokenizer)
```
# datasets.py
```
import math
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 128
FILTER_SIZES = [2, 3, 4]
# Set seed for reproducibility
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED) # multi-GPU.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
USE_CUDA = True
DEVICE = torch.device('cuda' if (torch.cuda.is_available() and USE_CUDA) else 'cpu')
print (DEVICE)
```
## Pad
```
def pad_sequences(X, max_seq_len):
    sequences = np.zeros((len(X), max_seq_len))
    for i, sequence in enumerate(X):
        sequences[i][:len(sequence)] = sequence
    return sequences
# Pad sequences
inputs = [[1,2,3], [1,2,3,4], [1,2]]
max_seq_len = max(len(x) for x in inputs)
padded_inputs = pad_sequences(X=inputs, max_seq_len=max_seq_len)
print (padded_inputs.shape)
print (padded_inputs)
```
## Dataset
```
class TextDataset(Dataset):
    def __init__(self, X, y, batch_size, max_filter_size):
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.max_filter_size = max_filter_size

    def __len__(self):
        return len(self.y)

    def __str__(self):
        return f"<Dataset(N={len(self)}, batch_size={self.batch_size}, num_batches={self.get_num_batches()})>"

    def __getitem__(self, index):
        X = self.X[index]
        y = self.y[index]
        return X, y

    def get_num_batches(self):
        return math.ceil(len(self)/self.batch_size)

    def collate_fn(self, batch):
        """Processing on a batch."""
        # Get inputs
        X = np.array(batch)[:, 0]
        y = np.array(batch)[:, 1]
        # Pad inputs
        max_seq_len = max(self.max_filter_size, max([len(x) for x in X]))
        X = pad_sequences(X=X, max_seq_len=max_seq_len)
        return X, y

    def generate_batches(self, shuffle=False, drop_last=False):
        dataloader = DataLoader(dataset=self, batch_size=self.batch_size,
                                collate_fn=self.collate_fn, shuffle=shuffle,
                                drop_last=drop_last, pin_memory=True)
        for (X, y) in dataloader:
            X = torch.LongTensor(X.astype(np.int32))
            y = torch.LongTensor(y.astype(np.int32))
            yield X, y
# Create datasets
train_set = TextDataset(X=X_train, y=y_train, batch_size=BATCH_SIZE, max_filter_size=max(FILTER_SIZES))
val_set = TextDataset(X=X_val, y=y_val, batch_size=BATCH_SIZE, max_filter_size=max(FILTER_SIZES))
test_set = TextDataset(X=X_test, y=y_test, batch_size=BATCH_SIZE, max_filter_size=max(FILTER_SIZES))
print (train_set)
print (train_set[0])
# Generate batch
batch_X, batch_y = next(iter(test_set.generate_batches()))
print (batch_X.shape)
print (batch_y.shape)
```
# utils.py
## Embeddings
```
from io import BytesIO
from urllib.request import urlopen
from zipfile import ZipFile
EMBEDDING_DIM = 100
def load_glove_embeddings(embeddings_file):
    """Load embeddings from a file."""
    embeddings = {}
    with open(embeddings_file, "r") as fp:
        for index, line in enumerate(fp):
            values = line.split()
            word = values[0]
            embedding = np.asarray(values[1:], dtype='float32')
            embeddings[word] = embedding
    return embeddings

def make_embeddings_matrix(embeddings, token_to_index, embedding_dim):
    """Create embeddings matrix to use in Embedding layer."""
    embedding_matrix = np.zeros((len(token_to_index), embedding_dim))
    for word, i in token_to_index.items():
        embedding_vector = embeddings.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
    return embedding_matrix
# Unzip the file (may take ~3-5 minutes)
resp = urlopen('http://nlp.stanford.edu/data/glove.6B.zip')
zipfile = ZipFile(BytesIO(resp.read()))
zipfile.namelist()
# Write embeddings to file
embeddings_file = 'glove.6B.{0}d.txt'.format(EMBEDDING_DIM)
zipfile.extract(embeddings_file)
!ls
# Create embeddings
embeddings_file = 'glove.6B.{0}d.txt'.format(EMBEDDING_DIM)
glove_embeddings = load_glove_embeddings(embeddings_file=embeddings_file)
embedding_matrix = make_embeddings_matrix(
embeddings=glove_embeddings, token_to_index=X_tokenizer.token_to_index,
embedding_dim=EMBEDDING_DIM)
print (embedding_matrix.shape)
```
# model.py
## Model
```
import torch.nn.functional as F
NUM_FILTERS = 50
HIDDEN_DIM = 128
DROPOUT_P = 0.1
class TextCNN(nn.Module):
    def __init__(self, embedding_dim, vocab_size, num_filters, filter_sizes,
                 hidden_dim, dropout_p, num_classes, pretrained_embeddings=None,
                 freeze_embeddings=False, padding_idx=0):
        super(TextCNN, self).__init__()

        # Initialize embeddings
        if pretrained_embeddings is None:
            self.embeddings = nn.Embedding(
                embedding_dim=embedding_dim, num_embeddings=vocab_size,
                padding_idx=padding_idx)
        else:
            pretrained_embeddings = torch.from_numpy(pretrained_embeddings).float()
            self.embeddings = nn.Embedding(
                embedding_dim=embedding_dim, num_embeddings=vocab_size,
                padding_idx=padding_idx, _weight=pretrained_embeddings)

        # Freeze embeddings or not
        if freeze_embeddings:
            self.embeddings.weight.requires_grad = False

        # Conv weights
        self.filter_sizes = filter_sizes
        self.conv = nn.ModuleList(
            [nn.Conv1d(in_channels=embedding_dim,
                       out_channels=num_filters,
                       kernel_size=f) for f in filter_sizes])

        # FC weights
        self.dropout = nn.Dropout(dropout_p)
        self.fc1 = nn.Linear(num_filters*len(filter_sizes), hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x_in, channel_first=False):
        # Embed
        x_in = self.embeddings(x_in)
        if not channel_first:
            x_in = x_in.transpose(1, 2)  # (N, channels, sequence length)

        # Conv + pool
        z = []
        conv_outputs = []  # for interpretability
        max_seq_len = x_in.shape[2]
        for i, f in enumerate(self.filter_sizes):
            # `SAME` padding
            padding_left = int((self.conv[i].stride[0]*(max_seq_len-1) - max_seq_len + self.filter_sizes[i])/2)
            padding_right = int(math.ceil((self.conv[i].stride[0]*(max_seq_len-1) - max_seq_len + self.filter_sizes[i])/2))
            # Conv + pool
            _z = self.conv[i](F.pad(x_in, (padding_left, padding_right)))
            conv_outputs.append(_z)
            _z = F.max_pool1d(_z, _z.size(2)).squeeze(2)
            z.append(_z)

        # Concat conv outputs
        z = torch.cat(z, 1)

        # FC layers
        z = self.fc1(z)
        z = self.dropout(z)
        logits = self.fc2(z)
        return conv_outputs, logits
# Initialize model
model = TextCNN(embedding_dim=EMBEDDING_DIM,
vocab_size=vocab_size,
num_filters=NUM_FILTERS,
filter_sizes=FILTER_SIZES,
hidden_dim=HIDDEN_DIM,
dropout_p=DROPOUT_P,
num_classes=len(classes),
pretrained_embeddings=embedding_matrix,
freeze_embeddings=False).to(DEVICE)
print (model.named_parameters)
```
# train.py
## Training
```
from pathlib import Path
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.tensorboard import SummaryWriter
%load_ext tensorboard
LEARNING_RATE = 1e-4
PATIENCE = 3
NUM_EPOCHS = 100
def train_step(model, device, dataset, optimizer):
    """Train step."""
    # Set model to train mode
    model.train()
    train_loss = 0.
    correct = 0

    # Iterate over train batches
    for i, (X, y) in enumerate(dataset.generate_batches()):
        # Set device
        X, y = X.to(device), y.to(device)
        # Reset gradients
        optimizer.zero_grad()
        # Forward pass
        _, logits = model(X)
        # Define loss
        loss = F.cross_entropy(logits, y)
        # Backward pass
        loss.backward()
        # Update weights
        optimizer.step()
        # Metrics
        y_pred = logits.max(dim=1)[1]
        correct += torch.eq(y_pred, y).sum().item()
        train_loss += (loss.item() - train_loss) / (i + 1)

    train_acc = 100. * correct / len(dataset)
    return train_loss, train_acc

def test_step(model, device, dataset):
    """Validation or test step."""
    # Set model to eval mode
    model.eval()
    loss = 0.
    correct = 0
    y_preds = []
    y_targets = []

    # Iterate over val batches
    with torch.no_grad():
        for i, (X, y) in enumerate(dataset.generate_batches()):
            # Set device
            X, y = X.to(device), y.to(device)
            # Forward pass
            _, logits = model(X)
            # Metrics
            loss += F.cross_entropy(logits, y, reduction='sum').item()
            y_pred = logits.max(dim=1)[1]
            correct += torch.eq(y_pred, y).sum().item()
            # Outputs
            y_preds.extend(y_pred.cpu().numpy())
            y_targets.extend(y.cpu().numpy())

    loss /= len(dataset)
    accuracy = 100. * correct / len(dataset)
    return y_preds, y_targets, loss, accuracy

def train(model, optimizer, scheduler,
          train_set, val_set, test_set, writer):
    # Epochs
    best_val_loss = np.inf
    patience = PATIENCE
    for epoch in range(NUM_EPOCHS):
        # Steps
        train_loss, train_acc = train_step(model, DEVICE, train_set, optimizer)
        _, _, val_loss, val_acc = test_step(model, DEVICE, val_set)

        # Metrics
        print (f"Epoch: {epoch} | train_loss: {train_loss:.2f}, train_acc: {train_acc:.1f}, val_loss: {val_loss:.2f}, val_acc: {val_acc:.1f}")
        writer.add_scalar(tag='training loss', scalar_value=train_loss, global_step=epoch)
        writer.add_scalar(tag='training accuracy', scalar_value=train_acc, global_step=epoch)
        writer.add_scalar(tag='validation loss', scalar_value=val_loss, global_step=epoch)
        writer.add_scalar(tag='validation accuracy', scalar_value=val_acc, global_step=epoch)

        # Adjust learning rate
        scheduler.step(val_loss)

        # Early stopping
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            patience = PATIENCE  # reset patience
            torch.save(model.state_dict(), MODEL_PATH)
        else:
            patience -= 1
            if not patience:  # 0
                print ("Stopping early!")
                break
# Optimizer
optimizer = Adam(model.parameters(), lr=LEARNING_RATE)
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=3)
# Path to save model
MODEL_NAME = 'TextCNN'
MODEL_PATH = Path(f'models/{MODEL_NAME}.h5')
Path(MODEL_PATH.parent).mkdir(parents=True, exist_ok=True)
# TensorBoard writer
log_dir = f'tensorboard/{MODEL_NAME}'
!rm -rf {log_dir} # remove if it already exists
writer = SummaryWriter(log_dir=log_dir)
# Training
train(model, optimizer, scheduler,
train_set, val_set, test_set, writer)
%tensorboard --logdir {log_dir}
```
## Evaluation
```
import io
import itertools
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support
def plot_confusion_matrix(y_pred, y_target, classes, cmap=plt.cm.Blues):
    """Plot a confusion matrix using ground truth and predictions."""
    # Confusion matrix
    cm = confusion_matrix(y_target, y_pred)
    cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    # Figure
    fig = plt.figure()
    ax = fig.add_subplot(111)
    cax = ax.matshow(cm, cmap=cmap)
    fig.colorbar(cax)

    # Axis
    plt.title("Confusion matrix")
    plt.ylabel("True label")
    plt.xlabel("Predicted label")
    ax.set_xticklabels([''] + classes)
    ax.set_yticklabels([''] + classes)
    ax.xaxis.set_label_position('bottom')
    ax.xaxis.tick_bottom()

    # Values
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, f"{cm[i, j]:d} ({cm_norm[i, j]*100:.1f}%)",
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    # Display
    plt.show()

def get_performance(y_pred, y_target, classes):
    """Per-class performance metrics."""
    performance = {'overall': {}, 'class': {}}
    metrics = precision_recall_fscore_support(y_target, y_pred)

    # Overall performance
    performance['overall']['precision'] = np.mean(metrics[0])
    performance['overall']['recall'] = np.mean(metrics[1])
    performance['overall']['f1'] = np.mean(metrics[2])
    performance['overall']['num_samples'] = np.float64(np.sum(metrics[3]))

    # Per-class performance
    for i in range(len(classes)):
        performance['class'][classes[i]] = {
            "precision": metrics[0][i],
            "recall": metrics[1][i],
            "f1": metrics[2][i],
            "num_samples": np.float64(metrics[3][i])
        }

    return performance
# Test
y_preds, y_targets, test_loss, test_acc = test_step(model, DEVICE, test_set)
print (f"test_loss: {test_loss:.2f}, test_acc: {test_acc:.1f}")
# Class performance
performance = get_performance(y_preds, y_targets, classes)
print (json.dumps(performance, indent=4))
# Confusion matrix
plt.rcParams["figure.figsize"] = (7,7)
plot_confusion_matrix(y_preds, y_targets, classes)
print (classification_report(y_targets, y_preds))
```
# inference.py
## Load model
```
# Load model
model = TextCNN(embedding_dim=EMBEDDING_DIM,
vocab_size=vocab_size,
num_filters=NUM_FILTERS,
filter_sizes=FILTER_SIZES,
hidden_dim=HIDDEN_DIM,
dropout_p=DROPOUT_P,
num_classes=len(classes),
pretrained_embeddings=embedding_matrix,
freeze_embeddings=False).to(DEVICE)
model.load_state_dict(torch.load(MODEL_PATH))
model.eval()
```
## Inference
```
import collections
def get_probability_distribution(y_prob, classes):
    results = {}
    for i, class_ in enumerate(classes):
        results[class_] = np.float64(y_prob[i])
    sorted_results = {k: v for k, v in sorted(
        results.items(), key=lambda item: item[1], reverse=True)}
    return sorted_results

def get_top_n_grams(tokens, conv_outputs, filter_sizes):
    # Process conv outputs for each unique filter size
    n_grams = {}
    for i, filter_size in enumerate(filter_sizes):
        # Identify most important n-gram (excluding last token)
        popular_indices = collections.Counter([np.argmax(conv_output)
                                               for conv_output in conv_outputs[filter_size]])
        # Get corresponding text
        start = popular_indices.most_common(1)[-1][0]
        n_gram = " ".join([token for token in tokens[start:start+filter_size]])
        n_grams[filter_size] = n_gram
    return n_grams

# Inputs
texts = ["The Wimbledon tennis tournament starts next week!",
         "The President signed in the new law."]
texts = preprocess_texts(texts, lower=LOWER, filters=FILTERS)
X_infer = np.array(X_tokenizer.texts_to_sequences(texts))
print (f"{texts[0]} \n\t→ {X_tokenizer.sequences_to_texts(X_infer)[0]} \n\t→ {X_infer[0]}")
y_filler = np.array([0]*len(texts))

# Dataset
infer_set = TextDataset(X=X_infer, y=y_filler, batch_size=BATCH_SIZE,
                        max_filter_size=max(FILTER_SIZES))

# Iterate over infer batches
conv_outputs = collections.defaultdict(list)
y_probs = []
with torch.no_grad():
    for i, (X, y) in enumerate(infer_set.generate_batches()):
        # Set device
        X, y = X.to(DEVICE), y.to(DEVICE)
        # Forward pass
        conv_outputs_, logits = model(X)
        y_prob = F.softmax(logits, dim=1)
        # Save probabilities
        y_probs.extend(y_prob.cpu().numpy())
        for i, filter_size in enumerate(FILTER_SIZES):
            conv_outputs[filter_size].extend(conv_outputs_[i].cpu().numpy())

# Results
results = []
for index in range(len(X_infer)):
    preprocessed_input = X_tokenizer.sequences_to_texts([X_infer[index]])[0]
    results.append({
        'raw_input': texts[index],
        'preprocessed_input': preprocessed_input,
        'probabilities': get_probability_distribution(y_probs[index], y_tokenizer.classes),
        'top_n_grams': get_top_n_grams(
            tokens=preprocessed_input.split(' '),
            conv_outputs={k: v[index] for k, v in conv_outputs.items()},
            filter_sizes=FILTER_SIZES)})
print (json.dumps(results, indent=4))
```
Use inference results to collect information about how the model performs on your real-world data, and use that information to improve the model over time. A few strategies (a small sketch implementing them follows below):
- Use a probability threshold for the top class (e.g., if the predicted class has a probability below 75%, send the inference for review).
- Combine the above with per-class probability thresholds (e.g., if the predicted class is `Sports` at 85% but that class's precision/recall is low, send it for review; you might skip the review when the `Sports` probability is above 90%).
- If the preprocessed sentence has `<UNK>` tokens, send the inference for further review.
- When latency is not an issue, use the n-grams to validate the prediction.
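Below is a minimal sketch of such a review filter (our addition, not part of the lesson); the threshold values and the `class_precisions` dictionary are made-up placeholders, and the function consumes the `results` entries produced in the inference cell above.
```
# A small sketch of the review heuristics above. Thresholds and the
# class_precisions dict are illustrative placeholders, not tuned values.
TOP_CLASS_THRESHOLD = 0.75
PER_CLASS_THRESHOLD = 0.90
class_precisions = {'Sports': 0.65}   # hypothetical per-class precision estimates

def needs_review(result, oov_token='<UNK>'):
    probabilities = result['probabilities']            # sorted, highest first
    top_class, top_prob = next(iter(probabilities.items()))
    # Rule 1: low confidence in the top class
    if top_prob < TOP_CLASS_THRESHOLD:
        return True
    # Rule 2: top class is known to be unreliable and confidence is not very high
    if class_precisions.get(top_class, 1.0) < 0.7 and top_prob < PER_CLASS_THRESHOLD:
        return True
    # Rule 3: preprocessed input contains out-of-vocabulary tokens
    if oov_token in result['preprocessed_input'].split(' '):
        return True
    return False

for result in results:
    print(result['raw_input'], '→ review' if needs_review(result) else '→ accept')
```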
Check out the `API` lesson to see how all of this comes together to create an ML service.
---
Share and discover ML projects at <a href="https://madewithml.com/">Made With ML</a>.
<div align="left">
<a class="ai-header-badge" target="_blank" href="https://github.com/madewithml/lessons"><img src="https://img.shields.io/github/stars/madewithml/lessons.svg?style=social&label=Star"></a>
<a class="ai-header-badge" target="_blank" href="https://www.linkedin.com/company/madewithml"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>
<a class="ai-header-badge" target="_blank" href="https://twitter.com/madewithml"><img src="https://img.shields.io/twitter/follow/madewithml.svg?label=Follow&style=social"></a>
</div>
# Model Selection, Underfitting, and Overfitting
:label:`sec_model_selection`
As machine learning scientists,
our goal is to discover *patterns*.
But how can we be sure that we have
truly discovered a *general* pattern
and not simply memorized our data?
For example, imagine that we wanted to hunt
for patterns among genetic markers
linking patients to their dementia status,
where the labels are drawn from the set
$\{\text{dementia}, \text{mild cognitive impairment}, \text{healthy}\}$.
Because each person's genes identify them uniquely
(ignoring identical siblings),
it is possible to memorize the entire dataset.
We do not want our model to say
*"That's Bob! I remember him! He has dementia!"*
The reason why is simple.
When we deploy the model in the future,
we will encounter patients
that the model has never seen before.
Our predictions will only be useful
if our model has truly discovered a *general* pattern.
To recapitulate more formally,
our goal is to discover patterns
that capture regularities in the underlying population
from which our training set was drawn.
If we are successful in this endeavor,
then we could successfully assess risk
even for individuals that we have never encountered before.
This problem---how to discover patterns that *generalize*---is
the fundamental problem of machine learning.
The danger is that when we train models,
we access just a small sample of data.
The largest public image datasets contain
roughly one million images.
More often, we must learn from only thousands
or tens of thousands of data examples.
In a large hospital system, we might access
hundreds of thousands of medical records.
When working with finite samples, we run the risk
that we might discover apparent associations
that turn out not to hold up when we collect more data.
The phenomenon of fitting our training data
more closely than we fit the underlying distribution is called *overfitting*, and the techniques used to combat overfitting are called *regularization*.
In the previous sections, you might have observed
this effect while experimenting with the Fashion-MNIST dataset.
If you altered the model structure or the hyperparameters during the experiment, you might have noticed that with enough neurons, layers, and training epochs, the model can eventually reach perfect accuracy on the training set, even as the accuracy on test data deteriorates.
## Training Error and Generalization Error
In order to discuss this phenomenon more formally,
we need to differentiate between training error and generalization error.
The *training error* is the error of our model
as calculated on the training dataset,
while *generalization error* is the expectation of our model's error
were we to apply it to an infinite stream of additional data examples
drawn from the same underlying data distribution as our original sample.
Problematically, we can never calculate the generalization error exactly.
That is because the stream of infinite data is an imaginary object.
In practice, we must *estimate* the generalization error
by applying our model to an independent test set
constituted of a random selection of data examples
that were withheld from our training set.
The following three thought experiments
will help illustrate this situation better.
Consider a college student trying to prepare for her final exam.
A diligent student will strive to practice well
and test her abilities using exams from previous years.
Nonetheless, doing well on past exams is no guarantee
that she will excel when it matters.
For instance, the student might try to prepare
by rote learning the answers to the exam questions.
This requires the student to memorize many things.
She might even remember the answers for past exams perfectly.
Another student might prepare by trying to understand
the reasons for giving certain answers.
In most cases, the latter student will do much better.
Likewise, consider a model that simply uses a lookup table to answer questions. If the set of allowable inputs is discrete and reasonably small, then perhaps after viewing *many* training examples, this approach would perform well. Still this model has no ability to do better than random guessing when faced with examples that it has never seen before.
In reality the input spaces are far too large to memorize the answers corresponding to every conceivable input. For example, consider the black and white $28\times28$ images. If each pixel can take one among $256$ grayscale values, then there are $256^{784}$ possible images. That means that there are far more low-resolution grayscale thumbnail-sized images than there are atoms in the universe. Even if we could encounter such data, we could never afford to store the lookup table.
Last, consider the problem of trying
to classify the outcomes of coin tosses (class 0: heads, class 1: tails)
based on some contextual features that might be available.
Suppose that the coin is fair.
No matter what algorithm we come up with,
the generalization error will always be $\frac{1}{2}$.
However, for most algorithms,
we should expect our training error to be considerably lower,
depending on the luck of the draw,
even if we did not have any features!
Consider the dataset {0, 1, 1, 1, 0, 1}.
Our feature-less algorithm would have to fall back on always predicting
the *majority class*, which appears from our limited sample to be *1*.
In this case, the model that always predicts class 1
will incur an error of $\frac{1}{3}$,
considerably better than our generalization error.
As we increase the amount of data,
the probability that the fraction of heads
will deviate significantly from $\frac{1}{2}$ diminishes,
and our training error would come to match the generalization error.
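To make this concrete, here is a small simulation (our own illustration, not part of the original text) that draws datasets of increasing size from a fair coin and compares the training error of the majority-class predictor with the generalization error of $\frac{1}{2}$; the specific sample sizes are arbitrary.
```
import numpy as np

np.random.seed(0)
for n in [6, 100, 10000]:
    flips = np.random.randint(0, 2, size=n)      # fair coin: 0 or 1
    majority = int(flips.mean() >= 0.5)          # the majority class in this sample
    train_error = np.mean(flips != majority)     # error on the sample itself
    print(f'n={n:>6}: training error of majority class = {train_error:.3f} '
          f'(generalization error is always 0.5)')
```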
### Statistical Learning Theory
Since generalization is the fundamental problem in machine learning,
you might not be surprised to learn
that many mathematicians and theorists have dedicated their lives
to developing formal theories to describe this phenomenon.
In their [eponymous theorem](https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem), Glivenko and Cantelli
derived the rate at which the training error
converges to the generalization error.
In a series of seminal papers, [Vapnik and Chervonenkis](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_theory)
extended this theory to more general classes of functions.
This work laid the foundations of statistical learning theory.
In the standard supervised learning setting, which we have addressed up until now and will stick with throughout most of this book,
we assume that both the training data and the test data
are drawn *independently* from *identical* distributions.
This is commonly called the *i.i.d. assumption*,
which means that the process that samples our data has no memory.
In other words,
the second example drawn and the third drawn
are no more correlated than the second and the two-millionth sample drawn.
Being a good machine learning scientist requires thinking critically,
and already you should be poking holes in this assumption,
coming up with common cases where the assumption fails.
What if we train a mortality risk predictor
on data collected from patients at UCSF Medical Center,
and apply it on patients at Massachusetts General Hospital?
These distributions are simply not identical.
Moreover, draws might be correlated in time.
What if we are classifying the topics of Tweets?
The news cycle would create temporal dependencies
in the topics being discussed, violating any assumptions of independence.
Sometimes we can get away with minor violations of the i.i.d. assumption
and our models will continue to work remarkably well.
After all, nearly every real-world application
involves at least some minor violation of the i.i.d. assumption,
and yet we have many useful tools for
various applications such as
face recognition,
speech recognition, and language translation.
Other violations are sure to cause trouble.
Imagine, for example, if we try to train
a face recognition system by training it
exclusively on university students
and then want to deploy it as a tool
for monitoring geriatrics in a nursing home population.
This is unlikely to work well since college students
tend to look considerably different from the elderly.
In subsequent chapters, we will discuss problems
arising from violations of the i.i.d. assumption.
For now, even taking the i.i.d. assumption for granted,
understanding generalization is a formidable problem.
Moreover, elucidating the precise theoretical foundations
that might explain why deep neural networks generalize as well as they do
continues to vex the greatest minds in learning theory.
When we train our models, we attempt to search for a function
that fits the training data as well as possible.
If the function is so flexible that it can catch on to spurious patterns
just as easily as to true associations,
then it might perform *too well* without producing a model
that generalizes well to unseen data.
This is precisely what we want to avoid or at least control.
Many of the techniques in deep learning are heuristics and tricks
aimed at guarding against overfitting.
### Model Complexity
When we have simple models and abundant data,
we expect the generalization error to resemble the training error.
When we work with more complex models and fewer examples,
we expect the training error to go down but the generalization gap to grow.
What precisely constitutes model complexity is a complex matter.
Many factors govern whether a model will generalize well.
For example a model with more parameters might be considered more complex.
A model whose parameters can take a wider range of values
might be more complex.
Often with neural networks, we think of a model
that takes more training iterations as more complex,
and one subject to *early stopping* (fewer training iterations) as less complex.
It can be difficult to compare the complexity among members
of substantially different model classes
(say, decision trees vs. neural networks).
For now, a simple rule of thumb is quite useful:
a model that can readily explain arbitrary facts
is what statisticians view as complex,
whereas one that has only a limited expressive power
but still manages to explain the data well
is probably closer to the truth.
In philosophy, this is closely related to Popper's
criterion of falsifiability
of a scientific theory: a theory is good if it fits data
and if there are specific tests that can be used to disprove it.
This is important since all statistical estimation is
*post hoc*,
i.e., we estimate after we observe the facts,
hence vulnerable to the associated fallacy.
For now, we will put the philosophy aside and stick to more tangible issues.
In this section, to give you some intuition,
we will focus on a few factors that tend
to influence the generalizability of a model class:
1. The number of tunable parameters. When the number of tunable parameters, sometimes called the *degrees of freedom*, is large, models tend to be more susceptible to overfitting.
1. The values taken by the parameters. When weights can take a wider range of values, models can be more susceptible to overfitting.
1. The number of training examples. It is trivially easy to overfit a dataset containing only one or two examples even if your model is simple. But overfitting a dataset with millions of examples requires an extremely flexible model.
## Model Selection
In machine learning, we usually select our final model
after evaluating several candidate models.
This process is called *model selection*.
Sometimes the models subject to comparison
are fundamentally different in nature
(say, decision trees vs. linear models).
At other times, we are comparing
members of the same class of models
that have been trained with different hyperparameter settings.
With MLPs, for example,
we may wish to compare models with
different numbers of hidden layers,
different numbers of hidden units,
and various choices of the activation functions
applied to each hidden layer.
In order to determine the best among our candidate models,
we will typically employ a validation dataset.
### Validation Dataset
In principle we should not touch our test set
until after we have chosen all our hyperparameters.
Were we to use the test data in the model selection process,
there is a risk that we might overfit the test data.
Then we would be in serious trouble.
If we overfit our training data,
there is always the evaluation on test data to keep us honest.
But if we overfit the test data, how would we ever know?
Thus, we should never rely on the test data for model selection.
And yet we cannot rely solely on the training data
for model selection either because
we cannot estimate the generalization error
on the very data that we use to train the model.
In practical applications, the picture gets muddier.
While ideally we would only touch the test data once,
to assess the very best model or to compare
a small number of models to each other,
real-world test data is seldom discarded after just one use.
We can seldom afford a new test set for each round of experiments.
The common practice to address this problem
is to split our data three ways,
incorporating a *validation dataset* (or *validation set*)
in addition to the training and test datasets.
The result is a murky practice where the boundaries
between validation and test data are worryingly ambiguous.
Unless explicitly stated otherwise, in the experiments in this book
we are really working with what should rightly be called
training data and validation data, with no true test sets.
Therefore, the accuracy reported in each experiment of the book is really the validation accuracy and not a true test set accuracy.
### $K$-Fold Cross-Validation
When training data is scarce,
we might not even be able to afford to hold out
enough data to constitute a proper validation set.
One popular solution to this problem is to employ
$K$*-fold cross-validation*.
Here, the original training data is split into $K$ non-overlapping subsets.
Then model training and validation are executed $K$ times,
each time training on $K-1$ subsets and validating
on a different subset (the one not used for training in that round).
Finally, the training and validation errors are estimated
by averaging over the results from the $K$ experiments.
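The procedure can be sketched in a few lines of NumPy (this illustration is ours, not part of the original text); `train_and_eval` stands in for whatever model-fitting and evaluation routine is being validated.
```
import numpy as np

def k_fold_cross_validation(X, y, k, train_and_eval):
    """Split indices into k folds, train on k-1 folds, validate on the held-out fold."""
    folds = np.array_split(np.random.permutation(len(X)), k)
    val_errors = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        val_errors.append(train_and_eval(X[train_idx], y[train_idx], X[val_idx], y[val_idx]))
    return np.mean(val_errors)

# Toy usage with a placeholder "model": predict the training-set mean.
X_toy = np.random.normal(size=(50, 3))
y_toy = X_toy @ np.array([1.0, -2.0, 0.5]) + np.random.normal(scale=0.1, size=50)
mean_predictor = lambda Xtr, ytr, Xval, yval: np.mean((yval - ytr.mean())**2)
print('average validation MSE:',
      k_fold_cross_validation(X_toy, y_toy, k=5, train_and_eval=mean_predictor))
```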
## Underfitting or Overfitting?
When we compare the training and validation errors,
we want to be mindful of two common situations.
First, we want to watch out for cases
when our training error and validation error are both substantial
but there is a little gap between them.
If the model is unable to reduce the training error,
that could mean that our model is too simple
(i.e., insufficiently expressive)
to capture the pattern that we are trying to model.
Moreover, since the *generalization gap*
between our training and validation errors is small,
we have reason to believe that we could get away with a more complex model.
This phenomenon is known as *underfitting*.
On the other hand, as we discussed above,
we want to watch out for the cases
when our training error is significantly lower
than our validation error, indicating severe *overfitting*.
Note that overfitting is not always a bad thing.
With deep learning especially, it is well known
that the best predictive models often perform
far better on training data than on holdout data.
Ultimately, we usually care more about the validation error
than about the gap between the training and validation errors.
Whether we overfit or underfit can depend
both on the complexity of our model
and the size of the available training datasets,
two topics that we discuss below.
### Model Complexity
To illustrate some classical intuition
about overfitting and model complexity,
we give an example using polynomials.
Given training data consisting of a single feature $x$
and a corresponding real-valued label $y$,
we try to find the polynomial of degree $d$
$$\hat{y}= \sum_{i=0}^d x^i w_i$$
to estimate the labels $y$.
This is just a linear regression problem
where our features are given by the powers of $x$,
the model's weights are given by $w_i$,
and the bias is given by $w_0$ since $x^0 = 1$ for all $x$.
Since this is just a linear regression problem,
we can use the squared error as our loss function.
A higher-order polynomial function is more complex
than a lower-order polynomial function,
since the higher-order polynomial has more parameters
and the model function's selection range is wider.
Fixing the training dataset,
higher-order polynomial functions should always
achieve lower (at worst, equal) training error
relative to lower degree polynomials.
In fact, whenever the data examples each have a distinct value of $x$,
a polynomial function with degree equal to the number of data examples
can fit the training set perfectly.
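As a quick numerical check of that claim (our own illustration), fitting a degree $n-1$ polynomial through $n$ points with distinct $x$ values drives the training error to (numerically) zero:
```
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # 5 distinct inputs
y = np.sin(x) + np.random.normal(scale=0.1, size=x.shape)
coeffs = np.polyfit(x, y, deg=len(x) - 1)        # degree n-1 polynomial, n = 5
residuals = y - np.polyval(coeffs, x)
print('max training residual:', np.abs(residuals).max())   # ~0 up to numerical error
```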
We visualize the relationship between polynomial degree
and underfitting vs. overfitting in :numref:`fig_capacity_vs_error`.

:label:`fig_capacity_vs_error`
### Dataset Size
The other big consideration to bear in mind is the dataset size.
Fixing our model, the fewer samples we have in the training dataset,
the more likely (and more severely) we are to encounter overfitting.
As we increase the amount of training data,
the generalization error typically decreases.
Moreover, in general, more data never hurt.
For a fixed task and data distribution,
there is typically a relationship between model complexity and dataset size.
Given more data, we might profitably attempt to fit a more complex model.
Absent sufficient data, simpler models may be more difficult to beat.
For many tasks, deep learning only outperforms linear models
when many thousands of training examples are available.
In part, the current success of deep learning
owes to the current abundance of massive datasets
due to Internet companies, cheap storage, connected devices,
and the broad digitization of the economy.
## Polynomial Regression
We can now explore these concepts interactively
by fitting polynomials to data.
```
from d2l import tensorflow as d2l
import tensorflow as tf
import numpy as np
import math
```
### Generating the Dataset
First we need data. Given $x$, we will use the following cubic polynomial to generate the labels on training and test data:
$$y = 5 + 1.2x - 3.4\frac{x^2}{2!} + 5.6 \frac{x^3}{3!} + \epsilon \text{ where }
\epsilon \sim \mathcal{N}(0, 0.1^2).$$
The noise term $\epsilon$ obeys a normal distribution
with a mean of 0 and a standard deviation of 0.1.
For optimization, we typically want to avoid
very large values of gradients or losses.
This is why the *features*
are rescaled from $x^i$ to $\frac{x^i}{i!}$.
It allows us to avoid very large values for large exponents $i$.
We will synthesize 100 samples each for the training set and test set.
```
max_degree = 20 # Maximum degree of the polynomial
n_train, n_test = 100, 100 # Training and test dataset sizes
true_w = np.zeros(max_degree) # Allocate lots of empty space
true_w[0:4] = np.array([5, 1.2, -3.4, 5.6])
features = np.random.normal(size=(n_train + n_test, 1))
np.random.shuffle(features)
poly_features = np.power(features, np.arange(max_degree).reshape(1, -1))
for i in range(max_degree):
    poly_features[:, i] /= math.gamma(i + 1)  # `gamma(n)` = (n-1)!
# Shape of `labels`: (`n_train` + `n_test`,)
labels = np.dot(poly_features, true_w)
labels += np.random.normal(scale=0.1, size=labels.shape)
```
Again, monomials stored in `poly_features`
are rescaled by the gamma function,
where $\Gamma(n)=(n-1)!$.
Take a look at the first 2 samples from the generated dataset.
The value 1 is technically a feature,
namely the constant feature corresponding to the bias.
```
# Convert from NumPy ndarrays to tensors
true_w, features, poly_features, labels = [tf.constant(x, dtype=
tf.float32) for x in [true_w, features, poly_features, labels]]
features[:2], poly_features[:2, :], labels[:2]
```
### Training and Testing the Model
Let us first implement a function to evaluate the loss on a given dataset.
```
def evaluate_loss(net, data_iter, loss): #@save
"""Evaluate the loss of a model on the given dataset."""
metric = d2l.Accumulator(2) # Sum of losses, no. of examples
for X, y in data_iter:
l = loss(net(X), y)
metric.add(tf.reduce_sum(l), tf.size(l).numpy())
return metric[0] / metric[1]
```
Now define the training function.
```
def train(train_features, test_features, train_labels, test_labels,
num_epochs=400):
loss = tf.losses.MeanSquaredError()
input_shape = train_features.shape[-1]
# Switch off the bias since we already catered for it in the polynomial
# features
net = tf.keras.Sequential()
net.add(tf.keras.layers.Dense(1, use_bias=False))
batch_size = min(10, train_labels.shape[0])
train_iter = d2l.load_array((train_features, train_labels), batch_size)
test_iter = d2l.load_array((test_features, test_labels), batch_size,
is_train=False)
trainer = tf.keras.optimizers.SGD(learning_rate=.01)
animator = d2l.Animator(xlabel='epoch', ylabel='loss', yscale='log',
xlim=[1, num_epochs], ylim=[1e-3, 1e2],
legend=['train', 'test'])
for epoch in range(num_epochs):
d2l.train_epoch_ch3(net, train_iter, loss, trainer)
if epoch == 0 or (epoch + 1) % 20 == 0:
animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
evaluate_loss(net, test_iter, loss)))
print('weight:', net.get_weights()[0].T)
```
### Third-Order Polynomial Function Fitting (Normal)
We will begin by using a third-order polynomial function, which is the same order as that of the data generation function.
The results show that this model's training and test losses can both be effectively reduced.
The learned model parameters are also close
to the true values $w = [5, 1.2, -3.4, 5.6]$.
```
# Pick the first four dimensions, i.e., 1, x, x^2/2!, x^3/3! from the
# polynomial features
train(poly_features[:n_train, :4], poly_features[n_train:, :4],
labels[:n_train], labels[n_train:])
```
### Linear Function Fitting (Underfitting)
Let us take another look at linear function fitting.
After the decline in early epochs,
it becomes difficult to further decrease
this model's training loss.
After the last epoch iteration has been completed,
the training loss is still high.
When used to fit nonlinear patterns
(like the third-order polynomial function here)
linear models are liable to underfit.
```
# Pick the first two dimensions, i.e., 1, x, from the polynomial features
train(poly_features[:n_train, :2], poly_features[n_train:, :2],
labels[:n_train], labels[n_train:])
```
### Higher-Order Polynomial Function Fitting (Overfitting)
Now let us try to train the model
using a polynomial of excessively high degree.
Here, there are insufficient data to learn that
the higher-degree coefficients should have values close to zero.
As a result, our overly complex model
is so flexible that it is unduly influenced
by noise in the training data.
Although the training loss can be effectively reduced,
the test loss remains much higher,
showing that the complex model overfits the data.
```
# Pick all the dimensions from the polynomial features
train(poly_features[:n_train, :], poly_features[n_train:, :],
labels[:n_train], labels[n_train:], num_epochs=1500)
```
In the subsequent sections, we will continue
to discuss overfitting problems
and methods for dealing with them,
such as weight decay and dropout.
## Summary
* Since the generalization error cannot be estimated based on the training error, simply minimizing the training error will not necessarily mean a reduction in the generalization error. Machine learning models need to guard against overfitting so as to keep the generalization error low.
* A validation set can be used for model selection, provided that it is not used too liberally.
* Underfitting means that a model is not able to reduce the training error. When training error is much lower than validation error, there is overfitting.
* We should choose an appropriately complex model and avoid using insufficient training samples.
## Exercises
1. Can you solve the polynomial regression problem exactly? Hint: use linear algebra.
1. Consider model selection for polynomials:
1. Plot the training loss vs. model complexity (degree of the polynomial). What do you observe? What degree of polynomial do you need to reduce the training loss to 0?
1. Plot the test loss in this case.
1. Generate the same plot as a function of the amount of data.
1. What happens if you drop the normalization ($1/i!$) of the polynomial features $x^i$? Can you fix this in some other way?
1. Can you ever expect to see zero generalization error?
[Discussions](https://discuss.d2l.ai/t/234)
```
import numpy as np
import math
import random
import pandas as pd
import os
import matplotlib.pyplot as plt
import cv2
import glob
import gc
from google.colab import files
src = list(files.upload().values())[0]
open('utils.py','wb').write(src)
from utils import *
from tqdm import tqdm
import pickle
from keras.optimizers import *
from keras.models import Model
from keras.layers import *
from keras.layers.core import *
from keras.layers.convolutional import *
from keras import backend as K
import tensorflow as tf
```
# Initialize the setting
```
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="1"
random.seed(123)
class Config():
def __init__(self):
self.frame_l = 32 # the length of frames
self.joint_n = 15 # the number of joints
self.joint_d = 2 # the dimension of joints
self.clc_num = 21 # the number of class
self.feat_d = 105
self.filters = 16
self.data_dir = '/mnt/nasbi/homes/fan/projects/action/skeleton/data/JHMDB/'
C = Config()
def data_generator(T,C,le):
X_0 = []
X_1 = []
Y = []
for i in tqdm(range(len(T['pose']))):
p = np.copy(T['pose'][i])
p = zoom(p,target_l=C.frame_l,joints_num=C.joint_n,joints_dim=C.joint_d)
label = np.zeros(C.clc_num)
label[le.transform(T['label'])[i]-1] = 1
M = get_CG(p,C)
X_0.append(M)
X_1.append(p)
Y.append(label)
X_0 = np.stack(X_0)
X_1 = np.stack(X_1)
Y = np.stack(Y)
return X_0,X_1,Y
```
# Building the model
```
def poses_diff(x):
H, W = x.get_shape()[1],x.get_shape()[2]
x = tf.subtract(x[:,1:,...],x[:,:-1,...])
x = tf.image.resize_nearest_neighbor(x,size=[H.value,W.value],align_corners=False) # should not alignment here
return x
def pose_motion(P,frame_l):
P_diff_slow = Lambda(lambda x: poses_diff(x))(P)
P_diff_slow = Reshape((frame_l,-1))(P_diff_slow)
P_fast = Lambda(lambda x: x[:,::2,...])(P)
P_diff_fast = Lambda(lambda x: poses_diff(x))(P_fast)
P_diff_fast = Reshape((int(frame_l/2),-1))(P_diff_fast)
return P_diff_slow,P_diff_fast
def c1D(x,filters,kernel):
x = Conv1D(filters, kernel_size=kernel,padding='same',use_bias=False)(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)
return x
def block(x,filters):
x = c1D(x,filters,3)
x = c1D(x,filters,3)
return x
def d1D(x,filters):
x = Dense(filters,use_bias=False)(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)
return x
def build_FM(frame_l=32,joint_n=22,joint_d=2,feat_d=231,filters=16):
M = Input(shape=(frame_l,feat_d))
P = Input(shape=(frame_l,joint_n,joint_d))
diff_slow,diff_fast = pose_motion(P,frame_l)
x = c1D(M,filters*2,1)
x = SpatialDropout1D(0.1)(x)
x = c1D(x,filters,3)
x = SpatialDropout1D(0.1)(x)
x = c1D(x,filters,1)
x = MaxPooling1D(2)(x)
x = SpatialDropout1D(0.1)(x)
x_d_slow = c1D(diff_slow,filters*2,1)
x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
x_d_slow = c1D(x_d_slow,filters,3)
x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
x_d_slow = c1D(x_d_slow,filters,1)
x_d_slow = MaxPool1D(2)(x_d_slow)
x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
x_d_fast = c1D(diff_fast,filters*2,1)
x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
x_d_fast = c1D(x_d_fast,filters,3)
x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
x_d_fast = c1D(x_d_fast,filters,1)
x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
x = concatenate([x,x_d_slow,x_d_fast])
x = block(x,filters*2)
x = MaxPool1D(2)(x)
x = SpatialDropout1D(0.1)(x)
x = block(x,filters*4)
x = MaxPool1D(2)(x)
x = SpatialDropout1D(0.1)(x)
x = block(x,filters*8)
x = SpatialDropout1D(0.1)(x)
return Model(inputs=[M,P],outputs=x)
def build_DD_Net(C):
M = Input(name='M', shape=(C.frame_l,C.feat_d))
P = Input(name='P', shape=(C.frame_l,C.joint_n,C.joint_d))
FM = build_FM(C.frame_l,C.joint_n,C.joint_d,C.feat_d,C.filters)
x = FM([M,P])
x = GlobalMaxPool1D()(x)
x = d1D(x,128)
x = Dropout(0.5)(x)
x = d1D(x,128)
x = Dropout(0.5)(x)
x = Dense(C.clc_num, activation='softmax')(x)
######################Self-supervised part
model = Model(inputs=[M,P],outputs=x)
return model
DD_Net = build_DD_Net(C)
DD_Net.summary()
```
## Train and test on GT_split 1
```
from google.colab import drive
import pickle
drive.mount('/content/drive')
DATA_PATH1 = "/content/drive/My Drive/Colab Notebooks/Data"
infile = open(DATA_PATH1+'/GT_train_1.pkl','rb')
Train = pickle.load(infile)
DATA_PATH2 = "/content/drive/My Drive/Colab Notebooks/Data"
testfile= open(DATA_PATH2+'/GT_test_1.pkl','rb')
Test = pickle.load(testfile)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(Train['label'])
X_0,X_1,Y = data_generator(Train,C,le)
X_test_0,X_test_1,Y_test = data_generator(Test,C,le)
import keras
lr = 1e-3
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=1e-5)
history = DD_Net.fit([X_0,X_1],Y,
batch_size=len(Y),
epochs=600,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test)
)
lr = 1e-3
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
history = DD_Net.fit([X_0,X_1],Y,
batch_size=len(Y),
epochs=500,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test)
)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
```
## Train and test on GT_split 2
```
from google.colab import drive
import pickle
drive.mount('/content/drive')
DATA_PATH1 = "/content/drive/My Drive/Colab Notebooks/Data"
infile = open(DATA_PATH1+'/GT_train_2.pkl','rb')
Train = pickle.load(infile)
DATA_PATH2 = "/content/drive/My Drive/Colab Notebooks/Data"
testfile= open(DATA_PATH2+'/GT_test_2.pkl','rb')
Test = pickle.load(testfile)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(Train['label'])
X_0,X_1,Y = data_generator(Train,C,le)
X_test_0,X_test_1,Y_test = data_generator(Test,C,le)
# Re-initialize weights, since training and testing data switch
DD_Net = build_DD_Net(C)
import keras
lr = 1e-3
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=1e-5)
history = DD_Net.fit([X_0,X_1],Y,
batch_size=len(Y),
epochs=600,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test)
)
lr = 1e-4
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
history = DD_Net.fit([X_0,X_1],Y,
batch_size=len(Y),
epochs=500,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test)
)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
```
## Train and test on GT_split 3
```
from google.colab import drive
import pickle
drive.mount('/content/drive')
DATA_PATH1 = "/content/drive/My Drive/Colab Notebooks/Data"
infile = open(DATA_PATH1+'/GT_train_3.pkl','rb')
Train = pickle.load(infile)
DATA_PATH2 = "/content/drive/My Drive/Colab Notebooks/Data"
testfile= open(DATA_PATH2+'/GT_test_3.pkl','rb')
Test = pickle.load(testfile)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(Train['label'])
X_0,X_1,Y = data_generator(Train,C,le)
X_test_0,X_test_1,Y_test = data_generator(Test,C,le)
# Re-initialize weights, since training and testing data switch
DD_Net = build_DD_Net(C)
import keras
lr = 1e-3
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=1e-5)
history = DD_Net.fit([X_0,X_1],Y,
batch_size=len(Y),
epochs=600,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test)
)
lr = 1e-3
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
history = DD_Net.fit([X_0,X_1],Y,
batch_size=len(Y),
epochs=500,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test)
)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
```
## Calculate average
```
(0.63+0.66+0.68)/3
```
# Face Recognition & Verification for Person Identification
Inspired by Coursera deeplearning.ai's "Happy House" face recognition assignment, I wanted to try implementing a face recognition system by using a face detection library (https://github.com/ageitgey/face_recognition) and the face_recognition model from the deeplearning.ai course specialization.
In this notebook, I implemented a person identification system by using a pre-trained model to map face images into 128-dimensional encodings.
In the notebook,
- I implement a pre-processing step for the images by using the face detection library
- Keep track of a person's encodings and try to improve performance by adding more pictures of that person (more embeddings of the same person)
- Detect and identify people in a given image
- Implement the triplet loss function
- Implement the face verification and face recognition steps (a small distance-threshold sketch follows this list)
- Save unknown encodings in the database dictionary for later identification
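Before the full pipeline, here is a minimal sketch (my addition, with made-up vectors rather than real FaceNet outputs) of the core idea: faces are compared by the L2 distance between their 128-dimensional encodings, and a match is declared when that distance falls under a threshold (0.8 is the value used later in this notebook).
```
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(v):
    return v / np.linalg.norm(v)

enc_anchor = l2_normalize(rng.normal(size=128))                          # probe face
enc_same = l2_normalize(enc_anchor + rng.normal(scale=0.05, size=128))   # same person, slight variation
enc_other = l2_normalize(rng.normal(size=128))                           # a different person

threshold = 0.8
for name, enc in [('same person', enc_same), ('different person', enc_other)]:
    dist = np.linalg.norm(enc_anchor - enc)
    print(f'{name}: distance = {dist:.2f} -> match = {dist < threshold}')
```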
```
#import the necessary packages
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks import *
import matplotlib.pyplot as plt
import face_recognition
from PIL import Image
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
# Initialize the model
# The model takes images with shape (3, 96, 96) 'channels first'
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
#Showing the architecture of the model
FRmodel.summary()
# using triplets of images, for triplet loss function
# anchor (A): picture of the person
# positive (P): picture of the same person of the anchor image
# negative (N): picture of a different person than the anchor image(person)
# Goal: Individual's encoding should be closer to the positive image and further away from negative image by margin alpha
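# (Added note) In formula form, for a batch of triplets (A, P, N):
#     L = sum_i max( ||f(A_i) - f(P_i)||^2 - ||f(A_i) - f(N_i)||^2 + alpha, 0 )
# i.e., the anchor-positive distance should be at least alpha smaller than the
# anchor-negative distance; triplets that already satisfy this contribute zero.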
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
# (encoding) distance between the anchor and the positive
pos_dist = tf.square(tf.subtract(anchor, positive))
# (encoding) distance between the anchor and the negative
neg_dist = tf.square(tf.subtract(anchor, negative))
# Subtracting the two previous distances and adding an alpha.
basic_loss = tf.add(tf.reduce_sum(tf.subtract(pos_dist, neg_dist)), alpha)
# Taking the maximum of basic_loss and 0.0. Summing over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0))
return loss
# Compile the model
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
#Function for resizing an image
def pre_process_image(img, image_size):
"""
Resizes an image into given image_size (height, width, channel)
Arguments:
img -- original image, array
    image_size -- tuple containing height, width, and channels of the image (h, w, c)
Returns:
img -- resized image
"""
height, width, channels = image_size
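    # Note (added): cv2.resize expects dsize=(width, height); the callers below
    # pass a square size (96x96), so the argument order is harmless here.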
img = cv2.resize(img, dsize=(height, width))
return img
# Function for identifying face locations on an image
def find_face_locations(image_path):
"""
returns the bounding box locations of the faces, image from the path
Arguments:
    image_path -- destination of the original image
Returns:
(top, right, bottom, left), image -- bounding box
if multiple faces present in the picture returns a list of tuples,
image obtained from image_path
"""
# Use face recognition module to detect faces
image = face_recognition.load_image_file(image_path)
#Test: print("Shape of the image: " + str(image.shape))
face_locations = face_recognition.face_locations(image)
for face_location in face_locations:
# Print the location of each face in this image
top, right, bottom, left = face_location
print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
return face_locations, image
# access the actual face itself and print
#face_image = image[top:bottom, left:right]
#pil_image = Image.fromarray(face_image)
#pil_image.show()
```
## Image to Embedding
`face_img_to_encoding(image_path, model)` : basically runs the forward propagation of the model on the specified image.
```
def face_img_to_encoding(image_path, model):
"""
returns the embedding vector of the specific image from the path
Arguments:
image_path -- Destination of the original image
model -- Inception model instance in Keras
Returns:
embeddings -- List containing embeddings of the people in the image
"""
# obtain the face locations and the image
face_locations, image = find_face_locations(image_path)
#initialize the embeddings list
embeddings = []
#initialize embeddings list
for face_location in face_locations:
# Print the location of each face in this image
top, right, bottom, left = face_location
# access the actual face itself
face_image = image[top:bottom, left:right]
# resize the cropped face image
image_size = (96, 96, 3)
img = pre_process_image(face_image, image_size)
# pre-process the face image
img = img[...,::-1]
img = np.around(np.transpose(img, (2,0,1))/255.0, decimals=12)
x_train = np.array([img])
embedding = model.predict_on_batch(x_train)
embeddings.append(embedding)
return embeddings
```
## Create the Database
```
# Create a initial database for identifying people
database = {}
database["leonardo dicaprio"] = face_img_to_encoding("my_images/dicaprio.jpg", FRmodel)
database["brad pitt"] = face_img_to_encoding("my_images/bradPitt1.jpg", FRmodel)
database["matt damon"] = face_img_to_encoding("my_images/mattDamon.jpg", FRmodel)
database["unknown"] = face_img_to_encoding("my_images/unknown.jpg", FRmodel)
# Test for face_img_to_encoding
embedding = face_img_to_encoding("my_images/dicaprio.jpg", FRmodel)
img = cv2.imread("my_images/dicaprio.jpg")
#Visualize the image
plt.imshow(img)
#Visualize the embedding
print(embedding)
```
## Face Verification
Face verification is a 1:1 matching problem: given the identity of a person, the program checks whether the person's picture matches that identity.
- The verify() function below implements a simple face verification routine
```
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
match -- True, if person(embedding) matches with the identity(embedding) .
"""
# Encodings in the image.
encodings = face_img_to_encoding(image_path, FRmodel)
#Loop inside encodings to obtain encoding of each person
for encoding in encodings:
# Step 2: Compute distance with identity's image
dist = np.linalg.norm(encoding - database[identity])
# Step 3: Match if dist < 0.8
if dist < 0.8:
print(str(identity) + ", you are verified")
match = True
else:
print("You're not " + str(identity) + "!!!")
match = False
return dist, match
```
## Let's see if we can verify Matt Damon
```
verify("my_images/dicaprio.jpg", "matt damon", database, FRmodel)
verify("my_images/mattDamon1.jpg", "matt damon", database, FRmodel)
```
## Face Recognition
Identifies the person without needing to provide an identity. This is a 1:K matching problem.
Steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has smallest distance with the target encoding.
```
def recognize(image_path, database, model):
"""
Implements face recognition by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- Inception model instance in Keras
Returns:
identities -- list, containing names of the predicted people on the image_path image
"""
## Step 1: Compute the encodings
encodings = face_img_to_encoding(image_path, model)
# Initialize the lists for keeping track of people in the picture
identities = []
unknown_encodings = []
# Loop over person encodings in the specific image
for encoding in encodings:
## Step 2: Find the closest encoding ##
# Initializing "min_dist" to a large value, say 100
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_encodings) in database.items():
for db_enc in db_encodings:
# Compute L2 distance between the target "encoding" and the current "emb" from the database.
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
if dist < min_dist:
min_dist = dist
identity = name
if min_dist > 0.8:
print("Not in the database.")
#Add the encoding in the database for unknown encodings
unknown_encodings.append(encoding)
else:
if identity not in identities and identity != "unknown":
print ("You're " + str(identity) + ", the distance is " + str(min_dist))
#Add the encoding to the known person's encoding list so that model can become more robust.
identities.append(identity)
face_encodings = database[str(identity)]
face_encodings.append(encoding)
database[str(identity)] = face_encodings
for encoding in unknown_encodings:
unknown = database["unknown"]
unknown.append(encoding)
database["unknown"] = unknown
return identities
```
## Let's see if the database can recognize unseen picture of Matt Damon
```
recognize("my_images/mattDamon1.jpg", database, FRmodel)
```
### End of The Recognition & Verification, Congratulations
Keep Learning...
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# Installation
```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```
This book is written in Jupyter Notebook, a browser based interactive Python environment that mixes Python, text, and math. I choose it because of the interactive features - I found Kalman filtering nearly impossible to learn until I started working in an interactive environment. It is difficult to form an intuition about many of the parameters until you can change them and immediately see the output. An interactive environment also allows you to play 'what if' scenarios. "What if I set $\mathbf{Q}$ to zero?" It is trivial to find out with Jupyter Notebook.
Another reason I chose it is that most textbooks leave many things opaque. For example, there might be a beautiful plot next to some pseudocode. That plot was produced by software, but software that is not available to the reader. I want everything that went into producing this book to be available to you. How do you plot a covariance ellipse? You won't know if you read most books. With Jupyter Notebook all you have to do is look at the source code.
Even if you choose to read the book online you will want Python and the SciPy stack installed so that you can write your own Kalman filters. There are many different ways to install these libraries, and I cannot cover them all, but I will cover a few typical scenarios.
## Installing the SciPy Stack
This book requires IPython, Jupyter, NumPy, SciPy, SymPy, and Matplotlib. The SciPy stack of NumPy, SciPy, and Matplotlib depends on third party Fortran and C code, and is not trivial to install from source code. The SciPy website strongly urges using a pre-built installation, and I concur with this advice.
Jupyter Notebook is the software that allows you to run Python inside of the browser - the book is a collection of Jupyter notebooks. IPython provides the infrastructure for Jupyter and data visualization. NumPy and SciPy are packages which provide the linear algebra implementation that the filters use. SymPy performs symbolic math - I use it to find derivatives of algebraic equations. Finally, Matplotlib provides plotting capability.
I use the Anaconda distribution from Continuum Analytics. This is an excellent distribution that combines all of the packages listed above, plus many others. IPython recommends this package to install IPython. Installation is very straightforward, and it can be done alongside other Python installations you might already have on your machine. It is free to use. You may download it from here: http://continuum.io/downloads I strongly recommend using the latest Python 3 version that they provide. For now I support Python 2.7, but perhaps not much longer.
There are other choices for installing the SciPy stack. You can find instructions here: http://scipy.org/install.html It can be very cumbersome, and I do not support it or provide any instructions on how to do it.
Many Linux distributions come with these packages pre-installed. However, they are often somewhat dated and they will need to be updated as the book depends on recent versions of all. Updating a specific Linux installation is beyond the scope of this book. An advantage of the Anaconda distribution is that it does not modify your local Python installation, so you can install it and not break your linux distribution. Some people have been tripped up by this. They install Anaconda, but the installed Python remains the default version and then the book's software doesn't run correctly.
I do not run regression tests on old versions of these libraries. In fact, I know the code will not run on older versions (say, from 2014-2015). I do not want to spend my life doing tech support for a book, thus I put the burden on you to install a recent version of Python and the SciPy stack.
You will need Python 2.7 or later installed. Almost all of my work is done in Python 3.6, but I periodically test on 2.7. I do not promise any specific check-in will work in 2.7, however. I use Python's `from __future__ import ...` statement to help with compatibility. For example, all prints need to use parentheses. If you try to add, say, `print x` into the book your script will fail; you must write `print(x)` as in Python 3.X.
Please submit a bug report at the book's [github repository](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python) if you have installed the latest Anaconda and something does not work - I will continue to ensure the book will run with the latest Anaconda release. I'm rather indifferent if the book will not run on an older installation. I'm sorry, but I just don't have time to provide support for everyone's different setups. Packages like `jupyter notebook` are evolving rapidly, and I cannot keep up with all the changes *and* remain backwards compatible as well.
If you need older versions of the software for other projects, note that Anaconda allows you to install multiple versions side-by-side. Documentation for this is here:
https://conda.io/docs/user-guide/tasks/manage-python.html
## Installing FilterPy
FilterPy is a Python library that implements all of the filters used in this book, and quite a few others. Installation is easy using `pip`. Issue the following from the command prompt:
pip install filterpy
FilterPy is written by me, and the latest development version is always available at https://github.com/rlabbe/filterpy.
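As a quick check that the install worked, here is a minimal example (a sketch of my own, not tied to any chapter) that builds a trivial 1D constant-velocity filter and runs one predict/update cycle:
```
import numpy as np
from filterpy.kalman import KalmanFilter

kf = KalmanFilter(dim_x=2, dim_z=1)    # state = [position, velocity]
kf.x = np.array([[0.], [0.]])          # initial state estimate
kf.F = np.array([[1., 1.], [0., 1.]])  # state transition matrix
kf.H = np.array([[1., 0.]])            # we measure position only
kf.P *= 500.                           # large initial uncertainty
kf.R = np.array([[5.]])                # measurement noise
kf.predict()
kf.update(np.array([[1.2]]))           # incorporate a single measurement
print(kf.x)
```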
## Downloading and Running the Book
The book is stored in a github repository. From the command line type the following:
git clone --depth=1 https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python.git
This will create a directory named Kalman-and-Bayesian-Filters-in-Python. The `depth` parameter just gets you the latest version. Unless you need to see my entire commit history this is a lot faster and saves space.
If you do not have git installed, browse to https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python where you can download the book via your browser.
Now, from the command prompt change to the directory that was just created, and then run Jupyter notebook:
cd Kalman-and-Bayesian-Filters-in-Python
jupyter notebook
A browser window should launch showing you all of the chapters in the book. Browse to the first chapter by clicking on it, then open the notebook in that subdirectory by clicking on the link.
More information about running the notebook can be found here:
http://jupyter-notebook-beginner-guide.readthedocs.org/en/latest/execute.html
## Companion Software
Code that is specific to the book is stored with the book in the subdirectory *./kf_book*. This code is in a state of flux; I do not wish to document it here yet. I do mention in the book when I use code from this directory, so it should not be a mystery.
In the *kf_book* subdirectory there are Python files with a name like *xxx*_internal.py. I use these to store functions that are useful for a specific chapter. This allows me to hide away Python code that is not particularly interesting to read - I may be generating a plot or chart, and I want you to focus on the contents of the chart, not the mechanics of how I generate that chart with Python. If you are curious as to the mechanics of that, just go and browse the source.
Some chapters introduce functions that are useful for the rest of the book. Those functions are initially defined within the Notebook itself, but the code is also stored in a Python file that is imported if needed in later chapters. I do document when I do this where the function is first defined, but this is still a work in progress. I try to avoid this because then I always face the issue of code in the directory becoming out of sync with the code in the book. However, IPython Notebook does not give us a way to refer to code cells in other notebooks, so this is the only mechanism I know of to share functionality across notebooks.
There is an undocumented directory called **experiments**. This is where I write and test code prior to putting it in the book. There is some interesting stuff in there, and feel free to look at it. As the book evolves I plan to create examples and projects, and a lot of this material will end up there. Small experiments will eventually just be deleted. If you are just interested in reading the book you can safely ignore this directory.
The subdirectory *./kf_book* contains a css file containing the style guide for the book. The default look and feel of IPython Notebook is rather plain. Work is being done on this. I have followed the examples set by books such as [Probabilistic Programming and Bayesian Methods for Hackers](http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Chapter1.ipynb). I have also been very influenced by Professor Lorena Barba's fantastic work, [available here](https://github.com/barbagroup/CFDPython). I owe all of my look and feel to the work of these projects.
## Using Jupyter Notebook
A complete tutorial on Jupyter Notebook is beyond the scope of this book. Many are available online. In short, Python code is placed in cells. These are prefaced with text like `In [1]:`, and the code itself is in a boxed area. If you press CTRL-ENTER while focus is inside the box the code will run and the results will be displayed below the box. Like this:
```
print(3+7.2)
```
If you have this open in Jupyter Notebook now, go ahead and modify that code by changing the expression inside the print statement and pressing CTRL+ENTER. The output should be changed to reflect what you typed in the code cell.
## SymPy
SymPy is a Python package for performing symbolic mathematics. The full scope of its abilities is beyond this book, but it can perform algebra, integrate and differentiate equations, find solutions to differential equations, and much more. For example, we use it to compute the Jacobian of matrices and to evaluate expected value integrals.
First, a simple example. We will import SymPy, initialize its pretty print functionality (which will print equations using LaTeX). We will then declare a symbol for SymPy to use.
```
import sympy
sympy.init_printing(use_latex='mathjax')
phi, x = sympy.symbols('\phi, x')
phi
```
Notice how it prints the symbol `phi` using LaTeX. Now let's do some math. What is the derivative of $\sqrt{\phi}$?
```
sympy.diff('sqrt(phi)')
```
We can factor equations
```
sympy.factor(phi**3 -phi**2 + phi - 1)
```
and we can expand them.
```
((phi+1)*(phi-4)).expand()
```
You can evaluate an equation for specific values of its variables:
```
w =x**2 -3*x +4
print(w.subs(x, 4))
print(w.subs(x, 12))
```
You can also use strings for equations that use symbols that you have not defined:
```
x = sympy.expand('(t+1)*2')
x
```
Now let's use SymPy to compute the Jacobian of a matrix. Given the function
$$h=\sqrt{(x^2 + z^2)}$$
find the Jacobian with respect to x, y, and z.
```
x, y, z = sympy.symbols('x y z')
H = sympy.Matrix([sympy.sqrt(x**2 + z**2)])
state = sympy.Matrix([x, y, z])
H.jacobian(state)
```
Now let's compute the discrete process noise matrix $\mathbf Q$ given the continuous process noise matrix
$$\mathbf Q = \Phi_s \begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix}$$
The integral is
$$\mathbf Q = \int_0^{\Delta t} \mathbf F(t)\mathbf Q\mathbf F^T(t)\, dt$$
where
$$\mathbf F(\Delta t) = \begin{bmatrix}1 & \Delta t & {\Delta t}^2/2 \\ 0 & 1 & \Delta t\\ 0& 0& 1\end{bmatrix}$$
```
dt = sympy.symbols('\Delta{t}')
F_k = sympy.Matrix([[1, dt, dt**2/2],
[0, 1, dt],
[0, 0, 1]])
Q = sympy.Matrix([[0,0,0],
[0,0,0],
[0,0,1]])
sympy.integrate(F_k*Q*F_k.T,(dt, 0, dt))
```
## Various Links
https://ipython.org/
https://jupyter.org/
https://www.scipy.org/
```
import ipywidgets as W
from wxyz.jsonld.widget_jsonld import Expand, Compact, Flatten, Frame, Normalize
from wxyz.lab.widget_dock import DockBox
from wxyz.lab.widget_editor import Editor
from wxyz.core.widget_json import JSON
flex = lambda x=1: dict(layout=dict(flex=f"{x}"))
context = JSON("""{
"@context": {
"@vocab": "http://schema.org/"
}
}""")
document = JSON("""{
"@graph": [{
"@type": "Person",
"@id": "this-guy",
"name": "Jekyll",
"jobTitle": "Doctor"
},{
"@type": "Person",
"@id": "this-guy",
"name": "Hyde",
"jobTitle": "Mister"
}]
}""")
context_source = Editor(description="JSON-LD Context", **flex())
document_source = Editor(description="JSON Document", **flex())
W.jslink((context, "source"), (context_source, "value"))
W.jslink((document, "source"), (document_source, "value"))
expand = Expand()
expand_output = Editor(description="Expanded")
W.jslink((expand, "value"), (expand_output, "value"))
W.jslink((document, "value"), (expand, "source"))
W.jslink((context, "value"), (expand, "expand_context"))
compact = Compact()
compact_output = Editor(description="Compacted")
W.jslink((compact, "value"), (compact_output, "value"))
W.jslink((document, "value"), (compact, "source"))
W.jslink((context, "value"), (compact, "context"))
W.jslink((context, "value"), (compact, "expand_context"))
flatten = Flatten()
flatten_output = Editor(description="Flattened")
W.jslink((flatten, "value"), (flatten_output, "value"))
W.jslink((document, "value"), (flatten, "source"))
W.jslink((context, "value"), (flatten, "context"))
W.jslink((context, "value"), (flatten, "expand_context"))
error = Editor("errors will appear here", description="errors be here", **flex(1))
W.jslink((expand, "error"), (error, "value"))
W.jslink((compact, "error"), (error, "value"))
W.jslink((flatten, "error"), (error, "value"))
jsonld_playground = DockBox([
document_source,
context_source,
expand_output,
compact_output,
flatten_output,
error
], layout=dict(height="60vh"))
@jsonld_playground.on_displayed
def on_display(*args, **kwargs):
jsonld_playground.dock_layout = {
'type': 'split-area',
'orientation': 'horizontal',
'children': [
{'type': 'split-area', 'orientation': 'vertical', 'children': [
{'type': 'tab-area', 'widgets': [0], 'currentIndex': 0},
{'type': 'tab-area', 'widgets': [1], 'currentIndex': 0},
], 'sizes': [2, 1]},
{'type': 'split-area', 'orientation': 'vertical', 'children': [
{'type': 'tab-area', 'widgets': [2], 'currentIndex': 0},
{'type': 'tab-area', 'widgets': [3], 'currentIndex': 0},
], 'sizes': [1, 1]},
{'type': 'split-area', 'orientation': 'vertical', 'children': [
{'type': 'tab-area', 'widgets': [4], 'currentIndex': 0},
{'type': 'tab-area', 'widgets': [5], 'currentIndex': 0}
], 'sizes': [1, 1]},
],
'sizes': [1, 1, 1]
}
jsonld_playground
```
## Content-Based Filtering Using Neural Networks
This notebook relies on files created in the [content_based_preproc.ipynb](./content_based_preproc.ipynb) notebook. Be sure to run the code in there before completing this notebook.
Also, we'll be using the **python3** kernel from here on out so don't forget to change the kernel if it's still Python2.
This lab illustrates:
1. how to build feature columns for a model using tf.feature_column
2. how to create custom evaluation metrics and add them to Tensorboard
3. how to train a model and make predictions with the saved model
Tensorflow Hub should already be installed. You can check that it is by using "pip freeze".
```
%%bash
pip freeze | grep tensor
```
If 'tensorflow-hub' isn't one of the outputs above, then you'll need to install it. Uncomment the cell below and execute the commands. After doing the pip install, click **"Reset Session"** on the notebook so that the Python environment picks up the new packages.
```
#%bash
#pip install tensorflow-hub
import os
import tensorflow as tf
import numpy as np
import tensorflow_hub as hub
import shutil
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
```
### Build the feature columns for the model.
To start, we'll load the list of categories, authors and article ids we created in the previous **Create Datasets** notebook.
```
categories_list = open("categories.txt").read().splitlines()
authors_list = open("authors.txt").read().splitlines()
content_ids_list = open("content_ids.txt").read().splitlines()
mean_months_since_epoch = 523
```
In the cell below we'll define the feature columns to use in our model. If necessary, remind yourself of the [various feature columns](https://www.tensorflow.org/api_docs/python/tf/feature_column) available.
For the embedded_title_column feature column, use a Tensorflow Hub Module to create an embedding of the article title. Since the articles and titles are in German, you'll want to use a German language embedding module.
Explore the text embedding Tensorflow Hub modules [available here](https://alpha.tfhub.dev/). Filter by setting the language to 'German'. The 50 dimensional embedding should be sufficient for our purposes.
```
embedded_title_column = hub.text_embedding_column(
key="title",
module_spec="https://tfhub.dev/google/nnlm-de-dim50/1",
trainable=False)
content_id_column = tf.feature_column.categorical_column_with_hash_bucket(
key="content_id",
hash_bucket_size= len(content_ids_list) + 1)
embedded_content_column = tf.feature_column.embedding_column(
categorical_column=content_id_column,
dimension=10)
author_column = tf.feature_column.categorical_column_with_hash_bucket(key="author",
hash_bucket_size=len(authors_list) + 1)
embedded_author_column = tf.feature_column.embedding_column(
categorical_column=author_column,
dimension=3)
category_column_categorical = tf.feature_column.categorical_column_with_vocabulary_list(
key="category",
vocabulary_list=categories_list,
num_oov_buckets=1)
category_column = tf.feature_column.indicator_column(category_column_categorical)
months_since_epoch_boundaries = list(range(400,700,20))
months_since_epoch_column = tf.feature_column.numeric_column(
key="months_since_epoch")
months_since_epoch_bucketized = tf.feature_column.bucketized_column(
source_column = months_since_epoch_column,
boundaries = months_since_epoch_boundaries)
crossed_months_since_category_column = tf.feature_column.indicator_column(tf.feature_column.crossed_column(
keys = [category_column_categorical, months_since_epoch_bucketized],
hash_bucket_size = len(months_since_epoch_boundaries) * (len(categories_list) + 1)))
feature_columns = [embedded_content_column,
embedded_author_column,
category_column,
embedded_title_column,
crossed_months_since_category_column]
```
### Create the input function.
Next we'll create the input function for our model. This input function reads the data from the csv files we created in the previous labs.
```
record_defaults = [["Unknown"], ["Unknown"],["Unknown"],["Unknown"],["Unknown"],[mean_months_since_epoch],["Unknown"]]
column_keys = ["visitor_id", "content_id", "category", "title", "author", "months_since_epoch", "next_content_id"]
label_key = "next_content_id"
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column,record_defaults=record_defaults)
features = dict(zip(column_keys, columns))
label = features.pop(label_key)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
```
### Create the model and train/evaluate
Next, we'll build our model, which recommends an article for a visitor to the Kurier.at website. Look through the code below. We use the input_layer feature column to create the dense input layer to our network. This is a simple feed-forward network where the number and size of the hidden layers can be adjusted through a parameter.
Currently, we compute the accuracy between our predicted 'next article' and the actual 'next article' read next by the visitor. We'll also add an additional performance metric of top 10 accuracy to assess our model. To accomplish this, we compute the top 10 accuracy metric, add it to the metrics dictionary below and add it to the tf.summary so that this value is reported to Tensorboard as well.
```
def model_fn(features, labels, mode, params):
net = tf.feature_column.input_layer(features, params['feature_columns'])
for units in params['hidden_units']:
net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
# Compute logits (1 per class).
logits = tf.layers.dense(net, params['n_classes'], activation=None)
predicted_classes = tf.argmax(logits, 1)
from tensorflow.python.lib.io import file_io
with file_io.FileIO('content_ids.txt', mode='r') as ifp:
content = tf.constant([x.rstrip() for x in ifp])
predicted_class_names = tf.gather(content, predicted_classes)
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
'class_ids': predicted_classes[:, tf.newaxis],
'class_names' : predicted_class_names[:, tf.newaxis],
'probabilities': tf.nn.softmax(logits),
'logits': logits,
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
table = tf.contrib.lookup.index_table_from_file(vocabulary_file="content_ids.txt")
labels = table.lookup(labels)
# Compute loss.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Compute evaluation metrics.
accuracy = tf.metrics.accuracy(labels=labels,
predictions=predicted_classes,
name='acc_op')
top_10_accuracy = tf.metrics.mean(tf.nn.in_top_k(predictions=logits,
targets=labels,
k=10))
metrics = {
'accuracy': accuracy,
'top_10_accuracy' : top_10_accuracy}
tf.summary.scalar('accuracy', accuracy[1])
tf.summary.scalar('top_10_accuracy', top_10_accuracy[1])
if mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec(
mode, loss=loss, eval_metric_ops=metrics)
# Create training op.
assert mode == tf.estimator.ModeKeys.TRAIN
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
```
### Train and Evaluate
```
outdir = 'content_based_model_trained'
shutil.rmtree(outdir, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
estimator = tf.estimator.Estimator(
model_fn=model_fn,
model_dir = outdir,
params={
'feature_columns': feature_columns,
'hidden_units': [200, 100, 50],
'n_classes': len(content_ids_list)
})
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("training_set.csv", tf.estimator.ModeKeys.TRAIN),
max_steps = 2000)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("test_set.csv", tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 30,
throttle_secs = 60)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
This takes a while to complete but in the end, I get about **30% top 10 accuracy**.
### Make predictions with the trained model.
With the model now trained, we can make predictions by calling the predict method on the estimator. Let's look at how our model predicts on the first five examples of the training set.
To start, we'll create a new file 'first_5.csv' which contains the first five elements of our training set. We'll also save the target values to a file 'first_5_content_ids' so we can compare our results.
```
%%bash
head -5 training_set.csv > first_5.csv
head first_5.csv
awk -F "\"*,\"*" '{print $2}' first_5.csv > first_5_content_ids
```
Recall, to make predictions on the trained model we pass a list of examples through the input function. Complete the code below to make predictions on the examples contained in the "first_5.csv" file we created above.
```
output = list(estimator.predict(input_fn=read_dataset("first_5.csv", tf.estimator.ModeKeys.PREDICT)))
import numpy as np
recommended_content_ids = [np.asscalar(d["class_names"]).decode('UTF-8') for d in output]
content_ids = open("first_5_content_ids").read().splitlines()
```
Finally, we map the content id back to the article title. Let's compare our model's recommendation for the first example. This can be done in BigQuery. Look through the query below and make sure it is clear what is being returned.
```
import google.datalab.bigquery as bq
recommended_title_sql="""
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(recommended_content_ids[0])
current_title_sql="""
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(content_ids[0])
recommended_title = bq.Query(recommended_title_sql).execute().result().to_dataframe()['title'].tolist()[0]
current_title = bq.Query(current_title_sql).execute().result().to_dataframe()['title'].tolist()[0]
print("Current title: {} ".format(current_title))
print("Recommended title: {}".format(recommended_title))
```
### Tensorboard
As usual, we can monitor the performance of our training job using Tensorboard.
```
from google.datalab.ml import TensorBoard
TensorBoard().start('content_based_model_trained')
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print("Stopped TensorBoard with pid {}".format(pid))
```
```
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import scale
wines=pd.read_csv("wine.csv")
wines
wines.describe()
wines.info()
wines_ary=wines.values
wines_ary
wines_normal = scale(wines_ary)
wines_normal
```
# PCA Implementation
```
pca = PCA()
pca_values = pca.fit_transform(wines_normal)
pca_values
var = pca.explained_variance_ratio_
var
var1 = np.cumsum(np.round(var,decimals = 4)*100)
var1
pca.components_
plt.plot(var1, color='red', marker = 'o',linestyle = '--')
# Final Dataframe
finalDf =pd.concat([wines['Type'],pd.DataFrame(pca_values[:,0:3], columns=['pc1','pc2','pc3'])] ,axis = 1)
finalDf
finalDf = pd.concat([pd.DataFrame(pca_values[:,0:3],columns=['pc1','pc2','pc3']), wines['Type']], axis = 1)
finalDf
# Visualization of PCAs
fig=plt.figure(figsize=(16,12))
sns.scatterplot(data=finalDf)
sns.scatterplot(data=finalDf,x='pc1',y='pc2', hue='Type')
sns.scatterplot(data=finalDf,x='pc1',y='pc3', hue='Type')
sns.scatterplot(data=finalDf,x='pc2',y='pc3', hue='Type')
```
# Checking with other Clustering Algorithms
# 1. Hierarchical Clustering
```
# Import Libraries
import scipy.cluster.hierarchy as sch
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import normalize
# As we already have normalized data, create Dendrograms
plt.figure(figsize=(10,8))
dendrogram=sch.dendrogram(sch.linkage(finalDf,method='average'))
hc=AgglomerativeClustering(n_clusters=6, affinity='euclidean', linkage = 'average')
hc
y_hc=pd.DataFrame(hc.fit_predict(finalDf),columns=['clustersid'])
y_hc['clustersid'].value_counts()
# Adding clusters to dataset
wine3=wines.copy()
wine3['clustersid']=hc.labels_
wine3
```
# 2. K-Means Clustering
```
# Import Libraries
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_wines= scaler.fit_transform(wines.iloc[:,1:])
scaled_wines
# within-cluster sum-of-squares criterion
wcss=[]
for i in range (1,11):
kmeans=KMeans(n_clusters=i,random_state=0)
kmeans.fit(finalDf)
wcss.append(kmeans.inertia_)
# Plot K values range vs WCSS to get Elbow graph for choosing K (no. of clusters)
plt.plot(range(1,11),wcss, marker = 'o', linestyle = '--')
plt.title('Elbow Graph')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
```
# Build Cluster algorithm using K=4
```
# Cluster algorithm using K=4
clusters3=KMeans(4,random_state=30).fit(finalDf)
clusters3
clusters3.labels_
# Assign clusters to the data set
wine4=wines.copy()
wine4['clustersid']=clusters3.labels_
wine4
wine4['clustersid'].value_counts()
scaled_wines
```
This notebook contains a bunch of experiments to determine the optimal learning rate value for different optimizers. The reference model is a CNN with 3 convolutional blocks; the dataset is an augmented version of the CBIS dataset.
# Environment setup
```
# Connect to Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
# Copy the dataset from Google Drive to local
!cp "/content/gdrive/My Drive/CBIS_DDSM.zip" .
!unzip -qq CBIS_DDSM.zip
!rm CBIS_DDSM.zip
cbis_path = 'CBIS_DDSM'
# Import libraries
%tensorflow_version 1.x
import os
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler, Callback
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop, SGD, Adam, Nadam
```
# Data pre-processing
```
def load_training():
"""
Load the training set (excluding baseline patches)
"""
images = np.load(os.path.join(cbis_path, 'numpy data', 'train_tensor.npy'))[1::2]
labels = np.load(os.path.join(cbis_path, 'numpy data', 'train_labels.npy'))[1::2]
return images, labels
def load_testing():
"""
Load the test set (abnormalities patches and labels, no baseline)
"""
images = np.load(os.path.join(cbis_path, 'numpy data', 'public_test_tensor.npy'))[1::2]
labels = np.load(os.path.join(cbis_path, 'numpy data', 'public_test_labels.npy'))[1::2]
return images, labels
def remap_label(l):
"""
Remap the labels to 0->mass 1->calcification
"""
if l == 1 or l == 2:
return 0
elif l == 3 or l == 4:
return 1
else:
print("[WARN] Unrecognized label (%d)" % l)
return None
# Load training and test images (abnormalities only, no baseline)
train_images, train_labels= load_training()
test_images, test_labels= load_testing()
# Number of images
n_train_img = train_images.shape[0]
n_test_img = test_images.shape[0]
print("Train size: %d \t Test size: %d" % (n_train_img, n_test_img))
# Compute width and height of images
img_w = train_images.shape[1]
img_h = train_images.shape[2]
print("Image size: %dx%d" % (img_w, img_h))
# Remap labels
train_labels = np.array([remap_label(l) for l in train_labels])
test_labels = np.array([remap_label(l) for l in test_labels])
# Create a new dimension for color in the images arrays
train_images = train_images.reshape((n_train_img, img_w, img_h, 1))
test_images = test_images.reshape((n_test_img, img_w, img_h, 1))
# Convert from 16-bit (0-65535) to float (0-1)
train_images = train_images.astype('uint16') / 65535
test_images = test_images.astype('uint16') / 65535
# Shuffle the training set (originally sorted by label)
perm = np.random.permutation(n_train_img)
train_images = train_images[perm]
train_labels = train_labels[perm]
# Create a generator for training images
train_datagen = ImageDataGenerator(
validation_split=0.2,
rotation_range=180,
zoom_range=0.2,
horizontal_flip=True,
vertical_flip=True,
fill_mode='reflect'
)
# Fit the generator with some images
train_datagen.fit(train_images)
# Split train images into actual training and validation
train_generator = train_datagen.flow(train_images, train_labels, batch_size=128, subset='training')
validation_generator = train_datagen.flow(train_images, train_labels, batch_size=128, subset='validation')
# Visualize one image from the dataset and its label, just to make sure the data format is correct
idx = 0
plt.imshow(train_images[idx][:,:,0], cmap='gray')
plt.show()
print("Label: " + str(train_labels[idx]))
def create_cnn():
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(48, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
return model
```
# Learning rate estimation
The following experiment involves four of the most popular optimizers for NNs: SGD, RMSprop, Adam and Nadam.
In order to roughly approximate the range of reasonable learning rate values for each optimizer, a simple strategy is adopted. Starting with a very low LR, its value is slightly increased at the end of each epoch. Initially, the network will learn slowly, because a small LR does not allow large weight updates, hence the loss will remain more or less constant.
Then, the LR increases and the loss starts decreasing. At some point, however, the learning rate becomes so big that updates cause large and unpredictable fluctuations of the loss, and the network basically starts diverging.
In practice, the loss will start from a value around 0.69, corresponding to random prediction. Once it drops below a certain threshold, say 0.6, one can safely assume that the network has started learning and will eventually reach a loss minimum. Later, as soon as the weights diverge and the network goes back to random prediction, training is stopped.
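The 0.69 starting point is simply the binary cross-entropy of a coin-flip prediction, i.e. -ln(0.5) ≈ 0.693; a quick check:
```
# Binary cross-entropy of always predicting p=0.5, regardless of the true label
print(-np.log(0.5))  # ~0.6931
```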
```
loss_lower_threshold = 0.60
loss_upper_threshold = 0.69
class StopOnDivergingLoss(Callback):
def on_epoch_end(self, epoch, logs={}):
global low_reached
if logs.get('loss') < loss_lower_threshold:
low_reached = True
if logs.get('loss') > loss_upper_threshold and low_reached:
print("\nStopping training!")
self.model.stop_training = True
# Callback for monitoring the loss at each learning rate
class LossLRCallback(Callback):
def on_epoch_end(self, epoch, logs=None):
lr2loss[opt][0].append(keras.backend.eval(self.model.optimizer.lr))
lr2loss[opt][1].append(logs['loss'])
# Callback to update the learning rate
lr_inc_rate = 1.1
def lr_scheduler(epoch):
new_lr = lr_begin*(lr_inc_rate**epoch)
print("Learning rate: %.7f" % new_lr)
return new_lr
opts = [SGD, RMSprop, Adam, Nadam]
initial_lr = {
SGD: 1e-3,
RMSprop: 1e-5,
Adam: 1e-6,
Nadam: 1e-6
}
lr2loss = {
SGD: [[], []],
RMSprop: [[], []],
Adam: [[], []],
Nadam: [[], []]
}
# For each optimizer, perform a run incrementing the learning rate after every
# epoch, and keep track of the results
for opt in opts:
print("Optimizer: " + opt.__name__)
cnn = create_cnn()
lr_begin = initial_lr[opt]
low_reached = False
stop_on_diverging_loss = StopOnDivergingLoss()
losslrcb = LossLRCallback()
lrschedulecb = keras.callbacks.LearningRateScheduler(lr_scheduler)
cnn.compile(
optimizer=opt(learning_rate=lr_begin),
loss='binary_crossentropy',
metrics=['accuracy'])
history = cnn.fit_generator(
train_generator,
steps_per_epoch=n_train_img // 128,
epochs=300,
validation_data=validation_generator,
callbacks=[stop_on_diverging_loss, losslrcb, lrschedulecb],
shuffle=True,
verbose=1,
initial_epoch=0)
# Plot the loss obtained at different learning rates
plt.figure(figsize=(9, 8), dpi=80, facecolor='w', edgecolor='k')
for opt in opts:
plt.xscale('log')
plt.ylim(0.35, 0.8)
plt.plot(lr2loss[opt][0], lr2loss[opt][1], label=opt.__name__)
plt.title(' Loss-LR curve')
plt.ylabel('Loss')
plt.xlabel('Learning rate')
plt.legend(loc='lower right')
plt.show()
```
The graph above clearly shows that the learning rate plays a decisive role during training. When it is too high, weight updates become too large and the network becomes unstable, failing to converge towards the loss minimum. On the other hand, if it is set too low, the network learns slowly and we observe only modest improvements between two consecutive epochs.
The global minimum of the Loss-LR curve indicates the point where the learning rate starts causing instabilities, hence choosing a greater value is discouraged.
Ideally, the best one is in the region with the fastest descent of the loss function, that is, where the plotted curve is steepest (negatively). It should also be noted that, in a stable network, loss variations naturally decrease over time, even if the LR remains constant, as a consequence of the gradual convergence of the weights towards the optimum. Thus, the steepest point may not directly represent the optimal LR, but rather a lower bound for it.
That said, a practical way to choose an adequate LR for an optimizer is to pick a value between the steepest point and the minimum, e.g. in the middle of this region.
In this case, reasonable choices are:
* **SGD** : 3e-2
* **RMSProp** : 1e-4
* **Adam** : 1e-4
* **Nadam** : 1e-4
Note how these values slightly differ from the Keras default ones.
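As a sanity check, the steepest point of each recorded Loss-LR curve can also be located programmatically. The sketch below differentiates the loss with respect to log10(LR) using the `lr2loss` dictionary populated above; it is only a rough estimate, and in practice the curve should be smoothed before taking the derivative.
```
# Rough estimate of the steepest descent of each Loss-LR curve
for opt in opts:
    lrs = np.array(lr2loss[opt][0])
    losses = np.array(lr2loss[opt][1])
    # slope of the loss with respect to log10(learning rate)
    slopes = np.gradient(losses, np.log10(lrs))
    print("%s: steepest descent around lr=%.1e" % (opt.__name__, lrs[np.argmin(slopes)]))
```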
# Optimizers comparison
In the following experiment each optimizer runs once for 100 epochs, with the previously determined learning rate.
```
# For each optimizer, execute a training run with the previously determined best learning rate
optimal_lr = {
SGD: 3e-2,
RMSprop: 1e-4,
Adam: 1e-4,
Nadam: 1e-4
}
histories = {}
for opt in opts:
print("Optimizer: " + opt.__name__)
cnn = create_cnn()
cnn.compile(
optimizer=opt(learning_rate=optimal_lr[opt]),
loss='binary_crossentropy',
metrics=['accuracy'])
histories[opt] = cnn.fit_generator(
train_generator,
steps_per_epoch=n_train_img // 128,
epochs=100,
validation_data=validation_generator,
shuffle=True,
verbose=1,
initial_epoch=0)
# Validation accuracy
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Validation accuracy comparison')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
for opt in opts:
val_acc = histories[opt].history['val_acc']
epochs = range(1, len(val_acc)+1)
plt.plot(epochs, val_acc, label=opt.__name__)
plt.legend(loc='lower right')
# Validation loss
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Validation loss comparison')
plt.ylabel('Loss')
plt.xlabel('Epoch')
for opt in opts:
val_loss = histories[opt].history['val_loss']
epochs = range(1, len(val_loss)+1)
plt.plot(epochs, val_loss, label=opt.__name__)
plt.legend(loc='lower right')
```
The graphs show that SGD is relatively weak compared to the other optimizers. Adam converges faster than RMSprop and Nadam, whose curves are quite similar.
# Learning rate verification
Now that approximate values for the learning rate have been discovered, one may try to directly experiment with different nearby values and find which one works best.
## RMSprop
```
# Try RMSprop with different learning rates
lr_to_test = (1e-5, 1e-4, 1e-3)
opt = RMSprop
histories = {}
for lr in lr_to_test:
print("RMS [lr = %.5f]: " % lr)
cnn = create_cnn()
cnn.compile(
optimizer=opt(learning_rate=lr),
loss='binary_crossentropy',
metrics=['accuracy'])
histories[lr] = cnn.fit_generator(
train_generator,
steps_per_epoch=n_train_img // 128,
epochs=100,
validation_data=validation_generator,
shuffle=True,
verbose=1,
initial_epoch=0)
# Validation accuracy
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Validation accuracy comparison')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
for lr in lr_to_test:
val_acc = histories[lr].history['val_acc']
epochs = range(1, len(val_acc)+1)
plt.plot(epochs, val_acc, label=("%s, lr=%f" % (opt.__name__, lr)))
plt.legend(loc='lower right')
# Validation loss
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Validation loss comparison')
plt.ylabel('Loss')
plt.xlabel('Epoch')
for lr in lr_to_test:
val_loss = histories[lr].history['val_loss']
epochs = range(1, len(val_loss)+1)
plt.plot(epochs, val_loss, label=("%s, lr=%f" % (opt.__name__, lr)))
plt.legend(loc='upper right');
```
**Result**: Values between 1e-4 and 1e-3 produce similar results. 1e-3 is more noisy, but converges a bit faster. On the other hand, 1e-5 represents an excessively low value.
## Adam
```
# Try Adam with different learning rates
lr_to_test = (1e-5, 1e-4, 1e-3)
opt = Adam
histories = {}
for lr in lr_to_test:
print("Adam [lr = %.5f]: " % lr)
cnn = create_cnn()
cnn.compile(
optimizer=opt(learning_rate=lr),
loss='binary_crossentropy',
metrics=['accuracy'])
histories[lr] = cnn.fit_generator(
train_generator,
steps_per_epoch=n_train_img // 128,
epochs=100,
validation_data=validation_generator,
shuffle=True,
verbose=1,
initial_epoch=0)
# Validation accuracy
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Validation accuracy comparison')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
for lr in lr_to_test:
val_acc = histories[lr].history['val_acc']
epochs = range(1, len(val_acc)+1)
plt.plot(epochs, val_acc, label=("%s, lr=%f" % (opt.__name__, lr)))
plt.legend(loc='lower right')
# Validation loss
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Validation loss comparison')
plt.ylabel('Loss')
plt.xlabel('Epoch')
for lr in lr_to_test:
val_loss = histories[lr].history['val_loss']
epochs = range(1, len(val_loss)+1)
plt.plot(epochs, val_loss, label=("%s, lr=%f" % (opt.__name__, lr)))
plt.legend(loc='upper right');
```
**Result**: 1e-5 is definitely a bad choice, as the network converges very slowly. Interestingly, 1e-4 produces better results than 1e-3.
### Load Test deployed web application
This notebook pulls some images and tests them against the deployed web application. We submit requests asynchronously, which should reduce the contribution of latency to the total running time.
```
import asyncio
import json
import random
import urllib.request
from timeit import default_timer
import aiohttp
import matplotlib.pyplot as plt
import testing_utilities
from tqdm import tqdm
%matplotlib inline
```
We will test our deployed service with 100 calls. We will only have 4 requests concurrently at any time. We have only deployed one pod on one node and increasing the number of concurrent calls does not really increase throughput. Feel free to try different values and see how the service responds.
```
NUMBER_OF_REQUESTS = 100 # Total number of requests
CONCURRENT_REQUESTS = 4 # Number of requests at a time
```
Get the IP address of our service
```
service_json = !kubectl get service azure-dl -o json
service_dict = json.loads(''.join(service_json))
app_url = service_dict['status']['loadBalancer']['ingress'][0]['ip']
scoring_url = 'http://{}/score'.format(app_url)
version_url = 'http://{}/version'.format(app_url)
!curl $version_url # Reports the Tensorflow Version
IMAGEURL = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg"
plt.imshow(testing_utilities.to_img(IMAGEURL))
def gen_variations_of_one_image(num, label='image'):
out_images = []
img = testing_utilities.to_img(IMAGEURL).convert('RGB')
# Flip the colours for one-pixel
# "Different Image"
for i in range(num):
diff_img = img.copy()
rndm_pixel_x_y = (random.randint(0, diff_img.size[0]-1),
random.randint(0, diff_img.size[1]-1))
current_color = diff_img.getpixel(rndm_pixel_x_y)
diff_img.putpixel(rndm_pixel_x_y, current_color[::-1])
b64img = testing_utilities.to_base64(diff_img)
out_images.append(json.dumps({'input':{label:'\"{0}\"'.format(b64img)}}))
return out_images
url_list = [[scoring_url, jsonimg] for jsonimg in gen_variations_of_one_image(NUMBER_OF_REQUESTS)]
def decode(result):
return json.loads(result.decode("utf-8"))
async def fetch(url, session, data, headers):
start_time = default_timer()
async with session.request('post', url, data=data, headers=headers) as response:
resp = await response.read()
elapsed = default_timer() - start_time
return resp, elapsed
async def bound_fetch(sem, url, session, data, headers):
# Getter function with semaphore.
async with sem:
return await fetch(url, session, data, headers)
async def await_with_progress(coros):
results=[]
for f in tqdm(asyncio.as_completed(coros), total=len(coros)):
result = await f
results.append((decode(result[0]),result[1]))
return results
async def run(url_list, num_concurrent=CONCURRENT_REQUESTS):
headers = {'content-type': 'application/json'}
tasks = []
# create instance of Semaphore
sem = asyncio.Semaphore(num_concurrent)
# Create a client session that ensures we don't open a new connection
# per each request.
async with aiohttp.ClientSession() as session:
for url, data in url_list:
# pass Semaphore and session to every POST request
task = asyncio.ensure_future(bound_fetch(sem, url, session, data, headers))
tasks.append(task)
return await await_with_progress(tasks)
```
Below we run the 100 requests against our deployed service
```
loop = asyncio.get_event_loop()
start_time = default_timer()
complete_responses = loop.run_until_complete(asyncio.ensure_future(run(url_list, num_concurrent=CONCURRENT_REQUESTS)))
elapsed = default_timer() - start_time
print('Total Elapsed {}'.format(elapsed))
print('Avg time taken {0:4.2f} ms'.format(1000*elapsed/len(url_list)))
```
Below we can see the output of some of our calls
```
complete_responses[:3]
num_successful = [i[0]['result'][0]['image'][0][0] for i in complete_responses].count('n02127052 lynx, catamount')
print('Successful {} out of {}'.format(num_successful, len(url_list)))
# Example response
plt.imshow(testing_utilities.to_img(IMAGEURL))
complete_responses[0]
```
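Since each entry of `complete_responses` also stores the per-request elapsed time, the latency distribution can be summarized as well; a minimal sketch:
```
import numpy as np

# Elapsed time (seconds) is the second element of each (response, elapsed) tuple
latencies = np.array([r[1] for r in complete_responses])
print('p50: {:.3f} s, p95: {:.3f} s, max: {:.3f} s'.format(
    np.percentile(latencies, 50), np.percentile(latencies, 95), latencies.max()))
```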
To tear down the cluster and all related resources go to the [deploy on AKS notebook](04_DeployOnAKS.ipynb)
# Uncertainty Sampling on the Radio Galaxy Zoo
```
import sys
import h5py, numpy, sklearn.linear_model, sklearn.neighbors, sklearn.pipeline, sklearn.preprocessing
from astropy.coordinates import SkyCoord
import matplotlib.pyplot as plt
sys.path.insert(1, '..')
import crowdastro.train, crowdastro.test
TRAINING_H5_PATH = '../training.h5'
CROWDASTRO_H5_PATH = '../crowdastro.h5'
NORRIS_DAT_PATH = '../data/norris_2006_atlas_classifications_ra_dec_only.dat'
CLASSIFIER_OUT_PATH = '../classifier.pkl'
ASTRO_TRANSFORMER_OUT_PATH = '../astro_transformer.pkl'
IMAGE_TRANSFORMER_OUT_PATH = '../image_transformer.pkl'
IMAGE_SIZE = 200 * 200
ARCMIN = 1 / 60
N_JOBS = 8
%matplotlib inline
# Load labels.
with h5py.File(TRAINING_H5_PATH, 'r') as training_h5:
crowdsourced_labels = training_h5['labels'].value
with h5py.File(CROWDASTRO_H5_PATH, 'r') as crowdastro_h5:
ir_names = crowdastro_h5['/wise/cdfs/string'].value
ir_positions = crowdastro_h5['/wise/cdfs/numeric'].value[:, :2]
ir_tree = sklearn.neighbors.KDTree(ir_positions)
with open(NORRIS_DAT_PATH, 'r') as norris_dat:
norris_coords = [r.strip().split('|') for r in norris_dat]
norris_labels = numpy.zeros((len(ir_positions)))
for ra, dec in norris_coords:
# Find a neighbour.
skycoord = SkyCoord(ra=ra, dec=dec, unit=('hourangle', 'deg'))
ra = skycoord.ra.degree
dec = skycoord.dec.degree
((dist,),), ((ir,),) = ir_tree.query([(ra, dec)])
if dist < 0.1:
norris_labels[ir] = 1
def softmax(x):
exp = numpy.exp(x - numpy.max(x))
out = exp / exp.sum()
return out
def train_and_test(hidden_atlas_training_indices):
"""
hidden_atlas_training_indices: ATLAS indices to hide.
"""
with h5py.File(TRAINING_H5_PATH, 'r') as training_h5, h5py.File(CROWDASTRO_H5_PATH, 'r') as crowdastro_h5:
n_static = 5 if training_h5.attrs['ir_survey'] == 'wise' else 6
train_indices = training_h5['is_ir_train'].value
atlas_train_indices = training_h5['is_atlas_train'].value
# Remove all IR objects near hidden ATLAS objects.
for atlas_index in hidden_atlas_training_indices:
ir = crowdastro_h5['/atlas/cdfs/numeric'][atlas_index, n_static + IMAGE_SIZE:]
nearby = (ir < ARCMIN).nonzero()[0]
for ir_index in nearby:
train_indices[ir_index] = 0
n_ir = train_indices.sum()
# We can now proceed as usual with training/testing.
outputs = training_h5['labels'].value[train_indices]
n = len(outputs)
astro_inputs = numpy.minimum(
training_h5['features'][train_indices, :n_static], 1500)
image_inputs = training_h5['features'].value[train_indices, n_static:]
astro_transformer = sklearn.pipeline.Pipeline([
('normalise', sklearn.preprocessing.Normalizer()),
('scale', sklearn.preprocessing.StandardScaler()),
])
image_transformer = sklearn.pipeline.Pipeline([
('normalise', sklearn.preprocessing.Normalizer()),
])
features = []
features.append(astro_transformer.fit_transform(astro_inputs))
features.append(image_transformer.fit_transform(image_inputs))
inputs = numpy.hstack(features)
classifier = sklearn.linear_model.LogisticRegression(
class_weight='balanced', n_jobs=N_JOBS)
classifier.fit(inputs, outputs)
# Test the classifier.
test_indices = training_h5['is_atlas_test'].value
numeric_subjects = crowdastro_h5['/atlas/cdfs/numeric'][test_indices, :]
n_norris_agree = 0
n_crowdsourced_agree = 0
n_all_agree = 0
n_either_agree = 0
n_no_host = 0
n_total = 0
for subject in numeric_subjects:
swire = subject[2 + IMAGE_SIZE:]
nearby = swire < ARCMIN
astro_inputs = numpy.minimum(training_h5['features'][nearby, :n_static],
1500)
image_inputs = training_h5['features'][nearby, n_static:]
features = []
features.append(astro_transformer.transform(astro_inputs))
features.append(image_transformer.transform(image_inputs))
inputs = numpy.hstack(features)
crowdsourced_outputs = crowdsourced_labels[nearby]
norris_outputs = norris_labels[nearby]
if sum(crowdsourced_outputs) < 1 or sum(norris_outputs) < 1:
# No hosts!
n_no_host += 1
continue
selection = classifier.predict_proba(inputs)[:, 1].argmax()
n_norris_agree += norris_outputs[selection]
n_crowdsourced_agree += crowdsourced_outputs[selection]
n_all_agree += norris_outputs[selection] * crowdsourced_outputs[selection]
n_either_agree += norris_outputs[selection] or crowdsourced_outputs[selection]
n_total += 1
# Compute the uncertainties of the pool.
pool_indices = training_h5['is_atlas_train'].value
numeric_subjects = crowdastro_h5['/atlas/cdfs/numeric'][pool_indices, :]
uncertainties = []
for subject in numeric_subjects:
swire = subject[2 + IMAGE_SIZE:]
nearby = swire < ARCMIN
astro_inputs = numpy.minimum(training_h5['features'][nearby, :n_static],
1500)
image_inputs = training_h5['features'][nearby, n_static:]
features = []
features.append(astro_transformer.transform(astro_inputs))
features.append(image_transformer.transform(image_inputs))
inputs = numpy.hstack(features)
probs = softmax(classifier.predict_proba(inputs)[:, 1])
entropy = -numpy.sum(numpy.log(probs) * probs)
uncertainties.append(entropy)
return (n_norris_agree / n_total, n_crowdsourced_agree / n_total,
n_all_agree / n_total, n_either_agree / n_total, uncertainties, n_ir)
# Randomly hide 90% of labels.
with h5py.File(TRAINING_H5_PATH, 'r') as training_h5:
atlas_train_indices = training_h5['is_atlas_train'].value
initial_hidden_atlas_training_indices = numpy.arange(atlas_train_indices.sum())
numpy.random.shuffle(initial_hidden_atlas_training_indices)
initial_hidden_atlas_training_indices = initial_hidden_atlas_training_indices[
:9 * len(initial_hidden_atlas_training_indices) // 10]
initial_hidden_atlas_training_indices.sort()
# Testing random label selection.
norris_accuracies_random = []
rgz_accuracies_random = []
all_accuracies_random = []
any_accuracies_random = []
n_ir_random = []
n_batch = 100
n_epochs = 25
numpy.random.seed(0)
hidden_atlas_training_indices = initial_hidden_atlas_training_indices[:]
for epoch in range(n_epochs):
print('Epoch {}/{}'.format(epoch + 1, n_epochs))
# Train, test, and generate uncertainties.
results = train_and_test(hidden_atlas_training_indices)
norris_accuracies_random.append(results[0])
rgz_accuracies_random.append(results[1])
all_accuracies_random.append(results[2])
any_accuracies_random.append(results[3])
n_ir_random.append(results[5])
# Choose n_batch new labels at random.
if len(hidden_atlas_training_indices) < n_batch:
break
else:
numpy.random.shuffle(hidden_atlas_training_indices)
hidden_atlas_training_indices = hidden_atlas_training_indices[:-n_batch]
hidden_atlas_training_indices.sort()
# Testing uncertainty sampling label selection.
norris_accuracies_uncsample = []
rgz_accuracies_uncsample = []
all_accuracies_uncsample = []
any_accuracies_uncsample = []
n_ir_uncsample = []
hidden_atlas_training_indices = initial_hidden_atlas_training_indices[:]
for epoch in range(n_epochs):
print('Epoch {}/{}'.format(epoch + 1, n_epochs))
# Train, test, and generate uncertainties.
results = train_and_test(hidden_atlas_training_indices)
uncertainties = results[4]
norris_accuracies_uncsample.append(results[0])
rgz_accuracies_uncsample.append(results[1])
all_accuracies_uncsample.append(results[2])
any_accuracies_uncsample.append(results[3])
n_ir_uncsample.append(results[5])
# Choose the n_batch most uncertain objects to label.
if len(hidden_atlas_training_indices) < n_batch:
break
else:
hidden_atlas_training_indices = numpy.array(
sorted(hidden_atlas_training_indices, key=lambda z: uncertainties[z]))[:-n_batch]
hidden_atlas_training_indices.sort()
plt.figure(figsize=(15, 10))
plt.subplot(2, 2, 1)
plt.plot(all_accuracies_random, c='pink')
plt.plot(any_accuracies_random, c='darkred')
plt.plot(all_accuracies_uncsample, c='lightgreen')
plt.plot(any_accuracies_uncsample, c='darkgreen')
plt.xlabel('{}-batch epochs'.format(n_batch))
plt.ylabel('Classification accuracy')
plt.legend(['Norris & RGZ (passive)', 'Norris | RGZ (passive)',
'Norris & RGZ (unc)', 'Norris | RGZ (unc)'], loc='lower right')
plt.subplot(2, 2, 2)
plt.plot(norris_accuracies_random, c='red')
plt.plot(norris_accuracies_uncsample, c='green')
plt.legend(['Norris (passive)', 'Norris (unc)'], loc='lower right')
plt.xlabel('{}-batch epochs'.format(n_batch))
plt.ylabel('Classification accuracy')
plt.subplot(2, 2, 3)
plt.plot(rgz_accuracies_random, c='red')
plt.plot(rgz_accuracies_uncsample, c='green')
plt.legend(['RGZ (passive)', 'RGZ (unc)'], loc='lower right')
plt.xlabel('{}-batch epochs'.format(n_batch))
plt.ylabel('Classification accuracy')
plt.subplot(2, 2, 4)
plt.plot(numpy.array(n_ir_random) - numpy.array(n_ir_uncsample))
plt.xlabel('{}-batch epochs'.format(n_batch))
plt.ylabel('Difference in number of IR examples')
plt.show()
```
Conclusion: Uncertainty sampling with entropy doesn't work very well.
# Assignment 1: Numpy RNN
Implement a RNN and run BPTT
```
from typing import Dict, Tuple
import numpy as np
class RNN(object):
"""Numpy implementation of sequence-to-one recurrent neural network for regression tasks."""
def __init__(self, input_size: int, hidden_size: int, output_size: int):
"""Initialization
Parameters
----------
input_size : int
Number of input features per time step
hidden_size : int
Number of hidden units in the RNN
output_size : int
Number of output units.
"""
super(RNN, self).__init__()
self.input_size = input_size # D in literature
self.hidden_size = hidden_size # I in literature
self.output_size = output_size # K in literature
# create and initialize weights of the network
# as 90% of the usages in the scriptum are W.T, R.T, V.T
init = lambda shape: np.random.uniform(-0.2, 0.2, shape)
self.W = init((hidden_size, input_size)) # I X D
self.R = init((hidden_size, hidden_size)) # I x I
self.bs = np.zeros((hidden_size))
self.V = init((output_size, hidden_size)) # K x I
self.by = np.zeros((output_size))
# place holder to store intermediates for backprop
self.a = None
self.y_hat = None
self.grads = None
self.x = None
def forward(self, x: np.ndarray) -> np.ndarray:
"""Forward pass through the RNN.
Parameters
----------
x : np.ndarray
Input sequence(s) of shape [sequence length, number of features]
Returns
-------
NumPy array containing the network prediction for the input sample.
"""
self.x = x
# as we have no activation function (f(t) is linear)
# a(t) = f(s(t)) = s(t) = W^T . x(t) + R^T . a(t-1) + bs
# = tanh( W^T . x(t) + R^T . a(t-1) + bs )
self.a = np.zeros((len(x), self.hidden_size)) # one entry per time step; makes accessing t = -1 possible (still zero at t = 0)
for t in range(len(x)):
self.a[t] = np.tanh(self.W @ x[t] + self.R @ self.a[t-1] + self.bs)
self.y_hat = self.V @ self.a[t] + self.by
return self.y_hat # sequence-to-1 model, so we only return the last
def forward_fast(self, x: np.ndarray) -> np.ndarray:
""" optimized method without saving to self.a """
a = np.tanh(self.W @ x[0] + self.bs)
for t in range(1, len(x)):
a = np.tanh(self.W @ x[t] + self.R @ a + self.bs)
return self.V @ a + self.by
def backward(self, d_loss: np.ndarray) -> Dict:
"""Calculate the backward pass through the RNN.
Parameters
----------
d_loss : np.ndarray
The gradient of the loss w.r.t the network output in the shape [output_size,]
Returns
-------
Dictionary containing the gradients for each network weight as key-value pair.
"""
# create view, so that we don't have to reshape every time we call it
a = self.a.reshape(self.a.shape[0], 1, self.a.shape[1])
x = self.x.reshape(self.x.shape[0], 1, self.x.shape[1])
# needs to be calculated only once
d_V = d_loss @ a[-1]
d_by = d_loss
# init with 0 and sum it up
d_W = np.zeros_like(self.W)
d_R = np.zeros_like(self.R)
d_bs = np.zeros_like(self.bs)
# instead of using * diag, we use elementwise multiplication
delta = d_loss.T @ self.V * (1 - a[-1] ** 2)
for t in reversed(range(len(self.x))):
d_bs += delta.reshape(self.bs.shape)
d_W += delta.T @ x[t]
if t > 0:
d_R += delta.T @ a[t-1]
# a[t] = tanh(..) -> derivation = 1-tanh² -> reuse already calculated tanh
# calculate delta for the next step at t-1
delta = delta @ self.R * (1 - a[t-1] ** 2)
self.grads = {'W': d_W, 'R': d_R, 'V': d_V, 'bs': d_bs, 'by': d_by}
return self.grads
def update(self, lr: float):
# update weights, aggregation is already done in backward
w = self.get_weights()
for name in w.keys():
w[name] -= lr * self.grads[name]
# reset internal class attributes
self.grads = {}
self.y_hat, self.a = None, None
def get_weights(self) -> Dict:
return {'W': self.W, 'R': self.R, 'V': self.V, 'bs': self.bs, 'by': self.by}
def set_weights(self, weights: Dict):
if not all(name in weights.keys() for name in ['W', 'R', 'V']):
raise ValueError("Missing one of 'W', 'R', 'V' keys in the weight dictionary")
for name, w in weights.items():
setattr(self, name, w)
```
<h2 style="color:rgb(0,120,170)">Numerical gradient check</h2>
To validate your implementation, especially the backward pass, use the two-sided gradient approximation given by the equation below:

$$\frac{\partial L}{\partial w_{i}} \approx \frac{L(w_{i} + \varepsilon) - L(w_{i} - \varepsilon)}{2\varepsilon}$$
```
def get_numerical_gradient(model: RNN, x: np.ndarray, eps: float=1e-7) -> Dict:
"""Implementation of the two-sided numerical gradient approximation
Parameters
----------
model : RNN
The RNN model object
x : np.ndarray
Input sequence(s) of shape [sequence length, number of features]
eps : float
The epsilon used for numerical gradient approximation
Returns
-------
A dictionary containing the numerical gradients for each weight of the RNN. Make sure
to name the dictionary keys like the names of the RNN gradients dictionary (e.g.
'd_R' for the weight 'R')
"""
g = {}
# iterate all weight-matrices w and all positions i, and calculate the num. grad.
for name, w in model.get_weights().items():
# initialize weight gradients with zero
wg = np.zeros_like(w)
# this makes a backup copy of original weights
for i, orig in np.ndenumerate(w): # can be 1d or 2d
# calculate for +eps
w[i] += eps
plus = model.forward_fast(x)
# calculate for -eps
w[i] = orig - eps
minus = model.forward_fast(x)
w[i] = orig # reset
# set weight gradient for this weight and this index
wg[i] = np.sum(plus - minus) / (2*eps)
# add calculated weights into return-weights
g[name] = wg
return g
def get_analytical_gradient(model: RNN, x: np.ndarray) -> Dict:
"""Helper function to get the analytical gradient.
Parameters
----------
model : RNN
The RNN model object
x : np.ndarray
Input sequence(s) of shape [sequence length, number of features]
Returns
-------
A dictionary containing the analytical gradients for each weight of the RNN.
"""
loss = model.forward(x)
return model.backward(np.ones((model.output_size, 1)))
def gradient_check(model: RNN, x: np.ndarray, threshold: float = 1e-7):
"""Perform gradient checking.
You don't have to do anything in this function.
Parameters
----------
model : RNN
The RNN model object
x : np.ndarray
Input sequence(s) of shape [sequence length, number of features]
threshold : float
The maximum difference between the numerical and analytical gradient for the check to pass
"""
numerical_grads = get_numerical_gradient(model, x)
analytical_grads = get_analytical_gradient(model, x)
for key, num_grad in numerical_grads.items():
difference = np.linalg.norm(num_grad - analytical_grads[key])
# assert num_grad.shape == analytical_grads[key].shape
if difference < threshold:
print(f"Gradient check for {key} passed (difference {difference:.3e})")
else:
print(f"Gradient check for {key} failed (difference {difference:.3e})")
```
<h2 style="color:rgb(0,120,170)">Compare the time for gradient computation</h2>
Finally, use the code below to investigate the benefit of being able to calculate the exact analytical gradient.
```
print("Gradient check with a single output neuron:")
model = RNN(input_size=5, hidden_size=10, output_size=1)
x = np.random.rand(5, 5)
gradient_check(model, x)
print("\nGradient check with multiple output neurons:")
model = RNN(input_size=5, hidden_size=10, output_size=5)
x = np.random.rand(5, 5)
gradient_check(model, x)
analytical_time = %timeit -o get_analytical_gradient(model, x)
numerical_time = %timeit -o get_numerical_gradient(model, x)
if analytical_time.average < numerical_time.average:
fraction = numerical_time.average / analytical_time.average
print(f"The analytical gradient computation was {fraction:.0f} times faster")
else:
fraction = analytical_time.average / numerical_time.average
print(f"The numerical gradient computation was {fraction:.0f} times faster")
```
<a href="https://colab.research.google.com/github/marixko/Supervised_Learning_Tutorial/blob/master/The_Basics_of_Supervised_Learning_For_Astronomers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
###**About Google's Colaboratory:**
This is a free Jupyter environment that runs in Google's cloud, which means you can run code without having to install anything on your computer. You can create a copy of this tutorial in your own Google Drive and make your own changes. Colaboratory also allows you to easily share your code with others! [Read more](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)
---
# Introduction
> **Author**: Lilianne M. I. Nakazono (email: [email protected])
> PhD student at Instituto de Astronomia, Geofísica e Ciências Atmosféricas -- Universidade de São Paulo (IAG-USP). Bachelor's degree in Statistics (IME-USP) and in Astronomy (IAG-USP).
> **April 2019**
---
###**What is Machine Learning?**
From SAS:
>> *"Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention."*
###**What is Supervised Learning?**
From S.B. Kotsiantis (2007):
>> *"Every instance in any dataset used by machine learning algorithms is represented using the same set of features. The features may be continuous, categorical or binary. If instances are given with known labels (the corresponding correct outputs) then the learning is called *supervised*, in contrast to *unsupervised learning*, where instances are unlabeled."*
---
###**STAR/GALAXY separation**
In this tutorial we will perform a STAR/GALAXY separation using a real dataset from [S-PLUS](http://www.splus.iag.usp.br/). These data have already been cross-matched with [SDSS](https://www.sdss.org/) (DR15) spectroscopic data and will be used to train and test the supervised classifiers. The final step (not included in this tutorial) is to use the trained model to predict the classification of your unknown objects.
This tutorial will be entirely in Python 3 and we will go through the following topics:
- Introduction to `Pandas` ([Documentation](https://pandas.pydata.org/))
- Data visualization with `seaborn` ([Documentation](https://seaborn.pydata.org/))
- Classification methods with `sklearn` ([Documentation](https://scikit-learn.org/stable/index.html))
---
**Additional information about the data**
ID - Object ID Number
RA - Right Ascension in decimal degrees [J2000]
Dec - Declination in decimal degrees [J2000]
FWHM_n - Normalized Full width at half maximum to detection image seeing (pixels)
A - Profile RMS along major axis (pixels)
B - Profile RMS along minor axis (pixels)
KrRadDet - Kron apertures in units of A or B (pixels)
uJAVA_auto, F378_auto, F395_auto, F410_auto, F430_auto, g_auto, F515_auto, r_auto, F660_auto, i_auto, F861_auto, z_auto - Total-restricted magnitudes (AB) in corresponding filters
class - Spectroscopic classification from SDSS
#**1. Libraries and Functions**
```
import seaborn as sns
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
import itertools
from mlxtend.plotting import plot_decision_regions
import matplotlib as mpl
import matplotlib.gridspec as gridspec
from sklearn import metrics
pd.set_option("display.max_rows", None)
pd.set_option("display.max_columns", None)
# Modified from: https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.3f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
```
#**2. Read Data**
For statistical/machine learning purposes it is **always** better to read the data in a dataframe (data structured in labels for rows and columns) format.
```
#Reading dataset from github and saving as dataframe
url = 'https://raw.githubusercontent.com/marixko/'
file = 'tutorial_classifiers/master/tutorial_data.txt'
df = pd.read_csv(url+file, delim_whitespace=True, low_memory=False)
# Run this cell to quickly check your dataset
df
# Check header
list(df)
```
#**3. Pre-analysis**
Before applying any kind of analysis, you need to be aware of any problem in your dataset that can affect your training (e.g. missing values and outliers). Sometimes it will require pre-processing your dataset beforehand (e.g. for missing values, interpolating values or removing them from data may be necessary).
```
# You can check your dataset by using describe().
# It will return the total count, mean, standard deviation,
# minimum, Q1, Q2 (median), Q3 and maximum
df.describe()
# If you want to check a specific feature use for instance:
# df.FWHM_n.describe()
```
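For the missing-value checks mentioned above, pandas provides simple helpers; a minimal sketch (dropping rows is just one of several possible strategies):
```
# Count missing values per column
print(df.isna().sum())

# One option, if any were found: drop the affected rows
df_clean = df.dropna()
print(df_clean.shape)
```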
Another good practice is to check for high correlations in your dataset, which allows you to identify redundant features and thus reduce the dimensionality of your dataset.
>> *"The fact that many features depend on one another often unduly influences the accuracy of supervised ML classification models. This problem can be addressed by construction new features from the basic feature set."* -- S.B. Kotsiantis (2007)
(One way to deal with multicollinearity -- when 2 or more features are moderately or highly correlated -- is creating a new feature set using [Principal Component Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis).)
```
plt.close()
f, ax = plt.subplots(figsize=(8, 8))
var = ['FWHM_n', 'A', 'B', 'KrRadDet', 'uJAVA_auto',
'F378_auto', 'F395_auto', 'F410_auto', 'g_auto', 'F515_auto',
'r_auto', 'F660_auto', 'i_auto', 'F861_auto', 'z_auto']
corr = df[var].corr()
sns.heatmap(corr, mask=np.zeros_like(corr, dtype=bool),
cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax, center=0, vmin=-1, vmax=1)
plt.title('Correlation Matrix')
plt.show()
#It would also be interesting to check the correlation plot for each class
```
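If the correlation matrix reveals strongly correlated magnitudes, a PCA-based feature set can be constructed as mentioned above. The sketch below uses scikit-learn on the `var` columns defined in the previous cell; the number of components is illustrative, and in practice the features should usually be standardized first.
```
from sklearn.decomposition import PCA

# Project the correlated features onto their first principal components
pca = PCA(n_components=5)
pca_features = pca.fit_transform(df[var])
print(pca.explained_variance_ratio_)
```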
Qualitative variables can also be included. In this case, however, there are no qualitative features that came from S-PLUS observations.
But let's check the classification label counts:
```
# For qualitative variables, use value_counts()
df['class'].value_counts()
```
Note that for this example the classes are balanced. It represents a best case scenario, which rarely happens in the real world.
Be very careful with imbalanced datasets! Some methods and metrics are not well suited to imbalanced cases, so some manipulation of your sampling method (e.g. over/under-sampling) or of your algorithm (e.g. penalized classification, sketched below) may be necessary.
> **Note:** Supervised Learning is not suitable for problems like "I want to find very rare objects that we have never found before!". The learning process is based on your ground-truth samples, so you need to ask yourself "Is my ground-truth sample representative of what I want to find?"
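For reference, one simple form of penalized classification in scikit-learn is the `class_weight` parameter; a minimal sketch (not needed here, since this dataset is balanced):
```
from sklearn.svm import SVC

# 'balanced' reweights each class inversely proportional to its frequency
clf_weighted = SVC(kernel='linear', class_weight='balanced')
```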
#**4. Feature Selection**
A very important step of the analysis is choosing your input features. Sometimes you already know which features you need to achieve your goals, based on your previous knowledge of the topic. However, you can also evaluate which features give you the best performance. We will discuss this further in the following sections.
For didactic purposes, let's consider two feature spaces:
> `dim15` = {all useful information from the catalog}
> `dim2` = {normalized FWHM, Profile RMS along major axis}
```
dim15 = ['FWHM_n', 'A', 'B', 'KrRadDet', 'uJAVA_auto',
'F378_auto', 'F395_auto', 'F410_auto', 'g_auto', 'F515_auto',
'r_auto', 'F660_auto', 'i_auto', 'F861_auto', 'z_auto']
dim2 = ['FWHM_n','A']
```
#**5. Sampling training and testing sets**
Regardless of the classification method you choose, you will want to estimate how accurately your predictive model will perform. This is called **cross-validation** and there are several ways to do it. Some examples are:
* **Holdout method**: randomly separate your original dataset into a training and a testing set. It is very common to adopt a 1:3 ratio for the sizes of the test and training sets, although you can choose another ratio. It is very simple and computationally fast, but you need to be cautious, as it is a single-run method and may therefore be subject to large variability
* **Leave-p-out cross-validation**:
Uses p observations as the testing set and the remaining observations as the training set. Repeat to cover any sampling possibility
* **k-fold cross-validation**: the original dataset is randomly partitioned into k equal sized subsamples. One subsample will be used as testing set and the other k-1 as training set. Repeat k times, until each subsample is used exactly once as the testing set.
I strongly recommend that you also check the other methods before choosing one. For this tutorial we will use the **Holdout method**, for simplicity (a k-fold sketch is shown after the split below, for comparison).
```
label = pd.DataFrame(df['class'])
# Transform strings into numbered labels
label.loc[label['class'] == 'STAR', 'class'] = 0
label.loc[label['class'] == 'GALAXY', 'class'] = 1
# Use train_test_split() to sample your training and testing sets
# Let's fix a random_state=42 in order to have the same sets
# on each run. Stratify parameter guarantees that the original
# proportion of the classes is maintained
X_train, X_test, y_train, y_test = train_test_split(df[dim15], label,
test_size=0.3,
random_state=42,
stratify = label)
```
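For comparison with the holdout split above, a k-fold variant could look like the sketch below, which uses scikit-learn's `StratifiedKFold` on the same `df` and `label` objects (it is not used in the rest of this tutorial):
```
from sklearn.model_selection import StratifiedKFold

# Illustrative 5-fold split; each object is used for testing exactly once
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(df[dim15], label['class'].astype(int))):
    print("Fold %d: %d training / %d testing objects" % (fold, len(train_idx), len(test_idx)))
```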
#**6. Classification method: Support Vector Machine (SVM)**
We finally reached the point where we are going to run a classification algorithm. It is common to think, at first, that this would be the most complicated part, but a well-done job will require you to spend most of your time on the other steps.
There are several classification methods you can use, each of which has its own pros and cons, depending on your science goals and on your dataset. I will give you an example using Support Vector Machine (SVM) with a linear kernel, but I recommend you to also check other methods (e.g. Random Forest, Logistic Regression, K-NN, ...)
**DON'T FORGET TO:**
- Learn the basic idea of the method. You don't need to know all the math behind it, but you need to know how it works intuitively
- Check what are the assumptions of the method and if your dataset is in agreement with it
- Learn what the parameters of your model (a.k.a. hyperparameters) do. Choosing them wisely can be crucial for good results in the end. Note: the hyperparameter space can also be part of your validation tests (see the sketch after this list)
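A common way to explore the hyperparameter space as part of validation is a grid search with cross-validation. The sketch below is illustrative only: it assumes the `X_train`/`y_train` split from Section 5 and an arbitrary grid over the SVC regularization parameter `C`.
```
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative grid over C, evaluated with 3-fold cross-validation
param_grid = {'C': [0.1, 1, 10]}
search = GridSearchCV(SVC(kernel='linear'), param_grid, cv=3)
search.fit(X_train[dim2], y_train.values.ravel())
print(search.best_params_)
```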
## 6.1. Basic idea
The SVM finds the hyperplane that best separates your data, based on maximizing the margin between each class. For instance, in one dimension SVM will find a point. For two dimensions, it will be a line. For three dimensions, it will be a plane.
To use a linear kernel, we assume that the data is linearly separable. Otherwise, we should use another kernel (e.g. polynomial).
Read more about SVM [here](https://scikit-learn.org/stable/modules/svm.html#scores-probabilities)
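For reference, switching kernels in scikit-learn only requires changing the `kernel` argument; the lines below are illustrative and not used in this tutorial.
```
from sklearn.svm import SVC

# Alternatives for data that is not linearly separable
clf_poly = SVC(kernel='poly', degree=3)
clf_rbf = SVC(kernel='rbf', gamma='scale')
```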
## 6.2. Feature space: dim2
```
# Train your model:
clf2 = SVC(kernel= 'linear')
clf2.fit(X_train[dim2], y_train.values.ravel())
# Make the predictions:
y_pred2 = clf2.predict(X_test[dim2])
# Plot confusion matrix:
matrix = confusion_matrix(y_test['class'], y_pred2)
fig = plot_confusion_matrix(matrix, classes=['STAR','GALAXY'])
plt.show()
```
From the confusion matrix above we can already see how good the results are: most of our stars (galaxies) are being assigned as stars (galaxies) and just a few percent were misclassified.
Now let's check the plot and how the separation looks like:
```
plt.style.use('seaborn-pastel')
fig = plt.figure(figsize=(18,6))
gs = gridspec.GridSpec(1, 2)
ax = plt.subplot(gs[0,0])
sns.scatterplot(x=X_train.FWHM_n, y=X_train.A,
hue=y_train['class'])
#Calculate margin (from https://scikit-learn.org/stable/auto_examples/svm/plot_svm_margin.html)
w = clf2.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf2.intercept_[0]) / w[1]
margin = 1 / np.sqrt(np.sum(clf2.coef_ ** 2))
yy_down = yy - np.sqrt(1 + a ** 2) * margin
yy_up = yy + np.sqrt(1 + a ** 2) * margin
#Plot margin
plt.plot(xx, yy, 'k-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
plt.xlabel('FWHM_n')
plt.ylabel('A')
plt.xlim(0,8)
plt.ylim(0.8, 10)
plt.title('Training set')
ax = plt.subplot(gs[0,1])
sns.scatterplot(x=X_test.FWHM_n , y=X_test.A, hue=y_test['class'])
plt.plot(xx, yy, 'k-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
plt.xlim(0,8)
plt.ylim(0.8, 10)
plt.title('Testing set')
plt.show()
```
The solid line corresponds to the optimal threshold found by SVM. The dashed lines in the plots above correspond to the maximized margin that I mentioned in Section 6.1.
These are calculated using only a small part of the data: the objects around where the separation may occur, which are called the Support Vectors. Let's check which ones were considered for this classification:
```
fig = plt.figure(figsize=(9,7))
sns.scatterplot(x=X_train[dim2].FWHM_n, y=X_train[dim2].A,
hue=y_train['class'])
plt.scatter(clf2.support_vectors_[:, 0],
clf2.support_vectors_[:, 1], s=8,
zorder=10,color='red', marker='+')
plt.xlim(0.9,2)
plt.ylim(0.8,5)
plt.plot(xx, yy, 'k-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
plt.title('Support vectors (Training set)')
```
## 6.3. Feature space: dim15
In the last section we saw how SVM works in a 2D space, where it is possible to visually check the separation. However, we have much more information available, and analysing all of it together can improve our results. Since it is impossible to visually check the results in 15 dimensions, we need to rely on the performance metrics discussed in the next section.
```
# Train your model:
clf15 = SVC(kernel= 'linear')
clf15.fit(X_train, y_train.values.ravel())
# Make predictions:
y_pred = clf15.predict(X_test)
# Plot confusion matrix:
matrix = confusion_matrix(y_test['class'], y_pred)
fig = plot_confusion_matrix(matrix, classes=['STAR','GALAXY'])
plt.show()
# Yeah, as simple as that! :)
```
#**7. Validation and Model Selection**
How can we choose between two (or more) different models?
For that, we have several performance metrics that we can consider when selecting the best model and I will show a few of them.
The way you are going to analyze the metrics depends on your science goals. For instance:
* In a STAR/GALAXY separation you are probably not interested in a specific class, but in the overall classification. You can evaluate your model using, for example, Accuracy or F-measure
* Suppose you had a STAR/QSO problem instead, where your main goal is to find new QSOs. You can evaluate your model using, for example, Precision, Recall or F-measure.
## 7.1 Accuracy
Defined as the fraction of correct predictions.
(Note: accuracy is biased towards the class with higher frequency, so don't rely on this measurement if you have an imbalanced dataset.)
```
print("Accuracy")
print(" First model (dim2):",
np.round(100*metrics.accuracy_score(y_test, y_pred2),2), '%')
print(" Second model (dim15):",
np.round(100*metrics.accuracy_score(y_test, y_pred),2), '%')
```
## 7.2. Precision
Defined as:
> Precision $\equiv \frac{TP}{(TP+FP)}$
TP - True Positive ; FP - False Positive
Note that you need to define which class will be your "positive". For example:
| | STAR (predicted) | GALAXY (predicted) |
| --- | --- | --- |
| **STAR** (true label) | True Negative | False Positive |
| **GALAXY** (true label) | False Negative | True Positive |
In Astronomy, it's called **purity**.
```
P2 = metrics.precision_score(y_test, y_pred2, pos_label=1)
P = metrics.precision_score(y_test, y_pred, pos_label=1)
print("Galaxy Precision")
print(" First model (dim2):", np.round(100*P2,2), '%')
print(" Second model (dim15):", np.round(100*P,2), '%')
# Exercise: Calculate star precision for each model
```
## 7.3. Recall
Defined as:
> Recall $\equiv \frac{TP}{(TP+FN)}$
TP - True Positive ; FN - False Negative
In Astronomy, it's called **completeness**.
```
R2 = metrics.recall_score(y_test, y_pred2, pos_label=1)
R = metrics.recall_score(y_test, y_pred, pos_label=1)
print("Galaxy Recall")
print(" First model (dim2):", np.round(100*R2,2), '%')
print(" Second model (dim15):", np.round(100*R,2), '%')
# Exercise: Calculate star recall for each model
```
## 7.4. F-measure
It's the harmonic mean of Precision and Recall:
$F = \Big(\frac{P_i^{-1}+R_i^{-1}}{2}\Big)^{-1} = 2 \times \frac{P_iR_i}{P_i+R_i}, \quad F \in [0,1]$
```
print("F-measure")
print(" First model (dim2):", np.round(metrics.f1_score(y_test, y_pred2),3))
print(" Second model (dim15):", np.round(metrics.f1_score(y_test, y_pred),3))
```
## Final message
We came to the end of this tutorial, yay! :)
Although it is called "Machine Learning", you are still the one who is going to make crucial decisions. And that is hard work! I hope I was able to give you at least a brief idea of all the steps involved in the process.
Now, play around with the code:
* Try other algorithms with the same feature selection and compare your results using the performance metrics
* Test changing the parameters of your model
* Try it with your own dataset!
## Read more:
[Supervised Machine Learning: A Review of Classification Techniques](https://books.google.com/books?hl=en&lr=&id=vLiTXDHr_sYC&oi=fnd&pg=PA3&dq=review+supervised+learning&ots=CYpwxt2Bnn&sig=Y79PK3w3Q8CefKaTh03keRFEwyg#v=onepage&q=review%20supervised%20learning&f=false) (S.B. Kotsiantis, 2007)
An Empirical Comparison of Supervised Learning Algorithms (Rich Caruana and Alexandru Niculescu-Mizil, 2006)
Classification of Imbalanced Data: a Review (Yanmin Sun, Andrew K. C. Wong and Mohamed S. Kamel, 2009)
[Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))
[A Practical Guide to Support Vector Classification](https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf) (Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin, 2016)
<a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/smc_logreg_tempering.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#SMC for logistic regression
We compare data tempering (IBIS) with temperature tempering.
Code is from
https://github.com/nchopin/particles/blob/master/book/smc_samplers/logistic_reg.py
```
!git clone https://github.com/nchopin/particles.git
%cd /content/particles
!pip install --user .
import particles
import particles.state_space_models as ssm
import particles.distributions as dists
"""
Numerical experiment of Chapter 17 (SMC samplers).
Compare IBIS and SMC tempering for approximating:
* the normalising constant (marginal likelihood)
* the posterior expectation of the p coefficients
for a logistic regression model.
See below for how to select the data-set.
Note: the SMC samplers implemented in module smc_samplers are now "waste-free"
by default, see Dau & Chopin (2021), and the documentation of `smc_samplers`
(plus the corresponding jupyter notebook). This script still performs exactly
the same numerical experiments as in the book, based on standard (non
waste-free) SMC samplers. To do so, we added ``wastefree=False`` to the
definition of the corresponding `Feynman-Kac` object. Again, see the
documentation of `smc_samplers` for more details.
"""
from matplotlib import pyplot as plt
import numpy as np
from numpy import random
import seaborn as sb
import particles
from particles import datasets as dts
from particles import distributions as dists
from particles import resampling as rs
from particles import smc_samplers as ssps
from particles.collectors import Moments
datasets = {'pima': dts.Pima, 'eeg': dts.Eeg, 'sonar': dts.Sonar}
dataset_name = 'eeg' # choose one of the three
data = datasets[dataset_name]().data
T, p = data.shape
# for each dataset, we adapt:
# * N: number of particles
# * Ks = list of Ks (nr MCMC steps)
# * typK: value of M used for plots on "typical" run
if dataset_name == 'sonar':
N = 10 ** 4
Ks = [10, 20, 30, 40, 50, 60]
typK = 50
elif dataset_name == 'pima':
N = 10 ** 3
Ks = [1, 3, 5]
typK = 3
elif dataset_name == 'eeg':
N = 10 ** 3
#Ks = [1, 3, 5, 7, 10, 15, 20]
Ks = [1, 3, 5]
typK = 5
# prior & model
prior = dists.StructDist({'beta':dists.MvNormal(scale=5.,
cov=np.eye(p))})
class LogisticRegression(ssps.StaticModel):
def logpyt(self, theta, t):
# log-likelihood factor t, for given theta
lin = np.matmul(theta['beta'], data[t, :])
return - np.logaddexp(0., -lin)
# algorithms
# N and values of K set above according to dataset
ESSrmin = 0.5
nruns = 2 # 16
results = []
# runs
print('Dataset: %s' % dataset_name)
for K in Ks:
for i in range(nruns):
# need to shuffle the data for IBIS
random.shuffle(data)
model = LogisticRegression(data=data, prior=prior)
for alg_type in ['tempering', 'ibis']:
if alg_type=='ibis':
fk = ssps.IBIS(model=model, wastefree=False, len_chain=K + 1)
pf = particles.SMC(N=N, fk=fk, ESSrmin=ESSrmin,
collect=[Moments], verbose=False)
else:
fk = ssps.AdaptiveTempering(model=model, ESSrmin=ESSrmin,
wastefree=False, len_chain = K + 1)
pf = particles.SMC(N=N, fk=fk, ESSrmin=1., collect=[Moments],
verbose=True)
# must resample at every time step when doing adaptive
# tempering
print('%s, K=%i, run %i' % (alg_type, K, i))
pf.run()
print('CPU time (min): %.2f' % (pf.cpu_time / 60))
print('loglik: %f' % pf.logLt)
res = {'K': K, 'type': alg_type, 'out': pf.summaries,
'cpu': pf.cpu_time}
if alg_type=='ibis':
n_eval = N * (T + K * sum([t for t in range(T) if
pf.summaries.rs_flags[t]]))
else:
n_eval = N * T * (1. + K * (len(pf.summaries.ESSs) - 1))
res['path_sampling'] = pf.X.shared['path_sampling'][-1]
res['exponents'] = pf.X.shared['exponents']
res['n_eval'] = n_eval
results.append(res)
# plots
#######
savefigs = True # do you want to save figures as pdfs
plt.style.use('ggplot')
pal = sb.dark_palette('white', n_colors=2)
# Compare standard and path sampling estimates of the log-normalising cst
plt.figure()
diff_est = [(r['out'].logLts[-1] - r['path_sampling'])
for r in results if r['type']=='tempering']
sb.histplot(diff_est)
# Figure 17.1: typical behaviour of IBIS
typ_ibis = [r for r in results if r['type']=='ibis' and r['K'] == typK][0]
typ_ess = typ_ibis['out'].ESSs
typ_rs_times = np.nonzero(typ_ibis['out'].rs_flags)[0]
# Left panel: evolution of ESS
fig, ax = plt.subplots()
ax.plot(typ_ess, 'k')
ax.set(xlabel=r'$t$', ylabel='ESS')
if savefigs:
plt.savefig(dataset_name + '_typical_ibis_ess.pdf')
plt.savefig(dataset_name + '_typical_ibis_ess.png')
# Right panel: evolution of resampling times
fig, ax = plt.subplots()
ax.plot(typ_rs_times[:-1], np.diff(typ_rs_times), 'ko-')
ax.set(xlabel=r'$t$', ylabel='duration between successive rs')
if savefigs:
plt.savefig(dataset_name + '_typical_ibis_rs_times.pdf')
plt.savefig(dataset_name + '_typical_ibis_rs_times.png')
# Figure 17.2: evolution of temperature in a typical tempering run
typ_temp = [r for r in results if r['type']=='tempering' and r['K'] == typK][0]
expnts = typ_temp['exponents']
plt.figure()
plt.plot(expnts, 'k')
plt.xlabel(r'$t$')
plt.ylabel('tempering exponent')
if savefigs:
plt.savefig(dataset_name + '_typical_tempering_temperatures.pdf')
plt.savefig(dataset_name + '_typical_tempering_temperatures.png')
# nr evals vs K for both algorithms
plt.figure()
sb.boxplot(x=[r['K'] for r in results],
y=[r['n_eval'] for r in results],
hue=[r['type'] for r in results])
plt.xlabel('number MCMC steps')
plt.ylabel('number likelihood evaluations')
if savefigs:
plt.savefig(dataset_name + '_boxplots_nevals_vs_K.pdf')
plt.savefig(dataset_name + '_boxplots_nevals_vs_K.png')
print(type(results))
print(results[0])
for r in results:
print(r['type'], 'K=', r['K'], 'time=', r['cpu'])
# Figure 17.3: Box-plots estimate versus number of MCMC steps
# Left panel: marginal likelihood
plt.figure()
sb.boxplot(x=[r['K'] for r in results],
y=[r['out'].logLts[-1] for r in results],
hue=[r['type'] for r in results])
plt.xlabel('number MCMC steps')
plt.ylabel('marginal likelihood')
if savefigs:
plt.savefig(dataset_name + '_boxplots_marglik_vs_K.pdf')
plt.savefig(dataset_name + '_boxplots_marglik_vs_K.png')
# Right panel: post expectation 1st pred
plt.figure()
sb.boxplot(x=[r['K'] for r in results],
y=[r['out'].moments[-1]['mean']['beta'][1] for r in results],
hue=[r['type'] for r in results])
plt.xlabel('number MCMC steps')
plt.ylabel('posterior expectation first predictor')
if savefigs:
plt.savefig(dataset_name + '_boxplots_postexp1_vs_K.pdf')
plt.savefig(dataset_name + '_boxplots_postexp1_vs_K.png')
# Figure 17.4: variance vs CPU trade-off
# variance times K, as a function of K
plt.figure()
#cols = {'ibis': 'gray', 'tempering':'black'}
cols = {'ibis': 'blue', 'tempering':'red'}
lsts = {'ibis': '--', 'tempering': '-'}
for i in range(p):
for alg_type in ['ibis', 'tempering']:
adj_var = []
for K in Ks:
mts = [r['out'].moments[-1]
for r in results if r['K']==K and r['type']==alg_type]
av = (K * np.var([m['mean']['beta'][i] for m in mts]) /
np.mean([m['var']['beta'][i] for m in mts]))
adj_var.append(av)
if i==0:
plt.plot(Ks, adj_var, color=cols[alg_type], label=alg_type,
alpha=.8, linewidth=2, linestyle=lsts[alg_type])
else:
plt.plot(Ks, adj_var, color=cols[alg_type], alpha=.8, linewidth=2,
linestyle=lsts[alg_type])
plt.legend()
plt.xticks(Ks, ['%i' % K for K in Ks]) # force int ticks
plt.xlabel('number MCMC steps')
plt.ylabel(r'variance times number MCMC steps')
if savefigs:
plt.savefig(dataset_name + '_postexp_var_vs_K.pdf')
plt.savefig(dataset_name + '_postexp_var_vs_K.png')
!ls *.png
!mkdir figures
!mv *.png figures
!mv *.pdf figures
!ls
!zip -r figures figures
```
* basic roberta ft: 0.6589791487657798 (thr 0.3)
* basic roberta ft (head first): 0.6768011808573329 (thr 0.42)
* fine tune roberta on weird clf, then only head on spans, then whole: 0.6853127403287083 (thr 0.32)
```
from transformers import RobertaTokenizer, RobertaForTokenClassification
from transformers import BertTokenizer, BertForTokenClassification
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
import numpy as np
import pandas as pd
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '4'
device = torch.device('cuda:0')
model_name = 'roberta-base' #roberta-base
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)  # needed by the smoke-test cell below
```
```
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
```
# Create labels for tagging
```
import os
import numpy as np
import pandas as pd
from ast import literal_eval
import re
import nltk
import matplotlib.pyplot as plt
from nltk.tokenize import word_tokenize
path = 'data/'
trial = pd.read_csv(path + 'tsd_trial.csv')
train = pd.read_csv(path + 'tsd_train.csv')
# final_test = pd.read_csv(path + 'tsd_test.csv')
final_test = pd.read_csv(path + 'tsd_test_gt.csv')
train['spans'] = train.spans.apply(literal_eval)
trial['spans'] = trial.spans.apply(literal_eval)
final_test['spans'] = final_test.spans.apply(literal_eval)
trial.shape, train.shape, final_test.shape
print(len(set(trial.text).intersection(set(train.text))))
print(len(set(final_test.text).intersection(set(train.text))))
print((train.spans.apply(len) == 0).mean())
print((trial.spans.apply(len) == 0).mean())
import spans_utils
from importlib import reload
reload(spans_utils)
from spans_utils import display_spans, spans2labels, labels2spans
display_spans(trial.spans[0], trial.text[0])
from tqdm.auto import tqdm, trange
n = 0
for row in tqdm([row for i, row in trial.iterrows()]):
break
labels = spans2labels(row.text, row.spans, tokenizer)
spans2 = labels2spans(row.text, labels, tokenizer)
if row.spans != spans2:
t = row.text.replace(' ', '+')
display_spans(row.spans, t)
display_spans(spans2, t)
n += 1
print(n)
train_labels = [spans2labels(row.text, row.spans, tokenizer) for i, row in tqdm(train.iterrows())]
trial_labels = [spans2labels(row.text, row.spans, tokenizer) for i, row in tqdm(trial.iterrows())]
train['labels'] = train_labels
trial['labels'] = trial_labels
class SpansDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels=None):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: val[idx] for key, val in self.encodings.items()}
if self.labels is not None:
item['labels'] = self.labels[idx]
return item
def __len__(self):
return len(self.encodings['input_ids'])
train_dataset = SpansDataset(tokenizer(train.text.tolist()), train_labels)
eval_dataset = SpansDataset(tokenizer(trial.text.tolist()), trial_labels)
final_test_dataset = SpansDataset(tokenizer(final_test.text.tolist()))
from transformers import DataCollatorForTokenClassification
data_collator = DataCollatorForTokenClassification(tokenizer, padding=True)
import numpy as np
from semeval2021 import f1
```
### Dataset for classification
```
import pandas as pd
df1 = pd.read_csv('../data/train/train.1.tsv', sep='\t')
df0 = pd.read_csv('../data/train/train_small.0.tsv', sep='\t')
df01 = pd.concat([df1, df0], ignore_index=True)
df01.label = df01.label.astype(int)
print(df01.shape)
df01.sample(3)
from sklearn.model_selection import train_test_split
df_train, df_test = train_test_split(df01, test_size=0.1, random_state=1)
df_train.head(10)
class SpansDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels=None):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: val[idx] for key, val in self.encodings.items()}
if self.labels is not None:
item['labels'] = self.labels[idx]
return item
def __len__(self):
return len(self.encodings['input_ids'])
clf_train_dataset = SpansDataset(
tokenizer(df_train.comment_text.tolist(), truncation=True),
df_train.label.tolist()
)
clf_test_dataset = SpansDataset(
tokenizer(df_test.comment_text.tolist(), truncation=True),
df_test.label.tolist()
)
clf_test_small_dataset = SpansDataset(
tokenizer(df_test.comment_text.iloc[:3000].tolist(), truncation=True),
df_test.label[:3000].tolist()
)
```
# Train a single-task model
https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb
https://huggingface.co/transformers/custom_datasets.html
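The `Trainer` runs below are configured without a `compute_metrics` function, so evaluation reports only the loss, and span F1 is computed separately after `trainer.predict`. If a token-level metric during evaluation were wanted, a minimal sketch could look like this (a hypothetical helper, not used by the cells below; it assumes ignored positions are labelled -100, as the padding in `DataCollatorForTokenClassification` does):

```
import numpy as np

def token_accuracy(eval_pred):
    "Token-level accuracy over non-ignored positions (hypothetical helper)."
    logits, labels = eval_pred                 # predictions and label_ids from the Trainer
    preds = np.argmax(logits, axis=-1)
    mask = labels != -100                      # skip padded / ignored positions (assumption)
    return {"token_accuracy": float((preds[mask] == labels[mask]).mean())}
```

It would be passed to the trainer as `Trainer(..., compute_metrics=token_accuracy)`.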
```
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback
from transformers.file_utils import cached_property
from typing import Tuple
class TrAr(TrainingArguments):
@cached_property
def _setup_devices(self):
return device
torch.cuda.set_device(device)
model = AutoModelForTokenClassification.from_pretrained(model_name)
model.to(device);
for param in model.roberta.parameters():
param.requires_grad = False
training_args = TrAr(
output_dir='./models2/roberta_single', # output directory
overwrite_output_dir=True,
num_train_epochs=10, # total # of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler
weight_decay=1e-8, # strength of weight decay
learning_rate=1e-3,
logging_dir='./logs', # directory for storing logs
logging_steps=100,
eval_steps=100,
evaluation_strategy='steps',
save_total_limit=1,
load_best_model_at_end=True,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset, # evaluation dataset
data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0)]
)
trainer.train()
for param in model.parameters():
param.requires_grad = True
training_args = TrAr(
output_dir='./models2/roberta_single', # output directory
overwrite_output_dir=True,
num_train_epochs=10, # total # of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler
weight_decay=1e-8, # strength of weight decay
learning_rate=1e-5,
logging_dir='./logs', # directory for storing logs
logging_steps=500,
eval_steps=500,
evaluation_strategy='steps',
save_total_limit=1,
load_best_model_at_end=True,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset, # evaluation dataset
data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0)]
)
```
* The minimal loss of a single-task model (trained end to end) was about 0.28 on validation with 0.04 on train.
* If we first train only the head (batch 8, lr 1e-3 with 3K warmup steps and weight decay 1e-8), we get a minimal loss of 0.185 on validation with 0.23 on train.
* Training the whole model afterwards (batch 8, lr 1e-5 with 3K warmup steps and weight decay 1e-8) gives a minimal loss of 0.175 on validation with 0.21 on train.
```
trainer.train()
model.save_pretrained('./models2/roberta_single')
trainer.evaluate()
```
### evaluate
```
pred = trainer.predict(eval_dataset)
for threshold in [0, 0.01, 0.03, 0.1, 0.3, 0.4, 0.5, 0.6, 0.7, 1]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
print(threshold, np.mean([f1(p, y) for p, y in zip(preds, trial.spans)]))
for threshold in [0.3, 0.32, 0.35, 0.38, 0.4, 0.42, 0.45, 0.5, 0.55, 0.6]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
print(threshold, np.mean([f1(p, y) for p, y in zip(preds, trial.spans)]))
```
## Prepare a submission
```
pred = trainer.predict(final_test_dataset)
threshold = 0.4
preds = []
for text, pr in zip(final_test.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
row = final_test.sample(1).iloc[0]
display_spans(preds[row.name], row.text)
```
65.31%
```
print(np.mean([f1(p, y) for p, y in zip(preds, final_test.spans)]))
```
# WM Classifier + tagging
```
from transformers import RobertaTokenizer, RobertaForTokenClassification, RobertaForSequenceClassification
from transformers import BertTokenizer, BertForTokenClassification
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
from transformers.models.roberta.modeling_roberta import RobertaModel
from transformers.modeling_outputs import SequenceClassifierOutput
import torch.nn as nn
from torch.nn import CrossEntropyLoss, MSELoss
class WMean(nn.Module):
def __init__(self, dim=-2):
super(WMean, self).__init__()
self.pow = torch.nn.Parameter(data=torch.Tensor([1.0]), requires_grad=True)
self.coef = torch.nn.Parameter(data=torch.Tensor([0.0, 1.0]), requires_grad=True)
self.dim = dim
def forward(self, x, mask=None):
result = x ** self.pow[0]
if mask is None:
            mp = result.mean(dim=self.dim)  # mean over the token dimension, matching the masked branch
else:
mp = (result * mask).sum(dim=self.dim) / mask.sum(dim=self.dim)
return torch.log(mp) * self.coef[1] + self.coef[0]
class RobertaTaggerClassifier(RobertaForTokenClassification):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.roberta = RobertaModel(config, add_pooling_layer=False)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.wmean = WMean()
self.init_weights()
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.roberta(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
sequence_output = outputs[0]
sequence_output = self.dropout(sequence_output)
token_logits = self.classifier(sequence_output)
if attention_mask is not None:
masks = attention_mask.unsqueeze(-1).repeat(1, 1, 2)
else:
masks = None
logits = self.wmean(torch.softmax(token_logits, dim=-1), mask=masks)
loss = None
if labels is not None:
if self.num_labels == 1:
# We are doing regression
loss_fct = MSELoss()
loss = loss_fct(logits.view(-1), labels.view(-1))
else:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return SequenceClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
model = RobertaTaggerClassifier.from_pretrained('roberta-base')
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
o = model(**inputs)
o
#device = torch.device('cuda:3')
from transformers import Trainer, TrainingArguments
from transformers.file_utils import cached_property
from typing import Tuple
class TrAr(TrainingArguments):
@cached_property
def _setup_devices(self):
return device
```
The strategy: first tune only the head with large batches and a high learning rate, then tune the whole model.
Head-only training stops at a loss of 0.4185; the full model at 0.302685.
```
for param in model.roberta.parameters():
param.requires_grad = False
NEW_MODEL_NAME = './models2/roberta_clf_wm'
training_args = TrAr(
output_dir=NEW_MODEL_NAME, # output directory
overwrite_output_dir=True,
num_train_epochs=10, # total # of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler
weight_decay=1e-8, # strength of weight decay
learning_rate=1e-3,
logging_dir='./logs', # directory for storing logs
logging_steps=100,
eval_steps=500,
evaluation_strategy='steps',
save_total_limit=1,
load_best_model_at_end=True,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=clf_train_dataset, # training dataset
eval_dataset=clf_test_small_dataset, # evaluation dataset
#data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0)]
)
trainer.train();
for param in model.parameters():
param.requires_grad = True
training_args = TrAr(
output_dir=NEW_MODEL_NAME, # output directory
overwrite_output_dir=True,
num_train_epochs=10, # total # of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler
weight_decay=1e-8, # strength of weight decay
learning_rate=1e-5,
logging_dir='./logs', # directory for storing logs
logging_steps=500,
eval_steps=500,
evaluation_strategy='steps',
save_total_limit=1,
load_best_model_at_end=True,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=clf_train_dataset, # training dataset
eval_dataset=clf_test_small_dataset, # evaluation dataset
#data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[EarlyStoppingCallback(early_stopping_patience=10, early_stopping_threshold=0)]
)
import gc
gc.collect()
torch.cuda.empty_cache()
trainer.train()
print(model.wmean.pow)
print(model.wmean.coef)
model.save_pretrained(NEW_MODEL_NAME)
```
# Fine tune the averager classifier
```
model = AutoModelForTokenClassification.from_pretrained('./models2/roberta_clf_wm')
NEW_MODEL_NAME = './models2/roberta_clf_wm_ft'
for param in model.roberta.parameters():
param.requires_grad = False
training_args = TrAr(
output_dir=NEW_MODEL_NAME, # output directory
overwrite_output_dir=True,
num_train_epochs=10, # total # of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler
weight_decay=1e-8, # strength of weight decay
learning_rate=1e-3,
logging_dir='./logs', # directory for storing logs
logging_steps=100,
eval_steps=500,
evaluation_strategy='steps',
save_total_limit=1,
load_best_model_at_end=True,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset, # evaluation dataset
data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0)]
)
trainer.train()
```
* The raw quasi-classifier: no benefit to the model at all.
* Fine-tuned head only: still no benefit; the best score is 0.2138.
* Fine-tuning the whole model: 0.6849391042415774 at threshold 0.3.
```
for param in model.parameters():
param.requires_grad = True
training_args = TrAr(
output_dir=NEW_MODEL_NAME, # output directory
overwrite_output_dir=True,
num_train_epochs=10, # total # of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler
weight_decay=1e-8, # strength of weight decay
learning_rate=1e-5,
logging_dir='./logs', # directory for storing logs
logging_steps=500,
eval_steps=500,
evaluation_strategy='steps',
save_total_limit=1,
load_best_model_at_end=True,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset, # evaluation dataset
data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0)]
)
trainer.train()
```
```
trainer.train()
NEW_MODEL_NAME
model.save_pretrained(NEW_MODEL_NAME)
pred = trainer.predict(eval_dataset)
for threshold in [0, 0.01, 0.03, 0.1, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 1]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
print(threshold, np.mean([f1(p, y) for p, y in zip(preds, trial.spans)]))
for threshold in [ 0.25, 0.28, 0.3, 0.32, 0.35]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
print(threshold, np.mean([f1(p, y) for p, y in zip(preds, trial.spans)]))
pred = trainer.predict(final_test_dataset)
threshold = 0.4
preds = []
for text, pr in zip(final_test.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
print(len(preds))
print(np.mean([f1(p, y) for p, y in zip(preds, final_test.spans)]))
```
# Try to reproduce the score of an ordinary classifier fine tuned as tagger
* roberta_clf_proba - roberta classifier with wm head
* roberta_clf_ft_plus_pseudolabels - roberta_clf_ft + pseudolabels fine-tuning on data/train/train.1.tsv
* roberta_clf - preliminary form of roberta_clf_proba
* roberta_clf_ft - roberta_clf_proba + tagger fine-tuning
* roberta_selflabel - preliminary form of roberta_clf_ft_plus_pseudolabels
* roberta_selflabel_final - preliminary form of roberta_clf_ft_plus_pseudolabels
* roberta_single_v2 - just roberta tagger
* roberta_single - just roberta tagger, first version
* roberta_clf_2 - roberta classic classifier
* roberta_ft_v2 - roberta_clf_2 + tagger fine-tuning
#### roberta_ft_v2
```
model = RobertaForTokenClassification.from_pretrained('models/roberta_ft_v2')
model.to(device);
training_args = TrAr(
output_dir='tmp',
per_device_eval_batch_size=8,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
tokenizer=tokenizer,
)
pred = trainer.predict(eval_dataset)
for threshold in [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, trial.spans)])
print(threshold, score)
pred = trainer.predict(final_test_dataset)
scores = []
for threshold in [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(final_test.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, final_test.spans)])
print(threshold, score)
scores.append(score)
scores_standard_clf = scores
```
#### roberta_clf_ft
```
model = RobertaForTokenClassification.from_pretrained('models/roberta_clf_ft')
model.to(device);
training_args = TrAr(
output_dir='tmp',
per_device_eval_batch_size=8,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
tokenizer=tokenizer,
)
pred = trainer.predict(eval_dataset)
for threshold in [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, trial.spans)])
print(threshold, score)
pred = trainer.predict(final_test_dataset)
scores = []
for threshold in [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(final_test.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, final_test.spans)])
print(threshold, score)
scores.append(score)
scores_tagging_clf = scores
```
#### roberta_clf_ft_plus_pseudolabels
```
model = RobertaForTokenClassification.from_pretrained('models/roberta_clf_ft_plus_pseudolabels')
model.to(device);
training_args = TrAr(
output_dir='tmp',
per_device_eval_batch_size=8,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
tokenizer=tokenizer,
)
pred = trainer.predict(eval_dataset)
for threshold in [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, trial.spans)])
print(threshold, score)
pred = trainer.predict(final_test_dataset)
scores = []
for threshold in [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(final_test.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, final_test.spans)])
print(threshold, score)
scores.append(score)
scores_pseudolabel = scores
```
#### roberta_single_v2
```
model = RobertaForTokenClassification.from_pretrained('models/roberta_single_v2')
model.to(device);
training_args = TrAr(
output_dir='tmp',
per_device_eval_batch_size=8,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
tokenizer=tokenizer,
)
pred = trainer.predict(eval_dataset)
for threshold in [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(trial.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, trial.spans)])
print(threshold, score)
pred = trainer.predict(final_test_dataset)
scores = []
for threshold in [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
preds = []
for text, pr in zip(final_test.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
score = np.mean([f1(p, y) for p, y in zip(preds, final_test.spans)])
print(threshold, score)
scores.append(score)
scores_standard = scores
xx = [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]
plt.plot(xx, scores_standard)
plt.plot(xx, scores_standard_clf)
plt.plot(xx, scores_tagging_clf)
plt.plot(xx, scores_pseudolabel)
plt.legend(['standard', 'clf', 'tagging clf', 'pseudo labels'])
ss = [scores_standard, scores_pseudolabel, scores_standard_clf, scores_tagging_clf]
for sss in ss:
print(f'{np.max(sss):.3f}, {xx[np.argmax(sss)]}, {sss[8]:.3f}, {sss[10]:.3f}')
```
#### standard deviation of score
```
threshold = 0.5
preds = []
for text, pr in zip(final_test.text, pred.predictions):
proba = np.exp(pr[pr[:, 0]!=-100])
proba /= proba.sum(axis=1, keepdims=True)
labels = (proba[:, 1] >= threshold).astype(int).tolist()
preds.append(labels2spans(text, labels, tokenizer))
ff = [f1(p, y) for p, y in zip(preds, final_test.spans)]
score = np.mean(ff)
print(score)
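# 1.96 is roughly the 97.5th percentile of the standard normal, i.e. the half-width multiplier for a 95% CI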
se = np.std(ff) / np.sqrt(len(ff)) * 1.96
print(score - se, score + se)
np.std(ff) / np.sqrt(len(ff))
```
|
github_jupyter
|
# Predicting Credit Card Default with Neural Networks
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
%matplotlib inline
```
### Back with the credit card default dataset
```
# Loading the dataset
DATA_DIR = '../data'
FILE_NAME = 'credit_card_default.csv'
data_path = os.path.join(DATA_DIR, FILE_NAME)
ccd = pd.read_csv(data_path, index_col="ID")
ccd.rename(columns=lambda x: x.lower(), inplace=True)
ccd.rename(columns={'default payment next month':'default'}, inplace=True)
# getting the groups of features
bill_amt_features = ['bill_amt'+ str(i) for i in range(1,7)]
pay_amt_features = ['pay_amt'+ str(i) for i in range(1,7)]
numerical_features = ['limit_bal','age'] + bill_amt_features + pay_amt_features
# Creating binary features
ccd['male'] = (ccd['sex'] == 1).astype('int')
ccd['grad_school'] = (ccd['education'] == 1).astype('int')
ccd['university'] = (ccd['education'] == 2).astype('int')
#ccd['high_school'] = (ccd['education'] == 3).astype('int')
ccd['married'] = (ccd['marriage'] == 1).astype('int')
# simplifying pay features
pay_features= ['pay_' + str(i) for i in range(1,7)]
for x in pay_features:
ccd.loc[ccd[x] <= 0, x] = 0
# simplifying delayed features
delayed_features = ['delayed_' + str(i) for i in range(1,7)]
for pay, delayed in zip(pay_features, delayed_features):
ccd[delayed] = (ccd[pay] > 0).astype(int)
# creating a new feature: months delayed
ccd['months_delayed'] = ccd[delayed_features].sum(axis=1)
```
## Split and standardize the dataset
```
numerical_features = numerical_features + ['months_delayed']
binary_features = ['male','married','grad_school','university']
X = ccd[numerical_features + binary_features]
y = ccd['default'].astype(int)
## Split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5/30, random_state=101)
## Standardize
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train[numerical_features])
X_train.loc[:, numerical_features] = scaler.transform(X_train[numerical_features])
# Standardize the testing set as well
X_test.loc[:, numerical_features] = scaler.transform(X_test[numerical_features])
```
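The in-place `.loc` assignments above can trigger pandas chained-assignment warnings. A minimal alternative sketch that keeps the same split but routes the scaling through a `ColumnTransformer` (assuming the `numerical_features` and `binary_features` lists defined above):

```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

# scale the numerical columns, pass the binary columns through unchanged
preprocess = ColumnTransformer(
    [('num', StandardScaler(), numerical_features)],
    remainder='passthrough')

X_train_arr = preprocess.fit_transform(X_train)   # fit on the training set only
X_test_arr = preprocess.transform(X_test)         # reuse the training statistics
```

The resulting arrays can then be fed to the network in place of the scaled DataFrames.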
### Building the neural network for classification
```
from keras.models import Sequential
nn_classifier = Sequential()
from keras.layers import Dense
n_input = X_train.shape[1]
n_units_hidden = 64
nn_classifier.add(Dense(units=n_units_hidden, activation='relu', input_shape=(n_input,)))
# add 2nd hidden layer
nn_classifier.add(Dense(units=n_units_hidden, activation='relu'))
# add 3rd hidden layer
nn_classifier.add(Dense(units=n_units_hidden, activation='relu'))
# add 4th hidden layer
nn_classifier.add(Dense(units=n_units_hidden, activation='relu'))
# add 5th hidden layer
nn_classifier.add(Dense(units=n_units_hidden, activation='relu'))
# output layer
nn_classifier.add(Dense(1, activation='sigmoid'))
```
### Training the network
```
## compiling step
nn_classifier.compile(loss='binary_crossentropy', optimizer='adam')
nn_classifier.summary()
nn_classifier.save_weights('class_initial_w.h5')
batch_size = 64
n_epochs = 150
nn_classifier.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size)
```
## Evaluating predictions
```
## Getting the probabilities
y_pred_train_prob = nn_classifier.predict(X_train)
y_pred_test_prob = nn_classifier.predict(X_test)
## Classifications from predictions
y_pred_train = (y_pred_train_prob > 0.5).astype(int)
y_pred_test = (y_pred_test_prob > 0.5).astype(int)
from sklearn.metrics import accuracy_score
train_acc = accuracy_score(y_true=y_train, y_pred=y_pred_train)
test_acc = accuracy_score(y_true=y_test, y_pred=y_pred_test)
print("Train Accuracy: {:0.3f} \nTest Accuracy: {:0.3f}".format(train_acc, test_acc))
```
## Re-training the network with fewer epochs
```
## load the initial weights
nn_classifier.load_weights('class_initial_w.h5')
batch_size = 64
n_epochs = 50
nn_classifier.compile(loss='binary_crossentropy', optimizer='adam')
nn_classifier.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size)
## Getting the probabilities
y_pred_train_prob = nn_classifier.predict(X_train)
y_pred_test_prob = nn_classifier.predict(X_test)
## Classifications from predictions
y_pred_train = (y_pred_train_prob > 0.5).astype(int)
y_pred_test = (y_pred_test_prob > 0.5).astype(int)
## Calculating accuracy
train_acc = accuracy_score(y_true=y_train, y_pred=y_pred_train)
test_acc = accuracy_score(y_true=y_test, y_pred=y_pred_test)
print("Train Accuracy: {:0.3f} \nTest Accuracy: {:0.3f}".format(train_acc, test_acc))
```
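Instead of manually re-running training with a smaller epoch count, a common alternative is to hold out part of the training data and stop when the validation loss stops improving. A minimal sketch, reusing `nn_classifier`, `X_train`, and `y_train` from above (the patience and validation fraction are arbitrary choices, not taken from this notebook):

```
from keras.callbacks import EarlyStopping

# start again from the saved initial weights
nn_classifier.load_weights('class_initial_w.h5')
nn_classifier.compile(loss='binary_crossentropy', optimizer='adam')

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
nn_classifier.fit(X_train, y_train,
                  validation_split=0.2,           # hold out 20% of the training set
                  epochs=150, batch_size=64,
                  callbacks=[early_stop])
```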
|
github_jupyter
|
# WELL NOTEBOOK
## Well logs visualization & petrophysics
Install the repository reservoirpy from GitHub and import the required packages
```
import os
path = os.path.join('/home/santiago/Documents/dev/reservoirpy')
import sys
sys.path.insert(0,path)
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from shapely.geometry import Point
import folium
from pyproj import Proj, transform, CRS, Transformer
import pyvista as pv
from reservoirpy.wellpy import path as ph
```
### Well attributes
Well attributes: name, rte, coordinates, survey
```
deviation = pd.read_csv('survey.csv', header=[0])
deviation.head()
tops1 = ph.tops({'formation':['fm1','fm2'],'md_top':[5000,5100],'md_bottom':[5099,5145]})
tops1
```
## Create some wells
```
#Create the well object
name1 = 'well-1'
rte1 = 1515.78 # Rotary table Elevation
surf_coord1 = [1000000,1000000]#Point(1000100,1000000,520)
crs1 = 'EPSG:3117'
tops1 = ph.tops({'formation':['fm1','fm2'],'md_top':[12000,12100],'md_bottom':[12099,12145]})
deviation1 = deviation.copy()
deviation1['azi'] = deviation1['azi'] + 0
w1 = ph.well(name=name1,
rte=rte1,
surf_coord=surf_coord1,
survey = deviation1,
tops=tops1,
crs=crs1)
#Create the well object
name2 = 'well-2'
rte2 = 515 # Rotary table Elevation
surf_coord2 = Point(1000100,1000000)
crs2 = 'EPSG:3117'
tops2 = ph.tops({'formation':['fm1','fm2'],'md_top':[12000,12100],'md_bottom':[12099,12145]})
deviation2 = deviation.copy()
deviation2['azi'] = deviation2['azi'] + 0
w2 = ph.well(name=name2,
rte=rte2,
surf_coord=surf_coord2,
survey = deviation2,
tops=tops2,
crs=crs2)
#Create the well object
name3 = 'well-3'
rte3 = 515 # Rotary table Elevation
surf_coord3 = Point(1000500,1000000)
crs3 = 'EPSG:3117'
tops3 = ph.tops({'formation':['fm1','fm2'],'md_top':[12000,12100],'md_bottom':[12099,12145]})
deviation3 = deviation.copy()
deviation3['azi'] = deviation3['azi'] + 30
w3 = ph.well(name=name3,
rte=rte3,
surf_coord=surf_coord3,
survey = deviation3,
tops=tops3,
crs=crs3)
#Create the well object
name4 = 'well-4'
rte4 = 515 # Rotary table Elevation
surf_coord4 = Point(1100500,1200000)
crs4 = 'EPSG:3117'
tops4 = ph.tops({'formation':['fm1','fm2'],'md_top':[12000,12100],'md_bottom':[12099,12145]})
w4 = ph.well(name=name4,
rte=rte4,
surf_coord=surf_coord4,
tops=tops4,
crs=crs4)
#Create the well object
name5 = 'well-5'
rte5 = 515 # Rotary table Elevation
surf_coord5 = Point(1170500,1200000)
crs5 = 'EPSG:3117'
tops5 = ph.tops({'formation':['fm1','fm2'],'md_top':[12000,12100],'md_bottom':[12099,12145]})
w5 = ph.well(name=name5,
rte=rte5,
surf_coord=surf_coord5,
tops=tops5,
crs=crs5,
td=8452)
w4.survey
```
## Create an empty wells group
You can create a `wells_group` object either empty or not. It only accepts `well` objects.
```
g1 = ph.wells_group(w1)
```
To see the list of wells, use `wells_group.wells`. It contains a dictionary with the name of each well as the key and the `well` object as the value
```
g1.wells
```
### Add more wells to an existing list
By calling the method `wells_group.add_well()` you can add more wells to an existing group
```
g1.add_well(w2,w3)
g1.wells
```
### Get attributes from a `wells_group`
```
g1.wells['well-3'].surf_coord.wkt
```
### Describe each well with its attributes
```
g1.describe()
```
#### Wells tops
Get the wells' formation tops. If no parameters are passed, it returns all wells and formations. You can pass the `wells` and `formations` parameters to get only the selected wells and formations
```
g1.wells_tops()
g1.wells_tops(wells=['well-1','well-2'], formations=['fm1'])
```
#### Wells survey
```
g1.wells_surveys().head()
g1.wells_surveys(wells=['well-1','well-2'])
g1.wells_distance(dims=['z'])
dist = g1.wells_distance(wells=['well-1','well-2'],dims=['y','z','x'])
dist
m = g1.wells_map(zoom=13)
m
g1.wells_coordinates()
g1.wells_tops().head()
g1.formation_distance(formation='fm2')
g1.formation_distance(wells=['well-1','well-2','well-3'],formation='fm2', dims=['tvdss_top'])
fig, ax = plt.subplots()
for i in g1.wells:
_t = g1.wells[i].tops
_s = g1.wells[i].survey
ax.scatter(_t['easting']-1000000,_t['northing']-1000000)
ax.plot(_s['easting']-1000000,_s['northing']-1000000)
df, c = g1.wells_tops(projection1d=True, azi=45)
print(c)
print(df)
surv,ce = g1.wells_surveys(projection1d=True, azi=45, center=c)
print(surv)
azi= 0
tops, center = g1.wells_tops(projection1d=True, azi=azi)
surv,ce = g1.wells_surveys(projection1d=True, azi=azi, center=center)
fig, ax = plt.subplots()
sns.lineplot(x='projection',y='tvdss_top', data=tops,
hue='formation', style='formation',markers=True, ax=ax, palette='Set1')
sns.lineplot(x='projection',y='tvdss', data=surv,
hue='well', style='well', ax=ax,palette='GnBu_d')
g1.structural_view(azi=45,ylims=[-4000,-12000],formations=['fm2'])
g1.structural_view(azi=45,formations=['fm1'], wells=['well-1','well-2'])
```
## Export well surveys to PyVista VTK objects
```
w1_vtk = g1.wells['well-1'].get_vtk()
w1_vtk
w1_vtk.plot(notebook=False)
ss=g1.wells_surveys_vtk()
ss.plot(notebook=False)
p=pv.Plotter(notebook=False)
p.add_mesh(ss['well-1'], scalars='azi')
p.add_mesh(ss['well-2'], scalars='tvdss')
p.show()
tops_vtk = g1.tops_vtk()
tops_vtk.plot(notebook=False)
str_vtk = g1.structural_view_vtk()
str_vtk.plot(notebook=False)
```
|
github_jupyter
|
# **<div align="center"> Dolby.io Developer Days Media APIs 101 - Getting Started </div>**
### **<div align="center"> Notebook #1: Getting Started</div>**
### Starting with a Raw Audio File
We can run code blocks like this in Binder by pressing "Control+Enter". Try it now after clicking the code block below!
```
import IPython # Helper library to play audio files in Python natively.
# Set this link to any publicly accessible media file you would like!
original_audio_file = "https://dolbyio.s3-us-west-1.amazonaws.com/public/shelby/airplane.original.mp4"
IPython.display.Audio(original_audio_file) # Display the audio embedded within python
```
This imported IPython into our workspace, letting us play media files natively within Python, and set a variable pointing to the public media file we will use for the rest of this notebook.
### **Step #1:** Gathering Credentials
- Go to http://dashboard.dolby.io/signup/ to sign up for a Dolby.io account.
- At the bottom of the "Applications" widget on the dashboard, click "_my first app_"
- Scroll down to the box labeled **'Media APIs'**.
- Copy the key text under "API Key:" and replace the string below, then run the cell.
- Also enter in your name to customize the output URL later.
- _Press Control+Enter to run the cell._

```
# Enter your Dolby.io Media API Key here.
api_key = "<YOUR_API_KEY_HERE>"
# Enter your name here to customize the output URL later.
name = "<YOUR_NAME_HERE>"
print("API Key and Name set!")
```
Now we have two key variables set:
1. The link to the original media file we want to process.
2. Our API key so we can properly call the REST API endpoints.
We also set your name, just so we can differentiate the output later on.
### **Step #2:** Calling the Enhance Job
> Note: all of the following code is adapted from the Enhance quickstart found here: https://docs.dolby.io/media-apis/docs/quick-start-to-enhancing-media
- Run the cell below to start the enhance job; it should output a JSON response with only a `job_id` in the body if no errors occur.
```
import requests # Python library to make HTTP requests
output_url = f"dlb://out/workshop-{name}.mp4" # Setting the output URL to have a different location based on your name!
# Building the body of the request
body = {
"input" : original_audio_file,
"output" : output_url,
}
# Building the headers and url of the request
url = "https://api.dolby.com/media/enhance"
headers = {
"x-api-key": api_key,
"Content-Type": "application/json",
"Accept": "application/json"
}
# Call the API request!
response = requests.post(url, json=body, headers=headers)
response.raise_for_status()
print(response.json()) # Prints out the output of the request
```
### **Step #3:** Checking Job Status
- Now that we have created a job, we should check its status.
- Run the cell below to check the status; this file is small, so it should take only a couple of seconds.
```
url = "https://api.dolby.com/media/enhance"
headers = {
"x-api-key": api_key,
"Content-Type": "application/json",
"Accept": "application/json"
}
params = {
"job_id": response.json()["job_id"]
}
response = requests.get(url, params=params, headers=headers)
response.raise_for_status()
print(response.json())
```
This should look like the following when done:
```json
{'path': '/media/enhance', 'status': 'Success', 'progress': 100, 'api_version': 'v1.1.2', 'result': {}}
```
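For larger files the job may still be running on the first check. A minimal polling sketch, reusing the `url`, `headers`, and `params` variables from the cell above (the five-second interval and the set of terminal states are assumptions, not taken from the quickstart):

```
import time

while True:
    status_response = requests.get(url, params=params, headers=headers)
    status_response.raise_for_status()
    body = status_response.json()
    print(body["status"], body.get("progress"))
    if body["status"] in ("Success", "Failed"):   # assumed terminal states
        break
    time.sleep(5)                                  # wait before checking again
```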
### **Step #4:** Download the Processed File
- Now we want to download the file!
- We can do this with another request.
```
import shutil
# The name of the file that will be downloaded locally!
output_path = f"workshop-{name}.mp4"
url = "https://api.dolby.com/media/output"
headers = {
"x-api-key": api_key,
"Content-Type": "application/json",
"Accept": "application/json",
}
args = {
"url": output_url
}
# Take the response and download it locally
with requests.get(url, params=args, headers=headers, stream=True) as response:
response.raise_for_status()
response.raw.decode_content = True
print("Downloading from {0} into {1}".format(response.url, output_path))
with open(output_path, "wb") as output_file:
shutil.copyfileobj(response.raw, output_file)
```
When it is done downloading, you'll see it pop up on the left side bar.
Now that the file is downloaded, let's give it a listen. Does it sound better?
```
IPython.display.Audio(output_path)
```
### **Congratulations, you made your first call with the Dolby.io Enhance API!**
We can now move on to Workshop Part 2 on the left sidebar!

References:
https://docs.python-requests.org/en/latest/
https://ipython.org/
|
github_jupyter
|
# Import
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib
matplotlib.__version__
np.__version__, pd.__version__
```
# Dataset:
```
from sklearn.datasets import fetch_california_housing
data = fetch_california_housing()
X = data['data']
y = data['target']
columns = data['feature_names']
train_df = pd.DataFrame(X, index=np.arange(len(X)), columns=columns)
train_df['target'] = y
train_df.head()
```
# 1) Initialize:
```
import sys
sys.path.append('../SWMat/')
from SWMat import SWMat
from matplotlib.patches import Wedge, Polygon
from matplotlib.collections import PatchCollection
fig = plt.figure(figsize=(10, 7))
ax = plt.gca()
for pos in ["right", "left", "top", "bottom"]:
ax.spines[pos].set_visible(False)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
patches = []
patches += [Wedge((.2, .6), .1, 45, 270, width=0.05),
Wedge((.2, .45), .1, 225, 450, width=0.05),
Polygon(np.array([[.22, .23], [.26, .23], [.33, .48], [.39, .40], [.43, .47], [.49, .22],
[.52, .22], [.45, .53], [.42, .54], [.39, .46], [.34, .53], [.32, .54]]) + np.array([0.25, 0.3])),
Polygon(np.array([[.32, .70], [.27, .32], [.32, .31], [.36, .44], [.40, .44], [.43, .30], [.45, .30],
[.53, .66], [.50, .67], [.45, .39], [.43, .39], [.42, .48], [.38, .50], [.32, .37], [.29, .37], [.35, .70]]))
]
colors = 100*np.random.rand(len(patches))
p = PatchCollection(patches, alpha=0.85)
p.set_array(np.array(colors))
ax.add_collection(p);
plt.text(0.1, 0.09, "Storytelling With Matplotlib", fontsize=30, color="#3b5998")
plt.annotate("Cluttered Data...", xy=(.8, .5), xytext=(1.1, .75), color="black",
arrowprops={'arrowstyle':'->', 'color': 'black',
"connectionstyle":"arc3,rad=-0.2"},
bbox={'pad':6, 'edgecolor':'orange', 'facecolor':
'orange', 'alpha':0.4}, fontsize=17)
#plt.text(x=1.3, y=.1, s="Communicating Data\nEffectively.", fontsize=20, ha="center")
swm = SWMat(plt, ax=ax)
swm.text("\> Communicating <prop color='#3b5998' fontsize='30'>Data</prop>Effectively.", fontsize=20,
position="out-lower-right");
# Simple Text
swm = SWMat(plt) # And... base beautifications will be added.
y = np.arange(500) + np.random.random(500)*50 + np.random.random(500)*40 + np.random.random(500)*50 + np.random.random(500)*10
x = np.arange(500)
plt.scatter(x, y)
swm.text("Here goes your text!\nAnother Text!!");
swm = SWMat(plt)
ls = swm.line_plot(np.array([[1, 2, 3, 4], [1, 2, 3, 4]]).T, np.array([[1, 4, 2, 6], [4, 2, 6, 5]]).T, line_labels=["A", "B"],
highlight=0, lw=3)
swm = SWMat(plt)
hist = swm.hist(train_df['target'], highlight=3, bins=[0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5], ec='w', hide_y=True)
#t = swm.text("My first text!<prop>Possible Outliers</prop><prop>haleluya\nyo lib ipsum dipsum</prop>\nipsum",
# fontsize=18)
swm = SWMat(plt)
swm.bar(np.array([[1, 2, 3], [1, 2, 3]]), np.array([[2, 5, 3], [4, 1, 3]]), data_labels=["Alpha", "Beta"], highlight={"data":1, "cat":1},
cat_labels=["One", "Two", "Three"], plot_type="stacked100%", width=0.8);
swm = SWMat(plt)
v = swm.violinplot(train_df['target'], show="top", highlight={"0":[(0.7, 2.3), (4.7, 6)]})
swm = SWMat(plt)
swm.bar(np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]), np.array([[2, 5, 3], [4, 3, 6], [2, 4, 2], [2, 4, 1]]), data_labels=["A", "B", "C", "D"], cat_labels=["One", "Two", "Three"], highlight={"data":1});
swm = SWMat(plt)
swm.bar(np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]), np.array([[2, 5, 3], [4, 3, 6], [2, 4, 2]]), data_labels=["A", "B", "C"], cat_labels=["One", "Two", "Three"]);
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../docs')
from gen_doc.nbdoc import show_doc as sd
#export
from nb_001b import *
import sys, PIL, matplotlib.pyplot as plt, itertools, math, random, collections, torch
import scipy.stats, scipy.special
from enum import Enum, IntEnum
from torch import tensor, Tensor, FloatTensor, LongTensor, ByteTensor, DoubleTensor, HalfTensor, ShortTensor
from operator import itemgetter, attrgetter
from numpy import cos, sin, tan, tanh, log, exp
from dataclasses import field
from functools import reduce
from collections import defaultdict, abc, namedtuple
from collections.abc import Iterable
from typing import Tuple, Hashable, Mapping, Dict
import mimetypes
import abc
from abc import abstractmethod, abstractproperty
```
# CIFAR subset data
First we want to view our data to check that everything is as we expect it to be.
## Setup
```
DATA_PATH = Path('data')
PATH = DATA_PATH/'cifar10_dog_air'
TRAIN_PATH = PATH/'train'
dog_fn = list((TRAIN_PATH/'dog').iterdir())[0]
dog_image = PIL.Image.open(dog_fn)
dog_image.resize((256,256))
air_fn = list((TRAIN_PATH/'airplane').iterdir())[1]
air_image = PIL.Image.open(air_fn)
air_image.resize((256,256))
```
## Simple Dataset/Dataloader
We will build a Dataset class for our image files. A Dataset class needs to implement two methods: `__len__` and `__getitem__`. Our `ImageDataset` class additionally gets image files from their respective directories and transforms them to tensors.
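For reference, the whole Dataset protocol is just those two methods; a minimal sketch with toy data, independent of the fastai-style helpers defined below:

```
from torch.utils.data import Dataset, DataLoader

class PairDataset(Dataset):
    "Toy dataset wrapping parallel lists of inputs and labels."
    def __init__(self, xs, ys): self.xs, self.ys = xs, ys
    def __len__(self): return len(self.xs)
    def __getitem__(self, i): return self.xs[i], self.ys[i]

ds = PairDataset([0., 1., 2., 3.], [0, 1, 0, 1])
dl = DataLoader(ds, batch_size=2, shuffle=True)
xb, yb = next(iter(dl))     # one batch of two (input, label) pairs, collated into tensors
```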
```
#export
def image2np(image:Tensor)->np.ndarray:
"convert from torch style `image` to numpy/matplot style"
res = image.cpu().permute(1,2,0).numpy()
return res[...,0] if res.shape[2]==1 else res
def show_image(img:Tensor, ax:plt.Axes=None, figsize:tuple=(3,3), hide_axis:bool=True,
title:Optional[str]=None, cmap:str='binary', alpha:Optional[float]=None)->plt.Axes:
"plot tensor `img` using matplotlib axis `ax`. `figsize`,`axis`,`title`,`cmap` and `alpha` pass to `ax.imshow`"
if ax is None: fig,ax = plt.subplots(figsize=figsize)
ax.imshow(image2np(img), cmap=cmap, alpha=alpha)
if hide_axis: ax.axis('off')
if title: ax.set_title(title)
return ax
class Image():
def __init__(self, px): self.px = px
def show(self, ax=None, **kwargs): return show_image(self.px, ax=ax, **kwargs)
@property
def data(self): return self.px
#export
FilePathList = Collection[Path]
TensorImage = Tensor
NPImage = np.ndarray
def find_classes(folder:Path)->FilePathList:
"return class subdirectories in imagenet style train `folder`"
classes = [d for d in folder.iterdir()
if d.is_dir() and not d.name.startswith('.')]
assert(len(classes)>0)
return sorted(classes, key=lambda d: d.name)
image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))
def get_image_files(c:Path, check_ext:bool=True)->FilePathList:
"return list of files in `c` that are images. `check_ext` will filter to `image_extensions`."
return [o for o in list(c.iterdir())
if not o.name.startswith('.') and not o.is_dir()
and (not check_ext or (o.suffix in image_extensions))]
def pil2tensor(image:NPImage)->TensorImage:
"convert PIL style `image` array to torch style image tensor `get_image_files`"
arr = torch.ByteTensor(torch.ByteStorage.from_buffer(image.tobytes()))
arr = arr.view(image.size[1], image.size[0], -1)
return arr.permute(2,0,1)
PathOrStr = Union[Path,str]
def open_image(fn:PathOrStr):
"return `Image` object created from image in file `fn`"
x = PIL.Image.open(fn).convert('RGB')
return Image(pil2tensor(x).float().div_(255))
#export
NPArrayableList = Collection[Union[np.ndarray, list]]
NPArrayMask = np.ndarray
SplitArrayList = List[Tuple[np.ndarray,np.ndarray]]
def arrays_split(mask:NPArrayMask, *arrs:NPArrayableList)->SplitArrayList:
"given `arrs` is [a,b,...] and `mask`index - return[(a[mask],a[~mask]),(b[mask],b[~mask]),...]"
mask = array(mask)
return list(zip(*[(a[mask],a[~mask]) for a in map(np.array, arrs)]))
def random_split(valid_pct:float, *arrs:NPArrayableList)->SplitArrayList:
"randomly `array_split` with `valid_pct` ratio. good for creating validation set."
is_train = np.random.uniform(size=(len(arrs[0]),)) > valid_pct
return arrays_split(is_train, *arrs)
class DatasetBase(Dataset):
"base class for all fastai datasets"
def __len__(self): return len(self.x)
@property
def c(self):
"number of classes expressed by dataset y variable"
return self.y.shape[-1] if len(self.y.shape)>1 else 1
def __repr__(self): return f'{type(self).__name__} of len {len(self)}'
class LabelDataset(DatasetBase):
"base class for fastai datasets that do classification"
@property
def c(self):
"number of classes expressed by dataset y variable"
return len(self.classes)
#export
ImgLabel = str
ImgLabels = Collection[ImgLabel]
Classes = Collection[Any]
class ImageDataset(LabelDataset):
"Dataset for folders of images in style {folder}/{class}/{images}"
def __init__(self, fns:FilePathList, labels:ImgLabels, classes:Optional[Classes]=None):
self.classes = ifnone(classes, list(set(labels)))
self.class2idx = {v:k for k,v in enumerate(self.classes)}
self.x = np.array(fns)
self.y = np.array([self.class2idx[o] for o in labels], dtype=np.int64)
def __getitem__(self,i): return open_image(self.x[i]),self.y[i]
@staticmethod
def _folder_files(folder:Path, label:ImgLabel, check_ext=True)->Tuple[FilePathList,ImgLabels]:
"from `folder` return image files and labels. The labels are all `label`. `check_ext` means only image files"
fnames = get_image_files(folder, check_ext=check_ext)
return fnames,[label]*len(fnames)
@classmethod
def from_single_folder(cls, folder:PathOrStr, classes:Classes, check_ext=True):
"typically used for test set. label all images in `folder` with `classes[0]`"
fns,labels = cls._folder_files(folder, classes[0], check_ext=check_ext)
return cls(fns, labels, classes=classes)
@classmethod
def from_folder(cls, folder:Path, classes:Optional[Classes]=None,
valid_pct:float=0., check_ext:bool=True) -> Union['ImageDataset', List['ImageDataset']]:
"""dataset of `classes` labeled images in `folder`. Optional `valid_pct` split validation set."""
if classes is None: classes = [cls.name for cls in find_classes(folder)]
fns,labels = [],[]
for cl in classes:
f,l = cls._folder_files(folder/cl, cl, check_ext=check_ext)
fns+=f; labels+=l
if valid_pct==0.: return cls(fns, labels, classes=classes)
return [cls(*a, classes=classes) for a in random_split(valid_pct, fns, labels)]
sd(ImageDataset.from_folder)
```
# Data augmentation
We are going to augment our data to increase the size of our training set with artificial images. These new images are basically "free" data that we can use in our training to help our model generalize better (reduce overfitting).
## Lighting
We will start by changing the **brightness** and **contrast** of our images.
### Method
**Brightness**
Brightness refers to where our image stands on the dark-light spectrum. Brightness is adjusted by adding a constant to each of the image's channels. This works because each of the channels in an image goes from 0 (darkest) to 255 (brightest) in a dark-light continuum. (0, 0, 0) is black (total absence of light) and (255, 255, 255) is white (pure light). You can check how this works by experimenting yourself [here](https://www.w3schools.com/colors/colors_rgb.asp).
_Parameters_
1. **Change** How much brightness do we want to add to (or take from) the image.
Domain: Real numbers
**Contrast**
Contrast refers to how sharp a distinction there is between the brighter and darker sections of our image. To increase contrast we need darker pixels to be darker and lighter pixels to be lighter. In other words, we would like channels with a value smaller than 128 to decrease and channels with a value greater than 128 to increase.
_Parameters_
1. **Scale** How much contrast do we want to add to (or remove from) the image.
Domain: [0, +inf]
***On logit and sigmoid***
Notice that for both transformations we first apply the logit to our tensor, then apply the transformation and finally take the sigmoid. This is important for two reasons.
First, we don't want to overflow our tensor values. In other words, we need our final tensor values to stay in [0,1]. Imagine, for instance, a tensor value at 0.99. We want to increase its brightness, but we can’t go over 1.0. By applying the logit first, which maps our values to (-inf, +inf), this works fine. The same applies to contrast if we have a scale S > 1 (which might push some of our tensor values above one).
Second, when we apply contrast, we need to affect the dispersion of values around the middle value. Say we want to increase contrast. Then we need the bright values (>0.5) to get brighter and dark values (<0.5) to get darker. We must first transform our tensor values so our values which were originally <0.5 are now negative and our values which were originally >0.5 are now positive. This way, when we multiply by a constant, the dispersion around 0 will increase. The logit function does exactly this and allows us to increase or decrease dispersion around a mid value.
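A quick numeric check of this argument, with a stand-alone `logit` matching the one defined in the implementation below:

```
import torch

def logit(x): return -(1/x - 1).log()

x = torch.tensor([0.01, 0.50, 0.99])
brighter  = torch.sigmoid(logit(x) + 2.0)   # brightness: a shift in logit space stays inside (0, 1)
contrasty = torch.sigmoid(logit(x) * 3.0)   # contrast: scaling spreads values away from 0.5
print(brighter)    # roughly 0.07, 0.88, 1.00 -- nothing overflows past 1
print(contrasty)   # roughly 0.00, 0.50, 1.00 -- dispersion around 0.5 increases
```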
### Implementation
```
#export
def logit(x:Tensor)->Tensor: return -(1/x-1).log()
def logit_(x:Tensor)->Tensor: return (x.reciprocal_().sub_(1)).log_().neg_()
def contrast(x:Tensor, scale:float)->Tensor: return x.mul_(scale)
#export
FlowField = Tensor
LogitTensorImage = TensorImage
AffineMatrix = Tensor
KWArgs = Dict[str,Any]
ArgStar = Collection[Any]
CoordSize = Tuple[int,int,int]
LightingFunc = Callable[[LogitTensorImage, ArgStar, KWArgs], LogitTensorImage]
PixelFunc = Callable[[TensorImage, ArgStar, KWArgs], TensorImage]
CoordFunc = Callable[[FlowField, CoordSize, ArgStar, KWArgs], LogitTensorImage]
AffineFunc = Callable[[KWArgs], AffineMatrix]
class ItemBase():
"All tranformable dataset items use this type"
@property
@abstractmethod
def device(self): pass
@property
@abstractmethod
def data(self): pass
class ImageBase(ItemBase):
"Img based `Dataset` items dervie from this. Subclass to handle lighting, pixel, etc"
def lighting(self, func:LightingFunc, *args, **kwargs)->'ImageBase': return self
def pixel(self, func:PixelFunc, *args, **kwargs)->'ImageBase': return self
def coord(self, func:CoordFunc, *args, **kwargs)->'ImageBase': return self
def affine(self, func:AffineFunc, *args, **kwargs)->'ImageBase': return self
def set_sample(self, **kwargs)->'ImageBase':
"set parameters that control how we `grid_sample` the image after transforms are applied"
self.sample_kwargs = kwargs
return self
def clone(self)->'ImageBase':
"clones this item and its `data`"
return self.__class__(self.data.clone())
#export
class Image(ImageBase):
"supports appying transforms to image data"
def __init__(self, px)->'Image':
"create from raw tensor image data `px`"
self._px = px
self._logit_px=None
self._flow=None
self._affine_mat=None
self.sample_kwargs = {}
@property
def shape(self)->Tuple[int,int,int]:
"returns (ch, h, w) for this image"
return self._px.shape
@property
def size(self)->Tuple[int,int,int]:
"returns (h, w) for this image"
return self.shape[-2:]
@property
def device(self)->torch.device: return self._px.device
def __repr__(self): return f'{self.__class__.__name__} ({self.shape})'
def refresh(self)->None:
"applies any logit or affine transfers that have been "
if self._logit_px is not None:
self._px = self._logit_px.sigmoid_()
self._logit_px = None
if self._affine_mat is not None or self._flow is not None:
self._px = grid_sample(self._px, self.flow, **self.sample_kwargs)
self.sample_kwargs = {}
self._flow = None
return self
@property
def px(self)->TensorImage:
"get the tensor pixel buffer"
self.refresh()
return self._px
@px.setter
def px(self,v:TensorImage)->None:
"set the pixel buffer to `v`"
self._px=v
@property
def flow(self)->FlowField:
"access the flow-field grid after applying queued affine transforms"
if self._flow is None:
self._flow = affine_grid(self.shape)
if self._affine_mat is not None:
self._flow = affine_mult(self._flow,self._affine_mat)
self._affine_mat = None
return self._flow
@flow.setter
def flow(self,v:FlowField): self._flow=v
def lighting(self, func:LightingFunc, *args:Any, **kwargs:Any)->'Image':
"equivalent to `image = sigmoid(func(logit(image)))`"
self.logit_px = func(self.logit_px, *args, **kwargs)
return self
def pixel(self, func:PixelFunc, *args, **kwargs)->'Image':
"equivalent to `image.px = func(image.px)`"
self.px = func(self.px, *args, **kwargs)
return self
def coord(self, func:CoordFunc, *args, **kwargs)->'Image':
"equivalent to `image.flow = func(image.flow, image.size)`"
self.flow = func(self.flow, self.shape, *args, **kwargs)
return self
def affine(self, func:AffineFunc, *args, **kwargs)->'Image':
"equivalent to `image.affine_mat = image.affine_mat @ func()`"
m = tensor(func(*args, **kwargs)).to(self.device)
self.affine_mat = self.affine_mat @ m
return self
def resize(self, size:Union[int,CoordSize])->'Image':
"resize the image to `size`, size can be a single int"
assert self._flow is None
if isinstance(size, int): size=(self.shape[0], size, size)
self.flow = affine_grid(size)
return self
@property
def affine_mat(self)->AffineMatrix:
"get the affine matrix that will be applied by `refresh`"
if self._affine_mat is None:
self._affine_mat = torch.eye(3).to(self.device)
return self._affine_mat
@affine_mat.setter
def affine_mat(self,v)->None: self._affine_mat=v
@property
def logit_px(self)->LogitTensorImage:
"get logit(image.px)"
if self._logit_px is None: self._logit_px = logit_(self.px)
return self._logit_px
@logit_px.setter
def logit_px(self,v:LogitTensorImage)->None: self._logit_px=v
def show(self, ax:plt.Axes=None, **kwargs:Any)->None:
"plots the image into `ax`"
show_image(self.px, ax=ax, **kwargs)
@property
def data(self)->TensorImage:
"returns this images pixels as a tensor"
return self.px
train_ds = ImageDataset.from_folder(PATH/'train')
valid_ds = ImageDataset.from_folder(PATH/'test')
x = lambda: train_ds[1][0]
img = x()
img.logit_px = contrast(img.logit_px, 0.5)
img.show()
x().lighting(contrast, 0.5).show()
```
## Transform class
```
class Transform():
_wrap=None
def __init__(self, func): self.func=func
def __call__(self, x, *args, **kwargs):
if self._wrap: return getattr(x, self._wrap)(self.func, *args, **kwargs)
else: return self.func(x, *args, **kwargs)
class TfmLighting(Transform): _wrap='lighting'
@TfmLighting
def brightness(x, change): return x.add_(scipy.special.logit(change))
@TfmLighting
def contrast(x, scale): return x.mul_(scale)
_,axes = plt.subplots(1,4, figsize=(12,3))
x().show(axes[0])
contrast(x(), 1.0).show(axes[1])
contrast(x(), 0.5).show(axes[2])
contrast(x(), 2.0).show(axes[3])
_,axes = plt.subplots(1,4, figsize=(12,3))
x().show(axes[0])
brightness(x(), 0.8).show(axes[1])
brightness(x(), 0.5).show(axes[2])
brightness(x(), 0.2).show(axes[3])
def brightness_contrast(x, scale_contrast, change_brightness):
return brightness(contrast(x, scale=scale_contrast), change=change_brightness)
_,axes = plt.subplots(1,4, figsize=(12,3))
brightness_contrast(x(), 0.75, 0.7).show(axes[0])
brightness_contrast(x(), 2.0, 0.3).show(axes[1])
brightness_contrast(x(), 2.0, 0.7).show(axes[2])
brightness_contrast(x(), 0.75, 0.3).show(axes[3])
```
## Random lighting
Next, we will make our previous transforms random, since we are interested in automating the pipeline. We will achieve this by making the parameters stochastic, each drawn from a suitable distribution.
We will use a <a href="https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)">uniform</a> distribution for the brightness change, since its domain is the real numbers and its impact varies linearly with the value. For the contrast scale we use [log_uniform](https://www.vosesoftware.com/riskwiki/LogUniformdistribution.php) for two reasons. First, the contrast scale has a domain of (0, inf). Second, the impact of the scale on the transformation is non-linear (i.e. 0.5 is as extreme as 2.0, and 0.2 is as extreme as 5). The log_uniform distribution has the right domain and represents this non-linearity correctly, since P(0.5) = P(2).
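A minimal sketch (plain NumPy, separate from the helpers defined below) of why sampling the scale log-uniformly treats reciprocal values symmetrically:
```
import numpy as np

rng = np.random.default_rng(0)
# log-uniform on [0.5, 2.0]: sample uniformly in log-space, then exponentiate
samples = np.exp(rng.uniform(np.log(0.5), np.log(2.0), size=100_000))

# the two "equally extreme" halves are equally likely ...
print((samples < 1.0).mean(), (samples > 1.0).mean())   # both ~0.5
# ... and the geometric mean sits at 1, i.e. no net contrast change on average
print(np.exp(np.log(samples).mean()))                   # ~1.0
```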
```
#export
def uniform(low:Number, high:Number, size:List[int]=None)->float:
"draw 1 or shape=`size` random floats from uniform dist: min=`low`, max=`high`"
return random.uniform(low,high) if size is None else torch.FloatTensor(*listify(size)).uniform_(low,high)
def log_uniform(low, high, size=None):
"draw 1 or shape=`size` random floats from uniform dist: min=log(`low`), max=log(`high`)"
res = uniform(log(low), log(high), size)
return exp(res) if size is None else res.exp_()
def rand_bool(p:float, size=None):
"draw 1 or shape=`size` random booleans (True occuring probability p)"
return uniform(0,1,size)<p
scipy.stats.gmean([log_uniform(0.5,2.0) for _ in range(1000)])
#export
import inspect
from copy import copy,deepcopy
def get_default_args(func):
return {k: v.default
for k, v in inspect.signature(func).parameters.items()
if v.default is not inspect.Parameter.empty}
def listify(p=None, q=None):
"Makes `p` same length as `q`"
if p is None: p=[]
elif not isinstance(p, Iterable): p=[p]
n = q if type(q)==int else len(p) if q is None else len(q)
if len(p)==1: p = p * n
assert len(p)==n, f'List len mismatch ({len(p)} vs {n})'
return list(p)
#export
class Transform():
_wrap=None
order=0
def __init__(self, func, order=None):
if order is not None: self.order=order
self.func=func
self.params = copy(func.__annotations__)
self.def_args = get_default_args(func)
setattr(Image, func.__name__,
lambda x, *args, **kwargs: self.calc(x, *args, **kwargs))
def __call__(self, *args, p=1., is_random=True, **kwargs):
if args: return self.calc(*args, **kwargs)
else: return RandTransform(self, kwargs=kwargs, is_random=is_random, p=p)
def calc(tfm, x, *args, **kwargs):
if tfm._wrap: return getattr(x, tfm._wrap)(tfm.func, *args, **kwargs)
else: return tfm.func(x, *args, **kwargs)
@property
def name(self): return self.__class__.__name__
def __repr__(self): return f'{self.name} ({self.func.__name__})'
class TfmLighting(Transform): order,_wrap = 8,'lighting'
#export
@dataclass
class RandTransform():
tfm:Transform
kwargs:dict
p:int=1.0
resolved:dict = field(default_factory=dict)
do_run:bool = True
is_random:bool = True
def resolve(self):
if not self.is_random:
self.resolved = {**self.tfm.def_args, **self.kwargs}
return
self.resolved = {}
# for each param passed to tfm...
for k,v in self.kwargs.items():
# ...if it's annotated, call that fn...
if k in self.tfm.params:
rand_func = self.tfm.params[k]
self.resolved[k] = rand_func(*listify(v))
# ...otherwise use the value directly
else: self.resolved[k] = v
# use defaults for any args not filled in yet
for k,v in self.tfm.def_args.items():
if k not in self.resolved: self.resolved[k]=v
# anything left over must be callable without params
for k,v in self.tfm.params.items():
if k not in self.resolved: self.resolved[k]=v()
self.do_run = rand_bool(self.p)
@property
def order(self): return self.tfm.order
def __call__(self, x, *args, **kwargs):
return self.tfm(x, *args, **{**self.resolved, **kwargs}) if self.do_run else x
#export
@TfmLighting
def brightness(x, change:uniform): return x.add_(scipy.special.logit(change))
@TfmLighting
def contrast(x, scale:log_uniform): return x.mul_(scale)
x().contrast(scale=2).show()
x().contrast(scale=2).brightness(0.8).show()
tfm = contrast(scale=(0.3,3))
tfm.resolve()
tfm,tfm.resolved,tfm.do_run
# all the same
tfm.resolve()
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: tfm(x()).show(ax)
tfm = contrast(scale=(0.3,3))
# different
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes:
tfm.resolve()
tfm(x()).show(ax)
tfm = contrast(scale=2, is_random=False)
tfm.resolve()
tfm(x()).show()
```
## Composition
We want to compose the transform functions so that a whole list of them can be fed to the pipeline and applied in one go.
The easiest way in Python to wrap extra behaviour around our transform functions before they are called is a decorator. You can find more about decorators [here](https://www.thecodeship.com/patterns/guide-to-python-function-decorators/).
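As a minimal, self-contained sketch of the class-based decorator pattern used here (the toy `Shout` class below is purely illustrative; the real versions are `TfmLighting` and friends):
```
class Shout:
    "toy class-based decorator: `@Shout` replaces the function with a callable instance"
    def __init__(self, func): self.func = func
    def __call__(self, *args, **kwargs):
        print(f'calling {self.func.__name__}')
        return self.func(*args, **kwargs)

@Shout
def add(a, b): return a + b

add(2, 3)   # prints "calling add", then returns 5
```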
```
#export
def resolve_tfms(tfms):
for f in listify(tfms): f.resolve()
def apply_tfms(tfms, x, do_resolve=True):
if not tfms: return x
tfms = listify(tfms)
if do_resolve: resolve_tfms(tfms)
x = x.clone()
for tfm in tfms: x = tfm(x)
return x
x = train_ds[1][0]
tfms = [contrast(scale=(0.3,3.0), p=0.9),
brightness(change=(0.35,0.65), p=0.9)]
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: apply_tfms(tfms,x).show(ax)
_,axes = plt.subplots(2,4, figsize=(12,6))
for i in range(4):
apply_tfms(tfms,x).show(axes[0,i])
apply_tfms(tfms,x,do_resolve=False).show(axes[1,i])
apply_tfms([],x).show()
```
## DatasetTfm
```
#export
class DatasetTfm(Dataset):
def __init__(self, ds:Dataset, tfms:Collection[Callable]=None, **kwargs):
self.ds,self.tfms,self.kwargs = ds,tfms,kwargs
def __len__(self): return len(self.ds)
def __getitem__(self,idx):
x,y = self.ds[idx]
return apply_tfms(self.tfms, x, **self.kwargs), y
def __getattr__(self,k): return getattr(self.ds, k)
import nb_001b
nb_001b.DatasetTfm = DatasetTfm
bs=64
#export
def to_data(b):
if is_listy(b): return [to_data(o) for o in b]
return b.data if isinstance(b,ItemBase) else b
def data_collate(batch):
return torch.utils.data.dataloader.default_collate(to_data(batch))
@dataclass
class DeviceDataLoader():
dl: DataLoader
device: torch.device
def __post_init__(self): self.dl.collate_fn=data_collate
def __len__(self): return len(self.dl)
def __getattr__(self,k): return getattr(self.dl, k)
def proc_batch(self,b): return to_device(b, self.device)
def __iter__(self):
self.gen = map(self.proc_batch, self.dl)
return iter(self.gen)
@classmethod
def create(cls, *args, device=default_device, **kwargs):
return cls(DataLoader(*args, **kwargs), device=device)
nb_001b.DeviceDataLoader = DeviceDataLoader
data = DataBunch.create(train_ds, valid_ds, bs=bs, num_workers=4)
len(data.train_dl), len(data.valid_dl), data.train_dl.dataset.c
#export
def show_image_batch(dl, classes, rows=None, figsize=(12,15)):
x,y = next(iter(dl))
if rows is None: rows = int(math.sqrt(len(x)))
show_images(x[:rows*rows],y[:rows*rows],rows, classes)
def show_images(x,y,rows, classes, figsize=(9,9)):
fig, axs = plt.subplots(rows,rows,figsize=figsize)
for i, ax in enumerate(axs.flatten()):
show_image(x[i], ax)
ax.set_title(classes[y[i]])
plt.tight_layout()
show_image_batch(data.train_dl, train_ds.classes, 6)
data = DataBunch.create(train_ds, valid_ds, bs=bs, train_tfm=tfms)
show_image_batch(data.train_dl, train_ds.classes, 6)
```
# Affine
We will now add affine transforms that operate on the coordinates instead of pixels like the lighting transforms we just saw. An [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation) is a function "(...) between affine spaces which preserves points, straight lines and planes."
## Details
Our implementation first creates a grid of coordinates for the original image. The grid is normalized to a [-1, 1] range with (-1, -1) representing the top left corner, (1, 1) the bottom right corner and (0, 0) the center. Next, we build an affine matrix representing our desired transform and multiply it with the original grid coordinates. The result is a set of x, y coordinates that tells us, for each pixel of the output image, where in the input image it should be sampled from. It has a size of h \* w \* 2, since it holds two coordinates for each of the h * w pixels of the output image.
This is clearest if we see it graphically. We will build an affine matrix of the following form:
`[[a, b, e],
[c, d, f]]`
with which we will transform each pair of x, y coordinates in our original grid into our transformation grid:
`[[a, b], [[x], [[e], [[x'],
[c, d]] x [y]] + [f]] = [y']]`
So after the transform we get a new grid with which to map our input image into our output image. This is our **map of where exactly the transformation sources each pixel of the output image from**.
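As a quick worked example (plain NumPy, point chosen arbitrarily), pushing the normalized grid coordinate (1, 1) through a 30-degree rotation matrix of the form above gives:
```
import numpy as np

theta = np.deg2rad(30)
A = np.array([[np.cos(theta), -np.sin(theta)],   # [[a, b],
              [np.sin(theta),  np.cos(theta)]])  #  [c, d]]
t = np.array([0., 0.])                           # [e, f]: no translation

xy = np.array([1., 1.])          # bottom-right corner of the normalized grid
print(A @ xy + t)                # ~[0.366, 1.366]: where this output pixel samples from
```
Note that the resulting y' lands outside [-1, 1]; that is exactly the "missing pixel problem" discussed next.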
**Enter problems**
Affine transforms face two problems that must be solved independently:
1. **The interpolation problem**: The result of our transformation gives us float coordinates, and we need to decide, for each (i,j), how to assign these coordinates to pixels in the input image.
2. **The missing pixel problem**: The result of our transformation may have coordinates which exceed the [-1, 1] range of our original grid and thus fall outside of our original grid.
**Solutions to problems**
1. **The interpolation problem**: We will perform a [bilinear interpolation](https://en.wikipedia.org/wiki/Bilinear_interpolation). This takes a weighted average of the values of the four grid pixels surrounding the transformed coordinate, with weights depending on how close the coordinate is to each of those points (see the short sketch after this list).
2. **The missing pixel problem**: For these values we need padding, and we face a few options:
1. Adding zeros on the side (so the pixels that fall out will be black)
2. Replacing them by the value at the border
3. Mirroring the content of the picture on the other side (reflect padding).
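A minimal sketch of the bilinear interpolation mentioned in point 1 (toy 2x2 image, no border handling):
```
import numpy as np

def bilinear(img, y, x):
    "toy bilinear sample of a 2-D array `img` at float coordinates (y, x)"
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0
    return (img[y0, x0] * (1-wy) * (1-wx) + img[y0, x1] * (1-wy) * wx +
            img[y1, x0] * wy * (1-wx)     + img[y1, x1] * wy * wx)

img = np.array([[0., 10.], [20., 30.]])
print(bilinear(img, 0.5, 0.5))   # 15.0: equal-weight average of the four neighbours
```
In practice we simply let `F.grid_sample` (wrapped in the `grid_sample` helper further below) do this for us.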
### Transformation Method
**Zoom**
Zoom changes the focus of the image according to a scale. If a scale of >1 is applied, grid pixels are mapped to coordinates that are more central than the pixel's own coordinates (closer to (0, 0)), while if a scale of <1 is applied, grid pixels are mapped to more peripheral coordinates (closer to the borders) of the input image.
We can also translate our transform to zoom into a non-central area of the image. For this we use $col_c$, which displaces the x axis, and $row_c$, which displaces the y axis.
_Parameters_
1. **Scale** How much we want to zoom in or out of our image.
Domain: Real numbers
2. **Col_pct** How much do we want to displace our zoom along the x axis.
Domain: Real numbers between 0 and 1
3. **Row_pct** How much do we want to displace our zoom along the y axis.
Domain: Real numbers between 0 and 1
<u>Affine matrix</u>
`[[1/scale, 0, col_c],
[0, 1/scale, row_c]]`
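A quick numeric check of this matrix (parameter values chosen purely for illustration), using the centring formula that the `zoom` transform defined later in this notebook uses (`s = 1 - 1/scale`, `col_c = s * (2*col_pct - 1)`):
```
# zoom with scale=2, col_pct=0.25, row_pct=0.5 (illustrative values)
scale, col_pct, row_pct = 2., 0.25, 0.5
s = 1 - 1/scale                      # 0.5
col_c = s * (2*col_pct - 1)          # -0.25
row_c = s * (2*row_pct - 1)          #  0.0
m = [[1/scale, 0, col_c],
     [0, 1/scale, row_c]]            # [[0.5, 0, -0.25], [0, 0.5, 0]]
# an output pixel at x=1 (right edge) now samples from x' = 0.5*1 - 0.25 = 0.25,
# so the visible window covers x in [-0.75, 0.25]: we zoom in left of centre
```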
**Rotate**
Rotate turns the image around its center by a given angle theta. The rotation is counterclockwise if theta is positive and clockwise if theta is negative. If you are curious about the derivation of the rotation matrix you can find it [here](https://matthew-brett.github.io/teaching/rotation_2d.html).
_Parameters_
1. **Degrees** By which angle do we want to rotate our image.
Domain: Real numbers
<u>Affine matrix</u>
`[[cos(theta), -sin(theta), 0],
[sin(theta), cos(theta), 0]]`
## Deterministic affine
```
#export
def grid_sample_nearest(input, coords, padding_mode='zeros'):
if padding_mode=='border': coords.clamp(-1,1)
bs,ch,h,w = input.size()
sz = tensor([w,h]).float()[None,None]
coords.add_(1).mul_(sz/2)
coords = coords[0].round_().long()
if padding_mode=='zeros':
mask = (coords[...,0] < 0) + (coords[...,1] < 0) + (coords[...,0] >= w) + (coords[...,1] >= h)
mask.clamp_(0,1)
coords[...,0].clamp_(0,w-1)
coords[...,1].clamp_(0,h-1)
result = input[...,coords[...,1],coords[...,0]]
if padding_mode=='zeros': result[...,mask] = result[...,mask].zero_()
return result
#export
def grid_sample(x, coords, mode='bilinear', padding_mode='reflect'):
if padding_mode=='reflect': padding_mode='reflection'
if mode=='nearest': return grid_sample_nearest(x[None], coords, padding_mode)[0]
return F.grid_sample(x[None], coords, mode=mode, padding_mode=padding_mode)[0]
def affine_grid(size):
size = ((1,)+size)
N, C, H, W = size
grid = FloatTensor(N, H, W, 2)
linear_points = torch.linspace(-1, 1, W) if W > 1 else tensor([-1])
grid[:, :, :, 0] = torch.ger(torch.ones(H), linear_points).expand_as(grid[:, :, :, 0])
linear_points = torch.linspace(-1, 1, H) if H > 1 else tensor([-1])
grid[:, :, :, 1] = torch.ger(linear_points, torch.ones(W)).expand_as(grid[:, :, :, 1])
return grid
def affine_mult(c,m):
if m is None: return c
size = c.size()
c = c.view(-1,2)
c = torch.addmm(m[:2,2], c, m[:2,:2].t())
return c.view(size)
def rotate(degrees):
angle = degrees * math.pi / 180
return [[cos(angle), -sin(angle), 0.],
[sin(angle), cos(angle), 0.],
[0. , 0. , 1.]]
def xi(): return train_ds[1][0]
x = xi().data
c = affine_grid(x.shape)
m = rotate(30)
m = x.new_tensor(m)
m
c[0,...,0]
c[0,...,1]
m
c = affine_mult(c,m)
c[0,...,0]
c[0,...,1]
img2 = grid_sample(x, c, padding_mode='zeros')
show_image(img2);
xi().affine(rotate, 30).show()
```
## Affine transform
```
#export
class TfmAffine(Transform): order,_wrap = 5,'affine'
class TfmPixel(Transform): order,_wrap = 10,'pixel'
@TfmAffine
def rotate(degrees:uniform):
angle = degrees * math.pi / 180
return [[cos(angle), -sin(angle), 0.],
[sin(angle), cos(angle), 0.],
[0. , 0. , 1.]]
def get_zoom_mat(sw, sh, c, r):
return [[sw, 0, c],
[0, sh, r],
[0, 0, 1.]]
@TfmAffine
def zoom(scale:uniform=1.0, row_pct:uniform=0.5, col_pct:uniform=0.5):
s = 1-1/scale
col_c = s * (2*col_pct - 1)
row_c = s * (2*row_pct - 1)
return get_zoom_mat(1/scale, 1/scale, col_c, row_c)
@TfmAffine
def squish(scale:uniform=1.0, row_pct:uniform=0.5, col_pct:uniform=0.5):
if scale <= 1:
col_c = (1-scale) * (2*col_pct - 1)
return get_zoom_mat(scale, 1, col_c, 0.)
else:
row_c = (1-1/scale) * (2*row_pct - 1)
return get_zoom_mat(1, 1/scale, 0., row_c)
rotate(xi(), 30).show()
zoom(xi(), 0.6).show()
zoom(xi(), 0.6).set_sample(padding_mode='zeros').show()
zoom(xi(), 2, 0.2, 0.2).show()
scales = [0.75,0.9,1.1,1.33]
_,axes = plt.subplots(1,4, figsize=(12,3))
for i, ax in enumerate(axes): squish(xi(), scales[i]).show(ax)
_,axes=plt.subplots(1,3,figsize=(9,3))
xi().show(axes[0])
img2 = rotate(xi(), 30).refresh()
img2 = zoom(img2, 1.6)
img2.show(axes[1])
zoom(rotate(xi(), 30), 1.6).show(axes[2])
xi().resize(48).show()
img2 = zoom(xi().resize(48), 1.6, 0.8, 0.2)
rotate(img2, 30).show()
img2 = zoom(xi().resize(24), 1.6, 0.8, 0.2)
rotate(img2, 30).show(hide_axis=False)
img2 = zoom(xi().resize(48), 1.6, 0.8, 0.2)
rotate(img2, 30).set_sample(mode='nearest').show()
```
## Random affine
As we did with the lighting transforms, we now want to build randomness into our pipeline so we can increase the automation of the transform process.
We will use a uniform distribution for both our transforms since their impact is linear and their domain is the real numbers.
**Apply all transforms**
We will make all transforms do as few calculations as possible.
We perform only one affine transformation by multiplying together all the affine matrices of the transforms; then we apply to the coords any non-affine transformation we might want (jitter, elastic distortion). Next, we crop down to the coordinates we want to keep and, by doing this before the interpolation, we avoid computing pixel values that won't be used afterwards. Finally we perform the interpolation and apply all the transforms that operate pixelwise (brightness, contrast).
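A minimal sketch (plain NumPy, illustrative matrices only) of why composing all affine matrices first and sampling once is equivalent to applying the transforms one after the other:
```
import numpy as np

def rot(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.],
                     [np.sin(a),  np.cos(a), 0.],
                     [0.,         0.,        1.]])

def zoom_mat(scale):
    return np.array([[1/scale, 0., 0.],
                     [0., 1/scale, 0.],
                     [0., 0.,      1.]])

pt = np.array([0.3, -0.7, 1.])                 # one grid point in homogeneous coords
one_shot  = (rot(30) @ zoom_mat(1.6)) @ pt     # compose the matrices, then apply once
two_steps = rot(30) @ (zoom_mat(1.6) @ pt)     # apply the transforms one at a time
print(np.allclose(one_shot, two_steps))        # True: matrix multiplication is associative
```
This is why a single `grid_sample` call suffices no matter how many affine transforms are queued.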
```
tfm = rotate(degrees=(-45,45.), p=0.75); tfm
tfm.resolve(); tfm
x = xi()
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: apply_tfms(tfm, x).show(ax)
tfms = [rotate(degrees=(-45,45.), p=0.75),
zoom(scale=(0.5,2.0), p=0.75)]
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: apply_tfms(tfms,x).show(ax)
#export
def apply_tfms(tfms, x, do_resolve=True, xtra=None, size=None, **kwargs):
if not (tfms or size): return x
if not xtra: xtra={}
tfms = sorted(listify(tfms), key=lambda o: o.tfm.order)
if do_resolve: resolve_tfms(tfms)
x = x.clone()
if kwargs: x.set_sample(**kwargs)
if size: x.resize(size)
for tfm in tfms:
if tfm.tfm in xtra: x = tfm(x, **xtra[tfm.tfm])
else: x = tfm(x)
return x
tfms = [rotate(degrees=(-45,45.), p=0.75),
zoom(scale=(1.0,2.0), row_pct=(0,1.), col_pct=(0,1.))]
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: apply_tfms(tfms,x, padding_mode='zeros', size=64).show(ax)
tfms = [squish(scale=(0.5,2), row_pct=(0,1.), col_pct=(0,1.))]
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: apply_tfms(tfms,x).show(ax)
```
# Coord and pixel
## Jitter / flip
The last two transforms we will use are **jitter** and **flip**.
**Jitter**
Jitter is a transform which adds a small random offset to each pixel's sampling coordinate, so that the result looks slightly noisy compared to the original. In our implementation we first draw a random number in (-1, 1) and multiply it by a magnitude constant which scales it.
_Parameters_
1. **Magnitude** How much random noise do we want to add to each of the pixels in our image.
Domain: Real numbers between 0 and 1.
**Flip**
Flip is a transform that reflects the image on a given axis.
_Parameters_
1. **P** Probability of applying the transformation to an input.
Domain: Real numbers between 0 and 1.
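A tiny sketch (toy tensor) of what `flip_lr` below does at the pixel level; dimension 2 of a `(ch, h, w)` image is the width axis:
```
import torch

img = torch.arange(6.).view(1, 2, 3)   # one channel, 2 rows, 3 columns
print(img[0])
# tensor([[0., 1., 2.],
#         [3., 4., 5.]])
print(img.flip(2)[0])                  # mirror left <-> right
# tensor([[2., 1., 0.],
#         [5., 4., 3.]])
```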
```
#export
class TfmCoord(Transform): order,_wrap = 4,'coord'
@TfmCoord
def jitter(c, size, magnitude:uniform):
return c.add_((torch.rand_like(c)-0.5)*magnitude*2)
@TfmPixel
def flip_lr(x): return x.flip(2)
tfm = jitter(magnitude=(0,0.1))
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes:
tfm.resolve()
tfm(xi()).show(ax)
tfm = flip_lr(p=0.5)
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes:
tfm.resolve()
tfm(xi()).show(ax)
```
## Crop/pad
**Crop**
Crop is a transform that cuts a series of pixels from an image. It does this by removing rows and columns from the input image.
_Parameters_
1. **Size** The target size of each side in pixels. If only one number *s* is specified, the image is cropped to an *s* \* *s* square.
Domain: Positive integers.
2. **Row_pct** Determines where the crop window sits vertically (which rows are left out). If <0.5 the window sits closer to the top, so more rows are cut from the bottom than from the top, and vice versa (varies linearly).
Domain: Real numbers between 0 and 1.
3. **Col_pct** Determines where the crop window sits horizontally (which columns are left out). If <0.5 the window sits closer to the left, so more columns are cut from the right than from the left, and vice versa (varies linearly).
Domain: Real numbers between 0 and 1.
Our three parameters are related with the following equations:
1. output_rows = [**row_pct***(input_rows-**size**):**size**+**row_pct***(input_rows-**size**)]
2. output_cols = [**col_pct***(input_cols-**size**):**size**+**col_pct***(input_cols-**size**)]
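A quick numeric check of these formulas, with hypothetical sizes:
```
# crop a hypothetical 36x36 input to size=32 with row_pct=0.25
input_rows, size, row_pct = 36, 32, 0.25
start = int(row_pct * (input_rows - size))     # 0.25 * 4 = 1
print(start, start + size)                     # rows [1:33] are kept:
                                               # 1 row cut at the top, 3 at the bottom
```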
**Pad**
Pads each of the four borders of our image with a certain number of pixels. Can pad with reflection (mirrors pixels near the border to fill the new pixels) or zeros (adds black pixels).
_Parameters_
1. **Padding** Amount of pixels to add to each border. [More details](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.pad)
Domain: Positive integers.
2. **Mode** How to fill new pixels. For more detail see the Pytorch subfunctions for padding.
Domain:
- Reflect (default): reflects opposite pixels to fill new pixels. [More details](https://pytorch.org/docs/stable/nn.html#torch.nn.ReflectionPad2d)
- Constant: adds pixels with specified value (default is 0, black pixels) [More details](https://pytorch.org/docs/stable/nn.html#torch.nn.ConstantPad2d)
- Replicate: replicates border row or column pixels to fill new pixels [More details](https://pytorch.org/docs/stable/nn.html#torch.nn.ReplicationPad2d)
***On using padding and crop***
A nice way to use these two functions is to combine them into one transform. We can add padding to the image and then crop some of it out. This way, we can create a new image to augment our training set without losing image information by cropping. Furthermore, this can be done in several ways (modifying the amount and type of padding and the crop style) so it gives us great flexibility to add images to our training set. You can find an example of this in the code below.
```
[(o.__name__,o.order) for o in
sorted((Transform,TfmAffine,TfmCoord,TfmLighting,TfmPixel),key=attrgetter('order'))]
#export
@partial(TfmPixel, order=-10)
def pad(x, padding, mode='reflect'):
return F.pad(x[None], (padding,)*4, mode=mode)[0]
@TfmPixel
def crop(x, size, row_pct:uniform=0.5, col_pct:uniform=0.5):
size = listify(size,2)
rows,cols = size
row = int((x.size(1)-rows+1) * row_pct)
col = int((x.size(2)-cols+1) * col_pct)
return x[:, row:row+rows, col:col+cols].contiguous()
pad(xi(), 4, 'constant').show()
crop(pad(xi(), 4, 'constant'), 32, 0.25, 0.75).show(hide_axis=False)
crop(pad(xi(), 4), 32, 0.25, 0.75).show()
```
## Combine
```
tfms = [flip_lr(p=0.5),
pad(padding=4, mode='constant'),
crop(size=32, row_pct=(0,1.), col_pct=(0,1.))]
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: apply_tfms(tfms, x).show(ax)
tfms = [
flip_lr(p=0.5),
contrast(scale=(0.5,2.0)),
brightness(change=(0.3,0.7)),
rotate(degrees=(-45,45.), p=0.5),
zoom(scale=(0.5,1.2), p=0.8)
]
_,axes = plt.subplots(1,4, figsize=(12,3))
for ax in axes: apply_tfms(tfms, x).show(ax)
_,axes = plt.subplots(2,4, figsize=(12,6))
for i in range(4):
apply_tfms(tfms, x, padding_mode='zeros', size=48).show(axes[0][i], hide_axis=False)
apply_tfms(tfms, x, mode='nearest', do_resolve=False).show(axes[1][i], hide_axis=False)
```
## RandomResizedCrop (Torchvision version)
```
#export
def compute_zs_mat(sz, scale, squish, invert, row_pct, col_pct):
orig_ratio = math.sqrt(sz[2]/sz[1])
for s,r,i in zip(scale,squish, invert):
s,r = math.sqrt(s),math.sqrt(r)
if s * r <= 1 and s / r <= 1: #Test if we are completely inside the picture
w,h = (s/r, s*r) if i else (s*r,s/r)
w /= orig_ratio
h *= orig_ratio
col_c = (1-w) * (2*col_pct - 1)
row_c = (1-h) * (2*row_pct - 1)
return get_zoom_mat(w, h, col_c, row_c)
#Fallback, hack to emulate a center crop without cropping anything yet.
if orig_ratio > 1: return get_zoom_mat(1/orig_ratio**2, 1, 0, 0.)
else: return get_zoom_mat(1, orig_ratio**2, 0, 0.)
@TfmCoord
def zoom_squish(c, size, scale:uniform=1.0, squish:uniform=1.0, invert:rand_bool=False,
row_pct:uniform=0.5, col_pct:uniform=0.5):
#This is intended for scale, squish and invert to be of size 10 (or whatever) so that the transform
#can try a few zoom/squishes before falling back to center crop (like torchvision.RandomResizedCrop)
m = compute_zs_mat(size, scale, squish, invert, row_pct, col_pct)
return affine_mult(c, FloatTensor(m))
rrc = zoom_squish(scale=(0.25,1.0,10), squish=(0.5,1.0,10), invert=(0.5,10),
row_pct=(0,1.), col_pct=(0,1.))
_,axes = plt.subplots(2,4, figsize=(12,6))
for i in range(4):
apply_tfms(rrc, x, size=48).show(axes[0][i])
apply_tfms(rrc, x, do_resolve=False, mode='nearest').show(axes[1][i])
```
|
github_jupyter
|
### As before, let's find the set of compounds for which both simulations and experimental measurements exist
Matt Robinson posted a `moonshot_initial_activity_data.csv` file of the initial activity data:
```
import numpy as np
import pandas as pd
df_activity = pd.read_csv('../data-release-2020-05-10/moonshot_initial_activity_data.csv')
# Find all that have IC50 data
IC50_measured = pd.notnull(df_activity["IC50 (µM)"])
df_activity[IC50_measured]
# Translate the new IDs back to the old IDs so we can find them in our results
## make a translation table
all_df = pd.read_csv("https://covid.postera.ai/covid/submissions.csv")
new_CID_list = list(all_df.CID)
old_CID_list = list(all_df.old_CID)
new2old_CID = {}
old2new_CID = {}
for i in range(len(new_CID_list)):
new2old_CID[new_CID_list[i]] = old_CID_list[i]
old2new_CID[old_CID_list[i]] = new_CID_list[i]
for s in df_activity[IC50_measured].CID:
print(s, '-->', new2old_CID[s])
## Are THESE in the latest results pkl???
# df_results = pd.read_pickle('master_results_WL0.12_051820.pkl') # these have covalent warheads in them
df_results = pd.read_pickle('master_results_WL0.12_051920.pkl')
for s in df_activity[IC50_measured].CID:
df_hits = df_results[df_results.identity.str.contains(new2old_CID[s])]
if len(df_hits) > 0:
print(s, '<--', new2old_CID[s])
print(df_hits)
print('\n##########\n\n')
# Let's look at our current ranking:
df_results
top10_indices = df_results.index[0:10]
for i in range(len(top10_indices)):
index = top10_indices[i]
oldID = df_results.loc[index].identity
if oldID.count('ÁLV') > 0:
oldID = oldID.replace('ÁLV','ALV')
try:
newID = old2new_CID[oldID]
except KeyError:  # this old ID has no entry in the translation table
newID = ''
print('rank:', i+1, 'oldID:', oldID, 'newID:', newID, df_results.loc[index].dataset, df_results.loc[index].fah)
```
## Top 10 profiles
### \# 1 NIM-UNI-36e-3 NIM-UNI-36e12f95-3
https://covid.postera.ai/covid/submissions/36e12f95-0811-4857-8bc6-a4aee0788f1c/3
<img src="https://covid.postera.ai/synthesize/CC(=O)c1ccc(Br)c2%5BnH%5Dc(=O)n(-c3cccnc3)c12">
<img src="http://yabmtm.hopto.org:31415/MS0323/plots/MS0323_v3_1-500_p14822_127_19May2020.png">
### \# 2 JON-UIO-066-14 JON-UIO-066ce08b-14 MS0326_v3 PROJ14824/RUN2448
https://covid.postera.ai/covid/submissions/066ce08b-1104-439d-946f-d7c319de995c/14
<img src="https://covid.postera.ai/synthesize/C%5BC@H%5D(NC(=O)C(F)F)c1cccc(F)c1">
<img src="http://yabmtm.hopto.org:31415/MS0326/plots/MS0326_v3_3000-5538_p14824_2448_19May2020.png">
### \# 3 CHR-SOS-709-10 CHR-SOS-7098f804-10
https://covid.postera.ai/covid/submissions/7098f804-b66c-4fb6-89f4-8e4e0c78a7cb/10
<img src="https://covid.postera.ai/synthesize/O=C(Nc1cnccc1Cl)c1cc(Cl)ccc1O">
<img src="http://yabmtm.hopto.org:31415/MS0406-2/plots/MS0406-2_v3_0-2999_p14827_360_19May2020.png">
### \# 4 LIZ-THE-f11-1 newID: LIZ-THE-f118233e-1 MS0326_v2 PROJ14723/RUN404
https://covid.postera.ai/covid/submissions/7023c732-4bbd-4499-a930-9b1b18b131ec/1
<img src="https://covid.postera.ai/synthesize/CNc1ncc(C%23N)cc1Oc1ccccc1">
### \# 5 ALV-UNI-7ff-36 newID: MS0326_v2 PROJ14723/RUN2963
https://covid.postera.ai/covid/submissions/7ff1a6f9-745f-4b82-81e0-c1d353ea5dfe/36
<img src="https://covid.postera.ai/synthesize/Cc1cc(-c2c(-c3ccc(F)cc3)nn3nc(C)ccc23)%5BnH%5Dn1">
<img src="http://yabmtm.hopto.org:31415/MS0326/plots/MS0326_v2_1-3000_p14723_2963_19May2020.png">
### \# 6 TRY-UNI-714-16 newID: TRY-UNI-714a760b-16 MS0326_v3 PROJ14824/RUN189
https://covid.postera.ai/covid/submissions/714a760b-0e02-4b09-8736-f27f854f8c22/16
<img src="https://covid.postera.ai/synthesize/Cc1ccncc1NC(=O)C(C)C1CCCCC1">
<img src="http://yabmtm.hopto.org:31415/MS0326/plots/MS0326_v3_3000-5538_p14824_189_19May2020.png">
### \#7 ALV-UNI-7ff-43 newID: MS0326_v3 PROJ14824/RUN19
https://covid.postera.ai/covid/submissions/7ff1a6f9-745f-4b82-81e0-c1d353ea5dfe/43
<img src="https://covid.postera.ai/synthesize/Cc1cn2c(-c3cccnc3)c(-c3ccc(F)cc3)nc2s1">
<img src="http://yabmtm.hopto.org:31415/MS0326/plots/MS0326_v3_300">
### \#8 BEN-VAN-d8f-12 BEN-VAN-d8fd1356-12 MS0326_v3 PROJ14823/RUN713
https://covid.postera.ai/covid/submissions/d8fd1356-48a3-47db-b12f-ee2f1a630081/12
<img src="https://covid.postera.ai/synthesize/CNc1c%5BnH%5Dc2c(Oc3cc(C)c(Br)cn3)c(Cl)c(F)cc12">
<img src="http://yabmtm.hopto.org:31415/MS0326/plots/MS0326_v3_1-3000_p14823_713_19May2020.png">
### \#9 ALE-HEI-f28-17 ALE-HEI-f28a35b5-17 MS0326_v3 PROJ14823/RUN403
https://covid.postera.ai/covid/submissions/f28a35b5-9f3e-4135-a6b4-7ce39ba4980a/17
<img src="https://covid.postera.ai/synthesize/Cc1ccncc1NC(=O)N1CCN(C)CC1">
<img src="http://yabmtm.hopto.org:31415/MS0326/plots/MS0326_v3_1-3000_p14823_403_19May2020.png">
### \#10 CHR-SOS-709-6 CHR-SOS-7098f804-6 MS0323_v3 PROJ14822/RUN454
https://covid.postera.ai/covid/submissions/7098f804-b66c-4fb6-89f4-8e4e0c78a7cb/6
<img src="https://covid.postera.ai/synthesize/O=C(Nc1ccc(%5BN+%5D(=O)%5BO-%5D)cc1)c1ccccc1">
<img src="http://yabmtm.hopto.org:31415/MS0323/plots/MS0323_v3_1-500_p14822_454_19May2020.png">
|
github_jupyter
|
## Model one policy variables
This notebook extracts the selected policy variables in the `indicator_list` from IMF and World Bank (wb) data sources, and writes them to a csv file.
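Both loaders defined below reshape the source tables from wide format (one column per period) to a long `country / indicator / date / value` format with `pd.melt`; here is a minimal toy sketch of that step (the sample country and values are made up):
```
import pandas as pd

wide = pd.DataFrame({'Country Name': ['Chile'],
                     'Indicator Name': ['GDP growth (annual %)'],
                     '2016M1': [1.2], '2016M2': [1.4]})
long = pd.melt(wide, id_vars=['Country Name', 'Indicator Name'],
               var_name='date', value_name='value')
long['date'] = pd.to_datetime(long['date'], format='%YM%m')
long.columns = ['country', 'indicator', 'date', 'value']
print(long)   # two rows: (Chile, GDP growth..., 2016-01-01, 1.2) and (..., 2016-02-01, 1.4)
```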
```
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
warnings.filterwarnings('ignore')
pd.options.display.float_format = '{:20,.2f}'.format
```
| variable | origin | source |granularity|countries| description | composition |
| --------------------------|-------------------|-------------|-----------|---------|-------------------------------------------------------------|-------------------------------------------------------------------|
| total debt service | - | wb econ | yearly | 217 | Total debt service (% of GNI) | - |
| interest payments | - | wb econ | yearly | 217 | Interest payments on external debt (% of GNI) | - |
| lending interest rate | - | wb econ | yearly | 217 | Lending interest rate (%) | - |
| firms using banks | - | wb econ | yearly | 217 | Firms using banks to finance investment (% of firms) | - |
| bank capital ratio | - | wb econ | yearly | 217 | Bank capital to assets ratio (%) | - |
| tax revenue gdp share | - | wb econ | yearly | 217 | Tax revenue (% of GDP) | - |
| short term debt | - | wb econ | yearly | 217 | Short-term debt (% of total external debt) | - |
| inflation | - | wb econ | yearly | 217 | Inflation, GDP deflator (annual %) | - |
| GDP growth | - | wb econ | yearly | 217 | GDP growth (annual %) | - |
| real interest rate | - | wb econ | yearly | 217 | Real interest rate (%) | - |
| firm market cap | - | wb econ | yearly | 217 | Market capitalization of listed domestic companies (% of GDP) | - |
| GDP per capita growth | - | wb econ | yearly | 217 | GDP per capita growth (annual %) | - |
| GDP | - | wb econ | yearly | 217 | GDP (constant 2010 USD) | - |
| GNI growth | - | wb econ | yearly | 217 | GNI growth (annual %) | - |
| interest payments | - | wb econ | yearly | 217 | Interest payments (% of expense) | - |
| nonperforming bank loans | - | wb econ | yearly | 217 | Bank nonperforming loans to total gross loans (%) | - |
| savings | - | wb econ | yearly | 217 | Gross domestic savings (% of GDP) | - |
| gross savings | - | wb econ | yearly | 217 | Gross savings (% of GNI) | - |
| GNI per capita growth | - | wb econ | yearly | 217 | GNI per capita growth (annual %) | - |
| employee compensation | - | wb econ | yearly | 217 | Compensation of employees (% of expense) | - |
| reserves | - | wb econ | yearly | 217 | Total reserves (% of total external debt) | - |
| broad money | - | wb econ | yearly | 217 | Broad money (% of GDP) | - |
| GNI | - | wb econ | yearly | 217 | GNI (constant 2010 USD) | - |
| government debt | - | wb econ | yearly | 217 | Central government debt, total (% of GDP) | - |
```
indicator_list = ['Total debt service (% of GNI)', 'Interest payments on external debt (% of GNI)',
'Lending interest rate (%)', 'Firms using banks to finance investment (% of firms)',
'Bank capital to assets ratio (%)', 'Tax revenue (% of GDP)', 'Short-term debt (% of total external debt)',
'Inflation, GDP deflator (annual %)', 'GDP growth (annual %)', 'Real interest rate (%)',
'Market capitalization of listed domestic companies (% of GDP)', 'GDP per capita growth (annual %)',
'GDP (constant 2010 US$)', 'GNI growth (annual %)', 'Interest payments (% of expense)',
'Bank nonperforming loans to total gross loans (%)', 'Gross domestic savings (% of GDP)',
'Gross savings (% of GNI)', 'GNI per capita growth (annual %)', 'Compensation of employees (% of expense)',
'Total reserves (% of total external debt)', 'Broad money (% of GDP)', 'GNI (constant 2010 US$)',
'Central government debt, total (% of GDP)']
len(indicator_list)
```
## Load imf monthly data
```
%%bash
wc -l imf/*.csv
time_values = [str('%sM%s' % (y, m)) for m in list(range(1, 13)) for y in list(range(1960, 2018))]
imf_columns = ['Country Name', 'Indicator Name'] + time_values
imf_country_aggregates = ['Euro Area']
def load_imf_monthly(file_name, indicators, imf_columns, country_aggregates):
csv_df = pd.read_csv('data/imf/%s' % file_name).fillna(0)
base_df = csv_df.loc[csv_df['Attribute'] == 'Value'].drop(columns=['Attribute'])
monthly_df = base_df.loc[(base_df['Indicator Name'].isin(indicators))]
imf_df = monthly_df[imf_columns].fillna(0)
df = pd.melt(imf_df, id_vars=['Country Name', 'Indicator Name'], var_name='date', value_name='value')
df['date'] = pd.to_datetime(df['date'], format='%YM%m')
df.columns = ['country', 'indicator', 'date', 'value']
return df.loc[~df['country'].isin(country_aggregates)]
imf_pplt_df = load_imf_monthly('PPLT_11-25-2018 19-25-01-32_timeSeries.csv', indicator_list, imf_columns, imf_country_aggregates)
imf_cpi_df = load_imf_monthly('CPI_11-25-2018 19-14-47-26_timeSeries.csv', indicator_list, imf_columns, imf_country_aggregates)
imf_df = pd.concat([imf_cpi_df, imf_pplt_df], join='outer')
imf_df.size
imf_df.head(15)
len(imf_df['country'].unique())
imf_countries = sorted(list(imf_df['country'].unique()))
```
### Load world bank yearly data
```
%%bash
wc -l world_bank/*.csv
wb_country_aggregates = ['nan', 'Lower middle income', 'Post-demographic dividend', 'High income',
'Pre-demographic dividend', 'East Asia & Pacific (IDA & IBRD countries)',
'Europe & Central Asia (excluding high income)', 'Heavily indebted poor countries (HIPC)',
'Caribbean small states', 'Pacific island small states', 'Middle income',
'Late-demographic dividend', 'OECD members', 'IDA & IBRD total', 'Not classified',
'East Asia & Pacific (excluding high income)',
'Latin America & the Caribbean (IDA & IBRD countries)', 'Low income', 'Low & middle income',
'IDA blend', 'IBRD only', 'Sub-Saharan Africa (excluding high income)',
'Fragile and conflict affected situations', 'Europe & Central Asia (IDA & IBRD countries)',
'Euro area', 'Other small states', 'Europe & Central Asia', 'Arab World',
'Latin America & Caribbean (excluding high income)',
'Sub-Saharan Africa (IDA & IBRD countries)', 'Early-demographic dividend', 'IDA only',
'Small states', 'Middle East & North Africa (excluding high income)', 'East Asia & Pacific',
'South Asia', 'European Union', 'Least developed countries: UN classification',
'Middle East & North Africa (IDA & IBRD countries)', 'Upper middle income',
'South Asia (IDA & IBRD)', 'Central Europe and the Baltics', 'Sub-Saharan Africa',
'Latin America & Caribbean', 'Middle East & North Africa', 'IDA total', 'North America',
'Last Updated: 11/14/2018', 'Data from database: World Development Indicators', 'World']
wb_cols = ['Country Name', 'Series Name'] + [str('%s [YR%s]' % (y, y)) for y in list(range(1960, 2018))]
def load_wb_yearly(file_name, indicators, wb_columns, country_aggregates):
csv_df = pd.read_csv('world_bank/%s' % file_name).fillna(0)
base_df = csv_df.loc[(csv_df['Series Name'].isin(indicators))]
wb_df = base_df[wb_columns].fillna(0)
df = pd.melt(wb_df, id_vars=['Country Name', 'Series Name'], var_name='date', value_name='value')
df['date'] = pd.to_datetime(df['date'].map(lambda x: int(x.split(' ')[0])), format='%Y')
df.columns = ['country', 'indicator', 'date', 'value']
return df.loc[~df['country'].isin(country_aggregates)]
wb_econ_df = load_wb_yearly('ECON.csv', indicator_list, wb_cols, wb_country_aggregates)
wb_hnp_df = load_wb_yearly('HNP.csv', indicator_list, wb_cols, wb_country_aggregates)
wb_pop_df = load_wb_yearly('POP.csv', indicator_list, wb_cols, wb_country_aggregates)
wb_df = pd.concat([wb_econ_df, wb_hnp_df, wb_pop_df], join='outer')
wb_df.size
wb_df.head(15)
len(wb_df['country'].unique())
wb_countries = sorted(list(wb_df['country'].unique()))
```
### Combine the two datasets
```
imf_specific = [country for country in imf_countries if country not in wb_countries]
len(imf_specific)
imf_to_wb_country_map = {
'Afghanistan, Islamic Republic of': 'Afghanistan',
'Armenia, Republic of': 'Armenia',
'Azerbaijan, Republic of': 'Azerbaijan',
'Bahrain, Kingdom of': 'Bahrain',
'China, P.R.: Hong Kong': 'Hong Kong SAR, China',
'China, P.R.: Macao': 'Macao SAR, China',
'China, P.R.: Mainland': 'China',
'Congo, Democratic Republic of': 'Congo, Dem. Rep.',
'Congo, Republic of': 'Congo, Rep.',
'Egypt': 'Egypt, Arab Rep.',
'French Territories: New Caledonia': 'New Caledonia',
'Iran, Islamic Republic of': 'Iran',
'Korea, Republic of': 'Korea, Rep.',
'Kosovo, Republic of': 'Kosovo',
"Lao People's Democratic Republic": 'Lao PDR',
'Serbia, Republic of': 'Serbia',
'Sint Maarten': 'Sint Maarten (Dutch part)',
'Timor-Leste, Dem. Rep. of': 'Timor-Leste',
'Venezuela, Republica Bolivariana de': 'Venezuela, RB',
'Venezuela, República Bolivariana de': 'Venezuela, RB',
'Yemen, Republic of': 'Yemen'
}
imf_df = imf_df.replace({'country': imf_to_wb_country_map})
policy_df = pd.concat([wb_df, imf_df], join='outer')
policy_df.size
policy_df.head(15)
indicators = sorted(list(policy_df['indicator'].unique()))
assert len(indicators) == len(indicator_list), 'The number of retrieved variables (%s) does not match the number of specified variables (%s).\nThe following variables are missing:\n\n %s' % (len(indicators), len(indicator_list), [i for i in indicator_list if i not in indicators])
policy_df.to_csv('model_one/policy.csv', sep=';', index=False)
```
|
github_jupyter
|
# Assignment 1
This assignment is to test your understanding of Python basics.
Answer the questions and complete the tasks outlined below; use the specific method described, if applicable. In order to get full points on your homework assignment you have to a) complete this notebook, and b) answer the multiple-choice questions on QuestromTools based on your results.
**Important note:** make sure you spend some time to review the basics of python notebooks under the folder `00-Python-Basics` in course repo or [A Whirlwind Tour of Python](https://www.oreilly.com/programming/free/files/a-whirlwind-tour-of-python.pdf).
# Question 1
**What is 9 to the power of 7?**
```
# Your answer goes here
```
# Question 2
**What is the quotient and remainder of 453634/34?**
```
# Your answer goes here
print('Quotient of 453634/34:')
print('Remainder of 453634/34:')
```
# Question 3
Write a statement to check whether `a` is a multiple of 12 and within the range of [1000, 1800) or (0, 300].
**What is the outcome of `a = 780`?**
Note: (0, 300] represents a range from 0 to 300, where 0 is not included in the range, but 300 is.
```
a = 780
# Your answer goes here
```
# Question 4
**Given this nested list, what indexing yields to the word "hello"?**
```
lst = [[5,[100,200,{'target':[1,2,3,'hello']}],23,11],1,71,2,[3,4],'bye']
print(lst)
# Your answer goes here
```
# Question 5
Using a list comprehension, create a new list out of the list `L1` that contains only the even numbers from `L1`, converted to their absolute values (using the `abs()` function). Call this new list `L2`.
**What is the sum of all of the elements of `L2`?**
Hint: Use `sum(L2)` to get the sum of all the elements.
```
L1 = [64, 34, 112, 91, 62, 40, 117, 80, 96, 34, 48, -9, -33,
99, 16, 118, -51, 60, 115, 4, -10, 82, -7, 77, -33, -40,
77, 90, -9, 52, -44, 25, -43, 28, -37, 92, 25, -45, 3,
103, 22, 39, -52, 74, -54, -76, -10, 5, -54, 95, -59, -2,
110, 63, -53, 113, -43, 18, 49, -20, 81, -67, 1, 38, -24,
57, -11, -69, -66, -67, -68, -16, 64, -34, 52, -37, -7, -40,
11, -3, 76, 91, -57, -48, -10, -16, 14, 13, -65]
# Your answer goes here
```
# Question 6
Write a function that receives a list of integer numbers and returns a list of numbers that are multiples of 4. Call this function `mult4_filter()`.
**Given the list `L3` below, how many elements does the outcome of `mult4_filter(L3)` have?**
Hint: use `len(mult4_filter(L3))` to get the number of elements.
```
L3 = [15, 11, 1, 3, 13, 3, 14, 16, 17, 17, 6, 18, 10, 19, 8, 1, 18,
17, 14, 1, 5, 2, 13, 0, 1, 13, 16, 8, 5, 11, 12, 8, 17, 14,
10, 18, 17, 16, 3, 7, 8, 15, 18, 7, 10, 5, 7, 16, 6, 5]
# Your answer goes here
def mult4_filter(L):
# Your code goes here
return
```
|
github_jupyter
|
# Sklearn
## sklearn.model_selection
Documentation: http://scikit-learn.org/stable/modules/cross_validation.html
```
from sklearn import model_selection, datasets
import numpy as np
```
### A one-off split of the data into train and test with train_test_split
```
iris = datasets.load_iris()
train_data, test_data, train_labels, test_labels = model_selection.train_test_split(iris.data, iris.target,
test_size = 0.3)
#check that the test set really makes up 0.3 of all the data
float(len(test_labels))/len(iris.data)
print('Training set size: {} objects \nTest set size: {} objects'.format(len(train_data),
len(test_data)))
print('Training set:\n', train_data[:5])
print('\n')
print('Test set:\n', test_data[:5])
print('Class labels on the training set:\n', train_labels)
print('\n')
print('Class labels on the test set:\n', test_labels)
```
### Cross-validation strategies
```
#generate a short toy dataset where each element coincides with its index
X = range(0,10)
```
#### KFold
```
kf = model_selection.KFold(n_splits = 5)
for train_indices, test_indices in kf.split(X):
print(train_indices, test_indices)
kf = model_selection.KFold(n_splits = 2, shuffle = True)
for train_indices, test_indices in kf.split(X):
print(train_indices, test_indices)
kf = model_selection.KFold(n_splits = 2, shuffle = True, random_state = 1)
for train_indices, test_indices in kf.split(X):
print(train_indices, test_indices)
```
#### StratifiedKFold
```
y = np.array([0] * 5 + [1] * 5)
print(y)
skf = model_selection.StratifiedKFold(n_splits = 2, shuffle = True, random_state = 0)
for train_indices, test_indices in skf.split(X, y):
print(train_indices, test_indices)
target = np.array([0, 1] * 5)
print(target)
skf = model_selection.StratifiedKFold(n_splits = 2,shuffle = True)
for train_indices, test_indices in skf.split(X, target):
print(train_indices, test_indices)
```
#### ShuffleSplit
```
ss = model_selection.ShuffleSplit(n_splits = 10, test_size = 0.2)
for train_indices, test_indices in ss.split(X):
print(train_indices, test_indices)
```
#### StratifiedShuffleSplit
```
target = np.array([0] * 5 + [1] * 5)
print(target)
sss = model_selection.StratifiedShuffleSplit(n_splits = 4, test_size = 0.2)
for train_indices, test_indices in sss.split(X, target):
print(train_indices, test_indices)
```
#### Leave-One-Out
```
loo = model_selection.LeaveOneOut()
for train_indices, test_index in loo.split(X):
print(train_indices, test_index)
```
More cross-validation strategies are available here: http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators
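Beyond the splitter objects above, the same module also provides helpers that run a full cross-validated evaluation in one call; a minimal sketch with `cross_val_score` (the choice of estimator here is purely illustrative):
```
from sklearn import datasets, linear_model, model_selection

iris = datasets.load_iris()
clf = linear_model.LogisticRegression(max_iter=1000)
cv = model_selection.StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = model_selection.cross_val_score(clf, iris.data, iris.target, cv=cv)
print(scores.mean(), scores.std())
```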
|
github_jupyter
|
# Black Litterman with Investor Views Optimization: Oldest Country ETFs
# Charts
## 1. Data Fetching
### 1.1 Model configuration
```
import os
import sys
import datetime as dt
import logging
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from hmmlearn import hmm
import cvxportfolio as cp
import alphamodel as am
config = {'name': 'bl_sim_charts',
'universe':
{'list': ['SPY', 'EWA', 'EWC', 'EWG', 'EWH', 'EWJ', 'EWS', 'EWU', 'EWW'],
'ticker_col': 'Symbol',
'risk_free_symbol': 'USDOLLAR'},
'data':
{'name': 'eod_returns',
'source': 'quandl',
'table': 'EOD',
'api_key': "6XyApK2BBj_MraQg2TMD"},
'model':
{'start_date': '19970102',
'end_date': '20091231',
'halflife': 65,
'min_periods': 3,
'hidden_states': 2,
'train_len': 1700,
'process': 'none',
'data_dir': '/Users/razvan/PyRepo/research_masc/data_store/bl/',
'returns':
{'sampling_freq': 'daily'},
'covariance':
{'method' : 'SS',
'sampling_freq' : 'monthly',
'train_days': 360}
}
}
# Logging
logger = logging.getLogger()
logger.setLevel(logging.WARNING)
```
### 1.2 Fetch return data
```
# Fetch returns / volumes
ss = am.SingleStockBLEWM(config)
ss.train(force=True)
# Realized Data for Simulation
prices = ss.get('prices', 'realized', ss.cfg['returns']['sampling_freq']).iloc[1:,:]
returns = ss.get('returns', 'realized', ss.cfg['returns']['sampling_freq'])
volumes = ss.get('volumes', 'realized', ss.cfg['returns']['sampling_freq'])
sigmas = ss.get('sigmas', 'realized', ss.cfg['returns']['sampling_freq'])
simulated_tcost = cp.TcostModel(half_spread=0.0005/2., nonlin_coeff=1., sigma=sigmas, volume=volumes)
simulated_hcost = cp.HcostModel(borrow_costs=0.0001)
simulator = cp.MarketSimulator(returns, costs=[simulated_tcost, simulated_hcost],
market_volumes=volumes, cash_key=ss.risk_free_symbol)
```
### 1.3 Plot return data
```
# Process returns for charting
chart_returns = returns[returns.index >= dt.datetime(2005, 1, 2)]
chart_growth = (chart_returns + 1).cumprod()
chart_returns_cum = chart_growth - 1
chart_returns_cum = chart_returns_cum.stack().reset_index()
chart_returns_cum.columns = ['Date', 'Ticker', 'Value']
plt.figure(figsize=(15,8))
sns.set(font_scale=1.5)
with sns.axes_style('ticks'):
data = chart_returns_cum
ax = sns.lineplot(x='Date', y='Value', hue='Ticker', data=data)
ax.set(xlabel='Date', ylabel='Return')
plt.savefig(ss.cfg['data_dir'] + 'bl_asset_returns.png')
```
## 2. Model fitting
### 2.1 Extract Black Litterman equilibrium returns
```
# Aggregate market stats for cal
market_stats = pd.DataFrame({'MarketCap/GDP': [1.25, 1, 1.25, 0.45, 3.5, 0.8, 2, 1.25, 0.3, 0],
'GDP': [2543500, 150000, 239000, 853000, 22500, 1037500, 10000, 422500, 164500, 0]},
index=ss.universe + ['USDOLLAR'])
market_stats.loc[:, 'MarketCap'] = market_stats.loc[:, 'MarketCap/GDP'] * market_stats.loc[:, 'GDP']
market_stats.loc[:, 'MarketCap Weights'] = market_stats.loc[:, 'MarketCap'] / market_stats.loc[:, 'MarketCap'].sum()
market_stats
# Generate market cap weights pandas.Series
w_mktcap = pd.Series(index=market_stats.index, data=market_stats.loc[:, 'MarketCap Weights'])
w_mktcap['USDOLLAR'] = 0.
```
### 2.2 Generate BL posterior returns/covariance
```
# Parameters that match simulations
risk_aversion = 2.5
confidence = 0.8
vconf = 0.7
gamma_risk = 0.1
gamma_trade = 0.1
gamma_hold = 0
```
#### 2.2.1 Correct View
```
# Predicted Data for Optimization
# US underperforms Germany 4% per year - correct view
ss.predict(w_market_cap_init=w_mktcap, risk_aversion=risk_aversion, c=confidence,
P_view=np.array([-1, 0, 0, 1, 0, 0, 0, 0, 0, 0]), Q_view=np.array(0.04 / 252),
view_confidence=vconf
)
# Black Litterman output
r_cor_pred = ss.get('returns', 'predicted')
covariance_cor_pred = ss.get('covariance', 'predicted')
volumes_cor_pred = ss.get('volumes', 'predicted')
sigmas_cor_pred = ss.get('sigmas', 'predicted')
```
#### 2.2.2 Incorrect View
```
# Predicted Data for Optimization
# US outperforms Germany 4% per year - incorrect view
ss.predict(w_market_cap_init=w_mktcap, risk_aversion=risk_aversion, c=confidence,
P_view=np.array([1, 0, 0, -1, 0, 0, 0, 0, 0, 0]), Q_view=np.array(0.04 / 252),
view_confidence=vconf
)
# Black Litterman output
r_incor_pred = ss.get('returns', 'predicted')
covariance_incor_pred = ss.get('covariance', 'predicted')
volumes_incor_pred = ss.get('volumes', 'predicted')
sigmas_incor_pred = ss.get('sigmas', 'predicted')
```
## 3. Simulation Results
### Input Data
```
# Start and end date
start_date = dt.datetime(2005, 1, 2)
end_date = dt.datetime.strptime(config['model']['end_date'], '%Y%m%d')
# Predicted costs
optimization_tcost = cp.TcostModel(half_spread=0.0005/2., nonlin_coeff=1.,
sigma=sigmas_cor_pred,
volume=volumes_cor_pred)
optimization_hcost=cp.HcostModel(borrow_costs=0.0001)
```
## 3.1 Single Period Optimization for Allocation
### 3.1.1 Market Capitalization Weights
```
%%time
# Market cap weights
mktcap_rebalance = cp.Hold(trading_freq="once")
# Backtest
market_cap_w = simulator.run_multiple_backtest(1E6*w_mktcap,
start_time=start_date, end_time=end_date,
policies=[mktcap_rebalance],
loglevel=logging.WARNING, parallel=True)
market_cap_w[0].summary()
market_cap_w[0].v.plot(figsize=(17,7))
```
### 3.1.2 Black Litterman Returns & Covariance Simulation
```
# Optimization parameters
leverage_limit = cp.LeverageLimit(1)
fully_invested = cp.ZeroCash()
long_only = cp.LongOnly()
```
#### 3.1.2.1 Correct View
```
%%time
# Covariance setup
bl_cor_risk_model = cp.FullSigma(covariance_cor_pred)
# Optimization policy
bl_cor_policy = cp.SinglePeriodOpt(return_forecast=r_cor_pred,
costs=[gamma_risk*bl_cor_risk_model,
gamma_trade*optimization_tcost,
gamma_hold*optimization_hcost],
constraints=[leverage_limit, fully_invested, long_only],
trading_freq='hour')
# Backtest
bl_cor_results = simulator.run_multiple_backtest(1E6*w_mktcap,
start_time=start_date, end_time=end_date,
policies=[bl_cor_policy],
loglevel=logging.WARNING, parallel=True)
bl_cor_results[0].summary()
bl_cor_results[0].v.plot(figsize=(17,7))
bl_cor_results[0].w.plot(figsize=(17,6))
```
#### 3.1.2.2 Incorrect View
```
%%time
# Covariance setup
bl_incor_risk_model = cp.FullSigma(covariance_incor_pred)
# Optimization policy
bl_incor_policy = cp.SinglePeriodOpt(return_forecast=r_incor_pred,
costs=[gamma_risk*bl_incor_risk_model,
gamma_trade*optimization_tcost,
gamma_hold*optimization_hcost],
constraints=[leverage_limit, fully_invested, long_only],
trading_freq='hour')
# Backtest
bl_incor_results = simulator.run_multiple_backtest(1E6*w_mktcap,
start_time=start_date, end_time=end_date,
policies=[bl_incor_policy],
loglevel=logging.WARNING, parallel=True)
bl_incor_results[0].summary()
bl_incor_results[0].v.plot(figsize=(17,7))
bl_incor_results[0].w.plot(figsize=(17,6))
```
### 3.1.3 Weight Allocation Difference
```
# Market capitalization weights
w_mktcap
w_mktcap.name = 'Equilibrium'
# Correct view weights
w_bl_cor = bl_cor_results[0].w.iloc[1,:]
w_bl_cor.name = 'Correct View'
#Incorrect view weights
w_bl_incor = bl_incor_results[0].w.iloc[1,:]
w_bl_incor.name = 'Incorrect View'
# Construct weight dataframe
bl_weights = pd.concat([w_mktcap, w_bl_cor, w_bl_incor], axis=1)
bl_weights = bl_weights.stack().reset_index()
bl_weights.columns = ['Ticker', 'Scenario', 'Value']
%matplotlib inline
with sns.axes_style('ticks', {'figure.figsize': (15,8), 'font_scale': 1.5}):
data = bl_weights
ax = sns.catplot(x='Ticker', y='Value', hue='Scenario', data=data, kind='bar', palette='muted', height=10)
ax.set(xlabel='Scenario', ylabel='Portfolio Weight')
ax.fig.set_size_inches(12,5)
plt.xticks(rotation=30, horizontalalignment='right')
plt.savefig(ss.cfg['data_dir'] + 'bl_view_weights.png', bbox_inches="tight")
```
### 3.1.4 View Confidence Sharpe Difference
```
# Grab Black-Litterman view simulation results
bl_eq_results = market_cap_w[0]
bl_eq = pd.DataFrame.from_dict({'Ex-Post View': ['Equilibrium'],
'view_confidence': [0],
'excess_return': [bl_eq_results.excess_returns.mean() * 100 * bl_eq_results.ppy],
'excess_risk': [bl_eq_results.excess_returns.std() * 100 * np.sqrt(bl_eq_results.ppy)]})
bl_cor_view = pd.read_csv(ss.cfg['data_dir'] + 'bl_ewm_corview.csv')
bl_cor = bl_cor_view[['view_confidence', 'excess_return', 'excess_risk']].copy()
bl_cor.loc[:, 'Ex-Post View'] = 'Correct View'
bl_incor_view = pd.read_csv(ss.cfg['data_dir'] + 'bl_ewm_incorview.csv')
bl_incor = bl_incor_view[['view_confidence', 'excess_return', 'excess_risk']].copy()
bl_incor.loc[:, 'Ex-Post View'] = 'Incorrect View'
bl_results = pd.concat([bl_eq, bl_cor, bl_incor])
bl_results.loc[:, 'sharpe'] = bl_results.loc[:, 'excess_return'] / bl_results.loc[:, 'excess_risk']
bl_results
plt.figure(figsize=(15,8))
with sns.axes_style('ticks', {'font_scale': 1.5}):
data = bl_results
ax = sns.lineplot(x='view_confidence', y='sharpe', hue='Ex-Post View', style='Ex-Post View', data=data, markers=True)
ax.set(xlabel='Static View Confidence', ylabel='Sharpe Ratio')
ax.axhline(0.100230, ls='--')
plt.savefig(ss.cfg['data_dir'] + 'bl_view_sharpe.png')
```
|
github_jupyter
|
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import pandas as pd
import hashlib
import shutil
import glob
import time
import re
import os
from tqdm import tqdm
from datetime import datetime
from sklearn.metrics import f1_score, recall_score, precision_score, accuracy_score
class Net(nn.Module):
def __init__(self, sequenceSize=20000, embeddingDim=128, vocabularySize=2**16, filterWidth=5, filterNumber=1024):
super(Net, self).__init__()
self.sequenceSize = sequenceSize
self.embeddingDim = embeddingDim
self.vocabularySize = vocabularySize
self.filterWidth = filterWidth
self.filterNumber = filterNumber
self.embedding = nn.Embedding(self.vocabularySize, self.embeddingDim)
self.conv = nn.Sequential(
nn.Conv2d(1, self.filterNumber, (self.filterWidth, self.embeddingDim)),
nn.BatchNorm2d(self.filterNumber),
nn.ReLU()
)
self.fc = nn.Sequential(
nn.Linear(self.filterNumber , 512),
nn.BatchNorm1d(512),
nn.ReLU(),
nn.Linear(512, 256),
nn.BatchNorm1d(256),
nn.ReLU(),
nn.Linear(256, 1),
nn.Sigmoid()
)
def forward(self, x):
x = self.embedding(x)
#print(x.size())  # -> (batch, 1, sequenceSize, embeddingDim)
x = self.conv(x)
#print(x.size())  # -> (batch, filterNumber, sequenceSize - filterWidth + 1, 1)
x = x.max(dim=2)[0]
#print(x.size())  # -> (batch, filterNumber, 1) after max over the sequence dimension
x = x.view(-1, self.filterNumber)
x = self.fc(x)
return x
class SampleDataset(Dataset):
def __init__(self, filePathList, labels, sequenceSize=20000, featureName='functionMethodCallsArgs'):
self.filePathList = filePathList
self.labels = labels
self.sequenceSize = sequenceSize
self.featureName = featureName
def __len__(self):
return len(self.filePathList)
def __getitem__(self, idx):
df = pd.read_parquet(self.filePathList[idx])
seed = int(round(time.time()%1, 6) * 1000000)
x = np.concatenate(df.iloc[np.random.RandomState(seed).permutation(len(df))][self.featureName].values)
if len(x) > self.sequenceSize:
x = x[:self.sequenceSize]
else:
x = np.concatenate((x, np.zeros([self.sequenceSize - len(x)])))
sample = torch.from_numpy(x)
return (sample.long(), self.labels[idx], self.filePathList[idx])
def train(model, optimizer, dataLoader, device):
running_loss = 0.0
label_lst = list()
predicted_lst = list()
model.train()
for inputs, labels, _ in dataLoader:
#
inputs = inputs.unsqueeze(1).to(device)
labels = labels.to(device)
#
optimizer.zero_grad()
#
outputs = model(inputs)
predicted = (outputs > 0.5).squeeze().long()
loss = F.binary_cross_entropy(outputs.squeeze(), labels.float())
#
loss.backward()
optimizer.step()
#
label_lst.append(labels.cpu().numpy())
predicted_lst.append(predicted.cpu().numpy())
running_loss += loss.item()
labels = np.concatenate(label_lst)
predicted = np.concatenate(predicted_lst)
loss = running_loss / len(predicted)
return labels, predicted, loss
def assess(model, dataLoader, device):
running_loss = 0.0
label_lst = list()
predicted_lst = list()
proba_lst = list()
path_lst = list()
with torch.no_grad():
model.eval()
for inputs, labels, paths in dataLoader:
#
inputs = inputs.unsqueeze(1).to(device)
labels = labels.to(device)
#
outputs = model(inputs)
predicted = (outputs > 0.5).squeeze().long()
loss = F.binary_cross_entropy(outputs.squeeze(), labels.float())
#
if len(inputs) > 1:
label_lst.append(labels.cpu().numpy())
predicted_lst.append(predicted.cpu().numpy())
proba_lst.append(outputs.squeeze().cpu().numpy())
path_lst.append(paths)
running_loss += loss.item()
labels = np.concatenate(label_lst)
predicted = np.concatenate(predicted_lst)
proba = np.concatenate(proba_lst)
paths = np.concatenate(path_lst)
loss = running_loss / len(predicted)
return labels, predicted, loss, proba, paths
def trainModel(ws, modelTag, epochNum, trainLoader, validLoader, device, lr=3e-4, weightDecay=9e-5):
#
model = Net()
model = model.to(device)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weightDecay)
scheduler = ReduceLROnPlateau(optimizer, 'min', verbose=True, patience=5, factor=0.8)
outputlogFilePath = f'./traces/{ws}/logs'
outputtracesPath = f'./traces/{ws}'
#shutil.rmtree(outputtracesPath)
#os.mkdir(outputtracesPath)
result_lst = list()
message = '----------'
with open(outputlogFilePath, 'a') as writer:
writer.write(message + '\n')
print(message)
for epoch in range(epochNum):
tlabel, tpredicted, tloss = train(model, optimizer, trainLoader, device)
vlabel, vpredicted, vloss, vproba, vpaths = assess(model, validLoader, device)
message = f'Train: {modelTag} '
message += '[{:04d}] '.format(epoch)
tf1score = f1_score(tlabel, tpredicted)
message += 'TF1: {:2.4f}, '.format(tf1score*100)
message += 'Tloss: {:2.8f}, '.format(tloss)
vf1score = f1_score(vlabel, vpredicted)
message += 'VF1: {:2.4f}, '.format(vf1score*100)
message += 'VLoss: {:2.8f},'.format(vloss)
with open(outputlogFilePath, 'a') as writer:
writer.write(message + '\n')
print(message)
modelOutputPath = f'{outputtracesPath}/model_{modelTag}_{epoch:03d}.pth'
torch.save(model.state_dict(), modelOutputPath)
result_lst.append((epoch, modelOutputPath, vlabel, vpredicted, vproba, vf1score, vloss, tf1score, tloss))
scheduler.step(tloss)
df = pd.DataFrame(result_lst,
columns=['epoch', 'path', 'labels', 'predicted', 'proba', 'vf1score', 'vloss', 'tf1score', 'tloss'])
df.to_parquet(f'{outputtracesPath}/{modelTag}.parquet')
message = '----------'
with open(outputlogFilePath, 'a') as writer:
writer.write(message + '\n')
print(message)
return df
def evaluate(ws, modelPathList, dataloader, device, numberFragments=1):
modelResultList = []
outputlogFilePath = f'./traces/{ws}/logs'
for modelPath in modelPathList:
for fragment in range(numberFragments):
mdl = Net().to(device)
mdl.load_state_dict(torch.load(modelPath))
mdl.eval()
modelResult = assess(mdl, dataloader, device)
modelF1Score = f1_score(modelResult[0], modelResult[1])
modelResultList.append((modelPath, modelF1Score,) + modelResult)
message = f'Evaluate: '
message += f'ModelPath={modelPath} Fragment={fragment:02d} '
message += f'score={modelF1Score}'
print(message)
with open(outputlogFilePath, 'a') as writer:
writer.write(message + '\n')
return pd.DataFrame(modelResultList, columns=['name', 'f1score', 'Truth', 'Predicted', 'loss', 'Proba', 'Path'])
def getDataloaders(dataset_df, test_df, batchSize=32, numWorkers=16, trainPercentage=0.7, validPercentage=0.8):
rand_idx = np.random.permutation(len(dataset_df))
train_df = dataset_df.iloc[rand_idx[:int(trainPercentage * len(dataset_df))]]
valid_df = dataset_df.iloc[rand_idx[int(trainPercentage * len(dataset_df)):]]
#test_df = dataset_df.iloc[rand_idx[int(validPercentage * len(dataset_df)):]]
print(len(train_df))
print(train_df.label.value_counts())
print(len(valid_df))
print(valid_df.label.value_counts())
print(len(test_df))
print(test_df.label.value_counts())
trainDataset = SampleDataset(train_df.filePath.values, train_df.label.values)
trainLoader = DataLoader(trainDataset, batch_size=batchSize, shuffle=True, num_workers=numWorkers)
validDataset = SampleDataset(valid_df.filePath.values, valid_df.label.values)
validLoader = DataLoader(validDataset, batch_size=2*batchSize, shuffle=False, num_workers=numWorkers)
testDataset = SampleDataset(test_df.filePath.values, test_df.label.values)
testLoader = DataLoader(testDataset, batch_size=2*batchSize, shuffle=False, num_workers=numWorkers)
return trainLoader, validLoader, testLoader
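# evalDataset: averages the ensemble's predicted probabilities per sample, reports
# the overall F1 score, and then splits the samples into a confident subset (mean
# probability outside [probaLowerBorn, probaUpperBorn]) and the remainder, logging
# coverage and F1 for each part.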
def evalDataset(ws, result_df, probaUpperBorn = 0.9, probaLowerBorn = 0.1):
outputlogFilePath = f'./traces/{ws}/logs'
results = np.vstack(result_df.Proba.values)
truth = result_df.Truth.iloc[0]
paths = result_df.Path.iloc[0]
result_mean = results.mean(axis=0)
predicted = (result_mean > 0.5).astype('int')
f1score = f1_score(truth, predicted)
vtruth = truth[(result_mean >= probaUpperBorn) | (result_mean <= probaLowerBorn)]
vpaths = paths[(result_mean >= probaUpperBorn) | (result_mean <= probaLowerBorn)]
vresult_prob = result_mean[(result_mean >= probaUpperBorn) | (result_mean <= probaLowerBorn)]
vpredicted = (vresult_prob > 0.5).astype('int')
vcoverage = (len(vtruth)/len(truth))
vextendSize = len(vtruth)
vf1score = f1_score(vtruth, vpredicted)
etruth = truth[(result_mean < probaUpperBorn) & (result_mean > probaLowerBorn)]
epaths = paths[(result_mean < probaUpperBorn) & (result_mean > probaLowerBorn)]
eresult_prob = result_mean[(result_mean < probaUpperBorn) & (result_mean > probaLowerBorn)]
epredicted = (eresult_prob > 0.5).astype('int')
ecoverage = (len(etruth)/len(truth))
erestSize = len(etruth)
ef1score = f1_score(etruth, epredicted)
message = f'Extend: '
message += f'f1score={f1score*100:2.4f}, '
message += f'vcoverage={vcoverage*100:2.4f}, vf1score={vf1score*100:2.4f}, vextendSize={vextendSize}, '
message += f'ecoverage={ecoverage*100:2.4f}, ef1score={ef1score*100:2.4f}, erestSize={erestSize}'
print(message)
with open(outputlogFilePath, 'a') as writer:
writer.write(message + '\n')
#
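# Driver: for each dataset size in dataset_metaList, sample that many apps, train a
# model, keep the `ensembleSize` checkpoints with the lowest validation loss,
# evaluate that ensemble on the full test set, and store the per-run results under
# ./traces/<ws>/.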
ws = 'studyWS01'
epochNum = 100
device = torch.device('cuda:5')
ensembleSize = 10
trainPercentageParam = 0.8
validPercentageParam = 0.9
outputlogFilePath = f'./traces/{ws}/logs'
outputtracesPath = f'./traces/{ws}'
os.mkdir(outputtracesPath)
test_df = pd.read_parquet('dataset/androzooDone_meta.parquet')
test_df['label'] = (test_df.vt_detection == 0).apply(int)
test_df['filePath'] = '/ws/mnt/local/data/output/datasets/zoo/' + test_df.sha256
dataset_metaList = [10000, 20000, 50000, 100000]
for sizeMeta in dataset_metaList:
currentTag = str(sizeMeta)
message = '######## '
message += currentTag
with open(outputlogFilePath, 'a') as writer:
writer.write(message + '\n')
print(message)
#
dataset_df = test_df.sample(sizeMeta, random_state=54)
#
trainLoader, validLoader, testLoader = getDataloaders(dataset_df, test_df, trainPercentage=trainPercentageParam,
validPercentage=validPercentageParam)
#
models_df = trainModel(ws, f'train_{currentTag}', epochNum, trainLoader, validLoader, device)
models_df.sort_values(by=['vloss', 'tloss'], inplace=True)
selectedModelPaths = models_df.path.iloc[:ensembleSize].tolist()
#
evalresult_df = evaluate(ws, selectedModelPaths, testLoader, device)
#
evalDataset(ws, evalresult_df, probaUpperBorn = 0.8, probaLowerBorn = 0.2)
#
outputPath = f'traces/{ws}/{currentTag}.pickle'
currentResults = pd.DataFrame([(currentTag, models_df, evalresult_df)], columns=['TimeTag', 'models', 'evalResults'])
currentResults.to_pickle(outputPath)
#
message = '########'
with open(outputlogFilePath, 'a') as writer:
writer.write(message + '\n')
print(message)
```
# Testing `TFNoiseAwareModel`
We'll start by testing the `textRNN` model on a categorical problem from `tutorials/crowdsourcing`. In particular we'll test for (a) basic performance and (b) proper construction / re-construction of the TF computation graph both after (i) repeated notebook calls, and (ii) with `GridSearch` in particular.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ['SNORKELDB'] = 'sqlite:///{0}{1}crowdsourcing.db'.format(os.getcwd(), os.sep)
from snorkel import SnorkelSession
session = SnorkelSession()
```
### Load candidates and training marginals
```
from snorkel.models import candidate_subclass
from snorkel.contrib.models.text import RawText
Tweet = candidate_subclass('Tweet', ['tweet'], cardinality=5)
train_tweets = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()
len(train_tweets)
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, train_tweets, split=0)
train_marginals.shape
```
### Train `LogisticRegression`
```
# Simple unigram featurizer
def get_unigram_tweet_features(c):
for w in c.tweet.text.split():
yield w, 1
# Construct feature matrix
from snorkel.annotations import FeatureAnnotator
featurizer = FeatureAnnotator(f=get_unigram_tweet_features)
%time F_train = featurizer.apply(split=0)
F_train
%time F_test = featurizer.apply_existing(split=1)
F_test
from snorkel.learning.tensorflow import LogisticRegression
model = LogisticRegression(cardinality=Tweet.cardinality)
model.train(F_train.todense(), train_marginals)
```
### Train `SparseLogisticRegression`
Note: Testing doesn't currently work with `LogisticRegression` above, but no real reason to use that over this...
```
from snorkel.learning.tensorflow import SparseLogisticRegression
model = SparseLogisticRegression(cardinality=Tweet.cardinality)
model.train(F_train, train_marginals, n_epochs=50, print_freq=10)
import numpy as np
test_labels = np.load('crowdsourcing_test_labels.npy')
acc = model.score(F_test, test_labels)
print(acc)
assert acc > 0.6
# Test with batch size s.t. N % batch_size == 1...
model.score(F_test, test_labels, batch_size=9)
```
### Train basic LSTM
With dev set scoring during execution (note we use test set here to be simple)
```
from snorkel.learning.tensorflow import TextRNN
test_tweets = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()
train_kwargs = {
'dim': 100,
'lr': 0.001,
'n_epochs': 25,
'dropout': 0.2,
'print_freq': 5
}
lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)
lstm.train(train_tweets, train_marginals, X_dev=test_tweets, Y_dev=test_labels, **train_kwargs)
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
# Test with batch size s.t. N % batch_size == 1...
lstm.score(test_tweets, test_labels, batch_size=9)
```
### Run `GridSearch`
```
from snorkel.learning.utils import GridSearch
# Searching over learning rate
param_ranges = {'lr': [1e-3, 1e-4], 'dim': [50, 100]}
model_class_params = {'seed' : 123, 'cardinality': Tweet.cardinality}
model_hyperparams = {
'dim': 100,
'n_epochs': 20,
'dropout': 0.1,
'print_freq': 10
}
searcher = GridSearch(TextRNN, param_ranges, train_tweets, train_marginals,
model_class_params=model_class_params,
model_hyperparams=model_hyperparams)
# Use test set here (just for testing)
lstm, run_stats = searcher.fit(test_tweets, test_labels)
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
```
### Reload saved model outside of `GridSearch`
```
lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)
lstm.load('TextRNN_best', save_dir='checkpoints/grid_search')
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
```
### Reload a model with different structure
```
lstm.load('TextRNN_0', save_dir='checkpoints/grid_search')
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc < 0.60
```
# Testing `GenerativeModel`
### Testing `GridSearch` on crowdsourcing data
```
from snorkel.annotations import load_label_matrix
import numpy as np
L_train = load_label_matrix(session, split=0)
train_labels = np.load('crowdsourcing_train_labels.npy')
from snorkel.learning import GenerativeModel
# Searching over learning rate
searcher = GridSearch(GenerativeModel, {'epochs': [0, 10, 30]}, L_train)
# Use training set labels here (just for testing)
gen_model, run_stats = searcher.fit(L_train, train_labels)
acc = gen_model.score(L_train, train_labels)
print(acc)
assert acc > 0.97
```
```
from scripts.setup_libs import *
```
# [CatBoost](https://github.com/catboost/catboost)
Boosting from Yandex, for categorical features and much more.
To start, it is strongly recommended to watch the video below; it covers the core theory behind CatBoost.
```
from IPython.display import YouTubeVideo
YouTubeVideo('UYDwhuyWYSo', width=640, height=360)
```
Summarizing the video:
CatBoost is built on **Oblivious Decision Trees** (ODT, a full binary tree): at every level of the tree, all nodes split on the same feature. The tree is full and symmetric, with $2^H$ leaves, where $H$ is the tree height and the number of features used.
CatBoost has a bunch of tricks for speed and regularization; a sketch of the corresponding constructor parameters follows this summary.
Regularization (the goal is to make the individual trees as different as possible):
* To keep the base tree small, usually only a fraction of the features (max_features) is considered, e.g. $0.1$ of the total. Since the ensemble contains many trees, no information is lost.
* When building a tree, the sample can be **bootstrapped**.
* When splitting a node, a random value can be added to the split score.
Speed:
* Since the tree layout is known before training (because of ODT), the number of leaves is known too. The number of distinct predictions equals the number of leaves, so when fitting a base tree we can approximate the **vector of leaf values** instead of the **full anti-gradient vector** (one component per training object). This greatly reduces the time needed to choose the best split at each step of building the base tree.
* Binarization of numeric data, to speed up finding the best split. Weak strategies: uniform or median. Good ones: **MaxLogSum**, **GreedyLogSum**.
* In the upper levels of the tree only one gradient step is taken; in the lower ones several may be taken.
* **Ordered boosting**
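To see where these ideas live in the API, here is a minimal parameter sketch; `depth`, `rsm`, `bootstrap_type`, `subsample` and `random_strength` are standard CatBoost constructor parameters, and the values below are arbitrary illustrations, not tuned recommendations:
```
from catboost import CatBoostRegressor

cb_reg_sketch = CatBoostRegressor(
    depth=6,                     # tree height H -> 2**6 leaves in the oblivious tree
    rsm=0.1,                     # fraction of features considered per split (the max_features idea)
    bootstrap_type='Bernoulli',  # bootstrap the sample for each tree
    subsample=0.8,               # fraction of objects drawn by the bootstrap
    random_strength=1.0,         # random noise added to split scores
    silent=True,
)
```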
# [Examples](https://catboost.ai/docs/concepts/python-usages-examples.html#custom-objective-function) of working with CatBoost
One more very useful video, this time with hands-on practice.
```
from IPython.display import YouTubeVideo
YouTubeVideo('xl1fwCza9C8', width=640, height=360)
```
## A simple example
```
train_data = [[1, 4, 5, 6],
[4, 5, 6, 7],
[30, 40, 50, 60]]
eval_data = [[2, 4, 6, 8],
[1, 4, 50, 60]]
train_labels = [10, 20, 30]
# Initialize CatBoostRegressor
model = CatBoostRegressor(iterations=2,
learning_rate=1,
depth=2)
# Fit model
model.fit(train_data, train_labels)
# Get predictions
preds = model.predict(eval_data)
```
## Visualization
```
rng = np.random.RandomState(31337)
boston = load_boston()
y = boston['target']
X = boston['data']
kf = KFold(n_splits=3, shuffle=True, random_state=rng)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.25)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)
cb = CatBoostRegressor(silent=True, eval_metric="MAE", custom_metric=["MAPE"])
```
The call below enables a really nice interactive visualization that you can play with; it does not work in Jupyter Lab, but it does work in Jupyter Notebook.
```
cb.fit(X_train, y_train, eval_set=[(X_val , y_val ), (X_test, y_test)], plot=True)
```
## Binarization of floats
The binarization strategy is chosen by setting the *feature_border_type* parameter.
- **Uniform**. Borders are chosen uniformly over the values;
- **Median**. Each bin gets roughly the same number of distinct values;
- **UniformAndQuantiles**. Uniform + Median;
- **MaxLogSum, GreedyLogSum**. Maximize $\sum_{i=1}^K \log(n_i)$, where $K$ is the required number of bins and $n_i$ is the number of objects in the bucket;
- **MinEntropy**. Similar, but the entropy $-\sum_{i=1}^K n_i \log(n_i)$ is maximized.
```
from sklearn.model_selection import GridSearchCV
params = {"feature_border_type": [
"Uniform",
"Median",
"UniformAndQuantiles",
"MaxLogSum",
"GreedyLogSum",
"MinEntropy"
]}
cb = CatBoostRegressor(silent=True)
grid = GridSearchCV(cb, params)
grid.fit(X, y)
for score, strategy in sorted(zip(grid.cv_results_['mean_test_score'],
grid.cv_results_['param_feature_border_type'].data)):
print("MSE: {}, strategy: {}".format(score, strategy))
```
## Feature importance
```
cb = CatBoostRegressor(silent=True)
cb.fit(X_train, y_train)
for value, name in sorted(zip(cb.get_feature_importance(fstr_type="FeatureImportance"),
boston["feature_names"])):
print("{}\t{}".format(name, value))
```
# Categorical features
```
from catboost.datasets import titanic
titanic_df = titanic()
X = titanic_df[0].drop('Survived',axis=1)
y = titanic_df[0].Survived
X.head(5)
is_cat = (X.dtypes != float)
is_cat.to_dict()
is_cat = (X.dtypes != float)
for feature, feat_is_cat in is_cat.to_dict().items():
if feat_is_cat:
X[feature].fillna("NAN", inplace=True)
cat_features_index = np.where(is_cat)[0]
cat_features_index
X.columns
```
The CatBoost analogue of the DMatrix class is **catboost.Pool**. Among other things, it stores the indices of the categorical features and the pair descriptions for pairwise-learning mode.
[More details](https://tech.yandex.com/catboost/doc/dg/concepts/python-reference_pool-docpage/)
```
from catboost import Pool
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=.85, random_state=1234)
train_pool = Pool(data=X_train,
label=y_train,
cat_features=cat_features_index, # explicitly pass the categorical features we want to work with
feature_names=list(X_train.columns)) # feature names, for convenient visualization and debugging
test_pool = Pool(data=X_test,
label=y_test,
cat_features=cat_features_index,
feature_names=list(X_test.columns))
from catboost import CatBoostClassifier
from sklearn.metrics import roc_auc_score
model = CatBoostClassifier(eval_metric='Accuracy', use_best_model=True, random_seed=42)
model.fit(train_pool, eval_set=test_pool, metric_period=100)
y_pred = model.predict_proba(X_test)
roc_auc_score(y_test, y_pred[:, 1])
```
In fact, quite a few more interesting things happen inside CatBoost when it processes categorical features (a parameter sketch follows this list):
- the mean is smoothed with a prior;
- several (3) models are actually trained on different permutations of the data;
- combinations of categorical features are considered (max_ctr_complexity);
- when the model is applied, new objects are appended to the end of the training-set permutation, so their statistics are computed over all available data;
- target-independent counters are computed over all the data;
- features with a small number of distinct values are one-hot encoded (the one_hot_max_size parameter sets the maximum number of values for one-hot encoding).
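A minimal sketch of where these options are set (`one_hot_max_size` and `max_ctr_complexity` are standard CatBoostClassifier parameters; the values are arbitrary):
```
from catboost import CatBoostClassifier

cat_sketch = CatBoostClassifier(
    one_hot_max_size=5,    # categories with at most 5 distinct values get one-hot encoded
    max_ctr_complexity=2,  # consider combinations of up to 2 categorical features
    silent=True,
)
```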
# [Categorical statistics](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html)
One of CatBoost's main advantages is its handling of categorical features.
Such features are replaced by "counters": for each value of a categorical feature, some **statistic** (counter, ctr) of that value is computed **over the target**, for example the mean target over the objects that have this value of the categorical feature. The categorical feature is then replaced by the statistics computed for it (each value by its own statistic).
We will use the technique of encoding categorical features by the mean value of the target.
The main idea: for each value of the categorical feature, compute the mean of the target and replace the categorical feature with these means.
Let's try the following operation:
* Take a categorical feature (one column). Let the feature take $m$ values: $l_1, \ldots, l_m$.
* Replace the value $l_k$ with $\frac{1}{N_{l_k}}\sum_{i \in l_k}y_i$, the mean of the target over the objects with this value of the categorical feature.
* In the test set, each value is simply mapped to the mean computed on the training data.
```
df_train = pd.DataFrame({'float':[1,2,3,4,5],
'animal': ['cat', 'dog', 'cat', 'dog', 'cat'],
'sign': ['rock', 'rock', 'paper', 'paper', 'paper']})
y_train = np.array([0,1,0,1, 0])
df_test = pd.DataFrame({'float':[6,7,8,9],
'animal': ['cat', 'dog', 'cat', 'dog'],
'sign': ['rock', 'rock', 'paper', 'paper']})
import warnings
warnings.filterwarnings("ignore")
def mean_target(df_train, y_train, df_test):
n = len(df_train)
cat_features = df_train.columns[df_train.dtypes == 'object'].tolist()
float_features = df_train.columns[df_train.dtypes != 'object'].tolist()
new_X_train = df_train.copy()
new_X_train['y'] = y_train
new_X_test = df_test.copy()
for col in cat_features:
mean_dict = new_X_train.groupby(col)['y'].mean().to_dict()
new_X_train[col + '_mean'] = df_train[col].map(mean_dict)
new_X_test[col + '_mean'] = df_test[col].map(mean_dict)
return new_X_train, new_X_test
X_train, X_test = mean_target(df_train, y_train, df_test)
X_train
X_test
```
This approach is better than one-hot encoding, which can easily blow up memory usage.
#### An important point.
While computing these statistics we essentially tie ourselves tightly to the data, which can cause severe **overfitting**.
## Cumulative statistics
Such manipulations can very easily lead to overfitting, because information about the object labels is poured into the data before training.
That is why CatBoost uses **cumulative statistics**.
How categorical features are handled:
- the objects are shuffled into a random order;
- for the i-th object and j-th feature in the permutation, the **statistic** (counter) is computed over all objects that come **before it** and have the same feature value;
- all categorical features in the sample are replaced this way and the model is trained;
- the test set is simply mapped to the mean values computed over the training data.
```
def late_mean_target(df_train, df_test, y_train):
n = len(df_train)
cat_features = df_train.columns[df_train.dtypes == 'object'].tolist()
num_features = df_train.columns[df_train.dtypes != 'object'].tolist()
new_X_test = df_test.copy()
new_X_train = df_train.copy()
new_X_train['y'] = y_train
new_X_train = new_X_train.sample(frac=1).reset_index() #shuffling
new_X_train['ones'] = np.ones((len(new_X_train),))
for col in cat_features:
mean_dict = new_X_train.groupby(col)['y'].mean().to_dict()
new_X_test[col + '_mean'] = df_test[col].map(mean_dict) / n
count = new_X_train.groupby([col])['ones'].apply(lambda x: x.cumsum())
cum = new_X_train.groupby([col])['y'].apply(lambda x: x.cumsum())
new_X_train[col + '_mean'] = (cum - new_X_train['y'])/count
return new_X_train, new_X_test
df_train = pd.DataFrame({'float':[1,2,3,4,5],
'animal': ['cat', 'dog', 'cat', 'dog', 'cat'],
'sign': ['rock', 'rock', 'paper', 'paper', 'paper']})
y_train = np.array([0,1,0,1, 0])
df_test = pd.DataFrame({'float':[6,7,8,9],
'animal': ['cat', 'dog', 'cat', 'dog'],
'sign': ['rock', 'rock', 'paper', 'paper']})
X_train, X_test = late_mean_target(df_train, df_test, y_train)
X_train
X_test
```
# Useful links
* [Tutorial](https://github.com/catboost/tutorials)
* [Github Catboost](https://github.com/catboost/catboost)
* [The CatBoost paper on arXiv](https://arxiv.org/pdf/1706.09516.pdf)
# Homework (16 pts) - Hypothesis Testing
```
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
```
1. You measure the duration of high frequency bursts of action potentials under two different experimental conditions (call them conditions A and B). Based on your measured data below, determine if the conditions affect the mean burst duration or if differences are just due to random fluctuations? See 1a-d below.
```
burstDurationsA_ms = np.array([180.38809356, 118.54316518, 47.36070342, 258.43152543,
157.58441772, 53.00241256, 97.87549106, 98.58339172,
3.82151168, 149.63437886, 78.36434292, 207.1499196 ,
249.99308288, 52.33575872, 177.16295745, 20.90902826,
355.53831638, 17.14676607, 194.82448255, 364.30099202,
10.46025411, 63.80995802, 186.96964679, 16.76391482,
66.04825185, 169.95991378, 174.85051452, 95.51534595,
164.81818483, 165.92316127, 21.99840476, 176.27450914,
367.20238806, 53.55081561, 18.54310649, 309.36915353,
34.8110391 , 170.70514854, 4.80755719, 185.70861565,
42.81031454, 77.63480453, 22.78673497, 27.15480627,
81.19289909, 7.5754338 , 143.53588895, 1.45355329,
56.93153072, 35.7227909 , 120.88947208, 268.68459917,
36.56451611, 335.29492244, 18.88246351, 698.21607381,
47.24456065, 68.47935918, 246.50352868, 39.17939247,
130.00962739, 12.63485608, 16.5060213 , 85.73872575,
30.34193446, 12.18596266, 133.13145381, 39.68448593,
227.5104642 , 274.45272375, 167.76767172, 23.93871685,
319.05649273, 6.3491122 , 35.14797547, 170.29631475,
33.54342976, 2.71282041, 134.5042415 , 42.498552 ,
144.87658813, 122.78633957, 46.58727698, 143.74260009,
27.95191179, 462.66535543, 187.17111074, 21.05730056,
27.92875799, 73.0405984 , 137.67114744, 25.51076087,
68.71066451, 188.46823412, 20.58525518, 18.06289499,
388.79209834, 9.42246312, 270.11609469, 20.51123798])
burstDurationsB_ms = np.array([ 19.1579061 , 103.28099491, 155.40048778, 54.00532297,
19.60552475, 38.33218511, 172.39377537, 100.60095889,
123.39067736, 32.30752807, 140.81577413, 10.03036383,
76.95250023, 111.4112118 , 106.77958145, 100.03741994,
54.40736747, 169.72641863, 170.51048794, 84.31738796,
32.48573515, 71.14968724, 18.07487628, 48.27775752,
249.00817236, 40.88078534, 149.55876359, 171.68318734,
64.7972247 , 179.67199065, 211.24354393, 49.54367304,
5.97816835, 270.82356699, 99.33133967, 14.35603709,
61.8917307 , 48.13722571, 65.23703418, 119.95425274,
64.3948595 , 57.40459219, 18.76680104, 37.37173184,
143.4622583 , 21.6463496 , 45.86107014, 3.98511098,
11.8424448 , 105.59224929, 71.49909777, 29.64941255,
117.62835465, 31.33284437, 124.17263642, 249.31437673,
92.15958114, 66.2842341 , 5.01333126, 18.53478564,
44.09316335, 119.8752612 , 52.31171617, 3.03888107,
109.94031571, 5.52411681, 43.88839751, 48.63036147,
22.71317076, 30.20052081, 32.10942778, 117.08796453,
53.83369891, 68.82006208, 92.29204674, 93.829404 ,
0.67985216, 10.42751195, 4.35827727, 127.21452508,
42.69414115, 34.9520911 , 20.16096766, 178.44190716,
43.04340469, 89.11997718, 163.48474361, 277.29716851,
17.08902205, 103.74782303, 49.29308393, 72.1459098 ,
11.4600829 , 4.09194418, 51.55511185, 91.81103802,
31.36955782, 23.24407568, 90.13594215, 69.37118937])
```
1a. (1 pt) State the null and alternative hypotheses.
H0: The conditions have no effect on mean burst duration.
Ha: Mean burst duration differs between conditions.
1b. (3 pts) Plot the burst distributions for conditions A and B overlaid with your best estimate for the probability density function that describes them.
```
distA = st.expon(loc=0, scale=burstDurationsA_ms.mean())
distB = st.expon(loc=0, scale=burstDurationsB_ms.mean())
plt.hist(burstDurationsA_ms, bins=20, density=True, alpha=0.25, label='A')
plt.hist(burstDurationsB_ms, bins=20, density=True, alpha=0.25, label='B')
dur_ms = np.linspace(0, 500, 100)
plt.plot(dur_ms, distA.pdf(dur_ms), label='dist A')
plt.plot(dur_ms, distB.pdf(dur_ms), label='dist B')
plt.xlabel('Burst Duration (ms)')
plt.ylabel('pdf')
plt.legend();
```
1c. (3 pts) Use a permutation test with 1000 permutations to test your null hypothesis. Compute the difference between mean burst durations for all 1000 permutations of the datasets.
```
nA = len(burstDurationsA_ms)
nB = len(burstDurationsB_ms)
allBurstDurations = np.zeros((nA + nB,))
allBurstDurations[:nA] = burstDurationsA_ms
allBurstDurations[-nB:] = burstDurationsB_ms
numPermutations = 1000
permutedMeanBurstDurationDiffs = np.zeros((numPermutations,))
for i in range(numPermutations):
np.random.shuffle(allBurstDurations)
permutedBurstDurationsA = allBurstDurations[:nA]
permutedBurstDurationsB = allBurstDurations[-nB:]
permutedMeanBurstDurationDiffs[i] = permutedBurstDurationsB.mean() - permutedBurstDurationsA.mean()
```
1d. (3 pts) Plot the distribution of mean burst time differences from each permutation and use vertical dashed lines to indicate the 95% confidence interval and a vertical solid line to indicate the measured mean burst time difference between the actual datasets. Finally, answer the original question: do the conditions affect mean burst duration?
```
# plot the distribution differences between taus for each permutation
plt.hist(permutedMeanBurstDurationDiffs, bins=50, alpha=0.25, label='Expected under H0');
plt.xlabel('Mean Burst Duration Diff B - A (ms)')
plt.ylabel('# Permutations');
# add 95% confidence intervals to the plot
lb, ub = np.quantile(permutedMeanBurstDurationDiffs, [0.025, 0.975])
plt.axvline(lb, linestyle='--', label='95% CI')
plt.axvline(ub, linestyle='--');
# add measured difference to plot
measuredMeanBurstDurationDiff = burstDurationsB_ms.mean() - burstDurationsA_ms.mean()
plt.axvline(measuredMeanBurstDurationDiff, color='r', label='Measured')
plt.legend();
```
Reject H0, as the measured difference falls outside the 95% confidence interval of the differences expected if H0 were true.
Thus, we infer that condition B did affect the mean burst duration compared to condition A.
2. You record the resting potential of a cell (see below). See 2a-c below.
```
restingPotential_mV = np.array([-85.06885608, -68.0333149 , -77.04147864, -70.82636201,
-73.11516394, -70.87124656, -69.8945143 , -71.35017797,
-78.97700081, -76.06762065, -80.16301496, -75.53757879,
-66.29208026, -84.46635021, -74.99594162, -81.64926101,
-69.43971079, -60.09946296, -66.79822251, -60.85633766,
-54.32637416, -66.45195357, -82.98456323, -81.95661922,
-60.47209247, -80.55272128, -62.85999264, -86.59379859,
-78.64488589, -68.84506935, -80.77647186, -67.85623328,
-74.45114227, -89.65579119, -82.64751201, -63.75968145,
-74.22283582, -59.31586296, -93.0908073 , -73.64374549,
-62.68738212, -57.96506437, -72.3717666 , -86.33058942,
-78.92751452, -58.80136699, -85.71378949, -57.19191734,
-91.30229149, -75.05287933, -75.33300218, -62.74969485,
-79.59156555, -52.61256484, -77.21434863, -83.18228806,
-62.06267252, -68.56599363, -74.33860286, -74.25433867,
-67.10062548, -70.91001388, -74.54319772, -89.15247536,
-72.25311527, -88.42966306, -77.76328165, -68.46582471,
-75.94389499, -58.47565688, -71.13726886, -82.4352595 ,
-61.93586705, -83.83289675, -51.7473573 , -72.18052423,
-77.19392687, -87.97762782, -68.17409172, -62.04925685,
-72.86214908, -69.43243604, -82.89191418, -67.91943956,
-59.00530849, -62.53955662, -68.66192422, -73.86176431,
-63.33605874, -84.78928316, -79.38590405, -85.06698722,
-77.99176887, -70.8097979 , -70.458364 , -77.83905415,
-79.05549124, -67.7530506 , -86.29135786, -60.87285052,
-68.75028368, -69.48216823, -87.97546221, -74.25401398,
-72.00639248, -73.25242423, -99.49034043, -81.86020062,
-78.38191113, -68.64333415, -62.26209287, -75.46279644,
-82.18768283, -77.45752358, -79.82870353, -69.4572625 ,
-78.32253067, -73.59782921, -72.25046001, -80.64590368,
-76.92874101, -90.79517065, -73.90324566, -81.67875556,
-67.59862905, -81.49491813, -75.79660561, -81.14508062,
-78.95641057, -80.56089537, -80.23390812, -72.4244641 ,
-87.47818531, -73.59907449, -66.92882851, -67.87048944,
-69.79223622, -67.11253617, -64.8935525 , -80.52556846,
-78.19259758, -62.10604477, -95.98603544, -75.95599522,
-66.3355366 , -80.87436998, -81.5009947 , -88.22430255,
-83.72971765, -75.86416506, -82.52663772, -53.76916602,
-66.21196557, -72.93868097, -91.42283677, -80.22444843,
-75.08391826, -52.05541454, -72.0154604 , -80.24943593,
-65.97047566, -81.62631839, -73.18646105, -70.85923137,
-66.05248632, -60.82923084, -59.49883812, -78.38967591,
-84.79797173, -95.00305539, -78.06355062, -71.60393851,
-70.37115932, -86.7155815 , -65.38955127, -76.78546928,
-79.85586826, -76.65572665, -71.50214043, -83.65681821,
-59.9250123 , -76.05986927, -82.68107711, -70.01703154,
-74.46337865, -63.38903087, -78.73136431, -76.56253395,
-72.43137511, -52.60067507, -54.23945626, -63.68117735,
-88.19424095, -76.29322833, -77.01457066, -72.88256829,
-67.46931905, -60.91331725, -79.17094879, -74.96126989])
```
2a. (3 pts) You only have one sample (above) with a single mean. Use the Central Limit Theorem to estimate the distribution of mean resting potentials were you to collect a bunch more samples. Plot this distribution and indicate its 95% confidence interval with vertical lines on the plot.
```
mu = restingPotential_mV.mean()
sem = restingPotential_mV.std() / np.sqrt(len(restingPotential_mV))
meanDist = st.norm(mu, sem)
mV = np.linspace(-77, -71, 101)
plt.plot(mV, meanDist.pdf(mV))
plt.xlabel('Mean Resting Potential (mV)')
plt.ylabel('pdf')
plt.title('Central Limit Theorem')
lb, ub = meanDist.ppf([0.025, 0.975])
plt.axvline(lb, linestyle='--', label='95% CI')
plt.axvline(ub, linestyle='--')
plt.legend();
```
2b. (3 pts) Use 1000 bootstrapped samples to estimate the 95% confidence interval for the mean resting potential. Plot the distribution of bootstrap mean resting potentials and indicate the 95% confidence intervals with vertical lines. How do these compare to that obtained by the Central Limit Theorem?
```
numBootstraps = 1000
bootstrappedMeans = np.zeros((numBootstraps,))
for i in range(numBootstraps):
bootstrappedRestingPotentials_mV = \
np.random.choice(restingPotential_mV, size=restingPotential_mV.shape, replace=True)
bootstrappedMeans[i] = bootstrappedRestingPotentials_mV.mean()
bootstrappedMeansCI = np.quantile(bootstrappedMeans, [0.025, 0.975])
plt.hist(bootstrappedMeans, bins=30, alpha=0.25, label='Bootstrapped')
plt.axvline(bootstrappedMeansCI[0], linestyle='--', label='95% CI')
plt.axvline(bootstrappedMeansCI[1], linestyle='--')
plt.xlabel('Mean Resting Potential (mV)')
plt.ylabel('# Bootstrap Samples')
plt.legend();
```
2c. (3 pts) Use a t-test to determine whether this cell belongs to a set of cells that you previously determined to have a resting potential of -60 mV.
```
# I didn't specifically ask for the normality test, so it is ok if it was not included.
# But you should do some sort of check for normality if you are using a t-Test.
stat, pvalue = st.normaltest(restingPotential_mV)
isNormallyDistributed = pvalue >= 0.05
isNormallyDistributed
t, pvalue = st.ttest_1samp(restingPotential_mV, -60)
pvalue
```
# Information Flow
In this chapter, we detail how to track information flows in Python by tainting input strings and tracking the taint across string operations.
Some material on `eval` exploitation is adapted from the excellent [blog post](https://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html) by Ned Batchelder.
**Prerequisites**
* You should have read the [chapter on coverage](Coverage.ipynb).
Setting up our infrastructure
```
import fuzzingbook_utils
from ExpectError import ExpectError
import inspect
import enum
%%html
<div>
<style>
div.todo {
color:red;
font-weight: bold;
}
div.todo::before {
content: "TODO: ";
}
div.done {
color:blue;
font-weight: bold;
}
div.done::after {
content: " :DONE";
}
</style>
<script>
function todo_toggle() {
if (todo_shown){
$('div.todo').hide('500');
$('div.done').hide('500');
$('#toggleButton').val('Show Todo')
} else {
$('div.todo').show('500');
$('div.done').show('500');
$('#toggleButton').val('Hide Todo')
}
todo_shown = !todo_shown
}
$( document ).ready(function(){
todo_shown=false;
$('div.todo').hide()
});
</script>
<form action="javascript:todo_toggle()"><input type="submit" id="toggleButton" value="Show Todo"></form>
```
Say we want to implement a calculator service in Python. A really simple way to do that is to rely on the `eval()` function in Python. Since we do not want our users to be able to execute arbitrary commands on our server, we use `eval()` with empty `locals` and `globals`
```
def my_calculator(my_input):
result = eval(my_input, {}, {})
print("The result of %s was %d" % (my_input, result))
```
It works as expected:
```
my_calculator('1+2')
```
Does it?
```
with ExpectError():
my_calculator('__import__("os").popen("ls").read()')
```
As you can see from the error, `eval()` itself completed: the system command `ls` was executed, and only the final formatting of its string result as a number failed. It is easy enough for the user to see the output if needed.
```
my_calculator("1 if __builtins__['print'](__import__('os').popen('ls').read()) else 0")
```
The problem is that the Python `__builtins__` is [inserted by default](https://docs.python.org/3/library/functions.html#eval) when one uses `eval()`. We can avoid this by restricting `__builtins__` in `eval` explicitly.
```
def my_calculator(my_input):
result = eval(my_input, {"__builtins__":None}, {})
print("The result of %s was %d" % (my_input, result))
```
Does it help?
```
with ExpectError():
my_calculator("1 if __builtins__['print'](__import__('os').popen('ls').read()) else 0")
```
But does it actually?
```
my_calculator("1 if [x['print'](x['__import__']('os').popen('ls').read()) for x in ([x for x in (1).__class__.__base__.__subclasses__() if x.__name__ == 'Sized'][0].__len__.__globals__['__builtins__'],)] else 0")
```
The problem here is that when the user has a way to inject **uninterpreted strings** that can reach a dangerous routine such as `eval()` or an `exec()`, it makes it possible for them to inject dangerous code. What we need is a way to restrict the ability of uninterpreted input string fragments from reaching dangerous portions of code.
## A Simple Taint Tracker
For capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class.
We need to write the `tstr.__new__()` method because we want to track the parent object responsible for the taint (essentially because we want to customize the object creation, and `__init__` is [too late](https://docs.python.org/3/reference/datamodel.html#basic-customization) for that).
The taint map in variable `_taint` contains non-overlapping taints mapped to the original string.
```
class tstr_(str):
def __new__(cls, value, *args, **kw):
return super(tstr_, cls).__new__(cls, value)
class tstr(tstr_):
def __init__(self, value, taint=None, parent=None, **kwargs):
self.parent = parent
l = len(self)
if taint:
if isinstance(taint, int):
self._taint = list(range(taint, taint + len(self)))
else:
assert len(taint) == len(self)
self._taint = taint
else:
self._taint = list(range(0, len(self)))
def has_taint(self):
return any(True for i in self._taint if i >= 0)
def __repr__(self):
return str.__repr__(self)
def __str__(self):
return str.__str__(self)
t = tstr('hello')
t.has_taint(), t._taint
t = tstr('world', taint = 6)
t._taint
```
By default, when we wrap a string, it is tainted. Hence we also need a way to `untaint` the string.
```
class tstr(tstr):
def untaint(self):
self._taint = [-1] * len(self)
return self
t = tstr('hello world')
t.untaint()
t.has_taint()
```
However, the taint does not transition from the whole string to parts.
```
with ExpectError():
t = tstr('hello world')
t[0:5].has_taint()
```
### Slice
Indexing and slicing with `[n:m]` go through `__getitem__()`, which we override so that the result carries the taint of exactly the characters it came from. We also define `__iter__()` so that iterating over a `tstr` yields tainted characters.
```
class tstr(tstr):
def __iter__(self):
return tstr_iterator(self)
def create(self, res, taint):
return tstr(res, taint, self)
def __getitem__(self, key):
res = super().__getitem__(key)
if type(key) == int:
key = len(self) + key if key < 0 else key
return self.create(res, [self._taint[key]])
elif type(key) == slice:
return self.create(res, self._taint[key])
else:
assert False
```
#### The iterator class
The `__iter__()` method requires a supporting `iterator` object.
```
class tstr_iterator():
def __init__(self, tstr):
self._tstr = tstr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._tstr): raise StopIteration
# indexing calls tstr.__getitem__, so the result should be a tstr
c = self._tstr[self._str_idx]
assert type(c) is tstr
self._str_idx += 1
return c
t = tstr('hello world')
t[0:5].has_taint()
```
### Helper Methods
We define a few helper methods that deal with the mapped taint index.
```
class tstr(tstr):
class TaintException(Exception):
pass
def x(self, i=0):
v = self._x(i)
if v < 0:
raise taint.TaintException('Invalid mapped char idx in tstr')
return v
def _x(self, i=0):
return self.get_mapped_char_idx(i)
def get_mapped_char_idx(self, i):
if self._taint:
return self._taint[i]
else:
raise taint.TaintException('Invalid request idx')
def get_first_mapped_char(self):
for i in self._taint:
if i >= 0:
return i
return -1
def is_tpos_contained(self, tpos):
return tpos in self._taint
def is_idx_tainted(self, idx):
return self._taint[idx] != -1
my_str = tstr('abcdefghijkl', taint=list(range(4,16)))
my_str[0].x(),my_str[-1].x(),my_str[-2].x()
s = my_str[0:4]
s.x(0),s.x(3)
s = my_str[0:-1]
len(s),s.x(10)
```
### Concatenation
Implementing concatenation is straightforward:
```
class tstr(tstr):
def __add__(self, other):
if type(other) is tstr:
return self.create(str.__add__(self, other), (self._taint + other._taint))
else:
return self.create(str.__add__(self, other), (self._taint + [-1 for i in other]))
```
Testing concatenations
```
my_str1 = tstr("hello")
my_str2 = tstr("world", taint=6)
my_str3 = "bye"
v = my_str1 + my_str2
print(v._taint)
w = my_str1 + my_str3 + my_str2
print(w._taint)
class tstr(tstr):
def __radd__(self, other): #concatenation (+) -- other is not tstr
if type(other) is tstr:
return self.create(str.__add__(other, self), (other._taint + self._taint))
else:
return self.create(str.__add__(other, self), ([-1 for i in other] + self._taint))
my_str1 = "hello"
my_str2 = tstr("world")
v = my_str1 + my_str2
v._taint
```
### Replace
```
class tstr(tstr):
def replace(self, a, b, n=None):
old_taint = self._taint
b_taint = b._taint if type(b) is tstr else [-1] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n: break
idx = mystr.find(a)
if idx == -1: break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_taint[0:idx], old_taint[last:]
old_taint = partA + b_taint + partB
i += 1
return self.create(mystr, old_taint)
my_str = tstr("aa cde aa")
res = my_str.replace('aa', 'bb')
res, res._taint
```
### Split
We essentially have to re-implement split operations, and split by space is slightly different from other splits.
```
class tstr(tstr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = tstr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
print(ab._taint, cdef._taint, ghij._taint, kl._taint)
my_str = tstr('ab cdef ghij kl', taint=100)
ab, cdef, ghij, kl = my_str.rsplit()
print(ab._taint, cdef._taint, ghij._taint, kl._taint)
my_str = tstr('ab cdef ghij kl', taint=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.split(sep=' ')
print(ab._taint, cdef._taint, kl._taint)
my_str = tstr('ab cdef ghij kl', taint=list(range(0, 20)))
ab, cdef, ghij, kl = my_str.split()
print(ab._taint, cdef._taint, kl._taint)
```
### Strip
```
class tstr(tstr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = tstr(" abc ")
v = my_str1.strip()
v, v._taint
my_str1 = tstr(" abc ")
v = my_str1.lstrip()
v, v._taint
my_str1 = tstr(" abc ")
v = my_str1.rstrip()
v, v._taint
```
### Expand Tabs
```
class tstr(tstr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p._taint)
if i < len(parts) - 1:
l = len(all_parts) % n
all_parts.extend([p._taint[-1]] * l)
return self.create(res, all_parts)
my_tstr = tstr("ab\tcd")
my_str = str("ab\tcd")
v1 = my_str.expandtabs(4)
v2 = my_tstr.expandtabs(4)
print(len(v1), repr(my_tstr), repr(v2), v2._taint)
class tstr(tstr):
def join(self, iterable):
mystr = ''
mytaint = []
sep_taint = self._taint
lst = list(iterable)
for i, s in enumerate(lst):
staint = s._taint if type(s) is tstr else [-1] * len(s)
mytaint.extend(staint)
mystr += str(s)
if i < len(lst)-1:
mytaint.extend(sep_taint)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, mytaint)
my_str = tstr("ab cd", taint=100)
(v1, v2), v3 = my_str.split(), 'ef'
print(v1._taint, v2._taint)
v4 = tstr('').join([v2,v3,v1])
print(v4, v4._taint)
my_str = tstr("ab cd", taint=100)
(v1, v2), v3 = my_str.split(), 'ef'
print(v1._taint, v2._taint)
v4 = tstr(',').join([v2,v3,v1])
print(v4, v4._taint)
```
### Partitions
```
class tstr(tstr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (
self.create(partA, self._taint[0:len(partA)]), self.create(sep, self._taint[len(partA): len(partA) + len(sep)]), self.create(partB, self._taint[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self._taint[0:len(partA)]), self.create(sep, self._taint[len(partA): len(partA) + len(sep)]), self.create(partB, self._taint[len(partA) + len(sep):]))
```
### Justify
```
class tstr(tstr):
def ljust(self, width, fillchar=' '):
res = super().ljust(width, fillchar)
initial = len(res) - len(self)
if type(fillchar) is tstr:
t = fillchar.x()
else:
t = -1
return self.create(res, [t] * initial + self._taint)
def rjust(self, width, fillchar=' '):
res = super().rjust(width, fillchar)
final = len(res) - len(self)
if type(fillchar) is tstr:
t = fillchar.x()
else:
t = -1
return self.create(res, self._taint + [t] * final)
```
### String methods that do not change taint
```
def make_str_wrapper_eq_taint(fun):
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return args[0].create(res, args[0]._taint)
return proxy
for name, fn in inspect.getmembers(str, callable):
if name in ['swapcase', 'upper', 'lower', 'capitalize', 'title']:
setattr(tstr, name, make_str_wrapper_eq_taint(fn))
a = tstr('aa', taint=100).upper()
a, a._taint
```
### General wrappers
These are not strictly needed for operation, but can be useful for tracing
```
def make_str_wrapper(fun):
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import types
tstr_members = [name for name, fn in inspect.getmembers(tstr,callable)
if type(fn) == types.FunctionType and fn.__qualname__.startswith('tstr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__','__getattribute__']) | set(tstr_members):
setattr(tstr, name, make_str_wrapper(fn))
```
### Methods yet to be translated
These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
```
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise TaintException('%s Not implemented in TSTR' % fun.__name__)
return proxy
for name, fn in inspect.getmembers(str, callable):
if name in ['__format__', '__rmod__', '__mod__', 'format_map', 'format',
'__mul__','__rmul__','center','zfill', 'decode', 'encode', 'splitlines']:
setattr(tstr, name, make_str_abort_wrapper(fn))
```
## EOF Tracker
Sometimes we want to know where an empty string came from. That is, if an empty string is the result of operations on a tainted string, we want to know the best guess as to what the taint index of the preceding character is.
### Slice
For detecting EOF, we need to carry the cursor. The main idea is the cursor indicates the taint of the character in front of it.
```
class eoftstr(tstr):
def create(self, res, taint):
return eoftstr(res, taint, self)
def __getitem__(self, key):
def get_interval(key):
return ((0 if key.start is None else key.start),
(len(res) if key.stop is None else key.stop))
res = super().__getitem__(key)
if type(key) == int:
key = len(self) + key if key < 0 else key
return self.create(res, [self._taint[key]])
elif type(key) == slice:
if res:
return self.create(res, self._taint[key])
# Result is an empty string
t = self.create(res, self._taint[key])
key_start, key_stop = get_interval(key)
cursor = 0
if key_start < len(self):
assert key_stop < len(self)
cursor = self._taint[key_stop]
else:
if len(self) == 0:
# if the original string was empty, we assume that any
# empty string produced from it should carry the same taint.
cursor = self.x()
else:
# Key start was not in the string. We can reply only
# if the key start was just outside the string, in
# which case, we guess.
if key_start != len(self):
raise taint.TaintException('Can\'t guess the taint')
cursor = self._taint[len(self) - 1] + 1
# _tcursor gets created only for empty strings.
t._tcursor = cursor
return t
else:
assert False
class eoftstr(eoftstr):
def get_mapped_char_idx(self, i):
if self._taint:
return self._taint[i]
else:
if i != 0:
raise taint.TaintException('Invalid request idx')
# self._tcursor gets created only for empty strings.
# use the exception to determine which ones need it.
return self._tcursor
t = eoftstr('hello world')
print(repr(t[11:]))
print(t[11:].x(), t[11:]._taint)
```
## A Comparison Tracker
Sometimes, we also want to know what each character in an input was compared to.
### Operators
```
class Op(enum.Enum):
LT = 0
LE = enum.auto()
EQ = enum.auto()
NE = enum.auto()
GT = enum.auto()
GE = enum.auto()
IN = enum.auto()
NOT_IN = enum.auto()
IS = enum.auto()
IS_NOT = enum.auto()
FIND_STR = enum.auto()
COMPARE_OPERATORS = {
Op.EQ: lambda x, y: x == y,
Op.NE: lambda x, y: x != y,
Op.IN: lambda x, y: x in y,
Op.NOT_IN: lambda x, y: x not in y,
Op.FIND_STR: lambda x, y: x.find(y)
}
Comparisons = []
```
### Instructions
```
class Instr:
def __init__(self, o, a, b):
self.opA = a
self.opB = b
self.op = o
def o(self):
if self.op == Op.EQ:
return 'eq'
elif self.op == Op.NE:
return 'ne'
else:
return '?'
def opS(self):
if not self.opA.has_taint() and type(self.opB) is tstr:
return (self.opB, self.opA)
else:
return (self.opA, self.opB)
@property
def op_A(self):
return self.opS()[0]
@property
def op_B(self):
return self.opS()[1]
def __repr__(self):
return "%s,%s,%s" % (self.o(), repr(self.opA), repr(self.opB))
def __str__(self):
if self.op == Op.EQ:
if str(self.opA) == str(self.opB):
return "%s = %s" % (repr(self.opA), repr(self.opB))
else:
return "%s != %s" % (repr(self.opA), repr(self.opB))
elif self.op == Op.NE:
if str(self.opA) == str(self.opB):
return "%s = %s" % (repr(self.opA), repr(self.opB))
else:
return "%s != %s" % (repr(self.opA), repr(self.opB))
elif self.op == Op.IN:
if str(self.opA) in str(self.opB):
return "%s in %s" % (repr(self.opA), repr(self.opB))
else:
return "%s not in %s" % (repr(self.opA), repr(self.opB))
elif self.op == Op.NOT_IN:
if str(self.opA) in str(self.opB):
return "%s in %s" % (repr(self.opA), repr(self.opB))
else:
return "%s not in %s" % (repr(self.opA), repr(self.opB))
else:
assert False
```
### Equivalence
```
class ctstr(eoftstr):
def create(self, res, taint):
o = ctstr(res, taint, self)
o.comparisons = self.comparisons
return o
def with_comparisons(self, comparisons):
self.comparisons = comparisons
return self
class ctstr(ctstr):
def __eq__(self, other):
if len(self) == 0 and len(other) == 0:
self.comparisons.append(Instr(Op.EQ, self, other))
return True
elif len(self) == 0:
self.comparisons.append(Instr(Op.EQ, self, other[0]))
return False
elif len(other) == 0:
self.comparisons.append(Instr(Op.EQ, self[0], other))
return False
elif len(self) == 1 and len(other) == 1:
self.comparisons.append(Instr(Op.EQ, self, other))
return super().__eq__(other)
else:
if not self[0] == other[0]:
return False
return self[1:] == other[1:]
t = ctstr('hello world', taint=100).with_comparisons([])
print(t.comparisons)
t == 'hello'
for c in t.comparisons:
print(repr(c))
class ctstr(ctstr):
def __ne__(self, other):
return not self.__eq__(other)
t = ctstr('hello', taint=100).with_comparisons([])
print(t.comparisons)
t != 'bye'
for c in t.comparisons:
print(repr(c))
class ctstr(ctstr):
def __contains__(self, other):
self.comparisons.append(Instr(Op.IN, self, other))
return super().__contains__(other)
class ctstr(ctstr):
def find(self, sub, start=None, end=None):
start_val = 0 if start is None else start
end_val = len(self) if end is None else end
self.comparisons.append(Instr(Op.IN, self[start_val:end_val], sub))
return super().find(sub, start, end)
```
## Lessons Learned
* One can track the information flow from input to the internals of a system.
## Next Steps
_Link to subsequent chapters (notebooks) here:_
## Background
\cite{Lin2008}
## Exercises
_Close the chapter with a few exercises such that people have things to do. To make the solutions hidden (to be revealed by the user), have them start with_
```markdown
**Solution.**
```
_Your solution can then extend up to the next title (i.e., any markdown cell starting with `#`)._
_Running `make metadata` will automatically add metadata to the cells such that the cells will be hidden by default, and can be uncovered by the user. The button will be introduced above the solution._
### Exercise 1: _Title_
_Text of the exercise_
```
# Some code that is part of the exercise
pass
```
_Some more text for the exercise_
**Solution.** _Some text for the solution_
```
# Some code for the solution
2 + 2
```
_Some more text for the solution_
### Exercise 2: _Title_
_Text of the exercise_
**Solution.** _Solution for the exercise_
## Reinforcement Learning Tutorial -1: Q Learning
#### MD Muhaimin Rahman
sezan92[at]gmail[dot]com
Q learning can be said to be one of the most famous, and fairly intuitive, of all reinforcement learning algorithms. In fact, many of the recent algorithms using deep learning are based on Q learning. So, to work on recent algorithms, one must have a good grasp of Q learning.
### Intuition
First, let's start with an intuition. Let's assume you are in a maze.

Okay okay! I admit, it is not a maze, just a house with 5 rooms. (I got it from this [link](http://mnemstudio.org/path-finding-q-learning-tutorial.htm).) Your goal is to get out of this place, no matter where you are. But you don't know, or at least pretend not to know, how to get there! After wandering around, you stumble upon a mysterious letter with a lot of numbers in one of the rooms.

The matrix has 6 columns and 6 rows. What you have to do is go to the room with the highest value. Suppose you are in room number 2: then you move to room number 3, and then you get out! Look at the picture again! You can try this from every state; you are guaranteed to get out of the house using this matrix.
In the world of RL, every room is called a ```state```, and movement from one state to another is called an ```action```. Our game has a very ***JARGONISH*** name, a ```Markov Decision Process```. Maybe they invented this name to freak everybody out. But in short, the process means that your action from the current state never depends on previous states. Such processes are practically impossible, but the assumption helps to simplify problems.
Now the question is , how can we get this ?
- First, initialize the matrix with zeros

- Then we will apply the Q learning update equation
\begin{equation}
Q(s_t,a) = Q(s_t,a) + \alpha (Q'(s_{t+1},a)-Q(s_t,a))
\end{equation}
Here, $s_t$ is the state at time $t$, $s_{t+1}$ is the next state, $a$ is the action, and $r$ is the reward we get (if any) for moving from one state to another. $Q(s_t,a)$ is the Q-matrix value for state $s_t$ and action $a$, and $Q'(s_{t+1},a)$ is the target Q value, using state $s_{t+1}$ and the ***BEST ACTION*** for the next state. Here $\alpha$ is the learning rate.
Before we proceed, let me ask you: does this equation ring a bell? I mean, haven't you seen a similar equation before?
Yeah, you got it, it is similar to the gradient descent update. If you don't know the gradient descent equation, I am sorry, but you won't be able to follow the future tutorials, so I suggest you first get a basic working idea of neural networks and gradient descent.
Now, how can we get $Q'(s_{t+1},a)$?
Using the Bellman equation:
\begin{equation}
Q'(s_{t+1},a) = r + \gamma \max_{a} Q(s_{t+1},a)
\end{equation}
It means that the target $Q$ value for a state and action is the reward for that state and action plus the maximum $Q$ value of the next state, multiplied by the discount factor $\gamma$.
***Where did this equation come from?***
Okay, chill! Let's start from the game again. Suppose every room gives a reward: $R_t, R_{t+1}, R_{t+2}, R_{t+3}, R_{t+4}, R_{t+5}, \ldots$ So, obviously, the value of a state is the expected cumulative reward:
\begin{equation}
Q(s,a) = R_t + R_{t+1} + R_{t+2}+ R_{t+3}+ R_{t+4}+ R_{t+5}
\end{equation}
Suppose someone comes along and says they want to give more weight to sooner rewards than to later ones. What should we do? We introduce a discount factor $\gamma$, with $0<\gamma<1$:
\begin{equation}
Q(s,a) = R_t + \gamma R_{t+1} + \gamma^2 R_{t+2}+ \gamma^3 R_{t+3}+ \gamma^4 R_{t+4}+ \gamma^5 R_{t+5}
\end{equation}
\begin{equation}
Q(s,a) = R_t + \gamma [R_{t+1} + \gamma R_{t+2}+ \gamma^2 R_{t+3}+ \gamma^3 R_{t+4}+ \gamma^4 R_{t+5}]
\end{equation}
This equation can be rewritten as
\begin{equation}
Q(s_t,a) = R_t+\gamma Q(s_{t+1},a_{t+1})
\end{equation}
Suppose we have a finite set of discrete actions, each yielding a $Q$ value of its own. What do we do? We take the action with the maximum $Q$ value:
\begin{equation}
Q(s_t,a) = R_t + \gamma \max_{a} Q(s_{t+1},a)
\end{equation}
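To make the update concrete, here is a tiny numeric check with made-up values ($\alpha = 0.5$, $\gamma = 0.99$, a step reward of $-1$, and a best next-state Q value of $2$):
```
# One Q-learning update with made-up numbers, just to follow the arithmetic.
alpha, gamma = 0.5, 0.99
q_sa = 0.0          # current Q(s_t, a)
r = -1.0            # reward for taking action a in s_t
max_q_next = 2.0    # max over actions of Q(s_{t+1}, a)

target = r + gamma * max_q_next        # Bellman target: -1 + 0.99*2 = 0.98
q_sa = q_sa + alpha * (target - q_sa)  # 0 + 0.5*(0.98 - 0) = 0.49
print(q_sa)
```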
### Coding!
Let's start coding!
I will be using the ***OpenAI*** Gym environment. The introduction and installation of the environments are given [here](https://github.com/openai/gym).
```
import gym
import numpy as np
```
Initialization of Environments
I will use the MountainCar environment from OpenAI Gym. It is a classic problem from the 90s. I intend to use this environment for all the algorithms.

In this game, your task is to make the car reach the green flag. For every step you get a reward of -1, so your job is to reach the goal position in as few steps as possible. The maximum step limit is 200.
```
env = gym.make('MountainCar-v0')
s = env.reset() #Reset the car
```
```env.reset()``` gives the initial state. The state is the position and velocity of the car at a given time.
This game's actions can be 0, 1 or 2: 0 pushes left, 1 does nothing, 2 pushes right.
```env.step(action)``` returns four values (see the short sketch below):
- next state
- reward
- terminal, a flag telling whether the episode is over or not
- info, which we will not need here
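A minimal interaction sketch (not part of the tutorial code) showing those four return values with a purely random agent:
```
demo_env = gym.make('MountainCar-v0')
obs = demo_env.reset()
done = False
while not done:
    action = demo_env.action_space.sample()          # pick 0, 1 or 2 at random
    obs, reward, done, info = demo_env.step(action)  # reward is -1 per step until the episode ends
demo_env.close()
```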
Hyperparameters
- ```legal_actions``` number of actions
- ```actions``` the actions list
- ```gamma``` discount factor $\gamma$
- ```lr``` learning rate $\alpha$
- ```num_episodes``` number of episodes
- ```epsilon``` the probability of choosing a random action (see the epsilon-greedy sketch after the next code block)
- ```epsilon_decay``` epsilon decay rate
```
legal_actions=env.action_space.n
actions = [0,1,2]
gamma =0.99
lr =0.5
num_episodes =30000
epsilon =0.5
epsilon_decay =0.99
```
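Using the hyperparameters just defined, here is a small sketch (my own helper, not part of the original code) of the epsilon-greedy rule the training loop applies: act randomly with probability `epsilon`, and shrink `epsilon` each time a random action is taken.
```
def epsilon_greedy(greedy_action, epsilon, epsilon_decay, legal_actions):
    """Return (action, new_epsilon): a random action with probability epsilon, otherwise the greedy one."""
    if np.random.random() < epsilon:
        return np.random.randint(0, legal_actions), epsilon * epsilon_decay
    return greedy_action, epsilon

# e.g. with a placeholder greedy action of 0
a, epsilon = epsilon_greedy(0, epsilon, epsilon_decay, legal_actions)
```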
Code block to discretize the state. Because ***Q learning*** doesn't work on a continuous state space, we have to convert the state into discrete bins (10 per dimension).
```
N_BINS = [10, 10]
MIN_VALUES = [-1.2, -0.07]   # lower bounds of position and velocity
MAX_VALUES = [0.6, 0.07]     # upper bounds of position and velocity
BINS = [np.linspace(MIN_VALUES[i], MAX_VALUES[i], N_BINS[i]) for i in range(len(N_BINS))]
rList = []

def discretize(obs):
    return tuple([int(np.digitize(obs[i], BINS[i])) for i in range(len(N_BINS))])
```
Q Learning Class
```
class QL:
    def __init__(self, Q, policy, legal_actions, actions, gamma, lr):
        self.Q = Q                        # Q matrix (a dict keyed by (state, action))
        self.policy = policy
        self.legal_actions = legal_actions
        self.actions = actions
        self.gamma = gamma
        self.lr = lr

    def q_value(self, s, a):
        """Gets the Q value for a certain state and action"""
        if (s, a) not in self.Q:
            self.Q[(s, a)] = 0            # unseen state-action pairs start at zero
        return self.Q[(s, a)]

    def action(self, s):
        """Gets the action for a certain state"""
        if s not in self.policy:
            self.policy[s] = np.random.randint(0, self.legal_actions)
        return self.policy[s]

    def learn(self, s, a, s1, r, done):
        """Updates the Q matrix"""
        if not done:
            self.Q[(s, a)] = self.q_value(s, a) + self.lr * (
                r + self.gamma * max([self.q_value(s1, a1) for a1 in self.actions]) - self.q_value(s, a))
        else:
            self.Q[(s, a)] = self.q_value(s, a) + self.lr * (r - self.q_value(s, a))
        # greedy policy improvement for state s
        self.q_values = [self.q_value(s, a1) for a1 in self.actions]
        self.policy[s] = self.actions[self.q_values.index(max(self.q_values))]
```
Q Matrix Parameters
- ```Q``` - Q table. We will use a dictionary data structure.
- ```policy``` - policy table; it gives us the action for a given state
```
Q = {}
policy ={}
legal_actions =3
QL = QL(Q,policy,legal_actions,actions,gamma,lr)
```
Training
### Pseudocode
- get initial state $s_{raw}$
- discretize initial state , $s \gets discretize(s_{raw})$
- set total reward to zero , $r_{total} \gets 0$
- set terminal $d$ to false , $d \gets False$
- for each step:
  - choose an action based on the epsilon-greedy policy
  - take the action and get the next state $s1_{raw}$, the reward $r$, and the terminal flag $d$
  - $s1 \gets discretize(s1_{raw})$
  - $r_{total} \gets r_{total}+r$
  - if $d == True$:
    - if $r_{total}<-199$:
      - set $r \gets -100$ (punishment)
      - update the $Q$ table
      - break
    - else:
      - update the $Q$ table
      - break
  - $s \gets s1$
```
for i in range(num_episodes):
    s_raw = env.reset()            # initialize the environment
    s = discretize(s_raw)          # discretize the state
    rAll = 0                       # total reward for this episode
    d = False
    j = 0
    for j in range(200):
        # epsilon greedy: choose random actions (mostly) while Q is still all zeros
        if np.random.random() < epsilon:
            a = np.random.randint(0, legal_actions)
            epsilon = epsilon * epsilon_decay
        else:
            a = QL.action(s)
        s1_raw, r, d, _ = env.step(a)
        rAll = rAll + r
        s1 = discretize(s1_raw)
        env.render()
        if d:
            if rAll < -199:
                r = -100           # punishment if the episode ends before reaching the goal
                QL.learn(s, a, s1, r, d)
                print("Failed! Reward %d" % rAll)
            else:
                QL.learn(s, a, s1, r, d)
                print("Passed! Reward %d" % rAll)
            break
        QL.learn(s, a, s1, r, d)
        if j == 199:
            print("Reward %d after full episode" % rAll)
        s = s1
env.close()
```
# Marginalized Gaussian Mixture Model
Author: [Austin Rochford](http://austinrochford.com)
```
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
```
Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
```
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
```
A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)
$$
\begin{align*}
\mu_1, \ldots, \mu_K
& \sim N(0, \sigma^2) \\
\tau_1, \ldots, \tau_K
& \sim \textrm{Gamma}(a, b) \\
\boldsymbol{w}
& \sim \textrm{Dir}(\boldsymbol{\alpha}) \\
z\ |\ \boldsymbol{w}
& \sim \textrm{Cat}(\boldsymbol{w}) \\
x\ |\ z
& \sim N(\mu_z, \tau^{-1}_z).
\end{align*}
$$
An implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.
An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is
$$
\begin{align*}
\mu_1, \ldots, \mu_K
& \sim N(0, \sigma^2) \\
\tau_1, \ldots, \tau_K
& \sim \textrm{Gamma}(a, b) \\
\boldsymbol{w}
& \sim \textrm{Dir}(\boldsymbol{\alpha}) \\
f(x\ |\ \boldsymbol{w})
& = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),
\end{align*}
$$
where
$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$
is the probability density function of the normal distribution.
Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).
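To make the marginalization concrete, here is a small NumPy/SciPy sketch (not part of the original notebook) of the marginalized log-likelihood computed with the log-sum-exp trick, evaluated at the true parameters of the toy data; it is essentially the mixture density $f(x\,|\,\boldsymbol{w})$ written above, on the log scale.
```
from scipy import stats
from scipy.special import logsumexp

def marginal_loglike(x, w, mu, sigma):
    """log f(x) = sum_n logsumexp_i [ log w_i + log N(x_n | mu_i, sigma_i^2) ]"""
    log_terms = np.log(w) + stats.norm.logpdf(x[:, np.newaxis], mu, sigma)
    return logsumexp(log_terms, axis=1).sum()

marginal_loglike(x, W, MU, SIGMA)
```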
PyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
```
with pm.Model() as model:
    w = pm.Dirichlet('w', np.ones_like(W))
    mu = pm.Normal('mu', 0., 10., shape=W.size)
    tau = pm.Gamma('tau', 1., 1., shape=W.size)
    x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)

with model:
    trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)[1000:]
```
We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
```
pm.traceplot(trace, varnames=['w', 'mu']);
pm.plot_posterior(trace, varnames=['w', 'mu']);
```
We can also sample from the model's posterior predictive distribution, as follows.
```
with model:
    ppc_trace = pm.sample_posterior_predictive(trace, 5000, random_seed=SEED)
```
We see that the posterior predictive samples have a distribution quite close to that of the observed data.
```
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True,
histtype='step', lw=2,
label='Observed data');
ax.hist(ppc_trace['x_obs'], bins=30, normed=True,
histtype='step', lw=2,
label='Posterior predictive distribution');
ax.legend(loc=1);
```
# The Data
To see where we got the data, go here: https://www.ndbc.noaa.gov/station_history.php?station=42040
```
import pandas as pd
import numpy as np
import datetime
```
This is the first set of data from 1995
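The helpers `read_file` and `build_median_df` used below come from a local `utils` module that is not included here. As a rough sketch of what `read_file` likely does, assuming the standard whitespace-delimited NDBC history format (column names and missing-value sentinels vary between years, so treat this as illustrative only):
```
def read_file_sketch(path):
    df = pd.read_csv(path, delim_whitespace=True)
    # NDBC uses sentinels such as 99.0 / 999.0 / 9999.0 for missing measurements
    df = df.replace([99.0, 999.0, 9999.0], np.nan)
    # assemble a timestamp from the leading date/time columns
    # (their names differ by year, e.g. YY vs #YY vs YYYY)
    parts = pd.DataFrame({'year': df.iloc[:, 0], 'month': df.iloc[:, 1],
                          'day': df.iloc[:, 2], 'hour': df.iloc[:, 3]})
    df['timestamp'] = pd.to_datetime(parts)
    return df
```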
```
from utils import read_file, build_median_df

# Load one DataFrame per year of buoy data (1995-2017).
# Notes: 2007 has incomplete data (999 sentinel values become NaN); the original
# notebook re-read the 2010 file for 2012, which looks like a copy-paste slip.
dfs = {year: read_file('data/42040/buoy_data_%d.txt' % year) for year in range(1995, 2018)}
dfs[1995].head(6)  # preview a few rows of a single year

# Daily means for 1995
df1995d = dfs[1995].set_index("timestamp").resample("D").mean()
df1995d.head(5)

# Monthly median air temperature (ATMP) for a selection of years
grouped2016 = build_median_df(dfs[2016], 'ATMP', 2016)
grouped1996 = build_median_df(dfs[1996], 'ATMP', 1996)
grouped2000 = build_median_df(dfs[2000], 'ATMP', 2000)
grouped2005 = build_median_df(dfs[2005], 'ATMP', 2005)
grouped2010 = build_median_df(dfs[2010], 'ATMP', 2010,
    index=['03-Mar', '04-Apr', '05-May', '06-Jun', '07-Jul', '08-Aug', '09-Sep', '10-Oct', '11-Nov', '12-Dec'])
grouped=pd.concat([grouped1996, grouped2000, grouped2005, grouped2010, grouped2016], axis=1, sort=True)
grouped.plot(figsize=(15,10), kind='bar');
import matplotlib.pyplot as plt
import calendar
plt.title("Monthly median air temperature for buoy: LUKE OFFSHORE TEST PLATFORM - 63 NM South of Dauphin Island, AL");
plt.ylabel("Temperature, degrees Celsius");
plt.xticks(np.arange(12), calendar.month_name[1:13], rotation=20);
plt.savefig('42040-airtemp.pdf')
```
# Neural machine translation with attention
This notebook trains a sequence-to-sequence (seq2seq) model for Persian-to-English translation. This is an advanced example that assumes some knowledge of sequence-to-sequence models.
After training the model in this notebook, you will be able to input a Persian sentence, such as *"من می دانم."*, and get back the English translation, *"I know."*
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence had the model's attention while translating.
<img src="https://tensorflow.google.cn/images/spanish-english.png" alt="spanish-english attention plot">
Note: this example takes approximately 10 minutes to run on a single P100 GPU.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
## Download and prepare the dataset
We will use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are a variety of languages available in this dataset. We will use the English-Persian dataset. For convenience, a copy of this dataset is hosted on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and a reverse word index (dictionaries mapping from word to id and from id to word).
4. Pad each sentence to the maximum length.
```
'''
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
'''
path_to_file = "./lan/pes.txt"
# Convert a unicode string to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# insert a space between a word and the punctuation that follows it
# e.g.: "he is a boy." => "he is a boy ."
# Reference: https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replace everything with a space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# add a start and an end token to the sentence
# so that the model knows when to start and stop predicting
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def max_length(tensor):
return max(len(t) for t in tensor)
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# create cleaned input-output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
```
### Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of more than 100,000 sentences takes a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
```
# try experimenting with the size of the dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# calculate the max_length of the target tensors
max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)
# create training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# show the lengths
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t!=0:
print ("%d ----> %s" % (t, lang.index_word[t]))
print ("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print ()
print ("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
```
### Create a tf.data dataset
```
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
## Write the encoder and decoder models
Here we implement an encoder-decoder model with attention. You can read more about this kind of model in TensorFlow's [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt); this example uses a more recent set of APIs and implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from that tutorial. The diagram below shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The image and formulas are an example of the attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).
<img src="https://tensorflow.google.cn/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through the encoder model, which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
Here are the equations that are implemented:
<img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on the notation before writing the simplified form:
* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder
And the pseudo-code:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax is applied to the last axis by default, but here we want to apply it to the *1st axis*, since the shape of the score is *(batch_size, max_length, hidden_size)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis 1.
* `embedding output` = the input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
# we do this to perform addition when calculating the score
hidden_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we apply the score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(hidden_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after the sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through the embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
## Define the optimizer and the loss function
```
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
```
## Checkpoints (object-based saving)
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
```
## Training
1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.
2. The encoder output, the encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients, apply them with the optimizer, and backpropagate.
```
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpointing) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```
## Translate
* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous prediction, along with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* Store the *attention weights for every time step*.
Note: the encoder output is calculated only once per input.
```
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
```
## Restore the latest checkpoint and test
```
# restore the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# wrong translation
translate(u'trata de averiguarlo.')
```
This is from a "Getting Started" competition from Kaggle [Titanic competition](https://www.kaggle.com/c/titanic) to showcase how we can use Auto-ML along with datmo and docker, in order to track our work and make machine learning workflow reprocible and usable. Some part of data analysis is inspired from this [kernel](https://www.kaggle.com/sinakhorami/titanic-best-working-classifier)
This approach can be categorized into following methods,
1. Exploratory Data Analysis (EDA)
2. Data Cleaning
3. Using Auto-ML to figure out the best algorithm and hyperparameter
During the process of EDA and feature engineering, we will be using datmo to create versions of our work by creating snapshots.
```
%matplotlib inline
import numpy as np
import pandas as pd
import re as re
train = pd.read_csv('./input/train.csv', header = 0, dtype={'Age': np.float64})
test = pd.read_csv('./input/test.csv' , header = 0, dtype={'Age': np.float64})
full_data = [train, test]
print (train.info())
```
#### 1. Exploratory Data Analysis
###### To understand how each feature contributes to survival
###### a. `Sex`
```
print (train[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean())
```
###### b. `Pclass`
```
print (train[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean())
```
c. `SibSp and Parch`
With the number of siblings/spouses and the number of children/parents we can create a new feature called Family Size.
```
for dataset in full_data:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
print (train[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean())
```
`FamilySize` seems to have a significant effect on our prediction. The survival rate increases up to a `FamilySize` of 4 and decreases after that. Let's also categorize people by whether they are travelling alone or not.
```
for dataset in full_data:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
print (train[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean())
```
d. `Embarked`
We fill the missing values with the most frequent value, `S`.
```
for dataset in full_data:
dataset['Embarked'] = dataset['Embarked'].fillna('S')
print (train[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean())
```
e. `Fare`
Fare also has some missing values which will be filled with the median
```
for dataset in full_data:
dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median())
train['CategoricalFare'] = pd.qcut(train['Fare'], 4)
print (train[['CategoricalFare', 'Survived']].groupby(['CategoricalFare'], as_index=False).mean())
```
It shows that `Fare` has a significant effect on survival: people who paid higher fares had higher chances of survival.
f. `Age`
There are plenty of missing values in this feature. We generate random numbers between (mean - std) and (mean + std) to fill them, and then categorize age into 5 ranges.
```
for dataset in full_data:
age_avg = dataset['Age'].mean()
age_std = dataset['Age'].std()
age_null_count = dataset['Age'].isnull().sum()
age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)
dataset['Age'][np.isnan(dataset['Age'])] = age_null_random_list
dataset['Age'] = dataset['Age'].astype(int)
train['CategoricalAge'] = pd.cut(train['Age'], 5)
print (train[['CategoricalAge', 'Survived']].groupby(['CategoricalAge'], as_index=False).mean())
```
g. `Name`
Let's extract each passenger's title from their name.
```
def get_title(name):
title_search = re.search(' ([A-Za-z]+)\.', name)
# If the title exists, extract and return it.
if title_search:
return title_search.group(1)
return ""
for dataset in full_data:
dataset['Title'] = dataset['Name'].apply(get_title)
print("=====Title vs Sex=====")
print(pd.crosstab(train['Title'], train['Sex']))
print("")
print("=====Title vs Survived=====")
print (train[['Title', 'Survived']].groupby(['Title'], as_index=False).mean())
```
Let's categorize the titles, converting the rare ones to `Rare`, and check the impact of title on survival rate.
```
for dataset in full_data:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
print (train[['Title', 'Survived']].groupby(['Title'], as_index=False).mean())
import json
config = {"features analyzed": ["Sex", "Pclass", "FamilySize", "IsAlone", "Embarked", "Fare", "Age", "Title"]}
with open('config.json', 'w') as outfile:
json.dump(config, outfile)
```
#### Creating a datmo snapshot to save my work; this preserves my current state before proceeding to data cleaning
```bash
home:~/datmo-tutorials/auto-ml$ datmo snapshot create -m "EDA"
Creating a new snapshot
Created snapshot with id: 30803662ab49bb1ef67a5d0861eecf91cff1642f
home:~/datmo-tutorials/auto-ml$ datmo snapshot ls
+---------+-------------+-------------------------------------------+-------+---------+-------+
| id | created at | config | stats | message | label |
+---------+-------------+-------------------------------------------+-------+---------+-------+
| 30803662| 2018-05-15 | {u'features analyzed': [u'Sex', | {} | EDA | None |
| | 23:15:44 | u'Pclass', u'FamilySize', u'IsAlone', | | | |
| | | u'Embarked', u'Fare', u'Age', u'Title']} | | | |
+---------+-------------+-------------------------------------------+-------+---------+-------+
```
#### 2. Data Cleaning
Now let's clean our data and map our features into numerical values.
```
train_copy = train.copy()
test_copy = test.copy()
full_data_copy = [train_copy, test_copy]
for dataset in full_data_copy:
# Mapping Sex
dataset['Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
# Mapping titles
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
# Mapping Embarked
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
# Mapping Fare
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
# Mapping Age
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
# Feature Selection
drop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp',\
'Parch', 'FamilySize']
train_copy = train_copy.drop(drop_elements, axis = 1)
train_copy = train_copy.drop(['CategoricalAge', 'CategoricalFare'], axis = 1)
test_copy = test_copy.drop(drop_elements, axis = 1)
print (train_copy.head(10))
train_copy = train_copy.values
test_copy = test_copy.values
config = {"selected features": ["Sex", "Pclass", "Age", "Fare", "Embarked", "Fare", "IsAlone", "Title"]}
with open('config.json', 'w') as outfile:
json.dump(config, outfile)
```
#### 3. Using Auto-ML to figure out the best algorithm and hyperparameter
##### Now that we have cleaned our data, it's time to use Auto-ML to find the best algorithm for it

```
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
X = train_copy[0::, 1::]
y = train_copy[0::, 0]
X_train, X_test, y_train, y_test = train_test_split(X, y,
train_size=0.75, test_size=0.25)
tpot = TPOTClassifier(generations=5, population_size=50, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_titanic_pipeline.py')
stats = {"accuracy": (tpot.score(X_test, y_test))}
with open('stats.json', 'w') as outfile:
json.dump(stats, outfile)
```
### Let's again create a datmo snapshot to save my work; this preserves my current state before changing my feature selection
```bash
home:~/datmo-tutorials/auto-ml$ datmo snapshot create -m "auto-ml-1"
Creating a new snapshot
Created snapshot with id: adf76fa7d0800cc6eec033d4b00f97536bcb0c20
home:~/datmo-tutorials/auto-ml$ datmo snapshot ls
+---------+-------------+-------------------------------------------+-----------------+---------------+-------+
| id | created at | config | stats | message | label |
+---------+-------------+-------------------------------------------+-----------------+---------------+-------+
| adf76fa7| 2018-05-16 | {u'selected features': [u'Sex', u'Pclass',|{u'accuracy': | auto-ml-1 | None |
| | 01:24:53 | u'Age', u'Fare', u'Embarked', | 0.8206278} | | |
| | | u'Fare', u'IsAlone', u'Title']} | | | |
| 30803662| 2018-05-15 | {u'features analyzed': [u'Sex', | {} | EDA | None |
| | 23:15:44 | u'Pclass', u'FamilySize', u'IsAlone', | | | |
| | | u'Embarked', u'Fare', u'Age', u'Title']} | | | |
+---------+-------------+-------------------------------------------+-----------------+---------------+-------+
```
#### Another feature selection
1. Let's keep `FamilySize` rather than just using `IsAlone`
2. Let's use `Fare_Per_Person` instead of binning `Fare`
```
train_copy = train.copy()
test_copy = test.copy()
full_data_copy = [train_copy, test_copy]
for dataset in full_data_copy:
# Mapping Sex
dataset['Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
# Mapping titles
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
# Mapping Embarked
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
# Mapping Fare
dataset['FarePerPerson']=dataset['Fare']/(dataset['FamilySize']+1)
# Mapping Age
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
# Feature Selection
drop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp',\
'Parch', 'IsAlone', 'Fare']
train_copy = train_copy.drop(drop_elements, axis = 1)
train_copy = train_copy.drop(['CategoricalAge', 'CategoricalFare'], axis = 1)
test_copy = test_copy.drop(drop_elements, axis = 1)
print (train_copy.head(10))
train_copy = train_copy.values
test_copy = test_copy.values
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
X = train_copy[0::, 1::]
y = train_copy[0::, 0]
X_train, X_test, y_train, y_test = train_test_split(X, y,
train_size=0.75, test_size=0.25)
tpot = TPOTClassifier(generations=5, population_size=50, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_titanic_pipeline.py')
config = {"selected features": ["Sex", "Pclass", "Age", "Fare", "Embarked", "FarePerPerson", "FamilySize", "Title"]}
with open('config.json', 'w') as outfile:
json.dump(config, outfile)
stats = {"accuracy": (tpot.score(X_test, y_test))}
with open('stats.json', 'w') as outfile:
json.dump(stats, outfile)
```
### Let's again create a datmo snapshot to save my final work
```bash
home:~/datmo-tutorials/auto-ml$ datmo snapshot create -m "auto-ml-2"
Creating a new snapshot
Created snapshot with id: 30f8366b7de96d58a7ef8cda266216b01cab4940
home:~/datmo-tutorials/auto-ml$ datmo snapshot ls
+---------+-------------+-------------------------------------------+-----------------+---------------+-------+
| id | created at | config | stats | message | label |
+---------+-------------+-------------------------------------------+-----------------+---------------+-------+
| 30f8366b| 2018-05-16 | {u'selected features': [u'Sex', u'Pclass',|{u'accuracy': | auto-ml-2 | None |
| | 03:04:06 | u'Age', u'Fare', u'Embarked', u'Title', | 0.8206278} | | |
| | | u'FarePerPerson', u'FamilySize']} | | | |
| adf76fa7| 2018-05-16 | {u'selected features': [u'Sex', u'Pclass',|{u'accuracy': | auto-ml-1 | None |
| | 01:24:53 | u'Age', u'Fare', u'Embarked', | 0.8206278} | | |
| | | u'Fare', u'IsAlone', u'Title']} | | | |
| 30803662| 2018-05-15 | {u'features analyzed': [u'Sex', | {} | EDA | None |
| | 23:15:44 | u'Pclass', u'FamilySize', u'IsAlone', | | | |
| | | u'Embarked', u'Fare', u'Age', u'Title']} | | | |
+---------+-------------+-------------------------------------------+-----------------+---------------+-------+
```
#### Let's now move to a different snapshot in order to retrieve `experimentation.ipynb`, `submission.csv`, `tpot_titanic_pipeline.py` or any other files from that version
We use the `checkout` command to achieve this.
```bash
home:~/datmo-tutorials/auto-ml$ # Run this command: datmo snapshot checkout --id <snapshot-id>
home:~/datmo-tutorials/auto-ml$ datmo snapshot checkout --id 30803662
```
# Main Code
```
import os
import time
import numpy as np
import redis
from IPython.display import clear_output
from PIL import Image
from io import BytesIO
import base64
import json
from datetime import datetime
import matplotlib.pyplot as plt
from face_detection import get_face
from utils import img_to_txt, decode_img, log_error
##########################
#
# Global Variables
#
#
##########################
# Get Request
server = os.environ['face_input_redis_server'] if 'face_input_redis_server' in os.environ and len(os.environ['face_input_redis_server']) > 1 else 'localhost'
# connect with redis server as Bob
r = redis.Redis(host=server, port=6379)
# Publish and suscribe redis
req_p = r.pubsub()
# subscribe to request Channel
req_p.subscribe('new_request')
# Forward Request
out_server = os.environ['face_ouput_redis_server'] if 'face_ouput_redis_server' in os.environ and len(os.environ['face_ouput_redis_server']) > 1 else 'localhost'
print(f"User Server {out_server}")
# connect with redis server as Bob
out_r = redis.Redis(host=out_server, port=6379)
def process_request(request):
    '''
    Do your request processing here.
    '''
    im = decode_img(request['image'])
    face = get_face(im)
    plt.imshow(face)
    plt.show()
    return face
def forward_request(id_, face):
    global out_r
    with out_r.pipeline() as pipe:
        image = {
            'id': id_,
            'request_time': str(datetime.today()),
            # NOTE: the detected face is not forwarded yet; a test image is published for testing
            'image': img_to_txt("test_images/test.jpeg"),
            'status': 'pending'
        }
        # Publishing to the stream for testing
        pipe.publish('new_request', json.dumps(image))
        pipe.execute()
    print(f"Request forwarded to {out_server}")
def listen_stream():
    '''
    Listen to the stream.
    If any request arrives on the stream, process it right away.
    '''
    count = 0
    requests = []
    while 1:
        try:
            try:
                # Listening to the stream
                request = str(req_p.get_message()['data'].decode())
                if request is not None:
                    requests.append(request)
            except TypeError as e:
                log_error(e)
            # If we got any request from the stream, process it
            if len(requests) > 0:
                req = requests.pop(0)
                process_request(json.loads(req))
                count += 1
                print(count)
        except Exception as e:
            log_error(e)
listen_stream()
from PIL import Image
import base64
import numpy as np
from io import BytesIO
image = np.asarray(Image.open("test_images/test.jpeg").convert("RGB"))
print(image.shape)
import base64
import numpy as np
import matplotlib.pyplot as plt
plt.imshow(image)
plt.show()
import cv2
def npImage_to_txt(image):
    '''
    Convert a numpy image to a base64 string
    '''
    _, im_arr = cv2.imencode('.jpg', image)  # im_arr: image in numpy one-dim array format
    im_bytes = im_arr.tobytes()
    im_b64 = base64.b64encode(im_bytes)
    return im_b64.decode()

im_b64 = npImage_to_txt(image)
# round trip: decode the base64 string back into a numpy image
im_bytes = base64.b64decode(im_b64)
im_arr = np.frombuffer(im_bytes, dtype=np.uint8)  # im_arr is a one-dim numpy array
img = cv2.imdecode(im_arr, flags=cv2.IMREAD_COLOR)
from utils import decode_img
image = "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCABsAFUDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD8MrWK3mhVkd1VfuB5Dgn6dOtWbFdcu5/stvAWLMAAqH+dZF1DcTXENrFIVyhZfr1r7d/Yg/4J36r8cPC1v4ruPEUlnLkEZi3KRjuMZHPvXHjMbDCR5paH0GV4Crj6rjHY+UbbwF4vumVV06VwGwQytgCnnwncW+oCzurF1kU4OOFNfopf/wDBJL9pfXboaT4F8faJDatJ81xcqwwPYd69E+DX/BBjxpo122teN/H8OtXwO6FlsCIAfoTk46dR0rz1m7s7anuvJKMJLmlZH5mW/wANfEem2Y1L+zZY0l4CSqWU9vw/AVveGvh74vGqwWmi6fd2N3LgxmFC45/iIIIx9f06V+2Pw2/4JC+A7GSK68bz/wBp3CgFvMgCBT2CgD5QBgdeetewaT/wT3+D/h6NJ7DwjbJKsZQyLCAcZ6cCud43FS1SOtYPLKS1kfhV4g+CXxfTRynxI8G2+u6Yg3Ot7pLxlx1ys8JWQY+uOOleb614T8JS2k1houvapaxxyBv7Du9P+1xbwOsLxBWVwOMOmf8AbPU/0aT/ALIPgZ9LaxbQYtjrtJfnjpjmvDPj9/wSa+EnxB0e6ufD2gQ2molCIpojjBx7joaI4/FRfvrQwlg8urfw5WZ+SvwI/bzv/BGiL8JviN4atfHnhK9g8jUfC/jWwNykYSRsNaO2J4pFB7O4VgcVwHxZ8S2fhPVZtc+B/iDUIPD1zO0thZ3urq91p4b70RyQzpnIweoxuyRmvX/2s/8AgnV+0d8EfE8uoXGjSXuleaga5tIdyrjpkFTjgAdD07dvPtJ/Y4+MfxHW01qC1e3jnV1ulWB5ktmHIb5OcbMErnOT26V2wnhqvvM5JRxVD3VqcJ4V8f8AxL8QT3eoaTqOrSXB8sXN1p0aP5igEIrMQ2Svz/ng5IzRX098WP2Mr/4FazH8PPhh4k0+6axTZrGpPcG3NzcFEcjy2OUC7yBnr170UP2Ld0QpY6x8ffsv/C/UPjJ8ZdG8JW1jJMs0oWUquQg75447c9K/dT9lD4a2PgDwVYeDdE0xHks4VRzGmFV+4JGN3XNfDf7FPgjSP2av2TdN+MVl4Bm13xV4xuvJ0u3s4AZGhfPRiPkXbGTk5+8K9T8S67/wVZ8WRpq/ww8MweH9MEIksdL0y3RpIxjpJI7nc+c5OAM9q8DHVYZjjmpStGJ9Hl2E+oZdeCvKR+mfw98MWkCie6nTeFGVCgD347YNen6Ra6MYkgaaIk/Nhmxz/n3r8OvEvxZ/4LU+Crdrubwt4qmA/wBddQR2z475GK7P9nD/AIKgft06B4lt9E+N3hrUhiVVlnvbXyyy/hkE4wOK76Tw1GN+ZM8ythMTinazR+11tZwuMxuhQfd2HirH9nwSn5gAOwzXivwW/aEn8deE7TWdQ0+aKaSINMWTAAIyOPpiuv8AEHxw8GeGdEl17WtYWCCOIMGYfePQr9a6YV6U48y2PJqYLEU6nJrc7kaRBcgxIFbHG3FaOn+ALi5jWCK1XbnOWWvjrxl/wWP/AGWfg9qkdr4z8ROv2i58qE2ymQjnG5sdAOn4V9KfA79vX9nz4r6VFqfhX4i6bcxTIPLZbgde6nPCn6kV0UvY1bXOfE0MXQV1e50viX9mTwR4ssLmy8V6dHPDPCUdCvQEHoeo69sV8p/Ej9hb4c/s6eCZrT4VeH7aFJLyaVfNBdg8isSSTznI7+tfemi+IPDfii187Q9VguhIpYlZRgD8M5/CvBf24re+0j4Sa3r1hII1s7Pzml8s8bSDgc8dP1Nc+Ow1GlDnRWWY7Fyq+yn+J+cnxFtfB954s/tbW2umN5p0Eu1oYmzId++Q/IDub5QeTwi9KK6Xwlonwy/aG0G28Wa1cz6dcwReSEdlKPFuYqRkjBzvyOeNtFZQtyI9x0rszf2c/hTe6J8CvBehlYYLbQfD1ukrSR8pmNWcficj6GvNvE3/AAUXlX4ot8FfhdHc3WrXFwtpYQ2trFaRPITgGa8uQYolz2CsSO46V9v/AAl8F6ZHoUenWmmRqixgbJPmwMcbuxOPYVwfj79k3w3aazca7c/D/T9XjuXLSW4tEG1ucHAABIPTIPavm4UPfdSSvc+hp4qnyexUrNaHwR8QP+CsnxC8EeI7v4ffFP4YazpN9BgNLpWv2V+77n8tD5Qt41kyedodSVwR1r0X4JftSaV8SPEl14J8Y6VFLqltJiOJrHyZQpwfngY5hbBBxkg/e3c17frX7Od1YMx8J/s7WeS+YxfmBYU7g4MZzg8jGPxrjdS/Y3v9S8ZwfETxH8OfDtv4jtn3Wt5okDwSHJyQxjZUceoZcE/nXTVo0K0LwTi0deDnPDwanNSTPrv9m+y0/wAS6Mtvps7MWwPvbuny4J9sY/Dv1roP2sPhVpEPw4Om3w2LLHvZUXrj+tdF+xb8I7vw/wCH1vNbKtcy7flxjHPPAxiu+/ax+HK6h4bhvra0e5jiUiSMPg/d6D19a7KWDqfUW29T5SvjorOoxT0ufjL+0P8Asd/sq6pfHWfH3iC8sZZnHkxi82hieu1QCzc5PC9c1sfsqfsG6T4E1Sfxz8DP2mdYttOuFK3Wk6zoFw+mtGy4+fcq7zjkHK4qt8bvgfqut/Gy51Xxd8WbzR7QTvHZaNHor2YJIIG+7Db3POcKy9h2rnfg9/wTj/bM8D+M7TxT8Kf22NZ0meO6VrSS1lvYJki67S4ZkkO3H3tykd
R2rmwcJQj70z6rMY05RThC59yfCfwL+1p+zdaxeMvhP8Z7TXYoFLvpMpMttKg5KjD5TjtuJUY4NfW3w++LWn/tU/DHUdJ8U+EbjTri5svs2r6bOmVDyJt3oT95c4IPcEcdq+ZPgF4V/auvLW2tvjh4A06+kS4aO48V+ErlIfttuWO2S8tNiqs23GZYwm4jJBya+vfhz4Tk0PT4JQxIQlSWTazJkFQ2OuBjmuuFStKTi9YnyWOeEpuMlG07n5d/CHSPC3h2fxP8FvHNrbXNx4M8VXlrFLdo+4RyPvVRhhwFC8e/vRXTftOeGdQ8H/tZ/EQaNYpN/aOqw3k+5d3zvCORjGBgY7/doq4zmlY74RjViprqfVnwIngXw7aQSFpZXjUyyMOWbbnnPua9UTRtMucNcRqCyjKgcdK8U+Dut6Vb+FbbVILqQxJEjCQ9WG0Y/DFenWPi2zu4YzZ3RK4yWftU4OvTjGzOHGUZupzxL2q6Fo1tCSIR8o+6Op/OuOu9L0bUNXFlBFHtRh5oCFsseR8x9q0PF+r6lqMTrp8hzHESfc+lcpo37S3wK0fXbL4b6x4+0ew16aRPs2lXN/HHcXDFjuwrkHGemNx9q6ZSoznawUYVp02o3bPpD4P6ELR4QqKxWPnI7ZzXe+ItGs9f0xtNuYwQTuHHQ+vNeb+B/Gem6XOkk1xvXaCzxMpUZ529ev0/IdB6JaeM/DN22xdUj81hkQuGUqvr90g16tCdFUeRnymMp4p4j2kE9DxH4kfsfeA/iBBIupeHbS4jZyXjuIw4J9RuBIP0rlPC37BHhDw9cqdPgv7eIEny4tQk2r9AScV7H458bjw94pe2imZ4NiMCvTlRWn4b+IcGpFB5nyZIJbr1rjWHws6jTR6kcxzSOH0lo0N+GXwh8OeDLAWtpYs0RUA/aH3liONxz1/l7V0mo6fY2EJFvbJgqcj3q0dUs5IvME42AAgj0zzWJreoi583yZSqF9quDkHt/SuutToUqXunjxqYrEV7zPkv4g/Cjw749+PXjB7y/s4ZbVrLPnzohIeDfgZGSASfzoqD40/BLwr8U/jXr/iKHxKtjcwx2trdxSM0ZcpGSr5zhgQ2M+qkUV4MqzUrWPs6CpqjHXofL37E/wC1h4Z+J/wP0XUor8Syxabai8MLbx5m0R7M8YOV3cjoR9a+nNF8Ss1k2p30K20RcqS0g2sF4z+n61/Pr+xt+1Nq/wAJdc03T59ZlttLt7pJbpFkO13HTIz8xIxX6Z+Nf24/EniL9kDVfEXg/T31bWLmPEIhcJFYqwPzOfTYARjvXHWws8PiXd6Nno4evRx2FU4dtT7Yu/2i/hFoVjMb7xfpxMCFpGN2vXA4+mDX5Vftj/t7/BbTP2sojpngfSvEmkC6H9ptMo8yCVMbJI3z98DGCOgwK+b/AIcaD8Wfjjd3mo6r4wurDTzn7Zqd1dMsXzHP446D2FS2/wCwYdY19rqP4r6NKJZCys0xaYknqqtjP4GvWhHBygk5ak4ejmMfeoQvqffWi/8ABWcah8NLu+0L4lrZQ6eqwRz3DebcxoV4CxuTuYDjd7Zre/4JsQ61+2F+0+fFmiftKePdN0/SttxrAuL6RkvQRxGoZtiZ2nIC18I6r/wTV+JU+hx3Fl43t47drhWcHTpW34/2kPGcepr6h/YTl8RfsgQaoNFnifU9QnhEU2oTtHb24RGXJBwWzuLdfT0qIU8NTlfmZ2V6eZexcfZJX6n7W694S0iDThZz+ZLIYtsMk0u5yoPUt3P/AOqvKvEev3/w5vZFmc+RIcxzH7i+xrwbXf8AgpXd6R4BuR4rggv7ywEX2a6s72NmnIjUsEVWHHOPrgfxVrfs3/tUeEv2tvBN1qVrqKTx+cyS2jkebZ7SQyycnJPBVhgYI4p4isuW9NnzdDCV6Mn7ZH0N4P8AiVHqNt9tt7wSNJtRdsh2Yz2B967HVvGlhpFkZNQ1FIiqGVzxtAVdxz74IxXgmm30Pw50Ke41h0tYYAzIoYNtwxwPfPX8a+Yv+CjH7feleA/gxP4QsNXktNZ12wlaO5t5gDBCp5JP94hePY4xXNRxVSq+VhUwUJvngtD6r/4Jq6tb/H/R/iR8WviN4RW9F54+ns9IW/jYmK1to1jAHPRm3t/wKit//gmp8Wfh3b/sVeA9V1TxNpVjqer6ONR1SKC4RN0srMdxDEkEqBweeM9xRXZHH5ZFWlUin6ni1cWoVHFS0XmfyZeE0u7HX7XT41X7UswKJIOFYZ65OK++f2QPi5q1/wCB9U+GX2CPVob+NIWDOsKJgkNsBPz85B/SvlD9qj4X+Hfh74xHiHwhrwuIbyUmaEx7DEwwOP51g/DT4qa3oOqW7S6u8EKTqWkUgbRnsfet8RQ/tDCe0jpodHC2b0KMuWa3P3N+Bv7LnwU1P4UReHY/DGni2uoRHPbpDnY/OSTkjOaqat+yT4Y8FWsmgfEHwFY6/wCG3JWC4gsx5lsM8DcuGX8DXmv/AAT7/a28M+JfCkPhyK5kjRVwbmR9xll5IUAY/Ova/jp8QdZX4eXN1oGtXFhexIz27KD8zYJ28kg84r5aDqYeryuNz9HoYqq5c1OXus53TP2OP2TxbCTSp/EK6eAA2nWni25WIAnJyhbK85716t4O/Yw/Y3u9MFp4T+BFnqd4sOFuL29uLtQT3YySHcefw6dq/KvUf2+f2nrT4l3Xh/StQjvgZ/KMJs13Kf7x24z0r9Hv2DPjF8ffEPhuD/haF7BBE9sksTWsYjL57bec+nFey5qEE5x3FjcfXq037OWx0uqf8EpP2XPDdhqPiKXwdCurXdqwkmE0iQ24YE4jUEBeOO/r1wa+ef8Agl18KdL/AGZviP8AEbxTf6nJEhunsfs7SHyXiErOHUHOT5ZUEj0r7m+Mvj+Lwx8Ori/1ENEbiMx/KxYruGQxB+tflh8Xf2lfG3wt8d3+mtcwxpfTNKtyDsjIxgCRWJxk8DGPWudqU5+51PKw1VV6T9ueg/8ABQr/AIKW3llND8O/CWq3duGmWWC+t0AV4yWzDyT85GCCeOelfnv8T/jne/FTW7ZvF2u3N4lvbS20dnJJ5kjSykkKcYBwXwcY6VU/ah+MUPjG7mmj02IXc7KiqrrIUYYB5zgEY4OOBxz1r3D/AIJZf8E0dZ/ayluPif4u8QnRPD1obq2ivry3JVpyAUeFQCCQ2QWJwSD0rsrww+EwjqVXZtHyOb5msPTnTpvQ3rP44+PvAvhLStN+D2o3uvaWYisfnRSb7RVVAsTeWpHHzfgBRX318H/+CLH7Ovwy8KjSNR+NHjXX5ppDJLNaXyWEUbdwsaA5B45JOcZ4zgFfm86VGU29Hd318z8+eKcne5+C/iHxn4x+J0n27xhrk1480qgbolQBsqCflAzx9Kr+OfAHiH4X+K5/D2s2c6hJv3PmJsMowCCAc+tX9I060f4bDVzH+9DyD7xx94DPrnmvtv8Abo/Zw+Gc+i3OuyWt0bkadaTQOZwfIZ7VGbYSucFsnknrX6vUxKw2IVNLRn1OT5bHFUZ1E7OJ80/AD9pPx
D8PWkkt9WmimSMx2oUBQinqMDHOc8179oX7afja/wDBS+GL7xLd3bAbYI7ibcsW5yzM3cnHTnjPevhLSLiVLR7hXO9W4Ofeu58P+IdW0rWIr21uTvEIIDcjpivReBoTfNY9LD5niaS5b7H0l8CY9L0/4t2fifVJ/Mjguo7l5iPNW4EjMQkgGDj6Yr9V/wBnX9q34b6N4W0mLXfDptVjLixFrCsixrtMh+bHQnceem5RnqR+Gfgjxx4j8K+KoL3R74o4XeQxJDY34BGenzH8hX0f4W+OHxJ0b4XG60nX2t5FsIAzIM79kkkfIJI+ZcBsYz7ZOeXE4SOh6WGzKdSEoy6n6B/tm/tw6LoPgvU9X1fUIEtoZbW2mSKcN5M8gdiHXghV2hsZywyAQa/Jr4q/tba146g1vSdVuUuJb/UW+y+TbpJGHU7VUMcs0Z2lhggg9T2rlPip8c/iV4gmuxrevG6h15rk6ja3CB43ZpThwDzuUqCpJODnqCRXMeDgfF3jm3utX2+bqt60c728SRiMMefLVQFU4wOnYd8kqhh6VOPM0clXGVak/Zw0O4+HP7Nviv4teFfFPxMtZWufD3h3QZdQ1u++y+Rg/aIbaKL5gRvklnVhg/dDd+R+mD/tvfstfsv/APBOnwTDpHiqNr3xH4LjEHhTS5Fku0umj2T/AC8iBVl3nccNnnnNemeLf2d/hZ8M/wDghP4rsvDOg4Gr+HLue+lmYF3ltxJJG+QBz5iBznjcTgAYA/Ca4vri7vlilb5JrUM6DpkgHp+PTpXl5hl9POEoVHaMWfO5zhVKSjfc++P2Zv8AgqT+0l8KfBUuk+D9YGtadcT77dPEdw8klqAWGxJCwLj1z0IOOMUV8XxeIdU8P6ba6ZpVx5UaRZJGSWJOeaK+frZBhlVaR81PLoqbXMf/2Q==",
image = decode_img(bytes(image[0], encoding="utf-8"))
plt.imshow(image)
plt.show()
```
```
from mxnet import nd
from mxnet.contrib import text
glove_vec = text.embedding.get_pretrained_file_names("glove")
print(glove_vec)
glove_6b50d = text.embedding.create('glove', pretrained_file_name="glove.6B.50d.txt")
word_size = len(glove_6b50d)
print(word_size)
# index of a word
index = glove_6b50d.token_to_idx['happy']
print(index)
# index back to the word
word = glove_6b50d.idx_to_token[1752]
print(word)
# the word vector
print(glove_6b50d.idx_to_vec[1752])
```
# GloVe applications
```
# cosine similarity
def cos_sim(x, y):
return nd.dot(x,y)/(x.norm() * y.norm())
a = nd.array([4,5])
b = nd.array([400,500])
print(cos_sim(a,b))
# find synonyms (nearest neighbours)
def norm_vecs_by_row(x):
# the 1e-10 added to the denominator is for numerical stability
return x / (nd.sum(x * x, axis=1) + 1e-10).sqrt().reshape((-1, 1))
def get_knn(token_embedding, k, word):
word_vec = token_embedding.get_vecs_by_tokens([word]).reshape((-1, 1))
vocab_vecs = norm_vecs_by_row(token_embedding.idx_to_vec)
dot_prod = nd.dot(vocab_vecs, word_vec)
indices = nd.topk(dot_prod.reshape((len(token_embedding), )), k=k+1,
ret_typ='indices')
indices = [int(i.asscalar()) for i in indices]
# exclude the input word
return token_embedding.to_tokens(indices[1:])
sim_list = get_knn(glove_6b50d, 10, 'baby')
print(sim_list)
sim_val = cos_sim(glove_6b50d.get_vecs_by_tokens('baby'), glove_6b50d.get_vecs_by_tokens('babies'))
print(sim_val)
print(get_knn(glove_6b50d, 10, 'computer'))
print(get_knn(glove_6b50d, 10, 'run'))
print(get_knn(glove_6b50d, 10, 'love'))
# find analogy words
# vec(c) + vec(b) - vec(a)
def get_top_k_by_analogy(token_embedding, k, word1, word2, word3):
word_vecs = token_embedding.get_vecs_by_tokens([word1, word2, word3])
word_diff = (word_vecs[1] - word_vecs[0] + word_vecs[2]).reshape((-1, 1))
vocab_vecs = norm_vecs_by_row(token_embedding.idx_to_vec)
dot_prod = nd.dot(vocab_vecs, word_diff)
indices = nd.topk(dot_prod.reshape((len(token_embedding), )), k=k,
ret_typ='indices')
indices = [int(i.asscalar()) for i in indices]
return token_embedding.to_tokens(indices)
# verify the cosine similarity between vec(son)+vec(woman)-vec(man) and vec(daughter)
def cos_sim_word_analogy(token_embedding, word1, word2, word3, word4):
words = [word1, word2, word3, word4]
vecs = token_embedding.get_vecs_by_tokens(words)
return cos_sim(vecs[1] - vecs[0] + vecs[2], vecs[3])
word_list = get_top_k_by_analogy(glove_6b50d,1, 'man', 'woman', 'son')
print(word_list)
word_list = get_top_k_by_analogy(glove_6b50d, 1, 'man', 'son', 'woman')
print(word_list)
sim_val = cos_sim_word_analogy(glove_6b50d, 'man', 'woman', 'son', 'daughter')
print(sim_val)
word_list = get_top_k_by_analogy(glove_6b50d, 1, 'beijing', 'china', 'tokyo')
print(word_list)
word_list = get_top_k_by_analogy(glove_6b50d, 1, 'bad', 'worst', 'big')
print(word_list)
```
# Amortized Neural Variational Inference for a toy probabilistic model
Consider a certain number of sensors placed at known locations, $\mathbf{s}_1,\mathbf{s}_2,\ldots,\mathbf{s}_L$. There is a target at an unknown position $\mathbf{z}\in\mathbb{R}^2$ that is emitting a certain signal that is received at the $i$-th sensor with a signal strength distributed as follows:
\begin{align}
x_i \sim \mathcal{N}\Big(- A \log\left(||\mathbf{s}_i-\mathbf{z} ||^2\right), \sigma^2\Big),
\end{align}
where $A$ is a constant related to how fast signal strength degrades with distance. We assume a Gaussian prior for the unknown position $\mathcal{N}(\mathbf{0},\mathbf{I})$. Given a set of $N$ i.i.d. samples for each sensor, $\mathbf{X}\in\mathbb{R}^{L\times N}$, we will use Amortized Neural Variational Inference to find a Gaussian approximation to
\begin{align}
p(\mathbf{z}|\mathbf{X}) \propto p(\mathbf{X}|\mathbf{z}) p(\mathbf{z})
\end{align}
Our approximation to $p(\mathbf{z}|\mathbf{X})$ is of the form
\begin{align}
p(\mathbf{z}|\mathbf{X}) \approx q(\mathbf{z}|\mathbf{X})=\mathcal{N}\Big(\mu(\mathbf{X}),\Sigma(\mathbf{X})\Big),
\end{align}
where
- $\mu(\mathbf{X})$ --> Given by a Neural Network with parameter vector $\theta$ and input $\mathbf{X}$
- $\Sigma(\mathbf{X})$ --> Diagonal covariance matrix, where the log of the main diagonal is constructed by a Neural Network with parameter vector $\gamma$ and input $\mathbf{X}$ (a minimal sketch of such an encoder is given below)
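The variational family itself is defined in `model1.py`, which is not included in this notebook. The following is my own placeholder sketch (not the actual `model1.encoder`) of what such an amortized encoder could look like in TensorFlow 1.x: two small dense heads produce $\mu(\mathbf{X})$ and $\log \text{diag}(\Sigma(\mathbf{X}))$, samples are drawn with the reparameterization trick, and the KL term uses the closed form given below.
```
import tensorflow as tf

def toy_encoder(X, dims, z_dim, num_samples_avg, hidden=64):
    # dims is kept only to mirror the model.encoder signature used later; it is unused here
    # flatten the S x N measurement matrix into a single feature vector
    h = tf.layers.dense(tf.reshape(X, [1, -1]), hidden, activation=tf.nn.relu)
    mean = tf.layers.dense(h, z_dim)      # mu(X)
    log_var = tf.layers.dense(h, z_dim)   # log of the diagonal of Sigma(X)
    # reparameterization: z = mu + sqrt(diag(Sigma)) * eps, with eps ~ N(0, I)
    eps = tf.random_normal([num_samples_avg, z_dim])
    samples_z = mean + tf.exp(0.5 * log_var) * eps
    # closed-form KL between N(mu(X), Sigma(X)) and the N(0, I) prior
    KL = 0.5 * tf.reduce_sum(tf.exp(log_var) + tf.square(mean) - 1.0 - log_var)
    return log_var, mean, samples_z, KL
```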
## ELBO lower-bound to $p(\mathbf{X})$
We will optimize $q(\mathbf{z}|\mathbf{X})$ w.r.t. $\theta,\gamma$ by optimizing the Evidence-Lower-Bound (ELBO):
\begin{align}
p(\mathbf{X}) &= \int p(\mathbf{X}|\mathbf{z}) p(\mathbf{z}) d\mathbf{z}\\
&\geq \int q(\mathbf{X}|\mathbf{z}) \log \left(\frac{p(\mathbf{X},\mathbf{z})}{q(\mathbf{X}|\mathbf{z})}\right)d\mathbf{z}\\
& = \mathbb{E}_{q}\left[\log p(\mathbf{X}|\mathbf{z})\right] - D_{KL}(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z}))\triangleq \mathcal{L}(\mathbf{X},\theta,\gamma),
\end{align}
where $D_{KL}(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z}))$ is known in closed form, since it is the KL divergence between two Gaussian pdfs:
\begin{align}
D_{KL}(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z})) = \frac{1}{2} \left[\text{tr}\left(\Sigma(\mathbf{X})\right)+\left(\mu(\mathbf{X})^T\mu(\mathbf{X})\right)-2-\log\det \left(\Sigma(\mathbf{X})\right) \right]
\end{align}
## SGD optimization
- Sample $\mathbf{\epsilon}\sim \mathcal{N}(\mathbf{0},\mathbf{I})$
- Sample from $q(\mathbf{z}|\mathbf{X})$:
\begin{align}
\mathbf{z}^0 = \mu(\mathbf{X}) + \sqrt{\text{diag}(\Sigma(\mathbf{X}))} \circ \mathbf{\epsilon}
\end{align}
- Compute gradients of
\begin{align}
\hat{\mathcal{L}}(\mathbf{X},\theta,\gamma) = \log p(\mathbf{X}|\mathbf{z}^0) - D_{KL}(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z}))
\end{align}
w.r.t. $\theta,\gamma$
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
%matplotlib inline
# use seaborn plotting defaults
import seaborn as sns; sns.set()
```
### Probabilistic model definition and generating samples
```
############## Elements of the true probabilistic model ####################
loc_info = {}
loc_info['S'] = 3 # Number of sensors
loc_info['pos_s'] = np.array([[0.5,1], [3.5,1], [2,3]]) #Position of sensors
#loc_info['target'] = np.random.uniform(-3,3,[2,]) #(Unknown target position)
loc_info['target'] = np.array([-1,2]) #(Unknown target position)
loc_info['var_s'] = 5.*np.ones(loc_info['S']).reshape([loc_info['S'],1]) #Variance of sensors
loc_info['A'] = np.ones(loc_info['S'],dtype=np.float32) * 10.0 #Attenuation mean factor per sensor
loc_info['N'] = 5 # Number of measurements per sensor
def sample_X(S,M,z,pos_s,A,var_s):
means = -1*A*np.log(np.sum((pos_s-z)**2,1))
X = means.reshape([S,1]) + np.random.randn(S,M) * np.sqrt(var_s)
return X
# Sampling from model for the right target
X = sample_X(loc_info['S'],loc_info['N'], loc_info['target'],loc_info['pos_s'],loc_info['A'],loc_info['var_s'])
plt.plot(loc_info['pos_s'][:,0],loc_info['pos_s'][:,1],'b>',label='Sensors',ms=15)
plt.plot(loc_info['target'][0],loc_info['target'][1],'ro',label='Target',ms=15)
plt.legend()
```
### TensorFlow Computation Graph and Loss Function
```
z_dim = 2 #Latent Space
model_name = 'model1' #In 'model1.py' we define the variational family
learning_rate = 1e-2
num_samples_avg = 1 #Number of samples to approximate the expectation in the ELBO
num_samples = 10 #Number of samples from the posterior (for testing)
num_it = int(1e4) #SGD iterations
period_plot = int(1000) #Show results every period_plot iterations
dims = X.shape #X.shape
sess_VAE = tf.Graph()
with sess_VAE.as_default():
print('[*] Importing model: ' + model_name)
model = __import__(model_name)
print('[*] Defining placeholders')
inputX = tf.placeholder(tf.float32, shape=dims, name='x-input')
print('[*] Defining the encoder')
log_var, mean, samples_z, KL = model.encoder(inputX,dims,z_dim,num_samples_avg)
print('[*] Defining the log-likelihood')
loglik = model.decoder(loc_info,inputX,samples_z,num_samples_avg)
loss = -(loglik-KL)
optim = tf.train.AdamOptimizer(learning_rate).minimize(loss)
# Output dictionary -> Useful if computation graph is defined in a separate .py file
tf_nodes = {}
tf_nodes['X'] = inputX
tf_nodes['mean'] = mean
tf_nodes['logvar'] = log_var
tf_nodes['KL'] = KL
tf_nodes['loglik'] = loglik
tf_nodes['optim'] = optim
tf_nodes['samples'] = samples_z
```
## SGD optimization
```
############ SGD Inference #####################################
mean_list = []
with tf.Session(graph=sess_VAE) as session:
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
tf.global_variables_initializer().run()
print('Training the VAE ...')
for it in range(num_it):
feedDict = {tf_nodes['X'] : X}
_= session.run(tf_nodes['optim'],feedDict)
if(it % period_plot ==0):
mean, logvar,loglik,KL = session.run([tf_nodes['mean'],tf_nodes['logvar'],tf_nodes['loglik'],tf_nodes['KL']],feedDict)
print("It = %d, loglik = %.5f, KL = %.5f" %(it,loglik,KL))
mean_list.append(mean)
samples = session.run(tf_nodes['samples'],feedDict)
#Samples from q(z|x)
m_evol = np.vstack(mean_list)
nsamples = 50
samples = mean + np.sqrt(np.exp(logvar)) * np.random.randn(nsamples,2)
plt.plot(loc_info['pos_s'][:,0],loc_info['pos_s'][:,1],'b>',label='Sensors',ms=15)
plt.plot(loc_info['target'][0],loc_info['target'][1],'ro',label='Target',ms=15)
plt.plot(m_evol[:,0],m_evol[:,1],'g>',label='Post Mean')
plt.scatter(samples[:,0],samples[:,1],label='Post Samples')
plt.rcParams["figure.figsize"] = [8,8]
plt.legend()
```
|
github_jupyter
|
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
Colormap Choices {#colormap_example}
================
Use a Matplotlib, Colorcet, cmocean, or custom colormap when plotting
scalar values.
```
from pyvista import examples
import pyvista as pv
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
```
Any colormap built for `matplotlib`, `colorcet`, or `cmocean` is fully
compatible with PyVista. Colormaps are typically specified by passing
the string name of the colormap to the plotting routine via the `cmap`
argument.
See [Matplotlib's complete list of available colormaps](https://matplotlib.org/tutorials/colors/colormaps.html), [Colorcet's complete list](https://colorcet.holoviz.org/user_guide/index.html), and [cmocean's complete list](https://matplotlib.org/cmocean/).
Custom Made Colormaps
=====================
To get started using a custom colormap, download some data with scalar
values to plot.
```
mesh = examples.download_st_helens().warp_by_scalar()
# Add scalar array with range (0, 100) that correlates with elevation
mesh['values'] = pv.plotting.normalize(mesh['Elevation']) * 100
```
Build a custom colormap - here we make a colormap with 5 discrete colors
and we specify the ranges where those colors fall:
```
# Define the colors we want to use
blue = np.array([12/256, 238/256, 246/256, 1])
black = np.array([11/256, 11/256, 11/256, 1])
grey = np.array([189/256, 189/256, 189/256, 1])
yellow = np.array([255/256, 247/256, 0/256, 1])
red = np.array([1, 0, 0, 1])
mapping = np.linspace(mesh['values'].min(), mesh['values'].max(), 256)
newcolors = np.empty((256, 4))
newcolors[mapping >= 80] = red
newcolors[mapping < 80] = grey
newcolors[mapping < 55] = yellow
newcolors[mapping < 30] = blue
newcolors[mapping < 1] = black
# Make the colormap from the listed colors
my_colormap = ListedColormap(newcolors)
```
Simply pass the colormap to the plotting routine!
```
mesh.plot(scalars='values', cmap=my_colormap)
```
Or you could make a simple colormap... any Matplotlib colormap can be passed to PyVista!
```
boring_cmap = plt.cm.get_cmap("viridis", 5)
mesh.plot(scalars='values', cmap=boring_cmap)
```
You can also pass a list of color strings to the color map. This
approach divides up the colormap into 5 equal parts.
```
mesh.plot(scalars=mesh['values'], cmap=['black', 'blue', 'yellow', 'grey', 'red'])
```
If you still wish to have control over the separation of values, you can do this by creating a scalar array and passing it to the plotter along with the colormap:
```
scalars = np.empty(mesh.n_points)
scalars[mesh['values'] >= 80] = 4 # red
scalars[mesh['values'] < 80] = 3 # grey
scalars[mesh['values'] < 55] = 2 # yellow
scalars[mesh['values'] < 30] = 1 # blue
scalars[mesh['values'] < 1] = 0 # black
mesh.plot(scalars=scalars, cmap=['black', 'blue', 'yellow', 'grey', 'red'])
```
Matplotlib vs. Colorcet
=======================
Let's compare Colorcet's perceptually uniform "fire" colormap to Matplotlib's "hot" colormap, much like the example on the [first page of Colorcet's docs](https://colorcet.holoviz.org/index.html).
The "hot" version washes out detail at the high end, as if the image is overexposed, while "fire" makes detail visible throughout the data range.
Please note that in order to use Colorcet's colormaps, including "fire", you must have Colorcet installed in your Python environment:
`pip install colorcet`
```
p = pv.Plotter(shape=(2, 2), border=False)
p.subplot(0, 0)
p.add_mesh(mesh, scalars='Elevation', cmap="fire",
lighting=True, scalar_bar_args={'title': "Colorcet Fire"})
p.subplot(0, 1)
p.add_mesh(mesh, scalars='Elevation', cmap="fire",
lighting=False, scalar_bar_args={'title': "Colorcet Fire (No Lighting)"})
p.subplot(1, 0)
p.add_mesh(mesh, scalars='Elevation', cmap="hot",
lighting=True, scalar_bar_args={'title': "Matplotlib Hot"})
p.subplot(1, 1)
p.add_mesh(mesh, scalars='Elevation', cmap="hot",
lighting=False, scalar_bar_args={'title': "Matplotlib Hot (No Lighting)"})
p.show()
```
|
github_jupyter
|
# Matplotlib and NumPy crash course
You may install numpy, matplotlib, sklearn, and many other useful packages, e.g. via the Anaconda distribution.
```
import numpy as np
```
## NumPy basics
### Array creation
```
np.array(range(10))
np.ndarray(shape=(5, 4))
np.linspace(0, 1, num=20)
np.arange(0, 20)
np.zeros(shape=(5, 4))
np.ones(shape=(5,4))
```
Possible types of array:
- bool
- various ints
- float, double
- string
```
np.ones(shape=(2, 3), dtype=str)
np.zeros(shape=(2, 3), dtype=bool)
np.savetxt("eye.txt", np.eye(5, 6))
np.loadtxt("eye.txt")
%rm eye.txt
```
## Array operations
```
a = np.linspace(0, 9, num=10)
a + 1
a * a
a - a
print(a.max())
print(a.min())
np.sum(a)
a = np.random.standard_normal(size=(25, ))
a
b = a.reshape((5, 5))
b
b.T
np.sum(b)
print(np.sum(b, axis=1))
print(np.sum(b, axis=0))
### Matrix multiplication
np.dot(b, b)
np.vstack([b, b])
```
### Custom functions
```
def plus(x, y):
return x + y
plus_v = np.vectorize(plus)
plus_v(np.arange(10), np.arange(10, 20))
plus_v(np.arange(10), 10)
@np.vectorize
def plus(x, y):
return x + y
plus(np.arange(10), 10)
```
### Performance
```
N = 10000000
a = np.random.standard_normal(size=N)
b = np.random.standard_normal(size=N)
%%time
a + b
ab = zip(range(N), range(N))
%%time
_ = [ a + b for a, b in ab ]
```
### Slices
```
a = np.arange(15)
a = a.reshape((3,5))
a
# Just a copy of the array
a[:]
a[:, 0]
a[1, :]
a[2, :] = (np.arange(5) + 1) * 10
a
a < 10
a[a < 12]
np.where(a < 12)
xs, ys = np.where(a < 20)
a[xs, ys]
```
## Matplotlib
```
import matplotlib.pyplot as plt
# Don't forget this magic expression if you want to show plots in the notebook
%matplotlib inline
xs = np.arange(100)
ys = np.cumsum(np.random.standard_normal(size=100))
```
### Line plot
```
plt.figure()
plt.plot(xs, ys)
plt.show()
# A little bit of options
plt.figure()
plt.plot(xs, ys, label="1st series", color="green")
plt.plot(xs, ys.max() - ys, label="2nd series", color="red")
plt.legend(loc="upper right")
plt.xlabel("Time, sec")
plt.ylabel("Something")
plt.title("Just two random series")
plt.show()
```
### Bar plot
```
plt.figure()
plt.bar(xs, ys)
plt.show()
plt.figure()
h, bins, patches = plt.hist(ys)
plt.show()
```
### Scatter plot
```
xs1 = np.random.standard_normal(size=100)
ys1 = np.random.standard_normal(size=100)
xs2 = np.random.standard_normal(size=100) + 3
ys2 = np.random.standard_normal(size=100)
plt.scatter(xs1, ys1, label="class1", color="green")
plt.scatter(xs2, ys2, label="class2", color="red")
plt.plot([1.5, 1.5], [-4, 4], linewidth=3)
plt.legend()
```
### Images
```
means=np.array([[-1, 1], [-1, 1]])
stds = np.array([1, 1.1])
@np.vectorize
def normal_density(mx, my, std, x, y):
return np.exp(
-((x - mx) ** 2 + (y - my) ** 2) / 2.0 / std / std
) / std / std
@np.vectorize
def f(x, y):
return np.sum(
normal_density(means[0, :], means[1, :], stds, x, y)
)
mx, my = np.meshgrid(np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
fs = f(mx, my)
plt.contourf(mx, my, fs, 20, cmap=plt.cm.coolwarm)
plt.colorbar()
plt.contour(mx, my, fs, 20, cmap=plt.cm.coolwarm)
plt.colorbar()
plt.matshow(fs)
plt.colorbar()
plt.imshow(fs)
plt.colorbar()
plt.imshow(np.rot90(fs), extent=[-2, 2, -2, 2])
plt.colorbar()
plt.contour(mx, my, fs, 15, colors="black")
```
# Exercises
- load MNIST dataset (a hedged starter sketch follows this list)
- create arrays of features and labels
- write a procedure to plot digits
- calculate mean, std of images for each class, plot the results
- plot distribution of pixel values: general, for different classes
- *find out which pixel has the most information about label (advanced)*
- *make 3D plots using mplot3d or plotly (advanced)*
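A hedged starting point for the first two exercises (a sketch only; it assumes scikit-learn is installed and, for convenience, uses its small bundled 8x8 digits dataset instead of the full MNIST):
```
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
import numpy as np

digits = load_digits()               # small 8x8 grayscale digit images
X, y = digits.data, digits.target    # features (n_samples, 64) and labels

def plot_digit(index):
    """Show one digit image with its label as the title."""
    plt.matshow(X[index].reshape(8, 8), cmap=plt.cm.gray_r)
    plt.title("label = %d" % y[index])
    plt.show()

plot_digit(0)
# Per-class mean pixel value, a first step towards the mean/std exercise
print([X[y == c].mean() for c in np.unique(y)])
```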
|
github_jupyter
|
# vvBoard (虚谷号) WebGPIO Application (Python Client)
How can the vvBoard interact with a phone (App Inventor)?
How can the vvBoard interact with the mPython board (掌控板)?
WebGPIO was created so that the vvBoard can interact quickly with other open-source hardware and programming languages. In short, once a single Python file is running on the vvBoard, you can talk to the board through a Web API: you can read the level of every pin of the onboard Arduino, and you can control every pin as well.
## 1. API overview
Run "webgpio.py" on the vvBoard. You can also rename "webgpio.py" to "main.py" and copy it to the vvBoard's Python directory; the vvBoard will then run it automatically every time it boots.
Download: https://github.com/vvlink/vvBoard-docs/tree/master/webgpio
Web API address:
http://[vvBoard IP]:1024/
Note: the examples below assume the vvBoard's IP address is 192.168.1.101.
### 1.1 Reading pin state
Method: GET
Example parameters: { pin:"D2" }
Example URL: http://192.168.1.101:1024/?pin=D2
Response:
When pin is D0-D13, the digital value of the pin is returned: 0 means LOW, 1 means HIGH.
{ "pin":"D1", "error_code":0, "msg":1 }
When pin is A0-A5, the analog value of the pin is returned, in the range 0-255.
{ "pin":"A0", "error_code":0, "msg":255 }
### 1.2 Controlling a pin
Method: POST
Example parameters:
{ pin:"D1" value:255 type:"digital" }
Note: the words Digital, Analog, and Servo are case-insensitive, and the numbers 1, 2, and 3 can be used instead.
- When type is digital, the pin level is set to value: 0 means LOW, any non-zero value means HIGH;
- When type is analog, the PWM value of the pin is set to value, i.e. between 0 and 255;
- When type is servo, the angle of a servo attached to the pin is set to value, i.e. between 0 and 180.
Response:
{ "pin":"D2", "error_code":0, "msg":"success,set [pin] to [value] with [types] mode" }
When pin is not one of D0-D13 or A0-A5:
{ "pin":"D2", "error_code":1, "msg":"error,invalid Pin" }
When value cannot be converted to an integer:
{ "pin":"D2", "error_code":1, "msg":"error,Value is wrong" }
When type is invalid:
{ "pin":"D2", "error_code":1, "msg":"error,Type is wrong" }
## 2. Client code examples (Python)
Any tool that can send HTTP requests (a browser, Word, the mPython board, a phone, and so on) can interact with the vvBoard. Here we write a small demo in Python: sending HTTP requests with the Requests library is very convenient, and both the params and data modes of passing parameters are supported.
### 2.1 Using POST to control a pin on the vvBoard
Parameters you can change in this example:
- url: set to the vvBoard's IP address
- pin: the target pin, A0-A5 or D0-D13
- value: the value to set
- type: the control type, 1, 2, or 3, standing for "digital", "analog", and "servo"
When pin D13 is set to 1, the LED attached to that pin lights up.
```
import requests
vvboardip='192.168.3.42'
pin='D13'
value=1
t=1
payload = {"pin":pin,'value':value,'type':t}
re = requests.post(url='http://'+ vvboardip +':1024/',params=payload)
if (re.status_code==200):
r=re.json()
print('Control command sent successfully: ' + r["msg"])
print('Raw response:')
print(re.text)
```
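As a further illustration of the servo type described above (a sketch only: it assumes a servo is attached to pin D9 and reuses the same hypothetical IP address), value is then interpreted as an angle:
```
import requests

vvboardip = '192.168.3.42'  # hypothetical vvBoard IP, as above
# type 3 ("servo"): value is interpreted as an angle between 0 and 180 degrees
payload = {"pin": "D9", "value": 90, "type": 3}
re = requests.post(url='http://' + vvboardip + ':1024/', params=payload)
print(re.text)  # on success the API returns error_code 0 and a "success,..." message
```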
### 2.2 Using GET to read the level of pin A0
Parameters you can change in this example:
- url: set to the vvBoard's IP address
- pin: the target pin, A0-A5 or D0-D13
Note: this method needs an external sensor attached; otherwise digital pins default to LOW and analog pins return random values.
```
import requests
vvboardip='192.168.3.42'
pin='A0'
payload = {"pin":pin}
re = requests.get(url='http://'+ vvboardip +':1024/',params=payload)
if (re.status_code==200):
r=re.json()
print('Successfully read pin ' + r["pin"] + ', state: ' + str(r["msg"]))
print('Raw response:')
print(re.text)
```
## 3. Other notes
1. How can you control the board quickly from a phone?
Visit http://192.168.3.42:1024/help/
to test the API directly in a web page.
2. How can App Inventor use this API to interact with the vvBoard?
Examples are provided on GitHub:
https://github.com/vvlink/vvBoard-docs/tree/master/webgpio
3. How can the mPython board (掌控板) use this API to interact with the vvBoard?
The mPython board ships with the urequests library, so applications that send HTTP requests can be written in the mPython software.
In addition, the mPython board provides WebtinyIO, which works in essentially the same way as the vvBoard's WebGPIO.
|
github_jupyter
|
# First BigQuery ML models for Taxifare Prediction
In this notebook, we will use BigQuery ML to build our first models for taxifare prediction. BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets.
## Learning Objectives
1. Choose the correct BigQuery ML model type and specify options
2. Evaluate the performance of your ML model
3. Improve model performance through data quality cleanup
4. Create a Deep Neural Network (DNN) using SQL
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/first_model.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
We'll start by creating a dataset to hold all the models we create in BigQuery
### Import libraries
```
import os
```
### Set environment variables
```
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
```
## Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __serverlessml__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
```
%%bash
## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
echo -e "\nHere are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
```
## Model 1: Raw data
Let's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this.
The model will take a minute or so to train. When it comes to ML, this is blazing fast.
```
%%bigquery
CREATE OR REPLACE MODEL
serverlessml.model1_rawdata
OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg') AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 100000) = 1
```
Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook.
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data:
```
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)
```
Let's report just the error we care about, the Root Mean Squared Error (RMSE)
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model1_rawdata)
```
We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6.
Note that the error is going to depend on the dataset that we evaluate it on.
We can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent).
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model1_rawdata, (
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 100000) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
))
```
## Model 2: Apply data cleanup
Recall that we did some data cleanup in the previous lab. Let's do those before training.
This is a dataset that we will need quite frequently in this notebook, so let's extract it first.
```
%%bigquery
CREATE OR REPLACE TABLE
serverlessml.cleaned_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 100000) = 1
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM serverlessml.cleaned_training_data
LIMIT 0
%%bigquery
CREATE OR REPLACE MODEL
serverlessml.model2_cleanup
OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg') AS
SELECT
*
FROM
serverlessml.cleaned_training_data
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model2_cleanup)
```
## Model 3: More sophisticated models
What if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery:
### DNN
To create a DNN, simply specify __dnn_regressor__ for the model_type and add your hidden layers.
```
%%bigquery
-- This model type is in alpha, so it may not work for you yet.
-- This training takes on the order of 15 minutes.
CREATE OR REPLACE MODEL
serverlessml.model3b_dnn
OPTIONS(input_label_cols=['fare_amount'],
model_type='dnn_regressor', hidden_units=[32, 8]) AS
SELECT
*
FROM
serverlessml.cleaned_training_data
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model3b_dnn)
```
Nice!
## Evaluate DNN on benchmark dataset
Let's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You can not compare two models unless you have run them on the same withheld data.
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model3b_dnn, (
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers,
'unused' AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
))
```
Wow! Later in this sequence of notebooks, we will get to below $4, but this is quite good, for very little work.
In this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. The speed of BigQuery ML is very attractive for development.
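As a final hedged illustration (not part of the original lab), a trained BigQuery ML model can also be queried directly with ML.PREDICT. A minimal sketch against the model2_cleanup model above, using made-up pickup and dropoff coordinates:
```
%%bigquery
SELECT
  predicted_fare_amount
FROM
  ML.PREDICT(MODEL serverlessml.model2_cleanup, (
    SELECT
      -74.00 AS pickuplon,
      40.74 AS pickuplat,
      -73.98 AS dropofflon,
      40.75 AS dropofflat,
      2.0 AS passengers
  ))
```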
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
github_jupyter
|
```
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict
from scipy.optimize import minimize
import networkx as nx
from networkx.generators.random_graphs import erdos_renyi_graph
from IPython.display import Image
from qiskit import QuantumCircuit, execute, Aer
from qiskit.tools.visualization import circuit_drawer, plot_histogram
from quantuminspire.credentials import get_authentication
from quantuminspire.api import QuantumInspireAPI
from quantuminspire.qiskit import QI
QI_URL = 'https://api.quantum-inspire.com/'
```
In this notebook you will apply what you have just learned about cqasm and Quantum Inspire. We will consider a simple quantum algorithm, the quantum approximate optimization algorithm (QAOA), for which you will code the circuit in cqasm and send some jobs to real quantum hardware on the Quantum Inspire platform.
## 1. Recap: QAOA and MAXCUT
### Introduction to the Quantum Approximate Optimization Algorithm
$$\newcommand{\ket}[1]{\left|{#1}\right\rangle}$$
$$\newcommand{\bra}[1]{\left\langle{#1}\right|}$$
$$\newcommand{\braket}[2]{\left\langle{#1}\middle|{#2}\right\rangle}$$
Consider some combinatorial optimization problem with objective function $C:x\rightarrow \mathbb{R}$ acting on $n$-bit strings $x\in \{0,1\}^n$, domain $\mathcal{D} \subseteq \{0,1\}^n$, and objective
\begin{align}
\max_{x \in \mathcal{D}} C(x).
\end{align}
In maximization, an approximate optimization algorithm aims to find a string $x'$ that achieves a desired approximation ratio $\alpha$, i.e.
\begin{equation}
\frac{C(x')}{C^*}\geq \alpha,
\end{equation}
where $C^* = \max_{x \in \mathcal{D}} C(x)$.
In QAOA, such combinatorial optimization problems are encoded into a cost Hamiltonian $H_C$, a mixing Hamiltonian $H_M$ and some initial quantum state $\ket{\psi_0}$. The cost Hamiltonian is diagonal in the computational basis by design, and represents $C$ if its eigenvalues satisfy
\begin{align}
H_C \ket{x} = C(x) \ket{x} \text{ for all } x \in \{0,1\}^n.
\end{align}
The mixing Hamiltonian $H_M$ depends on $\mathcal{D}$ and its structure, and is in the unconstrained case (i.e. when $\mathcal{D}=\{0,1\}^n$) usually taken to be the transverse field Hamiltonian $H_M = \sum_{j} X_j$. Constraints (i.e. when $\mathcal{D}\subset \{0,1\}^n$) can be incorporated directly into the mixing Hamiltonian or are added as a penalty function in the cost Hamiltonian. The initial quantum state $\ket{\psi_0}$ is usually taken as the uniform superposition over all possible states in the domain. $\text{QAOA}_p$, parametrized in $\gamma=(\gamma_0,\gamma_1,\dots,\gamma_{p-1}),\beta=(\beta_0,\beta_1,\dots,\beta_{p-1})$, refers to a level-$p$ QAOA circuit that applies $p$ steps of alternating time evolutions of the cost and mixing Hamiltonians on the initial state. At step $k$, the unitaries of the time evolutions are given by
\begin{align}
U_C(\gamma_k) = e^{-i \gamma_k H_C }, \label{eq:UC} \\
U_M(\beta_k) = e^{-i \beta_k H_M }. \label{eq:UM}
\end{align}
So the final state $\ket{\gamma,\beta}$ of $\text{QAOA}_p$ is given by
\begin{align}
\ket{\gamma,\beta} = \prod_{k=0}^{p-1} U_M(\beta_k) U_C(\gamma_k) \ket{\psi_0}.
\end{align}
The expectation value $ F_p(\gamma,\beta)$ of the cost Hamiltonian for state $\ket{\gamma,\beta}$ is given by
\begin{align}
F_p(\gamma,\beta) =
\bra{\gamma,\beta}H_C\ket{\gamma,\beta},
\label{eq:Fp}
\end{align}
and can be statistically estimated by taking samples of $\ket{\gamma,\beta}$. The achieved approximation ratio (in expectation) of $\text{QAOA}_p$ is then
\begin{equation}
\alpha = \frac{F_p(\gamma,\beta)}{C^*}.
\end{equation}
The parameter combinations of $\gamma,\beta$ are usually found through a classical optimization procedure that uses $F_p(\gamma,\beta)$ as a black-box function to be maximized.
### Example application: MAXCUT
MaxCut is an NP-hard optimisation problem that looks for an optimal 'cut' for a graph $G(V,E)$, in the sense that the cut generates a subset of nodes $S \subset V$ that shares the largest amount of edges with its complement $ V\setminus S$. In slightly modified form (omitting the constant), it has the following objective function
\begin{align}
\max_{s} \frac{1}{2} \sum_{
\langle i,j \rangle \in E} 1-s_i s_j,
\end{align}
where the $s_i\in\{-1,1\}$ are the variables and $i,j$ are the edge indices. This function can be easily converted into an Ising cost Hamiltonian, which takes the form
\begin{align}
H_C = \frac{1}{2}\sum_{\langle i,j\rangle \in E} I-Z_i Z_j.
\end{align}
We use the standard mixing Hamiltonian that sums over all nodes:
\begin{align}
H_M = \sum_{v \in V} X_v.
\end{align}
As the initial state $\ket{\Psi_0}$ we take the uniform superposition, given by
\begin{align}
\ket{\psi_0} = \frac{1}{\sqrt{2^{|V|}}}\sum_{x=0}^{2^{|V|}-1} \ket{x}
\end{align}
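As a quick worked example (added here for concreteness), take the triangle graph used later in this notebook, with edges $\{(0,1),(0,2),(1,2)\}$. The assignment $s=(+1,+1,-1)$ cuts the two edges touching node 2, so
\begin{align}
C(s) = \frac{1}{2}\big[(1-s_0 s_1)+(1-s_0 s_2)+(1-s_1 s_2)\big] = \frac{1}{2}\big[0+2+2\big] = 2,
\end{align}
which is also the optimum $C^* = 2$, since no assignment can cut all three edges of a triangle.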
The goal of this workshop is to guide you through implemented code that simulates a small quantum computer running the QAOA algorithm applied to the MAXCUT problem. We will use qiskit as well as cqasm as SDKs. For the sake of run time, you will always run the classical optimization part using the qiskit simulator: it would take too long for our purposes to do the actual function evaluations in the classical optimization step on the hardware.
## 2. Some useful functions and intializations
We first define some useful functions to be used later throughout the code.
```
# Just some function to draw graphs
def draw_cc_graph(G,node_color='b',fig_size=4):
plt.figure(figsize=(fig_size,fig_size))
nx.draw(G, G.pos,
node_color= node_color,
with_labels=True,
node_size=1000,font_size=14)
plt.show()
# Define the objective function
def maxcut_obj(x,G):
cut = 0
for i, j in G.edges():
if x[i] != x[j]:
# the edge is cut, negative value in agreement with the optimizer (which is a minimizer)
cut -= 1
return cut
# Brute force method
def brute_force(G):
n = len(G.nodes)
costs = np.zeros(0)
costs=[]
for i in range(2**n):
calc_costs = -1*maxcut_obj(bin(i)[2:].zfill(n),G)
costs.append(calc_costs)
max_costs_bf = max(costs)
index_max = costs.index(max(costs))
max_sol_bf = bin(index_max)[2:].zfill(n)
return max_costs_bf, max_sol_bf,costs
# Generating the distribution resulting from random guessing the solution
def random_guessing_dist(G):
dictio= dict()
n = len(G.nodes())
for i in range(2**n):
key = bin(i)[2:].zfill(n)
dictio[key] = maxcut_obj(bin(i)[2:].zfill(n),G)
RG_energies_dist = defaultdict(int)
for x in dictio:
RG_energies_dist[maxcut_obj(x,G)] += 1
return RG_energies_dist
# Visualize multiple distributions
def plot_E_distributions(E_dists,p,labels):
plt.figure()
x_min = 1000
x_max = - 1000
width = 0.25/len(E_dists)
for index,E_dist in enumerate(E_dists):
pos = width*index-width*len(E_dists)/4
label = labels[index]
X_list,Y_list = zip(*E_dist.items())
X = -np.asarray(X_list)
Y = np.asarray(Y_list)
plt.bar(X + pos, Y/np.sum(Y), color = 'C'+str(index), width = width,label= label+', $p=$'+str(p))
if np.min(X)<x_min:
x_min = np.min(X)
if np.max(X)>x_max:
x_max = np.max(X)
plt.xticks(np.arange(x_min,x_max+1))
plt.legend()
plt.xlabel('Objective function value')
plt.ylabel('Probability')
plt.show()
# Determine the expected objective function value from the random guessing distribution
def energy_random_guessing(RG_energies_dist):
energy_random_guessing = 0
total_count = 0
for energy in RG_energies_dist.keys():
count = RG_energies_dist[energy]
energy_random_guessing += energy*count
total_count += count
energy_random_guessing = energy_random_guessing/total_count
return energy_random_guessing
```
### Test instances
```
w2 = np.matrix([
[0, 1],
[1, 0]])
G2 = nx.from_numpy_matrix(w2)
positions = nx.circular_layout(G2)
G2.pos=positions
print('G2:')
draw_cc_graph(G2)
w3 = np.matrix([
[0, 1, 1],
[1, 0, 1],
[1, 1, 0]])
G3 = nx.from_numpy_matrix(w3)
positions = nx.circular_layout(G3)
G3.pos=positions
print('G3:')
draw_cc_graph(G3)
```
## 3. Circuit generators
We provide you with an example written in qiskit. You have to write the one for cqasm yourself.
### Qiskit generators
```
class Qiskit(object):
# Cost operator:
def get_cost_operator_circuit(G, gamma):
N = G.number_of_nodes()
qc = QuantumCircuit(N,N)
for i, j in G.edges():
qc.cx(i,j)
qc.rz(2*gamma, j)
qc.cx(i,j)
return qc
# Mixing operator
def get_mixer_operator_circuit(G, beta):
N = G.number_of_nodes()
qc = QuantumCircuit(N,N)
for n in G.nodes():
qc.rx(2*beta, n)
return qc
# Build the circuit:
def get_qaoa_circuit(G, beta, gamma):
assert(len(beta) == len(gamma))
p = len(beta) # number of unitary operations
N = G.number_of_nodes()
qc = QuantumCircuit(N,N)
# first step: apply Hadamards to obtain uniform superposition
qc.h(range(N))
# second step: apply p alternating operators
for i in range(p):
qc.compose(Qiskit.get_cost_operator_circuit(G,gamma[i]),inplace=True)
qc.compose(Qiskit.get_mixer_operator_circuit(G,beta[i]),inplace=True)
# final step: measure the result
qc.barrier(range(N))
qc.measure(range(N), range(N))
return qc
# Show the circuit for the G3 (triangle) graph
p = 1
beta = np.random.rand(p)*2*np.pi
gamma = np.random.rand(p)*2*np.pi
qc = Qiskit.get_qaoa_circuit(G3,beta, gamma)
qc.draw(output='mpl')
```
### cqasm generators
Now it is up to you to apply what we have learned about cqasm to write the script for the cost and mixing operators:
```
class Cqasm(object):
### We give them this part
def get_qasm_header(N_qubits):
"""
Create cQASM header for `N_qubits` qubits and prepare all in |0>-state.
"""
header = f"""
version 1.0
qubits {N_qubits}
prep_z q[0:{N_qubits-1}]
"""
return header
def get_cost_operator(graph, gamma, p=1):
"""
Create cost operator for given angle `gamma`.
"""
layer_list = graph.number_of_edges()*[None]
for n, (i,j) in enumerate(graph.edges()):
layer_list[n] = '\n'.join([f"CNOT q[{i}], q[{j}]",
f"Rz q[{j}], {2*gamma}",
f"CNOT q[{i}], q[{j}]"])
return f".U_gamma_{p}\n" + '\n'.join(layer_list) + '\n'
def get_mixing_operator(graph, beta, p=1):
"""
Create mixing operator for given angle `beta`.
Use parallel application of single qubit gates.
"""
U_beta = "{" + ' | '.join([f"Rx q[{i}], {2*beta}" for i in graph.nodes()]) + "}"
return f".U_beta_{p}\n" + U_beta + '\n'
def get_qaoa_circuit(graph, beta, gamma):
"""
Create full QAOA circuit for given `graph` and angles `beta` and `gamma`.
"""
assert len(beta) == len(gamma)
p = len(beta) # number of layers
N_qubits = graph.number_of_nodes()
circuit_str = Cqasm.get_qasm_header(5) #N_qubits)
# first step: apply Hadamards to obtain uniform superposition
circuit_str += "{" + ' | '.join([f"H q[{i}]" for i in graph.nodes()]) + "}\n\n"
# second step: apply p alternating operators
circuit_str += '\n'.join([Cqasm.get_cost_operator(graph, gamma[i], i+1)
+ Cqasm.get_mixing_operator(graph, beta[i], i+1) for i in range(p)])
# final step: measure the result
circuit_str += "\n"
circuit_str += "measure_all"
return circuit_str
```
## 4. Hybrid-quantum classical optimization
Since QAOA is usually adopted as a hybrid quantum-classical algorithm, we need to construct an outer loop which optimizes the estimated $\bra{\gamma,\beta}H\ket{\gamma,\beta}$.
```
# Black-box function that describes the energy output of the QAOA quantum circuit
def get_black_box_objective(G, p, SDK = 'qiskit', backend = None, shots=2**10):
if SDK == 'cqasm':
if not backend:
backend = 'QX single-node simulator'
backend_type = qi.get_backend_type_by_name(backend)
def f(theta):
# first half is betas, second half is gammas
beta = theta[:p]
gamma = theta[p:]
qc = Cqasm.get_qaoa_circuit(G, beta, gamma)
result = qi.execute_qasm(qc, backend_type=backend_type, number_of_shots=shots)
counts = result['histogram']
# return the energy
return compute_maxcut_energy(counts, G)
elif SDK == 'qiskit':
if not backend:
backend = 'qasm_simulator'
backend = Aer.get_backend(backend)
def f(theta):
# first half is betas, second half is gammas
beta = theta[:p]
gamma = theta[p:]
qc = Qiskit.get_qaoa_circuit(G,beta, gamma)
counts = execute(qc, backend,shots=shots).result().get_counts()
# return the energy
return compute_maxcut_energy(counts, G)
else:
return 'error: SDK not found'
return f
# Estimate the expectation value based on the circuit output
def compute_maxcut_energy(counts, G):
energy = 0
total_counts = 0
for meas, meas_count in counts.items():
obj_for_meas = maxcut_obj(meas, G)
energy += obj_for_meas * meas_count
total_counts += meas_count
return energy / total_counts
```
## 5. A simple instance on the quantum inspire platform: 2-qubit case
Let us first consider the most simple MAXCUT instance. We have just two nodes, and an optimal cut with objective value 1 would be to place both nodes in its own set.
```
G=G2
max_costs_bf, max_sol_bf,costs = brute_force(G)
print("brute force method best cut: ",max_costs_bf)
print("best string brute force method:",max_sol_bf)
colors = ['red' if x == '0' else 'b' for x in max_sol_bf]
draw_cc_graph(G,node_color = colors)
```
Using qiskit, the circuit would look the following:
```
# Test and show circuit for some beta,gamma
p = 1
beta = np.random.rand(p)*np.pi
gamma = np.random.rand(p)*2*np.pi
qc = Qiskit.get_qaoa_circuit(G,beta, gamma)
qc.draw(output='mpl')
```
Now let's run our hybrid-quantum algorithm simulation using qiskit:
```
# Parameters that can be changed:
p = 1
lb = np.zeros(2*p)
ub = np.hstack([np.full(p, np.pi), np.full(p, 2*np.pi)])
init_point = np.random.uniform(lb, ub, 2*p)
shots = 2**10
optimiser = 'COBYLA'
max_iter = 100
# Training of the parameters beta and gamma
obj = get_black_box_objective(G,p,SDK='qiskit',shots=shots)
# Lower and upper bounds: beta \in {0, pi}, gamma \in {0, 2*pi}
bounds = [lb,ub]
# Maximum number of iterations: 100
res = minimize(obj, init_point, method=optimiser, bounds = bounds, options={'maxiter':max_iter, 'disp': True})
print(res)
#Determine the approximation ratio:
print('Approximation ratio is',-res['fun']/max_costs_bf)
# Extract the optimal values for beta and gamma and run a new circuit with these parameters
optimal_theta = res['x']
qc = Qiskit.get_qaoa_circuit(G, optimal_theta[:p], optimal_theta[p:])
counts = execute(qc,backend = Aer.get_backend('qasm_simulator'),shots=shots).result().get_counts()
plt.bar(counts.keys(), counts.values())
plt.xlabel('String')
plt.ylabel('Count')
plt.show()
RG_dist = random_guessing_dist(G)
# Measurement distribution
E_dist = defaultdict(int)
for k, v in counts.items():
E_dist[maxcut_obj(k,G)] += v
plot_E_distributions([E_dist,RG_dist],p,['Qiskit','random guessing'])
E_random_guessing = energy_random_guessing(RG_dist)
print('Energy from random guessing is', E_random_guessing)
X_list,Y_list = zip(*E_dist.items())
X = -np.asarray(X_list)
Y = np.asarray(Y_list)
print('Probability of measuring the optimal solution is',Y[np.argmax(X)]/shots)
```
Now that we have obtained some good values for $\beta$ and $\gamma$ through classical simulation, let's see what Starmon-5 would give us.
The figure below shows the topology of Starmon-5. Since q0 is not connected to q1, we have to relabel the nodes. Networkx has such an option: by using 'nx.relabel_nodes(G, {1: 2})' we can relabel node 1 as node 2. Since q0 is connected to q2, this does allow us to run our cqasm code on Starmon-5. For qiskit, this step is irrelevant as we have all-to-all connectivity in the simulation.
```
Image(filename='Starmon5.png')
qc_Cqasm = Cqasm.get_qaoa_circuit(nx.relabel_nodes(G, {1: 2}), optimal_theta[:p], optimal_theta[p:])
print(qc_Cqasm)
```
Now we run the Cqasm-circuit on the Starmon-5 Hardware.
```
authentication = get_authentication()
QI.set_authentication(authentication, QI_URL)
qiapi = QuantumInspireAPI(QI_URL, authentication)
result = qiapi.execute_qasm(qc_Cqasm, backend_type=qiapi.get_backend_type('Starmon-5'), number_of_shots=2**10)
counts_QI = result['histogram']
```
Inspecting 'counts_QI', we see that it returns the integer corresponding to the bit string result of the measurement
```
counts_QI
```
Note that we measure more than just the two relevant qubits, since we used the 'measure_all' command in the cqasm code. The distribution over the strings looks as follows:
```
counts_bin = {}
for k,v in counts_QI.items():
counts_bin[f'{int(k):05b}'] = v
print(counts_bin)
plt.bar(counts_bin.keys(), counts_bin.values())
plt.xlabel('State')
plt.ylabel('Measurement probability')
plt.xticks(rotation='vertical')
plt.show()
```
Let's create another counts dictionary with only the relevant qubits, which are q0 and q2:
```
counts_bin_red = defaultdict(float)
for string in counts_bin:
q0 = string[-1]
q1 = string[-3]
counts_bin_red[(q0+q1)]+=counts_bin[string]
counts_bin_red
```
We now plot all distributions (qiskit, Starmon-5, and random guessing) in a single plot.
```
#Determine the approximation ratio:
print('Approximation ratio on the hardware is',-compute_maxcut_energy(counts_bin_red,G)/max_costs_bf)
# Random guessing distribution
RG_dist = random_guessing_dist(G)
# Measurement distribution
E_dist_S5 = defaultdict(int)
for k, v in counts_bin_red.items():
E_dist_S5[maxcut_obj(k,G)] += v
plot_E_distributions([E_dist,E_dist_S5,RG_dist],p,['Qiskit','Starmon-5','random guessing'])
X_list,Y_list = zip(*E_dist_S5.items())
X = -np.asarray(X_list)
Y = np.asarray(Y_list)
print('Probability of measuring the optimal solution is',Y[np.argmax(X)])
E_random_guessing = energy_random_guessing(RG_dist)
print('Expected approximation ratio random guessing is', -E_random_guessing/max_costs_bf)
```
## 6. Compilation issues: the triangle graph
For the graph with just two nodes we already had some minor compilation issues, but this was easily fixed by relabeling the nodes. We will now consider an example for which relabeling is simply not good enough to get it mapped to the Starmon-5 topology.
```
G=G3
max_costs_bf, max_sol_bf,costs = brute_force(G)
print("brute force method best cut: ",max_costs_bf)
print("best string brute force method:",max_sol_bf)
colors = ['red' if x == '0' else 'b' for x in max_sol_bf]
draw_cc_graph(G,node_color = colors)
```
Due to the topology of Starmon-5 this graph cannot be executed without any SWAPs. Therefore, we ask you to write a new circuit generator that uses SWAPs in order to make the algorithm work with the Starmon-5 topology. Let's also swap back to the original graph configuration, so that in the end we measure only the qubits that correspond to a node in the graph (this is already written for you).
```
def QAOA_triangle_circuit_cqasm(graph, beta, gamma):
circuit_str = Cqasm.get_qasm_header(5)
circuit_str += "{" + ' | '.join([f"H q[{i}]" for i in graph.nodes()]) + "}\n\n"
def get_triangle_cost_operator(graph, gamma, p):
layer_list = graph.number_of_edges() * [None]
for n, edge in enumerate(graph.edges()):
if 0 in edge and 1 in edge:
layer_list[n] = '\n'.join([f"SWAP q[{edge[0]}], q[2]",
f"CNOT q[2], q[{edge[1]}]",
f"Rz q[{edge[1]}], {2*gamma}",
f"CNOT q[2], q[{edge[1]}]",
f"SWAP q[{edge[0]}], q[2]" ])
else:
layer_list[n] = '\n'.join([f"CNOT q[{edge[0]}], q[{edge[1]}]",
f"Rz q[{edge[1]}], {2*gamma}",
f"CNOT q[{edge[0]}], q[{edge[1]}]"])
return f".U_gamma_{p}\n" + '\n'.join(layer_list) + '\n'
circuit_str += '\n'.join([get_triangle_cost_operator(graph, gamma[i], i+1)
+ Cqasm.get_mixing_operator(graph, beta[i], i+1) for i in range(p)])
circuit_str += "\n"
circuit_str += "{" + ' | '.join([f"measure q[{i}]" for i in graph.nodes()]) + "}\n"
return circuit_str
```
We now run the same procedure as before to obtain good parameter values
```
# Parameters that can be changed:
p = 1
lb = np.zeros(2*p)
ub = np.hstack([np.full(p, np.pi), np.full(p, 2*np.pi)])
init_point = np.random.uniform(lb, ub, 2*p)
shots = 2**10
optimiser = 'COBYLA'
max_iter = 100
# Training of the parameters beta and gamma
obj = get_black_box_objective(G,p,SDK='qiskit',shots=shots)
# Lower and upper bounds: beta \in {0, pi}, gamma \in {0, 2*pi}
bounds = [lb,ub]
# Maximum number of iterations: 100
res = minimize(obj, init_point, method=optimiser, bounds = bounds,options={'maxiter':max_iter, 'disp': True})
print(res)
#Determine the approximation ratio:
print('Approximation ratio is',-res['fun']/max_costs_bf)
# Extract the optimal values for beta and gamma and run a new circuit with these parameters
optimal_theta = res['x']
qc = Qiskit.get_qaoa_circuit(G, optimal_theta[:p], optimal_theta[p:])
counts = execute(qc,backend = Aer.get_backend('qasm_simulator'),shots=shots).result().get_counts()
# Random guessing distribution
RG_dist = random_guessing_dist(G)
# Measurement distribution
E_dist = defaultdict(int)
for k, v in counts.items():
E_dist[maxcut_obj(k,G)] += v
X_list,Y_list = zip(*E_dist.items())
X = -np.asarray(X_list)
Y = np.asarray(Y_list)
print('Probability of measuring the optimal solution is',Y[np.argmax(X)]/shots)
E_random_guessing = energy_random_guessing(RG_dist)
print('Expected approximation ratio random guessing is', -E_random_guessing/max_costs_bf)
plt.bar(counts.keys(), counts.values())
plt.xlabel('String')
plt.ylabel('Count')
plt.show()
```
Let's run it on Starmon-5 again!
```
# Extract the optimal values for beta and gamma and run a new circuit with these parameters
optimal_theta = res['x']
qasm_circuit = QAOA_triangle_circuit_cqasm(G, optimal_theta[:p], optimal_theta[p:])
qiapi = QuantumInspireAPI(QI_URL, authentication)
result = qiapi.execute_qasm(qasm_circuit, backend_type=qiapi.get_backend_type('Starmon-5'), number_of_shots=shots)
counts = result['histogram']
print(qasm_circuit)
print(result)
counts
counts_bin = {}
for k,v in counts.items():
counts_bin[f'{int(k):03b}'] = v
print(counts_bin)
plt.bar(counts_bin.keys(), counts_bin.values())
plt.xlabel('String')
plt.ylabel('Probability')
plt.show()
#Determine the approximation ratio:
print('Approximation ratio on the hardware is',-compute_maxcut_energy(counts_bin,G)/max_costs_bf)
# Random guessing distribution
RG_dist = random_guessing_dist(G)
# Measurement distribution
E_dist_S5 = defaultdict(int)
for k, v in counts_bin.items():
E_dist_S5[maxcut_obj(k,G)] += v
plot_E_distributions([E_dist,E_dist_S5,RG_dist],p,['Qiskit','Starmon-5','random guessing'])
X_list,Y_list = zip(*E_dist_S5.items())
X = -np.asarray(X_list)
Y = np.asarray(Y_list)
print('Probability of measuring the optimal solution is',Y[np.argmax(X)])
E_random_guessing = energy_random_guessing(RG_dist)
print('Expected approximation ratio random guessing is', -E_random_guessing/max_costs_bf)
```
## 7. More advanced questions
Some questions you could look at:
- What is the performance on other graph instances?
- How scalable is this hardware for larger problem sizes?
- How much can the circuit be optimized for certain graph instances?
- Are the errors perfectly random or is there some correlation?
- Are there tricks to find good parameters?
|
github_jupyter
|
##### Detection and Location Chain
**Abstract**: This hackathon project represents our effort to combine our existing machine learning and photogrammetry work with both cloud- and edge-based solutions built on Xilinx FPGA acceleration.
The Trimble team decided that the Xilinx hackathon would provide an excellent opportunity to take the first steps in combining these technologies and learning how to use the various Xilinx technologies.
Our initial hope was to use a TensorFlow system based on an AWS UltraScale instance to provide the machine learning component of our test. That technology was unavailable for the hackathon, so during the event we trained a system on a more standard AWS TensorFlow instance and accessed it via Pynq networking.
The Team Trimble is composed of
* Roy Godzdanker – Trimble Product Architect for ICT
* Robert Banefield – Trimble Data Machine Learning Specialist
* Vinod Khare – Trimble ICT Photogrammetry
* Ashish Khare – Trimble Geomatics Photogrammetry
* Young-Jin Lee – Trimble ICT Photogrammetry
* Matt Compton - Trimble ICT Design Engineer
_NOTES_:
1. The TensorFlow system is sitting on an AWS instance. This is the slow and simple one for my debug effort. In the spirit of the hackathon, we started training it at the beginning of the night, which implies that its capabilities were not exceptional early on and will improve as the newly trained net is swapped in in the morning. Further tests back at the ranch will include testing this chain against some of the other theoretical models. The current net underperforms some previous efforts; further exploration is needed here.
2. We also need to explore the TensorFlow element as an edge device. Advances in Xilinx FPGA tools may make that cost-competitive with a GPU box.
3. Xilinx HLS looks to be able to add needed acceleration functions, but this needs further exploration going forward. We explored the idea of an overlay with Python-controlled DMA, which is very promising.
The following are globals used within this project. To change to a different image set, simply change the images indicated and run through the notebook again.
1. Camera data is sent to the system from a remote repository.
2. The camera data is sent to the Pynq to begin processing.
3. The TensorFlow cloud delivers metadata for the images that were transferred to it back to the Pynq via a network transfer.
4. The Pynq software uses the photogrammetric OpenCV software chain that we wrote to estimate and calculate geometric position. In addition, images are displayed on the HDMI monitor and LCD display so we can see what is going on and to serve as a debug aid.
5. The calculated position of the object is returned.
```
## Imports
import cv2
import json
import matplotlib.pyplot as pyplot
import numpy
import matplotlib.patches as patches
import pynq.overlays.base
import pynq.lib.arduino as arduino
import pynq.lib.video as video
import requests
import scipy
import sys
import PIL
## Config
gAWS_TENSORFLOW_INSTANCE = 'http://34.202.159.80'
gCAMERA0_IMAGE = "/home/xilinx/jupyter_notebooks/trimble-mp/CAM2_image_0032.jpg"
gCAMERA1_IMAGE = "/home/xilinx/jupyter_notebooks/trimble-mp/CAM3_image_0032.jpg"
```
Turn on the HDMI coming off the pink board. This is used in a fashion that is different from the primary test notes and may be difficult to complete during the time period. Specifically, the HDMI out is used without the input.
```
base = pynq.overlays.base.BaseOverlay("base.bit")
hdmi_in = base.video.hdmi_in
hdmi_out = base.video.hdmi_out
v = video.VideoMode(1920,1080,24)
hdmi_out.configure(v, video.PIXEL_BGR)
hdmi_out.start()
outframe = hdmi_out.newframe()
```
Pull in the chosen image for Camera 0 using OpenCV
```
# Read images
image0BGR = cv2.imread(gCAMERA0_IMAGE)
image1BGR = cv2.imread(gCAMERA1_IMAGE)
image0 = image0BGR[...,::-1]
image1 = image1BGR[...,::-1]
```
Do exactly the same for the second image of the overlapping pair from camera 1
To send one of these to the HDMI, we are going to have to reformat it to fit the provided HDMI display
```
# Show image 0 on HDMI
# Need to resize it first
outframe[:] = cv2.resize(image0BGR, (1920, 1080));
hdmi_out.writeframe(outframe)
```
We will also display Young-Jin on the LCD screen. Why? Because Young-Jin does awesome work and deserves to be famous, and also because I can
```
## Show image on LCD
# Open LCD object and clear
lcd = arduino.Arduino_LCD18(base.ARDUINO)
lcd.clear()
# Write image to disk
nw = 160
nl = 128
cv2.imwrite("/home/xilinx/small.jpg", cv2.resize(image0BGR, (nw,nl)))
# Display!
lcd.display("/home/xilinx/small.jpg",x_pos=0,y_pos=127,orientation=3,background=[255,255,255])
```
We now need to classify the images. This runs the remote version of TensorFlow on the image to get the bounding box. The following routine wraps this for simplicity. The spun-up AWS TensorFlow instance expects to be sent a JPEG and will classify it and send back the results as JSON.
The IP address of the spun-up AWS instance is given by the global gAWS_TENSORFLOW_INSTANCE, which is specified at the beginning of this notebook.
```
def RemoteTensorFlowClassify(image_name_string):
f = open(image_name_string,'rb')
r = requests.put(gAWS_TENSORFLOW_INSTANCE, data=f)
return json.loads(r.content.decode())
```
Actually call the defined function on images from camera 1 and camera 2.
```
#Return the object that camera zero sees with the maximum score
cam0_json_return = RemoteTensorFlowClassify(gCAMERA0_IMAGE)
json0 = cam0_json_return["image_detection"]
max = 0.0
out = []
for var in json0['object']:
if (var['score'] > max):
out = var
json0 = out
json0
#Return the object that camera one sees with the maximum score
cam1_json_return = RemoteTensorFlowClassify(gCAMERA1_IMAGE)
json1 = cam1_json_return["image_detection"]
max = 0.0
out = []
for var in json1['object']:
if (var['score'] > max):
out = var
json1 = out
json1
```
The AWS TensorFlow instance reports the bounding boxes for the required object.
```
def DrawRect(the_json,the_image, x1, x2, y1, y2 ):
# Currently offline until the TesnorFlow net is fixed
#x1 = int(the_json["xmin"])
#y1 = int(the_json["ymin"])
#x2 = int(the_json["xmax"])
#y2 = int(the_json["ymax"])
fig, ax = pyplot.subplots(1)
ax.imshow(the_image)
rect = patches.Rectangle((x1,y1), (x2-x1), (y2-y1), linewidth = 1 , edgecolor = 'r', facecolor='none')
ax.add_patch(rect)
pyplot.show()
## Convert to grayscale
grayImage0 = cv2.cvtColor(image0, cv2.COLOR_RGB2GRAY)
grayImage1 = cv2.cvtColor(image1, cv2.COLOR_RGB2GRAY)
def IsInsideROI(pt, the_json, x1, x2, y1, y2):
# x_min = int(the_json["object"]["xmin"])
# y_min = int(the_json["object"]["ymin"])
# x_max = int(the_json["object"]["xmax"])
# y_max = int(the_json["object"]["ymax"])
x_min = x1
y_min = y1
x_max = x2
y_max = y2
if(pt[0]>=x_min and pt[0] <=x_max and pt[1]>=y_min and pt[1]<=y_max):
return True
else:
return False
## Detect keypoints
Brisk = cv2.BRISK_create()
keyPoints0 = Brisk.detect(grayImage0)
keyPoints1 = Brisk.detect(grayImage1)
## Find keypoints inside ROI
roiKeyPoints0 = numpy.asarray([k for k in keyPoints0 if IsInsideROI(k.pt,json0, 955, 1045, 740, 1275 )])
roiKeyPoints1 = numpy.asarray([k for k in keyPoints1 if IsInsideROI(k.pt,json1, 1335, 1465, 910, 1455 )])
## Compute descriptors for keypoitns inside ROI
[keyPoints0, desc0] = Brisk.compute(grayImage0, roiKeyPoints0);
[keyPoints1, desc1] = Brisk.compute(grayImage1, roiKeyPoints1);
## Find matches of ROI keypoints
BF = cv2.BFMatcher()
matches = BF.match(desc0, desc1)
## Extract pixel coordinates from matched keypoints
x_C0 = numpy.asarray([keyPoints0[match.queryIdx].pt for match in matches])
x_C1 = numpy.asarray([keyPoints1[match.trainIdx].pt for match in matches])
```
Full mesh triangularization is offline until we reconcile the camera calibration. There was an issue discovered during the hackathon that needs to be examined in the lab setup, so the code below will not function until we reconcile the camera calibration config.
```
# Triangulate points
# We need projection matrices for camera 0 and camera 1
f = 8.350589e+000 / 3.45E-3
cx = -3.922872e-002 / 3.45E-3
cy = -1.396717e-004 / 3.45E-3
K_C0 = numpy.transpose(numpy.asarray([[f, 0, 0], [0, f, 0], [cx, cy, 1]]))
k_C0 = numpy.asarray([1.761471e-003, -2.920431e-005, -8.341438e-005, -9.470247e-006, -1.140118e-007])
[R_C0, J] = cv2.Rodrigues(numpy.asarray([1.5315866633, 2.6655790203, -0.0270418317]))
T_C0 = numpy.transpose(numpy.asarray([[152.9307390952, 260.3066944976, 351.7405264829]])) * 1000
f = 8.259861e+000 / 3.45E-3
cx = 8.397453e-002 / 3.45E-3
cy = -2.382030e-002 / 3.45E-3
K_C1 = numpy.transpose(numpy.asarray([[f, 0, 0], [0, f, 0], [cx, cy, 1]]))
k_C1 = numpy.asarray([1.660053e-003, -2.986269e-005, -7.461966e-008, -2.247960e-004, -2.290483e-006])
[R_C1, J] = cv2.Rodrigues(numpy.asarray([1.4200199799, -2.6113619450, -0.1371719827]))
T_C1 = numpy.transpose(numpy.asarray([[146.8718203137, 259.9661037150, 351.5832136366]])) * 1000
P_C0 = numpy.dot(K_C0,numpy.concatenate((R_C0, T_C0), 1))
P_C1 = numpy.dot(K_C1,numpy.concatenate((R_C1, T_C1), 1))
# Compute 3D coordinates of detected points
X_C0 = cv2.convertPointsFromHomogeneous(numpy.transpose(cv2.triangulatePoints(P_C0, P_C1, numpy.transpose(x_C0), numpy.transpose(x_C1))))
```
|
github_jupyter
|
# 6 - Pivot Table
In this sixth step I'll show you how to reshape your data using a pivot table.
This will provide a nice condensed version.
We'll reshape the data so that we can see how much each customer spent in each category.
```
import pandas as pd
import numpy as np
df = pd.read_json("customer_data.json", convert_dates=False)
df.head()
```
Taking a quick look using the <code>.head()</code> function, we can see all of the columns, and the first few rows of the data.
For this example, let's just use the first 50 rows of the data.
```
df_subset = df[0:50]
df_subset
```
Let's take a look at the types for each column using the <code>.dtypes</code> method.
```
df_subset.dtypes
```
The amount column should be a numeric type, but Pandas thinks it's an <code>object</code>. Let's go ahead and change that column to a numeric <code>float</code> type using the <code>.astype()</code> method.
```
df_subset["amount"] = df_subset["amount"].astype(float)
df_subset.dtypes
```
Now we can see that the <code>amount</code> column is a numeric <code>float</code> type.
We don't need all of the columns, just the <code>customer_id</code>, <code>category</code>, and <code>amount</code> columns.
Here's what that smaller dataframe would look like.
```
df_subset[["customer_id", "category", "amount"]]
```
Let's finish up by creating our <code>pivot_table</code>.
We'll set the index to <code>customer_id</code>, the columns to <code>category</code>, and the values to <code>amount</code>. This will reshape the data so that we can see how much each customer spent in each category. Let's create this using a new dataframe called <code>df_pivot</code>.
The final important point before we reshape the data is the <code>aggfunc</code> parameter. Since customers probably made multiple purchases in the same categories, we'll want to add up all of those purchases. We'll do that using NumPy's <code>sum</code> method. I've shortened the NumPy library name to <code>np</code>, so that's why I've set the <code>aggfunc</code> to <code>np.sum</code>.
```
# pivot table; aggregation function "sum"
df_pivot = df_subset.pivot_table(index="customer_id", columns="category", values="amount", aggfunc=np.sum)
print(df_pivot)
```
Now we have a new dataframe showing how much each customer spent in each category.
There are a lot of <code>NaN</code> values because a lot of customers didn't spend any money in certain categories.
You should also note that there's a <code>house</code> and <code>household</code> column. We need to clean the data so that we have consistent strings before we reshape it. Look back at <strong>Step 3 - Consistent Strings</strong> to help you with that.
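A minimal sketch of that cleanup, assuming the only inconsistency is <code>house</code> versus <code>household</code> (Step 3 covers string cleaning more thoroughly): after normalizing the strings we simply rebuild the pivot table, and <code>fillna(0)</code> replaces the missing combinations with zero spend.
```
# Normalize the category strings, then rebuild the pivot table
df_subset["category"] = df_subset["category"].replace({"house": "household"})
df_pivot = df_subset.pivot_table(index="customer_id", columns="category",
                                 values="amount", aggfunc=np.sum).fillna(0)
print(df_pivot.head())
```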
|
github_jupyter
|
## Test of Riksdagen SFS documents
* This [Jupyter Notebook](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokument%20SFS.ipynb)
* [KU complaints](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokument%20KU-anm%C3%A4lningar.ipynb)
* [Motions](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokument%20Motioner.ipynb)
* [Members of Parliament](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20ledam%C3%B6ter.ipynb)
* [Document types](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokumenttyper.ipynb)
* [Create a search query](http://data.riksdagen.se/dokumentlista/)
* 13,980 documents retrieved, which seems to differ from [Dokument & lagar (10,504 hits)](https://www.riksdagen.se/sv/dokument-lagar/?doktyp=sfs)
### Test of SFS no. 2020-577
* [Full text](https://www.riksdagen.se/sv/dokument-lagar/dokument/svensk-forfattningssamling/forordning-2020577-om-statligt-stod-for_sfs-2020-577) / [text](http://data.riksdagen.se/dokument/sfs-2020-577.text) / [html](http://data.riksdagen.se/dokument/sfs-2020-577.html) / [json](http://data.riksdagen.se/dokument/sfs-2020-577.json)
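As a quick sanity check before downloading the full document list, the sketch below fetches the JSON version of this single document from the endpoint linked above. The structure of the returned JSON is not assumed here, so it only prints the top-level keys.
```
# Sketch: fetch one SFS document as JSON and inspect the top-level keys
import json
import urllib3

http = urllib3.PoolManager()
r = http.request("GET", "http://data.riksdagen.se/dokument/sfs-2020-577.json")
doc = json.loads(r.data.decode("utf-8"))
print(list(doc.keys()))
```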
```
from datetime import datetime
now = datetime.now()
print("Last run: ", datetime.now())
import urllib3, json
import pandas as pd
from tqdm.notebook import trange
http = urllib3.PoolManager()
pd.set_option("display.max.columns", None)
urlbase ="http://data.riksdagen.se/dokumentlista/?sok=&doktyp=SFS&utformat=json&start="
dftot = pd.DataFrame()
for i in trange(1, 700):  # as of today there appear to be about 10,504 SFS documents --> roughly 10,504/20 pages of 20 hits each
    url = urlbase + str(i)
    r = http.request('GET', url)
    data = json.loads(r.data)
    dftot = dftot.append(pd.DataFrame(data["dokumentlista"]["dokument"]), sort=False)
dftot.head()
print("Min och Max publicerad: ", dftot.publicerad.min(), dftot.publicerad.max())
print("Min och Max datum: ", dftot.datum.min(), dftot.datum.max())
print("Min och Max systemdatum: ", dftot.systemdatum.min(), dftot.systemdatum.max())
dftot.info()
dftot[['nummer','titel','publicerad','beslutad','datum','summary']]
dftot.publicerad.unique()
dftot.publicerad.value_counts()
dftot.publicerad.value_counts().sort_index(ascending=False)
dftot.publicerad.value_counts().sort_index(ascending=False)[:50]
%matplotlib inline
import matplotlib.pyplot as plt
plot = dftot.publicerad.value_counts()[1:30].plot.bar(y='counts', figsize=(25, 5))
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
plot = dftot.datum.value_counts()[1:30].plot.bar(y='counts', figsize=(25, 5))
plt.show()
dftot['publicerad'] = pd.to_datetime(dftot.publicerad)  # convert so the .dt accessor works
dftot['datum'] = pd.to_datetime(dftot.datum)
plotPublishedSFSperMonth = dftot['publicerad'].groupby(dftot.publicerad.dt.to_period("M")).agg('count')
plotPublishedSFSperMonth.plot( kind = 'bar')
plt.title("SFS per month")
plt.show()
plotDatumSFSperMonth = dftot['datum'].groupby(dftot.datum.dt.to_period("M")).agg('count')
plotDatumSFSperMonth.plot( kind = 'bar')
plt.title("SFS Datum per month")
plt.show()
plotDatumSFSperMonth = dftot['datum'].groupby(dftot.datum.dt.to_period("M")).agg('count')[10:]
plotDatumSFSperMonth.plot( kind = 'bar')
plt.title("SFS Datum per month")
plt.figsize=(5, 35)
plt.show()
plotDatumSFSperMonth
#Last year
PublishedSFS2016perMonth = dftot[dftot["publicerad"].dt.year > 2016 ]
plotPublishedSFS2016perMonth = PublishedSFS2016perMonth['publicerad'].groupby(PublishedSFS2016perMonth.publicerad.dt.to_period("M")).agg('count')
plotPublishedSFS2016perMonth.plot( kind = 'bar',)
plt.title("SFS > 2016 per month")
plt.figsize=(5, 35)
plt.figure(figsize=(1, 1))
plt.show()
plotDatumSFSperMonth[100:]
dftot.debattnamn.value_counts()
dftot.info()
organCount = dftot.organ.value_counts()
organCount
dftot.organ.value_counts().plot.pie(y='counts', figsize=(15, 15))
plt.show()
dftot.organ.value_counts()[1:50]
dftot.organ.value_counts()[50:100]
dftot.organ.value_counts()[100:150]
dftot.domain.value_counts()
dftot.rm.value_counts()
plotRM = dftot.rm.value_counts().plot.bar(y='counts', figsize=(25, 5))
plt.show()
dftot['datum'] =pd.to_datetime(dftot.datum)
dftot['publicerad'] =pd.to_datetime(dftot.publicerad)
dftot['systemdatum'] =pd.to_datetime(dftot.systemdatum, format='%Y-%m-%d %H:%M:%S')
# example systemdatum value: 2016-02-11 15:26:06
dftot.info()
dftot = dftot.sort_values('datum')
dftot.head()
dftot.tail()
dftot.subtyp.value_counts()
```
Guesses
* regl-riksg appears to be Reglemente för Riksgäldskontoret (regulations for the Swedish National Debt Office)
* regl-riksb is probably Riksbanken (the Swedish central bank)
```
dftot.debattnamn.value_counts()
dftot = dftot.sort_values(by='id', ascending=False)
dftot.info()
dftot.head(1000)
print("End run: ", datetime.now())
```
|
github_jupyter
|
<h1>Lists in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about lists in the Python programming language. By the end of this lab, you'll know the basic list operations in Python, including indexing, list operations, and how to copy and clone a list.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#dataset">About the Dataset</a>
</li>
<li>
<a href="#list">Lists</a>
<ul>
<li><a href="index">Indexing</a></li>
<li><a href="content">List Content</a></li>
<li><a href="op">List Operations</a></li>
<li><a href="co">Copy and Clone List</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Lists</a>
</li>
</ul>
<p>
Estimated time needed: <strong>15 min</strong>
</p>
</div>
<hr>
<h2 id="#dataset">About the Dataset</h2>
Imagine you received album recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.
The table has one row for each album and several columns:
- **artist** - Name of the artist
- **album** - Name of the album
- **released_year** - Year the album was released
- **length_min_sec** - Length of the album (hours,minutes,seconds)
- **genre** - Genre of the album
- **music_recording_sales_millions** - Music recording sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **claimed_sales_millions** - Album's claimed sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **date_released** - Date on which the album was released
- **soundtrack** - Indicates if the album is the movie soundtrack (Y) or (N)
- **rating_of_friends** - Indicates the rating from your friends from 1 to 10
<br>
<br>
The dataset can be seen below:
<font size="1">
<table font-size:xx-small style="width:70%">
<tr>
<th>Artist</th>
<th>Album</th>
<th>Released</th>
<th>Length</th>
<th>Genre</th>
<th>Music recording sales (millions)</th>
<th>Claimed sales (millions)</th>
<th>Released</th>
<th>Soundtrack</th>
<th>Rating (friends)</th>
</tr>
<tr>
<td>Michael Jackson</td>
<td>Thriller</td>
<td>1982</td>
<td>00:42:19</td>
<td>Pop, rock, R&B</td>
<td>46</td>
<td>65</td>
<td>30-Nov-82</td>
<td></td>
<td>10.0</td>
</tr>
<tr>
<td>AC/DC</td>
<td>Back in Black</td>
<td>1980</td>
<td>00:42:11</td>
<td>Hard rock</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td></td>
<td>8.5</td>
</tr>
<tr>
<td>Pink Floyd</td>
<td>The Dark Side of the Moon</td>
<td>1973</td>
<td>00:42:49</td>
<td>Progressive rock</td>
<td>24.2</td>
<td>45</td>
<td>01-Mar-73</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Whitney Houston</td>
<td>The Bodyguard</td>
<td>1992</td>
<td>00:57:44</td>
<td>Soundtrack/R&B, soul, pop</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td>Y</td>
<td>7.0</td>
</tr>
<tr>
<td>Meat Loaf</td>
<td>Bat Out of Hell</td>
<td>1977</td>
<td>00:46:33</td>
<td>Hard rock, progressive rock</td>
<td>20.6</td>
<td>43</td>
<td>21-Oct-77</td>
<td></td>
<td>7.0</td>
</tr>
<tr>
<td>Eagles</td>
<td>Their Greatest Hits (1971-1975)</td>
<td>1976</td>
<td>00:43:08</td>
<td>Rock, soft rock, folk rock</td>
<td>32.2</td>
<td>42</td>
<td>17-Feb-76</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Bee Gees</td>
<td>Saturday Night Fever</td>
<td>1977</td>
<td>1:15:54</td>
<td>Disco</td>
<td>20.6</td>
<td>40</td>
<td>15-Nov-77</td>
<td>Y</td>
<td>9.0</td>
</tr>
<tr>
<td>Fleetwood Mac</td>
<td>Rumours</td>
<td>1977</td>
<td>00:40:01</td>
<td>Soft rock</td>
<td>27.9</td>
<td>40</td>
<td>04-Feb-77</td>
<td></td>
<td>9.5</td>
</tr>
</table></font>
<hr>
<h2 id="list">Lists</h2>
We are going to take a look at lists in Python.
* A list is an ordered, sequenced collection of objects; its elements can be integers, strings, booleans, floats, complex numbers, and even other lists.
* The position of each element within a list is called an <b>index</b>.
* An index is used to access and refer to an element within a list.
* Lists support indexing, slicing, and extended slicing, and we can assign a new value to an element as well.
* Lists are mutable (we can change them at any time): we can add, delete, and modify elements.
* Lists come with a number of built-in methods (summarized in the table further below).
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsIndex.png" width="1000" />
To create a list, type the elements within square brackets <b>[ ]</b>, separated by commas. Let’s try it!
```
# Create a list
L = ["Michael Jackson", 10.1, 1982]
L
```
We can use negative and regular indexing with a list :
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsNeg.png" width="1000" />
```
L[0],L[-3]
# Print the elements on each index
print('the same element using negative and positive indexing:\n Positive:', L[0],
      '\n Negative:', L[-3])
print('the same element using negative and positive indexing:\n Positive:', L[1],
      '\n Negative:', L[-2])
print('the same element using negative and positive indexing:\n Positive:', L[2],
      '\n Negative:', L[-1])
# Slicing examples
L[0:2] # elements at index 0 and 1 (the end index 2 is excluded)
L[1:]  # elements from index 1 to the end
```
<h3 id="content">List Content</h3>
Lists can contain strings, floats, and integers. We can nest other lists, and we can also nest tuples and other data structures. The same indexing conventions apply for nesting:
```
# Sample List
Sample_list = ["Michael Jackson", 10.1, 1982,2j+3,True ,[1, 2], ("A", 1)]
Sample_list
Sample_list[1],Sample_list[-6]
Sample_list[2]
Sample_list[0:5]
Sample_list[-5:-1]
```
<h3 id="op">List Operations</h3>
We can also perform slicing in lists. For example, if we want the last two elements, we use the following command:
```
# Sample List
L = ["Michael Jackson", 10.1,1982,"MJ",1]
L
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsSlice.png" width="1000">
```
# List slicing
L[3:5]
```
We can use the method <code>extend</code> to add new elements to the end of the list:
```
# Use extend to add elements to list
L = [ "Michael Jackson", 10.2]
L.extend(['pop', 10])
L
```
Another similar method is <code>append</code>. If we apply <code>append</code> instead of <code>extend</code>, we add one element to the list:
```
# Use append to add elements to list
L = [ "Michael Jackson", 10.2]
L.append(['pop', 10])
L
```
Each time we apply a method, the list changes. If we apply <code>extend</code> with <code>['pop', 10]</code>, the list <code>L</code> is modified by adding two new elements:
```
# Use extend to add elements to list
L = [ "Michael Jackson", 10.2]
L.extend(['pop', 10])
L
```
If we append the list <code>['a','b']</code> we have one new element consisting of a nested list:
```
# Use append to add elements to list
L.append(['a','b'])
L
```
As lists are mutable, we can change them. For example, we can change the first element as follows:
```
# Change the element based on the index
A = ["disco", 10, 1.2]
print('Before change:', A)
A[0]
A[0] = 'hard rock' # Mutable
print('After change:', A)
```
We can also delete an element of a list using the <code>del</code> command:
```
# Delete the element based on the index
print('Before change:', A)
del(A[0])
print('After change:', A)
```
We can convert a string to a list using <code>split</code>. For example, the method <code>split</code> translates every group of characters separated by a space into an element in a list:
```
# Split the string, default is by space
'hard rock'.split()
```
We can use the split function to separate strings on a specific character. We pass the character we would like to split on into the argument, which in this case is a comma. The result is a list, and each element corresponds to a set of characters that have been separated by a comma:
```
# Split the string by comma
'A,B,C,D'.split(',')
```
<h3 id="co">Copy and Clone List</h3>
When we set one variable <b>B</b> equal to <b>A</b>, both <b>A</b> and <b>B</b> are referencing the same list in memory:
```
# Copy (copy by reference) the list A
A = ["hard rock", 10, 1.2]
B = A # copy by reference
print('A:', A)
print('B:', B)
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsRef.png" width="1000" align="center">
```
id(A)
id(B)
```
Initially, the value of the first element in <b>B</b> is set as hard rock. If we change the first element in <b>A</b> to <b>banana</b>, we get an unexpected side effect. As <b>A</b> and <b>B</b> are referencing the same list, if we change list <b>A</b>, then list <b>B</b> also changes. If we check the first element of <b>B</b> we get banana instead of hard rock:
```
# Examine the copy by reference
print('B[0]:', B[0])
A[0] = "banana"
A
print('B[0]:', B[0])
B
```
This is demonstrated in the following figure:
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsRefGif.gif" width="1000" />
You can clone list **A** by using the following syntax:
```
# Clone (clone by value) the list A
B = A[:]
B
```
Variable **B** references a new copy or clone of the original list; this is demonstrated in the following figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsVal.gif" width="1000" />
Now if you change <b>A</b>, <b>B</b> will not change:
```
print('B[0]:', B[0])
A[0] = "hard rock"
print('B[0]:', B[0])
A
B
li = list(range(25,40)) # here 25 is the first element and 39 the last; the stop value 40 is excluded
li
li.append(10.25)
li
li.clear()
li
li_1 = [10,20,30,'hi','hello',True,2.5]
li_1
li_2 = li_1.copy()
li_2
li_1
li_1.append(10)
li_1
li_1.count(10)
li
li.extend(li_1)
li
li_1
li_2
co = [10,20,30,40,50]
co
co.index(30)
co[1]
co.insert(1,"Hello")
co
co.pop() # removes and returns the last element
co.pop(1) # removes and returns the element at index 1
co
li.remove('hi')    # 'hi' and 'hello' live in li (extended from li_1 above), not in co
li.remove('hello')
co
co.reverse()
co
li
li.remove(2.5)
li.sort()
li
```
|Methods|Description|
|--------|----------|
|**append()**|adds a single element at the end of the list|
|**clear()**|removes all of the elements from the list|
|**copy()**|returns a (shallow) copy of the list|
|**count()**|counts how many times a particular element is repeated in the list|
|**extend()**|adds multiple values to the end of the existing list|
|**index()**|finds the first occurrence of an element in the list|
|**pop()**|removes and returns the last element|
|**pop(position)**|removes and returns the element at the given position|
|**remove(element)**|removes the first occurrence of the given element|
|**reverse()**|reverses the order of the elements|
|**sort()**|sorts the list in place; it only works when the elements are of comparable types|
### Nested List
```
a = [[10,20,30],
[2.5,3.5,4.5],
[True,False,True]]
a
a[0]
a[0][1]
a[1]
a[2] = 10 # replace the whole third nested list with the integer 10
a
```
<h2 id="quiz">Quiz on List</h2>
Create a list <code>a_list</code>, with the following elements <code>1</code>, <code>hello</code>, <code>[1,2,3]</code> and <code>True</code>.
```
# Write your code below and press Shift+Enter to execute
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
a_list = [1, 'hello', [1, 2, 3] , True]
a_list
-->
Find the value stored at index 1 of <code>a_list</code>.
```
# Write your code below and press Shift+Enter to execute
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
a_list[1]
-->
Retrieve the elements stored at index 1, 2 and 3 of <code>a_list</code>.
```
# Write your code below and press Shift+Enter to execute
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
a_list[1:4]
-->
Concatenate the following lists <code>A = [1, 'a']</code> and <code>B = [2, 1, 'd']</code>:
```
# Write your code below and press Shift+Enter to execute
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
A = [1, 'a']
B = [2, 1, 'd']
A + B
-->
|
github_jupyter
|
```
! wget http://corpora.linguistik.uni-erlangen.de/someweta/german_web_social_media_2018-12-21.model -P /mnt/data2/ptf
from someweta import ASPTagger
model = "/mnt/data2/ptf/german_web_social_media_2018-12-21.model"
# future versions will have sensible default values
asptagger = ASPTagger(beam_size=5, iterations=10)
asptagger.load(model)
sentences = ['Wer dürfen Atommacht sein, wer nicht. Da treffen sich Regierung, Atommacht und Anwärter auf die Bombe.',
'Über was werden da verhandeln?',
'Die Bombe selbst stehen nicht zur Disposition, für die Atommacht, sondern der Verfügungsanspruch eines Anwärter.',
'Der Besitz dieser Bombe verändern die politisch Option eines Staat, und damit auch die militärisch , in der Folge die politisch Option der existierend Atommacht.',
'Bereits der Wille zur Bombe werden deshalb von den real Atommacht beaufsichtigen. Diese Macht verhalten sich zum Wille eines ausländisch Souverän wie Polizei. Wer nicht gehorchen werden bestrafen.',
'Das können diese Macht, weil diese in der Lage sein ihre Anspruch an das Wohlverhalten anderer Regierung wirtschaftlich und militärisch zu erzwingen.',
'Von wegen hier gehen es um den Schutz vor einer militärisch Bedrohung.',
'Die Fähigkeit zu atomar Eskalation stehen doch nur für den Angeklagte zur Disposition.',
'Was bleiben? Die auch atomar Überlegenheit der selbsternannt Weltpolizist die sich als Helfer der Menschheit feiern lassen.',
'Und die Öffentlichkeit? Die finden wie immer alles toll, was die eigen Regierung machen. Auch kritisch; Da haben man sich über den Tisch zeihen lassen. Beweis: Die Aufhebung der Sanktion. Sein das nicht bereits ein einknick der eigen Herr?',
'So konstruktiv sein national Opportunismus,',
'Die Bombe in "unseren" Hand? Aber sicher, wir sein doch die Guter!',
'Alle anderen, wenn es so sagen werden im politisch Rundfunk, sein die Böses.',
'(i.) Sein "Satoshi Nakamoto" nicht der Name einer real Person, die den Bitcoin erfinden haben, sondern ein virtuell Nickname. Ob sich dahint eine real Person, eine real Organisation oder ein Computerprogramm verbergen, weiss kein Schwein.',
'(ii.) Sein Bitcoins nicht "mathematisch selten", sondern mit der gegenwärtig verfügbar Computer-Rechenleistung allenfalls mit einig, energetisch sauteuer Registerschiebe-Aufwand in Mikroprozessor auffindbar.',
'Ob es Bitcoins im Überfluss geben, sofern das gegenwärtig weltweit Forscher ernährend, physikalisch Konstrukt von Quantencomputer Realität werden, können "mathematisch" bis heute weder beweisen, noch widerlegen werden.',
'(iiien.) Erzeugen Bitcoins realweltlich nichts, sondern reduzieren erwas zuvor sauteuer Erzeugtes.',
'Bitcoins sein die elektrisch Heizlüfter unter den Währung.',
'Die reduzieren, Sommer wie Winter, aufwendig-geordnet erschaffen, elektrisch Energie zu popelig-ungeordnet Wärmeenergie.',
'Bitcoins machen das, was mittels Klimakonferenz reduzieren werden sollen.',
'(iv.) Eine einzig, mittels Bitcoin-Heizlüfter vorgenommen Transaktion benötigen zur Zeit 215 kWh elektrisch Energie.https://motherboard.vice....',
'Ein deutsch Haushalt verbraten ohne Bitcoin im Durchschnitt 3107 kWh, also schlapp 14 Bitcoin-Transaktion, elektrisch Energie pro Jahr.https://www.musterhaushal...',
'P.S.:',
'Wer wissen mögen, wie die virtuell "begehrenswert" Bitcoins "gebären" werden, der können sich sehr einfach ein realweltlich Bild davon machen."Photo: Life inside of China’s massiv and remote bitcoinen min"https://qz.com/1026605/ph...',
'Die Idee von bitcoin sein doch die Abschaffung gewöhnlich Währung. Das einzig, was man also tun muss, sein den investitionshyp aussitzen, bis cryptowährung zum Standard werden, international, und dann sein es auch egal, ob ein Bitcoin 500.000 Dollar wert sein, oder?^^',
'Und wenn der Bitcoin zwingen sein, so teuer zu bleiben, weil eben so viele Depp so viel investieren, wirdsen halt eine anderer Global Währung. Was ich damit sagen wollen: die cryptowährung Bitcoin an sich sein, glauben ich, zum scheit verurteilen, beziehungsweise besitzen nur ein sehr kurz Zeitfenster, in dem sie einem was nützen. Sein halt so‘n spannend Übergangsprodukt',
'Bitcoins werden auf Null oder nahe Null fallen.Das werden passieren.',
'Schon zweihundern Kommentar. Das zeigen tatsächlich die Expertise der Deutsch. Toll!Dies sein ein Fachgebiet in das man sich mindestens ein Jahr einarbeiten müssen und das drei Stunde täglich. Alles Andere sein Mumpitz. Gelten für den gesamt Kryptomarkt.Viele Akademiker. Nur mal so am Rand.',
'Wer damit real Geld machen, haben es verdienen. Wer seins verlieren auch.',
'"Derzeit vergehen kein Tag ohne Facebook-Schlagzeile.".',
'Dann lassen es doch einfach!',
'Wer entscheiden, was Fake News sein? Herr Kleber? Fake News sein von der Meinungsfreiheit decken.',
'Für anonym Account geben es keine Meinungsfreiheit.',
'Es sein ein leidig Thema mit diesem Facebook. Das einzig, was man als Einzelner dagegen tun können, sein der Boykott des Netzwerk.',
'Ich halten ja Twitter für eine groß Haß- und Fakenewsschleuder als Facebook. Allerdings sein auf Twitter hauptsächlich Politiker, Journalist und "Aktivist" unterwegs, während Facebook mehr so etwas für das gemein Volk sein.',
'Deshalb werden wohl auch mehr auf Facebook herumhacken, als auf Twitter. Der Pöbel haben ruhig zu sein.',
'Die Regierung mögen so gern handlungsfähig erscheinen, die Mitglied und die angeschlossen Medium beeilen sich, täglich neu "Grausamkeit" gegen Flüchtling zu verkünden ohne dabei die Kanzlerin und ihr "Schaff" weiter zu beschädigen.',
'Dabei sein offensichtlich: eine EU-Normalverteilung sein genauso wenig in Sicht wie eine Einigung mit Griechenland oder gar der Türkei.',
'In den Syriengespräch haben man sich nicht nur ins moralisch sondern auch ins diplomatisch Abseits manövrieren.',
'Die fortgesetzt Unterstützung für das Regime in Kiew und die beständig Wiederholung der dort verkünden Dogma engen die Handlungsoption für eine Einigung mit Russland entscheidend ein.',
'Amerika werden nicht helfen sondern erst mal wählen.',
'Nein, die Regierung sein nicht handlungsfähig.',
'Und so greifen man zu den verblieben Mittel:',
'Diffamierung der AfD wie zuvor schon der Pirat.',
'Angriff der auf Aussöhnung mit Russland bedachen Kraft.',
'Beide haben zuletzt etwas ungeschickt agieren bzw. nicht mit der an Verzweiflung grenzend Aggressivität der Medium hier rechnen.',
'Ein Witz- werden so niemals funktionieren, und das wissen die Beteilgten genau! Verzweiflungsreflex der CDU angesichts befürchtet massiv Stimmeneinbruch bei den Wahl im März.',
'Ein Witz?',
'Oder eher eine wirkungslos "Beruhigungspille" für den Wahlpöbel...',
'Erst gar nicht reinlassen sein die gut Lösung.',
'Das bedeuten 50-70 Milliarde pro Jahr an Beamten- und Materialaufwand, aber vor allem ein Anstieg der Stückgutkosten, da die lean production, Basis des Erfolg der Deutsch Industrie im Wettbewerb mit den Billiglohnland, nicht mit unkalkulierbar Transportzeit klar kommen.',
'Im Klartext Wirtschaftskrise. Nun mögen dem Beschäftigungslosen diese weniger schlimm erscheinen als eine Flüchtlingskrise, wir Arbeitenden werden aber ganz gerne unsere Job behalten.',
'Ich denken, man sollen es so machen, wie etwa die Israeli oder die Australier.',
'Wenn die Heimatstaat ihre Bürger nicht mehr zurück haben wollen, oder der Herkunftstaat unbekannt sein, sollen man in Drittstaat abschieben, mit denen man zu diesem Zweck entsprechend Vertrag machen.',
'Vielleicht fallen dem Migrant dann ja noch rechtzeitig sein Heimatland ein oder wo er seine Papier hintun haben, wenn er etwa nach Uganda abschieben werden sollen.',
'ich fragen mich, auf welcher Basis werden denn das alles prüfen.',
'Wenn einer erkären er sein Syrer, leider ohne Papier, muss das doch irgendwie prüfen werden, ihm stringent Frage stellen werden, zur Mitarbeit veranlassen werden.',
'Wenn sich dann rausstellen, er sein kein Syrer, er wollen sich nicht äussern, wo er eigentlich',
'herkommen, dann muss man doch den Antrag negativ bescheiden. Wer sein Herkunftsland nicht preisgeben, sich verweigern, wieso haben derjenige überhaupt ein Anrecht auf Asyl ? Wer wollen denn was von wem ?',
'Es gehen nicht um "links", "Linkskurs" oder das Gegenteil.',
'Es gehen um Politik für die eigen Bevölkerung.',
'Es gehen um Politik für die Deutsch von deutsch Politiker oder um',
'keine Politik für die Deutsch von deutsch Politiker.',
'Das sein die Alternative.',
'Und die SPD haben sich entscheiden.',
'Wahlergebnis von Parteivorsitzender im Bereich von 90% oder gar mehr',
'sein ein Indiz für stalinistisch Struktur innerhalb einer Partei.',
'https://www.youtube.com/w...',
'Unser Gottesgeschenk?!?',
'Mit Nahles und der jetzig Parteispitze werden die SPD leider den Weg der französisch, niederländisch, österreichisch und italienisch Sozialdemokrat gehen. Alles andere sein eine Überraschung. Die Delegierte können aber zeigen, dass die SPD DIE Demokratiepartei sein und Simone Lange ihre Stimme geben. Nur Mut: Ein Personeller Neuanfang sein alternativlos.',
'Ich stimmen Ihnen zu. Aber ich glauben nicht, dass das, was Sie aufzeigen, an einer Persönlichkeit festzumachen sein.',
'Insgesamt meinen ich, dass unsere Gesellschaft in einem anderer Fahrwasser denken und fühlen muss. Wir dürfen nicht die Verhältnis aus der Zeit des tief Menschenelends mit heute bei uns vergleichen und deshalb zeitgerecht Lösung finden. Auf dem Weg der Suche müssen gerecht Kompromiss finden werden.',
'Der feudalistisch Überfluss und die Zügellosigkeit der Gewinn- und Luxussucht sein die drastisch Gegenwart der Vergangenheit mit allem menschlich Elend weltweit.',
'Sein Las Vegas ein Vorbild, in dem Armut und Elend im Dunkele liegen?',
'Na bitten, und Söder gehen dann nach Berlin und werden Innenminister in der GroKo und können so sein Talent beim Management von Migration und Terrorbekämpfung mal richtig unter Beweis stellen....',
'Das Bild sagen mehr als tausend Wort. Go, Jo!',
"Sein sowieso flabbergasted in Anbetracht der Vorstellung, dieser blass Franke sollen ausgerechnet MP in Bayern werden. Dageg sein ja Stephan Weil ne Partymaus. Passt auch überhaupt nicht in die Reihe irgendwie. Bei Söder weißen du immer schon vorher, was er sagen werden und zwar genau wie er's sagen werden. Ein Politroboter vor dem Herr und genauso gucken er da ja auch drein. Also wie immer eigentlich.",
'Herrmann werden doch bei der Bundestagswahl komplett verbrennen. Söder sein kein Thema, wenn dem nicht so sein.',
'Mich werden eher interessieren, ob und welche politisch-inhaltlich Differenz es zwischen den Personalie geben.',
'Gegenfrage, gehen es in Bayern und seiner Führungskamarilla jemals um Politisch-Inhaltliches?',
'Eine sachlich Diskussion sein doch gar nicht erwünscht.Was haben ich denn jetzt schon wieder bös schreiben?',
'Dass sein Faschos hassen? Egal wie sie sich verkleiden und unter welchem Banner sie Meinung löschen?',
'Meinungsfreiheit nur noch für Journalist, die dann auch mal Falschzitat kommentieren dürfen?',
'Gabriel und Merkel schaden dem Ansehen DeutschlandsEntfernt. Bitte äußern Sie sich zum Thema des Artikel. Die Redaktion/cs',
'`Das Deutschen-Gen...Das Deutschen-Gen scheinen das Gen der Intoleranz zu sein, mit der ein Deutsche seine Meinung gegenüber Anderen in Forum verteidigen.',
'Können man tagtäglich bei der ZEIT beobachten.',
'Kürzen. Wir bitten Sie, sich in den Kommentar wieder dem Thema des Artikel zuwenden und weit Anmerkung zur Moderation direkt an [email protected] zu richten, damit im Forum eine sachlich Diskussion ermöglichen werden. Die Redaktion/cs',
'Liebe - Sarrazin - MitläuferWenn Herr Sarrazin sich zu Haus oder in seiner Kneipe mit seinen "dämlich Ansicht“ privat äußern - sein das "unter Meinungsfreiheit" noch hinnehmen - kein Hahn werden nach ihm krähen. Aber er nutzen seine exponieren Stellung zum Provozieren, um sein Buch möglichst oft zu verkaufen. Das sein nicht o.k. Für diese Provokation muss er entsprechend Kritik aushalten - die er doch so selbstverständlich an anderen üben. Die zahllos Mitläufer hier auf den Kommentarseite sollen nicht "stellvertretend für ihn" so beleidigt tun.',
'Vergessen Sie nicht, vor ca. 40 Jahr haben wir Deutsch herablassend die Einwanderung von "dumm Türke" wünschen, damit die Drecksarbeit machen werden.',
'Da finden wir die Niedrigstlohn für Türke o.k. – die kommen ja aus den doof Ecke der Türkei. Wo sein Herr Sarrazin damals, als es besorgt Stimme zu dieser arrogant Einwanderungspolitik geben.',
'Dass heute viele Mensch in diesem "tollen Deutschland" für Niedrigstlohn arbeiten, auf dem Lohnniveau damalig Einwanderer und noch darunt, sein das eigentlich Problem - und daran sein die "deutsch Rassegene, wir sein ja was Gute" ganz erheblich Schuld. Diese doof deutsch Niedriglöhner sein nämlich auch bald die Moor …wie heute die Türke. Das sein die Angst.',
'Übrigens: Als „reinrassig Deutsch“ kennen und mögen ich eine ganz Menge (hoch)intelligent, erfolgreich und obendrein auch noch sehr sympathisch Türke aus Region am Marmarameer bis nach Anatolien (wo ja die Doofen wohnen).',
'warum?Warum haben sich chinesen, russen, thaisen, italien integrieren?',
'Das sein die Frage, die zu diskutieren sein. Doch das wollen die Medium doch gar nicht, wie das wiederholen Löschen dieser Frage bei der ZEIT zeigen.',
'MP3 sein doch total Schrot. selbst im Auto. Zum Glück können meine neu Karre jetzt FLAC abspielen, vorher gehen zwar WAV, aber ich müssen extra konvertieren.',
'Selb schuld, wer seinen Ohr MP3 antun. FLAC bieten alle Vorteil: Tagging, Komprimierung, keinen Qualitätsverlust.',
'MP3´s haben bei gut Quellqualität kaum Qualitätsverlust. Um das dann noch überhaupt zu merken, brauchen man erstens ein sehr gut Gehör und zweitens mindestens ein gut Abspielgerät. Aber das Sie gleich sich ne neu Karre anschaffen, um FlAC zu hören... xD',
'Irgendwo gaanz tief unten in den Katakombe der Zeit.de-Redaktion haben jemand jetzt sehr glücklich da er/sie sehr lange darauf warten, dieses Wortspiel im Titel erscheinen...',
'Ich haben mir mal die Mühe machen und bei Spotify nach den von ihnen erwähnen Künstler machen.',
'Hugo Alfven, Thomas Arne, Carles Baguer, Mily Balakirev, Jiri Antonin Benda, William Sterndal Bennett finden sich alle bei Spotify, was ja klar sagen das solche Dienst nicht nur den Mainstream bedienen mögen.']
sentences = [s.split() for s in sentences]
for sentence in sentences:
    tagged_sentence = asptagger.tag_sentence(sentence)
    print("\n".join(["\t".join(t) for t in tagged_sentence]), "\n", sep="")
```
|
github_jupyter
|
```
import glob
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
%matplotlib inline
warnings.filterwarnings('ignore')
file = glob.iglob('*.csv')
df = pd.read_csv(*file)
print(f"The Dimension of the data is - {df.shape}")
df.head()
df.tail()
X = df.iloc[:, :-1].values
Y = df.iloc[:, -1].values
X
Y
print("Size of X: {}".format(X.shape))
print("Size of Y: {}".format(Y.shape))
X_train, X_test, Y_train, Y_test = train_test_split(X,
Y,
test_size = 0.2,
random_state = 0)
print("Size of X_train: {}".format(X_train.shape))
print("Size of X_test: {}".format(X_test.shape))
print("Size of Y_train: {}".format(Y_train.shape))
print("Size of Y_test: {}".format(Y_test.shape))
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
X_train
X_test
lda = LDA(solver = 'eigen',
n_components = 2)
X_train = lda.fit_transform(X_train, Y_train)
X_test = lda.transform(X_test)
X_train
X_test
classifier = LogisticRegression(verbose = 1,
random_state = 42,
n_jobs = -1)
classifier.fit(X_train, Y_train)
y_pred = classifier.predict(X_test)
y_pred
cm = confusion_matrix(Y_test, y_pred)
cm
acc = accuracy_score(Y_test, y_pred)
print(f"The accuracy of the model is - {acc*100:.3f}%")
report = classification_report(Y_test, y_pred)
print(report)
# Visualizing the Training Set Results
figure = plt.figure(figsize = (10,10))
x_set, y_set = X_train, Y_train
X1, X2 = np.meshgrid(np.arange(start = x_set[:, 0].min() - 1,
stop = x_set[:, 0].max() + 1,
step = 0.01),
np.arange(start = x_set[:, 1].min() - 1,
stop = x_set[:, 1].max() + 1,
step = 0.01))
plt.contourf(X1,
             X2,
             classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             cmap = ListedColormap(('red', 'green', 'blue')),
             alpha = 0.4
             )
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(x_set[y_set == j, 0],
                x_set[y_set == j, 1],
                color = ListedColormap(('red', 'green', 'blue'))(i),
                label = j,
                s = 15,
                marker = '*'
                )
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
plt.title('Linear Discriminant Analysis (LDA) - Train')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
# Visualizing the Test Set Results
figure = plt.figure(figsize = (10,10))
x_set, y_set = X_test, Y_test
X1, X2 = np.meshgrid(np.arange(start = x_set[:, 0].min() - 1,
stop = x_set[:, 0].max() + 1,
step = 0.01),
np.arange(start = x_set[:, 1].min() - 1,
stop = x_set[:, 1].max() + 1,
step = 0.01))
plt.contourf(X1,
             X2,
             classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             cmap = ListedColormap(('red', 'green', 'blue')),
             alpha = 0.4
             )
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(x_set[y_set == j, 0],
                x_set[y_set == j, 1],
                color = ListedColormap(('red', 'green', 'blue'))(i),
                label = j,
                s = 15,
                marker = '*'
                )
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
plt.title('Linear Discriminant Analysis (LDA) - Test')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
```
|
github_jupyter
|
## Make your own heatmap based on Strava activities
This notebook shows you how to create your own heatmap based on your Strava activities.
You need to create a Strava API application in order to use their API. Follow the instructions on this page to create your app: <https://medium.com/@annthurium/getting-started-with-the-strava-api-a-tutorial-f3909496cd2d>
After setting up the app, note down the following information (you will need it to run this notebook):
- Client id
- Client secret
**Note:** Strava imposes request limits (30,000 requests per day and 600 per 15 minutes).
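If you want to keep an eye on those limits while experimenting, a small helper like the sketch below can read them back from any authenticated API response. It assumes Strava's `X-RateLimit-Limit` / `X-RateLimit-Usage` response headers (pairs of 15-minute and daily values); if they are missing, it simply prints all headers so you can check what your account returns.
```
# Sketch: check Strava's rate-limit headers on an authenticated request.
# Assumption: Strava returns "X-RateLimit-Limit" / "X-RateLimit-Usage"
# headers as "15min,daily" pairs; if not present, print all headers.
import requests

def print_rate_limits(access_token):
    r = requests.get(
        "https://www.strava.com/api/v3/athlete",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    limit = r.headers.get("X-RateLimit-Limit")
    usage = r.headers.get("X-RateLimit-Usage")
    if limit and usage:
        print(f"Usage {usage} of limit {limit} (15-minute,daily)")
    else:
        print(dict(r.headers))
```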
```
!pip install stravaio folium
import os
import logging
import json
import urllib
import requests
import folium
from stravaio import StravaIO
# Paste your client id and client secret here.
STRAVA_CLIENT_ID = "ENTER-YOUR-CLIENT-ID"
STRAVA_CLIENT_SECRET = "ENTER-YOUR-CLIENT-SECRET"
```
### Authorization with Strava
The cell below creates the proper authorization link using the Stravaio Python library, which is used later to retrieve activities.
It is important to run this cell; just pasting the access_token from your Strava settings will not work, because Stravaio needs to be authorized.
- Run the cell below and click the link that is printed, when prompted click "Authorize" on the website that opens
- After you click "Authorize" you see something like, "This site can't be reached"
- Stay on that page and look at the URL
- The URL will show the authorization code (the bit after "code=" in the URL) and scope you accepted
- Copy the code and paste it below and continue the notebook execution
More detailed info can be found here:
- <https://developers.strava.com/docs/getting-started/>
- <https://developers.strava.com/docs/authentication/>
```
params_oauth = {
"client_id": STRAVA_CLIENT_ID,
"response_type": "code",
"redirect_uri": f"http://localhost:8000/authorization_successful",
"scope": "read,profile:read_all,activity:read",
"state": 'https://github.com/sladkovm/strava-http', # Sladkovm is the author of the Stravaio library
"approval_prompt": "force"
}
values_url = urllib.parse.urlencode(params_oauth)
base_url = 'https://www.strava.com/oauth/authorize'
authorize_url = base_url + '?' + values_url
print(authorize_url)
# Paste the code from the URL here. Afterwards there are no manual steps anymore.
AUTHORIZATION_CODE = "ENTER-YOUR-AUTHORIZATION-CODE"
```
The following cell retrieves an access token using the authorization code. That access token can then be used to retrieve Strava data.
```
payload = {
"client_id": STRAVA_CLIENT_ID,
"client_secret": STRAVA_CLIENT_SECRET,
"grant_type": "authorization_code",
"code": AUTHORIZATION_CODE,
}
response = requests.request(
"POST", "https://www.strava.com/api/v3/oauth/token", data=payload
)
response = json.loads(response.text)
TOKEN = response["access_token"]
!pip install stravaio folium
client = StravaIO(access_token=TOKEN)
athlete = client.get_logged_in_athlete()
activities = client.get_logged_in_athlete_activities(after=20170101)
m = folium.Map(
tiles="cartodbpositron",
location=[59.925, 10.728123],
zoom_start=11.5,
control_scale=True
)
folium.TileLayer("cartodbpositron").add_to(m)
folium.TileLayer("cartodbdark_matter").add_to(m)
folium.LayerControl().add_to(m)
def downsample(l, n):
    """Returns every nth element from list l. Returns the
    original list if n is set to 1.
    Used to reduce the number of GPS points per activity,
    to improve performance of the website.
    """
    return l[0::n]
def map_activities(activities, folium_map, opacity=0.5, weight=1):
    if len(activities) == 0:
        logging.info("No activities found, returning empty folium map.")
        return folium_map
    counter = 0
    for a in activities:
        if a.type == "Workout":
            continue
        streams = client.get_activity_streams(a.id, athlete.id)
        try:
            points = list(zip(streams.lat, streams.lng))
            points = downsample(l=points, n=2)
            if a.type == "Run":
                folium.PolyLine(
                    locations=points, color="#ff9933", opacity=opacity, weight=weight
                ).add_to(folium_map)
            elif a.type == "Ride":
                folium.PolyLine(
                    locations=points, color="#0066ff", opacity=opacity, weight=weight
                ).add_to(folium_map)
            elif a.type == "NordicSki":
                folium.PolyLine(
                    locations=points, color="#00ffff", opacity=opacity, weight=weight
                ).add_to(folium_map)
            elif a.type == "AlpineSki":
                folium.PolyLine(
                    locations=points, color="#00ccff", opacity=opacity, weight=weight
                ).add_to(folium_map)
            elif a.type == "Canoeing":
                folium.PolyLine(
                    locations=points, color="#00ff55", opacity=opacity, weight=weight
                ).add_to(folium_map)
            elif a.type == "IceSkate":
                folium.PolyLine(
                    locations=points, color="#f6ff00", opacity=opacity, weight=weight
                ).add_to(folium_map)
            else:
                folium.PolyLine(
                    locations=points, color="#cc00ff", opacity=opacity, weight=weight
                ).add_to(folium_map)
            logging.info("Mapped activity with id: {}".format(a.id))
        except Exception:
            logging.error("Could not map activity with id: {}".format(a.id))
    return folium_map
m = map_activities(
activities=activities,
folium_map=m,
opacity=0.5,
weight=2
)
m
```
|
github_jupyter
|
# <p style="text-align: center;"> Part Two: Scaling & Normalization </p>
```
from IPython.display import HTML
from IPython.display import Image
Image(url= "https://miro.medium.com/max/3316/1*yR54MSI1jjnf2QeGtt57PA.png")
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
```
# <p style="text-align: center;"> Table of Contents </p>
- ## 1. [Introduction](#Introduction)
- ### 1.1 [Abstract](#abstract)
- ### 1.2 [Importing Libraries](#importing_libraries)
- ## 2. [Data Scaling](#data_scaling)
- ### 2.1 [Standardization](#standardization)
- ### 2.2 [Normalization](#normalization)
- ### 2.3 [The Big Question – Normalize or Standardize?](#the_big_question)
- ### 2.4 [Implementation](#implementation)
- #### 2.4.1 [Original Distributions](#original_distributions)
- #### 2.4.2 [Adding a Feature with Much Larger Values](#larger_values)
- #### 2.4.3 [MinMaxScaler](#min_max_scaler)
- #### 2.4.4 [StandardScaler](#standard_scaler)
- #### 2.4.5 [RobustScaler](#robust_scaler)
- #### 2.4.6 [Normalizer](#normalizer)
- #### 2.4.7 [Combined Plot](#combined_plot)
- ## 3. [Conclusion](#Conclusion)
- ## 4. [Contribution](#Contribution)
- ## 5. [Citation](#Citation)
- ## 6. [License](#License)
# <p style="text-align: center;"> 1.0 Introduction </p> <a id='Introduction'></a>
# 1.1 Abstract <a id='abstract'></a>
Welcome to Part Two of the Data Cleaning series. This part covers scaling and normalization of numeric features: what they are, when to use each, and how to apply them with scikit-learn.
[Back to top](#Introduction)
# 1.2 Importing Libraries <a id='importing_libraries'></a>
This is the usual starting point of any data science or machine learning project. A Python library is a reusable chunk of code that you may want to include in your programs/projects.
In this step we import the libraries required for this notebook; the major ones used here are NumPy, Pandas, Matplotlib, Seaborn, and scikit-learn.
[Back to top](#Introduction)
```
# modules we'll use
import pandas as pd
import numpy as np
# for Box-Cox Transformation
from scipy import stats
# for min_max scaling
from sklearn import preprocessing
from mlxtend.preprocessing import minmax_scaling
# plotting modules
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from astropy.table import Table, Column
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
matplotlib.style.use('ggplot')
np.random.seed(34)
```
# 2.0 Data Scaling <a id='data_scaling'></a>
## Why Should we Use Feature Scaling?
The first question we need to address – why do we need to scale the variables in our dataset? Some machine learning algorithms are sensitive to feature scaling while others are virtually invariant to it.
Machine learning models learn a mapping from input variables to an output variable. As such, the scale and distribution of the data drawn from the domain may be different for each variable. Input variables may have different units (e.g. feet, kilometers, and hours) that, in turn, may mean the variables have different scales.
### Gradient Descent Based Algorithms
Machine learning algorithms like linear regression, logistic regression, neural network, etc. that use gradient descent as an optimization technique require data to be scaled. Take a look at the formula for gradient descent below:

The presence of feature value X in the formula will affect the step size of the gradient descent. The difference in ranges of features will cause different step sizes for each feature. To ensure that the gradient descent moves smoothly towards the minima and that the steps for gradient descent are updated at the same rate for all the features, we scale the data before feeding it to the model.
> Having features on a similar scale can help the gradient descent converge more quickly towards the minima.
### Distance-Based Algorithms
Distance algorithms like KNN, K-means, and SVM are most affected by the range of features. This is because behind the scenes they are using distances between data points to determine their similarity.
For example, let’s say we have data containing high school CGPA scores of students (ranging from 0 to 5) and their future incomes (in thousands of dollars):

Since the two features have very different scales, there is a chance that higher weight is given to the feature with the higher magnitude. This will impact the performance of the machine learning algorithm, and obviously we do not want our algorithm to be biased towards one feature.
> Therefore, we scale our data before employing a distance based algorithm so that all the features contribute equally to the result.

The effect of scaling is conspicuous when we compare the Euclidean distance between data points for students A and B, and between B and C, before and after scaling as shown below:

Scaling has brought both the features into the picture and the distances are now more comparable than they were before we applied scaling.
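To make this concrete, here is a small sketch with made-up CGPA/income values (the table above is an image, so these numbers are purely illustrative): before scaling, the income column dominates the Euclidean distances; after scaling, both features contribute.
```
# Illustrative sketch with made-up CGPA / income values
import numpy as np
from sklearn.preprocessing import StandardScaler

students = np.array([[3.0, 40000.0],   # student A: [CGPA, income in $]
                     [3.2, 60000.0],   # student B
                     [4.8, 65000.0]])  # student C

def euclid(p, q):
    return np.linalg.norm(p - q)

print("raw:    d(A,B) =", euclid(students[0], students[1]),
      "  d(B,C) =", euclid(students[1], students[2]))

scaled = StandardScaler().fit_transform(students)
print("scaled: d(A,B) =", euclid(scaled[0], scaled[1]),
      "  d(B,C) =", euclid(scaled[1], scaled[2]))
```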
### Tree-Based Algorithms
Tree-based algorithms, on the other hand, are fairly insensitive to the scale of the features. Think about it, a decision tree is only splitting a node based on a single feature. The decision tree splits a node on a feature that increases the homogeneity of the node. This split on a feature is not influenced by other features.
So, there is virtually no effect of the remaining features on the split. This is what makes them invariant to the scale of the features!
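A quick way to convince yourself of this is the sketch below (synthetic data): fitting the same decision tree on raw and on standardized features gives identical predictions here, because the split thresholds simply move along with the scale.
```
# Sketch on synthetic data: decision trees are insensitive to feature scaling
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.rand(200, 2) * [1, 1000]   # second feature has a much larger scale
y = (X[:, 0] > 0.5).astype(int)

X_scaled = StandardScaler().fit_transform(X)
tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(X_scaled, y)

print((tree_raw.predict(X) == tree_scaled.predict(X_scaled)).all())  # True
```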
One of the reasons that it's easy to get confused between scaling and normalization is because the terms are sometimes used interchangeably and, to make it even more confusing, they are very similar! In both cases, you're transforming the values of numeric variables so that the transformed data points have specific helpful properties.
[Back to top](#Introduction)
## 2.1 Standardization <a id='standardization'></a>
**Scaling (Standardization):** Change in the range of your data.
Differences in the scales across input variables may increase the difficulty of the problem being modeled. A model with large weight values is often unstable, meaning that it may suffer from poor performance during learning and sensitivity to input values resulting in higher generalization error.
This means that you're transforming your data so that it fits within a specific scale, like 0-100 or 0-1. You want to scale data when you're using methods based on measures of how far apart data points are, like support vector machines (SVM) or k-nearest neighbors (KNN). With these algorithms, a change of "1" in any numeric feature is given the same importance.
For example, you might be looking at the prices of some products in both Yen and US Dollars. One US Dollar is worth about 100 Yen, but if you don't scale your prices, methods like SVM or KNN will consider a difference in price of 1 Yen as important as a difference of 1 US Dollar! This clearly doesn't fit with our intuitions of the world. With currency, you can convert between currencies. But what about if you're looking at something like height and weight? It's not entirely clear how many pounds should equal one inch (or how many kilograms should equal one meter).
By scaling your variables, you can help compare different variables on equal footing
Standardization is a scaling technique in which the values are centered around the mean with a unit standard deviation. This means that the mean of the attribute becomes zero and the resulting distribution has a unit standard deviation.
Here’s the formula for standardization:

- Mu is the mean of the feature values and
- Sigma is the standard deviation of the feature values. Note that in this case, the values are not restricted to a particular range.
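To make the formula concrete, here is a tiny by-hand version on a made-up array; the same transformation is what sklearn's StandardScaler applies in section 2.4.4 below.
```
# Standardization by hand on a toy array (values made up for illustration)
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
x_standardized = (x - x.mean()) / x.std()
print(x_standardized)
print(x_standardized.mean(), x_standardized.std())  # approximately 0 and 1
```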
[Back to top](#Introduction)
```
# generate 1000 data points randomly drawn from an exponential distribution
original_data = np.random.exponential(size=1000)
# min-max scale the data between 0 and 1
scaled_data = minmax_scaling(original_data, columns=[0])
# plot both together to compare
fig, ax = plt.subplots(1,2)
sns.distplot(original_data, ax=ax[0])
ax[0].set_title("Original Data")
sns.distplot(scaled_data, ax=ax[1])
ax[1].set_title("Scaled data")
```
## 2.2 Normalization <a id='normalization'></a>
**Normalization:** Change in the shape of the distribution of data.
Normalization scales each input variable separately to the range 0-1, which is the range for floating-point values where we have the most precision. Normalization requires that you know or are able to accurately estimate the minimum and maximum observable values. You may be able to estimate these values from your available data.
Scaling just changes the range of your data. Normalization is a more radical transformation. The point of normalization is to change your observations so that they can be described as a normal distribution.
Normal distribution: Also known as the "bell curve", this is a specific statistical distribution in which roughly equal numbers of observations fall above and below the mean, the mean and the median are the same, and there are more observations closer to the mean. The normal distribution is also known as the Gaussian distribution.
In general, you'll normalize your data if you're going to be using a machine learning or statistics technique that assumes your data is normally distributed. Some examples of these include linear discriminant analysis (LDA) and Gaussian naive Bayes. (Pro tip: any method with "Gaussian" in the name probably assumes normality.)
Normalization is a scaling technique in which values are shifted and rescaled so that they end up ranging between 0 and 1. It is also known as Min-Max scaling.
Here’s the formula for normalization:

Here, Xmax and Xmin are the maximum and the minimum values of the feature respectively.
- When the value of X is the minimum value in the column, the numerator will be 0, and hence X’ is 0
- On the other hand, when the value of X is the maximum value in the column, the numerator is equal to the denominator and thus the value of X’ is 1
- If the value of X is between the minimum and the maximum value, then the value of X’ is between 0 and 1
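Here is the same formula written out by hand on a made-up array, so you can see the endpoints land exactly on 0 and 1; the mlxtend `minmax_scaling` helper used earlier and sklearn's MinMaxScaler (section 2.4.3) do the same thing.
```
# Min-max scaling by hand on a toy array (values made up for illustration)
import numpy as np

x = np.array([10.0, 20.0, 25.0, 40.0])
x_minmax = (x - x.min()) / (x.max() - x.min())
print(x_minmax)  # 0.0 at the minimum, 1.0 at the maximum
```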
**PS:-** The method we're using to normalize here is called the Box-Cox Transformation.
Now, the big question in your mind must be when should we use normalization and when should we use standardization? Let’s find out!
[Back to top](#Introduction)
```
# normalize the exponential data with boxcox
normalized_data = stats.boxcox(original_data)
# plot both together to compare
fig, ax=plt.subplots(1,2)
sns.distplot(original_data, ax=ax[0])
ax[0].set_title("Original Data")
sns.distplot(normalized_data[0], ax=ax[1])
ax[1].set_title("Normalized data")
```
## 2.3 The Big Question – Normalize or Standardize? <a id='the_big_question'></a>
Normalization vs. standardization is an eternal question among machine learning newcomers. Let me elaborate on the answer in this section.
- Normalization is good to use when you know that the distribution of your data does not follow a Gaussian distribution. This can be useful in algorithms that do not assume any distribution of the data like K-Nearest Neighbors and Neural Networks.
- Standardization, on the other hand, can be helpful in cases where the data follows a Gaussian distribution. However, this does not have to be necessarily true. Also, unlike normalization, standardization does not have a bounding range. So, even if you have outliers in your data, they will not be affected by standardization.
However, at the end of the day, the choice of using normalization or standardization will depend on your problem and the machine learning algorithm you are using. There is no hard and fast rule to tell you when to normalize or standardize your data. You can always start by fitting your model to raw, normalized and standardized data and compare the performance for best results.
It is a good practice to fit the scaler on the training data and then use it to transform the testing data. This would avoid any data leakage during the model testing process. Also, the scaling of target values is generally not required.
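In code, that good practice looks roughly like the sketch below (synthetic data): the scaler learns its statistics from the training split only and then reuses them on the test split.
```
# Sketch: fit the scaler on the training data only, then transform both splits
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.RandomState(1).normal(size=(100, 3))
X_train, X_test = train_test_split(X, test_size=0.2, random_state=1)

scaler = StandardScaler().fit(X_train)    # statistics come from the training data
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # no information leaks from the test data
```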
[Back to top](#Introduction)
## 2.4 Implementation <a id='implementation'></a>
This is all good in theory, but how do we implement it in real life. The sklearn library has various modules in the preprocessing section which implement these in different ways. The 4, that are most widely used and that we're going to implement here are:-
- **MinMaxScaler:** The MinMaxScaler transforms features by scaling each feature to a given range. This range can be set by specifying the feature_range parameter (default is (0,1)). This scaler works better for cases where the distribution is not Gaussian or the standard deviation is very small. However, it is sensitive to outliers, so if there are outliers in the data, you might want to consider another scaler.
> x_scaled = (x-min(x)) / (max(x)–min(x))
- **StandardScaler:** Sklearn's main scaler, the StandardScaler, uses a strict definition of standardization: it centers the data and scales it to unit variance using the following formula, where u is the mean and s is the standard deviation.
> x_scaled = (x — u) / s
- **RobustScaler:** If your data contains many outliers, scaling using the mean and standard deviation of the data is likely to not work very well. In these cases, you can use the RobustScaler. It removes the median and scales the data according to a quantile range, so roughly x_scaled = (x - median(x)) / IQR. By default, the scaler uses the Inter Quartile Range (IQR), which is the range between the 1st quartile and the 3rd quartile. The quantile range can be manually set by specifying the quantile_range parameter when initiating a new instance of the RobustScaler.
- **Normalizer:**
- **‘l1’:** The l1 norm divides each row by the sum of the absolute values of its entries, giving equal penalty to all parameters and enforcing sparsity.
> x_normalized = x / sum(|x_i| for x_i in X)
- **‘l2’:** The l2 norm uses the square root of the sum of all the squared values. This creates smoothness and rotational invariance. Some models, like PCA, assume rotational invariance, and so l2 will perform better.
> x_normalized = x / sqrt(sum((i\**2) for i in X))
**`TLDR`**
- Use MinMaxScaler as your default
- Use RobustScaler if you have outliers and can handle a larger range
- Use StandardScaler if you need normalized features
- Use Normalizer sparingly - it normalizes rows, not columns
[Back to top](#Introduction)
### 2.4.1 Original Distributions <a id='original_distributions'></a>
Let's make several types of random distributions. We're doing this because when we deal with real world data, the data is not necessarily in a normal (Gaussian) distribution. Each type of scaling may have a different effect depending on the type of the distribution, thus we take examples of 5 different type of distributions here.
- **Beta:** The Beta distribution is a probability distribution on probabilities.
- **Exponential:** The exponential distribution is a probability distribution which represents the time between events in a Poisson process.
- **Normal (Platykurtic):** The term "platykurtic" refers to a statistical distribution in which the excess kurtosis value is negative. For this reason, a platykurtic distribution will have thinner tails than a normal distribution, resulting in fewer extreme positive or negative events.
- **Normal (Leptokurtic):** Leptokurtic distributions are statistical distributions with kurtosis over three. It is one of three major categories found in kurtosis analysis.
- **Bimodal:** The bimodal distribution has two peaks.
[Back to top](#Introduction)
```
#create columns of various distributions
df = pd.DataFrame({
'beta': np.random.beta(5, 1, 1000) * 60, # beta
'exponential': np.random.exponential(10, 1000), # exponential
'normal_p': np.random.normal(10, 2, 1000), # normal platykurtic
'normal_l': np.random.normal(10, 10, 1000), # normal leptokurtic
})
# make bimodal distribution
first_half = np.random.normal(20, 3, 500)
second_half = np.random.normal(-20, 3, 500)
bimodal = np.concatenate([first_half, second_half])
df['bimodal'] = bimodal
# create list of column names to use later
col_names = list(df.columns)
```
After defining the distributions, lets visualize them
```
# plot original distribution plot
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('Original Distributions')
sns.kdeplot(df['beta'], ax=ax1)
sns.kdeplot(df['exponential'], ax=ax1)
sns.kdeplot(df['normal_p'], ax=ax1)
sns.kdeplot(df['normal_l'], ax=ax1)
sns.kdeplot(df['bimodal'], ax=ax1);
df.describe()
df.plot()
```
As we can clearly see from the statistics and the plots, all values are in the same ball park. But what happens if we disturb this by adding a feature with much larger values.
### 2.4.2 Adding a Feature with Much Larger Values <a id='larger_values'></a>
This feature could be home prices, for example.
[Back to Top](#Introduction)
```
normal_big = np.random.normal(1000000, 10000, (1000,1)) # normal distribution of large values
df['normal_big'] = normal_big
col_names.append('normal_big')
df['normal_big'].plot(kind='kde')
df.normal_big.mean()
```
We've got a normal-ish distribution with a mean near 1,000,000. But if we put this on the same plot as the original distributions, you can't even see the earlier columns.
```
# plot original distribution plot with larger value feature
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('Original Distributions')
sns.kdeplot(df['beta'], ax=ax1)
sns.kdeplot(df['exponential'], ax=ax1)
sns.kdeplot(df['normal_p'], ax=ax1)
sns.kdeplot(df['normal_l'], ax=ax1)
sns.kdeplot(df['bimodal'], ax=ax1);
sns.kdeplot(df['normal_big'], ax=ax1);
df.describe()
```
The new, high-value distribution is way to the right. And here's a plot of the values.
```
df.plot()
```
### 2.4.3 MinMaxScaler <a id='min_max_scaler'></a>
MinMaxScaler subtracts the column minimum from each value and then divides by the range (maximum minus minimum), mapping every feature into the interval [0, 1].
[Back to Top](#Introduction)
```
mm_scaler = preprocessing.MinMaxScaler()
df_mm = mm_scaler.fit_transform(df)
df_mm = pd.DataFrame(df_mm, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After MinMaxScaler')
sns.kdeplot(df_mm['beta'], ax=ax1)
sns.kdeplot(df_mm['exponential'], ax=ax1)
sns.kdeplot(df_mm['normal_p'], ax=ax1)
sns.kdeplot(df_mm['normal_l'], ax=ax1)
sns.kdeplot(df_mm['bimodal'], ax=ax1)
sns.kdeplot(df_mm['normal_big'], ax=ax1);
df_mm.describe()
```
Notice how the shape of each distribution remains the same, but now the values are between 0 and 1. Our feature with much larger values was brought into scale with our other features.
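As a quick sanity check (a minimal sketch using the `df` and `df_mm` frames created above), the same result can be reproduced manually with the min-max formula:
```
# manual min-max scaling: (x - min) / (max - min), computed column-wise
df_mm_manual = (df - df.min()) / (df.max() - df.min())
# should agree with the sklearn result to floating-point precision
print(np.allclose(df_mm.values, df_mm_manual[col_names].values))
```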
### 2.4.4 StandardScaler <a id='standard_scaler'></a>
StandardScaler rescales each column to have zero mean and unit variance.
[Back to Top](#Introduction)
```
s_scaler = preprocessing.StandardScaler()
df_s = s_scaler.fit_transform(df)
df_s = pd.DataFrame(df_s, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After StandardScaler')
sns.kdeplot(df_s['beta'], ax=ax1)
sns.kdeplot(df_s['exponential'], ax=ax1)
sns.kdeplot(df_s['normal_p'], ax=ax1)
sns.kdeplot(df_s['normal_l'], ax=ax1)
sns.kdeplot(df_s['bimodal'], ax=ax1)
sns.kdeplot(df_s['normal_big'], ax=ax1);
```
You can see that all features now have 0 mean.
```
df_s.describe()
```
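The same transformation can be reproduced manually (a minimal sketch; note that scikit-learn divides by the population standard deviation, i.e. `ddof=0`):
```
# manual standardization: subtract the column mean, divide by the population std
df_s_manual = (df - df.mean()) / df.std(ddof=0)
print(np.allclose(df_s.values, df_s_manual[col_names].values))
```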
### 2.4.5 RobustScaler <a id='robust_scaler'></a>
RobustScaler subtracts the column median and divides by the interquartile range.
[Back to Top](#Introduction)
```
r_scaler = preprocessing.RobustScaler()
df_r = r_scaler.fit_transform(df)
df_r = pd.DataFrame(df_r, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After RobustScaler')
sns.kdeplot(df_r['beta'], ax=ax1)
sns.kdeplot(df_r['exponential'], ax=ax1)
sns.kdeplot(df_r['normal_p'], ax=ax1)
sns.kdeplot(df_r['normal_l'], ax=ax1)
sns.kdeplot(df_r['bimodal'], ax=ax1)
sns.kdeplot(df_r['normal_big'], ax=ax1);
df_r.describe()
```
Although the range of values for each feature is much smaller than for the original features, it is larger and more variable than with MinMaxScaler. The bimodal distribution values are now compressed into two small groups. StandardScaler and RobustScaler produce roughly similar ranges here.
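A minimal manual sketch of the same transformation (RobustScaler's default `quantile_range` is the 25th to 75th percentile):
```
# manual robust scaling: subtract the column median, divide by the interquartile range
iqr = df.quantile(0.75) - df.quantile(0.25)
df_r_manual = (df - df.median()) / iqr
print(np.allclose(df_r.values, df_r_manual[col_names].values))
```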
### 2.4.6 Normalizer <a id='normalizer'></a>
Note that Normalizer operates on the rows, not the columns. By default it applies L2 normalization, scaling each sample (row) to unit Euclidean norm.
[Back to Top](#Introduction)
```
n_scaler = preprocessing.Normalizer()
df_n = n_scaler.fit_transform(df)
df_n = pd.DataFrame(df_n, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After Normalizer')
sns.kdeplot(df_n['beta'], ax=ax1)
sns.kdeplot(df_n['exponential'], ax=ax1)
sns.kdeplot(df_n['normal_p'], ax=ax1)
sns.kdeplot(df_n['normal_l'], ax=ax1)
sns.kdeplot(df_n['bimodal'], ax=ax1)
sns.kdeplot(df_n['normal_big'], ax=ax1);
df_n.describe()
```
Normalizer also moved the features to similar scales. Notice that the values of our much larger feature are now squeezed into an extremely small range clustered around 0.9999999999.
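A minimal manual sketch of the same row-wise operation:
```
# manual L2 normalization: divide each row by its Euclidean norm
row_norms = np.sqrt((df ** 2).sum(axis=1))
df_n_manual = df.div(row_norms, axis=0)
print(np.allclose(df_n.values, df_n_manual[col_names].values))
```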
### 2.4.7 Combined Plot <a id='combined_plot'></a>
Let's look at our original and transformed distributions together. We'll exclude Normalizer because you generally want to transform your features, not your samples.
[Back to Top](#Introduction)
```
# Combined plot.
fig, (ax0, ax1, ax2, ax3) = plt.subplots(ncols=4, figsize=(20, 8))
ax0.set_title('Original Distributions')
sns.kdeplot(df['beta'], ax=ax0)
sns.kdeplot(df['exponential'], ax=ax0)
sns.kdeplot(df['normal_p'], ax=ax0)
sns.kdeplot(df['normal_l'], ax=ax0)
sns.kdeplot(df['bimodal'], ax=ax0)
sns.kdeplot(df['normal_big'], ax=ax0);
ax1.set_title('After MinMaxScaler')
sns.kdeplot(df_mm['beta'], ax=ax1)
sns.kdeplot(df_mm['exponential'], ax=ax1)
sns.kdeplot(df_mm['normal_p'], ax=ax1)
sns.kdeplot(df_mm['normal_l'], ax=ax1)
sns.kdeplot(df_mm['bimodal'], ax=ax1)
sns.kdeplot(df_mm['normal_big'], ax=ax1);
ax2.set_title('After RobustScaler')
sns.kdeplot(df_r['beta'], ax=ax2)
sns.kdeplot(df_r['exponential'], ax=ax2)
sns.kdeplot(df_r['normal_p'], ax=ax2)
sns.kdeplot(df_r['normal_l'], ax=ax2)
sns.kdeplot(df_r['bimodal'], ax=ax2)
sns.kdeplot(df_r['normal_big'], ax=ax2);
ax3.set_title('After StandardScaler')
sns.kdeplot(df_s['beta'], ax=ax3)
sns.kdeplot(df_s['exponential'], ax=ax3)
sns.kdeplot(df_s['normal_p'], ax=ax3)
sns.kdeplot(df_s['normal_l'], ax=ax3)
sns.kdeplot(df_s['bimodal'], ax=ax3)
sns.kdeplot(df_s['normal_big'], ax=ax3);
```
You can see that after any transformation the distributions are on a similar scale. Also notice that MinMaxScaler doesn't distort the distances between the values in each feature.
# <p style="text-align: center;">Conclusion<p><a id='Conclusion'></a>
We have used various data scaling and preprocessing techniques in this notebook, summarized below:
- Use MinMaxScaler as your default
- Use RobustScaler if you have outliers and can handle a larger range
- Use StandardScaler if you need standardized features (zero mean, unit variance)
- Use Normalizer sparingly - it normalizes rows, not columns
[Back to top](#Introduction)
# <p style="text-align: center;">Contribution<p><a id='Contribution'></a>
This was a fun project in which we explored data cleaning and data preprocessing. We took inspiration from the Kaggle learning course and created our own notebook, extending the same ideas and supplementing them with contributions drawn from our own experience and past projects.
- Code by self : 65%
- Code from external Sources : 35%
[Back to top](#Introduction)
# <p style="text-align: center;">Citation<p><a id='Citation'></a>
- https://www.kaggle.com/alexisbcook/scaling-and-normalization
- https://scikit-learn.org/stable/modules/preprocessing.html
- https://www.analyticsvidhya.com/blog/2020/04/feature-scaling-machine-learning-normalization-standardization/
- https://kharshit.github.io/blog/2018/03/23/scaling-vs-normalization
- https://www.kaggle.com/discdiver/guide-to-scaling-and-standardizing
- https://docs.google.com/spreadsheets/d/1woVi7wq13628HJ-tN6ApaRGVZ85OdmHsDBKLAf5ylaQ/edit#gid=0
- https://towardsdatascience.com/preprocessing-with-sklearn-a-complete-and-comprehensive-guide-670cb98fcfb9
- https://www.kaggle.com/rpsuraj/outlier-detection-techniques-simplified?select=insurance.csv
- https://statisticsbyjim.com/basics/remove-outliers/
- https://statisticsbyjim.com/basics/outliers/
# <p style="text-align: center;">License<p><a id='License'></a>
Copyright (c) 2020 Manali Sharma, Rushabh Nisher
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
[Back to top](#Introduction)
# Naive Bayes from scratch
```
import pandas as pd
import numpy as np
def get_accuracy(x: pd.DataFrame, y: pd.Series, y_hat: pd.Series):
correct = y_hat == y
acc = np.sum(correct) / len(y)
cond = y == 1
y1 = len(y[cond])
y0 = len(y[~cond])
print(f'Class 0: tested {y0}, correctly classified {correct[~cond].sum()}')
print(f'Class 1: tested {y1}, correctly classified {correct[cond].sum()}')
print(f'Overall: tested {len(y)}, correctly classified {correct.sum()}')
print(f'Accuracy = {acc:.2f}')
class Classifier:
def __init__(self, dataset: str = None, mle: bool=True):
if dataset:
x_train, y_train = reader(f'datasets/{dataset}-train.txt')
x_test, y_test = reader(f'datasets/{dataset}-test.txt')
self.train(x_train, y_train, mle)
print('Training accuracy')
print('=' * 10)
self.accuracy(x_train, y_train)
print('Test accuracy')
print('=' * 10)
self.accuracy(x_test, y_test)
def accuracy(self, x: pd.DataFrame, y: pd.DataFrame) -> None:
y_hat = self.predict(x)
get_accuracy(x, y, y_hat)
class NB(Classifier):
def __init__(self, dataset: str = None, mle: bool=True):
self.prior = None
self.p_xi_given_y = {0: {}, 1: {}}
self.prior_x = {}
self.cols = None
super().__init__(dataset, mle)
def train(self, x: pd.DataFrame, y: pd.Series, mle: bool=True):
adj_den = 0 if mle else 2
adj_num = 0 if mle else 1
self.prior = y.value_counts().to_dict()
for c in [0, 1]:
self.prior[c] += adj_num
self.prior[c] /= (len(y) + adj_den)
self.cols = x.columns
for col in x.columns:
self.prior_x[col] = (x[col].value_counts() / len(y)).to_dict()
cond = y == 1
y1 = np.sum(cond)
y0 = len(y) - y1
y1 += adj_den
y0 += adj_den
x_pos = x[cond]
x_neg = x[~cond]
for cls in [0, 1]:
for col in x.columns:
x_cls = x_pos if cls == 1 else x_neg
y_cls = y1 if cls == 1 else y0
x1 = len(x_cls.query(f'{col} == 1'))
x0 = len(x_cls.query(f'{col} == 0'))
x1 += adj_num
x0 += adj_num
self.p_xi_given_y[cls][col] = {
0: x0 / y_cls,
1: x1 / y_cls
}
def predict(self, x: pd.DataFrame) -> pd.Series:
out = []
for _, row in x.iterrows():
m = {}
for cls in [0, 1]:
                m[cls] = np.log([self.prior[cls]] + [
self.p_xi_given_y[cls][col][row[col]]
for col in x.columns
]).sum()
out.append(1 if m[1] >= m[0] else 0)
return pd.Series(out)
def _get_ind(self, col):
num = self.prior_x[col][0] * self.p_xi_given_y[1][col][1]
den = self.prior_x[col][1] * self.p_xi_given_y[1][col][0]
return num / den
def most_indicative(self):
return pd.Series({
col: self._get_ind(col)
for col in self.cols
}).sort_values(ascending=False)
x = pd.DataFrame({'x1': [0, 0, 1, 1], 'x2': [0, 1, 0, 1]})
y = pd.Series([0, 0, 1, 1])
x
nb = NB()
nb.train(x, y)
nb.accuracy(x, y)
```
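For reference, the classifier above implements the standard Naive Bayes decision rule for binary features, evaluated in log space:

$$\hat{y} = \arg\max_{c \in \{0,1\}} \Big( \log P(y=c) + \sum_i \log P(x_i \mid y=c) \Big)$$

With `mle=True` the probabilities are the raw maximum-likelihood count ratios; with `mle=False` the `adj_num`/`adj_den` terms apply add-one (Laplace) smoothing, e.g. $P(x_i = v \mid y = c) = (N_{i,v,c} + 1)/(N_c + 2)$ for a binary feature. Note that the `dataset` branch of the constructor assumes a `reader` helper, defined elsewhere, that loads the train/test files into feature and label objects.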
## Import dependencies
```
import numpy as np
import sys
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import seaborn as sn
import scipy as sp
from tqdm import tqdm
import glob
from fair import *
from fair.scripts.data_retrieval import *
%matplotlib inline
```
Helper function used to round output-table values to a given number of significant figures.
```
def round_to_sf(x,sf):
if x==0:
return 0
if np.isnan(x):
return '-'
else:
num= round(x, sf - int(np.floor(np.log10(abs(x)))))
if abs(num)>10**sf:
return str(int(num))
else:
return str(num)
```
# I. Default parameter simulated concentrations
Here we run historical emissions to test how the default parameter set simulates the historical evolution of concentrations.
```
## first we view & create a latex table for the default parameter set:
default_params = get_gas_parameter_defaults()
params_table = default_params.default.T.sort_index().rename(dict(a1='$a_1$',a2='$a_2$',a3='$a_3$',a4='$a_4$',
tau1='$\tau_1$',tau2='$\tau_2$',tau3='$\tau_3$',tau4='$\tau_4$',
r0='$r_0$',rC='$r_u$',rT='$r_T$',rA='$r_a$',PI_conc='PI\_conc',
f1='$f_1$',f2='$f_2$',f3='$f_3$'),axis=1)
params_table.index.name='agent'
params_table.columns.name='parameter'
params_table.index = [x.replace('_','\_') for x in params_table.index]
params_table.applymap(lambda x:round_to_sf(x,2)).replace(np.nan,'')#.to_latex('../../docs/manuscript/tables/TabS2',escape=False,bold_rows=True)
```
### data retrieval
#### concentrations
WMGHG concentrations are from the CMIP6 concentration dataset, [Meinshausen et al., 2017](https://www.geosci-model-dev.net/10/2057/2017/). For some species, these are extended using data from NOAA.
Reference:
Meinshausen, M., Vogel, E., Nauels, A., Lorbacher, K., Meinshausen, N., Etheridge, D. M., … Weiss, R. (2017). Historical greenhouse gas concentrations for climate modelling (CMIP6). Geoscientific Model Development, 10(5), 2057–2116. https://doi.org/10.5194/gmd-10-2057-2017
```
import ftplib
## import concentrations from official CMIP6 timeseries:
CMIP6_conc_ftp = ftplib.FTP('data.iac.ethz.ch','anonymous')
CMIP6_conc_ftp.cwd('CMIP6/input4MIPs/UoM/GHGConc/CMIP/yr/atmos/UoM-CMIP-1-1-0/GHGConc/gr3-GMNHSH/v20160701')
CMIP6_ftp_list = [x for x in CMIP6_conc_ftp.nlst() if x[-3:]=='csv']
WMGHG_concs = pd.DataFrame(dict(zip(['_'.join(x.split('_')[3:-8]) for x in CMIP6_ftp_list],[pd.read_csv('ftp://data.iac.ethz.ch/CMIP6/input4MIPs/UoM/GHGConc/CMIP/yr/atmos/UoM-CMIP-1-1-0/GHGConc/gr3-GMNHSH/v20160701/'+x,usecols=[0,1],index_col=0).iloc[:,0] for x in CMIP6_ftp_list])))
WMGHG_concs = WMGHG_concs[[x for x in WMGHG_concs.columns if x[-2:]!='eq']] # remove "equivalent" concentrations
WMGHG_concs['halon1202'] = 0
WMGHG_concs.loc[1765:2014,'halon1202'] = pd.read_csv('http://www.pik-potsdam.de/~mmalte/rcps/data/RCP45_MIDYEAR_CONCENTRATIONS.DAT',skiprows=38,delim_whitespace=True,index_col=0)['HALON1202'].loc[1765:2014].values
## we extend CO2, CH4 & N2O out to 2019 using the NOAA ESRL data
NOAA_molefrac = pd.read_csv('https://www.esrl.noaa.gov/gmd/aggi/NOAA_MoleFractions_2020.csv',skiprows=2,index_col=0,skipfooter=5).iloc[1:].replace('nd',np.nan).apply(pd.to_numeric).rename(dict(CO2='carbon_dioxide',CH4='methane',N2O='nitrous_oxide'),axis=1)
WMGHG_concs = WMGHG_concs.reindex(np.arange(2020))
for species in ['carbon_dioxide','methane','nitrous_oxide']:
# scale the NOAA data to join seamlessly (scale factors are almost exactly 1)
scale_factor = WMGHG_concs.loc[2010:2014,species].mean() / NOAA_molefrac.loc[2010:2015,species].mean()
WMGHG_concs.loc[2015:2019,species] = NOAA_molefrac.loc[2015:2020,species].values * scale_factor
WMGHG_concs.drop(np.arange(1750),inplace=True)
# rescale all GHGs to be in ppb (bar CO2)
WMGHG_concs[WMGHG_concs.columns.drop(['carbon_dioxide','methane','nitrous_oxide'])] *= 1/1000
```
#### emissions & forcing
Emissions & external forcing are taken from the RCMIP protocol.
Reference:
Nicholls, Z. R. J., Meinshausen, M., Lewis, J., Gieseke, R., Dommenget, D., Dorheim, K., … Xie, Z. (2020). Reduced complexity model intercomparison project phase 1: Protocol, results and initial observations. Geoscientific Model Development Discussions, 1–33. https://doi.org/10.5194/gmd-2019-375
```
## emissions
def get_SSP_emms(ssp):
emms = RCMIP_to_FaIR_input_emms(ssp).interpolate().loc[1750:2100]
rebase_species = ['so2','nox','co','nmvoc','bc','nh3','oc','nox_avi','methyl_bromide','methyl_chloride','chcl3','ch2cl2']
emms.loc[:,rebase_species] -= emms.loc[1750,rebase_species]
return emms
choose_ssps=['ssp119','ssp126','ssp245','ssp370','ssp585']
SSP_emms = pd.concat([get_SSP_emms(x) for x in choose_ssps],axis=1,keys=choose_ssps)
## forcing
SSP_forc = pd.concat([get_RCMIP_forc(x) for x in choose_ssps],axis=1,keys=choose_ssps).loc[:2100]
```
## run the model!
```
default_SSP_run = run_FaIR(emissions_in=SSP_emms,forcing_in=SSP_forc)
```
## plot the results
```
## get MAGICC7.1.0 data to benchmark
MAGICC_defaults = pd.read_csv('../../aux/input-data/RCMIP/data_results_phase-1_magicc7_rcmip_phase-1_magicc7.1.0.beta_v1-0-0.csv').drop(['Model','Unit','Climatemodel','Region'],axis=1).set_index(['Scenario','Variable']).reindex(['esm-'+x+'-allGHG' for x in choose_ssps],level=0)
RCMIP_outputmap = pd.read_csv('../../aux/FaIRv2.0.0-alpha_RCMIP_inputmap.csv',index_col=0)
MAGICC_defaults = MAGICC_defaults.rename(RCMIP_outputmap.reset_index().set_index('RCMIP_concs_key')['index'].to_dict(),level=1).reindex(RCMIP_outputmap.index,level=1).T
MAGICC_defaults.index = MAGICC_defaults.index.astype(int)
MAGICC_defaults.rename(dict(zip(['esm-'+x+'-allGHG' for x in choose_ssps],choose_ssps)),axis=1,level=0,inplace=True)
## get FaIRv1.5 data to benchmark
FaIR_defaults = pd.concat([pd.read_csv('../../aux/input-data/RCMIP/rcmip-master-data-results-phase-1-fair/data/results/phase-1/fair/rcmip_phase-1_fair-1.5-default-'+x+'_v1-0-1.csv') for x in ['esm-'+x+'-allGHG' for x in choose_ssps]]).drop(['Model','Unit','Climatemodel','Region'],axis=1).set_index(['Scenario','Variable'])
FaIR_defaults = FaIR_defaults.rename(RCMIP_outputmap.reset_index().set_index('RCMIP_concs_key')['index'].to_dict(),level=1).reindex(RCMIP_outputmap.index,level=1).T
FaIR_defaults.index = [int(x[:4]) for x in FaIR_defaults.index]
FaIR_defaults.rename(dict(zip(['esm-'+x+'-allGHG' for x in choose_ssps],choose_ssps)),axis=1,level=0,inplace=True)
## set plot rcParams
matplotlib.rcParams['font.family']='Helvetica'
matplotlib.rcParams['font.size']=11
matplotlib.rcParams['axes.formatter.limits']=-3,3
matplotlib.rcParams['legend.frameon']=False
plt.rcParams['pdf.fonttype'] = 42
## & plot!
colors= {'ssp245':'#7570b3','ssp370':'#d95f02','ssp585':'#e7298a','ssp119':'#66a61e','ssp126':'#1b9e77','history':'grey'}
map_conc_names = dict(zip(WMGHG_concs.columns,['C$_2$F$_6$','C$_3$F$_8$','C$_4$F$_{10}$','C$_5$F$_{12}$','C$_6$F$_{14}$','C$_7$F$_{16}$','C$_8$F$_{18}$','cC$_4$F$_{8}$','CO$_2$','CCl$_4$','CF$_4$','CFC113','CFC114','CFC115','CFC11','CFC12','CH$_2$Cl$_2$','CH$_3$CCl$_3$','CHCl$_3$','Halon1211','Halon1301','Halon2402','HCFC141b', 'HCFC142b', 'HCFC22', 'HFC125','HFC134a', 'HFC143a', 'HFC152a', 'HFC227ea', 'HFC236fa', 'HFC23','HFC245fa', 'HFC32', 'HFC365mfc', 'HFC4310mee','CH$_4$','CH$_3$Br','CH$_3$Cl','NF$_3$','N$_2$O','SF$_6$','SO$_2$F$_2$','Halon1202']))
fig,ax = plt.subplots(8,6,figsize=(15,15))
with plt.rc_context({"lines.linewidth": 0.75,"lines.markersize":4,"lines.markerfacecolor":'none',"lines.markeredgewidth":0.5}):
for i,gas in enumerate(WMGHG_concs.columns):
ax.flatten()[i].plot(WMGHG_concs.loc[1850:,gas].iloc[::10],'o',color='k')
for ssp in choose_ssps:
ax.flatten()[i].plot(default_SSP_run['C'].loc[2014:2100,(ssp,'default',gas)],color=colors[ssp],label=ssp)
ax.flatten()[i].plot(MAGICC_defaults.loc[2014:2100,(ssp,gas)]*RCMIP_outputmap.loc[gas,'RCMIP_concs_scaling'],color=colors[ssp],ls=':')
try: # need exceptions for FaIR as not all gases were included as this point.
ax.flatten()[i].plot(FaIR_defaults.loc[2014:2100,(ssp,gas)]*RCMIP_outputmap.loc[gas,'RCMIP_concs_scaling'],color=colors[ssp],ls='-.')
except:
pass
ax.flatten()[i].plot(default_SSP_run['C'].loc[1850:2014,('ssp245','default',gas)],color=colors['history'],label='historical')
ax.flatten()[i].plot(MAGICC_defaults.loc[1850:2014,(ssp,gas)]*RCMIP_outputmap.loc[gas,'RCMIP_concs_scaling'],color=colors['history'],ls=':')
try:
ax.flatten()[i].plot(FaIR_defaults.loc[1850:2014,(ssp,gas)]*RCMIP_outputmap.loc[gas,'RCMIP_concs_scaling'],color=colors['history'],ls='-.')
except:
pass
ax.flatten()[i].text(0.5,0.98,map_conc_names[gas],transform=ax.flatten()[i].transAxes,va='bottom',ha='center',fontsize=12,fontweight='bold')
if gas in ['carbon_dioxide','methane','nitrous_oxide']:
ax1 = inset_axes(ax.flatten()[i],width="100%",height="100%",bbox_to_anchor=(0.05,0.43,0.5,0.6),bbox_transform=ax.flatten()[i].transAxes)
ax1.plot(default_SSP_run['C'].loc[1850:2014,('ssp245','default',gas)],color=colors['history'])
ax1.plot(MAGICC_defaults.loc[1850:2014,(ssp,gas)]*RCMIP_outputmap.loc[gas,'RCMIP_concs_scaling'],color=colors['history'],ls=':')
ax1.plot(FaIR_defaults.loc[1850:2014,(ssp,gas)]*RCMIP_outputmap.loc[gas,'RCMIP_concs_scaling'],color=colors['history'],ls='-.')
ax1.plot(WMGHG_concs.loc[1850:,gas].iloc[::10],'o',color='k')
ax1.set_xticklabels([])
ax1.tick_params(left=False,labelleft=False,right=True,labelright=True)
ax1.ticklabel_format(axis='y',style="plain")
ax1.set_xlim(1850,2014)
[a.tick_params(labelbottom=False) for a in ax.flatten()]
[a.tick_params(labelbottom=True) for a in ax.flatten()[-11:]]
[a.ticklabel_format(style="plain") for a in ax.flatten()[-11:]]
[a.set_xlabel('year') for a in ax.flatten()[-11:]]
[a.set_xlim(1850,2100) for a in ax.flatten()]
[a.spines[pos].set_visible(False) for pos in ['right','top'] for a in ax.flatten()]
ax.flatten()[-6].plot([],[],'k',label='FaIRv2.0.0')
ax.flatten()[-6].plot([],[],'k:',label='MAGICC7.1.0-beta')
ax.flatten()[-6].plot([],[],'k-.',label='FaIRv1.5')
# fig.subplots_adjust(hspace=0.1)
plt.tight_layout(h_pad=0,w_pad=0)
ax.flatten()[-6].legend(loc=(1.05,0),labelspacing=0.1,prop={'size':9})
[a.set_visible(False) for a in ax.flatten()[-5:]]
[fig.savefig('../../docs/manuscript/figures/Fig2.'+x,dpi=600,bbox_inches='tight') for x in ['png','pdf']]
''
```
# II. Default parameter metrics
Here we compute GWP values for each gas in the FaIRv2.0.0-alpha namelist; under a scenario of concentrations fixed at their present day (2014) levels. These include the impact due to all forcing (direct through radiative effects + indirect through any atmospheric chemistry).
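For reference, these are the standard metric definitions that the cell below evaluates numerically: the absolute global warming potential (AGWP) of an agent $x$ over a time horizon $H$ is the time-integrated radiative forcing anomaly caused by a unit-mass pulse emission, and the GWP is its ratio to the AGWP of CO$_2$,

$$\mathrm{AGWP}_x(H) = \int_0^H \Delta \mathrm{RF}_x(t)\,\mathrm{d}t, \qquad \mathrm{GWP}_x(H) = \frac{\mathrm{AGWP}_x(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)}.$$

In the code, the integral is approximated by the cumulative sum of annual total-forcing anomalies relative to the unperturbed emissions-driven run.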
```
historical_concrun = WMGHG_concs.dropna().copy()
## add in aerosol emissions
aer_species = ['so2','nox','co','nmvoc','bc','nh3','oc','nox_avi']
historical_concrun = pd.concat([historical_concrun,get_SSP_emms('ssp245').loc[:2014,aer_species]],axis=1)
historical_concrun = pd.concat([historical_concrun],axis=1,keys=['historical'])
historical_forc = pd.concat([get_RCMIP_forc('ssp245').loc[historical_concrun.index]],axis=1,keys=['historical'])
## extend both series into the future, but fixed @ 2014 levels
historical_concrun = historical_concrun.reindex(np.arange(1750,2516)).interpolate(limit=501,limit_direction='forward')
historical_forc = historical_forc.reindex(np.arange(1750,2516)).interpolate(limit=501,limit_direction='forward')
## concentration-driven run over history
hist_run = run_FaIR(concentrations_in=historical_concrun, forcing_in=historical_forc, aer_concs_in=aer_species)
## obtain corresponding emissions & reset aerosol emissions
hist_emms = hist_run['Emissions'].droplevel(axis=1,level=1)
hist_emms.loc[:2014,('historical',aer_species)] = get_SSP_emms('ssp245').loc[:2014,aer_species].values
hist_emms.loc[2015:,('historical',aer_species)] = hist_emms.loc[2014,('historical',aer_species)].values
## run emissions to check consistency
hist_run_emms = run_FaIR(emissions_in=hist_emms, forcing_in=historical_forc)
## run over each gas species, perturbing each by 1t in 2015
gas_mass_conversion_factors = pd.Series(index=hist_emms.columns.levels[1],dtype=float)
gas_mass_conversion_factors.loc[:] = 1
gas_mass_conversion_factors.loc['carbon_dioxide'] = (1/1000)/(44.01/12.01)
gas_mass_conversion_factors.loc['nitrous_oxide'] = 28/44
rf_results = []
for gas_species in hist_emms.columns.levels[1]:
pert_emms = hist_emms.copy()
pert_emms.loc[2015,('historical',gas_species)] += gas_mass_conversion_factors.loc[gas_species]/1e6
pert_result = run_FaIR(emissions_in=pert_emms, forcing_in=historical_forc, show_run_info=False)
rf_results += [(pert_result['RF'].loc[:,('historical','default','Total')]-hist_run_emms['RF'].loc[:,('historical','default','Total')]).rename(gas_species)]
rf_results = pd.concat(rf_results,axis=1)
AGWP = rf_results.cumsum().loc[2015+np.array([5,10,20,50,100,500])]
AGWP.index = np.array([5,10,20,50,100,500])
GWP = AGWP.apply(lambda x: x/AGWP.carbon_dioxide)
print('GWP value over various timescales:')
GWP.index.name = 'timescale / years'
GWP.columns.name = 'agent'
GWP.T.applymap(lambda x:round_to_sf(x,2))#.to_latex('../../docs/manuscript/tables/TabS3',escape=True,bold_rows=True)
```
# Supplement I. Methane lifetime over history + RCP8.5 extension
A demonstration of the state-dependent lifetime of methane over RCP history, extended to 2100 with RCP8.5. We use RCP8.5 because it is, or at least appears to be, the most commonly discussed scenario in the methane-sensitivity literature.
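The quantity plotted below is simply the state-dependent lifetime scaling factor multiplied by the default unperturbed lifetime, $\tau_{\mathrm{CH_4}}(t) = \alpha_{\mathrm{CH_4}}(t)\,\tau_1$, which is how `CH4_lifetime` is constructed from the model output.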
```
RCP85_emms = RCMIP_to_FaIR_input_emms('rcp85').dropna(how='all').dropna(axis=1,how='all')
RCP85_emms = pd.concat([RCP85_emms],axis=1,keys=['RCP8.5'])
rebase_species = ['so2','nox','co','nmvoc','bc','nh3','oc','nox_avi','methyl_bromide','methyl_chloride','chcl3','ch2cl2']
rebase_species = list(set(rebase_species).intersection(RCP85_emms.columns.levels[1]))
RCP85_emms.loc[:,('RCP8.5',rebase_species)] -= RCP85_emms.loc[1765,('RCP8.5',rebase_species)]
RCP85_forc = pd.concat([get_RCMIP_forc('rcp85',['Radiative Forcing|Anthropogenic|Albedo Change','Radiative Forcing|Natural']).dropna()],axis=1,keys=['RCP8.5'])
RCP85_run = run_FaIR(emissions_in=RCP85_emms,
forcing_in=RCP85_forc,
gas_parameters=get_gas_parameter_defaults().reindex(RCP85_emms.columns.levels[1],axis=1,level=1))
CH4_lifetime = RCP85_run['alpha'].xs('methane',axis=1,level=2).droplevel(axis=1,level=1)*RCP85_run['gas_parameters'].loc['tau1',('default','methane')]
sn.lineplot(data=CH4_lifetime.loc[1850:2100],palette=['k'])
sn.despine()
plt.xlabel('year')
plt.ylabel('CH$_4$ lifetime / yrs')
plt.gca().ticklabel_format(style='plain')
plt.xlim(1850,2100)
[plt.savefig('../../docs/manuscript/figures/FigS2.'+x,dpi=600,bbox_inches='tight') for x in ['png','pdf']]
''
# comparison with Holmes et al., 2013
## 2010 values:
print('Holmes 2010:',1/(1/120+1/150+1/200+1/11.2))
print('FaIRv2.0.0-alpha 2010:',CH4_lifetime.loc[2010].values[0],end='\n\n')
print('Holmes 2010-2100 change:',(1/120+1/150+1/200+1/11.2)/(1/120+1/150+1/200+1/(11.2*1.129)))
print('FaIRv2.0.0-alpha 2010-2100 change:',(CH4_lifetime.loc[2100]/CH4_lifetime.loc[2010]).values[0])
```
# Supplement II. FaIRv2.0.0 additivity
A very brief test of how linear FaIR actually is. Non-linearity in FaIR arises only from the CO2 and CH4 cycles; the climate response is linear in forcing. Here we test the linearity over history by carrying out several CO2/CH4 pulse-response experiments.
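Concretely, in the cells below each pulse experiment is compared against the zero-pulse baseline and rescaled to a common pulse size: for a response variable $X$ and pulse size $p$ (in GtCO$_2$-eq),

$$\Delta X_p(t) = \big[X_{\mathrm{pulse}}(t) - X_{\mathrm{baseline}}(t)\big] \times \frac{1000}{p},$$

so a perfectly linear model would produce identical curves for every $p$. The 'relative' panels additionally subtract the smallest-pulse (0.01 GtCO$_2$-eq) curve, taken as an approximation of the linear limit.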
```
# default_SSP_run = run_FaIR(emissions_in=SSP_emms,forcing_in=SSP_forc)
base_emms = RCMIP_to_FaIR_input_emms('ssp245').interpolate().loc[1750:2500]
rebase_species = ['so2','nox','co','nmvoc','bc','nh3','oc','nox_avi','methyl_bromide','methyl_chloride','chcl3','ch2cl2']
base_emms.loc[:,rebase_species] -= base_emms.loc[1750,rebase_species]
base_emms = pd.concat([base_emms],axis=1,keys=['ssp245'])
experiments = []
# scale methane by 28 (GWP100) for closer comparison
pulse_scaling = dict(carbon_dioxide=12/44,methane=1000/28)
for species in ['carbon_dioxide','methane']:
for pulse_size in [0]+list(np.arange(0.01,0.1,0.01))+list(np.arange(0.1,1,0.1))+list(np.arange(1,10,1))+list(np.arange(10,100,10))+list(np.arange(100,1001,100)):
experiment = base_emms.copy()
experiment.loc[2019,('ssp245',species)] += pulse_size*pulse_scaling[species]
experiments += [experiment.rename(dict(ssp245=species+'_'+str(pulse_size)),axis=1,level=0)]
experiments = pd.concat(experiments,axis=1)
pulse_runs = run_FaIR(emissions_in=experiments,
forcing_in=pd.concat([get_RCMIP_forc('ssp245')]*experiments.columns.levels[0].size,axis=1,keys=experiments.columns.levels[0]))
```
### nonlinearities in terms of scaled anomalies
```
## compute the pulse experiment anomalies relative to the baseline
pulse_temp_anomalies = (pulse_runs['T'] - pulse_runs['T'].carbon_dioxide_0.values)
pulse_temp_anomalies.columns = pd.MultiIndex.from_tuples([(x[:14],float(x[15:])) if x[0]=='c' else (x[:7],float(x[8:])) for x in pulse_temp_anomalies.columns.levels[0]])
pulse_temp_anomalies = pulse_temp_anomalies.drop(0,axis=1,level=1)
pulse_temp_anomalies_scaled = pulse_temp_anomalies.apply(lambda x: x*1000/x.name[1])
CO2_RF_anomalies = (pulse_runs['RF'].xs('carbon_dioxide',axis=1,level=2) - pulse_runs['RF'].xs('carbon_dioxide',axis=1,level=2).carbon_dioxide_0.values)
CO2_RF_anomalies.columns = pd.MultiIndex.from_tuples([(x[:14],float(x[15:])) if x[0]=='c' else (x[:7],float(x[8:])) for x in CO2_RF_anomalies.columns.levels[0]])
CO2_RF_anomalies = CO2_RF_anomalies.drop(0,axis=1,level=1)
CO2_RF_anomalies_scaled = CO2_RF_anomalies.apply(lambda x: x*1000/x.name[1])
CH4_RF_anomalies = (pulse_runs['RF'].xs('methane',axis=1,level=2) - pulse_runs['RF'].xs('methane',axis=1,level=2).carbon_dioxide_0.values)
CH4_RF_anomalies.columns = pd.MultiIndex.from_tuples([(x[:14],float(x[15:])) if x[0]=='c' else (x[:7],float(x[8:])) for x in CH4_RF_anomalies.columns.levels[0]])
CH4_RF_anomalies = CH4_RF_anomalies.drop(0,axis=1,level=1)
CH4_RF_anomalies_scaled = CH4_RF_anomalies.apply(lambda x: x*1000/x.name[1])
CO2_C_anomalies = (pulse_runs['C'].xs('carbon_dioxide',axis=1,level=2) - pulse_runs['C'].xs('carbon_dioxide',axis=1,level=2).carbon_dioxide_0.values)
CO2_C_anomalies.columns = pd.MultiIndex.from_tuples([(x[:14],float(x[15:])) if x[0]=='c' else (x[:7],float(x[8:])) for x in CO2_C_anomalies.columns.levels[0]])
CO2_C_anomalies = CO2_C_anomalies.drop(0,axis=1,level=1)
CO2_C_anomalies_scaled = CO2_C_anomalies.apply(lambda x: x*1000/x.name[1])
CH4_C_anomalies = (pulse_runs['C'].xs('methane',axis=1,level=2) - pulse_runs['C'].xs('methane',axis=1,level=2).carbon_dioxide_0.values)
CH4_C_anomalies.columns = pd.MultiIndex.from_tuples([(x[:14],float(x[15:])) if x[0]=='c' else (x[:7],float(x[8:])) for x in CH4_C_anomalies.columns.levels[0]])
CH4_C_anomalies = CH4_C_anomalies.drop(0,axis=1,level=1)
CH4_C_anomalies_scaled = CH4_C_anomalies.apply(lambda x: x*1000/x.name[1])
CO2_alph_anomalies = pulse_runs['alpha'].xs('carbon_dioxide',axis=1,level=2).sub(pulse_runs['alpha'].xs('carbon_dioxide',axis=1,level=2).carbon_dioxide_0,axis=0)
CO2_alph_anomalies.columns = pd.MultiIndex.from_tuples([(x[:14],float(x[15:])) if x[0]=='c' else (x[:7],float(x[8:])) for x in CO2_alph_anomalies.columns.levels[0]])
CO2_alph_anomalies = CO2_alph_anomalies.drop(0,axis=1,level=1)
CO2_alph_anomalies_scaled = CO2_alph_anomalies.apply(lambda x: x*1000/x.name[1])
CH4_alph_anomalies = pulse_runs['alpha'].xs('methane',axis=1,level=2).sub(pulse_runs['alpha'].xs('methane',axis=1,level=2).carbon_dioxide_0,axis=0)
CH4_alph_anomalies.columns = pd.MultiIndex.from_tuples([(x[:14],float(x[15:])) if x[0]=='c' else (x[:7],float(x[8:])) for x in CH4_alph_anomalies.columns.levels[0]])
CH4_alph_anomalies = CH4_alph_anomalies.drop(0,axis=1,level=1)
CH4_alph_anomalies_scaled = CH4_alph_anomalies.apply(lambda x: x*1000/x.name[1])
anomalies = pd.concat([pulse_temp_anomalies_scaled,
CO2_RF_anomalies_scaled,
CH4_RF_anomalies_scaled,
CO2_C_anomalies_scaled,
CH4_C_anomalies_scaled,
CO2_alph_anomalies_scaled,
CH4_alph_anomalies_scaled],
axis=1,
keys=['T',r'RF$_{\mathrm{CO}_2}$',r'RF$_{\mathrm{CH}_4}$',r'C$_{\mathrm{CO}_2}$',r'C$_{\mathrm{CH}_4}$',r'$\alpha_{\mathrm{CO}_2}$',r'$\alpha_{\mathrm{CH}_4}$'],
names=['variable']).rename(dict(carbon_dioxide='CO$_2$',methane='CH$_4$'),axis=1,level=1).loc[2020:].sort_index(axis=1).stack(level=[0,1,2]).reset_index().rename({'level_0':'time','level_2':'pulse_type','level_3':'pulse_size',0:'value'},axis=1)
anomalies.time -= 2019
# set relative to small pulse limit
## comment out if absolute anomalies (ie. relative to reference) desired
pulse_temp_anomalies_scaled = pulse_temp_anomalies_scaled.apply(lambda x: x-pulse_temp_anomalies_scaled.loc[:,(x.name[0],0.01)])
CO2_RF_anomalies_scaled = CO2_RF_anomalies_scaled.apply(lambda x: x-CO2_RF_anomalies_scaled.loc[:,(x.name[0],0.01)])
CH4_RF_anomalies_scaled = CH4_RF_anomalies_scaled.apply(lambda x: x-CH4_RF_anomalies_scaled.loc[:,(x.name[0],0.01)])
CO2_C_anomalies_scaled = CO2_C_anomalies_scaled.apply(lambda x: x-CO2_C_anomalies_scaled.loc[:,(x.name[0],0.01)])
CH4_C_anomalies_scaled = CH4_C_anomalies_scaled.apply(lambda x: x-CH4_C_anomalies_scaled.loc[:,(x.name[0],0.01)])
CO2_alph_anomalies_scaled = CO2_alph_anomalies_scaled.apply(lambda x: x-CO2_alph_anomalies_scaled.loc[:,(x.name[0],0.01)])
CH4_alph_anomalies_scaled = CH4_alph_anomalies_scaled.apply(lambda x: x-CH4_alph_anomalies_scaled.loc[:,(x.name[0],0.01)])
anomalies_rel = pd.concat([pulse_temp_anomalies_scaled,
CO2_RF_anomalies_scaled,
CH4_RF_anomalies_scaled,
CO2_C_anomalies_scaled,
CH4_C_anomalies_scaled,
CO2_alph_anomalies_scaled,
CH4_alph_anomalies_scaled],
axis=1,
keys=['T',r'RF$_{\mathrm{CO}_2}$',r'RF$_{\mathrm{CH}_4}$',r'C$_{\mathrm{CO}_2}$',r'C$_{\mathrm{CH}_4}$',r'$\alpha_{\mathrm{CO}_2}$',r'$\alpha_{\mathrm{CH}_4}$'],
names=['variable']).rename(dict(carbon_dioxide='CO$_2$ - relative',methane='CH$_4$ - relative'),axis=1,level=1).loc[2020:].sort_index(axis=1).stack(level=[0,1,2]).reset_index().rename({'level_0':'time','level_2':'pulse_type','level_3':'pulse_size',0:'value'},axis=1)
anomalies_rel.time -= 2019
plot_df = pd.concat([anomalies,anomalies_rel])
plot_df.head()
g=sn.FacetGrid(plot_df.query('pulse_size in [1,10,100,200,500,1000]').sort_values(['pulse_type','variable']),col='variable',row='pulse_type',hue='pulse_size',palette=[(x,x,x) for x in np.arange(0,1,1/7)[::-1]],margin_titles=True,sharey=False)
g.map(sn.lineplot,'time','value')
g.set_titles(col_template="{col_name}",row_template='pulse type = {row_name}',fontweight='bold').set(xlim=[0,480])
[a.set_ylabel('anomaly / ppb') for a in g.axes[:,2]]
[a.set_ylabel('anomaly / ppm') for a in g.axes[:,3]]
[a.set_ylabel('anomaly / W m$^{-2}$') for a in g.axes[:,4]]
[a.set_ylabel('anomaly / K') for a in g.axes[:,-1]]
[a.set_ylabel('anomaly / -') for a in g.axes[:,0]]
g.axes[0,0].legend(title='pulse size / GtCO$_2$-eq')
[plt.savefig('../../docs/manuscript/figures/FigS3.'+x,dpi=600,bbox_inches='tight') for x in ['png','pdf']]
''
```
### measuring nonlinearities in a relative sense:
The code below is kept as markdown text, rather than a code cell, to prevent it from running.
## measuring extent of nonlinearity as anomalies relative to 1000 GtC-eq pulse, normalised by 1000 GtC-eq pulse anomaly
CO2_T_nonlin = pulse_temp_anomalies_scaled.loc[2020:,'carbon_dioxide'].sub(pulse_temp_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0).div(pulse_temp_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0)
CH4_T_nonlin = pulse_temp_anomalies_scaled.loc[2020:,'methane'].sub(pulse_temp_anomalies_scaled.loc[2020:,('methane',1000)],axis=0).div(pulse_temp_anomalies_scaled.loc[2020:,('methane',1000)],axis=0)
CO2_CO2_RF_nonlin = CO2_RF_anomalies_scaled.loc[2020:,'carbon_dioxide'].sub(CO2_RF_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0).div(CO2_RF_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0)
CO2_CH4_RF_nonlin = CO2_RF_anomalies_scaled.loc[2020:,'methane'].sub(CO2_RF_anomalies_scaled.loc[2020:,('methane',1000)],axis=0).div(CO2_RF_anomalies_scaled.loc[2020:,('methane',1000)],axis=0)
CH4_CO2_RF_nonlin = CH4_RF_anomalies_scaled.loc[2020:,'carbon_dioxide'].sub(CH4_RF_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0).div(CH4_RF_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0)
CH4_CH4_RF_nonlin = CH4_RF_anomalies_scaled.loc[2020:,'methane'].sub(CH4_RF_anomalies_scaled.loc[2020:,('methane',1000)],axis=0).div(CH4_RF_anomalies_scaled.loc[2020:,('methane',1000)],axis=0)
CO2_CO2_C_nonlin = CO2_C_anomalies_scaled.loc[2020:,'carbon_dioxide'].sub(CO2_C_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0).div(CO2_C_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0)
CO2_CH4_C_nonlin = CO2_C_anomalies_scaled.loc[2020:,'methane'].sub(CO2_C_anomalies_scaled.loc[2020:,('methane',1000)],axis=0).div(CO2_C_anomalies_scaled.loc[2020:,('methane',1000)],axis=0)
CH4_CO2_C_nonlin = CH4_C_anomalies_scaled.loc[2020:,'carbon_dioxide'].sub(CH4_C_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0).div(CH4_C_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0)
CH4_CH4_C_nonlin = CH4_C_anomalies_scaled.loc[2020:,'methane'].sub(CH4_C_anomalies_scaled.loc[2020:,('methane',1000)],axis=0).div(CH4_C_anomalies_scaled.loc[2020:,('methane',1000)],axis=0)
CO2_CO2_alph_nonlin = CO2_alph_anomalies_scaled.loc[2020:,'carbon_dioxide'].sub(CO2_alph_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0).div(CO2_alph_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0)
CO2_CH4_alph_nonlin = CO2_alph_anomalies_scaled.loc[2020:,'methane'].sub(CO2_alph_anomalies_scaled.loc[2020:,('methane',1000)],axis=0).div(CO2_alph_anomalies_scaled.loc[2020:,('methane',1000)],axis=0)
CH4_CO2_alph_nonlin = CH4_alph_anomalies_scaled.loc[2020:,'carbon_dioxide'].sub(CH4_alph_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0).div(CH4_alph_anomalies_scaled.loc[2020:,('carbon_dioxide',1000)],axis=0)
CH4_CH4_alph_nonlin = CH4_alph_anomalies_scaled.loc[2020:,'methane'].sub(CH4_alph_anomalies_scaled.loc[2020:,('methane',1000)],axis=0).div(CH4_alph_anomalies_scaled.loc[2020:,('methane',1000)],axis=0)
nonlinearities = pd.concat([pd.concat([CO2_T_nonlin,CH4_T_nonlin],axis=1,keys=['CO2','CH4'],names=['pulse_type']),
pd.concat([CO2_CO2_RF_nonlin,CO2_CH4_RF_nonlin],axis=1,keys=['CO2','CH4'],names=['pulse_type']),
pd.concat([CH4_CO2_RF_nonlin,CH4_CO2_RF_nonlin],axis=1,keys=['CO2','CH4'],names=['pulse_type']),
pd.concat([CO2_CO2_C_nonlin,CO2_CH4_C_nonlin],axis=1,keys=['CO2','CH4'],names=['pulse_type']),
pd.concat([CH4_CO2_C_nonlin,CH4_CH4_C_nonlin],axis=1,keys=['CO2','CH4'],names=['pulse_type']),
pd.concat([CO2_CO2_alph_nonlin,CO2_CH4_alph_nonlin],axis=1,keys=['CO2','CH4'],names=['pulse_type']),
pd.concat([CH4_CO2_alph_nonlin,CH4_CH4_alph_nonlin],axis=1,keys=['CO2','CH4'],names=['pulse_type'])],
axis=1,
keys=['T','RF$_{\text{CO}_2}$','RF$_{\text{CH}_4}$','C$_{\text{CO}_2}$','C$_{\text{CH}_4}$','$\alpha_{\text{CO}_2}$','$\alpha_{\text{CH}_4}$'],
names=['variable']).sort_index(axis=1).stack(level=[0,1,2]).reset_index().rename({'level_0':'time','level_3':'pulse_size',0:'value'},axis=1)
nonlinearities.time -= 2019
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
class SymPowerNorm(matplotlib.colors.Normalize):
def __init__(self, vmin=None, vmax=None, order=1, clip=False):
self.order = order
matplotlib.colors.Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [abs(self.vmin) / self.vmin * abs(self.vmin)**self.order , abs(self.vmax) / self.vmax * abs(self.vmax)**self.order], [0,1]
return np.ma.masked_array(np.interp(abs(value) / value * abs(value)**self.order, x, y))
def mapplot(x,y,z,**kwargs):
data = pd.concat([x,y,z],axis=1).set_index(['time','pulse_size']).unstack().droplevel(0,axis=1)
norm=matplotlib.colors.Normalize(vmin=-0.5,vmax=0.5)#SymPowerNorm(order=1,vmin=-0.5,vmax=0.5)
plt.pcolormesh(data.index,data.columns,data.values.T,shading='auto',norm=norm,cmap='RdBu_r')
g=sn.FacetGrid(nonlinearities,col='variable',row='pulse_type',margin_titles=True,despine=False,gridspec_kws=dict(hspace=0.1,wspace=0.1))
g.map(mapplot,'time','pulse_size','value')
g.set_titles(col_template="{col_name}",row_template='pulse type = {row_name}',fontweight='bold')
g.set(yscale='log')
[a.set_ylabel('pulse size / GtC-eq') for a in g.axes[:,0]]
[a.set_xlabel('year') for a in g.axes[-1,:]]
axins = inset_axes(g.axes[-1,-1], width="5%",height="100%",loc='lower left',bbox_to_anchor=(1.2, 0.55, 1, 1),bbox_transform=g.axes[-1,-1].transAxes,borderpad=0)
plt.colorbar(cax=axins,extend='both')
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
### Image Generation from Audio
```
from pathlib import Path
from IPython.display import Audio
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np
from utils import read_file, transform_path
DATA = Path('data')
# these folders must be in place
NSYNTH_AUDIO = DATA/'nsynth_audio'
TRAIN_AUDIO_PATH = NSYNTH_AUDIO/'train'
VALID_AUDIO_PATH = NSYNTH_AUDIO/'valid'
# these folders will be created
NSYNTH_IMAGES = DATA/'nsynth_images'
TRAIN_IMAGE_PATH = NSYNTH_IMAGES/'train'
VALID_IMAGE_PATH = NSYNTH_IMAGES/'valid'
train_acoustic_fnames = [f.name for f in TRAIN_AUDIO_PATH.iterdir()
if 'acoustic' in f.name]
valid_acoustic_fnames = [f.name for f in VALID_AUDIO_PATH.iterdir()
if 'acoustic' in f.name]
len(train_acoustic_fnames), len(valid_acoustic_fnames)
fn = train_acoustic_fnames[8]; fn
Audio(str(TRAIN_AUDIO_PATH/fn))
x, sr = read_file(fn, TRAIN_AUDIO_PATH)
x.shape, sr, x.dtype
def log_mel_spec_tfm(fname, src_path, dst_path):
x, sample_rate = read_file(fname, src_path)
n_fft = 1024
hop_length = 256
n_mels = 40
fmin = 20
fmax = sample_rate / 2
mel_spec_power = librosa.feature.melspectrogram(x, sr=sample_rate, n_fft=n_fft,
hop_length=hop_length,
n_mels=n_mels, power=2.0,
fmin=fmin, fmax=fmax)
mel_spec_db = librosa.power_to_db(mel_spec_power, ref=np.max)
dst_fname = dst_path / (fname[:-4] + '.png')
plt.imsave(dst_fname, mel_spec_db)
log_mel_spec_tfm(fn, TRAIN_AUDIO_PATH, Path('.'))
img = plt.imread(fn[:-4] + '.png')
plt.imshow(img, origin='lower');
# TRAIN files took 10m43s
# transform_path(TRAIN_AUDIO_PATH, TRAIN_IMAGE_PATH, log_mel_spec_tfm,
# fnames=train_acoustic_fnames, delete=True)
# VALID files took 0m31s
# transform_path(VALID_AUDIO_PATH, VALID_IMAGE_PATH, log_mel_spec_tfm,
# fnames=valid_acoustic_fnames, delete=True)
```
### Run Image Classifier
```
import fastai
fastai.__version__
from fastai.vision import *
instrument_family_pattern = r'(\w+)_\w+_\d+-\d+-\d+.png$'
data = (ImageItemList.from_folder(NSYNTH_IMAGES)
.split_by_folder()
.label_from_re(instrument_family_pattern)
.databunch())
data.c, data.classes
xs, ys = data.one_batch()
xs.shape, ys.shape
xs.min(), xs.max(), xs.mean(), xs.std()
data.show_batch(3, figsize=(8,4), hide_axis=False)
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(3)
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(10, 10), dpi=60)
interp.most_confused(min_val=20)
```
### In this lab, we will implement linear regression using the least-squares solution. We will use the same example as we did in class (slide 18 of the linear regression slides). There are 5 steps; let's implement them step by step using only numpy.

```
import numpy as np
```
We are given the dataset {(0, 0), (0, 1), (1, 0)} and asked to find the least-squares solution for the parameters of the regression function y = w1 + w2 x.
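The five steps below implement the closed-form least-squares (normal equation) solution: with design matrix $\mathbf{X}$ (a column of ones for the bias plus the inputs) and target vector $\mathbf{t}$,

$$\mathbf{w} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{t}.$$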
```
# creating the input and target in numpy arrays
inputs = np.array([[0], [0], [1]])
targets = np.array([[0], [1], [0]])
print('inputs shape :',np.shape(inputs))
print('targets shape :',np.shape(targets))
# now let's do the steps to find the solution
# Step 1: evaluate the basis on the points
inputs = np.concatenate((np.ones((np.shape(inputs)[0],1)),inputs),axis=1)
print('inputs shape :',np.shape(inputs))
print(inputs)
# step 2: compute -> transpose(inputs) * inputs
q_matrix = np.dot(np.transpose(inputs),inputs)
print('q_matrix shape :',np.shape(q_matrix))
print(q_matrix)
# step 3: invert q_matrix
q_inverse = np.linalg.inv(q_matrix)
print('q_inverse shape :',np.shape(q_inverse))
print(q_inverse)
# step 4: Compute the pseudo-inverse -> q_inverse * transpose(inputs)
q_pseudo = np.dot(q_inverse,np.transpose(inputs))
print('q_pseudo shape :',np.shape(q_pseudo))
print(q_pseudo.astype(np.float16))
# step 5: compute w = q_pseudo * targets
weights = np.dot(q_pseudo,targets)
print('w shape :',np.shape(weights))
print(weights)
```
#### Now let's implement the steps on a real dataset. We will work with the auto-mpg dataset, which consists of a number of data points about certain cars (weight, horsepower, etc.), with the aim being to predict the fuel efficiency in miles per gallon (mpg) for each car.
```
"""
You are asked to
- load the dataset text file (auto-mpg.txt) as numpy array
- prerocess the dataset (normalise, split it into train and test sets)
- find the least-squares solution for the parameters (weights vector)
- test the found parameters on the test set and calculate the error
The following comments and codes are meant to guide you.
"""
"""
Please note: This dataset has one problem. There are missing values
in it (labelled with question marks '?'). The np.loadtxt() method doesn't
like these, and we don't know what to do with them anyway, so manually edit
the file and delete all lines where there is a ? in that line. The linear
regressor can't do much with the names of the cars either, but since they
appear in quotes (") we will tell np.loadtxt that they are comments.
Below are the attribute Information for the dataset:
1. mpg: continuous
2. cylinders: multi-valued discrete
3. displacement: continuous
4. horsepower: continuous
5. weight: continuous
6. acceleration: continuous
7. model year: multi-valued discrete
8. origin: multi-valued discrete
9. car name: string (unique for each instance)
Please note: the first column is our target (mpg)
"""
# TODO: load the dataset file using np.loadtxt()
import pandas as pd
df = pd.read_csv("auto-mpg.txt", delimiter=' ')
df.head()
# data = np.loadtxt("auto-mpg.txt", delimiter=' ', usecols=range(5))
# TODO: Normalise the dataset. You can do this easily in numpy
# by using np.mean and np.var. The only place where care is needed
# is along which axis the mean and variance are computed:
# axis=0 sums down the columns and axis=1 sums across the rows.
normalised_data = None
# TODO: Now separate the data into training and testing sets,
training, testing = None, None
# And split each set into inputs and targets hint: slicing the array
trainin, traintgt = None, None
testin, testtgt = None, None
# TODO: Use the training set to find the weights vector.
# you need to implement the previous 5 steps on the training set
# and find the weights vector (this is called training).
# To make it simple we define a function that takes
# two args: inputs and targets and return the weights vector
def linreg(inputs,targets):
# you should implement the 5 steps here
weights = None
return weights
# test your implementation
weights = linreg(trainin,traintgt)
weights
# TODO: Testing the found weights on the testing set
# you can do this by
# - testout = (testin*weights)
# - error = sum((testout - testtgt)**2)
testout = None
error = None
"""
You can try to re-train the model without the normalising the data
and see if this makes any different on the error value
"""
```
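For reference, here is one possible way to fill in the `linreg` helper (a minimal sketch of the five steps from the worked example above, not the only valid solution; it assumes `inputs` does not yet contain the bias column):
```
def linreg(inputs, targets):
    # step 1: evaluate the basis -- prepend a column of ones (the bias term)
    inputs = np.concatenate((np.ones((np.shape(inputs)[0], 1)), inputs), axis=1)
    # step 2: compute transpose(inputs) * inputs
    q_matrix = np.dot(np.transpose(inputs), inputs)
    # step 3: invert q_matrix
    q_inverse = np.linalg.inv(q_matrix)
    # step 4: compute the pseudo-inverse -> q_inverse * transpose(inputs)
    q_pseudo = np.dot(q_inverse, np.transpose(inputs))
    # step 5: weights = q_pseudo * targets
    return np.dot(q_pseudo, targets)
```
If you use this version, remember to prepend the same column of ones to `testin` (or move step 1 outside the function) before computing `testout`, so that the shapes match.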
# Convolutional Neural Networks: Step by Step
Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.
**Notation**:
- Superscript $[l]$ denotes an object of the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
- Superscript $(i)$ denotes an object from the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example input.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.
- $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$.
- $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$.
We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:
- Convolution functions, including:
- Zero Padding
- Convolve window
- Convolution forward
- Convolution backward (optional)
- Pooling functions, including:
- Pooling forward
- Create mask
- Distribute value
- Pooling backward (optional)
This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:
<img src="images/model.png" style="width:800px;height:300px;">
**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
## 3 - Convolutional Neural Networks
Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below.
<img src="images/conv_nn.png" style="width:350px;height:200px;">
In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself.
### 3.1 - Zero-Padding
Zero-padding adds zeros around the border of an image:
<img src="images/PAD.png" style="width:600px;height:400px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption>
The main benefits of padding are the following:
- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
- It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.
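For a "same" convolution with stride 1 and an odd filter size $f$, the required padding is $pad = \frac{f-1}{2}$, which leaves $n_H$ and $n_W$ unchanged (see the shape formulas in section 3.3).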
**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:
```python
a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
```
```
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
npad = ((0, 0), (pad,pad), (pad,pad),(0,0))
X_pad = np.pad(X,npad,'constant',constant_values = 0)
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
#first image, first column, every row, every channel:
#print ("x_pad[1,1,:,:] =", x_pad[1,1,:,:])
#first image, first column, first row, every channel
#print ("x_pad[1,1,1,:] =", x_pad[1,1,1,:])
#first image, first column, every row, first channel
#print ("x_pad[1,1,1,1] =", x_pad[1,1,1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
```
**Expected Output**:
<table>
<tr>
<td>
**x.shape**:
</td>
<td>
(4, 3, 3, 2)
</td>
</tr>
<tr>
<td>
**x_pad.shape**:
</td>
<td>
(4, 7, 7, 2)
</td>
</tr>
<tr>
<td>
**x[1,1]**:
</td>
<td>
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
</td>
</tr>
<tr>
<td>
**x_pad[1,1]**:
</td>
<td>
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
</td>
</tr>
</table>
### 3.2 - Single step of convolution
In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:
- Takes an input volume
- Applies a filter at every position of the input
- Outputs another volume (usually of different size)
<img src="images/Convolution_schematic.gif" style="width:500px;height:300px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>
In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.
Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation.
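As a tiny numerical illustration (not part of the graded exercise), a single step on a 2x2, single-channel slice works like this:
```python
# illustration only: element-wise product, sum over all entries, then add the bias
import numpy as np
a_slice = np.array([[1., 0.], [2., 3.]])    # 2x2 input slice (one channel)
W_tiny  = np.array([[1., -1.], [0.5, 2.]])  # 2x2 filter
b_tiny  = 1.0                               # scalar bias
Z_tiny  = np.sum(a_slice * W_tiny) + b_tiny # 1*1 + 0*(-1) + 2*0.5 + 3*2 + 1 = 9.0
print(Z_tiny)
```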
**Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).
```
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice and W. Do not add the bias yet.
#GAC:(element-wise multiplication)
s = np.multiply(a_slice_prev,W)
# Sum over all entries of the volume s.
Z = np.sum(s,axis=None)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = Z+ np.float(b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
```
**Expected Output**:
<table>
<tr>
<td>
**Z**
</td>
<td>
-6.99908945068
</td>
</tr>
</table>
### 3.3 - Convolutional Neural Networks - Forward pass
In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
<center>
<video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls>
</video>
</center>
**Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding.
**Hint**:
1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:
```python
a_slice_prev = a_prev[0:2,0:2,:]
```
This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.
2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below.
<img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel. </center></caption>
**Reminder**:
The formulas relating the output shape of the convolution to the input shape is:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_C = \text{number of filters used in the convolution}$$
For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
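As a concrete check of these formulas, in the test cell below $n_{H_{prev}} = n_{W_{prev}} = 4$, $f = 2$, $pad = 2$ and $stride = 2$, so $n_H = n_W = \lfloor (4 - 2 + 2 \times 2)/2 \rfloor + 1 = 4$, and with $n_C = 8$ filters the output `Z` has shape $(10, 4, 4, 8)$.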
```
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
#print("m =", m)
#print("n_H_prev =", n_H_prev)
#print("n_W_prev =", n_W_prev)
#print("n_C_prev =", n_C_prev)
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
#print("f =", f)
#print("n_C =", n_C)
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
#print("stride =", stride)
#print("pad =", pad)
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = int((n_H_prev-f+2*pad)/stride)+1
n_W = int((n_W_prev-f+2*pad)/stride)+1
#print("n_H =", n_H)
#print("n_W =", n_W)
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
#print ("Z.shape =", Z.shape)
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad) #padded image of shape (m, n_H_prev + 2*pad, n_W_prev + 2*pad, n_C_prev)
#print ("A_prev_pad.shape =", A_prev_pad.shape)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i,:,:,:] # Select ith training example's padded activation
#print ("a_prev_pad.shape =", a_prev_pad.shape)
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
# (GAC) using h, w, f and s
vert_start = h*stride
vert_end = h*stride + f
horiz_start = w*stride
horiz_end = w*stride + f
#print ("vert_start =", vert_start)
#print ("vert_end =", vert_end)
#print ("horiz_start =", horiz_start)
#print ("horiz_end =", horiz_end)
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:]
#print ("a_slice_prev.shape =", a_slice_prev.shape)
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c])
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
```
**Expected Output**:
<table>
<tr>
<td>
**Z's mean**
</td>
<td>
0.0489952035289
</td>
</tr>
<tr>
<td>
**Z[3,2,1]**
</td>
<td>
[-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
</td>
</tr>
<tr>
<td>
**cache_conv[0][1][2][3]**
</td>
<td>
[-0.20075807 0.18656139 0.41005165]
</td>
</tr>
</table>
Finally, a CONV layer should also contain an activation, in which case we would add the following lines of code:
```python
# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
```
You don't need to do it here.
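For reference only (the assignment does not specify which activation to use, so ReLU here is an assumption), the activation would simply be applied element-wise, for example:
```python
import numpy as np

Z_example = np.array([[-1.5, 0.3],
                      [2.0, -0.7]])    # assumed toy pre-activation values
A_example = np.maximum(0, Z_example)   # ReLU applied element-wise
print(A_example)                       # [[0.  0.3] [2.  0. ]]
```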
## 4 - Pooling layer
The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are:
- Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.
- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.
<table>
<td>
<img src="images/max_pool1.png" style="width:500px;height:300px;">
<td>
<td>
<img src="images/a_pool.png" style="width:500px;height:300px;">
<td>
</table>
These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$, which specifies the height and width of the $f \times f$ window you compute a max or average over.
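As a tiny illustration (assumed toy values, not part of the graded code), here is what each pooling mode would store for a single $2 \times 2$ window:
```python
import numpy as np

# One (f, f) = (2, 2) window with assumed toy values.
window = np.array([[1., 3.],
                   [4., 2.]])
print(np.max(window))   # 4.0  -> what a max-pooling layer would store
print(np.mean(window))  # 2.5  -> what an average-pooling layer would store
```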
### 4.1 - Forward Pooling
Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.
**Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.
**Reminder**:
As there is no padding, the formulas relating the output shape of the pooling layer to the input shape are:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
$$ n_C = n_{C_{prev}}$$
```
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
#a_prev = A_prev[i,:,:,:]
#a_slice_prev = a_prev[vert_start:vert_end,horiz_start:horiz_end,c]
a_slice_prev = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.
#Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c])
if mode == "max":
A[i, h, w, c] = np.max(a_slice_prev)
elif mode == "average":
A[i, h, w, c] = np.mean(a_slice_prev)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
```
**Expected Output:**
<table>
<tr>
<td>
A =
</td>
<td>
[[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
</td>
</tr>
<tr>
<td>
A =
</td>
<td>
[[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
</td>
</tr>
</table>
Congratulations! You have now implemented the forward passes of all the layers of a convolutional network.
The remainder of this notebook is optional, and will not be graded.
## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like.
In an earlier course, when you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost in order to update the parameters. Similarly, in convolutional neural networks you calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial; we did not derive them in lecture, but we briefly present them below.
### 5.1 - Convolutional layer backward pass
Let's start by implementing the backward pass for a CONV layer.
#### 5.1.1 - Computing dA:
This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:
$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$
Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the hth vertical and wth horizontal stride). Note that each time, we multiply the same filter $W_c$ by a different $dZ$ when updating $dA$. We do so mainly because when computing the forward propagation, each filter is dotted and summed with a different a_slice. Therefore when computing the backprop for $dA$, we are just adding the gradients over all the a_slices.
In code, inside the appropriate for-loops, this formula translates into:
```python
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
```
#### 5.1.2 - Computing dW:
This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:
$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$
Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{hw}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$.
In code, inside the appropriate for-loops, this formula translates into:
```python
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
```
#### 5.1.3 - Computing db:
This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:
$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$
As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost.
In code, inside the appropriate for-loops, this formula translates into:
```python
db[:,:,:,c] += dZ[i, h, w, c]
```
**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
```
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters['stride']
pad = hparameters['pad']
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i,:,:,:]
da_prev_pad = dA_prev_pad[i,:,:,:]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h*stride
vert_end = vert_start+f
horiz_start = w*stride
horiz_end = horiz_start+f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
# Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
```
**Expected Output:**
<table>
<tr>
<td>
**dA_mean**
</td>
<td>
1.45243777754
</td>
</tr>
<tr>
<td>
**dW_mean**
</td>
<td>
1.72699145831
</td>
</tr>
<tr>
<td>
**db_mean**
</td>
<td>
7.83923256462
</td>
</tr>
</table>
### 5.2 - Pooling layer - backward pass
Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for the layers that came before it.
#### 5.2.1 - Max pooling - backward pass
Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following:
$$ X = \begin{bmatrix}
1 && 3 \\
4 && 2
\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}
0 && 0 \\
1 && 0
\end{bmatrix}\tag{4}$$
As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, while the other entries are False (0). You'll see later that the backward pass for average pooling is similar to this but uses a different mask.
**Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward.
Hints:
- `np.max()` may be helpful. It computes the maximum of an array.
- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:
```
A[i,j] = True if X[i,j] = x
A[i,j] = False if X[i,j] != x
```
- Here, you don't need to consider cases where there are several maxima in a matrix.
```
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = (x==np.max(x))
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
```
**Expected Output:**
<table>
<tr>
<td>
**x =**
</td>
<td>
[[ 1.62434536 -0.61175641 -0.52817175] <br>
[-1.07296862 0.86540763 -2.3015387 ]]
</td>
</tr>
<tr>
<td>
**mask =**
</td>
<td>
[[ True False False] <br>
[False False False]]
</td>
</tr>
</table>
Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost.
#### 5.2.2 - Average pooling - backward pass
In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.
For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like:
$$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}
1/4 && 1/4 \\
1/4 && 1/4
\end{bmatrix}\tag{5}$$
This implies that each position in the $dZ$ matrix contributes equally to the output, because in the forward pass we took an average.
**Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
```
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz / (n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = np.ones((n_H, n_W))*average
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
```
**Expected Output**:
<table>
<tr>
<td>
distributed_value =
</td>
<td>
[[ 0.5 0.5]
<br>
[ 0.5 0.5]]
</td>
</tr>
</table>
#### 5.2.3 - Putting it together: Pooling backward
You now have everything you need to compute backward propagation on a pooling layer.
**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dZ.
```
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters['stride']
f = hparameters['f']
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i,:,:,:]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h*stride
vert_end = vert_start+f
horiz_start = w*stride
horiz_end = horiz_start+f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end,horiz_start:horiz_end,c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i,h,w,c]
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f,f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
```
**Expected Output**:
mode = max:
<table>
<tr>
<td>
**mean of dA =**
</td>
<td>
0.145713902729
</td>
</tr>
<tr>
<td>
**dA_prev[1,1] =**
</td>
<td>
[[ 0. 0. ] <br>
[ 5.05844394 -1.68282702] <br>
[ 0. 0. ]]
</td>
</tr>
</table>
mode = average
<table>
<tr>
<td>
**mean of dA =**
</td>
<td>
0.145713902729
</td>
</tr>
<tr>
<td>
**dA_prev[1,1] =**
</td>
<td>
[[ 0.08485462 0.2787552 ] <br>
[ 1.26461098 -0.25749373] <br>
[ 1.17975636 -0.53624893]]
</td>
</tr>
</table>
### Congratulations!
Congratulations on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow.
```
!tar cvfz notebook.tar.gz *
```
# Data Science - Unit 3
*By: Débora Azevedo, Eliseu Jayro, Francisco de Paiva and Igor Brandão*
### Objectives
The goal of this project is to explore the [UFRN datasets](http://dados.ufrn.br/group/despesas-e-orcamento) containing information on material requests, maintenance requests and expenditure commitments (empenhos) in the context of the [budget cuts](https://g1.globo.com/educacao/noticia/rio-grande-do-norte-veja-a-evolucao-do-orcamento-repassado-pelo-mec-as-duas-universidades-federais-do-estado.ghtml) that UFRN has recently been suffering as a result of the financial crisis.
According to our group's research, the sources say that the cuts mainly affect [outsourced services](https://g1.globo.com/educacao/noticia/90-das-universidades-federais-tiveram-perda-real-no-orcamento-em-cinco-anos-verba-nacional-encolheu-28.ghtml) such as cleaning, maintenance and security, as well as benefits for low-income students, since these [are not mandatory expenses](https://g1.globo.com/educacao/noticia/salario-de-professores-das-universidades-federais-e-despesa-obrigatoria-mas-auxilio-estudantil-nao-entenda-a-diferenca.ghtml), unlike the payment of retirement benefits, pensions and active staff. However, in an [interview](http://www.tribunadonorte.com.br/noticia/na-s-vamos-receber-o-ma-nimo-diz-reitora-da-ufrn/399980) the current rector said that the most affected area would be construction works and their management, which may be more reliable information, given that until 2017 the entire budget was passed directly to the federal universities, so they decided how all the money was spent. This changed in 2018, when the Ministry of Education adopted a new methodology that further restricts spending through the "Andifes matrix", with 50% of the budget now managed by the ministry itself; as a consequence, comparing the 2018 budget with previous years is no longer possible.
<hr>
# 0 - Importing the libraries
Here we use *pip* to install the libraries required to run this notebook, namely:
- pandas
- numpy
- matplotlib
- wordcloud
```
!pip install pandas
!pip install numpy
!pip install matplotlib
!pip install wordcloud
```
# 1 - Reading the datasets
In this section we import the datasets containing information on maintenance requests, material and service requests, and expenditure commitments (empenhos), all available on the UFRN open data website.
In the cell below we define a list with the files we will need, read all of them and store them in a dictionary.
```
import pandas as pd
from os import path
# List with the file names of all the datasets we will use
dataset_names = ['requisicaomanutencao.csv', 'requisicaomaterialservico.csv', 'empenhos.csv']
# Folder where the datasets are located
dataset_path = 'datasets'
# Dictionary where they will be stored
data = {}
# Loop over all the defined names and store the loaded data in the dictionary
for name in dataset_names:
data[name[:-4]] = pd.read_csv(path.join(dataset_path, name), sep=';', low_memory=False)
# Showing 'requisicaomanutencao.csv'
data['requisicaomanutencao']
# Showing 'requisicaomaterialservico.csv'
data['requisicaomaterialservico']
# Showing 'empenhos.csv'
data['empenhos']
```
# 2 - Exploring and cleaning the datasets
In this section we analyze the different columns of the datasets to identify their meanings and how useful they are for the problems we will study. Once that analysis is done, we clean the datasets so that they become more readable and easier to handle.
## 2.1 - Maintenance requests
This dataset lists all of UFRN's maintenance requests since 2005. Keep in mind that we will only analyze data from 2008 to 2017, the years for which we have UFRN's total budget figures.
```
maintenance_data = data['requisicaomanutencao']
print(maintenance_data.head())
print(maintenance_data.divisao.unique())
```
### 2.11 - Describing the columns and values
Looking at the output of the cell above, we can draw the following conclusions about the columns:
- <span style="color:red"><b>numero</b></span>: Request ID; not relevant to the problem.
- **ano**: Year in which the maintenance request was made.
- **divisao**: The division for which the maintenance was requested; takes the following values: 'Serviços Gerais', 'Instalações Elétricas e Telecomunicações', 'Instalações Hidráulicas e Sanitárias', 'Viário', 'Ar condicionado', 'Outros'.
- **id_unidade_requisitante**: ID of the unit that made the request.
- **nome_unidade_requisitante**: Name of the unit that made the request.
- **id_unidade_custo**: ID of the unit to which the cost will be charged (may be the same as the requesting unit).
- **nome_unidade_custo**: Name of the unit to which the cost will be charged (may be the same as the requesting unit).
- **data_cadastro**: Date on which the request was registered.
- **descricao**: Description of the request, usually a justification for that maintenance.
- **local**: Exact location where the maintenance will be performed; it may be a room, a laboratory, etc.
- <span style="color:red"><b>usuario</b></span>: User who requested the maintenance. Probably not useful for our problem.
- **status**: Current status of the request. It can help in the cost analysis, for instance by considering only the requests that were approved, or by comparing the proportion of approved and denied requests for each sector.
### 2.12 - Removing unnecessary columns
- <span style="color:red"><b>numero</b></span>: It is just the request ID.
- <span style="color:red"><b>usuario</b></span>: We do not need to know the user for our analysis.
```
def remove_cols(df_input, dropped_columns):
'''
This function receives a dataframe and a list of column names as input. It checks whether each column exists,
and if it does, it is removed.
'''
for dropped_column in dropped_columns:
if dropped_column in df_input:
df_input = df_input.drop([dropped_column], axis=1)
return df_input
maintenance_dropped = ['numero', 'usuario']
maintenance_data = remove_cols(maintenance_data, maintenance_dropped)
maintenance_data.head()
```
### 2.13 - Removing outliers and unnecessary values
Here we analyze the values in our dataset and determine which ones we can remove or modify in order to make our analysis easier.
```
print(maintenance_data.status.value_counts())
```
**Note:**
Checking the statuses, we can see that most of the values occur a very small number of times and we do not need them for our analysis, so we will drop the values with 800 occurrences or fewer.
```
maintenance_data = maintenance_data.groupby('status').filter(lambda x: len(x) > 800)
maintenance_data.status.value_counts()
```
**Note:**
This leaves 5 possible values for the **status** column. However, for our cost analysis we only need to know whether the request was denied or authorized. Looking at the remaining statuses, we can consider every request whose value is not NEGADA to be AUTORIZADA.
```
def convert_status(status_val):
'''Converts the value of all strings in the status column to AUTORIZADA, unless their value is NEGADA.'''
if status_val == 'NEGADA':
return status_val
else:
return 'AUTORIZADA'
maintenance_data['status'] = maintenance_data['status'].apply(convert_status)
maintenance_data.status.value_counts()
maintenance_data.info()
print(maintenance_data.divisao.value_counts())
print(maintenance_data.nome_unidade_custo.value_counts())
```
### 2.14 - Handling null values
Here we use the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method to check which columns of our dataset contain null values. Based on that, depending on how many columns have null values and on the data type, we decide what to do with those values.
```
maintenance_data.info()
maintenance_data.divisao.value_counts()
```
**Note:**
Using the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method, we notice that there are many **NULL** values in the *local* column and a few in the *divisao* column. For the *local* column, we will fill the **null** rows with their *nome_unidade_custo* values. For the *divisao* column, we will fill them with the value 'Outros', which is one of the most common.
```
import numpy as np
maintenance_data['local'] = np.where(maintenance_data.local.isnull(), maintenance_data.nome_unidade_custo, maintenance_data.local)
maintenance_data['divisao'] = maintenance_data['divisao'].fillna('Outros')
maintenance_data.info()
# Final result of the cleaning
maintenance_data.head()
```
## 2.2 - Material and service requests
This dataset lists all requests for materials and services contracted by UFRN since 2008.
```
material_request_data = data['requisicaomaterialservico']
print('===== Primeiras linhas =====')
print(material_request_data.head())
print('===== Contagem de valores de natureza_despesa =====')
print(material_request_data.natureza_despesa.value_counts())
print('===== Contagem de valores de status =====')
print(material_request_data.status.value_counts())
```
### 2.21 - Describing the columns and values
Looking at the output of the cell above, we can draw the following conclusions about the columns:
- <span style="color:red"><b>numero</b></span>: Request ID; not relevant.
- **ano**: Year in which the request was made.
- **id_unidade_requisitante**: ID of the unit that made the request; every unit has a unique ID.
- **nome_unidade_requisitante**: Name of the unit that made the request.
- **id_unidade_custo**: ID of the unit to which the costs will be charged; may differ from the requesting unit.
- **nome_unidade_custo**: Name of the unit to which the costs will be charged; may differ from the requesting unit.
- **data_envio**: Date on which the request was sent.
- <span style="color:red"><b>numero_contrato</b></span>: Apparently the requests are made through contracts; this is the contract number.
- **contratado**: Company contracted to supply the material.
- <span style="color:red"><b>natureza_despesa</b></span>: In all rows analyzed, this column has the value 'SERV. PESSOA JURÍDICA'.
- **valor**: Amount requested.
- **observacoes**: Comment made by the person who made the request, explaining its reason.
- **status**: The current status of the request; it is directly linked to the expenditure commitment and can take the following values: 'ENVIADA', 'PENDENTE ATENDIMENTO', 'CADASTRADA', 'ESTORNADA', 'LIQUIDADA', 'PENDENTE AUTORIZAÇÃO', 'FINALIZADA', 'EM_LIQUIDACAO', 'NEGADA', 'A_EMPENHAR', 'EMPENHO_ANULADO', 'AUTORIZADA', 'CANCELADA\n'.
### 2.22 - Removing unnecessary columns
The following columns will be dropped:
- <span style="color:red"><b>numero</b></span>: It is just the request ID; not needed.
- <span style="color:red"><b>numero_contrato</b></span>: Unnecessary information for the analysis.
- <span style="color:red"><b>natureza_despesa</b></span>: Has the same value in every row.
```
material_dropped = ['numero' ,'natureza_despesa', 'numero_contrato']
material_request_data = remove_cols(material_request_data, material_dropped)
print(material_request_data.head())
```
### 2.23 - Removing outliers and unnecessary values
Here we analyze the data in our dataset and determine which values we can remove or modify in order to make our analysis easier.
```
print(material_request_data.status.value_counts())
```
**Note:**
Checking the value counts of the *status* column, we notice that a large share of the possible values have very few occurrences in the dataset. These low-frequency values have little influence on our analysis, so we will drop them.
```
allowed_status = ['LIQUIDADA', 'EM_LIQUIDACAO', 'ENVIADA', 'ESTORNADA', 'FINALIZADA', 'CADASTRADA']
material_request_data = material_request_data[material_request_data.status.isin(allowed_status)]
print(material_request_data.status.value_counts())
```
### 2.24 - Handling null values
Here we use the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method to check which columns of our dataset contain null values. Based on that, depending on how many columns have null values and on the data type, we decide what to do with those values.
```
material_request_data.info()
material_request_data[material_request_data.data_envio.isnull()].head(n=20)
```
- **data_envio**: Has several null values. Since most of them are well separated from one another and the dataset is ordered by date, we can fill them using the value of this column in previous rows.
- **observacoes**: Some observations also have null values; we will simply set those to an empty string.
```
material_request_data.data_envio = material_request_data.data_envio.fillna(method='ffill')
material_request_data.observacoes = material_request_data.observacoes.fillna('')
material_request_data.info()
```
## 2.3 - Empenhos (expenditure commitments)
Dataset containing the list of all expenditure commitments (empenhos) made by UFRN since 2001.
The commitment of an expense (empenho) consists of deducting from the balance of a given budget allocation the amount needed to carry out the agency's activities. It is the way budget resources are committed. No expense may be incurred without a prior commitment (art. 60 of Law No. 4,320/64), which is issued after authorization by the Expense Authorizing Officer of each Executing Management Unit.
```
empenhos_data = data['empenhos']
print(empenhos_data.head())
print(empenhos_data.data.value_counts())
```
### 2.31 - Describing the columns and values
- <span style="color:red"><b>cod_empenho</b></span>: Commitment ID;
- **ano**: Year in which the commitment was requested;
- **modalidade**: The expenditure commitment can be of three different types:
- a) Ordinary – the expense has an exact amount and must be settled and paid in a single payment;
- b) Estimated – the total amount of the expense is estimated and may be settled and paid in monthly installments;
- c) Global – the total expense is known and its payment is split into installments according to an execution schedule.
- **id_unidade_getora**: ID of the budgetary or administrative unit empowered to manage budget credits and/or financial resources;
- **nome_unidade_gestora**: Name of the budgetary or administrative unit empowered to manage budget credits and/or financial resources;
- **data**: Date on which the commitment was made;
- **programa_trabalho_resumido**: Summary of the program/work for which the commitment is intended;
- **fonte_recurso**: Where the funds used in the commitment come from;
- **plano_interno**: Plan associated with an agency's budget;
- **esfera**: Can take the following values: 'FISCAL', 'SEGURIDADE', 'INVESTIMENTO', 'CUSTEIO';
- **natureza_despesa**: The type of expense for which the commitment was made. We can, for instance, check spending on software development; among the values of this column we have: 'MAT. CONSUMO', 'SERV. PESSOA JURÍDICA', 'EQUIP. MATERIAL PERMANENTE', 'OBRAS E INSTALAÇÕES', 'PASSAGENS', 'SERVIÇOS DE TECNOLOGIA DA INFORMAÇÃO E COMUNICAÇÃO', 'DESENVOLVIMENTO DE SOFTWARE', 'DIV.EXERCÍCIOS ANTERIORES', 'SERV. PESSOA FÍSICA', 'LOC. MÃO-DE-OBRA', 'SERVIÇOS / UG-GESTÃO', etc.
- **creador**: The beneficiary of the commitment;
- **valor_empenho**: Total amount of the commitment;
- **valor_reforcado**: A commitment may be reinforced when the committed amount is insufficient to cover the expense to be incurred; if the commitment amount exceeds the amount of the expense actually incurred, the commitment must be partially annulled. It is fully annulled when the object of the contract was not fulfilled, or when it was issued incorrectly. This is therefore an additional amount on top of the initial value;
- **valor_cancelado**: Amount of the commitment that was cancelled relative to the total;
- **valor_anulado**: Similar to the cancelled amount, but it must annul the entirety of valor_empenho or valor_reforcado.
- **saldo_empenho**: Final amount of the commitment.
- <span style="color:red"><b>processo</b></span>: Process number of the commitment. TO DROP.
- <span style="color:red"><b>documento_associado</b></span>: Document associated with the process. TO DROP.
- <span style="color:red"><b>licitacao</b></span>: TO DROP.
- <span style="color:red"><b>convenio</b></span>: TO DROP (?); possibly JOIN with another dataset.
- <span style="color:red"><b>observacoes</b></span>: TO DROP.
### 2.32 - Removing unnecessary columns
We will remove the following columns:
- <span style="color:red"><b>cod_empenho</b></span>: It is just the commitment ID; not needed.
- <span style="color:red"><b>processo</b></span>: Does not add relevant information to the study.
- <span style="color:red"><b>documento_associado</b></span>: Does not add relevant information to the study.
- <span style="color:red"><b>licitacao</b></span>: Does not add relevant information to the study.
- <span style="color:red"><b>convenio</b></span>: Does not add relevant information to the study.
- <span style="color:red"><b>observacoes</b></span>: Does not add relevant information to the study.
We can also see several columns with null or repeated values, which will be investigated further in a later section.
```
empenhos_dropped = ['cod_empenho', 'processo', 'documento_associado', 'licitacao', 'convenio', 'observacoes']
empenhos_data = remove_cols(empenhos_data, empenhos_dropped)
print(empenhos_data.head())
```
### 2.33 - Removing outliers and unnecessary values
The commitments dataset gives us values from 2001 to 2018, but we are working with data from 2008 to 2017, so we can remove all rows whose **ano** column is less than 2008 or greater than 2017.
```
# Defining a vector with the years we'll analyse
years = [2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017]
empenhos_data = empenhos_data[empenhos_data.ano.isin(years)]
```
### 2.34 - Handling null values
Here we use the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method to check which columns of our dataset contain null values. Based on that, depending on how many columns have null values and on the data type, we decide what to do with those values.
```
empenhos_data.info()
empenhos_data[empenhos_data.valor_anulado.notnull()].head()
```
**Note:**
The **valor_anulado**, **valor_reforcado** and **valor_cancelado** columns all have a very small number of non-null values. Since the **valor_empenho** and **saldo_empenho** columns have no missing values, we do not need the others for our analysis, so we can drop them.
```
valores_drop = ['valor_reforcado', 'valor_anulado', 'valor_cancelado']
empenhos_data = remove_cols(empenhos_data, valores_drop)
empenhos_data.head()
```
# 3 - Visualizing the data
In this section we use the *matplotlib* library to plot charts in order to visualize our data.
## 3.1 - UFRN budget
In our analysis, we use data on the total amount transferred by the federal government to UFRN from 2006 to 2018 to compare the university's investments over those years. We will look at possible correlations between budget variations and the areas that may have been affected by them.
```
import matplotlib.pyplot as plt
%matplotlib inline
years = [2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017]
budget = [62010293, 136021308, 203664331, 172999177, 221801098, 246858171, 228864259, 207579799, 230855480, 186863902]
# Plot of UFRN's budget from 2008 to 2017; we can see that it fell every year since 2013, except for 2016.
budget_scaled = [value / 1000000 for value in budget]
plt.rcParams['figure.figsize'] = (11, 7)
plt.plot(years, budget_scaled, 'r')
plt.scatter(years, budget_scaled, color='green')
plt.xlabel("Ano")
plt.ylabel("Orçamento (em milhões de reais)")
plt.xticks(years)
plt.show()
```
## 3.2 - Maintenance requests
This dataset has no cost values, so we will only analyze the number of requests per year and their *status*, *divisao* and *descricao*.
```
autorized_count_year = []
denied_count_year = []
for year in years:
status_count = maintenance_data[maintenance_data.ano == year].status.value_counts()
autorized_count_year.append(status_count['AUTORIZADA'])
denied_count_year.append(status_count['NEGADA'])
import datetime
from matplotlib.dates import date2num
bar_width = 0.2
# Shifts each year by bar_width to make sure bars are drawn some space apart from each other
years_shifted_left = [year - bar_width for year in years]
years_shifted_right = [year + bar_width for year in years]
ax = plt.subplot(111)
ax.bar(years_shifted_left, autorized_count_year, width=bar_width, color='g', align='center')
ax.bar(years, denied_count_year, width=bar_width, color='r', align='center')
legends = ['Autorizadas', 'Negadas']
plt.legend(legends)
plt.ylabel("Quantidade")
plt.xlabel("Ano")
plt.xticks(years)
plt.title("Manutenções autorizadas x negadas de 2008 a 2017")
plt.show()
divisao_year_count = []
# Keeps all unique values for 'divisao' column.
divisao_values = maintenance_data.divisao.unique()
for year in years:
maintenance_data_year = maintenance_data[maintenance_data.ano == year]
divisao_year_count.append(maintenance_data_year.divisao.value_counts())
# If a key doesn't exist in the count, we add it.
for possible_value in divisao_values:
for year_count in divisao_year_count:
if possible_value not in year_count.index:
year_count[possible_value] = 0
bar_width = 0.15
# Shifts each year by bar_width to make sure bars are drawn some space apart from each other
ax = plt.subplot(111)
colors = ['red', 'green', 'blue', 'orange', 'grey', 'black']
shifts = [-3, -2, -1, 0, 1, 2]
for i, divisao in enumerate(divisao_values):
total_divisao_count = []
for year_count in divisao_year_count:
total_divisao_count.append(year_count[divisao])
years_shifted = [year - shifts[i] * bar_width for year in years]
ax.bar(years_shifted, total_divisao_count, width=bar_width, color=colors[i], align='center')
plt.legend(divisao_values)
plt.ylabel("Quantidade")
plt.xlabel("Ano")
plt.xticks(years)
plt.title("Proporção dos tipos de manutenção de 2008 a 2017.")
plt.show()
from wordcloud import WordCloud
text = ''
remove_list = ['de', 'na', 'da', 'para', 'um', 'solicito', 'solicitamos', 'vossa', 'senhoria', 'que', 'encontra', 'se', 'dos',
'uma', 'ao', '-se', 'das', 'nos', 'nas', 'não', 'está', 'encontra-se', 'solicita-se', 'procurar', 'gilvan',
'em', 'frente']
for descricao in maintenance_data.descricao:
word_list = descricao.split()
descricao = ' '.join([i for i in word_list if i.lower() not in remove_list])
text += descricao + '\n'
wordcloud = WordCloud().generate(text)
plt.figure()
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```
## 3.3 - Material requests
```
# Considering that the budget started to shrink in 2013, we still had spending peaks on materials in 2013 and 2016, but
# we also had sharp drops in 2015 and 2017, which are precisely the two years with the largest budget drops,
# suggesting that UFRN may have suffered from the spending cuts.
material_spending = []
for year in years:
material_spending.append(material_request_data[material_request_data.ano == year].valor.sum() / 1000000)
plt.plot(years, material_spending, 'r')
plt.scatter(years, material_spending, color='green')
plt.xlabel("Ano")
plt.ylabel("Gasto com material (em milhões de reais)")
plt.xticks(years)
plt.title("Valor gasto com material na UFRN de 2008 a 2017.")
plt.show()
```
## 3.4 - Empenhos (expenditure commitments)
```
valor_year = []
saldo_year = []
for year in years:
valor_year.append(empenhos_data[empenhos_data.ano == year].valor_empenho.sum() / 1000000)
saldo_year.append(empenhos_data[empenhos_data.ano == year].saldo_empenho.sum() / 1000000)
plt.plot(years, valor_year, 'r', label='Valor pedido')
plt.scatter(years, valor_year, color='blue')
plt.title("Valor total pedido pelos empenhos da UFRN de 2006 a 2017.")
plt.xlabel('Ano')
plt.ylabel('Valor total (milhões)')
plt.xticks(years)
plt.show()
# Plotting the balance values does not give a good visualization, because the range between the values is too small,
# which makes the variation large in proportion but small in absolute value.
plt.plot(years, saldo_year, 'g')
plt.scatter(years, saldo_year, color='blue')
plt.title("Valor total empenhado pela UFRN de 2006 a 2017.")
plt.xlabel('Ano')
plt.ylabel('Saldo (milhões)')
plt.xticks(years)
plt.show()
# The bar chart gives a better visualization. We can see that there is no large variation in the total value of UFRN's
# annual commitments, but they still follow a variation trend similar to that of the budget.
plt.bar(years, saldo_year)
plt.title("Saldo autorizado pelos empenhos da UFRN de 2006 a 2017.")
plt.xlabel("Ano")
plt.ylabel("Gastos (em milhões de reais)")
plt.xticks(years)
plt.show()
bar_width = 0.2
# Shifts each year by bar_width to make sure bars are drawn some space apart from each other
years_shifted_left = [year - bar_width for year in years]
years_shifted_right = [year + bar_width for year in years]
ax = plt.subplot(111)
ax.bar(years_shifted_left, valor_year, width=bar_width, color='g', align='center')
ax.bar(years_shifted_right, saldo_year, width=bar_width, color='b', align='center')
ax.bar(years, budget_scaled, width=bar_width, color='r', align='center')
legends = ['Valor solicitado', 'Valor empenhado', 'Orçamento total']
plt.legend(legends)
plt.ylabel("Valor (milhões)")
plt.xlabel("Ano")
plt.xticks(years)
plt.title("Valor pedido vs. Valor empenhado vs. Orçamento")
plt.show()
```
# Introduction: Prediction Engineering: Labeling Historical Examples
In this notebook, we will develop a method for labeling customer transactions data for a customer churn prediction problem. The objective of labeling is to create a set of historical examples of what we want to predict based on the business need: in this problem, our goal is to predict customer churn, so we want to create labeled examples of past churn from the data.
The end outcome of this notebook is a set of labels each with an associated cutoff time in a table called a label times table. These labels with cutoff times can later be used in Featuretools for automated feature engineering. These features in turn will be used to train a predictive model to forecast customer churn, a common need for subscription-based business models, and one for which machine learning is well-suited.
The process of prediction engineering is shown below:

## Definition of Churn: Prediction Problems
The definition of churn is __a customer going without an active membership for a certain number of days.__ The number of days and when to make predictions are left as parameters that can be adjusted based on the particular business need, as are the lead time and the prediction window. In this notebook, we'll make labels for two scenarios:
1. Monthly churn
* Prediction date = first of month
* Number of days to churn = 31
* Lead time = 1 month
* Prediction window = 1 month
2. Bimonthly churn
* Prediction date = first and fifteenth of month
* Number of days to churn = 14
* Lead time = 2 weeks
* Prediction window = 2 weeks
The problem parameters with details filled in for the first situation are shown below:

### Dataset
The [data (publicly available)](https://www.kaggle.com/c/kkbox-churn-prediction-challenge/data) consists of customer transactions for [KKBOX](https://www.kkbox.com), the leading music subscription streaming service in Asia.
For each customer, we have background information (in `members`), logs of listening behavior (in `logs`), and transactions information (in `trans`). The only data we need for labeling is the _transactions information_.
The transactions data consists of a number of variables, the most important of which are customer id (`msno`), the date of transaction (`transaction_date`), and the expiration date of the membership (`membership_expire_date`). Using these columns, we can find each churn for each customer and the corresponding date on which it occurred. Let's look at a few typical examples of customer transaction data to illustrate how to find a churn example. For these examples, we will use the first prediction problem.
## Churn Examples
__Example 1:__
```
(transaction_date, membership_expire_date, is_cancel)
(2017-01-01, 2017-02-28, false)
(2017-02-25, 2017-03-15, false)
(2017-04-31, 2017-05-20, false)
```
This customer is a churn because they go without a membership for over 31 days, from 03-15 to 04-31. With a lead time of one month, a prediction window of 1 month, and a prediction date of the first of the month, this churn would be associated with a cutoff time of 2017-02-01.
__Example 2:__
```
(transaction_date, membership_expire_date, is_cancel)
(2017-01-01, 2017-02-28, false)
(2017-02-25, 2017-04-03, false)
(2017-03-15, 2017-03-16, true)
(2017-04-01, 2017-06-31, false)
```
This customer is not a churn. Even though they have a cancelled membership (cancelled on 03-15, taking effect on 03-16), the membership plan is renewed within 31 days.
__Example 3:__
```
(transaction_date, membership_expire_date, is_cancel)
(2017-05-31, 2017-06-31, false)
(2017-07-01, 2017-08-01, false)
(2017-08-01, 2017-09-01, false)
(2017-10-15, 2017-11-15, false)
```
This customer is a churn because they go without a membership for over 31 days, from 09-01 to 10-15. The associated cutoff time of this churn is 2017-09-01.
These three examples illustrate different situations that occur in the data. Depending on the prediction problem, these may or may not be churns and can be assigned to different cutoff times.
# Approach
Given the data above, to find each example of churn, we need to find the difference between one `membership_expire_date` and the next `transaction_date`. If this period is greater than the number of days selected for a churn, then this is a positive example of churn. For each churn, we can find the exact date on which it occurred by adding the number of days for a churn to the `membership_expire_date` associated with the churn. We create a set of cutoff times using the prediction date parameter and then, for each positive label, determine the cutoff time for the churn. As an example, if the churn occurs on 09-15 with a lead time of 1 month and a prediction window of 1 month, then this churn gets the cutoff time 08-01. Cutoff times where the customer was active 1-2 months out (for this problem) will receive a negative label, and cutoff times where we cannot determine whether the customer was active or churned will not be labeled.
We can very rapidly label customer transactions by shifting each `transaction_date` back by one and matching it to the previous `membership_expire_date`. We then find the difference in days between these two (`transaction` - `expire`) and if the difference is greater than the number of days established for churn, this is a positive label. Once we have these positive labels, associating them with a cutoff time is straightforward.
If this is not clear, we'll shortly see how to do it in code which should clear things up!
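Before the full implementation, here is a minimal sketch of that shift-and-difference idea on assumed toy data (three transactions for one customer, with a 31-day churn threshold):
```python
import pandas as pd

# Assumed toy transactions; only the two date columns used for labeling are included.
trans_toy = pd.DataFrame({
    'transaction_date': pd.to_datetime(['2017-01-01', '2017-02-25', '2017-05-01']),
    'membership_expire_date': pd.to_datetime(['2017-02-28', '2017-03-15', '2017-06-01'])
}).sort_values('transaction_date')

# Shift the transaction dates back by one row so each expiration lines up with the *next* transaction.
trans_toy['next_transaction_date'] = trans_toy['transaction_date'].shift(-1)
trans_toy['gap_days'] = (trans_toy['next_transaction_date'] -
                         trans_toy['membership_expire_date']).dt.days

# A gap larger than the churn threshold (31 days here) marks a positive churn example.
trans_toy['churn'] = trans_toy['gap_days'] > 31
print(trans_toy[['membership_expire_date', 'next_transaction_date', 'gap_days', 'churn']])
```
In this sketch the second row has a 47-day gap between the membership expiration and the next transaction, so it would be flagged as a positive churn example.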
The general framework is implemented in two functions:
1. `label_customer(customer_id, transactions, **params)`
2. `make_label_times(transactions, **params)`
The first takes a single customer and returns a table of cutoff times for that customer along with the associated labels. The second goes through all of the customers and applies the `label_customer` function to each one. The end outcome is a single table consisting of the label times for each customer. Since we already partitioned the data, we can run this function over multiple partitions in parallel to rapidly label all the data.
## Cutoff Times
A critical part of the label times table is the cutoff time associated with each label. The times at which we make predictions are referred to as _cutoff_ times, and they mark the point before which all the data used to make features for that particular label must occur. For instance, if our cutoff time is July 1 and we want to predict churn during the month of August, all of our features for this label must be made with data from before July 1. Cutoff times are a critical consideration when feature engineering for time-series problems, to prevent data leakage. Later, when we perform automated feature engineering, Featuretools will automatically filter data based on the cutoff times so we don't have to worry about invalid training data.
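As a small sketch (the date range here is an assumption; only the frequency aliases 'MS' and 'SMS' come from the problem definition above), candidate cutoff times can be generated with `pd.date_range`:
```python
import pandas as pd

# 'MS' gives the first of each month; 'SMS' gives the first and the fifteenth.
monthly_cutoffs = pd.date_range('2017-01-01', '2017-04-01', freq='MS')
semimonthly_cutoffs = pd.date_range('2017-01-01', '2017-04-01', freq='SMS')
print(monthly_cutoffs)       # 2017-01-01, 2017-02-01, 2017-03-01, 2017-04-01
print(semimonthly_cutoffs)   # also includes the 15th of each month
```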
### Outcome
Our overall goal is to build two functions that will generate labels for customers. We can then run these functions over our partitions in parallel (our data has been partitioned into 1000 segments, each containing a random subset of customers). Once the label dataframes with cutoff times have been created, we can use them for automated feature engineering with Featuretools.
```
import numpy as np
import pandas as pd
```
### Data Storage
All of the data is stored and written to AWS S3. The work was completed on AWS EC2 instances which makes retrieving and writing data to S3 extremely fast. The data is publicly readable from the bucket but you'll have to configure AWS with your credentials.
* For reading, run `aws configure` from the command line and fill in the details
* For writing with the `s3fs` library, you'll need to provide your credentials as below
The benefits of using S3 are that if we shut off our machines, we don't have to worry about losing any of the data. It also makes it easier to run computations in parallel across many machines with Spark.
```
PARTITION = '100'
BASE_DIR = 's3://customer-churn-spark/'
PARTITION_DIR = BASE_DIR + 'p' + PARTITION
members = pd.read_csv(f'{PARTITION_DIR}/members.csv',
parse_dates=['registration_init_time'], infer_datetime_format = True)
trans = pd.read_csv(f'{PARTITION_DIR}/transactions.csv',
parse_dates=['transaction_date', 'membership_expire_date'], infer_datetime_format = True)
logs = pd.read_csv(f'{PARTITION_DIR}/logs.csv', parse_dates = ['date'])
trans.head()
```
The transactions table is all we will need to make labels.
The next cell is needed for writing data back to S3.
```
import s3fs
# Credentials
with open('/data/credentials.txt', 'r') as f:
info = f.read().strip().split(',')
key = info[0]
secret = info[1]
fs = s3fs.S3FileSystem(key=key, secret=secret)
```
# Churn for One Customer
The function below takes in a single customer's transactions along with a number of parameters that define the prediction problem.
* `prediction_date`: when we want to make predictions
* `churn_days`: the number of days without a membership required for a churn
* `lead_time`: how long in advance to predict churn
* `prediction_window`: the length of time over which we look for a churn.
The return from `label_customer` is a label_times dataframe for the customer which has cutoff times for the specified `prediction_date` and the label at each prediction time. Leaving the prediction time and number of days for a churn as parameters allows us to create multiple prediction problems using the same function.
```
def label_customer(customer_id, customer_transactions, prediction_date, churn_days,
lead_time = 1, prediction_window = 1, return_trans = False):
"""
Make label times for a single customer. Returns a dataframe of labels with times, the binary label,
and the number of days until the next churn.
Params
--------
customer_id (str): unique id for the customer
customer_transactions (dataframe): transactions dataframe for the customer
prediction_date (str): time at which predictions are made. Either "MS" for the first of the month
or "SMS" for the first and fifteenth of each month
churn_days (int): integer number of days without an active membership required for a churn. A churn is
defined by exceeding this number of days without an active membership.
    lead_time (int): number of periods in advance to make predictions for. Defaults to 1 (predictions for one offset)
prediction_window(int): number of periods over which to consider churn. Defaults to 1.
return_trans (boolean): whether or not to return the transactions for analysis. Defaults to False.
Return
--------
label_times (dataframe): a table of customer id, the cutoff times at the specified frequency, the
label for each cutoff time, the number of days until the next churn for each
cutoff time, and the date on which the churn itself occurred.
transactions (dataframe): [optional] dataframe of customer transactions if return_trans = True. Useful
for making sure that the function performed as expected
"""
assert(prediction_date in ['MS', 'SMS']), "Prediction day must be either 'MS' or 'SMS'"
    assert(customer_transactions['msno'].unique() == [customer_id]), "Transactions must be for only one customer"
# Don't modify original
transactions = customer_transactions.copy()
    # Make sure to sort chronologically
transactions.sort_values(['transaction_date', 'membership_expire_date'], inplace = True)
# Create next transaction date by shifting back one transaction
transactions['next_transaction_date'] = transactions['transaction_date'].shift(-1)
# Find number of days between membership expiration and next transaction
transactions['difference_days'] = (transactions['next_transaction_date'] -
transactions['membership_expire_date']).\
dt.total_seconds() / (3600 * 24)
# Determine which transactions are associated with a churn
transactions['churn'] = transactions['difference_days'] > churn_days
# Find date of each churn
transactions.loc[transactions['churn'] == True,
'churn_date'] = transactions.loc[transactions['churn'] == True,
'membership_expire_date'] + pd.Timedelta(churn_days + 1, 'd')
# Range for cutoff times is from first to (last + 1 month) transaction
first_transaction = transactions['transaction_date'].min()
last_transaction = transactions['transaction_date'].max()
    start_date = pd.Timestamp(first_transaction.year, first_transaction.month, 1)
    # Handle December
    if last_transaction.month == 12:
        end_date = pd.Timestamp(last_transaction.year + 1, 1, 1)
    else:
        end_date = pd.Timestamp(last_transaction.year, last_transaction.month + 1, 1)
# Make label times dataframe with cutoff times corresponding to prediction date
label_times = pd.DataFrame({'cutoff_time': pd.date_range(start_date, end_date, freq = prediction_date),
'msno': customer_id
})
# Use the lead time and prediction window parameters to establish the prediction window
# Prediction window is for each cutoff time
label_times['prediction_window_start'] = label_times['cutoff_time'].shift(-lead_time)
label_times['prediction_window_end'] = label_times['cutoff_time'].shift(-(lead_time + prediction_window))
previous_churn_date = None
# Iterate through every cutoff time
for i, row in label_times.iterrows():
# Default values if unknown
churn_date = pd.NaT
label = np.nan
# Find the window start and end
window_start = row['prediction_window_start']
window_end = row['prediction_window_end']
# Determine if there were any churns during the prediction window
churns = transactions.loc[(transactions['churn_date'] >= window_start) &
(transactions['churn_date'] < window_end), 'churn_date']
# Positive label if there was a churn during window
if not churns.empty:
label = 1
churn_date = churns.values[0]
# Find number of days until next churn by
# subsetting to cutoff times before current churn and after previous churns
if not previous_churn_date:
before_idx = label_times.loc[(label_times['cutoff_time'] <= churn_date)].index
else:
before_idx = label_times.loc[(label_times['cutoff_time'] <= churn_date) &
(label_times['cutoff_time'] > previous_churn_date)].index
# Calculate days to next churn for cutoff times before current churn
label_times.loc[before_idx, 'days_to_churn'] = (churn_date - label_times.loc[before_idx,
'cutoff_time']).\
dt.total_seconds() / (3600 * 24)
previous_churn_date = churn_date
# No churns, but need to determine if an active member
else:
# Find transactions before the end of the window that were not cancelled
transactions_before = transactions.loc[(transactions['transaction_date'] < window_end) &
(transactions['is_cancel'] == False)].copy()
            # If the membership expiration date for this membership is after the window start, the customer has not churned
if np.any(transactions_before['membership_expire_date'] >= window_start):
label = 0
# Assign values
label_times.loc[i, 'label'] = label
label_times.loc[i, 'churn_date'] = churn_date
# Handle case with no churns
if not np.any(label_times['label'] == 1):
label_times['days_to_churn'] = np.nan
label_times['churn_date'] = pd.NaT
if return_trans:
return label_times.drop(columns = ['msno']), transactions
return label_times[['msno', 'cutoff_time', 'label', 'days_to_churn', 'churn_date']].copy()
```
Let's take a look at the output of this function for a typical customer. We'll take the use case of making predictions on the first of each month with 31 days required for a churn, a lead time of 1 month, and a prediction window of 1 month.
```
CUSTOMER_ID = trans.iloc[8, 0]
customer_transactions = trans.loc[trans['msno'] == CUSTOMER_ID].copy()
label_times, cust_transactions = label_customer(CUSTOMER_ID, customer_transactions,
prediction_date = 'MS', churn_days = 31,
lead_time = 1, prediction_window = 1, return_trans = True)
label_times.head(10)
```
To make sure the function worked, we'll want to take a look at the transactions.
```
cust_transactions.iloc[3:10, -7:]
```
We see that the churn occurred on 2016-03-16, as the customer went 98 days without an active membership, from 2016-02-14 to 2016-05-22. The actual churn occurs 31 days after the membership expires. The churn is only associated with one cutoff time, 2016-02-01, which corresponds to the lead time and prediction window of this problem.
Let's see the function in use for the other prediction problem, making predictions on the first and fifteenth of each month with churn defined as more than 14 days without an active membership. The lead time is set to two weeks (one prediction period) and the prediction window is also set to two weeks. To change the prediction problem, all we need to do is alter the parameters.
```
CUSTOMER_ID = trans.iloc[100, 0]
customer_transactions = trans.loc[trans['msno'] == CUSTOMER_ID].copy()
label_times, cust_transactions = label_customer(CUSTOMER_ID, customer_transactions,
prediction_date = 'SMS', churn_days = 14,
lead_time = 1, prediction_window = 1, return_trans = True)
label_times.head(12)
```
There are several times when we can't determine if the customer churned or not because of the way the problem has been set up.
```
cust_transactions.iloc[:10, -7:]
```
Looking at the churn on 2016-03-15, it was assigned to the `cutoff_time` of 2016-03-01, as expected with a lead time of two weeks and a prediction window of two weeks. (For churns that occur at the boundary between one prediction window and the next, we assign them to the window whose start they fall on. This can be quickly changed by altering the logic of the function.)
The function works as designed: we can pass in different parameters and rapidly create new prediction problems. We also have the number of days until the next churn, which means we could formulate the problem as regression instead of classification.
# Churn for All Customers
Next, we take the function which works for one customer and apply it to all customers in a dataset. This requires a loop through the customers by grouping the customer transactions and applying `label_customer` to each customer's transactions.
```
def make_label_times(transactions, prediction_date, churn_days,
lead_time = 1, prediction_window = 1,):
"""
Make labels for an entire series of transactions.
Params
--------
transactions (dataframe): table of customer transactions
prediction_date (str): time at which predictions are made. Either "MS" for the first of the month
or "SMS" for the first and fifteenth of each month
churn_days (int): integer number of days without an active membership required for a churn. A churn is
defined by exceeding this number of days without an active membership.
    lead_time (int): number of periods in advance to make predictions for. Defaults to 1 (predictions for one offset)
prediction_window(int): number of periods over which to consider churn. Defaults to 1.
Return
--------
label_times (dataframe): a table with customer ids, cutoff times, binary label, regression label,
and date of churn. This table can then be used for feature engineering.
"""
label_times = []
transactions = transactions.sort_values(['msno', 'transaction_date'])
# Iterate through each customer and find labels
for customer_id, customer_transactions in transactions.groupby('msno'):
lt_cust = label_customer(customer_id, customer_transactions,
prediction_date, churn_days,
lead_time, prediction_window)
label_times.append(lt_cust)
# Concatenate into a single dataframe
return pd.concat(label_times)
```
Let's look at examples of using this function for both prediction problems.
## First Prediction Problem
The definition of the first prediction problem is as follows:
* Monthly churn
* Prediction date = first of month
* Number of days to churn = 31
* Lead time = 1 month
* Prediction window = 1 month
```
label_times = make_label_times(trans, prediction_date = 'MS', churn_days = 31,
lead_time = 1, prediction_window = 1)
label_times.tail(10)
label_times.shape
label_times['label'].value_counts()
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
label_times['label'].value_counts().plot.bar(color = 'r');
plt.xlabel('Label'); plt.ylabel('Count'); plt.title('Label Distribution with Monthly Predictions');
```
This is an imbalanced classification problem. There are far more instances of customers not churning than of customers churning. This is not necessarily an issue as long as we are smart about the choices of metrics we use for modeling.
## Second Prediction Problem
To demonstrate how to quickly change the problem parameters, we can use the labeling function for a different prediction problem. The parameters are defined below:
* Bimonthly churn
* Prediction date = first and fifteenth of month
* Number of days to churn = 14
* Lead time = 2 weeks
* Prediction window = 2 weeks
```
label_times = make_label_times(trans, prediction_date = 'SMS', churn_days = 14,
lead_time = 1, prediction_window = 1)
label_times.tail(10)
label_times.shape
label_times['label'].value_counts().plot.bar(color = 'r');
plt.xlabel('Label'); plt.ylabel('Count'); plt.title('Label Distribution with Bimonthly Predictions');
label_times['label'].isnull().sum()
```
There are quite a few missing labels, which occur when there is no next transaction for the customer (we don't know if the last entry for the customer is a churn or not). We won't be able to use these examples when training a model although we can make predictions for them.
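A one-line sketch of how those unlabeled rows can be dropped before training (column names as in the label times table above):
```
# Keep only cutoff times where we know whether the customer churned or stayed
labeled = label_times.dropna(subset=['label'])
labeled['label'].value_counts()
```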
# Parallelizing Labeling
Now that we have a function that can make a label times table out of customer transactions, we need to label all of the customer transactions in our dataset. We already broke the data into 1000 partitions, so we can parallelize this operation using Spark with PySpark. The basic idea is to write a function that makes the label times for one partition, and then run this in parallel across all the partitions using either multiple cores on a single machine, or a cluster of machines.
The function below takes in a partition number, reads the transactions data from S3, creates the label times table for both prediction problems, and writes the label times back to S3. We can run this function in parallel over multiple partitions at once since the customers are independent of one another. That is, the labels for one customer do not depend on the data for any other customer.
```
def partition_to_labels(partition_number, prediction_dates = ['MS', 'SMS'], churn_periods= [31, 14],
lead_times = [1, 1], prediction_windows = [1, 1]):
"""Make labels for all customers in one partition
Either for one month or twice a month
Params
--------
    partition_number (int): number of the partition
    prediction_dates (list of str): either 'MS' for monthly labels or
'SMS' for bimonthly labels
churn_periods(list of int): number of days with no active membership to be considered a churn
lead_times (list of int): lead times in number of periods
prediction_windows (list of int): prediction windows in number of periods
Returns
--------
None: saves the label dataframes with the appropriate name to the partition directory
"""
partition_dir = BASE_DIR + 'p' + str(partition_number)
# Read in data and filter anomalies
trans = pd.read_csv(f'{partition_dir}/transactions.csv',
parse_dates=['transaction_date', 'membership_expire_date'],
infer_datetime_format = True)
# Deal with data inconsistencies
rev = trans[(trans['membership_expire_date'] < trans['transaction_date']) |
((trans['is_cancel'] == 0) & (trans['membership_expire_date'] == trans['transaction_date']))]
rev_members = rev['msno'].unique()
# Remove data errors
trans = trans.loc[~trans['msno'].isin(rev_members)]
    # Create both sets of labels
for prediction_date, churn_days, lead_time, prediction_window in zip(prediction_dates, churn_periods, lead_times, prediction_windows):
cutoff_list = []
# Make label times for all customers
cutoff_list.append(make_label_times(trans, prediction_date = prediction_date,
churn_days = churn_days, lead_time = lead_time,
prediction_window = prediction_window))
# Turn into a dataframe
cutoff_times = pd.concat(cutoff_list)
cutoff_times = cutoff_times.drop_duplicates(subset = ['msno', 'cutoff_time'])
# Encode in order to write to s3
bytes_to_write = cutoff_times.to_csv(None, index = False).encode()
# Write cutoff times to S3
with fs.open(f'{partition_dir}/{prediction_date}-{churn_days}_labels.csv', 'wb') as f:
f.write(bytes_to_write)
partition_to_labels(1, prediction_dates = ['MS'], churn_periods = [31],
lead_times = [1], prediction_windows = [1])
label_times = pd.read_csv('s3://customer-churn-spark/p1/MS-31_labels.csv')
label_times.tail(10)
partition_to_labels(1, prediction_dates = ['SMS'], churn_periods = [14],
lead_times = [1], prediction_windows = [1])
label_times = pd.read_csv('s3://customer-churn-spark/p1/SMS-14_labels.csv')
label_times.head(10)
```
## Spark for Parallelization
The below code uses Spark to parallelize the label making. This particular implementation uses a single machine although the same idea can be extended to a cluster of machines.
```
import findspark
findspark.init('/usr/local/spark/')
import pyspark
conf = pyspark.SparkConf()
# Enable logging
conf.set('spark.eventLog.enabled', True);
conf.set('spark.eventLog.dir', '/data/churn/tmp/');
# Use all cores on a single machine
conf.set('spark.num.executors', 1)
conf.set('spark.executor.memory', '56g')
conf.set('spark.executor.cores', 15)
# Make sure to specify correct spark master ip
sc = pyspark.SparkContext(master = 'spark://ip-172-31-23-133.ec2.internal:7077',
appName = 'labeling', conf = conf)
sc
from timeit import default_timer as timer
# Parallelize making all labels in Spark
start = timer()
sc.parallelize(list(range(1000)), numSlices=1000).\
map(partition_to_labels).collect()
sc.stop()
end = timer()
```
While Spark is running, you can navigate to localhost:4040 to see the details of the particular job, or to localhost:8080 to see an overview of the cluster. This is useful for diagnosing the state of a Spark job.
```
print(f'{round(end - start)} seconds elapsed.')
labels = pd.read_csv(f's3://customer-churn-spark/p980/MS-31_labels.csv')
labels.tail(10)
labels = pd.read_csv(f's3://customer-churn-spark/p980/SMS-14_labels.csv')
labels.tail(10)
```
# Conclusions
In this notebook, we implemented prediction engineering for the customer churn use case. After defining the business need, we translated it into a task that can be solved with machine learning and created a set of label times. We saw how to define functions with parameters so we could solve multiple prediction problems without needing to re-write the entire code. Although we only worked through two problems, there are numerous others that could be solved with the same data and approach.
The label times contain cutoff times for a specific prediction problem along with the associated label. The label times can now be used to make features for each label by filtering the data to before the cutoff time. This ensures that any features made are valid and will automatically be taken care of in Featuretools.
The general procedure for making labels is:
1. Define the business requirement: predict customers who will churn during a specified period of time
2. Translate the business requirement into a machine learning problem: given historical customer data, build a model to predict which customers will churn depending on several parameters
3. Make labels along with cutoff times corresponding to the machine learning problem: develop functions that take in parameters so the same function can be used for multiple prediction problems.
4. Label all past historical data: parallelize operations by partitioning data into independent subsets
This approach can be extended to other problems. Although the exact syntax is specific to this use case, the overall approach is designed to be general purpose.
## Next Steps
With a complete set of label times, we can now make features for each label, using the cutoff times to ensure our features are valid. However, instead of the painstaking and error-prone process of making features by hand, we can use automated feature engineering in [Featuretools](https://github.com/Featuretools/featuretools) to automate this process. Featuretools will build hundreds of relevant features using only a few lines of code and will automatically filter the data to ensure that all of our features are valid. The feature engineering pipeline is developed in the `Feature Engineering` notebook.
## Reference
DataCamp course
## Course Description
* A typical organization loses an estimated 5% of its yearly revenue to fraud.
* Apply supervised learning algorithms to detect fraudulent behavior similar to past fraud, as well as unsupervised learning methods to discover new types of fraud activities.
* Deal with highly imbalanced datasets.
* The course provides a mix of technical and theoretical insights and shows you hands-on how to practically implement fraud detection models.
* Tips and advice from real-life experience to help you avoid common mistakes in fraud analytics.
* Examples of fraud: insurance fraud, credit card fraud, identity theft, money laundering, tax evasion, product warranty fraud, healthcare fraud.
## Introduction and preparing your data
* Typical challenges associated with fraud detection.
* Resample your data in a smart way, to tackle problems with imbalanced data.
### Checking the fraud to non-fraud ratio
* Fraud occurrences are fortunately an extreme minority in these transactions.
* However, Machine Learning algorithms usually work best when the different classes contained in the dataset are more or less equally present. If there are few cases of fraud, then there's little data to learn how to identify them. This is known as **class imbalance** (or skewed class), and it's one of the main challenges of fraud detection.
```
import pandas as pd
df = pd.read_csv("creditcard_sampledata_3.csv")
# This is different from the data in the course, but it will be corrected
# in the following cells.
occ = df['Class'].value_counts() #good for counting categorical data
print(occ)
print(occ / len(df.index))
```
### Plotting your data
Visualize the fraud to non-fraud ratio.
```
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv("creditcard_sampledata_3.csv")
#print(df.columns) #It is not df.colnames.
df = df.drop(['Unnamed: 0'],axis = 1)
# print(df.head())
y=df['Class'].values
X=df.drop(['Class'],axis = 1).values
def plot_data(X, y):
plt.scatter(X[y == 0, 0], X[y == 0, 1], label="Class #0", alpha=0.5, linewidth=0.15)
plt.scatter(X[y == 1, 0], X[y == 1, 1], label="Class #1", alpha=0.5, linewidth=0.15, c='r')
plt.legend()
return plt.show()
# X, y = prep_data(df) #original code
plot_data(X, y)
len(X[y==0,0])
```
### Applying SMOTE
* Re-balance the data using the Synthetic Minority Over-sampling Technique (SMOTE).
* Unlike ROS (random oversampling), SMOTE does not create exact copies of observations, but creates new, synthetic samples that are quite similar to the existing observations in the minority class (see the short comparison sketch after the code below).
* Visualize the result and compare it to the original data, such that we can see the effect of applying SMOTE very clearly.
```
import matplotlib.pyplot as plt
import pandas as pd
from imblearn.over_sampling import SMOTE
df = pd.read_csv("creditcard_sampledata_3.csv")
#print(df.columns) #It is not df.colnames.
df = df.drop(['Unnamed: 0'],axis = 1)
# print(df.head())
y=df['Class'].values
X=df.drop(['Class'],axis = 1).values
#my code above
# Recent versions of imbalanced-learn use SMOTE() and fit_resample
# (the course used SMOTE(kind='regular') and fit_sample)
method = SMOTE()
X_resampled, y_resampled = method.fit_resample(X, y)
plot_data(X_resampled, y_resampled)
print(X.shape)
print(y.shape)
```
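To make the contrast with plain random oversampling (ROS) concrete, here is a small, self-contained comparison on toy data (not the credit card set); it assumes a recent imbalanced-learn where resamplers expose `fit_resample`:
```
import numpy as np
from imblearn.over_sampling import RandomOverSampler, SMOTE

# Toy imbalanced data: 90 majority points, 10 minority points
rng = np.random.RandomState(0)
X_toy = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(3, 1, (10, 2))])
y_toy = np.array([0] * 90 + [1] * 10)

# ROS duplicates existing minority rows: the number of unique minority rows stays at 10
X_ros, y_ros = RandomOverSampler(random_state=0).fit_resample(X_toy, y_toy)
print(len(np.unique(X_ros[y_ros == 1], axis=0)))  # 10

# SMOTE interpolates new synthetic minority rows, so many more unique points appear
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_toy, y_toy)
print(len(np.unique(X_sm[y_sm == 1], axis=0)))    # much more than 10
```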
### Compare SMOTE to original data
* Compare those results of SMOTE to the original data, to get a good feeling for what has actually happened.
* Have a look at the value counts again of our old and new data, and let's plot the two scatter plots of the data side by side.
* Use the function compare_plot() (not defined here, but sketched below the figure), which takes the following arguments: X, y, X_resampled, y_resampled, method=''. The function plots the original data in a scatter plot, along with the resampled data side by side.
```
print(pd.value_counts(pd.Series(y)))
print(pd.value_counts(pd.Series(y_resampled)))
compare_plot(X, y, X_resampled, y_resampled, method='SMOTE')
# This function is not defined here, but the resulting picture is shown below.
# compare_plot can be implemented with matplotlib subplots; a sketch follows the figure.
```

### Exploring the traditional way to catch fraud
* Try finding fraud cases in our credit card dataset the "old way". First you'll define threshold values using common statistics, to split fraud and non-fraud. Then, use those thresholds on your features to detect fraud. This is common practice within fraud analytics teams.
* Statistical thresholds are often determined by looking at the mean values of observations.
* Check whether feature means differ between fraud and non-fraud cases. Then, use that information to create common sense thresholds.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from imblearn.over_sampling import SMOTE
df = pd.read_csv("creditcard_sampledata_3.csv")
#print(df.columns) #It is not df.colnames.
df = df.drop(['Unnamed: 0'],axis = 1)
#print(df.head())
y=df['Class'].values
X=df.drop(['Class'],axis = 1).values
#my code above
# Run a groupby command on our labels and obtain the mean for each feature
df.groupby('Class').mean()
# Implement a rule for stating which cases are flagged as fraud
df['flag_as_fraud'] = np.where(np.logical_and(df['V1'] < -3, df['V3'] < -5), 1, 0)
# Create a crosstab of flagged fraud cases versus the actual fraud cases
print(pd.crosstab(df.Class, df.flag_as_fraud, rownames=['Actual Fraud'], colnames=['Flagged Fraud']))
```
Not bad, with this rule, we detect 22 out of 50 fraud cases, but can't detect the other 28, and get 16 false positives. In the next exercise, we'll see how this measures up to a machine learning model.
### Using ML classification to catch fraud
* Use a simple machine learning model on our credit card data instead.
* Implement a Logistic Regression model.
```
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# Fit a logistic regression model to our data
model = LogisticRegression()
model.fit(X_train, y_train)
# Obtain model predictions
predicted = model.predict(X_test)
# Print the classifcation report and confusion matrix
print('Classification report:\n', classification_report(y_test, predicted))
conf_mat = confusion_matrix(y_true=y_test, y_pred=predicted)
print('Confusion matrix:\n', conf_mat)
```
* We are getting far fewer false positives, so that's an improvement.
* We're also catching a higher percentage of fraud cases, so that is better than before.
### Logistic regression combined with SMOTE
```
# This is the pipeline module we need for this from imblearn
from imblearn.pipeline import Pipeline
# Define which resampling method and which ML model to use in the pipeline
from imblearn.over_sampling import BorderlineSMOTE
resampling = BorderlineSMOTE(kind='borderline-2')  # recent imbalanced-learn replaces SMOTE(kind='borderline2')
model = LogisticRegression()
# Define the pipeline, tell it to combine SMOTE with the Logistic Regression model
pipeline = Pipeline([('SMOTE', resampling), ('Logistic Regression', model)])
```
### Using a pipeline
Treat the pipeline as if it were a single machine learning model. Our data X and y are already defined, and the pipeline is defined in the previous exercise.
```
# Split your data X and y, into a training and a test set and fit the pipeline onto the training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# Fit your pipeline onto your training set and obtain predictions by fitting the model onto the test data
pipeline.fit(X_train, y_train)
predicted = pipeline.predict(X_test)
# Obtain the results from the classification report and confusion matrix
print('Classifcation report:\n', classification_report(y_test, predicted))
conf_mat = confusion_matrix(y_true=y_test, y_pred=predicted)
print('Confusion matrix:\n', conf_mat)
```
* The SMOTE slightly improves our results. We now manage to find all cases of fraud, but we have a slightly higher number of false positives, albeit only 7 cases.
* Remember, not in all cases does resampling necessarily lead to better results. **When the fraud cases are very spread and scattered over the data, using SMOTE can introduce a bit of bias.** Nearest neighbors aren't necessarily also fraud cases, so the synthetic samples might 'confuse' the model slightly.
* In the next chapters, we'll learn how to also adjust our machine learning models to better detect the minority fraud cases.
## Fraud detection using labelled data
* Flag fraudulent transactions with supervised learning.
* Use classifiers, adjust them and compare them to find the most efficient fraud detection model.
### Natural hit rate
* Explore how prevalent fraud is in the dataset, to understand what the "natural accuracy" is, if we were to predict everything as non-fraud.
* It is important to understand which level of "accuracy" you need to "beat" in order to get a better prediction than by doing nothing.
* Create a random forest classifier for fraud detection. That will serve as the "baseline" model that you're going to try to improve in the upcoming exercises.
```
import matplotlib.pyplot as plt
import pandas as pd
from imblearn.over_sampling import SMOTE
df = pd.read_csv("creditcard_sampledata_2.csv")
#print(df.columns) #It is not df.colnames.
df = df.drop(['Unnamed: 0'],axis = 1)
# print(df.head())
y=df['Class'].values
X=df.drop(['Class'],axis = 1).values
#extra code above
# Count the total number of observations from the length of y
total_obs = len(y)
# Count the total number of non-fraudulent observations
non_fraud = [i for i in y if i == 0]
count_non_fraud = non_fraud.count(0)
# Calculate the percentage of non fraud observations in the dataset
percentage = (float(count_non_fraud)/float(total_obs)) * 100
# Print the percentage: this is our "natural accuracy" by doing nothing
print(percentage)
```
This tells us that by doing nothing, we would be correct in 95.9% of the cases. So now you understand that if we get an accuracy lower than this number, our model does not actually add any value in predicting how many cases are correct.
### Random Forest Classifier - part 1
```
print(X.shape)
print(y.shape)
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=5)
```
### Random Forest Classifier - part 2
See how our Random Forest model performs without doing anything special to it.
```
from sklearn.metrics import accuracy_score
model.fit(X_train, y_train)
predicted = model.predict(X_test)
print(accuracy_score(y_test, predicted))
```
### Performance metrics for the RF model
* In the previous exercises you obtained an accuracy score for your random forest model. This time, we know accuracy can be misleading in the case of fraud detection.
* With highly imbalanced fraud data, the AUROC curve is a more reliable performance metric, used to compare different classifiers. Moreover, the classification report tells you about the precision and recall of your model, whilst the confusion matrix actually shows how many fraud cases you can predict correctly. So let's get these performance metrics.
* Continue working on the same random forest model from the previous exercise. The model, defined as model = RandomForestClassifier(random_state=5) has been fitted to the training data already, and X_train, y_train, X_test, y_test are available.
```
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
predicted = model.predict(X_test)
probs = model.predict_proba(X_test)
# Print the ROC curve, classification report and confusion matrix
print(roc_auc_score(y_test, probs[:,1]))
print(classification_report(y_test, predicted))
print(confusion_matrix(y_test, predicted))
```
You have now obtained more meaningful performance metrics that tell us how well the model performs, given the highly imbalanced data that you're working with. The model predicts 76 cases of fraud, out of which 73 are actual fraud, so you have only 3 false positives. This is really good, and as a result you have a very high precision score. However, you do miss 18 cases of actual fraud, so recall is not as good as precision. Let's try to improve that in the following exercises.
### Plotting the Precision Recall Curve
* Plot a Precision-Recall curve, to investigate the trade-off between the two in your model. In this curve Precision and Recall are inversely related; as Precision increases, Recall falls and vice-versa. A balance between these two needs to be achieved in your model, otherwise you might end up with many false positives, or not enough actual fraud cases caught. To achieve this and to compare performance, the precision-recall curves come in handy.
* The Random Forest Classifier is available as model, and the predictions as predicted. You can simply obtain the average precision score and the PR curve from the sklearn package.
* The function plot_pr_curve() plots the results; a sketch of this helper is given below the figure.
```
from sklearn.metrics import average_precision_score, precision_recall_curve
# Calculate average precision
average_precision = average_precision_score(y_test, predicted)
# Obtain precision and recall
precision, recall, _ = precision_recall_curve(y_test, predicted)
# Plot the recall precision tradeoff
plot_pr_curve(recall, precision, average_precision)
# This helper is not defined in these notes; a sketch is given below the figure.
```

### Model adjustments
* A simple way to adjust the random forest model to deal with highly imbalanced fraud data is to use the **class_weight option** when defining your sklearn model. However, as you will see, it is a bit of a blunt force mechanism and might not work for your very special case.
* Explore the class_weight="balanced_subsample" mode of the Random Forest model from the earlier exercise.
```
model = RandomForestClassifier(class_weight='balanced_subsample', random_state=5)
model.fit(X_train, y_train)
# Obtain the predicted values and probabilities from the model
predicted = model.predict(X_test)
probs = model.predict_proba(X_test)
print(roc_auc_score(y_test, probs[:,1]))
print(classification_report(y_test, predicted))
print(confusion_matrix(y_test, predicted))
```
* The model results don't improve drastically. We now have 3 fewer false positives, but 19 instead of 18 false negatives, i.e. cases of fraud we are not catching. If we mostly care about catching fraud, and not so much about the false positives, this does not actually improve our model at all, albeit it is a simple option to try.
* In the next exercises we will see how to more smartly tweak your model to focus on reducing false negatives and catch more fraud.
### Adjusting your Random Forest to fraud detection
* Explore the options for the random forest classifier, as we'll assign weights and tweak the shape of the decision trees in the forest.
* Define weights manually, to be able to off-set that imbalance slightly. In our case we have 300 fraud to 7000 non-fraud cases, so by setting the weight ratio to 1:12, we get to a 1/3 fraud to 2/3 non-fraud ratio, which is good enough for training the model on.
```
# Change the model options
model = RandomForestClassifier(bootstrap=True, class_weight={0:1, 1:12}, criterion='entropy',
max_depth=10,
min_samples_leaf=10,
# Change the number of trees to use
n_estimators=20, n_jobs=-1, random_state=5)
# Run the function get_model_results
# get_model_results(X_train, y_train, X_test, y_test, model)
# This function fits the model to your training data, predicts, and obtains performance metrics
# similar to the steps in the previous exercises. A sketch of it is given below.
```
* By smartly defining more options in the model, you can obtain better predictions. You have effectively reduced the number of false negatives, i.e. you are catching more cases of fraud, whilst keeping the number of false positives low.
* In this exercise you've manually changed the options of the model. There is a smarter way of doing it, by using GridSearchCV, which you'll see in the next exercise!
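get_model_results() is referred to repeatedly but never shown in these notes; a minimal sketch consistent with its description (fit, predict, and print the same metrics used in the earlier exercises) could be:
```
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score

def get_model_results(X_train, y_train, X_test, y_test, model):
    """Fit the model, predict on the test set, and print performance metrics."""
    model.fit(X_train, y_train)
    predicted = model.predict(X_test)
    # Not every model (e.g. a hard-voting classifier) exposes predict_proba
    if hasattr(model, 'predict_proba'):
        probs = model.predict_proba(X_test)
        print(roc_auc_score(y_test, probs[:, 1]))
    print(classification_report(y_test, predicted))
    print(confusion_matrix(y_test, predicted))
```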
### GridSearchCV to find optimal parameters
With GridSearchCV you can define which performance metric to score the options on. Since for fraud detection we are mostly interested in catching as many fraud cases as possible, you can optimize your model settings to get the **best possible Recall score.** If you also cared about reducing the number of false positives, you could optimize on F1-score, this gives you that nice Precision-Recall trade-off.
```
from sklearn.model_selection import GridSearchCV
# Define the parameter sets to test
param_grid = {'n_estimators': [1, 30], 'max_features': ['auto', 'log2'], 'max_depth': [4, 8], 'criterion': ['gini', 'entropy']
}
model = RandomForestClassifier(random_state=5)
CV_model = GridSearchCV(estimator=model, param_grid=param_grid, cv=5, scoring='recall', n_jobs=-1)
CV_model.fit(X_train, y_train)
CV_model.best_params_
```
### Model results using GridSearchCV
* You discovered that the best parameters for your model are that the split criterion should be set to 'gini', the number of estimators (trees) should be 30, the maximum depth of the model should be 8 and the maximum features should be set to "log2".
* Let's give this a try and see how well our model performs. You can use the get_model_results() function again to save time.
```
# Input the optimal parameters in the model
model = RandomForestClassifier(class_weight={0:1,1:12}, criterion='gini',
max_depth=8, max_features='log2', min_samples_leaf=10, n_estimators=30, n_jobs=-1, random_state=5)
# Get results from your model
# get_model_results(X_train, y_train, X_test, y_test, model)
```
```
<script.py> output:
              precision    recall  f1-score   support

         0.0       0.99      1.00      1.00      2099
         1.0       0.95      0.84      0.89        91

   micro avg       0.99      0.99      0.99      2190
   macro avg       0.97      0.92      0.94      2190
weighted avg       0.99      0.99      0.99      2190

[[2095    4]
 [  15   76]]
```
* The number of false negatives has now been reduced slightly further, which means we are catching more cases of fraud.
* However, you see that the number of false positives actually went up. That is the Precision-Recall trade-off in action.
* To decide which final model is best, you need to take into account how bad it is not to catch fraudsters, versus how many false positives the fraud analytics team can deal with. Ultimately, this final decision should be made by you and the fraud team together.
### Logistic Regression
* Combine three algorithms into one model with the VotingClassifier. This allows us to benefit from the different aspects from all models, and hopefully improve overall performance and detect more fraud. The first model, the Logistic Regression, has a slightly higher recall score than our optimal Random Forest model, but gives a lot more false positives.
* You'll also add a Decision Tree with balanced weights to it. The data is already split into a training and test set, i.e. X_train, y_train, X_test, y_test are available.
* In order to understand how the Voting Classifier can potentially improve your original model, you should check the standalone results of the Logistic Regression model first.
```
# Define the Logistic Regression model with weights
model = LogisticRegression(class_weight={0:1, 1:15}, random_state=5)
# Get the model results
# get_model_results(X_train, y_train, X_test, y_test, model)
```
```
              precision    recall  f1-score   support

         0.0       0.99      0.98      0.99      2099
         1.0       0.63      0.88      0.73        91

   micro avg       0.97      0.97      0.97      2190
   macro avg       0.81      0.93      0.86      2190
weighted avg       0.98      0.97      0.98      2190

[[2052   47]
 [  11   80]]
```
The Logistic Regression has quite different performance from the Random Forest: more false positives, but also a better Recall. It will therefore be a useful addition to the Random Forest in an ensemble model.
### Voting Classifier
* Combine three machine learning models into one, to improve our Random Forest fraud detection model from before. You'll combine our usual Random Forest model, with the Logistic Regression from the previous exercise, with a simple Decision Tree.
* Use the short cut get_model_results() to see the immediate result of the ensemble model.
```
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
# Define the three classifiers to use in the ensemble
clf1 = LogisticRegression(class_weight={0:1, 1:15}, random_state=5)
clf2 = RandomForestClassifier(class_weight={0:1, 1:12}, criterion='gini', max_depth=8, max_features='log2',
min_samples_leaf=10, n_estimators=30, n_jobs=-1, random_state=5)
clf3 = DecisionTreeClassifier(random_state=5, class_weight="balanced")
# Combine the classifiers in the ensemble model
ensemble_model = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('dt', clf3)], voting='hard')
# Get the results
# get_model_results(X_train, y_train, X_test, y_test, ensemble_model)
```
```
<script.py> output:
              precision    recall  f1-score   support

         0.0       0.99      1.00      0.99      2099
         1.0       0.90      0.86      0.88        91

   micro avg       0.99      0.99      0.99      2190
   macro avg       0.95      0.93      0.94      2190
weighted avg       0.99      0.99      0.99      2190

[[2090    9]
 [  13   78]]
```
* By combining the classifiers, you can take the best of multiple models. You've increased the cases of fraud you are catching from 76 to 78, and you only have 5 extra false positives in return.
* If you do care about catching as many fraud cases as you can, whilst keeping the false positives low, this is a pretty good trade-off.
* The Logistic Regression as a standalone was quite bad in terms of false positives, and the Random Forest was worse in terms of false negatives. By combining these together you indeed managed to improve performance.
### Adjust weights within the Voting Classifier
* The Voting Classifier allows you to improve your fraud detection performance, by combining good aspects from multiple models. Now let's try to adjust the weights we give to these models. By increasing or decreasing weights you can play with how much emphasis you give to a particular model relative to the rest. This comes in handy when a certain model has overall better performance than the rest, but you still want to combine aspects of the others to further improve your results.
* The data is already split into a training and test set, and clf1, clf2 and clf3 are available and defined as before, i.e. they are the Logistic Regression, the Random Forest model and the Decision Tree respectively.
```
ensemble_model = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='soft', weights=[1, 4, 1], flatten_transform=True)
# Get results
# get_model_results(X_train, y_train, X_test, y_test, ensemble_model)
```
```
<script.py> output:
              precision    recall  f1-score   support

         0.0       0.99      1.00      1.00      2099
         1.0       0.94      0.85      0.89        91

   micro avg       0.99      0.99      0.99      2190
   macro avg       0.97      0.92      0.94      2190
weighted avg       0.99      0.99      0.99      2190

[[2094    5]
 [  14   77]]
```
The weight option allows you to play with the individual models to get the best final mix for your fraud detection model. Now that you have finalized fraud detection with supervised learning, let's have a look at how fraud detection can be done when you don't have any labels to train on.
## Fraud detection using unlabelled data
* Use unsupervised learning techniques to detect fraud.
* Segment customers, use K-means clustering and other clustering algorithms to find suspicious occurrences in your data.
### Exploring your data
* Look at bank payment transaction data.
* Distinguish normal from abnormal (and thus potentially fraudulent) behavior. As a fraud analyst, you need a good understanding of the data and its characteristics in order to know what is "normal".
```
import pandas as pd
df = pd.read_csv('banksim.csv')
df = df.drop(['Unnamed: 0'],axis = 1)
print(df.head())
print(df.groupby('category').mean())
```
Even from a simple group by, we can see that the majority of fraud is observed in travel, leisure and sports related transactions.
### Customer segmentation
* Check whether there are any obvious patterns for the clients in this data, and thus whether you need to segment your data into groups, or whether the data is rather homogeneous.
* There is not a lot of client information available; however, there is data on **age**, so let's see whether there is any significant difference between the behavior of age groups.
```
# Group by age groups and get the mean
print(df.groupby('age').mean())
# Count the values of the observations in each age group
print(df['age'].value_counts())
```
* Does it make sense to divide your data into age segments before running a fraud detection algorithm?
* No: the largest age groups are relatively similar. As you can see, the average amount spent as well as the fraud occurrence is rather similar across groups. Age group '0' stands out, but since there are only 40 cases, it does not make sense to split these out into a separate group and run a separate model on them.
### Using statistics to define normal behavior
* In the previous exercises we saw that fraud is more prevalent in certain transaction categories, but that there is no obvious way to segment our data into for example age groups.
* This time, let's investigate the average amounts spent in normal transactions versus fraud transactions. This gives you an idea of how fraudulent transactions differ structurally from normal transactions.
```
# Create two dataframes with fraud and non-fraud data
df_fraud = df.loc[df.fraud == 1]
df_non_fraud = df.loc[df.fraud == 0]
# Plot histograms of the amounts in fraud and non-fraud data
plt.hist(df_fraud.amount, alpha=0.5, label='fraud')
plt.hist(df_non_fraud.amount, alpha=0.5, label='nonfraud')
plt.legend()
plt.show()
```
* As the number of fraud observations is much smaller, it is difficult to see the full distribution.
* Nonetheless, you can see that the fraudulent transactions tend to be on the larger side relative to normal observations.
* This helps us later in detecting fraud from non-fraud. In the next chapter you're going to implement a clustering model to distinguish between normal and abnormal transactions, when the fraud labels are no longer available.
### Scaling the data
For ML algorithms using distance-based metrics, it is crucial to always scale your data, as features on different scales will distort your results. K-means uses the Euclidean distance to assess distance to the cluster centroids, so you first need to scale your data before continuing to implement the algorithm.
```
import pandas as pd
df = pd.read_csv('banksim_adj.csv')
# Drop the index column and the fraud label so the label does not leak into the features
X = df.drop(['Unnamed: 0', 'fraud'], axis=1).values.astype(float)
y = df['fraud'].values
print(df.head())
# extra code above; the data might not be the same as in the DataCamp course
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
```
### K-means clustering
* For fraud detection, K-means clustering is straightforward to implement and relatively powerful in predicting suspicious cases. It is a good algorithm to start with when working on fraud detection problems.
* However, fraud data is oftentimes very large, especially when you are working with transaction data. MiniBatch K-means is an efficient way to implement K-means on a large dataset, which you will use in this exercise.
```
# Import MiniBatchKmeans
from sklearn.cluster import MiniBatchKMeans
kmeans = MiniBatchKMeans(n_clusters=8, random_state=0)
kmeans.fit(X_scaled)
```
### Elbow method
* It is important to get the number of clusters right, especially when you want to **use the outliers of those clusters as fraud predictions**.
* Apply the Elbow method and see what the optimal number of clusters should be based on this method.
```
clustno = range(1, 10)
kmeans = [MiniBatchKMeans(n_clusters=i) for i in clustno]
score = [kmeans[i].fit(X_scaled).score(X_scaled) for i in range(len(kmeans))]
plt.plot(clustno, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
```
The optimal number of clusters should probably be at around 3 clusters, as that is where the elbow is in the curve.
### Detecting outliers
* Use the K-means algorithm to predict fraud, and compare those predictions to the actual labels that are saved, to sense check our results.
* The fraudulent transactions are typically flagged as the observations that are furthest away from the cluster centroid.
* You will determine the cut-off for this in this exercise.
```
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.3, random_state=0)
kmeans = MiniBatchKMeans(n_clusters=3, random_state=42).fit(X_train)
X_test_clusters = kmeans.predict(X_test)
X_test_clusters_centers = kmeans.cluster_centers_
dist = [np.linalg.norm(x-y) for x, y in zip(X_test, X_test_clusters_centers[X_test_clusters])]
# np.linalg.norm calculates the norm of a vector or a matrix.
# Create fraud predictions based on outliers on clusters
km_y_pred = np.array(dist)
km_y_pred[dist >= np.percentile(dist, 95)] = 1
km_y_pred[dist < np.percentile(dist, 95)] = 0
print(len(X_test))
print(len(X_test_clusters))
print(X_test_clusters)
print('--------------------')
print(X_test_clusters_centers)
print(len(dist))
```
### Checking model results
In the previous exercise you've flagged as fraud all observations that are in the top 5th percentile of distance from their cluster centroid, i.e. the very outliers of the three clusters. For this exercise you have the scaled data and labels already split into training and test sets, so y_test is available. The predictions from the previous exercise, km_y_pred, are also available. Let's create some performance metrics and see how well you did.
```
# Obtain the ROC score
print(roc_auc_score(y_test, km_y_pred))
# output: 0.8197704982668266
# Create a confusion matrix
km_cm = confusion_matrix(y_test, km_y_pred)
# Plot the confusion matrix in a figure to visualize results
# plot_confusion_matrix(km_cm)  # helper not defined here; a sketch follows the figure
```

Question
If you were to decrease the percentile used as a cutoff point in the previous exercise to 93% instead of 95%, what would that do to your prediction results?
The number of fraud cases caught increases, but false positives also increase.
### DB scan
* Explore using a density based clustering method (DBSCAN) to detect fraud. The advantage of DBSCAN is that you do not need to define the number of clusters beforehand. Also, DBSCAN can handle weirdly shaped data (i.e. non-convex) much better than K-means can.
* This time, you are **not going to take the outliers of the clusters and use that for fraud, but take the smallest clusters in the data and label those as fraud**. You again have the scaled dataset, i.e. X_scaled available.
```
from sklearn.cluster import DBSCAN
# Initialize and fit the DBscan model
db = DBSCAN(eps=0.9, min_samples=10, n_jobs=-1).fit(X_scaled)
# Obtain the predicted labels and calculate number of clusters
pred_labels = db.labels_
n_clusters = len(set(pred_labels)) - (1 if -1 in pred_labels else 0)
# # Print performance metrics for DBscan
# print('Estimated number of clusters: %d' % n_clusters)
# print("Homogeneity: %0.3f" % homogeneity_score(labels, pred_labels))
# print("Silhouette Coefficient: %0.3f" % silhouette_score(X_scaled, pred_labels))
```
output:
```
Estimated number of clusters: 18
Homogeneity: 0.633
Silhouette Coefficient: 0.707
```
The number of clusters is much higher than with K-means. For fraud detection this is for now OK, as we are only interested in the smallest clusters, since those are considered as abnormal. Now let's have a look at those clusters and decide which one to flag as fraud.
### Assessing smallest clusters
* Check the clusters that came out of DBscan, and flag certain clusters as fraud:
* Figure out how big the clusters are, and filter out the smallest. Then take the smallest ones and flag those as fraud.
* Check with the original labels whether this does actually do a good job in detecting fraud.
Available are the DBscan model predictions, so n_clusters is available as well as the cluster labels, which are saved under pred_labels.
```
counts = np.bincount(pred_labels[pred_labels >= 0])
print(counts)
```
output:
```
[3252 145 2714 55 174 119 122 98 54 15 76 15 43 25
 51 47 42 15 25 20 19 10]
```
```
# Count observations in each cluster number
counts = np.bincount(pred_labels[pred_labels>=0])
# Sort the sample counts of the clusters and take the top 3 smallest clusters
smallest_clusters = np.argsort(counts)[:3]
# Print the results
print("The smallest clusters are clusters:")
print(smallest_clusters)
```
output:
```
The smallest clusters are clusters:
[21 17 9]
```
```
# Count observations in each cluster number
counts = np.bincount(pred_labels[pred_labels>=0])
# Sort the sample counts of the clusters and take the top 3 smallest clusters
smallest_clusters = np.argsort(counts)[:3]
# Print the counts of the smallest clusters only
print("Their counts are:")
print(counts[smallest_clusters])
```
```
<script.py> output:
Their counts are:
[10 15 15]
```
So now we know which smallest clusters you could flag as fraud. If you were to take more of the smallest clusters, you cast your net wider and catch more fraud, but most likely also more false positives. It is up to the fraud analyst to find the right amount of cases to flag and to investigate. In the next exercise you'll check the results with the actual labels.
### Checking results
In this exercise you're going to check the results of your DBscan fraud detection model. In reality, you often don't have reliable labels, and this is where a fraud analyst can help you validate the results. He/she can check your results and see whether the cases you flagged are indeed suspicious. You can also check historically known cases of fraud and see whether your model flags them.
In this case, you'll use the fraud labels to check your model results. The predicted cluster numbers are available under pred_labels, as well as the original fraud labels under labels.
```
# Create a dataframe of the predicted cluster numbers and fraud labels
df = pd.DataFrame({'clusternr':pred_labels,'fraud':labels})
# Create a condition flagging fraud for the smallest clusters
df['predicted_fraud'] = np.where((df['clusternr']==21) | (df['clusternr']==17) | (df['clusternr']==9), 1, 0)
# Run a crosstab on the results
print(pd.crosstab(df.fraud, df.predicted_fraud, rownames=['Actual Fraud'], colnames=['Flagged Fraud']))
```
output:
```
Flagged Fraud     0   1
Actual Fraud
0              6973  16
1               176  24
```
How does this compare to the K-means model?
* The good thing is: out of all flagged cases, roughly 2/3 are actually fraud! Since you only take the three smallest clusters, by definition you flag fewer cases of fraud, so you catch less fraud but also have fewer false positives. However, you are missing quite a lot of fraud cases.
* Increasing the number of small clusters you flag could improve that, at the cost of more false positives, of course.
## Fraud detection using text
Use text data, text mining and topic modeling to detect fraudulent behavior.
### Word search with dataframes
* Work with text data, containing emails from Enron employees.
* Using string operations on dataframes, you can easily sift through messy email data and create flags based on word-hits.
```
import pandas as pd
df = pd.read_csv('enron_emails_clean.csv',index_col = 0)
# Find all cleaned emails that contain 'sell enron stock'
mask = df['clean_content'].str.contains('sell enron stock', na=False)
# Select the data from df that contain the searched for words
print(df.loc[mask])
```
### Using list of terms
* Search on more than one term.
* Create a full "fraud dictionary" of terms that could potentially flag fraudulent clients and/or transactions. Fraud analysts often will have an idea what should be in such a dictionary. In this exercise you're going to flag a multitude of terms, and in the next exercise you'll create a new flag variable out of it. The 'flag' can be used either directly in a machine learning model as a feature, or as an additional filter on top of your machine learning model results.
```
# Create a list of terms to search for
searchfor = ['enron stock', 'sell stock', 'stock bonus', 'sell enron stock']
# Filter cleaned emails on searchfor list and select from df
filtered_emails = df.loc[df['clean_content'].str.contains('|'.join(searchfor), na=False)]
# print(filtered_emails)
```
### Creating a flag
This time you are going to create an actual flag variable that gives a 1 when the emails get a hit on the search terms of interest, and 0 otherwise. This is the last step you need to make in order to actually use the text data content as a feature in a machine learning model, or as an actual flag on top of model results. You can continue working with the dataframe df containing the emails, and the searchfor list is the one defined in the last exercise.
```
import numpy as np
# Create flag variable where the emails match the searchfor terms
df['flag'] = np.where((df['clean_content'].str.contains('|'.join(searchfor)) == True), 1, 0)
# Count the values of the flag variable
count = df['flag'].value_counts()
print(count)
```
You have now managed to search for a list of strings in several lines of text data. These skills come in handy when you want to flag certain words based on what you discovered in your topic model, or when you know beforehand what you want to search for. In the next exercises you're going to learn how to clean text data and to create your own topic model to further look for indications of fraud in your text data.
### Removing stopwords
In the following exercises you're going to clean the Enron emails so that the data can be used in a topic model. Text cleaning can be challenging, so you'll learn some steps to do this well. The dataframe containing the emails, df, is available. As a first step you need to define the list of stopwords and punctuation characters that will be removed from the text data in the next exercise. Let's give it a try.
```
# Import nltk packages and string
from nltk.corpus import stopwords
import string
# Define stopwords to exclude
stop = set(stopwords.words('english'))
# stop.update(("to","cc","subject","http","from","sent", "ect", "u", "fwd", "www", "com"))
# Define punctuation characters to exclude
exclude = set(string.punctuation)
```
The following shows the contents of `stop` (truncated). Note that `stop = set(stopwords('english'))` does not run, because `stopwords` is a corpus reader rather than a function; the set must be built with `stopwords.words('english')` as in the cell above (and `nltk.download('stopwords')` may be needed the first time).
{'a', 'about', 'above', 'after', 'again', 'against', 'ain', 'all', 'am',
...
'y', 'you', "you'd", "you'll", "you're", "you've", 'your', 'yours', 'yourself', 'yourselves'}
### Cleaning text data
Now that you've defined the stopwords and punctuation characters, let's use them to clean the Enron emails in the dataframe df further. The collections of stopwords and punctuation are available under stop and exclude. There are a few more steps to take before you have clean data, such as lemmatization of words and stemming of verbs. The verbs in the email data are already stemmed, and the lemmatization is set up for you in this exercise.
```
# Import the lemmatizer from nltk
from nltk.stem.wordnet import WordNetLemmatizer
lemma = WordNetLemmatizer()
# Define word cleaning function
def clean(text, stop):
text = text.rstrip()
# Remove stopwords
stop_free = " ".join([word for word in text.lower().split() if ((word not in stop) and (not word.isdigit()))])
# Remove punctuations
punc_free = ''.join(word for word in stop_free if word not in exclude)
# Lemmatize all words
normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
return normalized
# Clean the emails in df and print results
text_clean=[]
for text in df['clean_content']:
text_clean.append(clean(text, stop).split())
print(text_clean)
```
You have now cleaned your data with all the necessary steps: splitting the text into words, removing stopwords and punctuation, and lemmatizing the words. You are ready to run a topic model on this data. In the following exercises you're going to explore how to do that.
### Create dictionary and corpus
In order to run an LDA topic model, you first need to define your dictionary and corpus, as those go into the model. You're going to continue working with the cleaned text data from the previous exercises. That means that text_clean is already available for you, and you'll use it to create your dictionary and corpus.
This exercise will take a little longer to execute than usual.
```
# Import the packages
import gensim
from gensim import corpora
# Define the dictionary
dictionary = corpora.Dictionary(text_clean)
# Define the corpus
corpus = [dictionary.doc2bow(text) for text in text_clean]
# Print corpus and dictionary
print(dictionary)
print(corpus)
```
output (truncated):
Dictionary(8948 unique tokens: ['conducted', 'read', 'wil', 'daniel', 'piazze']...)
[[(0, 1), (1, 2), (2, 1), (3, 1), (4, 2), (5, 1), (6, 2), (7, 1), (8, 1), (9, 1), (10, 5), (11, 2), (12, 1), (13, 1), (14, 1), (15, 1), (16, 1), (17, 1), (18, 1), (19, 1), (20, 1), (21, 1), (22, 1), (23, 1), (24, 1), ...], ...]
Note that doc2bow stands for "document to bag-of-words": it converts each document into a list of (token id, count) pairs.
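As a tiny standalone illustration of what `doc2bow` produces (a toy example, not part of the course code):
```
from gensim import corpora

# Build a dictionary from a single toy document and convert it to bag-of-words
toy_docs = [["enron", "stock", "stock", "sell"]]
toy_dict = corpora.Dictionary(toy_docs)
print(toy_dict.token2id)              # token -> integer id mapping
print(toy_dict.doc2bow(toy_docs[0]))  # list of (token_id, count) pairs; "stock" appears twice
```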
### LDA model (here LDA means Latent Dirichlet Allocation, not linear discriminant analysis)
Now it's time to build the LDA model. Using the dictionary and corpus, you are ready to discover which topics are present in the Enron emails. With a quick print of the words assigned to each topic, you can do a first exploration of whether any obvious topics jump out. Be mindful that the topic model is computationally heavy, so it will take a while to run. Let's give it a try!
```
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=5, id2word=dictionary, passes=5)
# Save the topics and top 5 words
topics = ldamodel.print_topics(num_words=5)
# Print the results
for topic in topics:
print(topic)
```
`(0, '0.024*"enron" + 0.015*"ect" + 0.011*"com" + 0.007*"hou" + 0.005*"company"')
(1, '0.032*"enron" + 0.011*"com" + 0.009*"diabetes" + 0.008*"message" + 0.006*"please"')
(2, '0.031*"enron" + 0.011*"company" + 0.010*"said" + 0.007*"mr" + 0.005*"partnership"')
(3, '0.021*"enron" + 0.012*"employee" + 0.010*"company" + 0.009*"million" + 0.009*"com"')
(4, '0.040*"error" + 0.021*"database" + 0.018*"borland" + 0.018*"engine" + 0.018*"initialize"')
`
You have now successfully created your first topic model on the Enron email data. However, the printed words don't really give you enough information to find a topic that might lead you to signs of fraud. You'll therefore need to inspect the model results more closely in order to detect anything that can be related to fraud in your data.
Below are visualisation results from the pyLDAvis library. Have a look at topics 1 and 3 from the LDA model on the Enron email data. Which one would you research further for fraud detection purposes, and why?

Topic 1 seems to discuss the employee share option program and points to internal conversation (with words like "please", "may", "know"), so it is more likely to be related to the internal accounting fraud and to trading stock with insider knowledge. Topic 3 seems more related to general news around Enron.
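For reference, a visualisation like the one shown above can be generated with pyLDAvis; a minimal sketch, assuming a recent pyLDAvis where the gensim helper lives in `pyLDAvis.gensim_models` (older releases exposed it as `pyLDAvis.gensim`), and that `ldamodel`, `corpus`, and `dictionary` from the previous cells are in scope:
```
import pyLDAvis
import pyLDAvis.gensim_models

# Prepare the interactive topic map from the fitted model, corpus and dictionary
lda_display = pyLDAvis.gensim_models.prepare(ldamodel, corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display)  # renders the interactive visualisation in a notebook
```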
### Finding fraudsters based on topic
In this exercise you're going to link the results from the topic model back to your original data. You have now learned that you want to flag everything related to topic 3. As you will see, this is actually not that straightforward. You'll be given the function get_topic_details(), which takes the arguments ldamodel and corpus and retrieves the details of the topics for each line of text. With that function, you can append the results back to your original data. If you want to learn in more detail how to work with the model results, which is beyond the scope of this course, you're highly encouraged to read this article: https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/.
Available for you are the dictionary and corpus, the text data text_clean as well as your model results ldamodel. Also defined is get_topic_details().
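The course provides its own `get_topic_details()`; purely as an illustration, here is a minimal hypothetical sketch of what such a helper might look like with gensim's API (column names chosen to match the `Dominant_Topic` usage below):
```
import pandas as pd

def get_topic_details(ldamodel, corpus):
    """Return the dominant topic, its probability and its keywords for each document."""
    rows = []
    for bow in corpus:
        # Topic mixture for this document, sorted so the dominant topic comes first
        doc_topics = sorted(ldamodel.get_document_topics(bow), key=lambda t: t[1], reverse=True)
        dominant_topic, prob = doc_topics[0]
        keywords = ", ".join(word for word, _ in ldamodel.show_topic(dominant_topic))
        rows.append((dominant_topic, round(prob, 4), keywords))
    return pd.DataFrame(rows, columns=['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords'])
```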
```
# Run get_topic_details function and check the results
print(get_topic_details(ldamodel, corpus))
# Add original text to topic details in a dataframe
contents = pd.DataFrame({'Original text': text_clean})
topic_details = pd.concat([get_topic_details(ldamodel, corpus), contents], axis=1)
topic_details.head()
# Create flag for text highest associated with topic 3
topic_details['flag'] = np.where((topic_details['Dominant_Topic'] == 3.0), 1, 0)
print(topic_details.head())
```
You have now flagged all data that is most strongly associated with topic 3, which seems to cover internal conversation about Enron stock options. You are a true detective. With these exercises you have demonstrated that text mining and topic modeling can be a powerful tool for fraud detection.
### Summary
* Many types of machine learning algorithms can be applied to anomaly and fraud detection:
* Supervised learning, such as classification algorithms and neural networks.
* Unsupervised learning, such as clustering algorithms.
* Linear or nonlinear dimension reduction techniques, used directly for anomaly detection or combined with other supervised/unsupervised learning algorithms.
* Natural language processing (text mining and topic modeling).
* Directly fitting a Gaussian distribution (or other distributions) and flagging outliers, as sketched below.
* Network analysis for fraud or anomaly detection.
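As a small illustration of the "fit a distribution and flag outliers" idea, a minimal sketch on synthetic data (the data and the 0.5% threshold are assumptions for illustration, not course code):
```
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Mostly "normal" transaction amounts plus a handful of anomalous ones
amounts = np.concatenate([rng.normal(100, 15, 1000), rng.normal(400, 10, 5)])

# Fit a Gaussian and flag the points with the lowest likelihood under it
mu, sigma = amounts.mean(), amounts.std()
densities = norm.pdf(amounts, loc=mu, scale=sigma)
flagged = densities < np.quantile(densities, 0.005)
print(amounts[flagged])  # the extreme amounts end up flagged
```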
|
github_jupyter
|
<a href="https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/profiling_tpus_in_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2018 The TensorFlow Hub Authors.
Copyright 2019-2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Profiling TPUs in Colab <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a>
Adapted from [TPU colab example](https://colab.sandbox.google.com/notebooks/tpu.ipynb).
## Overview
This example works through training a model to classify images of
flowers on Google's lightning-fast Cloud TPUs. Our model takes as input a photo of a flower and returns whether it is a daisy, dandelion, rose, sunflower, or tulip. A key objective of this colab is to show you how to set up and run TensorBoard, the program used for visualizing and analyzing program performance on Cloud TPU.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select **File > View on GitHub**.
## Instructions
<h3><a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a> Train on TPU </h3>
* Create a Cloud Storage bucket for your TensorBoard logs at http://console.cloud.google.com/storage. Give yourself Storage Legacy Bucket Owner permission on the bucket.
You will need to provide the bucket name when launching TensorBoard in the **Training** section.
Note: User input is required when launching and viewing TensorBoard, so do not use **Runtime > Run all** to run through the entire colab.
## Authentication for connecting to GCS bucket for logging.
```
import os
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
# Authenticates the Colab machine and also the TPU using your
# credentials so that they can access your private GCS buckets.
auth.authenticate_user()
```
## Updating tensorboard_plugin_profile
```
!pip install -U tensorboard_plugin_profile==2.3.0
```
## Enabling and testing the TPU
First, you'll need to enable TPUs for the notebook:
- Navigate to Edit→Notebook Settings
- select TPU from the Hardware Accelerator drop-down
Next, we'll check that we can connect to the TPU:
```
%tensorflow_version 2.x
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)
import re
import numpy as np
from matplotlib import pyplot as plt
```
## Input data
Our input data is stored on Google Cloud Storage. To more fully use the parallelism TPUs offer us, and to avoid bottlenecking on data transfer, we've stored our input data in TFRecord files, 230 images per file.
Below, we make heavy use of `tf.data.experimental.AUTOTUNE` to optimize different parts of input loading.
All of these techniques are a bit overkill for our (small) dataset, but demonstrate best practices for using TPUs.
```
AUTO = tf.data.experimental.AUTOTUNE
IMAGE_SIZE = [331, 331]
batch_size = 16 * tpu_strategy.num_replicas_in_sync
gcs_pattern = 'gs://flowers-public/tfrecords-jpeg-331x331/*.tfrec'
validation_split = 0.19
filenames = tf.io.gfile.glob(gcs_pattern)
split = len(filenames) - int(len(filenames) * validation_split)
train_fns = filenames[:split]
validation_fns = filenames[split:]
def parse_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
decoded = tf.image.decode_jpeg(example['image'], channels=3)
normalized = tf.cast(decoded, tf.float32) / 255.0 # convert each 0-255 value to floats in [0, 1] range
image_tensor = tf.reshape(normalized, [*IMAGE_SIZE, 3])
one_hot_class = tf.reshape(tf.sparse.to_dense(example['one_hot_class']), [5])
return image_tensor, one_hot_class
def load_dataset(filenames):
# Read from TFRecords. For optimal performance, we interleave reads from multiple files.
records = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO)
return records.map(parse_tfrecord, num_parallel_calls=AUTO)
def get_training_dataset():
dataset = load_dataset(train_fns)
# Create some additional training images by randomly flipping and
# increasing/decreasing the saturation of images in the training set.
def data_augment(image, one_hot_class):
modified = tf.image.random_flip_left_right(image)
modified = tf.image.random_saturation(modified, 0, 2)
return modified, one_hot_class
augmented = dataset.map(data_augment, num_parallel_calls=AUTO)
# Prefetch the next batch while training (autotune prefetch buffer size).
return augmented.repeat().shuffle(2048).batch(batch_size).prefetch(AUTO)
training_dataset = get_training_dataset()
validation_dataset = load_dataset(validation_fns).batch(batch_size).prefetch(AUTO)
```
Let's take a peek at the training dataset we've created:
```
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
def display_one_flower(image, title, subplot, color):
plt.subplot(subplot)
plt.axis('off')
plt.imshow(image)
plt.title(title, fontsize=16, color=color)
# If model is provided, use it to generate predictions.
def display_nine_flowers(images, titles, title_colors=None):
subplot = 331
plt.figure(figsize=(13,13))
for i in range(9):
color = 'black' if title_colors is None else title_colors[i]
display_one_flower(images[i], titles[i], 331+i, color)
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def get_dataset_iterator(dataset, n_examples):
return dataset.unbatch().batch(n_examples).as_numpy_iterator()
training_viz_iterator = get_dataset_iterator(training_dataset, 9)
# Re-run this cell to show a new batch of images
images, classes = next(training_viz_iterator)
class_idxs = np.argmax(classes, axis=-1) # transform from one-hot array to class number
labels = [CLASSES[idx] for idx in class_idxs]
display_nine_flowers(images, labels)
```
## Model
To get maximum accuracy, we leverage a pretrained image recognition model (here, [Xception](http://openaccess.thecvf.com/content_cvpr_2017/papers/Chollet_Xception_Deep_Learning_CVPR_2017_paper.pdf)). We drop the ImageNet-specific top layers (`include_top=False`), and add a global average pooling layer and a softmax layer to predict our 5 classes.
```
def create_model():
pretrained_model = tf.keras.applications.Xception(input_shape=[*IMAGE_SIZE, 3], include_top=False)
pretrained_model.trainable = True
model = tf.keras.Sequential([
pretrained_model,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(5, activation='softmax')
])
model.compile(
optimizer='adam',
loss = 'categorical_crossentropy',
metrics=['accuracy']
)
return model
with tpu_strategy.scope(): # creating the model in the TPUStrategy scope means we will train the model on the TPU
model = create_model()
model.summary()
```
## Training
Calculate the number of images in each dataset. Rather than actually load the data to do so (expensive), we rely on hints in the filename. This is used to calculate the number of batches per epoch.
```
def count_data_items(filenames):
# The number of data items is written in the name of the .tfrec files, i.e. flowers00-230.tfrec = 230 data items
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
return np.sum(n)
n_train = count_data_items(train_fns)
n_valid = count_data_items(validation_fns)
train_steps = count_data_items(train_fns) // batch_size
print("TRAINING IMAGES: ", n_train, ", STEPS PER EPOCH: ", train_steps)
print("VALIDATION IMAGES: ", n_valid)
```
Calculate and show a learning rate schedule. We start with a fairly low rate, as we're using a pre-trained model and don't want to undo all the fine work put into training it.
```
EPOCHS = 12
start_lr = 0.00001
min_lr = 0.00001
max_lr = 0.00005 * tpu_strategy.num_replicas_in_sync
rampup_epochs = 5
sustain_epochs = 0
exp_decay = .8
def lrfn(epoch):
if epoch < rampup_epochs:
return (max_lr - start_lr)/rampup_epochs * epoch + start_lr
elif epoch < rampup_epochs + sustain_epochs:
return max_lr
else:
return (max_lr - min_lr) * exp_decay**(epoch-rampup_epochs-sustain_epochs) + min_lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(lambda epoch: lrfn(epoch), verbose=True)
rang = np.arange(EPOCHS)
y = [lrfn(x) for x in rang]
plt.plot(rang, y)
print('Learning rate per epoch:')
```
Train the model. While the first epoch will be quite a bit slower as we must XLA-compile the execution graph and load the data, later epochs should complete in ~5s.
```
# Load the TensorBoard notebook extension.
%load_ext tensorboard
# Get TPU profiling service address. This address will be needed for capturing
# profile information with TensorBoard in the following steps.
service_addr = tpu.get_master().replace(':8470', ':8466')
print(service_addr)
# Launch TensorBoard.
%tensorboard --logdir=gs://bucket-name # Replace the bucket-name variable with your own gcs bucket
```
The TensorBoard UI is displayed in a browser window. In this colab, perform the following steps to prepare to capture profile information.
1. Click on the dropdown menu box on the top right side and scroll down and click PROFILE. A new window appears that shows: **No profile data was found** at the top.
1. Click on the CAPTURE PROFILE button. A new dialog appears. The top input line shows: **Profile Service URL or TPU name**. Copy and paste the Profile Service URL (the service_addr value shown before launching TensorBoard) into the top input line. While still on the dialog box, start the training with the next step.
1. Click on the next colab cell to start training the model.
1. Watch the output from the training until several epochs have completed. This allows time for the profile data to start being collected. Return to the dialog box and click on the CAPTURE button. If the capture succeeds, the page will auto refresh and redirect you to the profiling results.
```
history = model.fit(training_dataset, validation_data=validation_dataset,
steps_per_epoch=train_steps, epochs=EPOCHS, callbacks=[lr_callback])
final_accuracy = history.history["val_accuracy"][-5:]
print("FINAL ACCURACY MEAN-5: ", np.mean(final_accuracy))
def display_training_curves(training, validation, title, subplot):
ax = plt.subplot(subplot)
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['training', 'validation'])
plt.subplots(figsize=(10,10))
plt.tight_layout()
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
```
Accuracy goes up and loss goes down. Looks good!
## Next steps
More TPU/Keras examples include:
- [Shakespeare in 5 minutes with Cloud TPUs and Keras](https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb)
- [Fashion MNIST with Keras and TPUs](https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb)
We'll be sharing more examples of TPU use in Colab over time, so be sure to check back for additional example links, or [follow us on Twitter @GoogleColab](https://twitter.com/googlecolab).
|
github_jupyter
|
```
!pip install /home/knikaido/work/Cornell-Birdcall-Identification/data/resnest50-fast-package/resnest-0.0.6b20200701/resnest/
!pip install torch==1.4.0
!pip install opencv-python
!pip install slackweb
!pip install torchvision==0.2.2
!pip install torch_summary
from pathlib import Path
import numpy as np
import pandas as pd
import typing as tp
import yaml
import random
import os
import sys
import soundfile as sf
import librosa
import cv2
import matplotlib.pyplot as plt
import time
import pickle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import resnest.torch as resnest_torch
from torchvision import models
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
from radam import RAdam
from resnet import ResNet, Bottleneck
pd.options.display.max_rows = 500
pd.options.display.max_columns = 500
with open('0909_2_config.yml', 'r') as yml:
settings = yaml.safe_load(yml)
def set_seed(seed: int = 42):
random.seed(seed)
np.random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed) # type: ignore
# torch.backends.cudnn.deterministic = True # type: ignore
# torch.backends.cudnn.benchmark = True # type: ignore
# def progress_bar(i):
# pro_bar = ('=' * i) + (' ' * (pro_size - i))
# print('\r[{0}] {1}%'.format(pro_bar, i / pro_size * 100.), end='')
# ROOT = Path.cwd().parent
# INPUT_ROOT = ROOT / "input"
INPUT_ROOT = Path("/home/knikaido/work/Cornell-Birdcall-Identification/data")
RAW_DATA = INPUT_ROOT / "birdsong_recognition"
TRAIN_AUDIO_DIR = RAW_DATA / "train_audio"
TRAIN_RESAMPLED_AUDIO_DIRS = [
INPUT_ROOT / "birdsong-resampled-train-audio-{:0>2}".format(i) for i in range(5)
]
TEST_AUDIO_DIR = RAW_DATA / "test_audio"
BIRD_CODE = {
'aldfly': 0, 'ameavo': 1, 'amebit': 2, 'amecro': 3, 'amegfi': 4,
'amekes': 5, 'amepip': 6, 'amered': 7, 'amerob': 8, 'amewig': 9,
'amewoo': 10, 'amtspa': 11, 'annhum': 12, 'astfly': 13, 'baisan': 14,
'baleag': 15, 'balori': 16, 'banswa': 17, 'barswa': 18, 'bawwar': 19,
'belkin1': 20, 'belspa2': 21, 'bewwre': 22, 'bkbcuc': 23, 'bkbmag1': 24,
'bkbwar': 25, 'bkcchi': 26, 'bkchum': 27, 'bkhgro': 28, 'bkpwar': 29,
'bktspa': 30, 'blkpho': 31, 'blugrb1': 32, 'blujay': 33, 'bnhcow': 34,
'boboli': 35, 'bongul': 36, 'brdowl': 37, 'brebla': 38, 'brespa': 39,
'brncre': 40, 'brnthr': 41, 'brthum': 42, 'brwhaw': 43, 'btbwar': 44,
'btnwar': 45, 'btywar': 46, 'buffle': 47, 'buggna': 48, 'buhvir': 49,
'bulori': 50, 'bushti': 51, 'buwtea': 52, 'buwwar': 53, 'cacwre': 54,
'calgul': 55, 'calqua': 56, 'camwar': 57, 'cangoo': 58, 'canwar': 59,
'canwre': 60, 'carwre': 61, 'casfin': 62, 'caster1': 63, 'casvir': 64,
'cedwax': 65, 'chispa': 66, 'chiswi': 67, 'chswar': 68, 'chukar': 69,
'clanut': 70, 'cliswa': 71, 'comgol': 72, 'comgra': 73, 'comloo': 74,
'commer': 75, 'comnig': 76, 'comrav': 77, 'comred': 78, 'comter': 79,
'comyel': 80, 'coohaw': 81, 'coshum': 82, 'cowscj1': 83, 'daejun': 84,
'doccor': 85, 'dowwoo': 86, 'dusfly': 87, 'eargre': 88, 'easblu': 89,
'easkin': 90, 'easmea': 91, 'easpho': 92, 'eastow': 93, 'eawpew': 94,
'eucdov': 95, 'eursta': 96, 'evegro': 97, 'fiespa': 98, 'fiscro': 99,
'foxspa': 100, 'gadwal': 101, 'gcrfin': 102, 'gnttow': 103, 'gnwtea': 104,
'gockin': 105, 'gocspa': 106, 'goleag': 107, 'grbher3': 108, 'grcfly': 109,
'greegr': 110, 'greroa': 111, 'greyel': 112, 'grhowl': 113, 'grnher': 114,
'grtgra': 115, 'grycat': 116, 'gryfly': 117, 'haiwoo': 118, 'hamfly': 119,
'hergul': 120, 'herthr': 121, 'hoomer': 122, 'hoowar': 123, 'horgre': 124,
'horlar': 125, 'houfin': 126, 'houspa': 127, 'houwre': 128, 'indbun': 129,
'juntit1': 130, 'killde': 131, 'labwoo': 132, 'larspa': 133, 'lazbun': 134,
'leabit': 135, 'leafly': 136, 'leasan': 137, 'lecthr': 138, 'lesgol': 139,
'lesnig': 140, 'lesyel': 141, 'lewwoo': 142, 'linspa': 143, 'lobcur': 144,
'lobdow': 145, 'logshr': 146, 'lotduc': 147, 'louwat': 148, 'macwar': 149,
'magwar': 150, 'mallar3': 151, 'marwre': 152, 'merlin': 153, 'moublu': 154,
'mouchi': 155, 'moudov': 156, 'norcar': 157, 'norfli': 158, 'norhar2': 159,
'normoc': 160, 'norpar': 161, 'norpin': 162, 'norsho': 163, 'norwat': 164,
'nrwswa': 165, 'nutwoo': 166, 'olsfly': 167, 'orcwar': 168, 'osprey': 169,
'ovenbi1': 170, 'palwar': 171, 'pasfly': 172, 'pecsan': 173, 'perfal': 174,
'phaino': 175, 'pibgre': 176, 'pilwoo': 177, 'pingro': 178, 'pinjay': 179,
'pinsis': 180, 'pinwar': 181, 'plsvir': 182, 'prawar': 183, 'purfin': 184,
'pygnut': 185, 'rebmer': 186, 'rebnut': 187, 'rebsap': 188, 'rebwoo': 189,
'redcro': 190, 'redhea': 191, 'reevir1': 192, 'renpha': 193, 'reshaw': 194,
'rethaw': 195, 'rewbla': 196, 'ribgul': 197, 'rinduc': 198, 'robgro': 199,
'rocpig': 200, 'rocwre': 201, 'rthhum': 202, 'ruckin': 203, 'rudduc': 204,
'rufgro': 205, 'rufhum': 206, 'rusbla': 207, 'sagspa1': 208, 'sagthr': 209,
'savspa': 210, 'saypho': 211, 'scatan': 212, 'scoori': 213, 'semplo': 214,
'semsan': 215, 'sheowl': 216, 'shshaw': 217, 'snobun': 218, 'snogoo': 219,
'solsan': 220, 'sonspa': 221, 'sora': 222, 'sposan': 223, 'spotow': 224,
'stejay': 225, 'swahaw': 226, 'swaspa': 227, 'swathr': 228, 'treswa': 229,
'truswa': 230, 'tuftit': 231, 'tunswa': 232, 'veery': 233, 'vesspa': 234,
'vigswa': 235, 'warvir': 236, 'wesblu': 237, 'wesgre': 238, 'weskin': 239,
'wesmea': 240, 'wessan': 241, 'westan': 242, 'wewpew': 243, 'whbnut': 244,
'whcspa': 245, 'whfibi': 246, 'whtspa': 247, 'whtswi': 248, 'wilfly': 249,
'wilsni1': 250, 'wiltur': 251, 'winwre3': 252, 'wlswar': 253, 'wooduc': 254,
'wooscj2': 255, 'woothr': 256, 'y00475': 257, 'yebfly': 258, 'yebsap': 259,
'yehbla': 260, 'yelwar': 261, 'yerwar': 262, 'yetvir': 263
}
INV_BIRD_CODE = {v: k for k, v in BIRD_CODE.items()}
train = pd.read_csv(RAW_DATA / "train.csv")
# train = pd.read_csv(TRAIN_RESAMPLED_AUDIO_DIRS[0] / "train_mod.csv")
train_rate = train[['ebird_code', 'filename', 'rating']].sort_values('rating')
train_rate[train_rate['rating'] == 2.0]
train_rate['rating'].value_counts()
len(train_rate[train_rate['rating'] <= 1.5]) / len(train_rate)
train['secondary_labels'].value_counts()
train.columns
```
|
github_jupyter
|
# Generating percentiles for TensorFlow model input features
The current TensorFlow model uses histogram-like percentile features, which are a kind of continuous version of one-hot features.
For example, if the key cutoff points are `[-3, -1, 0, 2, 10]`, we might encode a value `x` as `sigma((x - cutoff) / scale)`. If `sigma` is the sigmoid function, `x = 0.1`, and `scale = 0.1`, then we'd get `[1, 1, 0.73, 0, 0]`; in other words `x` is definitely above the first two points, mostly above the third, and below the fourth and fifth. If we increase `scale` to `2.0`, then the values are less discrete: `[0.82, 0.63, 0.51, 0.28, 0.01]`.
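A quick numeric check of this worked example (plain NumPy, independent of the model code below):
```
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
cutoffs = np.array([-3.0, -1.0, 0.0, 2.0, 10.0])
x = 0.1
print(np.round(sigmoid((x - cutoffs) / 0.1), 2))  # [1.   1.   0.73 0.   0.  ]
print(np.round(sigmoid((x - cutoffs) / 2.0), 2))  # [0.82 0.63 0.51 0.28 0.01]
```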
This notebook generates appropriate cutoff points for these, to reflect most data encountered.
```
# Different options for soft-onehot function.
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
x = np.linspace(-10, 10, 100)
cutoff = 1.0
sigmoid = lambda x: 1/(1+np.exp(-x))
scale = 2.0
logit = (x - cutoff) / scale
plt.plot(x, sigmoid(logit))
plt.plot(x, np.exp(- logit * logit))
NUM_LCS = 10_000 # key parameter, turn it down if you want this notebook to finish faster.
# Settings determining type of features extracted.
window_size = 10
band_time_diff = 4.0
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from justice.datasets import plasticc_data
source = plasticc_data.PlasticcBcolzSource.get_default()
bcolz_source = plasticc_data.PlasticcBcolzSource.get_default()
meta_table = bcolz_source.get_table('test_set_metadata')
%time all_ids = meta_table['object_id'][:]
%%time
import random
sample_ids = random.Random(828372).sample(list(all_ids), NUM_LCS)
lcs = []
_chunk_sz = 100
for start in range(0, len(sample_ids), _chunk_sz):
lcs.extend(plasticc_data.PlasticcDatasetLC.bcolz_get_lcs_by_obj_ids(
bcolz_source=source,
dataset="test_set",
obj_ids=sample_ids[start:start + _chunk_sz]
))
%%time
from justice.features import band_settings_params
from justice.features import dense_extracted_features
from justice.features import feature_combinators
from justice.features import metadata_features
from justice.features import per_point_dataset
from justice.features import raw_value_features
batch_size = 32
rve = raw_value_features.RawValueExtractor(
window_size=window_size,
band_settings=band_settings_params.BandSettings(lcs[0].expected_bands)
)
mve = metadata_features.MetadataValueExtractor()
data_gen = per_point_dataset.PerPointDatasetGenerator(
extract_fcn=feature_combinators.combine([rve.extract, mve.extract]),
batch_size=batch_size,
)
def input_fn():
return data_gen.make_dataset_lcs(lcs)
def per_band_model_fn(band_features, params):
batch_size = params["batch_size"]
window_size = params["window_size"]
wf = dense_extracted_features.WindowFeatures(
band_features, batch_size=batch_size, window_size=window_size, band_time_diff=band_time_diff)
dflux_dt = wf.dflux_dt(clip_magnitude=None)
init_layer = dense_extracted_features.initial_layer(wf, include_flux_and_time=True)
init_layer_masked = wf.masked(init_layer, value_if_masked=0, expected_extra_dims=[3])
return {
"initial_layer": init_layer_masked,
"in_window": wf.in_window,
}
def model_fn(features, labels, mode, params):
band_settings = band_settings_params.BandSettings.from_params(params)
per_band_data = band_settings.per_band_sub_model_fn(
per_band_model_fn, features, params=params
)
predictions = {
'band_{}.{}'.format(band, name): tensor
for band, tensor_dict in zip(band_settings.bands, per_band_data)
for name, tensor in tensor_dict.items()
}
predictions['time'] = features['time']
predictions['object_id'] = features['object_id']
return tf.estimator.EstimatorSpec(
mode=mode, predictions=predictions, loss=tf.constant(0.0), train_op=tf.no_op()
)
params = {
'batch_size': batch_size,
'window_size': window_size,
'flux_scale_epsilon': 0.5,
'lc_bands': lcs[0].expected_bands,
}
estimator = tf.estimator.Estimator(
model_fn=model_fn,
params=params
)
predictions = list(estimator.predict(input_fn=input_fn, yield_single_examples=True))
print(f"Got {len(predictions)} predictions.")
predictions[4]
def get_values_df(band):
arrays = [x[f"band_{band}.initial_layer"] for x in predictions if x[f"band_{band}.in_window"]]
return pd.DataFrame(np.concatenate(arrays, axis=0), columns=["dflux_dt", "dflux", "dtime"])
df = get_values_df(lcs[0].expected_bands[0])
df.hist('dflux_dt', bins=32)
df.hist('dflux', bins=32)
df.hist('dtime', bins=32)
```
## Really messy code to get a histogram with mostly-unique bins.
Because we want fixed-size arrays for the TensorFlow code, we want a set of e.g. 32 unique cutoff points that reflect a good distribution of cutoffs. However, this is really messy, because there tend to be strong peaks in the histogram, which produce frequently repeated percentile values.
```
import collections
import scipy.optimize
def _some_duplicates(non_unique, unique, num_desired):
to_duplicate_candidates = non_unique.tolist()
for x in unique:
to_duplicate_candidates.remove(x)
unique = unique.tolist()
while len(unique) < num_desired:
assert len(unique) <= num_desired
to_duplicate = random.choice(to_duplicate_candidates)
unique.insert(unique.index(to_duplicate), to_duplicate)
return unique
def unique_percentiles(array, num_desired):
partition_size = 100.0 / num_desired
epsilon = 0.05 * partition_size
solution = None
optimal_solution = None
def _actual_unique(vals):
nonlocal solution, optimal_solution
if optimal_solution is not None:
return 0 # stop optimization, or at least return quickly
num_points_base, perturb = vals
num_points = int(round(num_desired * num_points_base))
perturb = abs(perturb)
q = np.linspace(0, 100, int(num_points))
rng = np.random.RandomState(int(1e6 * perturb))
noise = rng.normal(loc=0, scale=min(1.0, 10 * perturb) * epsilon, size=q.shape)
noise[0] = 0
noise[-1] = 0
q += noise
non_unique = np.percentile(array, q=q, interpolation='linear')
unique = np.unique(non_unique)
result = abs(num_desired - len(unique))
if num_desired == len(unique):
optimal_solution = unique
elif len(unique) <= num_desired <= len(unique) + 1:
solution = _some_duplicates(non_unique, unique, num_desired)
return (4 if len(unique) > num_desired else 1) * result + perturb
res = scipy.optimize.minimize(
_actual_unique,
x0=[1.0, 0.1],
options={'maxiter': 1000, 'rhobeg': 0.3},
tol=1e-6,
method='COBYLA')
if optimal_solution is None and solution is None:
raise ValueError(f"Could not find deduplicated percentiles!")
return optimal_solution if optimal_solution is not None else solution
desired_num_cutoffs = 32
all_solutions = []
for band in lcs[0].expected_bands:
df = get_values_df(band)
for i, column in enumerate(df.columns):
print(band, column)
percentiles = np.array(unique_percentiles(df[column], desired_num_cutoffs), dtype=np.float32)
median_scale = np.median(percentiles[1:] - percentiles[:-1])
all_solutions.append({
'band': band,
'column_index': i,
'column': column,
'median_scale': float(median_scale),
'cutoffs': percentiles,
})
with_settings = {
'window_size': window_size,
'band_time_diff': band_time_diff,
'desired_num_cutoffs': desired_num_cutoffs,
'solutions': all_solutions
}
```
## Save to nicely-formatted JSON
Writes numpy arrays as strings, then rewrites those strings.
```
import datetime
import json
from justice import path_util
class ArrayPreEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.ndarray):
return "<<<<{}>>>>".format(", ".join(f"{x:.8f}" for x in obj.tolist()))
else:
print(obj)
return json.JSONEncoder.default(self, obj)
def _encode(x):
result = json.dumps(x, indent=2, cls=ArrayPreEncoder).replace('"<<<<', '[').replace('>>>>"', ']')
json.loads(result) # error if not decodable
return result
now = datetime.datetime.now()
path = path_util.data_dir / 'tf_align_model' / 'feature_extraction' / (
f"cutoffs__window_sz-{window_size}__{now.year:04d}-{now.month:02d}-{now.day:02d}.json")
path.parent.mkdir(parents=True, exist_ok=True)
with open(str(path), 'w') as f:
f.write(_encode(with_settings))
```
|
github_jupyter
|
##### Copyright 2018 The AdaNet Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# AdaNet on TPU
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_tpu.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_tpu.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
AdaNet supports training on Google's custom machine learning accelerators known
as Tensor Processing Units (TPU). Conveniently, we provide `adanet.TPUEstimator`
which handles TPU support behind the scenes. There are only a few minor changes
needed to switch from `adanet.Estimator` to `adanet.TPUEstimator`. We highlight
the necessary changes in this tutorial.
If the reader is not familiar with AdaNet, it is recommended that they take a look
at
[The AdaNet Objective](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_objective.ipynb)
and in particular
[Customizing AdaNet](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_objective.ipynb)
as this tutorial builds upon the latter.
**NOTE: you must provide a valid GCS bucket to use TPUEstimator.**
To begin, we import the necessary packages, obtain the Colab's TPU master
address, and give the TPU permissions to write to our GCS Bucket. Follow the
instructions
[here](https://colab.sandbox.google.com/notebooks/tpu.ipynb#scrollTo=_pQCOmISAQBu)
to connect to a Colab TPU runtime.
```
#@test {"skip": true}
# If you're running this in Colab, first install the adanet package:
!pip install adanet
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import json
import os
import six
import time
import adanet
from google.colab import auth
import tensorflow as tf
BUCKET = '' #@param {type: 'string'}
MODEL_DIR = 'gs://{}/{}'.format(
BUCKET, time.strftime('adanet-tpu-estimator/%Y-%m-%d-%H-%M-%S'))
MASTER = ''
if 'COLAB_TPU_ADDR' in os.environ:
auth.authenticate_user()
MASTER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
# Authenticate TPU to use GCS Bucket.
with tf.Session(MASTER) as sess:
with open('/content/adc.json', 'r') as file_:
auth_info = json.load(file_)
tf.contrib.cloud.configure_gcs(sess, credentials=auth_info)
# The random seed to use.
RANDOM_SEED = 42
```
## Fashion MNIST
We focus again on the Fashion MNIST dataset and download the data via Keras.
```
(x_train, y_train), (x_test, y_test) = (
tf.keras.datasets.fashion_mnist.load_data())
```
## `input_fn` Changes
There are two minor changes we must make to `input_fn` to support running on
TPU:
1. TPUs dynamically shard the input data depending on the number of cores used.
Because of this, we augment `input_fn` to take a dictionary `params`
argument. When running on TPU, `params` contains a `batch_size` field with
the appropriate batch size.
1. Once the input is batched, we drop the last batch if it is smaller than
`batch_size`. This can simply be done by specifying `drop_remainder=True` to
the
[`tf.data.Dataset.batch()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch)
function. It is important to specify this option since TPUs do not support
dynamic shapes. Note that we only drop the remainder batch during training
since evaluation is still done on the CPU.
```
FEATURES_KEY = "images"
def generator(images, labels):
"""Returns a generator that returns image-label pairs."""
def _gen():
for image, label in zip(images, labels):
yield image, label
return _gen
def preprocess_image(image, label):
"""Preprocesses an image for an `Estimator`."""
image = image / 255.
image = tf.reshape(image, [28, 28, 1])
features = {FEATURES_KEY: image}
return features, label
def input_fn(partition, training, batch_size):
"""Generate an input_fn for the Estimator."""
def _input_fn(params): # TPU: specify `params` argument.
# TPU: get the TPU set `batch_size`, if available.
batch_size_ = params.get("batch_size", batch_size)
if partition == "train":
dataset = tf.data.Dataset.from_generator(
generator(x_train, y_train), (tf.float32, tf.int32), ((28, 28), ()))
elif partition == "predict":
dataset = tf.data.Dataset.from_generator(
generator(x_test[:10], y_test[:10]), (tf.float32, tf.int32),
((28, 28), ()))
else:
dataset = tf.data.Dataset.from_generator(
generator(x_test, y_test), (tf.float32, tf.int32), ((28, 28), ()))
if training:
dataset = dataset.shuffle(10 * batch_size_, seed=RANDOM_SEED).repeat()
# TPU: drop the remainder batch when training on TPU.
dataset = dataset.map(preprocess_image).batch(
batch_size_, drop_remainder=training)
iterator = dataset.make_one_shot_iterator()
features, labels = iterator.get_next()
return features, labels
return _input_fn
```
## `model_fn` Changes
We use a similar CNN architecture as used in the
[Customizing AdaNet](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/customizing_adanet.ipynb)
tutorial. The only TPU specific change we need to make is wrap the `optimizer`
in a
[`tf.contrib.tpu.CrossShardOptimizer`](https://www.google.com/search?q=cross+shard+optimizer&oq=cross+shard+optimizer&aqs=chrome.0.0j69i57.2391j0j7&sourceid=chrome&ie=UTF-8).
```
#@title Define the Builder and Generator
class SimpleCNNBuilder(adanet.subnetwork.Builder):
"""Builds a CNN subnetwork for AdaNet."""
def __init__(self, learning_rate, max_iteration_steps, seed):
"""Initializes a `SimpleCNNBuilder`.
Args:
learning_rate: The float learning rate to use.
max_iteration_steps: The number of steps per iteration.
seed: The random seed.
Returns:
An instance of `SimpleCNNBuilder`.
"""
self._learning_rate = learning_rate
self._max_iteration_steps = max_iteration_steps
self._seed = seed
def build_subnetwork(self,
features,
logits_dimension,
training,
iteration_step,
summary,
previous_ensemble=None):
"""See `adanet.subnetwork.Builder`."""
images = list(features.values())[0]
kernel_initializer = tf.keras.initializers.he_normal(seed=self._seed)
x = tf.keras.layers.Conv2D(
filters=16,
kernel_size=3,
padding="same",
activation="relu",
kernel_initializer=kernel_initializer)(
images)
x = tf.keras.layers.MaxPool2D(pool_size=2, strides=2)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(
units=64, activation="relu", kernel_initializer=kernel_initializer)(
x)
logits = tf.keras.layers.Dense(
units=10, activation=None, kernel_initializer=kernel_initializer)(
x)
complexity = tf.constant(1)
return adanet.Subnetwork(
last_layer=x,
logits=logits,
complexity=complexity,
persisted_tensors={})
def build_subnetwork_train_op(self,
subnetwork,
loss,
var_list,
labels,
iteration_step,
summary,
previous_ensemble=None):
"""See `adanet.subnetwork.Builder`."""
learning_rate = tf.train.cosine_decay(
learning_rate=self._learning_rate,
global_step=iteration_step,
decay_steps=self._max_iteration_steps)
optimizer = tf.train.MomentumOptimizer(learning_rate, .9)
# TPU: wrap the optimizer in a CrossShardOptimizer.
optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
return optimizer.minimize(loss=loss, var_list=var_list)
def build_mixture_weights_train_op(self, loss, var_list, logits, labels,
iteration_step, summary):
"""See `adanet.subnetwork.Builder`."""
return tf.no_op("mixture_weights_train_op")
@property
def name(self):
"""See `adanet.subnetwork.Builder`."""
return "simple_cnn"
class SimpleCNNGenerator(adanet.subnetwork.Generator):
"""Generates a `SimpleCNN` at each iteration."""
def __init__(self, learning_rate, max_iteration_steps, seed=None):
"""Initializes a `Generator` that builds `SimpleCNNs`.
Args:
learning_rate: The float learning rate to use.
max_iteration_steps: The number of steps per iteration.
seed: The random seed.
Returns:
An instance of `Generator`.
"""
self._seed = seed
self._dnn_builder_fn = functools.partial(
SimpleCNNBuilder,
learning_rate=learning_rate,
max_iteration_steps=max_iteration_steps)
def generate_candidates(self, previous_ensemble, iteration_number,
previous_ensemble_reports, all_reports):
"""See `adanet.subnetwork.Generator`."""
seed = self._seed
# Change the seed according to the iteration so that each subnetwork
# learns something different.
if seed is not None:
seed += iteration_number
return [self._dnn_builder_fn(seed=seed)]
```
## Launch TensorBoard
Let's run [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) to visualize model training over time. We'll use [ngrok](https://ngrok.com/) to tunnel traffic to localhost.
*The instructions for setting up Tensorboard were obtained from https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/*
Run the next cells and follow the link to see the TensorBoard in a new tab.
```
#@test {"skip": true}
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(MODEL_DIR)
)
# Install ngrok binary.
! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
! unzip ngrok-stable-linux-amd64.zip
print("Follow this link to open TensorBoard in a new tab.")
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
```
## Using `adanet.TPUEstimator` to Train and Evaluate
Finally, we switch from `adanet.Estimator` to `adanet.TPUEstimator`. There are
two last changes needed:
1. Update the `RunConfig` to be a
[`tf.contrib.tpu.RunConfig`](https://www.tensorflow.org/api_docs/python/tf/contrib/tpu/RunConfig).
We supply the TPU `master` address and set `iterations_per_loop=200`. This
choice is fairly arbitrary in our case. A good practice is to set it to the
number of steps in between summary writes and metric evals.
1. Finally, we specify the `use_tpu` and `batch_size` parameters of
   `adanet.TPUEstimator`.
There is an important thing to note about the `batch_size`: a Cloud TPU consists
of 4 chips with 2 cores each, i.e. 8 cores in total. In the
[Customizing AdaNet](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/customizing_adanet.ipynb)
tutorial, a `batch_size` of 64 was used. To be consistent we use the same
`batch_size` per core and reduce the number of training steps accordingly. In
other words, since we're running on one TPU we set `batch_size=64*8=512` and
`train_steps=1000`. In the ideal case, since we drop the `train_steps` by 5x,
this means we're **training 5x faster!**
```
#@title AdaNet Parameters
LEARNING_RATE = 0.25 #@param {type:"number"}
TRAIN_STEPS = 1000 #@param {type:"integer"}
BATCH_SIZE = 512 #@param {type:"integer"}
ADANET_ITERATIONS = 2 #@param {type:"integer"}
# TPU: switch `tf.estimator.RunConfig` to `tf.contrib.tpu.RunConfig`.
# The main required changes are specifying `tpu_config` and `master`.
config = tf.contrib.tpu.RunConfig(
tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=200),
master=MASTER,
save_checkpoints_steps=200,
save_summary_steps=200,
tf_random_seed=RANDOM_SEED)
head = tf.contrib.estimator.multi_class_head(
n_classes=10, loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE)
max_iteration_steps = TRAIN_STEPS // ADANET_ITERATIONS
# TPU: switch `adanet.Estimator` to `adanet.TPUEstimator`.
try:
estimator = adanet.TPUEstimator(
head=head,
subnetwork_generator=SimpleCNNGenerator(
learning_rate=LEARNING_RATE,
max_iteration_steps=max_iteration_steps,
seed=RANDOM_SEED),
max_iteration_steps=max_iteration_steps,
evaluator=adanet.Evaluator(
input_fn=input_fn("train", training=False, batch_size=BATCH_SIZE),
steps=None),
adanet_loss_decay=.99,
config=config,
model_dir=MODEL_DIR,
# TPU: specify `use_tpu` and the batch_size parameters.
use_tpu=True,
train_batch_size=BATCH_SIZE,
eval_batch_size=32)
except tf.errors.InvalidArgumentError as e:
six.raise_from(
Exception(
"Invalid GCS Bucket: you must provide a valid GCS bucket in the "
"`BUCKET` form field of the first cell."), e)
results, _ = tf.estimator.train_and_evaluate(
estimator,
train_spec=tf.estimator.TrainSpec(
input_fn=input_fn("train", training=True, batch_size=BATCH_SIZE),
max_steps=TRAIN_STEPS),
eval_spec=tf.estimator.EvalSpec(
input_fn=input_fn("test", training=False, batch_size=BATCH_SIZE),
steps=None,
start_delay_secs=1,
throttle_secs=1,
))
print("Accuracy:", results["accuracy"])
print("Loss:", results["average_loss"])
```
## Conclusion
That was easy! With very few changes we were able to transform our original
estimator into one which can harness the power of TPUs.
|
github_jupyter
|
```
import pandas as pd
import numpy as np
import datetime as datetime
#read in last_played_wars csv
last_played_wars = pd.read_csv("updated_last_played_wars.csv")
# last_played_wars["Participation"] = last_played_wars["Joined Wars"] / last_played_wars["Total Wars"]
last_played_wars = last_played_wars[["Name", "Tag", "Last War", "Town Hall", "Clan", "Total Wars"]]
last_played_wars.head()
#Split Last Played Wars by clan
#Sheer Force
#Create a copy of Sheer Force only to manipulate
sf_lpw = last_played_wars.loc[last_played_wars.Clan == "Sheer Force"].copy()
#Joined Wars of clan member
sf_lpw["Joined Wars"] = sf_lpw["Total Wars"]
#get maximum number of wars this season
sf_max = sf_lpw["Total Wars"].max()
sf_lpw["Total Wars"] = sf_max
#find participation of members
sf_lpw["Participation"] = sf_lpw["Joined Wars"]/sf_max
#get < 50% participation members and add to slackers
sheer_force_slackers = sf_lpw.loc[sf_lpw.Participation < .5].copy()
sheer_force_slackers.head()
sf = sheer_force_slackers.to_csv(r"sf.csv", index = True, header = True)
#Dark Matter
dm_lpw = last_played_wars.loc[last_played_wars.Clan == "Dark Matter"].copy()
dm_lpw["Joined Wars"] = dm_lpw["Total Wars"]
dm_max = dm_lpw["Total Wars"].max()
dm_lpw["Total Wars"] = dm_max
dm_lpw["Participation"] = dm_lpw["Joined Wars"]/dm_max
dark_matter_slackers = dm_lpw.loc[dm_lpw.Participation < .5].copy()
dark_matter_slackers.head()
dm = dark_matter_slackers.to_csv(r"dm.csv", index = True, header = True)
#Mini Matter
mm_lpw = last_played_wars.loc[last_played_wars.Clan == "Mini Matter"].copy()
mm_lpw["Joined Wars"] = mm_lpw["Total Wars"]
mm_max = mm_lpw["Total Wars"].max()
mm_lpw["Total Wars"] = mm_max
mm_lpw["Participation"] = mm_lpw["Joined Wars"]/mm_max
mini_matter_slackers = mm_lpw.loc[mm_lpw.Participation < .5].copy()
mini_matter_slackers.head()
mm = mini_matter_slackers.to_csv(r"mm.csv", index = True, header = True)
#Legendary Monks
lm_lpw = last_played_wars.loc[last_played_wars.Clan == "Legendary Monks"].copy()
lm_lpw["Joined Wars"] = lm_lpw["Total Wars"]
lm_max = lm_lpw["Total Wars"].max()
lm_lpw["Total Wars"] = lm_max
lm_lpw["Participation"] = lm_lpw["Joined Wars"]/lm_max
legendary_monks_slackers = lm_lpw.loc[lm_lpw.Participation < .5].copy()
legendary_monks_slackers.head()
lm = legendary_monks_slackers.to_csv(r"lm.csv", index = True, header = True)
#Golden Clan
kbwf_lpw = last_played_wars.loc[last_played_wars.Clan == "Golden Clan"].copy()
kbwf_lpw["Joined Wars"] = kbwf_lpw["Total Wars"]
kbwf_max = kbwf_lpw["Total Wars"].max()
kbwf_lpw["Total Wars"] = kbwf_max
kbwf_lpw["Participation"] = kbwf_lpw["Joined Wars"]/kbwf_max
killer_black_slackers = kbwf_lpw.loc[kbwf_lpw.Participation < .5].copy()
killer_black_slackers.head()
kbwf = killer_black_slackers.to_csv(r"gc.csv", index = True, header = True)
```
|
github_jupyter
|
# Pandas
```
import numpy as np
import pandas as pd
```
Pandas provides three data structures: `Series`, `DataFrame`, and `Panel`.
* `Series` holds one-dimensional data
* `DataFrame` holds two-dimensional data
* `Panel` holds three-dimensional or variable-dimensional data (removed in modern pandas in favour of multi-indexed DataFrames)
## The Series data structure
A `Series` is essentially a one-dimensional array with an index.
With an explicit index:
```
s = pd.Series([1,3,2,4], index=['a', 'b', 'c', 'd'])
s.index
s.values
```
With the default index:
```
s = pd.Series([1, 3, 2, 4])
s.index
s.values
```
## The DataFrame data structure
### Creating a DataFrame
```
df = pd.DataFrame({'x': ['a', 'b', 'c'],
'y': range(1, 4),
'z': [2, 5, 3]})
df
df.columns
df.values
```
### Inspecting the data
* `df.info()` shows the DataFrame's structure: column names, dtypes and non-null counts
* `df.head()` shows the first five rows
* `df.tail()` shows the last five rows (see the short example below)
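A short self-contained example using the `df` created above:
```
import pandas as pd

df = pd.DataFrame({'x': ['a', 'b', 'c'], 'y': range(1, 4), 'z': [2, 5, 3]})
df.info()    # column names, dtypes, non-null counts and memory usage
df.head()    # first five rows (all three rows here)
df.tail()    # last five rows
```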
### Selecting multiple columns
* df.loc
* df.iloc
```
df[['x', 'y']]
df.loc[:, ['x', 'y']]
df.iloc[:, [0, 1]]
```
### Filtering rows
```
df[df.z>=3]
```
### Renaming columns
```
df.rename(columns={'x': 'X'}, inplace=True)
df
df.columns = ['X', 'Y', 'Z']
df
```
### Hierarchical (multi-level) indexing
```
df = pd.DataFrame({
'X': list('ABCABC'),
'year': [2010] * 3 + [2011] * 3,
'Value': [1, 3, 4, 3, 5, 2]
})
df
df.set_index(['X', 'year'])
```
## Reshaping tables
```
df = pd.DataFrame({
'X': list('ABC'),
'2010': [1, 3, 4],
'2011': [3, 5, 2]
})
df
df_melt = pd.melt(df, id_vars='X', var_name='year', value_name='value')
df_melt
```
* `id_vars='X'` specifies the identifier variable(s) that label each observation
* `var_name='year'` is the name of the column that stores the original column names
* `value_name='value'` is the name of the column that stores the original values
```
df_pivot = df_melt.pivot_table(index='X', columns='year', values='value')
df_pivot.reset_index(inplace=True)
df_pivot
```
## Transforming variables
* `apply` operates on whole columns (`axis=0`) or whole rows (`axis=1`) of a `DataFrame` (see the sketch below)
* `applymap` operates element-wise, applying the function to every value of the `DataFrame`
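A minimal sketch of the difference (note that newer pandas versions deprecate `applymap` in favour of `DataFrame.map`):
```
import pandas as pd

df_num = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(df_num.apply(sum, axis=0))          # apply per column: a -> 3, b -> 7
print(df_num.apply(sum, axis=1))          # apply per row: 4, 6
print(df_num.applymap(lambda v: v ** 2))  # applymap: every single element squared
```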
## Sorting tables
`df.sort_values(by, ascending=True)`
```
df
df.sort_values('2010', ascending=False)
df.sort_values('2011', ascending=True)
df.sort_values(by=['X', '2010'], ascending=False)
```
## Concatenating tables
```
df1 = pd.DataFrame({
'x': ['a', 'b', 'c'],
'y': range(1, 4),
})
df2 = pd.DataFrame({
'z': ['B', 'D', 'H'],
'g': [2, 5, 3]
})
df3 = pd.DataFrame({
'x': ['g', 'd'],
'y': [2, 5]
})
```
Concatenating along the column axis:
```
pd.concat([df1, df2], axis=1)
```
Concatenating along the row axis:
```
pd.concat([df1, df3], axis=0).reset_index()
```
## Merging tables
```
df1 = pd.DataFrame({
'x': list('abc'),
'y': range(1, 4)
})
df2 = pd.DataFrame({
'x': list('abd'),
'z': [2, 5, 3]
})
df3 = pd.DataFrame({
'g': list('abd'),
'z': [2, 5, 3]
})
df1
df2
df3
```
Keep all rows of the left table:
```
pd.merge(df1, df2, how='left', on='x')
```
Keep all rows of the right table:
```
pd.merge(df1, df2, how='right', on='x')
```
Keep only the rows common to both tables:
```
pd.merge(df1, df2, how='inner', on='x')
```
Keep all rows from both tables:
```
pd.merge(df1, df2, how='outer', on='x')
```
## Grouped operations on tables
```
df = pd.DataFrame({
'X': list('ABC'),
'2010': [1, 3, 4],
'2011': [3, 5, 2]
})
df
```
Row- or column-wise operations.
Sum across each row:
```
df[['2010', '2011']].apply(lambda x: x.sum(), axis=1)
```
Sum down each column:
```
df[['2010', '2011']].apply(lambda x: x.sum(), axis=0)
```
Computing with multiple columns:
```
df['2010_2011'] = df[['2010', '2011']].apply(lambda x: x['2010'] + 2 * x['2011'], axis=1)
df
```
Grouping:
```
df = pd.DataFrame({
'X': list('ABC'),
'2010': [1, 3, 4],
'2011': [3, 5, 2]
})
df_melt = pd.melt(df, id_vars=['X'], var_name='year', value_name='value')
df_melt
```
Group by `year` and compute the mean:
```
df_melt.groupby('year').mean()
```
Group by both `year` and `X` and compute the mean:
```
df_melt.groupby(['year', 'X']).mean()
df_melt.groupby(['year', 'X'], as_index=False).mean()
```
Grouped aggregation:
```
df_melt.groupby(['X', 'year']).aggregate([np.mean, np.median])
```
Group-wise transformation with the `transform()` function:
```
df_melt['percentage'] = df_melt.groupby('X')['value'].transform(lambda x: x/x.sum())
df_melt
```
Group-wise filtering with the `filter()` function:
```
df_melt.groupby('X').filter(lambda x: x['value'].mean()>2)
```
|
github_jupyter
|
<img align="right" src="tf-small.png"/>
# Search from MQL
These are examples of
[MQL](https://shebanq.ancient-data.org/static/docs/MQL-Query-Guide.pdf)
queries on
[SHEBANQ](https://shebanq.ancient-data.org/hebrew/queries),
now expressed
as Text-Fabric search templates.
For more basic examples, see
[searchTutorial](https://github.com/etcbc/text-fabric/blob/master/docs/searchTutorial.ipynb).
*Search* in Text-Fabric is a template based way of looking for structural patterns in your dataset.
```
%load_ext autoreload
%autoreload 2
from tf.fabric import Fabric
ETCBC = 'hebrew/etcbc4c'
TF = Fabric( modules=ETCBC )
api = TF.load('''
rela function pdp
''')
api.makeAvailableIn(globals())
```
# By Oliver Glanz
[Oliver Glanz: PP with adjective followed by noun](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=547)
```
select all objects where
[phrase FOCUS typ = PP
[word sp= prep]
[word sp=adjv]
[word sp=subs]
]
```
64 results having 251 words.
```
query = '''
phrase typ=PP
word sp=prep
<: word sp=adjv
<: word sp=subs
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
for r in S.fetch(amount=10):
print(S.glean(r))
```
The number of results is right. The number of words that SHEBANQ reports
is the number of words in the phrases of the result. Let us count them:
```
print(sum([len(L.d(r[0], otype='word')) for r in S.fetch()]))
```
# By Martijn Naaijer
[Martijn Naaijer: Object clauses with >CR](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=997)
```
Select all objects where
[clause rela = Objc
[word focus first lex = '>CR']
]
```
157 results
```
query = '''
verse
clause rela=Objc
=: word lex=>CR
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]:
print(S.glean(r))
```
We have fewer cases: 96 instead of 157.
We are working on the ETCBC version 4c, and the query has been executed against 4b.
There have been coding updates that are relevant to this query, e.g. in Genesis 43:27, which is in the results
on SHEBANQ, but not here. In 4c the `rela` is `Attr`, and not `Objc`.
```
query = '''
verse book=Genesis chapter=43 verse=27
clause
=: word lex=>CR
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
results = sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])
for r in results:
print(r[1], F.rela.v(r[1]), S.glean(r))
```
# By Cody Kingham
[Cody Kingham: MI Hierarchies. p.18n49. First Person Verbs in Narrative](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=1050)
```
SELECT ALL OBJECTS WHERE
[book
[clause txt = 'N'
[word FOCUS sp = verb
[word ps = p1
]
]
]
]
OR
[book
[clause txt = '?N'
[word FOCUS sp = verb
[word ps = p1
]
]
]
]
```
273 results.
```
query = '''
book
clause txt=N|?N
word sp=verb ps=p1
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]:
print(S.glean(r))
```
# By Reinoud Oosting
[Reinoud Oosting: to go + object marker](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=755)
```
Select all objects
where
[clause
[phrase function = Pred OR function = PreC
[word FOCUS sp = verb AND vs = qal AND lex = "HLK[" ]
]
..
[phrase FOCUS
[word First lex = ">T"]
]
]
OR
[clause
[phrase FOCUS
[word First lex = ">T" ]
]
..
[phrase function = Pred OR function = PreC
[word FOCUS sp = verb AND vs = qal AND lex = "HLK["]
]
]
```
4 results.
This is a case where we can simplify greatly because we are not hampered
by automatic constraints on the order of the phrases.
```
query = '''
clause
p1:phrase function=Pred|PreC
word sp=verb vs=qal lex=HLK[
p2:phrase
=: word lex=>T
p1 # p2
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]:
print(S.glean(r))
```
# By Reinoud Oosting (ii)
[Reinoud Oosting: To establish covenant](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=1485)
```
select all objects
where
[clause
[phrase function = Pred OR function = PreC
[word FOCUS sp = verb AND vs = hif AND lex = "QWM[" ]
]
..
[phrase function = Objc
[word FOCUS lex = "BRJT/" ]
]
]
OR
[clause
[phrase function = Objc
[word FOCUS lex = "BRJT/" ]
]
..
[phrase function = Pred OR function = PreC
[word FOCUS sp = verb AND vs = hif AND lex = "QWM["]
]
]
```
13 results
```
query = '''
clause
phrase function=Pred|PreC
word sp=verb vs=hif lex=QWM[
phrase function=Objc
word lex=BRJT/
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
resultsx = sorted((L.u(r[0], otype='verse')+r for r in S.fetch()), key=lambda r: sortKey(r[0]))
for r in resultsx:
print(S.glean(r))
```
# By Reinoud Oosting (iii)
[Reinoud Oosting: To find grace in sight of](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=1484)
```
select all objects
where
[clause
[phrase FOCUS function = Pred OR function = PreC
[word sp = verb AND vs = qal AND lex = "MY>[" ]
]
..
[phrase function = Objc
[word FOCUS lex = "XN/" ]
]
[phrase function = Cmpl
[word FOCUS lex = "B"]
[word FOCUS lex = "<JN/"]
]
]
OR
[clause
[phrase function = Objc
[word FOCUS lex = "XN/" ]
]
[phrase function = Cmpl
[word FOCUS lex = "B"]
[word FOCUS lex = "<JN/"]
..
[phrase function = Pred OR function = PreC
[word FOCUS sp = verb AND vs = qal AND lex = "MY>["]
]
]
]
```
38 results
```
query = '''
clause
p1:phrase function=Pred|PreC
word sp=verb vs=qal lex=MY>[
p2:phrase function=Objc
word lex=XN/
p3:phrase function=Cmpl
word lex=B
<: word lex=<JN/
p2 << p3
'''
S.study(query)
S.showPlan(details=True)
S.count(progress=1000, limit=-1)
```
# By Stephen Ku
[Stephen Ku: Verbless Clauses](https://shebanq.ancient-data.org/hebrew/query?version=4&id=1314)
```
SELECT ALL OBJECTS WHERE
[clause
[phrase function IN (Subj)
[phrase_atom NOT rela IN (Appo,Para,Spec)
[word FOCUS pdp IN (subs,nmpr,prps,prde,prin,adjv)
]
]
]
NOTEXIST [phrase function IN (Pred)]
..
NOTEXIST [phrase function IN (Pred)]
[phrase function IN (PreC)
NOTEXIST [word pdp IN (prep)]
[word FOCUS pdp IN (subs,nmpr,prin,adjv) AND ls IN (card,ordn)]
]
]
```
1441 results with 1244 words in those results.
We do not have the `NOTEXIST` operator, and we cannot say `NOT rela IN`,
so we are at a disadvantage here.
Let's see what we can do.
We can use additional processing to furnish the template and weed out results.
The first thing is: we have to fetch all possible values of the `rela` feature,
in order to see what other values than `Appo`, `Para`, `Spec` it can take.
The function `freqList()` gives us a frequency list of values, we only need the values
other than the indicated ones, separated by a `|`.
We also need to consult the relation legend to pick the proper ordering between the
two phrases.
```
excludedRela = {'Appo', 'Para', 'Spec'}
'|'.join(x[0] for x in F.rela.freqList() if x[0] not in excludedRela)
print(S.relationLegend)
query = '''
clause
p1:phrase function=Subj
phrase_atom rela=NA|rec|par|Adju|Attr|adj|Coor|atr|dem|Resu|Objc|Link|mod|Subj|RgRc|ReVo|Cmpl|PrAd|PreC|Sfxs
word pdp=subs|nmpr|prps|prde|prin|adjv
p2:phrase function=PreC
word pdp=subs|nmpr|prin|adjv ls=card|ordn
p1 << p2
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]:
print(S.glean(r))
```
We have too many results, because we have not posed the restrictions by the `NOTEXIST` operator.
Let's weed out the results that do not satisfy those criteria.
That is, essentially, throwing away those clauses
* that have a phrase with `function=Pred` after the phrase with `function=Subj`
* where the second phrase has a preposition
```
indent(reset=True)
properResults = []
resultWords = set()
for r in S.fetch():
clause = r[0]
phrase1 = r[1]
phrase2 = r[4]
word1 = r[3]
word2 = r[5]
phrases = [p for p in L.d(clause, otype='phrase') if sortKey(p) > sortKey(phrase1)]
words2 = L.d(phrase2, otype='word')
if any(F.function.v(phrase) == 'Pred' for phrase in phrases): continue
if any(F.pdp.v(word) == 'prep' for word in words2): continue
resultWords |= {word1, word2}
properResults.append(r)
info('Found {} proper results with {} words in it'.format(len(properResults), len(resultWords)))
```
We still have many more results than the MQL query on SHEBANQ.
Let us have a look at some results words and compare them with the result words on SHEBANQ.
It is handy to fetch from SHEBANQ the csv file with query results.
```
resultsx = sorted((L.u(r[0], otype='verse')+r for r in properResults), key=lambda r: sortKey(r[0]))
resultWordsx = [(L.u(w, otype='verse')[0], w) for w in sortNodes(resultWords)]
for r in resultWordsx[0:30]:
print(S.glean(r))
```
Comparing with the result list from SHEBANQ, the first word present here but missing from the SHEBANQ output is
```
Genesis 5:14 עֶ֣שֶׂר
```
and in SHEBANQ we see that this word has not been marked with `ls=card|ordn`,
while in the newer ETCBC4c it is!
I have conducted a SHEBANQ query for numerals here
[Dirk Roorda: numerals](https://shebanq.ancient-data.org/hebrew/query?id=1487),
in versions 4 and 4b,
and quite something happened with the encoding of numerals between those versions.
Let us also find the numerals in 4c:
```
S.study('''
word ls=card|ordn
''')
```
So the numbers of numerals in the ETCBC versions are:
4|4b|4c
---|---|---
6839|7014|7013
On the basis of these numbers, this cannot be the sole cause of the discrepancy.
# By Dirk Roorda
[Dirk Roorda: Yesh](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=556)
```
select all objects where
[book [chapter [verse
[clause
[clause_atom
[phrase
[phrase_atom
[word focus lex="JC/" OR lex=">JN/"]
]
]
]
]
]]]
```
926 results
```
query = '''
verse
clause
clause_atom
phrase
phrase_atom
word lex=JC/|>JN/
'''
S.study(query)
S.showPlan()
S.count(progress=1000, limit=-1)
for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]:
print(S.glean(r))
```
# Scaling up ML using Cloud AI Platform
In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud AI Platform. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates *how* to package up a TensorFlow model to run it within Cloud AI Platform.
Later in the course, we will look at ways to make a more effective machine learning model.
## Environment variables for project and bucket
Note that:
<ol>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Python Code
# Model Info
MODEL_NAME = 'taxifare'
# Model Version
MODEL_VERSION = 'v1'
# Training Directory name
TRAINING_DIR = 'taxi_trained'
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION
os.environ['TRAINING_DIR'] = TRAINING_DIR
os.environ['TFVERSION'] = '2.5' # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
```
## Packaging up the code
Take your code and put it into a standard Python package structure. <a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> contain the TensorFlow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>).
```
%%bash
find ${MODEL_NAME}
%%bash
cat ${MODEL_NAME}/trainer/model.py
```
## Find absolute paths to your data
Note the absolute paths below.
```
%%bash
echo "Working Directory: ${PWD}"
echo "Head of taxi-train.csv"
head -1 $PWD/taxi-train.csv
echo "Head of taxi-valid.csv"
head -1 $PWD/taxi-valid.csv
```
## Running the Python module from the command-line
#### Clean model training dir/output dir
```
%%bash
# This is so that the trained model is started fresh each time. However, this needs to be done before training.
rm -rf $PWD/${TRAINING_DIR}
%%bash
# Setup python so it sees the task module which controls the model.py
export PYTHONPATH=${PYTHONPATH}:${PWD}/${MODEL_NAME}
# Currently set for python 2. To run with python 3
# 1. Replace 'python' with 'python3' in the following command
# 2. Edit trainer/task.py to reflect proper module import method
python -m trainer.task \
--train_data_paths="${PWD}/taxi-train*" \
--eval_data_paths=${PWD}/taxi-valid.csv \
--output_dir=${PWD}/${TRAINING_DIR} \
--train_steps=1000 --job-dir=./tmp
%%bash
ls $PWD/${TRAINING_DIR}/export/exporter/
%%writefile ./test.json
{"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2}
%%bash
sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete
%%bash
# This model dir is the model exported after training and is used for prediction
#
model_dir=$(ls ${PWD}/${TRAINING_DIR}/export/exporter | tail -1)
# predict using the trained model
gcloud ai-platform local predict \
--model-dir=${PWD}/${TRAINING_DIR}/export/exporter/${model_dir} \
--json-instances=./test.json
```
#### Clean model training dir/output dir
```
%%bash
# This is so that the trained model is started fresh each time. However, this needs to be done before training.
rm -rf $PWD/${TRAINING_DIR}
```
## Running locally using gcloud
```
%%bash
# Use Cloud Machine Learning Engine to train the model in local file system
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/${MODEL_NAME}/trainer \
-- \
--train_data_paths=${PWD}/taxi-train.csv \
--eval_data_paths=${PWD}/taxi-valid.csv \
--train_steps=1000 \
--output_dir=${PWD}/${TRAINING_DIR}
%%bash
ls $PWD/${TRAINING_DIR}
```
## Submit training job using gcloud
First copy the training data to the cloud. Then, launch a training job.
After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress.
<b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job.
```
%%bash
# Clear Cloud Storage bucket and copy the CSV files to Cloud Storage bucket
echo $BUCKET
gsutil -m rm -rf gs://${BUCKET}/${MODEL_NAME}/smallinput/
gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/${MODEL_NAME}/smallinput/
%%bash
OUTDIR=gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}
JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
# Clear the Cloud Storage Bucket used for the training job
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/${MODEL_NAME}/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version 2.3 \
--python-version 3.5 \
-- \
--train_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-train*" \
--eval_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-valid*" \
--output_dir=$OUTDIR \
--train_steps=10000
```
Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
```
%%bash
gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput
```
## Train on larger dataset
I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this section. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow.
Go to http://bigquery.cloud.google.com/ and type the query:
<pre>
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'nokeyindata' AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
AND ABS(HASH(pickup_datetime)) % 1000 == 1
</pre>
Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.):
<ol>
<li> Click on the "Save As Table" button and note down the name of the dataset and table.
<li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name.
<li> Click on "Export Table"
<li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the "Job History" on the left-hand-side menu)
<li> In the query above, change the final "== 1" to "== 2" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv)
<li> Download the two files, remove the header line from each, and upload them back to GCS.
</ol>
<p/>
<p/>
## Run Cloud training on 1-million row dataset
This took 60 minutes and uses as input 1-million rows. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) was stubbornly at 9.03. So, simply adding more data doesn't help.
```
%%bash
OUTDIR=gs://${BUCKET}/${MODEL_NAME}/${TRAINING_DIR}
JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S)
CRS_BUCKET=cloud-training-demos # use the already exported data
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/${MODEL_NAME}/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version 2.3 \
--python-version 3.5 \
-- \
--train_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/train.csv" \
--eval_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/valid.csv" \
--output_dir=$OUTDIR \
--train_steps=100000
```
## Challenge Exercise
Modify your solution to the challenge exercise in d_trainandevaluate.ipynb appropriately. Make sure that you implement training and deployment. Increase the size of your dataset by 10x since you are running on the cloud. Does your accuracy improve?
Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Going deeper with Tensorflow
In this seminar, we're going to play with [Tensorflow](https://www.tensorflow.org/) and see how it helps us build deep learning models.
If you're running this notebook outside the course environment, you'll need to install tensorflow:
* `pip install tensorflow` should install cpu-only TF on Linux & Mac OS
* If you want GPU support from offset, see [TF install page](https://www.tensorflow.org/install/)
```
import tensorflow as tf
gpu_options = tf.GPUOptions(allow_growth=True, per_process_gpu_memory_fraction=0.1)
s = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
```
# Warming up
For starters, let's implement a python function that computes the sum of squares of numbers from 0 to N-1.
* Use numpy or python
* An array of the numbers 0 to N-1 can be created with numpy.arange(N)
```
import numpy as np
def sum_squares(N):
return <student.Implement_me()>
%%time
sum_squares(10**8)
```
# Tensorflow teaser
Doing the very same thing
```
#I gonna be your function parameter
N = tf.placeholder('int64', name="input_to_your_function")
#i am a recipe on how to produce sum of squares of arange of N given N
result = tf.reduce_sum((tf.range(N)**2))
%%time
#example of computing the same as sum_squares
print(result.eval({N:10**8}))
```
# How does it work?
1. define placeholders where you'll send inputs;
2. make symbolic graph: a recipe for mathematical transformation of those placeholders;
3. compute outputs of your graph with particular values for each placeholder
* output.eval({placeholder:value})
* s.run(output, {placeholder:value})
* So far there are two main entities: "placeholder" and "transformation"
* Both can be numbers, vectors, matrices, tensors, etc.
* Both can be int32/64, floats, or booleans (uint8) of various sizes.
* You can define new transformations as an arbitrary operation on placeholders and other transformations
* `tf.reduce_sum(tf.range(N)**2)` is 3 sequential transformations of the placeholder N
* There's a tensorflow symbolic version for every numpy function
* `a+b, a/b, a**b, ...` behave just like in numpy
* np.mean -> tf.reduce_mean
* np.arange -> tf.range
* np.cumsum -> tf.cumsum
* If you can't find the op you need, see the [docs](https://www.tensorflow.org/api_docs/python).
Still confused? We're gonna fix that.
```
#Default placeholder that can be arbitrary float32 scalar, vector, matrix, etc.
arbitrary_input = tf.placeholder('float32')
#Input vector of arbitrary length
input_vector = tf.placeholder('float32',shape=(None,))
#Input vector that _must_ have 10 elements and integer type
fixed_vector = tf.placeholder('int32',shape=(10,))
#Matrix of arbitrary n_rows and 15 columns (e.g. a minibatch your data table)
input_matrix = tf.placeholder('float32',shape=(None,15))
#You can generally use None whenever you don't need a specific shape
input1 = tf.placeholder('float64',shape=(None,100,None))
input2 = tf.placeholder('int32',shape=(None,None,3,224,224))
#elementwise multiplication
double_the_vector = input_vector*2
#elementwise cosine
elementwise_cosine = tf.cos(input_vector)
#difference between squared vector and vector itself
vector_squares = input_vector**2 - input_vector
#Practice time: create two vectors of type float32
my_vector = <student.init_float32_vector()>
my_vector2 = <student.init_one_more_such_vector()>
#Write a transformation(recipe):
#(vec1)*(vec2) / (sin(vec1) +1)
my_transformation = <student.implementwhatwaswrittenabove()>
print(my_transformation)
#it's okay, it's a symbolic graph
#
dummy = np.arange(5).astype('float32')
my_transformation.eval({my_vector:dummy,my_vector2:dummy[::-1]})
```
### Visualizing graphs
It's often useful to visualize the computation graph when debugging or optimizing.
Interactive visualization is where tensorflow really shines as compared to other frameworks.
There's a special instrument for that, called Tensorboard. You can launch it from console:
```tensorboard --logdir=/tmp/tboard --port=7007```
If you're pathologically afraid of consoles, try this:
```os.system("tensorboard --logdir=/tmp/tboard --port=7007 &")```
_(but don't tell anyone we taught you that)_
```
# launch tensorflow the ugly way, uncomment if you need that
import os
port = 6000 + os.getuid()
print("Port: %d" % port)
#!killall tensorboard
os.system("tensorboard --logdir=./tboard --port=%d &" % port)
# show graph to tensorboard
writer = tf.summary.FileWriter("./tboard", graph=tf.get_default_graph())
writer.close()
```
One basic functionality of tensorboard is drawing graphs. Once you've run the cell above, go to `localhost:7007` in your browser and switch to _graphs_ tab in the topbar.
Here's what you should see:
<img src="https://s12.postimg.org/a374bmffx/tensorboard.png" width=480>
Tensorboard also allows you to plot charts (e.g. learning curves), record images & audio ~~and play flash games~~. This is useful for monitoring learning progress and catching training issues.
One researcher said:
```
If you spent last four hours of your worktime watching as your algorithm prints numbers and draws figures, you're probably doing deep learning wrong.
```
You can read more on tensorboard usage [here](https://www.tensorflow.org/get_started/graph_viz)
# Do It Yourself
__[2 points max]__
```
# Quest #1 - implement a function that computes a mean squared error of two input vectors
# Your function has to take 2 vectors and return a single number
<student.define_inputs_and_transformations()>
mse =<student.define_transformation()>
compute_mse = lambda vector1, vector2: <how to run you graph?>
# Tests
from sklearn.metrics import mean_squared_error
for n in [1,5,10,10**3]:
elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),
np.ones(n),np.random.random(n),np.random.randint(100,size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(mean_squared_error(el,el_2))
my_mse = compute_mse(el,el_2)
if not np.allclose(true_mse,my_mse):
print('Wrong result:')
print('mse(%s,%s)' % (el,el_2))
print("should be: %f, but your function returned %f" % (true_mse,my_mse))
raise ValueError("Что-то не так")
print("All tests passed")
```
# variables
The inputs and transformations have no value outside a function call. This isn't very convenient if you want your model to have parameters (e.g. network weights) that are always present, but can change their value over time.
Tensorflow solves this with `tf.Variable` objects.
* You can assign variable a value at any time in your graph
* Unlike placeholders, there's no need to explicitly pass values to variables when `s.run(...)`-ing
* You can use variables the same way you use transformations
```
#creating shared variable
shared_vector_1 = tf.Variable(initial_value=np.ones(5))
#initialize variable(s) with initial values
s.run(tf.global_variables_initializer())
#evaluating shared variable (outside symbolic graph)
print("initial value", s.run(shared_vector_1))
# within the symbolic graph you use them just as any other input or transformation, no "get value" needed
#setting new value
s.run(shared_vector_1.assign(np.arange(5)))
#getting that new value
print("new value", s.run(shared_vector_1))
```
# tf.gradients - why graphs matter
* Tensorflow can compute derivatives and gradients automatically using the computation graph
* Gradients are computed as a product of elementary derivatives via chain rule:
$$ {\partial f(g(x)) \over \partial x} = {\partial f(g(x)) \over \partial g(x)}\cdot {\partial g(x) \over \partial x} $$
It can get you the derivative of any graph as long as it knows how to differentiate elementary operations
```
my_scalar = tf.placeholder('float32')
scalar_squared = my_scalar**2
#a derivative of scalar_squared by my_scalar
derivative = tf.gradients(scalar_squared, my_scalar)[0]
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3,3)
x_squared, x_squared_der = s.run([scalar_squared,derivative],
{my_scalar:x})
plt.plot(x, x_squared,label="x^2")
plt.plot(x, x_squared_der, label="derivative")
plt.legend();
```
# Why that rocks
```
my_vector = tf.placeholder('float32',[None])
#Compute the gradient of the next weird function over my_scalar and my_vector
#warning! Trying to understand the meaning of that function may result in permanent brain damage
weird_psychotic_function = tf.reduce_mean((my_vector+my_scalar)**(1+tf.nn.moments(my_vector,[0])[1]) + 1./ tf.atan(my_scalar))/(my_scalar**2 + 1) + 0.01*tf.sin(2*my_scalar**1.5)*(tf.reduce_sum(my_vector)* my_scalar**2)*tf.exp((my_scalar-4)**2)/(1+tf.exp((my_scalar-4)**2))*(1.-(tf.exp(-(my_scalar-4)**2))/(1+tf.exp(-(my_scalar-4)**2)))**2
der_by_scalar = <student.compute_grad_over_scalar()>
der_by_vector = <student.compute_grad_over_vector()>
#Plotting your derivative
scalar_space = np.linspace(1, 7, 100)
y = [s.run(weird_psychotic_function, {my_scalar:x, my_vector:[1, 2, 3]})
for x in scalar_space]
plt.plot(scalar_space, y, label='function')
y_der_by_scalar = [s.run(der_by_scalar, {my_scalar:x, my_vector:[1, 2, 3]})
for x in scalar_space]
plt.plot(scalar_space, y_der_by_scalar, label='derivative')
plt.grid()
plt.legend();
```
# Almost done - optimizers
While you can perform gradient descent by hand with automatic grads from above, tensorflow also has some optimization methods implemented for you. Recall momentum & rmsprop?
```
y_guess = tf.Variable(np.zeros(2,dtype='float32'))
y_true = tf.range(1,3,dtype='float32')
loss = tf.reduce_mean((y_guess - y_true + tf.random_normal([2]))**2)
optimizer = tf.train.MomentumOptimizer(0.01,0.9).minimize(loss,var_list=y_guess)
#same, but more detailed:
#updates = [[tf.gradients(loss,y_guess)[0], y_guess]]
#optimizer = tf.train.MomentumOptimizer(0.01,0.9).apply_gradients(updates)
from IPython.display import clear_output
s.run(tf.global_variables_initializer())
guesses = [s.run(y_guess)]
for _ in range(100):
s.run(optimizer)
guesses.append(s.run(y_guess))
clear_output(True)
plt.plot(*zip(*guesses),marker='.')
plt.scatter(*s.run(y_true),c='red')
plt.show()
```
# Logistic regression example
Implement the regular logistic regression training algorithm
Tips:
* Use a shared variable for weights
* X and y are potential inputs
* Compile 2 functions:
* `train_function(X, y)` - returns error and computes weights' new values __(through updates)__
* `predict_fun(X)` - just computes probabilities ("y") given data
We shall train on a two-class MNIST dataset
* please note that the targets `y` are `{0,1}` and not `{-1,1}` as in some formulae (one possible sketch of such a graph follows below)
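One possible sketch of such a graph, using the TF1 placeholder/session API from this notebook (the names, loss and optimizer below are illustrative choices, not the required solution):
```
# A possible sketch, assuming the 8x8 (= 64 feature) digits loaded in the next
# cell and the interactive session `s` created at the top of this notebook.
input_X = tf.placeholder('float32', shape=(None, 64))
input_y = tf.placeholder('float32', shape=(None,))

weights = tf.Variable(tf.zeros([64, 1], dtype='float32'))
bias = tf.Variable(0.0)

logits = tf.squeeze(tf.matmul(input_X, weights), axis=1) + bias
predicted_y = tf.nn.sigmoid(logits)   # P(y=1 | x)

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=input_y, logits=logits))
optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

s.run(tf.global_variables_initializer())
train_function = lambda X, y: s.run([loss, optimizer], {input_X: X, input_y: y})[0]
predict_function = lambda X: s.run(predicted_y, {input_X: X})
```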
```
from sklearn.datasets import load_digits
mnist = load_digits(2)
X,y = mnist.data, mnist.target
print("y [shape - %s]:" % (str(y.shape)), y[:10])
print("X [shape - %s]:" % (str(X.shape)))
print('X:\n',X[:3,:10])
print('y:\n',y[:10])
plt.imshow(X[0].reshape([8,8]))
# inputs and shareds
weights = <student.code_variable()>
input_X = <student.code_placeholder()>
input_y = <student.code_placeholder()>
predicted_y = <predicted probabilities for input_X>
loss = <logistic loss (scalar, mean over sample)>
optimizer = <optimizer that minimizes loss>
train_function = <compile function that takes X and y, returns log loss and updates weights>
predict_function = <compile function that takes X and computes probabilities of y>
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.metrics import roc_auc_score
for i in range(5):
<run optimizer operation>
loss_i = <compute loss at iteration i>
print("loss at iter %i:%.4f" % (i, loss_i))
print("train auc:",roc_auc_score(y_train, predict_function(X_train)))
print("test auc:",roc_auc_score(y_test, predict_function(X_test)))
print ("resulting weights:")
plt.imshow(s.run(weights).reshape(8, -1))  # `weights` is the tf.Variable defined above; s.run fetches its current value
plt.colorbar();
```
# Bonus: my1stNN
Your ultimate task for this week is to build your first neural network [almost] from scratch and pure tensorflow.
This time you will solve the same digit recognition problem, but at a larger scale
* images are now 28x28
* 10 different digits
* 50k samples
Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression.
__[bonus score]__
If you've already beaten logistic regression with a two-layer net, but your enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on the test set.
__SPOILER!__
At the end of the notebook you will find a few tips and frequently made mistakes. If you feel you have enough might to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any insurmountable issues, please do look there before mailing us.
```
from mnist import load_dataset
#[down]loading the original MNIST dataset.
#Please note that you should only train your NN on _train sample,
# _val can be used to evaluate out-of-sample error, compare models or perform early-stopping
# _test should be hidden under a rock until final evaluation... But we both know it is near impossible to catch you evaluating on it.
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print (X_train.shape,y_train.shape)
plt.imshow(X_train[0,0])
<here you could just as well create computation graph>
<this may or may not be a good place to evaluating loss and optimizer>
<this may be a perfect cell to write a training&evaluation loop in>
<predict & evaluate on test here, right? No cheating pls.>
```
# SPOILERS!
Recommended pipeline (a rough sketch of the forward pass follows this list):
* Adapt logistic regression from previous assignment to classify some number against others (e.g. zero vs nonzero)
* Generalize it to multiclass logistic regression.
- Either try to remember lecture 0 or google it.
- Instead of weight vector you'll have to use matrix (feature_id x class_id)
- softmax (exp over sum of exps) can be implemented manually or via tf.nn.softmax (numerically stable)
- probably better to use STOCHASTIC gradient descent (minibatch)
- in which case sample should probably be shuffled (or use random subsamples on each iteration)
* Add a hidden layer. Now your logistic regression uses hidden neurons instead of inputs.
- Hidden layer uses the same math as output layer (ex-logistic regression), but uses some nonlinearity (sigmoid) instead of softmax
- You need to train both layers, not just output layer :)
- Do not initialize layers with zeros (due to symmetry effects). Gaussian noise with a small sigma will do.
- 50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve.
- In the ideal case this totals 2 dot products, 1 softmax and 1 sigmoid
- __make sure this neural network works better than logistic regression__
* Now's the time to try improving the network. Consider layers (size, neuron count), nonlinearities, optimization methods, initialization - whatever you want, but please avoid convolutions for now.
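As a rough illustration of the forward pass described in the pipeline above (all shapes and names are assumptions: 28x28 inputs flattened to 784 features, 50 hidden units, 10 classes):
```
# Hypothetical forward pass for the 2-layer network sketched above.
input_X = tf.placeholder('float32', shape=(None, 784))    # flattened 28x28 images

# small Gaussian initialization breaks the symmetry between hidden units
W1 = tf.Variable(tf.random_normal([784, 50], stddev=0.01))
b1 = tf.Variable(tf.zeros([50]))
W2 = tf.Variable(tf.random_normal([50, 10], stddev=0.01))
b2 = tf.Variable(tf.zeros([10]))

hidden = tf.nn.sigmoid(tf.matmul(input_X, W1) + b1)   # hidden layer (sigmoid nonlinearity)
logits = tf.matmul(hidden, W2) + b2                   # output layer pre-activations
probs = tf.nn.softmax(logits)                         # class probabilities (stable softmax)
```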
# Data Science Ex 00 - Preparation
23.02.2021, Lukas Kretschmar ([email protected])
## Let's have some Fun with Data Science!
Welcome to Data Science.
We will use an interactive environment where you can mix text and code, with the awesome feature that you can execute the code.
## Pre-Installation
We will work with Anaconda.
You can download the package at the following location: https://www.anaconda.com/products/individual
Install the distribution for your operating system that uses **Python 3.8**.
**Note:** Anaconda needs up to **6 GB** of disk space. So, make sure you have this amount of storage available on your machine.
## Installation
Please follow the installation instructions provided under the following link: https://docs.anaconda.com/anaconda/install/
If you don't want to install Anaconda just for your user profile, you can use the instructions under https://docs.anaconda.com/anaconda/install/multi-user/
### Known Issues
#### Non-ASCII characters in the user name (e.g., ä, ö, ü, etc.)
We have encountered problems with students having non-ASCII characters in their installation path (e.g., ä, ö, ü, é, etc.).
This might be a problem for you if you use the default location which points to your user profile (e.g., C:\Users\\[your user name]).
Please choose a location that does only contain ASCII characters.
**Solution:**
- Choose a location that contains only ASCII characters
- Install Anaconda for multiple-users (https://docs.anaconda.com/anaconda/install/multi-user/)
If you've installed Anaconda nevertheless to a "non-suitable" location, there exists a simple workaround.
In this case you have to change the default security settings on your notebook server and open the website every time by hand (or find the URL at which your notebook server is hosted).
You'll find the instructions at the end of this document.
## Post-Installation
### Update
After the installation is complete, you should also run an update to ensure that all packages are up-to-date.
To do so, open an **Anaconda Prompt with elevated privileges (administrator rights)** and enter the following
```
conda update --all
```
### Configuration
Jupyter Notebook opens the file browser in a specific directory.
By default, it's your *My Documents* folder.
You can change the starting location to a different path by editing the configuration.
So, the [following](https://stackoverflow.com/questions/35254852/how-to-change-the-jupyter-start-up-folder) steps are only necessary, if you want Jupyter Notebooks to start from a specific location.
Open an **Anaconda Prompt** and enter the following
```
jupyter notebook --generate-config
```
This command will generate a configuration for your Jupyter installation at *C:\Users\yourusername\\.jupyter\jupyter_notebook_config.py* (for the nerds of you - yeah, it's a python code file).
On a Mac, the file is probably at a similar location.
Open the file with a text editor and search for the line
``` python
#c.NotebookApp.notebook_dir = ''
```
Remove the \# at the beginning (this is the character for code comments) and enter the path where you want Jupyter to start by default.
Use / within your path and not \\, which is common on Windows systems.
Otherwise the path might not work.
Your entry should now look like
``` python
c.NotebookApp.notebook_dir = 'path/to/your/folder'
```
### Change the security settings
**PLEASE NOTE: This step is only necessary if your notebooks won't start properly (e.g., an installation at a location with non-ASCII characters).**
If your JupyterLab or Jupyter Notebook starts fine, you do not need to change the security settings.
Within the configuration, you'll find the following line
``` python
# c.NotebookApp.token = '<generated>'
```
By default, a new token is generated every time you start a new server.
Now, you can either set the token to a fixed value, like
``` python
c.NotebookApp.token = 'ffed3a68-f5b2-47a3-bb11-df8711c5aab3'
```
*Note: This is just an example. You can choose your own token value.*
or to none (security is disabled)
``` python
c.NotebookApp.token = ''
```
In the first case, your server will always run at
- **JupyterLab:** http://localhost:8888/lab?token=ffed3a68-f5b2-47a3-bb11-df8711c5aab3
- **Jupyter Notebook:** http://localhost:8888/tree?token=ffed3a68-f5b2-47a3-bb11-df8711c5aab3
In the second case, your server will always run at
- **JupyterLab:** http://localhost:8888/lab
- **Juypter Notebook:** http://localhost:8888/tree
Please note: The port (8888) might be incremented by 1 if 8888 is already blocked.
Thus, if http://localhost:8888/lab is already used, the next server will be hosted at http://localhost:8889/lab
### Run Anaconda
Check that your installation is running by starting **Anaconda**.
You should be able to get to the following screen.
<img src="./AnacondaNavigator.png" style="height:600px" />
And then try to start either **JupyterLab** or **Jupyter Notebook**.
Both tools will open a new browser tab.
```
import cv2 as cv
from scipy.spatial import distance
import numpy as np
from collections import OrderedDict
```
##### Object Tracking Class
```
class Tracker:
def __init__(self, maxLost = 30): # maxLost: maximum number of consecutive frames an object may go undetected before it is dropped
self.nextObjectID = 0 # ID of next object
self.objects = OrderedDict() # stores ID:Locations
self.lost = OrderedDict() # stores ID:Lost_count
self.maxLost = maxLost # maximum number of frames object was not detected.
def addObject(self, new_object_location):
self.objects[self.nextObjectID] = new_object_location # store new object location
self.lost[self.nextObjectID] = 0 # initialize frame_counts for when new object is undetected
self.nextObjectID += 1
def removeObject(self, objectID): # remove tracker data after object is lost
del self.objects[objectID]
del self.lost[objectID]
@staticmethod
def getLocation(bounding_box):
xlt, ylt, xrb, yrb = bounding_box
return (int((xlt + xrb) / 2.0), int((ylt + yrb) / 2.0))
def update(self, detections):
if len(detections) == 0: # if no object detected in the frame
lost_ids = list(self.lost.keys())
for objectID in lost_ids:
self.lost[objectID] +=1
if self.lost[objectID] > self.maxLost: self.removeObject(objectID)
return self.objects
new_object_locations = np.zeros((len(detections), 2), dtype="int") # current object locations
for (i, detection) in enumerate(detections): new_object_locations[i] = self.getLocation(detection)
if len(self.objects)==0:
for i in range(0, len(detections)): self.addObject(new_object_locations[i])
else:
objectIDs = list(self.objects.keys())
previous_object_locations = np.array(list(self.objects.values()))
D = distance.cdist(previous_object_locations, new_object_locations) # pairwise distance between previous and current
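# Greedy nearest-neighbour matching: process tracked objects (rows) in order of
# their closest detection distance and pair each with its nearest unclaimed detection (column).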
row_idx = D.min(axis=1).argsort() # (minimum distance of previous from current).sort_as_per_index
cols_idx = D.argmin(axis=1)[row_idx] # index of minimum distance of previous from current
assignedRows, assignedCols = set(), set()
for (row, col) in zip(row_idx, cols_idx):
if row in assignedRows or col in assignedCols:
continue
objectID = objectIDs[row]
self.objects[objectID] = new_object_locations[col]
self.lost[objectID] = 0
assignedRows.add(row)
assignedCols.add(col)
unassignedRows = set(range(0, D.shape[0])).difference(assignedRows)
unassignedCols = set(range(0, D.shape[1])).difference(assignedCols)
if D.shape[0]>=D.shape[1]:
for row in unassignedRows:
objectID = objectIDs[row]
self.lost[objectID] += 1
if self.lost[objectID] > self.maxLost:
self.removeObject(objectID)
else:
for col in unassignedCols:
self.addObject(new_object_locations[col])
return self.objects
```
#### Loading Object Detector Model
##### Tensorflow model for Object Detection and Tracking
Here, the SSD Object Detection Model is used.
For more details about single shot detection (SSD), refer the following:
- **Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016, October). Ssd: Single shot multibox detector. In European conference on computer vision (pp. 21-37). Springer, Cham.**
- Research paper link: https://arxiv.org/abs/1512.02325
- The pretrained model: https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API#use-existing-config-file-for-your-model
```
model_info = {"config_path":"./tensorflow_model_dir/ssd_mobilenet_v2_coco_2018_03_29.pbtxt",
"model_weights_path":"./tensorflow_model_dir/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb",
"object_names": {0: 'background',
1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle', 5: 'airplane', 6: 'bus',
7: 'train', 8: 'truck', 9: 'boat', 10: 'traffic light', 11: 'fire hydrant',
13: 'stop sign', 14: 'parking meter', 15: 'bench', 16: 'bird', 17: 'cat',
18: 'dog', 19: 'horse', 20: 'sheep', 21: 'cow', 22: 'elephant', 23: 'bear',
24: 'zebra', 25: 'giraffe', 27: 'backpack', 28: 'umbrella', 31: 'handbag',
32: 'tie', 33: 'suitcase', 34: 'frisbee', 35: 'skis', 36: 'snowboard',
37: 'sports ball', 38: 'kite', 39: 'baseball bat', 40: 'baseball glove',
41: 'skateboard', 42: 'surfboard', 43: 'tennis racket', 44: 'bottle',
46: 'wine glass', 47: 'cup', 48: 'fork', 49: 'knife', 50: 'spoon',
51: 'bowl', 52: 'banana', 53: 'apple', 54: 'sandwich', 55: 'orange',
56: 'broccoli', 57: 'carrot', 58: 'hot dog', 59: 'pizza', 60: 'donut',
61: 'cake', 62: 'chair', 63: 'couch', 64: 'potted plant', 65: 'bed',
67: 'dining table', 70: 'toilet', 72: 'tv', 73: 'laptop', 74: 'mouse',
75: 'remote', 76: 'keyboard', 77: 'cell phone', 78: 'microwave', 79: 'oven',
80: 'toaster', 81: 'sink', 82: 'refrigerator', 84: 'book', 85: 'clock',
86: 'vase', 87: 'scissors', 88: 'teddy bear', 89: 'hair drier', 90: 'toothbrush'},
"confidence_threshold": 0.5,
"threshold": 0.4
}
net = cv.dnn.readNetFromTensorflow(model_info["model_weights_path"], model_info["config_path"])
np.random.seed(12345)
bbox_colors = {key: np.random.randint(0, 255, size=(3,)).tolist() for key in model_info['object_names'].keys()}
```
##### Instantiate the Tracker Class
```
maxLost = 5 # maximum number of consecutive frames an object may go undetected before it is removed
tracker = Tracker(maxLost = maxLost)
```
##### Initiate opencv video capture object
The `video_src` can take two values:
1. If `video_src=0`: OpenCV accesses the camera connected through USB
2. If `video_src='video_file_path'`: OpenCV will access the video file at the given path (can be MP4, AVI, etc format)
```
video_src = "./data/video_test5.mp4"#0
cap = cv.VideoCapture(video_src)
```
##### Start object detection and tracking
```
(H, W) = (None, None) # input image height and width for the network
writer = None
while(True):
ok, image = cap.read()
if not ok:
print("Cannot read the video feed.")
break
if W is None or H is None: (H, W) = image.shape[:2]
blob = cv.dnn.blobFromImage(image, size=(300, 300), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()
detections_bbox = [] # bounding box for detections
boxes, confidences, classIDs = [], [], []
for detection in detections[0, 0, :, :]:
classID = detection[1]
confidence = detection[2]
if confidence > model_info['confidence_threshold']:
box = detection[3:7] * np.array([W, H, W, H])
(left, top, right, bottom) = box.astype("int")
width = right - left + 1
height = bottom - top + 1
boxes.append([int(left), int(top), int(width), int(height)])
confidences.append(float(confidence))
classIDs.append(int(classID))
indices = cv.dnn.NMSBoxes(boxes, confidences, model_info["confidence_threshold"], model_info["threshold"])
if len(indices)>0:
for i in indices.flatten():
x, y, w, h = boxes[i][0], boxes[i][1], boxes[i][2], boxes[i][3]
detections_bbox.append((x, y, x+w, y+h))
clr = [int(c) for c in bbox_colors[classIDs[i]]]
cv.rectangle(image, (x, y), (x+w, y+h), clr, 2)
label = "{}:{:.4f}".format(model_info["object_names"][classIDs[i]], confidences[i])
(label_width, label_height), baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 2)
y_label = max(y, label_height)
cv.rectangle(image, (x, y_label-label_height),
(x+label_width, y_label+baseLine), (255, 255, 255), cv.FILLED)
cv.putText(image, label, (x, y_label), cv.FONT_HERSHEY_SIMPLEX, 0.5, clr, 2)
objects = tracker.update(detections_bbox) # update tracker based on the newly detected objects
for (objectID, centroid) in objects.items():
text = "ID {}".format(objectID)
cv.putText(image, text, (centroid[0] - 10, centroid[1] - 10), cv.FONT_HERSHEY_SIMPLEX,
0.5, (0, 255, 0), 2)
cv.circle(image, (centroid[0], centroid[1]), 4, (0, 255, 0), -1)
cv.imshow("image", image)
if cv.waitKey(1) & 0xFF == ord('q'):
break
if writer is None:
fourcc = cv.VideoWriter_fourcc(*"MJPG")
writer = cv.VideoWriter("output.avi", fourcc, 30, (W, H), True)
writer.write(image)
writer.release()
cap.release()
cv.destroyWindow("image")
```
```
import matplotlib.pyplot as plt
import os
import numpy as np
import itertools
from glob import glob
import pandas as pd
from itertools import product
import os
from annsa.model_classes import f1
from tensorflow.python.keras.models import load_model
from pandas import read_csv
from sklearn.metrics import auc
from matplotlib.lines import Line2D
```
#### Import model, training function
```
def plot_learning_curve_points(sizes, errors, label='None', linestyle='-', color='k', marker='.', linewidth=2):
'''
Plots the learning curve.
Inputs:
sizes : list, int
List of traning dataset sizes
errors : list, float
List of final errors for some metric
'''
average = np.average(errors, axis=1)
std = np.var(errors)
plt.plot(sizes, average, label=label, linestyle=linestyle, color=color, linewidth=linewidth,)
plt.scatter(np.array([[size]*5 for size in sizes]).flatten(),
np.array(errors).flatten(),
color=color,
marker=marker)
import matplotlib.colors
def categorical_cmap(nc, nsc, cmap="tab10", continuous=False):
if nc > plt.get_cmap(cmap).N:
raise ValueError("Too many categories for colormap.")
if continuous:
ccolors = plt.get_cmap(cmap)(np.linspace(0,1,nc))
else:
ccolors = plt.get_cmap(cmap)(np.arange(nc, dtype=int))
cols = np.zeros((nc*nsc, 3))
for i, c in enumerate(ccolors):
chsv = matplotlib.colors.rgb_to_hsv(c[:3])
arhsv = np.tile(chsv,nsc).reshape(nsc,3)
arhsv[:,1] = np.linspace(chsv[1],0.25,nsc)
arhsv[:,2] = np.linspace(chsv[2],1,nsc)
rgb = matplotlib.colors.hsv_to_rgb(arhsv)
cols[i*nsc:(i+1)*nsc,:] = rgb
cmap = matplotlib.colors.ListedColormap(cols)
return cmap
c1 = categorical_cmap(5,1, cmap="tab10")
plt.scatter(np.arange(5*1),np.ones(5*1)+1, c=np.arange(5*1), s=180, cmap=c1)
line_colors = {'caednn' : c1.colors[0],
'daednn' : c1.colors[1],
'dnn' : c1.colors[2],
'cnn' : c1.colors[3],
}
line_styles = {'test' : '-',
'train' : '--',}
marker_styles = {'test' : '',
'train' : '',}
dependencies = {'f1' : f1}
```
# All models Full
```
plt.figure(figsize=(10,5))
matplotlib.rcParams.update({'font.size': 22})
dataset_modes = ['test', 'train']
models = [
'dnn',
'cnn',
'caednn',
'daednn',
]
model_modes = ['full']
train_sizes = [
'50',
'100',
'500',
'1000',
'5000',
'10000',
'15000',
'20000',
]
errors_all = {}
for model, model_mode, dataset_mode in product (models, model_modes, dataset_modes):
if dataset_mode == 'train':
loss = 'f1'
else:
loss = 'val_f1'
errors = []
for train_size in train_sizes:
tmp_error = []
identifier = '-final_trainsize'
if model == 'cnn':
identifier = '-final-reluupdate_trainsize'
file_path = os.path.join(
'..',
'final_training_notebooks',
'final-models-keras',
'learningcurve-'+model+'-'+model_mode+identifier+train_size+'_'+'*.log',)
for tmp_file_path in glob(file_path):
history_temp = read_csv(tmp_file_path)
tmp_error.append(history_temp.tail(1).iloc[0][loss])
errors.append(np.array(tmp_error))
errors = np.array(errors)
errors_all[dataset_mode + '_' + model] = np.average(errors, axis=1)
plot_learning_curve_points([int(train_size) for train_size in train_sizes],
errors,
label=model+' '+dataset_mode+'ing set',
linestyle=line_styles[dataset_mode],
color=line_colors[model],
marker=marker_styles[dataset_mode],
linewidth=2)
custom_lines = [Line2D([0], [0], color=c1.colors[3], lw=4),
Line2D([0], [0], color=c1.colors[2], lw=4),
Line2D([0], [0], color=c1.colors[0], lw=4),
Line2D([0], [0], color=c1.colors[1], lw=4),
Line2D([0], [0], color='k', linestyle=line_styles['test'], marker=marker_styles['test'], markersize=15, lw=2),
Line2D([0], [0], color='k', linestyle=line_styles['train'], marker=marker_styles['train'], markersize=15, lw=2),
]
plt.legend(custom_lines,
['CNN', 'DNN', 'CAE', 'DAE', 'Validation', 'Training'],
prop={'size': 15})
plt.ylim([0,1.1])
plt.xlabel('Number of Examples')
plt.ylabel('F1 Score')
plt.xticks([0, 5000, 10000, 15000, 20000], [0, 5000, 10000, 15000, 20000])
for item in errors_all:
print(item, round((auc([int(train_size) for train_size in train_sizes], errors_all[item]))/20000., 2))
```
# All models Easy
```
plt.figure(figsize=(10,5))
dataset_modes = ['test', 'train']
models = [
'dnn',
'cnn',
'caednn',
'daednn',
]
model_modes = ['easy']
train_sizes = [
'50',
'100',
'500',
'1000',
'5000',
'10000',
'15000',
'20000',
]
for model, model_mode, dataset_mode in product (models, model_modes, dataset_modes):
if dataset_mode == 'train':
loss = 'f1'
else:
loss = 'val_f1'
errors = []
for train_size in train_sizes:
tmp_error = []
identifier = '-final_trainsize'
if model == 'cnn':
identifier = '-final-reluupdate_trainsize'
file_path = os.path.join(
'..',
'final_training_notebooks',
'final-models-keras',
'learningcurve-'+model+'-'+model_mode+identifier+train_size+'_'+'*.log',)
for tmp_file_path in glob(file_path):
history_temp = read_csv(tmp_file_path)
tmp_error.append(history_temp.tail(1).iloc[0][loss])
errors.append(np.array(tmp_error))
errors = np.array(errors)
errors_all[dataset_mode + '_' + model] = np.average(errors, axis=1)
plot_learning_curve_points([int(train_size) for train_size in train_sizes],
errors,
label=model+' '+dataset_mode+'ing set',
linestyle=line_styles[dataset_mode],
color=line_colors[model],
marker=marker_styles[dataset_mode],
linewidth=2)
custom_lines = [Line2D([0], [0], color=c1.colors[3], lw=4),
Line2D([0], [0], color=c1.colors[2], lw=4),
Line2D([0], [0], color=c1.colors[0], lw=4),
Line2D([0], [0], color=c1.colors[1], lw=4),
Line2D([0], [0], color='k', linestyle=line_styles['test'], marker=marker_styles['test'], markersize=15, lw=2),
Line2D([0], [0], color='k', linestyle=line_styles['train'], marker=marker_styles['train'], markersize=15, lw=2),
]
plt.legend(custom_lines,
['CNN', 'DNN', 'CAE', 'DAE', 'Validation', 'Training'],
prop={'size': 15})
plt.ylim([0,1.1])
plt.xlabel('Number of Examples')
plt.ylabel('F1 Score')
plt.xticks([0, 5000, 10000, 15000, 20000], [0, 5000, 10000, 15000, 20000])
for item in errors_all:
print(item, round((auc([int(train_size) for train_size in train_sizes], errors_all[item]))/20000., 2))
```
# A glimpse into the inner workings of a 2-layer neural network
```
%load_ext autoreload
%autoreload 2
import numpy as np
from numpy import random as nprand
from cs771 import plotData as pd, utils, genSyntheticData as gsd
from keras.models import Sequential
from keras.layers import Dense as dense
from keras import optimizers
d = 2
n = 20
r = 2
tmp1 = gsd.genSphericalData( d, n, [-5, -5], r )
tmp2 = gsd.genSphericalData( d, n, [5, 5], r )
XPos = np.vstack( (tmp1, tmp2) )
yPos = np.ones( (XPos.shape[0],) )
tmp1 = gsd.genSphericalData( d, n, [-5, 5], r )
tmp2 = gsd.genSphericalData( d, n, [5, -5], r )
XNeg = np.vstack( (tmp1, tmp2) )
yNeg = np.zeros( (XNeg.shape[0],) )
X = np.vstack( (XPos, XNeg) )
y = np.concatenate( (yPos, yNeg) )
n = X.shape[0]
idx = nprand.permutation( n )
X = X[idx]
y = y[idx]
mu = np.mean( X, axis = 0 )
sigma = np.std( X, axis = 0 )
X -= mu
X /= sigma
# You may get deprecation warnings about tensorflow when you run
# this cell for the first time. This is okay and not an error
# It seems TF has disabled several functional API in its new version
# and keras routines have not (yet) been upgraded to use them and
# continue to use the old (deprecated) routines hence the warnings
model = Sequential()
model.add( dense( units = 2, activation = "sigmoid", input_dim = 2, use_bias = True ) )
model.add( dense( units = 1, activation = "sigmoid", use_bias = True ) )
# Setting a very large learning rate lr may make the NN temperamental and cause
# it to converge to a local optima. Keras supports "callbacks" which allow the
# user to dynamically lower learning rate if progress has stalled
opt = optimizers.Adam( lr = 0.1, beta_1 = 0.9, beta_2 = 0.999, amsgrad = True )
# Metrics are just for sake of display, not for sake of training
# Set verbose = 1 or 2 to see metrics reported for every epoch of training
# Notice that whereas loss value goes down almost monotonically, the accuracy
# may fluctuate i.e. go down a bit before finally going up again
model.compile( loss = "binary_crossentropy", optimizer = opt, metrics = ["binary_accuracy"] )
history = model.fit( X, y, epochs = 50, batch_size = n//8, verbose = 0 )
fig0, ax0 = pd.getFigList( nrows = 1, ncols = 2, sizex = 5, sizey = 4 )
ax0[0].plot(history.history['loss'])
ax0[1].plot(history.history['binary_accuracy'])
ax0[0].set_xlabel( "Epochs" )
ax0[0].set_ylabel( "Binary Cross Entropy Loss" )
ax0[1].set_xlabel( "Epochs" )
ax0[1].set_ylabel( "Classification Accuracy" )
def ffpredict( X ):
# Our shading code anyway converts predictions to [0,1] scores
return model.predict_classes( X )
fig = pd.getFigure( 10, 10 )
(xlim, ylim) = np.max( np.abs( X ), axis = 0 ) * 1.1
pd.shade2D( ffpredict, fig, mode = "batch", xlim = xlim, ylim = ylim )
pd.plot2D( X[y == 1], fig, color = 'g', marker = '+' )
pd.plot2D( X[y == 0], fig, color = 'r', marker = 'x' )
def sigmoid( a ):
return 1/(1 + np.exp( -a ))
def getHiddenLayerActivations( X ):
return sigmoid( X.dot( w ) + b )
# Our network learns a function of the form (s = sigmoid function)
# s( u.T * s( P.T * x + q ) + v )
# Weights that go to the hidden layer
P = model.layers[0].get_weights()[0]
q = model.layers[0].get_weights()[1]
# Weights that go to the output layer
u = model.layers[1].get_weights()[0]
v = model.layers[1].get_weights()[1]
# Get the post activations of the first hidden layer neuron
# The multiplication with sign(u[0]) is just to make sure
# that the colors turn out nicely in the plots
w = P[:,0] * np.sign( u[0] )
b = q[0] * np.sign( u[0] )
fig2 = pd.getFigure( 10, 10 )
pd.shade2DProb( getHiddenLayerActivations, fig2, mode = "batch", xlim = xlim, ylim = ylim )
pd.plot2D( X[y == 1], fig2, color = 'g', marker = '+' )
pd.plot2D( X[y == 0], fig2, color = 'r', marker = 'x' )
# Get the post activations of the second hidden layer neuron
# The multiplication with sign(u[1]) is yet again just to make
# sure that the colors turn out nicely in the plots
w = P[:,1] * np.sign( u[1] )
b = q[1] * np.sign( u[1] )
fig3 = pd.getFigure( 10, 10 )
pd.shade2DProb( getHiddenLayerActivations, fig3, mode = "batch", xlim = xlim, ylim = ylim )
pd.plot2D( X[y == 1], fig3, color = 'g', marker = '+' )
pd.plot2D( X[y == 0], fig3, color = 'r', marker = 'x' )
# Note that the two nodes in the hidden layer cooperate to learn the classifier
# Neither node can fully classify the red points from the green points on its own
# so they share the burden. Each node takes up the responsibility of isolating
# one red clump from the rest of the data. Together they make a perfect classifier :)
# One can interpret these two nodes as learning two useful features such that the
# learning problem become linearly separable when given these two new features
print( model.layers[0].get_weights() )
print( model.layers[1].get_weights() )
# See the value of the weights below and verify that they indeed are of the form
# that we saw in the toy code (that demonstrated universality of NN)
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_DynamicNetworks/W3D2_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 3, Day 2, Tutorial 1
# Neuronal Network Dynamics: Neural Rate Models
## Background
The brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is a very large network of densely interconnected neurons.
The activity of neurons is constantly evolving in time. For this reason, neurons can be modeled as dynamical systems. The dynamical systems approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, network science, and statistical models). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., epilepsy or Parkinson's disease) tell us that it is crucial to study neuronal dynamics if we want to understand the brain.
In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.
## Objectives
In this tutorial we will learn how to build a firing rate model of a single population of excitatory neurons.
Steps:
- Write the equation for the firing rate dynamics of a 1D excitatory population.
- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.
- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system.
- Investigate the stability of the fixed points by linearizing the dynamics around them.
# Setup
```
# Imports
import matplotlib.pyplot as plt # import matplotlib
import numpy as np # import numpy
import scipy.optimize as opt # import root-finding algorithm
import ipywidgets as widgets # interactive display
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = 6, 4
my_fontsize = 16
my_params = {'axes.labelsize': my_fontsize,
'axes.titlesize': my_fontsize,
'figure.figsize': [fig_w, fig_h],
'font.size': my_fontsize,
'legend.fontsize': my_fontsize-4,
'lines.markersize': 8.,
'lines.linewidth': 2.,
'xtick.labelsize': my_fontsize-2,
'ytick.labelsize': my_fontsize-2}
plt.rcParams.update(my_params)
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6,4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('F(x)', fontsize=14.)
plt.show()
#@title Helper functions
def plot_dE_E(E, dEdt):
plt.figure()
plt.plot(E, dEdt, 'k')
plt.plot(E, 0.*E, 'k--')
plt.xlabel('E activity')
plt.ylabel(r'$\frac{dE}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x,dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('dF(x)', fontsize=14.)
plt.show()
```
# Neuronal network dynamics
```
#@title Video: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ZSsAaeaG9ZM", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
## Dynamics of a single excitatory population
Individual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of different network parameters.
\begin{align}
\tau_E \frac{dE}{dt} &= -E + F(w_{EE}E + I^{\text{ext}}_E) \quad\qquad (1)
\end{align}
$E(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau_E$ controls the timescale of the evolution of the average firing rate, $w_{EE}$ denotes the strength (synaptic weight) of the recurrent excitatory input to the population, $I^{\text{ext}}_E$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.
To start building the model, please execute the cell below to initialize the simulation parameters.
```
#@title Default parameters for a single excitatory population model
def default_parsE( **kwargs):
pars = {}
### Excitatory parameters ###
pars['tau_E'] = 1. # Timescale of the E population [ms]
pars['a_E'] = 1.2 # Gain of the E population
pars['theta_E'] = 2.8 # Threshold of the E population
### Connection strength ###
pars['wEE'] = 0. # E to E, we first set it to 0
### External input ###
pars['I_ext_E'] = 0.
### simulation parameters ###
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['E_init'] = 0.2 # Initial value of E
### External parameters if any ###
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized time points [ms]
return pars
```
You can use:
- `pars = default_parsE()` to get all the parameters, and then you can execute `print(pars)` to check these parameters.
- `pars = default_parsE(T=T_sim, dt=time_step)` to set new simulation time and time step
- After `pars = default_parsE()`, use `pars['New_para'] = value` to add a new parameter with its value (see the short sketch below)
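For concreteness, here is a short sketch of those three usages (assuming the cell defining `default_parsE` above has been run; the parameter name `my_new_par` is just an example):
```
pars = default_parsE()               # all default parameters
print(pars['tau_E'], pars['dt'])     # inspect a couple of entries

pars = default_parsE(T=50., dt=0.5)  # override simulation time and time step
pars['my_new_par'] = 1.0             # add a new parameter after creation
```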
## F-I curves
In electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.
The transfer function $F(\cdot)$ in Equation (1) represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values.
A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.
$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$
The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.
Many other (generally monotonic) transfer functions can also be used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$.
### Exercise 1: Implement F-I curve
Let's first investigate the activation functions before simulating the dynamics of the entire population.
In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
```
# Exercise 1
def F(x,a,theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################################################
## TODO for students: compute f = F(x), remove the NotImplementedError once done#
#################################################################################
# the exponential function: np.exp(.)
# f = ...
raise NotImplementedError("Student exercise: implement the f-I function")
return f
# Uncomment these lines when you've filled the function, then run the cell again
# to plot the f-I curve.
pars = default_parsE() # get default parameters
# print(pars) # print out pars to get familiar with parameters
x = np.arange(0,10,.1) # set the range of input
# Uncomment this when you fill the exercise, and call the function
# plot_fI(x, F(x,pars['a_E'],pars['theta_E']))
# to_remove solution
def F(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
the population activation response F(x) for input x
"""
# add the expression of f = F(x)
f = (1+np.exp(-a*(x-theta)))**-1 - (1+np.exp(a*theta))**-1
return f
pars = default_parsE() # get default parameters
x = np.arange(0,10,.1) # set the range of input
with plt.xkcd():
plot_fI(x, F(x,pars['a_E'],pars['theta_E']))
```
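As a quick sanity check of the offset term in Equation (2): `F` should return (approximately) zero for zero input. A minimal check, assuming the solution cell above has been executed:
```
# F(0; a, theta) is ~0 because of the second term in Equation (2)
print(F(0., pars['a_E'], pars['theta_E']))  # expect 0.0 (up to floating point error)
```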
### Interactive Demo: Parameter exploration of F-I curve
Here's an interactive demo that shows how the F-I curve is changing for different values of the gain and threshold parameters.
**Remember to enable the demo by running the cell.**
```
#@title F-I curve Explorer
def interactive_plot_FI(a, theta):
'''
Population activation function.
Expects:
a : the gain of the function
theta : the threshold of the function
Returns:
plot the F-I curve with given parameters
'''
# set the range of input
x = np.arange(0,10,.1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('F(x)', fontsize=14.)
plt.show()
_ = widgets.interact(interactive_plot_FI, a = (0.3, 3., 0.3), \
theta = (2., 4., 0.2))
```
## Simulation scheme of E dynamics
Because $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation (1) can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:
\begin{align}
&\frac{dE}{dt} \approx \frac{E[k+1]-E[k]}{\Delta t}
\end{align}
where $E[k] = E(k\Delta t)$.
Thus,
$$\Delta E[k] = \frac{\Delta t}{\tau_E}\big[-E[k] + F(w_{EE}E[k] + I^{\text{ext}}_E;a_E,\theta_E)\big]$$
Hence, Equation (1) is updated at each time step by:
$$E[k+1] = E[k] + \Delta E[k]$$
**_Please execute the following cell to enable the WC simulator_**
```
#@title E population simulator: `simulate_E`
def simulate_E(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
E : Activity of excitatory population (array)
"""
# Set parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
E_init = pars['E_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
E = np.zeros(Lt)
E[0] = E_init
I_ext_E = I_ext_E*np.ones(Lt)
# Update the E activity
for k in range(Lt-1):
dE = dt/tau_E * (-E[k] + F(wEE*E[k]+I_ext_E[k], a_E, theta_E))
E[k+1] = E[k] + dE
return E
help(simulate_E)
```
#### Interactive Demo: Parameter Exploration of single population dynamics
Note that $w_{EE}=0$, as in the default setting, means no recurrent input to the excitatory population in Equation (1). Hence, the dynamics is entirely determined by the external input $I_{E}^{\text{ext}}$. Try to explore how $E_{sim}(t)$ changes with different $I_{E}^{\text{ext}}$ and $\tau_E$ parameter values, and investigate the relationship between $F(I_{E}^{\text{ext}}; a_E, \theta_E)$ and the steady value of E. Note that, $E_{ana}(t)$ denotes the analytical solution.
```
#@title Mean-field model Explorer
# get default parameters
pars = default_parsE(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau_E):
# set external input and time constant
pars['I_ext_E'] = I_ext
pars['tau_E'] = tau_E
# simulation
E = simulate_E(pars)
# Analytical Solution
E_ana = pars['E_init'] + (F(I_ext,pars['a_E'],pars['theta_E'])-pars['E_init'])*\
(1.-np.exp(-pars['range_t']/pars['tau_E']))
# plot
plt.figure()
plt.plot(pars['range_t'], E, 'b', label=r'$E_{\mathrm{sim}}$(t)', alpha=0.5, zorder=1)
plt.plot(pars['range_t'], E_ana, 'b--', lw=5, dashes=(2,2),\
label=r'$E_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'], F(I_ext,pars['a_E'],pars['theta_E'])\
*np.ones(pars['range_t'].size), 'k--', label=r'$F(I_E^{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('E activity', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext = (0.0, 10., 1.),\
tau_E = (1., 5., 0.2))
```
### Think!
Above, we have numerically solved a system that is driven by a positive input and that, if $w_{EE} \neq 0$, also receives an excitatory recurrent input (**try changing the value of $w_{EE}$ to a positive number**). Yet, $E(t)$ either decays to zero or reaches a fixed non-zero value.
- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that E(t) stays finite?
- Which parameter would you change in order to increase the maximum value of the response?
## Fixed points of the E system
```
#@title Video: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="B31fX6V0PZ4", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($E$) is zero, i.e. $\frac{dE}{dt}=0$.
We can find the steady state of Equation $1$ by setting $\displaystyle{\frac{dE}{dt}=0}$ and solving for $E$:
$$E_{\text{steady}} = F(w_{EE}E_{\text{steady}} + I^{\text{ext}}_E;a_E,\theta_E), \qquad (3)$$
When it exists, the solution of Equation $3$ defines a **fixed point** of the dynamics, which satisfies $\displaystyle{\frac{dE}{dt}=0}$ (and determines the steady state of the system). Notice that the right-hand side of the last equation depends itself on $E_{\text{steady}}$. If $F(x)$ is nonlinear, it is not always possible to find an analytical solution; the fixed point can instead be found via numerical simulations, as we will do later.
From the Interactive Demo one could also notice that the value of $\tau_E$ influences how quickly the activity will converge to the steady state from its initial value.
In the specific case of $w_{EE}=0$, we can also compute the analytical solution of Equation $1$ (i.e., the thick blue dashed line in the Interactive Demo above) and deduce the role of $\tau_E$ in determining the convergence to the fixed point:
$$\displaystyle{E(t) = \big[F(I^{\text{ext}}_E;a_E,\theta_E) - E(t=0)\big] (1-\text{e}^{-\frac{t}{\tau_E}}) + E(t=0)}$$
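As a quick numerical cross-check of this closed-form solution (a sketch, assuming the earlier cells defining `default_parsE`, `F`, and `simulate_E` have been run):
```
# Compare the Euler simulation with the analytical solution for w_EE = 0
pars = default_parsE(T=20.)
pars['wEE'] = 0.
pars['I_ext_E'] = 2.0
E_sim = simulate_E(pars)
F_ss = F(pars['I_ext_E'], pars['a_E'], pars['theta_E'])
E_ana = pars['E_init'] + (F_ss - pars['E_init']) * (1. - np.exp(-pars['range_t']/pars['tau_E']))
print(np.max(np.abs(E_sim - E_ana)))  # small: only the Euler discretization error remains
```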
We can now numerically calculate the fixed point with the `scipy.optimize.root` function.
<font size=3><font color='gray'>_(note that at the very beginning, we `import scipy.optimize as opt` )_</font></font>.
Please execute the cell below to define the functions `my_fpE`, `check_fpE`, and `plot_fpE`
```
#@title Function of calculating the fixed point
def my_fpE(pars, E_init):
# get the parameters
a_E, theta_E = pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
# define the right hand of E dynamics
def my_WCr(x):
E = x[0]
dEdt=(-E + F(wEE*E+I_ext_E,a_E,theta_E))
y = np.array(dEdt)
return y
x0 = np.array(E_init)
x_fp = opt.root(my_WCr, x0).x
return x_fp
def check_fpE(pars, x_fp):
a_E, theta_E = pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
# calculate Equation(3)
y = x_fp- F(wEE*x_fp+I_ext_E, a_E, theta_E)
return np.abs(y)<1e-4
def plot_fpE(pars, x_fp, mycolor):
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
plt.plot(wEE*x_fp+I_ext_E, x_fp, 'o', color=mycolor)
```
#### Exercise 2: Visualization of the fixed point
When no analytical solution of Equation $3$ can be found, it is often useful to plot $\displaystyle{\frac{dE}{dt}}$ as a function of $E$. The values of $E$ for which the plotted curve crosses zero correspond to fixed points.
Here, let us, for example, set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$. Define $\displaystyle{\frac{dE}{dt}}$ using Equation $1$, plot the result, and check for the presence of fixed points.
We will now try to find the fixed points using the previously defined function `my_fpE(pars, E_init)` with different initial values ($E_{\text{init}}$). Use the previously defined function `check_fpE(pars, x_fp)` to verify that the values of $E$ for which $\displaystyle{\frac{dE}{dt}} = 0$ are the true fixed points.
```
# Exercise 2
pars = default_parsE() # get default parameters
# set your external input and wEE
pars['I_ext_E'] = 0.5
pars['wEE'] = 5.0
E_grid = np.linspace(0, 1., 1000)# give E_grid
#figure, line (E, dEdt)
###############################
## TODO for students: #
## Calculate dEdt = -E + F(.) #
## Then plot the lines #
###############################
# Calculate dEdt
# dEdt = ...
# Uncomment this to plot the dEdt across E
# plot_dE_E(E_grid, dEdt)
# Add fixed point
#####################################################
## TODO for students: #
# Calculate the fixed point with your initial value #
# verify your fixed point and plot the correct ones  #
#####################################################
# Calculate the fixed point with your initial value
x_fp_1 = my_fpE(pars, 1)
#check if x_fp is the intersection of the lines with the given function check_fpE(pars, x_fp)
# vary the initial value to find all the fixed points (there should be 3)
# Use blue, red and yellow colors, respectively ('b', 'r', 'y' codenames)
# if check_fpE(pars, x_fp_1):
# plt.plot(x_fp_1, 0, 'bo', ms=8)
# Replicate the code above (lines 35-36) for all fixed points.
# to_remove solution
pars = default_parsE() # get default parameters
#set your external input and wEE
pars['I_ext_E'] = 0.5
pars['wEE'] = 5.0
# give E_grid
E_grid = np.linspace(0, 1., 1000)
# Calculate dEdt
dEdt = -E_grid + F(pars['wEE']*E_grid+pars['I_ext_E'], pars['a_E'], pars['theta_E'])
with plt.xkcd():
plot_dE_E(E_grid, dEdt)
#Calculate the fixed point with your initial value
x_fp_1 = my_fpE(pars, 0.)
if check_fpE(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
x_fp_2 = my_fpE(pars, 0.4)
if check_fpE(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'ro', ms=8)
x_fp_3 = my_fpE(pars, 0.9)
if check_fpE(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'yo', ms=8)
plt.show()
```
#### Interactive Demo: fixed points as a function of recurrent and external inputs.
You can now explore how the previous plot changes when the recurrent coupling $w_{\text{EE}}$ and the external input $I_E^{\text{ext}}$ take different values.
```
#@title Fixed point Explorer
def plot_intersection_E(wEE, I_ext_E):
#set your parameters
pars['wEE'] = wEE
pars['I_ext_E'] = I_ext_E
#note that wEE !=0
if wEE>0:
# find fixed point
x_fp_1 = my_fpE(pars, 0.)
x_fp_2 = my_fpE(pars, 0.4)
x_fp_3 = my_fpE(pars, 0.9)
plt.figure()
E_grid = np.linspace(0, 1., 1000)
dEdt = -E_grid + F(wEE*E_grid+I_ext_E, pars['a_E'], pars['theta_E'])
plt.plot(E_grid, dEdt, 'k')
plt.plot(E_grid, 0.*E_grid, 'k--')
if check_fpE(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
if check_fpE(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'bo', ms=8)
if check_fpE(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'bo', ms=8)
plt.xlabel('E activity', fontsize=14.)
plt.ylabel(r'$\frac{dE}{dt}$', fontsize=18.)
plt.show()
_ = widgets.interact(plot_intersection_E, wEE = (1., 7., 0.2), \
I_ext_E = (0., 3., 0.1))
```
## Summary
In this tutorial, we have investigated the dynamics of a rate-based single excitatory population of neurons.
We learned about:
- The effect of the input parameters and the time constant of the network on the dynamics of the population.
- How to find the fixed point(s) of the system.
Next, we have two bonus (but important) concepts in dynamical systems analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:
- How to determine the stability of a fixed point by linearizing the system.
- How to add realistic inputs to our model.
## Bonus 1: Stability of a fixed point
```
#@title Video: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nvxxf59w2EA", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
#### Initial values and trajectories
Here, let us first set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, and investigate the dynamics of $E(t)$ starting with different initial values $E(0) \equiv E_{\text{init}}$. We will plot the trajectories of $E(t)$ with $E_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
```
#@title Initial values
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
plt.figure(figsize=(10,6))
for ie in range(10):
pars['E_init'] = 0.1*ie # set the initial value
E = simulate_E(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], E, 'b', alpha=0.1 + 0.1*ie, label= r'E$_{\mathrm{init}}$=%.1f' % (0.1*ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel('E(t)')
plt.legend(loc=[0.72, 0.13], fontsize=14)
plt.show()
```
#### Interactive Demo: dynamics as a function of the initial value.
Let's now set $E_{init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
```
#@title Initial value Explorer
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def plot_E_diffEinit(E_init):
pars['E_init'] = E_init
E = simulate_E(pars)
plt.figure()
plt.plot(pars['range_t'], E, 'b', label='E(t)')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('E activity', fontsize=16.)
plt.show()
_ = widgets.interact(plot_E_diffEinit, E_init = (0., 1., 0.02))
```
### Stability analysis via linearization of the dynamics
Just like Equation $1$ in the case ($w_{EE}=0$) discussed above, a generic linear system
$$\frac{dx}{dt} = \lambda (x - b),$$
has a fixed point for $x=b$. The analytical solution of such a system can be found to be:
$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$
Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as:
$$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$
- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".
- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**" (see the numerical sketch after this list).
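A minimal numerical sketch of these two cases (not part of the original exercise): Euler-integrating $\frac{dx}{dt} = \lambda (x - b)$ from a point slightly above $b$ shows the perturbation either decaying or growing.
```
# Euler-integrate dx/dt = lam*(x - b), starting just above the fixed point b
def integrate_linear(lam, b=1.0, x0=1.01, dt=0.01, T=5.0):
    x = x0
    for _ in range(int(T/dt)):
        x = x + dt * lam * (x - b)
    return x

print(integrate_linear(lam=-2.0))  # stable: ends very close to b = 1
print(integrate_linear(lam=+2.0))  # unstable: the small perturbation has blown up
```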
### Compute the stability of Equation (1)
Similar to what we did in the linear system above, in order to determine the stability of a fixed point $E_{\rm fp}$ of the excitatory population dynamics, we perturb Equation $1$ around $E_{\rm fp}$ by $\epsilon$, i.e. $E = E_{\rm fp} + \epsilon$. We can plug this into Equation $1$ and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:
\begin{align}
\tau_E \frac{d\epsilon}{dt} \approx -\epsilon + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E) \epsilon
\end{align}
where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:
\begin{align}
\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau_E }[-1 + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)]
\end{align}
That is, as in the linear system above, the value of $\lambda = [-1+ w_{EE}F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)]/\tau_E$ determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system.
### Exercise 4: Compute $dF$ and Eigenvalue
The derivative of the sigmoid transfer function is:
\begin{align}
\frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\
& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}.
\end{align}
Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
```
# Exercise 4
def dF(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the derivative of the population activation function with respect to x
"""
#####################################################################
## TODO for students: compute dFdx, then remove NotImplementedError #
#####################################################################
# dFdx = ...
raise NotImplementedError("Student exercise: compute the derivative of F(x)")
return dFdx
pars = default_parsE() # get default parameters
x = np.arange(0,10,.1) # set the range of input
# Uncomment below lines after completing the dF function
# plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E']))
# to_remove solution
def dF(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the derivative of the population activation function with respect to x
"""
dFdx = a*np.exp(-a*(x-theta))*(1+np.exp(-a*(x-theta)))**-2
return dFdx
# get default parameters
pars = default_parsE()
# set the range of input
x = np.arange(0,10,.1)
# plot figure
with plt.xkcd():
plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E']))
```
### Exercise 5: Compute eigenvalues
As discussed above, for the case with $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, the system displays **3** fixed points. However, when we simulated the dynamics and varied the initial conditions $E_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the $3$ fixed points by calculating the corresponding eigenvalues with the function `eig_E` defined in the cell below. Check the sign of each eigenvalue (i.e., the stability of each fixed point). How many of the fixed points are stable?
```
# Exercise 5
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def eig_E(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point E
Returns:
eig : eigenvalue of the linearized system
"""
#get the parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE, I_ext_E = pars['wEE'], pars['I_ext_E']
# fixed point
E = fp
#######################################################################
## TODO for students: compute eigenvalue, remove NotImplementedError #
#######################################################################
# eig = ...
raise NotImplementedError("Student exercise: compute the eigenvalue")
return eig
# Uncomment below lines after completing the eig_E function.
# x_fp_1 = my_fpE(pars, 0.)
# eig_E1 = eig_E(pars, x_fp_1)
# print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_E1))
# Continue by finding the eigenvalues for all fixed points of Exercise 2
# to_remove solution
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def eig_E(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point E
Returns:
eig : eigenvalue of the linearized system
"""
#get the parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE, I_ext_E = pars['wEE'], pars['I_ext_E']
# fixed point
E = fp
eig = (-1. + wEE*dF(wEE*E + I_ext_E, a_E, theta_E)) / tau_E
return eig
# Uncomment below lines after completing the eigE function
x_fp_1 = my_fpE(pars, 0.)
eig_E1 = eig_E(pars, x_fp_1)
print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_E1))
# Continue by finding the eigenvalues for all fixed points of Exercise 2
x_fp_2 = my_fpE(pars, 0.4)
eig_E2 = eig_E(pars, x_fp_2)
print('Fixed point2=%.3f, Eigenvalue=%.3f' % (x_fp_2, eig_E2))
x_fp_3 = my_fpE(pars, 0.9)
eig_E3 = eig_E(pars, x_fp_3)
print('Fixed point3=%.3f, Eigenvalue=%.3f' % (x_fp_3, eig_E3))
```
### Think!
Throughout the tutorial, we have assumed $w_{\rm EE}> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w_{\rm EE}> 0$ is replaced by $w_{\rm II}< 0$?
## Bonus 2: Noisy input drives transition between two stable states
### Ornstein-Uhlenbeck (OU) process
As discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows:
$$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$
Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
```
#@title OU process `my_OU(pars, sig, myseed=False)`
def my_OU(pars, sig, myseed=False):
"""
A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I = np.zeros(Lt)
I[0] = noise[0] * sig
#generate OU
for it in range(Lt-1):
I[it+1] = I[it] + dt/tau_ou*(0.-I[it]) + np.sqrt(2.*dt/tau_ou) * sig * noise[it+1]
return I
pars = default_parsE(T=100)
pars['tau_ou'] = 1. #[ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=1998)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'b')
plt.xlabel('Time (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$');
```
### Bonus Example: Up-Down transition
In the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs.
```
#@title Simulation of an E population with OU inputs
pars = default_parsE(T = 1000)
pars['wEE'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. #[ms]
pars['I_ext_E'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
E = simulate_E(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], E, 'r', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel('E activity')
plt.show()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed Training in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
> Note: This is an archived TF1 notebook. These are configured
to run in TF2's
[compatibility mode](https://www.tensorflow.org/guide/migrate)
but will run in TF1 as well. To use TF1 in Colab, use the
[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)
magic.
## Overview
`tf.distribute.Strategy` is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, users can distribute their existing models and training code with minimal code changes.
`tf.distribute.Strategy` has been designed with these key goals in mind:
* Easy to use and support multiple user segments, including researchers, ML engineers, etc.
* Provide good performance out of the box.
* Easy switching between strategies.
`tf.distribute.Strategy` can be used with TensorFlow's high level APIs, [tf.keras](https://www.tensorflow.org/r1/guide/keras) and [tf.estimator](https://www.tensorflow.org/r1/guide/estimators), with just a couple of lines of code change. It also provides an API that can be used to distribute custom training loops (and in general any computation using TensorFlow).
In TensorFlow 2.0, users can execute their programs eagerly, or in a graph using [`tf.function`](../tutorials/eager/tf_function.ipynb). `tf.distribute.Strategy` intends to support both these modes of execution. Note that we may talk about training most of the time in this guide, but this API can also be used for distributing evaluation and prediction on different platforms.
As you will see in a bit, very few changes are needed to use `tf.distribute.Strategy` with your code. This is because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, we will talk about various types of strategies and how one can use them in different situations.
Note: For a deeper understanding of the concepts, please watch [this deep-dive presentation](https://youtu.be/jKV53r9-H14). This is especially recommended if you plan to write your own training loop.
```
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```
## Types of strategies
`tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
* Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of the input data in sync, aggregating gradients at each step. In async training, all workers train independently over the input data and update variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.
* Hardware platform: Users may want to scale their training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, we have 4 strategies available. In the next section we will talk about which of these are supported in which scenarios in TF.
### MirroredStrategy
`tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one model replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called `MirroredVariable`. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. The user can also choose between a few other options we provide, or write their own.
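As a purely conceptual sketch of what a sum all-reduce computes (plain NumPy, not TensorFlow's actual implementation): every device contributes its local tensor, and every device ends up holding the identical summed result.
```
import numpy as np

# Toy per-device gradient tensors
grads = [np.array([1., 2.]), np.array([3., 4.]), np.array([5., 6.])]

# All-reduce with a sum: one aggregate, replicated back to every "device"
reduced = np.sum(grads, axis=0)
after_all_reduce = [reduced.copy() for _ in grads]
print(after_all_reduce)  # each device now holds [9., 12.]
```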
Here is the simplest way of creating `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
```
This will create a `MirroredStrategy` instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
```
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
```
If you wish to override the cross device communication, you can do so using the `cross_device_ops` argument by supplying an instance of `tf.distribute.CrossDeviceOps`. Currently we provide `tf.distribute.HierarchicalCopyAllReduce` and `tf.distribute.ReductionToOneDevice` as two options other than `tf.distribute.NcclAllReduce`, which is the default.
```
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
```
### CentralStorageStrategy
`tf.distribute.experimental.CentralStorageStrategy` does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
Create a `CentralStorageStrategy` by:
```
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
```
This will create a `CentralStorageStrategy` instance which will use all visible GPUs and the CPU. Updates to variables on replicas will be aggregated before being applied to variables.
Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### MultiWorkerMirroredStrategy
`tf.distribute.experimental.MultiWorkerMirroredStrategy` is very similar to `MirroredStrategy`. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.
It uses [CollectiveOps](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/collective_ops.py) as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.
It also implements additional performance optimizations. For example, it includes a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing it to have a plugin architecture - so that in the future, users will be able to plugin algorithms that are better tuned for their hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather.
Here is the simplest way of creating `MultiWorkerMirroredStrategy`:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```
`MultiWorkerMirroredStrategy` currently allows you to choose between two different implementations of collective ops. `CollectiveCommunication.RING` implements ring-based collectives using gRPC as the communication layer. `CollectiveCommunication.NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `CollectiveCommunication.AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. You can specify them like so:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
```
One of the key differences to get multi-worker training going, as compared to multi-GPU training, is the multi-worker setup. The "TF_CONFIG" environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. See the section on ["TF_CONFIG" below](#TF_CONFIG) for more details on how this can be done.
Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### TPUStrategy
`tf.distribute.experimental.TPUStrategy` lets users run their TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) and [Google Compute Engine](https://cloud.google.com/tpu).
In terms of distributed training architecture, `TPUStrategy` is the same as `MirroredStrategy` - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in `TPUStrategy`.
Here is how you would instantiate `TPUStrategy`.
Note: To run this code in Colab, you should select TPU as the Colab runtime. See the [Using TPUs](tpu.ipynb) guide for a runnable version.
```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)
```
The `TPUClusterResolver` instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it. If you want to use this for Cloud TPUs, you will need to specify the name of your TPU resource in the `tpu` argument. We also need to initialize the TPU system explicitly at the start of the program. This is required before TPUs can be used for computation, and it should ideally be done at the beginning because it also wipes out the TPU memory, so all state will be lost.
Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### ParameterServerStrategy
`tf.distribute.experimental.ParameterServerStrategy` supports parameter server training on multiple machines. In this setup, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all GPUs of all the workers.
In terms of code, it looks similar to other strategies:
```
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
```
For multi worker training, "TF_CONFIG" needs to specify the configuration of parameter servers and workers in your cluster, which you can read more about in [TF_CONFIG](#TF_CONFIG) below.
So far we've talked about the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end.
## Using `tf.distribute.Strategy` with Keras
We've integrated `tf.distribute.Strategy` into `tf.keras` which is TensorFlow's implementation of the
[Keras API specification](https://keras.io). `tf.keras` is a high-level API to build and train models. By integrating into the `tf.keras` backend, we've made it seamless for Keras users to distribute their training written in the Keras training framework. The only things that need to change in a user's program are: (1) Create an instance of the appropriate `tf.distribute.Strategy` and (2) Move the creation and compiling of the Keras model inside `strategy.scope`.
Here is a snippet of code to do this for a very simple Keras model with one dense layer:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
```
In this example we used `MirroredStrategy` so we can run this on a machine with multiple GPUs. `strategy.scope()` indicates which parts of the code to run distributed. Creating a model inside this scope allows us to create mirrored variables instead of regular variables. Compiling under the scope allows us to know that the user intends to train this model using this strategy. Once this is set up, you can fit your model like you would normally. `MirroredStrategy` takes care of replicating the model's training on the available GPUs, aggregating gradients, etc.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
```
Here we used a `tf.data.Dataset` to provide the training and eval input. You can also use numpy arrays:
```
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
```
In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using `MirroredStrategy` with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use `strategy.num_replicas_in_sync` to get the number of replicas.
```
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
```
### What's supported now?
In [TF nightly release](https://pypi.org/project/tf-nightly-gpu/), we now support training with Keras using all strategies.
Note: When using `TPUStrategy` with TPU pods with Keras, currently the user will have to explicitly shard or shuffle the data for different workers, but we will change this in the future to automatically shard the input data intelligently.
### Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
1. [Tutorial](../tutorials/distribute/keras.ipynb) to train MNIST with `MirroredStrategy`.
2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/vision/image_classification/resnet_imagenet_main.py) training with ImageNet data using `MirroredStrategy`.
3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/resnet50_keras/resnet50.py) trained with ImageNet data on Cloud TPUs with `TPUStrategy`.
## Using `tf.distribute.Strategy` with Estimator
`tf.estimator` is a distributed training TensorFlow API that originally supported the async parameter server approach. Like with Keras, we've integrated `tf.distribute.Strategy` into `tf.Estimator` so that a user who is using Estimator for their training can easily change how their training is distributed with very few changes to their code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs.
The usage of `tf.distribute.Strategy` with Estimator is slightly different than the Keras case. Instead of using `strategy.scope`, now we pass the strategy object into the [`RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig) for the Estimator.
Here is a snippet of code that shows this with a premade estimator `LinearRegressor` and `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
```
We use a premade Estimator here, but the same code works with a custom Estimator as well. `train_distribute` determines how training will be distributed, and `eval_distribute` determines how evaluation will be distributed. This is another difference from Keras where we use the same strategy for both training and eval.
Now we can train and evaluate this Estimator with an input function:
```
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
```
Another difference to highlight here between Estimator and Keras is the input handling. In Keras, we mentioned that each batch of the dataset is split across the multiple replicas. In Estimator, however, the user provides an `input_fn` and has full control over how the data is distributed across workers and devices. We do not do automatic splitting of batches, nor do we automatically shard the data across different workers. The provided `input_fn` is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to one replica on that worker, thereby consuming N batches for N replicas on 1 worker. In other words, the dataset returned by the `input_fn` should provide batches of size `PER_REPLICA_BATCH_SIZE`. And the global batch size for a step can be obtained as `PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync`. When doing multi-worker training, users will also want to either split their data across the workers, or shuffle with a random seed on each. You can see an example of how to do this in [Multi-worker Training with Estimator](../tutorials/distribute/multi_worker_with_estimator.ipynb).
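For illustration, here is a minimal sketch of an `input_fn` written in terms of a per-replica batch size (it reuses `mirrored_strategy` from above; the feature name and numbers are arbitrary):
```
PER_REPLICA_BATCH_SIZE = 5
global_batch_size = (PER_REPLICA_BATCH_SIZE *
                     mirrored_strategy.num_replicas_in_sync)

def input_fn():
  dataset = tf.data.Dataset.from_tensors(({"feats": [1.]}, [1.]))
  # input_fn is called once per worker, so batch with the per-replica size
  return dataset.repeat(1000).batch(PER_REPLICA_BATCH_SIZE)
```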
We showed an example of using `MirroredStrategy` with Estimator. You can use `TPUStrategy` with Estimator as well, in the exact same way:
```
config = tf.estimator.RunConfig(
train_distribute=tpu_strategy, eval_distribute=tpu_strategy)
```
And similarly, you can use multi worker and parameter server strategies as well. The code remains the same, but you need to use `tf.estimator.train_and_evaluate`, and set "TF_CONFIG" environment variables for each binary running in your cluster.
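A minimal sketch of that call shape, reusing `regressor` and `input_fn` from above (the "TF_CONFIG" cluster setup itself is assumed to be handled separately):
```
train_spec = tf.estimator.TrainSpec(input_fn=input_fn, max_steps=100)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn, steps=10)
tf.estimator.train_and_evaluate(regressor, train_spec, eval_spec)
```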
### What's supported now?
In TF nightly release, we support training with Estimator using all strategies.
### Examples and Tutorials
Here are some examples that show end to end usage of various strategies with Estimator:
1. [End to end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi-worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API.
2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/r1/resnet/imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`.
3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/distribution_strategy/resnet_estimator.py) example with TPUStrategy.
## Using `tf.distribute.Strategy` with custom training loops
As you've seen, using `tf.distribute.Strategy` with high level APIs is only a couple lines of code change. With a little more effort, `tf.distribute.Strategy` can also be used by other users who are not using these frameworks.
TensorFlow is used for a wide variety of use cases and some users (such as researchers) require more flexibility and control over their training loops. This makes it hard for them to use the high level frameworks such as Estimator or Keras. For instance, someone using a GAN may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training. So these users will usually write their own training loops.
For these users, we provide a core set of methods through the `tf.distribute.Strategy` classes. Using these may require minor restructuring of the code initially, but once that is done, the user should be able to switch between GPUs / TPUs / multiple machines by just changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
Note: These APIs are still experimental and we are improving them to make them more user friendly.
First, we create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
```
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.train.GradientDescentOptimizer(0.1)
```
Next, we create the input dataset and call `tf.distribute.Strategy.experimental_distribute_dataset` to distribute the dataset based on the strategy.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
```
Then, we define one step of the training. We will use `tf.GradientTape` to compute gradients and optimizer to apply those gradients to update our model's variables. To distribute this training step, we put it in a function `step_fn` and pass it to `tf.distribute.Strategy.run` along with the inputs from the iterator:
```
def train_step(dist_inputs):
def step_fn(inputs):
features, labels = inputs
logits = model(features)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
train_op = optimizer.minimize(loss)
with tf.control_dependencies([train_op]):
return tf.identity(loss)
per_replica_losses = mirrored_strategy.run(
step_fn, args=(dist_inputs,))
mean_loss = mirrored_strategy.reduce(
tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
return mean_loss
```
A few other things to note in the code above:
1. We used `tf.nn.softmax_cross_entropy_with_logits` to compute the loss, and then scaled the total loss by the global batch size. This is important because all the replicas are training in sync and the number of examples in each step of training is the global batch. So the loss needs to be divided by the global batch size and not by the replica (local) batch size (see the numeric sketch after this list).
2. We used the `strategy.reduce` API to aggregate the results returned by `tf.distribute.Strategy.run`. `tf.distribute.Strategy.run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can `reduce` them to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results(results)` to get the list of values contained in the result, one per local replica.
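A minimal numeric illustration of point 1 (plain Python, not TensorFlow): dividing each replica's summed loss by the *global* batch size and then summing across replicas recovers the per-example mean over the whole global batch.
```
# Two replicas, global batch of 10 split as 5 + 5 (toy per-example losses)
replica_losses = [[1., 2., 3., 4., 5.], [6., 7., 8., 9., 10.]]
global_batch_size = 10

# Scale each replica's summed loss by the global batch size, then SUM across replicas
per_replica = [sum(losses) * (1.0 / global_batch_size) for losses in replica_losses]
print(sum(per_replica))  # 5.5 == the true mean over all 10 examples
```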
Finally, once we have defined the training step, we can initialize the iterator and variables and run the training in a loop:
```
with mirrored_strategy.scope():
input_iterator = dist_dataset.make_initializable_iterator()
iterator_init = input_iterator.initializer
var_init = tf.global_variables_initializer()
loss = train_step(input_iterator.get_next())
with tf.Session() as sess:
sess.run([var_init, iterator_init])
for _ in range(10):
print(sess.run(loss))
```
In the example above, we used `tf.distribute.Strategy.experimental_distribute_dataset` to provide input to your training. We also provide the `tf.distribute.Strategy.make_experimental_numpy_dataset` to support numpy inputs. You can use this API to create a dataset before calling `tf.distribute.Strategy.experimental_distribute_dataset`.
This covers the simplest case of using `tf.distribute.Strategy` API to distribute custom training loops. We are in the process of improving these APIs. Since this use case requires more work on the part of the user, we will be publishing a separate detailed guide in the future.
### What's supported now?
In TF nightly release, we support training with custom training loops using `MirroredStrategy` and `TPUStrategy` as shown above. Support for other strategies will be coming soon. `MultiWorkerMirroredStrategy` support will be coming in the future.
### Examples and Tutorials
Here are some examples for using distribution strategy with custom training loops:
1. [Example](https://github.com/tensorflow/tensorflow/blob/5456cc28f3f8d9c17c645d9a409e495969e584ae/tensorflow/contrib/distribute/python/examples/mnist_tf1_tpu.py) to train MNIST using `TPUStrategy`.
## Other topics
In this section, we will cover some topics that are relevant to multiple use cases.
<a id="TF_CONFIG">
### Setting up TF\_CONFIG environment variable
</a>
For multi-worker training, as mentioned before, you need to set the "TF\_CONFIG" environment variable for each binary running in your cluster. The "TF\_CONFIG" environment variable is a JSON string that specifies which tasks constitute the cluster, their addresses, and each task's role in the cluster. We provide a Kubernetes template in the [tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo which sets "TF\_CONFIG" for your training tasks.
One example of "TF\_CONFIG" is:
```
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:port", "host2:port", "host3:port"],
        "ps": ["host4:port", "host5:port"]
    },
    "task": {"type": "worker", "index": 1}
})
```
This "TF\_CONFIG" specifies that there are three workers and two ps tasks in the
cluster along with their hosts and ports. The "task" part specifies that the
role of the current task in the cluster, worker 1 (the second worker). Valid roles in a cluster is
"chief", "worker", "ps" and "evaluator". There should be no "ps" job except when using `tf.distribute.experimental.ParameterServerStrategy`.
## What's next?
`tf.distribute.Strategy` is actively under development. We welcome you to try it out and provide your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
```
from PIL import Image
from numpy import *
from pylab import *
import numpy as np  # needed below for np.linalg.norm
import scipy.misc
from scipy.cluster.vq import *
import imtools
import pickle
imlist = imtools.get_imlist('selected_fontimages/')
imnbr = len(imlist)
with open('font_pca_modes.pkl', 'rb') as f:
immean = pickle.load(f)
V = pickle.load(f)
immatrix = array([array(Image.open(im)).flatten() for im in imlist], 'f')
immean = immean.flatten()
projected = array([dot(V[:40], immatrix[i]-immean) for i in range(imnbr)])
cluster_num = 3
projected = whiten(projected)
centroids, distortion = kmeans(projected, cluster_num)
code, distance = vq(projected, centroids)
def divide_branch_with_center(data, branch, k):
div = min(k, len(branch))
if div<=1:
return list(branch)
centroids, distortion = kmeans(data[branch], k)
code, distance = vq(data[branch], centroids)
new_branch = []
for i in range(k):
ind = where(code==i)[0]
if len(ind)==0:
continue
else:
new_branch.append((centroids[i], distance[i], divide_branch_with_center(data, branch[ind], k)))
return new_branch
tree = array([i for i in range(projected.shape[0])])
branches = ([0 for i in range(40)], 0, divide_branch_with_center(projected, tree, 4))
def get_depth(t):
if len(t[2])<2:
return 1
else:
return max([get_depth(tt) for tt in t[2]])+1
def get_height(t):
if (len(t[2])<2):
return 1
else:
return sum([get_height(tt) for tt in t[2]])
from PIL import Image, ImageDraw
def draw_average(center, x, y, im):
c = center/np.linalg.norm(center)
avim = dot((V[:40]).T, c)
avim = 255*(avim-min(avim))/(max(avim)-min(avim)+1e-6)
avim = avim.reshape(25, 25)
avim[avim<0] = 0
avim[avim>255] = 255
avim = Image.fromarray(avim)
avim.thumbnail([20, 20])
ns = avim.size
im.paste(avim, [int(x), int(y-ns[1]//2), int(x+ns[0]), int(y+ns[1]-ns[1]//2)])
def draw_node(node, draw, x, y, s, iml, im):
if len(node[2])<1:
return
if len(node[2])==1:
nodeim = Image.open(iml[node[2][0]])
nodeim.thumbnail([20, 20])
ns = nodeim.size
im.paste(nodeim, [int(x), int(y-ns[1]//2), int(x+ns[0]), int(y+ns[1]-ns[1]//2)])
else:
ht = sum([get_height(n) for n in node[2]])*20/2
h1 = get_height(node[2][0])*20/2
h2 = get_height(node[2][-1])*20/2
top = y-ht
bottom = y+ht
draw.line((x, top+h1, x, bottom-h2), fill=(0, 0, 0))
y = top
for i in range(len(node[2])):
ll = node[2][i][1]/8*s
y += get_height(node[2][i])*20/2
xx = x + ll + s/4
draw.line((x, y, xx, y), fill=(0, 0, 0))
if len(node[2][i][2])>1:
draw_average(node[2][i][0], xx, y, im)
xx = xx+20
draw.line((xx, y, xx+s/4, y), fill=(0, 0, 0))
xx = xx+s/4
draw_node(node[2][i], draw, xx, y, s, imlist, im)
y += get_height(node[2][i])*20/2
def draw_dendrogram(node, iml, filename='kclusters.jpg'):
rows = get_height(node)*20+40
cols = 1200
s = float(cols-150)/get_depth(node)
im = Image.new('RGB', (cols, rows), (255, 255, 255))
draw = ImageDraw.Draw(im)
x = 0
y = rows/2
avim = Image.fromarray(immean.reshape(25, 25))
avim.thumbnail([20, 20])
ns = avim.size
im.paste(avim, [int(x), int(y-ns[1]//2), int(x+ns[0]), int(y+ns[1]-ns[1]//2)])
draw.line((x+20, y, x+40, y), fill=(0, 0, 0))
draw_node(node, draw, x+40, (rows/2), s, iml, im)
im.save(filename)
im.show()
draw_dendrogram(branches, imlist, filename='k_fonts.jpg')
```
### - Canonical Correlation Analysis btw Cell painting & L1000
- This notebook focuses on calculating the canonical coefficients between the canonical variables of the Cell Painting and L1000 level-4 profiles after applying PCA to them.
---------------------------------------------
- The aim of CCA is to find the relationship between two sets of variables such that the correlation between the resulting pairs of linear combinations is maximal. There are many possible linear combinations of the variables, but the aim is to pick only those linear functions which best express the correlations between the two variable sets. These linear functions are called the canonical variables, and the correlations between corresponding pairs of canonical variables are called canonical correlations (a minimal illustration follows below). [CCA read](https://medium.com/analytics-vidhya/what-is-canonical-correlation-analysis-58ef4349c0b0) [cca_tutorial](https://github.com/google/svcca/blob/master/tutorials/001_Introduction.ipynb)
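As a minimal, self-contained sketch of the idea (random toy data, not the Cell Painting/L1000 profiles used below), scikit-learn's `CCA` can be used to compute canonical variables and their correlations:
```
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 5))                   # first block of variables
Y = X[:, :3] + 0.5 * rng.normal(size=(100, 3))  # second block, partly related to X

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)                # canonical variables of each block

# Canonical correlations: correlation between corresponding pairs of canonical variables
canonical_corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(2)]
print(canonical_corrs)
```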
```
from google.colab import drive
drive.mount('/content/drive')
import os, sys
from matplotlib import pyplot as plt
%matplotlib inline
import numpy as np
import pickle
import pandas as pd
import seaborn as sns
import gzip
sns.set_context("talk")
sns.set_style("darkgrid")
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cross_decomposition import CCA
###know the current directory
os.getcwd()
os.chdir('/content/drive')
# !cat 'My Drive/profiles/cell_painting/cca_core.py'
sys.path.append('My Drive/profiles/cell_painting/')
import cca_core
L1000_cp_dir = 'My Drive/profiles/L1000_cellpainting_comparison/L1000_CP_lvl4_datasets'
df_train = pd.read_csv(os.path.join(L1000_cp_dir, 'train_lvl4_data.csv.gz'),
compression='gzip',low_memory = False)
df_test = pd.read_csv(os.path.join(L1000_cp_dir, 'test_lvl4_data.csv.gz'),
compression='gzip',low_memory = False)
df_targets = pd.read_csv(os.path.join(L1000_cp_dir, 'target_labels.csv'))
metadata_cols = ['replicate_name', 'replicate_id', 'Metadata_broad_sample', 'Metadata_pert_id', 'Metadata_Plate',
'Metadata_Well', 'Metadata_broad_id', 'Metadata_moa', 'sig_id', 'pert_id', 'pert_idose',
'det_plate', 'det_well', 'Metadata_broad_sample', 'pert_iname', 'moa', 'dose']
target_cols = df_targets.columns[1:]
df_train_y = df_train[target_cols].copy()
df_train_x = df_train.drop(target_cols, axis = 1).copy()
df_test_y = df_test[target_cols].copy()
df_test_x = df_test.drop(target_cols, axis = 1).copy()
df_train_x.drop(metadata_cols, axis = 1, inplace = True)
df_test_x.drop(metadata_cols, axis = 1, inplace = True)
cp_cols = df_train_x.columns.tolist()[:696]
L1000_cols = df_train_x.columns.tolist()[696:]
df_train_cp_x = df_train_x.iloc[:, :696].copy()
df_train_L1000_x = df_train_x.iloc[:, 696:].copy()
df_test_cp_x = df_test_x.iloc[:, :696].copy()
df_test_L1000_x = df_test_x.iloc[:, 696:].copy()
df_cp_x = pd.concat([df_train_cp_x, df_test_cp_x])
df_L1000_x = pd.concat([df_train_L1000_x, df_test_L1000_x])
def normalize(df):
    '''Normalize using Standardscaler'''
    norm_model = StandardScaler()
    df_norm = pd.DataFrame(norm_model.fit_transform(df), index=df.index, columns=df.columns)
    return df_norm

df_L1000_x = normalize(df_L1000_x)
df_cp_x = normalize(df_cp_x)

# taking the first 300 PCs for CCA and SVCCA
def pca_preprocess(df, n_comp1=300, feat_new=['pca' + str(i) for i in range(300)]):
    pca = PCA(n_components=n_comp1, random_state=42)
    df_pca = pd.DataFrame(pca.fit_transform(df), columns=feat_new)
    return df_pca
df_L1_pc_x = pca_preprocess(df_L1000_x)
df_cp_pc_x = pca_preprocess(df_cp_x)
```
#### - CCA on CP & L1000 train data
```
cca_results = cca_core.get_cca_similarity(df_cp_pc_x.values.T, df_L1_pc_x.values.T, epsilon=1e-10, verbose=False)
plt.figure(figsize=(12,8))
sns.set_context('talk', font_scale = 0.85)
sns.lineplot(x=range(len(cca_results["cca_coef1"])), y=cca_results["cca_coef1"])
plt.title("CCA correlation coefficients between CP and L1000 canonical variables (300) after PCA")
print("Mean Canonical Correlation co-efficient between CP and L1000 canonical variables (300):", np.mean(cca_results["cca_coef1"]))
```
#### - (Singular Vectors)CCA as a method to analyze the correlation between Cell painting & L1000
```
print("Results using SVCCA keeping 300 dims")
# Mean subtract activations
cacts1 = df_cp_pc_x.values.T - np.mean(df_cp_pc_x.values.T, axis=1, keepdims=True)
cacts2 = df_L1_pc_x.values.T - np.mean(df_L1_pc_x.values.T, axis=1, keepdims=True)
# Perform SVD
U1, s1, V1 = np.linalg.svd(cacts1, full_matrices=False)
U2, s2, V2 = np.linalg.svd(cacts2, full_matrices=False)
svacts1 = np.dot(s1[:300]*np.eye(300), V1[:300])
# can also compute as svacts1 = np.dot(U1.T[:300], cacts1)
svacts2 = np.dot(s2[:300]*np.eye(300), V2[:300])
# can also compute as svacts2 = np.dot(U2.T[:300], cacts2)
svcca_results = cca_core.get_cca_similarity(svacts1, svacts2, epsilon=1e-10, verbose=False)
print('mean svcca correlation coefficient:', np.mean(svcca_results["cca_coef1"]))
plt.figure(figsize=(12,8))
sns.set_context('talk', font_scale = 0.85)
plt.plot(svcca_results["cca_coef1"], lw=2.0)
plt.xlabel("Sorted CCA Correlation Coeff Idx")
plt.ylabel("CCA Correlation Coefficient Value")
plt.title("SVCCA correlation coefficients between CP and L1000 canonical variables (300)")
```
### - Using Sklearn CCA package for CCA
```
cca = CCA(n_components=df_cp_pc_x.shape[1])
cp_cca_vars, L1000_cca_vars = cca.fit_transform(df_cp_pc_x, df_L1_pc_x)
canonical_coeffs = np.corrcoef(cp_cca_vars.T, L1000_cca_vars.T).diagonal(offset=df_cp_pc_x.shape[1])
print('mean canonical correlation coefficient (sklearn CCA):', np.mean(canonical_coeffs))
plt.figure(figsize=(12,8))
sns.set_context('talk', font_scale = 0.85)
plt.plot(canonical_coeffs, lw=2.0)
plt.xlabel("Sorted CCA Correlation Coeff Idx")
plt.ylabel("CCA Correlation Coefficient Value")
plt.title("CCA correlation coefficients between CP and L1000 canonical variables after PCA")
```
#### - Ultimately, further analysis will focus on the first few canonical variables of both CP and L1000, which have the highest canonical coefficients.
# YOLO on PYNQ-Z1 and Movidius NCS: Webcam example
To run this notebook, you need to connect a USB webcam to the PYNQ-Z1 and a monitor to the HDMI output. You'll already need a powered USB hub for the Movidius NCS, so you should have a spare port for the webcam.
### Load required packages
```
from mvnc import mvncapi as mvnc
import cv2
import numpy as np
import time
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *
import yolo_ncs,ncs
import PIL.Image
%matplotlib inline
# Load the base overlay
base = BaseOverlay("base.bit")
```
### Configure the webcam
To get a decent frame rate, we use a webcam resolution of 640x480 so that resizing to 448x448 for the YOLO network is reasonably fast. Note that OpenCV uses BGR, but the YOLO network needs RGB, so we'll have to swap the colors around before sending images to YOLO.
```
# Webcam resolution
frame_in_w = 640
frame_in_h = 480
# Configure webcam - note that output images will be BGR
videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);
print("Capture device is open: " + str(videoIn.isOpened()))
```
### Configure the HDMI output
```
hdmi_out = base.video.hdmi_out
# Configure the HDMI output to the same resolution as the webcam input
mode = VideoMode(frame_in_w,frame_in_h,24)
hdmi_out.configure(mode, PIXEL_BGR)
# Start the HDMI output
hdmi_out.start()
```
### Take a photo with the webcam
```
ret, frame = videoIn.read()
if not ret:
    raise RuntimeError("Failed to read from camera.")
# Convert BGR image to RGB (required by YOLO and PIL)
frame = frame[:,:,(2,1,0)]
# Resize the image to the size required by YOLO network (448x448)
small_frame = cv2.resize(frame, dsize=(448, 448), interpolation=cv2.INTER_CUBIC)
ncs_frame = small_frame.copy()/255.0
# Show the image in the Jupyter notebook
img = PIL.Image.fromarray(frame)
img
```
### Open the Movidius NCS
```
# Open the Movidius NCS device
ncsdev = ncs.MovidiusNCS()
# Load the graph file
if ncsdev.load_graph('../graph'):
    print('Graph file loaded to Movidius NCS')
```
### Send image to NCS
```
ncsdev.graph.LoadTensor(ncs_frame.astype(np.float16), 'user object')
out, userobj = ncsdev.graph.GetResult()
# Interpret results and draw boxes on the image
results = yolo_ncs.interpret_output(out.astype(np.float32), frame.shape[1], frame.shape[0]) # fc27 instead of fc12 for yolo_small
img_res = yolo_ncs.draw_boxes(frame, results, frame.shape[1], frame.shape[0])
# Display labelled image in Jupyter notebook
img = PIL.Image.fromarray(img_res)
img
```
### Webcam to HDMI pass-through (without YOLO)
```
n_frames = 2000
start_time = time.time()
for _ in range(n_frames):
    # Get a frame from the webcam
    ret, frame = videoIn.read()
    # Copy the input frame to the output frame
    frame_out = hdmi_out.newframe()
    frame_out[:,:,:] = frame[:,:,:]
    hdmi_out.writeframe(frame_out)
end_time = time.time()
print('Runtime:',end_time-start_time,'FPS:',n_frames/(end_time-start_time))
```
### Webcam to HDMI with YOLO
```
n_frames = 200
start_time = time.time()
for _ in range(n_frames):
    # Get a frame from the webcam
    ret, frame = videoIn.read()
    # Resize to the frame size required by YOLO network (448x448) and convert to RGB
    small_frame = cv2.resize(frame[:,:,(2,1,0)], dsize=(448, 448), interpolation=cv2.INTER_CUBIC)
    ncs_frame = small_frame.copy()/255.0
    # Send the frame to the NCS
    ncsdev.graph.LoadTensor(ncs_frame.astype(np.float16), 'user object')
    out, userobj = ncsdev.graph.GetResult()
    # Interpret results and draw the boxes on the image
    results = yolo_ncs.interpret_output(out.astype(np.float32), frame.shape[1], frame.shape[0]) # fc27 instead of fc12 for yolo_small
    img_res = yolo_ncs.draw_boxes(frame, results, frame.shape[1], frame.shape[0])
    # Copy labelled image into output frame
    frame_out = hdmi_out.newframe()
    frame_out[:,:,:] = img_res[:,:,:]
    hdmi_out.writeframe(frame_out)
end_time = time.time()
print('Runtime:',end_time-start_time,'FPS:',n_frames/(end_time-start_time))
```
### Close the NCS device
```
ncsdev.close()
```
### Release the webcam and HDMI output
```
videoIn.release()
hdmi_out.stop()
del hdmi_out
```
```
import os
import json

def findCaptureSessionDirs(path):
    session_paths = []
    devices = os.listdir(path)
    for device in devices:
        sessions = os.listdir(os.path.join(path, device))
        for session in sessions:
            session_paths.append(os.path.join(device, session))
    return session_paths

def findCapturesInSession(path):
    files = [os.path.splitext(f)[0] for f in os.listdir(path) if f.endswith('.json')]
    return files

def loadJsonData(filename):
    data = None
    with open(filename) as f:
        data = json.load(f)
    return data
data_directory = "EyeCaptures"
output_directory = "EyeCaptures-dlib"
directories = sorted(findCaptureSessionDirs(data_directory))
total_directories = len(directories)
print(f"Found {total_directories} directories")
from face_utilities import faceEyeRectsToFaceInfoDict, getEyeRectRelative, newFaceInfoDict, find_face_dlib, landmarksToRects, generate_face_grid_rect
from PIL import Image as PILImage # Pillow
import numpy as np
import dateutil.parser
import shutil
def getScreenOrientation(capture_data):
    orientation = 0
    # Camera Offset and Screen Orientation compensation
    if capture_data['NativeOrientation'] == "Landscape":
        if capture_data['CurrentOrientation'] == "Landscape":
            # Camera above screen
            # - Landscape on Surface devices
            orientation = 1
        elif capture_data['CurrentOrientation'] == "LandscapeFlipped":
            # Camera below screen
            # - Landscape inverted on Surface devices
            orientation = 2
        elif capture_data['CurrentOrientation'] == "PortraitFlipped":
            # Camera left of screen
            # - Portrait with camera on left on Surface devices
            orientation = 3
        elif capture_data['CurrentOrientation'] == "Portrait":
            # Camera right of screen
            # - Portrait with camera on right on Surface devices
            orientation = 4
    if capture_data['NativeOrientation'] == "Portrait":
        if capture_data['CurrentOrientation'] == "Portrait":
            # Camera above screen
            # - Portrait on iOS devices
            orientation = 1
        elif capture_data['CurrentOrientation'] == "PortraitFlipped":
            # Camera below screen
            # - Portrait Inverted on iOS devices
            orientation = 2
        elif capture_data['CurrentOrientation'] == "Landscape":
            # Camera left of screen
            # - Landscape home button on right on iOS devices
            orientation = 3
        elif capture_data['CurrentOrientation'] == "LandscapeFlipped":
            # Camera right of screen
            # - Landscape home button on left on iOS devices
            orientation = 4
    return orientation
def getCaptureTimeString(capture_data):
    sessiontime = dateutil.parser.parse(capture_data["SessionTimestamp"])
    currenttime = dateutil.parser.parse(capture_data["Timestamp"])
    timedelta = sessiontime - currenttime
    return str(timedelta.total_seconds())
for directory_idx, directory in enumerate(directories):
print(f"Processing {directory_idx + 1}/{total_directories} - {directory}")
captures = findCapturesInSession(os.path.join(data_directory,directory))
total_captures = len(captures)
# dotinfo.json - { "DotNum": [ 0, 0, ... ],
# "XPts": [ 160, 160, ... ],
# "YPts": [ 284, 284, ... ],
# "XCam": [ 1.064, 1.064, ... ],
# "YCam": [ -6.0055, -6.0055, ... ],
# "Time": [ 0.205642, 0.288975, ... ] }
#
# PositionIndex == DotNum
# Timestamp == Time, but no guarantee on order. Unclear if that is an issue or not
dotinfo = {
"DotNum": [],
"XPts": [],
"YPts": [],
"XCam": [],
"YCam": [],
"Time": []
}
recording_path = os.path.join(data_directory, directory)
output_path = os.path.join(output_directory, f"{directory_idx:05d}")
output_frame_path = os.path.join(output_path, "frames")
faceInfoDict = newFaceInfoDict()
# frames.json - ["00000.jpg","00001.jpg"]
frames = []
facegrid = {
"X": [],
"Y": [],
"W": [],
"H": [],
"IsValid": []
}
# info.json - {"TotalFrames":99,"NumFaceDetections":97,"NumEyeDetections":56,"Dataset":"train","DeviceName":"iPhone 6"}
info = {
"TotalFrames": total_captures,
"NumFaceDetections": 0,
"NumEyeDetections": 0,
"Dataset": "train", # For now put all data into training dataset
"DeviceName": None
}
# screen.json - { "H": [ 568, 568, ... ], "W": [ 320, 320, ... ], "Orientation": [ 1, 1, ... ] }
screen = {
"H": [],
"W": [],
"Orientation": []
}
if not os.path.exists(output_directory):
os.mkdir(output_directory)
if not os.path.exists(output_path):
os.mkdir(output_path)
if not os.path.exists(output_frame_path):
os.mkdir(output_frame_path)
for capture_idx, capture in enumerate(captures):
print(f"Processing {capture_idx + 1}/{total_captures} - {capture}")
capture_json_path = os.path.join(data_directory, directory, capture + ".json")
capture_png_path = os.path.join(data_directory, directory, capture + ".png")
if os.path.isfile(capture_json_path) and os.path.isfile(capture_png_path):
capture_data = loadJsonData(capture_json_path)
if info["DeviceName"] == None:
info["DeviceName"] = capture_data["HostModel"]
elif info["DeviceName"] != capture_data["HostModel"]:
error(f"Device name changed during session, expected \'{info['DeviceName']}\' but got \'{capture_data['HostModel']}\'")
capture_image = PILImage.open(capture_png_path).convert('RGB') # dlib wants images in RGB or 8-bit grayscale format
capture_image_np = np.array(capture_image) # dlib wants images in numpy array format
shape_np, isValid = find_face_dlib(capture_image_np)
info["NumFaceDetections"] = info["NumFaceDetections"] + 1
face_rect, left_eye_rect, right_eye_rect, isValid = landmarksToRects(shape_np, isValid)
# facegrid.json - { "X": [ 6, 6, ... ], "Y": [ 10, 10, ... ], "W": [ 13, 13, ... ], "H": [ 13, 13, ... ], "IsValid": [ 1, 1, ... ] }
if isValid:
faceGridX, faceGridY, faceGridW, faceGridH = generate_face_grid_rect(face_rect, capture_image.width, capture_image.height)
else:
faceGridX = 0
faceGridY = 0
faceGridW = 0
faceGridH = 0
facegrid["X"].append(faceGridX)
facegrid["Y"].append(faceGridY)
facegrid["W"].append(faceGridW)
facegrid["H"].append(faceGridH)
facegrid["IsValid"].append(isValid)
faceInfoDict, faceInfoIdx = faceEyeRectsToFaceInfoDict(faceInfoDict, face_rect, left_eye_rect, right_eye_rect, isValid)
info["NumEyeDetections"] = info["NumEyeDetections"] + 1
# screen.json - { "H": [ 568, 568, ... ], "W": [ 320, 320, ... ], "Orientation": [ 1, 1, ... ] }
screen["H"].append(capture_data['ScreenHeightInRawPixels'])
screen["W"].append(capture_data['ScreenWidthInRawPixels'])
screen["Orientation"].append(getScreenOrientation(capture_data))
# dotinfo.json - { "DotNum": [ 0, 0, ... ],
# "XPts": [ 160, 160, ... ],
# "YPts": [ 284, 284, ... ],
# "XCam": [ 1.064, 1.064, ... ],
# "YCam": [ -6.0055, -6.0055, ... ],
# "Time": [ 0.205642, 0.288975, ... ] }
#
# PositionIndex == DotNum
# Timestamp == Time, but no guarantee on order. Unclear if that is an issue or not
xcam = 0
ycam = 0
dotinfo["DotNum"].append(capture_data["PositionIndex"])
dotinfo["XPts"].append(capture_data["ScreenX"])
dotinfo["YPts"].append(capture_data["ScreenY"])
dotinfo["XCam"].append(0)
dotinfo["YCam"].append(0)
dotinfo["Time"].append(getCaptureTimeString(capture_data))
# Convert image from PNG to JPG
frame_name = str(f"{capture_idx:05d}.jpg")
frames.append(frame_name)
capture_img = PILImage.open(capture_png_path).convert('RGB')
capture_img.save(os.path.join(output_frame_path, frame_name))
else:
print(f"Error processing capture {capture}")
with open(os.path.join(output_path, 'frames.json'), "w") as write_file:
json.dump(frames, write_file)
with open(os.path.join(output_path, 'screen.json'), "w") as write_file:
json.dump(screen, write_file)
with open(os.path.join(output_path, 'info.json'), "w") as write_file:
json.dump(info, write_file)
with open(os.path.join(output_path, 'dotInfo.json'), "w") as write_file:
json.dump(dotinfo, write_file)
with open(os.path.join(output_path, 'faceGrid.json'), "w") as write_file:
json.dump(facegrid, write_file)
with open(os.path.join(output_path, 'dlibFace.json'), "w") as write_file:
json.dump(faceInfoDict["Face"], write_file)
with open(os.path.join(output_path, 'dlibLeftEye.json'), "w") as write_file:
json.dump(faceInfoDict["LeftEye"], write_file)
with open(os.path.join(output_path, 'dlibRightEye.json'), "w") as write_file:
json.dump(faceInfoDict["RightEye"], write_file)
print("DONE")
```
# Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
## Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of word2vec from Chris McCormick
* [First word2vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [NIPS paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for word2vec also from Mikolov et al.
* An [implementation of word2vec](http://www.thushv.com/natural_language_processing/word2vec-part-1-nlp-with-deep-learning-with-tensorflow-skip-gram/) from Thushan Ganegedara
* TensorFlow [word2vec tutorial](https://www.tensorflow.org/tutorials/word2vec)
## Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.

To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.

Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
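A tiny NumPy sketch (toy sizes, not the real vocabulary) makes the equivalence between the one-hot matrix multiplication and the row lookup concrete:
```
import numpy as np

vocab_size, embed_dim = 5, 3
embedding = np.arange(vocab_size * embed_dim).reshape(vocab_size, embed_dim)

word_idx = 2
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1

# Multiplying the one-hot vector by the weight matrix...
by_matmul = one_hot @ embedding
# ...returns exactly the same values as just indexing the row.
assert np.array_equal(by_matmul, embedding[word_idx])
```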
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.
## Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
```
import time
import numpy as np
import tensorflow as tf
import utils
```
Load the [text8 dataset](http://mattmahoney.net/dc/textdata.html), a file of cleaned-up Wikipedia articles from Matt Mahoney. The next cell will download the archive and extract it into the `data` folder. You can delete the archive file afterwards to save storage space.
```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile

dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(dataset_filename):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
        urlretrieve(
            'http://mattmahoney.net/dc/text8.zip',
            dataset_filename,
            pbar.hook)

if not isdir(dataset_folder_path):
    with zipfile.ZipFile(dataset_filename) as zip_ref:
        zip_ref.extractall(dataset_folder_path)

with open('data/text8') as f:
    text = f.read()
```
## Preprocessing
Here I'm fixing up the text to make training easier. This comes from the `utils` module I wrote. The `preprocess` function converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
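The actual implementation lives in the accompanying `utils` module; a simplified sketch of the same idea (a hypothetical helper, not the real `utils.preprocess`) might look like this:
```
from collections import Counter

def simple_preprocess(text, min_count=5):
    # Replace punctuation with named tokens so they become ordinary "words"
    text = text.lower()
    text = text.replace('.', ' <PERIOD> ')
    text = text.replace(',', ' <COMMA> ')
    text = text.replace('?', ' <QUESTION_MARK> ')
    words = text.split()
    # Drop words that occur min_count times or fewer
    counts = Counter(words)
    return [w for w in words if counts[w] > min_count]
```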
```
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
```
And here I'm creating dictionaries to convert words to integers and back again (integers to words). The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0, the next most frequent is 1, and so on. The words are converted to integers and stored in the list `int_words`.
```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
```
## Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
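For example, with $t = 10^{-5}$, a very frequent word with $f(w_i) = 0.01$ (1% of all tokens) gets $P(w_i) = 1 - \sqrt{10^{-5}/0.01} \approx 0.97$, so it is dropped about 97% of the time, while any word with $f(w_i) \leq t$ gets $P(w_i) \leq 0$ and is never dropped.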
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
> **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word with the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.
```
from collections import Counter
import random
threshold = 1e-5
threshold = 0.0006849873916398326
word_counts = Counter(int_words)
total_count = len(int_words)
print(total_count)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
print(len(train_words))
print(train_words[:10])
```
## Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
```
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = set(words[start:idx] + words[idx+1:stop+1])
    return list(target_words)
```
Here's a function that returns batches for our network. The idea is that it grabs `batch_size` words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way; it helps save memory.
```
def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''
    n_batches = len(words)//batch_size
    # only full batches
    words = words[:n_batches*batch_size]
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx+batch_size]
        for ii in range(len(batch)):
            batch_x = batch[ii]
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch_x]*len(batch_y))
        yield x, y
```
## Building the graph
From [Chris McCormick's blog](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), we can see the general structure of our network.

The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the `inputs` and `labels` placeholders like normal.
> **Exercise:** Assign `inputs` and `labels` using `tf.placeholder`. We're going to be passing in integers, so set the data types to `tf.int32`. The batches we're passing in will have varying sizes, so set the batch sizes to [`None`]. To make things work later, you'll need to set the second dimension of `labels` to `None` or `1`.
```
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None], name='inputs')
    labels = tf.placeholder(tf.int32, [None, None], name='labels')
```
## Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
> **Exercise:** Tensorflow provides a convenient function [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use `tf.nn.embedding_lookup` to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using [tf.random_uniform](https://www.tensorflow.org/api_docs/python/tf/random_uniform).
```
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs)
```
## Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). Tensorflow has a convenient function to do this, [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss).
> **Exercise:** Below, create weights and biases for the softmax layer. Then, use [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss) to calculate the loss. Be sure to read the documentation to figure out how it works.
```
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
    softmax_b = tf.Variable(tf.zeros(n_vocab))

    # Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
                                      labels, embed,
                                      n_sampled, n_vocab)

    cost = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
```
## Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
```
with train_graph.as_default():
    ## From Thushan Ganegedara's implementation
    valid_size = 16 # Random set of words to evaluate similarity on.
    valid_window = 100
    # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
    valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
    valid_examples = np.append(valid_examples,
                               random.sample(range(1000,1000+valid_window), valid_size//2))

    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # We use the cosine distance:
    norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
    normalized_embedding = embedding / norm
    valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
    similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))

# If the checkpoints directory doesn't exist:
!mkdir checkpoints

epochs = 10
batch_size = 1000
window_size = 10

with train_graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    iteration = 1
    loss = 0
    sess.run(tf.global_variables_initializer())

    for e in range(1, epochs+1):
        batches = get_batches(train_words, batch_size, window_size)
        start = time.time()
        for x, y in batches:

            feed = {inputs: x,
                    labels: np.array(y)[:, None]}
            train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)

            loss += train_loss

            if iteration % 100 == 0:
                end = time.time()
                print("Epoch {}/{}".format(e, epochs),
                      "Iteration: {}".format(iteration),
                      "Avg. Training loss: {:.4f}".format(loss/100),
                      "{:.4f} sec/batch".format((end-start)/100))
                loss = 0
                start = time.time()

            if iteration % 1000 == 0:
                # note that this is expensive (~20% slowdown if computed every 500 steps)
                sim = similarity.eval()
                for i in range(valid_size):
                    valid_word = int_to_vocab[valid_examples[i]]
                    top_k = 8 # number of nearest neighbors
                    nearest = (-sim[i, :]).argsort()[1:top_k+1]
                    log = 'Nearest to %s:' % valid_word
                    for k in range(top_k):
                        close_word = int_to_vocab[nearest[k]]
                        log = '%s %s,' % (log, close_word)
                    print(log)

            iteration += 1
    save_path = saver.save(sess, "checkpoints/text8.ckpt")
    embed_mat = sess.run(normalized_embedding)
```
Restore the trained network if you need to:
```
with train_graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    embed_mat = sess.run(embedding)
```
## Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])

fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
    plt.scatter(*embed_tsne[idx, :], color='steelblue')
    plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
```
```
# Convert Fahrenheit -> Celsius
# Convert columns to categorical dtypes
# Convert the date column to a datetime format
# Fill NA values with 0
import pandas as pd
import numpy as np
from datetime import datetime
df = pd.read_csv('train.csv',encoding='euc-kr')
df.head()
df.info()
# convert to datetime
df['Date'] = pd.to_datetime(df['Date'])
df['Year'] =df['Date'].dt.year
df['Month'] =df['Date'].dt.month
df['Day'] =df['Date'].dt.day
df['Day_name'] =df['Date'].dt.day_name()
df.info()
df.head()
df.head()
df['Type'] = df['Type'].astype('category')
df['IsHoliday'] = df['IsHoliday'].astype('category')
df['Store'] = df['Store'].astype('category')
df['Dept'] = df['Dept'].astype('category')
df['Temperature'] = (df['Temperature'] - 32) / 1.8  # Fahrenheit -> Celsius
df.head()
df.corr()
import matplotlib.pyplot as plt
import seaborn as sns
def encode_sin_cos(df, col_n, max_val):
    df[col_n+'_sin'] = np.sin(2*np.pi*df[col_n]/max_val)
    df[col_n+'_cos'] = np.cos(2*np.pi*df[col_n]/max_val)
    return df
df = encode_sin_cos(df,'Month',12)
df = encode_sin_cos(df,'Day',31)
df[['Year','Month','Day','Month_sin','Month_cos','Day_sin','Day_cos']]
df_2010 = df[df['Year'] == 2010]
df_2011 = df[df['Year'] == 2011]
df_2012 = df[df['Year'] == 2012]
c_m = sns.scatterplot(x="Month_sin",y="Month_cos",data=df_2010)
c_m.set_title("Cyclic Encoding of Month (2010)")
c_m.set_ylabel("Cosine Encoded Months")
c_m.set_xlabel("Sine Encoded Months")
c_m = sns.scatterplot(x="Month_sin",y="Month_cos",data=df_2011)
c_m.set_title("Cyclic Encoding of Month (2011)")
c_m.set_ylabel("Cosine Encoded Months")
c_m.set_xlabel("Sine Encoded Months")
c_m = sns.scatterplot(x="Month_sin",y="Month_cos",data=df_2012)
c_m.set_title("Cyclic Encoding of Month (2012)")
c_m.set_ylabel("Cosine Encoded Months")
c_m.set_xlabel("Sine Encoded Months")
corr = df[['Store','Dept','Date','Weekly_Sales','IsHoliday','Temperature','Fuel_Price','MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5','CPI','Unemployment','Type','Size','Year','Month','Day','Day_name']].corr()
# corr['Weekly_Sales'].dtypes
corr['Weekly_Sales'].abs().sort_values(ascending=False)
sns.set(style="white")
corr = df[['Store','Dept','Date','Weekly_Sales','IsHoliday','Temperature','Fuel_Price','MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5','CPI','Unemployment','Type','Size','Year','Month','Day','Day_name']].corr()
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
plt.scatter(df['Fuel_Price'],df['Weekly_Sales'])
plt.show()
plt.scatter(df['Size'],df['Weekly_Sales'])
plt.show()
df.loc[df['Weekly_Sales'] >300000]
df.loc[df['Weekly_Sales'] >240000,"Date"].value_counts()
```
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from rdkit.Chem import MolFromSmiles
from rdkit.Chem.Descriptors import ExactMolWt
df = pd.read_csv("39_Formose reaction_MeOH.csv")#glucose_dry_impcols.csv
print(df.columns)
# first get rid of empty lines in the mass list by replacing with ''
df.replace('', np.nan, inplace=True)
# also, some 'Mass' values are not numbers
df.dropna(subset=['Mass'], inplace=True)
# now replace NaNs with '' to avoid weird errors
df.fillna('', inplace=True)
df.shape
df.head()
# make a list of exact mass and relative abundance.
mass_list = []
rel_abundance = []
for i in range(len(df)):
    # allow entire spectrum for this one
    if float(df['Mass'].iloc[i]) < 250 and "No Hit" not in df['Molecular Formula'].iloc[i]:
        mass_list.append(float(df['Mass'].iloc[i]))
        rel_abundance.append(float(df['Rel. Abundance'].iloc[i]))
# now, "renormalize" the relative abundance.
highest = max(rel_abundance)
norm_factor = 100.0/highest
normalized_abun = []
for ab in rel_abundance:
    normalized_abun.append(norm_factor*ab)
print(f'{len(mass_list)} items in {mass_list}')
# formose MOD output
# ../main/glucose/glucose_degradation_output_10mar.txt
data_mod = pd.read_csv('../main/formose/formose_output.txt', sep='\t', names=['Generation', 'SMILES'])
sim_masses = []
for i in range(len(data_mod)):
    row = data_mod.iloc[i]
    mol = MolFromSmiles(row['SMILES'])
    mol_wt = ExactMolWt(mol)
    sim_masses.append(mol_wt)
data_mod['Mol Wt'] = sim_masses
unique_sim_masses = list(set(sim_masses))
unique_mass_freq = [sim_masses.count(mass) for mass in unique_sim_masses]
highest_freq = max(unique_mass_freq)
norm_freq = [100*(freq/highest_freq) for freq in unique_mass_freq]
print('Unique masses:',len(unique_sim_masses))
print('Frequency of each mass', unique_mass_freq)
print(unique_sim_masses)
from matplotlib import rc
# Use LaTeX and CMU Serif font.
rc('text', usetex=True)
rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
# for some flexibility, create a container for the figure
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(6, 12), sharex=True) # create a figure object
#ax = fig.add_subplot(111) # create an axis object
# first, draw the experimental spectrum
axes[0].vlines(x=mass_list, ymin=0, ymax=normalized_abun, color='cornflowerblue')
# now the CNRN
axes[1].vlines(x=unique_sim_masses, ymin=0, ymax=norm_freq, color='deeppink')
#plt.bar(mass_list, rel_abundance, width=0.5)
axes[0].set_yscale('log')
axes[1].set_yscale('log')
axes[0].set_ylim([0.875, 125])
axes[1].set_ylim([0.875, 125])
plt.gca().invert_yaxis()
plt.xlim(155, 205)
plt.xlabel('Exact Mass')
#plt.ylabel('Normalized Abundance')
plt.tight_layout()
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig('formose_mirror_plot.jpg', dpi=300)
plt.show()
```
### Example Class
```
import datetime # we will use this for date objects
class Person:
def __init__(self, name, surname, birthdate, address, telephone, email):
self.name = name
self.surname = surname
self.birthdate = birthdate
self.address = address
self.telephone = telephone
self.email = email
def age(self):
today = datetime.date.today()
age = today.year - self.birthdate.year
if today < datetime.date(today.year, self.birthdate.month, self.birthdate.day):
age -= 1
return age
person = Person(
"Jane",
"Doe",
datetime.date(1992, 3, 12), # year, month, day
"No. 12 Short Street, Greenville",
"555 456 0987",
"[email protected]"
)
print(person.name)
print(person.email)
print(person.age())
```
The `__init__()` method is used to initialize an instance (object) of a class.<br>
`self.name`, `self.surname`, `self.birthdate`, `self.address`, `self.telephone`, and `self.email` are **instance** attributes.
You may have noticed that both of these method definitions have ```self``` as the first parameter, and we use this variable inside the method bodies – but we don’t appear to pass this parameter in. This is because whenever we call a method on an object, the object itself is automatically passed in as the first parameter. This gives us a way to access the object’s properties from inside the object’s methods.
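A small self-contained illustration of this (a toy class, not part of the example above):
```
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return "Hello, " + self.name

g = Greeter("Jane")
# Both calls are equivalent; in the first, `g` is passed in as `self` automatically,
# while in the second we pass it explicitly.
print(g.greet())          # Hello, Jane
print(Greeter.greet(g))   # Hello, Jane
```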
### Class attributes
We define class attributes in the body of a class, at the same indentation level as method definitions (one level up from the insides of methods):
```
class Person:
    TITLES = ('Dr', 'Mr', 'Mrs', 'Ms') # This is a Class attribute

    def __init__(self, title, name, surname):
        if title not in self.TITLES:
            raise ValueError("%s is not a valid title." % title)
        self.title = title
        self.name = name
        self.surname = surname

if __name__ == "__main__":
    me = Person(title='Mr', name='John', surname='Doe')
    print(me.title)
    print(me.name)
    print(me.surname)
    print(Person.TITLES)
```
Class attributes exist for all instances of a class; they are shared by all instances of that class.
### Class Decorators
**@classmethod** - Just like we can define class attributes, which are shared between all instances of a class, we can define class methods. We do this by using the @classmethod decorator to decorate an ordinary method.
**@staticmethod** - A static method doesn’t have the calling object passed into it as the first parameter. This means that it doesn’t have access to the rest of the class or instance at all. We can call them from an instance or a class object, but they are most commonly called from class objects, like class methods.<br><br>If we are using a class to group together related methods which don’t need to access each other or any other data on the class, we may want to use this technique. The advantage of using static methods is that we eliminate unnecessary cls or self parameters from our method definitions. The disadvantage is that if we do occasionally want to refer to another class method or attribute inside a static method we have to write the class name out in full, which can be much more verbose than using the cls variable which is available to us inside a class method.
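A minimal sketch showing both decorators on a hypothetical class (not part of the notebook above):
```
class TemperatureConverter:
    offset = 32          # class attribute shared by all instances

    @classmethod
    def describe(cls):
        # Receives the class itself as `cls`, so it can read class attributes.
        return "Converter with offset %d" % cls.offset

    @staticmethod
    def fahrenheit_to_celsius(value):
        # Receives neither the instance nor the class.
        return (value - 32) / 1.8

print(TemperatureConverter.describe())
print(TemperatureConverter.fahrenheit_to_celsius(212))  # 100.0
```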
# pipegraph User Guide
## Rationale
[scikit-learn](http://scikit-learn.org/stable/) provides a useful set of data preprocessors and machine learning models. The `Pipeline` object can effectively encapsulate a chain of transformers followed by a final model. Other functions, like `GridSearchCV`, can effectively use `Pipeline` objects to find the set of parameters that provide the best estimator.
### Pipeline + GridSearchCV: an awesome combination
Let's consider a simple example to illustrate the advantages of using `Pipeline` and `GridSearchCV`.
First, let's import the libraries we will use and then build an artificial data set following a simple polynomial rule:
```
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
X = 2*np.random.rand(100,1)-1
y = 40 * X**5 + 3*X*2 + 3*X + 3*np.random.randn(100,1)
```
Once we have some data ready, we instantiate the transformers and a regressor we want to fit:
```
scaler = MinMaxScaler()
polynomial_features = PolynomialFeatures()
linear_model = LinearRegression()
```
We define the steps that form the Pipeline object and then we instantiate such a Pipeline
```
steps = [('scaler', scaler),
('polynomial_features', polynomial_features),
('linear_model', linear_model)]
pipe = Pipeline(steps=steps)
```
Now we can pass this pipeline to `GridSearchCV`. When the `GridSearchCV` object is fitted, the search for the best combination for hyperparameters is performed according to the values provided in the `param_grid` parameter:
```
param_grid = {'polynomial_features__degree': range(1, 11),
'linear_model__fit_intercept': [True, False]}
grid_search_regressor = GridSearchCV(estimator=pipe, param_grid=param_grid, refit=True)
grid_search_regressor.fit(X, y);
```
And now we can check the results of fitting the Pipeline and the values of the hyperparameters:
```
y_pred = grid_search_regressor.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
coef = grid_search_regressor.best_estimator_.get_params()['linear_model'].coef_
degree = grid_search_regressor.best_estimator_.get_params()['polynomial_features'].degree
print('Information about the parameters of the best estimator: \n degree: {} \n coefficients: {} '.format(degree, coef))
```
### Pipeline weaknesses:
From this example we can learn that `Pipeline` and `GridSearchCV` are very useful tools to consider when attempting to fit models. As long as the needs of the user can be satisfied by a set of transformers followed by a final model, this approach is highly convenient. Additional advantages of such an approach are the **parallel computation** and **memoization** capabilities of GridSearchCV.
Unfortunately, though, the current implementation of scikit-learn's `Pipeline`:
- Does not allow postprocessors after the final model
- Does not allow extracting information about intermediate results
- Transforms X at every transformer, but a given step can only access the output of the immediately preceding step; it cannot access the values of X produced further upstream
- Only allows single path workflows
### pipegraph goals:
[pipegraph](https://github.com/mcasl/PipeGraph) was programmed in order to allow researchers and practitioners to:
- Use multiple path workflows
- Have access to every variable value produced by any step of the workflow
- Use an arbitrary number of models and transformers in the way the user prefers
- Express the model as a graph consisting of transformers, regressors, classifiers or custom blocks
- Build new custom block in an easy way
- Provide the community some adapters to scikit-learn's objects that may help further developments
## pipegraph main interface: The PipeGraphRegressor and PipeGraphClassifier classes
`pipegraph` provides the user two main classes: `PipeGraphRegressor` and `PipeGraphClassifier`. They both provide a familiar interface to the raw `PipeGraph` class that most users will not need to use. The `PipeGraph` class provides greater versatility allowing an arbitrary number of inputs and outputs and may be the base class for those users facing applications with such special needs. Most users, though, will be happy using just the former two classes provided as main interface to operate the library.
As the names intend to imply, `PipeGraphRegressor` is the class to use for regression models and `PipeGraphClassifier` is intended for classification problems. Indeed, the only difference between these two classes is the default scoring function, which has been chosen according to scikit-learn's defaults for each case. Apart from that, both classes share the same code. It must be noticed, though, that either of these classes can comprise a plethora of different regressors or classifiers. It is the final step that determines whether we are dealing with a classification or a regression problem.
## From a single path workflow to a graph with multiple paths: Understanding connections
These two classes provide an interface as similar to scikit-learn's `Pipeline` as possible in order to ease their use for those already familiar with scikit-learn. There is a slight but important difference that empowers these two classes: the `PipeGraph`-related classes accept extra information about which input variables are needed by each step, thus allowing multiple-path workflows.
To clarify the usage of these connections, let's start using `pipegraph` with a simple example that could otherwise be perfectly expressed using a scikit-learn `Pipeline` as well. In this simple case, the data is transformed using a `MinMaxScaler` transformer and the preprocessed data is fed to a `LinearRegression` model. Figure 1 shows the steps of this PipeGraphRegressor and the connections between them: which input variables each step accepts and where they come from. An input may be provided by a previous step, like the output of `scaler` (named `predict`), which is used as `linear_model`'s `X` variable; or it may be passed by the user in the `fit` or `predict` method calls, like `y`, which is not calculated by any previous block.
<img src="./images/figure_1-a.png" width="400" />
Figure 1. PipeGraph diagram showing the steps and their connections
In this first simple example of `pipegraph` the last step is a regressor, and thus the `PipeGraphRegressor` class is the most adequate class to choose. But other than that, we define the steps as usual for a standard `Pipeline`: as a list of tuples (label, sklearn object). We are not yet introducing any information about the connections, in which case the `PipeGraphRegressor` object is built considering that the steps follow a linear workflow, in the same way as a standard `Pipeline`.
```
from pipegraph import PipeGraphRegressor
X = 2*np.random.rand(100,1)-1
y = 40 * X**5 + 3*X*2 + 3*X + 3*np.random.randn(100,1)
scaler = MinMaxScaler()
linear_model = LinearRegression()
steps = [('scaler', scaler),
('linear_model', linear_model)]
pgraph = PipeGraphRegressor(steps=steps)
pgraph.fit(X, y)
```
As the printed output shows, the internal links displayed by the `fit_connections` and `predict_connections` parameters are in line with those we saw in Figure 1 and those expected of a single-path pipeline. As we did not specify these values, they were created by the `PipeGraphRegressor.__init__()` method as a convenience. We can have a look at these values by directly inspecting the attribute values. As `PipeGraphRegressor` and `PipeGraphClassifier` are wrappers of a `PipeGraph` object stored in the `_pipegraph` attribute, we have to dig a bit deeper to find the `fit_connections`:
```
pgraph._pipegraph.fit_connections
```
Figure 2 will surely help in understanding the syntax used by the connections dictionary (a hand-written sketch follows the list below). It goes like this:
- The keys of the top level entries of the dictionary must be the same as those of the previously defined steps.
- The values associated to these keys define the variables from other steps that are going to be considered as inputs for the current step. They are dictionaries themselves, where:
  - The keys of the nested dictionary represent the input variables as named at the current step.
  - The values associated to these keys define the steps that hold the desired information and the variables as named at that step. This information can be written as:
    - A tuple with the label of the step in position 0 followed by the name of the output variable in position 1.
    - A string:
      - If the string value is one of the labels from the steps, then it is interpreted as a tuple, as above, with the label of the step in position 0 and 'predict' as the name of the output variable in position 1.
      - Otherwise, it is considered to be a variable from an external source, such as those provided by the user while invoking the ``fit``, ``predict`` or ``fit_predict`` methods.
<img src="./images/figure_1-b.png" width="700" />
Figure 2. Illustration of the connections of the PipeGraph
The name 'predict' was chosen for the default output variables for convenience, as will be illustrated later on. The developers preferred to always use the same word for every block, even when the block is neither a regressor nor a classifier.
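As a minimal sketch (using the step labels of the example above; the dictionary that `pipegraph` actually generates may differ in detail), the connections for this simple workflow could be written as:
```
# Sketch of a connections dictionary for the scaler -> linear_model example above.
# Top-level keys are step labels; nested keys are that step's input variables.
connections_sketch = {
    'scaler': {'X': 'X'},                         # 'X' is not a step label, so it is external (user-provided)
    'linear_model': {'X': ('scaler', 'predict'),  # tuple form: (step label, output variable name)
                     'y': 'y'},                   # 'y' is external as well
}
```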
Finally, let's get the predicted values from this `PipeGraphRegressor` for illustrative purposes:
```
import matplotlib.pyplot as plt

y_pred = pgraph.predict(X)
plt.scatter(X, y, label='Original Data')
plt.scatter(X, y_pred, label='Predicted Data')
plt.title('Plots of original and predicted data')
plt.legend(loc='best')
plt.grid(True)
plt.xlabel('Index')
plt.ylabel('Value of Data')
plt.show()
```
## `GridSearchCV` compatibility requirements
Both `PipeGraphRegressor` and `PipeGraphClassifier` are compatible with `GridSearchCV` provided that the last step can be scored, either:
- by using `PipeGraphRegressor` or `PipeGraphClassifier` default scoring functions,
- by implementing a custom scoring function capable of handling that last step inputs and outputs,
- by using a `NeutralRegressor` or `NeutralClassifier` block as final step.
Pipegraphs whose last step is one of scikit-learn's estimators will work perfectly well with the `PipeGraphRegressor` or `PipeGraphClassifier` default scoring functions. The other two alternatives cover those cases in which a custom block with non-standard inputs is provided. In that case, choosing a neutral regressor or classifier is usually a much simpler approach than writing a custom scoring function. `NeutralRegressor` and `NeutralClassifier` are two classes provided for the user's convenience so that no special scoring function is needed. They simply allow the user to pick variables from previous steps as `X` and `y` and provide compatibility with a default scoring function.
### Example using default scoring functions
We will show more complex examples in what follows, but let's first illustrate with a simple example how to use `GridSearchCV` with the default scoring functions. Figure 3 shows the steps of the model:
- **scaler**: a preprocessing step using a `MinMaxScaler` object,
- **polynomial_features**: a transformer step that generates a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified one,
- **linear_model**: the `LinearRegression` object we want to fit.
<img src="./images/figure_2.png" width="700" />
Figure 3. Using a PipeGraphRegressor object as estimator by GridSearchCV
Firstly, we import the necessary libraries and create some artificial data.
```
from sklearn.preprocessing import PolynomialFeatures
X = 2*np.random.rand(100,1)-1
y = 40 * X**5 + 3*X*2 + 3*X + 3*np.random.randn(100,1)
scaler = MinMaxScaler()
polynomial_features = PolynomialFeatures()
linear_model = LinearRegression()
```
Secondly, we define the steps and a ``param_grid`` dictionary as specified by `GridSearchCV`.
In this case we just want to explore a few possibilities, varying the degree of the polynomials and whether or not to use an intercept in the linear model.
```
steps = [('scaler', scaler),
('polynomial_features', polynomial_features),
('linear_model', linear_model)]
param_grid = {'polynomial_features__degree': range(1, 11),
'linear_model__fit_intercept': [True, False]}
```
Now, we use ``PipeGraphRegressor`` as the estimator for `GridSearchCV` and perform the ``fit`` and ``predict`` operations. As the last step, a linear regressor from scikit-learn, already works with the default scoring functions, no extra effort is needed to make it compatible with `GridSearchCV`.
```
from sklearn.model_selection import GridSearchCV

pgraph = PipeGraphRegressor(steps=steps)
grid_search_regressor = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
grid_search_regressor.fit(X, y)
y_pred = grid_search_regressor.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
coef = grid_search_regressor.best_estimator_.get_params()['linear_model'].coef_
degree = grid_search_regressor.best_estimator_.get_params()['polynomial_features'].degree
print('Information about the parameters of the best estimator: \n degree: {} \n coefficients: {} '.format(degree, coef))
```
This example showed how to use `GridSearchCV` with `PipeGraphRegressor` in a simple single-path workflow with default scoring functions. Let's explore a more complex example in the next section.
## Multiple path workflow examples
Until now, all the examples we showed displayed a single-path sequence of steps, and thus they could have been done just as easily using scikit-learn's standard `Pipeline`. The following examples show multiple-path cases, in which we illustrate some compatibility constraints that occur and how to deal with them successfully.
### Example: Injecting a varying vector in the sample_weight parameter of LinearRegression
This example illustrates the case in which a varying vector is injected into a linear regression model as ``sample_weight``, in order to evaluate several candidates and obtain the ``sample_weight`` that generates the best results.
The steps of this model are shown in Figure 4. To perform such an experiment, the following issues appear:
- The shape of the graph is not a single-path workflow like those that can be implemented using `Pipeline`. Thus, we need to use `pipegraph`.
- The model has 3 input variables: `X`, `y`, and `sample_weight`. The `PipeGraph` class can accept an arbitrary number of input variables but, in order to use scikit-learn's current implementation of `GridSearchCV`, only `X` and `y` are accepted. We can work around this by first concatenating `X` and `sample_weight` into a single pandas DataFrame, for example, in order to comply with `GridSearchCV`'s requirements. That implies that the graph must be capable of separating the augmented `X` back into its two components. The **selector** step is in charge of this splitting. It features a `ColumnSelector` custom step: not an original scikit-learn object, but a custom class that allows splitting an array into columns. In this case, the augmented ``X`` data is divided column-wise as specified in a mapping dictionary (a conceptual sketch of this splitting appears after Figure 4). We will talk about custom blocks later on.
- The information provided to the ``sample_weight`` parameter of the `LinearRegression` step varies across the different scenarios explored by `GridSearchCV`. In a `GridSearchCV` with `Pipeline`, ``sample_weight`` can't vary because it is treated as a ``fit_param`` instead of a variable. Using pipegraph's connections, this is no longer a problem.
- As we need a custom transformer to apply the power function to the ``sample_weight`` vector, we implement the **custom_power** step featuring a `CustomPower` custom class. Again, we will talk about custom blocks later on.
The three other steps from the model are already known:
- **scaler**: implements `MinMaxScaler` class
- **polynomial_features**: Contains a `PolynomialFeatures` object
- **linear_model**: Contains a `LinearRegression` model
<img src="./images/figure_3.png" width="600" />
Figure 4. A multipath model
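The following is only a conceptual sketch of what such a column-selector step does, not `pipegraph`'s actual implementation: it slices an augmented DataFrame column-wise according to a mapping from output names to column slices.
```
import pandas as pd

# Conceptual sketch: split an augmented input into named column blocks.
augmented_X = pd.DataFrame({'X': [1.0, 2.0, 3.0],
                            'sample_weight': [0.1, 0.9, 0.5]})
mapping = {'X': slice(0, 1), 'sample_weight': slice(1, 2)}
outputs = {name: augmented_X.iloc[:, columns] for name, columns in mapping.items()}
print(outputs['X'])              # first column only
print(outputs['sample_weight'])  # second column only
```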
Let's import the new components:
```
import pandas as pd
from pipegraph.base import ColumnSelector
from pipegraph.demo_blocks import CustomPower
```
We create an augmented ``X`` in which all data but ``y`` is concatenated. In this case, we concatenate ``X`` and ``sample_weight`` vector.
```
X = pd.DataFrame(dict(X=np.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]),
sample_weight=np.array([0.01, 0.95, 0.10, 0.95, 0.95, 0.10, 0.10, 0.95, 0.95, 0.95, 0.01])))
y = np.array( [ 10, 4, 20, 16, 25 , -60, 85, 64, 81, 100, 150])
```
Next we define the steps and we use `PipeGraphRegressor` as estimator for `GridSearchCV`.
```
scaler = MinMaxScaler()
polynomial_features = PolynomialFeatures()
linear_model = LinearRegression()
custom_power = CustomPower()
selector = ColumnSelector(mapping={'X': slice(0, 1),
'sample_weight': slice(1,2)})
steps = [('selector', selector),
('custom_power', custom_power),
('scaler', scaler),
('polynomial_features', polynomial_features),
('linear_model', linear_model)]
pgraph = PipeGraphRegressor(steps=steps)
```
Now, we have to define the connections of the model. We could have specified a dictionary containing the connections, but [as suggested by Joel Nothman](https://github.com/scikit-learn-contrib/scikit-learn-contrib/issues/28), scikit-learn users might find it more convenient to use a method like `inject`, as in this example. Let's look at `inject`'s docstring:
```
import inspect
print(inspect.getdoc(pgraph.inject))
```
`inject` allows chaining calls to progressively describe all the connections needed, in an easy-to-read manner:
```
(pgraph.inject(sink='selector', sink_var='X', source='_External', source_var='X')
.inject('custom_power', 'X', 'selector', 'sample_weight')
.inject('scaler', 'X', 'selector', 'X')
.inject('polynomial_features', 'X', 'scaler')
.inject('linear_model', 'X', 'polynomial_features')
.inject('linear_model', 'y', source_var='y')
.inject('linear_model', 'sample_weight', 'custom_power'))
```
Then we define ``param_grid`` as expected by `GridSearchCV`, to explore several combinations of parameter values.
```
param_grid = {'polynomial_features__degree': range(1, 3),
'linear_model__fit_intercept': [True, False],
'custom_power__power': [1, 5, 10, 20, 30]}
grid_search_regressor = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
grid_search_regressor.fit(X, y)
y_pred = grid_search_regressor.predict(X)
plt.scatter(X.loc[:,'X'], y)
plt.scatter(X.loc[:,'X'], y_pred)
plt.show()
power = grid_search_regressor.best_estimator_.get_params()['custom_power']
print('Power that obtains the best results in the linear model: \n {}'.format(power))
```
This example showed how to solve current limitations of scikit-learn `Pipeline`:
- Displayed a multipath workflow successfully implemented by **pipegraph**
- Showed how to circumvent current limitations of standard `GridSearchCV`, in particular, the restriction on the number of input parameters
- Showed the flexibility of **pipegraph** for specifying the connections in an easy-to-read manner using the `inject` method
- Demonstrated the capability of injecting a previous step's output into other models' parameters, as is the case with the ``sample_weight`` parameter of the linear regressor.
### Example: Combination of classifiers
The data is first transformed by scaling its features. Then a set of classifiers is combined as input to a neural network, and the scaled inputs are injected into the neural network as well.
Steps of the **PipeGraph**:
- **scaler**: A `MinMaxScaler` data preprocessor
- **gaussian_nb**: A `GaussianNB` classifier
- **svc**: A `SVC` classifier
- **concat**: A `Concatenator` custom class that appends the outputs of the `GaussianNB`, `SVC` classifiers, and the scaled inputs.
- **mlp**: A `MLPClassifier` object
<img src="./images/figure_4.png" width="700" />
Figure 5. PipeGraph diagram showing the steps and their connections
```
from pipegraph.base import PipeGraphClassifier, Concatenator
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
iris = load_iris()
X = iris.data
y = iris.target
scaler = MinMaxScaler()
gaussian_nb = GaussianNB()
svc = SVC()
mlp = MLPClassifier()
concatenator = Concatenator()
steps = [('scaler', scaler),
('gaussian_nb', gaussian_nb),
('svc', svc),
('concat', concatenator),
('mlp', mlp)]
```
In this example we use a `PipeGraphClassifier` because the result is a classification and we want to take advantage of scikit-learn's default scoring method for classifiers. Once more, we use the `inject` chain of calls to define the connections.
```
pgraph = PipeGraphClassifier(steps=steps)
(pgraph.inject(sink='scaler', sink_var='X', source='_External', source_var='X')
.inject('gaussian_nb', 'X', 'scaler')
.inject('gaussian_nb', 'y', source_var='y')
.inject('svc', 'X', 'scaler')
.inject('svc', 'y', source_var='y')
.inject('concat', 'X1', 'scaler')
.inject('concat', 'X2', 'gaussian_nb')
.inject('concat', 'X3', 'svc')
.inject('mlp', 'X', 'concat')
.inject('mlp', 'y', source_var='y')
)
param_grid = {'svc__C': [0.1, 0.5, 1.0],
'mlp__hidden_layer_sizes': [(3,), (6,), (9,),],
'mlp__max_iter': [5000, 10000]}
grid_search_classifier = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
grid_search_classifier.fit(X, y)
y_pred = grid_search_classifier.predict(X)
grid_search_classifier.best_estimator_.get_params()
# Code for plotting the confusion matrix taken from 'Python Data Science Handbook' by Jake VanderPlas
from sklearn.metrics import confusion_matrix
import seaborn as sns; sns.set() # for plot styling
mat = confusion_matrix(y_pred, y)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False)
plt.xlabel('true label')
plt.ylabel('predicted label');
plt.show()
```
This example displayed complex data injections that are successfully managed by **pipegraph**.
### Example: Demultiplexor - multiplexor
An imaginative layout: a classifier predicts the cluster labels and a separate model is fitted for each cluster. We will elaborate on this example, introducing variations, in the examples that follow. As the figure shows, the steps of the **PipeGraph** are:
- **scaler**: A `MinMaxScaler` data preprocessor
- **classifier**: A `GaussianMixture` classifier
- **demux**: A custom `Demultiplexer` class in charge of splitting the input arrays according to the selection input vector
- **lm_0**: A `LinearRegression` model
- **lm_1**: A `LinearRegression` model
- **lm_2**: A `LinearRegression` model
- **mux**: A custom `Multiplexer` class in charge of combining different input arrays into a single one according to the selection input vector
<img src="./images/figure_5.png" width="700" />
Figure 6. PipeGraph diagram showing the steps and their connections
```
from pipegraph.base import PipeGraphRegressor, Demultiplexer, Multiplexer
from sklearn.mixture import GaussianMixture
X_first = pd.Series(np.random.rand(100,))
y_first = pd.Series(4 * X_first + 0.5*np.random.randn(100,))
X_second = pd.Series(np.random.rand(100,) + 3)
y_second = pd.Series(-4 * X_second + 0.5*np.random.randn(100,))
X_third = pd.Series(np.random.rand(100,) + 6)
y_third = pd.Series(2 * X_third + 0.5*np.random.randn(100,))
X = pd.concat([X_first, X_second, X_third], axis=0).to_frame()
y = pd.concat([y_first, y_second, y_third], axis=0).to_frame()
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
demux = Demultiplexer()
lm_0 = LinearRegression()
lm_1 = LinearRegression()
lm_2 = LinearRegression()
mux = Multiplexer()
steps = [('scaler', scaler),
('classifier', gaussian_mixture),
('demux', demux),
('lm_0', lm_0),
('lm_1', lm_1),
('lm_2', lm_2),
('mux', mux), ]
```
Instead of using ``inject`` as in the previous example, in this one we are going to pass a dictionary describing the connections to the `PipeGraph` constructor:
```
connections = { 'scaler': {'X': 'X'},
'classifier': {'X': 'scaler'},
'demux': {'X': 'scaler',
'y': 'y',
'selection': 'classifier'},
'lm_0': {'X': ('demux', 'X_0'),
'y': ('demux', 'y_0')},
'lm_1': {'X': ('demux', 'X_1'),
'y': ('demux', 'y_1')},
'lm_2': {'X': ('demux', 'X_2'),
'y': ('demux', 'y_2')},
'mux': {'0': 'lm_0',
'1': 'lm_1',
'2': 'lm_2',
'selection': 'classifier'}}
pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```
### Example: Encapsulating several blocks into a PipeGraph and reusing it
We consider the previous example in which we had the following pipegraph model:
<img src="./images/figure_6.png" width="700" />
We may be interested in using a fragment of the pipegraph, for example those blocks marked with the circle (the Demultiplexer, the collection of linear models, and the Multiplexer), as a single block in another pipegraph:
<img src="./images/figure_7.png" width="500" />
We prepare the data and build a PipeGraph with these steps alone:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression
from pipegraph.base import PipeGraph, PipeGraphRegressor, Demultiplexer, Multiplexer
# Prepare some artificial data
X_first = pd.Series(np.random.rand(100,))
y_first = pd.Series(4 * X_first + 0.5*np.random.randn(100,))
X_second = pd.Series(np.random.rand(100,) + 3)
y_second = pd.Series(-4 * X_second + 0.5*np.random.randn(100,))
X_third = pd.Series(np.random.rand(100,) + 6)
y_third = pd.Series(2 * X_third + 0.5*np.random.randn(100,))
X = pd.concat([X_first, X_second, X_third], axis=0).to_frame()
y = pd.concat([y_first, y_second, y_third], axis=0).to_frame()
# Create a single complex block
demux = Demultiplexer()
lm_0 = LinearRegression()
lm_1 = LinearRegression()
lm_2 = LinearRegression()
mux = Multiplexer()
three_multiplexed_models_steps = [
('demux', demux),
('lm_0', lm_0),
('lm_1', lm_1),
('lm_2', lm_2),
('mux', mux), ]
three_multiplexed_models_connections = {
'demux': {'X': 'X',
'y': 'y',
'selection': 'selection'},
'lm_0': {'X': ('demux', 'X_0'),
'y': ('demux', 'y_0')},
'lm_1': {'X': ('demux', 'X_1'),
'y': ('demux', 'y_1')},
'lm_2': {'X': ('demux', 'X_2'),
'y': ('demux', 'y_2')},
'mux': {'0': 'lm_0',
'1': 'lm_1',
'2': 'lm_2',
'selection': 'selection'}}
three_multiplexed_models = PipeGraph(steps=three_multiplexed_models_steps,
fit_connections=three_multiplexed_models_connections )
```
Now we can treat this PipeGraph as a reusable component and use it as a unitary step in another PipeGraph:
```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = three_multiplexed_models
steps = [('scaler', scaler),
('classifier', gaussian_mixture),
('models', three_multiplexed_models), ]
connections = {'scaler': {'X': 'X'},
'classifier': {'X': 'scaler'},
'models': {'X': 'scaler',
'y': 'y',
'selection': 'classifier'},
}
pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```
### Example: Dynamically built component using initialization parameters
The last section showed how the user can encapsulate several blocks into a PipeGraph and use it as a single unit in another PipeGraph. Now we will see how such components can be built dynamically at runtime depending on initialization parameters.
<img src="./images/figure_8.png" width="700" />
We can think of programmatically changing the number of regression models inside the component we isolated in the previous example. First we do it by using initialization parameters in a ``PipeGraph`` subclass called ``pipegraph.base.RegressorsWithParametrizedNumberOfReplicas``:
```
import inspect
from pipegraph.base import RegressorsWithParametrizedNumberOfReplicas
print(inspect.getsource(RegressorsWithParametrizedNumberOfReplicas))
```
As can be seen from the source code, in this example we are basically interested in a PipeGraph object whose `__init__` takes different parameters than the usual ones. Thus, we subclass `PipeGraph` and reimplement the `__init__` method. In doing so, we can work out the structure of the steps and connections before calling `super().__init__`, which builds the regular `PipeGraph` object.
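The following is a minimal sketch of that pattern, not the actual library source (whose parameters and internals may differ): the subclass assembles the steps and connections from its own initialization parameters and then delegates to `super().__init__`. The class name and helper logic here are illustrative only.
```
from sklearn.base import clone
from sklearn.linear_model import LinearRegression
from pipegraph.base import PipeGraph, Demultiplexer, Multiplexer

class ParametrizedReplicasSketch(PipeGraph):
    """Illustrative sketch: build `number_of_replicas` demultiplexed regressors at init time."""
    def __init__(self, number_of_replicas=3, model_prototype=LinearRegression(), model_parameters=None):
        models = [clone(model_prototype).set_params(**(model_parameters or {}))
                  for _ in range(number_of_replicas)]
        steps = [('demux', Demultiplexer())]
        connections = {'demux': {'X': 'X', 'y': 'y', 'selection': 'selection'}}
        mux_inputs = {'selection': 'selection'}
        for i, model in enumerate(models):
            steps.append((f'model_{i}', model))
            connections[f'model_{i}'] = {'X': ('demux', f'X_{i}'),
                                         'y': ('demux', f'y_{i}')}
            mux_inputs[str(i)] = f'model_{i}'
        steps.append(('mux', Multiplexer()))
        connections['mux'] = mux_inputs
        super().__init__(steps=steps, fit_connections=connections)
```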
Using this new component we can build a PipeGraph with as many multiplexed models as given by the `number_of_replicas` parameter:
```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = RegressorsWithParametrizedNumberOfReplicas(number_of_replicas=3,
model_prototype=LinearRegression(),
model_parameters={})
steps = [('scaler', scaler),
('classifier', gaussian_mixture),
('models', models), ]
connections = {'scaler': {'X': 'X'},
'classifier': {'X': 'scaler'},
'models': {'X': 'scaler',
'y': 'y',
'selection': 'classifier'},
}
pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```
### Example: Dynamically built component using input signal values during the fit stage
The last example showed how to grow a PipeGraph object programmatically at runtime using the `__init__` method. In this example, we are going to show how we can change the internal structure of a PipeGraph object not during initialization but during fit. Specifically, we will show how the multiplexed models can be added dynamically at runtime depending on the input signal values seen during `fit`.
Now we consider the possibility of using the classifier's output to automatically adjust the number of replicas. This can be seen as a PipeGraph changing its inner topology to adapt its connections and steps to the context set by other components. This morphing capability opens up interesting possibilities to explore.
```
import inspect
from pipegraph.base import RegressorsWithDataDependentNumberOfReplicas
print(inspect.getsource(RegressorsWithDataDependentNumberOfReplicas))
```
Again we subclass the parent `PipeGraph` class and implement a different `__init__`. In this example we don't use a `number_of_replicas` parameter, as it will be inferred from the data during `fit`; we only pass the parameters that let us change the regressor models. As can be seen from the code, the `__init__` method just stores the values provided by the user, and it is the `fit` method that is in charge of growing the inner structure of the pipegraph.
Using this new component we can build a simplified PipeGraph:
```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = RegressorsWithDataDependentNumberOfReplicas(model_prototype=LinearRegression(), model_parameters={})
steps = [('scaler', scaler),
('classifier', gaussian_mixture),
('models', models), ]
connections = {'scaler': {'X': 'X'},
'classifier': {'X': 'scaler'},
'models': {'X': 'scaler',
'y': 'y',
'selection': 'classifier'},
}
pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```
### Example: GridSearch on dynamically built component using input signal values
The previous example showed how a PipeGraph object can be built dynamically at runtime depending on input signal values during fit. In this example we will show how to use `GridSearchCV` to explore the best combination of hyperparameters.
```
from sklearn.model_selection import train_test_split
from pipegraph.base import NeutralRegressor
# We prepare some data
X_first = pd.Series(np.random.rand(100,))
y_first = pd.Series(4 * X_first + 0.5*np.random.randn(100,))
X_second = pd.Series(np.random.rand(100,) + 3)
y_second = pd.Series(-4 * X_second + 0.5*np.random.randn(100,))
X_third = pd.Series(np.random.rand(100,) + 6)
y_third = pd.Series(2 * X_third + 0.5*np.random.randn(100,))
X = pd.concat([X_first, X_second, X_third], axis=0).to_frame()
y = pd.concat([y_first, y_second, y_third], axis=0).to_frame()
X_train, X_test, y_train, y_test = train_test_split(X, y)
```
To ease the calculation of the score for `GridSearchCV`, we add a neutral regressor as the last step, capable of calculating the score using a default scoring function. This is much more convenient than programming a custom scoring function for a block with an arbitrary number of inputs.
```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = RegressorsWithDataDependentNumberOfReplicas(model_prototype=LinearRegression(), model_parameters={})
neutral_regressor = NeutralRegressor()
steps = [('scaler', scaler),
('classifier', gaussian_mixture),
('models', models),
('neutral', neutral_regressor)]
connections = {'scaler': {'X': 'X'},
'classifier': {'X': 'scaler'},
'models': {'X': 'scaler',
'y': 'y',
'selection': 'classifier'},
'neutral': {'X': 'models'}
}
pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
```
Using GridSearchCV to find the best number of clusters and the best regressors
```
from sklearn.model_selection import GridSearchCV
param_grid = {'classifier__n_components': range(2,10)}
gs = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
gs.fit(X_train, y_train)
y_pred = gs.predict(X_train)
plt.scatter(X_train, y_train)
plt.scatter(X_train, y_pred)
print("Score:" , gs.score(X_test, y_test))
print("classifier__n_components:", gs.best_estimator_.get_params()['classifier__n_components'])
```
### Example: Alternative solution
Now we consider an alternative solution to the previous example. The solution already shown displayed the potential
of being able to morph the graph during fitting. A simpler approach is considered in this example by reusing
components and combining the classifier with the demultiplexed models.
```
from pipegraph.base import ClassifierAndRegressorsBundle
print(inspect.getsource(ClassifierAndRegressorsBundle))
```
As before, we build a custom block by subclassing `PipeGraph` and modifying the `__init__` method to provide the parameters specifically needed for our purposes. Then, in the same PipeGraph, we chain the classifier and the already known block for creating multiplexed models, providing the parameters during `__init__`. Note that the classifier and the models share the same number of clusters and models: the `number_of_replicas` value provided by the user.
Using this new component we can build a simplified PipeGraph:
```
scaler = MinMaxScaler()
classifier_and_models = ClassifierAndRegressorsBundle(number_of_replicas=6)
neutral_regressor = NeutralRegressor()
steps = [('scaler', scaler),
('bundle', classifier_and_models),
('neutral', neutral_regressor)]
connections = {'scaler': {'X': 'X'},
'bundle': {'X': 'scaler', 'y': 'y'},
'neutral': {'X': 'bundle'}}
pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
```
Using GridSearchCV to find the best number of clusters and the best regressors
```
from sklearn.model_selection import GridSearchCV
param_grid = {'bundle__number_of_replicas': range(3,10)}
gs = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
gs.fit(X_train, y_train)
y_pred = gs.predict(X_train)
plt.scatter(X_train, y_train)
plt.scatter(X_train, y_pred)
print("Score:" , gs.score(X_test, y_test))
print("bundle__number_of_replicas:", gs.best_estimator_.get_params()['bundle__number_of_replicas'])
```
# 2. Acquire the Data
## Finding Data Sources
There are three places to get onion price and quantity information by market.
1. **[Agmarket](http://agmarknet.nic.in/)** - This is the website run by the Directorate of Marketing & Inspection (DMI), Ministry of Agriculture, Government of India, and it provides daily price and arrival data for all agricultural commodities at the national and state level. Unfortunately, the link to get the Market-wise Daily Report for a Specific Commodity (Onion for us) leads to a multipage aspx entry form to get data for each date. So it is likely to require an involved scraper to get the data. Too much effort - move on. Here is the best link to go to for what is available - http://agmarknet.nic.in/agnew/NationalBEnglish/SpecificCommodityWeeklyReport.aspx?ss=1
2. **[Data.gov.in](https://data.gov.in/)** - This is normally a good place to get government data in a machine-readable form like csv or xml. The Variety-wise Daily Market Prices Data of Onion is available for each year as an XML, but unfortunately it does not include the quantity information that is needed. It would be good to have both price and quantity - so even though this is easy, let's see if we can get both from a different source. Here is the best link to go to for what is available - https://data.gov.in/catalog/variety-wise-daily-market-prices-data-onion#web_catalog_tabs_block_10
3. **[NHRDF](http://nhrdf.org/en-us/)** - This is the website of the National Horticultural Research & Development Foundation, which maintains a database on Market Arrivals and Price, Area and Production, and Export Data for three commodities - Garlic, Onion and Potatoes. We are in luck! It has data from 1996 onwards and there is only one form to fill to get the data in tabular form. Further, it also has production and export data. Excellent. Let's use this. Here is the best link to go to for all that is available - http://nhrdf.org/en-us/DatabaseReports
## Scraping the Data
### Ways to Scrape Data
Now we can do this at two different levels of sophistication:
1. **Automate the form filling process**: The form on this page looks simple. But viewing the source in the browser shows that the form has hidden fields, so we would need to act like a browser to get the session fields and then submit the form. This is a little more complicated than simply scraping a table on a webpage (a rough sketch of this approach follows this list).
2. **Manually fill the form**: What if we manually fill the form with the desired fields and then save the page as an HTML file? Then we can read this file and just scrape the table from it. Let's go with this simple way for now.
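For reference, the automated approach would roughly follow the pattern below. This is only a sketch under assumptions: the entries added to `form_data` are placeholders, not the real field names used by the NHRDF form, which would have to be read from the page source.
```
# Sketch only: the visible form field names below are placeholders, not the real NHRDF ones.
import requests
from bs4 import BeautifulSoup

url = 'http://nhrdf.org/en-us/MonthWiseMarketArrivals'
session = requests.Session()
soup = BeautifulSoup(session.get(url).text, 'html.parser')

# ASP.NET pages carry hidden state fields (e.g. __VIEWSTATE) that must be echoed back.
form_data = {tag['name']: tag.get('value', '')
             for tag in soup.select('input[type=hidden]')}
form_data.update({'CropName': 'Onion', 'Month': 'January', 'Year': '2016'})  # placeholder names
response = session.post(url, data=form_data)
# The returned HTML could then be parsed with pd.read_html(response.text), as below.
```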
### Scraping - Manual Form Filling
So let us fill the form to get a small subset of data and test our scraping process. We will start by getting the [Monthwise Market Arrivals](http://nhrdf.org/en-us/MonthWiseMarketArrivals).
- Crop Name: Onion
- Month: January
- Market: All
- Year: 2016
The saved webpage is available at [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html)
### Understand the HTML Structure
We need to scrape data from this HTML page... So let us try to understand the structure of the page.
1. You can view the source of the page - typically Right Click and View Source in any browser - and that will give you the source HTML for the page.
2. You can open the developer tools in your browser and investigate the structure as you mouse over the page.
3. We can use a tool like [Selector Gadget](http://selectorgadget.com/) to understand the ids and classes used in the web page.
Our data is under the `<table>` tag.
### Exercise #1
Find the number of tables in the HTML Structure of [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html)?
### Find all the Tables
```
# Import the library we need, which is Pandas
import pandas as pd
# Read all the tables from the html document
AllTables = pd.read_html('MonthWiseMarketArrivalsJan2016.html')
# Let us find out how many tables has it found?
len(AllTables)
type(AllTables)
```
### Exercise #2
Find the exact table of data we want in the list of AllTables?
```
AllTables[4]
```
### Get the exact table
To read the exact table we need to pass in an identifier value that identifies the table. We can use the `attrs` parameter in `read_html` to do so. The attribute we will pass is the `id` of the table.
```
# So can we read our exact table
OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html',
attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})
# So how many tables have we got now
len(OneTable)
# Show the table of data identified by pandas, with just the first five rows
OneTable[0].head()
```
However, we have not got the header correctly in our dataframe. Let us see if we can fix this.
To get help on any function, just put `??` before the function name. Run this and see what additional parameter you need to define to get the header correctly.
```
??pd.read_html
```
### Exercise #3
Read the html file again and ensure that the correct header is identified by pandas.
```
OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html', header = 0,
attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})
```
Show the top five rows of the dataframe you have read to ensure the headers are now correct.
```
OneTable[0].head()
```
### Dataframe Viewing
```
# Let us store the dataframe in a variable called df. You will see this as a very common convention when using pandas for data science
df = OneTable[0]
# Shape of the dataset - number of rows & number of columns in the dataframe
df.shape
# Get the names of all the columns
df.columns
# Can we see sample rows - the top 5 rows
df.head()
# Can we see sample rows - the bottom 5 rows
df.tail()
# Can we access a specific column
df["Market"]
# Using the dot notation
df.Market
# Selecting specific column and rows
df[0:5]["Market"]
# Works both ways
df["Market"][0:5]
# Getting unique values of Market
pd.unique(df['Market'])
```
## Downloading the Entire Month Wise Arrival Data
```
AllTable = pd.read_html('MonthWiseMarketArrivals.html', header = 0,
attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})
AllTable[0].head()
??pd.DataFrame.to_csv
AllTable[0].columns
# Change the column names to simpler ones
AllTable[0].columns = ['market', 'month', 'year', 'quantity', 'priceMin', 'priceMax', 'priceMod']
AllTable[0].head()
# Save the dataframe to a csv file
AllTable[0].to_csv('MonthWiseMarketArrivals.csv', index = False)
```
#Author : Devesh Kumar
## Task 4 : Prediction using Decision Tree Algorithm
___
## GRIP @ The Sparks Foundation
____
# Role : Data Science and Business Analytics [Batch May-2021]
## Table of Contents<br>
> - 1. Introduction.
- 2. Importing Libraries.
- 3. Fetching and loading data.
- 4. Checking for null values.
- 5. Plotting Pairplot.
- 6. Building Decision Tree Model.
- 7. Training and fitting the model.
- 8. Model Evaluation
- 9. Graphical Visualisation.
- 10. Conclusion.
#**Introduction**
* We are given the iris flower dataset, with features sepal length, sepal width, petal length and petal width.
* Our aim is to create a decision tree classifier to classify the flowers in categories that are: Iris setosa, Iris versicolor, and Iris virginica.
* Here, Python language is used to build the classifier.
* Dataset link: https://bit.ly/3kXTdox
#**Importing Libraries**
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
```
#**Fetching and loading data**
```
iris = pd.read_csv("/content/sample_data/Iris - Iris.csv") #loading the dataset in iris variable
iris.head()
iris.tail()
iris = iris.drop(['Id'], axis = 1) #dropping column 'Id'
iris
iris.shape
```
After dropping `Id`, the iris dataset has 5 columns (4 features plus the species label) and 150 datapoints.
#**Checking for Null Values**
```
iris.info()
```
Here, we can see that no null values are present.
```
iris['Species'].value_counts()
```
From the above data we can say that the iris dataset is a balanced dataset, as the number of datapoints for every class is the same.
#**Plotting Pairplot**
```
sns.set_style("whitegrid")
sns.pairplot(iris, hue="Species", height=3);  # 'height' replaces the deprecated 'size' argument in recent seaborn versions
plt.show()
```
#**Splitting The Data**
```
X = iris.iloc[ : , : -1]
y = iris.iloc[ : , -1 ]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
```
#**Decision Tree**
#**Training and fitting the model**
```
from sklearn.tree import DecisionTreeClassifier
tree_clf=DecisionTreeClassifier()
tree_clf.fit(x_train,y_train)
y_pred = tree_clf.predict(x_test)
y_pred
pd.DataFrame(y_pred, y_test)
```
#**Model Evaluation**
```
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
```
#**Graphical Visualization**
```
# Import necessary libraries for graph viz
from io import StringIO  # sklearn.externals.six has been removed from recent scikit-learn versions
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
# Visualize the graph
dot_data = StringIO()
export_graphviz(tree_clf, out_file=dot_data, feature_names=iris.columns[:-1],
class_names = ['Setosa', 'Versicolor', 'Virginica'] ,filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
```
#**Conclusion**
Hence the Decision Tree Classifier is created; you can feed new data to this classifier and it will be able to predict the corresponding class.
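For example, a quick sketch (assuming the fitted `tree_clf` above, with measurements in cm in the order sepal length, sepal width, petal length, petal width):
```
# Classify a new, hypothetical flower measurement with the trained model.
sample = [[5.1, 3.5, 1.4, 0.2]]  # sepal length, sepal width, petal length, petal width (cm)
print(tree_clf.predict(sample))
```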
```
# !pip install pandas_datareader keras seaborn
# !conda install -y -c conda-forge fbprophet
# !pip install pydot graphviz
import boto3
import base64
from botocore.exceptions import ClientError
from IPython.display import display
import pandas_datareader
import pandas as pd
import numpy as np
from keras import Sequential
from keras.layers import Dense, LSTM, InputLayer, Attention
import seaborn as sns
import matplotlib.pyplot as plt
from keras.utils import plot_model
from keras.callbacks import EarlyStopping
tickers = ['AAPL']
metric = 'low'
pc_metric = f'{metric}_percent_change'
norm_metric = f'{pc_metric}_norm'
lookback=100
def get_secret():
secret_name = "alpha_vantage"
region_name = "us-east-2"
# Create a Secrets Manager client
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=region_name
)
try:
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
except ClientError as e:
display(e)
else:
# Decrypts secret using the associated KMS CMK.
# Depending on whether the secret is a string or binary, one of these fields will be populated.
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
else:
secret = base64.b64decode(get_secret_value_response['SecretBinary'])
return secret
def format_dates(daily_stocks_data):
df = daily_stocks_data.copy()
df['date']=df.index
df.reset_index(inplace=True, drop=True)
return df
def add_percent_change(daily_stocks_data, metric):
percents = list()
for index, row in daily_stocks_data.iterrows():
old = row[metric]
try:
new = daily_stocks_data.iloc[index + 1][metric]
except Exception as e:
percents.append(np.nan) ## no next value, so this is undefined
continue
percents.append((new-old)/new)
cp_df = daily_stocks_data.copy()
cp_df[f'{metric}_percent_change']=percents
return cp_df
def add_norm(df, label):
arr = np.array([x*1000 for x in df[label].to_numpy()]).reshape(-1, 1)
# norm = normalize(arr, norm='l1')
norm = arr
new_df = df.copy()
new_df[f'{label}_norm'] = norm
return new_df
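# The helpers below turn the daily series into a supervised learning problem:
# to_ts_df builds one row per sliding window of `lookback` past values plus the
# next value as the target, and to_ts reshapes that frame into arrays of shape
# (samples, lookback, 1) suitable for a Keras recurrent model.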
def to_ts_df(daily_stocks_data, lookback, metric):
## column names
columns = list()
for i in range(lookback):
columns.append(f'{metric}_{i}')
columns.append(f'{metric}_target')
df = pd.DataFrame(columns=columns)
## columns
data = daily_stocks_data[metric].to_numpy()
for index, col in enumerate(df.columns):
df[col] = data[index:len(data)-lookback+index]
## dates index
dates = daily_stocks_data.date.to_numpy()[:-lookback]
df.insert(0, 'date', dates)
return df
def to_ts(ts_df):
data = list()
targets = list()
for index, row in ts_df.iloc[:,1:].iterrows():
rnp = row.to_numpy()
data.append([[x] for x in rnp[:-1]])
targets.append(rnp[-1])
data = np.array(data)
targets = np.array(targets)
return data, targets
ALPHA_API_KEY = get_secret()
daily_stocks_data_raw = pandas_datareader.av.time_series.AVTimeSeriesReader(symbols=tickers, api_key=ALPHA_API_KEY, function='TIME_SERIES_DAILY').read()
daily_stocks_data = format_dates(daily_stocks_data_raw)
daily_stocks_data = add_percent_change(daily_stocks_data, metric)
daily_stocks_data[daily_stocks_data[pc_metric].isnull()] = 0
daily_stocks_data = add_norm(daily_stocks_data, pc_metric)
ts_df = to_ts_df(daily_stocks_data, lookback, pc_metric)
data, targets = to_ts(ts_df)
display(daily_stocks_data)
display(ts_df)
## currently testing to set up mlflow and training jobs.
def deep_lstm():
model = Sequential()
model.add(InputLayer(input_shape=(None,1)))
# model.add(LSTM(12, return_sequences=True))
# model.add(LSTM(12, return_sequences=True))
# model.add(LSTM(6, return_sequences=True))
# model.add(LSTM(6, return_sequences=True))
# model.add(LSTM(2, return_sequences=True))
# model.add(LSTM(1))
model.add(Dense(1))
model.compile(loss='mae', metrics=['mse','mape'])
return model
model = deep_lstm()
model.summary()
# plot_model(model)
early = EarlyStopping(patience=2, restore_best_weights=True)
model.fit(x=data, y=targets, batch_size=36, validation_split=0.2, epochs=1, callbacks=[early])
```
# 04 - Full waveform inversion with Devito and scipy.optimize.minimize
## Introduction
In this tutorial we show how [Devito](http://www.opesci.org/devito-public) can be used with [scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) to solve the FWI gradient based minimization problem described in the previous tutorial.
```python
scipy.optimize.minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)
```
> Minimization of scalar function of one or more variables.
>
> In general, the optimization problems are of the form:
>
> minimize f(x) subject to
>
> g_i(x) >= 0, i = 1,...,m
> h_j(x) = 0, j = 1,...,p
> where x is a vector of one or more variables. g_i(x) are the inequality constraints. h_j(x) are the equality constraints.
[scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) provides a wide variety of methods for solving minimization problems depending on the context. Here we are going to focus on using L-BFGS via [scipy.optimize.minimize(method=’L-BFGS-B’)](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html#optimize-minimize-lbfgsb)
```python
scipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxls': 20, 'iprint': -1, 'gtol': 1e-05, 'eps': 1e-08, 'maxiter': 15000, 'ftol': 2.220446049250313e-09, 'maxcor': 10, 'maxfun': 15000})
```
The argument `fun` is a callable function that returns the misfit between the simulated and the observed data. If `jac` is a Boolean and is `True`, `fun` is assumed to return the gradient along with the objective function - as is our case when applying the adjoint-state method.
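As a minimal sketch of this calling convention (a toy quadratic, unrelated to the seismic problem below), `fun` returns both the objective value and its gradient when `jac=True`:
```
import numpy as np
from scipy import optimize

def value_and_gradient(x):
    # f(x) = sum(x**2), whose gradient is 2*x
    return np.sum(x**2), 2.0 * x

result = optimize.minimize(value_and_gradient, x0=np.array([3.0, -4.0]),
                           method='L-BFGS-B', jac=True)
print(result.x)  # approximately [0, 0]
```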
## Setting up (synthetic) data
We are going to set up the same synthetic test case as in the previous tutorial (refer back for details). The code below is slightly re-engineered to make it suitable for use with scipy.optimize.minimize.
```
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Model, demo_model
import numpy as np
# Define the grid parameters
def get_grid():
shape = (101, 101) # Number of grid point (nx, nz)
spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km
origin = (0., 0.) # Need origin to define relative source and receiver locations
return shape, spacing, origin
# Define the test phantom; in this case we are using a simple circle
# so we can easily see what is going on.
def get_true_model():
shape, spacing, origin = get_grid()
return demo_model('circle-isotropic', vp=3.0, vp_background=2.5,
origin=origin, shape=shape, spacing=spacing, nbpml=40)
# The initial guess for the subsurface model.
def get_initial_model():
shape, spacing, origin = get_grid()
return demo_model('circle-isotropic', vp=2.5, vp_background=2.5,
origin=origin, shape=shape, spacing=spacing, nbpml=40)
from examples.seismic.acoustic import AcousticWaveSolver
from examples.seismic import RickerSource, Receiver
# Inversion crime alert! Here the worker is creating the 'observed' data
# using the real model. For a real case the worker would be reading
# seismic data from disk.
def get_data(param):
""" Returns source and receiver data for a single shot labeled 'shot_id'.
"""
true_model = get_true_model()
dt = true_model.critical_dt # Time step from model grid spacing
# Set up source data and geometry.
nt = int(1 + (param['tn']-param['t0']) / dt) # Discrete time axis length
src = RickerSource(name='src', grid=true_model.grid, f0=param['f0'],
time=np.linspace(param['t0'], param['tn'], nt))
src.coordinates.data[0, :] = [30, param['shot_id']*1000./(param['nshots']-1)]
# Set up receiver data and geometry.
nreceivers = 101 # Number of receiver locations per shot
rec = Receiver(name='rec', grid=true_model.grid, npoint=nreceivers, ntime=nt)
rec.coordinates.data[:, 1] = np.linspace(0, true_model.domain_size[0], num=nreceivers)
rec.coordinates.data[:, 0] = 980. # 20m from the right end
# Set up solver - using model_in so that we have the same dt,
# otherwise we should use pandas to resample the time series data.
solver = AcousticWaveSolver(true_model, src, rec, space_order=4)
# Generate synthetic receiver data from true model
true_d, _, _ = solver.forward(src=src, m=true_model.m)
return src, true_d, nt, solver
```
## Create operators for gradient based inversion
To perform the inversion we are going to use [scipy.optimize.minimize(method=’L-BFGS-B’)](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html#optimize-minimize-lbfgsb).
First we define the functional, ```f```, and gradient, ```g```, operator (i.e. the function ```fun```) for a single shot of data.
```
from devito import Function, clear_cache
# Create FWI gradient kernel for a single shot
def fwi_gradient_i(x, param):
# Need to clear the workers cache.
clear_cache()
# Get the current model and the shot data for this worker.
model0 = get_initial_model()
model0.m.data[:] = x.astype(np.float32).reshape(model0.m.data.shape)
src, rec, nt, solver = get_data(param)
# Create symbols to hold the gradient and the misfit between
# the 'measured' and simulated data.
grad = Function(name="grad", grid=model0.grid)
residual = Receiver(name='rec', grid=model0.grid, ntime=nt, coordinates=rec.coordinates.data)
# Compute simulated data and full forward wavefield u0
d, u0, _ = solver.forward(src=src, m=model0.m, save=True)
# Compute the data misfit (residual) and objective function
residual.data[:] = d.data[:] - rec.data[:]
f = .5*np.linalg.norm(residual.data.flatten())**2
# Compute gradient using the adjoint-state method. Note, this
# backpropagates the data misfit through the model.
solver.gradient(rec=residual, u=u0, m=model0.m, grad=grad)
# return the objective functional and gradient.
return f, np.array(grad.data)
```
Next we define the global functional and gradient function that sums the contributions to f and g for each shot of data.
```
def fwi_gradient(x, param):
# Initialize f and g.
param['shot_id'] = 0
f, g = fwi_gradient_i(x, param)
# Loop through all shots summing f, g.
for i in range(1, param['nshots']):
param['shot_id'] = i
f_i, g_i = fwi_gradient_i(x, param)
f += f_i
g[:] += g_i
# Note the explicit cast; while the forward/adjoint solver only requires float32,
# L-BFGS-B in SciPy expects a flat array in 64-bit floats.
return f, g.flatten().astype(np.float64)
```
## FWI with L-BFGS-B
Equipped with a function to calculate the functional and gradient, we are finally ready to call ```scipy.optimize.minimize```.
```
#NBVAL_SKIP
# Change to the WARNING log level to reduce log output
# as compared to the default DEBUG
from devito import configuration
configuration['log_level'] = 'WARNING'
# Set up a dictionary of inversion parameters.
param = {'t0': 0.,
'tn': 1000., # Simulation lasts 1 second (1000 ms)
'f0': 0.010, # Source peak frequency is 10Hz (0.010 kHz)
'nshots': 9} # Number of shots to create gradient from
# Define bounding box constraints on the solution.
def apply_box_constraint(m):
# Maximum possible 'realistic' velocity is 3.5 km/sec
# Minimum possible 'realistic' velocity is 2 km/sec
return np.clip(m, 1/3.5**2, 1/2**2)
# Many optimization methods in scipy.optimize.minimize accept a callback
# function that can operate on the solution after every iteration. Here
# we use this to apply box constraints and to monitor the true relative
# solution error.
relative_error = []
def fwi_callbacks(x):
# Apply boundary constraint
x.data[:] = apply_box_constraint(x)
# Calculate true relative error
true_x = get_true_model().m.data.flatten()
relative_error.append(np.linalg.norm((x-true_x)/true_x))
# Initialize solution
model0 = get_initial_model()
# Finally, calling the minimizing function. We are limiting the maximum number
# of iterations here to 10 so that it runs quickly for the purpose of the
# tutorial.
from scipy import optimize
result = optimize.minimize(fwi_gradient, model0.m.data.flatten().astype(np.float64),
args=(param, ), method='L-BFGS-B', jac=True,
callback=fwi_callbacks,
options={'maxiter':10, 'disp':True})
# Print out results of optimizer.
print(result)
#NBVAL_SKIP
# Show what the update does to the model
from examples.seismic import plot_image, plot_velocity
model0.m.data[:] = result.x.astype(np.float32).reshape(model0.m.data.shape)
model0.vp = np.sqrt(1. / model0.m.data[40:-40, 40:-40])
plot_velocity(model0)
#NBVAL_SKIP
# Plot percentage error
plot_image(100*np.abs(model0.vp-get_true_model().vp.data)/get_true_model().vp.data, cmap="hot")
```
While we are resolving the circle at the centre of the domain there are also lots of artifacts throughout the domain.
```
#NBVAL_SKIP
import matplotlib.pyplot as plt
# Plot objective function decrease
plt.figure()
plt.loglog(relative_error)
plt.xlabel('Iteration number')
plt.ylabel('True relative error')
plt.title('Convergence')
plt.show()
```
<sup>This notebook is part of the tutorial "Optimised Symbolic Finite Difference Computation with Devito" presented at the Intel® HPC Developer Conference 2017.</sup>
```
import numpy as np
import collections
import random
import tensorflow as tf
def build_dataset(words, n_words):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
count.extend(collections.Counter(words).most_common(n_words - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK=3):
X = np.zeros((len(corpus),maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen][::-1]):
val = dic[k] if k in dic else UNK
X[i,-1 - no]= val
return X
def load_data(filepath):
x1=[]
x2=[]
y=[]
for line in open(filepath):
l=line.strip().split("\t")
if len(l)<2:
continue
if random.random() > 0.5:
x1.append(l[0].lower())
x2.append(l[1].lower())
else:
x1.append(l[1].lower())
x2.append(l[0].lower())
y.append(int(l[2]))
return np.array(x1),np.array(x2),np.array(y)
X1_text, X2_text, Y = load_data('train_snli.txt')
concat = (' '.join(X1_text.tolist() + X2_text.tolist())).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
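# The helpers below implement the "batch all" triplet loss used to train the twin
# networks: _pairwise_distances builds a (batch x batch) Euclidean distance matrix
# between left and right embeddings, the *_mask helpers keep only valid
# (anchor, positive, negative) index combinations given the labels, and
# batch_all_triplet_loss averages the margin-based loss over the positive triplets.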
def _pairwise_distances(embeddings_left, embeddings_right, squared=False):
dot_product = tf.matmul(embeddings_left,
tf.transpose(embeddings_right))
square_norm = tf.diag_part(dot_product)
distances = tf.expand_dims(square_norm, 1) - 2.0 * dot_product + tf.expand_dims(square_norm, 0)
distances = tf.maximum(distances, 0.0)
if not squared:
mask = tf.to_float(tf.equal(distances, 0.0))
distances = distances + mask * 1e-16
distances = tf.sqrt(distances)
distances = distances * (1.0 - mask)
return distances
def _get_anchor_positive_triplet_mask(labels):
indices_equal = tf.cast(tf.eye(tf.shape(labels)[0]), tf.bool)
indices_not_equal = tf.logical_not(indices_equal)
labels_equal = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1))
mask = tf.logical_and(indices_not_equal, labels_equal)
return mask
def _get_anchor_negative_triplet_mask(labels):
labels_equal = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1))
mask = tf.logical_not(labels_equal)
return mask
def _get_triplet_mask(labels):
indices_equal = tf.cast(tf.eye(tf.shape(labels)[0]), tf.bool)
indices_not_equal = tf.logical_not(indices_equal)
i_not_equal_j = tf.expand_dims(indices_not_equal, 2)
i_not_equal_k = tf.expand_dims(indices_not_equal, 1)
j_not_equal_k = tf.expand_dims(indices_not_equal, 0)
distinct_indices = tf.logical_and(tf.logical_and(i_not_equal_j, i_not_equal_k), j_not_equal_k)
label_equal = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1))
i_equal_j = tf.expand_dims(label_equal, 2)
i_equal_k = tf.expand_dims(label_equal, 1)
valid_labels = tf.logical_and(i_equal_j, tf.logical_not(i_equal_k))
mask = tf.logical_and(distinct_indices, valid_labels)
return mask
def batch_all_triplet_loss(labels, embeddings_left, embeddings_right, margin, squared=False):
pairwise_dist = _pairwise_distances(embeddings_left, embeddings_right, squared=squared)
anchor_positive_dist = tf.expand_dims(pairwise_dist, 2)
assert anchor_positive_dist.shape[2] == 1, "{}".format(anchor_positive_dist.shape)
anchor_negative_dist = tf.expand_dims(pairwise_dist, 1)
assert anchor_negative_dist.shape[1] == 1, "{}".format(anchor_negative_dist.shape)
triplet_loss = anchor_positive_dist - anchor_negative_dist + margin
mask = _get_triplet_mask(labels)
mask = tf.to_float(mask)
triplet_loss = tf.multiply(mask, triplet_loss)
triplet_loss = tf.maximum(triplet_loss, 0.0)
valid_triplets = tf.to_float(tf.greater(triplet_loss, 1e-16))
num_positive_triplets = tf.reduce_sum(valid_triplets)
num_valid_triplets = tf.reduce_sum(mask)
fraction_positive_triplets = num_positive_triplets / (num_valid_triplets + 1e-16)
triplet_loss = tf.reduce_sum(triplet_loss) / (num_positive_triplets + 1e-16)
return triplet_loss, fraction_positive_triplets
class Model:
def __init__(self, size_layer, num_layers, embedded_size,
dict_size, learning_rate, dimension_output):
def cells(reuse=False):
return tf.nn.rnn_cell.LSTMCell(size_layer,
initializer=tf.orthogonal_initializer(),reuse=reuse)
def rnn(inputs, reuse=False):
with tf.variable_scope('model', reuse = reuse):
rnn_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
outputs, _ = tf.nn.dynamic_rnn(rnn_cells, inputs, dtype = tf.float32)
return tf.layers.dense(outputs[:,-1], dimension_output)
self.X_left = tf.placeholder(tf.int32, [None, None])
self.X_right = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None])
self.batch_size = tf.shape(self.X_left)[0]
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
embedded_left = tf.nn.embedding_lookup(encoder_embeddings, self.X_left)
embedded_right = tf.nn.embedding_lookup(encoder_embeddings, self.X_right)
self.output_left = rnn(embedded_left, False)
self.output_right = rnn(embedded_right, True)
self.cost, fraction = batch_all_triplet_loss(self.Y, self.output_left,
self.output_right, margin=0.5, squared=False)
self.distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(self.output_left,self.output_right)),1,keep_dims=True))
self.distance = tf.div(self.distance, tf.add(tf.sqrt(tf.reduce_sum(tf.square(self.output_left),1,keep_dims=True)),
tf.sqrt(tf.reduce_sum(tf.square(self.output_right),1,keep_dims=True))))
self.distance = tf.reshape(self.distance, [-1])
self.temp_sim = tf.subtract(tf.ones_like(self.distance),
tf.rint(self.distance))
correct_predictions = tf.equal(self.temp_sim, self.Y)
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"))
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
size_layer = 256
num_layers = 2
embedded_size = 128
learning_rate = 1e-3
dimension_output = 300
maxlen = 50
batch_size = 128
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,len(dictionary),
learning_rate,dimension_output)
sess.run(tf.global_variables_initializer())
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
vectors_left = str_idx(X1_text, dictionary, maxlen)
vectors_right = str_idx(X2_text, dictionary, maxlen)
train_X_left, test_X_left, train_X_right, test_X_right, train_Y, test_Y = train_test_split(vectors_left,
vectors_right,
Y,
test_size = 0.2)
from tqdm import tqdm
import time
for EPOCH in range(5):
lasttime = time.time()
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(range(0, len(train_X_left), batch_size), desc='train minibatch loop')
for i in pbar:
batch_x_left = train_X_left[i:min(i+batch_size,train_X_left.shape[0])]
batch_x_right = train_X_right[i:min(i+batch_size,train_X_left.shape[0])]
batch_y = train_Y[i:min(i+batch_size,train_X_left.shape[0])]
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y})
assert not np.isnan(loss)
train_loss += loss
train_acc += acc
pbar.set_postfix(cost = loss, accuracy = acc)
pbar = tqdm(range(0, len(test_X_left), batch_size), desc='test minibatch loop')
for i in pbar:
batch_x_left = test_X_left[i:min(i+batch_size,test_X_left.shape[0])]
batch_x_right = test_X_right[i:min(i+batch_size,test_X_left.shape[0])]
batch_y = test_Y[i:min(i+batch_size,test_X_left.shape[0])]
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y})
test_loss += loss
test_acc += acc
pbar.set_postfix(cost = loss, accuracy = acc)
train_loss /= (len(train_X_left) / batch_size)
train_acc /= (len(train_X_left) / batch_size)
test_loss /= (len(test_X_left) / batch_size)
test_acc /= (len(test_X_left) / batch_size)
print('time taken:', time.time()-lasttime)
print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss,
train_acc,test_loss,
test_acc))
left = str_idx(['a person is outdoors, on a horse.'], dictionary, maxlen)
right = str_idx(['a person on a horse jumps over a broken down airplane.'], dictionary, maxlen)
sess.run([model.temp_sim,1-model.distance], feed_dict = {model.X_left : left,
model.X_right: right})
left = str_idx(['i love you'], dictionary, maxlen)
right = str_idx(['you love i'], dictionary, maxlen)
sess.run([model.temp_sim,1-model.distance], feed_dict = {model.X_left : left,
model.X_right: right})
```
# End-to-End Machine Learning Project
In this chapter you will work through an example project end to end, pretending to be a recently hired data scientist at a real estate company. Here are the main steps you will go through:
1. Look at the big picture
2. Get the data
3. Discover and visualize the data to gain insights.
4. Prepare the data for Machine learning algorithms.
5. Select a model and train it
6. Fine-tune your model.
7. Present your solution
8. Launch, monitor, and maintain your system.
## Working with Real Data
When you are learning about Machine Learning, it is best to experiment with real-world data, not artificial datasets.
Fortunately, there are thousands of open datasets to choose from, ranging across all sorts of domains. Here are a few places you can look to get data:
* Popular open data repositories:
- [UC Irvine Machine Learning Repository](http://archive.ics.uci.edu/ml/)
- [Kaggle](https://www.kaggle.com/datasets) datasets
- Amazon's [AWS](https://registry.opendata.aws/) datasets
* Meta Portals:
- [Data Portals](http://dataportals.org/)
- [OpenDataMonitor](http://opendatamonitor.eu/)
- [Quandl](http://quandl.com)
## Frame the Problem
The problem is that your model's output (a prediction of a district's median housing price) will be fed to another ML system along with many other signals. This downstream system will determine whether it is worth investing in a given area or not. Getting this right is critical, as it directly affects revenue.
```
Other Signals
|
Upstream Components --> (District Data) --> [District Pricing prediction model](your component) --> (District prices) --> [Investment Analysis] --> Investments
```
### Pipelines
A sequence of data processing components is called a **data pipeline**. Pipelines are very common in Machine Learning systems, since a lot of data needs to be manipulated into the numerical form that the algorithms can work with.
## Download the Data:
You could use your web browser to download the data, but it is preferable to write a function that does it for you.
```
import os
import tarfile
import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
"""
Function to download the housing_data
"""
os.makedirs(housing_path, exist_ok=True)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
import pandas as pd
import numpy as np
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
fetch_housing_data()
housing = load_housing_data()
```
## Take a quick look at the Data Structure
Each row represents one district. There are 10 attributes:
```
longitude, latitude, housing_median_age, total_rooms, total_bedrooms, population, households, median_income, median_house_value, ocean_proximity
```
The `info()` method is useful to give a quick description of the data.
```
housing.head()
housing.info()
housing["ocean_proximity"].value_counts()
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20, 15))
plt.show();
```
> 🔑 **Note:** The `hist()` method relies on Matplotlib, which in turn relies on a user-specified graphical backend to draw on your screen. The simplest option is to use Jupyter's magic command `%matplotlib inline`. This tells Jupyter to set up Matplotlib so that it uses Jupyter's own backend. Note that calling `show()` is optional, as Jupyter renders the plot automatically.
#### There are a few things you might notice in these histograms:
1. First, the median income attribute does not look like it is expressed in US dollars (USD). The data has been scaled and capped at 15 for higher median incomes and at 0.5 for lower median incomes. The numbers represent roughly tens of thousands of dollars (e.g., 3 actually means about $30,000). Working with preprocessed attributes is common in Machine Learning and it is not necessarily a problem, but you should try to understand how the data was computed.
2. The housing median age and the median house value were also capped.
3. These attributes have very different scales.
4. Many histograms of this dataset are *tail-heavy*, i.e., they extend much farther to the right of the median than to the left. This may make it a bit harder for Machine Learning algorithms to detect patterns. We will try transforming these attributes later on to have more bell-shaped distributions.
> ‼️ **Note:** Wait! Before you look at the data any further, you need to create a test set, put it aside and never look at it.
## Create a Test Set
Scikit-learn provides a few functions to split datasets into multiple subsets in various ways:
1. The `train_test_split()` function is the simplest and most used function from scikit-learn for this purpose.
2. For stratified sampling, `StratifiedShuffleSplit()` would be useful.
3. And probably so many more functions...
```
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
train_set.shape, test_set.shape
from sklearn.model_selection import StratifiedShuffleSplit
housing["income_cat"] = pd.cut(housing["median_income"],
bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
labels=[1, 2, 3, 4, 5])
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_i, test_i in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_i]
strat_test_set = housing.loc[test_i]
strat_train_set.shape
# Now remove the income_cat attribute so the data is back to its original state
for _ in (strat_train_set, strat_test_set):
_.drop("income_cat", axis=1, inplace=True)
```
## Discover and Visualize the Data to Gain More Insights
So far you have only taken a quick glance at the data to get a general understanding of the kind of data you are manipulating. Now the goal is to go into a little more depth.
First, make sure you have put the test set aside and you are only exploring the training set. In our case the set is quite small, so you can work directly on the full set. Let's create a copy so that you can play with it without harming the training set:
```
housing = strat_train_set.copy()
```
### Visualizing Geographical Data
Since there is geographical information (latitude and longitude), it is a good idea to create a scatterplot of all districts to visualize the data.
```
housing.plot(kind="scatter", x="longitude", y="latitude");
# Setting the alpha option to 0.1 makes it easier to visualize the places where there is a high density of data points.
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1);
```
Now from the above graph, we can clearly see the high-density areas. Our brains are very good at spotting patterns in pictures, but you may need to play around with visualization parameters to make the patterns stand out.
Now let's look at the housing prices. The radius of each circle represents the district's population (option `s`), and the color represents the price (option `c`). We will use a predefined color map (option `cmap`) called `jet`, which ranges from blue (low values) to red (high values):
```
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
s=housing["population"]/100, label="population", figsize=(10, 7),
c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True)
plt.legend();
```
### Looking for Correlations
Since the dataset is not too large, you can easily compute the *standard correlation coefficient* (also known as *Pearson's r*) between every pair of attributes using the `corr()` method:
```
corr_matrix = housing.corr()
# Now let's look at how much each attribute correlates with the median house value
corr_matrix["median_house_value"].sort_values(ascending=False)
```
#### The Standard Correlation Coefficient
The correlation coefficient ranges from -1 to 1. When it is close to 1, there is a strong positive correlation; when it is close to -1, there is a strong negative correlation. Finally, coefficients close to 0 mean that there is no linear correlation.
<img src="Fig..png" alt="Standard correlation coefficients of various datasets"/>
> 🔑 **Note:** The correlation coefficient only measures linear correlations ("if x goes up, then y generally goes up/down"). It may completely miss nonlinear relationships (e.g., "if x is close to 0, then y generally goes up"). Note how all the plots of the bottom row have a correlation coefficient equal to 0, despite the fact that their axes are clearly not independent: these examples are nonlinearly correlated.
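As a quick sanity check of that last point, here is a minimal sketch (using only NumPy, which is already imported above) showing that a fully deterministic but nonlinear relationship can still have a Pearson correlation of roughly zero:
```
x_demo = np.linspace(-1, 1, 1001)
y_demo = x_demo ** 2                  # y is completely determined by x, but not linearly
np.corrcoef(x_demo, y_demo)[0, 1]     # approximately 0
```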
Another way to check for correlation between attributes is to use the pandas `scatter_matrix()` function, which plots every numerical attribute against every other numerical attribute. Since there are 11 numerical attributes, you would get 11^2 = 121 plots, which is too large to fit on a page. So let's just focus on a few promising attributes that seem most correlated with the median housing value:
```
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 12));
# The most promising attribute to predict the median house value is the median income
housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=.1);
```
This plot reveals a few things:
1. The correlation is indeed very strong as you can see clearly the upward trend, and the points are not too dispersed.
2. The price cap that we noticed earlier is clearly visible as a horizontal line at $500,000. There are a few more less-obvious lines that you may want to remove to prevent your algorithms from learning to reproduce these data quirks.
## Experimenting with Attribute Combinations
Till now, you identified a few data quirks that you may want to clean up before feeding the data to the Machine Learning algorithms, and you found out interesting correlations between attributes.
One last thing you may want to do before preparing the data for Machine learning algorithms, is to try out various attribute combinations.
For Example, the total number of rooms in a district is not very useful if you don't know how many households there are. What you really want is the number of rooms per household... and so on. Let's create these new attributes:
```
housing["rooms_per_household"] = housing["total_rooms"]/housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"] = housing["population"]/housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
```
Hey, not bad! The new attributes have some more correlation
## Prepare the Data for Machine Learning Algorithms
It's time to prepare the data for your Machine Learning algorithm. Instead of doing this manually, you should write functions for this purpose, for several good reasons:
- This will allow you to reproduce these transformations easily on any dataset (e.g., the next time you get a fresh dataset).
- You will gradually build a library of transformation functions that you can reuse in your future projects.
- You can use these functions in your live system to transform the new data before feeding it to your algorithms.
- This will make it possible for you to easily try various transformations and see what works best.
```
# Let's revert to a clean training set
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
```
### Data Cleaning
Most Machine Learning algorithms cannot work with missing features, so let's create a few functions to take care of them. We saw earlier that the `total_bedrooms` attribute has some missing values, so let's fix this. You have three options to do so:
1. Get rid of the corresponding districts.
2. Get rid of the whole attribute.
3. Set the values to some value (zero, the mean, the median, the mode, etc.)
You can accomplish these easily using DataFrame's `dropna()`, `drop()`, `fillna()` methods:
```
# housing.dropna(subset=["total_bedrooms"])
# housing.drop("total_bedrooms", axis=1)
# median = housing["total_bedrooms"].median()
# housing["total_bedrooms"].fillna(median, inplace=True)
```
But we'll be using Scikit-Learn instead.
Scikit-Learn provides a handy class to take care of the missing values: `SimpleImputer`.
```
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
# Since the median can be computed only on numerical attributes, drop the ocean_proximity attribute which is a String
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
X = imputer.transform(housing_num)
# The result is a plain numpy array, converting into a dataframe
housing_tr = pd.DataFrame(X, columns=housing_num.columns, index=housing_num.index)
imputer.statistics_
housing_tr.info()
```
### Handling Text and Categorical Attributes
So far we have only dealt with numerical attributes, but now let's look at text attributes. In this dataset there is just one: the `ocean_proximity` attribute. Let's look at its values for the first 10 instances:
```
# First 10 instances
housing_cat = housing[["ocean_proximity"]]
housing_cat.head(10)
housing["ocean_proximity"].value_counts()
# It's not arbitrary text; it is a categorical attribute.
# One hot encoding the data
from sklearn.preprocessing import OneHotEncoder
cat_enc = OneHotEncoder()
housing_cat_one_hot = cat_enc.fit_transform(housing_cat)
housing_cat_one_hot
housing_cat_one_hot.toarray()
cat_enc.categories_
```
### Custom Transformers
Although Scikit-Learn provides many useful transformers, you will need to write your own for tasks such as custom cleanup operations or combining specific attributes. You will want your transformer to work seamlessly with Scikit-Learn functionalities (such as pipelines). All you need to do is create a class and implement three methods: `fit()`, `transform()`, and `fit_transform()`.
```
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributeAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room=True):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributeAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
```
### Feature Scaling
One of the most important transformations you need to apply to your data is *feature scaling*. With a few exceptions, Machine Learning algorithms don't perform well when the input numerical attributes have very different scales. There are two common ways to get all the attributes to have the same scale: *min-max scaling* and *standardization*.
Min-max scaling (also known as *normalization*) is the simplest: the values are shifted and rescaled so that they end up ranging from 0 to 1.
Standardization subtracts the mean and then divides by the standard deviation, so the resulting distribution has zero mean and unit variance.
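As a small illustration (a minimal sketch assuming scikit-learn's preprocessing scalers, applied here to the imputed `housing_tr` dataframe created above):
```
from sklearn.preprocessing import MinMaxScaler, StandardScaler
# Min-max scaling: rescale each attribute to the 0-1 range
housing_minmax = MinMaxScaler().fit_transform(housing_tr)
# Standardization: subtract the mean and divide by the standard deviation
housing_standard = StandardScaler().fit_transform(housing_tr)
```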
### Transformation Pipelines
As you can see, there are many data transformation steps that need to be executed in an order. Fortunately, Scikit-Learn provides the `Pipeline` class to help with sequences of transformations. Here is a small pipeline for the numerical attributes:
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attribs_adder', CombinedAttributeAdder()),
('std_scaler', StandardScaler())
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
housing
from sklearn.compose import ColumnTransformer
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_attribs),
("cat", OneHotEncoder(), cat_attribs),
])
housing_prepared = full_pipeline.fit_transform(housing)
```
## Select and Train a Model
At last! 😃 You framed the problem, you got your data and explored it, you sampled a training set and a test set, and you wrote transformation pipelines to clean up and prepare your data for Machine Learning algorithms automatically. You are now ready to select and train a Machine Learning model. 💗
### Training Machine Learning Models on the training set and evaluating on the Same
The following experiments will be implemented:
1. Linear Regression Model
2. Decision Tree Regression Model
3. Random Forest Regression Model
```
# 1. Linear Regression model
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
from sklearn.metrics import mean_squared_error
lin_reg_predictions = lin_reg.predict(housing_prepared)
lin_reg_predictions[:10]
lin_reg_results = np.sqrt(mean_squared_error(housing_labels, lin_reg_predictions))
lin_reg_results
# 2. Decision Tree Regression Model
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
tree_reg_predictions = tree_reg.predict(housing_prepared)
tree_reg_predictions[:10]
tree_reg_results = np.sqrt(mean_squared_error(housing_labels, tree_reg_predictions))
tree_reg_results
# 3. Random Forest Regressor
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_reg_predictions = forest_reg.predict(housing_prepared)
forest_reg_predictions[:10]
forest_reg_results = np.sqrt(mean_squared_error(housing_labels, forest_reg_predictions))
forest_reg_results
```
### Better Evaluation using Cross-Validation
A great feature of Scikit-Learn is its *K-fold cross-validation*. The following code randomly splits the training set into 10 distinct subsets called folds, then it trains and evaluates the Decision Tree model 10 times, picking a different fold for evaluation each time and training on the other 9 folds. The result is an array containing the 10 evaluation scores.
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)
tree_rmse_scores.mean()
```
> 🔑 **Note:** Scikit-Learn's cross-validation features expect a utility function (greater is better) rather than a cost function (lower is better), so the scoring function is actually the opposite of the MSE (i.e., a negative value), which is why the preceding code computes `-scores` before calculating the square root.
```
# Function to display the scores of any model
from sklearn.model_selection import cross_val_score
def display_scores(model):
scores = cross_val_score(model, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
print(f"Scores: {rmse_scores}")
print(f"Scores: {rmse_scores.mean()}")
print(f"Standard deviation: {rmse_scores.std()}")
display_scores(lin_reg)
display_scores(tree_reg)
display_scores(forest_reg)
```
```
from utils import *
import numpy as np
import sklearn.datasets
from sklearn import metrics
import tensorflow as tf
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
import time
trainset = sklearn.datasets.load_files(container_path = 'data', encoding = 'UTF-8')
trainset.data, trainset.target = separate_dataset(trainset,1.0)
print (trainset.target_names)
print (len(trainset.data))
print (len(trainset.target))
ONEHOT = np.zeros((len(trainset.data),len(trainset.target_names)))
ONEHOT[np.arange(len(trainset.data)),trainset.target] = 1.0
train_X, test_X, train_Y, test_Y, train_onehot, test_onehot = train_test_split(trainset.data,
trainset.target,
ONEHOT, test_size = 0.2)
concat = ' '.join(trainset.data).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
GO = dictionary['GO']
PAD = dictionary['PAD']
EOS = dictionary['EOS']
UNK = dictionary['UNK']
class Model:
def __init__(self, size_layer, num_layers, embedded_size,
dict_size, dimension_output, learning_rate):
def cells(reuse=False):
return tf.nn.rnn_cell.LSTMCell(size_layer,initializer=tf.orthogonal_initializer(),reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None, dimension_output])
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units = size_layer,
memory = encoder_embedded)
bahdanau_cells = tf.contrib.seq2seq.AttentionWrapper(cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer)
attention_mechanism = tf.contrib.seq2seq.LuongAttention(num_units = size_layer,
memory = encoder_embedded)
luong_cells = tf.contrib.seq2seq.AttentionWrapper(cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer)
rnn_cells = tf.nn.rnn_cell.MultiRNNCell([bahdanau_cells,luong_cells])
outputs, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded, dtype = tf.float32)
W = tf.get_variable('w',shape=(size_layer, dimension_output),initializer=tf.orthogonal_initializer())
b = tf.get_variable('b',shape=(dimension_output),initializer=tf.zeros_initializer())
self.logits = tf.matmul(outputs[:, -1], W) + b
self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = self.logits, labels = self.Y))
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
correct_pred = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.Y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 128
num_layers = 2
embedded_size = 128
dimension_output = len(trainset.target_names)
learning_rate = 1e-3
maxlen = 50
batch_size = 128
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,vocabulary_size+4,dimension_output,learning_rate)
sess.run(tf.global_variables_initializer())
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 5, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n'%(EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
for i in range(0, (len(train_X) // batch_size) * batch_size, batch_size):
batch_x = str_idx(train_X[i:i+batch_size],dictionary,maxlen)
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict = {model.X : batch_x, model.Y : train_onehot[i:i+batch_size]})
train_loss += loss
train_acc += acc
for i in range(0, (len(test_X) // batch_size) * batch_size, batch_size):
batch_x = str_idx(test_X[i:i+batch_size],dictionary,maxlen)
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict = {model.X : batch_x, model.Y : test_onehot[i:i+batch_size]})
test_loss += loss
test_acc += acc
train_loss /= (len(train_X) // batch_size)
train_acc /= (len(train_X) // batch_size)
test_loss /= (len(test_X) // batch_size)
test_acc /= (len(test_X) // batch_size)
if test_acc > CURRENT_ACC:
print('epoch: %d, pass acc: %f, current acc: %f'%(EPOCH,CURRENT_ACC, test_acc))
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time()-lasttime)
print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss,
train_acc,test_loss,
test_acc))
EPOCH += 1
logits = sess.run(model.logits, feed_dict={model.X:str_idx(test_X,dictionary,maxlen)})
print(metrics.classification_report(test_Y, np.argmax(logits,1), target_names = trainset.target_names))
```
### How To Break Into the Field
Now you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import HowToBreakIntoTheField as t
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
schema = pd.read_csv('./survey_results_schema.csv')
df.head()
```
#### Question 1
**1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column.
```
def get_description(column_name, schema=schema):
'''
INPUT - schema - pandas dataframe with the schema of the developers survey
column_name - string - the name of the column you would like to know about
OUTPUT -
desc - string - the description of the column
'''
desc = list(schema[schema['Column'] == column_name]['Question'])[0]
return desc
#test your code
#Check your function against solution - you shouldn't need to change any of the below code
get_description(df.columns[0]) # This should return a string of the first column description
#Check your function against solution - you shouldn't need to change any of the below code
descrips = set(get_description(col) for col in df.columns)
t.check_description(descrips)
```
The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column.
```
get_description('CousinEducation')
```
#### Question 2
**2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up!
```
cous_ed_vals = df.CousinEducation.value_counts()#Provide a pandas series of the counts for each CousinEducation status
cous_ed_vals # assure this looks right
# The below should be a bar chart of the proportion of individuals in your ed_vals
# if it is set up correctly.
(cous_ed_vals/df.shape[0]).plot(kind="bar");
plt.title("Formal Education");
```
We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned.
```
possible_vals = ["Take online courses", "Buy books and work through the exercises",
"None of these", "Part-time/evening courses", "Return to college",
"Contribute to open source", "Conferences/meet-ups", "Bootcamp",
"Get a job as a QA tester", "Participate in online coding competitions",
"Master's degree", "Participate in hackathons", "Other"]
def clean_and_plot(df, title='Method of Educating Suggested', plot=True):
'''
INPUT
df - a dataframe holding the CousinEducation column
title - string the title of your plot
axis - axis object
plot - bool providing whether or not you want a plot back
OUTPUT
study_df - a dataframe with the count of how many individuals
Displays a plot of pretty things related to the CousinEducation column.
'''
study = df['CousinEducation'].value_counts().reset_index()
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df.set_index('method', inplace=True)
if plot:
(study_df/study_df.sum()).plot(kind='bar', legend=None);
plt.title(title);
plt.show()
props_study_df = study_df/study_df.sum()
return props_study_df
props_df = clean_and_plot(df)
```
#### Question 4
**4.** I wonder if some of the individuals might have bias towards their own degrees. Complete the function below that will apply to the elements of the **FormalEducation** column in **df**.
```
def higher_ed(formal_ed_str):
'''
INPUT
formal_ed_str - a string of one of the values from the Formal Education column
OUTPUT
return 1 if the string is in ("Master's degree", "Doctoral", "Professional degree")
return 0 otherwise
'''
if formal_ed_str in ("Master's degree", "Doctoral", "Professional degree"):
return 1
else:
return 0
df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df
# Check your code here
df['HigherEd'] = df["FormalEducation"].apply(higher_ed)
higher_ed_perc = df['HigherEd'].mean()
t.higher_ed_test(higher_ed_perc)
```
#### Question 5
**5.** Now we would like to find out whether individuals who completed one of these three programs feel differently than those who did not. Store a dataframe of only the individuals who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the **HigherEd** equal to 0 values in **ed_0**.
Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column.
```
ed_1 = df[df['HigherEd'] == 1] # Subset df to only those with HigherEd of 1
ed_0 = df[df['HigherEd'] == 0] # Subset df to only those with HigherEd of 0
print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect
print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect
#Check your subset is correct - you should get a plot that was created using pandas styling
#which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html
ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False)
ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False)
comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True)
comp_df.columns = ['ed_1_perc', 'ed_0_perc']
comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc']
comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d'])
```
#### Question 6
**6.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude.
```
sol = {'Everyone should get a higher level of formal education': False,
'Regardless of formal education, online courses are the top suggested form of education': True,
'There is less than a 1% difference between suggestions of the two groups for all forms of education': False,
'Those with higher formal education suggest it more than those who do not have it': True}
t.conclusions(sol)
```
This concludes another look at the way we could compare education methods by those currently writing code in industry.
# Reading and writing LAS files
This notebook goes with [the Agile blog post](https://agilescientific.com/blog/2017/10/23/x-lines-of-python-load-curves-from-las) of 23 October.
Set up a `conda` environment with:
conda create -n welly python=3.6 matplotlib=2.0 scipy pandas
You'll need `welly` in your environment:
conda install tqdm # Should happen automatically but doesn't
pip install welly
This will also install the latest versions of `striplog` and `lasio`.
```
import welly
ls ../data/*.LAS
```
### 1. Load the LAS file with `lasio`
```
import lasio
l = lasio.read('../data/P-129.LAS') # Line 1.
```
That's it! But the object itself doesn't tell us much — it's really just a container:
```
l
```
### 2. Look at the WELL section of the header
```
l.header['Well'] # Line 2.
```
### 3. Look at the curve data
The curves are all present in one big NumPy array:
```
l.data
```
Or we can go after a single curve object:
```
l.curves.GR # Line 3.
```
And there's a shortcut to its data:
```
l['GR'] # Line 4.
```
...so it's easy to make a plot against depth:
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(15,3))
plt.plot(l['DEPT'], l['GR'])
plt.show()
```
### 4. Inspect the curves as a `pandas` dataframe
```
l.df().head() # Line 5.
```
### 5. Load the LAS file with `welly`
```
from welly import Well
w = Well.from_las('../data/P-129.LAS') # Line 6.
```
`welly` Wells know how to display some basics:
```
w
```
And the `Well` object also has `lasio`'s access to a pandas DataFrame:
```
w.df().head()
```
### 6. Look at `welly`'s Curve object
Like the `Well`, a `Curve` object can report a bit about itself:
```
gr = w.data['GR'] # Line 7.
gr
```
One important thing about Curves is that each one knows its own depths — they are stored as a property called `basis`. (It's not actually stored, but computed on demand from the start depth, the sample interval (which must be constant for the whole curve) and the number of samples in the object.)
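To make that concrete, here is a minimal sketch (NumPy only) of how such a basis can be reconstructed from a start depth, a constant step and a sample count. The function name is purely illustrative, not part of welly's API; the numbers are taken from the STRT/STEP values in the LAS header shown further below:
```
import numpy as np

def make_basis(start, step, n_samples):
    # Evenly spaced depths: start, start + step, ..., start + (n_samples - 1) * step.
    return start + step * np.arange(n_samples)

make_basis(start=1.0668, step=0.1524, n_samples=5)
```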
```
gr.basis
```
### 7. Plot part of a curve
We'll grab the interval from 300 m to 1000 m and plot it.
```
gr.to_basis(start=300, stop=1000).plot() # Line 8.
```
### 8. Smooth a curve
Curve objects are, fundamentally, NumPy arrays. But they have some extra tricks. We've already seen `Curve.plot()`.
Using the `Curve.smooth()` method, we can easily smooth a curve, eg by 15 m (passing `samples=True` would smooth by 15 samples):
```
sm = gr.smooth(window_length=15, samples=False) # Line 9.
sm.plot()
```
### 9. Export a set of curves as a matrix
You can get at all the data through the lasio `l.data` object:
```
print("Data shape: {}".format(w.las.data.shape))
w.las.data
```
But we might want to do some other things, such as specify which curves you want (optionally using aliases like GR1, GRC, NGC, etc for GR), resample the data, or specify a start and stop depth — `welly` can do all this stuff. This method is also wrapped by `Project.data_as_matrix()` which is nice because it ensures that all the wells are exported at the same sample interval.
Here are the curves in this well:
```
w.data.keys()
keys=['CALI', 'DT', 'DTS', 'RHOB', 'SP']
w.plot(tracks=['TVD']+keys)
X, basis = w.data_as_matrix(keys=keys, start=275, stop=1850, step=0.5, return_basis=True)
w.data['CALI'].shape
```
So CALI had 12,718 points in it... since we downsampled to 0.5 m and removed the top and tail, we should have substantially fewer points:
```
X.shape
plt.figure(figsize=(15,3))
plt.plot(X.T[0])
plt.show()
```
### 10+. BONUS: fix the lat, lon
OK, we're definitely going to go over our budget on this one.
Did you notice that the location of the well did not get loaded properly?
```
w.location
```
Let's look at some of the header:
# LAS format log file from PETREL
# Project units are specified as depth units
#==================================================================
~Version information
VERS. 2.0:
WRAP. YES:
#==================================================================
~WELL INFORMATION
#MNEM.UNIT DATA DESCRIPTION
#---- ------ -------------- -----------------------------
STRT .M 1.0668 :START DEPTH
STOP .M 1939.13760 :STOP DEPTH
STEP .M 0.15240 :STEP
NULL . -999.25 :NULL VALUE
COMP . Elmworth Energy Corporation :COMPANY
WELL . Kennetcook #2 :WELL
FLD . Windsor Block :FIELD
LOC . Lat = 45* 12' 34.237" N :LOCATION
PROV . Nova Scotia :PROVINCE
UWI. Long = 63* 45'24.460 W :UNIQUE WELL ID
LIC . P-129 :LICENSE NUMBER
CTRY . CA :COUNTRY (WWW code)
DATE. 10-Oct-2007 :LOG DATE {DD-MMM-YYYY}
SRVC . Schlumberger :SERVICE COMPANY
LATI .DEG :LATITUDE
LONG .DEG :LONGITUDE
GDAT . :GeoDetic Datum
SECT . 45.20 Deg N :Section
RANG . PD 176 :Range
TOWN . 63.75 Deg W :Township
Look at **LOC** and **UWI**. There are two problems:
1. These items are in the wrong place. (Notice **LATI** and **LONG** are empty.)
2. The items are malformed, with lots of extraneous characters.
We can fix this in two steps:
1. Remap the header items to fix the first problem.
2. Parse the items to fix the second one.
We'll define these in reverse because the remapping uses the transforming function.
```
import re
def transform_ll(text):
"""
Parses malformed lat and lon so they load properly.
"""
def callback(match):
d = match.group(1).strip()
m = match.group(2).strip()
s = match.group(3).strip()
c = match.group(4).strip()
if c.lower() in ('w', 's') and d[0] != '-':
d = '-' + d
return ' '.join([d, m, s])
pattern = re.compile(r""".+?([-0-9]+?).? ?([0-9]+?).? ?([\.0-9]+?).? +?([NESW])""", re.I)
text = pattern.sub(callback, text)
return welly.utils.dms2dd([float(i) for i in text.split()])
```
Make sure that works!
```
print(transform_ll("""Lat = 45* 12' 34.237" N"""))
remap = {
'LATI': 'LOC', # Use LOC for the parameter LATI.
'LONG': 'UWI', # Use UWI for the parameter LONG.
'LOC': None, # Use nothing for the parameter LOC.
'SECT': None, # Use nothing for the parameter SECT.
'RANG': None, # Use nothing for the parameter RANG.
'TOWN': None, # Use nothing for the parameter TOWN.
}
funcs = {
'LATI': transform_ll, # Pass LATI through this function before loading.
'LONG': transform_ll, # Pass LONG through it too.
'UWI': lambda x: "No UWI, fix this!"
}
w = Well.from_las('../data/P-129.LAS', remap=remap, funcs=funcs)
w.location.latitude, w.location.longitude
w.uwi
```
Let's just hope the mess is the same mess in every well. (LOL, no-one's that lucky.)
<hr>
**© 2017 [agilescientific.com](https://www.agilescientific.com/) and licensed [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)**
# DaKanjiRecognizer - Single Kanji CNN : Create dataset
## Setup
Import the needed libraries.
```
#std lib
import sys
import os
import random
import math
import multiprocessing as mp
import gc
import time
import datetime
from typing import Tuple, List
from shutil import copy
from tqdm import tqdm
import tensorflow as tf
#reading the dataset
from etldr.etl_data_reader import ETLDataReader
from etldr.etl_character_groups import ETLCharacterGroups
from etldr.etl_data_names import ETLDataNames
from DataGenerator import generate_images, check_font_char_support
#data handling
import PIL
from PIL import Image as PImage
from PIL import ImageFilter, ImageFont, ImageDraw
import numpy as np
import cv2
#plotting/showing graphics
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import Image
#define a font to show japanese characters in matplotlib figures
import matplotlib.font_manager as fm
show_sample_font = fm.FontProperties(fname=os.path.join("..", "fonts", "NotoSerifCJKjp-Regular.otf"), size=20)
```
## Loading the data
The [ETL Character data set](http://etlcdb.db.aist.go.jp/) which I am using is a data set with multiple subsets (ETL1 - ETL7, ETL8B, ETL8G, ETL9B and ETL9G). <br/>
After unpacking the data set I renamed all folders and files to have a uniform naming scheme: "ETLX/ETLX_Y", where "X" is the number of the subset and "Y" the part of the subset. ETL7S was also removed (it is just a smaller version of ETL7L), and the following renaming was done: <br/>
ETL8B $\rightarrow$ ETL1, ETL8G $\rightarrow$ ETL9, ETL9B $\rightarrow$ ETL10 and ETL9G $\rightarrow$ ETL11.<br/>
This leads to the following data set structure: <br/>
| name | type | content | res | Bit depth | code | samples per label | total samples |
|:-----:|:-------:|:-----------------------------------------------------------------------:|:-------:|:---------:|:----------:|:----------------:|:-------------:|
| ETL1 | M-Type | Numbers <br/> Roman <br/> Symbols <br/> Katakana | 64x63 | 4 | JIS X 0201 | ~1400 | 141319 |
| ETL2 | K-Type | Hiragana <br/> Katakana <br/> Kanji <br/> Roman <br/> Symbols | 60x60 | 6 | CO59 | ~24 | 52796 |
| ETL3 | C-Type | Numeric <br/> Capital Roman <br/> Symbols | 72x76 | 4 | JIS X 0201 | 200 | 9600 |
| ETL4 | C-Type | Hiragana | 72x76 | 4 | JIS X 0201 | 120 | 6120 |
| ETL5 | C-Type | Katakana | 72x76 | 4 | JIS X 0201 | ~200 | 10608 |
| ETL6 | M-Type | Katakana <br/> Symbols | 64x63 | 4 | JIS X 0201 | 1383 | 157662 |
| ETL7 | M-Type | Hiragana <br/> Symbols | 64x63 | 4 | JIS X 0201 | 160 | 16800 |
| ETL8 | 8B-Type | Hiragana <br/> Kanji | 64x63 | 1 | JIS X 0208 | 160 | 157662 |
| ETL9 | 8G-Type | Hiragana <br/> Kanji | 128x127 | 4 | JIS X 0208 | 200 | 607200 |
| ETL10 | 9B-Type | Hiragana <br/> Kanji | 64x63 | 1 | JIS X 0208 | 160 | 152960 |
| ETL11 | 9G-Type | Hiragana <br/> Kanji | 128x127 | 4 | JIS X 0208 | 200 | 607200 |
Because the provided data set is distributed in a proprietary binary data format and is therefore hard to handle, I created an ```ETL_data_reader``` package. This package can be found [here](https://github.com/CaptainDario/ETLCDB_data_reader).
The specific data format is C-struct-like for the types M, 8B, 8G, 9B and 9G, but the types C and K are 6-bit encoded. All codes can be found on the [official website](http://etlcdb.db.aist.go.jp/file-formats-and-sample-unpacking-code).
I used the [struct module](https://docs.python.org/3/library/struct.html) and the [bitstring module](https://pypi.org/project/bitstring/) to unpack the binary data. <br/>
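As a purely illustrative sketch of this kind of fixed-size record unpacking (the format string below is hypothetical and does not match any real ETL record layout):
```
import struct

# Hypothetical record: a 2-byte character code, a 4-byte serial number,
# followed by 2016 bytes of packed 4-bit pixels (64 x 63 / 2).
RECORD_FMT = ">HI2016s"
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def read_records(path):
    with open(path, "rb") as f:
        while True:
            chunk = f.read(RECORD_SIZE)
            if len(chunk) < RECORD_SIZE:
                break
            yield struct.unpack(RECORD_FMT, chunk)
```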
First an instance of the ```ETLDataReader``` class is needed.
The path parameter should lead to the folder in which all parts of the ETL data set can be found.
```
path = "Z:\data_sets\etlcdb_binary"
reader = ETLDataReader(path)
```
Define a convenience function for showing characters and their label.
```
def show_image(img : np.array, label : str):
plt.figure(figsize=(2.2, 2.2))
plt.title(label=label, font=show_sample_font)
plt.axis("off")
plt.imshow(img.astype(np.float64), cmap="gray")
```
Now load all samples which contain Kanji, Hiragana and Katakana.
```
types = [ETLCharacterGroups.kanji, ETLCharacterGroups.katakana, ETLCharacterGroups.hiragana]
x, y = reader.read_dataset_whole(types, 16)
print(x.shape, y.shape)
```
With the loaded data we can take a look at the class distributions.
```
unique, counts = np.unique(y, return_counts=True)
balance = dict(zip(unique, counts))
plt.bar(range(0, len(counts)), counts, width=1.0)
plt.show()
```
Because the data is quite imbalanced we need more data.
First remove samples so that each class has at most 1000 samples.
```
del_inds, cnt = [], 0
for _x, _y in zip(x, y):
ind = np.where(unique == _y)
if(counts[ind] > 1000):
del_inds.append(cnt)
counts[ind] -= 1
cnt += 1
x = np.delete(x, del_inds, axis=0)
y = np.delete(y, del_inds)
unique, counts = np.unique(y, return_counts=True)
balance = dict(zip(unique, counts))
plt.bar(range(0, len(counts)), counts, width=1.0)
plt.show()
```
### Save etlcdb images to disk
To use the data later with keras we save them to disk in an appropriate folder structure. <br/>
The ETL_data_reader package provides a handy function for this.
```
reader.save_to_file(x, y, r"Z:\data_sets\dakanji_single_kanji_cnn", name=0)
```
## Create samples for missing JIS-2 Kanji
Because not all JIS 2 characters are in the etlcdb we need to get samples for them. <br/>
First find the characters which are in JIS2 but not in the data set.
```
chars_to_gen = {}
# add samples for the already existing classes
for u, c in zip(unique, counts):
if(c < 2000):
chars_to_gen[u] = 2000 - c
with open("jis2_characters.txt", encoding="utf8", mode="r") as f:
all_jis2_chars = f.read().replace(" ", "").replace("\n", "")
all_jis2_chars = list(all_jis2_chars)
missing_jis2_chars = [c for c in all_jis2_chars if c not in unique]
# add samples for missing jis2 characters
for c in missing_jis2_chars:
chars_to_gen[c] = 2000
```
Copy samples from the DaJapaneseDataGenerator dataset
```
da_data_dir = r"Z:\data_sets\da_japanese_data_generator"
with open(os.path.join(da_data_dir, "encoding.txt"), encoding="utf8", mode="r") as f:
d = eval(f.read())
da_data_encoding = {v : k for k, v in d.items()}
single_kanji_data_dir = r"Z:\data_sets\dakanji_single_kanji_cnn"
with open(os.path.join(single_kanji_data_dir, "encoding.txt"), encoding="utf8", mode="r") as f:
single_kanji_data_encoding = eval(f.read())
single_kanji_data_encoding["キ"]
chars_to_gen["あ"]
for char, cnt in chars_to_gen.items():
#
if(char not in single_kanji_data_encoding):
#print(char)
os.mkdir(os.path.join(single_kanji_data_dir, str(len(single_kanji_data_encoding))))
single_kanji_data_encoding[char] = [str(len(single_kanji_data_encoding)), 0]
#
for i in range(cnt):
_from = os.path.join(da_data_dir, str(da_data_encoding[char]), str(i) + ".png")
_to = os.path.join(single_kanji_data_dir, single_kanji_data_encoding[char][0], str(single_kanji_data_encoding[char][1]) + ".png")
#print(_from, _to)
copy(_from, _to)
single_kanji_data_encoding[char][1] += 1
with open(os.path.join(single_kanji_data_dir, "encoding.txt"), encoding="utf8", mode="w+") as f:
f.write(str(single_kanji_data_encoding))
```
#### Jupyter notebooks
This is a [Jupyter](http://jupyter.org/) notebook using Python. You can install Jupyter locally to edit and interact with this notebook.
# Finite difference methods for transient PDE
## Method of Lines
Our method for solving time-dependent problems will be to discretize in space first, resulting in a system of ordinary differential equations
$$ M \dot u = f(u) $$
where the "mass matrix" $M$ might be diagonal and $f(u)$ represents a spatial discretization that has the form $f(u) = A u$ for linear problems.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
def ode_euler(f, u0, tfinal=1, h=0.1):
u = np.array(u0)
t = 0
thist = [t]
uhist = [u0]
while t < tfinal:
h = min(h, tfinal - t)
u += h * f(t, u)
t += h
thist.append(t)
uhist.append(u.copy())
return np.array(thist), np.array(uhist)
tests = []
class fcos:
def __init__(self, k=5):
self.k = k
def __repr__(self):
return 'fcos(k={:d})'.format(self.k)
def f(self, t, u):
return -self.k * (u - np.cos(t))
def u(self, t, u0):
k2p1 = self.k**2+1
return (u0 - self.k**2/k2p1) * np.exp(-self.k*t) + self.k*(np.sin(t) + self.k*np.cos(t))/k2p1
tests.append(fcos(k=2))
tests.append(fcos(k=10))
u0 = np.array([.2])
plt.figure()
for test in tests:
thist, uhist = ode_euler(test.f, u0, h=.1, tfinal=6)
plt.plot(thist, uhist, '.', label=repr(test)+' Forward Euler')
plt.plot(thist, test.u(thist, u0), label=repr(test)+' exact')
plt.plot(thist, np.cos(thist), label='cos')
plt.legend(loc='upper right');
```
### Midpoint Method
What if, instead of evaluating the function at the end of the time step, we evaluated it in the middle of the time step using the average of the endpoint values?
$$ \tilde u(h) = u(0) + h f\left(\frac h 2, \frac{\tilde u(h) + u(0)}{2} \right) $$
For the linear problem, this reduces to
$$ \Big(I - \frac h 2 A \Big) u(h) = \Big(I + \frac h 2 A\Big) u(0) .$$
```
def ode_midpoint_linear(A, u0, tfinal=1, h=0.1):
u = u0.copy()
t = 0
thist = [t]
uhist = [u0]
I = np.eye(len(u))
while t < tfinal:
h = min(h, tfinal - t)
u = np.linalg.solve(I - .5*h*A, (I + .5*h*A) @ u)
t += h
thist.append(t)
uhist.append(u.copy())
return np.array(thist), np.array(uhist)
thist, uhist = ode_midpoint_linear(test.A, u0, h=.2, tfinal=15)
plt.figure()
plt.plot(thist, uhist, '*')
plt.plot(thist, test.u(thist, u0))
plt.title('Midpoint');
```
## $\theta$ method
The above methods are all special cases of the $\theta$ method
$$ \tilde u(h) = u(0) + h f\left(\theta h, \theta\tilde u(h) + (1-\theta)u(0) \right) $$
which, for linear problems, is solved as
$$ (I - h \theta A) u(h) = \Big(I + h (1-\theta) A \Big) u(0) . $$
$\theta=0$ is explicit Euler, $\theta=1$ is implicit Euler, and $\theta=1/2$ is the midpoint rule.
The stability function is
$$ R(z) = \frac{1 + (1-\theta)z}{1 - \theta z}. $$
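The next cell calls a helper `plot_stability` with grids `xx`, `yy`, `zz` that are defined earlier in the full notebook. As a rough sketch of what such a helper and grids might look like (an assumption, not the original definition), one could use:
```
def plot_stability(x, y, Rz, name):
    # Shade the stability region |R(z)| <= 1 on a grid z = x + 1j*y.
    plt.figure()
    plt.contourf(x, y, np.abs(Rz), levels=[0, 1])
    plt.contour(x, y, np.abs(Rz), levels=[1], colors='k')
    plt.axhline(0, color='gray', lw=.5)
    plt.axvline(0, color='gray', lw=.5)
    plt.title(name)

xx, yy = np.meshgrid(np.linspace(-4, 4, 201), np.linspace(-4, 4, 201))
zz = xx + 1j*yy
```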
```
for theta in [.2, .5, .8]:
plot_stability(xx, yy, (1 + (1-theta)*zz)/(1 - theta*zz), '$\\theta={:3.1f}$'.format(theta))
```
We will generalize slightly to allow solution of a linear differential algebraic equation
$$ M \dot u = A u + f(t,x) $$
where $M$ is (for now) a diagonal matrix that has zero rows at boundary conditions. With this generalization, the $\theta$ method becomes
$$ (M - h \theta A) u(h) = \Big(M + h (1-\theta) A \Big) u(0) + h f(h\theta, x) . $$
We will assume that $M$ is nonsingular if $\theta=0$.
```
def dae_theta_linear(M, A, u0, rhsfunc, bcs=[], tfinal=1, h=0.1, theta=.5):
u = u0.copy()
t = 0
hist = [(t,u0)]
while t < tfinal:
if tfinal - t < 1.01*h:
h = tfinal - t
tnext = tfinal
else:
tnext = t + h
h = min(h, tfinal - t)
rhs = (M + (1-theta)*h*A) @ u + h*rhsfunc(t+theta*h)
for i, f in bcs:
rhs[i] = theta*h*f(t+theta*h, x[i])
u = np.linalg.solve(M - theta*h*A, rhs)
t = tnext
hist.append((t, u.copy()))
return hist
```
### Stiff decay to cosine
```
test = fcos(k=5000)
u0 = np.array([.2])
hist = dae_theta_linear(np.eye(1), -test.k, u0,
lambda t: test.k*np.cos(t),
h=.1, tfinal=6, theta=.5)
hist = np.array(hist)
plt.plot(hist[:,0], hist[:,1], 'o')
tt = np.linspace(0, 6, 200)
plt.plot(tt, test.u(tt,u0));
```
#### Observations
* $\theta=1$ is robust
* $\theta=1/2$ gets correct long-term behavior, but has oscillations at early times
* $\theta < 1/2$ allows oscillations to grow
### Definition: $A$-stability
A method is $A$-stable if the stability region
$$ \{ z : |R(z)| \le 1 \} $$
contains the entire left half plane $$ \Re[z] \le 0 .$$
This means that the method can take arbitrarily large time steps without becoming unstable (diverging) for any problem that is indeed physically stable.
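As a quick check, on the imaginary axis $z = iy$ the $\theta$ method gives
$$ |R(iy)|^2 = \frac{1 + (1-\theta)^2 y^2}{1 + \theta^2 y^2} \le 1 \quad\Longleftrightarrow\quad (1-\theta)^2 \le \theta^2 \quad\Longleftrightarrow\quad \theta \ge \tfrac 1 2 , $$
and for $\theta \ge \frac 1 2$ the only pole of $R$ lies at $z = 1/\theta > 0$, so the maximum principle extends the bound to the whole left half plane: the $\theta$ method is $A$-stable exactly when $\theta \ge 1/2$.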
### Definition: $L$-stability
A time integrator with stability function $R(z)$ is $L$-stable if
$$ \lim_{z\to\infty} R(z) = 0 .$$
For the $\theta$ method, we have
$$ \lim_{z\to \infty} \frac{1 + (1-\theta)z}{1 - \theta z} = \frac{1-\theta}{\theta} . $$
Evidently only $\theta=1$ is $L$-stable.
## Transient PDE
### Diffusion (heat equation)
Let's first consider diffusion of a quantity $u(t,x)$
$$ \dot u(t,x) - u''(t,x) = f(t,x) \qquad t > 0, -1 < x < 1 \\
u(0,x) = g(x) \qquad u(t,-1) = h_L(t) \qquad u'(t,1) = h_R(t) .$$
Let's use a Chebyshev discretization in space.
```
%run fdtools.py # define cosspace, vander_chebyshev, and chebeval
def diffusion_cheb(n, left, right):
"""Solve the diffusion PDE on (-1,1) using n elements with rhsfunc(x) forcing.
The left and right boundary conditions are specified as a pair (deriv, func) where
* deriv=0 for Dirichlet u(x_endpoint) = func(x_endpoint)
* deriv=1 for Neumann u'(x_endpoint) = func(x_endpoint)"""
x = cosspace(-1, 1, n+1) # n+1 points is n "elements"
T = chebeval(x)
L = -T[2]
bcs = []
for i,deriv,func in [(0, *left), (-1, *right)]:
L[i] = T[deriv][i]
bcs.append((i, func))
M = np.eye(n+1)
M[[0,-1]] = 0
return x, M, -L @ np.linalg.inv(T[0]), bcs
x, M, A, bcs = diffusion_cheb(80, (0, lambda t,x: 0*x), (0, lambda t,x: 0*x+.5))
hist = dae_theta_linear(M, A, np.exp(-(x*8)**2), lambda t: 0*x, bcs,
h=.005, theta=.5, tfinal=0.3)
for t, u in hist[::10]:
plt.plot(x, u, label='$t={:4.2f}$'.format(t))
plt.legend(loc='lower left');
```
#### Observations
* Sharp central spike is diffused very quickly.
* Artifacts with $\theta < 1$.
#### Manufactured solution
```
class exact_tanh:
def __init__(self, k=1, x0=0):
self.k = k
self.x0 = x0
def u(self, t, x):
return np.tanh(self.k*(x - t - self.x0))
def u_x(self, t, x):
return self.k * np.cosh(self.k*(x - t - self.x0))**(-2)
def u_t(self, t, x):
return -self.u_x(t, x)
def u_xx(self, t, x):
return -2 * self.k**2 * np.tanh(self.k*(x - t - self.x0)) * np.cosh(self.k*(x - t - self.x0))**(-2)
def heatrhs(self, t, x):
return self.u_t(t,x) - self.u_xx(t,x)
ex = exact_tanh(2, -.3)
x, M, A, bcs = diffusion_cheb(20, (0, ex.u), (1, ex.u_x))
hist = dae_theta_linear(M, A, ex.u(0,x), lambda t: ex.heatrhs(t,x), bcs)
for t, u in hist:
plt.plot(x, u, label='$t={:3.1f}$'.format(t))
plt.legend(loc='lower right');
def mms_error(n):
x, M, A, bcs = diffusion_cheb(n, (0, ex.u), (1, ex.u_x))
hist = dae_theta_linear(M, A, ex.u(0,x),
lambda t: ex.heatrhs(t,x), bcs, h=1/n**2, theta=1)
return np.linalg.norm(hist[-1][1] - ex.u(hist[-1][0], x),
np.inf)
ns = np.logspace(.8, 1.6, 10).astype(int)
errors = [mms_error(n) for n in ns]
plt.loglog(ns, errors, 'o', label='numerical')
for p in range(1,4):
plt.loglog(ns, 1/ns**(p), label='$n^{-%d}$'%p)
plt.xlabel('n')
plt.ylabel('error')
plt.legend(loc='lower left');
```
#### Observations
* Errors are limited by time (not spatial) discretization error. This is a result of using the (spectrally accurate) Chebyshev method in space.
* $\theta=1$ is more accurate than $\theta = 1/2$, despite the latter being second order accurate in time. This is analogous to the stiff relaxation to cosine test.
#### Largest eigenvalues
```
def maxeig(n):
x, M, A, bcs = diffusion_cheb(n, (0, ex.u), (1, ex.u_x))
lam = np.linalg.eigvals(-A)
return max(lam)
plt.loglog(ns, [maxeig(n) for n in ns], 'o', label='cheb')
for p in range(1,5):
plt.loglog(ns, ns**(p), label='$n^{%d}$'%p)
plt.xlabel('n')
plt.ylabel('$\max \sigma(A)$')
plt.legend(loc='lower left');
```
### Finite difference method
```
def maxeig_fd(n):
dx = 2/n
A = 1/dx**2 * (2 * np.eye(n+1) - np.eye(n+1, k=1) - np.eye(n+1, k=-1))
return max(np.linalg.eigvals(A))
plt.loglog(2/ns, [maxeig_fd(n) for n in ns], 'o', label='fd')
for p in range(1,4):
plt.loglog(2/ns, 4*(2/ns)**(-p), label='$4 h^{-%d}$'%p)
plt.xlabel('h')
plt.ylabel('$\max \sigma(A)$')
plt.legend(loc='upper right');
```
#### Question: max explicit Euler time step
Express the maximum stable time step $\Delta t$ using explicit Euler in terms of the grid spacing $\Delta x$.
## Hyperbolic (wave) equations
The simplest hyperbolic equation is linear advection
$$ \dot u(t,x) + c u'(t,x) = f(t,x) $$
where $c$ is the wave speed and $f$ is a source term. In the homogenous ($f = 0$) case, the solution is given by characteristics
$$ u(t,x) = u(0, x - ct) . $$
This PDE also requires boundary conditions, but as a first-order equation, we can only enforce boundary conditions at one boundary. It turns out that this needs to be the _inflow_ boundary, so if $c > 0$, that is the left boundary condition $u(t, -1) = g(t)$. We can solve this system using Chebyshev methods.
```
def advection_cheb(n, c, left=(None,None), right=(None,None)):
"""Discretize the advection PDE on (-1,1) using n elements with rhsfunc(x) forcing.
The left boundary conditions are specified as a pair (deriv, func) where
* deriv=0 for Dirichlet u(x_endpoint) = func(x_endpoint)
* deriv=1 for Neumann u'(x_endpoint) = func(x_endpoint)"""
x = cosspace(-1, 1, n+1) # n+1 points is n "elements"
T = chebeval(x)
A = -c*T[1]
M = np.eye(n+1)
bcs = []
for i,deriv,func in [(0, *left), (-1, *right)]:
if deriv is None: continue
A[i] = T[deriv][i]
M[i] = 0
bcs.append((i, func))
return x, M, A @ np.linalg.inv(T[0]), bcs
x, M, A, bcs = advection_cheb(40, 1, left=(0, lambda t,x: 0*x))
hist = dae_theta_linear(M, A, np.exp(-(x*4)**2), lambda t: 0*x, bcs,
h=.001, theta=1)
for t, u in hist[::len(hist)//10]:
plt.plot(x, u, label='$t={:3.1f}$'.format(t))
plt.legend(loc='lower left')
np.linalg.cond(A)
lam = np.linalg.eigvals(A[:,:])
print(A[0,:5])
plt.plot(lam.real, lam.imag, '.');
```
#### Observations
* $\theta > 1/2$ causes decay in amplitude
* $\theta < 1/2$ causes growth -- unstable
* An undershoot develops behind the traveling wave and increasing resolution doesn't make it go away
* We need an *upwind* boundary condition, otherwise the system is unstable
* Only Dirichlet inflow conditions are appropriate -- Neumann conditions produce a singular matrix
### Finite difference
```
def advection_fd(n, c, stencil=2, bias=0, left=None, right=None):
x = np.linspace(-1, 1, n+1)
A = np.zeros((n+1,n+1))
for i in range(n+1):
sleft = max(0, i - stencil//2 + bias)
sleft = min(sleft, n+1 - stencil)
A[i,sleft:sleft+stencil] = -c*fdstencil(x[i], x[sleft:sleft+stencil])[1]
M = np.eye(n+1)
bcs = []
for i, func in [(0, left), (-1, right)]:
if func is None: continue
A[i] = 0
A[i,i] = 1
M[i] = 0
bcs.append((i, func))
return x, M, A, bcs
x, M, A, bcs = advection_fd(40, c=1, stencil=3, bias=0, left=lambda t,x: 0*x)
hist = dae_theta_linear(M, A, np.exp(-(x*4)**2), lambda t: 0*x, bcs,
h=2/(len(x)-1), theta=.5)
for t, u in hist[::len(hist)//10]:
plt.plot(x, u, label='$t={:3.1f}$'.format(t))
plt.legend(loc='lower left')
print('stencil', A[3,:7])
print('cond', np.linalg.cond(A))
lam = np.linalg.eigvals(A[1:,1:])
plt.plot(lam.real, lam.imag, '.')
#plt.spy(A[:6,:6]);
```
#### Observations
* Centered methods have an undershoot behind the traveling wave
* Upwind biasing of the stencil tends to reduce artifacts, but only `stencil=2` removes undershoots
* Downwind biasing is usually unstable
* With upwinded `stencil=2`, we can use an explicit integrator (a minimal sketch follows this list), but the time step must satisfy
$$ c \Delta t < \Delta x $$
* The upwind methods are in general dissipative -- amplitude is lost even with very accurate time integration
* The higher order upwind methods always produce artifacts for sharp transitions
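Referring to the CFL condition above, here is a minimal sketch (an illustration, not code from these notes) of the first-order upwind update with explicit Euler in time; a periodic domain via `np.roll` is assumed for brevity, whereas the experiments above use a Dirichlet inflow condition.
```
import numpy as np

def upwind_advect(u, c, dx, dt, steps):
    """March u_t + c u_x = 0 with first-order upwinding and explicit Euler (periodic)."""
    nu = c * dt / dx                      # CFL number; need nu <= 1 for stability
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))  # backward difference since c > 0
    return u

x = np.linspace(-1, 1, 201)
u = upwind_advect(np.exp(-(x*4)**2), c=1.0, dx=x[1]-x[0], dt=0.005, steps=100)
```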
### Phase analysis
We can apply the advection differencing stencils to the test functions $$ \phi(x, \theta) = e^{i \theta x}$$ and compare to the exact derivative $$ \frac{d \phi}{d x} = i \theta \phi(x, \theta) . $$
```
x = np.arange(-1, 1+1)
s1 = fdstencil(0, x)[1]
print(s1)
theta = np.linspace(0, np.pi)
phi = np.exp(1j*np.outer(x, theta))
plt.plot(theta, np.sin(theta))
plt.plot(theta, np.abs(s1 @ phi), '.')
plt.plot(theta, theta);
```
# Runge-Kutta methods
The methods we have considered thus far can all be written as Runge-Kutta methods, which are expressed in terms of $s$ "stage" equations (possibly coupled) and a completion formula. For the ODE
$$ \dot u = f(t, u) $$
the Runge-Kutta method is
$$\begin{split}
Y_i = u(t) + h \sum_j a_{ij} f(t+c_j h, Y_j) \\
u(t+h) = u(t) + h \sum_j b_j f(t+c_j h, Y_j)
\end{split}$$
where $c$ is a vector of *abscissae*, $A$ is a table of coefficients, and $b$ is a vector of completion weights.
These coefficients are typically expressed in a Butcher Table
$$ \left[ \begin{array}{c|c} c & A \\ \hline & b^T \end{array} \right] = \left[ \begin{array}{c|cc}
c_0 & a_{00} & a_{01} \\
c_1 & a_{10} & a_{11} \\
\hline
& b_0 & b_1
\end{array} \right] . $$
We will see that, for consistency, the abscissae $c$ are always the row sums of $A$ and that $\sum_i b_i = 1$.
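A minimal numerical check of these consistency conditions (the classical RK4 table, introduced later in these notes, is written out inline here as an example):
```
import numpy as np
A = np.array([[0, 0, 0, 0],
              [.5, 0, 0, 0],
              [0, .5, 0, 0],
              [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
print(A.sum(axis=1))   # the abscissae c: [0.  0.5 0.5 1. ]
print(b.sum())         # 1.0
```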
If the matrix $A$ is strictly lower triangular, then the method is **explicit** (does not require solving equations). We have seen forward Euler
$$ \left[ \begin{array}{c|cc}
0 & 0 \\
\hline
& 1
\end{array} \right] ,$$
backward Euler
$$ \left[ \begin{array}{c|c}
1 & 1 \\
\hline
& 1
\end{array} \right] ,$$
and Midpoint
$$ \left[ \begin{array}{c|c}
\frac 1 2 & \frac 1 2 \\
\hline
& 1
\end{array} \right]. $$
Indeed, the $\theta$ method is
$$ \left[ \begin{array}{c|c}
\theta & \theta \\
\hline
& 1
\end{array} \right] $$
and an alternative "endpoint" variant of $\theta$ (a generalization of the trapezoid rule) is
$$ \left[ \begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1-\theta & \theta \\
\hline
& 1-\theta & \theta
\end{array} \right]. $$
## Stability
To develop an algebraic expression for stability in terms of the Butcher Table, we consider the test equation
$$ \dot u = \lambda u $$
and apply the RK method to yield
$$ \begin{split} Y_i = u(0) + h \sum_j a_{ij} \lambda Y_j \\
u(h) = u(0) + h \sum_j b_j \lambda Y_j \end{split} $$
or, in matrix form,
$$ \begin{split} Y = \mathbb 1 u(0) + h \lambda A Y \\
u(h) = u(0) + h \lambda b^T Y \end{split} $$
where $\mathbb 1$ is a column vector of length $s$ consisting of all ones.
This reduces to
$$ u(h) = \underbrace{\Big( 1 + h\lambda b^T (I - h \lambda A)^{-1} \mathbb 1 \Big)}_{R(h\lambda)} u(0) . $$
```
def Rstability(A, b, z):
s = len(b)
def R(z):
return 1 + z*b.dot(np.linalg.solve(np.eye(s) - z*A, np.ones(s)))
f = np.vectorize(R)
return f(z)
def rk_butcher_theta(theta):
A = np.array([[theta]])
b = np.array([1])
return A, b
def zmeshgrid(xlen=5, ylen=5):
xx = np.linspace(-xlen, xlen, 100)
yy = np.linspace(-ylen, ylen, 100)
x, y = np.meshgrid(xx, yy)
z = x + 1j*y
return x, y, z
def plot_rkstability(A, b, label=''):
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import cm
x, y, z = zmeshgrid()
data = np.abs(Rstability(A, b, z))
cs = plt.contourf(x, y, data, np.arange(0, 2, 0.1), cmap=cm.coolwarm)
cbar = plt.colorbar(cs, ticks=np.linspace(0, 2, 5))
plt.axhline(y=0, xmin=-20.0, xmax=20.0, linewidth=1, linestyle='--', color='grey')
plt.axvline(x=0, ymin=-20.0, ymax=20.0, linewidth=1, linestyle='--', color='grey')
cs = plt.contour(x, y, data, np.arange(0, 2, 0.5), colors='k')
plt.clabel(cs, fontsize=6)
for c in cs.collections:
plt.setp(c, linewidth=1)
plt.title('Stability region' + (': ' + label if label else ''))
A, b = rk_butcher_theta(.5)
plot_rkstability(A, b, label='$\\theta$')
def rk_butcher_theta_endpoint(theta):
A = np.array([[0, 0], [1-theta, theta]])
b = np.array([1-theta, theta])
return A, b
A, b = rk_butcher_theta_endpoint(.5)
plot_rkstability(A, b, label='$\\theta$ endpoint')
```
Evidently the endpoint variant of $\theta$ has the same stability function as the original (midpoint) variant that we've been using. These methods are equivalent for linear problems, but different for nonlinear problems.
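As a quick sanity check (reusing `Rstability`, `rk_butcher_theta`, and `rk_butcher_theta_endpoint` from the cell above), the two stability functions agree at arbitrary sample points in the complex plane.
```
import numpy as np
z = np.array([-1 + 2j, -0.5 + 0j, 0 + 3j, 2 - 1j])
A1, b1 = rk_butcher_theta(.5)
A2, b2 = rk_butcher_theta_endpoint(.5)
print(np.allclose(Rstability(A1, b1, z), Rstability(A2, b2, z)))  # True
```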
## Higher order explicit methods: Heun's and RK4
Explicit Euler steps can be combined to create more accurate methods. One such example is Heun's method,
$$ \left[ \begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1 & 0 \\
\hline
& \frac 1 2 & \frac 1 2
\end{array} \right]. $$
Another explicit method is the famous four-stage RK4,
$$ \left[ \begin{array}{c|cccc}
0 & 0 & 0 & 0 & 0 \\
\frac 1 2 & \frac 1 2 & 0 & 0 & 0 \\
\frac 1 2 & 0 & \frac 1 2 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
\hline
& \frac 1 6 & \frac 1 3 & \frac 1 3 & \frac 1 6
\end{array} \right] . $$
```
def rk_butcher_heun():
A = np.array([[0, 0],[1,0]])
b = np.array([.5, .5])
return A, b
A, b = rk_butcher_heun()
plot_rkstability(A, b, label='Heun')
def rk_butcher_4():
A = np.array([[0,0,0,0],[.5,0,0,0],[0,.5,0,0],[0,0,1,0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
return A, b
A, b = rk_butcher_4()
plot_rkstability(A, b, label='RK4')
```
Finally a method with lots of stability along the imaginary axis. Let's try it on some test problems.
```
def ode_rkexplicit(f, u0, butcher=None, tfinal=1, h=.1):
if butcher is None:
A, b = rk_butcher_4()
else:
A, b = butcher
c = np.sum(A, axis=1)
s = len(c)
u = u0.copy()
t = 0
hist = [(t,u0)]
while t < tfinal:
if tfinal - t < 1.01*h:
h = tfinal - t
tnext = tfinal
else:
tnext = t + h
h = min(h, tfinal - t)
fY = np.zeros((len(u0), s))
for i in range(s):
Yi = u.copy()
for j in range(i):
Yi += h * A[i,j] * fY[:,j]
fY[:,i] = f(t + h*c[i], Yi)
u += h * fY @ b
t = tnext
hist.append((t, u.copy()))
return hist
test = linear(np.array([[0, 1],[-1, 0]]))
u0 = np.array([.5, 0])
hist = ode_rkexplicit(test.f, u0, rk_butcher_4(), tfinal=50, h=.8)
times = [t for t,u in hist]
plt.plot(times, [u for t,u in hist], '.')
plt.plot(times, test.u(times, u0));
```
#### Observations
* Solutions look pretty good and we didn't need a solve.
* We needed to evaluate the right hand side $s$ times per step
```
def mms_error(h, rk_butcher):
hist = ode_rkexplicit(test.f, u0, rk_butcher(), tfinal=20, h=h)
times = [t for t,u in hist]
u = np.array([u for t,u in hist])
return np.linalg.norm(u - test.u(times, u0), np.inf)
hs = np.logspace(-1.5, .5, 20)
error_heun = [mms_error(h, rk_butcher_heun) for h in hs]
error_rk4 = [mms_error(h, rk_butcher_4) for h in hs]
plt.loglog(hs, error_heun, 'o', label='Heun')
plt.loglog(hs, error_rk4, 's', label='RK4')
for p in [2,3,4]:
plt.loglog(hs, hs**p, label='$h^%d$'%p)
plt.title('Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Error')
plt.xlabel('$h$');
```
## Work-precision diagrams for comparing methods
Since these methods do not cost the same per step, it is more enlightening to compare them using some measure of cost. For large systems of ODE, such as arise by discretizing a PDE, the cost of time integration is dominated by evaluating the right hand side (discrete spatial operator) on each stage. Measuring CPU time is a more holistic measure of cost, but the results depend on the implementation, computer, and possible operating system interference/variability. Counting right hand side function evaluations is a convenient, reproducible measure of cost.
```
plt.loglog(20*2/hs, error_heun, 'o', label='Heun')
plt.loglog(20*4/hs, error_rk4, 's', label='RK4')
plt.title('Error vs cost')
plt.ylabel('Error')
plt.xlabel('# function evaluations')
plt.legend(loc='upper right');
test = linear(np.array([[0, 1, 0],[-1, 0, 0],[10, 0, -10]]))
print(np.linalg.eigvals(test.A))
u0 = np.array([.5, 0, 0])
hist = ode_rkexplicit(test.f, u0, rk_butcher_4(), tfinal=5, h=.1)
times = [t for t,u in hist]
plt.plot(times, [u for t,u in hist], '.')
plt.plot(times, test.u(times, u0));
hs = np.logspace(-2, -.7, 20)
error_heun = [mms_error(h, rk_butcher_heun) for h in hs]
error_rk4 = [mms_error(h, rk_butcher_4) for h in hs]
plt.loglog(20*2/hs, error_heun, 'o', label='Heun')
plt.loglog(20*4/hs, error_rk4, 's', label='RK4')
plt.title('Error vs cost')
plt.ylabel('Error')
plt.xlabel('# function evaluations')
plt.legend(loc='upper right');
```
Evidently Heun becomes resolved at lower cost than RK4.
## Refinement in space and time
When solving a transient PDE, we should attempt to balance spatial discretization error with temporal discretization error. If we wish to use the same type of method across a range of accuracies, we need to
1. choose spatial and temporal discretizations with the same order of accuracy,
2. choose grid/step sizes so the leading error terms are of comparable size, and
3. ensure that both spatial and temporal discretizations are stable throughout the refinement range.
Since temporal discretization errors are proportional to the duration, simulations that run for a long time will need to use more accurate time discretizations.
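As a rough sketch of point 2 (assuming smooth solutions and comparable error constants): if the spatial discretization has order $q$ (error $\sim C_x \Delta x^q$) and the time integrator has order $p$ (error $\sim C_t \Delta t^p$), balancing the two contributions suggests
$$ \Delta t \propto \Delta x^{q/p} . $$
With a very accurate (e.g. spectral) method in space and a first- or second-order method in time, as in the diffusion experiments above, the time step must shrink much faster than the grid spacing, which is consistent with the temporal error dominating there.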
# Runge-Kutta order conditions
We consider the autonomous differential equation
$$ \dot u = f(u) . $$
Higher derivatives of the exact solution can be computed using the chain rule, e.g.,
\begin{align*}
\ddot u(t) &= f'(u) \dot u = f'(u) f(u) \\
\dddot u(t) &= f''(u) f(u) f(u) + f'(u) f'(u) f(u) . \\
\end{align*}
Note that if $f(u)$ is linear, $f''(u) = 0$.
Meanwhile, the numerical solution is a function of the time step $h$,
$$\begin{split}
Y_i(h) &= u(0) + h \sum_j a_{ij} f(Y_j) \\
U(h) &= u(0) + h \sum_j b_j f(Y_j).
\end{split}$$
We will take the limit $h\to 0$ and equate derivatives of the numerical solution. First we differentiate the stage equations,
$$\begin{split}
Y_i(0) &= u(0) \\
\dot Y_i(0) &= \sum_j a_{ij} f(Y_j) \\
\ddot Y_i(0) &= 2 \sum_j a_{ij} \dot f(Y_j) \\
&= 2 \sum_j a_{ij} f'(Y_j) \dot Y_j \\
&= 2\sum_{j,k} a_{ij} a_{jk} f'(Y_j) f(Y_k) \\
\dddot Y_i(0) &= 3 \sum_j a_{ij} \ddot f (Y_j) \\
&= 3 \sum_j a_{ij} \Big( f''(Y_j) \dot Y_j \dot Y_j + f'(Y_j) \ddot Y_j \Big) \\
&= 3 \sum_{j,k,\ell} a_{ij} a_{jk} \Big( a_{j\ell} f''(Y_j) f(Y_k) f(Y_\ell) + 2 a_{k\ell} f'(Y_j) f'(Y_k) f(Y_\ell) \Big)
\end{split}$$
where we have used Leibniz's formula for the $m$th derivative,
$$ (h \phi(h))^{(m)}|_{h=0} = m \phi^{(m-1)}(0) .$$
Similar formulas apply for $\dot U(0)$, $\ddot U(0)$, and $\dddot U(0)$, with $b_j$ in place of $a_{ij}$.
Equating terms $\dot u(0) = \dot U(0)$ yields
$$ \sum_j b_j = 1, $$
equating $\ddot u(0) = \ddot U(0)$ yields
$$ 2 \sum_{j,k} b_j a_{jk} = 1 , $$
and equating $\dddot u(0) = \dddot U(0)$ yields the two equations
$$\begin{split}
3\sum_{j,k,\ell} b_j a_{jk} a_{j\ell} &= 1 \\
6 \sum_{j,k,\ell} b_j a_{jk} a_{k\ell} &= 1 .
\end{split}$$
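As a quick numerical check (reusing `rk_butcher_4` from earlier), the classical RK4 coefficients satisfy all four of the conditions above.
```
import numpy as np
A, b = rk_butcher_4()
c = A.sum(axis=1)
print(np.isclose(b.sum(), 1),          # first order
      np.isclose(2 * b @ c, 1),        # second order
      np.isclose(3 * b @ c**2, 1),     # third order, first condition
      np.isclose(6 * b @ (A @ c), 1))  # third order, second condition
```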
#### Observations
* These are systems of nonlinear equations for the coefficients $a_{ij}$ and $b_j$. There is no guarantee that they have solutions.
* The number of equations grows rapidly as the order increases.
| | $u^{(1)}$ | $u^{(2)}$ | $u^{(3)}$ | $u^{(4)}$ | $u^{(5)}$ | $u^{(6)}$ | $u^{(7)}$ | $u^{(8)}$ | $u^{(9)}$ | $u^{(10)}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| # terms | 1 | 1 | 2 | 4 | 9 | 20 | 48 | 115 | 286 | 719 |
| cumulative | 1 | 2 | 4 | 8 | 17 | 37 | 85 | 200 | 486 | 1205 |
* Usually the number of order conditions does not exactly match the number of free parameters, meaning that the remaining parameters can be optimized (usually numerically) for different purposes, such as to minimize the leading error terms or to maximize stability in certain regions of the complex plane. Finding globally optimal solutions can be extremely demanding.
* The arithmetic managing the derivatives gets messy, but can be managed using rooted trees.

#### Theorem (from Hairer, Nørsett, and Wanner)
A Runge-Kutta method is of order $p$ if and only if
$$ \gamma(t) \sum_{j} b_j \Phi_j(t) = 1 $$
for all trees $t$ of order $\le p$.
For a linear autonomous equation
$$ \dot u = A u $$
we only need one additional order condition per order of accuracy because $f'' = 0$.
These conditions can also be derived by equating derivatives of the stability function $R(z)$ with the exponential $e^z$.
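For example (reusing `Rstability` and `rk_butcher_4` from earlier), the RK4 stability function agrees with $e^z$ through fourth order, so the mismatch at a small $z$ is $O(z^5)$:
```
import numpy as np
A, b = rk_butcher_4()
z = 1e-2
print(abs(Rstability(A, b, np.array([z]))[0] - np.exp(z)))  # roughly z**5/120
```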
For a linear non-autonomous equation
$$ \dot u = A(t) u + g(t) $$
or more generally, an autonomous system with quadratic right hand side,
$$ \dot u = B (u \otimes u) + A u + C $$
where $B$ is a rank 3 tensor, we have $f''' = 0$, thus limiting the number of order conditions.
# Embedded error estimation and adaptive control
It is often possible to design Runge-Kutta methods with multiple completion orders, say of order $p$ and $p-1$.
$$\left[ \begin{array}{c|c} c & A \\ \hline & b^T \\ & \tilde b^T \end{array} \right] . $$
The classical RK4 does not come with an embedded method, but most subsequent RK methods do.
The [Bogacki-Shampine method](https://en.wikipedia.org/wiki/Bogacki%E2%80%93Shampine_method) is given by
```
def rk_butcher_bs3():
A = np.array([[0, 0, 0, 0],
[1/2, 0, 0, 0],
[0, 3/4, 0, 0],
[2/9, 1/3, 4/9, 0]])
b = np.array([[2/9, 1/3, 4/9, 0],
[7/24, 1/4, 1/3, 1/8]])
return A, b
A, b = rk_butcher_bs3()
plot_rkstability(A, b[0], label='Bogacki-Shampine 3')
plt.figure()
plot_rkstability(A, b[1], label='Bogacki-Shampine 2')
```
While this method has four stages, it has the "first same as last" (FSAL) property, meaning that the fourth stage exactly matches the completion formula and is therefore identical to the first stage of the next time step. This means it can be implemented using only three function evaluations per time step.
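A quick check of the FSAL property (reusing `rk_butcher_bs3` from the cell above): the last row of $A$ equals the order-3 completion weights, so the last stage's function evaluation can be reused as the first stage of the next step.
```
import numpy as np
A, b = rk_butcher_bs3()
print(np.allclose(A[-1], b[0]))  # True: the first stage of the next step is "free"
```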
Higher order methods with embedded error estimation include
* [Fehlberg](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method), a 6-stage, 5th order method for which the 4th order embedded formula has been optimized for accuracy.
* [Dormand-Prince](https://en.wikipedia.org/wiki/Dormand%E2%80%93Prince_method), a 7-stage, 5th order method with the FSAL property, with the 5th order completion formula optimized for accuracy.
```
# We can import and clean these coefficient tables directly from Wikipedia
import pandas
from fractions import Fraction
dframe = pandas.read_html('https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method')[0]
dframe
# Clean up unicode minus sign, NaN, and convert to float
dfloat = dframe.applymap(lambda s: s.replace('−', '-') if isinstance(s, str) else s) \
.fillna(0).applymap(Fraction).astype(float)
dfloat
# Extract the Butcher table
darray = np.array(dfloat)
A = darray[:6,2:]
b = darray[6:,2:]
pandas.DataFrame(A) # Labeled tabular display
plot_rkstability(A, b[0], label='Fehlberg 5')
plt.figure()
plot_rkstability(A, b[1], label='Fehlberg 4')
dframe = pandas.read_html('https://en.wikipedia.org/wiki/Dormand%E2%80%93Prince_method')[0]
dfloat = dframe.applymap(lambda s: s.replace('−', '-') if isinstance(s, str) else s).fillna(0).applymap(Fraction).astype(float)
darray = np.array(dfloat)
A = darray[:7,2:]
b = darray[7:,2:]
pandas.DataFrame(A)
plot_rkstability(A, b[0], label='DP 5')
plt.figure()
plot_rkstability(A, b[1], label='DP 4')
```
## Adaptive control
Given a completion formula $b^T$ of order $p$ and $\tilde b^T$ of order $p-1$, an estimate of the local truncation error (on this step) is given by
$$ e_{\text{loc}}(h) = \lVert h (b - \tilde b)^T f(Y) \rVert \in O(h^p) . $$
Given a tolerance $\epsilon$, we would like to find $h_*$ such that
$$ e_{\text{loc}}(h_*) < \epsilon . $$
If $$e_{\text{loc}}(h) = c h^p$$ for some constant $c$, then
$$ c h_*^p < \epsilon $$
implies
$$ h_* < \left( \frac{\epsilon}{c} \right)^{1/p} . $$
Given the estimate with the current $h$,
$$ c = e_{\text{loc}}(h) / h^p $$
we conclude
$$ \frac{h_*}{h} < \left( \frac{\epsilon}{e_{\text{loc}}(h)} \right)^{1/p} . $$
#### Notes
* Usually a "safety factor" less than 1 is included so that the predicted error stays safely below the threshold at which a time step would be rejected.
* We have used an absolute tolerance above. If the values of solution variables vary greatly in time, a relative tolerance $e_{\text{loc}}(h) / \lVert u(t) \rVert$ or a combination thereof is desirable.
* There is a debate about whether one should optimize the rate at which error is accumulated with respect to work (estimate above) or with respect to simulated time (as above, but with error behaving as $O(h^{p-1})$). For problems with a range of time scales at different periods, this is usually done with respect to work.
* Global error control is an active research area.
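A minimal sketch of the resulting step-size controller (the function name, safety factor, and clipping limits here are illustrative assumptions, not taken from these notes):
```
def propose_step(h, err_loc, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Accept/reject the current step and propose the next h for an order-p error estimate."""
    accepted = err_loc <= tol
    # e_loc(h) ~ c h^p  =>  h_* = h (tol / e_loc)^(1/p), damped by a safety factor
    factor = safety * (tol / max(err_loc, 1e-16))**(1.0 / p)
    return accepted, h * min(fac_max, max(fac_min, factor))
```
In an adaptive loop one would retry the same step with the smaller proposed `h` when `accepted` is false, and otherwise advance the solution and use the proposed `h` for the next step.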
# Homework 4: Due 2018-12-03 (Monday)
* Implement an explicit Runge-Kutta integrator that takes an initial time step $h_0$ and an error tolerance $\epsilon$.
* You can use the Bogacki-Shampine method or any other method with an embedded error estimate.
* A step should be rejected if the local truncation error exceeds the tolerance.
* Test your method on the nonlinear equation
$$ \begin{bmatrix} \dot u_0 \\ \dot u_1 \end{bmatrix} = \begin{bmatrix} u_1 \\ k (1-u_0^2) u_1 - u_0 \end{bmatrix} $$
for $k=2$, $k=5$, and $k=20$.
* Make a work-precision diagram for your adaptive method and for constant step sizes.
* State your conclusions or ideas (in a README, or Jupyter notebook) about appropriate (efficient, accurate, reliable) methods for this type of problem.
# Implicit Runge-Kutta methods
We have been considering examples of high-order explicit Runge-Kutta methods.
For processes like diffusion, the time step becomes limited by stability rather than accuracy (certainly under grid refinement, and usually already at practical resolutions). Implicit methods, especially $A$-stable and $L$-stable methods, allow much larger time steps.
### Diagonally implicit
A Runge-Kutta method is called **diagonally implicit** if the Butcher matrix $A$ is lower triangular, in which case the stages can be solved sequentially. Each stage equation has the form
$$ Y_i - h a_{ii} f(Y_i) = u(0) + h \sum_{j<i} a_{ij} f(Y_j) $$
where all terms in the right hand side are known.
For stiff problems, it is common to multiply through by $\alpha = (h a_{ii})^{-1}$, yielding
$$ \alpha Y_i - f(Y_i) = \alpha u(0) + \sum_{j<i} \frac{a_{ij}}{a_{ii}} f(Y_j) . $$
* It is common for solvers to reuse a linearization associated with $f(Y_i)$.
* It is common to have setup costs associated with the solution of the "shifted" problem.
Methods with constant diagonals, $a_{ii} = a_{jj}$, are often desired to amortize setup costs. These methods are called **singly diagonally implicit**. There are also related methods called Rosenbrock or Rosenbrock-W that more aggressively amortize setup costs.
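A minimal sketch (not from these notes) of how the stages of a diagonally implicit method are solved sequentially for a linear problem $\dot u = L u$, where each stage requires a solve with the shifted operator $I - h a_{ii} L$; the helper name and the implicit-midpoint example are illustrative.
```
import numpy as np

def dirk_step_linear(L, u0, h, A, b):
    """One step of a diagonally implicit RK method for the linear problem u' = L u."""
    s, n = len(b), len(u0)
    fY = np.zeros((s, n))
    for i in range(s):
        # right hand side uses only the already-computed stages j < i
        rhs = u0 + h * sum(A[i, j] * fY[j] for j in range(i))
        Yi = np.linalg.solve(np.eye(n) - h * A[i, i] * L, rhs)
        fY[i] = L @ Yi
    return u0 + h * b @ fY

# Example: implicit midpoint (the theta = 1/2 table) on a stiff scalar problem
L = np.array([[-1000.0]])
A_im, b_im = np.array([[0.5]]), np.array([1.0])
print(dirk_step_linear(L, np.array([1.0]), 0.01, A_im, b_im))  # matches R(-10) = -2/3
```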
```
import random
suits = ('Hearts','Diamonds','Spades','Clubs')
ranks = ('Two','Three','Four','Five','Six','Seven','Eight','Nine','Ten','Jack','Queen','King','Ace')
values = {'Two':2,'Three':3,'Four':4,'Five':5,'Six':6,
          'Seven':7,'Eight':8,'Nine':9,'Ten':10,'Jack':10,'Queen':10,'King':10,'Ace':11}
playing = True
class Card():
def __init__(self,suit,rank):
self.suit = suit
self.rank = rank
self.value = values[rank]
def __str__(self):
return self.rank +" of " + self.suit
class Deck():
def __init__(self):
self.all_cards = []
for suit in suits:
for rank in ranks:
created_card = Card(suit,rank)
self.all_cards.append(created_card)
def shuffle_deck(self):
random.shuffle(self.all_cards)
def __str__(self):
        # __str__ must return a single string, so concatenate the card strings
l = ' '
for i in self.all_cards:
l += '\n' + i.__str__()
return l
def deal_one(self):
return self.all_cards.pop(0)
class Hand:
def __init__(self):
self.cards = [] # start with an empty list as we did in the Deck class
self.value = 0 # start with zero value
self.aces = 0 # add an attribute to keep track of aces
def add_card(self,card):
self.cards.append(card)
self.value += values[card.rank]
if card.rank == 'Ace':
self.aces +=1
def adjust_for_ace(self):
        # In Python, 0 is falsy and any nonzero integer is truthy,
        # so `self.aces` alone works as the while-loop condition below
while self.value >21 and self.aces:
self.value -= 10
self.aces -=1
player = Deck()
player.shuffle_deck()
test_hand = Hand()
test_hand.add_card(player.deal_one()) # reduced variable
class Chips():
def __init__(self,total = 100):
self.total = total
self.bet = 0
def win_bet(self):
self.total += self.bet
    def lose_bet(self):
        self.total -= self.bet
def take_bet(chips):
while True:
try:
chips.bet = int(input("Enter No of Chips? "))
except:
print("Enter Integer Only ! ")
else:
if chips.bet > chips.total:
print('Bet{} is more than Total {}'.format(chips.bet,chips.total))
else:
break
def hit(deck,hand):
hand.add_card(deck.deal_one())
hand.adjust_for_ace()
def hit_or_stand(deck,hand):
global playing
while True:
x = input(" Hit Or Stand ! Enter h or s ")
if x[0].lower() == 'h':
print("Done")
hit(deck,hand)
elif x[0].lower() == 's':
print(" Player Stands !,Dealers Turn ")
print("Done")
playing = False
else:
print(" Enter H or S only ")
continue
break
def show_some(player,dealer):
print("\nDealer's Hand:")
print(" <card hidden>")
print('',dealer.cards[1])
#to show all cards we can use list unpacking * (sep is used to give new line b/w unpacking)
print("\nPlayer's Hand:", *player.cards, sep='\n ')
def show_all(player,dealer):
print("\nDealer's Hand:", *dealer.cards, sep='\n ')
print("Dealer's Hand =",dealer.value)
print("\nPlayer's Hand:", *player.cards, sep='\n ')
print("Player's Hand =",player.value)
def player_busts(chips):
print("Player busts!")
chips.lose_bet()
def player_wins(chips):
print("Player wins!")
chips.win_bet()
def dealer_busts(chips):
print("Dealer busts!")
chips.win_bet()
def dealer_wins(chips):
print("Dealer wins!")
chips.lose_bet()
def push(player,dealer):
print("Dealer and Player tie! It's a push.")
while True:
print(" Welcome to the Game ")
# deck created and shuffled
deck = Deck()
deck.shuffle_deck()
# players hand
player_hand = Hand()
player_hand.add_card(deck.deal_one())
player_hand.add_card(deck.deal_one())
# dealers hand
dealer_hand = Hand()
dealer_hand.add_card(deck.deal_one())
dealer_hand.add_card(deck.deal_one())
#set player chips
player_chips = Chips()
#hit
take_bet(player_chips)
show_some(player_hand,dealer_hand)
while playing:
# hit or stand to continue
hit_or_stand(deck,player_hand)
show_some(player_hand,dealer_hand)
if player_hand.value > 21:
player_busts(player_chips)
break
    if player_hand.value <= 21:
while dealer_hand.value < player_hand.value:
hit(deck,dealer_hand)
show_all(player_hand,dealer_hand)
if dealer_hand.value > 21:
dealer_busts(player_chips)
elif dealer_hand.value > player_hand.value:
dealer_wins(player_chips)
elif player_hand.value > dealer_hand.value:
player_wins(player_chips)
else:
push(player_hand,dealer_hand)
break
print('\n')
    print("Player's remaining chips: {}".format(player_chips.total))
new_game = input("Would you like to play another hand? Enter 'y' or 'n' ")
if new_game[0].lower()=='y':
playing = True
continue
else:
print("Thanks for playing!")
break
```
```
import tensorflow as tf
from tensorflow import keras as keras
import time
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.image as mpimg
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, Dropout, Lambda, LayerNormalization
from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler, History, EarlyStopping
from sample.functionImplemented import get_model, custom_loss, get_threshold, schedule, custom_scaler, get_opt_action, update_replay_helper, populate_replay_memory, update_replay_memory
from sample.car import Car
from sample.track import Track
img = mpimg.imread("tracks/track_pic9.jpg")[:,:,0]
track1=(img<50).astype('int')
print(track1.shape)
track_rows, track_cols=track1.shape
pos_pos=np.where(track1==1)
spawning_positions=np.zeros((len(pos_pos[0]), 2))
spawning_positions[:, 0]=pos_pos[0]
spawning_positions[:, 1]=pos_pos[1]
spawning_positions=spawning_positions.astype('int')
track=Track(track1, 5)
l=spawning_positions[np.random.choice(range(len(spawning_positions)), size=(20, ))]
for (i,j) in l:
track.add_checkpoints(i,j)
track.checkpoints=np.asarray(track.checkpoints)
track.spawn_at=np.asarray(track.spawn_at)
plt.imshow(track1)
plt.show()
throttle_quant=np.linspace(-1,1,9)
steer_quant=np.linspace(-1,1,7)
actions=np.asarray([(throttle, steer) for throttle in throttle_quant for steer in steer_quant])
data_scaler=np.asarray([
100, 100, 100, 100,
100, 100, 100, 100,
50, 1, 1
])
usescaler=True
gamma=0.9
trainedModel=tf.keras.models.load_model("TrainedModels/trainedModelspa1.h5", custom_objects={'cl':custom_loss(gamma)})
new_car=Car(track, 80, 10.0)
# new_car.sampling_frequency=10.0
throttle_trace=[]
steer_trace=[]
speed_trace=[]
def get_plot(positions, superimposeon_this):
x, y=positions
for x_diff in range(-5, 7):
for y_diff in range(-5, 7):
if np.sqrt(x_diff**2+y_diff**2)<14:
superimposeon_this[x+x_diff][y+y_diff]=1
f=plt.figure(figsize=(10, 20))
plt.imshow(superimposeon_this+new_car.track.track)
plt.show()
return
base_fig=np.zeros((track_rows, track_cols))
for iteration in range(200):
r, c=new_car.integer_position_
for x_diff in range(-3, 4):
for y_diff in range(-3, 4):
if np.sqrt(x_diff**2+y_diff**2)<4:
if r+x_diff<new_car.track.track.shape[0] and c+y_diff<new_car.track.track.shape[1]:
base_fig[r+x_diff][c+y_diff]=1
throttle, steer=get_opt_action(new_car, trainedModel, actions, data_scaler, usescaler)
throttle_trace.append(throttle)
steer_trace.append(steer)
speed_trace.append(new_car.speed)
theta=new_car.car_angle
f1, f2=throttle*np.sin(theta)-steer*np.cos(theta), throttle*np.cos(theta)+steer*np.sin(theta)
# print(steer, new_car.speed, new_car.car_angle, new_car.current_position)
new_car.execute_forces(f1, f2, max_magnitudes=20)
# new_car.speed=20.0
if new_car.collided_on_last:
print("boom")
break
get_plot(new_car.integer_position_, base_fig)
telemetry_plts=plt.figure(figsize=(10, 10))
ax1=telemetry_plts.add_subplot(3, 1, 1)
ax1.plot(speed_trace)
ax2=telemetry_plts.add_subplot(3, 1, 2)
ax2.plot(throttle_trace)
ax3=telemetry_plts.add_subplot(3, 1, 3)
ax3.plot(steer_trace)
ax1.set_title("Speed")
ax2.set_title("throttle")
ax3.set_title("Steering")
telemetry_plts.suptitle("Telemetry")
telemetry_plts.show()
trainedModel.summary()  # inspect the loaded model (the original cell ended with a dangling "trainedModel." expression)
```
# Machine Learning Engineer Nanodegree
## Introduction and Foundations
## Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
> **Tip:** Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
# Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to `import` the functionality we need, and load our data into a `pandas` DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the `.head()` function.
> **Tip:** You can run a code cell by clicking on the cell and using the keyboard shortcut **Shift + Enter** or **Shift + Return**. Alternatively, a code cell can be executed using the **Play** button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. [Markdown](http://daringfireball.net/projects/markdown/syntax) allows you to write easy-to-read plain text that can be converted to HTML.
```
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
```
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- **Survived**: Outcome of survival (0 = No; 1 = Yes)
- **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- **Name**: Name of passenger
- **Sex**: Sex of the passenger
- **Age**: Age of the passenger (Some entries contain `NaN`)
- **SibSp**: Number of siblings and spouses of the passenger aboard
- **Parch**: Number of parents and children of the passenger aboard
- **Ticket**: Ticket number of the passenger
- **Fare**: Fare paid by the passenger
- **Cabin** Cabin number of the passenger (Some entries contain `NaN`)
- **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets.
Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`.
```
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
```
The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `data` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*. That means for any passenger `data.loc[i]`, they have the survival outcome `outcomes[i]`.
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how *accurate* our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our `accuracy_score` function and test a prediction on the first five passengers.
**Think:** *Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?*
```
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print(accuracy_score(outcomes[:5], predictions))
```
> **Tip:** If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
# Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The `predictions_0` function below will always predict that a passenger did not survive.
```
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
```
### Question 1
*Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?*
**Hint:** Run the code cell below to see the accuracy of this prediction.
```
print(accuracy_score(outcomes, predictions))
```
**Answer:** 61.62%
***
Let's take a look at whether the feature **Sex** has any indication of survival rates among passengers using the `survival_stats` function. This function is defined in the `titanic_visualizations.py` Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
```
survival_stats(data, outcomes, 'Sex')
```
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females *did* survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
**Hint:** You can access the values of each feature for a passenger like a dictionary. For example, `passenger['Sex']` is the sex of the passenger.
```
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger.Sex == 'male':
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
```
### Question 2
*How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?*
**Hint:** Run the code cell below to see the accuracy of this prediction.
```
print(accuracy_score(outcomes, predictions))
```
**Answer**: 78.68%
***
Using just the **Sex** feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the **Age** of each male, by again using the `survival_stats` function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the **Sex** 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
```
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
```
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older *did not survive* the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
**Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_1`.
```
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if (passenger.Sex == 'female'):
predictions.append(1)
elif (passenger.Sex == 'male' and passenger.Age < 10):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
```
### Question 3
*How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?*
**Hint:** Run the code cell below to see the accuracy of this prediction.
```
print(accuracy_score(outcomes, predictions))
```
**Answer**: 79.35%
***
Adding the feature **Age** as a condition in conjunction with **Sex** improves the accuracy by a small margin over simply using the feature **Sex** alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
**Pclass**, **Sex**, **Age**, **SibSp**, and **Parch** are some suggested features to try.
Use the `survival_stats` function below to examine various survival statistics.
**Hint:** To use multiple filter conditions, put each condition in the list passed as the last argument. Example: `["Sex == 'male'", "Age < 18"]`
```
survival_stats(data, outcomes, 'Embarked', ["Pclass == 3", "Age < 30", "Sex == 'female'", "SibSp == 2"])
```
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
**Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_2`.
```
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
        if (passenger.Sex == 'female' and passenger.Pclass != 3):
predictions.append(1)
elif (passenger.Sex == 'female' and passenger.Pclass == 3 and passenger.Age < 28 and passenger.SibSp == 0):
predictions.append(1)
        elif (passenger.Sex == 'male' and passenger.Pclass != 3 and passenger.Age < 10):
predictions.append(1)
elif (passenger.Sex == 'male' and passenger.Pclass == 1 and passenger.Age > 31 and passenger.Age < 44 and passenger.Fare > 5.000):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
```
### Question 4
*Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?*
**Hint:** Run the code cell below to see the accuracy of your predictions.
```
print(accuracy_score(outcomes, predictions))
```
**Answer**: 82.15%
**My Steps**:
* First, based on the comparison between male and female passengers, I could see that just considering the females our accuracy was already very good.
* Second, I tried to learn more about the female passengers, so by comparing the female data by **Pclass** I could see that classes 1 and 2 were very good but class 3 needed a review.
So I added my **first rule**: **females in classes 1 and 2 - survived**.
Then I investigated the females in class 3 and saw that those under 30 were surviving a lot, so after some refinement I added my **second rule**: **females in class 3, under 28 and without siblings - survived**.
* Third, I started trying to understand the males, and I saw that those under 10 were also surviving, with class 3 being the worst.
So I added my **third rule**: **males under 10 in classes 1 and 2 - survived**.
* Fourth, I tried to refine the profile of males older than 10 years. I saw that the majority of those who survived were in class 1, so I identified an age range of roughly 30 to 40 years and a fare above 5.000. This became my **fourth rule**: **males between 31 and 44 years, in class 1, who paid more than 5.000 - survived**.
# Conclusion
After several iterations of exploring and conditioning on the data, you have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the *decision tree*. A decision tree splits a set of data into smaller and smaller groups (called *nodes*), by one feature at a time. Each time a subset of the data is split, our predictions become more accurate if each of the resulting subgroups is more homogeneous (contains similar labels) than before. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. [This link](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) provides another introduction into machine learning using a decision tree.
A decision tree is just one of many models that come from *supervised learning*. In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like `'Survived'`, or a numerical, continuous value like predicting the price of a house.
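For comparison, a minimal sketch (not part of the original project) of letting scikit-learn fit a small decision tree on a few of the same features; it assumes the `data` and `outcomes` variables from the cells above and that scikit-learn is available, and the simple Sex encoding and Age imputation are illustrative choices.
```
from sklearn.tree import DecisionTreeClassifier

features = data[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare']].copy()
features['Sex'] = (features['Sex'] == 'female').astype(int)       # encode as 0/1
features['Age'] = features['Age'].fillna(features['Age'].median())  # fill missing ages

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features, outcomes)
print("Training accuracy: {:.2f}%".format(100 * tree.score(features, outcomes)))
```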
### Question 5
*Think of a real-world scenario where supervised learning could be applied. What would be the outcome variable that you are trying to predict? Name two features about the data used in this scenario that might be helpful for making the predictions.*
**Answer**: I think supervised learning could be applied to support Human Resources, by analysing all employees in a company, considering as data their job role, salary, age, sex, how long they have been in their current role, how long they have been with the company, an employee satisfaction score, etc.
The outcome variable could be whether an employee will leave or stay in the company, so the HR manager can use this algorithm to check whether good employees are "almost leaving" the company and give them promotions so they will stay in their jobs longer.
**Sample**
The employee John Doe is a key contributor to the company, but he has been with the company for more than 6 years and his salary is below the average salary for his role. He is now flagged as **LEAVING THE COMPANY** by our algorithm. Knowing this, the HR manager can check with John Doe's manager and take action to change the **LEAVING THE COMPANY** status, for example through a salary increase or promotion.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.