# MNIST Convolutional Neural Network - Ensemble Learning
Gaetano Bonofiglio, Veronica Iovinella
In this notebook we will verify whether our single-column architecture gains any advantage from **ensemble learning**, i.e. a multi-column architecture.
We will train multiple networks identical to the best one defined in notebook 03, feeding each of them pre-processed images shuffled and distorted with a different pseudo-random seed. This should give us a diverse ensemble of networks whose outputs we can average for each classification.
A prediction takes no longer than with a single column, but training time scales by a factor of N, where N is the number of columns. The networks could be trained in parallel, but not on our current hardware, which is saturated by the training of a single one.
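Conceptually, the ensemble prediction is just the element-wise mean of the columns' softmax outputs. Below is a minimal NumPy sketch of that averaging step (the notebook itself uses a Keras `Merge` layer for this later):
```python
import numpy as np

def average_columns(column_probabilities):
    """Average softmax outputs from several columns and return class predictions.

    column_probabilities: list of arrays, each of shape (n_samples, n_classes).
    """
    mean_probs = np.mean(np.stack(column_probabilities, axis=0), axis=0)
    return mean_probs, np.argmax(mean_probs, axis=1)
```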
## Imports
```
import os.path
from IPython.display import Image
from util import Util
u = Util()
import numpy as np
# Explicit random seed for reproducibility
np.random.seed(1337)
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Merge
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.datasets import mnist
```
## Definitions
For this experiment we are using 5 networks, but a commonly used number is around 35 (with more dataset alterations than we apply here).
```
batch_size = 1024
nb_classes = 10
nb_epoch = 650
# checkpoint path
checkpoints_dir = "checkpoints"
# number of networks for ensemble learning
number_of_models = 5
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters1 = 20
nb_filters2 = 40
# size of pooling area for max pooling
pool_size1 = (2, 2)
pool_size2 = (3, 3)
# convolution kernel size
kernel_size1 = (4, 4)
kernel_size2 = (5, 5)
# dense layer size
dense_layer_size1 = 200
# dropout rate
dropout = 0.15
# activation type
activation = 'relu'
```
## Data load
```
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
u.plot_images(X_train[0:9], y_train[0:9])
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
```
## Image preprocessing
```
datagen = ImageDataGenerator(
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.1,
horizontal_flip=False)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)
```
## Model definition - Single column
This time we are going to define a helper function to initialize the model, since we're going to use it on a list of models.
```
def initialize_network(model, dropout1=dropout, dropout2=dropout):
model.add(Convolution2D(nb_filters1, kernel_size1[0], kernel_size1[1],
border_mode='valid',
input_shape=input_shape, name='convolution_1_' + str(nb_filters1) + '_filters'))
model.add(Activation(activation, name='activation_1_' + activation))
model.add(MaxPooling2D(pool_size=pool_size1, name='max_pooling_1_' + str(pool_size1) + '_pool_size'))
model.add(Convolution2D(nb_filters2, kernel_size2[0], kernel_size2[1]))
model.add(Activation(activation, name='activation_2_' + activation))
model.add(MaxPooling2D(pool_size=pool_size2, name='max_pooling_2_' + str(pool_size2) + '_pool_size'))
model.add(Dropout(dropout1))
model.add(Flatten())
model.add(Dense(dense_layer_size1, name='fully_connected_1_' + str(dense_layer_size1) + '_neurons'))
model.add(Activation(activation, name='activation_3_' + activation))
model.add(Dropout(dropout2))
model.add(Dense(nb_classes, name='output_' + str(nb_classes) + '_neurons'))
model.add(Activation('softmax', name='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall'])
# pseudo random generation of seeds
seeds = np.random.randint(10000, size=number_of_models)
# initializing all the models
models = [None] * number_of_models
for i in range(number_of_models):
models[i] = Sequential()
initialize_network(models[i])
```
## Training and evaluation - Single column
Again we are going to define helper functions to train the models, since we're going to use them on a list of models.
```
def try_load_checkpoints(model, checkpoints_filepath, warn=False):
# loading weights from checkpoints
if os.path.exists(checkpoints_filepath):
model.load_weights(checkpoints_filepath)
elif warn:
print('Warning: ' + checkpoints_filepath + ' could not be loaded')
def fit(model, checkpoints_name='test', seed=1337, initial_epoch=0,
verbose=1, window_size=(-1), plot_history=False, evaluation=True):
if window_size == (-1):
window = 1 + np.random.randint(14)
else:
window = window_size
if window >= nb_epoch:
window = nb_epoch - 1
print("Not pre-processing " + str(window) + " epoch(s)")
checkpoints_filepath = os.path.join(checkpoints_dir, '04_MNIST_weights.best_' + checkpoints_name + '.hdf5')
try_load_checkpoints(model, checkpoints_filepath, True)
# checkpoint
checkpoint = ModelCheckpoint(checkpoints_filepath, monitor='val_precision', verbose=verbose, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# fits the model on batches with real-time data augmentation, for (nb_epoch - window) epochs
history = model.fit_generator(datagen.flow(X_train, Y_train,
batch_size=batch_size,
# save_to_dir='distorted_data',
# save_format='png'
seed=seed),
samples_per_epoch=len(X_train), nb_epoch=(nb_epoch-window), verbose=0,
validation_data=(X_test, Y_test), callbacks=callbacks_list)
# ensuring best val_precision reached during training
try_load_checkpoints(model, checkpoints_filepath)
# fits the model on the clean (non-augmented) training set, for the remaining `window` epochs
history_cont = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=window,
verbose=0, validation_data=(X_test, Y_test), callbacks=callbacks_list)
# ensuring best val_precision reached during training
try_load_checkpoints(model, checkpoints_filepath)
if plot_history:
print("History: ")
u.plot_history(history)
u.plot_history(history, 'precision')
print("Continuation of training with no pre-processing:")
u.plot_history(history_cont)
u.plot_history(history_cont, 'precision')
if evaluation:
print('Evaluating model ' + checkpoints_name)
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test accuracy:', score[1]*100, '%')
print('Test error:', (1-score[2])*100, '%')
return history, history_cont
for index in range(number_of_models):
print("Training model " + str(index) + " ...")
if index == 0:
window_size = 20
plot_history = True
else:
window_size = (-1)
plot_history = False
history, history_cont = fit(models[index],
str(index),
seed=seeds[index],
initial_epoch=0,
verbose=0,
window_size=window_size,
plot_history=plot_history)
print("Done.\n\n")
```
Just from the different seeds, the error of the individual columns varies **from 0.5% to 0.42%** (the latter being our best single-column result so far). The training took 12 hours.
## Model definition - Multi column
The MCDNN is obtained by creating a new model with a single `Merge` layer that averages the outputs of the models in the given list. No further training is required since we are only averaging.
```
merged_model = Sequential()
merged_model.add(Merge(models, mode='ave'))
merged_model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall'])
```
## Evaluation - Multi column
```
print('Evaluating ensemble')
score = merged_model.evaluate([np.asarray(X_test)] * number_of_models,
Y_test,
verbose=0)
print('Test accuracy:', score[1]*100, '%')
print('Test error:', (1-score[2])*100, '%')
```
The error improved from 0.42% with the best network of the ensemble to 0.4%, which is our best result so far.
```
# The predict_classes function outputs the highest probability class
# according to the trained classifier for each input example.
predicted_classes = merged_model.predict_classes([np.asarray(X_test)] * number_of_models)
# Check which items we got right / wrong
correct_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_indices = np.nonzero(predicted_classes != y_test)[0]
u.plot_images(X_test[correct_indices[:9]], y_test[correct_indices[:9]],
predicted_classes[correct_indices[:9]])
u.plot_images(X_test[incorrect_indices[:9]], y_test[incorrect_indices[:9]],
predicted_classes[incorrect_indices[:9]])
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes)
```
## Results
Training 5 networks took 12 hours, which is of course 5 times longer than training a single one. The improvement was 0.05% in error, which is quite good considering this dataset (a human has 0.2% test error on MNIST).
To further increase precision we would need over 30 columns, trained on inputs rescaled to different widths.
# PyCaret Fugue Integration
[Fugue](https://github.com/fugue-project/fugue) is a low-code unified interface for different computing frameworks such as Spark, Dask and Pandas. PyCaret is using Fugue to support distributed computing scenarios.
## Hello World
### Classification
Let's start with the most standard example. The code is exactly the same as the local version; there is no magic.
```
from pycaret.datasets import get_data
from pycaret.classification import *
setup(data=get_data("juice"), target = 'Purchase', n_jobs=1)
test_models = models().index.tolist()[:5]
```
`compare_models` is also exactly the same if you don't want to use a distributed system.
```
compare_models(include=test_models, n_select=2)
```
Now let's make it distributed, as a toy case, on Dask. The only change is the additional `parallel` parameter, which takes a `FugueBackend`.
```
from pycaret.parallel import FugueBackend
compare_models(include=test_models, n_select=2, parallel=FugueBackend("dask"))
```
In order to use Spark as the execution engine, you must have access to a Spark cluster and a `SparkSession`. Let's initialize a local Spark session:
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
```
Now just pass this session object to the backend to make it run on Spark. Keep in mind this is a toy case: in a real situation you need a `SparkSession` pointing to a real Spark cluster to enjoy the power of Spark.
```
compare_models(include=test_models, n_select=2, parallel=FugueBackend(spark))
```
In the end, you can `pull` to get the metrics table
```
pull()
```
### Regression
It follows the same pattern as classification.
```
from pycaret.datasets import get_data
from pycaret.regression import *
setup(data=get_data("insurance"), target = 'charges', n_jobs=1)
test_models = models().index.tolist()[:5]
```
`compare_models` is also exactly the same if you don't want to use a distributed system.
```
compare_models(include=test_models, n_select=2)
```
Now let's make it distributed, as a toy case, on Dask. The only change is the additional `parallel` parameter, which takes a `FugueBackend`.
```
from pycaret.parallel import FugueBackend
compare_models(include=test_models, n_select=2, parallel=FugueBackend("dask"))
```
In order to use Spark as the execution engine, you must have access to a Spark cluster and a `SparkSession`. Let's initialize a local Spark session:
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
```
Now just pass this session object to the backend to make it run on Spark. Keep in mind this is a toy case: in a real situation you need a `SparkSession` pointing to a real Spark cluster to enjoy the power of Spark.
```
compare_models(include=test_models, n_select=2, parallel=FugueBackend(spark))
```
In the end, you can `pull` to get the metrics table
```
pull()
```
As you see, the results from the distributed versions can be different from your local versions. In the next section, we will show how to make them identical.
## A more practical case
The above examples are pure toys; to make things work well in a distributed system you must be careful about a few things.
### Use a lambda instead of a dataframe in setup
If you directly provide a dataframe in `setup`, this dataset will need to be sent to all worker nodes. If the dataframe is 1 GB and you have 100 workers, your driver machine may need to send out up to 100 GB of data (depending on the specific framework's implementation), and this data transfer becomes a bottleneck itself. Instead, if you provide a lambda function, the local compute scenario is unchanged, but the driver only sends the function reference to the workers, and each worker is responsible for loading the data itself, so there is no heavy traffic on the driver side.
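As a sketch of the difference (both forms use the same `get_data` call that appears elsewhere in this tutorial, and assume `get_data` and `setup` are already imported):
```python
# Eager dataframe: the driver materializes the data and ships it to every worker
setup(data=get_data("juice", verbose=False, profile=False),
      target="Purchase", session_id=0, n_jobs=1)

# Lambda: only the function reference is shipped; each worker loads its own copy
setup(data=lambda: get_data("juice", verbose=False, profile=False),
      target="Purchase", session_id=0, n_jobs=1)
```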
### Be deterministic
You should always set `session_id` to make the distributed compute deterministic; otherwise, for exactly the same logic you could get a drastically different model selection on each run.
### Set n_jobs
It is important to be explicit about `n_jobs` when you want to run something in a distributed way, so it does not overuse the local/remote resources. This also avoids resource contention and makes the compute faster.
```
from pycaret.classification import *
setup(data=lambda: get_data("juice", verbose=False, profile=False), target = 'Purchase', session_id=0, n_jobs=1);
```
### Set the appropriate batch_size
The `batch_size` parameter trades off load balance against overhead. For each batch, `setup` is called only once. So:
| Choice |Load Balance|Overhead|Best Scenario|
|---|---|---|---|
|Smaller batch size|Better|Worse|`training time >> data loading time` or `models ~= workers`|
|Larger batch size|Worse|Better|`training time << data loading time` or `models >> workers`|
The default value is set to `1`, meaning we want the best load balance.
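As a hedged sketch of how you might pick a larger batch when training time is small relative to data loading time (the value 4 here is purely illustrative):
```python
from pycaret.parallel import FugueBackend

# each remote task now calls setup() once and trains up to 4 models
be = FugueBackend("dask", batch_size=4)
compare_models(include=test_models, n_select=2, parallel=be)
```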
### Display progress
During development you can enable the visual output with `display_remote=True`, but you must also enable the [Fugue Callback](https://fugue-tutorials.readthedocs.io/tutorials/advanced/rpc.html) so that the driver can monitor worker progress. It is recommended to turn the display off in production.
```
fconf = {
"fugue.rpc.server": "fugue.rpc.flask.FlaskRPCServer", # keep this value
"fugue.rpc.flask_server.host": "0.0.0.0", # the driver ip address workers can access
"fugue.rpc.flask_server.port": "3333", # the open port on the driver
"fugue.rpc.flask_server.timeout": "2 sec", # the timeout for workers to talk to the driver
}
be = FugueBackend("dask", fconf, display_remote=True, batch_size=3, top_only=False)
compare_models(n_select=2, parallel=be)
```
## Notes
### Spark settings
It is highly recommended to have only 1 worker on each Spark executor, so the worker can fully utilize all CPUs (set `spark.task.cpus`). When you do this, you should also explicitly set `n_jobs` in `setup` to the number of CPUs of each executor.
```python
executor_cores = 4
spark = SparkSession.builder.config("spark.task.cpus", executor_cores).config("spark.executor.cores", executor_cores).getOrCreate()
setup(data=get_data("juice", verbose=False, profile=False), target = 'Purchase', session_id=0, n_jobs=executor_cores)
compare_models(n_select=2, parallel=FugueBackend(spark))
```
### Databricks
On Databricks, `spark` is the magic variable representing the SparkSession. Nothing else changes; you do exactly the same thing as before:
```python
compare_models(parallel=FugueBackend(spark))
```
On Databricks, however, visualization is difficult, so it may be a good idea to do two things (see the sketch after this list):
* Set `verbose` to False in `setup`
* Set `display_remote` to False in `FugueBackend`
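A minimal sketch of those two settings together, assuming the same `get_data` dataset used above (on Databricks `spark` is already defined):
```python
setup(data=lambda: get_data("juice", verbose=False, profile=False),
      target="Purchase", session_id=0, n_jobs=1, verbose=False)

compare_models(parallel=FugueBackend(spark, display_remote=False))
```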
### Dask
Dask has pseudo-distributed modes such as the default (multi-thread) and the multi-process mode. The default mode works fine (although tasks actually run sequentially), and multi-process does not work for PyCaret for now because it interferes with PyCaret's global variables. On the other hand, any Spark execution mode works fine.
### Local Parallelization
For practical use with non-trivial data and models, local parallelization (the easiest way is to use local Dask as the backend, as shown above) normally has no performance advantage, because it is very easy to overload the CPUs during training and increase resource contention. The value of local parallelization is to verify the code and gain confidence that the distributed environment will provide the expected result in much less time.
### How to develop
Distributed systems are powerful but you must follow some good practices to use them:
1. **From small to large:** start with a small set of data; for example, in `compare_models` limit the models you try to a small number of cheap ones, and once you verify they work, switch to a larger model collection.
2. **From local to distributed:** follow this sequence: verify small data locally, then verify small data distributed, and then verify large data distributed. The current design makes the transition seamless: `parallel=None` -> `parallel=FugueBackend()` -> `parallel=FugueBackend(spark)`. In the second step, you can also use a local SparkSession or local Dask.
We use embeddings to represent text in numerical form, either as a one-hot encoding (a sparse vector) or as a fixed-size dense representation (a dense vector).
Every word gets its meaning from the words it is surrounded by, so when we train our embeddings we want words with similar meanings, or words used in similar contexts, to end up close together.
For example:
1. Words like aeroplane, chopper, helicopter and drone should be very close to each other because they share the same feature: they are flying objects.
2. Words like man and woman should be related to each other along a consistent (gender) direction.
3. In sentences like "Coders are boring people." and "Programmers are boring.", the words `coders` and `programmers` are used in similar contexts, so they should be close to each other.
Word embeddings are nothing but vectors in a vector space, and with some simple vector calculations we can easily:
1. Find synonyms or similar words
2. Find analogies
3. Build a spell checker (if trained on a large corpus)
4. Do pretty much anything else you can do with vectors.
```
import torchtext
import numpy as np
import torch
glove = torchtext.vocab.GloVe(name = '6B', dim = 100)
print(f'There are {len(glove.itos)} words in the vocabulary')
glove.itos[:10]
glove.stoi["cat"]
def get_embedding(word):
return glove.vectors[glove.stoi[word]]
get_embedding("cat")
```
# Similar Context
To find words similar to an input word, we take the vector representations of all words, compute the Euclidean distance between the input word and every other word, and choose the n closest words by sorting the distances in ascending order.
```
def get_closest_word(word,n=10):
input_vector = get_embedding(word).numpy() if isinstance(word,str) else word.numpy()
distance = np.linalg.norm(input_vector-glove.vectors.numpy(),axis=1)
sort_dis = np.argsort(distance)[:n]
return list(zip(np.array(glove.itos)[sort_dis] , distance[sort_dis]))
get_closest_word("sad",n=10)
def get_similarity_angle(word1,word2):
word1 = get_embedding(word1).view(1,-1)
word2 = get_embedding(word2).view(1,-1)
simi = torch.nn.CosineSimilarity(dim=1)(word1,word2).numpy()
return simi,np.rad2deg(np.arccos(simi))
get_similarity_angle("sad","awful")
```
# Analogies
```
def analogy( word1, word2, word3, n=5):
#get vectors for each word
word1_vector = get_embedding(word1)
word2_vector = get_embedding(word2)
word3_vector = get_embedding(word3)
#calculate analogy vector
analogy_vector = word2_vector - word1_vector + word3_vector
# #find closest words to analogy vector
candidate_words = get_closest_word( analogy_vector, n=n+3)
#filter out words already in analogy
candidate_words = [(word, dist) for (word, dist) in candidate_words
if word not in [word1, word2, word3]][:n]
print(f'{word1} is to {word2} as {word3} is to...')
return candidate_words
analogy('man', 'king', 'woman')
```
This is the canonical example which shows off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'?
If we think about it, the vector calculated from 'king' minus 'man' gives us a "royalty vector". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this "royalty vector" to 'woman', it should take us to her royal equivalent, which is a queen!
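Written out directly with the helpers defined above (a quick usage sketch, not a new function):
```python
# build the "royalty vector" and add it to 'woman'
royalty_vector = get_embedding('king') - get_embedding('man')
get_closest_word(get_embedding('woman') + royalty_vector, n=5)
```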
```
analogy('india', 'delhi', 'australia')
get_closest_word("reliable")
```
# Case Studies
1. https://forums.fast.ai/t/nlp-any-libraries-dictionaries-out-there-for-fixing-common-spelling-errors/16411
2. Multilingual and cross-lingual analysis: If you work on works in translation, or on the influence of writers who write in one language on those who write in another, word vectors can provide valuable ways to study these kinds of cross-lingual relationships algorithmically.
[Case Study: Using word vectors to study endangered languages](https://raw.githubusercontent.com/YaleDHLab/lab-workshops/master/word-vectors/papers/coeckelbergs.pdf)
3. Studying Language Change over Time: If you want to study the way the meaning of a word has changed over time, word vectors provide an exceptional method for this kind of study.
[Case Study: Using word vectors to analyze the changing meaning of the word "gay" in the twentieth century.](https://nlp.stanford.edu/projects/histwords/)
4. Analyzing Historical Concept Formation: If you want to analyze the ways writers in a given historical period understood particular concepts like "honor" and "chivalry", then word vectors can provide excellent opportunities to uncover these hidden associations.
[Case Study: Using word vectors to study the ways eighteenth-century authors organized moral abstractions](https://raw.githubusercontent.com/YaleDHLab/lab-workshops/master/word-vectors/papers/heuser.pdf)
5. Uncovering Text Reuse: If you want to study text reuse or literary imitation (either within one language or across multiple languages), word vectors can provide excellent tools for identifying similar passages of text.
[Case Study: Using word vectors to uncover cross-lingual text reuse in eighteenth-century writing](https://douglasduhaime.com/posts/crosslingual-plagiarism-detection.html)
# Document Embedding with Amazon SageMaker Object2Vec
1. [Introduction](#Introduction)
2. [Background](#Background)
1. [Embedding documents using Object2Vec](#Embedding-documents-using-Object2Vec)
3. [Download and preprocess Wikipedia data](#Download-and-preprocess-Wikipedia-data)
1. [Install and load dependencies](#Install-and-load-dependencies)
2. [Build vocabulary and tokenize datasets](#Build-vocabulary-and-tokenize-datasets)
3. [Upload preprocessed data to S3](#Upload-preprocessed-data-to-S3)
4. [Define SageMaker session, Object2Vec image, S3 input and output paths](#Define-SageMaker-session,-Object2Vec-image,-S3-input-and-output-paths)
5. [Train and deploy doc2vec](#Train-and-deploy-doc2vec)
1. [Learning performance boost with new features](#Learning-performance-boost-with-new-features)
2. [Training speedup with sparse gradient update](#Training-speedup-with-sparse-gradient-update)
6. [Apply learned embeddings to document retrieval task](#Apply-learned-embeddings-to-document-retrieval-task)
1. [Comparison with the StarSpace algorithm](#Comparison-with-the-StarSpace-algorithm)
## Introduction
In this notebook, we introduce four new features to Object2Vec, a general-purpose neural embedding algorithm: negative sampling, sparse gradient update, weight-sharing, and comparator operator customization. The new features together broaden the applicability of Object2Vec, improve its training speed and accuracy, and provide users with greater flexibility. See [Introduction to the Amazon SageMaker Object2Vec](https://aws.amazon.com/blogs/machine-learning/introduction-to-amazon-sagemaker-object2vec/) if you aren’t already familiar with Object2Vec.
We demonstrate how these new features extend the applicability of Object2Vec to a new document embedding use-case: a customer has a large collection of documents. Instead of storing these documents in their raw format or as sparse bag-of-words vectors, and to achieve training efficiency in the various downstream tasks, she would like to embed all documents in a common low-dimensional space, so that the semantic distances between these documents are preserved.
## Background
Object2Vec is a highly customizable multi-purpose algorithm that can learn embeddings of pairs of objects. The embeddings are learned such that they preserve the objects' pairwise similarities in the original space.
- Similarity is user-defined: users need to provide the algorithm with pairs of objects that they define as similar (1) or dissimilar (0); alternatively, the users can define similarity in a continuous sense (provide a real-valued similarity score).
- The learned embeddings can be used to efficiently compute nearest neighbors of objects, as well as to visualize natural clusters of related objects in the embedding space. In addition, the embeddings can also be used as features of the corresponding objects in downstream supervised tasks such as classification or regression.
### Embedding documents using Object2Vec
We demonstrate how, with the new features, Object2Vec can be used to embed a large collection of documents into vectors in the same latent space.
Similar to the widely used Word2Vec algorithm for word embedding, a natural approach to document embedding is to preprocess documents as (sentence, context) pairs, where the sentence and its matching context come from the same document. The matching context is the entire document with the given sentence removed. The idea is to embed both sentence and context into a low dimensional space such that their mutual similarity is maximized, since they belong to the same document and therefore should be semantically related. The learned encoder for the context can then be used to encode new documents into the same embedding space. In order to train the encoders for sentences and documents, we also need negative (sentence, context) pairs so that the model can learn to discriminate between semantically similar and dissimilar pairs. It is easy to generate such negatives by pairing sentences with documents that they do not belong to. Since there are many more negative pairs than positives in naturally occurring data, we typically resort to random sampling techniques to achieve a balance between positive and negative pairs in the training data. The figure below shows pictorially how the positive pairs and negative pairs are generated from unlabeled data for the purpose of learning embeddings for documents (and sentences).
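As a rough sketch of how such pairs can be constructed, using the same `{'in0', 'in1', 'label'}` JSON-lines format that the training data below uses (the helper name here is illustrative, and in this notebook the negatives are actually produced by the algorithm's negative sampling rather than by hand):
```python
import json
import random

def sentence_context_pairs(doc_sentences, unrelated_doc_sentences):
    """doc_sentences: tokenized sentences (lists of int token ids) from one document."""
    sentences = list(doc_sentences)
    center = sentences.pop(random.randrange(len(sentences)))
    context = [tok for sent in sentences for tok in sent]  # document minus the sentence
    positive = {'in0': center, 'in1': context, 'label': 1}
    # negative: pair the same sentence with the context of an unrelated document
    negative_context = [tok for sent in unrelated_doc_sentences for tok in sent]
    negative = {'in0': center, 'in1': negative_context, 'label': 0}
    return json.dumps(positive), json.dumps(negative)
```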
We show how Object2Vec with the new *negative sampling feature* can be applied to the document embedding use-case. In addition, we show how the other new features, namely, *weight-sharing*, *customization of comparator operator*, and *sparse gradient update*, together enhance the algorithm's performance and user-experience in and beyond this use-case. Sections [Learning performance boost with new features](#Learning-performance-boost-with-new-features) and [Training speedup with sparse gradient update](#Training-speedup-with-sparse-gradient-update) in this notebook provide a detailed introduction to the new features.
## Download and preprocess Wikipedia data
Please be aware of the following requirements about the acknowledgment, copyright and availability, cited from the [data source description page](https://github.com/facebookresearch/StarSpace/blob/master/LICENSE.md).
> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
```
%%bash
DATANAME="wikipedia"
DATADIR="/tmp/wiki"
mkdir -p "${DATADIR}"
if [ ! -f "${DATADIR}/${DATANAME}_train250k.txt" ]
then
echo "Downloading wikipedia data"
wget --quiet -c "https://s3-ap-northeast-1.amazonaws.com/dev.tech-sketch.jp/chakki/public/ja.wikipedia_250k.zip" -O "${DATADIR}/${DATANAME}_train.zip"
unzip "${DATADIR}/${DATANAME}_train.zip" -d "${DATADIR}"
fi
datadir = '/tmp/wiki'
!ls /tmp/wiki
```
### Install and load dependencies
```
!pip install keras tensorflow
import json
import os
import random
from itertools import chain
from keras.preprocessing.text import Tokenizer
from sklearn.preprocessing import normalize
## sagemaker api
import sagemaker, boto3
from sagemaker.session import s3_input
from sagemaker.predictor import json_serializer, json_deserializer
```
### Build vocabulary and tokenize datasets
```
def load_articles(filepath):
with open(filepath) as f:
for line in f:
yield map(str.split, line.strip().split('\t'))
def split_sents(article):
return [sent.split(' ') for sent in article.split('\t')]
def build_vocab(sents):
print('Build start...')
tok = Tokenizer(oov_token='<UNK>', filters='')
tok.fit_on_texts(sents)
print('Build end...')
return tok
def generate_positive_pairs_from_single_article(sents, tokenizer):
sents = list(sents)
idx = random.randrange(0, len(sents))
center = sents.pop(idx)
wrapper_tokens = tokenizer.texts_to_sequences(sents)
sent_tokens = tokenizer.texts_to_sequences([center])
wrapper_tokens = list(chain(*wrapper_tokens))
sent_tokens = list(chain(*sent_tokens))
yield {'in0': sent_tokens, 'in1': wrapper_tokens, 'label': 1}
def generate_positive_pairs_from_single_file(sents_per_article, tokenizer):
iter_list = [generate_positive_pairs_from_single_article(sents, tokenizer)
for sents in sents_per_article
]
return chain.from_iterable(iter_list)
filepath = os.path.join(datadir, 'ja.wikipedia_250k.txt')
sents_per_article = load_articles(filepath)
sents = chain(*sents_per_article)
tokenizer = build_vocab(sents)
# save
datadir = '.'
train_prefix = 'train250k'
fname = "wikipedia_{}.txt".format(train_prefix)
outfname = os.path.join(datadir, '{}_tokenized.jsonl'.format(train_prefix))
with open(outfname, 'w') as f:
sents_per_article = load_articles(filepath)
for sample in generate_positive_pairs_from_single_file(sents_per_article, tokenizer):
f.write('{}\n'.format(json.dumps(sample)))
# Shuffle training data
!shuf {outfname} > {train_prefix}_tokenized_shuf.jsonl
```
### Upload preprocessed data to S3
```
TRAIN_DATA="train250k_tokenized_shuf.jsonl"
# NOTE: define your s3 bucket and key here
S3_BUCKET = 'YOUR_BUCKET'
S3_KEY = 'object2vec-doc2vec'
%%bash -s "$TRAIN_DATA" "$S3_BUCKET" "$S3_KEY"
aws s3 cp "$1" s3://$2/$3/input/train/
```
## Define SageMaker session, Object2Vec image, S3 input and output paths
```
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
region = boto3.Session().region_name
print("Your notebook is running on region '{}'".format(region))
sess = sagemaker.Session()
role = get_execution_role()
print("Your IAM role: '{}'".format(role))
container = get_image_uri(region, 'object2vec')
print("The image uri used is '{}'".format(container))
print("Using s3 buceket: {} and key prefix: {}".format(S3_BUCKET, S3_KEY))
## define input channels
s3_input_path = os.path.join('s3://', S3_BUCKET, S3_KEY, 'input')
s3_train = s3_input(os.path.join(s3_input_path, 'train', TRAIN_DATA),
distribution='ShardedByS3Key', content_type='application/jsonlines')
## define output path
output_path = os.path.join('s3://', S3_BUCKET, S3_KEY, 'models')
```
## Train and deploy doc2vec
We combine four new features into our training of Object2Vec:
- Negative sampling: With the new `negative_sampling_rate` hyperparameter, users of Object2Vec only need to provide positively labeled data pairs, and the algorithm automatically samples for negative data internally during training.
- Weight-sharing of embedding layer: The new `tied_token_embedding_weight` hyperparameter gives users the flexibility to share the embedding weights between both encoders, which improves the performance of the algorithm in this use-case.
- Customization of comparator operator: The new `comparator_list` hyperparameter gives users the flexibility to mix-and-match different operators so that they can tune the algorithm towards optimal performance for their applications.
- Sparse gradient update: Setting `token_embedding_storage_type` to `row_sparse` (see the hyperparameters below) speeds up training; see [Training speedup with sparse gradient update](#Training-speedup-with-sparse-gradient-update).
```
# Define training hyperparameters
hyperparameters = {
"_kvstore": "device",
"_num_gpus": 'auto',
"_num_kv_servers": "auto",
"bucket_width": 0,
"dropout": 0.4,
"early_stopping_patience": 2,
"early_stopping_tolerance": 0.01,
"enc0_layers": "auto",
"enc0_max_seq_len": 50,
"enc0_network": "pooled_embedding",
"enc0_pretrained_embedding_file": "",
"enc0_token_embedding_dim": 300,
"enc0_vocab_size": len(tokenizer.word_index) + 1,
"enc1_network": "enc0",
"enc_dim": 300,
"epochs": 20,
"learning_rate": 0.01,
"mini_batch_size": 512,
"mlp_activation": "relu",
"mlp_dim": 512,
"mlp_layers": 2,
"num_classes": 2,
"optimizer": "adam",
"output_layer": "softmax",
"weight_decay": 0
}
hyperparameters['negative_sampling_rate'] = 3
hyperparameters['tied_token_embedding_weight'] = "true"
hyperparameters['comparator_list'] = "hadamard"
hyperparameters['token_embedding_storage_type'] = 'row_sparse'
# get estimator
doc2vec = sagemaker.estimator.Estimator(container,
role,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
output_path=output_path,
sagemaker_session=sess)
# set hyperparameters
doc2vec.set_hyperparameters(**hyperparameters)
# fit estimator with data
doc2vec.fit({'train': s3_train})
#doc2vec.fit({'train': s3_train, 'validation':s3_valid, 'test':s3_test})
# deploy model
doc2vec_model = doc2vec.create_model(
serializer=json_serializer,
deserializer=json_deserializer,
content_type='application/json')
predictor = doc2vec_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
sent = '今日 の 昼食 は うどん だっ た'
sent_tokens = tokenizer.texts_to_sequences([sent])
payload = {'instances': [{'in0': sent_tokens[0]}]}
result = predictor.predict(payload)
print(result)
predictor.delete_endpoint()
```
```
import tensorflow as tf
from matplotlib import pylab
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
# Required for data download and preparation
import struct
import gzip
import os
from six.moves.urllib.request import urlretrieve
```
## Defining Hyperparameters
Here we define the set of hyperparameters we're going to use in our example. These include `batch_size`, the train dataset size (`n_train`) and the different layers in our CNN (`cnn_layer_ids`). You can find a description of each hyperparameter in the comments.
```
batch_size = 100 # This is the typical batch size we've been using
image_size = 28 # This is the width/height of a single image
# Number of color channels in an image. These are black and white images
n_channels = 1
# Number of different digits we have images for (i.e. classes)
n_classes = 10
n_train = 55000 # Train dataset size
n_valid = 5000 # Validation dataset size
n_test = 10000 # Test dataset size
# Layers in the CNN in the order from input to output
cnn_layer_ids = ['conv1','pool1','conv2','pool2','fulcon1','softmax']
# Hyperparameters of each layer (e.g. filter size of each convolution layer)
layer_hyperparameters = {'conv1':{'weight_shape':[3,3,n_channels,16],'stride':[1,1,1,1],'padding':'SAME'},
'pool1':{'kernel_shape':[1,3,3,1],'stride':[1,2,2,1],'padding':'SAME'},
'conv2':{'weight_shape':[3,3,16,32],'stride':[1,1,1,1],'padding':'SAME'},
'pool2':{'kernel_shape':[1,3,3,1],'stride':[1,2,2,1],'padding':'SAME'},
'fulcon1':{'weight_shape':[7*7*32,128]},
'softmax':{'weight_shape':[128,n_classes]}
}
```
## Defining Inputs and Outputs
Here we define the input and output placeholders required to process a batch of data. We will use the same placeholders for the training, validation and testing data, as all of them are processed in batches of the same size.
```
# Inputs (Images) and Outputs (Labels) Placeholders
tf_inputs = tf.placeholder(shape=[batch_size, image_size, image_size, n_channels],dtype=tf.float32,name='tf_mnist_images')
tf_labels = tf.placeholder(shape=[batch_size, n_classes],dtype=tf.float32,name='tf_mnist_labels')
```
## Defining Model Parameters and Other Variables
Here we define various TensorFlow variables required for the following computations. These includes a global step variable (to decay learning rate) and weights and biases of each layer of the CNN.
```
# Global step for decaying the learning rate
global_step = tf.Variable(0,trainable=False)
# Initializing the variables
layer_weights = {}
layer_biases = {}
for layer_id in cnn_layer_ids:
if 'pool' not in layer_id:
layer_weights[layer_id] = tf.Variable(initial_value=tf.random_normal(shape=layer_hyperparameters[layer_id]['weight_shape'],
stddev=0.02,dtype=tf.float32),name=layer_id+'_weights')
layer_biases[layer_id] = tf.Variable(initial_value=tf.random_normal(shape=[layer_hyperparameters[layer_id]['weight_shape'][-1]],
stddev=0.01,dtype=tf.float32),name=layer_id+'_bias')
print('Variables initialized')
```
## Defining Inference of the CNN
Here we define the computations starting from the input placeholder (`tf_inputs`), computing the hidden activations for each of the layers found in `cnn_layer_ids` (i.e. convolution/pooling and fully connected layers) using their respective parameters (`layer_hyperparameters`). At the final layer (`softmax`), unlike the other layers, we do not apply an activation function; we obtain the unnormalized logit values directly.
```
# Calculating Logits
h = tf_inputs
for layer_id in cnn_layer_ids:
if 'conv' in layer_id:
# For each convolution layer, compute the output by using conv2d function
# This operation results in a [batch_size, output_height, output_width, out_channels]
# sized 4 dimensional tensor
h = tf.nn.conv2d(h,layer_weights[layer_id],layer_hyperparameters[layer_id]['stride'],
layer_hyperparameters[layer_id]['padding']) + layer_biases[layer_id]
h = tf.nn.relu(h)
elif 'pool' in layer_id:
# For each pooling layer, compute the output by max pooling
# This operation results in a [batch_size, output_height, output_width, out_channels]
# sized 4 dimensional tensor
h = tf.nn.max_pool(h, layer_hyperparameters[layer_id]['kernel_shape'],layer_hyperparameters[layer_id]['stride'],
layer_hyperparameters[layer_id]['padding'])
elif layer_id == 'fulcon1':
# At the first fulcon layer we need to reshape the 4 dimensional output to a
# 2 dimensional output to be processed by fully connected layers
# Note this should only done once, before
# computing the output of the first fulcon layer
h = tf.reshape(h,[batch_size,-1])
h = tf.matmul(h,layer_weights[layer_id]) + layer_biases[layer_id]
h = tf.nn.relu(h)
elif layer_id == 'softmax':
# Note that here we do not perform the same reshaping we did for fulcon1
# We only perform the matrix multiplication on previous output
h = tf.matmul(h,layer_weights[layer_id]) + layer_biases[layer_id]
print('Calculated logits')
tf_logits = h
```
## Defining Loss
We use softmax cross entropy loss to optimize the parameters of the model.
```
# Calculating the softmax cross entropy loss with the computed logits and true labels (one hot encoded)
tf_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=tf_logits, labels=tf_labels))
print('Loss defined')
```
## Model Parameter Optimizer
We define an exponentially decaying learning rate and an optimizer to optimize the parameters.
```
# Optimization
# Here we define the function to decay the learning rate exponentially.
# Everytime the global step increases the learning rate decreases
tf_learning_rate = tf.train.exponential_decay(learning_rate=0.001,global_step=global_step,decay_rate=0.5,decay_steps=1,staircase=True)
tf_loss_minimize = tf.train.RMSPropOptimizer(learning_rate=tf_learning_rate, momentum=0.9).minimize(tf_loss)
print('Loss minimization defined')
```
## Defining Predictions
We get the predictions out by applying a softmax activation to the logits. Additionally, we define a global step increment operation (`tf_tic_toc`), which will be run every time the validation accuracy plateaus.
```
tf_predictions = tf.nn.softmax(tf_logits)
print('Prediction defined')
tf_tic_toc = tf.assign(global_step, global_step + 1)
```
## Define Accuracy
A simple function to calculate accuracy for a given set of labels and predictions.
```
def accuracy(predictions,labels):
'''
Accuracy of a given set of predictions of size (N x n_classes) and
labels of size (N x n_classes)
'''
return np.sum(np.argmax(predictions,axis=1)==np.argmax(labels,axis=1))*100.0/labels.shape[0]
```
## Loading Data
Here we download (if needed) the MNIST dataset and perform reshaping and normalization. We also convert the labels to one-hot encoded vectors.
```
def maybe_download(url, filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
def read_mnist(fname_img, fname_lbl, one_hot=False):
print('\nReading files %s and %s'%(fname_img, fname_lbl))
# Processing images
with gzip.open(fname_img) as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
print(num,rows,cols)
img = (np.frombuffer(fimg.read(num*rows*cols), dtype=np.uint8).reshape(num, rows, cols,1)).astype(np.float32)
print('(Images) Returned a tensor of shape ',img.shape)
#img = (img - np.mean(img)) /np.std(img)
img *= 1.0 / 255.0
# Processing labels
with gzip.open(fname_lbl) as flbl:
# flbl.read(8) reads upto 8 bytes
magic, num = struct.unpack(">II", flbl.read(8))
lbl = np.frombuffer(flbl.read(num), dtype=np.int8)
if one_hot:
one_hot_lbl = np.zeros(shape=(num,10),dtype=np.float32)
one_hot_lbl[np.arange(num),lbl] = 1.0
print('(Labels) Returned a tensor of shape: %s'%lbl.shape)
print('Sample labels: ',lbl[:10])
if not one_hot:
return img, lbl
else:
return img, one_hot_lbl
# Download data if needed
url = 'http://yann.lecun.com/exdb/mnist/'
# training data
maybe_download(url,'train-images-idx3-ubyte.gz',9912422)
maybe_download(url,'train-labels-idx1-ubyte.gz',28881)
# testing data
maybe_download(url,'t10k-images-idx3-ubyte.gz',1648877)
maybe_download(url,'t10k-labels-idx1-ubyte.gz',4542)
# Read the training and testing data
train_inputs, train_labels = read_mnist('train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',True)
test_inputs, test_labels = read_mnist('t10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz',True)
valid_inputs, valid_labels = train_inputs[-n_valid:,:,:,:], train_labels[-n_valid:,:]
train_inputs, train_labels = train_inputs[:-n_valid,:,:,:], train_labels[:-n_valid,:]
print('\nTrain size: ', train_inputs.shape[0])
print('\nValid size: ', valid_inputs.shape[0])
print('\nTest size: ', test_inputs.shape[0])
```
## Data Generators for MNIST
Here we have the logic to iterate through the training, validation and testing datasets in strides of `batch_size`.
```
train_index, valid_index, test_index = 0,0,0
def get_train_batch(images, labels, batch_size):
global train_index
batch = images[train_index:train_index+batch_size,:,:,:], labels[train_index:train_index+batch_size,:]
train_index = (train_index + batch_size)%(images.shape[0] - batch_size)
return batch
def get_valid_batch(images, labels, batch_size):
global valid_index
batch = images[valid_index:valid_index+batch_size,:,:,:], labels[valid_index:valid_index+batch_size,:]
valid_index = (valid_index + batch_size)%(images.shape[0] - batch_size)
return batch
def get_test_batch(images, labels, batch_size):
global test_index
batch = images[test_index:test_index+batch_size,:,:,:], labels[test_index:test_index+batch_size,:]
test_index = (test_index + batch_size)%(images.shape[0] - batch_size)
return batch
```
## Visualizing MNIST Results
Here we define a function to collect correctly and incorrectly classified samples to visualize later. Visualizing such samples will help us to understand why the CNN incorrectly classified certain samples.
```
# Makes sure we only collect 10 samples for each
correct_fill_index, incorrect_fill_index = 0,0
# Visualization purposes
correctly_predicted = np.empty(shape=(10,28,28,1),dtype=np.float32)
correct_predictions = np.empty(shape=(10,n_classes),dtype=np.float32)
incorrectly_predicted = np.empty(shape=(10,28,28,1),dtype=np.float32)
incorrect_predictions = np.empty(shape=(10,n_classes),dtype=np.float32)
def collect_samples(test_batch_predictions,test_images, test_labels):
global correctly_predicted, correct_predictions
global incorrectly_predicted, incorrect_predictions
global correct_fill_index, incorrect_fill_index
correct_indices = np.where(np.argmax(test_batch_predictions,axis=1)==np.argmax(test_labels,axis=1))[0]
incorrect_indices = np.where(np.argmax(test_batch_predictions,axis=1)!=np.argmax(test_labels,axis=1))[0]
if correct_indices.size>0 and correct_fill_index<10:
print('\nCollecting Correctly Predicted Samples')
chosen_index = np.random.choice(correct_indices)
correctly_predicted[correct_fill_index,:,:,:]=test_images[chosen_index,:].reshape(1,image_size,image_size,n_channels)
correct_predictions[correct_fill_index,:]=test_batch_predictions[chosen_index,:]
correct_fill_index += 1
if incorrect_indices.size>0 and incorrect_fill_index<10:
print('Collecting InCorrectly Predicted Samples')
chosen_index = np.random.choice(incorrect_indices)
incorrectly_predicted[incorrect_fill_index,:,:,:]=test_images[chosen_index,:].reshape(1,image_size,image_size,n_channels)
incorrect_predictions[incorrect_fill_index,:]=test_batch_predictions[chosen_index,:]
incorrect_fill_index += 1
```
## Running MNIST Classification
Here we train our CNN on MNIST data for `n_epochs` epochs. In each epoch we train the CNN on the full training dataset, then calculate the validation accuracy, according to which we decay the learning rate. Finally, in each epoch we calculate the test accuracy on an independent test set. This code should run in under 10 minutes on a decent GPU and should reach a test accuracy of about 95%.
```
# Parameters related to learning rate decay
# counts for how many consecutive checks the validation accuracy has not increased
v_acc_not_increased_for = 0
# if the above count is above this value, decrease the learning rate
v_acc_threshold = 3
# currently recorded best validation accuracy
max_v_acc = 0.0
config = tf.ConfigProto(allow_soft_placement=True)
# Good practice to use this to avoid any surprising errors thrown by TensorFlow
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.9 # Making sure Tensorflow doesn't overflow the GPU
n_epochs = 25 # Number of epochs the training runs for
session = tf.InteractiveSession(config=config)
# Initialize all variables
tf.global_variables_initializer().run()
# Run training loop
for epoch in range(n_epochs):
loss_per_epoch = []
# Training phase. We train with all training data
# processing one batch at a time
for i in range(n_train//batch_size):
# Get the next batch of MNIST dataset
batch = get_train_batch(train_inputs, train_labels, batch_size)
# Run TensorFlow operations
l,_ = session.run([tf_loss,tf_loss_minimize],feed_dict={tf_inputs: batch[0].reshape(batch_size,image_size,image_size,n_channels),
tf_labels: batch[1]})
# Add the loss value to a list
loss_per_epoch.append(l)
print('Average loss in epoch %d: %.5f'%(epoch,np.mean(loss_per_epoch)))
# Validation phase. We compute validation accuracy
# processing one batch at a time
valid_accuracy_per_epoch = []
for i in range(n_valid//batch_size):
# Get the next validation data batch
vbatch_images,vbatch_labels = get_valid_batch(valid_inputs, valid_labels, batch_size)
# Compute validation predictions
valid_batch_predictions = session.run(
tf_predictions,feed_dict={tf_inputs: vbatch_images}
)
# Compute and add the validation accuracy to a python list
valid_accuracy_per_epoch.append(accuracy(valid_batch_predictions,vbatch_labels))
# Compute and print average validation accuracy
mean_v_acc = np.mean(valid_accuracy_per_epoch)
print('\tAverage Valid Accuracy in epoch %d: %.5f'%(epoch,np.mean(valid_accuracy_per_epoch)))
# Learning rate decay logic
if mean_v_acc > max_v_acc:
max_v_acc = mean_v_acc
else:
v_acc_not_increased_for += 1
# Time to decrease learning rate
if v_acc_not_increased_for >= v_acc_threshold:
print('\nDecreasing Learning rate\n')
session.run(tf_tic_toc) # Increase global_step
v_acc_not_increased_for = 0
# Testing phase. We compute test accuracy
# processing one batch at a time
accuracy_per_epoch = []
for i in range(n_test//batch_size):
btest_images, btest_labels = get_test_batch(test_inputs, test_labels, batch_size)
test_batch_predictions = session.run(tf_predictions,feed_dict={tf_inputs: btest_images})
accuracy_per_epoch.append(accuracy(test_batch_predictions,btest_labels))
# Collect samples for visualization only in the last epoch
if epoch==n_epochs-1:
collect_samples(test_batch_predictions, btest_images, btest_labels)
print('\tAverage Test Accuracy in epoch %d: %.5f\n'%(epoch,np.mean(accuracy_per_epoch)))
session.close()
```
## Visualizing Predictions
Let us see how our CNN did when it comes to predictions.
```
# Defining the plot related settings
pylab.figure(figsize=(25,20)) # in inches
width=0.5 # Width of a bar in the barchart
padding = 0.05 # Padding between two bars
labels = list(range(0,10)) # Class labels
# Defining X axis
x_axis = np.arange(0,10)
# We create 4 rows and 7 column set of subplots
# We choose these to put the titles in
# First row middle
pylab.subplot(4, 7, 4)
pylab.title('Correctly Classified Samples',fontsize=24)
# Second row middle
pylab.subplot(4, 7,11)
pylab.title('Softmax Predictions for Correctly Classified Samples',fontsize=24)
# For 7 steps
for sub_i in range(7):
# Draw the top row (digit images)
pylab.subplot(4, 7, sub_i + 1)
pylab.imshow(np.squeeze(correctly_predicted[sub_i]),cmap='gray')
pylab.axis('off')
# Draw the second row (prediction bar chart)
pylab.subplot(4, 7, 7 + sub_i + 1)
pylab.bar(x_axis + padding, correct_predictions[sub_i], width)
pylab.ylim([0.0,1.0])
pylab.xticks(x_axis, labels)
# Set titles for the third and fourth rows
pylab.subplot(4, 7, 18)
pylab.title('Incorrectly Classified Samples',fontsize=26)
pylab.subplot(4, 7,25)
pylab.title('Softmax Predictions for Incorrectly Classified Samples',fontsize=24)
# For 7 steps
for sub_i in range(7):
# Draw the third row (incorrectly classified digit images)
pylab.subplot(4, 7, 14 + sub_i + 1)
pylab.imshow(np.squeeze(incorrectly_predicted[sub_i]),cmap='gray')
pylab.axis('off')
# Draw the fourth row (incorrect predictions bar chart)
pylab.subplot(4, 7, 21 + sub_i + 1)
pylab.bar(x_axis + padding, incorrect_predictions[sub_i], width)
pylab.ylim([0.0,1.0])
pylab.xticks(x_axis, labels)
# Save the figure
pylab.savefig('mnist_results.png')
pylab.show()
```
<a href="https://colab.research.google.com/github/mancunian1792/causal_scene_generation/blob/master/causal_model/game_characters/GameCharacter_ImageClassification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount("/content/gdrive", force_remount=True)
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import to_categorical
from keras.preprocessing import image
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from tqdm import tqdm
from skimage.transform import rotate
from skimage.util import random_noise
from skimage.filters import gaussian
root_path = 'gdrive/My Drive/causal_scene_generation/game_characters/'
train_path = root_path + 'train/'
test_path = root_path + 'test/'
train_images = train_path + 'images/'
test_images = test_path + 'images/'
train_csv = train_path + 'train.csv'
test_csv = test_path + 'test.csv'
def preprocess(imgPath, filePath):
images = []
# Transform each image in the imgPath and add it to the input array
data = pd.read_csv(filePath)
for imgFile in tqdm(data["filename"]):
imgFullPath = imgPath + imgFile + ".png"
img = image.load_img(imgFullPath, target_size=(400,400,3), grayscale=False)
img = image.img_to_array(img)
img = img/255
images.append(img)
features = np.array(images)
# Get the labels for each
target = data.drop(["filename"], axis=1)
return features, target
def augmentData(features, target):
augmented_features = []
augmented_target = []
for idx in tqdm(range(features.shape[0])):
augmented_features.append(features[idx])
augmented_features.append(rotate(features[idx], angle=45, mode = 'wrap'))
augmented_features.append(np.fliplr(features[idx]))
augmented_features.append(np.flipud(features[idx]))
augmented_features.append(random_noise(features[idx],var=0.2**2))
for i in range(5):
augmented_target.append(target.iloc[idx, :])
return np.asarray(augmented_features), pd.DataFrame(augmented_target, columns= target.columns)
x_train, y_train = preprocess(train_images, train_csv)
x_train_augment, y_train_augment = augmentData(x_train, y_train)
del x_train, y_train
x_test, y_test = preprocess(test_images, test_csv)
x_test, x_validate, y_test, y_validate = train_test_split(x_test, y_test, random_state = 3000, test_size = 0.2)
plt.imshow(x_validate[2])
# Images were loaded at 400x400x3 and scaled to [0, 1] in preprocess() above
# The output shape
op_shape = y_train_augment.shape[1]
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(5, 5), activation="relu", input_shape=(400,400,3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=32, kernel_size=(10, 10), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(10, 10), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(op_shape, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train_augment, y_train_augment, epochs=10, validation_data=(x_test, y_test), batch_size=64)
model.save(root_path+"model-both-images.hdf5")
prediction = model.predict(x_validate)
prediction[0]
del x_train_augment, y_train_augment, x_test, y_test
```
### Attempt 2 - Image Classification
This time, I am splitting the images and modifying the labels. The image classifier will try to predict the entity (actor/reactor), character (satyr/golem), type (1/2/3), entity_doing (action/reaction) and entity_doing_type (Idle/Attacking/Hurt/Die/Walking/Taunt).
```
# Modify the labels (Do - encoding)
splits_path = root_path + 'splits/'
splits_images = splits_path + 'images/'
splits_dataset = splits_path + 'split_dataset.csv'
df = pd.read_csv(splits_dataset)
df["type"] = df.type.str.extract('(\d+)')
images = df["img_name"]
target = df.drop(["img_name"], axis=1)
target = pd.get_dummies(target)
def processSplitImages(imgPath, filenames):
images_data = []
for img in tqdm(filenames):
imgFullPath = imgPath + img + ".png"
img = image.load_img(imgFullPath, target_size=(400,400,3), grayscale=False)
img = image.img_to_array(img)
img = img/255
images_data.append(img)
features = np.array(images_data)
return features
img_features = processSplitImages(splits_images, images)
# Split into train and test . And then augment the train data.
features_train, features_test, target_train, target_test = train_test_split(img_features, target, stratify=target, test_size=0.2)
del img_features, target
# Augmenting train data -> Not able to allocate enough RAM
#feature_train_augmented, target_augmented = augmentData(features_train, target_train)
op_shape = target_train.shape[1]
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(5, 5), activation="relu", input_shape=(400,400,3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=32, kernel_size=(10, 10), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(10, 10), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(op_shape, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
from keras.callbacks import ModelCheckpoint
filepath=root_path + "weights-{epoch:02d}-{val_accuracy:.3f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy',
verbose=1, mode='max')
callbacks_list = [checkpoint]
model.fit(features_train, target_train, epochs=10, validation_data=(features_test, target_test), batch_size=64, callbacks=callbacks_list)
```
Code testing for https://github.com/pymc-devs/pymc3/pull/2986
```
import numpy as np
import pymc3 as pm
import pymc3.distributions.transforms as tr
import theano.tensor as tt
from theano.scan_module import until
import theano
import matplotlib.pylab as plt
import seaborn as sns
%matplotlib inline
```
# Polar transformation
```
# Polar to Cartesian
def backward(y):
# y = [r, theta]
x = tt.zeros(y.shape)
x = tt.inc_subtensor(x[0], y[0]*tt.cos(y[1]))
x = tt.inc_subtensor(x[1], y[0]*tt.sin(y[1]))
return x
def forward(x):
# y = [r, theta]
y = tt.zeros(x.shape)
y = tt.inc_subtensor(y[0], tt.sqrt(tt.square(x[0]) + tt.square(x[1])))
    # Plain Python `if` cannot branch on symbolic tensors, so build the branch
    # into the graph with tt.switch (theta is set to 0 when r == 0).
    r = y[0]
    safe_r = tt.switch(tt.neq(r, 0.), r, 1.)
    theta = tt.switch(tt.lt(x[1], 0.),
                      -tt.arccos(x[0] / safe_r),
                      tt.arccos(x[0] / safe_r))
    theta = tt.switch(tt.neq(r, 0.), theta, 0.)
    y = tt.inc_subtensor(y[1], theta)
return y
y = tt.vector('polar')
y.tag.test_value=np.asarray([1., np.pi/2])
f_inv = backward(y)
J, _ = theano.scan(lambda i, f, x: tt.grad(f[i], x),
sequences=tt.arange(f_inv.shape[0]),
non_sequences=[f_inv, y])
Jacob_f1 = theano.function([y], J)
Jacob_f1(np.asarray([1., np.pi/2]))
J2 = pm.theanof.jacobian(f_inv, [y])
Jacob_f2 = theano.function([y], J2)
Jacob_f2(np.asarray([1., np.pi/2]))
%timeit Jacob_f1(np.asarray([1., np.pi/2]))
%timeit Jacob_f2(np.asarray([1., np.pi/2]))
class VectorTransform(tr.Transform):
def jacobian_det(self, x):
f_inv = self.backward(x)
J, _ = theano.scan(lambda i, f, x: tt.grad(f[i], x),
sequences=tt.arange(f_inv.shape[0]),
non_sequences=[f_inv, x])
return tt.log(tt.abs_(tt.nlinalg.det(J)))
class Nealfun(VectorTransform):
name = "Neal_funnel"
def backward(self, y):
x = tt.zeros(y.shape)
x = tt.inc_subtensor(x[0], y[0] / 3.)
x = tt.inc_subtensor(x[1:], y[1:] / tt.exp(y[0] / 2))
return x
def forward(self, x):
y = tt.zeros(x.shape)
y = tt.inc_subtensor(y[0], x[0] * 3.)
y = tt.inc_subtensor(y[1:], tt.exp(x[0] * 3. / 2) * x[1:])
return y
y = tt.vector('y')
y.tag.test_value = np.zeros(101)
nealfun = Nealfun()
f_inv = nealfun.backward(y)
J1, _ = theano.scan(lambda i, f, x: tt.grad(f[i], x),
sequences=tt.arange(f_inv.shape[0]),
non_sequences=[f_inv, y])
Jacob_f1 = theano.function([y], J1)
J2 = pm.theanof.jacobian(f_inv, [y])
Jacob_f2 = theano.function([y], J2)
%timeit Jacob_f1(np.zeros(101))
%timeit Jacob_f2(np.zeros(101))
```
# Copulas
Background reading http://twiecki.github.io/blog/2018/05/03/copulas/
More information https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb
```
import scipy.stats as st
norm = st.norm()
def norm_cdf(x):
    return norm.cdf(x)
def copulas_forward_func(nsample, cov, marg1_ppf, marg2_ppf):
mvnorm = st.multivariate_normal(mean=[0, 0], cov=cov)
# Generate random samples from multivariate normal with correlation .5
x = mvnorm.rvs(nsample)
x_unif = norm.cdf(x)
x_trans = np.vstack([marg1_ppf(x_unif[:, 0]), marg2_ppf(x_unif[:, 1])]).T
return x_trans, x_unif, x
cov = np.asarray([[1., 0.725], [0.725, 1.]])
marg1_ppf = st.gumbel_r().ppf
marg2_ppf = st.beta(a=10, b=2).ppf
x_trans, x_unif, x = copulas_forward_func(10000, cov, marg1_ppf, marg2_ppf)
sns.jointplot(x[:, 0], x[:, 1], kind='kde', stat_func=None)
sns.jointplot(x_unif[:, 0], x_unif[:, 1], kind='hex',
stat_func=None, joint_kws=dict(gridsize=50))
sns.jointplot(x_trans[:, 0], x_trans[:, 1], kind='kde',
stat_func=None, xlim=(-2, 6), ylim=(.6, 1.0),)
plt.tight_layout()
xrange = np.linspace(-2, 6, 200)
plt.hist(x_trans[:, 0], xrange, density='pdf')
plt.plot(xrange, st.gumbel_r.pdf(xrange));
def gumbel_cdf(value, mu, beta):
return tt.exp(-tt.exp(-(value-mu)/beta))
```
Beta CDF
```
from theano.scan_module import until
max_iter=200
value_, a, b = x_trans[:, 1], 10., 2.
value = theano.shared(np.reshape(value_, (1,len(value_))))
EPS = 3.0e-7
qab = a + b
qap = a + 1.0
qam = a - 1.0
def _step(i, az, bm, am, bz):
tem = i + i
    # even step of the continued fraction (as in Numerical Recipes' betacf)
    d = i * (b - i) * value / ((qam + tem) * (a + tem))
    ap = az + d * am
    bp = bz + d * bm
    # odd step uses its own d; it must not overwrite the one above before ap/bp are formed
    d = -(a + i) * (qab + i) * value / ((qap + tem) * (a + tem))
    app = ap + d * az
    bpp = bp + d * bz
    aold = az
    am = ap / bpp
    bm = bp / bpp
    az = app / bpp
    bz = tt.ones_like(bz)
    return (az, bm, am, bz), until(tt.all(tt.lt(tt.abs_(az - aold), EPS * tt.abs_(az))))
(az, bm, am, bz), _ = theano.scan(_step,
sequences=[tt.arange(1, max_iter)],
outputs_info=[tt.ones_like(value),
tt.ones_like(value),
tt.ones_like(value),
1. - qab * value / qap])
def cont_fraction_beta(value_, a, b, max_iter=500):
'''Evaluates the continued fraction form of the incomplete Beta function.
Derived from implementation by Ali Shoaib (https://goo.gl/HxjIJx).
'''
EPS = 1.0e-20
qab = a + b
qap = a + 1.0
qam = a - 1.0
value = theano.shared(value_)
def _step(i, az, bm, am, bz):
tem = i + i
        # even step of the continued fraction (as in Numerical Recipes' betacf)
        d = i * (b - i) * value / ((qam + tem) * (a + tem))
        ap = az + d * am
        bp = bz + d * bm
        # odd step uses its own d; it must not overwrite the one above before ap/bp are formed
        d = -(a + i) * (qab + i) * value / ((qap + tem) * (a + tem))
        app = ap + d * az
        bpp = bp + d * bz
        aold = az
        am = ap / bpp
        bm = bp / bpp
        az = app / bpp
        bz = tt.ones_like(bz)
        return (az, bm, am, bz), until(tt.all(tt.lt(tt.abs_(az - aold), EPS * tt.abs_(az))))
(az, bm, am, bz), _ = theano.scan(_step,
sequences=[tt.arange(1, max_iter)],
outputs_info=[tt.ones_like(value),
tt.ones_like(value),
tt.ones_like(value),
1. - qab * value / qap])
return az[-1]
def beta_cdf(value, a, b):
log_beta = tt.gammaln(a+b) - tt.gammaln(a) - tt.gammaln(b)
log_beta += a * tt.log(value) + b * tt.log(1 - value)
cdf = tt.switch(
tt.lt(value, (a + 1) / (a + b + 2)),
tt.exp(log_beta) * cont_fraction_beta(value, a, b) / a,
1. - tt.exp(log_beta) * cont_fraction_beta(1. - value, b, a) / b
)
return cdf
def normal_ppf(value):
return -np.sqrt(2.) * tt.erfcinv(2. * value)
functmp = theano.function([],
tt.stack([gumbel_cdf(x_trans[:, 0], 0., 1.),
beta_cdf(x_trans[:, 1], 10., 2.)]).T
)
x_ = functmp()
x_
x_unif
np.sum(~np.isfinite(x_))
with pm.Model() as model:
# r∼Uniform(−1,1)
r = pm.Uniform('r',lower=-1, upper=1)
cov = pm.Deterministic('cov',
tt.stacklists([[1., r],
[r, 1.]]))
a = pm.HalfNormal('alpha', 5., testval=10.)
b = pm.HalfNormal('beta', 2.5, testval=2.)
loc = pm.Normal('loc', 0., 5., testval=0.)
scale = pm.HalfNormal('scale', 2.5, testval=1.)
tr_func = normal_ppf(
tt.stack([gumbel_cdf(x_trans[:, 0], loc, scale),
beta_cdf(x_trans[:, 1], a, b)]).T
)
pm.MvNormal('obs', np.zeros(2), cov=cov, observed=tr_func)
pm.Gumbel('marg0', loc, scale, observed=x_trans[:, 0])
pm.Beta('marg1', a, b, observed=x_trans[:, 1])
```
The beta CDF does not quite work - use another distribution instead
```
from scipy.special import logit
xrange = np.linspace(0, 1, 200)
plt.hist(x_trans[:, 1], xrange, density='pdf')
logitnormpdf = st.norm.pdf(logit(xrange), loc=1.725, scale=.8) * 1/(xrange * (1-xrange))
plt.plot(xrange, logitnormpdf);
def logitnorm_cdf(value, mu, sd):
return .5 + .5*(tt.erf((pm.math.logit(value)-mu)/(np.sqrt(2)*sd)))
tr_func = normal_ppf(
tt.stack([gumbel_cdf(x_trans[:, 0], 0., 1.),
logitnorm_cdf(x_trans[:, 1], 1.725, .8)]).T
)
functmp = theano.function([], tr_func)
x_ = functmp()
sns.jointplot(x_[:, 0], x_[:, 1], kind='kde', stat_func=None);
np.sum(~np.isfinite(x_[:, 1]))
with pm.Model() as model:
# r∼Uniform(−1,1)
r = pm.Uniform('r',lower=-1, upper=1)
cov = pm.Deterministic('cov',
tt.stacklists([[1., r],
[r, 1.]]))
loc = pm.Normal('loc', 0., 5., testval=0.)
scale = pm.HalfNormal('scale', 2.5, testval=1.)
mu = pm.Normal('mu', 1., 1., testval=1.725)
sd = pm.HalfNormal('sd', .5, testval=.8)
tr_func = normal_ppf(
tt.stack([gumbel_cdf(x_trans[:, 0], loc, scale),
logitnorm_cdf(x_trans[:, 1], mu, sd)]).T
)
pm.MvNormal('obs', np.zeros(2), cov=cov, observed=tr_func)
pm.Gumbel('marg0', loc, scale, observed=x_trans[:, 0])
pm.LogitNormal('marg1', mu, sd, observed=x_trans[:, 1])
with model:
map1 = pm.find_MAP()
map1
_, ax = plt.subplots(1, 2, figsize=(10, 3))
x0 = np.linspace(-2, 6, 200)
ax[0].hist(x_trans[:, 0], x0, density='pdf')
ax[0].plot(x0, st.gumbel_r.pdf(x0, loc=map1['loc'], scale=map1['scale']))
x1 = np.linspace(0, 1, 200)
ax[1].hist(x_trans[:, 1], x1, density='pdf')
logitnormpdf = st.norm.pdf(logit(x1), loc=map1['mu'], scale=map1['sd']) * 1/(x1 * (1-x1))
ax[1].plot(x1, logitnormpdf);
with pm.Model() as model_marg:
loc = pm.Normal('loc', 0., 5., testval=0.)
scale = pm.HalfNormal('scale', 2.5, testval=1.)
mu = pm.Normal('mu', 1., 1., testval=1.725)
sd = pm.HalfNormal('sd', .5, testval=.8)
pm.Gumbel('marg0', loc, scale, observed=x_trans[:, 0])
pm.LogitNormal('marg1', mu, sd, observed=x_trans[:, 1])
map_ = pm.find_MAP()
map_
_, ax = plt.subplots(1, 2, figsize=(10, 3))
x0 = np.linspace(-2, 6, 200)
ax[0].hist(x_trans[:, 0], x0, density='pdf')
ax[0].plot(x0, st.gumbel_r.pdf(x0, loc=map_['loc'], scale=map_['scale']))
x1 = np.linspace(0, 1, 200)
ax[1].hist(x_trans[:, 1], x1, density='pdf')
logitnormpdf = st.norm.pdf(logit(x1), loc=map_['mu'], scale=map_['sd']) * 1/(x1 * (1-x1))
ax[1].plot(x1, logitnormpdf);
from pymc3.theanof import gradient
def jacobian_det(f_inv_x, x):
grad = tt.reshape(gradient(tt.sum(f_inv_x), [x]), x.shape)
return tt.log(tt.abs_(grad))
xt_0 = theano.shared(x_trans[:, 0])
xt_1 = theano.shared(x_trans[:, 1])
with pm.Model() as model2:
# r∼Uniform(−1,1)
r = pm.Uniform('r',lower=-1, upper=1)
cov = pm.Deterministic('cov',
tt.stacklists([[1., r],
[r, 1.]]))
loc = pm.Normal('loc', 0., 5., testval=0.)
scale = pm.HalfNormal('scale', 2.5, testval=1.)
mu = pm.Normal('mu', 1., .5, testval=1.725)
sd = pm.HalfNormal('sd', .5, testval=.8)
tr_func = normal_ppf(
tt.stack([gumbel_cdf(xt_0, loc, scale),
logitnorm_cdf(xt_1, mu, sd)]).T
)
pm.MvNormal('obs', np.zeros(2), cov=cov, observed=tr_func)
pm.Potential('jacob_det0', jacobian_det(normal_ppf(gumbel_cdf(xt_0, loc, scale)), xt_0))
pm.Potential('jacob_det1', jacobian_det(normal_ppf(logitnorm_cdf(xt_1, mu, sd)), xt_1))
map_ = pm.find_MAP()
_, ax = plt.subplots(1, 2, figsize=(10, 3))
x0 = np.linspace(-2, 6, 200)
ax[0].hist(x_trans[:, 0], x0, density='pdf')
ax[0].plot(x0, st.gumbel_r.pdf(x0, loc=map_['loc'], scale=map_['scale']))
x1 = np.linspace(0, 1, 200)
ax[1].hist(x_trans[:, 1], x1, density='pdf')
logitnormpdf = st.norm.pdf(logit(x1), loc=map_['mu'], scale=map_['sd']) * 1/(x1 * (1-x1))
ax[1].plot(x1, logitnormpdf);
```
Kumaraswamy distribution
```
from scipy.special import logit
xrange = np.linspace(0, 1, 200)
plt.hist(x_trans[:, 1], xrange, density='pdf')
Kumaraswamypdf = lambda x, a, b: a*b*np.power(x, a-1)*np.power(1-np.power(x, a), b-1)
plt.plot(xrange, Kumaraswamypdf(xrange, 8, 2));
def Kumaraswamy_cdf(value, a, b):
return 1 - tt.pow(1 - tt.pow(value, a), b)
tr_func = normal_ppf(
tt.stack([gumbel_cdf(x_trans[:, 0], 0., 1.),
Kumaraswamy_cdf(x_trans[:, 1], 8, 2)]).T
)
functmp = theano.function([], tr_func)
x_ = functmp()
sns.jointplot(x_[:, 0], x_[:, 1], kind='kde', stat_func=None);
np.sum(~np.isfinite(x_[:, 1]))
with pm.Model() as model_marg:
a = pm.HalfNormal('alpha', 5., testval=10.)
b = pm.HalfNormal('beta', 2.5, testval=2.)
loc = pm.Normal('loc', 0., 5., testval=0.)
scale = pm.HalfNormal('scale', 2.5, testval=1.)
pm.Gumbel('marg0', loc, scale, observed=x_trans[:, 0])
pm.Kumaraswamy('marg1', a, b, observed=x_trans[:, 1])
map_ = pm.find_MAP()
_, ax = plt.subplots(1, 2, figsize=(10, 3))
x0 = np.linspace(-2, 6, 200)
ax[0].hist(x_trans[:, 0], x0, density='pdf')
ax[0].plot(x0, st.gumbel_r.pdf(x0, loc=map_['loc'], scale=map_['scale']))
x1 = np.linspace(0, 1, 200)
ax[1].hist(x_trans[:, 1], x1, density='pdf')
ax[1].plot(x1, Kumaraswamypdf(x1, map_['alpha'], map_['beta']));
with pm.Model() as model2:
# r∼Uniform(−1,1)
r = pm.Uniform('r',lower=-1, upper=1)
cov = pm.Deterministic('cov',
tt.stacklists([[1., r],
[r, 1.]]))
a = pm.HalfNormal('alpha', 5.)
b = pm.HalfNormal('beta', 2.5)
loc = pm.Normal('loc', 0., 5.)
scale = pm.HalfNormal('scale', 2.5)
tr_func = normal_ppf(
tt.stack([gumbel_cdf(xt_0, loc, scale),
Kumaraswamy_cdf(xt_1, a, b)]).T
)
pm.MvNormal('obs', np.zeros(2), cov=cov, observed=tr_func)
pm.Potential('jacob_det0', jacobian_det(normal_ppf(gumbel_cdf(xt_0, loc, scale)), xt_0))
pm.Potential('jacob_det1', jacobian_det(normal_ppf(Kumaraswamy_cdf(xt_1, a, b)), xt_1))
map_ = pm.find_MAP()
_, ax = plt.subplots(1, 2, figsize=(10, 3))
x0 = np.linspace(-2, 6, 200)
ax[0].hist(x_trans[:, 0], x0, density='pdf')
ax[0].plot(x0, st.gumbel_r.pdf(x0, loc=map_['loc'], scale=map_['scale']))
x1 = np.linspace(0, 1, 200)
ax[1].hist(x_trans[:, 1], x1, density='pdf')
ax[1].plot(x1, Kumaraswamypdf(x1, map_['alpha'], map_['beta']));
map_
with model2:
trace = pm.sample()
_, ax = plt.subplots(1, 2, figsize=(10, 3))
x0 = np.linspace(-2, 6, 200)
ax[0].hist(x_trans[:, 0], x0, density='pdf')
ax[0].plot(x0, st.gumbel_r.pdf(x0, loc=trace['loc'].mean(), scale=trace['scale'].mean()))
x1 = np.linspace(0, 1, 200)
ax[1].hist(x_trans[:, 1], x1, density='pdf')
ax[1].plot(x1, Kumaraswamypdf(x1, trace['alpha'].mean(), trace['beta'].mean()));
```
# Load MXNet model
In this tutorial, you learn how to load an existing MXNet model and use it to run a prediction task.
## Preparation
This tutorial requires the installation of Java Kernel. For more information on installing the Java Kernel, see the [README](https://github.com/awslabs/djl/blob/master/jupyter/README.md) to install Java Kernel.
```
%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.3.0-SNAPSHOT
%maven ai.djl:repository:0.3.0-SNAPSHOT
%maven ai.djl:model-zoo:0.3.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-engine:0.3.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-model-zoo:0.3.0-SNAPSHOT
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven net.java.dev.jna:jna:5.3.0
// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md
// for more MXNet library selection options
%maven ai.djl.mxnet:mxnet-native-auto:1.6.0-SNAPSHOT
import java.awt.image.*;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;
import ai.djl.*;
import ai.djl.inference.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.index.*;
import ai.djl.modality.*;
import ai.djl.modality.cv.*;
import ai.djl.modality.cv.util.*;
import ai.djl.modality.cv.transform.*;
import ai.djl.mxnet.zoo.*;
import ai.djl.translate.*;
import ai.djl.training.util.*;
import ai.djl.util.*;
import ai.djl.basicmodelzoo.cv.classification.*;
```
## Step 1: Prepare your MXNet model
This tutorial assumes that you have an MXNet model trained using Python. An MXNet symbolic model usually contains the following files:
* Symbol file: {MODEL_NAME}-symbol.json - a json file that contains network information about the model
* Parameters file: {MODEL_NAME}-{EPOCH}.params - a binary file that stores the parameter weight and bias
* Synset file: synset.txt - an optional text file that stores classification classes labels
This tutorial uses a pre-trained MXNet `resnet18_v1` model.
We use [DownloadUtils.java] to download files from the internet.
```
%load DownloadUtils.java
DownloadUtils.download("https://mlrepo.djl.ai/model/cv/image_classification/ai/djl/mxnet/resnet/0.0.1/resnet18_v1-symbol.json", "build/resnet/resnet18_v1-symbol.json", new ProgressBar());
DownloadUtils.download("https://mlrepo.djl.ai/model/cv/image_classification/ai/djl/mxnet/resnet/0.0.1/resnet18_v1-0000.params.gz", "build/resnet/resnet18_v1-0000.params", new ProgressBar());
DownloadUtils.download("https://mlrepo.djl.ai/model/cv/image_classification/ai/djl/mxnet/synset.txt", "build/resnet/synset.txt", new ProgressBar());
```
## Step 2: Load your model
```
Path modelDir = Paths.get("build/resnet");
Model model = Model.newInstance();
model.load(modelDir, "resnet18_v1");
```
## Step 3: Create a `Translator`
```
Pipeline pipeline = new Pipeline();
pipeline.add(new CenterCrop()).add(new Resize(224, 224)).add(new ToTensor());
Translator<BufferedImage, Classifications> translator = ImageClassificationTranslator.builder()
.setPipeline(pipeline)
.setSynsetArtifactName("synset.txt")
.build();
```
## Step 4: Load image for classification
```
var img = BufferedImageUtils.fromUrl("https://djl-ai.s3.amazonaws.com/resources/images/kitten.jpg");
img
```
## Step 5: Run inference
```
Predictor<BufferedImage, Classifications> predictor = model.newPredictor(translator);
Classifications classifications = predictor.predict(img);
classifications
```
## Summary
Now, you can load any MXNet symbolic model and run inference.
# Modeling and Simulation in Python
Chapter 18
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Code from the previous chapter
Read the data.
```
data = pd.read_csv('data/glucose_insulin.csv', index_col='time');
```
Interpolate the insulin data.
```
I = interpolate(data.insulin)
```
Initialize the parameters
```
G0 = 290
k1 = 0.03
k2 = 0.02
k3 = 1e-05
```
To estimate basal levels, we'll use the concentrations at `t=0`.
```
Gb = data.glucose[0]
Ib = data.insulin[0]
```
Create the initial conditions.
```
init = State(G=G0, X=0)
```
Make the `System` object.
```
t_0 = get_first_label(data)
t_end = get_last_label(data)
system = System(init=init,
k1=k1, k2=k2, k3=k3,
I=I, Gb=Gb, Ib=Ib,
t_0=t_0, t_end=t_end, dt=2)
def update_func(state, t, system):
"""Updates the glucose minimal model.
state: State object
t: time in min
system: System object
returns: State object
"""
G, X = state
unpack(system)
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
G += dGdt * dt
X += dXdt * dt
return State(G=G, X=X)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
%time results = run_simulation(system, update_func);
```
### Numerical solution
In the previous chapter, we approximated the differential equations with difference equations, and solved them using `run_simulation`.
In this chapter, we solve the differential equation numerically using `run_ode_solver`, which is a wrapper for the SciPy ODE solver.
Instead of an update function, we provide a slope function that evaluates the right-hand side of the differential equations. We don't have to do the update part; the solver does it for us.
```
def slope_func(state, t, system):
"""Computes derivatives of the glucose minimal model.
state: State object
t: time in min
system: System object
returns: derivatives of G and X
"""
G, X = state
unpack(system)
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
return dGdt, dXdt
```
We can test the slope function with the initial conditions.
```
slope_func(init, 0, system)
```
Here's how we run the ODE solver.
```
%time results2, details = run_ode_solver(system, slope_func, t_eval=data.index);
```
`details` is a `ModSimSeries` object with information about how the solver worked.
```
details
```
`results` is a `TimeFrame` with one row for each time step and one column for each state variable:
```
results2
```
Plotting the results from `run_simulation` and `run_ode_solver`, we can see that they are not very different.
```
plot(results.G, 'g-')
plot(results2.G, 'b-')
plot(data.glucose, 'bo')
```
The differences in `G` are less than 1%.
```
diff = results.G - results2.G
percent_diff = diff / results2.G * 100
percent_diff.dropna()
```
### Optimization
Now let's find the parameters that yield the best fit for the data.
We'll use these values as an initial estimate and iteratively improve them.
```
params = Params(G0 = 290,
k1 = 0.03,
k2 = 0.02,
k3 = 1e-05)
```
`make_system` takes the parameters and actual data and returns a `System` object.
```
def make_system(params, data):
"""Makes a System object with the given parameters.
params: sequence of G0, k1, k2, k3
data: DataFrame with `glucose` and `insulin`
returns: System object
"""
G0, k1, k2, k3 = params
Gb = data.glucose[0]
Ib = data.insulin[0]
t_0 = get_first_label(data)
t_end = get_last_label(data)
init = State(G=G0, X=0)
return System(G0=G0, k1=k1, k2=k2, k3=k3,
init=init, Gb=Gb, Ib=Ib,
t_0=t_0, t_end=t_end)
system = make_system(params, data)
```
`error_func` takes the parameters and actual data, makes a `System` object, runs `run_ode_solver`, and then compares the results to the data. It returns an array of errors.
```
def error_func(params, data):
"""Computes an array of errors to be minimized.
params: sequence of parameters
data: DataFrame of values to be matched
returns: array of errors
"""
print(params)
# make a System with the given parameters
system = make_system(params, data)
# solve the ODE
results, details = run_ode_solver(system, slope_func, t_eval=data.index)
# compute the difference between the model
# results and actual data
errors = results.G - data.glucose
return errors
```
When we call `error_func`, we provide a sequence of parameters as a single object.
Here's how that works:
```
error_func(params, data)
```
`fit_leastsq` is a wrapper for `scipy.optimize.leastsq`
Here's how we call it.
```
best_params, fit_details = fit_leastsq(error_func, params, data)
```
The first return value is a `Params` object with the best parameters:
```
best_params
```
The second return value is a `ModSimSeries` object with information about the results.
```
fit_details
```
Now that we have `best_params`, we can use it to make a `System` object and run it.
```
system = make_system(best_params, data)
results, details = run_ode_solver(system, slope_func, t_eval=data.index)
details.message
```
Here are the results, along with the data. The first few points of the model don't fit the data, but we don't expect them to.
```
plot(results.G, label='simulation')
plot(data.glucose, 'bo', label='glucose data')
decorate(xlabel='Time (min)',
ylabel='Concentration (mg/dL)')
savefig('figs/chap08-fig04.pdf')
```
### Interpreting parameters
Based on the parameters of the model, we can estimate glucose effectiveness and insulin sensitivity.
```
def indices(params):
"""Compute glucose effectiveness and insulin sensitivity.
params: sequence of G0, k1, k2, k3
data: DataFrame with `glucose` and `insulin`
returns: State object containing S_G and S_I
"""
G0, k1, k2, k3 = params
return State(S_G=k1, S_I=k3/k2)
```
Here are the results.
```
indices(best_params)
```
### Under the hood
Here's the source code for `run_ode_solver` and `fit_leastsq`, if you'd like to know how they work.
```
%psource run_ode_solver
%psource fit_leastsq
```
## Exercises
**Exercise:** Since we don't expect the first few points to agree, it's probably better not to make them part of the optimization process. We can ignore them by leaving them out of the `Series` returned by `error_func`. Modify the last line of `error_func` to return `errors.loc[8:]`, which includes only the elements of the `Series` from `t=8` and up.
Does that improve the quality of the fit? Does it change the best parameters by much?
Note: You can read more about this use of `loc` [in the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-integer).
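A minimal sketch of that change, reusing `make_system`, `slope_func`, `run_ode_solver`, and `fit_leastsq` from above (only the return line differs from `error_func`):
```
def error_func2(params, data):
    """Version of error_func that ignores the early points the model
    is not expected to fit."""
    system = make_system(params, data)
    results, details = run_ode_solver(system, slope_func, t_eval=data.index)
    errors = results.G - data.glucose
    # keep only the elements of the Series from t=8 and up
    return errors.loc[8:]

best_params2, fit_details2 = fit_leastsq(error_func2, params, data)
```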
**Exercise:** How sensitive are the results to the starting guess for the parameters? If you try different values for the starting guess, do you get the same values for the best parameters?
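One way to probe this is to rerun the fit from a few different starting points, assuming the `fit_leastsq` wrapper above (the alternative starting values here are arbitrary guesses):
```
for guess in [Params(G0=250, k1=0.05, k2=0.05, k3=5e-05),
              Params(G0=350, k1=0.01, k2=0.01, k3=1e-06)]:
    best, _ = fit_leastsq(error_func, guess, data)
    print(best)
```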
**Related reading:** You might be interested in this article about [people making a DIY artificial pancreas](https://www.bloomberg.com/news/features/2018-08-08/the-250-biohack-that-s-revolutionizing-life-with-diabetes).
```
import sys
from pathlib import Path
sys.path.append(str(Path.cwd().parent.parent))
import numpy as np
from kymatio.scattering2d.core.scattering2d import scattering2d
import matplotlib.pyplot as plt
import torch
import torchvision
from kymatio import Scattering2D
from PIL import Image
from IPython.display import display
from torchvision.transforms import *
#img = Image.open('/NOBACKUP/gauthiers/KTH/sample_a/wood/54a-scale_10_im_10_col.png')
img = Image.open('/NOBACKUP/gauthiers/chest_xrays_preprocess/train/positive/MIDRC-RICORD-1C-SITE2-000216-21074-0.png')
rsz_transf = torchvision.transforms.Resize((128,128))
img = rsz_transf(img)
display(img)
```
Rotation
```
transformation = torchvision.transforms.RandomRotation(degrees = 45)
transformation.degrees = [45,45]
img_rot2 = transformation(img)
display(img_rot2)
```
Blur
```
transformation = torchvision.transforms.GaussianBlur(3)
img_blur = transformation(img)
display(img_blur)
```
Perspective
```
transformation = torchvision.transforms.RandomPerspective()
img_rdmPersp = transformation(img)
display(img_rdmPersp)
transforms = torchvision.transforms.RandomPerspective(distortion_scale=0.5,p=1)
transforms.distortion_scale = 0.9
img_1 = transforms(img)
display(img_1)
transforms = torchvision.transforms.RandomAffine(degrees = 0, shear=90)
img_2 = transforms(img)
display(img_2)
```
À la Mallat
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device = torch.device('cpu')
import time
t0 = time.time()
# Function \tau in Mallat's. Deform the index u. The function is chosen arbitrary as an example.
tau = lambda u : (0.5*u[0]+0.3*u[1]**2,0.3*u[1])
# Deform the index u for all u of the image.
tau_mat = lambda grid : torch.tensor([[tau(grid[i,j,:]) for j in range(len(grid))] for i in range(len(grid))],device = device)
tauV = lambda u : torch.stack([0.5*u[:,0]+0.3*u[:,1]**2,0.3*u[:,1]]).T
# Deforms the image given a function \tau.
def diffeo(img,tau):
# Image to tensor
transf = torchvision.transforms.ToTensor()
img = transf(img).unsqueeze(0).to(device)
# Number of pixels. Suppose square image.
dim = img.shape[-1]
# Create a (dim x dim) matrix of 2d vectors. Each vector represents the normalized position in the grid.
# Normalized means (-1,-1) is top left and (1,1) is bottom right.
grid = torch.tensor([[[x,y] for x in torch.linspace(-1,1,dim,device = device)] for y in torch.linspace(-1,1,dim,device = device)],device = device)
# Apply u-tau(u) in Mallat's.
grid_transf = (grid - tau_mat(grid)).unsqueeze(0)
# Apply x(u-tau(u)) by interpolating the image at the index points given by grid_transf.
img_transf = torch.nn.functional.grid_sample(img,grid_transf).squeeze(0)
# Tensor to image
transf = torchvision.transforms.ToPILImage()
return transf(img_transf)
# Calculate the deformation size : sup |J_{tau}(u)| over u.
def deformation_size(tau):
# Set a precision. This is arbitrary.
precision = 128
# Create a (flatten) grid of points between (-1,-1) and (1,1). This is the same grid as in the previous
# function (but flatten), but it feels arbitrary also.
points = [torch.tensor([x,y],device = device) for x in torch.linspace(-1,1,precision,device = device) for y in torch.linspace(-1,1,precision,device = device)]
# Evaluate the Jacobian of tau in each of those points. Returns a tensor of precision^2 x 2 x 2, i.e.
# for each point in points the 2 x 2 jacobian. Is it necessary to compute on all points, or only on the
# boundary would be sufficient?
t1 = time.time()
jac = torch.stack(list(map(lambda point : torch.stack(torch.autograd.functional.jacobian(tau,point)), points)))
print("grad calc +", (time.time()-t1))
# Find the norm of those jacobians.
norm_jac = torch.linalg.matrix_norm(jac,ord=2,dim=(1, 2))
# Return the Jacobian with the biggest norm.
return torch.max(norm_jac)
img_diffeo = diffeo(img,tau)
display(img_diffeo)
deformation_size(tau)
print("full notebook +", (time.time()-t0))
tau(torch.randn((64,2)))
points = [torch.tensor([0.,0.]),torch.tensor([1.,2.])]
jac = torch.autograd.functional.jacobian(tau,points[0])
jac2 = torch.stack(jac)
jac = torch.autograd.functional.jacobian(tau,points[1])
jac3 = torch.stack(jac)
n = 0
jac4 = torch.cat([jac2.unsqueeze(n),jac3.unsqueeze(n)],dim = n)
print(jac2)
print(jac3)
print(jac4)
print(jac4.shape)
jac5 = torch.cat([torch.stack(torch.autograd.functional.jacobian(tau,point)).unsqueeze(0) for point in points], dim = 0)
print(jac5)
points = [torch.tensor([0.,0.]),torch.tensor([1.,2.])]
jac = torch.stack(list(map(lambda point : torch.stack(torch.autograd.functional.jacobian(tau,point)), points)))
print(jac)
print(jac.shape)
points = [torch.tensor([0.,0.]),torch.tensor([1.,2.])]
jac = torch.cat([torch.cat([x.unsqueeze(1) for x in torch.autograd.functional.jacobian(tau,point)],dim =1).unsqueeze(2) for point in points],dim = 2)
print(jac)
print(jac.shape)
eps = 0.3
tau = lambda u : (eps*u[0],eps*u[1])
display(diffeo(img,tau))
eps = 0.3
tau = lambda u : (eps*u[1],eps*u[0])
display(diffeo(img,tau))
eps = 0.3
tau = lambda u : (eps*(u[0]+u[1]),eps*(u[0]+u[1]))
display(diffeo(img,tau))
eps = 0.3
tau = lambda u : (eps*(u[0]+u[1]),eps*(u[0]-u[1]))
display(diffeo(img,tau))
eps = 0.3
tau = lambda u : (eps*(u[0]**2+u[1]**2),eps*(2*u[0]*u[1]))
display(diffeo(img,tau))
eps = 0.3
tau = lambda u : (eps*(u[0]**2+u[1]**2),-eps*(2*u[0]*u[1]))
display(diffeo(img,tau))
eps = 0.3
tau = lambda u : (torch.exp(eps*u[0])-1,torch.exp(eps*u[1])-1)
display(diffeo(img,tau))
```
# Todoist Data Analysis
This notebook processes the downloaded history of your Todoist tasks. See [todoist_downloader.ipynb](https://github.com/markwk/qs_ledger/blob/master/todoist/todoist_downloader.ipynb) to export and download your task history from Todoist.
---
```
from datetime import date, datetime as dt, timedelta as td
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
%matplotlib inline
# supress warnings
import warnings
warnings.filterwarnings('ignore')
```
---
# General Data Analysis of Todoist Tasks
```
# import raw data
raw_tasks = pd.read_csv("data/todost-raw-tasks-completed.csv")
len(raw_tasks)
# import processed data
tasks = pd.read_csv("data/todost-tasks-completed.csv")
len(tasks)
```
----
### Simple Data Analysis: Completed Tasks Per Year
```
year_data = tasks['year'].value_counts().sort_index()
# Chart Monthly Tasks Count
dataset = year_data
chart_title = 'Number of Tasks Completed Per Year'
plt.style.use('seaborn-darkgrid')
ax = dataset.plot.bar(figsize=(14, 5), rot=0, legend=False)
ax.set_ylabel('Tasks Completed')
ax.set_xlabel('')
ax.set_title(chart_title)
plt.show()
```
### Simple Data Analysis: Completed Tasks Per Month
```
# simple breakdown by month
totals_by_month = tasks['month'].value_counts().sort_index()
# Chart Monthly Tasks Count
dataset = totals_by_month.tail(24)
chart_title = 'Monthly Number of Tasks Completed (Last 24 Months)'
plt.style.use('seaborn-darkgrid')
ax = dataset.plot.bar(figsize=(14, 5), rot=90, colormap='spring', stacked=True, legend=False)
ax.set_ylabel('Tasks Completed')
ax.set_xlabel('')
ax.set_title(chart_title)
plt.show()
```
------
### Simple Data Analysis: Completed Tasks by Day of Week
```
totals_dow = tasks['dow'].value_counts().sort_index()
dataset = totals_dow
chart_title = 'Completed Tasks by Day of Week'
plt.style.use('seaborn-darkgrid')
ax = dataset.plot.bar(figsize=(14, 5), rot=0, colormap='autumn', stacked=True, legend=False)
ax.set_ylabel('# Completed')
ax.set_xlabel('')
ax.set_title(chart_title)
plt.show()
```
-----
### Simple Data Analysis: Completed Tasks by Hour of the Day
```
hour_counts = tasks['hour'].value_counts().sort_index()
ax = hour_counts.plot(kind='line', figsize=[10, 4], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
xlabels = hour_counts.index.map(lambda x: '{:02}:00'.format(x))
ax.set_xticks(range(len(xlabels)))
ax.set_xticklabels(xlabels, rotation=45, rotation_mode='anchor', ha='right')
ax.set_xlim((hour_counts.index[0], hour_counts.index[-1]))
ax.yaxis.grid(True)
hour_max = hour_counts.max()
ax.set_ylim((0, hour_max+20))
ax.set_ylabel('Number of Tasks')
ax.set_xlabel('', )
ax.set_title('Number of Tasks Completed per hour of the day', )
plt.show()
```
----
## Daily Count of Tasks Completed
```
daily_counts = tasks['date'].value_counts().sort_index()
dataset = daily_counts.tail(30)
chart_title = 'Number of Tasks Completed per Day'
n_groups = len(dataset)
index = np.arange(n_groups)
ax = dataset.plot(kind='line', figsize=[12, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
ax.yaxis.grid(True)
ax.xaxis.grid(True)
ax.set_xticks(index)
ax.set_ylabel('Tasks Completed Count')
# ax.set_xlabel('')
plt.xticks(index, dataset.index, rotation=90)
ax.set_title(chart_title)
plt.show()
# Export
daily_counts.to_csv("data/todoist-daily-completed.csv", index=True)
```
-----
### Projects Breakdown
```
# Optionally pass a list of projects to exclude
exclude_proj = ['Project1', 'Project2']
tasks_data = tasks[~tasks.project_name.isin(exclude_proj)]
project_counts = tasks_data['project_name'].value_counts().sort_values(ascending=False)
# Chart Project Tasks
dataset = project_counts.sort_values(ascending=True).tail(15)
chart_title = 'Project Tasks Breakdown'
plt.style.use('seaborn-darkgrid')
ax = dataset.plot.barh(y='Hours', figsize=(8, 8), colormap='plasma', legend=False)
ax.set_ylabel('')
ax.set_xlabel('Task #')
ax.set_title(chart_title)
plt.show()
```
-----
## General Summary of Todoist Tasks
```
# Life-time Project Time Summary
print('====== Todoist Lifetime Summary ====== ')
print('Total Tasks Completed: {:,}'.format(len(tasks)))
daily_average = round(daily_counts.mean(),1)
print('Daily Task Average: {:,}'.format(daily_average))
print(' ')
print('Top 5 Days with Most Tasks Completed:')
for i, v in daily_counts.sort_values(ascending=False).head(5).items():
print(v, 'tasks on ', i)
```
------
# Year in Review
```
# Set Year
target_year = 2018
```
### Year: Top Projects
```
def yearly_top_projects_chart(year, exclude_projects=[]):
year_data = tasks[tasks['year'] == year]
# Optionally pass a list of projects to exclude
if exclude_projects:
exclude_proj = exclude_projects
        year_data = year_data[~year_data.project_name.isin(exclude_proj)]
    project_counts = year_data['project_name'].value_counts().sort_values(ascending=False)
# Chart Project Tasks
dataset = project_counts.sort_values(ascending=True).tail(10)
chart_title = '{} Project Tasks Breakdown'.format(year)
plt.style.use('seaborn-darkgrid')
ax = dataset.plot.barh(y='Hours', figsize=(8, 8), colormap='plasma', legend=False)
ax.set_ylabel('')
ax.set_xlabel('Task #')
ax.set_title(chart_title)
plt.show()
# yearly_top_projects_chart(year=target_year, exclude_projects=['ProjectName', 'ProjectName2'])
yearly_top_projects_chart(year=target_year)
```
### Year: Day of Week Comparison
```
def yearly_dow_chart(year):
year_data = tasks[tasks['year'] == year]
yearly_dow = year_data['dow'].value_counts().sort_index()
    days_of_week_list = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun']
yearly_dow.index = days_of_week_list
chart_title = '{} Tasks Completed by Day of Week | Yearly Total: {:,}'.format(year, yearly_dow.sum())
plt.style.use('seaborn-darkgrid')
ax = yearly_dow.plot.bar(stacked=True, rot=0, figsize=(12,4))
ax.set_xlabel('')
ax.set_ylabel('Hours')
ax.set_title(chart_title)
plt.show()
yearly_dow_chart(year=target_year)
```
### Year: Monthly Tasks Completed Chart
```
def yearly_months_chart(year):
year_data = tasks[tasks['year'] == year]
yearly_months = year_data['month'].value_counts().sort_index()
months_of_year = ['Jan', 'Feb', 'March', 'April', 'May', 'June', 'July',
'Aug', 'Sept', 'Oct', 'Nov', 'Dec']
yearly_months.index = months_of_year
# Chart Monthly Tasks Count
dataset = yearly_months
chart_title = 'Monthly Number of Tasks Completed'
plt.style.use('seaborn-darkgrid')
ax = dataset.plot.bar(figsize=(14, 5), rot=0, colormap='spring', stacked=True, legend=False)
ax.set_ylabel('Tasks Completed')
ax.set_xlabel('')
ax.set_title(chart_title)
plt.show()
yearly_months_chart(year=target_year)
```
#### Year: Tasks Heat Map
```
# Helper Function to Create Heat Map from Data
# Adapted from https://stackoverflow.com/questions/32485907/matplotlib-and-numpy-create-a-calendar-heatmap
DAYS = ['Sun.', 'Mon.', 'Tues.', 'Wed.', 'Thurs.', 'Fri.', 'Sat.']
MONTHS = ['Jan.', 'Feb.', 'Mar.', 'Apr.', 'May', 'June', 'July', 'Aug.', 'Sept.', 'Oct.', 'Nov.', 'Dec.']
def date_heatmap(series, start=None, end=None, mean=False, ax=None, **kwargs):
'''Plot a calendar heatmap given a datetime series.
Arguments:
series (pd.Series):
A series of numeric values with a datetime index. Values occurring
on the same day are combined by sum.
start (Any):
The first day to be considered in the plot. The value can be
anything accepted by :func:`pandas.to_datetime`. The default is the
earliest date in the data.
end (Any):
The last day to be considered in the plot. The value can be
anything accepted by :func:`pandas.to_datetime`. The default is the
latest date in the data.
mean (bool):
Combine values occurring on the same day by mean instead of sum.
ax (matplotlib.Axes or None):
The axes on which to draw the heatmap. The default is the current
axes in the :module:`~matplotlib.pyplot` API.
**kwargs:
Forwarded to :meth:`~matplotlib.Axes.pcolormesh` for drawing the
heatmap.
Returns:
matplotlib.collections.Axes:
The axes on which the heatmap was drawn. This is set as the current
axes in the `~matplotlib.pyplot` API.
'''
# Combine values occurring on the same day.
dates = series.index.floor('D')
group = series.groupby(dates)
series = group.mean() if mean else group.sum()
# Parse start/end, defaulting to the min/max of the index.
start = pd.to_datetime(start or series.index.min())
end = pd.to_datetime(end or series.index.max())
# We use [start, end) as a half-open interval below.
end += np.timedelta64(1, 'D')
# Get the previous/following Sunday to start/end.
# Pandas and numpy day-of-week conventions are Monday=0 and Sunday=6.
start_sun = start - np.timedelta64((start.dayofweek + 1) % 7, 'D')
end_sun = end + np.timedelta64(7 - end.dayofweek - 1, 'D')
# Create the heatmap and track ticks.
num_weeks = (end_sun - start_sun).days // 7
heatmap = np.zeros((7, num_weeks))
ticks = {} # week number -> month name
for week in range(num_weeks):
for day in range(7):
date = start_sun + np.timedelta64(7 * week + day, 'D')
if date.day == 1:
ticks[week] = MONTHS[date.month - 1]
if date.dayofyear == 1:
ticks[week] += f'\n{date.year}'
if start <= date < end:
heatmap[day, week] = series.get(date, 0)
# Get the coordinates, offset by 0.5 to align the ticks.
y = np.arange(8) - 0.5
x = np.arange(num_weeks + 1) - 0.5
# Plot the heatmap. Prefer pcolormesh over imshow so that the figure can be
# vectorized when saved to a compatible format. We must invert the axis for
# pcolormesh, but not for imshow, so that it reads top-bottom, left-right.
ax = ax or plt.gca()
mesh = ax.pcolormesh(x, y, heatmap, **kwargs)
ax.invert_yaxis()
# Set the ticks.
ax.set_xticks(list(ticks.keys()))
ax.set_xticklabels(list(ticks.values()))
ax.set_yticks(np.arange(7))
ax.set_yticklabels(DAYS)
# Set the current image and axes in the pyplot API.
plt.sca(ax)
plt.sci(mesh)
return ax
def year_heat_chart(year):
# Filter by Year
year_data = tasks[(tasks['year'] == year)]
# daily count
year_dates_data = year_data['date'].value_counts().reset_index()
year_dates_data.columns = ['date', 'count']
year_dates_data['date'] = pd.to_datetime(year_dates_data['date'])
# Generate all dates in that year
first_date = str(year)+'-01-01'
last_date = str(year)+'-12-31'
all_dates = pd.date_range(start=first_date, end=last_date)
all_dates = pd.DataFrame(all_dates, columns=['date'])
# combine actual runs by date with total dates possible
year_data = pd.merge(left=all_dates, right=year_dates_data,
left_on="date", right_on="date", how="outer")
year_data['count'].fillna(0, inplace=True)
year_data = year_data.set_index(pd.DatetimeIndex(year_data['date']))
max_daily_count = round(year_data['count'].max(),2)
# key stat and title
total_tasks = round(year_data['count'].sum())
chart_title = '{} Todoist Tasks Heatmap | Total Tasks: {:,}'.format(year, total_tasks)
# set chart data
data = year_data['count']
data.index = year_data.index
# plot data
figsize = plt.figaspect(7 / 56)
fig = plt.figure(figsize=figsize)
ax = date_heatmap(data, edgecolor='black')
max_count = int(round(data.max(),0))
steps = int(round(max_count / 6, 0))
plt.colorbar(ticks=range(0, max_count, steps), pad=0.02)
cmap = mpl.cm.get_cmap('Purples', max_daily_count)
plt.set_cmap(cmap)
plt.clim(0, max_daily_count)
ax.set_aspect('equal')
ax.set_title(chart_title)
plt.show()
year_heat_chart(year=target_year)
# compare previous year:
year_heat_chart(year=2017)
```
### Yearly Summary
```
def yearly_summary(year):
print('====== {} Todoist Summary ======'.format(year))
# Data Setup
year_data = tasks[(tasks['year'] == year)]
print('Total Tasks Completed: {:,}'.format(len(year_data)))
daily_counts = year_data['date'].value_counts().sort_index()
daily_average = round(daily_counts.mean(),1)
print('Daily Task Average: {:,}'.format(daily_average))
print(' ')
project_counts = year_data['project_name'].value_counts()
print('=== Top Projects ===')
for i, v in project_counts.sort_values(ascending=False).head(7).items():
print("* ", v, 'tasks on ', i)
print(' ')
print('=== Monthly Breakdown ===')
monthly_counts = year_data['month'].value_counts().sort_index()
print('Monthly Task Average: {:,}'.format(round(monthly_counts.mean(),1)))
print('> Top 3 Months:')
for i, v in monthly_counts.sort_values(ascending=False).head(3).items():
print("* ", v, 'tasks on ', i)
print('> Bottom 3 Months:')
for i, v in monthly_counts.sort_values(ascending=True).head(3).items():
print("* ", v, 'tasks on ', i)
print(' ')
print('Top 5 Days with Most Tasks Completed:')
for i, v in daily_counts.sort_values(ascending=False).head(5).items():
print("* ", v, 'tasks on ', i)
yearly_summary(year=target_year)
```
# Reproduce Allen smFISH results with Starfish
This notebook walks through a work flow that reproduces the smFISH result for one field of view using the starfish package.
```
from copy import deepcopy
from glob import glob
import json
import os
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import ndimage as ndi
from scipy import stats
from skimage import (exposure, feature, filters, io, measure,
morphology, restoration, segmentation, transform,
util, img_as_float)
from starfish.io import Stack
from starfish.constants import Indices
# # developer note: for rapid iteration, it may be better to run this cell, download the data once, and load
# # the data from the local disk. If so, uncomment this cell and run this instead of the above.
# !aws s3 sync s3://czi.starfish.data.public/20180606/allen_smFISH ./allen_smFISH
# experiment_json = os.path.abspath("./allen_smFISH/fov_001/experiment.json")
# this is a large (1.1GB) FOV, so the download may take some time
experiment_json = 'https://dmf0bdeheu4zf.cloudfront.net/20180606/allen_smFISH/fov_001/experiment.json'
```
Load the Stack object, which while not well-named right now, should be thought of as an access point to an "ImageDataSet". In practice, we expect the Stack object or something similar to it to be an access point for _multiple_ fields of view. In practice, the thing we talk about as a "TileSet" is the `Stack.image` object. The data are currently stored in-memory in a `numpy.ndarray`, and that is where most of our operations are done.
The numpy array can be accessed through Stack.image.numpy\_array (public method, read only) or Stack.image.\_data (read and write)
```
codebook = pd.read_json('https://dmf0bdeheu4zf.cloudfront.net/20180606/allen_smFISH/fov_001/codebook.json')
codebook
```
We're ready now to load the experiment into starfish (This experiment is big, it takes a few minutes):
```
s = Stack()
s.read(experiment_json)
```
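As a quick sanity check of the accessors described above (a sketch; the exact dimensions depend on the dataset, and `_data` is private, so prefer the read-only view):
```
raw = s.image.numpy_array  # read-only view of the in-memory tensor
print(raw.shape, raw.dtype)
```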
All of our implemented operations leverage the `Stack.image.apply` method to apply a single function over each of the tiles or volumes in the FOV, depending on whether the method accepts a 2d or 3d array. Below, we're clipping each image independently at the 10th percentile. I've placed the imports next to the methods so that you can easily locate the code, should you want to look under the hood and understand what parameters have been chosen.
The verbose flag for our apply loops could use a bit more refinement. We should be able to tell it how many images it needs to process from looking at the image stack, but for now it's dumb, so it just reports the number of tiles or volumes it has processed. This FOV has 102 images over 3 volumes.
```
from starfish.pipeline.filter import Filter
s_clip = Filter.Clip(p_min=10, p_max=100, verbose=True)
s_clip.filter(s.image)
```
We're still working through the backing of the Stack.image object with the on-disk or on-cloud Tile spec. As a result, most of our methods work in-place. For now, we can hack around this by deepcopying the data before administering the operation. This notebook was developed on a 64gb workstation, so be aware of the memory usage when copying!
```
# filtered_backup = deepcopy(s)
```
If you ever want to visualize the image in the notebook, we've added a widget to do that. The first parameter is an indices dict that specifies which hybridization round, channel, z-slice you want to view. The result is a pageable visualization across that arbitrary set of slices. Below I'm visualizing the first channel, which your codebook tells me is Nmnt.
[N.B. once you click on the slider, you can page with the arrow keys on the keyboard.]
```
s.image.show_stack({Indices.CH: 0});
s_bandpass = Filter.Bandpass(lshort=0.5, llong=7, threshold=None, truncate=4, verbose=True)
s_bandpass.filter(s.image)
```
For bandpass, there's a point where things get weird, at `c == 0; z <= 14`. In that range the images look mostly like noise. However, _above_ that, they look great + background subtracted! The later stages of the pipeline appear robust to this, though, as no spots are called for the noisy sections.
```
# I wasn't sure if this clipping was supposed to be by volume or tile. I've done tile here, but it can be easily
# switched to volume.
s_clip = Filter.Clip(p_min=10, p_max=100, is_volume=False, verbose=True)
s_clip.filter(s.image)
sigma=(1, 0, 0) # filter only in z, do nothing in x, y
glp = Filter.GaussianLowPass(sigma=sigma, is_volume=True, verbose=True)
glp.filter(s.image)
```
Below, because spot finding is so slow when single-plex, we'll pilot this on a max projection to show that the parameters work. Here's what trackpy.locate, which we wrap, produces for a z-projection of channel 1. To do use our plotting methods on z-projections we have to expose some of the starfish internals, which will be improved upon.
```
from showit import image
from trackpy import locate
# grab a section from the tensor.
ch1 = s.image.max_proj(Indices.Z)[0, 1]
results = locate(ch1, diameter=3, minmass=250, maxsize=3, separation=5, preprocess=False, percentile=10)
results.columns = ['x', 'y', 'intensity', 'r', 'eccentricity', 'signal', 'raw_mass', 'ep']
# plot the z-projection
f, ax = plt.subplots(figsize=(20, 20))
ax.imshow(ch1, vmin=15, vmax=52, cmap=plt.cm.gray)
# draw called spots on top as red circles
# scale radius plots the red circle at scale_radius * spot radius
s.image._show_spots(results, ax=plt.gca(), scale_radius=7)
```
Below spot finding is on the _volumes_ for each channel. This will take about `11m30s`
```
from starfish.pipeline.features.spots.detector import SpotFinder
# I've guessed at these parameters from the allen_smFISH code, but you might want to tweak these a bit.
# as you can see, this function takes a while. It will be great to parallelize this. That's also coming,
# although we haven't figured out where it fits in the priority list.
kwargs = dict(
spot_diameter=3, # must be odd integer
min_mass=300,
max_size=3, # this is max _radius_
separation=5,
noise_size=0.65, # this is not used because preprocess is False
preprocess=False,
percentile=10, # this is irrelevant when min_mass, spot_diameter, and max_size are set properly
verbose=True,
is_volume=True,
)
lmpf = SpotFinder.LocalMaxPeakFinder(**kwargs)
spot_attributes = lmpf.find(s.image)
# save the results to disk as json
for attrs, (hyb, ch) in spot_attributes:
attrs.save(f'spot_attributes_c{ch.value}.json')
# # if you want to load them back in the same shape, here's how:
# from starfish.pipeline.features.spot_attributes import SpotAttributes
# spot_attributes = [SpotAttributes.load(attrs) for attrs in glob('spot_attributes_c*.json')]
# this is not a very performant function because of how matplotlib renders circles as individual artists,
# but I think it's useful for debugging the spot detection.
# Note that in places where spots are "missed" it is often because they've been localized to individual
# nearby z-planes, whereas most spots exist across several layers of z.
s.image.show_stack({Indices.CH: 1, Indices.HYB: 0}, show_spots=spot_attributes[1][0], figure_size=(20, 20), p_min=60, p_max=99.9);
```
What you should know about C
----
- Write, compile and run a simple program in C
- Static types
- Control flow especially `for` loop
- Using functions
- Using structs
- Pointers and arrays
- Function pointers
- Dynamic memory allocation
- Separate compilation and `make`
### Structs
**Exercise 1**
Write and use a `struct` to represent dates.
```
```
**Solution**
```
%%file ex1.c
#include <stdio.h>
typedef struct {
int day;
int month;
int year;
} date;
int main(int argc, char* argv[])
{
date d1;
d1.day = 29;
d1.month = 3;
d1.year = 2016;
date d2 = {30, 3, 2016};
date d3 = {.year = 2016, .month = 3, .day = 31};
printf("%d-%d-%d\n", d1.month, d1.day, d1.year);
printf("%d-%d-%d\n", d2.month, d2.day, d2.year);
printf("%d-%d-%d\n", d3.month, d3.day, d3.year);
}
%%bash
gcc -std=c99 -o ex1 ex1.c
%%bash
./ex1
```
### Pointers
**Exercise 2**
Write and use pointers for working with
- (a) doubles
- (b) the date struct
- (c) vector of doubles
- (d) 2D array of doubles
```
```
**Solution**
```
%%file ex2a.c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[])
{
double x1 = 2.78;
double x2 = 3.14;
double *p1 = malloc(sizeof(double));
if (p1 == NULL) return -1;
double *p2 = calloc(sizeof(double), 1);
if (p2 == NULL) return -1;
printf("%p: %.2f\n", p1, *p1);
printf("%p: %.2f\n\n", p2, *p2);
p1 = &x1;
*p2 = x2;
printf("%p: %.2f\n", p1, *p1);
printf("%p: %.2f\n", p2, *p2);
// free(p1);
// free(p2);
}
%%bash
gcc -std=c99 -o ex2a ex2a.c
%%bash
./ex2a
```
**Solution**
```
%%file ex2b.c
#include <stdio.h>
#include <stdlib.h>
typedef struct {
int day;
int month;
int year;
} date;
int main(int argc, char* argv[])
{
date *d1 = malloc(sizeof(date));
if (d1 == NULL) return -1;
d1->day = 29;
d1->month = 3;
d1->year = 2016;
printf("%d-%d-%d\n", d1->month, d1->day, d1->year);
printf("%d-%d-%d\n", (*d1).month, (*d1).day, (*d1).year);
free(d1);
}
%%bash
gcc -std=c99 -o ex2b ex2b.c
%%bash
./ex2b
```
**Solution**
```
%%file ex2c.c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[])
{
int n = atoi(argv[1]);
double *xs = calloc(sizeof(double), n);
if (xs == NULL) return -1;
for (int i=0; i<n; i++) {
xs[i] = i*i;
}
printf("%.2f\n", *(xs));
printf("%.2f\n", *(xs + 2));
printf("%.2f\n", xs[0]);
printf("%.2f\n", xs[2]);
free(xs);
}
%%bash
gcc -std=c99 -o ex2c ex2c.c
%%bash
./ex2c 10
```
**Solution**
```
%%file ex2d.c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[])
{
    int rows = 2;
    int cols = 3;
    double **xs = malloc(sizeof(double*) * rows); // an array of row pointers, not doubles
for (int i=0; i < rows; i++) {
xs[i] = calloc(sizeof(double), cols);
}
for (int i=0; i<rows; i++) {
for (int j=0; j<cols; j++) {
xs[i][j] = i+j;
}
}
printf("%.2f\n", xs[0][0]);
printf("%.2f\n", xs[1][2]);
for (int i=0; i<rows; i++) {
free(xs[i]);
}
free(xs);
}
%%bash
gcc -std=c99 -o ex2d ex2d.c
%%bash
./ex2d
```
### Function pointers
**Exercise 3**
Write and use a function pointer.
**Solution**
```
%%file ex3.c
#include <stdio.h>
#include <stdlib.h>
double add(double x, double y) {
return x + y;
}
double mult(double x, double y) {
return x * y;
}
int main(int argc, char* argv[])
{
double a = 3.0;
double b = 4.0;
double (*f)(double, double) = add;
typedef double (*fp)(double, double);
fp g = mult;
printf("%.2f\n", add(a, b));
printf("%.2f\n", f(a, b));
printf("%.2f\n", g(a, b));
}
%%bash
gcc -std=c99 -o ex3 ex3.c
%%bash
./ex3
```
### Separate compilation
**Exercise 4**
Write header and implementation files for the add function, and use the function in a separate driver file. Use a makefile to compile the executable.
```
```
**Solution**
```
%%file ex4.h
#pragma once
double add(double x, double y);
%%file ex4.c
#include "ex4.h"
double add(double x, double y) {
return x + y;
}
%%file ex4_main.c
#include <stdio.h>
#include "ex4.h"
int main() {
double a = 3.0;
double b = 4.0;
printf("%.2f\n", add(a, b));
}
%%file makefile
ex4_main: ex4_main.c ex4.o
gcc -std=c99 -o ex4_main ex4_main.c ex4.o
ex4.o: ex4.c
gcc -std=c99 -c ex4.c
%%bash
make
%%bash
./ex4_main
%%file makefile
TARGET = ex4_main
OBJECTS = ex4.o
CFLAGS = -O3 -std=c99
LDLIBS = -lm
CC = gcc
all: $(TARGET)
clean:
rm $(TARGET) $(OBJECTS)
$(TARGET): $(OBJECTS)
%%bash
make clean
make
%%bash
./ex4_main
```
What you should know about C++
----
- Anonymous functions
- Generalized function pointers
- Ranged for
- Using the standard template library
- Iterators
- Containers
- Algorithms
- The `random` library
- Using `armadillo`
**Exercise 5**
Implement Newton's method in 1D for root finding. Pass in the function and gradient as generalized function pointers. Use the method to find all roots of the polynomial equation $f(x) = x^3 - 7x - 6$
```
```
**Solution**
```
%%file ex5.cpp
#include <iostream>
#include <vector>
#include <iomanip>
#include <cmath>
#include <functional>
using std::vector;
using std::cout;
using std::function;
using func = function<double(double)>;
double newton(double x, func f, func fprime, int max_iter=10) {
for (int i=0; i<max_iter; i++) {
x -= f(x)/fprime(x);
}
return x;
};
int main()
{
auto f = [](double x) { return pow(x, 3) - 7*x - 6; };
auto fprime = [](double x) { return 3.0*pow(x, 2) - 7; };
vector<double> x = {-5, 0, 5};
for (auto x_: x) {
cout << std::setw(2) << x_ << ": "
<< std::setw(3) << newton(x_, f, fprime) << "\n";
}
}
%%bash
g++ -std=c++11 ex5.cpp -o ex5
%%bash
./ex5
```
**Exercise 6**
Use the armadillo library to
- Generate 10 x-coordinates linearly spaced between 10 and 15
- Generate 10 random y-values as $y = 3x^2 - 7x + 2 + \epsilon$ where $\epsilon \sim 10 N(0,1)$
- Find the length of $x$ and $y$ and the Euclidean distance between $x$ and $y$
- Find the correlation between $x$ and $y$
- Solve the linear system to find a quadratic fit for this data
```
```
**Solution**
```
%%file ex6.cpp
#include <iostream>
#include <fstream>
#include <armadillo>
using std::cout;
using std::ofstream;
using namespace arma;
int main()
{
vec x = linspace<vec>(10.0,15.0,10);
vec eps = 10*randn<vec>(10);
vec y = 3*x%x - 7*x + 2 + eps;
cout << "x:\n" << x << "\n";
cout << "y:\n" << y << "\n";
cout << "Length of x is: " << norm(x) << "\n";
cout << "Length of y is: " << norm(y) << "\n";
cout << "Distance(x, y) is: " << norm(x-y) << "\n";
cout << "Correlation(x, y) is: " << cor(x, y) << "\n";
mat A = join_rows(ones<vec>(10), x);
A = join_rows(A, x%x);
cout << "A:\n" << A << "\n";
vec b = solve(A, y);
cout << "b:\n" << b << "\n";
ofstream fout1("x.txt");
x.print(fout1);
ofstream fout2("y.txt");
y.print(fout2);
ofstream fout3("b.txt");
b.print(fout3);
}
%%bash
g++ -std=c++11 ex6.cpp -o ex6 -larmadillo
%%bash
./ex6
x = np.loadtxt('x.txt')
y = np.loadtxt('y.txt')
b = np.loadtxt('b.txt')
plt.scatter(x, y, s=40)
plt.plot(x, b[0] + b[1]*x + b[2]*x**2, c='red')
pass
```
## Training Network
In supervised training, the network processes inputs and compares its resulting outputs against the desired outputs.
Errors are propagated back through the system, causing the system to adjust the weights which control the network. This is done using the Backpropagation algorithm, also called backprop. This process occurs over and over as the weights are continually tweaked.
The set of data which enables the training is called the "training set."
During the training of a network, the same set of data is processed many times as the connection weights are continually refined. Iteratively passing batches of data through the network and updating the weights so that the error decreases is known as Stochastic Gradient Descent (SGD).
Training refers to determining the best set of weights for maximizing a neural network's accuracy.
The amount by which the weights are changed at each step is determined by a parameter called the learning rate.
Neural networks can be used without knowing precisely how training works. Most modern machine learning libraries have greatly automated the training process.
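To make the update rule concrete, here is a minimal sketch (not part of the original notebook) of a single SGD step for a vector of weights; the values and names are purely illustrative.
```
import numpy as np

def sgd_step(weights, gradients, learning_rate=0.01):
    # Move each weight a small step against its gradient; the learning rate scales the step.
    return weights - learning_rate * gradients

w = np.array([0.5, -1.2, 0.3])    # current weights
g = np.array([0.1, -0.4, 0.05])   # gradients of the loss with respect to the weights
w = sgd_step(w, g)                # weights after one update
print(w)
```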
### NOTE:
Basically, this notebook is prepared to be used within **Google Colab**: https://colab.research.google.com/.
Google Colaboratory provides a **free Tesla K80 GPU** and comes already prepared for developing deep learning applications.
The first time you open this notebook, do not forget to enable the **Python 3** runtime and the **GPU** accelerator in the Google Colab **Notebook Settings**.
### Setup Project
Create workspace and change directory.
```
PROJECT_HOME = '/content/keras-movie-reviews-classification'
import os.path
if not os.path.exists(PROJECT_HOME):
os.makedirs(PROJECT_HOME)
os.chdir(PROJECT_HOME)
!pwd
```
### Import Project
Import GitHub project to workspace.
```
# Import project and override existing data.
!git init .
!git remote add -t \* -f origin https://github.com/alex-agency/keras-movie-reviews-classification.git
!git reset --hard origin/master
!git checkout
!ls -la input
```
### Keras
Keras is a high-level API, written in Python and capable of running on top of TensorFlow, Theano, or CNTK deep learning frameworks.
Keras provides a simple and modular API to create and train Neural Networks, hiding most of the complicated details under the hood.
By default, Keras is configured to use Tensorflow as the backend since it is the most popular choice.
Keras has become very popular because of its simplicity.
### Keras workflow
<img src="https://www.learnopencv.com/wp-content/uploads/2017/09/keras-workflow.jpg" width="700px">
```
# Load Keras libraries
from keras.models import load_model
from keras import callbacks
```
### Load model and dataset
Loading model definition from HDF5 file.
```
import numpy as np
# Load data from numpy array
loaded = np.load('input/dataset.npz')
(X_train, Y_train), (X_test, Y_test) = loaded['dataset']
# Load model from HDF5 file.
model = load_model('input/mlps-model-definition.h5') # model with MLP network
print("Model Summary")
print(model.summary())
```
### Configuring the training process
Once the model is ready, we need to configure the learning process.
Compile the model means that Keras will generate a computation graph in TensorFlow.
### Loss functions
In a supervised learning problem, we have to measure the error between the actual values and the predicted values. There are different metrics that can be used to evaluate this error. This metric is often called the loss function, cost function, or objective function. There can be more than one loss function, depending on what you are doing with the error. In general, we use the following (see the short sketch after this list):
* binary-cross-entropy for a binary classification problem
* categorical-cross-entropy for a multi-class classification problem
* mean-squared-error for a regression problem and so on
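To make the first of these concrete, the cell below is a small hand-computed sketch (not part of the original notebook) of binary cross-entropy, the loss used later for this binary classification problem.
```
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    # Average negative log-likelihood of the true labels under the predicted probabilities.
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.7, 0.4])
print(binary_cross_entropy(y_true, y_pred))  # smaller is better
```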
### Optimizers
An Optimizer determines how the network weights are updated.
Keras provides a lot of optimizers to choose from.
RMSprop and Adam is a good choice of optimizer for most problems.
### Overfitting
Overfitting describes the situation in which your model is over-optimized to accurately predict the training set, at the expense of generalizing to unknown data (which is the objective of learning in the first place). This can happen because the model greatly twists itself to perfectly conform to the training set, even capturing its underlying noise.
How can we avoid overfitting? The simplest solution is to split our dataset into a training set and a test set. The training set is used for the optimization procedure, but we evaluate the accuracy of our model by forwarding the test set to the trained model and measuring its accuracy.
During training, we can monitor the accuracy of the model on the training set and test set. The longer we train, the more likely our training accuracy is to go higher and higher, but at some point, it is likely the test set will stop improving. This is a cue to stop training at that point. We should generally expect that training accuracy is higher than test accuracy, but if it is much higher, that is a clue that we have overfit.
```
# Compile model
model.compile(loss='binary_crossentropy', # cross-entropy loss function for binary classification
optimizer='adam', # Adam optimiser one of the most popular optimization method
metrics=['accuracy']) # print the accuracy during training
# Early stopping callback
# Stop training when a monitored quantity has stopped improving.
# Using held-out validation set, to determine when to terminate the training process to avoid overfitting.
early_stopping = callbacks.EarlyStopping(monitor='val_loss', # quantity to be monitored
min_delta=0, # minimum change in the monitored quantity to qualify as an improvement
patience=2, # number of epochs with no improvement after which training will be stopped
verbose=1, mode='auto')
# Train model
history = model.fit(X_train, Y_train, # train the model using the training set
batch_size=8, # in each iteration, use this many training examples at once
epochs=20, # iterate this many times over the entire training set
callbacks=[early_stopping], # called after each epoch
validation_split=0.2, # use 20% of the data for validation
verbose=2) # enables detailed logs, where 2 is print some information after each epoch
# Evaluate model
score = model.evaluate(X_test, Y_test, verbose=0) # evaluate the trained model on the test set
print('Test loss:', score[0])
print('Test accuracy:', score[1])
import matplotlib.pyplot as plt
# Plot the loss over each epochs.
plt.plot(history.history['loss'], label='training')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
# Plot the accuracy evaluated on the training set.
plt.plot(history.history['acc'], label='training');
plt.plot(history.history['val_acc'], label='validation');
plt.legend()
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.show()
```
### Export trained model to file
Saving whole Keras model into a single HDF5 file which will contain:
* the architecture of the model, allowing the model to be re-created
* the weights of the model
* the training configuration (loss, optimizer)
* the state of the optimizer, allowing you to resume training exactly where you left off.
```
# Model filename
model_filename = 'mlps-model.h5'
# Create output directory
output_dir = 'output'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model_file = os.path.join(output_dir, model_filename)
# Export model into HDF5 file.
model.save(model_file)
!ls -la output
```
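As a quick sanity check (an illustrative addition, not part of the original notebook), the saved file can be reloaded with `load_model`; this restores the architecture, weights, and optimizer state, so training could also be resumed from it.
```
# Reload the exported model and verify it still scores the same on the test set.
reloaded_model = load_model(model_file)
reloaded_score = reloaded_model.evaluate(X_test, Y_test, verbose=0)
print('Reloaded test accuracy:', reloaded_score[1])
```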
### Downloading file to your local file system
It will invoke a browser download of the file to your local computer.
```
from google.colab import files
# Download file
files.download(model_file)
```
# Basic Motion
Welcome to JetBot's browser based programming interface! This document is
called a *Jupyter Notebook*, which combines text, code, and graphic
display all in one! Pretty neat, huh? If you're unfamiliar with *Jupyter* we suggest clicking the
``Help`` drop down menu in the top toolbar. This has useful references for
programming with *Jupyter*.
In this notebook, we'll cover the basics of controlling JetBot.
### Importing the Robot class
To get started programming JetBot, we'll need to import the ``Robot`` class. This class
allows us to easily control the robot's motors! This is contained in the ``jetbot`` package.
> If you're new to Python, a *package* is essentially a folder containing
> code files. These code files are called *modules*.
To import the ``Robot`` class, highlight the cell below and press ``ctrl + enter`` or the ``play`` icon above.
This will execute the code contained in the cell
```
from jetbot import Robot
```
Now that we've imported the ``Robot`` class we can initialize the class *instance* as follows.
```
robot = Robot()
```
### Commanding the robot
Now that we've created our ``Robot`` instance we named "robot", we can use this instance
to control the robot. To make the robot spin counterclockwise at 30% of its max speed
we can call the following
> WARNING: This next command will make the robot move! Please make sure the robot has clearance.
```
robot.left(speed=0.3)
```
Cool, you should see the robot spin counterclockwise!
> If your robot didn't turn left, that means one of the motors is wired backwards! Try powering down your
> robot and swapping the terminals of the ``red`` and ``black`` cables on the incorrect motor.
>
> REMINDER: Always be careful to check your wiring, and don't change the wiring on a running system!
Now, to stop the robot you can call the ``stop`` method.
```
robot.stop()
```
Maybe we only want to run the robot for a set period of time. For that, we can use the Python ``time`` package.
```
import time
```
This package defines the ``sleep`` function, which causes the code execution to block for the specified number of seconds
before running the next command. Try the following to make the robot turn left only for half a second.
```
robot.left(0.3)
time.sleep(0.5)
robot.stop()
```
Great. You should see the robot turn left for a bit and then stop.
> Wondering what happened to the ``speed=`` inside the ``left`` method? Python allows
> us to set function parameters by either their name, or the order that they are defined
> (without specifying the name).
The ``Robot`` class also has the methods ``right``, ``forward``, and ``backward``. Try creating your own cell to make
the robot move forward at 50% speed for one second.
Create a new cell by highlighting an existing cell and pressing ``b`` or the ``+`` icon above. Once you've done that, type in the code that you think will make the robot move forward at 50% speed for one second.
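If you get stuck, one possible answer is sketched below (assuming the same ``robot`` instance and the ``time`` module imported above).
```
robot.forward(0.5)
time.sleep(1.0)
robot.stop()
```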
### Controlling motors individually
Above we saw how we can control the robot using commands like ``left``, ``right``, etc. But what if we want to set each motor speed
individually? Well, there are two ways you can do this
The first way is to call the ``set_motors`` method. For example, to turn along a left arc for a second we could set the left motor to 30% and the right motor to 60% as follows.
```
robot.set_motors(0.3, 0.6)
time.sleep(1.0)
robot.stop()
```
Great! You should see the robot move along a left arc. But actually, there's another way we could accomplish the same thing.
The ``Robot`` class has two attributes named ``left_motor`` and ``right_motor`` that represent each motor individually.
These attributes are ``Motor`` class instances, each which contains a ``value`` attribute. This ``value`` attribute
is a [traitlet](https://github.com/ipython/traitlets) which generates ``events`` when assigned a new value. In the motor
class, we attach a function that updates the motor commands whenever the value changes.
So, to accomplish the exact same thing we did above, we could execute the following.
```
robot.left_motor.value = 0.34
robot.left_motor.alpha = 0.9
robot.right_motor.value = 0.34
robot.right_motor.alpha = 0.81
time.sleep(3)
robot.left_motor.value = 0.0
robot.right_motor.value = 0.0
```
You should see the robot move in the same exact way!
### Link motors to traitlets
A really cool feature about these [traitlets](https://github.com/ipython/traitlets) is that we can
also link them to other traitlets! This is super handy because Jupyter Notebooks allow us
to make graphical ``widgets`` that use traitlets under the hood. This means we can attach
our motors to ``widgets`` to control them from the browser, or just visualize the value.
To show how to do this, let's create and display two sliders that we'll use to control our motors.
```
import ipywidgets.widgets as widgets
from IPython.display import display
# create two sliders with range [-1.0, 1.0]
left_slider = widgets.FloatSlider(description='left', min=-1.0, max=1.0, step=0.01, orientation='vertical')
right_slider = widgets.FloatSlider(description='right', min=-1.0, max=1.0, step=0.01, orientation='vertical')
# create a horizontal box container to place the sliders next to each other
slider_container = widgets.HBox([left_slider, right_slider])
# display the container in this cell's output
display(slider_container)
```
You should see two ``vertical`` sliders displayed above.
> HELPFUL TIP: In Jupyter Lab, you can actually "pop" the output of cells into an entirely separate window! It will still be
> connected to the notebook, but displayed separately. This is helpful if we want to pin the output of code we executed elsewhere.
> To do this, right click the output of the cell and select ``Create New View for Output``. You can then drag the new window
> to a location you find pleasing.
Try clicking and dragging the sliders up and down. Notice nothing happens when we move the sliders currently. That's because we haven't connected them to motors yet! We'll do that by using the ``link`` function from the traitlets package.
```
import traitlets
left_link = traitlets.link((left_slider, 'value'), (robot.left_motor, 'value'))
right_link = traitlets.link((right_slider, 'value'), (robot.right_motor, 'value'))
```
Now try dragging the sliders (slowly at first). You should see the respective motor turn!
The ``link`` function that we created above actually creates a bi-directional link! That means,
if we set the motor values elsewhere, the sliders will update! Try executing the code block below
```
robot.forward(0.3)
time.sleep(0.5)
robot.stop()
```
You should see the sliders respond to the motor commands! If we want to remove this connection we can call the
``unlink`` method of each link.
```
left_link.unlink()
right_link.unlink()
```
But what if we don't want a *bi-directional* link? Let's say we only want to use the sliders to display the motor values,
but not control them. For that we can use the ``dlink`` function. The left input is the ``source`` and the right input is the ``target``
```
left_link = traitlets.dlink((robot.left_motor, 'value'), (left_slider, 'value'))
right_link = traitlets.dlink((robot.right_motor, 'value'), (right_slider, 'value'))
```
Now try moving the sliders. You should see that the robot doesn't respond. But when we set the motors using a different method,
the sliders will update and display the value!
### Attach functions to events
Another way to use traitlets, is by attaching functions (like ``forward``) to events. These
functions will get called whenever a change to the object occurs, and will be passed some information about that change
like the ``old`` value and the ``new`` value.
Let's create and display some buttons that we'll use to control the robot.
```
# create buttons
button_layout = widgets.Layout(width='100px', height='80px', align_self='center')
stop_button = widgets.Button(description='stop', button_style='danger', layout=button_layout)
forward_button = widgets.Button(description='forward', layout=button_layout)
backward_button = widgets.Button(description='backward', layout=button_layout)
left_button = widgets.Button(description='left', layout=button_layout)
right_button = widgets.Button(description='right', layout=button_layout)
# display buttons
middle_box = widgets.HBox([left_button, stop_button, right_button], layout=widgets.Layout(align_self='center'))
controls_box = widgets.VBox([forward_button, middle_box, backward_button])
display(controls_box)
```
You should see a set of robot controls displayed above! But right now they won't do anything. To do that
we'll need to create some functions that we'll attach to the button's ``on_click`` event.
```
def stop(change):
robot.stop()
def step_forward(change):
robot.forward(0.3)
time.sleep(0.5)
robot.stop()
def step_backward(change):
robot.backward(0.3)
time.sleep(0.5)
robot.stop()
def step_left(change):
robot.left(0.3)
time.sleep(0.5)
robot.stop()
def step_right(change):
robot.right(0.3)
time.sleep(0.5)
robot.stop()
```
Now that we've defined the functions, let's attach them to the on-click events of each button
```
# link buttons to actions
stop_button.on_click(stop)
forward_button.on_click(step_forward)
backward_button.on_click(step_backward)
left_button.on_click(step_left)
right_button.on_click(step_right)
```
Now when you click each button, you should see the robot move!
### Heartbeat Killswitch
Here we show how to connect a 'heartbeat' to stop the robot from moving. This is a simple way to detect if the robot connection is alive. You can lower the slider below to reduce the period (in seconds) of the heartbeat. If a round-trip communication between the browser and the robot cannot be made within two heartbeats, the ``status`` attribute of the heartbeat will be set to ``dead``. As soon as the connection is restored, the ``status`` attribute will return to ``alive``.
```
from jetbot import Heartbeat
heartbeat = Heartbeat()
# this function will be called when heartbeat 'alive' status changes
def handle_heartbeat_status(change):
if change['new'] == Heartbeat.Status.dead:
robot.stop()
heartbeat.observe(handle_heartbeat_status, names='status')
period_slider = widgets.FloatSlider(description='period', min=0.001, max=0.5, step=0.01, value=0.5)
traitlets.dlink((period_slider, 'value'), (heartbeat, 'period'))
display(period_slider, heartbeat.pulseout)
```
Try executing the code below to start the motors, and then lower the slider to see what happens. You can also try disconnecting your robot or PC.
```
robot.left(0.2)
# now lower the `period` slider above until the network heartbeat can't be satisfied
```
### Conclusion
That's it for this example notebook! Hopefully you feel confident that you can program your robot to move around now :)
# Machine Learning Textbook, 3rd Edition
# Chapter 14 - A Closer Look at the Mechanics of TensorFlow (2/3)
**You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it on Google Colab (colab.research.google.com) via the links below.**
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch14/ch14_part2.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />View in Jupyter Notebook Viewer</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch14/ch14_part2.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
### Table of Contents
- TensorFlow Estimators
- Working with feature columns
- Machine learning with pre-made Estimators
```
import numpy as np
import tensorflow as tf
import pandas as pd
from IPython.display import Image
tf.__version__
```
## TensorFlow Estimators
##### Steps for using pre-made Estimators
* **Step 1:** Define an input function for loading the data
* **Step 2:** Define feature columns to bridge the data and the estimator
* **Step 3:** Instantiate an estimator object, or convert a Keras model into an estimator
* **Step 4:** Use the estimator: train(), evaluate(), predict()
```
tf.random.set_seed(1)
np.random.seed(1)
```
### Working with feature columns
* Definition: https://developers.google.com/machine-learning/glossary/#feature_columns
* Documentation: https://www.tensorflow.org/api_docs/python/tf/feature_column
```
Image(url='https://git.io/JL56E', width=700)
dataset_path = tf.keras.utils.get_file("auto-mpg.data",
("http://archive.ics.uci.edu/ml/machine-learning-databases"
"/auto-mpg/auto-mpg.data"))
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
'Weight', 'Acceleration', 'ModelYear', 'Origin']
df = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
df.tail()
print(df.isna().sum())
df = df.dropna()
df = df.reset_index(drop=True)
df.tail()
import sklearn
import sklearn.model_selection
df_train, df_test = sklearn.model_selection.train_test_split(df, train_size=0.8)
train_stats = df_train.describe().transpose()
train_stats
numeric_column_names = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration']
df_train_norm, df_test_norm = df_train.copy(), df_test.copy()
for col_name in numeric_column_names:
mean = train_stats.loc[col_name, 'mean']
std = train_stats.loc[col_name, 'std']
df_train_norm.loc[:, col_name] = (df_train_norm.loc[:, col_name] - mean)/std
df_test_norm.loc[:, col_name] = (df_test_norm.loc[:, col_name] - mean)/std
df_train_norm.tail()
```
#### Numeric columns
```
numeric_features = []
for col_name in numeric_column_names:
numeric_features.append(tf.feature_column.numeric_column(key=col_name))
numeric_features
feature_year = tf.feature_column.numeric_column(key="ModelYear")
bucketized_features = []
bucketized_features.append(tf.feature_column.bucketized_column(
source_column=feature_year,
boundaries=[73, 76, 79]))
print(bucketized_features)
feature_origin = tf.feature_column.categorical_column_with_vocabulary_list(
key='Origin',
vocabulary_list=[1, 2, 3])
categorical_indicator_features = []
categorical_indicator_features.append(tf.feature_column.indicator_column(feature_origin))
print(categorical_indicator_features)
```
### Machine learning with pre-made Estimators
```
def train_input_fn(df_train, batch_size=8):
df = df_train.copy()
train_x, train_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(train_x), train_y))
# shuffle, repeat, and batch
return dataset.shuffle(1000).repeat().batch(batch_size)
## inspect the dataset
ds = train_input_fn(df_train_norm)
batch = next(iter(ds))
print('Keys:', batch[0].keys())
print('ModelYear:', batch[0]['ModelYear'])
all_feature_columns = (numeric_features +
bucketized_features +
categorical_indicator_features)
print(all_feature_columns)
regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
model_dir='models/autompg-dnnregressor/')
EPOCHS = 1000
BATCH_SIZE = 8
total_steps = EPOCHS * int(np.ceil(len(df_train) / BATCH_SIZE))
print('Training steps:', total_steps)
regressor.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE),
steps=total_steps)
reloaded_regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
warm_start_from='models/autompg-dnnregressor/',
model_dir='models/autompg-dnnregressor/')
def eval_input_fn(df_test, batch_size=8):
df = df_test.copy()
test_x, test_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))
return dataset.batch(batch_size)
eval_results = reloaded_regressor.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
for key in eval_results:
print('{:15s} {}'.format(key, eval_results[key]))
print('Average loss {:.4f}'.format(eval_results['average_loss']))
pred_res = regressor.predict(input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
print(next(iter(pred_res)))
```
#### Boosted Tree Regressor
```
boosted_tree = tf.estimator.BoostedTreesRegressor(
feature_columns=all_feature_columns,
n_batches_per_layer=20,
n_trees=200)
boosted_tree.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE))
eval_results = boosted_tree.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
print(eval_results)
print('Average loss {:.4f}'.format(eval_results['average_loss']))
```
# In this notebook an estimator for the Volume will be trained. No hyperparameter search will be performed; instead, the hyperparameters found for the 'Close' values estimator will be reused.
```
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
from sklearn.externals import joblib
import utils.preprocessing as pp
import predictor.feature_extraction as fe
```
## Let's generate the datasets
```
def generate_one_set(params):
# print(('-'*70 + '\n {}, {} \n' + '-'*70).format(params['base_days'].values, params['ahead_days'].values))
tic = time()
train_val_time = int(params['train_val_time'])
base_days = int(params['base_days'])
step_days = int(params['step_days'])
ahead_days = int(params['ahead_days'])
print('Generating: base{}_ahead{}'.format(base_days, ahead_days))
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Getting the data
data_df = pd.read_pickle('../../data/data_train_val_df.pkl')
today = data_df.index[-1] # Real date
print(pid + ') data_df loaded')
# Drop symbols with many missing points
data_df = pp.drop_irrelevant_symbols(data_df, params['GOOD_DATA_RATIO'])
print(pid + ') Irrelevant symbols dropped.')
# Generate the intervals for the predictor
x, y = fe.generate_train_intervals(data_df,
train_val_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_volume_one_to_one,
target_feature=fe.VOLUME_FEATURE)
print(pid + ') Intervals generated')
# Drop "bad" samples and fill missing data
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, params['SAMPLES_GOOD_DATA_RATIO'])
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
print(pid + ') Irrelevant samples dropped and missing data filled.')
# Pickle that
x.to_pickle('../../data/x_volume_{}.pkl'.format(pid))
y.to_pickle('../../data/y_volume_{}.pkl'.format(pid))
toc = time()
print('%s) %i intervals generated in: %i seconds.' % (pid, x.shape[0], (toc-tic)))
return pid, x, y
best_params_df = pd.read_pickle('../../data/best_params_final_df.pkl').loc[1,:]
to_drop = [
'model',
'mre',
'r2',
'x_filename',
'y_filename',
'train_days'
]
best_params_df.drop(to_drop, inplace=True)
best_params_df
generate_one_set(best_params_df)
x_volume = pd.read_pickle('../../data/x_volume_base112_ahead1.pkl')
print(x_volume.shape)
x_volume.head()
y_volume = pd.read_pickle('../../data/y_volume_base112_ahead1.pkl')
print(y_volume.shape)
y_volume.head()
```
## Let's generate the test dataset, also
```
def generate_one_test_set(params, data_df):
# print(('-'*70 + '\n {}, {} \n' + '-'*70).format(params['base_days'].values, params['ahead_days'].values))
tic = time()
train_val_time = int(params['train_val_time'])
base_days = int(params['base_days'])
step_days = int(params['step_days'])
ahead_days = int(params['ahead_days'])
print('Generating: base{}_ahead{}'.format(base_days, ahead_days))
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Getting the data
today = data_df.index[-1] # Real date
print(pid + ') data_df loaded')
# Drop symbols with many missing points
y_train_df = pd.read_pickle('../../data/y_volume_{}.pkl'.format(pid))
kept_symbols = y_train_df.index.get_level_values(1).unique().tolist()
data_df = data_df.loc[:, (slice(None), kept_symbols)]
print(pid + ') Irrelevant symbols dropped.')
# Generate the intervals for the predictor
x, y = fe.generate_train_intervals(data_df,
train_val_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_volume_one_to_one,
target_feature=fe.VOLUME_FEATURE)
print(pid + ') Intervals generated')
# Drop "bad" samples and fill missing data
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, params['SAMPLES_GOOD_DATA_RATIO'])
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
print(pid + ') Irrelevant samples dropped and missing data filled.')
# Pickle that
x.to_pickle('../../data/x_volume_{}_test.pkl'.format(pid))
y.to_pickle('../../data/y_volume_{}_test.pkl'.format(pid))
toc = time()
print('%s) %i intervals generated in: %i seconds.' % (pid, x.shape[0], (toc-tic)))
return pid, x, y
data_test_df = pd.read_pickle('../../data/data_test_df.pkl')
generate_one_test_set(best_params_df, data_test_df)
x_volume_test = pd.read_pickle('../../data/x_volume_base112_ahead1_test.pkl')
print(x_volume_test.shape)
x_volume_test.head()
y_volume_test = pd.read_pickle('../../data/y_volume_base112_ahead1_test.pkl')
print(y_volume_test.shape)
y_volume_test.head()
```
## Let's train a predictor for the 'Volume' with the same hyperparameters as for the 'Close' one.
```
best_params_df = pd.read_pickle('../../data/best_params_final_df.pkl')
import predictor.feature_extraction as fe
from predictor.linear_predictor import LinearPredictor
import utils.misc as misc
import predictor.evaluation as ev
ahead_days = 1
# Get some parameters
train_days = int(best_params_df.loc[ahead_days, 'train_days'])
GOOD_DATA_RATIO, \
train_val_time, \
base_days, \
step_days, \
ahead_days, \
SAMPLES_GOOD_DATA_RATIO, \
x_filename, \
y_filename = misc.unpack_params(best_params_df.loc[ahead_days,:])
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Get the datasets
x_train = pd.read_pickle('../../data/x_volume_{}.pkl'.format(pid))
y_train = pd.read_pickle('../../data/y_volume_{}.pkl'.format(pid))
x_test = pd.read_pickle('../../data/x_volume_{}_test.pkl'.format(pid)).sort_index()
y_test = pd.DataFrame(pd.read_pickle('../../data/y_volume_{}_test.pkl'.format(pid))).sort_index()
# Let's cut the training set to use only the required number of samples
end_date = x_train.index.levels[0][-1]
start_date = fe.add_market_days(end_date, -train_days)
x_sub_df = x_train.loc[(slice(start_date,None),slice(None)),:]
y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date,None),slice(None))])
# Create the estimator and train
estimator = LinearPredictor()
estimator.fit(x_sub_df, y_sub_df)
# Get the training and test predictions
y_train_pred = estimator.predict(x_sub_df)
y_test_pred = estimator.predict(x_test)
# Get the training and test metrics for each symbol
metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred)
metrics_test = ev.get_metrics_df(y_test, y_test_pred)
# Show the mean metrics
metrics_df = pd.DataFrame(columns=['train', 'test'])
metrics_df['train'] = metrics_train.mean()
metrics_df['test'] = metrics_test.mean()
print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70))
# Plot the metrics in time
metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days)
metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days)
plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.')
plt.title('$r^2$ metrics')
plt.legend()
plt.figure()
plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.')
plt.title('MRE metrics')
plt.legend()
joblib.dump(estimator, '../../data/best_volume_predictor.pkl')
```
# SentencePiece and BPE
## Introduction to Tokenization
In order to process text in neural network models it is first required to **encode** the text as numbers with ids, since tensor operations act on numbers. Finally, if the output of the network is to be words, it is required to **decode** the predicted token ids back to text.
To encode text, the first decision that has to be made is at what level of granularity we are going to consider the text, because ultimately it is from these **tokens** that features are going to be created. Many different experiments have been carried out using *words*, *morphological units*, *phonemic units*, and *characters*. For example,
- Tokens are tricky. (raw text)
- Tokens are tricky . ([words](https://arxiv.org/pdf/1301.3781))
- Token s _ are _ trick _ y . ([morphemes](https://arxiv.org/pdf/1907.02423.pdf))
- t oʊ k ə n z _ ɑː _ ˈt r ɪ k i. ([phonemes](https://www.aclweb.org/anthology/W18-5812.pdf), for STT)
- T o k e n s _ a r e _ t r i c k y . ([character](https://www.aclweb.org/anthology/C18-1139/))
But how to identify these units, such as words, is largely determined by the language they come from. For example, in many European languages a space is used to separate words, while in some Asian languages there are no spaces between words. Compare English and Mandarin.
- Tokens are tricky. (original sentence)
- 标记很棘手 (Mandarin)
- Biāojì hěn jíshǒu (pinyin)
- 标记 很 棘手 (Mandarin with spaces)
So, the ability to **tokenize**, i.e. split text into meaningful fundamental units is not always straight-forward.
Also, there are practical issues of how large our *vocabulary* of words, `vocab_size`, should be, considering memory limitations vs. coverage. A compromise may need to be made between:
* the finest-grained models employing characters which can be memory intensive and
* more computationally efficient *subword* units such as [n-grams](https://arxiv.org/pdf/1712.09405) or larger units.
In [SentencePiece](https://www.aclweb.org/anthology/D18-2012.pdf) unicode characters are grouped together using either a [unigram language model](https://www.aclweb.org/anthology/P18-1007.pdf) (used in this week's assignment) or [BPE](https://arxiv.org/pdf/1508.07909.pdf), **byte-pair encoding**. We will discuss BPE, since BERT and many of its variants use a modified version of BPE and its pseudocode is easy to implement and understand... hopefully!
## SentencePiece Preprocessing
### NFKC Normalization
Unsurprisingly, even using unicode to initially tokenize text can be ambiguous, e.g.,
```
eaccent = '\u00E9'
e_accent = '\u0065\u0301'
print(f'{eaccent} = {e_accent} : {eaccent == e_accent}')
```
SentencePiece uses the Unicode standard normalization form, [NFKC](https://en.wikipedia.org/wiki/Unicode_equivalence), so this isn't an issue. Looking at our example from above but with normalization:
```
from unicodedata import normalize
norm_eaccent = normalize('NFKC', '\u00E9')
norm_e_accent = normalize('NFKC', '\u0065\u0301')
print(f'{norm_eaccent} = {norm_e_accent} : {norm_eaccent == norm_e_accent}')
```
Normalization has actually changed the unicode code point (unicode unique id) for one of these two characters.
```
def get_hex_encoding(s):
return ' '.join(hex(ord(c)) for c in s)
def print_string_and_encoding(s):
print(f'{s} : {get_hex_encoding(s)}')
for s in [eaccent, e_accent, norm_eaccent, norm_e_accent]:
print_string_and_encoding(s)
```
This normalization has other side effects which may be considered useful such as converting curly quotes “ to " their ASCII equivalent. (<sup>*</sup>Although we *now* lose directionality of the quote...)
### Lossless Tokenization<sup>*</sup>
SentencePiece also ensures that when you tokenize your data and detokenize your data the original position of white space is preserved. <sup>*</sup>However, tabs and newlines are converted to spaces, please try this experiment yourself later below.
To ensure this **lossless tokenization**, SentencePiece replaces white space with _ (U+2581), so that a simple join of the tokens, with underscores replaced by spaces, can restore the white space, even if there are consecutive symbols. But remember to first normalize and then replace spaces with _ (U+2581), as the following example shows.
```
s = 'Tokenization is hard.'
s_ = s.replace(' ', '\u2581')
s_n = normalize('NFKC', 'Tokenization is hard.')
print(get_hex_encoding(s))
print(get_hex_encoding(s_))
print(get_hex_encoding(s_n))
```
So the special unicode underscore was replaced by the ASCII unicode. Reversing the order of the second and third operations, we see that the special unicode underscore was retained.
```
s = 'Tokenization is hard.'
sn = normalize('NFKC', 'Tokenization is hard.')
sn_ = s.replace(' ', '\u2581')
print(get_hex_encoding(s))
print(get_hex_encoding(sn))
print(get_hex_encoding(sn_))
```
## BPE Algorithm
Now that we have discussed the preprocessing that SentencePiece performs, we will go get our data, preprocess, and apply the BPE algorithm. We will show how this reproduces the tokenization produced by training SentencePiece on our example dataset (from this week's assignment).
### Preparing our Data
First, we get our Squad data and process as above.
```
import ast
def convert_json_examples_to_text(filepath):
example_jsons = list(map(ast.literal_eval, open(filepath))) # Read in the json from the example file
texts = [example_json['text'].decode('utf-8') for example_json in example_jsons] # Decode the byte sequences
text = '\n\n'.join(texts) # Separate different articles by two newlines
text = normalize('NFKC', text) # Normalize the text
with open('example.txt', 'w') as fw:
fw.write(text)
return text
text = convert_json_examples_to_text('./data/data.txt')
print(text[:900])
```
In the algorithm the `vocab` variable is actually a frequency dictionary of the words. Further, those words have been prepended with an *underscore* to indicate that they are the beginning of a word. Finally, the characters have been delimited by spaces so that the BPE algorithm can group the most common characters together in the dictionary in a greedy fashion. We will see how that is done shortly.
```
from collections import Counter
vocab = Counter(['\u2581' + word for word in text.split()])
vocab = {' '.join([l for l in word]): freq for word, freq in vocab.items()}
def show_vocab(vocab, end='\n', limit=20):
"""Show word frequencys in vocab up to the limit number of words"""
shown = 0
for word, freq in vocab.items():
print(f'{word}: {freq}', end=end)
shown +=1
if shown > limit:
break
show_vocab(vocab)
```
We check the size of the vocabulary (frequency dictionary) because this is the one hyperparameter that crucially determines how far BPE breaks a word up into SentencePieces. It turns out that, for our model trained on this small dataset, 60% of the 455 possible merges of the most frequent character pairs need to be done to reproduce the result of SentencePiece training with an upper limit of a 32K `vocab_size` over the entire corpus of examples.
```
print(f'Total number of unique words: {len(vocab)}')
print(f'Number of merges required to reproduce SentencePiece training on the whole corpus: {int(0.60*len(vocab))}')
```
### BPE Algorithm
Directly from the BPE paper we have the following algorithm.
```
import re, collections
def get_stats(vocab):
pairs = collections.defaultdict(int)
for word, freq in vocab.items():
symbols = word.split()
for i in range(len(symbols) - 1):
pairs[symbols[i], symbols[i+1]] += freq
return pairs
def merge_vocab(pair, v_in):
v_out = {}
bigram = re.escape(' '.join(pair))
p = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
for word in v_in:
w_out = p.sub(''.join(pair), word)
v_out[w_out] = v_in[word]
return v_out
def get_sentence_piece_vocab(vocab, frac_merges=0.60):
sp_vocab = vocab.copy()
num_merges = int(len(sp_vocab)*frac_merges)
for i in range(num_merges):
pairs = get_stats(sp_vocab)
best = max(pairs, key=pairs.get)
sp_vocab = merge_vocab(best, sp_vocab)
return sp_vocab
```
To understand what's going on first take a look at the third function `get_sentence_piece_vocab`. It takes in the current `vocab` word-frequency dictionary and the fraction, `frac_merges`, of the total `vocab_size` to merge characters in the words of the dictionary, `num_merges` times. Then for each *merge* operation it `get_stats` on how many of each pair of character sequences there are. It gets the most frequent *pair* of symbols as the `best` pair. Then it merges that pair of symbols (removes the space between them) in each word in the `vocab` that contains this `best` (= `pair`). Consequently, `merge_vocab` creates a new `vocab`, `v_out`. This process is repeated `num_merges` times and the result is the set of SentencePieces (keys of the final `sp_vocab`).
### Additional Discussion of BPE Algorithm
Please feel free to skip the below if the above description was enough.
In a little more detail then, we can see in `get_stats` we initially create a list of bigram (two character sequence) frequencies from our vocabulary. Later, this may include trigrams, quadgrams, etc. Note that the key of the `pairs` frequency dictionary is actually a 2-tuple, which is just shorthand notation for a pair.
In `merge_vocab` we take in an individual `pair` (of character sequences; note this is the most frequent, `best`, pair) and the current `vocab` as `v_in`. We create a new `vocab`, `v_out`, from the old by joining together the characters in the pair (removing the space), if they are present in a word of the dictionary.
[Warning](https://regex101.com/): the expression `(?<!\S)` means that either a whitespace character follows before the `bigram` or there is nothing before the bigram (it is the beginning of the word), similarly for `(?!\S)` for preceding whitespace or the end of the word.
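As a small illustration (using a toy vocabulary that is not part of the example corpus), the cell below runs a single merge step so you can see the regular expression joining only whole, space-delimited symbol pairs.
```
# Toy vocabulary: two "words" with their frequencies (illustrative only).
toy_vocab = {'\u2581 l o w e r': 2, '\u2581 n e w e r': 3}
toy_pairs = get_stats(toy_vocab)
toy_best = max(toy_pairs, key=toy_pairs.get)
print('best pair:', toy_best)
print(merge_vocab(toy_best, toy_vocab))
```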
```
sp_vocab = get_sentence_piece_vocab(vocab)
show_vocab(sp_vocab)
```
## Train SentencePiece BPE Tokenizer on Example Data
### Explore SentencePiece Model
First let us explore the SentencePiece model provided with this week's assignment. Remember you can always use Python's built in `help` command to see the documentation for any object or method.
```
import sentencepiece as spm
sp = spm.SentencePieceProcessor(model_file='./data/sentencepiece.model')
# help(sp)
```
Let's work with the first sentence of our example text.
```
s0 = 'Beginners BBQ Class Taking Place in Missoula!'
# encode: text => id
print(sp.encode_as_pieces(s0))
print(sp.encode_as_ids(s0))
# decode: id => text
print(sp.decode_pieces(sp.encode_as_pieces(s0)))
print(sp.decode_ids([12847, 277]))
```
Notice how SentencePiece breaks the words into seemingly odd parts, but we've seen something similar from our work with BPE. But how close were we to this model trained on the whole corpus of examples with a `vocab_size` of 32,000 instead of 455? Here you can also test what happens to white space, like '\n'.
But first let us note that SentencePiece encodes the SentencePieces, the tokens, and has reserved some of the ids as can be seen in this week's assignment.
```
uid = 15068
spiece = "\u2581BBQ"
unknown = "__MUST_BE_UNKNOWN__"
# id <=> piece conversion
print(f'SentencePiece for ID {uid}: {sp.id_to_piece(uid)}')
print(f'ID for Sentence Piece {spiece}: {sp.piece_to_id(spiece)}')
# returns 0 for unknown tokens (we can change the id for UNK)
print(f'ID for unknown text {unknown}: {sp.piece_to_id(unknown)}')
print(f'Beginning of sentence id: {sp.bos_id()}')
print(f'Pad id: {sp.pad_id()}')
print(f'End of sentence id: {sp.eos_id()}')
print(f'Unknown id: {sp.unk_id()}')
print(f'Vocab size: {sp.vocab_size()}')
```
We can also check what are the ids for the first part and last part of the vocabulary.
```
print('\nId\tSentP\tControl?')
print('------------------------')
# <unk>, <s>, </s> are defined by default. Their ids are (0, 1, 2)
# <s> and </s> are defined as 'control' symbol.
for uid in range(10):
print(uid, sp.id_to_piece(uid), sp.is_control(uid), sep='\t')
# for uid in range(sp.vocab_size()-10,sp.vocab_size()):
# print(uid, sp.id_to_piece(uid), sp.is_control(uid), sep='\t')
```
### Train SentencePiece BPE model with our example.txt
Finally, let's train our own BPE model directly from the SentencePiece library and compare it to the results of our implemention of the algorithm from the BPE paper itself.
```
spm.SentencePieceTrainer.train('--input=example.txt --model_prefix=example_bpe --vocab_size=450 --model_type=bpe')
sp_bpe = spm.SentencePieceProcessor()
sp_bpe.load('example_bpe.model')
print('*** BPE ***')
print(sp_bpe.encode_as_pieces(s0))
show_vocab(sp_vocab, end = ', ')
```
Our implementation of BPE's code from the paper matches up pretty well with the library itself! The differences are probably accounted for by the `vocab_size`. There is also another technical difference in that in the SentencePiece implementation of BPE a priority queue is used to more efficiently keep track of the *best pairs*. Actually, there is a priority queue in the Python standard library called `heapq` if you would like to give that a try below!
## Optionally try to implement BPE using a priority queue below
```
from heapq import heappush, heappop
def heapsort(iterable):
h = []
for value in iterable:
heappush(h, value)
return [heappop(h) for i in range(len(h))]
a = [1,4,3,1,3,2,1,4,2]
heapsort(a)
```
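Building on that, the cell below sketches one way a priority queue could be used to fetch the most frequent pair: push negated counts so the top of Python's min-heap is the best pair. This is only an illustration of the idea, not the SentencePiece implementation (a full version would also need to update or lazily invalidate heap entries after each merge).
```
import heapq

def best_pair_with_heap(vocab):
    # Build a heap of (-count, pair) so the most frequent pair pops first.
    pairs = get_stats(vocab)  # reuse the pair-frequency helper defined above
    heap = [(-count, pair) for pair, count in pairs.items()]
    heapq.heapify(heap)
    neg_count, pair = heapq.heappop(heap)
    return pair, -neg_count

print(best_pair_with_heap(vocab))
```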
For a more extensive example consider looking at the [SentencePiece repo](https://github.com/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb). The last few sections of this code was repurposed from that tutorial. Thanks for your participation! Next stop BERT and T5!
### *IPCC SR15 scenario assessment*
<img style="float: right; height: 80px; padding-left: 20px;" src="../_static/IIASA_logo.png">
<img style="float: right; height: 80px;" src="../_static/IAMC_logo.jpg">
# Characteristics of four illustrative model pathways
## Figure 3b of the *Summary for Policymakers*
This notebook derives the figure panels and indicators for the table in Figure 3b in the Summary for Policymakers
of the IPCC's _"Special Report on Global Warming of 1.5°C"_.
The scenario data used in this analysis can be accessed and downloaded at [https://data.ene.iiasa.ac.at/iamc-1.5c-explorer](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer).
## Load `pyam` package and other dependencies
```
import pandas as pd
import numpy as np
import io
import itertools
import yaml
import math
import matplotlib.pyplot as plt
plt.style.use('style_sr15.mplstyle')
%matplotlib inline
import pyam
```
## Import scenario data, categorization and specifications files
The metadata file with scenario categorisation and quantitative indicators can be downloaded at [https://data.ene.iiasa.ac.at/iamc-1.5c-explorer](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer).
Alternatively, it can be re-created using the notebook `sr15_2.0_categories_indicators`.
The last cell of this section loads and assigns a number of auxiliary lists as defined in the categorization notebook.
```
sr1p5 = pyam.IamDataFrame(data='../data/iamc15_scenario_data_world_r2.0.xlsx')
sr1p5.load_meta('sr15_metadata_indicators.xlsx')
with open("sr15_specs.yaml", 'r') as stream:
specs = yaml.load(stream, Loader=yaml.FullLoader)
rc = pyam.run_control()
for item in specs.pop('run_control').items():
rc.update({item[0]: item[1]})
cats_15 = specs.pop('cats_15')
cats_15_no_lo = specs.pop('cats_15_no_lo')
marker = specs.pop('marker')
```
## Downselect scenario ensemble to categories of interest for this assessment
```
sr1p5.meta.rename(columns={'Kyoto-GHG|2010 (SAR)': 'kyoto_ghg_2010'}, inplace=True)
df = sr1p5.filter(category=cats_15)
```
## Global carbon dioxide emissions in four illustrative pathways
Figure SPM3b shows the contribution to CO2 emissions and removal by three categories in the four illustrative pathways.
This illustration does not use the emissions timeseries as reported by the models. This is because the variable `Emissions|CO2|Energy and Industrial Processes` represents net emissions, incorporating carbon dioxide removal in this sector.
The steps below compute the gross emissions. The long variable names are mapped to short variables for easier readability.
```
afolu_var = 'Emissions|CO2|AFOLU'
ene_ind_var = 'Emissions|CO2|Energy and Industrial Processes'
beccs_var ='Carbon Sequestration|CCS|Biomass'
```
We downselect the entire data to the four illustrative pathways (`marker` scenarios) and the three variables of interest. For consistency with the figure in the SPM, the units are converted to Gt CO2.
```
pw = df.filter(marker=marker, variable=[afolu_var, ene_ind_var, beccs_var],
year=range(2010, 2101, 10))
pw.convert_unit('Mt CO2/yr', 'Gt CO2/yr', inplace=True)
```
As a first step, we extract the timeseries for the AFOLU emissions and rename the variable for brevity. This data will be used as is in this figure.
```
afolu = (
pw.filter(variable=afolu_var)
.rename(variable={afolu_var: 'AFOLU'})
)
```
The energy-and-industry and BECCS timeseries data needs some processing. It is first separated into two distinct dataframes, and the BECCS variable is renamed for brevity.
```
ene_ind = pw.filter(variable=ene_ind_var)
beccs = (
pw.filter(variable=beccs_var)
.rename(variable={beccs_var: 'BECCS'})
)
```
The variable `Carbon Sequestration|CCS|Biomass` reports removed carbon dioxide as positive values. For use in this figure, the sign needs to be reversed.
```
beccs.data.value = - beccs.data.value
```
The `LED` marker scenario does not use any BECCS by assumption of the scenario design. For this reason, the variable `Carbon Sequestration|CCS|Biomass` was not defined when the MESSAGE team submitted the scenario results to the IAMC 1.5°C Scenario Data ensemble.
For easier computation, we add this data series manually here.
```
years = beccs.timeseries().columns
beccs.append(
pyam.IamDataFrame(
pd.DataFrame([0] * len(years), index=years).T,
model='MESSAGEix-GLOBIOM 1.0', scenario='LowEnergyDemand',
region='World', variable='BECCS', unit='Gt CO2/yr'),
inplace=True
)
```
As a third step, we compute the difference between net CO2 emissions from the energy sector & industry and BECCS to obtain gross CO2 emissions in that sector.
```
def get_value(df):
cols = ['model', 'scenario', 'region', 'year', 'unit']
return df.data.set_index(cols)['value']
diff = get_value(ene_ind) - get_value(beccs)
ene_ind_gross = pyam.IamDataFrame(diff, variable='Fossil fuel and industry')
```
We now combine the three contribution dataframes into one joint dataframe for plotting. Because the `beccs` IamDataFrame was partially altered, concatenating directly causes an issue, so we remove all `meta` columns from that dataframe beforehand.
```
beccs.meta = beccs.meta.drop(columns=beccs.meta.columns)
co2 = pyam.concat([ene_ind_gross, afolu, beccs])
```
We now proceed to plot the four illustrative pathways.
```
fig, ax = plt.subplots(1, 4, figsize=(14, 4), sharey=True)
for i, m in enumerate(['LED', 'S1', 'S2', 'S5']):
co2.filter(marker=m).stack_plot(ax=ax[i], total=True, legend=False)
ax[i].title.set_text(m)
ax[3].legend(loc=1)
```
## Collecting indicators across illustrative pathways
### Initialize a `pyam.Statistics` instance
```
base_year = 2010
compare_years = [2030, 2050]
years = [base_year] + compare_years
stats = pyam.Statistics(df=df, groupby={'marker': ['LED', 'S1', 'S2', 'S5']},
filters=[(('pathways', 'no & lo os 1.5'), {'category': cats_15_no_lo})])
```
### CO2 and Kyoto GHG emissions reductions
```
co2 = (
df.filter(kyoto_ghg_2010='in range', variable='Emissions|CO2', year=years)
.convert_unit('Mt CO2/yr', 'Gt CO2/yr')
.timeseries()
)
for y in compare_years:
stats.add((co2[y] / co2[2010] - 1) * 100,
'CO2 emission reduction (% relative to 2010)',
subheader=y)
kyoto_ghg = (
df.filter(kyoto_ghg_2010='in range', variable='Emissions|Kyoto Gases (SAR-GWP100)', year=years)
.rename(unit={'Mt CO2-equiv/yr': 'Mt CO2e/yr'})
.convert_unit('Mt CO2e/yr','Gt CO2e/yr')
.timeseries()
)
for y in compare_years:
stats.add((kyoto_ghg[y] / kyoto_ghg[base_year] - 1) * 100,
'Kyoto-GHG emission reduction (SAR-GWP100), % relative to {})'.format(base_year),
subheader=y)
```
### Final energy demand reduction relative to 2010
```
fe = df.filter(variable='Final Energy', year=years).timeseries()
for y in compare_years:
stats.add((fe[y] / fe[base_year] - 1) * 100,
'Final energy demand reduction relative to {} (%)'.format(base_year),
subheader=y)
```
### Share of renewables in electricity generation
```
def add_stats_share(stats, var_list, name, total, total_name, years, df=df):
_df = df.filter(variable=var_list)
for v in var_list:
_df.require_variable(v, exclude_on_fail=True)
_df.filter(exclude=False, inplace=True)
component = (
_df.timeseries()
.groupby(['model', 'scenario']).sum()
)
share = component / total * 100
for y in years:
stats.add(share[y], header='Share of {} in {} (%)'.format(name, total_name),
subheader=y)
ele = df.filter(variable='Secondary Energy|Electricity', year=compare_years).timeseries()
ele.index = ele.index.droplevel([2, 3, 4])
ele_re_vars = [
'Secondary Energy|Electricity|Biomass',
'Secondary Energy|Electricity|Non-Biomass Renewables'
]
add_stats_share(stats, ele_re_vars, 'renewables', ele, 'electricity', compare_years)
```
### Changes in primary energy mix
```
mapping = [
('coal', 'Coal'),
('oil', 'Oil'),
('gas', 'Gas'),
('nuclear', 'Nuclear'),
('bioenergy', 'Biomass'),
('non-biomass renewables', 'Non-Biomass Renewables')
]
for (n, v) in mapping:
data = df.filter(variable='Primary Energy|{}'.format(v), year=years).timeseries()
for y in compare_years:
stats.add((data[y] / data[base_year] - 1) * 100,
header='Primary energy from {} (% rel to {})'.format(n, base_year),
subheader=y)
```
### Cumulative carbon capture and sequestration until the end of the century
```
def cumulative_ccs(variable, name, first_year=2016, last_year=2100):
data = (
df.filter(variable=variable)
.convert_unit('Mt CO2/yr', 'Gt CO2/yr')
.timeseries()
)
stats.add(
data.apply(pyam.cumulative, raw=False, axis=1,
first_year=first_year, last_year=last_year),
header='Cumulative {} until {} (GtCO2)'.format(name, last_year), subheader='')
cumulative_ccs('Carbon Sequestration|CCS', 'CCS')
cumulative_ccs('Carbon Sequestration|CCS|Biomass', 'BECCS')
```
### Land cover for energy crops
Convert unit to SI unit (million square kilometers).
```
energy_crops = (
df.filter(variable='Land Cover|Cropland|Energy Crops', year=2050)
.convert_unit('million ha', 'million km2', factor=0.01)
.timeseries()
)
stats.add(energy_crops[2050], header='Land area for energy crops (million km2)')
```
### Emissions from land use
```
species = ['CH4', 'N2O']
for n in species:
data = df.filter(kyoto_ghg_2010='in range', variable='Emissions|{}|AFOLU'.format(n), year=years).timeseries()
for y in compare_years:
stats.add((data[y] / data[base_year] - 1) * 100,
header='Agricultural {} emissions (% rel to {})'.format(n, base_year),
subheader=y)
```
## Display summary statistics and export to `xlsx`
```
summary = stats.summarize(interquartile=True, custom_format='{:.0f}').T
summary
summary.to_excel('output/spm_sr15_figure3b_indicators_table.xlsx')
```
# MaterialsCoord benchmarking – sensitivity to perturbation analysis
This notebook demonstrates how to use MaterialsCoord to benchmark the sensitivity of bonding algorithms to structural perturbations. Perturbations are introduced according to the Einstein crystal test rig, in which each site is perturbed so that the distribution around its equilibrium position is a normal distribution in each Cartesian component.
The perturbation complies thus with the expectation for an Einstein crystal,
in which the potential is given by $V(\delta r) = 0.5 k_\mathrm{spring} \delta r^2$, where
$k_\mathrm{spring}$ denotes the spring constant with which the sites are tethered to
their equilibrium position, and $\delta r$ is the distance of the site under
consideration from its equilibrium position.
The MaterialsCoord `Benchmark` class accepts a `perturb_sigma` option, which is equal to $(k_\mathrm{B}T/k_\mathrm{spring})^{0.5}$.
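To make the perturbation concrete, the short sketch below shows how a single site displacement consistent with this description could be drawn: each Cartesian component is sampled from a normal distribution with standard deviation sigma. This is an illustration of the Einstein-crystal idea only, not the internal MaterialsCoord implementation.
```
import numpy as np

def sample_displacement(sigma, size=3):
    # Each Cartesian component ~ N(0, sigma^2), with sigma = (kT / k_spring)**0.5 in the Einstein model.
    return np.random.normal(scale=sigma, size=size)

np.random.seed(0)
print(sample_displacement(0.1))  # one displacement vector (in Angstrom) for sigma = 0.1
```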
*Written using:*
- MaterialsCoord==0.1.0
*Authors: Hillary Pan, Alex Ganose (10/12/19)*
---
First, let's initialize the near neighbor methods we are interested in.
```
from pymatgen.analysis.local_env import BrunnerNN_reciprocal, EconNN, JmolNN, \
MinimumDistanceNN, MinimumOKeeffeNN, MinimumVIRENN, \
VoronoiNN, CrystalNN
nn_methods = [
BrunnerNN_reciprocal(), EconNN(tol=0.5), JmolNN(), CrystalNN(), VoronoiNN(tol=0.5),
MinimumDistanceNN(), MinimumOKeeffeNN(), MinimumVIRENN()
]
```
Next, import the benchmark and choose which structures we are interested in.
```
from materialscoord.core import Benchmark
structure_groups = ["common_binaries", "elemental", "A2BX4", "ABX3", "ABX4"]
```
Choose the initial and final perturbation sigma values to include, as well as the number of steps in between.
```
import numpy as np
initial_sigma = 0
final_sigma = 0.2
nsteps = 51
sigmas = np.linspace(initial_sigma, final_sigma, nsteps)
```
Run the benchmark with the perturbation turned on. Note we have disabled symmetry so that each perturbed site is treated separately. Due to the absence of symmetry and the slow speed of `MinimumVIRENN`, this can take a long time (14 hours on a 2017 MacBook Pro).
```
from tqdm import tqdm_notebook
results = []
for sigma in tqdm_notebook(sigmas):
bm = Benchmark.from_structure_group(structure_groups, perturb_sigma=sigma, symprec=None)
sigma_scores = bm.score(nn_methods)
results.append(sigma_scores.iloc[-1].values)
```
Finally, plot the results.
```
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import ticker
import os
from scipy.signal import savgol_filter
import seaborn as sns
plt_results = np.array(results).T
# define matplotlib style settings
style = {
"font.sans-serif": ["Helvetica", "Arial"], "axes.labelsize": 16,
"xtick.labelsize": 16, "ytick.labelsize": 16, "xtick.direction": "in",
"ytick.direction": "in", "xtick.major.size": 8, "xtick.minor.size": 4,
"ytick.major.size": 8, "ytick.minor.size": 4, "lines.linewidth": 2.5,
"lines.markersize": 10, "axes.linewidth": 1.2, "xtick.major.width": 1.2,
"xtick.minor.width": 1.2, "ytick.major.width": 1.2, "ytick.minor.width": 1.2,
"pdf.fonttype":42
}
nn_method_mapping = {"BrunnerNN_reciprocal": "BrunnerNN"}
colors = sns.color_palette("deep")
order = [5, 6, 7, 2, 1, 0, 4, 3]
plt.style.use(style)
fig = plt.figure(figsize=(6, 6))
ax = plt.gca()
for i, x in enumerate(order):
method = nn_methods[x]
y_vals = plt_results[x]
name = method.__class__.__name__
c = colors[i]
name = nn_method_mapping.get(name, name)
# smooth the lines with a double pass through a savgol filter
    # more ideal would be to take averages across multiple runs
# but due to the time taken to generate the data this is impractical
y_vals = savgol_filter(y_vals, 27, 2)
y_vals = savgol_filter(y_vals, 27, 2)
ax.plot(sigmas, y_vals, label=name, c=c)
ax.set(ylabel="Benchmark score", xlabel="Sigma (Å)")
ax.set_xlim((0, 0.2))
ax.yaxis.set_major_locator(ticker.MaxNLocator(5))
plt.legend(loc='upper left', bbox_to_anchor=(1, 1), frameon=False, fontsize=15)
plt.savefig(os.path.join("plots", "perturbation-tolerance.pdf"), bbox_inches="tight")
plt.show()
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_02_qlearningreinforcement.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 12: Reinforcement Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 12 Video Material
* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)
* **Part 12.2: Introduction to Q-Learning** [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)
* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)
* Part 12.4: Atari Games with Keras Neural Networks [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)
* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg x11-utils
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q tf-agents
```
# Part 12.2: Introduction to Q-Learning
Q-Learning is a foundational technique upon which deep reinforcement learning is based. Before we explore deep reinforcement learning, it is essential to understand Q-Learning. Several components make up any Q-Learning system.
* **Agent** - The agent is an entity that exists in an environment that takes actions to affect the state of the environment, to receive rewards.
* **Environment** - The environment is the universe that the agent exists in. The environment is always in a specific state that is changed by the actions of the agent.
* **Actions** - Steps that can be performed by the agent to alter the environment
* **Step** - A step occurs each time that the agent performs an action and potentially changes the environment state.
* **Episode** - A chain of steps that ultimately culminates in the environment entering a terminal state.
* **Epoch** - A training iteration of the agent that contains some number of episodes.
* **Terminal State** - A state in which further actions do not make sense. In many environments, a terminal state occurs when the agent has won, lost, or the environment has exceeded the maximum number of steps.
Q-Learning works by building a table that suggests an action for every possible state. This approach runs into several problems. First, the environment is usually composed of several continuous numbers, resulting in an infinite number of states. Q-Learning handles continuous states by binning these numeric values into ranges.
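For intuition, here is a minimal sketch of the binning idea, using hypothetical bounds and bucket counts; the binning actually used in this notebook appears later in `calc_discrete_state`.
```
import numpy as np

# hypothetical bounds and bucket counts for a two-component state (position, velocity)
low = np.array([-1.2, -0.07])
high = np.array([0.6, 0.07])
n_buckets = np.array([10, 10])

def to_discrete(state):
    # scale each component into [0, n_buckets) and truncate to an integer bin index
    idx = (state - low) / (high - low) * n_buckets
    return tuple(idx.astype(int))

print(to_discrete(np.array([-0.5, 0.0])))  # e.g. (3, 5)
```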
Additionally, Q-Learning primarily deals with discrete actions, such as pressing a joystick up or down. Out of the box, Q-Learning does not deal with continuous inputs, such as a car's accelerator that can be in a range of positions from released to fully engaged. Researchers have come up with clever tricks to allow Q-Learning to accommodate continuous actions.
In the next chapter, we will learn more about deep reinforcement learning. Deep neural networks can help to solve the problems of continuous environments and action spaces. For now, we will apply regular Q-Learning to the Mountain Car problem from OpenAI Gym.
### Introducing the Mountain Car
This section will demonstrate how Q-Learning can create a solution to the mountain car gym environment. The Mountain car is an environment where a car must climb a mountain. Because gravity is stronger than the car's engine, even with full throttle, it cannot merely accelerate up the steep slope. The vehicle is situated in a valley and must learn to utilize potential energy by driving up the opposite hill before the car can make it to the goal at the top of the rightmost hill.
First, it might be helpful to visualize the mountain car environment. The following code shows this environment. This code makes use of TF-Agents to perform this render. Usually, we use TF-Agents for the type of deep reinforcement learning that we will see in the next module. However, for now, TF-Agents is just used to render the mountain car environment.
```
import tf_agents
from tf_agents.environments import suite_gym
import PIL.Image
import pyvirtualdisplay
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
env_name = 'MountainCar-v0'
env = suite_gym.load(env_name)
env.reset()
PIL.Image.fromarray(env.render())
```
The mountain car environment provides the following discrete actions:
* 0 - Apply left force
* 1 - Apply no force
* 2 - Apply right force
The mountain car environment is made up of the following continuous values:
* state[0] - Position
* state[1] - Velocity
The following code shows an agent that applies full throttle to climb the hill. The cart is not strong enough. It will need to use potential energy from the mountain behind it.
```
import gym
from gym.wrappers import Monitor
import glob
import io
import base64
from IPython.display import HTML
from pyvirtualdisplay import Display
from IPython import display as ipythondisplay
display = Display(visible=0, size=(1400, 900))
display.start()
"""
Utility functions to enable video recording of gym environment
and displaying it.
To enable video, just do "env = wrap_env(env)""
"""
def show_video():
mp4list = glob.glob('video/*.mp4')
if len(mp4list) > 0:
mp4 = mp4list[0]
video = io.open(mp4, 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data='''<video alt="test" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))))
else:
print("Could not find video")
def wrap_env(env):
env = Monitor(env, './video', force=True)
return env
import gym
if COLAB:
env = wrap_env(gym.make("MountainCar-v0"))
else:
env = gym.make("MountainCar-v0")
env.reset()
done = False
i = 0
while not done:
i += 1
state, reward, done, _ = env.step(2)
env.render()
print(f"Step {i}: State={state}, Reward={reward}")
env.close()
show_video()
```
### Programmed Car
Now we will look at a car that I hand-programmed. This car is straightforward; however, it solves the problem. The programmed car always applies force in one direction or the other. It does not brake. Whatever direction the vehicle is currently rolling, the agent applies force in that direction. Therefore, the car begins to climb a hill, is overpowered, and rolls backward. However, once it starts to roll backward, force is immediately applied in this new direction.
The following code implements this preprogrammed car.
```
import gym
if COLAB:
env = wrap_env(gym.make("MountainCar-v0"))
else:
env = gym.make("MountainCar-v0")
state = env.reset()
done = False
i = 0
while not done:
i += 1
if state[1]>0:
action = 2
else:
action = 0
state, reward, done, _ = env.step(action)
env.render()
print(f"Step {i}: State={state}, Reward={reward}")
env.close()
```
We now visualize the preprogrammed car solving the problem.
```
show_video()
```
### Reinforcement Learning
Q-Learning is a system of rewards that the algorithm gives an agent for successfully moving the environment into a state considered successful. These rewards are the Q-values from which this algorithm takes its name. The final output from the Q-Learning algorithm is a table of Q-values that indicate the reward value of every action that the agent can take, given every possible environment state. The agent must bin continuous state values into a fixed finite number of columns.
Learning occurs when the algorithm runs the agent and environment through a series of episodes and updates the Q-values based on the rewards received from actions taken; Figure 12.REINF provides a high-level overview of this reinforcement or Q-Learning loop.
**Figure 12.REINF:Reinforcement/Q Learning**

The Q-values can dictate action by selecting the action column with the highest Q-value for the current environment state. The choice between a random action and a Q-value-driven action is governed by the epsilon ($\epsilon$) parameter, which is the probability of taking a random action.
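A minimal sketch of this epsilon-greedy choice (the same logic appears later inside `run_game`; the Q-values below are hypothetical):
```
import numpy as np

def epsilon_greedy(q_row, epsilon):
    # with probability epsilon explore (random action), otherwise exploit the Q-table
    if np.random.random() < epsilon:
        return np.random.randint(0, len(q_row))
    return int(np.argmax(q_row))

# hypothetical Q-values for one state and three actions
print(epsilon_greedy(np.array([-1.2, -0.4, -0.9]), epsilon=0.1))
```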
Each time through the training loop, the training algorithm updates the Q-values according to the following equation.
$Q^{new}(s_{t},a_{t}) \leftarrow \underbrace{Q(s_{t},a_{t})}_{\text{old value}} + \underbrace{\alpha}_{\text{learning rate}} \cdot \overbrace{\bigg( \underbrace{\underbrace{r_{t}}_{\text{reward}} + \underbrace{\gamma}_{\text{discount factor}} \cdot \underbrace{\max_{a}Q(s_{t+1}, a)}_{\text{estimate of optimal future value}}}_{\text{new value (temporal difference target)}} - \underbrace{Q(s_{t},a_{t})}_{\text{old value}} \bigg) }^{\text{temporal difference}}$
There are several parameters in this equation:
* alpha ($\alpha$) - The learning rate, how much should the current step cause the Q-values to be updated.
* gamma ($\gamma$) - The discount factor: the fraction of the estimated future reward that the algorithm should consider in this update.
This equation modifies several values:
* $Q(s_t,a_t)$ - The Q-table. For each combination of states, what reward would the agent likely receive for performing each action?
* $s_t$ - The current state.
* $r_t$ - The last reward received.
* $a_t$ - The action that the agent will perform.
The equation works by calculating a delta (the temporal difference) that should be applied to the old Q-value. The learning rate ($\alpha$) scales this delta. A learning rate of 1.0 would fully apply the temporal difference to the Q-values each iteration and would likely be very chaotic.
There are two parts to the temporal difference: the new and old values. The old value is subtracted from the new value to provide a delta; this is the full amount that we would change the Q-value by if the learning rate did not scale it. The new value is a summation of the reward received from the last action and the maximum of the Q-values from the resulting state when the agent takes this action. It is essential to add the maximum of action Q-values for the new state because it estimates the optimal future values from proceeding with this action.
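As a concrete worked example with hypothetical numbers ($\alpha = 0.1$, $\gamma = 0.95$):
```
alpha, gamma = 0.1, 0.95   # learning rate and discount factor
old_q = -2.0               # current Q(s, a)
reward = -1.0              # reward received for this step
max_future_q = -1.5        # max over actions of Q(s', a) in the resulting state

temporal_difference = reward + gamma * max_future_q - old_q   # -1.0 - 1.425 + 2.0 = -0.425
new_q = old_q + alpha * temporal_difference                   # -2.0 + 0.1 * (-0.425) = -2.0425
print(new_q)
```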
### Q-Learning Car
We will now use Q-Learning to produce a car that learns to drive itself. Look out, Tesla! We begin by defining two essential functions.
```
import gym
import numpy as np
# This function converts the floating point state values into
# discrete values. This is often called binning. We divide
# the range that the state values might occupy and assign
# each region to a bucket.
def calc_discrete_state(state):
discrete_state = (state - env.observation_space.low)/buckets
return tuple(discrete_state.astype(np.int))
# Run one game. The q_table to use is provided. We also
# provide a flag to indicate if the game should be
# rendered/animated. Finally, we also provide
# a flag to indicate if the q_table should be updated.
def run_game(q_table, render, should_update):
done = False
discrete_state = calc_discrete_state(env.reset())
success = False
while not done:
# Exploit or explore
if np.random.random() > epsilon:
# Exploit - use q-table to take current best action
# (and probably refine)
action = np.argmax(q_table[discrete_state])
else:
            # Explore - take a random action
action = np.random.randint(0, env.action_space.n)
# Run simulation step
new_state, reward, done, _ = env.step(action)
# Convert continuous state to discrete
new_state_disc = calc_discrete_state(new_state)
# Have we reached the goal position (have we won?)?
if new_state[0] >= env.unwrapped.goal_position:
success = True
# Update q-table
if should_update:
max_future_q = np.max(q_table[new_state_disc])
current_q = q_table[discrete_state + (action,)]
new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * \
(reward + DISCOUNT * max_future_q)
q_table[discrete_state + (action,)] = new_q
discrete_state = new_state_disc
if render:
env.render()
return success
```
Several hyperparameters are very important for Q-Learning. These parameters will likely need adjustment as you apply Q-Learning to other problems. Because of this, it is crucial to understand the role of each parameter.
* **LEARNING_RATE** The rate at which previous Q-values are updated based on new episodes run during training.
* **DISCOUNT** The amount of significance to give estimates of future rewards when added to the reward for the current action taken. A value of 0.95 would indicate a discount of 5% to the future reward estimates.
* **EPISODES** The number of episodes to train over. Increase this for more complex problems; however, training time also increases.
* **SHOW_EVERY** How many episodes to allow to elapse before showing an update.
* **DISCRETE_GRID_SIZE** How many buckets to use when converting each of the continuous state variables. For example, [10, 10] indicates that the algorithm should use ten buckets for the first and second state variables.
* **START_EPSILON_DECAYING** Epsilon is the probability that the agent will select a random action over what the Q-Table suggests. This value determines the starting probability of randomness.
* **END_EPSILON_DECAYING** How many episodes should elapse before epsilon goes to zero and no random actions are permitted. For example, EPISODES//10 means only the first 1/10th of the episodes might have random actions.
```
LEARNING_RATE = 0.1
DISCOUNT = 0.95
EPISODES = 50000
SHOW_EVERY = 1000
DISCRETE_GRID_SIZE = [10, 10]
START_EPSILON_DECAYING = 0.5
END_EPSILON_DECAYING = EPISODES//10
```
We can now make the environment. If we are running in Google CoLab, then we wrap the environment so it can be displayed inside the web browser. Next, we create the discrete buckets for the state and build the Q-table.
```
if COLAB:
env = wrap_env(gym.make("MountainCar-v0"))
else:
env = gym.make("MountainCar-v0")
epsilon = 1
epsilon_change = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)
buckets = (env.observation_space.high - env.observation_space.low) \
/DISCRETE_GRID_SIZE
q_table = np.random.uniform(low=-3, high=0, size=(DISCRETE_GRID_SIZE \
+ [env.action_space.n]))
success = False
```
With the environment and Q-table in place, we can now run the training loop for the requested number of episodes, rendering an episode and printing progress every `SHOW_EVERY` episodes.
```
episode = 0
success_count = 0
# Loop through the required number of episodes
while episode<EPISODES:
episode+=1
done = False
# Run the game. If we are local, display render animation at SHOW_EVERY
# intervals.
if episode % SHOW_EVERY == 0:
print(f"Current episode: {episode}, success: {success_count}" +\
" ({float(success_count)/SHOW_EVERY})")
success = run_game(q_table, True, False)
success_count = 0
else:
success = run_game(q_table, False, True)
# Count successes
if success:
success_count += 1
# Move epsilon towards its ending value, if it still needs to move
if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
epsilon = max(0, epsilon - epsilon_change)
print(success)
```
As you can see, the number of successful episodes generally increases as training progresses. It is not advisable to stop the first time that we observe 100% success over 1,000 episodes. There is a randomness to most games, so it is not likely that an agent would retain its 100% success rate with a new run. Once you observe that the agent has gotten 100% for several update intervals, it might be safe to stop training.
# Running and Observing the Agent
Now that the algorithm has trained the agent, we can observe the agent in action. You can use the following code to see the agent in action.
```
run_game(q_table, True, False)
show_video()
```
# Inspecting the Q-Table
We can also display the Q-table. The following code shows the action that the agent would perform for each environment state. Like the weights of a neural network, this table is not straightforward to interpret. Some patterns do emerge, as seen by calculating the means of rows and columns: the chosen actions are fairly consistent across the upper and lower halves of both velocity and position.
```
import pandas as pd
df = pd.DataFrame(q_table.argmax(axis=2))
df.columns = [f'v-{x}' for x in range(DISCRETE_GRID_SIZE[0])]
df.index = [f'p-{x}' for x in range(DISCRETE_GRID_SIZE[1])]
df
df.mean(axis=0)
df.mean(axis=1)
```
## INTRODUCTION
- It’s a Python based scientific computing package targeted at two sets of audiences:
- A replacement for NumPy to use the power of GPUs
- Deep learning research platform that provides maximum flexibility and speed
- pros:
- Interactive debugging of PyTorch. Many users who have used both frameworks would argue that this makes PyTorch significantly easier to debug and visualize.
- Clean support for dynamic graphs
- Organizational backing from Facebook
- Blend of high level and low level APIs
- cons:
- Much less mature than alternatives
- Limited references / resources outside of the official documentation
- I assume you know neural network basics. If you do not, check the tutorial linked below. I will not explain neural network concepts in detail; I only explain how to use PyTorch for neural networks.
- Neural Network tutorial: https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners
- The most important parts of this tutorial are those from matrices to ANN. If you learn these parts very well, implementing the remaining parts like CNN or RNN will be very easy.
<br>
<br>**Content:**
1. Basics of Pytorch, Linear Regression, Logistic Regression, Artificial Neural Network (ANN), Convolutional Neural Network (CNN)
- https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers/code
1. [Recurrent Neural Network (RNN)](#1)
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
```
<a id="1"></a> <br>
### Recurrent Neural Network (RNN)
- An RNN is essentially a repeating ANN, but information is passed through from the previous step's non-linear activation output (see the short shape sketch after this list).
- **Steps of RNN:**
1. Import Libraries
1. Prepare Dataset
1. Create RNN Model
- hidden layer dimension is 100
    - number of hidden layers is 2 (matching `layer_dim` in the code below)
1. Instantiate Model Class
1. Instantiate Loss Class
- Cross entropy loss
- It also has softmax(logistic function) in it.
1. Instantiate Optimizer Class
- SGD Optimizer
1. Training the Model
1. Prediction
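Before the full implementation below, here is a minimal sketch (shapes only, random values) of how `nn.RNN` with `batch_first=True` consumes each 28x28 image as a 28-step sequence of 28-dimensional rows; the batch size here is hypothetical.
```
import torch
import torch.nn as nn

batch_size, seq_dim, input_dim, hidden_dim, layer_dim = 4, 28, 28, 100, 2
rnn = nn.RNN(input_dim, hidden_dim, layer_dim, batch_first=True, nonlinearity='relu')

x = torch.randn(batch_size, seq_dim, input_dim)  # each image row is one time step
out, hn = rnn(x)
print(out.shape)  # torch.Size([4, 28, 100]) - hidden state at every time step
print(hn.shape)   # torch.Size([2, 4, 100])  - final hidden state for each layer
```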
```
# Import Libraries
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torch.autograd import Variable
from sklearn.model_selection import train_test_split
# Prepare Dataset
# load data
train = pd.read_csv(r"../input/train.csv",dtype = np.float32)
# split data into features(pixels) and labels(numbers from 0 to 9)
targets_numpy = train.label.values
features_numpy = train.loc[:,train.columns != "label"].values/255 # normalization
# train test split. Size of train data is 80% and size of test data is 20%.
features_train, features_test, targets_train, targets_test = train_test_split(features_numpy,
targets_numpy,
test_size = 0.2,
random_state = 42)
# create feature and targets tensor for train set. As you remember we need variable to accumulate gradients. Therefore first we create tensor, then we will create variable
featuresTrain = torch.from_numpy(features_train)
targetsTrain = torch.from_numpy(targets_train).type(torch.LongTensor) # data type is long
# create feature and targets tensor for test set.
featuresTest = torch.from_numpy(features_test)
targetsTest = torch.from_numpy(targets_test).type(torch.LongTensor) # data type is long
# batch_size, epoch and iteration
batch_size = 100
n_iters = 10000
num_epochs = n_iters / (len(features_train) / batch_size)
num_epochs = int(num_epochs)
# Pytorch train and test sets
train = torch.utils.data.TensorDataset(featuresTrain,targetsTrain)
test = torch.utils.data.TensorDataset(featuresTest,targetsTest)
# data loader
train_loader = torch.utils.data.DataLoader(train, batch_size = batch_size, shuffle = False)
test_loader = torch.utils.data.DataLoader(test, batch_size = batch_size, shuffle = False)
# visualize one of the images in data set
plt.imshow(features_numpy[10].reshape(28,28))
plt.axis("off")
plt.title(str(targets_numpy[10]))
plt.savefig('graph.png')
plt.show()
# Create RNN Model
class RNNModel(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super(RNNModel, self).__init__()
# Number of hidden dimensions
self.hidden_dim = hidden_dim
# Number of hidden layers
self.layer_dim = layer_dim
# RNN
self.rnn = nn.RNN(input_dim, hidden_dim, layer_dim, batch_first=True,
nonlinearity='relu')
# Readout layer
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
# Initialize hidden state with zeros
h0 = Variable(torch.zeros(self.layer_dim, x.size(0), self.hidden_dim))
# One time step
out, hn = self.rnn(x, h0)
out = self.fc(out[:, -1, :])
return out
# batch_size, epoch and iteration
batch_size = 100
n_iters = 2500
num_epochs = n_iters / (len(features_train) / batch_size)
num_epochs = int(num_epochs)
# Pytorch train and test sets
train = torch.utils.data.TensorDataset(featuresTrain,targetsTrain)
test = torch.utils.data.TensorDataset(featuresTest,targetsTest)
# data loader
train_loader = torch.utils.data.DataLoader(train, batch_size = batch_size, shuffle = False)
test_loader = torch.utils.data.DataLoader(test, batch_size = batch_size, shuffle = False)
# Create RNN
input_dim = 28 # input dimension
hidden_dim = 100 # hidden layer dimension
layer_dim = 2 # number of hidden layers
output_dim = 10 # output dimension
model = RNNModel(input_dim, hidden_dim, layer_dim, output_dim)
# Cross Entropy Loss
error = nn.CrossEntropyLoss()
# SGD Optimizer
learning_rate = 0.05
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
seq_dim = 28
loss_list = []
iteration_list = []
accuracy_list = []
count = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
train = Variable(images.view(-1, seq_dim, input_dim))
labels = Variable(labels )
# Clear gradients
optimizer.zero_grad()
# Forward propagation
outputs = model(train)
        # Calculate softmax and cross entropy loss
loss = error(outputs, labels)
# Calculating gradients
loss.backward()
# Update parameters
optimizer.step()
count += 1
if count % 250 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = Variable(images.view(-1, seq_dim, input_dim))
# Forward propagation
outputs = model(images)
# Get predictions from the maximum value
predicted = torch.max(outputs.data, 1)[1]
# Total number of labels
total += labels.size(0)
correct += (predicted == labels).sum()
accuracy = 100 * correct / float(total)
# store loss and iteration
loss_list.append(loss.data)
iteration_list.append(count)
accuracy_list.append(accuracy)
if count % 500 == 0:
# Print Loss
                print('Iteration: {} Loss: {} Accuracy: {} %'.format(count, loss.item(), accuracy))
# visualization loss
plt.plot(iteration_list,loss_list)
plt.xlabel("Number of iteration")
plt.ylabel("Loss")
plt.title("RNN: Loss vs Number of iteration")
plt.show()
# visualization accuracy
plt.plot(iteration_list,accuracy_list,color = "red")
plt.xlabel("Number of iteration")
plt.ylabel("Accuracy")
plt.title("RNN: Accuracy vs Number of iteration")
plt.savefig('graph.png')
plt.show()
```
### Conclusion
In this tutorial, we learn:
1. Basics of pytorch
1. Linear regression with pytorch
1. Logistic regression with pytorch
1. Artificial neural network with with pytorch
1. Convolutional neural network with pytorch
- https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers/code
1. Recurrent neural network with pytorch
<br> If you have any questions or suggestions, I will be happy to hear them.
# TTV Retrieval for Kepler-36 (a well-studied, dynamically-interacting system)
In this notebook, we will perform a dynamical retrieval for Kepler-36 = KOI-277. With two neighboring planets of drastically different densities (the inner planet is rocky and the outer planet is gaseous; see [Carter et al. 2012](https://ui.adsabs.harvard.edu/abs/2012Sci...337..556C/abstract)), this is one of the more well-studied TTV systems in existence. First, let's import packages and download data from the Rowe et al. (2015) TTV catalog:
```
%matplotlib inline
import ttvnest
import numpy as np
koi = 277
nplanets = 2
data, errs, epochs = ttvnest.load_data.get_data(koi, nplanets)
```
Now, let's set up the ttvnest system:
```
kepler36_b = ttvnest.TTVPlanet(data[1], errs[1], epochs[1], mass_prior = ('Uniform', 0, 100.),
period_prior = ('Normal', 13.84, 0.01)
)
kepler36_c = ttvnest.TTVPlanet(data[0], errs[0], epochs[0], mass_prior = ('Uniform', 0, 100.),
period_prior = ('Normal', 16.23, 0.01)
)
kepler36 = ttvnest.TTVSystem(kepler36_b, kepler36_c)
```
Before retrieval, let's plot the data alone to see what they look like:
```
ttvnest.plot_utils.plot_ttv_data(kepler36)
```
Clear, anticorrelated signals! Let's retrieve:
```
results = kepler36.retrieve()
```
Let's check out our results. I'm not going to work out the Carter et al. (2012) posterior distribution on the eccentricity vectors since they use a different basis than I choose here. But it's probably worth converting their mass ratio constraints to what we should expect here. They get a mass ratio sum $q_+ = (M_1 + M_2)/M_\star= 3.51\times10^{-5}$. In ttvnest dynamical masses are normalized by $3\times10^{-6} = M_\mathrm{Earth}/M_\mathrm{Sun}$, so this gives $q_+ = 11.7$ in our units. Their planetary mass ratio is $q_p = M_1/M_2 = 0.55$. Taken together, this gives dynamical masses of $M_1/M_\star = 4.15$ and $M_2/M_\star = 7.55$.
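As a quick sanity check of that arithmetic (plain Python, independent of the ttvnest API):
```
q_plus = 3.51e-5      # (M_1 + M_2) / M_star from Carter et al. (2012)
unit = 3e-6           # ttvnest mass normalisation, M_Earth / M_Sun
q_p = 0.55            # M_1 / M_2

q_plus_units = q_plus / unit       # ~11.7
m2 = q_plus_units / (1 + q_p)      # ~7.55
m1 = q_p * m2                      # ~4.15
print(q_plus_units, m1, m2)
```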
Let's see if we get there...
```
kepler36.posterior_summary()
ttvnest.plot_utils.plot_results(kepler36, uncertainty_curves = 100,
sim_length = 365.25*10, outname = 'kepler36')
```
We are a little on the low side, but that's apparently to be expected from other works like Hadden & Lithwick (2017). Let's make the dynesty plots for good measure:
```
ttvnest.plot_utils.dynesty_plots(kepler36, outname = 'kepler36')
```
Wow, what a nice system. Let's save our results for later:
```
ttvnest.io_utils.save_results(kepler36, 'kepler36.p')
```
<a href="https://colab.research.google.com/github/AmberLJC/FedScale/blob/master/dataset/Femnist_stats.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **[Jupyter notebook] Understand the heterogeneous FL data.**
# Download the Femnist dataset and FedScale
Follow the download instructions in /content/FedScale/dataset/download.sh
```
# Download Fedscale and femnist dataset
!pwd
!wget -O /content/femnist.tar.gz https://fedscale.eecs.umich.edu/dataset/femnist.tar.gz
!tar -xf /content/femnist.tar.gz -C /content/
!rm -f /content/femnist.tar.gz
!echo -e "${GREEN}FEMNIST dataset downloaded!${NC}"
!git clone https://github.com/AmberLJC/FedScale.git
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import numpy as np
from FedScale.core.utils.femnist import FEMNIST
from FedScale.core.utils.utils_data import get_data_transform
from FedScale.core.utils.divide_data import DataPartitioner
from FedScale.core.argParser import args
```
# Data Loader
```
train_transform, test_transform = get_data_transform('mnist')
train_dataset = FEMNIST('/content/femnist', dataset='train', transform=train_transform)
test_dataset = FEMNIST('/content/femnist', dataset='test', transform=test_transform)
```
Partition the dataset by the `client_data_mapping` file, which gives the real-world client-level heterogeneity.
```
args.task = 'cv'
training_sets = DataPartitioner(data=train_dataset, args=args, numOfClass=62)
training_sets.partition_data_helper(num_clients=None, data_map_file='/content/femnist/client_data_mapping/train.csv')
#testing_sets = DataPartitioner(data=test_dataset, args=args, numOfClass=62, isTest=True)
#testing_sets.partition_data_helper(num_clients=None, data_map_file='/content/femnist/client_data_mapping/train.csv')
```
# Print and plot statistics of the dataset.
```
print(f'Total number of data samples: {training_sets.getDataLen()}')
print(f'Total number of clients: {training_sets.getClientLen()}')
print(f'The number of data samples of each client: {training_sets.getSize()}')
print(f'The number of unique labels of each client: {training_sets.getClientLabel()}')
fig, axs = plt.subplots(1, 2, sharey=True, tight_layout=True)
size_dist = training_sets.getSize()['size']
n_bins = 20
axs[0].hist(size_dist, bins=n_bins)
axs[0].set_title('Client data size distribution')
label_dist = training_sets.getClientLabel()
axs[1].hist(label_dist, bins=n_bins)
axs[1].set_title('Client label distribution')
```
# Visualize the clients' data.
```
rank=1
isTest = False
dropLast = True
partition = training_sets.use(rank - 1, isTest)
num_loaders = min(int(len(partition)/ args.batch_size/2), args.num_loaders)
dataloader = DataLoader(partition, batch_size=16, shuffle=True, pin_memory=True, timeout=60, num_workers=num_loaders, drop_last=dropLast)
for data in iter(dataloader):
plt.imshow(np.transpose(data[0][0].numpy(), (1, 2, 0)))
break
```
## Code for policy section
```
# Load libraries
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mlp
# Ensure type 1 fonts are used
mlp.rcParams['ps.useafm'] = True
mlp.rcParams['pdf.use14corefonts'] = True
mlp.rcParams['text.usetex'] = True
import seaborn as sns
import pandas as pd
import pickle
import itertools as it
```
## Solve for the final size of the outbreak in Lombardy, Italy
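The cell below solves the standard SIR final-size relation, $\ln s_\infty = R_0\,(s_\infty - 1)$, for the fraction $s_\infty$ of the population that is never infected; this is exactly the expression `log(x) - r0*(x-1)` whose root is found with sympy.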
```
# Estimate based on the value of the basic reproduction number as provided by best fit
# For formula, see here: https://web.stanford.edu/~jhj1/teachingdocs/Jones-on-R0.pdf
from sympy import Symbol, solve, log
x = Symbol('x')
r0 = 3.16
s_inf = solve(log(x)-r0*(x-1),x)[0]
print("% of the population that is still susceptible by the end of the outbreak in Lombardy, Italy: {0:10.4f}".format(s_inf*100))
print("% of the population that has ever been infected by the end of the outbreak in Lombardy, Italy: {0:10.4f}".format(100-s_inf*100))
# Set of colors
# For age group policies
color_list_shahin = ['orange','green','blue','purple','black']
# For additional baseline policies (50% or 100% of the population being asked to shelter-in-place)
color_list_add = ['dodgerblue','hotpink']
# Number of distinct ages in the UN age distribution
# Currently ages 0-100, with each age counted separately
n_ages = 101
# Shelter-in-place probabilities per age group, equivalent to confining roughly 1 million people of the considered age group in each case
age_ranges = [(0,14), (15,29), (30,49), (50,69), (70,100)]
isolation_rates_by_age = [0.803689, 0.713332, 0.380842, 0.358301, 0.516221]
# Learn about the structure of the folder containing the simulation results
all_possible_combos = []
for a, iso_rate in zip(age_ranges, isolation_rates_by_age):
combo = np.zeros(n_ages)
combo[a[0]:a[1]+1] = iso_rate
all_possible_combos.append(combo)
# Two possibilities for mean time to isolation: either 4.6 days (default value) or a large number to mimic no isolation in place
mean_time_to_isolations = [4.6, 10000]
all_possible_combos = list(it.product(mean_time_to_isolations, all_possible_combos))
NUM_COMBOS = len(all_possible_combos)
print("NUM COMBOS:",NUM_COMBOS)
mtti_val_even = all_possible_combos[0][0]
combo_frac_stay_home_even = all_possible_combos[0][1]
mtti_val_odd = all_possible_combos[1][0]
combo_frac_stay_home_odd = all_possible_combos[1][1]
print("Value of mean time to isolation - even index: ", mtti_val_even)
print("Combo fraction stay home - even index", combo_frac_stay_home_even)
print("Value of mean time to isolation - odd index: ", mtti_val_odd)
print("Combo fraction stay home - odd index: ", combo_frac_stay_home_odd)
# Learn about the structure of the folder containing the simulation results
all_possible_combos = []
for a in age_ranges:
# Either 50% or 100% of the population in each age group is asked to shelter-in-place
for val in [0.5, 1.0]:
combo = np.zeros(n_ages)
combo[a[0]:a[1]+1]=val
all_possible_combos.append(combo)
# Two possibilities for mean time to isolation: either 4.6 days (default value) or a large number to mimic no isolation in place
mean_time_to_isolations = [4.6, 10000]
all_possible_combos = list(it.product(mean_time_to_isolations, all_possible_combos))
NUM_COMBOS = len(all_possible_combos)
print("NUM COMBOS:",NUM_COMBOS)
mtti_val_even = all_possible_combos[0][0]
combo_frac_stay_home_even = all_possible_combos[0][1]
mtti_val_odd = all_possible_combos[1][0]
combo_frac_stay_home_odd = all_possible_combos[1][1]
print("Value of mean time to isolation - even index: ", mtti_val_even)
print("Combo fraction stay home - even index: ", combo_frac_stay_home_even)
print("Value of mean time to isolation - odd index: ", mtti_val_even)
print("Combo fraction stay home - odd index: ", combo_frac_stay_home_even)
# Set font sizes for plots
legend_fontsize = 13
title_fontsize = 15
xlab_fontsize = 23
ylab_fontsize = 23
xtick_fontsize = 17
ytick_fontsize = 17
```
## Functions to be used to plot four subgraphs in Figure 8
### Function to be used to plot the projected percentage of infected people in the population over time, in the absence of physical distancing
### Figures 8(a) and 8(b)
```
def perc_infected_age_group_node_removal(pop_size, group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder1, folder2, filename1, filename2, option, specific_title):
if option == 2:
nb = 0
# baseline
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline 0: No intervention")
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color='gray',linestyle='-.')
for j in range(combo_start,combo_end,2):
nb +=1
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
if i < 50:
folder = folder1
filename = filename1
else:
folder = folder2
filename = filename2
Mild = pd.read_csv( folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Age group: ", group_vec_age[nb-1])
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline: ", j-1)
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,color='red',linestyle='--')
plt.legend(['Absence of\n intervention']+['Ages ' + str(group_vec_age[i]) for i in range(len(group_vec_age))]+['All ages\n50\% confined','All ages\n100\% confined'], fontsize = 13)
plt.ylim(0,100)
plt.title(specific_title,fontsize=15)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.ylabel('Percentage of infected', fontsize=23)
plt.xlabel('Days since patient zero', fontsize=23)
return(plt)
elif option == 1:
nb = 0
# baseline
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
Infected_Trials=np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline 0: No intervention")
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color='gray',linestyle='-.')
for j in range(combo_start+1,combo_end,2):
nb +=1
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
if i < 50:
folder = folder1
filename = filename1
else:
folder = folder2
filename = filename2
Mild = pd.read_csv( folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Age group: ", group_vec_age[nb-1])
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline: ", j-1)
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,color='red',linestyle='--')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(len(group_vec_age))]+['All ages\n50\% confined','All ages\n100\% confined'], fontsize = 13)
plt.ylim(0,100)
plt.title(specific_title,fontsize=15)
plt.ylabel('Percentage of infected', fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero', fontsize=23)
return(plt)
else:
nb = 0
# baseline
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
Infected_Trials=np.zeros((100,sim_end+1))
for i in range(100):
Mild = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Documented = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_documented.csv',delimiter=' ',header=None)
Severe = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline 0: No intervention")
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color='gray',linestyle='-.')
for j in range(combo_start,combo_end):
nb +=1
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
if i < 50:
folder = folder1
filename = filename1
else:
folder = folder2
filename = filename2
Mild = pd.read_csv( folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
            Infected_Trials = Infected_Trials/pop_size*100.
print("Age group: ", group_vec_age[nb-1])
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline: ", j-1)
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,color='red',linestyle='--')
plt.ylim(0,100)
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(5)]+['All ages\n50\% confined','All ages\n100\% confined'], fontsize = 13)
plt.title(specific_title, fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.ylabel('Percentage of infected', fontsize=23)
plt.xlabel('Days since patient zero', fontsize=23)
return(plt)
```
### Function to be used to plot the projected number of deaths over time, in the absence of physical distancing
### Figures 8(c) and 8(d)
```
def death_age_group_node_removal(group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end,
folder1, folder2, filename1, filename2, option, specific_title):
if option == 2:
nb = 0
# Baseline - No intervention
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
D=np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv', delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Baseline 0: No intervention")
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color='gray',linestyle='-.')
for j in range(combo_start,combo_end,2):
nb +=1
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
if i < 50:
folder = folder1
filename = filename1
else:
folder = folder2
filename = filename2
Deaths = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_deaths.csv', delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Age group: ", group_vec_age[nb-1])
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today : ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_shahin[nb-1])
# Additional baselines - 50% and 100% of population stays home
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv', delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Baseline: ", j-1)
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_add[j-2], linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0, color='red', linestyle='--')
plt.legend(['Absence of\n intervention']+['Ages ' + str(group_vec_age[i]) for i in range(len(group_vec_age))]+['All ages\n50\% confined','All ages\n100\% confined'], fontsize=legend_fontsize)
plt.ylim(0,400)
plt.title(specific_title, fontsize=title_fontsize)
plt.xlabel('Days since patient zero', fontsize=xlab_fontsize)
plt.ylabel('Total deaths (thousands)', fontsize=ylab_fontsize)
plt.xticks(fontsize=xtick_fontsize)
plt.yticks(fontsize=ytick_fontsize)
return(plt)
elif option == 1:
nb = 0
# Baseline - No intervention
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv', delimiter=' ', header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Baseline 0: No intervention")
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color='gray',linestyle='-.')
# Average simulations per age group over n_sims random seeds
for j in range(combo_start+1,combo_end,2):
nb +=1
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
if i < 50:
folder = folder1
filename = filename1
else:
folder = folder2
filename = filename2
Deaths = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_deaths.csv', delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Age group: ", group_vec_age[nb-1])
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulatuon: ", D[today])
D = D/1000.
plt.plot(D,color=color_list_shahin[nb-1])
# Additional baselines - 50% and 100% of population stays home
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Baseline: ", j-1)
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,color='red',linestyle='--')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(5)]+['All ages\n50\% confined','All ages\n100\% confined'], fontsize = 13)
plt.ylim(0,400)
plt.title(specific_title,fontsize=15)
plt.ylabel('Total deaths (thousands)', fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero', fontsize=23)
return(plt)
else:
nb = 0
# baseline
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
D=np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Baseline 0: No intervention")
print("# of deaths on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", Infected_Trials[sim_end])
D = D/1000.
plt.plot(D,color='gray',linestyle='-.')
for j in range(combo_start,combo_end):
nb = nb+1
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
if i < 50:
folder = folder1
filename = filename1
else:
folder = folder2
filename = filename2
Deaths = pd.read_csv(folder + filename + str(j) + '_N' + str(i%50) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Age group: ", group_vec_age[nb-1])
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Baseline: ", j-1)
print("% infected on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,color='red',linestyle='--')
plt.ylim(0,400)
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(5)]+['All ages\n50\% confined','All ages\n100\% confined'], fontsize = 13)
plt.title(specific_title, fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.ylabel('Total deaths (thousands)', fontsize=23)
plt.xlabel('Days since patient zero', fontsize=23)
return(plt)
```
## Functions to be used to plot four subgraphs in Figure 9
### Function to be used to plot the projected percentage of infected people in the population over time, when physical distancing is in place
### Figures 9(a) and 9(b)
```
def perc_infected_age_group_node_removal_lockdown(pop_size, group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder, filename, option, specific_title):
if option == 2:
nb = 0
Infected_Trials = np.zeros((n_sims,sim_end+1))
# Baseline - "No intervention" scenario
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
for i in range(n_sims):
Mild = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_mild.csv', delimiter=' ', header=None)
Severe = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_severe.csv', delimiter=' ', header=None)
Critical = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_critical.csv', delimiter=' ',header=None)
R = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_recovered.csv', delimiter=' ', header=None)
D = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv', delimiter=' ', header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline 0: No intervention")
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color='gray',linestyle='-.')
for j in range(combo_start,combo_end,2):
nb +=1
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv', delimiter=' ',header=None)
Severe = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv', delimiter=' ',header=None)
Critical = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv', delimiter=' ',header=None)
R = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv', delimiter=' ',header=None)
D = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv', delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Age group: ", group_vec_age[nb-1])
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_shahin[nb-1])
# new baseline - 50% or 100% of the population of an age group is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials/pop_size*100.
Infected_Trials = Infected_Trials.mean(axis=0)
print("Baseline: ", j-1)
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,linestyle='--',color='red')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(len(group_vec_age))]+['All ages\n50\% confined','All ages\n100\% confined'],fontsize=13)
plt.ylim(0,100)
plt.title(specific_title)
plt.ylabel('Percentage of infected',fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero',fontsize=23)
return(plt)
elif option == 1:
nb = 0
Infected_Trials = np.zeros((n_sims,sim_end+1))
# baseline
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
for i in range(n_sims):
Mild = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline 0: No intervention")
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color='gray',linestyle='-.')
for j in range(combo_start+1,combo_end,2):
nb = nb+1
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Age group: ", group_vec_age[nb-1])
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv( base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline: ", j-1)
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,linestyle='--',color='red')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(5)]+['All ages\n50\% confined','All ages\n100\% confined'],fontsize=13)
plt.ylim(0,100)
plt.title(specific_title)
plt.ylabel('Percentage of infected',fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero',fontsize=23)
return(plt)
else:
nb = 0
        Infected_Trials = np.zeros((n_sims,sim_end+1))
# baseline
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
for i in range(n_sims):
Mild = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Documented = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_documented.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline 0: No intervention")
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color='gray',linestyle='-.')
for j in range(combo_start,combo_end):
nb +=1
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Age group: ", group_vec_age[j-1])
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials, color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
Infected_Trials = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Mild = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_mild.csv',delimiter=' ',header=None)
Severe = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_severe.csv',delimiter=' ',header=None)
Critical = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_critical.csv',delimiter=' ',header=None)
R = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_recovered.csv',delimiter=' ',header=None)
D = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
Infected_Trials[i,:] = Mild+Severe+Critical+R+D
Infected_Trials = Infected_Trials.mean(axis=0)
Infected_Trials = Infected_Trials/pop_size*100.
print("Baseline ",j-1)
print("% infected on lockdown day: ", Infected_Trials[t_lockdown_vec[0]])
print("% infected today: ", Infected_Trials[today])
print("% infected at the end of the simulation: ", Infected_Trials[sim_end])
plt.plot(Infected_Trials,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,linestyle='--',color='red')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(len(group_vec_age))]+['All ages\n50\% confined','All ages\n100\% confined'],fontsize=13)
plt.ylim(0,100)
plt.title(specific_title)
plt.ylabel('Percentage of infected',fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero',fontsize=23)
return(plt)
```
### Function to be used to plot the projected number of deaths over time, when physical distancing is in place
### Figures 9(c) and 9(d)
```
def death_age_group_node_removal_lockdown(group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder, filename, option, specific_title):
if option == 2:
nb = 0
D=np.zeros((n_sims,sim_end+1))
# baseline
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
for i in range(n_sims):
Deaths = pd.read_csv(base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Baseline 0: No intervention")
print("# of deaths on lockdown day", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color='gray',linestyle='-.')
# not baseline
for j in range(combo_start,combo_end,2):
nb +=1
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Age group: ", group_vec_age[nb-1])
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_shahin[nb-1])
# new baseline - 50% or 100% of the population of an age group is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Baseline: ",j-1)
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0, linestyle='--',color='red')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(5)]+['All ages\n50\% confined','All ages\n100\% confined'],fontsize=13)
plt.ylim(0,400)
plt.title(specific_title)
plt.ylabel('Total deaths (thousands)',fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero',fontsize=23)
return(plt)
elif option == 1:
nb = 0
# Baseline
D=np.zeros((n_sims,sim_end+1))
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
for i in range(n_sims):
Deaths = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Baseline: No intervention")
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color='gray',linestyle='-.')
# Per age group
for j in range(combo_start+1,combo_end,2):
nb = nb +1
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Age group: ", group_vec_age[nb-1])
print("# of deaths on lockdown day: ", D[t_lockdown_vec[0]])
print("# of deaths today: ", D[today])
print("# of deaths at the end of the simulation: ", D[sim_end])
D = D/1000.
plt.plot(D,color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Baseline: ", j-1)
print("# of deaths on lockdown day: " + str(D[t_lockdown_vec[0]]))
print("# of deaths today: " + str(D[today]))
print("# of deaths at the end of the simulation: " + str(D[sim_end]))
D = D/1000.
plt.plot(D,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,linestyle='--',color='red')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(5)]+['All ages\n50\% confined','All ages\n100\% confined'],fontsize=13)
plt.ylim(0,400)
plt.title(specific_title)
plt.ylabel('Total deaths (thousands)',fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero',fontsize=23)
return(plt)
else:
nb = 0
# baseline
D = np.zeros((n_sims,sim_end+1))
base_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline_0_paramsweep_n10000000.0_i0_N'
base_folder = 'nolockdown_noage/'
for i in range(n_sims):
Deaths = pd.read_csv( base_folder + base_filename + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Baseline: No intervention")
print("# of deaths on lockdown day: " + str(D[t_lockdown_vec[0]]))
print("# of deaths today: " + str(D[today]))
print("# of deaths at the end of the simulation: " + str(D[sim_end]))
D = D/1000.
plt.plot(D,color='gray',linestyle='-.')
# Per age group
for j in range(combo_start,combo_end):
nb +=1
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(folder + filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:]=Deaths
D = D.mean(axis=0)
print("Age group: ", group_vec_age[nb-1])
print("# of deaths on lockdown day: " + str(D[t_lockdown_vec[0]]))
print("# of deaths today: " + str(D[today]))
print("# of deaths at the end of the simulation: " + str(D[sim_end]))
D = D/1000.
plt.plot(D,color=color_list_shahin[nb-1])
# new baseline - 50% population is isolated
base2_filename = 'lombardy_distributed_agepolicy_nolockdown_baseline2_0_paramsweep_n10000000.0_i'
base2_folder = 'nolockdown_fullisolation/'
for j in range(2,4):
D = np.zeros((n_sims,sim_end+1))
for i in range(n_sims):
Deaths = pd.read_csv(base2_folder + base2_filename + str(j) + '_N' + str(i) + '_p0.029_m4_s22_deaths.csv',delimiter=' ',header=None)
D[i,:] = Deaths
D = D.mean(axis=0)
print("Baseline: ",j-1)
print("# of deaths on lockdown day:" + str(t_lockdown_vec[0]))
print("# of deaths today: " + str(D[today]))
print("# of deaths at the end of the simulation: "+ str(D[sim_end]))
D = D/1000.
plt.plot(D,color=color_list_add[j-2],linestyle='-.')
plt.axvline(t_lockdown_vec[0], 0,linestyle='--',color='red')
plt.legend(['Absence of\nintervention']+['Ages ' + str(group_vec_age[i]) for i in range(len(group_vec_age))]+['All ages\n50\% confined','All ages\n100\% confined'],fontsize=13)
plt.ylim(0,400)
plt.title(specific_title)
plt.ylabel('Total deaths (thousands)',fontsize=23)
plt.xticks(fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Days since patient zero',fontsize=23)
return(plt)
```
## Figure 8(a)
```
# Mean time to isolation 4.6 and 50% of age category removed
t_lockdown_vec = [46]
n_sims = 100  # number of random seeds, matching the other figure cells
sim_end = 119
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename1 = 'lombardy_distributed_agepolicy_0_paramsweep_n10000000.0_i'
filename2 = 'lombardy_distributed_agepolicy_1_paramsweep_n10000000.0_i'
folder1 = 'perc_policy_results/run1/'
folder2 = 'perc_policy_results/run2/'
option = 2
specific_title = ''
perc_infected_age_group_node_removal(pop_size, group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder1, folder2, filename1, filename2, option, specific_title)
```
## Figure 8(c)
```
# Mean time to isolation 4.6 and 50% of age category removed
t_lockdown_vec = [46]
n_sims = 100  # number of random seeds, matching the other figure cells
sim_end = 119
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename1 = 'lombardy_distributed_agepolicy_0_paramsweep_n10000000.0_i'
filename2 = 'lombardy_distributed_agepolicy_1_paramsweep_n10000000.0_i'
folder1 = 'perc_policy_results/run1/'
folder2 = 'perc_policy_results/run2/'
option = 2
#specific_title = 'Mean Time to Isolation = 4.6 days for all' + '\n50% stay home, per age group'
specific_title = ''
death_age_group_node_removal( group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder1, folder2, filename1, filename2, option, specific_title)
```
## Figure 8(b)
```
# Mean time to isolation 4.6 and 100% of age category removed
t_lockdown_vec = [46]
n_sims = 100
sim_end = 119
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename1 = 'lombardy_distributed_agepolicy_0_paramsweep_n10000000.0_i'
filename2 = 'lombardy_distributed_agepolicy_1_paramsweep_n10000000.0_i'
folder1 = 'perc_policy_results/run1/'
folder2 = 'perc_policy_results/run2/'
option = 1
specific_title = ''
perc_infected_age_group_node_removal(pop_size, group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder1, folder2, filename1, filename2, option, specific_title)
```
## Figure 8(d)
```
# Mean time to isolation 4.6 and 100% of age category removed
t_lockdown_vec = [46]
n_sims = 100
sim_end = 119
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename1 = 'lombardy_distributed_agepolicy_0_paramsweep_n10000000.0_i'
filename2 = 'lombardy_distributed_agepolicy_1_paramsweep_n10000000.0_i'
folder1 = 'perc_policy_results/run1/'
folder2 = 'perc_policy_results/run2/'
option = 1
specific_title = ''
death_age_group_node_removal(group_vec_age,t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder1, folder2, filename1, filename2, option, specific_title)
```
## Figure 9(a)
```
# Mean time to isolation 4.6 and 50% of age category removed
t_lockdown_vec = [46]
n_sims = 100
sim_end = 119
# As of March 29 of 2020
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename = 'lombardy_distributed_agepolicy_yeslockdown_0_paramsweep_n10000000.0_i'
folder = 'lockdown_perc_policy_results/'
option = 2
specific_title = ''
perc_infected_age_group_node_removal_lockdown(pop_size, group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder, filename, option, specific_title)
```
## Figure 9(c)
```
# Mean time to isolation 4.6 and 50% of age category removed
t_lockdown_vec = [46]
n_sims = 100
sim_end = 119
# As of March 29 of 2020
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename = 'lombardy_distributed_agepolicy_yeslockdown_0_paramsweep_n10000000.0_i'
folder = 'lockdown_perc_policy_results/'
option = 2
specific_title = ''
death_age_group_node_removal_lockdown(group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder, filename, option, specific_title)
```
## Figure 9(b)
```
# Mean time to isolation 4.6 and 100% of age category removed
t_lockdown_vec = [46]
n_sims = 100
sim_end = 119
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename = 'lombardy_distributed_agepolicy_yeslockdown_0_paramsweep_n10000000.0_i'
folder = 'lockdown_perc_policy_results/'
option = 1
# Lombardy - Time of Lockdown = 46 days\n, \nInfected = Mild+Severe+Critical+R+D
#specific_title = 'Mean Time to Isolation = 4.6 days for all' + '\n100% stay home, per age group' + '\n+ Social distance increased by a factor of 2'
specific_title = ''
perc_infected_age_group_node_removal_lockdown(pop_size, group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder, filename, option, specific_title)
```
## Figure 9(d)
```
# Mean time to isolation 4.6 and 100% of age category removed
t_lockdown_vec = [46]
n_sims = 100
sim_end = 119
today = 67
group_vec_age = ['0-14','15-29','30-49','50-69','70+']
combo_start = 0
combo_end = 10
pop_size = 10000000
filename = 'lombardy_distributed_agepolicy_yeslockdown_0_paramsweep_n10000000.0_i'
folder = 'lockdown_perc_policy_results/'
option = 1
specific_title = ''
death_age_group_node_removal_lockdown(group_vec_age, t_lockdown_vec, n_sims, sim_end, today, combo_start, combo_end, folder, filename, option, specific_title)
```
```
%load_ext autoreload
%autoreload 2
```
# Sampling from a Bayesian network: an open problem
A Bayesian network encodes a probability distribution. It is often desirable to be able to sample from a Bayesian network. The most common way to do this is via forward sampling (also called prior sampling). It's a really dumb algorithm that is trivial to implement. You just loop over the nodes in breadth-first order and sample a value for each node, conditioning on its parents (which have already been sampled).
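For reference, here is a minimal sketch of forward sampling over a generic network (the helper structures are hypothetical, not `hedgehog`'s API):
```
import random

def forward_sample(nodes, parents, cpt):
    """Forward-sampling sketch.

    nodes:   node names in topological (breadth-first) order
    parents: dict mapping node -> list of parent names
    cpt:     dict mapping node -> {parent_values_tuple: {value: probability}}
    """
    sample = {}
    for node in nodes:
        key = tuple(sample[p] for p in parents[node])
        dist = cpt[node][key]  # raises KeyError for unseen parent combinations
        values, probs = zip(*dist.items())
        sample[node] = random.choices(values, weights=probs, k=1)[0]
    return sample
```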
The problem with forward sampling is that impossible situations can arise for some networks. Basically, forward sampling doesn't ensure that the produced samples are *valid*. The easiest way to grok this is via some examples.
## Example 1
```
import hedgehog as hh
import pandas as pd
def example_1():
X = pd.DataFrame(
[
[True, True, True],
[False, False, False]
],
columns=['A', 'B', 'C']
)
bn = hh.BayesNet(
(['A', 'B'], 'C')
)
bn.fit(X)
return bn
bn = example_1()
bn
bn.full_joint_dist()
```
The problem with forward sampling in this case is that if we sample A and then B independently, we can end up sampling pairs (A, B) that don't exist. This will raise an error when we condition P(C) on its parents.
In `hedgehog`, this will raise a `KeyError` when `sample` is called because the distribution that corresponds to `(A=False, B=True)` doesn't exist.
```
while True:
try:
bn.sample()
except KeyError:
print('Yep, told you.')
break
```
## Example 2
```
import hedgehog as hh
import pandas as pd
def example_2():
X = pd.DataFrame(
[
[1, 1, 1, 1],
[2, 1, 2, 1]
],
columns=['A', 'B', 'C', 'D']
)
bn = hh.BayesNet(
('A', 'B'),
('B', 'C'),
(['A', 'C'], 'D')
)
bn.fit(X)
return bn
bn = example_2()
bn
```
In this case, a problem will occur if we sample `(A, 1)`, then `(B, 1)`, then `(C, 2)`. Indeed, `(A, 1)` and `(C, 2)` have never been seen together, so there's no way of sampling `D`.
```
while True:
try:
bn.sample()
except KeyError:
print('Yep, told you.')
break
```
One way to circumvent these issues would be to sample from the full joint distribution. But this is too costly. Another way is to add a prior distribution by supposing that every combination occurred once, but that's not elegant.
Ideally we would like to have some way of doing forward sampling that only produces valid data. This is still an open question for me.
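In the meantime, a pragmatic workaround is rejection-style forward sampling: keep resampling until a draw doesn't hit an unseen parent combination. A minimal sketch against the usage above (assuming, as in the examples, that `KeyError` is the only failure mode):
```
def sample_valid(bn, max_tries=1000):
    # Retry forward sampling until it produces a valid draw.
    # Note: this silently discards invalid draws, so it changes the sampling
    # distribution, and it may loop for a long time on pathological networks.
    for _ in range(max_tries):
        try:
            return bn.sample()
        except KeyError:
            continue
    raise RuntimeError('no valid sample found')
```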
# Collect Physicists Raw Data
The goal of this notebook is to collect demographic data on the list of [physicists notable for their achievements](../data/raw/physicists.txt). Wikipedia contains this semi-structured data in an *Infobox* on the top right side of the article for each physicist. However, similar data is available in a more machine readable, [JSON](https://www.json.org/) format from [DBpedia](https://wiki.dbpedia.org/about). We will need to send HTTP requests to DBpedia to get the JSON data. For an example, compare *Albert Einstein's* [Wikipedia infobox](https://en.wikipedia.org/wiki/Albert_Einstein) to his [DBPedia JSON](http://dbpedia.org/data/Albert_Einstein.json). It is important to realize, that although the data is similar, it is not identical.
The shortcomings of Wikipedia infoboxes and the advantages of DBpedia datasets are explained in section 4.3 of [DBpedia datasets](https://wiki.dbpedia.org/services-resources/datasets/dbpedia-datasets#h434-10). But basically the summary is that DBpedia data is much cleaner and better structured than Wikipedia Infoboxes as it is based on hand-generated mappings of Wikipedia infoboxes / templates to a [DBpedia ontology](https://wiki.dbpedia.org/services-resources/ontology). Consequently, we will be using DBpedia as the data source for this project.
However, DBpedia does have the disadvantage that its content is roughly 6-18 months behind updates applied to Wikipedia content. This is due to its data being generated from a [static dump of Wikipedia content](https://wiki.dbpedia.org/online-access/DBpediaLive) in a process that takes approximately 6 months. The fact that the data is not in sync with the latest Wikipedia content is not of great significance for this project as the data is edited infrequently. Also when edits are made, they tend to be only minor.
## Setting the Environment
A few initialization steps are needed to setup the environment:
- The locale needs to be set for all categories to the user’s default setting (typically specified in the LANG environment variable) to enable correct sorting of physicists names with accents.
- A bool constant `FETCH_JSON_DATA` needs to be set to decide whether to fetch the JSON data. Set it to False so that the previously fetched data is used; in this case the results of the study are guaranteed to be reproducible. Set it to True so that the latest data is fetched; in this case it is possible that the results of the study will change.
```
import locale
locale.setlocale(locale.LC_ALL, '')
FETCH_JSON_DATA = False
```
## Constructing the URLs
To make the HTTP requests, we will need a list of URLs representing the resources (i.e the physicists). It's fairly easy to construct these URLs from the list of notable physicists. However, it's important to "quote" any physicist name in unicode since unicode characters are not allowed in URLs. OK let's create the list now.
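For illustration, percent-encoding can be done with the standard library before the name is appended to the base URL (a standalone snippet, not part of `src.data.url_utils`):
```
from urllib.parse import quote

name = 'Paul Erdős'
print(quote(name.replace(' ', '_')))  # Paul_Erd%C5%91s
```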
```
import gzip
import os
import shutil
from collections import OrderedDict
import jsonlines
import pandas as pd
from src.data.jsonl_utils import read_jsonl
from src.data.url_utils import DBPEDIA_DATA_URL
from src.data.url_utils import fetch_json_data
from src.data.url_utils import urls_progress_bar
def construct_urls(file='../data/raw/physicists.txt'):
"""Construct DBpedia data URLs from list in file.
Args:
file (str): File containing a list of url filepaths
with spaces replacing underscores.
Returns:
list(str): List of URLs.
"""
with open(file, encoding='utf-8') as file:
names = [line.rstrip('\n') for line in file]
urls = [DBPEDIA_DATA_URL + name.replace(' ', '_') + '.json'
for name in names]
return urls
urls_to_fetch = construct_urls()
assert(len(urls_to_fetch) == 1069)
```
## Fetching the Data
Now we have the list of URLs, it's time to make the HTTP requests to acquire the data. The code is asynchronous, which dramatically helps with performance. It is important to set the `max_workers` parameter sensibly in order to crawl responsibly and not hammer the site's server. Although the site seems to be rate limited, it's still good etiquette.
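The helper `fetch_json_data` comes from `src.data.url_utils`; conceptually it does something like the following simplified sketch (an illustration only, not the project's actual implementation):
```
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_json_sketch(urls, max_workers=20, timeout=30):
    """Fetch JSON from many URLs concurrently; returns {url: json or None}."""
    def fetch(url):
        response = requests.get(url, timeout=timeout)
        return url, (response.json() if response.ok else None)

    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(fetch, url) for url in urls]
        for future in as_completed(futures):
            url, data = future.result()
            results[url] = data
    return results
```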
```
jsonl_file = '../data/raw/physicists.jsonl'
if FETCH_JSON_DATA:
json_data = fetch_json_data(urls_to_fetch, max_workers=20, timeout=30,
progress_bar=urls_progress_bar(len(urls_to_fetch)))
else:
    json_data = read_jsonl(jsonl_file + '.gz')
```
Let's sort the data alphabetically by URL, confirm that all the data was fetched and take a look at the first JSON response.
```
if FETCH_JSON_DATA:
json_data = OrderedDict(sorted(json_data.items(), key=lambda x: locale.strxfrm(x[0])))
assert(len(json_data) == 1069)
print(list(json_data.keys())[0])
print(list(json_data.values())[0])
else:
assert(len(json_data) == 1058)
print(json_data[0])
```
It is clear that every request successfully received a response. However, we see that some responses came back empty from the server. Basically, although there are Wikipedia pages for these physicists, they do not have a corresponding page in DBpedia (or the page in DBpedia has a different name). Not to worry, there are only 11 and they are not so famous, so we will just exclude these "Z-listers" from the analysis.
```
if FETCH_JSON_DATA:
urls_to_drop = [url for (url, data) in json_data.items() if not data]
assert(len(urls_to_drop) == 11)
display(urls_to_drop)
if FETCH_JSON_DATA:
json_data = [data for data in json_data.values() if data]
assert(len(json_data) == 1058)
```
## Persisting the Data
Now that we have the list of JSON responses, we would like to persist them for later analysis. We will use [Json Lines](http://jsonlines.org/) as it seems like a convenient format for storing structured data that may be processed one record at a time.
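Each line of a `.jsonl` file is a standalone JSON document, so the file can be streamed one record at a time. A quick illustration with the standard library (separate from the `jsonlines` writer used below):
```
import json

lines = ['{"name": "Albert Einstein"}', '{"name": "Marie Curie"}']
records = [json.loads(line) for line in lines]
print(records[0]['name'])  # Albert Einstein
```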
```
if FETCH_JSON_DATA:
with jsonlines.open(jsonl_file, 'w') as writer:
writer.write_all(json_data)
```
Let's do a quick sanity check to make sure the file contains the expected number of records.
```
if FETCH_JSON_DATA:
json_lines = read_jsonl(jsonl_file)
assert(len(json_lines) == 1058)
```
Finally, let's compress the file to reduce its footprint.
```
if FETCH_JSON_DATA:
with open(jsonl_file, 'rb') as src, gzip.open(jsonl_file + '.gz', 'wb') as dest:
shutil.copyfileobj(src, dest)
os.remove(jsonl_file)
```
## UBC Intro to Machine Learning
### APIs
Instructor: Socorro Dominguez
February 05, 2022
## Exercise to try in your local machine
## Motivation
For our ML class, we want to do a Classifier that differentiates images from dogs and cats.
## Problem
We need a dataset to do this. Our friends don't have enough cats and dogs.
Let's take free, open and legal data from the [Unsplash Image API](https://unsplash.com/developers).
## Caveats
Sometimes, raw data is unsuitable for machine learning algorithms. For instance, we may want:
- Only images that are landscape (i.e. width > height)
- All our images to be of the same resolution
---
## Step 1: Get cat and dog image URLs from the API
We will use the [`search/photos` GET method](https://unsplash.com/documentation#search-photos).
```
import requests
import config as cfg
# API variables
root_endpoint = 'https://api.unsplash.com/'
client_id = cfg.splash['key']
# Wrapper function for making API calls and grabbing results
def search_photos(search_term):
api_method = 'search/photos'
endpoint = root_endpoint + api_method
response = requests.get(endpoint,
params={'query': search_term, 'per_page': 30, 'client_id': client_id})
status_code, result = response.status_code, response.json()
if status_code != 200:
print(f'Bad status code: {status_code}')
image_urls = [img['urls']['small'] for img in result['results']]
return image_urls
dog_urls = search_photos('dog')
cat_urls = search_photos('cat')
cat_urls
```
---
## Step 2: Download the images from the URLs
(Step 2a: Google [how to download an image from a URL in Python](https://stackoverflow.com/a/40944159))
We'll just define the function to download an image for now. Later on, we'll use it on images one at a time (but after doing some processing).
```
from PIL import Image
def download_image(url):
image = Image.open(requests.get(url, stream=True).raw)
return image
test_img = download_image(cat_urls[0])
test_img.show()
```
---
## Step 3: Download and save images that meet our requirements
We'll need to know how to work with the [PIL Image data type](https://pillow.readthedocs.io/en/stable/reference/Image.html), which is what our `download_image(url)` function returns. Namely, we need to be able to a) get its resolution and b) resize it.
```
import os
def is_landscape(image):
return image.width > image.height
def save_category_images(urls, category_name, resolution=(256, 256)):
save_folder = f'saved_images/{category_name}'
    os.makedirs(save_folder, exist_ok=True)  # create the folder (and any missing parents)
for i, url in enumerate(urls):
image = download_image(url)
if is_landscape(image):
image = image.resize(resolution)
filename = f'{i:05d}.jpg'
image.save(os.path.join(save_folder, filename))
save_category_images(dog_urls, 'dogs')
save_category_images(cat_urls, 'cats')
```
# Assignment 2: Implementation of Selection Sort
## Deliverables:
We will again generate random data for this assignment.
1) Please set up five data arrays of length 5,000, 10,000, 15,000, 20,000, and 25,000 of uniformly distributed random numbers (you may use either integers or floating point).
Ensure that a common random number seed is used to generate each of the arrays.
2) Execute the base algorithm (Selection Sort) for each of the random number arrays, noting the execution time with each execution.
Use one of the timing methods we learned in class.
3) Just as in the last assignment, please organize the results of the study into a table showing the size of data array and the time taken to sort the array.
Discuss the differences in timing and how they relate to data type and length of array.
4) Use Python matplotlib or Seaborn to generate a plot with the size of the data set on the horizontal axis and execution time in milliseconds on the vertical axis.
The plot should show execution time against problem size for each form of the algorithm being tested.
### Prepare an exec summary of your results, referring to the table and figures you have generated. Explain how your results relate to big O notation. Describe your results in language that management can understand. This summary should be included as text paragraphs in the Jupyter notebook. Explain how the algorithm works and why it is a useful to data engineers.
# Discussion
### The selection sort algorithm as implemented below uses a nested for loop. The inner loop identifies the smallest element of an array and its index, while the outer loop manipulates the arrays (appends the smallest element to the new array and removes it from the parent array). Because of these two nested loops, the work grows at a rate of approximately n*n: for each element placed in the new array, we scan the remaining elements to find the smallest. In big O notation, this is denoted O(n^2). Figure 1 below shows the sort times as a function of the length of the array; the curve clearly demonstrates the non-linear scaling of this algorithm, which is confirmed by taking the square root of the time. Figure 2 shows the square root of time as a function of the length of the array and is approximately linear.
### In some data retrieval systems items are required to be indexed sequentially, so we need methodologies to sort them. Selection sort provides this methodology in an easy-to-implement fashion; however, it is not very efficient due to the nested operations. Below are the two functions required for the sort:
1) FindSmallest starts at the first index of an array and stores that value in an object 'smallest', which is used in a repeated logical comparison.
As we progress through the length of the array, each time the next value is smaller than 'smallest', 'smallest' is replaced and its index is captured in 'smallest_index'.
This continues until the entire array is processed.
2) SelectionSort uses FindSmallest in a nested fashion to find the smallest value in the given array and append it to a new array.
The found value is removed from the original array (via the index returned by FindSmallest, 'smallest_index') and the algorithm continues until there are no elements left in the original array. The new array is returned along with the elapsed time to complete the sort in milliseconds.
```
import numpy as np
import pandas as pd
from datetime import datetime
import seaborn as sns
import time
#FindSmallest starts at the first index of an array and stores that value in 'smallest', which is used in a repeated comparison. As we progress through the array, each time the next value is smaller than 'smallest', it replaces 'smallest' and its index is captured in 'smallest_index'. This continues until the entire array is processed.
def FindSmallest(arr):
smallest = arr[0]
smallest_index=0
for i in range(1, len(arr)):
if arr[i] < smallest:
smallest = arr[i]
smallest_index = i
return smallest_index, smallest
# SelectionSort uses FindSmallest in a nested fashion to find the smallest value in the given array and append it to a new array. The found value is removed from the original array (via the index returned by FindSmallest, 'smallest_index') and the algorithm continues until there are no elements left in the original array. The new array is returned along with the elapsed time to complete the sort in milliseconds.
def SelectionSort(arr):
newArr = []
start = time.perf_counter()
for i in range(len(arr)):
        smallest_index, smallest = FindSmallest(arr)  # locate the smallest remaining element (single call per pass)
newArr.append(smallest) #adds smallest element to new array.
arr = np.delete(arr, smallest_index) # removes smallest element from parent array by index.
end = time.perf_counter()
return newArr , (end-start)*1E3
```
# A. Generate arrays with a common random seed
```
#Sets the Random Seed
RANDOM_SEED = 123
np.random.seed(RANDOM_SEED)
arr5E4 = np.random.randint(low=1, high= 1000001, size=5000)#5,000 elements, 1-1E6 (inclusive)
np.random.seed(RANDOM_SEED)
arr10E4 = np.random.randint(low=1, high= 1000001, size=10000)#10,000 elements, 1-1E6 (inclusive)
np.random.seed(RANDOM_SEED)
arr15E4 = np.random.randint(low=1, high= 1000001, size=15000)#15,000 elements, 1-1E6 (inclusive)
np.random.seed(RANDOM_SEED)
arr20E4 = np.random.randint(low=1, high= 1000001, size=20000)#20,000 elements, 1-1E6 (inclusive)
np.random.seed(RANDOM_SEED)
arr25E4 = np.random.randint(low=1, high= 1000001, size=25000)#25,000 elements, 1-1E6 (inclusive)
```
# B. Sort using SelectionSort function
```
sorted_5E4 = SelectionSort(arr5E4)
sorted_10E4 = SelectionSort(arr10E4)
sorted_15E4 = SelectionSort(arr15E4)
sorted_20E4 = SelectionSort(arr20E4)
sorted_25E4 = SelectionSort(arr25E4)
Summary = {
'NumberOfElements': [ len(sorted_5E4[0]), len(sorted_10E4[0]), len(sorted_15E4[0]),len(sorted_20E4[0]), len(sorted_25E4[0])],
'Time(ms)': [ sorted_5E4[1], sorted_10E4[1], sorted_15E4[1], sorted_20E4[1], sorted_25E4[1]]}
df = pd.DataFrame.from_dict(Summary)
df['rt(Time)'] = np.sqrt(df['Time(ms)'])
display(df)
```
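As a quick sanity check of the O(n^2) claim, the ratio of any two sort times should be close to the square of the ratio of the corresponding array lengths. A small check using the `df` built above:
```
# Expected ratio for O(n^2) growth is (n2/n1)^2; compare with the observed time ratio
n1, n2 = df['NumberOfElements'].iloc[0], df['NumberOfElements'].iloc[-1]
t1, t2 = df['Time(ms)'].iloc[0], df['Time(ms)'].iloc[-1]
print('expected ratio ~', (n2 / n1) ** 2, ' observed ratio ~', t2 / t1)
```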
## Fig 1. Sort times in milliseconds as a function of the number of elements.
```
sns.scatterplot(x=df['NumberOfElements'], y=df['Time(ms)'])
```
## Fig 2. Square root of sort times in milliseconds as a function of the number of elements.
```
sns.scatterplot(x=df['NumberOfElements'], y=df['rt(Time)'])
```
# ------------------------ END ------------------------
code graveyard
```
### This code is for testing
#np.random.seed(123)
#arr7_39 = np.random.randint(low=7, high= 39, size=12)
#print("the array is",arr7_39)
#small = FindSmallest(arr7_39)
#print('the smallest index is at', small[0], 'and has value of', small[1])
#testing = SelectionSort(arr7_39)
#print('the array sorted is:', testing[0])
#print('execution time was: ', testing[1], 'ms')
```
# Demos: Lecture 17
## Demo 1: bit flip errors
```
import pennylane as qml
from pennylane import numpy as np
import matplotlib.pyplot as plt
from lecture17_helpers import *
from scipy.stats import unitary_group
dev = qml.device("default.mixed", wires=1)
@qml.qnode(dev)
def prepare_state(U, p):
qml.QubitUnitary(U, wires=0)
qml.BitFlip(p, wires=0)
#qml.DepolarizingChannel(p, wires=0)
return qml.state()
n_samples = 500
original_states = []
flipped_states = []
for _ in range(n_samples):
U = unitary_group.rvs(2)
original_state = prepare_state(U, 0)
flipped_state = prepare_state(U, 0.3)
original_states.append(convert_to_bloch_vector(original_state))
flipped_states.append(convert_to_bloch_vector(flipped_state))
plot_bloch_sphere(original_states)
plot_bloch_sphere(flipped_states)
```
## Demo 2: depolarizing noise
## Demo 3: fidelity and trace distance
$$
F(\rho, \sigma) = \left( \hbox{Tr} \sqrt{\sqrt{\rho}\sigma\sqrt{\rho}} \right)^2
$$
```
from scipy.linalg import sqrtm
def fidelity(rho, sigma):
sqrt_rho = sqrtm(rho)
inner_thing = np.linalg.multi_dot([sqrt_rho, sigma, sqrt_rho])
return np.trace(sqrtm(inner_thing)) ** 2
proj_0 = np.array([[1, 0], [0, 0]])
proj_1 = np.array([[0, 0], [0, 1]])
fidelity(proj_0, proj_0)
fidelity(proj_0, proj_1)
```
$$
T(\rho, \sigma) = \frac{1}{2} \hbox{Tr} \left( \sqrt{(\rho - \sigma)^\dagger (\rho - \sigma)} \right)
$$
```
def trace_distance(rho, sigma):
rms = rho - sigma
inner_thing = np.dot(rms.conj().T, rms)
return 0.5 * np.trace(sqrtm(inner_thing))
U = unitary_group.rvs(2)
p_vals = np.linspace(0, 1, 10)
fids = []
tr_ds = []
for p in p_vals:
original_state = prepare_state(U, 0)
error_state = prepare_state(U, p)
fids.append(fidelity(original_state, error_state))
tr_ds.append(trace_distance(original_state, error_state))
plt.scatter(p_vals, fids)
plt.scatter(p_vals, tr_ds)
```
## Demo 4: VQE for $H_2$ molecule
```
bond_length = 1.3228
symbols = ["H", "H"]
coordinates = np.array([0.0, 0.0, -bond_length/2, 0.0, 0.0, bond_length/2])
H, n_qubits = qml.qchem.molecular_hamiltonian(symbols, coordinates)
print(H)
```
Ground state of $H_2$ looks like:
$$
|\psi_g(\theta)\rangle = \cos(\theta/2) |1100\rangle - \sin(\theta/2) |0011\rangle
$$
```
dev = qml.device("default.qubit", wires=4)
def prepare_ground_state(theta):
qml.PauliX(wires=0)
qml.PauliX(wires=1)
qml.DoubleExcitation(theta, wires=range(4))
return qml.expval(H)
opt = qml.GradientDescentOptimizer(stepsize=0.5)
ideal_qnode = qml.QNode(prepare_ground_state, dev)
theta = np.array(0.0, requires_grad=True)
energies = []
for _ in range(30):
theta, _energy = opt.step_and_cost(ideal_qnode, theta)
energies.append(_energy)
plt.plot(energies)
energies[-1]
theta
```
## Demo 5: VQE on a noisy device
```
from qiskit.test.mock import FakeSantiago
from qiskit.providers.aer import QasmSimulator
from qiskit.providers.aer.noise import NoiseModel
device = QasmSimulator.from_backend(FakeSantiago())
noise_model = NoiseModel.from_backend(device, readout_error=False)
noisy_dev = qml.device(
"qiskit.aer", backend='qasm_simulator', wires=4, shots=10000, noise_model=noise_model
)
noisy_qnode = qml.QNode(prepare_ground_state, noisy_dev)
noisy_qnode(theta)
opt = qml.GradientDescentOptimizer(stepsize=0.5)
theta = np.array(0.0, requires_grad=True)
noisy_energies = []
for it in range(30):
if it % 5 == 0:
print(f"it = {it}")
theta, _energy = opt.step_and_cost(noisy_qnode, theta)
noisy_energies.append(_energy)
plt.scatter(range(30), energies)
plt.scatter(range(30), noisy_energies)
```
## Demo 6: zero-noise extrapolation
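The idea of zero-noise extrapolation: run the circuit at several artificially amplified noise levels (e.g. via gate folding), measure the observable at each level, and extrapolate the results back to the zero-noise limit. A minimal sketch of the extrapolation step, assuming we already collected energy estimates at a few noise scale factors (the numbers below are placeholders, not measured values):
```
scale_factors = np.array([1.0, 2.0, 3.0])          # noise amplification factors
noisy_estimates = np.array([-1.05, -0.98, -0.91])  # placeholder energies, one per factor

# Fit a low-degree polynomial and evaluate it at zero noise
coeffs = np.polyfit(scale_factors, noisy_estimates, deg=1)
zne_estimate = np.polyval(coeffs, 0.0)
print(zne_estimate)
```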
# Overfitting and Regularization
**Overfitting** is another common problem when training a machine learning model. It consists of training models that learn the training data perfectly, thereby losing generality, so that if the model is given new data it has never seen, it will not be able to make a good prediction.
There is a problem opposite to overfitting known as **underfitting**, in which the model cannot make predictions even close to the training data and is far from generalizing.

To detect underfitting and overfitting we can use **loss**, **f1_score** or **accuracy** curves computed on the training and validation data. Analyzing these curves makes it possible to identify these problems.
# Exercise
Use the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset to identify the problems of **underfitting** and **overfitting**, using an ANN with linear layers.
```
#-- Unzip the dataset
# !rm -r mnist
# !unzip mnist.zip
#--- Collect the file paths of every image
from glob import glob
train_files = glob('./mnist/train/*/*.png')
valid_files = glob('./mnist/valid/*/*.png')
test_files = glob('./mnist/test/*/*.png')
train_files[0]
#--- Shuffle the data to avoid ordering bias
import numpy as np
np.random.shuffle(train_files)
np.random.shuffle(valid_files)
np.random.shuffle(test_files)
len(train_files), len(valid_files), len(test_files)
#--- Load the training data into lists
from PIL import Image
N_train = len(train_files)
X_train = []
Y_train = []
for i, train_file in enumerate(train_files):
Y_train.append( int(train_file.split('/')[3]) )
X_train.append(np.array(Image.open(train_file)))
#--- Load the validation data into lists
N_valid = len(valid_files)
X_valid = []
Y_valid = []
for i, valid_file in enumerate(valid_files):
Y_valid.append( int(valid_file.split('/')[3]) )
X_valid.append( np.array(Image.open(valid_file)) )
#--- Load the test data into lists
N_test = len(test_files)
X_test = []
Y_test = []
for i, test_file in enumerate(test_files):
Y_test.append( int(test_file.split('/')[3]) )
X_test.append( np.array(Image.open(test_file)) )
#--- Check the size of each subset
len(X_train), len(X_valid), len(X_test)
#--- Visualize the class distribution in each subset
from PIL import Image
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
plt.subplot(1,3,1)
plt.hist(np.sort(Y_train))
plt.xlabel('class')
plt.ylabel('counts')
plt.title('Train set')
plt.subplot(1,3,2)
plt.hist(np.sort(Y_valid))
plt.xlabel('class')
plt.ylabel('counts')
plt.title('Valid set')
plt.subplot(1,3,3)
plt.hist(np.sort(Y_test))
plt.xlabel('class')
plt.ylabel('counts')
plt.title('Test set')
plt.show()
#-- Visualize a few samples
fig = plt.figure(figsize=(8,8))
for i in range(4):
plt.subplot(2,2,i+1)
plt.imshow(X_test[i*15])
plt.title(Y_test[i*15])
plt.axis(False)
plt.show()
#--- Convert the data lists to torch tensors
import torch
from torch.autograd import Variable
X_train = Variable(torch.from_numpy(np.array(X_train))).float()
Y_train = Variable(torch.from_numpy(np.array(Y_train))).long()
X_valid = Variable(torch.from_numpy(np.array(X_valid))).float()
Y_valid = Variable(torch.from_numpy(np.array(Y_valid))).long()
X_test = Variable(torch.from_numpy(np.array(X_test))).float()
Y_test = Variable(torch.from_numpy(np.array(Y_test))).long()
X_train.data.size()
#--- Define a function that lets us train and validate different ANN models
from sklearn.metrics import f1_score
def train_valid(model, n_epoch, optimizer, criterion):
loss_train = []
f1_train = []
acc_train = []
loss_valid = []
f1_valid = []
acc_valid = []
for epoch in range(n_epoch):
model.train()
Xtr = X_train.view(X_train.size(0), -1)
Y_pred = model(Xtr)
loss = criterion(Y_pred,Y_train)
loss_train.append(loss.item())
Y_pred = torch.argmax(Y_pred, 1)
f1_train.append( f1_score(Y_train,Y_pred, average='macro') )
acc = sum(Y_train == Y_pred)/len(Y_pred)
acc_train.append(acc)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print( 'Epoch [{}/{}], loss: {}. f1:{} acc: {} '.format(epoch+1,n_epoch,loss_train[-1], f1_train[-1], acc_train[-1]) )
model.eval()
Xvl = X_valid.view(X_valid.size(0), -1)
Y_pred = model(Xvl)
loss = criterion(Y_pred,Y_valid)
loss_valid.append(loss.item())
Y_pred = torch.argmax(Y_pred, 1)
f1_valid.append( f1_score(Y_valid, Y_pred, average='macro') )
acc = sum(Y_valid == Y_pred)/len(Y_pred)
acc_valid.append(acc)
fig = plt.figure(figsize=(15,5))
plt.subplot(1,3,1)
plt.plot(range(n_epoch), loss_train, label='train')
plt.plot(range(n_epoch), loss_valid, label='valid')
plt.xlabel('n_epoch')
plt.ylabel('loss')
plt.legend()
plt.grid()
plt.subplot(1,3,2)
plt.plot(range(n_epoch), f1_train, label='train')
plt.plot(range(n_epoch), f1_valid, label='valid')
plt.xlabel('n_epoch')
plt.ylabel('f1_score')
plt.legend()
plt.grid()
plt.subplot(1,3,3)
plt.plot(range(n_epoch), acc_train, label='train')
plt.plot(range(n_epoch), acc_valid, label='valid')
plt.xlabel('n_epoch')
plt.ylabel('accuracy')
plt.legend()
plt.grid()
```
## Underfitting
**Underfitting** can show up in the following situations:
* **Early termination**: the model is trained only up to an early epoch even though the trend indicates that better results could still be obtained.
* **A model that is too simple**: the model is so basic that it cannot extract any useful pattern that would allow it to generalize from the data.
```
#--- Define a simple ANN to illustrate underfitting
input_dim = 28*28
out_dim = 10
model = torch.nn.Sequential(
torch.nn.Linear(input_dim, out_dim)
)
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.CrossEntropyLoss()
train_valid(model,30,optimizer,criterion)
#-- Evaluate the trained model on the test set
model.eval()
Xts = X_test.view(X_test.size(0), -1)
Y_pred = model(Xts)
loss = criterion(Y_pred,Y_test)
Y_pred = torch.argmax(Y_pred, 1)
f1 = f1_score(Y_test, Y_pred, average='macro')
acc = sum(Y_test == Y_pred)/len(Y_pred)
print('loss: {}, f1: {}, acc: {}'.format(loss.item(), f1, acc))
```
## Overfitting
**Overfitting** is the opposite of underfitting and can show up in the following situation:
* **A model that is too complex**: the model is so complex that it learned the training data perfectly, losing generality. When the model sees new data, different from the training data, its predictions will be wrong.
```
input_dim = 28*28
out_dim = 10
hidden = 60
model = torch.nn.Sequential(
torch.nn.Linear(input_dim, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, out_dim)
)
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.CrossEntropyLoss()
train_valid(model,100,optimizer,criterion)
#-- Evaluate the trained model on the test set
model.eval()
Xts = X_test.view(X_test.size(0), -1)
Y_pred = model(Xts)
loss = criterion(Y_pred,Y_test)
Y_pred = torch.argmax(Y_pred, 1)
f1 = f1_score(Y_test, Y_pred, average='macro')
acc = sum(Y_test == Y_pred)/len(Y_pred)
print('loss: {}, f1: {}, acc: {}'.format(loss.item(), f1, acc))
```
## Regularization
A mechanism that helps avoid overfitting is known as **regularization**. The amount of regularization affects the validation performance of the model: too little regularization will not solve the overfitting problem, while too much will make the model far less effective. Regularization acts as a constraint on the set of functions the model can learn.
<br>
According to [Ian Goodfellow](https://en.wikipedia.org/wiki/Ian_Goodfellow), "*Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.*"
<br>
**Weight-decay regularization**
Weight decay is the most common regularization technique (and is built into PyTorch). In PyTorch, weight decay is provided as an optimizer parameter, *weight_decay*. [This](https://pytorch.org/docs/stable/optim.html) link lists the other parameters that the optimizers accept.
Weight decay is also known as:
* L2
* Ridge
For weight decay, we add a penalty term to the weight update:
$w \leftarrow w - \eta \nabla_{w} L - \alpha \eta w$
This new term in the update pulls the parameters $w$ slightly towards zero, adding some **decay** to the weights with each update.
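As a rough sketch (not part of the original notebook), the same effect can be reproduced by adding an explicit L2 penalty to the loss: with plain SGD, a penalty of $\frac{\alpha}{2}\lVert w \rVert^{2}$ yields exactly the update above, while Adam applies `weight_decay` somewhat differently, so there the equivalence is only approximate.
```
# Sketch only: explicit L2 penalty, equivalent (under plain SGD) to weight_decay=alpha.
import torch

alpha = 0.01                                  # illustrative regularization strength
model = torch.nn.Linear(28*28, 10)            # any small model serves for the example
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 28*28)                    # dummy batch
y = torch.randint(0, 10, (32,))

loss = criterion(model(x), y)
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
(loss + 0.5 * alpha * l2_penalty).backward()  # adds alpha * w to each weight gradient
optimizer.step()
```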
```
input_dim = 28*28
out_dim = 10
hidden = 60
model = torch.nn.Sequential(
torch.nn.Linear(input_dim, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, hidden),
torch.nn.ReLU(),
torch.nn.Linear(hidden, out_dim)
)
optimizer = torch.optim.Adam(model.parameters(), weight_decay=0.01)
criterion = torch.nn.CrossEntropyLoss()
train_valid(model,100,optimizer,criterion)
#-- Evaluate the trained model on the test set
model.eval()
Xts = X_test.view(X_test.size(0), -1)
Y_pred = model(Xts)
loss = criterion(Y_pred,Y_test)
Y_pred = torch.argmax(Y_pred, 1)
f1 = f1_score(Y_test, Y_pred, average='macro')
acc = sum(Y_test == Y_pred)/len(Y_pred)
print('loss: {}, f1: {}, acc: {}'.format(loss.item(), f1, acc))
```
# Probabilistic Latent Semantic Analysis
Probabilistic latent semantic analysis (PLSA), also called probabilistic latent semantic indexing (PLSI), is an unsupervised learning method that uses a probabilistic generative model to perform topic analysis on a collection of texts.
The key feature of the model is that topics are represented by a latent variable: the model describes a process in which texts generate topics and topics generate words, which yields the observed word-text co-occurrence data. It assumes that each text is determined by a topic distribution and each topic is determined by a word distribution.
### **18.1.2 Generative Model**
Assume a word set $W = $ {$w_{1}, w_{2}, ..., w_{M}$}, where M is the number of words; a text (index) set $D = $ {$d_{1}, d_{2}, ..., d_{N}$}, where N is the number of texts; and a topic set $Z = $ {$z_{1}, z_{2}, ..., z_{K}$}, where $K$ is the number of topics, fixed in advance. The random variable $w$ takes values in the word set, the random variable $d$ takes values in the text set, and the random variable $z$ takes values in the topic set. The probability distribution $P(d)$ and the conditional distributions $P(z|d)$ and $P(w|z)$ are all multinomial, where $P(d)$ is the probability of generating text $d$, $P(z|d)$ is the probability that text $d$ generates topic $z$, and $P(w|z)$ is the probability that topic $z$ generates word $w$.
Each text $d$ has its own topic distribution $P(z|d)$, and each topic $z$ has its own word distribution $P(w|z)$; in other words, **the content of a text is determined by its topics, and the content of a topic is determined by its words**.
The generative model produces the text-word co-occurrence data through the following steps:
(1) According to the probability distribution $P(d)$, randomly select a text $d$ from the text (index) set; $N$ texts are generated in total. For each text, perform the following operations;
(2) Given text $d$, according to the conditional distribution $P(z|d)$, randomly select a topic $z$ from the topic set; $L$ topics are generated per text, where $L$ is the text length;
(3) Given topic $z$, according to the conditional distribution $P(w|z)$, randomly select a word $w$ from the word set.
Note that, for convenience of exposition, all texts are assumed to have the same length; this assumption is not needed in practice.
In the generative model, the word variable $w$ and the text variable $d$ are observed, while the topic variable $z$ is latent. That is, the model generates the set of word-topic-text triples ($w, z, d$), but only the set of word-text pairs ($w, d$) is observed. The observed data are represented as a word-text matrix $T$, whose rows correspond to words and whose columns correspond to texts; each element is the number of occurrences of the word-text pair ($w, d$).
From the data-generating process it follows that the probability of generating the text-word co-occurrence data $T$ is the product of the generation probabilities of all word-text pairs ($w,d$):
$P(T) = \prod_{w,d}P(w,d)^{n(w,d)}$
Here $n(w,d)$ denotes the number of occurrences of ($w,d$); the total number of word-text pair occurrences is $N*L$. The generation probability of each word-text pair ($w,d$) is given by the following formula:
$P(w,d) = P(d)P(w|d)$
$= P(d)\sum_{z}P(w,z|d)$
$=P(d)\sum_{z}P(z|d)P(w|z)$
### **18.1.3 Co-occurrence Model**
$P(w,d) = \sum_{z\in Z}P(z)P(w|z)P(d|z)$
Although the generative model and the co-occurrence model are equivalent as probability formulas, they have different properties. The generative model describes the process by which the text-word co-occurrence data are generated, while the co-occurrence model describes the pattern that the text-word co-occurrence data exhibit.
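A short check of this equivalence, using $P(d)P(z|d) = P(z)P(d|z)$ (a step the text leaves implicit):
$P(w,d) = P(d)\sum_{z}P(z|d)P(w|z) = \sum_{z}P(d)P(z|d)P(w|z) = \sum_{z}P(z)P(d|z)P(w|z)$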
If the word-text co-occurrence probability $P(w,d)$ is defined directly, the number of model parameters is $O(M*N)$, where $M$ is the number of words and $N$ is the number of texts. The generative and co-occurrence models of probabilistic latent semantic analysis have $O(M*K + N*K)$ parameters, where $K$ is the number of topics. In practice $K \ll M$ (for example, with $M = 10000$, $N = 1000$ and $K = 20$, the direct parameterization needs about $10^{7}$ parameters while PLSA needs about $2.2 \times 10^{5}$), so **probabilistic latent semantic analysis represents the data more compactly through topics and reduces the possibility of overfitting during learning**.
### Algorithm 18.1 (EM algorithm for estimating the parameters of the probabilistic latent semantic model)
Input: word set $W = ${$w_{1}, w_{2},..., w_{M}$}, text set $D=${$d_{1}, d_{2},..., d_{N}$}, topic set $Z=${$z_{1}, z_{2},..., z_{K}$}, and co-occurrence data $\left \{ n(w_{i}, d_{j}) \right \}, i = 1,2,..., M, j = 1,2,...,N;$
Output: $P(w_{i}|z_{k})$ and $P(z_{k}|d_{j})$.
1. Set initial values for the parameters $P(w_{i}|z_{k})$ and $P(z_{k}|d_{j})$.
2. Iterate the following E-step and M-step until convergence.
E-step:
$P(z_{k}|w_{i},d_{j})=\frac{P(w_{i}|z_{k})P(z_{k}|d_{j})}{\sum_{k=1}^{K}P(w_{i}|z_{k})P(z_{k}|d_{j})}$
M-step:
$P(w_{i}|z_{k})=\frac{\sum_{j=1}^{N}n(w_{i},d_{j})P(z_{k}|w_{i},d_{j})}{\sum_{m=1}^{M}\sum_{j=1}^{N}n(w_{m},d_{j})P(z_{k}|w_{m},d_{j})}$
$P(z_{k}|d_{j}) = \frac{\sum_{i=1}^{M}n(w_{i},d_{j})P(z_{k}|w_{i},d_{j})}{n(d_{j})}$
#### Exercise 18.3
```
import numpy as np
X = [[0,0,1,1,0,0,0,0,0],
[0,0,0,0,0,1,0,0,1],
[0,1,0,0,0,0,0,1,0],
[0,0,0,0,0,0,1,0,1],
[1,0,0,0,0,1,0,0,0],
[1,1,1,1,1,1,1,1,1],
[1,0,1,0,0,0,0,0,0],
[0,0,0,0,0,0,1,0,1],
[0,0,0,0,0,2,0,0,1],
[1,0,1,0,0,0,0,1,0],
[0,0,0,1,1,0,0,0,0]]
X = np.asarray(X);X
X.shape
X = X.T;X
class PLSA:
def __init__(self, K, max_iter):
self.K = K
self.max_iter = max_iter
def fit(self, X):
n_d, n_w = X.shape
# P(z|w,d)
p_z_dw = np.zeros((n_d, n_w, self.K))
# P(z|d)
p_z_d = np.random.rand(n_d, self.K)
# P(w|z)
p_w_z = np.random.rand(self.K, n_w)
for i_iter in range(self.max_iter):
# E step
for di in range(n_d):
for wi in range(n_w):
sum_zk = np.zeros((self.K))
for zi in range(self.K):
sum_zk[zi] = p_z_d[di, zi] * p_w_z[zi, wi]
sum1 = np.sum(sum_zk)
if sum1 == 0:
sum1 = 1
for zi in range(self.K):
p_z_dw[di, wi, zi] = sum_zk[zi] / sum1
# M step
# update P(z|d)
for di in range(n_d):
for zi in range(self.K):
sum1 = 0.
sum2 = 0.
for wi in range(n_w):
sum1 = sum1 + X[di, wi] * p_z_dw[di, wi, zi]
sum2 = sum2 + X[di, wi]
if sum2 == 0:
sum2 = 1
p_z_d[di, zi] = sum1 / sum2
# update P(w|z)
for zi in range(self.K):
sum2 = np.zeros((n_w))
for wi in range(n_w):
for di in range(n_d):
sum2[wi] = sum2[wi] + X[di, wi] * p_z_dw[di, wi, zi]
sum1 = np.sum(sum2)
if sum1 == 0:
sum1 = 1
for wi in range(n_w):
p_w_z[zi, wi] = sum2[wi] / sum1
return p_w_z, p_z_d
# https://github.com/lipiji/PG_PLSA/blob/master/plsa.py
model = PLSA(2, 100)
p_w_z, p_z_d = model.fit(X)
p_w_z
p_z_d
```
## Training a differentially private LSTM model for name classification
In this tutorial we will build a differentially-private LSTM model to classify names to their source languages, which is the same task as in the tutorial **NLP From Scratch** (https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html). Since the objective of this tutorial is to demonstrate the effective use of an LSTM with privacy guarantees, we will be utilizing it in place of the bare-bones RNN model defined in the original tutorial. Specifically, we use the `DPLSTM` module from `opacus.layers.dp_lstm` to facilitate calculation of the per-example gradients, which are utilized in the addition of noise during application of differential privacy. `DPLSTM` has the same API and functionality as the `nn.LSTM`, with some restrictions (ex. we currently support single layers, the full list is given below).
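For orientation, here is a minimal sketch (not from the original tutorial) of the drop-in replacement; the constructor arguments mirror those used for the classifier later in this notebook:
```
# DPLSTM takes the same constructor arguments as nn.LSTM.
from opacus.layers import DPLSTM

lstm = DPLSTM(64, 128, num_layers=1, batch_first=True)  # input size 64, hidden size 128
```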
## Dataset
First, let us download the dataset of names and their associated language labels as given in https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html. We train our differentially-private LSTM on the same dataset as in that tutorial.
```
import os
import requests
NAMES_DATASET_URL = "https://download.pytorch.org/tutorial/data.zip"
DATA_DIR = "names"
import zipfile
import urllib
def download_and_extract(dataset_url, data_dir):
print("Downloading and extracting ...")
filename = "data.zip"
urllib.request.urlretrieve(dataset_url, filename)
with zipfile.ZipFile(filename) as zip_ref:
zip_ref.extractall(data_dir)
os.remove(filename)
print("Completed!")
download_and_extract(NAMES_DATASET_URL, DATA_DIR)
names_folder = os.path.join(DATA_DIR, 'data', 'names')
all_filenames = []
for language_file in os.listdir(names_folder):
all_filenames.append(os.path.join(names_folder, language_file))
print(os.listdir(names_folder))
import torch
import torch.nn as nn
class CharByteEncoder(nn.Module):
"""
This encoder takes a UTF-8 string and encodes its bytes into a Tensor. It can also
perform the opposite operation to check a result.
Examples:
>>> encoder = CharByteEncoder()
>>> t = encoder('Ślusàrski') # returns tensor([256, 197, 154, 108, 117, 115, 195, 160, 114, 115, 107, 105, 257])
>>> encoder.decode(t) # returns "<s>Ślusàrski</s>"
"""
def __init__(self):
super().__init__()
self.start_token = "<s>"
self.end_token = "</s>"
self.pad_token = "<pad>"
self.start_idx = 256
self.end_idx = 257
self.pad_idx = 258
def forward(self, s: str, pad_to=0) -> torch.LongTensor:
"""
Encodes a string. It will append a start token <s> (id=self.start_idx) and an end token </s>
(id=self.end_idx).
Args:
s: The string to encode.
pad_to: If not zero, pad by appending self.pad_idx until string is of length `pad_to`.
Defaults to 0.
Returns:
The encoded LongTensor of indices.
"""
encoded = s.encode()
n_pad = pad_to - len(encoded) if pad_to > len(encoded) else 0
return torch.LongTensor(
[self.start_idx]
+ [c for c in encoded] # noqa
+ [self.end_idx]
+ [self.pad_idx for _ in range(n_pad)]
)
def decode(self, char_ids_tensor: torch.LongTensor) -> str:
"""
The inverse of `forward`. Keeps the start, end and pad indices.
"""
char_ids = char_ids_tensor.cpu().detach().tolist()
out = []
buf = []
for c in char_ids:
if c < 256:
buf.append(c)
else:
if buf:
out.append(bytes(buf).decode())
buf = []
if c == self.start_idx:
out.append(self.start_token)
elif c == self.end_idx:
out.append(self.end_token)
elif c == self.pad_idx:
out.append(self.pad_token)
if buf: # in case some are left
out.append(bytes(buf).decode())
return "".join(out)
def __len__(self):
"""
The length of our encoder space. This is fixed to 256 (one byte) + 3 special chars
(start, end, pad).
Returns:
259
"""
return 259
```
## Training / Validation Set Preparation
```
from torch.nn.utils.rnn import pad_sequence
def padded_collate(batch, padding_idx=0):
x = pad_sequence(
[elem[0] for elem in batch], batch_first=True, padding_value=padding_idx
)
y = torch.stack([elem[1] for elem in batch]).long()
return x, y
from collections import Counter
from torch.utils.data import Dataset
from pathlib import Path
class NamesDataset(Dataset):
def __init__(self, root):
self.root = Path(root)
self.labels = list({langfile.stem for langfile in self.root.iterdir()})
self.labels_dict = {label: i for i, label in enumerate(self.labels)}
self.encoder = CharByteEncoder()
self.samples = self.construct_samples()
def __getitem__(self, i):
return self.samples[i]
def __len__(self):
return len(self.samples)
def construct_samples(self):
samples = []
for langfile in self.root.iterdir():
label_name = langfile.stem
label_id = self.labels_dict[label_name]
with open(langfile, "r") as fin:
for row in fin:
samples.append(
(self.encoder(row.strip()), torch.tensor(label_id).long())
)
return samples
def label_count(self):
cnt = Counter()
for _x, y in self.samples:
label = self.labels[int(y)]
cnt[label] += 1
return cnt
VOCAB_SIZE = 256 + 3 # 256 alternatives in one byte, plus 3 special characters.
```
We split the dataset into training and validation sets using an 80-20 split.
```
secure_rng = False
train_split = 0.8
test_every = 5
batch_size = 800
ds = NamesDataset(names_folder)
train_len = int(train_split * len(ds))
test_len = len(ds) - train_len
print(f"{train_len} samples for training, {test_len} for testing")
if secure_rng:
try:
import torchcsprng as prng
except ImportError as e:
msg = (
"To use secure RNG, you must install the torchcsprng package! "
"Check out the instructions here: https://github.com/pytorch/csprng#installation"
)
raise ImportError(msg) from e
generator = prng.create_random_device_generator("/dev/urandom")
else:
generator = None
train_ds, test_ds = torch.utils.data.random_split(
ds, [train_len, test_len], generator=generator
)
from torch.utils.data import DataLoader
from opacus.utils.uniform_sampler import UniformWithReplacementSampler
sample_rate = batch_size / len(train_ds)
train_loader = DataLoader(
train_ds,
num_workers=8,
pin_memory=True,
generator=generator,
batch_sampler=UniformWithReplacementSampler(
num_samples=len(train_ds),
sample_rate=sample_rate,
generator=generator,
),
collate_fn=padded_collate,
)
test_loader = DataLoader(
test_ds,
batch_size=2 * batch_size,
shuffle=False,
num_workers=8,
pin_memory=True,
collate_fn=padded_collate,
)
```
After splitting the dataset into a training and a validation set, we still have to convert the data into a numeric form suitable for training the LSTM model. This is handled by the `CharByteEncoder` defined above, which maps each name to a sequence of byte indices wrapped in start and end tokens, and by `padded_collate`, which pads every name in a batch with the pad index so that all sequences in a batch have the same length and can be stacked into a single tensor.
## Training/Evaluation Cycle
The training and the evaluation functions `train()` and `test()` are defined below. During the training loop, the per-example gradients are computed and the parameters are updated subsequent to gradient clipping (to bound their sensitivity) and addition of noise.
```
from statistics import mean
def train(model, criterion, optimizer, train_loader, epoch, device="cuda:0"):
accs = []
losses = []
for x, y in tqdm(train_loader):
x = x.to(device)
y = y.to(device)
logits = model(x)
loss = criterion(logits, y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
preds = logits.argmax(-1)
n_correct = float(preds.eq(y).sum())
batch_accuracy = n_correct / len(y)
accs.append(batch_accuracy)
losses.append(float(loss))
printstr = (
f"\t Epoch {epoch}. Accuracy: {mean(accs):.6f} | Loss: {mean(losses):.6f}"
)
try:
privacy_engine = optimizer.privacy_engine
epsilon, best_alpha = privacy_engine.get_privacy_spent()
printstr += f" | (ε = {epsilon:.2f}, δ = {privacy_engine.target_delta}) for α = {best_alpha}"
except AttributeError:
pass
print(printstr)
return
def test(model, test_loader, privacy_engine, device="cuda:0"):
accs = []
with torch.no_grad():
for x, y in tqdm(test_loader):
x = x.to(device)
y = y.to(device)
preds = model(x).argmax(-1)
n_correct = float(preds.eq(y).sum())
batch_accuracy = n_correct / len(y)
accs.append(batch_accuracy)
printstr = "\n----------------------------\n" f"Test Accuracy: {mean(accs):.6f}"
if privacy_engine:
epsilon, best_alpha = privacy_engine.get_privacy_spent()
printstr += f" (ε = {epsilon:.2f}, δ = {privacy_engine.target_delta}) for α = {best_alpha}"
print(printstr + "\n----------------------------\n")
return
```
## Hyper-parameters
There are two sets of hyper-parameters associated with this model. The first are hyper-parameters which we would expect in any machine learning training, such as the learning rate and batch size. The second set are related to the privacy engine, where for example we define the amount of noise added to the gradients (`noise_multiplier`), and the maximum L2 norm to which the per-sample gradients are clipped (`max_grad_norm`).
```
# Training hyper-parameters
epochs = 50
learning_rate = 2.0
# Privacy engine hyper-parameters
max_per_sample_grad_norm = 1.5
delta = 8e-5
epsilon = 12.0
```
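The cell above lets Opacus derive the noise level from the (ε, δ) target. If instead you want to control the noise directly, as the paragraph above describes, the corresponding knobs are plain values like the following (illustrative assumption, not used in this tutorial):
```
# Hypothetical direct parameterization (not used below; Opacus derives the noise
# from target_epsilon/target_delta instead):
noise_multiplier = 1.0            # std of the noise added to the clipped, averaged gradients
max_per_sample_grad_norm = 1.5    # L2 norm at which per-sample gradients are clipped
```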
## Model
We define the name classification model in the cell below. Note that it is a simple char-LSTM classifier, where the input characters are passed through an `nn.Embedding` layer, and are subsequently input to the DPLSTM.
```
import torch
from torch import nn
from opacus.layers import DPLSTM
class CharNNClassifier(nn.Module):
def __init__(
self,
embedding_size,
hidden_size,
output_size,
num_lstm_layers=1,
bidirectional=False,
vocab_size=VOCAB_SIZE,
):
super().__init__()
self.embedding_size = embedding_size
self.hidden_size = hidden_size
self.output_size = output_size
self.vocab_size = vocab_size
self.embedding = nn.Embedding(vocab_size, embedding_size)
self.lstm = DPLSTM(
embedding_size,
hidden_size,
num_layers=num_lstm_layers,
bidirectional=bidirectional,
batch_first=True,
)
self.out_layer = nn.Linear(hidden_size, output_size)
def forward(self, x, hidden=None):
x = self.embedding(x) # -> [B, T, D]
x, _ = self.lstm(x, hidden) # -> [B, T, H]
x = x[:, -1, :] # -> [B, H]
x = self.out_layer(x) # -> [B, C]
return x
```
We now proceed to instantiate the objects (privacy engine, model and optimizer) for our differentially-private LSTM training. However, the `nn.LSTM` is replaced with a `DPLSTM` module which enables us to calculate per-example gradients.
```
# Set the device to run on a GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define classifier parameters
embedding_size = 64
hidden_size = 128 # Number of neurons in hidden layer after LSTM
n_lstm_layers = 1
bidirectional_lstm = False
model = CharNNClassifier(
embedding_size,
hidden_size,
len(ds.labels),
n_lstm_layers,
bidirectional_lstm,
).to(device)
```
## Defining the privacy engine, optimizer and loss criterion for the problem
```
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
from opacus import PrivacyEngine
privacy_engine = PrivacyEngine(
model,
sample_rate=sample_rate,
max_grad_norm=max_per_sample_grad_norm,
target_delta=delta,
target_epsilon=epsilon,
epochs=epochs,
secure_rng=secure_rng,
)
privacy_engine.attach(optimizer)
```
## Training the name classifier with privacy
Finally, we can start training! We will be training for 50 epochs (where each epoch corresponds to a pass over the whole dataset). We will report the privacy epsilon every `test_every` epochs. We will also benchmark this differentially-private model against a model without privacy and obtain almost identical performance. Further, the private model trained with Opacus incurs only minimal overhead in training time, with the differentially-private classifier only slightly slower (by a couple of minutes) than the non-private model.
```
from tqdm import tqdm
print("Train stats: \n")
for epoch in tqdm(range(epochs)):
train(model, criterion, optimizer, train_loader, epoch, device=device)
if test_every:
if epoch % test_every == 0:
test(model, test_loader, privacy_engine, device=device)
test(model, test_loader, privacy_engine, device=device)
```
The differentially-private name classification model obtains a test accuracy of 0.73 with an epsilon of just under 12. This shows that we can achieve a good accuracy on this task, with minimal loss of privacy.
## Training the name classifier without privacy
We also run a comparison with a non-private model to see if the performance obtained with privacy is comparable to it. To do this, we keep the parameters such as learning rate and batch size the same, and only define a different instance of the model along with a separate optimizer.
```
model_nodp = CharNNClassifier(
embedding_size,
hidden_size,
len(ds.labels),
n_lstm_layers,
bidirectional_lstm,
).to(device)
optimizer_nodp = torch.optim.SGD(model_nodp.parameters(), lr=0.5)
for epoch in tqdm(range(epochs)):
train(model_nodp, criterion, optimizer_nodp, train_loader, epoch, device=device)
if test_every:
if epoch % test_every == 0:
test(model_nodp, test_loader, None, device=device)
test(model_nodp, test_loader, None, device=device)
```
We run the training loop again, this time without privacy and for the same number of iterations.
The non-private classifier obtains a test accuracy of around 0.75 with the same parameters and number of epochs. We are effectively trading off performance on the name classification task for a lower loss of privacy.
# MNIST distributed training and batch transform
The SageMaker Python SDK helps you deploy your models for training and hosting in optimized, production-ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible and compatible with TensorFlow and MXNet. This tutorial focuses on how to create a convolutional neural network model to train the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using TensorFlow distributed training.
## Set up the environment
First, we'll just set up a few things needed for this example
```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.session import Session
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_session.region_name
role = get_execution_role()
```
### Download the MNIST dataset
We'll now need to download the MNIST dataset, and upload it to a location in S3 after preparing for training.
```
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf
data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```
### Upload the data
We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value, `inputs`, identifies the location -- we will use this later when we start the training job.
```
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-mnist')
```
# Construct a script for distributed training
Here is the full code for the network model:
```
!cat 'mnist.py'
```
## Create a training job
```
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
framework_version='1.11.0',
training_steps=1000,
evaluation_steps=100,
train_instance_count=2,
train_instance_type='ml.c4.xlarge')
mnist_estimator.fit(inputs)
```
The `fit()` method will create a training job on two ml.c4.xlarge instances. The logs above will show the instances doing training, evaluation, and incrementing the number of training steps.
At the end of training, the job will generate a saved model for TensorFlow Serving.
## SageMaker's transformer class
After training, we use our TensorFlow estimator object to create a `Transformer` by invoking the `transformer()` method. This method takes arguments for configuring our options with the batch transform job; these do not need to be the same values as the one we used for the training job. The method also creates a SageMaker Model to be used for the batch transform jobs.
The `Transformer` class is responsible for running batch transform jobs, which will deploy the trained model to an endpoint and send requests for performing inference.
```
transformer = mnist_estimator.transformer(instance_count=1, instance_type='ml.m4.xlarge')
```
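If you want the predictions to land in a specific S3 prefix rather than the auto-generated one mentioned later, the same call also accepts an `output_path` argument; a sketch with a placeholder bucket:
```
# Hypothetical variant: pin the batch transform output to a known S3 location.
transformer = mnist_estimator.transformer(instance_count=1,
                                          instance_type='ml.m4.xlarge',
                                          output_path='s3://<your-bucket>/mnist-batch-output')
```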
# Perform inference
Now that we've trained a model, we're going to use it to perform inference with a SageMaker batch transform job. The request handling behavior of the Endpoint deployed during the transform job is determined by the `mnist.py` script we looked at earlier.
## Run a batch transform job
For our batch transform job, we're going to use input data that contains 1000 MNIST images, located in the public SageMaker sample data S3 bucket. To create the batch transform job, we simply call `transform()` on our transformer with information about the input data.
```
input_bucket_name = 'sagemaker-sample-data-{}'.format(region)
input_file_path = 'batch-transform/mnist-1000-samples'
transformer.transform('s3://{}/{}'.format(input_bucket_name, input_file_path), content_type='text/csv')
```
Now we wait for the batch transform job to complete. We have a convenience method, `wait()`, that will block until the batch transform job has completed. We can call that here to see if the batch transform job is still running; the cell will finish running when the batch transform job has completed.
```
transformer.wait()
```
## Download the results
The batch transform job uploads its predictions to S3. Since we did not specify `output_path` when creating the Transformer, one was generated based on the batch transform job name:
```
print(transformer.output_path)
```
Now let's download the first ten results from S3:
```
import json
from six.moves.urllib import parse
import boto3
parsed_url = parse.urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource('s3')
predictions = []
for i in range(10):
file_key = '{}/data-{}.csv.out'.format(prefix, i)
output_obj = s3.Object(bucket_name, file_key)
output = output_obj.get()["Body"].read().decode('utf-8')
predictions.extend(json.loads(output)['outputs']['classes']['int64Val'])
```
For demonstration purposes, we're also going to download the corresponding original input data so that we can see how the model did with its predictions.
```
import os
import matplotlib.pyplot as plt
from numpy import genfromtxt
plt.rcParams['figure.figsize'] = (2,10)
def show_digit(img, caption='', subplot=None):
if subplot == None:
_,(subplot) = plt.subplots(1,1)
imgr = img.reshape((28,28))
subplot.axis('off')
subplot.imshow(imgr, cmap='gray')
plt.title(caption)
tmp_dir = '/tmp/data'
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
for i in range(10):
input_file_name = 'data-{}.csv'.format(i)
input_file_key = '{}/{}'.format(input_file_path, input_file_name)
s3.Bucket(input_bucket_name).download_file(input_file_key, os.path.join(tmp_dir, input_file_name))
input_data = genfromtxt(os.path.join(tmp_dir, input_file_name), delimiter=',')
show_digit(input_data)
```
Here, we can see the original labels are:
```
7, 2, 1, 0, 4, 1, 4, 9, 5, 9
```
Now let's print out the predictions to compare:
```
print(', '.join(predictions))
```
```
%matplotlib inline
```
# Generating an input file
This example shows how to generate an input file in HDF5-format, which can
then be processed by the `py-fmas` library code.
This is useful when the project-specific code is separate from the `py-fmas`
library code.
.. codeauthor:: Oliver Melchert <[email protected]>
We start by importing the required `py-fmas` functionality. Since the
file-input for `py-fmas` is required to be provided in HDF5-format, we need
some python package that offers the possibility to read and write this
format. Here we opted for the python module h5py which is listed as one of
the dependencies of the `py-fmas` package.
```
import h5py
import numpy as np
import numpy.fft as nfft
```
We then define the desired propagation constant
```
def beta_fun_detuning(w):
r'''Function defining propagation constant
Implements group-velocity dispersion with expansion coefficients
listed in Tab. I of Ref. [1]. Expansion coefficients are valid for
:math:`lambda = 835\,\mathrm{nm}`, i.e. for :math:`\omega_0 \approx
2.56\,\mathrm{rad/fs}`.
References:
[1] J. M. Dudley, G. Genty, S. Coen,
Supercontinuum generation in photonic crystal fiber,
Rev. Mod. Phys. 78 (2006) 1135,
http://dx.doi.org/10.1103/RevModPhys.78.1135
Note:
A corresponding propagation constant is implemented as function
`define_beta_fun_PCF_Ranka2000` in `py-fmas` module
`propatation_constant`.
Args:
w (:obj:`numpy.ndarray`): Angular frequency detuning.
Returns:
:obj:`numpy.ndarray` Propagation constant as function of
frequency detuning.
'''
# ... EXPANSION COEFFICIENTS DISPERSION
b2 = -1.1830e-2 # (fs^2/micron)
b3 = 8.1038e-2 # (fs^3/micron)
b4 = -0.95205e-1 # (fs^4/micron)
b5 = 2.0737e-1 # (fs^5/micron)
b6 = -5.3943e-1 # (fs^6/micron)
b7 = 1.3486 # (fs^7/micron)
b8 = -2.5495 # (fs^8/micron)
b9 = 3.0524 # (fs^9/micron)
b10 = -1.7140 # (fs^10/micron)
# ... PROPAGATION CONSTANT (DEPENDING ON DETUNING)
beta_fun_detuning = np.poly1d([b10/3628800, b9/362880, b8/40320,
b7/5040, b6/720, b5/120, b4/24, b3/6, b2/2, 0., 0.])
return beta_fun_detuning(w)
```
Next, we define all parameters needed to specify a simulation run
```
# -- DEFINE SIMULATION PARAMETERS
# ... COMPUTATIONAL DOMAIN
t_max = 3500. # (fs)
t_num = 2**14 # (-)
z_max = 0.1*1e6 # (micron)
z_num = 4000 # (-)
z_skip = 20 # (-)
t = np.linspace(-t_max, t_max, t_num, endpoint=False)
w = nfft.fftfreq(t.size, d=t[1]-t[0])*2*np.pi
# ... MODEL SPECIFIC PARAMETERS
# ... PROPAGATION CONSTANT
c = 0.29979 # (fs/micron)
lam0 = 0.835 # (micron)
w0 = 2*np.pi*c/lam0 # (rad/fs)
beta_w = beta_fun_detuning(w-w0)
gam0 = 0.11e-6 # (1/W/micron)
n2 = gam0*c/w0 # (micron^2/W)
# ... PARAMETERS FOR RAMAN RESPONSE
fR = 0.18 # (-)
tau1= 12.2 # (fs)
tau2= 32.0 # (fs)
# ... INITIAL CONDITION
t0 = 28.4 # (fs)
P0 = 1e4 # (W)
E_0t_fun = lambda t: np.real(np.sqrt(P0)/np.cosh(t/t0)*np.exp(-1j*w0*t))
E_0t = E_0t_fun(t)
```
The subsequent code will store the simulation parameters defined above to the
file `input_file.h5` in the current working directory.
```
def save_data_hdf5(file_path, data_dict):
with h5py.File(file_path, 'w') as f:
for key, val in data_dict.items():
f.create_dataset(key, data=val)
data_dict = {
't_max': t_max,
't_num': t_num,
'z_min': 0.0,
'z_max': z_max,
'z_num': z_num,
'z_skip': z_skip,
'E_0t': E_0t,
'beta_w': beta_w,
'n2': n2,
'fR': fR,
'tau1': tau1,
'tau2': tau2,
'out_file_path': 'out_file.h5'
}
save_data_hdf5('input_file.h5', data_dict)
```
An example showing how to use `py-fmas` as a black-box simulation tool that
performs a simulation run for the propagation scenario stored under the file
`input_file.h5` is available under the link below:
`sphx_glr_auto_tutorials_basics_g_app.py`
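As a quick sanity check (not part of the original example), the generated file can be re-opened with h5py to confirm that all datasets were written:
```
# Sketch only: list the datasets stored in the generated input file.
import h5py

with h5py.File('input_file.h5', 'r') as f:
    for key in f.keys():
        item = f[key]
        print(key, item.shape if item.shape else item[()])
```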
# Using `bw2waterbalancer`
Notebook showing typical usage of `bw2waterbalancer`
## Generating the samples
`bw2waterbalancer` works with Brightway2. You only need to set as current a project into which the database whose water exchanges you want to balance has been imported.
```
import brightway2 as bw
import numpy as np
bw.projects.set_current('ei36cutoff')
```
The only Class you need is the `DatabaseWaterBalancer`:
```
from bw2waterbalancer import DatabaseWaterBalancer
```
Instantiating the DatabaseWaterBalancer will automatically identify activities that are associated with water exchanges.
```
dwb = DatabaseWaterBalancer(
ecoinvent_version="3.6", # used to identify activities with water production exchanges
database_name="ei36_cutoff", #name the LCI db in the brightway2 project
)
```
Generating presamples for the whole database is a lengthy process. Thankfully, it only ever needs to be done once per database:
```
dwb.add_samples_for_all_acts(iterations=1000)
```
The samples and associated indices are stored as attributes:
```
dwb.matrix_samples
dwb.matrix_samples.shape
dwb.matrix_indices[0:10] # First ten indices
len(dwb.matrix_indices)
```
These can directly be used to generate [`presamples`](https://presamples.readthedocs.io/):
```
presamples_id, presamples_fp = dwb.create_presamples(
name=None, #Could have specified a string as name, not passing anything will use automatically generated random name
dirpath=None, #Could have specified a directory path to save presamples somewhere specific
id_=None, #Could have specified a string as id, not passing anything will use automatically generated random id
seed='sequential', #or None, or int.
)
```
## Using the samples
The samples are formatted for use in brighway2 via the presamples package.
The following function calculates:
- Deterministic results, using `bw.LCA`
- Stochastic results, using `bw.MonteCarloLCA`
- Stochastic results using presamples, using `bw.MonteCarloLCA` and passing `presamples=[presamples_fp]`
The ratio of stochastic results to deterministic results are then plotted for Monte Carlo results with and without presamples.
Ratios for Monte Carlo with presamples are on the order of 1.
Ratios for Monte Carlo without presamples are much greater: for the randomly selected activities, up to two orders of magnitude.
```
def check_presamples_act(act_key, ps_fp, lcia_method, iterations=1000):
"""Plot histrograms of Monte Carlo samples/det result for case w/ and w/o presamples"""
lca = bw.LCA({act_key:1}, method=m)
lca.lci()
lca.lcia()
mc_arr_wo = np.empty(shape=iterations)
mc = bw.MonteCarloLCA({act_key:1}, method=m)
for i in range(iterations):
mc_arr_wo[i] = next(mc)/lca.score
mc_arr_w = np.empty(shape=iterations)
mc_w = bw.MonteCarloLCA({act_key:1}, method=m, presamples=[ps_fp])
for i in range(iterations):
mc_arr_w[i] = next(mc_w)/lca.score
plt.hist(mc_arr_wo, histtype="step", color='orange', label="without presamples")
plt.hist(mc_arr_w, histtype="step", color='green', label="with presamples")
plt.legend()
```
Let's run this on a couple of random ecoinvent products with the ImpactWorld+ water scarcity LCIA method:
```
m=('IMPACTWorld+ (Default_Recommended_Midpoint 1.23)', 'Midpoint', 'Water scarcity')
import matplotlib.pyplot as plt
%matplotlib inline
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
```
# Spark on Tour
## Example of streaming data processing to build an NRT dashboard
In this notebook we walk through a complete example of how Spark's Structured Streaming API can be used to process a live stream of rating events in near real time and produce, as output, a set of aggregated statistics from which a real-time visualization and monitoring dashboard could be built.
Specifically, we simulate a video-on-demand platform on which users are watching movies and rating them. We take the rating events arriving in the stream and generate, in real time, viewing statistics aggregated per movie, so that we can monitor which titles are the most popular right now.
### Import libraries, define schemas, and initialize the Spark session
```
import findspark
findspark.init()
import pyspark
from pyspark.sql.types import *
from pyspark.sql import SparkSession
import pyspark.sql.functions as f
from IPython.display import clear_output
import plotly.express as px
ratingSchema = StructType([
StructField("user", IntegerType()),
StructField("movie", IntegerType()),
StructField("rating", FloatType())
])
movieSchema = StructType([
StructField("movie", IntegerType()),
StructField("title", StringType()),
StructField("genres", StringType())
])
def foreach_batch_function(df, epoch_id):
mostPopularMovies = df.limit(10).toPandas()
clear_output()
print(mostPopularMovies)
#setup spark session
sparkSession = (SparkSession.builder
.appName("Movie ratings streaming")
.master("local[*]")
.config("spark.scheduler.mode", "FAIR")
.getOrCreate())
sparkSession.sparkContext.setLogLevel("ERROR")
```
### Load the movies dataset
```
movies = sparkSession.read.csv("/tmp/movielens/movies.csv", schema=movieSchema, header=True)
movies.show()
```
### Initialize the ratings stream from Apache Kafka
```
dataset = (sparkSession
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "localhost:29092")
.option("subscribe", "ratings")
.load())
dataset = dataset.selectExpr("CAST(value AS STRING)")
dataset = dataset.select(f.from_json(f.col("value"), ratingSchema).alias("data")).select("data.*")
```
### Group by movie and compute the number of views and the average rating
```
dataset = dataset.select("movie", "rating") \
.groupBy("movie") \
.agg(f.count("rating").alias("num_ratings"), f.avg("rating").alias("avg_rating"))
```
### Join with the movies dataset to obtain the title
```
dataset = dataset.join(movies, dataset["movie"] == movies["movie"], "left_outer") \
.drop(movies["movie"]) \
.drop("genres")
```
### Sort the output by number of ratings (views)
```
dataset = dataset.select("movie", "title", "avg_rating", "num_ratings") \
.sort(f.desc("num_ratings"))
```
### Run the streaming query
```
query = dataset \
.writeStream \
.outputMode("complete") \
.format("console") \
.trigger(processingTime='5 seconds') \
.foreachBatch(foreach_batch_function) \
.start()
query.explain()
query.awaitTermination()
```
# Federated Tensorflow Mnist Tutorial
# Long-Living entities update
* We may now have the Director running on another machine.
* We use the Federation API to communicate with the Director.
* The Federation object should hold a Director client (for user service).
* Keep in mind that several API instances may be connected to one Director.
* For now, we do not worry about how the Director is started.
* But it knows the data shape and target shape for the data science problem in the Federation.
* Director holds the list of connected envoys, we do not need to specify it anymore.
* Director and Envoys are responsible for encrypting connections, we do not need to worry about certs.
* Yet we MUST have a cert to communicate to the Director.
* We MUST know the FQDN of a Director.
* Director communicates data and target shape to the Federation interface object.
* Experiment API may use this info to construct a dummy dataset and a `shard descriptor` stub.
```
# Install dependencies if not already installed
# !pip install tensorflow==2.3.1
```
## Connect to the Federation
```
# Create a federation
from openfl.interface.interactive_api.federation import Federation
# please use the same identificator that was used in signed certificate
client_id = 'api'
cert_dir = 'cert'
director_node_fqdn = 'localhost'
director_port=50051
# 1) Run with API layer - Director mTLS
# If the user wants to enable mTLS their must provide CA root chain, and signed key pair to the federation interface
# cert_chain = f'{cert_dir}/root_ca.crt'
# api_certificate = f'{cert_dir}/{client_id}.crt'
# api_private_key = f'{cert_dir}/{client_id}.key'
# federation = Federation(
# client_id=client_id,
# director_node_fqdn=director_node_fqdn,
# director_port=director_port,
# cert_chain=cert_chain,
# api_cert=api_certificate,
# api_private_key=api_private_key
# )
# --------------------------------------------------------------------------------------------------------------------
# 2) Run with TLS disabled (trusted environment)
# Federation can also determine local fqdn automatically
federation = Federation(
client_id=client_id,
director_node_fqdn=director_node_fqdn,
director_port=director_port,
tls=False
)
shard_registry = federation.get_shard_registry()
shard_registry
# First, request a dummy_shard_desc that holds information about the federated dataset
dummy_shard_desc = federation.get_dummy_shard_descriptor(size=10)
dummy_shard_dataset = dummy_shard_desc.get_dataset('train')
sample, target = dummy_shard_dataset[0]
f"Sample shape: {sample.shape}, target shape: {target.shape}"
```
## Describing the FL experiment
```
from openfl.interface.interactive_api.experiment import TaskInterface, DataInterface, ModelInterface, FLExperiment
```
### Register model
```
from layers import create_model, optimizer
framework_adapter = 'openfl.plugins.frameworks_adapters.keras_adapter.FrameworkAdapterPlugin'
model = create_model()
MI = ModelInterface(model=model, optimizer=optimizer, framework_plugin=framework_adapter)
```
### Register dataset
```
import numpy as np
from tensorflow.keras.utils import Sequence
class DataGenerator(Sequence):
def __init__(self, shard_descriptor, batch_size):
self.shard_descriptor = shard_descriptor
self.batch_size = batch_size
self.indices = np.arange(len(shard_descriptor))
self.on_epoch_end()
def __len__(self):
return len(self.indices) // self.batch_size
def __getitem__(self, index):
index = self.indices[index * self.batch_size:(index + 1) * self.batch_size]
batch = [self.indices[k] for k in index]
X, y = self.shard_descriptor[batch]
return X, y
def on_epoch_end(self):
np.random.shuffle(self.indices)
class MnistFedDataset(DataInterface):
def __init__(self, **kwargs):
super().__init__(**kwargs)
@property
def shard_descriptor(self):
return self._shard_descriptor
@shard_descriptor.setter
def shard_descriptor(self, shard_descriptor):
"""
Describe per-collaborator procedures or sharding.
This method will be called during a collaborator initialization.
Local shard_descriptor will be set by Envoy.
"""
self._shard_descriptor = shard_descriptor
self.train_set = shard_descriptor.get_dataset('train')
self.valid_set = shard_descriptor.get_dataset('val')
def __getitem__(self, index):
return self.shard_descriptor[index]
def __len__(self):
return len(self.shard_descriptor)
def get_train_loader(self):
"""
Output of this method will be provided to tasks with optimizer in contract
"""
if self.kwargs['train_bs']:
batch_size = self.kwargs['train_bs']
else:
batch_size = 32
return DataGenerator(self.train_set, batch_size=batch_size)
def get_valid_loader(self):
"""
Output of this method will be provided to tasks without optimizer in contract
"""
if self.kwargs['valid_bs']:
batch_size = self.kwargs['valid_bs']
else:
batch_size = 32
return DataGenerator(self.valid_set, batch_size=batch_size)
def get_train_data_size(self):
"""
Information for aggregation
"""
return len(self.train_set)
def get_valid_data_size(self):
"""
Information for aggregation
"""
return len(self.valid_set)
```
### Create Mnist federated dataset
```
fed_dataset = MnistFedDataset(train_bs=64, valid_bs=512)
```
## Define and register FL tasks
```
TI = TaskInterface()
import time
import tensorflow as tf
from layers import train_acc_metric, val_acc_metric, loss_fn
@TI.register_fl_task(model='model', data_loader='train_dataset', \
device='device', optimizer='optimizer')
def train(model, train_dataset, optimizer, device, loss_fn=loss_fn, warmup=False):
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * 64))
if warmup:
break
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
return {'train_acc': train_acc,}
@TI.register_fl_task(model='model', data_loader='val_dataset', device='device')
def validate(model, val_dataset, device):
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
return {'validation_accuracy': val_acc,}
```
## Time to start a federated learning experiment
```
# create an experiment in the federation
experiment_name = 'mnist_experiment'
fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)
# The following command zips the workspace and python requirements to be transferred to collaborator nodes
fl_experiment.start(model_provider=MI,
task_keeper=TI,
data_loader=fed_dataset,
rounds_to_train=5,
opt_treatment='CONTINUE_GLOBAL')
fl_experiment.stream_metrics()
```
# Linear Regression
## Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
```
# Python 3 compatability
from __future__ import division, print_function
from six.moves import range
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
np.random.seed(56101)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
```
Linear regression is ubiquitous in research. In this example we'll fit a line
$$ y=mx+b $$
to data where the error bars have been underestimated and need to be inflated by a factor $f$. This example is taken from the [emcee documentation](http://dan.iel.fm/emcee/current/user/line/).
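Concretely, the log-likelihood coded in the `loglike` function below is, up to an additive constant,
$$ \ln\mathcal{L}(m, b, \ln f) = -\frac{1}{2}\sum_{n}\left[\frac{(y_n - m x_n - b)^2}{s_n^2} + \ln s_n^2\right], \qquad s_n^2 = \sigma_n^2 + e^{2\ln f}\,(m x_n + b)^2 , $$
where $\sigma_n$ are the reported error bars.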
```
# truth
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# generate mock data
N = 50
x = np.sort(10 * np.random.rand(N))
yerr = 0.1 + 0.5 * np.random.rand(N)
y_true = m_true * x + b_true
y = y_true + np.abs(f_true * y_true) * np.random.randn(N)
y += yerr * np.random.randn(N)
# plot results
plt.figure(figsize=(10, 5))
plt.errorbar(x, y, yerr=yerr, fmt='ko', ecolor='red')
plt.plot(x, y_true, color='blue', lw=3)
plt.xlabel(r'$X$')
plt.ylabel(r'$Y$')
plt.tight_layout()
```
We will assume the errors are Normal and impose uniform priors on $(m, b, \ln f)$.
```
# log-likelihood
def loglike(theta):
m, b, lnf = theta
model = m * x + b
inv_sigma2 = 1.0 / (yerr**2 + model**2 * np.exp(2 * lnf))
return -0.5 * (np.sum((y-model)**2 * inv_sigma2 - np.log(inv_sigma2)))
# prior transform
def prior_transform(utheta):
um, ub, ulf = utheta
m = 5.5 * um - 5.
b = 10. * ub
lnf = 11. * ulf - 10.
return m, b, lnf
```
Let's sample from this distribution using multiple bounding ellipsoids and random "staggers" (an alternative to random walks).
```
dsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=3,
bound='multi', sample='rstagger')
dsampler.run_nested()
dres = dsampler.results
```
Let's see how we did.
```
from dynesty import plotting as dyplot
truths = [m_true, b_true, np.log(f_true)]
labels = [r'$m$', r'$b$', r'$\ln f$']
fig, axes = dyplot.traceplot(dsampler.results, truths=truths, labels=labels,
fig=plt.subplots(3, 2, figsize=(16, 12)))
fig.tight_layout()
fig, axes = dyplot.cornerplot(dres, truths=truths, show_titles=True,
title_kwargs={'y': 1.04}, labels=labels,
fig=plt.subplots(3, 3, figsize=(15, 15)))
```
# Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use.
```
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
```
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
```
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keeping the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
```
I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window the next sequence of `num_steps` characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
```
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_cells"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
tf.summary.histogram('softmax_w', softmax_w)
tf.summary.histogram('softmax_b', softmax_b)
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
tf.summary.histogram('predictions', preds)
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
tf.summary.scalar('cost', cost)
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
merged = tf.summary.merge_all()
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer', 'merged']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
```
## Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
```
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
```
## Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.
```
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)
test_writer = tf.summary.FileWriter('./logs/2/test')
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost,
model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
train_writer.add_summary(summary, iteration)
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
summary, batch_loss, new_state = sess.run([model.merged, model.cost,
model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
test_writer.add_summary(summary, iteration)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
#saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next character. We can then feed that new character back in to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
```
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
```
%load_ext autoreload
%autoreload 2
# reloads all functions automatically
%matplotlib notebook
from irreversible_stressstrain import StressStrain as strainmodel
import test_suite as suite
import graph_suite as plot
import numpy as np
model = strainmodel('ref/HSRS/22').get_experimental_data()
slopes = suite.get_slopes(model)
second_deriv_slopes = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
# -- we think that yield occurs where the standard deviation is decreasing AND the slopes are mostly negative
def findYieldInterval(slopes, numberofsections):
def numneg(val):
return sum((val<0).astype(int))
# -- divide into ten intervals and save stddev of each
splitslopes = np.array_split(slopes,numberofsections)
splitseconds = np.array_split(second_deriv_slopes,numberofsections)
# -- displays the number of negative values in a range (USEFUL!!!)
for section in splitslopes:
print numneg(section), len(section)
print "-------------------------------"
for section in splitseconds:
print numneg(section), len(section)
divs = [np.std(vals) for vals in splitslopes]
# -- stddev of the whole thing
stdev = np.std(slopes)
interval = 0
slopesect = splitslopes[interval]
secondsect = splitseconds[interval]
print divs, stdev
# -- the proportion of slope values in an interval that must be negative to determine that material yields
cutoff = 3./4.
while numneg(slopesect)<len(slopesect)*cutoff and numneg(secondsect)<len(secondsect)*cutoff:
interval = interval + 1
"""Guard against going out of bounds"""
if interval==len(splitslopes): break
slopesect = splitslopes[interval]
secondsect = splitseconds[interval]
print
print interval
return interval
numberofsections = 15
interval_length = len(model)/numberofsections
"""
Middle of selected interval
Guard against going out of bounds
"""
yield_interval = findYieldInterval(slopes,numberofsections)
yield_index = min(yield_interval*interval_length + interval_length/2,len(model[:])-1)
yield_value = np.array(model[yield_index])[None,:]
print
print yield_value
```
## Make these estimates more reliable and robust
```
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
slopes = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
"""Now what if we have strain vs slope"""
strainvslope = suite.combine_data(strain,slopes)
strainvsecond = suite.combine_data(strain,second_deriv)
plot.plot2D(strainvsecond,'Strain','Slope',marker="ro")
plot.plot2D(model,'Strain','Stress',marker="ro")
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
slopes = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
num_intervals = 80
interval_length = len(second_deriv)/num_intervals
split_2nd_derivs = np.array_split(second_deriv,num_intervals)
print np.mean(second_deriv)
down_index = 0
for index, section in enumerate(split_2nd_derivs):
if sum(section)<np.mean(slopes):
down_index = index
break
yield_index = down_index*interval_length
print strain[yield_index], stress[yield_index]
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
plot1 = suite.combine_data(strain,first_deriv)
plot2 = suite.combine_data(strain,second_deriv)
plot.plot2D(model)
plot.plot2D(plot1)
plot.plot2D(plot2)
```
### See when standard deviation of second derivative begins to decrease
```
model = strainmodel('ref/HSRS/222').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
ave_deviation = np.std(second_deriv)
deviation_second = [np.std(val) for val in np.array_split(second_deriv,30)]
yielding = 0
for index,value in enumerate(deviation_second):
if value != 0.0 and value<ave_deviation and index!=0:
yielding = index
break
print second_deriv
#print "It seems to yield at index:", yielding
#print "These are all of the standard deviations, by section:", deviation_second, "\n"
#print "The overall standard deviation of the second derivative is:", ave_deviation
```
## The actual yield values are as follows (These are approximate):
### ref/HSRS/22: Index 106 [1.3912797535, 900.2614980977]
### ref/HSRS/222: Index 119 [0, 904.6702299]
### ref/HSRS/326: Index 150 [6.772314989, 906.275032]
### Index of max standard deviation of the curve
```
model = strainmodel('ref/HSRS/22').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
print second_deriv
chunks = 20
int_length = len(model[:])/chunks
deriv2spl = np.array_split(second_deriv,chunks)
deviation_second = [abs(np.mean(val)) for val in deriv2spl]
del(deviation_second[0])
print deviation_second
print np.argmax(deviation_second)
#print "The standard deviation of all the second derivatives is", np.std(second_deriv)
```
### If our data dips, we can attempt to find local maxima
```
import numpy as np
# -- climbs a discrete dataset to find local max
def hillclimber(data, guessindex = 0):
x = data[:,0]
y = data[:,1]
curx = x[guessindex]
cury = y[guessindex]
guessleft = max(0,guessindex-1)
guessright = min(len(x)-1,guessindex+1)
done = False
while not done:
left = y[guessleft]
right = y[guessright]
difleft = left-cury
difright = right-cury
if difleft<0 and difright<0 or (difleft==0 and difright==0):
done = True
elif difleft>difright:
cur = left
guessindex = guessleft
elif difright>difleft or difright==difleft:
cur = right
guessindex = guessright
return guessindex
func = lambda x: x**2
xs = np.linspace(0.,10.,5)
ys = func(xs)
data = suite.combine_data(xs,ys)
print hillclimber(data)
```
# VAE outlier detection on CIFAR10
## Method
The Variational Auto-Encoder ([VAE](https://arxiv.org/abs/1312.6114)) outlier detector is first trained on a batch of unlabeled, but normal (*inlier*) data. Unsupervised training is desirable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process.
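As a rough illustration of the MSE variant (a sketch, not the actual `alibi_detect` implementation), the instance-level score is simply the average squared difference between an input and its reconstruction, and instances whose score exceeds a threshold are flagged:
```
import numpy as np

def mse_outlier_score(x, x_recon):
    # mean squared error per instance, averaged over all feature dimensions
    return ((x - x_recon) ** 2).reshape(len(x), -1).mean(axis=-1)

def is_outlier(x, x_recon, threshold=0.015):
    # flag instances whose reconstruction error exceeds the threshold
    return mse_outlier_score(x, x_recon) > threshold
```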
## Dataset
[CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) consists of 60,000 32 by 32 RGB images equally distributed over 10 classes.
```
import logging
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Layer, Reshape, InputLayer
from tqdm import tqdm
from alibi_detect.models.losses import elbo
from alibi_detect.od import OutlierVAE
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
```
## Load CIFAR10 data
```
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
```
## Load or define outlier detector
The pretrained outlier and adversarial detectors used in the example notebooks can be found [here](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect). You can use the built-in ```fetch_detector``` function which saves the pre-trained models in a local directory ```filepath``` and loads the detector. Alternatively, you can train a detector from scratch:
```
load_outlier_detector = True
filepath = 'my_path' # change to directory where model is downloaded
if load_outlier_detector: # load pretrained outlier detector
detector_type = 'outlier'
dataset = 'cifar10'
detector_name = 'OutlierVAE'
od = fetch_detector(filepath, detector_type, dataset, detector_name)
filepath = os.path.join(filepath, detector_name)
else: # define model, initialize, train and save outlier detector
latent_dim = 1024
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(32, 32, 3)),
Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)
])
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim,)),
Dense(4*4*128),
Reshape(target_shape=(4, 4, 128)),
Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
])
# initialize outlier detector
od = OutlierVAE(threshold=.015, # threshold for outlier score
score_type='mse', # use MSE of reconstruction error for outlier detection
encoder_net=encoder_net, # can also pass VAE model instead
decoder_net=decoder_net, # of separate encoder and decoder
latent_dim=latent_dim,
samples=2)
# train
od.fit(X_train,
loss_fn=elbo,
cov_elbo=dict(sim=.05),
epochs=50,
verbose=False)
# save the trained outlier detector
save_detector(od, filepath)
```
## Check quality VAE model
```
idx = 8
X = X_train[idx].reshape(1, 32, 32, 3)
X_recon = od.vae(X)
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
plt.imshow(X_recon.numpy().reshape(32, 32, 3))
plt.axis('off')
plt.show()
```
## Check outliers on original CIFAR images
```
X = X_train[:500]
print(X.shape)
od_preds = od.predict(X,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
print(list(od_preds['data'].keys()))
```
### Plot instance level outlier scores
```
target = np.zeros(X.shape[0],).astype(int) # all normal CIFAR10 training instances
labels = ['normal', 'outlier']
plot_instance_score(od_preds, target, labels, od.threshold)
```
### Visualize predictions
```
X_recon = od.vae(X).numpy()
plot_feature_outlier_image(od_preds,
X,
X_recon=X_recon,
instance_ids=[8, 60, 100, 330], # pass a list with indices of instances to display
max_instances=5, # max nb of instances to display
outliers_only=False) # only show outlier predictions
```
## Predict outliers on perturbed CIFAR images
We perturb CIFAR images by adding random noise to patches (masks) of the image. For each mask size in `n_mask_sizes`, sample `n_masks` and apply those to each of the `n_imgs` images. Then we predict outliers on the masked instances:
```
# nb of predictions per image: n_masks * n_mask_sizes
n_mask_sizes = 10
n_masks = 20
n_imgs = 50
```
Define masks and get images:
```
mask_sizes = [(2*n,2*n) for n in range(1,n_mask_sizes+1)]
print(mask_sizes)
img_ids = np.arange(n_imgs)
X_orig = X[img_ids].reshape(img_ids.shape[0], 32, 32, 3)
print(X_orig.shape)
```
Calculate instance level outlier scores:
```
all_img_scores = []
for i in tqdm(range(X_orig.shape[0])):
img_scores = np.zeros((len(mask_sizes),))
for j, mask_size in enumerate(mask_sizes):
# create masked instances
X_mask, mask = apply_mask(X_orig[i].reshape(1, 32, 32, 3),
mask_size=mask_size,
n_masks=n_masks,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
# predict outliers
od_preds_mask = od.predict(X_mask)
score = od_preds_mask['data']['instance_score']
# store average score over `n_masks` for a given mask size
img_scores[j] = np.mean(score)
all_img_scores.append(img_scores)
```
### Visualize outlier scores vs. mask sizes
```
x_plt = [mask[0] for mask in mask_sizes]
for ais in all_img_scores:
plt.plot(x_plt, ais)
plt.xticks(x_plt)
plt.title('Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier Score')
plt.show()
ais_np = np.zeros((len(all_img_scores), all_img_scores[0].shape[0]))
for i, ais in enumerate(all_img_scores):
ais_np[i, :] = ais
ais_mean = np.mean(ais_np, axis=0)
plt.title('Mean Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.plot(x_plt, ais_mean)
plt.xticks(x_plt)
plt.show()
```
### Investigate instance level outlier
```
i = 8 # index of instance to look at
plt.plot(x_plt, all_img_scores[i])
plt.xticks(x_plt)
plt.title('Outlier Scores Image {} for Increasing Mask Size'.format(i))
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.show()
```
Reconstruction of masked images and outlier scores per channel:
```
all_X_mask = []
X_i = X_orig[i].reshape(1, 32, 32, 3)
all_X_mask.append(X_i)
# apply masks
for j, mask_size in enumerate(mask_sizes):
# create masked instances
X_mask, mask = apply_mask(X_i,
mask_size=mask_size,
n_masks=1, # just 1 for visualization purposes
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
all_X_mask.append(X_mask)
all_X_mask = np.concatenate(all_X_mask, axis=0)
all_X_recon = od.vae(all_X_mask).numpy()
od_preds = od.predict(all_X_mask)
```
Visualize:
```
plot_feature_outlier_image(od_preds,
all_X_mask,
X_recon=all_X_recon,
max_instances=all_X_mask.shape[0],
n_channels=3)
```
## Predict outliers on a subset of features
The sensitivity of the outlier detector can not only be controlled via the `threshold`, but also by selecting the percentage of the features used for the instance level outlier score computation. For instance, we might want to flag outliers if 40% of the features (pixels for images) have an average outlier score above the threshold. This is possible via the `outlier_perc` argument in the `predict` function. It specifies the percentage of the features that are used for outlier detection, sorted in descending outlier score order.
```
perc_list = [20, 40, 60, 80, 100]
all_perc_scores = []
for perc in perc_list:
od_preds_perc = od.predict(all_X_mask, outlier_perc=perc)
iscore = od_preds_perc['data']['instance_score']
all_perc_scores.append(iscore)
```
Visualize outlier scores vs. mask sizes and percentage of features used:
```
x_plt = [0] + x_plt
for aps in all_perc_scores:
plt.plot(x_plt, aps)
plt.xticks(x_plt)
plt.legend(perc_list)
plt.title('Outlier Score for Increasing Mask Size and Different Feature Subsets')
plt.xlabel('Mask Size')
plt.ylabel('Outlier Score')
plt.show()
```
## Infer outlier threshold value
Finding good threshold values can be tricky since they are typically not easy to interpret. The `infer_threshold` method helps find a sensible value. We need to pass a batch of instances `X` and specify what percentage of those we consider to be normal via `threshold_perc`.
```
print('Current threshold: {}'.format(od.threshold))
od.infer_threshold(X, threshold_perc=99) # assume 1% of the training data are outliers
print('New threshold: {}'.format(od.threshold))
```
# Converters for Quadratic Programs
Optimization problems in Qiskit's optimization module are represented with the `QuadraticProgram` class, which is a generic and powerful representation for optimization problems. In general, optimization algorithms are defined for a certain formulation of a quadratic program, and we need to convert our problem to the right type.
For instance, Qiskit provides several optimization algorithms that can handle Quadratic Unconstrained Binary Optimization (QUBO) problems. These are mapped to Ising Hamiltonians, for which Qiskit uses the `qiskit.aqua.operators` module, and then their ground state is approximated. For this optimization, commonly known algorithms such as VQE or QAOA can be used as the underlying routine. See the following tutorial about the [Minimum Eigen Optimizer](./03_minimum_eigen_optimizer.ipynb) for more detail. Note that other algorithms also exist that work differently, such as the `GroverOptimizer`.
To map a problem to the correct input format, the optimization module of Qiskit offers a variety of converters. In this tutorial we provide an overview of this functionality. Currently, Qiskit contains the following converters.
- `InequalityToEquality`: converts inequality constraints into equality constraints with additional slack variables.
- `IntegerToBinary`: converts integer variables into binary variables and corresponding coefficients.
- `LinearEqualityToPenalty`: converts equality constraints into additional terms of the objective function.
- `QuadraticProgramToQubo`: a wrapper for `InequalityToEquality`, `IntegerToBinary`, and `LinearEqualityToPenalty` for convenience (a usage sketch follows below).
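For example, the wrapper can be used as follows (a minimal sketch, assuming `qp` is a `QuadraticProgram` such as the one defined in the next section):
```
from qiskit_optimization.converters import QuadraticProgramToQubo

conv = QuadraticProgramToQubo()
qubo = conv.convert(qp)  # inequality -> equality -> binary -> penalty terms, in one call
print(qubo.export_as_lp_string())
```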
## InequalityToEquality
`InequalityToEqualityConverter` converts inequality constraints into equality constraints with additional slack variables to remove inequality constraints from `QuadraticProgram`. The upper bounds and the lower bounds of slack variables will be calculated from the difference between the left sides and the right sides of constraints. Signs of slack variables depend on symbols in constraints such as $\leq$ and $\geq$.
The following is an example of a maximization problem with two inequality constraints. Variable $x$ and $y$ are binary variables and variable $z$ is an integer variable.
\begin{aligned}
& \text{maximize}
& 2x + y + z\\
& \text{subject to:}
& x+y+z \leq 5.5\\
& & x+y+z \geq 2.5\\
& & x, y \in \{0,1\}\\
& & z \in \{0,1,2,3,4,5,6,7\} \\
\end{aligned}
With `QuadraticProgram`, an optimization model of the problem is written as follows.
```
from qiskit_optimization import QuadraticProgram
qp = QuadraticProgram()
qp.binary_var('x')
qp.binary_var('y')
qp.integer_var(lowerbound=0, upperbound=7, name='z')
qp.maximize(linear={'x': 2, 'y': 1, 'z': 1})
qp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='LE', rhs=5.5,name='xyz_leq')
qp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='GE', rhs=2.5,name='xyz_geq')
print(qp.export_as_lp_string())
```
Call `convert` method of `InequalityToEquality` to convert.
```
from qiskit_optimization.converters import InequalityToEquality
ineq2eq = InequalityToEquality()
qp_eq = ineq2eq.convert(qp)
print(qp_eq.export_as_lp_string())
```
After converting, the formulation of the problem looks as follows. As we can see, the inequality constraints are replaced with equality constraints with additional integer slack variables, $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$.
Let us explain how the conversion works. For example, the lower bound of the left side of the first constraint is $0$, which is the case of $x=0$, $y=0$, and $z=0$. Thus, the upper bound of the additional integer slack variable must be $5$ to be able to satisfy even the case of $x=0$, $y=0$, and $z=0$. Note that we cut off the part after the decimal point in the converted formulation since the left side of the first constraint in the original formulation can only take integer values. For the second constraint, we basically apply the same approach. However, the symbol in the second constraint is $\geq$, so we put a minus sign before $xyz\_geq\text{@}int\_slack$ to be able to satisfy even the case of $x=1$, $y=1$, and $z=7$.
\begin{aligned}
& \text{maximize}
& 2x + y + z\\
& \text{subject to:}
& x+y+z+xyz\_leq\text{@}int\_slack= 5\\
& & x+y+z-xyz\_geq\text{@}int\_slack= 3\\
& & x, y \in \{0,1\}\\
& & z \in \{0,1,2,3,4,5,6,7\} \\
& & xyz\_leq\text{@}int\_slack \in \{0,1,2,3,4,5\} \\
& & xyz\_geq\text{@}int\_slack \in \{0,1,2,3,4,5,6\} \\
\end{aligned}
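As a quick arithmetic check of the slack-variable bounds above (illustration only):
```
import math

lhs_min = 0 + 0 + 0        # x = y = z = 0
lhs_max = 1 + 1 + 7        # x = y = 1, z = 7
leq_rhs = math.floor(5.5)  # -> 5, since the left-hand side is integer-valued
geq_rhs = math.ceil(2.5)   # -> 3
print(leq_rhs - lhs_min)   # upper bound of xyz_leq@int_slack: 5
print(lhs_max - geq_rhs)   # upper bound of xyz_geq@int_slack: 6
```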
## IntegerToBinary
`IntegerToBinary` converts integer variables into binary variables and coefficients to remove integer variables from `QuadraticProgram`. For converting, bounded-coefficient encoding proposed in [arxiv:1706.01945](https://arxiv.org/abs/1706.01945) (Eq. (5)) is used. For more detail of the encoding method, please see the paper.
We use the output of `InequalityToEquality` as starting point. Variable $x$ and $y$ are binary variables, while the variable $z$ and the slack variables $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$ are integer variables. We print the problem again for reference.
```
print(qp_eq.export_as_lp_string())
```
Call `convert` method of `IntegerToBinary` to convert.
```
from qiskit_optimization.converters import IntegerToBinary
int2bin = IntegerToBinary()
qp_eq_bin = int2bin.convert(qp_eq)
print(qp_eq_bin.export_as_lp_string())
```
After converting, the integer variable $z$ is replaced with three binary variables $z\text{@}0$, $z\text{@}1$ and $z\text{@}2$ with coefficients 1, 2 and 4, respectively, as shown above.
The slack variables $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$ that were introduced by `InequalityToEquality` are also both replaced with three binary variables with coefficients 1, 2, 2, and 1, 2, 3, respectively.
Note: essentially, the coefficients mean that the weighted sum of these binary variables can take any value obtainable as the sum of a subset of $\{1, 2, 4\}$, $\{1, 2, 2\}$, and $\{1, 2, 3\}$, i.e. exactly the acceptable values $\{0, \ldots, 7\}$, $\{0, \ldots, 5\}$, and $\{0, \ldots, 6\}$, which correctly respects the lower and upper bounds of the original integer variables.
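A quick check (illustration only) that these binary expansions cover exactly the required ranges:
```
from itertools import combinations

def reachable(coeffs):
    # all values obtainable as the sum of a subset of the coefficients
    return sorted({sum(c) for r in range(len(coeffs) + 1) for c in combinations(coeffs, r)})

print(reachable([1, 2, 4]))  # 0..7 -> z
print(reachable([1, 2, 2]))  # 0..5 -> xyz_leq@int_slack
print(reachable([1, 2, 3]))  # 0..6 -> xyz_geq@int_slack
```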
`IntegerToBinary` also provides an `interpret` method to translate a given binary result back to the original integer representation, as sketched below.
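For example (a hedged sketch; the binary vector below contains hypothetical values in the variable order of `qp_eq_bin`):
```
x_bin = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # one entry per variable of qp_eq_bin
x_orig = int2bin.interpret(x_bin)          # values of the original variables of qp_eq
print(x_orig)
```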
## LinearEqualityToPenalty
`LinearEqualityToPenalty` converts linear equality constraints into additional quadratic penalty terms of the objective function to map `QuadraticProgram` to an unconstrained form.
An input to the converter has to be a `QuadraticProgram` with only linear equality constraints. Those equality constraints, e.g. $\sum_i a_i x_i = b$ where $a_i$ and $b$ are numbers and $x_i$ is a variable, will be added to the objective function in the form of $M(b - \sum_i a_i x_i)^2$, where $M$ is a large number used as a penalty factor.
By default $M = 10^5$. The sign of the penalty term depends on whether the problem type is a maximization or minimization.
We use the output of `IntegerToBinary` as starting point, where all variables are binary variables and all inequality constraints have been mapped to equality constraints.
We print the problem again for reference.
```
print(qp_eq_bin.export_as_lp_string())
```
Call `convert` method of `LinearEqualityToPenalty` to convert.
```
from qiskit_optimization.converters import LinearEqualityToPenalty
lineq2penalty = LinearEqualityToPenalty()
qubo = lineq2penalty.convert(qp_eq_bin)
print(qubo.export_as_lp_string())
```
After converting, the equality constraints are added to the objective function as additional terms with the default penalty factor $M=1e5$.
The resulting problem is now a QUBO and compatible with many quantum optimization algorithms such as VQE, QAOA and so on.
This gives the same result as before.
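For reference, a hedged sketch of handing such a QUBO to a minimum-eigensolver-based optimizer (exact module paths vary across Qiskit versions):
```
from qiskit.algorithms import NumPyMinimumEigensolver
from qiskit_optimization.algorithms import MinimumEigenOptimizer

exact_solver = MinimumEigenOptimizer(NumPyMinimumEigensolver())
result = exact_solver.solve(qubo)
print(result)
```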
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Essential: Static file management with SourceLoader
Data pipelines usually interact with external systems such as SQL databases. Using relative paths to find such files is error-prone because the resolved path depends on the file loading it; on the other hand, absolute paths are too restrictive: they only work in your current environment and will break in others. Combining `Env` with `SourceLoader` provides a clean approach for managing static files.
```
from pathlib import Path
import pandas as pd
from sklearn import datasets
from IPython.display import display, Markdown
from ploomber import DAG, SourceLoader, with_env
from ploomber.tasks import PythonCallable, NotebookRunner, SQLUpload, SQLScript
from ploomber.products import File, SQLiteRelation
from ploomber.clients import SQLAlchemyClient
from ploomber.executors import Serial
# initialize a temporary directory
import tempfile
import os
tmp_dir = Path(tempfile.mkdtemp())
tmp_dir_static = tmp_dir / 'static'
tmp_dir_static.mkdir()
os.chdir(str(tmp_dir))
report_py = """
# static/report.py
# +
# This file is in jupytext light format
import seaborn as sns
import pandas as pd
# -
# + tags=['parameters']
# papermill will add the parameters below this cell
upstream = None
product = None
# -
# +
path = upstream['raw']
df = pd.read_parquet(path)
# -
# ## AGE distribution
# +
_ = sns.distplot(df.AGE)
# -
# ## Price distribution
# +
_ = sns.distplot(df.price)
# -
"""
clean_table_sql = """
-- static/clean_table.sql
DROP TABLE IF EXISTS {{product}};
CREATE TABLE {{product}}
AS SELECT * FROM {{upstream["raw_table"]}}
WHERE AGE < 100
"""
env_yaml = """
_module: '{{here}}'
path:
data: '{{here}}/data/'
static: '{{here}}/static/'
"""
(tmp_dir_static / 'report.py').write_text(report_py)
(tmp_dir_static / 'clean_table.sql').write_text(clean_table_sql)
(tmp_dir / 'env.yaml').write_text(env_yaml)
def display_file(file, syntax):
s = """
```{}
{}
```
""".format(syntax, file)
return display(Markdown(s))
```
Our working environment has an `env.yaml` file with a `static/` folder holding a SQL and a Python script.
```
! tree $tmp_dir
```
### Content of `env.yaml`
```
display_file(env_yaml, 'yaml')
```
### Content of `static/report.py`
```
display_file(report_py, 'python')
```
### Content of `static/clean_table.sql`
```
display_file(clean_table_sql, 'sql')
```
### Pipeline declaration
```
def _get_data(product):
data = datasets.load_boston()
df = pd.DataFrame(data.data)
df.columns = data.feature_names
df['price'] = data.target
df.to_parquet(str(product))
@with_env
def make(env):
# NOTE: passing the executor parameter is only required for testing purposes, can be removed
dag = DAG(executor=Serial(build_in_subprocess=False))
client = SQLAlchemyClient('sqlite:///my_db.db')
dag.clients[SQLUpload] = client
dag.clients[SQLiteRelation] = client
dag.clients[SQLScript] = client
# initialize SourceLoader in our static directory
loader = SourceLoader(path=env.path.static)
get_data = PythonCallable(_get_data,
product=File(tmp_dir / 'raw.parquet'),
dag=dag,
name='raw')
# if we do not pass a name, the filename will be used as default
report = NotebookRunner(loader['report.py'],
product=File(tmp_dir / 'report.html'),
dag=dag,
kernelspec_name='python3')
raw_table = SQLUpload(source='{{upstream["raw"]}}',
product=SQLiteRelation(('raw', 'table')),
dag=dag,
name='raw_table')
# same here, no need to pass a name
clean_table = SQLScript(loader['clean_table.sql'],
product=SQLiteRelation(('clean', 'table')),
dag=dag)
get_data >> report
get_data >> raw_table >> clean_table
return dag
dag = make()
```
### Pipeline status
```
# Using SourceLoader automatically adds a 'Location' column which points to the source code location
dag.status()
dag.build()
```
## Advanced jinja2 features
`SourceLoader` initializes a proper jinja2 environment, so you can use features such as [macros](https://jinja.palletsprojects.com/en/2.11.x/templates/#macros), which is very useful for maximizing SQL code reusability.
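For illustration, a hypothetical SQL source (not part of the pipeline above) using a jinja2 macro that `SourceLoader` would render could look like this:
```
sql_with_macro = """
{% macro agg(col) -%}
    AVG({{col}}) AS avg_{{col}}, MAX({{col}}) AS max_{{col}}
{%- endmacro %}

DROP TABLE IF EXISTS {{product}};
CREATE TABLE {{product}} AS
SELECT {{ agg('AGE') }}, {{ agg('price') }}
FROM {{upstream["clean_table"]}} -- hypothetical upstream task name
"""
```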
```
import shutil
shutil.rmtree(str(tmp_dir))
```
## Compile a training set using ASPCAP normalization
```
from utils_h5 import H5Compiler
from astropy.io import fits
import numpy as np
# To create a astroNN compiler instance
compiler_aspcap_train = H5Compiler()
compiler_aspcap_train.teff_low = 4000 # Effective Temperature Lower
compiler_aspcap_train.teff_high = 5500 # Effective Temperature Upper
compiler_aspcap_train.vscattercut = 1 # Velocity Scattering Upper
compiler_aspcap_train.starflagcut = True # STARFLAG == 0
compiler_aspcap_train.aspcapflagcut = True # ASPCAPFLAG == 0
compiler_aspcap_train.ironlow = -10000. # [Fe/H] Lower
compiler_aspcap_train.continuum = False # use aspcap normalization
compiler_aspcap_train.SNR_low = 200 # SNR Lower
compiler_aspcap_train.SNR_high = 99999 # SNR Upper
compiler_aspcap_train.filename = 'aspcap_norm_train'
# To compile a .h5 datasets, use .compile() method
compiler_aspcap_train.compile()
```
## Compile a testing set using ASPCAP normalization
```
from utils_h5 import H5Compiler
from astropy.io import fits
import numpy as np
# To create a astroNN compiler instance
compiler_aspcap_test = H5Compiler()
compiler_aspcap_test.teff_low = 4000 # Effective Temperature Lower
compiler_aspcap_test.teff_high = 5500 # Effective Temperature Upper
compiler_aspcap_test.vscattercut = 1 # Velocity Scattering Upper
compiler_aspcap_test.starflagcut = True # STARFLAG == 0
compiler_aspcap_test.aspcapflagcut = True # ASPCAPFLAG == 0
compiler_aspcap_test.ironlow = -10000. # [Fe/H] Lower
compiler_aspcap_test.continuum = False # use aspcap normalization
compiler_aspcap_test.SNR_low = 100 # SNR Lower
compiler_aspcap_test.SNR_high = 200 # SNR Upper
compiler_aspcap_test.filename = 'aspcap_norm_test'
# To compile a .h5 datasets, use .compile() method
compiler_aspcap_test.compile()
```
## Train a NN with ASPCAP normalization
```
import numpy as np
from utils_h5 import H5Loader
from astroNN.models import ApogeeBCNNCensored
loader = H5Loader('aspcap_norm_train') # continuum normalized dataset
loader.load_err = True
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y, x_err, y_err = loader.load()
bcnn = ApogeeBCNNCensored()
bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper
bcnn.max_epochs = 60 # default max epochs used in the paper
bcnn.autosave = True
bcnn.folder_name = 'aspcapStar_BCNNCensored'
bcnn.train(x, y, labels_err=y_err)
```
## Test the NN with ASPCAP normalization
```
import numpy as np
import pandas as pd
from astropy.stats import mad_std as mad
from utils_h5 import H5Loader
from astroNN.models import ApogeeBCNNCensored, load_folder
loader = H5Loader('aspcap_norm_test') # continuum normalized dataset
loader.load_err = True
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y, x_err, y_err = loader.load()
bcnn = load_folder('aspcapStar_BCNNCensored')
pred, pred_error = bcnn.test(x, y)
residue = (pred - y)
bias = np.ma.median(np.ma.array(residue, mask=[y == -9999.]), axis=0)
scatter = mad(np.ma.array(residue, mask=[y == -9999.]), axis=0)
d = {'Name': bcnn.targetname, 'Bias': [f'{bias_single:.{3}f}' for bias_single in bias], 'Scatter': [f'{scatter_single:.{3}f}' for scatter_single in scatter]}
df = pd.DataFrame(data=d)
df
```
# Semantic Text Summarization
Here we are using a semantic method to understand the text while also keeping up the standards of extractive summarization. The task is implemented using various pre-defined models such as **BERT, BART, T5, XLNet and GPT2** for summarizing the articles. It is also compared with a classical method, i.e. **summarization based on word frequencies**.
```
## installation
!pip install transformers --upgrade
!pip install bert-extractive-summarizer
!pip install neuralcoref
!python -m spacy download en_core_web_md
from transformers import pipeline
from summarizer import Summarizer, TransformerSummarizer
import pprint
pp = pprint.PrettyPrinter(indent=14)
## documentation for summarizer: https://huggingface.co/transformers/main_classes/pipelines.html#summarizationpipeline
# summarize with BART
summarizer_bart = pipeline(task='summarization', model="bart-large-cnn")
#summarize with BERT
summarizer_bert = Summarizer()
# summarize with T5
summarizer_t5 = pipeline(task='summarization', model="t5-large") # options: ‘t5-small’, ‘t5-base’, ‘t5-large’, ‘t5-3b’, ‘t5-11b’
#for T5 you can choose the size of the model. Everything above t5-base is very slow, even on GPU or TPU.
# summarize with XLNet
summarizer_xlnet = TransformerSummarizer(transformer_type="XLNet",transformer_model_key="xlnet-base-cased")
# summarize with GPT2
summarizer_gpt2 = TransformerSummarizer(transformer_type="GPT2",transformer_model_key="gpt2-medium")
data = '''
For the actual assembly of an module, the Material list of a complete module is displayed in order to make the necessary materials physically available. Also CAD model of the assembly and 2-D construction models can be viewed or printed out in order to be able to later on
to carry out individual steps.
Necessary steps: The material list, 3D model and 2D drawings of a complete assembly must be available.
'''
# Bart for Text - Summarization
print('Bart for Text - Summarization')
summary_bart = summarizer_bart(data, min_length=10, max_length=40) # change min_ and max_length for different output
pp.pprint(summary_bart[0]['summary_text'])
# BERT for Text - Summarization
print('\n BERT for Text - Summarization')
summary_bert = summarizer_bert(data, min_length=60)
full = ''.join(summary_bert)
pp.pprint(full)
# XLNet for Text - Summarization
print('\n XLNet for Text - Summarization')
summary_xlnet = summarizer_xlnet(data, min_length=60)
full = ''.join(summary_xlnet)
pp.pprint(full)
# GPT2 for Text - Summarization
print('\n GPT2 for Text - Summarization')
summary_gpt2 = summarizer_gpt2(data, min_length=60, ratio = 0.1)
full = ''.join(summary_gpt2)
pp.pprint(full)
# T5 for Text - Summarization
print('\n T5 for Text - Summarization')
summary_t5 = summarizer_t5(data, min_length=10) # change min_ and max_length for different output
pp.pprint(summary_t5[0]['summary_text'])
# a review on another data
data = '''
In the production of SMC (Sheet Moulding Compound), the maturing of the semi-finished product (resin+glass fibre) is of decisive importance.
The associated thickening of the material determines the viscosity and thus the quality of the end product.
Possible defects due to short maturing and soft semi-finished products are lack of fibre transport, while too long maturing and hard semi-finished products result in incompletely filled components.
By adjusting the press parameters such as closing force, closing speed, mould temperature etc., the fluctuations in thickening can normally be compensated.
By measuring the flowability/viscosity of the material or by measuring additional parameters during the manufacturing process, the ideal process window for the production of SMC is to be controlled even better.
'''
# Bart for Text - Summarization
print('Bart for Text - Summarization')
summary_bart = summarizer_bart(data, min_length=10, max_length=40) # change min_ and max_length for different output
pp.pprint(summary_bart[0]['summary_text'])
# BERT for Text - Summarization
print('\n BERT for Text - Summarization')
summary_bert = summarizer_bert(data, min_length=60)
full = ''.join(summary_bert)
pp.pprint(full)
# XLNet for Text - Summarization
print('\n XLNet for Text - Summarization')
summary_xlnet = summarizer_xlnet(data, min_length=60)
full = ''.join(summary_xlnet)
pp.pprint(full)
# GPT2 for Text - Summarization
print('\n GPT2 for Text - Summarization')
summary_gpt2 = summarizer_gpt2(data, min_length=60)
full = ''.join(summary_gpt2)
pp.pprint(full)
# T5 for Text - Summarization
print('\n T5 for Text - Summarization')
summary_t5 = summarizer_t5(data, min_length=10) # change min_ and max_length for different output
pp.pprint(summary_t5[0]['summary_text'])
# Text - Summarization using word frequencies
# importing libraries
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize, sent_tokenize
import bs4 as BeautifulSoup
import urllib.request
#fetching the content from the URL
fetched_data = urllib.request.urlopen('https://en.wikipedia.org/wiki/20th_century')
article_read = fetched_data.read()
#parsing the URL content and storing in a variable
article_parsed = BeautifulSoup.BeautifulSoup(article_read,'html.parser')
#returning <p> tags
paragraphs = article_parsed.find_all('p')
article_content = '''
In the production of SMC (Sheet Moulding Compound), the maturing of the semi-finished product (resin+glass fibre) is of decisive importance. The associated thickening of the material determines the viscosity and thus the quality of the end product. Possible defects due to short maturing and soft semi-finished products are lack of fibre transport, while too long maturing and hard semi-finished products result in incompletely filled components. By adjusting the press parameters such as closing force, closing speed, mould temperature etc., the fluctuations in thickening can normally be compensated. By measuring the flowability/viscosity of the material or by measuring additional parameters during the manufacturing process, the ideal process window for the production of SMC is to be controlled even better.
'''
#looping through the paragraphs and adding them to the variable
#for p in paragraphs:
# article_content += p.text
#print(article_content)
def _create_dictionary_table(text_string) -> dict:
#removing stop words
stop_words = set(stopwords.words("english"))
words = word_tokenize(text_string)
#reducing words to their root form
stem = PorterStemmer()
#creating dictionary for the word frequency table
frequency_table = dict()
for wd in words:
wd = stem.stem(wd)
if wd in stop_words:
continue
if wd in frequency_table:
frequency_table[wd] += 1
else:
frequency_table[wd] = 1
return frequency_table
def _calculate_sentence_scores(sentences, frequency_table) -> dict:
#algorithm for scoring a sentence by its words
sentence_weight = dict()
for sentence in sentences:
sentence_wordcount = (len(word_tokenize(sentence)))
sentence_wordcount_without_stop_words = 0
for word_weight in frequency_table:
if word_weight in sentence.lower():
sentence_wordcount_without_stop_words += 1
if sentence[:7] in sentence_weight:
sentence_weight[sentence[:7]] += frequency_table[word_weight]
else:
sentence_weight[sentence[:7]] = frequency_table[word_weight]
sentence_weight[sentence[:7]] = sentence_weight[sentence[:7]] / sentence_wordcount_without_stop_words
return sentence_weight
def _calculate_average_score(sentence_weight) -> int:
#calculating the average score for the sentences
sum_values = 0
for entry in sentence_weight:
sum_values += sentence_weight[entry]
#getting sentence average value from source text
average_score = (sum_values / len(sentence_weight))
return average_score
def _get_article_summary(sentences, sentence_weight, threshold):
sentence_counter = 0
article_summary = ''
for sentence in sentences:
if sentence[:7] in sentence_weight and sentence_weight[sentence[:7]] >= (threshold):
article_summary += " " + sentence
sentence_counter += 1
return article_summary
def _run_article_summary(article):
#creating a dictionary for the word frequency table
frequency_table = _create_dictionary_table(article)
#tokenizing the sentences
sentences = sent_tokenize(article)
#algorithm for scoring a sentence by its words
sentence_scores = _calculate_sentence_scores(sentences, frequency_table)
#getting the threshold
threshold = _calculate_average_score(sentence_scores)
#producing the summary
article_summary = _get_article_summary(sentences, sentence_scores, 1.1 * threshold)
return article_summary
if __name__ == '__main__':
summary_results = _run_article_summary(article_content)
print(summary_results)
# Text - Summarization using GenSim
from gensim.summarization.summarizer import summarize
print(summarize(data))
```
# Quantum Machine Learning and TTN
Let's look at the Tree Tensor Network as a model for quantum machine learning.
## What you will learn
1. TTN model
2. Optimization
## Install Blueqat
```
!pip install blueqat
```
The model we are going to build is called TTN. The quantum circuit is as follows.
<img src="../tutorial-ja/img/253_img.png" width="25%">
It has a shape that takes on a tree structure.
This circuit uses one-qubit arbitrary rotation gates (a combination of $Rz$ and $Ry$ gates) and two-qubit gates ($CX$ gates).
More details are as follows.
<img src="../tutorial-ja/img/253_img_2.png" width="35%">
```
from blueqat import Circuit
import matplotlib.pyplot as plt
import numpy as np
import time
%matplotlib inline
```
Configure hyperparameters and other settings.
```
np.random.seed(45)
# Number of steps of optimization
nsteps = 2000
# Number of parameters of the quantum circuit to be optimized
nparams = 18
# Fineness of numerical differentiation
h = 0.01
# Learning rate
e = 0.01
# Initial parameter
param_init = [np.random.rand()*np.pi*2 for i in range(nparams)]
# list for containing results
arr = []
#1: train, 2: prediction
mode = 1
```
We create a model of the tree structure.
Set up the input to the quantum circuit and the target label for it, and start learning.
This time, the input data can be selected by arguments.
```
def TTN_Z(a, ran, mode=1):
# Input circuit
init = [Circuit(4).x[0,1], Circuit(4).x[2,3], Circuit(4).x[0], Circuit(4).x[1], Circuit(4).x[2], Circuit(4).x[0,2]]
# Target label
target = [1,1,-1,-1,-1,1]
# Circuit construction
u = init[ran]
u.rz(a[0])[0].ry(a[1])[0].rz(a[2])[0]
u.rz(a[3])[1].ry(a[4])[1].rz(a[5])[1]
u.rz(a[6])[2].ry(a[7])[2].rz(a[8])[2]
u.rz(a[9])[3].ry(a[10])[3].rz(a[11])[3]
u.cx[0,1].cx[2,3]
u.rz(a[12])[1].ry(a[13])[1].rz(a[14])[1]
u.rz(a[15])[3].ry(a[16])[3].rz(a[17])[3]
u.cx[1,3]
# Calculate expectation value from state vector
full = u.run()
expt = sum(np.abs(full[:8])**2)-sum(np.abs(full[8:])**2)
if(mode ==1):
# return error between label and prediction
return (expt - target[ran])**2
else:
return expt
```
Stochastic gradient descent (SGD) is used for training.
At the start of each step, one of the 6 inputs (0 to 5) is selected at random, then the gradient is calculated and the parameters are updated.
In each step, the gradient calculation and parameter update are performed on only one data point, but by repeating the process while randomly selecting the input data, the circuit eventually learns to minimize the loss function for all of the data.
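Concretely, each parameter is updated with a finite-difference estimate of the gradient,

$$\theta_j \leftarrow \theta_j - \epsilon\,\frac{L(\theta + h\,e_j) - L(\theta)}{h},$$

which is exactly what the loop below implements with learning rate `e` and step size `h`.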
```
start = time.time()
param = param_init.copy()
for i in range(nsteps):
it = np.random.randint(0,6)
loss = TTN_Z(param, it, mode)
arr.append(loss)
param_new = [0 for i in range(nparams)]
for j in range(nparams):
_param = param.copy()
_param[j] += h
param_new[j] = param[j] - e*(TTN_Z(_param, it, mode) - loss)/h
param = param_new
plt.plot(arr)
plt.show()
print(time.time() - start)
```
It converged well.
Let's check it out.
```
target = [1,1,-1,-1,-1,1]
preds = []
for i in range(6):
pred = TTN_Z(param, i, mode=2)
preds.append(pred)
print("Prediction :", pred, " Target :", target[i])
```
From the above, we were able to learn a quantum circuit using the TTN model.
```
%matplotlib inline
import numpy as np
import pandas as pd
import math
from scipy import stats
import pickle
from causality.analysis.dataframe import CausalDataFrame
from sklearn.linear_model import LinearRegression
import datetime
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['font.sans-serif'] = "Gotham"
matplotlib.rcParams['font.family'] = "sans-serif"
# font properties used when styling the causal plots below (assumes the Gotham fonts are installed)
from matplotlib.font_manager import FontProperties
gotham_book = FontProperties(family='Gotham', weight='book')
gotham_black = FontProperties(family='Gotham', weight='black')
import plotly
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
```
Open the data from past notebooks and correct them to only include years that are common between the data structures (>1999).
```
with open('VariableData/money_data.pickle', 'rb') as f:
income_data, housing_data, rent_data = pickle.load(f)
with open('VariableData/demographic_data.pickle', 'rb') as f:
demographic_data = pickle.load(f)
with open('VariableData/endowment.pickle', 'rb') as f:
endowment = pickle.load(f)
with open('VariableData/expander.pickle', 'rb') as f:
expander = pickle.load(f)
endowment = endowment[endowment['FY'] > 1997].reset_index()
endowment.drop('index', axis=1, inplace=True)
demographic_data = demographic_data[demographic_data['year'] > 1999].reset_index()
demographic_data.drop('index', axis=1, inplace=True)
income_data = income_data[income_data['year'] > 1999].reset_index()
income_data.drop('index', axis=1, inplace=True)
housing_data = housing_data[housing_data['year'] > 1999].reset_index()
housing_data.drop('index', axis=1, inplace=True)
rent_data = rent_data[rent_data['year'] > 1999].reset_index()
rent_data.drop('index', axis=1, inplace=True)
```
Read in the data on Harvard owned land and Cambridge's property records. Restrict the Harvard data to Cambridge, MA.
```
harvard_land = pd.read_excel("Spreadsheets/2018_building_reference_list.xlsx", header=3)
harvard_land = harvard_land[harvard_land['City'] == 'Cambridge']
cambridge_property = pd.read_excel("Spreadsheets/cambridge_properties.xlsx")
```
Restrict the Cambridge data to Harvard properties, and only use relevant columns.
```
cambridge_property = cambridge_property[cambridge_property['Owner_Name'].isin(['PRESIDENT & FELLOWS OF HARVARD COLLEGE', 'PRESIDENT & FELLOW OF HARVARD COLLEGE'])]
cambridge_property = cambridge_property[['Address', 'PropertyClass', 'LandArea', 'BuildingValue', 'LandValue', 'AssessedValue', 'SalePrice', 'SaleDate', 'Owner_Name']]
```
Fix the time data.
```
cambridge_property['SaleDate'] = pd.to_datetime(cambridge_property['SaleDate'], infer_datetime_format=True)
clean_property = cambridge_property.drop_duplicates(subset=['Address'])
clean_property.head()
type(clean_property['SaleDate'])
```
Only look at properties purchased after 2000.
```
recent_property = clean_property[clean_property['SaleDate'] > datetime.date(2000, 1, 1)]
property_numbers = recent_property[['LandArea', 'AssessedValue', 'SalePrice']]
num_recent = recent_property['Address'].count()
sum_properties = property_numbers.sum()
sum_properties
full_property_numbers = clean_property[['LandArea', 'AssessedValue', 'SalePrice']]
sum_full = full_property_numbers.sum()
delta_property = sum_properties / sum_full
delta_property
```
What can be gathered from above?
Since the year 2000, Harvard has increased its presence in Cambridge by about 3%, corresponding to about 2% of its overall assessed value, an increase of 281,219 square feet and \$115,226,500. Although the assessed value increase is so high, Harvard only paid \$57,548,900 for the property at their times of purchase.
To make some adjustments for inflation:
Note that the inflation rate since 2000 is ~37.8% (https://data.bls.gov/timeseries/CUUR0000SA0L1E?output_view=pct_12mths).
```
inflation_data = pd.read_excel("Spreadsheets/inflation.xlsx", header=11)
inflation_data = inflation_data[['Year', 'Jan']]
inflation_data['Year'] = pd.to_datetime(inflation_data['Year'], format='%Y')
inflation_data['CumulativeInflation'] = inflation_data['Jan'].cumsum()
inflation_data.rename(columns={'Year' : 'SaleDate'}, inplace=True)
recent_property['SaleDate'] = recent_property['SaleDate'].dt.year
inflation_data['SaleDate'] = inflation_data['SaleDate'].dt.year
recent_property = pd.merge(recent_property, inflation_data, how="left", on=['SaleDate'])
recent_property = recent_property.drop('Jan', 1)
recent_property['TodaySale'] = (1 + (recent_property['CumulativeInflation'] / 100)) * recent_property['SalePrice']
today_sale_sum = recent_property['TodaySale'].sum()
today_sale_sum
sum_properties['AssessedValue'] - today_sale_sum
```
Hence, adjusted for inflation, the sale price of the property Harvard has acquired since 2000 is \$65,929,240.
The difference between this value and the assessed value of the property (in 2018) is: \$49,297,260, showing that Harvard's property has appreciated in value even more than (twice more than) inflation would account for, illustrating a clear advantageous dynamic for Harvard.
```
sorted_df = recent_property.sort_values(by=['SaleDate'])
sorted_df = sorted_df.reset_index().drop('index', 1)
sorted_df['CumLand'] = sorted_df['LandArea'].cumsum()
sorted_df['CumValue'] = sorted_df['AssessedValue'].cumsum()
sorted_df
```
Graph the results.
```
def fitter(x, y, regr_x):
"""
Use linear regression to make a best fit line for a set of data.
Args:
x (numpy array): The independent variable.
y (numpy array): The dependent variable.
regr_x (numpy array): The array used to extrapolate the regression.
"""
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
return (slope * regr_x + intercept)
years = sorted_df['SaleDate'].as_matrix()
cum_land = sorted_df['CumLand'].as_matrix()
cum_value = sorted_df['CumValue'].as_matrix()
regr = np.arange(2000, 2012)
line0 = fitter(years, cum_land, regr)
trace0 = go.Scatter(
x = years,
y = cum_land,
mode = 'markers',
name='Harvard Land\n In Cambridge',
marker=go.Marker(color='#601014')
)
fit0 = go.Scatter(
x = regr,
y = line0,
mode='lines',
marker=go.Marker(color='#D2232A'),
name='Fit'
)
data = [trace0, fit0]
layout = go.Layout(
title = "The Change In Harvard's Land in Cambridge Since 2000",
font = dict(family='Gotham', size=18),
yaxis=dict(
title='Land Accumulated Since 2000 (Sq. Feet)'
),
xaxis=dict(
title='Year')
)
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename="land_changes")
graph2_df = pd.DataFrame(list(zip(regr, line0)))
graph2_df.to_csv('graph2.csv')
def grapher(x, y, city, title, ytitle, xtitle, filename):
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
fit = slope * x + intercept
trace0 = go.Scatter(
x = x,
y = y,
mode = 'markers',
name=city,
marker=go.Marker(color='#D2232A')
)
fit0 = go.Scatter(
x = x,
y = fit,
mode='lines',
marker=go.Marker(color='#AC1D23'),
name='Linear Fit'
)
data = [trace0, fit0]
layout = go.Layout(
title = title,
font = dict(family='Gotham', size=12),
yaxis=dict(
title=ytitle
),
xaxis=dict(
title=xtitle)
)
fig = go.Figure(data=data, layout=layout)
return iplot(fig, filename=filename)
len(line0)
```
Restrict the demographic data to certain years (up to 2012) in order to fit the data well.
```
demographic_data = demographic_data[demographic_data['year'] < 2011]
rent_data = rent_data[rent_data['year'] < 2011]
housing_data = housing_data[housing_data['year'] < 2011]
x = cum_land
y = pd.to_numeric(demographic_data['c_black']).as_matrix()
z1 = pd.to_numeric(rent_data['cambridge']).as_matrix()
z2 = pd.to_numeric(housing_data['cambridge']).as_matrix()
endow_black = grapher(x, y, "Cambridge", "The Correlation Between Harvard Land Change and Black Population", "Black Population of Cambridge", "Land Change (Sq. Feet)", "land_black")
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1, 'z2': z2})
causal_land_black = X.zplot(x='x', y='y', z=['z1', 'z2'], z_types={'z1': 'c', 'z2': 'c'}, kind='line', color="#D2232A")
fig = causal_land_black.get_figure()
fig.set_size_inches(9, 5.5)
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
ax.set_title("The Controlled Correlation Between Land Use (Square Feet) and Black Population", fontproperties=gotham_black, size=10, color="#595959")
ax.set_xlabel("Land Use", fontproperties=gotham_book, fontsize=10, color="#595959")
for tick in ax.get_xticklabels():
tick.set_fontproperties(gotham_book)
tick.set_fontsize(10)
tick.set_color("#595959")
fig.savefig('images/black_land.svg', format='svg', dpi=2400, bbox_inches='tight')
z2
graph9_df = pd.DataFrame(X)
graph9_df.to_csv('graph9.csv')
y = pd.to_numeric(rent_data['cambridge']).as_matrix()
z1 = pd.to_numeric(housing_data['cambridge']).as_matrix()
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1})
causal_land_rent = X.zplot(x='x', y='y', z=['z1'], z_types={'z1': 'c'}, kind='line', color="#D2232A")
fig = causal_land_rent.get_figure()
fig.set_size_inches(9, 5.5)
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
ax.set_title("The Controlled Correlation Between Land Use (Square Feet) and Rent", fontproperties=gotham_black, size=10, color="#595959")
ax.set_xlabel("Land Use", fontproperties=gotham_book, fontsize=10, color="#595959")
for tick in ax.get_xticklabels():
tick.set_fontproperties(gotham_book)
tick.set_fontsize(10)
tick.set_color("#595959")
fig.savefig('images/rent_land.svg', format='svg', dpi=1200, bbox_inches='tight')
```
<h1> Logistic Regression using Spark ML </h1>
Set up bucket
```
import os

BUCKET='cloud-training-demos-ml' # CHANGE ME
os.environ['BUCKET'] = BUCKET
# Create spark session
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext('local', 'logistic')
spark = SparkSession \
.builder \
.appName("Logistic regression w/ Spark ML") \
.getOrCreate()
print spark
print sc
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint
```
<h2> Read dataset </h2>
```
traindays = spark.read \
.option("header", "true") \
.csv('gs://{}/flights/trainday.csv'.format(BUCKET))
traindays.createOrReplaceTempView('traindays')
spark.sql("SELECT * from traindays LIMIT 5").show()
from pyspark.sql.types import StringType, FloatType, StructType, StructField
header = 'FL_DATE,UNIQUE_CARRIER,AIRLINE_ID,CARRIER,FL_NUM,ORIGIN_AIRPORT_ID,ORIGIN_AIRPORT_SEQ_ID,ORIGIN_CITY_MARKET_ID,ORIGIN,DEST_AIRPORT_ID,DEST_AIRPORT_SEQ_ID,DEST_CITY_MARKET_ID,DEST,CRS_DEP_TIME,DEP_TIME,DEP_DELAY,TAXI_OUT,WHEELS_OFF,WHEELS_ON,TAXI_IN,CRS_ARR_TIME,ARR_TIME,ARR_DELAY,CANCELLED,CANCELLATION_CODE,DIVERTED,DISTANCE,DEP_AIRPORT_LAT,DEP_AIRPORT_LON,DEP_AIRPORT_TZOFFSET,ARR_AIRPORT_LAT,ARR_AIRPORT_LON,ARR_AIRPORT_TZOFFSET,EVENT,NOTIFY_TIME'
def get_structfield(colname):
if colname in ['ARR_DELAY', 'DEP_DELAY', 'DISTANCE', 'TAXI_OUT']:
return StructField(colname, FloatType(), True)
else:
return StructField(colname, StringType(), True)
schema = StructType([get_structfield(colname) for colname in header.split(',')])
inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*'.format(BUCKET) # 1/30th; you may have to change this to find a shard that has training data
#inputs = 'gs://{}/flights/tzcorr/all_flights-*'.format(BUCKET) # FULL
flights = spark.read\
.schema(schema)\
.csv(inputs)
# this view can now be queried ...
flights.createOrReplaceTempView('flights')
```
<h2> Clean up </h2>
```
trainquery = """
SELECT
f.*
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True'
"""
traindata = spark.sql(trainquery)
print traindata.head(2) # if this is empty, try changing the shard you are using.
traindata.describe().show()
```
Note that the counts for the various columns are all different; we have to remove NULLs in the delay variables (these correspond to canceled or diverted flights).
<h2> Logistic regression </h2>
```
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.dep_delay IS NOT NULL AND
f.arr_delay IS NOT NULL
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.CANCELLED == '0.00' AND
f.DIVERTED == '0.00'
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
def to_example(fields):
return LabeledPoint(\
float(fields['ARR_DELAY'] < 15), #ontime? \
[ \
fields['DEP_DELAY'], \
fields['TAXI_OUT'], \
fields['DISTANCE'], \
])
examples = traindata.rdd.map(to_example)
lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)
print lrmodel.weights,lrmodel.intercept
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
lrmodel.clearThreshold()
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
lrmodel.setThreshold(0.7) # cancel if prob-of-ontime < 0.7
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
```
<h2> Predict with the model </h2>
First save the model
```
!gsutil -m rm -r gs://$BUCKET/flights/sparkmloutput/model
MODEL_FILE='gs://' + BUCKET + '/flights/sparkmloutput/model'
lrmodel.save(sc, MODEL_FILE)
print '{} saved'.format(MODEL_FILE)
lrmodel = 0
print lrmodel
```
Now retrieve the model
```
from pyspark.mllib.classification import LogisticRegressionModel
lrmodel = LogisticRegressionModel.load(sc, MODEL_FILE)
lrmodel.setThreshold(0.7)
print lrmodel.predict([36.0,12.0,594.0])
print lrmodel.predict([8.0,4.0,594.0])
```
<h2> Examine the model behavior </h2>
For dep_delay=20 and taxiout=10, how does the distance affect prediction?
```
lrmodel.clearThreshold() # to make the model produce probabilities
print lrmodel.predict([20, 10, 500])
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
dist = np.arange(10, 2000, 10)
prob = [lrmodel.predict([20, 10, d]) for d in dist]
sns.set_style("whitegrid")
ax = plt.plot(dist, prob)
plt.xlabel('distance (miles)')
plt.ylabel('probability of ontime arrival')
delay = np.arange(-20, 60, 1)
prob = [lrmodel.predict([d, 10, 500]) for d in delay]
ax = plt.plot(delay, prob)
plt.xlabel('departure delay (minutes)')
plt.ylabel('probability of ontime arrival')
```
<h2> Evaluate model </h2>
Evaluate on the test data
```
inputs = 'gs://{}/flights/tzcorr/all_flights-00001-*'.format(BUCKET) # you may have to change this to find a shard that has test data
flights = spark.read\
.schema(schema)\
.csv(inputs)
flights.createOrReplaceTempView('flights')
testquery = trainquery.replace("t.is_train_day == 'True'","t.is_train_day == 'False'")
print testquery
testdata = spark.sql(testquery)
examples = testdata.rdd.map(to_example)
testdata.describe().show() # if this is empty, change the shard you are using
def eval(labelpred):
cancel = labelpred.filter(lambda (label, pred): pred < 0.7)
nocancel = labelpred.filter(lambda (label, pred): pred >= 0.7)
corr_cancel = cancel.filter(lambda (label, pred): label == int(pred >= 0.7)).count()
corr_nocancel = nocancel.filter(lambda (label, pred): label == int(pred >= 0.7)).count()
cancel_denom = cancel.count()
nocancel_denom = nocancel.count()
if cancel_denom == 0:
cancel_denom = 1
if nocancel_denom == 0:
nocancel_denom = 1
return {'total_cancel': cancel.count(), \
'correct_cancel': float(corr_cancel)/cancel_denom, \
'total_noncancel': nocancel.count(), \
'correct_noncancel': float(corr_nocancel)/nocancel_denom \
}
# Evaluate model
lrmodel.clearThreshold() # so it returns probabilities
labelpred = examples.map(lambda p: (p.label, lrmodel.predict(p.features)))
print 'All flights:'
print eval(labelpred)
# keep only those examples near the decision threshold
print 'Flights near decision threshold:'
labelpred = labelpred.filter(lambda (label, pred): pred > 0.65 and pred < 0.75)
print eval(labelpred)
```
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Early stopping of model simulations
===================
For certain distance functions and certain models it is possible to calculate the
distance on-the-fly while the model is running. This is e.g. possible if the distance is calculated as a cumulative sum and the model is a stochastic process. For example, Markov Jump Processes belong to this class. However, we want to keep things simple here and only demonstrate how to use the pyABC interface in such cases. So don't expect a sophisticated (or even useful) model implementation here.
In this example we'll use in particular the following classes for integrated simulation and accepting/rejecting a parameter: `IntegratedModel` and `ModelResult`.
Let's start with the necessary imports:
```
# install if not done yet
!pip install pyabc --quiet
%matplotlib inline
import pyabc
from pyabc import (ABCSMC,
RV, Distribution,
IntegratedModel, ModelResult,
MedianEpsilon,
LocalTransition,
NoDistance)
from pyabc.sampler import SingleCoreSampler
import matplotlib.pyplot as plt
import os
import tempfile
import pandas as pd
import numpy as np
pyabc.settings.set_figure_params('pyabc') # for beautified plots
db_path = ("sqlite:///" +
os.path.join(tempfile.gettempdir(), "test.db"))
```
We define here a (very) simple stochastic process, purely for demonstrative reasons.
First, we fix the number of steps *n_steps* to 30.
```
n_steps = 30
```
We then define our process as follows:
$$
x(t+1) = x(t) + s \xi,
$$
in which $\xi \sim U(0, 1)$ denotes a random variable uniformly distributed in $[0, 1]$,
and $s$ is the step size, $s = $ step_size.
The function `simulate` implements this stochastic process:
```
def simulate(step_size):
trajectory = np.zeros(n_steps)
for t in range(1, n_steps):
xi = np.random.uniform()
trajectory[t] = trajectory[t-1] + xi * step_size
return trajectory
```
We take as distance function between two such generated trajectories
the sum of the absolute values of the pointwise differences.
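Written as a formula (a restatement of the code below), for two trajectories $x$ and $y$ of length `n_steps`:
$$d(x, y) = \sum_{t=0}^{n_\mathrm{steps}-1} \lvert x_t - y_t \rvert$$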
```
def distance(trajectory_1, trajectory_2):
return np.absolute(trajectory_1 - trajectory_2).sum()
```
Let's run the simulation and plot the trajectories to get a better
idea of the so generated data.
We set the ground truth step size *gt_step_size* to
```
gt_step_size = 5
```
This will be used to generate the data which will be subject to inference later on.
```
gt_trajectory = simulate(gt_step_size)
trajectory_2 = simulate(2)
dist_1_2 = distance(gt_trajectory, trajectory_2)
plt.plot(gt_trajectory,
         label="Step size = {} (Ground Truth)".format(gt_step_size))
plt.plot(trajectory_2,
label="Step size = 2")
plt.legend();
plt.title("Distance={:.2f}".format(dist_1_2));
```
As you might have noted already we could calculate the distance on the fly.
After each step in the stochastic process, we could increment the cumulative sum.
This will supposedly save time in the ABC-SMC run later on.
Let's start with the code first and explain it afterwards.
```
class MyStochasticProcess(IntegratedModel):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.n_early_stopped = 0
def integrated_simulate(self, pars, eps):
cumsum = 0
trajectory = np.zeros(n_steps)
for t in range(1, n_steps):
xi = np.random.uniform()
next_val = trajectory[t-1] + xi * pars["step_size"]
cumsum += abs(next_val - gt_trajectory[t])
trajectory[t] = next_val
if cumsum > eps:
self.n_early_stopped += 1
return ModelResult(accepted=False)
return ModelResult(accepted=True,
distance=cumsum,
sum_stat={"trajectory": trajectory})
```
Our `MyStochasticProcess` class is a subclass of `pyabc.model.IntegratedModel`.
The `__init__` method is not really necessary. Here, we just want to keep
track of how often early stopping has actually happened.
More interesting is the `integrated_simulate` method. This is where the real thing
happens.
As already said, we calculate the cumulative sum on the fly.
In each simulation step, we update the cumulative sum.
Note that *gt_trajectory* is actually a global variable here.
If *cumsum > eps* at some step of the simulation, we return immediately,
indicating that the parameter was not accepted
by returning `ModelResult(accepted=False)`.
If the *cumsum* never exceeds *eps*, the parameter is accepted. In this case
we return an accepted result together with the calculated distance and the trajectory.
Note that, while it is mandatory to return the distance, returning the trajectory is optional. If it is returned, it is stored in the database.
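As a quick sanity check (this call is an illustration added here, not part of the original tutorial; it assumes `ModelResult` exposes the `accepted` flag it was constructed with), the integrated model can be invoked directly with a parameter dictionary and an acceptance threshold:
```
# illustrative values only: a step size of 4.0 and a generous threshold of 300
m = MyStochasticProcess()
res = m.integrated_simulate({"step_size": 4.0}, eps=300.0)
print(res.accepted)
```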
We define a uniform prior over the interval $[0, 10]$ over the step size
```
prior = Distribution(step_size=RV("uniform", 0 , 10))
```
and create an instance of our integrated model `MyStochasticProcess`
```
model = MyStochasticProcess()
```
We then configure the ABC-SMC run.
As the distance function is calculated within `MyStochasticProcess`, we just pass
`None` to the `distance_function` parameter.
As sampler, we use the `SingleCoreSampler` here. We do so to correctly keep track of `MyStochasticProcess.n_early_stopped`. Otherwise, the counter gets incremented in subprocesses and we don't see anything here.
Of course, you could also use the `MyStochasticProcess` model in a multi-core or
distributed setting.
Importantly, we pre-specify the initial acceptance threshold to a given value, here to 300. Otherwise, pyABC will try to automatically determine it by drawing samples from the prior and evaluating the distance function.
However, we do not have a distance function here, so this approach would break down.
```
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=NoDistance(),
sampler=SingleCoreSampler(),
population_size=30,
transitions=LocalTransition(k_fraction=.2),
eps=MedianEpsilon(300, median_multiplier=0.7))
```
We then indicate that we want to start a new ABC-SMC run:
```
abc.new(db_path)
```
We do not need to pass any data here. However, we could still pass additionally
a dictionary `{"trajectory": gt_trajectory}` only for storage purposes
to the `new` method. The data will however be ignored during the ABC-SMC run.
Then, let's start the sampling
```
h = abc.run(minimum_epsilon=40, max_nr_populations=3)
```
and check how often the early stopping was used:
```
model.n_early_stopped
```
Quite a lot actually.
Lastly we estimate KDEs of the different populations to inspect our results
and plot everything (the vertical dashed line is the ground truth step size).
```
from pyabc.visualization import plot_kde_1d
fig, ax = plt.subplots()
for t in range(h.max_t+1):
particles = h.get_distribution(m=0, t=t)
plot_kde_1d(*particles, "step_size",
label="t={}".format(t), ax=ax,
xmin=0, xmax=10, numx=300)
ax.axvline(gt_step_size, color="k", linestyle="dashed");
```
That's it. You should be able to see how the distribution
contracts around the true parameter.
# Explore the generated data
Here we explore the data that is generated with the [generate-data.ipynb](generate-data.ipynb) notebook.
You can either run the simulations or download the data set. See [README.md](README.md) for the download link and instructions.
### Joining the separate data files of one simulation together, example:
```python
# for example if the generated files have the following names:
# 'tmp/1d_alpha_vs_B_x_000.hdf',
# 'tmp/1d_alpha_vs_B_x_001.hdf',
# 'tmp/1d_alpha_vs_B_x_002.hdf', ...
# The following line with join the files and save it as 'data/new_name.hdf'.
df = common.combine_dfs('tmp/1d_alpha_vs_B_x_*.hdf', 'data/new_name.hdf')
```
```
import holoviews as hv
import numpy as np
import pandas as pd
import common
hv.notebook_extension()
def add_energy_gs(df):
hbar = df.hbar.unique()[0]
eV = df.eV.unique()[0]
flux_quantum_over_2pi = hbar / (2 * eV) / (eV * 1e6)
df['E'] = df['currents'].apply(np.cumsum)
df['E'] *= flux_quantum_over_2pi
df['phase_gs_arg'] = df['E'].apply(np.argmin)
df['phase_gs'] = [row['phases'][row['phase_gs_arg']] for i, row in df.iterrows()]
# Move the phase_gs from -π to +π if they are within the tolerance
tol = np.diff(df['phases'].iloc[0]).max()
df['phase_gs'] = [-row['phase_gs'] if row['phase_gs'] < -(np.pi - tol) else row['phase_gs']
for i, row in df.iterrows()]
return df
```
# Data like Figure 4 but with all combinations
```
%%opts Curve (color='k') Scatter (s=200)
def plot(orbital, g, alpha, mu, disorder, salt, B_x):
gr = gb.get_group((orbital, g, alpha, mu, disorder, salt))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[0:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['I'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
# load the data first, so that `df` is defined before the kdims below reference it
df = pd.read_hdf('data/I_c(B_x)_mu10,20meV_disorder0,75meV_T0.1K_all_combinations_of_effects.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
kdims = [hv.Dimension('orbital', values=df.orbital.unique()),
         hv.Dimension('g', values=df.g.unique()),
         hv.Dimension('alpha', values=df.alpha.unique()),
         hv.Dimension('mu', values=df.mu.unique()),
         hv.Dimension('disorder', values=df.disorder.unique()),
         hv.Dimension('salt', values=df.salt.unique()),
         hv.Dimension('B_x', values=df.B_x.unique())]
hv.DynamicMap(plot, kdims=kdims)
```
First mode, no disorder, T=50mK, with orbital and SOI
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K_orbital.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
First mode, no disorder, T=50mK, without orbital and SOI, Zeeman only
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
%%opts Curve (color='k') Scatter (s=200)
def plot(orbital, g, alpha, mu, disorder, salt, B_x):
gr = gb.get_group((orbital, g, alpha, mu, disorder, salt))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[0:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['I'])
IB = hv.Curve((gr.mu, gr.current_c), kdims=['potential'], vdims=['I_c'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('orbital', values=df.orbital.unique()),
hv.Dimension('g', values=df.g.unique()),
hv.Dimension('alpha', values=df.alpha.unique()),
hv.Dimension('mu', values=df.mu.unique()),
hv.Dimension('disorder', values=df.disorder.unique()),
hv.Dimension('salt', values=df.salt.unique()),
hv.Dimension('B_x', values=df.B_x.unique())]
```
First mode, no disorder, T=50mK, with orbital but no spin-orbital
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K_onlyorbital.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
df = pd.read_hdf('data/I_c(B_x)_mu5,10,20meV_disorder0,75meV_T0.05K_orbital_SOI_Zeeman.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
# Different $T$, with or without leads, different lengths of the system
```
df2 = pd.read_hdf('data/I_c(B_x)_no_disorder_combinations_of_effects_and_geometries.hdf')
df2 = add_energy_gs(df2)
params = ['T', 'L', 'orbital', 'g', 'alpha', 'mu', 'with_leads']
gb = df2.groupby(params)
%%opts Curve (color='k') Scatter (s=200)
def plot(T, L, orbital, g, alpha, mu, with_leads, B_x):
gr = gb.get_group((T, L, orbital, g, alpha, mu, with_leads))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['E'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('T', values=df2['T'].unique()),
hv.Dimension('L', values=df2.L.unique()),
hv.Dimension('orbital', values=df2.orbital.unique()),
hv.Dimension('g', values=df2.g.unique()),
hv.Dimension('alpha', values=df2.alpha.unique()),
hv.Dimension('mu', values=df2.mu.unique()),
hv.Dimension('with_leads', values=df2.with_leads.unique()),
hv.Dimension('B_x', values=df2.B_x.unique())]
dm = hv.DynamicMap(plot, kdims=kdims)
dm
ds = hv.Dataset(df2)
ds.to.curve(['B_x'], ['current_c'], groupby=params, dynamic=True).overlay('L').select(B=(0, 0.5))
params = ['T', 'B_x', 'orbital', 'g', 'alpha', 'mu', 'with_leads']
curve = ds.to.curve(['L'], ['current_c'], groupby=params, dynamic=True)
curve.redim(current_c=dict(range=(0, None)))
```
# Rotation of field
```
%%opts Path [aspect='square']
df = pd.read_hdf('data/I_c(B_x)_mu20meV_rotation_of_field_in_xy_plane.hdf')
df = add_energy_gs(df)
df2 = common.drop_constant_columns(df)
ds = hv.Dataset(df2)
current = ds.to.curve(kdims='B', vdims='current_c', groupby=['theta', 'disorder']).redim(current_c=dict(range=(0, None)))
phase = ds.to.curve(kdims='B', vdims='phase_gs', groupby=['theta', 'disorder'])
current + phase
```
# Sequence to Sequence attention model for machine translation
This notebook trains a sequence to sequence (seq2seq) model, with two different attention mechanisms implemented, for Spanish to English translation.
The code is built on the TensorFlow Core tutorial: https://www.tensorflow.org/tutorials/text/nmt_with_attention
```
import tensorflow as tf
print(tf.__version__)
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
# Load data set
* Clean the sentences by removing special characters.
* Add a start and end token to each sentence.
* Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
* Pad each sentence to a maximum length.
```
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",","¿")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
# remove extra space
w = w.strip()
# adding a start and an end token to the sentence
# so that the model know when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this @ book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence))
print(preprocess_sentence(sp_sentence).encode("UTF-8"))
# Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
print(len(en), len(sp))
# Tokenize the sentence into list of words(integers) and pad the sequence to the same length
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
print(max_length_targ, max_length_inp)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
print(input_tensor_train[0])
print(target_tensor_train[0])
```
# Create a tf.data datasest
The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern:
* Create a source dataset from your input data.
* Apply dataset transformations to preprocess the data.
* Iterate over the dataset and process the elements.
Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory.
```
# Configuration
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
steps_per_epoch_val = len(input_tensor_val)//BATCH_SIZE
embedding_dim = 256 # for word embedding
units = 1024 # dimensionality of the output space of RNN
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
validation_dataset = tf.data.Dataset.from_tensor_slices((input_tensor_val, target_tensor_val)).shuffle(BUFFER_SIZE)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
# Basic seq2seq model: encoder and decoder
`Model` groups layers into an object with training and inference features. There are two ways to define a tf model:

Basic sequence to sequence model without attention:

```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True, # Whether to return the last output in the output sequence, or the full sequence.
return_state=True, # Whether to return the last state in addition to the output.
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
def call(self, x, hidden):
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# passing the concatenated vector to the GRU
output, state = self.gru(x, initial_state = hidden)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state
tf.reshape([[1,2,3],[4,5,6]], (-1, 2))
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
# Dot-product attention


```
class DotProductAttention(tf.keras.layers.Layer):
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# inner product, score shape == (batch_size, max_length, 1)
score = query_with_time_axis * values
score = tf.reduce_sum(score, axis=2)
score = tf.expand_dims(score, 2)
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = DotProductAttention()
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
```
# Additive attention

```
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(query_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
```
# Decoder layer with attention

```
class DecoderWithAttention(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz, attention_layer = None):
super(DecoderWithAttention, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = attention_layer
def call(self, x, hidden, enc_output):
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
attention_weights = None
if self.attention:
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x, initial_state = hidden)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
```
# Define loss function
Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label.

```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
print(loss_object([1,2],[[0,0.6,0.3,0.1],[0,0.6,0.3,0.1]]))
print(loss_function([1,2],[[0,0.6,0.3,0.1],[0,0.6,0.3,0.1]]))
```
# Training
@tf.function
In TensorFlow 2, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability. It is recommended to debug in eager mode, then decorate with @tf.function for better performance.
In TensorFlow 2.0, users should refactor their code into smaller functions which are called as needed. In general, it's not necessary to decorate each of these smaller functions with tf.function; only use tf.function to decorate high-level computations - for example, one step of training, or the forward pass of your model.
TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape to compute the gradients of a "recorded" computation using reverse mode differentiation.
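As a minimal standalone illustration of the tape (an extra cell added here, separate from the training step below):
```
# record y = x^2 on the tape and differentiate it with respect to x
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x                # y = x^2 is recorded on the tape
dy_dx = tape.gradient(y, x)  # dy/dx = 2x -> 6.0
print(dy_dx.numpy())
```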
```
optimizer = tf.keras.optimizers.Adam()
def get_train_step_func():
@tf.function
def train_step(inp, targ, enc_hidden, encoder, decoder):
loss = 0
with tf.GradientTape() as tape: # for automatic differentiation
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
return train_step
def calculate_validation_loss(inp, targ, enc_hidden, encoder, decoder):
loss = 0
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
dec_input = tf.expand_dims(targ[:, t], 1)
loss = loss / int(targ.shape[1])
return loss
def training_seq2seq(epochs, attention):
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = DecoderWithAttention(vocab_tar_size, embedding_dim, units, BATCH_SIZE, attention)
train_step_func = get_train_step_func()
training_loss = []
validation_loss = []
for epoch in range(epochs):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step_func(inp, targ, enc_hidden, encoder, decoder)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss))
enc_hidden = encoder.initialize_hidden_state()
total_val_loss = 0
        for (batch, (inp, targ)) in enumerate(validation_dataset.take(steps_per_epoch_val)):
            val_loss = calculate_validation_loss(inp, targ, enc_hidden, encoder, decoder)
total_val_loss += val_loss
training_loss.append(total_loss / steps_per_epoch)
validation_loss.append(total_val_loss / steps_per_epoch_val)
print('Epoch {} Loss {:.4f} Validation Loss {:.4f}'.format(epoch + 1,
training_loss[-1], validation_loss[-1]))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
return encoder, decoder, training_loss, validation_loss
```
## Training seq2seq without attention
```
epochs = 10
attention = None
print("Running seq2seq model without attention")
encoder, decoder, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = training_loss
vloss = validation_loss
```
## Training seq2seq with dot product attention
```
attention = DotProductAttention()
print("Running seq2seq model with dot product attention")
encoder_dp, decoder_dp, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = np.vstack((tloss, training_loss))
vloss = np.vstack((vloss, validation_loss))
```
## Training seq2seq with Bahdanau attention
```
epochs = 10
attention = BahdanauAttention(units)
print("Running seq2seq model with Bahdanau attention")
encoder_bah, decoder_bah, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = np.vstack((tloss, training_loss))
vloss = np.vstack((vloss, validation_loss))
import matplotlib.pyplot as plt
ax = plt.subplot(111)
t = np.arange(1, epochs+1)
for i in range(0, vloss.shape[0]):
line, = plt.plot(t, vloss[i,:], lw=2)
ax.legend(('No attention', 'Dot product', 'Bahdanau'))
ax.set_title("Validation loss")
```
# Translation
```
def translate(sentence, encoder, decoder):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
# until the predicted word is <end>.
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence
# the predicted ID is fed back into the model, no teacher forcing.
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence
result, sentence = translate(u'esta es mi vida.', encoder_bah, decoder_bah)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
result, sentence = translate(u'esta es mi vida.', encoder_dp, decoder_dp)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
result, sentence = translate(u'¿todavia estan en casa?', encoder_bah, decoder_bah)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
```
# Next Steps
* Training on larger dataset
* Model tuning
* Try out other attention scores such as multiplicative
* Train on other seq2seq tasks
```
# default_exp checker
```
# Dependency Checker
> A pragmatic way to talk with pypi and find out what dependencies are out of date
```
#hide
from nbverbose.showdoc import *
```
## Dependency Traversing
Sometimes, we may want to check the currently installed versions of a project's basic dependencies, and further check if those dependencies are out of date. `dependency_checker` is designed around this concept, utilizing the `pipdeptree` library.
```
#export
import json, ast, pipdeptree, sys, subprocess
#export
def get_installed_dependencies(
package_name:str, # The name of a python package
depth_limit:int=1, # How deep to follow nested dependencies
include_self:bool=False, # Whether to include the original library in the results
) -> dict: # A dictionary of {package:version}
"Recursively grabs dependencies of python package"
pkgs = pipdeptree.get_installed_distributions(local_only=False, user_only=False)
tree = pipdeptree.PackageDAG.from_pkgs(pkgs)
tree = tree.filter([package_name], None)
curr_depth=0
def _get_deps(j, dep_dict={}, curr_depth=0):
if curr_depth > depth_limit: return dep_dict
if isinstance(j, list):
for a in j:
_get_deps(a, dep_dict, curr_depth)
elif isinstance(j, dict):
if 'package_name' in j.keys():
if j['package_name'] not in dep_dict.keys():
dep_dict[j['package_name']] = j['installed_version']
if 'dependencies' in j.keys():
curr_depth += 1
return _get_deps(j['dependencies'], dep_dict, curr_depth)
return dep_dict
deps = _get_deps(ast.literal_eval(pipdeptree.render_json_tree(tree, 4)), {})
if not include_self: deps.pop(package_name, None)
return deps
```
This function operates by traversing a DAG and grabbing dependencies of projects found from it. Generally a depth of 1 is recommended; below is a quick guide to what will be returned at each depth.
**0**: A depth of zero will return an empty dictionary unless `include_self` is `True`. If so, it will include only the library itself:
```
deps = get_installed_dependencies('pipdeptree', depth_limit=0)
assert deps == {}
deps = get_installed_dependencies('pipdeptree', depth_limit=0, include_self=True)
assert deps == {'pipdeptree':'2.1.0'}
```
**1**: A depth of one will return the project and its main dependencies (if `include_self` is `True`), such as those stated in the `requirements.txt` as well as packages such as `pip`
```
deps = get_installed_dependencies('pipdeptree', depth_limit=1, include_self=True)
assert len(deps.keys()) == 2
assert all(package in deps.keys() for package in ('pipdeptree', 'pip'))
deps = get_installed_dependencies('pipdeptree', depth_limit=1, include_self=False)
assert len(deps.keys()) == 1
assert 'pip' in deps.keys()
```
**2+**: A depth of two or greater will return the dependencies for each of the dependencies above that layer. These allow for more fine-grained requirement checks.
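For example (a sketch only; the exact packages and versions returned depend on your environment):
```
deps = get_installed_dependencies('pipdeptree', depth_limit=2, include_self=True)
assert 'pipdeptree' in deps.keys()
assert 'pip' in deps.keys()
```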
## Checking for New Versions
Given these dependencies, we can also then check for a new version to see if an upgrade is available. This is what the `is_latest_version` function is designed for:
```
#export
def is_latest_version(
package_name:str, # The name of a pip python package
current_version:str, # The installed version of a package, such as "1.2.3"
) -> bool: # Whether the versions are the same
"Compares the current version with the latest version, and returns if they are different"
latest_version = str(subprocess.run([sys.executable, '-m', 'pip', 'install', '{}==random'.format(package_name)], capture_output=True, text=True))
latest_version = latest_version[latest_version.find('(from versions:')+15:]
latest_version = latest_version[:latest_version.find(')')]
latest_version = latest_version.replace(' ','').split(',')[-1]
if latest_version == current_version:
return True
else:
return False
using_latest_version = is_latest_version('pipdeptree', '2.0.9')
assert using_latest_version == False
```
Here we tested if `pipdeptree` is the latest version. The version we specified is one less than that of the latest release at the time of development. We got `False`, meaning a newer version is available.
SPARQL Transformer evaluation
=========================
This notebook contains some quantitative measures for the evaluation of SPARQL Transformer.
```
import json
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from ipywidgets import FloatProgress
from IPython.display import display
from SPARQLWrapper import SPARQLWrapper, JSON
from SPARQLTransformer import sparqlTransformer
input_folder = './sparql'
ENDPOINT = 'http://0.0.0.0:7790/sparql'
# ENDPOINT = 'http://dbpedia.org/sparql'
json_queries_files = list(filter(lambda x: x.endswith('.json'), os.listdir(input_folder)))
json_queries_files.sort()
rq_queries_files = [f.replace('.json', '.rq') for f in json_queries_files]
json_queries = [json.load(open('%s/%s' % (input_folder, f), 'r')) for f in json_queries_files]
rq_queries = [open('%s/%s' % (input_folder, f), 'r').read() for f in rq_queries_files]
json_queries_files
```
The test queries have been taken from the __[DBpedia wiki](https://wiki.dbpedia.org/OnlineAccess)__.
Those SELECT queries have been manually converted into JSON queries, making sure that the transformed query was equal to the original one (variable names apart).
The following table shows, for each query:
- `n vars`, how many variables are selected
- `levels`, how many levels are present in the json prototype, where `1` refers to a flat object (all properties attached to the root) and `2` to one level of nested objects
- `features` included in the query
| name | n vars | levels | features |
|--------------------------|--------|--------|----------------------|
|1.Born_in_Berlin | 4 | 1 | filter, orderby |
|2.German_musicians | 4 | 1 | lang filter, optional|
|3.Musicians_born_in_Berlin| 4 | 1 | lang filter |
|4.Soccer_players | 5 | 2 | filter, orderby |
|5.Games | 2 | 1 | orderby |
Functions for executing the query and returning the bindings.
- For JSON queries, we use **SPARQLTransformer**.
- For SPARQL queries, we use **SPARQLWrapper** (which is also internally used by SPARQLTransformer).
```
def sparql_exec(query):
sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
result = sparql.query().convert()
return result["results"]["bindings"]
def json_exec(query, debug=False):
return sparqlTransformer(query, {'endpoint': ENDPOINT, 'debug': debug})
```
Functions for running the test for a particular query (sparql or json).
The test measures the **execution time** of the query (including any parsing task) and the **number of results**.
```
def test_atom(query, typ='sparql'):
start = time.time()
if typ == 'sparql':
r = sparql_exec(query)
else:
r = json_exec(query)
end = time.time()
timing = end - start
return len(r), timing
```
We will execute the test multiple times for each query, to obtain an average result that is as uncorrelated as possible with the network/server workload.
In particular, each test is executed `num_iteration` times, and each pair of consecutive iterations is separated by `sleep_time` seconds.
```
num_iteration = 100
sleep_time = 5
def mean_without_outliers(x):
    df = pd.DataFrame(x)
    Q1 = df.quantile(0.25)
    Q3 = df.quantile(0.75)
    IQR = Q3 - Q1
    # keep only values inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]; outliers become NaN and are ignored by mean()
    return float(df[(df >= Q1 - 1.5*IQR) & (df <= Q3 + 1.5*IQR)].mean())
test_results = []
all_timings = []
for i, json_query in enumerate(json_queries):
# queries
json_query = json_queries[i]
rq_query = rq_queries[i]
title = rq_queries_files[i].replace('.rq', '')
print(title)
# progress bars
fs = FloatProgress(min=0, max=num_iteration, description='SPARQL test:')
display(fs)
fj = FloatProgress(min=0, max=num_iteration, description='JSON test:')
display(fj)
sparql_time = []
sparql_results = 0
json_time = []
json_results = 0
for j in np.arange(num_iteration):
if (i + j) > 0 :
time.sleep(sleep_time)
sparql_results, t = test_atom(rq_query, typ='sparql')
sparql_time.append(t)
fs.value += 1
for j in np.arange(num_iteration):
time.sleep(sleep_time)
json_results, t = test_atom(json_query, typ='json')
json_time.append(t)
fj.value += 1
ts = np.mean(sparql_time)
tj = np.mean(json_time)
time_diff = (tj - ts)
time_diff_percent = 100 * time_diff / np.mean([ts,tj])
test_results.append({
'name': title,
'time_sparql': ts,
'result_sparql': sparql_results,
'time_json': tj ,
'result_json': json_results,
'time_diff': '{0:.2g}'.format(time_diff),
'time_diff_percent': '{0:.2g}%'.format(time_diff_percent)
});
all_timings.append({
'name': title,
'json': json_time,
'sparql': sparql_time
})
```
Those plots show that, over the whole test, some queries took much longer to be executed. The **outliers** are clearly visible as dots.
When computing the mean, we excluded all the outliers, where an outlier is a value that falls outside the IQR bounds (see [definition](https://www.purplemath.com/modules/boxwhisk3.htm)).
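For reference, the rule used here (restated) marks a timing $x$ as an outlier when
$$x < Q_1 - 1.5\,\mathrm{IQR} \quad\text{or}\quad x > Q_3 + 1.5\,\mathrm{IQR}, \qquad \mathrm{IQR} = Q_3 - Q_1.$$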
```
for i, json_query in enumerate(json_queries):
tim = all_timings[i]
a = np.array([np.hstack(tim['sparql']), np.hstack(tim['json'])]).transpose()
df = pd.DataFrame(a, columns=['SPARQL', 'JSON'])
bp = df.boxplot(vert=False, figsize=(16,4))
fig = np.asarray(bp).reshape(-1)[0].get_figure()
fig.suptitle(tim['name'])
plt.show()
pd.DataFrame.from_dict(test_results)
```
The table gives us two different pieces of information.
#### Time difference
The execution time of JSON queries (`time_json`) is quite close to that of the SPARQL ones (`time_sparql`). The absolute difference (`time_diff`) never exceeds a few hundredths of a second.
#### Result difference
The number of results (bindings) returned by SPARQL Transformer (`result_json`) is always lower than the number returned by the endpoint (`result_sparql`). This is due to the fact that the latter represents all the combinations of values as distinct bindings, while the former aggregates the results with the same id.
The table gives both counts so the aggregation effect is visible per query.
An interesting case is the 2nd result about [Prince Adalbert of Prussia](http://dbpedia.org/resource/Prince_Adalbert_of_Prussia_(1811–1873)), which has 4 names and 2 differently formatted death dates. This is represented with 4 * 2 = 8 bindings in the SPARQL output, which are then merged by SPARQL Transformer.
```
# SPARQL query
sparql_exec(rq_queries[0])[1:9]
# JSON query (SPARQL Transformer)
json_exec(json_queries[0])[1]
test_results
```
# hello paddle: from an ordinary program to a machine learning program
**Author:** [PaddlePaddle](https://github.com/PaddlePaddle) <br>
**Date:** 2021.12 <br>
**Abstract:** This example introduces the difference between an ordinary program and a machine learning program, and walks you through implementing your first machine learning program with the PaddlePaddle framework.
## 1. The logical difference between an ordinary program and a machine learning program
As a developer, the most familiar way to start learning a programming language, or a deep learning framework, is probably through a hello world program.
Learning PaddlePaddle can work the same way: this small example tutorial shows you how to get started with PaddlePaddle through a very simple example.
The biggest difference between a machine learning program and an ordinary program is this: an ordinary program, given some input, is told the rules for processing the data and then produces the processed result, whereas a machine learning program lets the machine **learn** those rules from the data without being given them.
As a warm-up, let's first look at what an ordinary program does.
Consider the following task:
When you take a taxi there is a base fare of 10 yuan, charged as soon as you get in. For every kilometer driven, an additional fee of 2 yuan per kilometer is charged. When a passenger finishes the ride, the taxi meter needs to compute the fare the passenger has to pay.
Implemented in Python, this would look as follows:
```
def calculate_fee(distance_travelled):
return 10 + 2 * distance_travelled
for x in [1.0, 3.0, 5.0, 9.0, 10.0, 20.0]:
print(calculate_fee(x))
```
Next, let's change the problem slightly. Suppose we know how many kilometers each passenger travelled and the total fare each passenger paid the driver when getting off, but we do not know the base fare or the per-kilometer charge. We want the machine to learn the rule for computing the total fare from these data.
More concretely, we want the machine learning program to learn from the data the parameters `w` and `b` in the formula below (this is a very simple example, so `w` and `b` are both floats; as your understanding of deep learning deepens, you will see that `w` and `b` are usually matrices and vectors). Then, the next time we know the distance travelled `distance_travelled` for a ride, we can estimate the passenger's total fare `total_fee`.
```
total_fee = w * distance_travelled + b
```
Next, let's see how to implement this hello-world-level machine learning program with PaddlePaddle.
## 2. Import paddle
To use PaddlePaddle, you first need to import the `paddle` package with Python's `import` statement.
At the same time, to compute and process arrays more conveniently, you also need to import `numpy`.
If you are running this notebook on your own machine and have not installed PaddlePaddle yet, please refer to the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) to install Paddle 2.2.0 first.
```
import paddle
print("paddle " + paddle.__version__)
```
## 3. Prepare the data
In this machine learning task, we already know the passengers' distance travelled `distance_travelled` and, correspondingly, the total fare `total_fee` these passengers paid.
Usually, in a machine learning task, an input value like `distance_travelled` is called `x` (or the feature), and an output value like `total_fee` is called `y` (or the label).
You can convert the sample data into Paddle tensors with `paddle.to_tensor`.
```
x_data = paddle.to_tensor([[1.], [3.0], [5.0], [9.0], [10.0], [20.0]])
y_data = paddle.to_tensor([[12.], [16.0], [20.0], [28.0], [30.0], [50.0]])
```
## 4. Define the model's computation with PaddlePaddle
Defining the model's computation with PaddlePaddle essentially means using Python, through the APIs provided by PaddlePaddle, to tell the framework the computation rules. To recap, we want to use PaddlePaddle and a machine learning approach to learn the parameters `w` and `b` in the formula below from the data, so that in the future, given an `x`, we can estimate the corresponding `y` (the estimated `y` is written `y_predict`):
```
y_predict = w * x + b
```
We will use PaddlePaddle's linear transformation layer `paddle.nn.Linear` to implement this computation. The variables `x, y, w, b, y_predict` in this formula correspond to PaddlePaddle's [Tensor concept](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/tensor.html).
**A small additional note**
In this example we already know, from experience, that the relationship between `distance_travelled` and `total_fee` is linear. In more realistic problems, the relationship between `x` and `y` is usually nonlinear, which is why more kinds of, and more complex, neural networks are needed. (For example, the relationship between BMI and your height is not linear, nor is the relationship between a pixel value in an image and whether the image shows a cat or a dog.)
```
linear = paddle.nn.Linear(in_features=1, out_features=1)
```
## 5. Get ready to run PaddlePaddle
At the very beginning the machine (computer) just guesses `w` and `b`; let's see how good its guess is. You should see that at this point `w` is a random value and `b` is 0.0. This is PaddlePaddle's initialization strategy, and also a common initialization strategy in this field. (If you like, you can also use other initialization methods; later you will see that choosing an initialization strategy is an important part of doing deep learning well.)
```
w_before_opt = linear.weight.numpy().item()
b_before_opt = linear.bias.numpy().item()
print("w before optimize: {}".format(w_before_opt))
print("b before optimize: {}".format(b_before_opt))
```
## 6. Tell PaddlePaddle how to learn
We have defined the neural network above (albeit the simplest possible one); we still need to tell PaddlePaddle how to **learn**, so that it can obtain the parameters `w` and `b`.
Stated briefly (the underlying theory still needs to be studied step by step): in machine learning / deep learning, the machine initially obtains `w` and `b` by guessing. When it computes (predicts) with these guessed parameter values, the resulting `y_predict` will certainly differ from the real `y`. The machine then **adjusts `w` and `b`** according to this **gap**. As these adjustments continue, `w` and `b` become more and more accurate, and the gap between `y_predict` and `y` becomes smaller and smaller, until we finally obtain useful values of `w` and `b`. This is the machine's **learning** process.
In more technical terms, the function (a formula) that measures the **gap** is the loss function, and the method used to **adjust** the parameters is the optimization algorithm.
In this example we use the simplest mean square error as the loss function (`paddle.nn.MSELoss`) and the most common optimization algorithm, SGD (stochastic gradient descent), as the optimizer (the `learning_rate` argument passed to `paddle.optimizer.SGD` can be understood as controlling the step size of each adjustment).
```
mse_loss = paddle.nn.MSELoss()
sgd_optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters = linear.parameters())
```
## 7. Run the optimization algorithm
Next, let PaddlePaddle run this optimization algorithm. This is the step-by-step parameter adjustment process described above; you should see the loss value (the `loss` that measures the gap between `y` and `y_predict`) keep decreasing.
```
total_epoch = 5000
for i in range(total_epoch):
y_predict = linear(x_data)
loss = mse_loss(y_predict, y_data)
loss.backward()
sgd_optimizer.step()
sgd_optimizer.clear_grad()
if i%1000 == 0:
print("epoch {} loss {}".format(i, loss.numpy()))
print("finished training, loss {}".format(loss.numpy()))
```
## 8. The parameters the machine has learned
After adjusting (**learning**) the parameters `w` and `b` like this, run the program below to see what the parameters have become. You should find that `w` has become a value very close to 2.0 and `b` a value close to 10.0. Although they are not exactly 2 and 10, they are reasonably good model parameters learned from the data, and they can be used for estimation in the future. (If you like, you can also let the machine train a while longer to obtain parameter values even closer to 2.0 and 10.0.)
```
w_after_opt = linear.weight.numpy().item()
b_after_opt = linear.bias.numpy().item()
print("w after optimize: {}".format(w_after_opt))
print("b after optimize: {}".format(b_after_opt))
```
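As a small extra check (this cell is an illustrative sketch added here, not part of the original tutorial), the trained `linear` layer can now estimate the fare for a new trip, for example 15 km:
```
new_distance = paddle.to_tensor([[15.0]])
estimated_fee = linear(new_distance)
# with w close to 2.0 and b close to 10.0 this should be close to 2 * 15 + 10 = 40
print(estimated_fee.numpy())
```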
## 9. hello paddle
Through this small example we hope you now have a first impression of PaddlePaddle, and that, as you learn more about it, you will be able to use it to solve the problems you actually encounter.
```
print("hello paddle")
```
# Accumulation of roundoff error
In this notebook we'll study some effects of the accumulation of roundoff error.
# Unstable Algorithms
We need to solve this integral for $n=1,2,\dots,8$:
$$y_n=\int_0^1\frac{x^n}{x+5}\,dx$$
We write the equation like this:
$$y_n = \frac{1}{n} - 5y_{n-1}$$
$$y_{1}=1-5(y_{0}+\epsilon )=1-5y_{0}-5\epsilon$$
$$y_{2}={\frac {1}{2}}-5(1-5y_{0}-5\epsilon )={\frac {1}{2}}-5+25y_{0}+5^{2}\epsilon$$
$$\vdots$$
$$y_{n}=\ldots +5^{n}\epsilon$$
The roundoff error is amplified by a factor of $\mathcal{O}(5^n)$ in the succeeding calculations, so this algorithm is unstable.
```
import numpy as np
import scipy as sc
import matplotlib.pyplot as plt
import sys
def function(y0, n):
    y_sol = np.zeros(n)
    y_sol[0] = y0
    for i in range(1, n):
        # recurrence y_i = 1/i - 5*y_{i-1}
        y_sol[i] = 1/i - 5*y_sol[i-1]
    return y_sol
n = 8
x = np.linspace(-1,1,8)
y0 = 0
y = function(y0, n)
plt.plot(x,y)
# The value of 'y' goes to infinity
```
# Ill-conditioned problems
Even if a stable algorithm is used, the solution to a problem is still inaccurate due to the accumulation of roundoff error when the problem itself is ill-conditioned.
## Dangers of Higher-Order Polynomial Interpolation
In 1901, Carl Runge published a study on the dangers of higher-order polynomial interpolation. He looked at the following simple-looking function:
$$f(x) = \frac{1}{1+25x^2}$$
which is now called Runge’s function
```
x = np.linspace(-1,1,10)
y = 1/(1 + 25*x**2)
xx = np.linspace(-1,1,100)
p = np.polyfit(x,y,4)
y4 = np.polyval(p,xx)
yr = 1/(1 + 25*xx**2)
plt.plot(x,y,'o')
plt.plot(xx,y4)
plt.plot(xx,yr,'--')
plt.legend(['','Polynomial fit','Runge function'])
# The polynomial does a poor job of following Runge’s function
# Continuing with the analysis,
# the 20th-order polynomial can be generated and plotted
x = np.linspace(-1,1,10)
y = 1/(1 + 25*x**2)
xx = np.linspace(-1,1,100)
p = np.polyfit(x,y,20)
y4 = np.polyval(p,xx)
yr = 1/(1+25*xx**2)
plt.plot(x,y,'o')
plt.plot(xx,y4)
plt.plot(xx,yr,'--')
plt.legend(['','Polynomial fit','Runge function'])
# The 20th-order polynomial oscillates wildly near the ends of the interval (Runge's phenomenon)
```
Although there may be certain contexts where higher-order polynomials are necessary, they are usually to be avoided. In most engineering and scientific contexts, lower-order polynomials of the type described in this chapter can be used effectively to capture the curving trends of data without suffering from oscillations.
[Real world example: Patriot missile failure due to magnification of roundoff error](https://en.wikipedia.org/wiki/Round-off_error)
# Error Estimates for Iterative Methods
The approximation of $e$ using Maclaurin series expansion
$$e^x = 1+ x+ \frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!} ... \frac{x^n}{n!}$$
```
def maclaurin(x, esp, max_int):
"""
Maclaurin series of exponential function
input:
x = value at which series evaluated
esp = stopping criterion (default = 0.0001)
max_int = maximum iterations (default = 50)
output:
fx = estimated value
ea = approximate relative error (%)
iter = number of iterations
"""
iter = 1
sol = 1
ea = 100
while sol:
sol_old = sol
sol = sol + x**iter / np.math.factorial(iter)
iter += 1
if sol != 0:
ea = np.abs((sol - sol_old)/sol)*100
            if ea <= esp or iter >= max_int:  # stop when converged or at the maximum number of iterations
break
fx = sol
return fx, ea, iter
maclaurin(1,1e-6,100)
e, a, inte = maclaurin(1,1e-6,100)
# np.exp(1) returns the true value of the number 'e'
# At least it is a better approximation than our method
print('The error is: '+ str(np.exp(1) - e))
print("The epsilon funciton build in python is: "+str(sys.float_info.epsilon))
```
The 52 bits used for the mantissa correspond to about 15 to 16 base-10 digits, so in our programming language the machine epsilon is about $10^{-16}$.
Remember that?
$$lim_{n\to\infty}(1 + \frac{1}{n})^n = e = 2.718281828...$$
Let's use the power of python to calculate
```
def euler(n):
return (1 + 1/n)**n
euler(10000)
# In Python, 1e16 means 10**16 (note that 10e16 is actually 10**17)
# What just happened?
euler(10e16)
```
When $n$ becomes bigger than $10^{15}$, our function stops increasing and starts oscillating.
```
x = np.linspace(1,1e16,100)
y = euler(x)
y2 = np.exp(1)
plt.xscale('log')
plt.axhline(y=y2, color='r', linestyle='--')
plt.plot(x,y)
plt.title("euler function in lin-log scale")
plt.legend(["Real Value of Euler Number", "f(n) "])
```
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# Nonlinear Filtering
```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```
## Introduction
The Kalman filter that we have developed uses linear equations, and so the filter can only handle linear problems. But the world is nonlinear, and so the classic filter that we have been studying to this point can have very limited utility.
There can be nonlinearity in the process model. Suppose we want to track an object falling through the atmosphere. The acceleration of the object depends on the drag it encounters. Drag depends on air density, and the air density decreases with altitude. In one dimension this can be modelled with the nonlinear differential equation
$$\ddot x = \frac{0.0034ge^{-x/22000}\dot x^2}{2\beta} - g$$
A second source of nonlinearity comes from the measurements. For example, radars measure the slant range to an object, and we are typically interested in the aircraft's position over the ground. We invoke Pythagoras and get the nonlinear equation:
$$x=\sqrt{\mathtt{slant}^2 - \mathtt{altitude}^2}$$
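As a quick numeric illustration (the figures here are made up), a target at a slant range of 10 km and an altitude of 3 km is roughly 9.54 km away over the ground:
```
import numpy as np

slant, altitude = 10.0, 3.0               # km, illustrative values only
ground_range = np.sqrt(slant**2 - altitude**2)
print(ground_range)                       # about 9.54 km
```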
These facts were not lost on the early adopters of the Kalman filter. Soon after Dr. Kalman published his paper people began working on how to extend the Kalman filter for nonlinear problems.
It is almost true to state that the only equation anyone knows how to solve is $\mathbf{Ax}=\mathbf{b}$. We only really know how to do linear algebra. I can give you any linear set of equations and you can either solve it or prove that it has no solution.
Anyone with formal education in math or physics has spent years learning various analytic ways to solve integrals, differential equations and so on. Yet even trivial physical systems produce equations that cannot be solved analytically. I can take an equation that you are able to integrate, insert a $\log$ term, and render it insolvable. This leads to jokes about physicists stating "assume a spherical cow on a frictionless surface in a vacuum...". Without making extreme simplifications most physical problems do not have analytic solutions.
How do we do things like model airflow over an aircraft in a computer, or predict weather, or track missiles with a Kalman filter? We retreat to what we know: $\mathbf{Ax}=\mathbf{b}$. We find some way to linearize the problem, turning it into a set of linear equations, and then use linear algebra software packages to compute an approximate solution.
Linearizing a nonlinear problem gives us inexact answers, and in a recursive algorithm like a Kalman filter or weather tracking system these small errors can sometimes reinforce each other at each step, quickly causing the algorithm to spit out nonsense.
What we are about to embark upon is a difficult problem. There is not one obvious, correct, mathematically optimal solution anymore. We will be using approximations, we will be introducing errors into our computations, and we will forever be battling filters that *diverge*, that is, filters whose numerical errors overwhelm the solution.
In the remainder of this short chapter I will illustrate the specific problems the nonlinear Kalman filter faces. You can only design a filter after understanding the particular problems the nonlinearity in your problem causes. Subsequent chapters will then teach you how to design and implement different kinds of nonlinear filters.
## The Problem with Nonlinearity
The mathematics of the Kalman filter is beautiful in part due to the Gaussian equation being so special. It is nonlinear, but when we add and multiply them we get another Gaussian as a result. That is very rare. $\sin{x}*\sin{y}$ does not yield a $\sin$ as an output.
What I mean by linearity may be obvious, but there are some subtleties. The mathematical requirements are twofold:
* additivity: $f(x+y) = f(x) + f(y)$
* homogeneity: $f(ax) = af(x)$
This leads us to say that a linear system is defined as a system whose output is linearly proportional to the sum of all its inputs. A consequence of this is that for a system to be linear, if the input is zero then the output must also be zero. Consider an audio amp - if I sing into a microphone, and you start talking, the output should be the sum of our voices (input) scaled by the amplifier gain. But if the amplifier outputs a nonzero signal such as a hum for a zero input, the additive relationship no longer holds. This is because linearity requires that $amp(voice) = amp(voice + 0)$. This clearly should give the same output, but if $amp(0)$ is nonzero, then
$$
\begin{aligned}
amp(voice) &= amp(voice + 0) \\
&= amp(voice) + amp(0) \\
&= amp(voice) + non\_zero\_value
\end{aligned}
$$
which is clearly nonsense. Hence, an apparently linear equation such as
$$L(f(t)) = f(t) + 1$$
is not linear because $L(0) = 1$. Be careful!
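A minimal numerical check of the two requirements (the functions `f_affine` and `f_linear` below are made up for this example):
```
import numpy as np

def f_affine(x):   # looks linear, but f_affine(0) = 1, so it is not
    return 2*x + 1

def f_linear(x):   # truly linear: additivity and homogeneity both hold
    return 2*x

x, y, a = 3.0, 4.0, 5.0
for f in (f_affine, f_linear):
    additive = np.isclose(f(x + y), f(x) + f(y))
    homogeneous = np.isclose(f(a*x), a*f(x))
    print(f.__name__, 'additive:', additive, 'homogeneous:', homogeneous)
```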
## An Intuitive Look at the Problem
I particularly like the following way of looking at the problem, which I am borrowing from Dan Simon's *Optimal State Estimation* [[1]](#[1]). Consider a tracking problem where we get the range and bearing to a target, and we want to track its position. The reported distance is 50 km, and the reported angle is 90$^\circ$. Assume that the errors in both range and angle are distributed in a Gaussian manner. Given an infinite number of measurements what is the expected value of the position?
I have been recommending using intuition to gain insight, so let's see how it fares for this problem. We might reason that since the mean of the range will be 50 km, and the mean of the angle will be 90$^\circ$, that the answer will be x=0 km, y=50 km.
Let's plot that and find out. Here are 3000 points plotted with a normal distribution of the distance of 0.4 km, and the angle having a normal distribution of 0.35 radians. We compute the average of all of the positions, and display it as a star. Our intuition is displayed with a large circle.
```
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
N = 3000
a = np.pi/2. + (randn(N) * 0.35)
r = 50.0 + (randn(N) * 0.4)
xs = r * np.cos(a)
ys = r * np.sin(a)
plt.figure()
plt.scatter(xs, ys, label='Sensor', color='k', marker='.', s=2)
xs, ys = sum(xs)/N, sum(ys)/N
plt.scatter(xs, ys, c='r', marker='*', s=200, label='Mean')
plt.scatter(0, 50, c='k', marker='o', s=300, label='Intuition')
plt.axis('equal')
plt.legend();
```
We can see that our intuition failed us because the nonlinearity of the problem forced all of the errors to be biased in one direction. This bias, over many iterations, can cause the Kalman filter to diverge. Even if it doesn't diverge the solution will not be optimal. Linear approximations applied to nonlinear problems yield inaccurate results.
## The Effect of Nonlinear Functions on Gaussians
Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each evolution we pass the Gaussian representing the state through the process function to get the Gaussian at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but let's use sampling. I will generate 500,000 points with a normal distribution, pass them through $f(x)$, and plot the results. I do it this way because the next example will be nonlinear, and we will have no way to compute this analytically.
```
import numpy as np
from numpy.random import normal
gaussian = (0., 1.)
data = normal(loc=gaussian[0], scale=gaussian[1], size=500000)
plt.figure()
plt.hist(2*data + 1, 1000);
```
This is an unsurprising result. The result of passing the Gaussian through $f(x)=2x+1$ is another Gaussian centered around 1. Let's look at the input, nonlinear function, and output at once.
```
from kf_book.book_plots import set_figsize, figsize
from kf_book.nonlinear_plots import plot_nonlinear_func
def g1(x):
return 2*x+1
plt.figure()
plot_nonlinear_func(data, g1, gaussian)
```
> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the
Supporting_Notebooks folder. You can also read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)[1]
The plot labeled 'Input' is the histogram of the original data. This is passed through the function $f(x)=2x+1$ which is displayed in the chart on the bottom left. The red lines show how one value, $x=0$, is passed through the function. Each value from the input is passed through in the same way to the output function on the right. For the output I computed the mean by taking the average of all the points, and drew the results with the dotted blue line. A solid blue line shows the actual mean for the point $x=0$. The output looks like a Gaussian, and is in fact a Gaussian. We can see that the variance in the output is larger than the variance in the input, and the mean has been shifted from 0 to 1, which is what we would expect given the transfer function $f(x)=2x+1$. The $2x$ affects the variance, and the $+1$ shifts the mean. The computed mean, represented by the dotted blue line, is nearly equal to the actual mean. If we used more points in our computation we could get arbitrarily close to the actual value.
Now let's look at a nonlinear function and see how it affects the probability distribution.
```
def g2(x):
return (np.cos(3*(x/2 + 0.7))) * np.sin(0.3*x) - 1.6*x
plt.figure()
plot_nonlinear_func(data, g2, gaussian)
```
This result may be somewhat surprising to you. The function looks "fairly" linear, but the probability distribution of the output is completely different from a Gaussian. Recall the equations for multiplying two univariate Gaussians:
$$\begin{aligned}
\mu &=\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1} {\sigma_1^2 + \sigma_2^2} \\
\sigma &= \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}
\end{aligned}$$
These equations do not hold for non-Gaussians, and certainly do not hold for the probability distribution shown in the 'Output' chart above.
Think of what this implies for the Kalman filter algorithm of the previous chapter. All of the equations assume that a Gaussian passed through the process function results in another Gaussian. If this is not true then all of the assumptions and guarantees of the Kalman filter do not hold. Let's look at what happens when we pass the output back through the function again, simulating the next time step of the Kalman filter.
```
y = g2(data)
gaussian2 = (np.mean(y), np.var(y))
plt.figure()
plot_nonlinear_func(y, g2, gaussian2)
```
As you can see the probability function is further distorted from the original Gaussian. However, the graph is still somewhat symmetric around x=0; let's see what the mean is.
```
print('input mean, variance: %.4f, %.4f' %
(np.mean(data), np.var(data)))
print('output mean, variance: %.4f, %.4f' %
(np.mean(y), np.var(y)))
```
Let's compare that to the linear function that passes through (-2,3) and (2,-3), which is very close to the nonlinear function we have plotted. Using the equation of a line we have
$$m=\frac{-3-3}{2-(-2)}=-1.5$$
```
def g3(x):
return -1.5 * x
plt.figure()
plot_nonlinear_func(data, g3, gaussian)
out = g3(data)
print('output mean, variance: %.4f, %.4f' %
(np.mean(out), np.var(out)))
```
Although the shapes of the output are very different, the mean and variance of each are almost the same. This may lead us to reason that perhaps we can ignore this problem if the nonlinear equation is 'close to' linear. To test that, we can iterate several times and then compare the results.
```
out = g3(data)
out2 = g2(data)
for i in range(10):
out = g3(out)
out2 = g2(out2)
print('linear output mean, variance: %.4f, %.4f' %
(np.average(out), np.std(out)**2))
print('nonlinear output mean, variance: %.4f, %.4f' %
(np.average(out2), np.std(out2)**2))
```
Unfortunately the nonlinear version is not stable. It drifted significantly from the mean of 0, and the variance is half an order of magnitude larger.
I minimized the issue by using a function that is quite close to a straight line. What happens if the function is $y(x)=x^2$?
```
def g3(x):
return -x*x
x0 = (1, 1)
data = normal(loc=x0[0], scale=x0[1], size=500000)
plt.figure()
plot_nonlinear_func(data, g3, gaussian=x0)
```
Despite the curve being smooth and reasonably straight at $x=1$ the probability distribution of the output doesn't look anything like a Gaussian and the computed mean of the output is quite different than the value computed directly. This is not an unusual function - a ballistic object moves in a parabola, and this is the sort of nonlinearity your filter will need to handle. If you recall we've tried to track a ball and failed miserably. This graph should give you insight into why the filter performed so poorly.
## A 2D Example
It is hard to look at probability distributions and reason about what will happen in a filter. So let's think about tracking an aircraft with radar. The estimate may have a covariance that looks like this:
```
import kf_book.nonlinear_internal as nonlinear_internal
nonlinear_internal.plot1()
```
What happens when we try to linearize this problem? The radar gives us a range to the aircraft. Suppose the radar is directly under the aircraft (x=10) and the next measurement states that the aircraft is 3 miles away (y=3). The positions that could match that measurement form a circle with radius 3 miles, like so.
```
nonlinear_internal.plot2()
```
We can see by inspection that the probable position of the aircraft is somewhere near x=11.4, y=2.7 because that is where the covariance ellipse and range measurement overlap. But the range measurement is nonlinear so we have to linearize it. We haven't covered this material yet, but the Extended Kalman filter will linearize at the last position of the aircraft - (10,2). At x=10 the range measurement has y=3, and so we linearize at that point.
```
nonlinear_internal.plot3()
```
Now we have a linear representation of the problem (literally a straight line) which we can solve. Unfortunately you can see that the intersection of the line and the covariance ellipse is a long way from the actual aircraft position.
```
nonlinear_internal.plot4()
```
That sort of error often leads to disastrous results. The error in this estimate is large. But in the next innovation of the filter that very bad estimate will be used to linearize the next radar measurement, so the next estimate is likely to be markedly worse than this one. After only a few iterations the Kalman filter will diverge, and start producing results that have no correspondence to reality.
This covariance ellipse spans miles. I exaggerated the size to illustrate the difficulties of highly nonlinear systems. In real radar tracking problems the nonlinearity is usually not that bad, but the errors will still accumulate. Other systems you may work with could have this amount of nonlinearity - this was not an exaggeration made only to prove a point. You will always be battling divergence when working with nonlinear systems.
## The Algorithms
You may be impatient to solve a specific problem, and wondering which filter to use. I will quickly survey the options. The subsequent chapters are somewhat independent of each other, and you can fruitfully skip around, though I recommend reading linearly if you truly want to master all of the material.
The workhorses of nonlinear filters are the *linearized Kalman filter* and *extended Kalman filter* (EKF). These two techniques were invented shortly after Kalman published his paper and they have been the main techniques used since then. The flight software in airplanes, the GPS in your car or phone almost certainly use one of these techniques.
However, these techniques are extremely demanding. The EKF linearizes the differential equations at one point, which requires you to find a solution to a matrix of partial derivatives (a Jacobian). This can be difficult or impossible to do analytically. If impossible, you have to use numerical techniques to find the Jacobian, but this is expensive computationally and introduces more error into the system. Finally, if the problem is quite nonlinear the linearization leads to a lot of error being introduced in each step, and the filters frequently diverge. You cannot throw some equations into some arbitrary solver and expect to get good results. It's a difficult field for professionals. I note that most Kalman filtering textbooks merely gloss over the EKF despite it being the most frequently used technique in real world applications.
Recently the field has been changing in exciting ways. First, computing power has grown to the point that we can use techniques that were once beyond the ability of a supercomputer. These use *Monte Carlo* techniques - the computer generates thousands to tens of thousands of random points and tests all of them against the measurements. It then probabilistically kills or duplicates points based on how well they match the measurements. A point far away from the measurement is unlikely to be retained, whereas a point very close is quite likely to be retained. After a few iterations there is a clump of particles closely tracking your object, and a sparse cloud of points where there is no object.
This has two benefits. First, the algorithm is robust even for extremely nonlinear problems. Second, the algorithm can track arbitrarily many objects at once - some particles will match the behavior on one object, and other particles will match other objects. So this technique is often used to track automobile traffic, people in crowds, and so on.
The costs should be clear. It is computationally expensive to test tens of thousands of points for every step in the filter. But modern CPUs are very fast, and this is a good problem for GPUs because this part of the algorithm is easily parallelizable. Another cost is that the answer is not mathematical. With a Kalman filter my covariance matrix gives me important information about the amount of error in the estimate. The particle filter does not give me a rigorous way to compute this. Finally, the output of the filter is a cloud of points; I then have to figure out how to interpret it. Usually you will be doing something like taking the mean and standard deviations of the points, but this is a difficult problem. There are still many points that do not 'belong' to a tracked object, so you first have to run some sort of clustering algorithm to find the points that seem to be tracking an object, and then you need another algorithm to produce a state estimate from those points. None of this is intractable, but it is all quite computationally expensive.
Finally, we have a new algorithm called the *unscented Kalman filter* (UKF). It does not require you to find analytic solutions to nonlinear equations, and yet almost always performs better than the EKF. It does well with nonlinear problems - problems where the EKF has significant difficulties. Designing the filter is extremely easy. Some will say the jury is still out on the UKF, but to my mind the UKF is superior in almost every way to the EKF. I suggest that the UKF should be the starting point for any implementation, especially if you are not a Kalman filter professional with a graduate degree in control theory. The main downside is that the UKF can be a few times slower than the EKF, but this really depends on whether the EKF solves the Jacobian analytically or numerically. If numerically the UKF is almost certainly faster. It has not been proven (and probably it cannot be proven) that the UKF always yields more accurate results than the EKF. In practice it almost always does, often significantly so. It is very easy to understand and implement, and I strongly suggest this filter as your starting point.
## Summary
The world is nonlinear, but we only really know how to solve linear problems. This introduces significant difficulties for Kalman filters. We've looked at how nonlinearity affects filtering in 3 different but equivalent ways, and I've given you a brief summary of the major approaches: the linearized Kalman filter, the extended Kalman filter, the Unscented Kalman filter, and the particle filter.
Until recently the linearized Kalman filter and EKF have been the standard way to solve these problems. They are very difficult to understand and use, and they are also potentially very unstable.
Recent developments have offered what are to my mind superior approaches. The UKF dispenses with the need to find solutions to partial differential equations, yet it is also usually more accurate than the EKF. It is easy to use and understand. I can get a basic UKF going in a few minutes by using FilterPy. The particle filter dispenses with mathematical modeling completely in favor of a Monte Carlo technique of generating a random cloud of thousands of points. It runs slowly, but it can solve otherwise intractable problems with relative ease.
I get more email about the EKF than anything else; I suspect that this is because most treatments in books, papers, and on the internet use the EKF. If your interest is in mastering the field of course you will want to learn about the EKF. But if you are just trying to get good results I point you to the UKF and particle filter first. They are much easier to implement, understand, and use, and they are typically far more stable than the EKF.
Some will quibble with that advice. A lot of recent publications are devoted to a comparison of the EKF, UKF, and perhaps a few other choices for a given problem. Do you not need to perform a similar comparison for your problem? If you are sending a rocket to Mars then of course you do. You will be balancing issues such as accuracy, round off errors, divergence, mathematical proof of correctness, and the computational effort required. I can't imagine not knowing the EKF intimately.
On the other hand the UKF works spectacularly! I use it at work for real world applications. I mostly haven't even tried to implement an EKF for these applications because I can verify that the UKF is working fine. Is it possible that I might eke out another 0.2% of performance from the EKF in certain situations? Sure! Do I care? No! I completely understand the UKF implementation, it is easy to test and verify, I can pass the code to others and be confident that they can understand and modify it, and I am not a masochist that wants to battle difficult equations when I already have a working solution. If the UKF or particle filters start to perform poorly for some problem then I will turn to other techniques, but not before then. And realistically, the UKF usually provides substantially better performance than the EKF over a wide range of problems and conditions. If "really good" is good enough I'm going to spend my time working on other problems.
I'm belaboring this point because in most textbooks the EKF is given center stage, and the UKF is either not mentioned at all or just given a 2 page gloss that leaves you completely unprepared to use the filter. The UKF is still relatively new, and it takes time to write new editions of books. At the time many books were written the UKF was either not discovered yet, or it was just an unproven but promising curiosity. But I am writing this now, the UKF has had enormous success, and it needs to be in your toolkit. That is what I will spend most of my effort trying to teach you.
## References
<A name="[1]">[1]</A> https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb
# Algorithm used :

```
%matplotlib inline
import gym
import itertools
import matplotlib
import numpy as np
import pandas as pd
import sys
if "../" not in sys.path:
sys.path.append("../")
from collections import defaultdict
from lib.envs.windy_gridworld import WindyGridworldEnv
from lib import plotting
matplotlib.style.use('ggplot')
env = WindyGridworldEnv()
def make_epsilon_greedy_policy(Q, epsilon, nA):
"""
Creates an epsilon-greedy policy based on a given Q-function and epsilon.
Args:
Q: A dictionary that maps from state -> action-values.
Each value is a numpy array of length nA (see below)
        epsilon: The probability of selecting a random action. Float between 0 and 1.
nA: Number of actions in the environment.
Returns:
A function that takes the observation as an argument and returns
the probabilities for each action in the form of a numpy array of length nA.
"""
def policy_fn(observation):
A = np.ones(nA, dtype=float) * epsilon / nA
best_action = np.argmax(Q[observation])
A[best_action] += (1.0 - epsilon)
return A
return policy_fn
def sarsa(env, num_episodes, discount_factor=1.0, alpha=0.5, epsilon=0.1):
"""
SARSA algorithm: On-policy TD control. Finds the optimal epsilon-greedy policy.
Args:
env: OpenAI environment.
num_episodes: Number of episodes to run for.
discount_factor: Gamma discount factor.
alpha: TD learning rate.
        epsilon: Chance to sample a random action. Float between 0 and 1.
Returns:
A tuple (Q, stats).
Q is the optimal action-value function, a dictionary mapping state -> action values.
stats is an EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards.
"""
# The final action-value function.
# A nested dictionary that maps state -> (action -> action-value).
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# Keeps track of useful statistics
stats = plotting.EpisodeStats(
episode_lengths=np.zeros(num_episodes),
episode_rewards=np.zeros(num_episodes))
# The policy we're following
policy = make_epsilon_greedy_policy(Q, epsilon, env.action_space.n)
for i_episode in range(num_episodes):
# Print out which episode we're on, useful for debugging.
if (i_episode + 1) % 100 == 0:
print("\rEpisode {}/{}.".format(i_episode + 1, num_episodes), end="")
sys.stdout.flush()
# Reset the environment and pick the first action
state = env.reset()
        # Each action is modelled by a number; we assign probabilities
        # following the epsilon-greedy policy to choose which action
        # to take in the current state.
action_probs = policy(state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
# One step in the environment
for t in itertools.count():
# Take a step
next_state, reward, done, _ = env.step(action)
# Pick the next action
next_action_probs = policy(next_state)
next_action = np.random.choice(np.arange(len(next_action_probs)), p=next_action_probs)
# Update statistics
stats.episode_rewards[i_episode] += reward
stats.episode_lengths[i_episode] = t
# TD Update
td_target = reward + discount_factor * Q[next_state][next_action]
td_delta = td_target - Q[state][action]
Q[state][action] += alpha * td_delta
if done:
break
action = next_action
state = next_state
return Q, stats
Q, stats = sarsa(env, 200)
plotting.plot_episode_stats(stats)
```
# Chapter 2: Conditional probability
----
```
import numpy as np
```
## Simulating the frequentist interpretation
Recall that the frequentist interpretation of conditional probability based on a large number `n` of repetitions of an experiment is $P(A|B) ≈ n_{AB}/n_{B}$, where $n_{AB}$ is the number of times that $A \cap B$ occurs and $n_{B}$ is the number of times that $B$ occurs. Let's try this out by simulation, and verify the results of Example 2.2.5. So let's use [`numpy.random.choice`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.choice.html) to simulate `n` families, each with two children.
```
np.random.seed(34)
n = 10**5
child1 = np.random.choice([1,2], n, replace=True)
child2 = np.random.choice([1,2], n, replace=True)
print('child1:\n{}\n'.format(child1))
print('child2:\n{}\n'.format(child2))
```
Here `child1` is a NumPy `array` of length `n`, where each element is a 1 or a 2. Letting 1 stand for "girl" and 2 stand for "boy", this `array` represents the gender of the elder child in each of the `n` families. Similarly, `child2` represents the gender of the younger child in each family.
Alternatively, we could have used
```
np.random.choice(["girl", "boy"], n, replace=True)
```
but it is more convenient working with numerical values.
Let $A$ be the event that both children are girls and $B$ the event that the elder is a girl. Following the frequentist interpretation, we count the number of repetitions where $B$ occurred and name it `n_b`, and we also count the number of repetitions where $A \cap B$ occurred and name it `n_ab`. Finally, we divide `n_ab` by ` n_b` to approximate $P(A|B)$.
```
n_b = np.sum(child1==1)
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | elder is girl) = {:0.2F}'.format(n_ab / n_b))
```
The ampersand `&` is an elementwise $AND$, so `n_ab` is the number of families where both the first child and the second child are girls. When we ran this code, we got 0.50, confirming our answer $P(\text{both girls | elder is a girl}) = 1/2$.
Now let $A$ be the event that both children are girls and $B$ the event that at least one of the children is a girl. Then $A \cap B$ is the same, but `n_b` needs to count the number of families where at least one child is a girl. This is accomplished with the elementwise $OR$ operator `|` (this is not a conditioning bar; it is an inclusive $OR$, returning `True` if at least one element is `True`).
```
n_b = np.sum((child1==1) | (child2==1))   # at least one of the two children is a girl
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | at least one girl) = {:0.2F}'.format(n_ab / n_b))
```
For us, the result was 0.33, confirming that $P(\text{both girls | at least one girl}) = 1/3$.
## Monty Hall simulation
Many long, bitter debates about the Monty Hall problem could have been averted by trying it out with a simulation. To study how well the never-switch strategy performs, let's generate 10<sup>5</sup> runs of the Monty Hall game. To simplify notation, assume the contestant always chooses door 1. Then we can generate a vector specifying which door has the car for each repetition:
```
np.random.seed(55)
n = 10**5
cardoor = np.random.choice([1,2,3] , n, replace=True)
print('The never-switch strategy has success rate {:.3F}'.format(np.sum(cardoor==1) / n))
```
At this point we could generate the vector specifying which doors Monty opens, but that's unnecessary since the never-switch strategy succeeds if and only if door 1 has the car! So the fraction of times when the never-switch strategy succeeds is `numpy.sum(cardoor==1)/n`, which was 0.331 in our simulation. This is very close to 1/3.
What if we want to play the Monty Hall game interactively? We can do this by programming a Python class that would let us play interactively or let us run a simulation across many trials.
```
class Monty():
def __init__(self):
""" Object creation function. """
self.state = 0
self.doors = np.array([1, 2, 3])
self.prepare_game()
def get_success_rate(self):
""" Return the rate of success in this series of plays: num. wins / num. plays. """
if self.num_plays > 0:
return 1.0*self.num_wins / self.num_plays
else:
return 0.0
def prepare_game(self):
""" Prepare initial values for game play, and randonly choose the door with the car. """
self.num_plays = 0
self.num_wins = 0
self.cardoor = np.random.choice(self.doors)
self.players_choice = None
self.montys_choice = None
def choose_door(self, door):
""" Player chooses a door at state 0. Monty will choose a remaining door to reveal a goat. """
self.state = 1
self.players_choice = door
self.montys_choice = np.random.choice(self.doors[(self.doors!=self.players_choice) & (self.doors!=self.cardoor)])
def switch_door(self, do_switch):
""" Player has the option to switch from the door she has chosen to the remaining unopened door.
If the door the player has selected is the same as the cardoor, then num. of wins is incremented.
Finally, number of plays will be incremented.
"""
self.state = 2
if do_switch:
self.players_choice = self.doors[(self.doors!=self.players_choice) & (self.doors!=self.montys_choice)][0]
if self.players_choice == self.cardoor:
self.num_wins += 1
self.num_plays += 1
def continue_play(self):
""" Player opts to continue playing in this series.
The game is returned to state 0, but the counters for num. wins and num. plays
will be kept intact and running.
A new cardoor is randomly chosen.
"""
self.state = 0
self.cardoor = np.random.choice(self.doors)
self.players_choice = None
self.montys_choice = None
def reset(self):
""" The entire game state is returned to its initial state.
        All counters and state-holding variables are re-initialized.
"""
self.state = 0
self.prepare_game()
```
In brief:
* The `Monty` class represents a simple state model for the game.
* When an instance of the `Monty` game is created, game state-holding variables are initialized and a `cardoor` randomly chosen.
* After the player initially picks a door, `Monty` will choose a remaining door that does not have the car behind it.
* The player can then choose to switch to the other, remaining unopened door, or stick with her initial choice.
* `Monty` will then see if the player wins or not, and updates the state-holding variables for num. wins and num. plays.
* The player can continue playing, or stop and reset the game to its original state.
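For instance, a single game can be played directly against the class (a minimal sketch; the door number chosen here is arbitrary), as an alternative to the simulation and widget below:
```
np.random.seed(42)
game = Monty()
game.choose_door(1)          # player picks door 1; Monty opens a goat door
print('Monty opened door', game.montys_choice)
game.switch_door(True)       # switch to the remaining unopened door
print('The car was behind door', game.cardoor)
print('Wins so far:', game.num_wins, 'out of', game.num_plays)
```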
### As a short simulation program
Here is an example showing how to use the `Monty` class above to run a simulation to see how often the switching strategy succeeds.
```
np.random.seed(89)
trials = 10**5
game = Monty()
for _ in range(trials):
game.choose_door(np.random.choice([1,2,3]))
game.switch_door(True)
game.continue_play()
print('In {} trials, the switching strategy won {} times.'.format(game.num_plays, game.num_wins))
print('Success rate is {:.3f}'.format(game.get_success_rate()))
```
### As an interactive widget in this Jupyter notebook
Optionally, the `Monty` Python class above can also be used as an engine to power an interactive widget that lets you play the three-door game _in the browser_ using [`ipywidgets` ](https://ipywidgets.readthedocs.io/en/stable/user_guide.html).
To run the interactive widget, make sure you have the `ipywidgets` package installed (v7.4.2 or greater).
To install with the `conda` package manager, execute the following command:
conda install ipywidgets
To install with the `pip` package manager, execute the following command:
pip install ipywidgets
```
from ipywidgets import Box, Button, ButtonStyle, FloatText, GridBox, IntText, Label, Layout, HBox
from IPython.display import display
```
The doors in the game are represented by [`ipywidgets.Button`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#Button).
```
door1 = Button(description='Door 1', layout=Layout(flex='1 1 auto', width='auto'))
door2 = Button(description='Door 2', layout=door1.layout)
door3 = Button(description='Door 3', layout=door1.layout)
doors_arr = [door1, door2, door3]
doors = Box(doors_arr, layout=Layout(width='auto', grid_area='doors'))
```
State-holding variables in the `Monty` object are displayed using [`ipywidgets.IntText`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#IntText) (for the `num_wins` and `num_plays`); and [`ipywidgets.FloatText`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#FloatText) (for the success rate).
```
label1 = Label(value='number of plays', layout=Layout(width='auto', grid_area='label1'))
text1 = IntText(disabled=True, layout=Layout(width='auto', grid_area='text1'))
label2 = Label(value='number of wins', layout=Layout(width='auto', grid_area='label2'))
text2 = IntText(disabled=True, layout=Layout(width='auto', grid_area='text2'))
label3 = Label(value='success rate', layout=Layout(width='auto', grid_area='label3'))
text3 = FloatText(disabled=True, layout=Layout(width='auto', grid_area='text3'))
```
[`ipywidgets.Label`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#Label) is used to display the title and descriptive text in the game widget.
```
banner = Box([Label(value='Interactive widget: Monty Hall problem',
layout=Layout(width='50%'))],
layout=Layout(width='auto', justify_content='center', grid_area='banner'))
status = Label(value='Pick a door...', layout=Layout(width='auto', grid_area='status'))
```
Buttons allowing for further user actions are located at the bottom of the widget.
* The `reveal` button is used to show what's behind all of the doors after the player makes her final choice.
* After the player completes a round of play, she can click the `continue` button to keep counting game state (num. wins and num. plays).
* The `reset` button lets the player return the game to its original state after completing a round of play.
```
button_layout = Layout(flex='1 1 auto', width='auto')
reveal = Button(description='reveal', tooltip='open selected door', layout=button_layout, disabled=True)
contin = Button(description='continue', tooltip='continue play', layout=button_layout, disabled=True)
reset = Button(description='reset', tooltip='reset game', layout=button_layout, disabled=True)
actions = Box([reveal, contin, reset], layout=Layout(width='auto', grid_area='actions'))
```
[`ipywidgets.GridBox`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Styling.html#The-Grid-layout) helps us lay out the user interface elements for the `Monty` game widget.
```
ui = GridBox(children=[banner, doors, label1, text1, label2, text2, label3, text3, status, actions],
layout=Layout(
width='50%',
grid_template_rows='auto auto auto auto auto auto auto',
grid_template_columns='25% 25% 25% 25%',
grid_template_areas='''
"banner banner banner banner"
"doors doors doors doors"
"label1 label1 text1 text1"
"label2 label2 text2 text2"
"label3 label3 text3 text3"
"status status status status"
". . actions actions"
'''
)
)
```
We lastly create some functions to connect the widget to the `Monty` game object. These functions adapt player action [events](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Events.html#Example) to state changes in the `Monty` object, and then update the widget user interface accordingly.
```
uigame = Monty()
def reset_ui(disable_reset=True):
""" Return widget elements to their initial state.
Do not disable the reset button in the case of continue.
"""
for i,d in enumerate(doors_arr):
d.description = 'Door {}'.format(i+1)
d.disabled = False
d.icon = ''
d.button_style = ''
reveal.disabled = True
contin.disabled = True
reset.disabled = disable_reset
def update_status(new_status):
""" Update the widget text fields for displaying present game status. """
text1.value = uigame.num_plays
text2.value = uigame.num_wins
text3.value = uigame.get_success_rate()
status.value = new_status
def update_ui_reveal():
""" Helper function to update the widget after the player clicks the reveal button. """
if uigame.players_choice == uigame.cardoor:
new_status = 'You win! Continue playing?'
else:
new_status = 'Sorry, you lose. Continue playing?'
for i,d in enumerate(doors_arr):
d.disabled = True
if uigame.cardoor == i+1:
d.description = 'car'
else:
d.description = 'goat'
if uigame.players_choice == i+1:
if uigame.players_choice == uigame.cardoor:
d.button_style = 'success'
d.icon = 'check'
else:
d.button_style = 'danger'
d.icon = 'times'
update_status(new_status)
reveal.disabled = True
contin.disabled = False
reset.disabled = False
def on_button_clicked(b):
""" Event-handling function that maps button click events in the widget
to corresponding functions in Monty, and updates the user interface
according to the present game state.
"""
if uigame.state == 0:
if b.description in ['Door 1', 'Door 2', 'Door 3']:
c = int(b.description.split()[1])
uigame.choose_door(c)
b.disabled = True
b.button_style = 'info'
m = doors_arr[uigame.montys_choice-1]
m.disabled = True
m.description = 'goat'
unopened = uigame.doors[(uigame.doors != uigame.players_choice) &
(uigame.doors != uigame.montys_choice)][0]
status.value = 'Monty reveals a goat behind Door {}. Click Door {} to switch, or \'reveal\' Door {}.' \
.format(uigame.montys_choice, unopened, uigame.players_choice)
reveal.disabled = False
reset.disabled = False
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
elif uigame.state == 1:
if b.description in ['Door 1', 'Door 2', 'Door 3']:
prev_choice = uigame.players_choice
uigame.switch_door(True)
pb = doors_arr[prev_choice-1]
pb.icon = ''
pb.button_style = ''
b.disabled = True
b.button_style = 'info'
status.value = 'Now click \'reveal\' to see what\'s behind Door {}.'.format(uigame.players_choice)
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
elif b.description == 'reveal':
uigame.switch_door(False)
update_ui_reveal()
elif uigame.state == 2:
if b.description == 'reveal':
update_ui_reveal()
else:
if b.description == 'continue':
uigame.continue_play()
reset_ui(False)
update_status('Pick a door once more...')
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
# hook up all buttons to our event-handling function
door1.on_click(on_button_clicked)
door2.on_click(on_button_clicked)
door3.on_click(on_button_clicked)
reveal.on_click(on_button_clicked)
contin.on_click(on_button_clicked)
reset.on_click(on_button_clicked)
display(ui)
```
How to play:
* Click a door to select.
* Monty will select a remaining door and open to reveal a goat.
* Click the `reveal` button to open your selected door.
* Or click the remaining unopened Door button to switch your door choice, and then click `reveal`.
* Click the `continue` button to keep playing.
* You may click the `reset` button at any time to return the game back to its initial state.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
# Text Classification of Movie Reviews
```
from helpers import Timer
from sklearn.datasets import load_files
reviews_train = load_files("aclImdb/train/")
text_train, y_train = reviews_train.data, reviews_train.target
print("Number of documents in training data: %d" % len(text_train))
print(np.bincount(y_train))
reviews_test = load_files("aclImdb/test/")
text_test, y_test = reviews_test.data, reviews_test.target
print("Number of documents in test data: %d" % len(text_test))
print(np.bincount(y_test))
print(text_train[1])
print(y_train[1])
```
### Bag of words reminder:
<img src="bag_of_words.svg" width=80%>
```
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
cv.fit(text_train)
len(cv.vocabulary_)
print(cv.get_feature_names()[:50])
print(cv.get_feature_names()[50000:50050])
X_train = cv.transform(text_train)
X_train
print(text_train[19726])
X_train[19726].nonzero()[1]
X_test = cv.transform(text_test)
from sklearn.svm import LinearSVC
svm = LinearSVC()
with Timer():
svm.fit(X_train, y_train)
svm.score(X_train, y_train)
svm.score(X_test, y_test)
def visualize_coefficients(classifier, feature_names, n_top_features=25):
# get coefficients with large absolute values
coef = classifier.coef_.ravel()
positive_coefficients = np.argsort(coef)[-n_top_features:]
negative_coefficients = np.argsort(coef)[:n_top_features]
interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
# plot them
plt.figure(figsize=(15, 5))
colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 1 + 2 * n_top_features), feature_names[interesting_coefficients], rotation=60, ha="right");
visualize_coefficients(svm, cv.get_feature_names())
from sklearn.pipeline import make_pipeline
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
with Timer():
text_pipe.fit(text_train, y_train)
text_pipe.score(text_test, y_test)
from sklearn.grid_search import GridSearchCV
param_grid = {'linearsvc__C': np.logspace(-5, 0, 6)}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
with Timer():
grid.fit(text_train, y_train);
from figures import plot_grid_1d
plot_grid_1d(grid)
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.best_score_
grid.score(text_test, y_test)
```
# Text Classification continuation.
## TfidfVectorizer
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_pipe = make_pipeline(TfidfVectorizer(), LinearSVC())
param_grid = {'linearsvc__C': np.logspace(-3, 2, 6)}
grid = GridSearchCV(tfidf_pipe, param_grid, cv=5)
with Timer():
grid.fit(text_train, y_train)
plot_grid_1d(grid)
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['tfidfvectorizer'].get_feature_names())
grid.best_score_
grid.score(text_test, y_test)
```
# N-Grams
```
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
param_grid = {'linearsvc__C': np.logspace(-3, 2, 6),
"countvectorizer__ngram_range": [(1, 1), (1, 2), (1, 3)]}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
with Timer():
grid.fit(text_train, y_train)
scores = np.array([score.mean_validation_score for score in grid.grid_scores_]).reshape(3, -1)
plt.matshow(scores)
plt.ylabel("n-gram range")
plt.yticks(range(3), param_grid["countvectorizer__ngram_range"])
plt.xlabel("C")
plt.xticks(range(6), param_grid["linearsvc__C"]);
plt.colorbar()
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.score(text_test, y_test)
```
## Look at the Natural Language Toolkit (NLTK)
# Variational Inference and Learning in the Big Data Regime
Many real-world modelling solutions require fitting models with large numbers of data-points and parameters, which has recently been made convenient by software implementing automatic differentiation, but they also require uncertainty quantification. Variational inference is a generic family of tools that reformulates (Bayesian) model inference into an optimisation problem, thereby making use of modern software tools while also being able to give model uncertainty. This talk will motivate how variational inference works and what the state-of-the-art methods are. We will also accompany the theory with implementations on some simple probabilistic models, such as variational autoencoders (VAE). Time permitting, we will briefly talk about some of the recent frontiers of variational inference, namely normalising flows and Stein Variational Gradient Descent.
💻 Content covered:
Current inference methods: maximum likelihood and Markov chain Monte Carlo
Information theory and KL divergence
Mean field variational inference
Bayesian linear regression
Monte Carlo variational inference (MCVI), reparameterisation trick and law of the unconscious statistician (LOTUS)
Example software implementations: VAE
👾 This lecture will be held online on Microsoft Teams.
🔴The event will be recorded and will be publicly available.
🎉 Attendance is FREE for members! Whether you are a student at Imperial College or not, sign up to be a member at www.icdss.club/joinus
⭐️ We encourage participants of this workshop to have looked at our previous sessions on YouTube. Prerequisites: basic understanding of Bayesian statistics
📖 A schedule of our lecture series is currently available
## Background
- Variational Inference: A Review for Statisticians: https://www.tandfonline.com/doi/full/10.1080/01621459.2017.1285773
- Auto-Encoding Variational Bayes: https://arxiv.org/pdf/1312.6114.pdf
- http://yingzhenli.net/home/en/approximateinference
- https://github.com/ethanluoyc/pytorch-vae
Consider crop yields $y$ and we have a likelihood $p(y|z)$ where $z$ are latent parameters. Suppose $z$ has some prior distribution $p(z)$, then the posterior distribution is
$$
p(z|y) \propto p(y|z)p(z) := \tilde{p}(z|y).
$$
We then want to be able to compute quantities $\mathbb{E}_{z\sim p(z|y)}[h(Z)]$, for certain functions $h$ e.g. $h(z)=z$ for the posterior mean of $Z$.
We could compute $p(z|y)$ analytically if we have nice priors (conjugate priors), but this is usually not the case for most models e.g. autoencoders with latent parameters or certain Gaussian mixture models.
Markov chain Monte Carlo (MCMC) allows us to obtain samples $z\sim p(z|y)$ using samplers (e.g. Hamiltonian Monte Carlo (HMC) or Metropolis-Hastings), but it can be very expensive, which prohibits it from being used in the big data setting.
### Variational Inference
Variational Inference (VI)/Variational Bayes/Variational Approximation turns this problem into an optimisation problem. We now seek $q(z)$ in a space of functions $\mathcal{Q}$, instead of computing the exact $p(z|y)$, in which
$$KL(q(z) \,||\, p(z|y)) = \int \log\frac{q(z)}{p(z|y)}\, q(z)\, dz$$
is minimised. Here KL denotes the KL divergence, a measure of how close two distributions are to one another. It is:
- Non-negative
- Is equal to 0 if and only if $q(z) = p(z|y)$
- Note: $KL(q(z)||p(z|y)) \neq KL(p(z|y) || q(z))$. Minimising $KL(p(z|y) || q(z))$ is the objective of Expectation Propagation, which is another method for approximating posterior distributions.
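A tiny numerical illustration of these properties (the two discrete distributions below are made up, and only NumPy is assumed):
```
import numpy as np

def kl(q, p):
    # KL(q || p) for discrete distributions defined on the same support
    return np.sum(q * np.log(q / p))

q = np.array([0.4, 0.6])
p = np.array([0.5, 0.5])
print(kl(q, p), kl(p, q))   # two different values: KL is not symmetric
print(kl(q, q))             # 0.0 exactly when the distributions coincide
```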
Note that maximum likelihood estimation (MLE) is done by maximising the log-likelihood, which is the same as minimising the KL divergence:
$$
\text{argmin}_{\theta} KL(\hat{p}(y|\hat{\theta}) \,||\, p(y|\theta)) = \text{argmin}_{\theta} \frac{1}{n}\sum_{i=1}^n \log \frac{p(y_i|\hat{\theta})}{p(y_i|\theta)} = \text{argmin}_{\theta} \frac{1}{n}\sum_{i=1}^n \log \frac{1}{p(y_i|\theta)} = \text{argmax}_{\theta} \frac{1}{n}\sum_{i=1}^n \log p(y_i|\theta).
$$
**Evidence Lower-Bound**
Suppose I pose a family of posteriors $q(z)$, then
\begin{align*}
KL(q(z) || p(z|y)) = \int \log\frac{q(z)}{p(z|y)} q(z) dq &= \mathbb{E}_{z\sim q(z)}[\log q(z)] - \mathbb{E}_{z\sim q(z)}[\log p(z|y)] \\
&= \mathbb{E}_{z\sim q(z)}[\log q(z)] - \mathbb{E}_{z\sim q(z)}[\log p(z,y)] + \log p(y) \\
&= \mathbb{E}_{z\sim q(z)}[\log q(z)] - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] - \mathbb{E}_{z\sim q(z)}[\log p(z)] + \log p(y) \\
&=\log p(y) + \mathbb{E}_{z\sim q(z)}[\log \frac{q(z)}{p(z)}] - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] \\
&= \log p(y) + KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)].
\end{align*}
Since $\log p(y)$ does not depend on $q$ and the KL divergence on the left-hand side is non-negative, minimising it is equivalent to minimising:
$$
KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)].
$$
The evidence lower-bound is $ELBO(q) = \mathbb{E}_{z\sim q(z)}[\log p(y|z)] - KL(q(z) || p(z))$, which is maximised.
### Mean-Field Variational Inference
As fancy as it sounds, it just means specifying a family of posteriors $\mathcal{Q}$ such that
$$
q(z) = \prod_{j=1}^m q_j(z_j),
$$
where $m$ is the number of parameters.
**Coordinate Ascent Variational Inference (CAVI)**
Blei et al. (2017)

Let's look at an example (Li (2021)):
$$
y|x \sim \mathcal{N}(y; x^\intercal\theta, \sigma^2),\qquad \theta\sim\mathcal{N}(\theta; \mu_0, \Gamma_0^{-1}).
$$
This has an analytical solution
$$
p(\theta|\mathcal{D}) = \mathcal{N}(\theta; \mu,\Gamma^{-1})
$$
with
\begin{align*}
\Gamma &= \Gamma_0 + \frac{1}{\sigma^2}X^\intercal X \\
\mu &= \Gamma^{-1}\left(\frac{1}{\sigma^2}X^\intercal y + \Gamma_0\mu_0\right),
\end{align*}
where $X=(x_1,\ldots,x_n)^\intercal$ and $y=(y_1,\ldots,y_n)^\intercal$. **Let's try CAVI**:
\begin{align*}
\log q_1(\theta_1) =& \int q_2(\theta_2) \log \tilde{p}(\theta_1, \theta_2) d\theta_2\\
=& \int -\frac{1}{2}\left[(\theta_1-\mu_1)^2\Gamma_{11} + 2(\theta_1-\mu_1)\Gamma_{12}(\theta_2-\mu_2) \right]q_2(\theta_2) d\theta_2 + const \\
=& -\frac{1}{2}\left[(\theta_1-\mu_1)^2\Gamma_{11} + 2(\theta_1-\mu_1)\Gamma_{12}(\mathbb{E}_{\theta_2\sim q_2}[\theta_2]-\mu_2) \right] + const,
\end{align*}
which is Gaussian with mean and variance
$$
\tilde{\mu}_1 = \mu_1 - \Gamma_{11}^{-1}\Gamma_{12}(\mathbb{E}_{q_2}[\theta_2] - \mu_2),\qquad \tilde{\gamma}_1^{-1} = \Gamma_{11}.
$$
Similarly, you can obtain an analogous expression for $q_2(\theta_2)$. When CAVI converges, it can be shown that $(\tilde{\mu}_1, \tilde{\mu}_2)^\intercal = \mu$, giving
$$
\tilde{\mu}_1 = \mu_1, \qquad \tilde{\mu}_2 = \mu_2.
$$
In this case, CAVI gives Gaussian posteriors.
### Monte Carlo Variational Inference (MCVI)
For big data situations, the variational expectation term can be (1) very expensive to compute and (2) unavailable in closed form. We can also add some more complexity to the posterior instead of just having a mean-field approximation. Recall the bound:
$$
\mathcal{L}(q; p) = KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)].
$$
MCVI calculates the variational expectation using Monte Carlo integration
$$
\mathbb{E}_{z\sim q(z)}[\log p(y_i|z)] \approx \frac{1}{M}\sum_{j=1}^M \log p(y_i|z^j),\qquad z^j\sim q(z).
$$
Even better, we can calculate this using mini-batches:
$$
\sum_{i=1}^n\mathbb{E}_{z\sim q(z)}[\log p(y_i|z)] = \mathbb{E}_{S\sim \{1,\ldots,n\}}\left[\frac{n}{|S|}\sum_{i\in S} \mathbb{E}_q[\log p(y_i|z)] \right],
$$
where the inner expectation can be calculated as before. Now, to minimise $\mathcal{L}(q; p)$, we differentiate with respect to the variational parameters, which we call $\theta$. Therefore, we need
\begin{align*}
\nabla_\theta \mathcal{L}(q; p) =& \nabla_\theta\left[KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] \right] \\
=& \nabla_\theta \left[ \frac{1}{M}\sum_{j=1}^M \log\frac{q(z^j)}{p(z^j)} \right] - \nabla_\theta\left[\mathbb{E}_{S\sim \{1,\ldots,n\}}\left[\frac{n}{|S|}\sum_{i\in S} \frac{1}{M}\sum_{j=1}^M \log p(y_i|z^j)\right] \right],
\end{align*}
where $z^j\sim q(z)$. We can get rid of the expectation with respect to the mini-batches and get a nice approximation for the bound for each batch $S$.
**Reparameterisation Trick/Law of the Unconscious Statistician (LOTUS)**
LOTUS basically refers to the identity:
$$
E_X[f(X)] = \int f(x)\, p(x)\, dx = \int f(g(\epsilon))\, p(\epsilon)\, d\epsilon = E_\epsilon[f(g(\epsilon))]
$$
for $x=g(\epsilon)$, via the inverse function theorem and the change of variable theorem. The reparameterisation trick thus makes it easier to compute the bound by allowing us to sample from a simpler distribution $p(\epsilon)$ to get $q(z)$:
\begin{align*}
\nabla_\theta \mathcal{L}(q; p) =& \nabla_\theta\left[KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] \right] \\
=& \nabla_\theta\left[KL(q(z) || p(z)) - \mathbb{E}_{\epsilon}[\log p(y|g_\theta(\epsilon))] \right]\\
=& \nabla_\theta KL(q(z) || p(z)) - \mathbb{E}_{\epsilon}[\nabla_g \log p(y|g_\theta(\epsilon)) \times \nabla_\theta g_\theta(\epsilon)].
\end{align*}
Then repeat using the same MCVI integration method to approximate the variational expectation. In practice, we can also use automatic differentiation to calculate the gradients.
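As a minimal sketch of the trick in PyTorch (this toy example is not part of the original material: it fits a reparameterised Gaussian $q(z)=\mathcal{N}(\mu,\sigma^2)$ to a fixed Gaussian target by minimising a Monte Carlo estimate of the KL, with all gradients obtained by automatic differentiation through $z = \mu + \sigma\epsilon$):
```
import torch

# Unnormalised log-density of the target N(3, 0.5^2)
def log_p(z):
    return -0.5 * ((z - 3.0) / 0.5) ** 2

mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    eps = torch.randn(64)                    # sample from the fixed base distribution
    z = mu + torch.exp(log_sigma) * eps      # reparameterisation: z = g_theta(eps)
    # Monte Carlo estimate of KL(q || p) up to a constant: E_q[log q(z) - log p(z)]
    log_q = -0.5 * ((z - mu) / torch.exp(log_sigma)) ** 2 - log_sigma
    loss = (log_q - log_p(z)).mean()
    loss.backward()
    opt.step()

print(mu.item(), torch.exp(log_sigma).item())   # should approach 3 and 0.5
```
The same pattern (`mu + sigma * noise`) appears inside `_sample_latent` in the VAE implementation below.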
**Example: Variational Autoencoders (VAEs)**
Model (Taken from https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html)

**(1)**
The decoder represents the likelihood $p(y|z)$, where $y$ is an image. In the upcoming example, we have
$$
-\log p(y|z) = -\log \mathcal{N}(y; f_\theta(z), I) = \tfrac{1}{2}||y - f_\theta(z)||_2^2 + const,
$$
the MSE loss.
**(2)**
The prior is $z\sim \mathcal{N}(0, I)$.
**(3)**
As you will see in many applications, people often use only 1 sample to calculate the variational expectation, i.e. taking $M=1$.
**(4)**
The variational distribution that we are going for is $$q(z|y) = N(g_\phi(y)[0], g_\phi(y)[1] I),$$
where the variational distribution is parameterised by the encoder network.
**(5)**
We note that we can actually analytically compute the KL divergence as they are 2 Gaussians (proceed to Wikipedia for the formula...)
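For reference, with a diagonal Gaussian $q(z|y)=\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ and the standard normal prior $p(z)=\mathcal{N}(0, I)$, the closed form is
$$
KL\big(q(z|y)\,\|\,p(z)\big) = \frac{1}{2}\sum_{d=1}^{D}\left(\mu_d^2 + \sigma_d^2 - \log\sigma_d^2 - 1\right),
$$
which is exactly what the `latent_loss` function in the code below computes (except that it averages over the latent dimensions instead of summing).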
## Experiments
```
# from https://github.com/ethanluoyc/pytorch-vae/blob/master/vae.py
import torch
from torch.autograd import Variable
import numpy as np
import torch.nn.functional as F
import torchvision
from torchvision import transforms
import torch.optim as optim
from torch import nn
import matplotlib.pyplot as plt
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
class Normal(object):
    """Small container for a Gaussian's parameters (kept from the original repo; unused below)."""
    def __init__(self, mu, sigma, log_sigma, v=None, r=None):
        self.mu = mu
        self.sigma = sigma  # either stdev diagonal itself, or stdev diagonal from decomposition
        self.logsigma = log_sigma
        dim = mu.size()     # torch tensors expose .size(), not .get_shape()
        if v is None:
            v = torch.FloatTensor(*dim)
        if r is None:
            r = torch.FloatTensor(*dim)
        self.v = v
        self.r = r
class Encoder(torch.nn.Module):
def __init__(self, D_in, H, D_out):
super(Encoder, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
x = F.relu(self.linear1(x))
return F.relu(self.linear2(x))
class Decoder(torch.nn.Module):
def __init__(self, D_in, H, D_out):
super(Decoder, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
x = F.relu(self.linear1(x))
return F.relu(self.linear2(x))
class VAE(torch.nn.Module):
latent_dim = 8
def __init__(self, encoder, decoder):
super(VAE, self).__init__()
self.encoder = encoder
self.decoder = decoder
self._enc_mu = torch.nn.Linear(100, 8)
self._enc_log_sigma = torch.nn.Linear(100, 8)
def _sample_latent(self, h_enc):
"""
Return the latent normal sample z ~ N(mu, sigma^2)
"""
mu = self._enc_mu(h_enc)
log_sigma = self._enc_log_sigma(h_enc)
sigma = torch.exp(log_sigma)
std_z = torch.from_numpy(np.random.normal(0, 1, size=sigma.size())).float()
self.z_mean = mu
self.z_sigma = sigma
return mu + sigma * Variable(std_z, requires_grad=False) # Reparameterization trick
def forward(self, state):
h_enc = self.encoder(state)
z = self._sample_latent(h_enc)
return self.decoder(z)
def latent_loss(z_mean, z_stddev):
mean_sq = z_mean * z_mean
stddev_sq = z_stddev * z_stddev
return 0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1)
input_dim = 28 * 28
batch_size = 32
transform = transforms.Compose(
[transforms.ToTensor()])
mnist = torchvision.datasets.MNIST('./', download=True, transform=transform)
dataloader = torch.utils.data.DataLoader(mnist, batch_size=batch_size,
shuffle=True, num_workers=2)
print('Number of samples: ', len(mnist))
encoder = Encoder(input_dim, 100, 100)
decoder = Decoder(8, 100, input_dim)
vae = VAE(encoder, decoder)
criterion = nn.MSELoss()
optimizer = optim.Adam(vae.parameters(), lr=0.001)
l = None
for epoch in range(5):
for i, data in enumerate(dataloader, 0):
inputs, classes = data
inputs, classes = Variable(inputs.resize_(batch_size, input_dim)), Variable(classes)
optimizer.zero_grad()
dec = vae(inputs)
ll = latent_loss(vae.z_mean, vae.z_sigma)
loss = criterion(dec, inputs) + ll
loss.backward()
optimizer.step()
l = loss.item()
print(epoch, l)
plt.imshow(vae(inputs).data[0].numpy().reshape(28, 28), cmap='gray')
plt.show(block=True)
plt.imshow(inputs[0].numpy().reshape(28, 28), cmap='gray')
```
### Normalising Flows
Using a "nice" class of diffeomorphisms $T_1, \dots, T_L$ (ones whose Jacobian determinants are cheap to compute, e.g. diagonal or triangular Jacobians), we can apply the change of variables formula:
\begin{align*}
q(z_L) = q(z_0) \prod_{l=1}^L |\det(\nabla_{z_{l-1}} T_l(z_{l-1}))|^{-1}
\end{align*}
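As a minimal illustration (not part of the original material, all names illustrative), here is a sketch of how the log-density can be tracked through a chain of element-wise affine transforms, whose Jacobians are diagonal so each log-determinant is just a sum of log-scales:
```
import math
import torch

torch.manual_seed(0)
num_flows, dim = 3, 2
log_scales = [torch.randn(dim) for _ in range(num_flows)]  # s_l, the flow "parameters"
shifts = [torch.randn(dim) for _ in range(num_flows)]      # t_l

z = torch.randn(16, dim)                                   # z_0 ~ q(z_0) = N(0, I)
log_q = (-0.5 * z ** 2 - 0.5 * math.log(2 * math.pi)).sum(dim=-1)

for s, t in zip(log_scales, shifts):
    z = torch.exp(s) * z + t   # T_l(z_{l-1}): element-wise affine -> diagonal Jacobian
    log_q = log_q - s.sum()    # log q(z_l) = log q(z_{l-1}) - log|det J_l|, with det J_l = exp(sum(s))

print(z.shape, log_q.shape)    # transformed samples and their log-density under q(z_L)
```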
# Inference acceleration of `T5` for large batch size / long sequence length / large models
Every week or so, a new impressive few-shot learning work taking advantage of autoregressive models is released by some team around the world.
Still, `LLM` inference is rarely discussed and few projects focus on this aspect.
In this notebook, we describe our approach to significantly improve autoregressive model latency.
We plan to intensively test large autoregressive models, so we want something:
* which **scales**: the improvement exists on small and large models, for short and long sequences, in greedy and beam search;
* This is very important in few-shot learning, where sequences are most of the time hundreds or thousands of tokens long and beam search is used to improve text quality.
* that has **no hidden cost**: no big increase in memory usage, no degradation in quality of generated text, support state-of-the-art decoding algorithms;
* that is **generic**: works for any transformer based architecture, and not specific to an inference engine;
* that is **easy to maintain**: no hard-coded behaviors or other technical debt if it doesn't bring a clear advantage.
To be clear, **we are not targeting the best performance ever but the right trade off** (for us at least) between simplicity to use/maintain and acceptable latency.
## The challenge
In most situations, performing inference with `Onnx Runtime` or `TensorRT` usually brings large improvements over `Pytorch` implementations.
It's very true with `transformer` based models.
The main reason is that these tools will perform `kernel fusions` (merging several operations into a single one) and therefore reduce the number of memory bounded operations. Sometimes they also replace some operations by a much faster approximation.
In the very specific case of autoregressive languages, things are a bit more complicated.
On most `Pytorch` implementations of these models, there is a `cache` of `K` and `V` values.
Let's remind us that in attention blocks, each token is projected on 3 matrices called `Query`, `Key`, and `Value`.
Then, those projections will be used to compute a representation of each token which takes into account the information from the related other tokens of the sequence.
As autoregressive models generate the sequence one token at a time, they would have to recompute the final representation of all past tokens for each new token to generate.
Because each token can only attend to the past, the result of these computations never changes; therefore one simple trick to reduce latency is to just memorize them and reuse them later, avoiding lots of computation.
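As an illustration of this trick, here is a generic single-head sketch of a decoding step with a `K`/`V` cache (illustrative names and shapes, not the actual `Hugging Face` code):
```
import torch

def attend_with_cache(q_new, k_new, v_new, cache):
    """One decoding step: append the new K/V projections to the cache and attend over the full past.

    q_new, k_new, v_new: (batch, 1, d) projections of the newly generated token.
    cache: dict holding previously computed K/V of shape (batch, t, d); empty on the first step.
    """
    k = torch.cat([cache["k"], k_new], dim=1) if "k" in cache else k_new
    v = torch.cat([cache["v"], v_new], dim=1) if "v" in cache else v_new
    cache["k"], cache["v"] = k, v                              # memorised for the next step
    scores = q_new @ k.transpose(1, 2) / k.shape[-1] ** 0.5    # (batch, 1, t): only 1 query row
    return torch.softmax(scores, dim=-1) @ v                   # (batch, 1, d)
```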
Out of the box, the cache mechanism can't be exported to `Onnx` from `Hugging Face` models (and all other `Pytorch` implementations we are aware of).
The reason is that those models are not `torchscript` scripting compliant (it requires `Pytorch` code to follow some [restrictive rules](https://pytorch.org/docs/stable/jit_builtin_functions.html)).
Because of that, `Onnx` export is done through `tracing`, which erases any control flow instruction (including the `If` instruction used to enable the cache or not).
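To make this limitation concrete, here is a tiny sketch (unrelated to `T5` itself) showing that a traced graph only keeps the branch taken for the example inputs:
```
import torch

def forward(x, use_cache):
    if use_cache:          # data-dependent control flow
        return x * 3
    return x + 1

# tracing with use_cache=False bakes the `else` branch into the graph (a TracerWarning is emitted)
traced = torch.jit.trace(forward, (torch.ones(2), torch.tensor(False)))
print(traced(torch.ones(2), torch.tensor(True)))  # tensor([2., 2.]): still x + 1, the `if` is gone
```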
## Existing solutions
Some interesting solutions targeting inference latency that we have considered and/or tested:
* [TensorRT](https://developer.nvidia.com/blog/optimizing-t5-and-gpt-2-for-real-time-inference-with-TensorRT/), which targets `GPU`, heavily optimizes the computation graph, making `T5` inference very fast (they report a 10X speedup on `small-T5`). The trick is that it doesn't use any cache (see below for more details), so it's very fast on short sequences and small models, as it avoids many memory bounded operations by redoing the full computation again and again... but as several users have already found ([1](https://github.com/NVIDIA/TensorRT/issues/1807), [2](https://github.com/NVIDIA/TensorRT/issues/1642), [3](https://github.com/NVIDIA/TensorRT/issues/1799), [4](https://github.com/NVIDIA/TensorRT/issues/1845), ...), this approach doesn't scale when the computation intensity increases, i.e., when base or large models are used instead of a small one, when generation is done on moderately long sequences of a few hundred tokens, or if beam search is used instead of a greedy search;
* [FastT5](https://github.com/Ki6an/fastT5), which targets `CPU`, exports 2 versions of the decoder, one with cache and one without. You need the `no cache` version to compute the first token and the first `past state` tensors (aka the cached tensors), and for all the other tokens you use the `cache` version of the computation graph. Basically, it makes the memory footprint 2 times bigger as all weights are duplicated. As generative models tend to be huge, they work around the memory issue by using dynamic `int-8` quantization, so the final memory footprint of the decoders is the same as `Hugging Face` in `FP16`... but 1/ dynamic quantization only works on `CPU`, and 2/ according to several reports dynamic quantization significantly degrades generative model output, to a point where it may make them useless ([1](https://github.com/huggingface/transformers/issues/2466#issuecomment-572781378), [2](https://github.com/huggingface/transformers/issues/2466#issuecomment-982710520), and [here](https://github.com/microsoft/onnxruntime/issues/6549#issuecomment-1016948837) you can find a report in the `GPT-2` context from a Microsoft engineer: "*int8 quantization are not recommended due to accuracy loss*").
* [Onnx Runtime T5 export tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers/models/t5) targets both `GPU` and `CPU`. It works in a similar way to `FastT5`: the `decoder` module is exported 2 times. Like `FastT5`, the memory footprint of the decoder part is doubled (this time there is no `int-8` quantization).
* [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/t5_guide.md#translation-process) targets `GPU` and is a mix of `Pytorch` and `CUDA`/`C++` dedicated code. The performance boost is huge on `T5`: they report a 10X speedup, like `TensorRT`. However, it may significantly decrease the accuracy of the model ([here](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/t5_guide.md#translation-process), when sampling is enabled, it reduces the BLEU score of a translation task by 8 points; the cause may be a bug in the decoding algorithm or an overly aggressive approximation), plus the speedup is computed on a [translation task](https://github.com/NVIDIA/FasterTransformer/blob/main/examples/pytorch/decoding/utils/translation/test.en) where sequences are 25 tokens long on average. In our experience, improvements on very short sequences tend to decrease by large margins on longer sequences. It seems to us that their objectives are different from ours.
With the existing solutions, you need to choose one or two items of the following:
* double decoder memory footprint;
* be slower than `Hugging Face` for moderately long sequence length / beam search;
* degrade output quality.
## Our approach
Our approach to make autoregressive `transformer` based models 2X faster than the `Hugging Face` `Pytorch` implementation (the baseline) is based on 3 key ingredients:
* storing 2 computation graphs in a single `Onnx` file: this lets us have both cache and no cache support without having any duplicated weights,
* `zero copy` to retrieve output from `Onnx Runtime`: we build on our past work to connect `Pytorch` tensors (used in the decoding part) and `Onnx Runtime` in the most efficient way. Our previous work avoided `host` <-> `GPU` tensor copies, but it still required a `GPU` <-> `GPU` copy. It is now part of the official `Onnx Runtime` documentation (apparently [thanks to our project](https://github.com/microsoft/onnxruntime/pull/10651)!). This time we found a way to directly expose the internal state of `Onnx Runtime` through a `Pytorch` tensor in a zero-copy way. Combined with the cache mechanism, this is responsible for most of the speedup we have obtained.
* a generic tool to convert any model (whatever the architecture) to `FP16` without any risk of out-of-range values or rounding to zero: `FP16` is still the best way to reduce the memory footprint of a model. The main issue is that some nodes may output values outside of the `FP16` range, resulting in `NaN` output; moreover, very small values may be rounded to zero, which is an issue for `log` and `div` operations. We have built a tool which detects those nodes so we can keep their precision in `FP32`. Reducing the memory footprint of these models is quite important, not just because they tend to be huge, but also because past states (that we cache) and internal buffers can be even bigger than the weights of the model itself.
## Results
As demonstrated at the end of this notebook, **we are able to provide a X2 speedup** whatever the batch size, the sequence length or the model size.
> For `TensorRT` we have our own implementation of our approach described above which helps to provide similar latency to `Onnx Runtime`. It's in a Python script in the same folder as this notebook. We had to work around a documented limitation. Because of that the code is slightly more complex and we wanted to keep this notebook easy to follow.
```
! nvidia-smi
```
## `Onnx Runtime` compilation
Version 1.11.1 of `Onnx Runtime` and older have a bug which makes them much slower when most inputs are used by subgraphs of an `If` node.
Unfortunately, it's exactly what we will do below, so we need to compile our own version of `Onnx Runtime` until version 1.12 is released (in June 2022).
Code below has been tested on Ubuntu 22.04 and supposes that your machine has `CUDA` 11.4 installed.
If not, use the Docker image of this library.
We use a specific commit of `Onnx Runtime` with a better management of `If`/`Else`/`Then` `Onnx` nodes:
```shell
git clone --recursive https://github.com/Microsoft/onnxruntime
cd onnxruntime
git checkout -b fix_if 81d78706feb1dc923f3e43f7ba8ac30b55f5b19b
CUDACXX=/usr/local/cuda-11.4/bin/nvcc ./build.sh \
--config Release \
--build_wheel \
--parallel \
--use_cuda \
--cuda_home /usr/local/cuda-11.4 \
--cudnn_home /usr/lib/x86_64-linux-gnu/ \
--skip_test
# pip install ...
# other required dependencies
# pip install nvtx seaborn
```
On our machine, it takes around 20 minutes.
> to clear previous compilation, delete content of `./build` folder
```
import json
import random
from transformer_deploy.backends.ort_utils import get_keep_fp32_nodes
from transformer_deploy.backends.ort_utils import convert_fp16
import time
from typing import Callable, Dict, Optional, List
import matplotlib.pylab as plt
from onnxruntime import IOBinding
import numpy as np
import onnx
import torch
from pathlib import Path
from typing import Tuple
from onnx import GraphProto, ModelProto, helper
from torch.nn import Linear
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PretrainedConfig, T5ForConditionalGeneration, TensorType
from transformers.generation_utils import GenerationMixin
from transformers.modeling_outputs import BaseModelOutputWithPastAndCrossAttentions, Seq2SeqLMOutput
from transformers.models.t5.modeling_t5 import T5Stack
from nvtx import nvtx
from copy import copy
from transformer_deploy.backends.ort_utils import create_model_for_provider, inference_onnx_binding
from transformer_deploy.backends.pytorch_utils import convert_to_onnx
import seaborn as sns
import operator
from collections import defaultdict
import gc
```
## Loading `Hugging Face` model / tokenizer
Below we load the model and set global variables of this notebook.
```
np.random.seed(123)
torch.random.manual_seed(123)
# other possible values: t5-small, t5-base, t5-large. t5-3b should work when ORT library is fixed
model_name = "t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids: torch.Tensor = tokenizer(
"translate English to French: This model is now very fast!", return_tensors=TensorType.PYTORCH
).input_ids
input_ids = input_ids.type(torch.int32).to("cuda")
pytorch_model: T5ForConditionalGeneration = AutoModelForSeq2SeqLM.from_pretrained(model_name)
pytorch_model = pytorch_model.eval()
pytorch_model = pytorch_model.cuda()
pytorch_model.config.use_cache = True # not really needed, just to make things obvious
num_layers = pytorch_model.config.num_layers
# tolerance between Onnx FP16 and Pytorch FP32.
# Rounding errors increase with number of layers: 1e-1 for t5-small, 5e-1 for large, 3 for 3b. 11b not tested.
# Do not impact final quality
fp16_default_tolerance = 5e-1
def are_equal(a: torch.Tensor, b: torch.Tensor, atol: float = fp16_default_tolerance) -> None:
assert np.allclose(a=a.detach().cpu().numpy(), b=b.detach().cpu().numpy(), atol=atol), f"{a}\n\nVS\n\n{b}"
def save_onnx(proto: onnx.ModelProto, model_path: str) -> None:
# protobuf doesn't support files > 2GB; in this case, weights are stored in another binary file
save_external_data: bool = proto.ByteSize() > 2 * 1024**3
filename = Path(model_path).name
onnx.save_model(
proto=proto,
f=model_path,
save_as_external_data=save_external_data,
all_tensors_to_one_file=True,
location=filename + ".data",
)
def prepare_folder(path: str) -> Tuple[str, str]:
p = Path(path)
p.mkdir(parents=True, exist_ok=True)
[item.unlink() for item in Path(path).glob("*") if item.is_file()]
return path + "/model.onnx", path + "/model_fp16.onnx"
# create/clean folders where each model will be stored.
# as multiple files will be saved for T5-3B and 11B, we use different folders for the encoder and the decoders.
encoder_model_path, encoder_fp16_model_path = prepare_folder(path="./test-enc")
dec_cache_model_path, dec_cache_fp16_model_path = prepare_folder(path="./test-dec-cache")
dec_no_cache_model_path, dec_no_cache_fp16_model_path = prepare_folder(path="./test-dec-no-cache")
_, dec_if_fp16_model_path = prepare_folder(path="./test-dec-if")
# some outputs to compare with
out_enc: BaseModelOutputWithPastAndCrossAttentions = pytorch_model.encoder(input_ids=input_ids)
out_full: Seq2SeqLMOutput = pytorch_model(input_ids=input_ids, decoder_input_ids=input_ids)
```
# Export to Onnx
First step is to export the model to `Onnx` graph.
`T5` is made of 2 parts, an `encoder` and a `decoder`.
## Export encoder part
The `encoder` part export doesn't present any specific challenge.
We use the export function built for `Bert`-like models; the exported model will then be converted to `FP16`.
```
pytorch_model = pytorch_model.to("cuda")
convert_to_onnx(
model_pytorch=pytorch_model.encoder,
output_path=encoder_model_path,
inputs_pytorch={"input_ids": input_ids},
var_output_seq=True,
quantization=False,
)
```
## Conversion to mixed precision
### Why mixed precision?
As `T5` can have up to 11 billion parameters, it requires lots of computation, and even more importantly, it takes lots of space in device memory.
We convert the `encoder` to half precision.
If we blindly convert the whole graph to `FP16`, we will have 2 issues:
* `overflow`: some nodes, like exponential nodes, will try to output values out of the `FP16` range, at the end you get some `NaN`.
* `underflow`: values very close to 0 will be rounded to 0, which may be an issue for some operations like `Div` and `Log`.
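A quick, purely illustrative demonstration of both failure modes:
```
import torch

big = torch.tensor(70000.0)       # fine in FP32, but above the FP16 maximum (~65504)
print(big.half())                 # tensor(inf, dtype=torch.float16) -> overflow

tiny = torch.tensor(1e-8)         # below the smallest FP16 subnormal (~6e-8)
print(tiny.half())                # tensor(0., dtype=torch.float16) -> underflow
print(torch.log(tiny.half()))     # tensor(-inf, dtype=torch.float16) -> breaks Log/Div nodes
```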
### The challenge
Mixed precision is done out of the box by `Pytorch` and follows some strict rules described in https://pytorch.org/docs/stable/amp.html
Those rules are generic and quite conservative. Many nodes will be kept in `FP32` even if their output is always in the `FP16` range.
Other approaches we have found:
* `Onnx Runtime T5` [demo](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/models/t5/t5_helper.py): provides a list of operations to keep in `FP32` (Pow, ReduceMean, Add, Sqrt, Div, Mul, Softmax, Relu). We have found this approach to need more and more tweaking on larger networks and on the encoder part (the decoder part seems simpler to manage, https://github.com/microsoft/onnxruntime/issues/11119);
* `TensorRT T5` [demo](https://github.com/NVIDIA/TensorRT/tree/main/demo/HuggingFace/notebooks): provides the exact pattern of nodes to keep in `FP32`. This approach is much more effective, but it implies lots of code to describe the patterns and may not generalize well: what works for a `base` model may not work for an 11-billion-parameter model. And it does not scale to other architectures without adaptations; for a library like `transformer-deploy`, it would lead to unmaintainable technical debt.
### Our approach
We have chosen an architecture-agnostic approach: we inject random input sequences and audit the output of each computation graph node; finally, we make a list of all nodes that have output values out of the `FP16` range or close to zero, and perform some cleaning (to avoid unnecessary casting).
We have chosen to use random values only for the `input_ids` field as the search space is limited: positive integers lower than the vocabulary size.
You can also decide to send real data from a dataset you want to work on.
To finish, we provide the list of nodes to keep in `FP32` to the conversion function.
```
def get_random_input_encoder() -> Dict[str, torch.Tensor]:
max_seq = 512
seq_len = random.randint(a=1, b=max_seq)
batch = max_seq // seq_len
random_input_ids = torch.randint(
low=0, high=tokenizer.vocab_size, size=(batch, seq_len), dtype=torch.int32, device="cuda"
)
inputs = {"input_ids": random_input_ids}
return inputs
keep_fp32_encoder = get_keep_fp32_nodes(onnx_model_path=encoder_model_path, get_input=get_random_input_encoder)
assert len(keep_fp32_encoder) > 0
enc_model_onnx = convert_fp16(onnx_model=encoder_model_path, nodes_to_exclude=keep_fp32_encoder)
save_onnx(proto=enc_model_onnx, model_path=encoder_fp16_model_path)
del enc_model_onnx
torch.cuda.empty_cache()
gc.collect()
print(f"20 first nodes to keep in FP32 (total {len(keep_fp32_encoder)}):")
keep_fp32_encoder[:20]
```
Compare the output of the `Onnx` `FP16` model with the `Pytorch` one
```
enc_fp16_onnx = create_model_for_provider(encoder_fp16_model_path, "CUDAExecutionProvider")
enc_fp16_onnx_binding: IOBinding = enc_fp16_onnx.io_binding()
enc_onnx_out = inference_onnx_binding(
model_onnx=enc_fp16_onnx,
binding=enc_fp16_onnx_binding,
inputs={"input_ids": input_ids},
device=input_ids.device.type,
)["output"]
are_equal(a=enc_onnx_out, b=out_enc.last_hidden_state)
```
## Export decoder
The decoder export part is more challenging:
* we first need to wrap it in a `Pytorch` model to add the final layer, so its output provides scores for each vocabulary token and can be directly used by the `Hugging Face` `decoding` algorithm
* then, we need to manipulate the `Onnx` graph to add support of `Key`/`Value` cache
The second point is the key ingredient of the observed acceleration of `Onnx` vs `Hugging Face` inference.
### Wrapper to include some post-processing on the decoder output
The post-processing is mainly a projection of the decoder output on a matrix with one of its dimensions equal to the model vocabulary size, so we have scores for each possible token.
```
class ExportT5(torch.nn.Module):
def __init__(self, decoder: T5Stack, lm_head: Linear):
super(ExportT5, self).__init__()
self.decoder = decoder
self.lm_head = lm_head
def forward(self, input_ids: torch.Tensor, encoder_hidden_states: torch.Tensor, past_key_values: Tuple = None):
out_dec = self.decoder.forward(
input_ids=input_ids, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values
)
# Rescale output before projecting on vocab
out_dec["last_hidden_state"] = out_dec["last_hidden_state"] * (pytorch_model.model_dim**-0.5)
out_dec["last_hidden_state"] = self.lm_head(out_dec["last_hidden_state"])
return out_dec
pytorch_model.cuda()
model_decoder = ExportT5(decoder=pytorch_model.decoder, lm_head=pytorch_model.lm_head).eval()
out_model_export: torch.Tensor = model_decoder(input_ids=input_ids, encoder_hidden_states=out_enc.last_hidden_state)
are_equal(a=out_model_export["last_hidden_state"], b=out_full.logits)
```
### Export decoder part to `Onnx`
Below we export 2 versions of the decoder, one without cache support and one with it.
Model inputs with past states (cache support):
```
model_decoder.cuda()
# decoder output one step before
out_dec_pytorch = model_decoder(input_ids=input_ids[:, :-1], encoder_hidden_states=out_enc.last_hidden_state)
model_inputs = {
"input_ids": input_ids[:, -1:].type(torch.int32),
"encoder_hidden_states": out_enc.last_hidden_state,
"past_key_values": out_dec_pytorch.past_key_values,
}
input_names = ["input_ids", "encoder_hidden_states"]
for i in range(num_layers):
input_names.append(f"past_key_values.{i}.decoder.key")
input_names.append(f"past_key_values.{i}.decoder.value")
input_names.append(f"past_key_values.{i}.encoder.key")
input_names.append(f"past_key_values.{i}.encoder.value")
output_names = ["logits"]
for i in range(num_layers):
output_names.append(f"present.{i}.decoder.key")
output_names.append(f"present.{i}.decoder.value")
output_names.append(f"present.{i}.encoder.key")
output_names.append(f"present.{i}.encoder.value")
dynamic_axis = {
"input_ids": {0: "batch", 1: "encoder_sequence"},
"encoder_hidden_states": {0: "batch", 1: "encoder_sequence"},
"logits": {0: "batch", 1: "decoder_sequence"},
}
for i in range(num_layers):
dynamic_axis[f"past_key_values.{i}.decoder.key"] = {0: "batch", 2: "past_decoder_sequence"}
dynamic_axis[f"past_key_values.{i}.decoder.value"] = {0: "batch", 2: "past_decoder_sequence"}
dynamic_axis[f"past_key_values.{i}.encoder.key"] = {0: "batch", 2: "encoder_sequence_length"}
dynamic_axis[f"past_key_values.{i}.encoder.value"] = {0: "batch", 2: "encoder_sequence_length"}
dynamic_axis[f"present.{i}.decoder.key"] = {0: "batch", 2: "decoder_sequence"}
dynamic_axis[f"present.{i}.decoder.value"] = {0: "batch", 2: "decoder_sequence"}
dynamic_axis[f"present.{i}.encoder.key"] = {0: "batch", 2: "encoder_sequence_length"}
dynamic_axis[f"present.{i}.encoder.value"] = {0: "batch", 2: "encoder_sequence_length"}
```
Export of the model with cache support:
```
with torch.no_grad():
pytorch_model.config.return_dict = True
pytorch_model.eval()
# export can work with named args, but the dict containing the named args has to be the last element of the args tuple
torch.onnx.export(
model_decoder,
(model_inputs,),
f=dec_cache_model_path,
input_names=input_names,
output_names=output_names,
dynamic_axes=dynamic_axis,
do_constant_folding=True,
opset_version=13,
)
```
Export of the model computing Key/Values for the whole sequence (we basically just remove past states from the input; the `Pytorch` code will recompute them):
```
model_inputs_no_cache = {
"input_ids": input_ids,
"encoder_hidden_states": out_enc.last_hidden_state,
}
with torch.no_grad():
pytorch_model.config.return_dict = True
pytorch_model.eval()
# export can work with named args, but the dict containing the named args has to be the last element of the args tuple
torch.onnx.export(
model_decoder,
(model_inputs_no_cache,),
f=dec_no_cache_model_path,
input_names=list(model_inputs_no_cache.keys()),
output_names=output_names,
dynamic_axes={k: v for k, v in dynamic_axis.items() if "past_key_values" not in k},
do_constant_folding=True,
opset_version=13,
)
_ = pytorch_model.cpu() # free cuda memory
torch.cuda.empty_cache()
```
## Conversion to mixed precision
The decoder module has different kinds of inputs: `input_ids`, but also some float tensors.
It would be a bit more complicated to generate random values for those tensors: in theory they can take any value in the `FP32` range, but because of how models are initialized and trained, most of them are close to 0.
To avoid too much guessing, we have decided to just take the output of the real model fed with random `input_ids`.
```
def get_random_input_no_cache() -> Dict[str, torch.Tensor]:
inputs = get_random_input_encoder()
encoder_hidden_states = inference_onnx_binding(
model_onnx=enc_fp16_onnx,
binding=enc_fp16_onnx_binding,
inputs=inputs,
device="cuda",
clone_tensor=False,
)["output"]
# it will serve as input of a FP32 model
inputs["encoder_hidden_states"] = encoder_hidden_states.type(torch.float32)
return inputs
keep_fp32_no_cache = get_keep_fp32_nodes(onnx_model_path=dec_no_cache_model_path, get_input=get_random_input_no_cache)
onnx_model_no_cache_fp16 = convert_fp16(onnx_model=dec_no_cache_model_path, nodes_to_exclude=keep_fp32_no_cache)
save_onnx(proto=onnx_model_no_cache_fp16, model_path=dec_no_cache_fp16_model_path)
print(f"20 first nodes to keep in FP32 (total {len(keep_fp32_no_cache)}):")
keep_fp32_no_cache[:20]
dec_no_cache_ort_model = create_model_for_provider(dec_no_cache_model_path, "CUDAExecutionProvider")
# use info from tokenizer size and max shape provided through the command line
def get_random_input_cache() -> Dict[str, torch.Tensor]:
inputs = get_random_input_no_cache()
dec_past_states = inference_onnx_binding(
model_onnx=dec_no_cache_ort_model,
inputs=inputs,
device="cuda",
clone_tensor=False,
)
for k, v in dec_past_states.items():
if k == "logits":
continue
new_k = k.replace("present", "past_key_values")
inputs[new_k] = v
batch, _ = inputs["input_ids"].shape
complement = torch.randint(low=0, high=tokenizer.vocab_size, size=(batch, 1), dtype=torch.int32, device="cuda")
inputs["input_ids"] = torch.concat(tensors=[inputs["input_ids"], complement], dim=1)
return inputs
keep_fp32_cache = get_keep_fp32_nodes(onnx_model_path=dec_cache_model_path, get_input=get_random_input_cache)
del dec_no_cache_ort_model # free cuda memory
torch.cuda.empty_cache()
gc.collect()
onnx_model_cache_fp16 = convert_fp16(onnx_model=dec_cache_model_path, nodes_to_exclude=keep_fp32_cache)
save_onnx(proto=onnx_model_cache_fp16, model_path=dec_cache_fp16_model_path)
print(f"20 first nodes to keep in FP32 (total {len(keep_fp32_cache)}):")
keep_fp32_cache[:20]
```
## Merge `Onnx` computation graph to deduplicate weights
Finally, we will merge the 2 decoders together.
The idea is simple:
* we prefix the node / edge names of one of them to avoid naming collision
* we deduplicate the weights (the same weight matrix will have different names in the 2 models)
* we join the 2 computation graphs through an `If` node
* we generate the `Onnx` file
The new model will take a new input, `enable_cache`. When it contains a `True` value, the computation graph with cache support is used.
> code below is written to be easy to read, but could be made much faster to run
```
prefix = "cache_node_"
mapping_initializer_cache_to_no_cache = dict()
# search for not-duplicated weights, called initializer in Onnx
to_add = list()
for node_cache in onnx_model_cache_fp16.graph.initializer:
found = False
for node_no_cache in onnx_model_no_cache_fp16.graph.initializer:
if node_cache.raw_data == node_no_cache.raw_data:
found = True
mapping_initializer_cache_to_no_cache[node_cache.name] = node_no_cache.name
break
if not found:
node_cache.name = prefix + node_cache.name
to_add.append(node_cache)
mapping_initializer_cache_to_no_cache[node_cache.name] = node_cache.name
onnx_model_no_cache_fp16.graph.initializer.extend(to_add)
# I/O model names should not be prefixed
model_io_names = [n.name for n in list(onnx_model_cache_fp16.graph.input) + list(onnx_model_cache_fp16.graph.output)]
# replace pointers to duplicated weights to their deduplicated version
for node in onnx_model_cache_fp16.graph.node:
for index, input_name in enumerate(node.input):
if input_name in model_io_names:
continue
node.input[index] = mapping_initializer_cache_to_no_cache.get(input_name, prefix + input_name)
for index, output_name in enumerate(node.output):
if output_name in model_io_names:
continue
node.output[index] = prefix + output_name
node.name = prefix + node.name
model_io_names = [n.name for n in list(onnx_model_cache_fp16.graph.input) + list(onnx_model_cache_fp16.graph.output)]
# prefix nodes to avoid naming collision
prefix = "init_"
cache = dict()
for node in onnx_model_no_cache_fp16.graph.initializer:
if node.name in model_io_names:
new_name = prefix + node.name
cache[node.name] = new_name
node.name = new_name
for node in onnx_model_no_cache_fp16.graph.node:
for input_index, n in enumerate(node.input):
node.input[input_index] = cache.get(n, n)
# mandatory for subgraph in if/else node
assert len(onnx_model_cache_fp16.graph.output) == len(
onnx_model_no_cache_fp16.graph.output
), f"{len(onnx_model_cache_fp16.graph.output)} vs {len(onnx_model_no_cache_fp16.graph.output)}"
# build a computation graph with cache support
graph_cache: onnx.GraphProto = onnx.helper.make_graph(
nodes=list(onnx_model_cache_fp16.graph.node),
name="graph-cache",
inputs=[],
outputs=list(onnx_model_cache_fp16.graph.output),
initializer=[],
)
# build a computation which doesn't need past states to run
graph_no_cache: onnx.GraphProto = onnx.helper.make_graph(
nodes=list(onnx_model_no_cache_fp16.graph.node),
name="graph-no-cache",
inputs=[],
outputs=list(onnx_model_no_cache_fp16.graph.output),
initializer=[],
)
# a new input to decide if we use past state or not
enable_cache_input = onnx.helper.make_tensor_value_info(name="enable_cache", elem_type=onnx.TensorProto.BOOL, shape=[1])
if_node = onnx.helper.make_node(
op_type="If",
inputs=["enable_cache"],
outputs=[o.name for o in list(onnx_model_no_cache_fp16.graph.output)],
then_branch=graph_cache,
else_branch=graph_no_cache,
)
# final model which can disable its cache
if_graph_def: GraphProto = helper.make_graph(
nodes=[if_node],
name="if-model",
inputs=list(onnx_model_cache_fp16.graph.input) + [enable_cache_input],
outputs=list(onnx_model_no_cache_fp16.graph.output),
initializer=list(onnx_model_no_cache_fp16.graph.initializer),
)
# serialization and cleaning
model_if: ModelProto = helper.make_model(
if_graph_def, producer_name="onnx-example", opset_imports=[helper.make_opsetid(onnx.defs.ONNX_DOMAIN, 13)]
)
save_onnx(proto=model_if, model_path=dec_if_fp16_model_path)
del model_if
torch.cuda.empty_cache()
gc.collect()
```
### Check `Onnx` decoder output
Compare `Onnx` output with and without cache, plus compare with `Pytorch` output.
```
pytorch_model = pytorch_model.cuda()
model_decoder = model_decoder.cuda()
input_ids = input_ids.cuda()
pytorch_model = pytorch_model.eval()
model_decoder = model_decoder.eval()
dec_onnx = create_model_for_provider(dec_if_fp16_model_path, "CUDAExecutionProvider", log_severity=3)
dec_onnx_binding: IOBinding = dec_onnx.io_binding()
```
## Zero copy output
Below, we check that the new model output is similar to the ones from `Pytorch`.
We use our new implementation of inference call.
The idea is the following:
* we ask `Onnx Runtime` to output a pointer to the `CUDA` array containing the result of the inference;
* we use `Cupy` API to wrap the array and provide information regarding tensor shape and type. `Cupy` doesn't own the data;
* we use `Dlpack` support to convert the `Cupy` tensor to `Pytorch`, another zero copy process.
This pipeline is unsafe, as the content of the tensor may change or disappear silently: only `Onnx Runtime` has control of the array containing the data, and it will overwrite it at the next inference call. Because we know that during text generation we discard each output before calling `Onnx Runtime` again, it works well in our case.
A second benefit of this approach is that we no longer have to guess the output shape.
Before using this approach, to avoid the output being stored in host memory (RAM), which made inference slower, we had to provide `Onnx Runtime` with a pointer to a `Pytorch` tensor of the right size. As the size changes with the sequence length (so for each generated token), we had to keep the logic to guess the size somewhere in the code. The new approach frees us from this burden.
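As a generic illustration of that kind of zero-copy hand-off (a sketch assuming `Cupy` is installed, not the exact `transformer-deploy` code):
```
import cupy as cp
from torch.utils.dlpack import from_dlpack

cp_array = cp.arange(6, dtype=cp.float16).reshape(2, 3)   # lives in GPU memory
torch_view = from_dlpack(cp_array.toDlpack())             # zero copy: both objects share the same CUDA buffer
cp_array *= 2
print(torch_view)  # reflects the in-place change made through Cupy
```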
```
pytorch_model = pytorch_model.half()
with torch.inference_mode():
out_enc_pytorch: BaseModelOutputWithPastAndCrossAttentions = pytorch_model.encoder(input_ids=input_ids)
previous_step_pytorch: BaseModelOutputWithPastAndCrossAttentions = model_decoder(
input_ids=input_ids[:, :-1], encoder_hidden_states=out_enc_pytorch.last_hidden_state
)
out_dec_pytorch: BaseModelOutputWithPastAndCrossAttentions = model_decoder(
input_ids=input_ids, encoder_hidden_states=out_enc_pytorch.last_hidden_state
)
def decoder_pytorch_inference(decoder_input_ids: torch.Tensor, encoder_hidden_states: torch.Tensor, **_):
with torch.inference_mode():
return model_decoder(input_ids=decoder_input_ids, encoder_hidden_states=encoder_hidden_states)
def decoder_onnx_inference(
decoder_input_ids: torch.Tensor,
encoder_hidden_states: torch.Tensor,
enable_cache: torch.Tensor,
past_key_values: Optional[torch.Tensor],
):
inputs_onnx_dict = {
"input_ids": decoder_input_ids,
"encoder_hidden_states": encoder_hidden_states,
"enable_cache": enable_cache,
}
if past_key_values is not None:
for index, (k_dec, v_dec, k_enc, v_enc) in enumerate(past_key_values):
inputs_onnx_dict[f"past_key_values.{index}.decoder.key"] = k_dec
inputs_onnx_dict[f"past_key_values.{index}.decoder.value"] = v_dec
inputs_onnx_dict[f"past_key_values.{index}.encoder.key"] = k_enc
inputs_onnx_dict[f"past_key_values.{index}.encoder.value"] = v_enc
result_dict = inference_onnx_binding(
model_onnx=dec_onnx,
inputs=inputs_onnx_dict,
binding=dec_onnx_binding, # recycle the binding
device=decoder_input_ids.device.type,
clone_tensor=False, # no memory copy -> best perf and lowest memory footprint!
)
past_states = list()
for index in range(pytorch_model.config.num_layers):
kv = (
result_dict[f"present.{index}.decoder.key"],
result_dict[f"present.{index}.decoder.value"],
result_dict[f"present.{index}.encoder.key"],
result_dict[f"present.{index}.encoder.value"],
)
past_states.append(kv)
return BaseModelOutputWithPastAndCrossAttentions(
last_hidden_state=result_dict["logits"],
past_key_values=past_states,
)
out_dec_onnx_no_cache = decoder_onnx_inference(
decoder_input_ids=input_ids,
encoder_hidden_states=out_enc_pytorch.last_hidden_state,
enable_cache=torch.tensor([False], device="cuda", dtype=torch.bool),
past_key_values=None,
)
are_equal(a=out_dec_onnx_no_cache.last_hidden_state[:, -1:, :], b=out_dec_pytorch.last_hidden_state[:, -1:, :])
# check that past states are identical between Onnx and Pytorch
assert len(out_dec_onnx_no_cache.past_key_values) == len(out_dec_pytorch.past_key_values)
for (o_dec_k, o_dev_v, o_enc_k, o_enc_v), (p_dec_k, p_dev_v, p_enc_k, p_enc_v) in zip(
out_dec_onnx_no_cache.past_key_values, out_dec_pytorch.past_key_values
):
are_equal(a=o_dec_k, b=p_dec_k)
are_equal(a=o_dev_v, b=p_dev_v)
are_equal(a=o_enc_k, b=p_enc_k)
are_equal(a=o_enc_v, b=p_enc_v)
out_dec_onnx_cache = decoder_onnx_inference(
decoder_input_ids=input_ids[:, -1:],
encoder_hidden_states=out_enc_pytorch.last_hidden_state,
enable_cache=torch.tensor([True], device="cuda", dtype=torch.bool),
past_key_values=previous_step_pytorch.past_key_values,
)
are_equal(a=out_dec_onnx_cache.last_hidden_state[:, -1:, :], b=out_dec_pytorch.last_hidden_state[:, -1:, :])
# check that past states are identical between Onnx and Pytorch
assert len(out_dec_onnx_cache.past_key_values) == len(out_dec_pytorch.past_key_values)
for (o_dec_k, o_dev_v, o_enc_k, o_enc_v), (p_dec_k, p_dev_v, p_enc_k, p_enc_v) in zip(
out_dec_onnx_cache.past_key_values, out_dec_pytorch.past_key_values
):
are_equal(a=o_dec_k, b=p_dec_k)
are_equal(a=o_dev_v, b=p_dev_v)
are_equal(a=o_enc_k, b=p_enc_k)
are_equal(a=o_enc_v, b=p_enc_v)
```
## Benchmarks!
Finally, we will compare the performance of 4 setups in end-to-end scenarios:
* `Pytorch`
* `Pytorch` + cache
* `Onnx`
* `Onnx` + cache
For the comparison, we first do a sanity check by just generating a short sequence (we already have checked that output tensors are OK).
Then we force each model to generate:
* 256 tokens + batch size 1 (similar to `TensorRT` demo)
* 1000 tokens + batch size 4
```
def encoder_onnx_inference(input_ids: torch.Tensor, **_) -> BaseModelOutputWithPastAndCrossAttentions:
last_hidden_state = inference_onnx_binding(
model_onnx=enc_fp16_onnx, # noqa: F821
inputs={"input_ids": input_ids},
device=input_ids.device.type,
binding=enc_fp16_onnx_binding,
)["output"]
return BaseModelOutputWithPastAndCrossAttentions(last_hidden_state=last_hidden_state.type(torch.float16))
def encoder_pytorch_inference(input_ids, **_) -> BaseModelOutputWithPastAndCrossAttentions:
with torch.inference_mode():
res = pytorch_model.encoder(input_ids=input_ids).type(torch.float16)
return res
# https://github.com/NVIDIA/TensorRT/blob/main/demo/HuggingFace/T5/export.py
class ExtT5(torch.nn.Module, GenerationMixin):
def __init__(self, config: PretrainedConfig, device: torch.device, encoder_func: Callable, decoder_func: Callable):
super(ExtT5, self).__init__()
self.main_input_name = "input_ids" # https://github.com/huggingface/transformers/pull/14803
self.config: PretrainedConfig = config
self.device: torch.device = device
self.encoder_func = encoder_func
self.decoder_func = decoder_func
self.use_cache = True
self.timings = list()
def get_encoder(self):
return self.encoder_func
def get_decoder(self):
return self.decoder_func
def set_cache(self, enable: bool) -> None:
self.use_cache = enable
# from transformers library (modeling_t5.py)
def _reorder_cache(self, past, beam_idx):
reordered_decoder_past = ()
for layer_past_states in past:
# get the correct batch idx from layer past batch dim
# batch dim of `past` is at 2nd position
reordered_layer_past_states = ()
for layer_past_state in layer_past_states:
# need to set correct `past` for each of the four key / value states
reordered_layer_past_states = reordered_layer_past_states + (
layer_past_state.index_select(0, beam_idx),
)
assert reordered_layer_past_states[0].shape == layer_past_states[0].shape
assert len(reordered_layer_past_states) == len(layer_past_states)
reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,)
return reordered_decoder_past
def prepare_inputs_for_generation(self, input_ids, past=None, use_cache=None, **kwargs) -> Dict[str, torch.Tensor]:
params = {
"encoder_hidden_states": kwargs["encoder_outputs"]["last_hidden_state"],
}
if past is None: # this is the 1st inferred token
self.timings = list()
if not self.use_cache:
past = None
if past is None:
params[self.main_input_name] = input_ids
params["enable_cache"] = torch.tensor([False], device="cuda", dtype=torch.bool)
else:
params[self.main_input_name] = input_ids[:, -1:]
params["enable_cache"] = torch.tensor([True], device="cuda", dtype=torch.bool)
params["past_key_values"] = past
return params
def forward(
self,
input_ids: torch.Tensor,
encoder_hidden_states: torch.Tensor,
enable_cache: torch.Tensor,
past_key_values: Optional[torch.Tensor] = None,
**_,
):
start_timer = time.monotonic()
dec_output = self.get_decoder()(
decoder_input_ids=input_ids,
encoder_hidden_states=encoder_hidden_states,
enable_cache=enable_cache,
past_key_values=past_key_values,
)
self.timings.append(time.monotonic() - start_timer)
return Seq2SeqLMOutput(logits=dec_output.last_hidden_state, past_key_values=dec_output.past_key_values)
model_gen = (
ExtT5(
config=pytorch_model.config,
device=pytorch_model.device,
encoder_func=encoder_onnx_inference, # encoder_pytorch_inference
decoder_func=decoder_onnx_inference, # decoder_pytorch_inference
)
.cuda()
.eval()
)
torch.cuda.synchronize()
with torch.inference_mode():
print("Onnx:")
print(
tokenizer.decode(
model_gen.generate(
inputs=input_ids,
min_length=3,
max_length=60,
num_beams=4,
no_repeat_ngram_size=2,
)[0],
skip_special_tokens=True,
)
)
print("Pytorch:")
print(
tokenizer.decode(
pytorch_model.generate(
input_ids=input_ids,
min_length=3,
max_length=60,
num_beams=4,
no_repeat_ngram_size=2,
)[0],
skip_special_tokens=True,
)
)
def print_timings(name: str, total: float, inference: float):
percent_inference = 100 * inference / total
print(f"{name}: {total:.1f}, including inference: {inference:.1f} ({percent_inference:.1f}%)")
all_timings: Dict[str, Dict[str, List[float]]] = dict()
for seq_len, num_beam in [(256, 1), (1000, 4)]:
timings = dict()
print(f"seq len: {seq_len} / # beam (batch size): {num_beam}")
task = "Onnx"
with nvtx.annotate(
task, color="red"
): # nvtx is for Nvidia nsight profiler, you can remove the line or install the library
model_gen.set_cache(enable=False)
# warmup
model_gen.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
start = time.monotonic()
model_gen.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
print_timings(name=task, total=total_time, inference=sum(model_gen.timings))
timings[f"{task}"] = model_gen.timings
task = "Onnx + cache"
with nvtx.annotate(task, color="red"):
model_gen.set_cache(enable=True)
# warmup
model_gen.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
start = time.monotonic()
model_gen.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
print_timings(name=task, total=total_time, inference=sum(model_gen.timings))
timings[f"{task}"] = model_gen.timings
# monkey patching of the forward function to add a timer per generated token
old_fw = pytorch_model.forward
timing_pytorch = list()
def new_fw(self, *args, **kwargs):
timer_start = time.monotonic()
res = old_fw(self, *args, **kwargs)
torch.cuda.synchronize() # makes timings correct without having significant impact on e2e latency
total_time = time.monotonic() - timer_start
timing_pytorch.append(total_time)
return res
task = "Pytorch"
with nvtx.annotate(task, color="orange"):
pytorch_model.config.use_cache = False
with torch.inference_mode():
with torch.cuda.amp.autocast():
# warmup
pytorch_model.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
pytorch_model.forward = new_fw.__get__(pytorch_model)
start = time.monotonic()
pytorch_model.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
pytorch_model.forward = old_fw
inference_time = np.sum(timing_pytorch)
print_timings(name="Pytorch", total=total_time, inference=inference_time)
timing_pytorch_no_cache = copy(timing_pytorch)
timings[f"{task}"] = copy(timing_pytorch)
timing_pytorch.clear()
torch.cuda.empty_cache()
task = "Pytorch + cache"
with nvtx.annotate("Pytorch + cache", color="green"):
pytorch_model.config.use_cache = True
with torch.inference_mode():
with torch.cuda.amp.autocast():
# warmup
pytorch_model.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
pytorch_model.forward = new_fw.__get__(pytorch_model)
start = time.monotonic()
pytorch_model.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
pytorch_model.forward = old_fw
print_timings(name="Pytorch + cache", total=total_time, inference=sum(timing_pytorch))
timings[f"{task}"] = copy(timing_pytorch)
timing_pytorch.clear()
all_timings[f"{seq_len} / {num_beam}"] = timings
torch.cuda.empty_cache()
```
## Benchmark analysis
Below, we plot for each setup (short and long sequence):
* the time spent on each token generation
* the full time to generate the sequence (for each length)
We can see that for a short sequence and a batch size of 1, with or without cache, latency appears to be stable.
However, for longer sequences, we can see that the no-cache approach (be it `Pytorch` or `Onnx` based) doesn't scale well, and at some point, `Onnx` without cache is even slower than the `Hugging Face` code with cache support.
On the other side, `Onnx` with cache timings are mostly stable whatever the sequence length, which is quite remarkable.
It's because we are working one token at a time and converted a quadratic complexity in the attention layer into a linear one.
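Roughly, at decoding step $t$ the no-cache decoder recomputes the representation of all $t$ tokens, each attending to $t$ positions, while the cached decoder only processes the newest token:
$$
\text{cost}_{\text{no cache}}(t) = O(t^2) \qquad \text{vs} \qquad \text{cost}_{\text{cache}}(t) = O(t).
$$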
```
sns.set_style("darkgrid") # darkgrid, whitegrid, dark, white and ticks
plt.rc("axes", titlesize=15) # fontsize of the axes title
plt.rc("axes", labelsize=14) # fontsize of the x and y labels
plt.rc("xtick", labelsize=13) # fontsize of the tick labels
plt.rc("ytick", labelsize=13) # fontsize of the tick labels
plt.rc("legend", fontsize=15) # legend fontsize
plt.rc("font", size=13) # controls default text sizes
colors = sns.color_palette("deep")
fig = plt.figure(constrained_layout=True, figsize=(12, 8))
subfigs = fig.subfigures(nrows=2, ncols=1)
fig.supxlabel("seq len (# tokens)")
fig.supylabel("latency (s)")
fig.suptitle(f"Small seq len and greedy search on {model_name} don't tell the whole (inference) story...")
for row, (plot_name, timings) in enumerate(all_timings.items()):
subfigs[row].suptitle(f"setup #{1+row}: {plot_name} (seq len / beam search)")
axs = subfigs[row].subplots(nrows=1, ncols=2)
for col, accumulated in enumerate([False, True]):
plot_axis = axs[col]
for index, (k, v) in enumerate(timings.items()):
axis = range(len(v))
color = colors[index]
v = np.array(v)
# remove extreme values
p99 = np.percentile(v, 99)
v[v > p99] = p99
v = np.cumsum(v) if accumulated else v
plot_axis.scatter(axis, v, label=k, s=2)
title = f"latency for the full sequence" if accumulated else f"latency for each token"
plot_axis.title.set_text(title)
# legend deduplication
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles))
fig.legend(by_label.values(), by_label.keys(), bbox_to_anchor=(1, 1), loc="upper left", markerscale=5)
plt.show()
```
## Profiling model at the kernel level
Below we reload the decoder model with `Onnx Runtime` kernel profiling enabled.
It will help us to understand on which part of the computation graph the GPU spends its time.
The number of events that `Onnx Runtime` can save is limited to [1 million](https://github.com/microsoft/onnxruntime/blob/a4b5fa334aa939fb159bdc571ed3d56ca8d31fc7/onnxruntime/core/common/profiler.cc#L10).
It is not an issue as we have seen that timings per token are mostly stable, so having information for only the first n tokens doesn't change anything.
The main information it gives us is that 30% of the time is spent on matrix multiplication when caching is used.
The rest of the time is spent on mostly memory bound operations:
* element-wise operations which require little computation (`add`, `mul`, `div`, etc.)
* copy pasting tensors `GPU` <-> `GPU` with little transformation in between (`transpose`, `concat`, `cast`, etc.)
It matches the information provided by both `nvidia-smi` and `Nvidia Nsight` (the GPU profiler from Nvidia): the GPU is under utilized.
That's why we think that a tool like `TensorRT` which will perform aggressive kernel fusion, reducing time spent on memory bounded operations, should be a good fit for autoregressive models.
> there is a nice opportunity to increase the speedup by reducing the number of casting operations. We keep this work for the future.
```
dec_onnx = create_model_for_provider(
dec_if_fp16_model_path, "CUDAExecutionProvider", enable_profiling=True, log_severity=3
)
dec_onnx_binding: IOBinding = dec_onnx.io_binding()
_ = model_gen.generate(inputs=input_ids, max_length=10, num_beams=4, min_length=10)
profile_name = dec_onnx.end_profiling()
with open(profile_name) as f:
content = json.load(f)
op_timings = defaultdict(lambda: 0)
for c in content:
if "op_name" not in c["args"]:
continue
op_name = c["args"]["op_name"]
if op_name == "If":
continue # subgraph
time_taken = c["dur"]
op_timings[op_name] += time_taken
op_timings_filter = dict(sorted(op_timings.items(), key=operator.itemgetter(1), reverse=True)[:10])
total_kernel_timing = sum(op_timings.values())
op_timings_percent = {k: 100 * v / total_kernel_timing for k, v in op_timings_filter.items()}
plt.barh(list(op_timings_percent.keys()), list(op_timings_percent.values()))
plt.title("Time spent per kernel\n(top 10 kernels)")
plt.xlabel("% total inference time")
plt.show()
```
## Exploratory analysis of the US Airport Dataset
This dataset contains data for 25 years [1995-2015] of flights between various US airports and metadata about these routes, taken from the Bureau of Transportation Statistics, United States Department of Transportation.
Let's see what we can make out of this!
```
%matplotlib inline
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings('ignore')
pass_air_data = pd.read_csv('datasets/passengers.csv')
```
In the `pass_air_data` dataframe we have, for every year and route, the number of passengers and the list of airlines that fly that route.
```
pass_air_data.head()
# Create a MultiDiGraph from this dataset
passenger_graph = nx.from_pandas_edgelist(pass_air_data, source='ORIGIN', target='DEST', edge_attr=['YEAR', 'PASSENGERS', 'UNIQUE_CARRIER_NAME'], create_using=nx.MultiDiGraph())
```
### Cleveland to Chicago, how many people fly this route?
```
passenger_graph['CLE']['ORD'][25]
temp = [(i['YEAR'], i['PASSENGERS'])for i in dict(passenger_graph['CLE']['ORD']).values()]
x, y = zip(*temp)
plt.plot(x, y)
plt.show()
```
## Exercise
Find the busiest route in 1990 and in 2015 according to number of passengers, and plot the time series of number of passengers on these routes.
You can use the DataFrame instead of working with the network. It will be faster ;)
[5 mins]
```
temp = pass_air_data.groupby(['YEAR'])['PASSENGERS'].transform(max) == pass_air_data['PASSENGERS']
pass_air_data[temp][pass_air_data.YEAR.isin([1990, 2015])]
pass_air_data[(pass_air_data['ORIGIN'] == 'LAX') & (pass_air_data['DEST'] == 'HNL')].plot('YEAR', 'PASSENGERS')
pass_air_data[(pass_air_data['ORIGIN'] == 'LAX') & (pass_air_data['DEST'] == 'SFO')].plot('YEAR', 'PASSENGERS')
```
So let's have a look at the important nodes in this network, i.e. the important airports. We'll use PageRank, betweenness centrality and degree centrality.
```
# nx.pagerank(passenger_graph)
def year_network(G, year):
temp_g = nx.DiGraph()
for i in G.edges(data=True):
if i[2]['YEAR'] == year:
temp_g.add_edge(i[0], i[1], weight=i[2]['PASSENGERS'])
return temp_g
pass_2015 = year_network(passenger_graph, 2015)
len(pass_2015)
len(pass_2015.edges())
# Load in the GPS coordinates of all the airports
lat_long = pd.read_csv('datasets/GlobalAirportDatabase.txt', delimiter=':', header=None)
lat_long[lat_long[1].isin(list(pass_2015.nodes()))]
pos_dict = {}
for airport in lat_long[lat_long[1].isin(list(pass_2015.nodes()))].iterrows():
pos_dict[airport[1][1]] = (airport[1][15], airport[1][14])
pos_dict
```
## Exercise
Using the position dictionary `pos_dict` create a plot of the airports, only the nodes not the edges.
- As we don't have coordinates for all the airports we have to create a subgraph first.
- Use `nx.subgraph(Graph, iterable of nodes)` to create the subgraph
- Use `nx.draw_networkx_nodes(G, pos)` to map the nodes.
or
- Just use a scatter plot :)
```
plt.figure(figsize=(20, 9))
G = nx.subgraph(pass_2015, pos_dict.keys())
nx.draw_networkx_nodes(G, pos=pos_dict, node_size=10, alpha=0.6, node_color='b')
# nx.draw_networkx_edges(G, pos=pos_dict, width=0.1, arrows=False)
plt.show()
plt.figure(figsize=(20, 9))
x = [i[0] for i in pos_dict.values()]
y = [i[1] for i in pos_dict.values()]
plt.scatter(x, y)
```
### What about degree distribution of this network?
```
plt.hist(list(nx.degree_centrality(pass_2015).values()))
plt.show()
```
Let's plot a log log plot to get a better overview of this.
```
d = {}
for i, j in dict(nx.degree(pass_2015)).items():
if j in d:
d[j] += 1
else:
d[j] = 1
x = np.log2(list((d.keys())))
y = np.log2(list(d.values()))
plt.scatter(x, y, alpha=0.4)
plt.show()
```
### Directed Graphs

```
G = nx.DiGraph()
G.add_edge(1, 2, weight=1)
# print(G.edges())
# G[1][2]
# G[2][1]
# G.is_directed()
# type(G)
G.add_edges_from([(1, 2), (3, 2), (4, 2), (5, 2), (6, 2), (7, 2)])
nx.draw_circular(G, with_labels=True)
G.in_degree()
nx.pagerank(G)
G.add_edge(5, 6)
nx.draw_circular(G, with_labels=True)
nx.pagerank(G)
G.add_edge(2, 8)
nx.draw_circular(G, with_labels=True)
nx.pagerank(G)
```
### Moving back to Airports
```
sorted(nx.pagerank(pass_2015, weight=None).items(), key=lambda x:x[1], reverse=True)[:10]
sorted(nx.betweenness_centrality(pass_2015).items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.degree_centrality(pass_2015).items(), key=lambda x:x[1], reverse=True)[0:10]
```
'ANC' is the airport code of Anchorage airport, a place in Alaska, and according to PageRank and betweenness centrality it is the most important airport in this network. Isn't that weird? Thoughts?
related blog post: https://toreopsahl.com/2011/08/12/why-anchorage-is-not-that-important-binary-ties-and-sample-selection/
Let's look at the weighted version, i.e. taking into account the number of people flying to these places.
```
sorted(nx.betweenness_centrality(pass_2015, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank(pass_2015, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
```
## How reachable is this network?
We calculate the average shortest path length of this network; it gives us an idea of the number of hops we need to make to go from one airport to any other airport in this network.
```
# nx.average_shortest_path_length(pass_2015)
```
Wait, what??? This network is not connected, so computing the average shortest path length over it was a really stupid thing to do.
```
list(nx.weakly_connected_components(pass_2015))
```
### SPB, SSB, AIK anyone?
```
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['ORIGIN'] == 'AIK')]
pass_2015.remove_nodes_from(['SPB', 'SSB', 'AIK'])
nx.is_weakly_connected(pass_2015)
nx.is_strongly_connected(pass_2015)
```
### Strongly vs weakly connected graphs.
```
G = nx.DiGraph()
G.add_edge(1, 2)
G.add_edge(2, 3)
G.add_edge(3, 1)
nx.draw(G)
G.add_edge(3, 4)
nx.draw(G)
nx.is_strongly_connected(G)
list(nx.strongly_connected_components(pass_2015))
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['DEST'] == 'TSP')]
pass_2015_strong = max(nx.strongly_connected_component_subgraphs(pass_2015), key=len)
len(pass_2015_strong)
nx.average_shortest_path_length(pass_2015_strong)
```
#### Exercise! (Actually this is a game :D)
How can we decrease the avg shortest path length of this network?
Think of an effective way to add new edges to decrease the avg shortest path length.
Let's see if we can come up with a nice way to do this, and the one who gets the highest decrease wins!!!
The rules are simple:
- You can't add more than 2% of the current edges (~500 edges)
[10 mins]
```
sort_degree = sorted(nx.degree_centrality(pass_2015_strong).items(), key=lambda x:x[1], reverse=True)
top_count = 0
for n, v in sort_degree:
count = 0
for node, val in sort_degree:
if node != n:
if node not in pass_2015_strong.adj[n]:
pass_2015_strong.add_edge(n, node)
count += 1
if count == 25:
break
top_count += 1
if top_count == 20:
break
nx.average_shortest_path_length(pass_2015_strong)
```
### What about airlines? Can we find airline specific reachability?
```
passenger_graph['JFK']['SFO'][25]
def str_to_list(a):
return a[1:-1].split(', ')
for i in str_to_list(passenger_graph['JFK']['SFO'][25]['UNIQUE_CARRIER_NAME']):
print(i)
%%time
for origin, dest in passenger_graph.edges():
for key in passenger_graph[origin][dest]:
passenger_graph[origin][dest][key]['airlines'] = str_to_list(passenger_graph[origin][dest][key]['UNIQUE_CARRIER_NAME'])
```
### Exercise
Play around with United Airlines network.
- Extract a network for United Airlines flights from the metagraph `passenger_graph` for the year 2015
- Make sure it's a weighted network, where weight is the number of passengers.
- Find the number of airports and connections in this network
- Find the most important airport, according to PageRank and degree centrality.
```
united_network = nx.DiGraph()
for origin, dest in passenger_graph.edges():
if 25 in passenger_graph[origin][dest]:
if "'United Air Lines Inc.'" in passenger_graph[origin][dest][25]['airlines']:
united_network.add_edge(origin, dest, weight=passenger_graph[origin][dest][25]['PASSENGERS'])
len(united_network)
len(united_network.edges())
sorted(nx.pagerank(united_network, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.degree_centrality(united_network).items(), key=lambda x:x[1], reverse=True)[0:10]
```
### Exercise
We are in Cleveland, so what should we do?
Obviously, we will make a time series of the number of passengers flying out of Cleveland with United Airlines over the years.
There are two ways of doing it:
- Create a new multidigraph specifically for this exercise (a rough sketch of this approach is shown after the next cell),
OR
- Exploit the `pass_air_data` dataframe, as done in the next cell.
```
pass_air_data[(pass_air_data.ORIGIN == 'CLE') &
(pass_air_data.UNIQUE_CARRIER_NAME.str.contains('United Air Lines Inc.'))
].groupby('YEAR')['PASSENGERS'].sum().plot()
```
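For the first approach, here is a rough sketch. It is an assumption-laden illustration: it presumes, as in the earlier cells, that each edge key of the metagraph `passenger_graph` indexes one year and carries the `airlines` and `PASSENGERS` attributes, and that `pandas` is already imported as `pd`.
```
# Sketch of the multigraph route: sum United passengers leaving CLE, per year key
cle_united = {}
for origin, dest, year_key, attrs in passenger_graph.edges(keys=True, data=True):
    if origin == 'CLE' and "'United Air Lines Inc.'" in attrs.get('airlines', []):
        cle_united[year_key] = cle_united.get(year_key, 0) + attrs['PASSENGERS']
pd.Series(cle_united).sort_index().plot()
```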
500 hPa Vorticity Advection
===========================
Plot a 500-hPa map and calculate vorticity advection using MetPy calculations.
Beyond just plotting 500-hPa level data, this uses calculations from `metpy.calc` to find
the vorticity and vorticity advection. Currently, this needs an extra helper function to
calculate the distance between lat/lon grid points.
Imports
```
from datetime import datetime
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
import numpy as np
import scipy.ndimage as ndimage
from metpy.units import units
from netCDF4 import num2date
from siphon.catalog import TDSCatalog
```
Data Acquisition
----------------
```
dt = datetime(2016, 4, 16, 18)
# Assemble our URL to the THREDDS Data Server catalog,
# and access our desired dataset within via NCSS
base_url = 'https://www.ncei.noaa.gov/thredds/catalog/model-namanl-old/'
cat = TDSCatalog(f'{base_url}{dt:%Y%m}/{dt:%Y%m%d}/catalog.xml')
ncss = cat.datasets[f'namanl_218_{dt:%Y%m%d}_{dt:%H}00_000.grb'].subset()
# Build the NCSS query for the NAM analysis at the requested time
query = ncss.query()
query.time(dt)
query.accept('netcdf')
query.variables('Geopotential_height_isobaric',
'u-component_of_wind_isobaric',
'v-component_of_wind_isobaric')
query.add_lonlat()
# Obtain our queried data
ds = ncss.get_data(query)
lon = ds.variables['lon'][:]
lat = ds.variables['lat'][:]
times = ds.variables[ds.variables['Geopotential_height_isobaric'].dimensions[0]]
vtime = num2date(times[:].squeeze(), units=times.units)
lev_500 = np.where(ds.variables['isobaric'][:] == 500)[0][0]
hght_500 = ds.variables['Geopotential_height_isobaric'][0, lev_500, :, :]
hght_500 = ndimage.gaussian_filter(hght_500, sigma=3, order=0) * units.meter
uwnd_500 = units('m/s') * ds.variables['u-component_of_wind_isobaric'][0, lev_500, :, :]
vwnd_500 = units('m/s') * ds.variables['v-component_of_wind_isobaric'][0, lev_500, :, :]
```
Begin Data Calculations
-----------------------
```
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat)
f = mpcalc.coriolis_parameter(np.deg2rad(lat)).to(units('1/sec'))
avor = mpcalc.vorticity(uwnd_500, vwnd_500, dx, dy, dim_order='yx') + f
avor = ndimage.gaussian_filter(avor, sigma=3, order=0) * units('1/s')
vort_adv = mpcalc.advection(avor, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx') * 1e9
```
Map Creation
------------
```
# Set up Coordinate System for Plot and Transforms
dproj = ds.variables['LambertConformal_Projection']
globe = ccrs.Globe(ellipse='sphere', semimajor_axis=dproj.earth_radius,
semiminor_axis=dproj.earth_radius)
datacrs = ccrs.LambertConformal(central_latitude=dproj.latitude_of_projection_origin,
central_longitude=dproj.longitude_of_central_meridian,
standard_parallels=[dproj.standard_parallel],
globe=globe)
plotcrs = ccrs.LambertConformal(central_latitude=45., central_longitude=-100.,
standard_parallels=[30, 60])
fig = plt.figure(1, figsize=(14., 12))
gs = gridspec.GridSpec(2, 1, height_ratios=[1, .02], bottom=.07, top=.99,
hspace=0.01, wspace=0.01)
ax = plt.subplot(gs[0], projection=plotcrs)
# Plot Titles
plt.title(r'500-hPa Heights (m), AVOR$*10^5$ ($s^{-1}$), AVOR Adv$*10^8$ ($s^{-2}$)',
loc='left')
plt.title(f'VALID: {vtime}', loc='right')
# Plot Background
ax.set_extent([235., 290., 20., 58.], ccrs.PlateCarree())
ax.coastlines('50m', edgecolor='black', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=.5)
# Plot Height Contours
clev500 = np.arange(5100, 6061, 60)
cs = ax.contour(lon, lat, hght_500.m, clev500, colors='black', linewidths=1.0,
linestyles='solid', transform=ccrs.PlateCarree())
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Plot Absolute Vorticity Contours
clevvort500 = np.arange(-9, 50, 5)
cs2 = ax.contour(lon, lat, avor*10**5, clevvort500, colors='grey',
linewidths=1.25, linestyles='dashed', transform=ccrs.PlateCarree())
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Plot Colorfill of Vorticity Advection
clev_avoradv = np.arange(-30, 31, 5)
cf = ax.contourf(lon, lat, vort_adv.m, clev_avoradv[clev_avoradv != 0], extend='both',
cmap='bwr', transform=ccrs.PlateCarree())
cax = plt.subplot(gs[1])
cb = plt.colorbar(cf, cax=cax, orientation='horizontal', extendrect='True', ticks=clev_avoradv)
cb.set_label(r'$1/s^2$', size='large')
# Plot Wind Barbs
# Transform Vectors and plot wind barbs.
ax.barbs(lon, lat, uwnd_500.m, vwnd_500.m, length=6, regrid_shape=20,
pivot='middle', transform=ccrs.PlateCarree())
```
**Import library**
```
import pandas as pd
import numpy as np
import calendar
from datetime import datetime
import time
# Standard plotly imports
import plotly.express as px
import plotly.graph_objects as go
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("paper", font_scale=1.3)
sns.set_style('white')
# stats
from statsmodels.tsa.statespace.sarimax import SARIMAX
from random import random
from statsmodels.tsa.stattools import adfuller
#Prophet
from fbprophet import Prophet
# SKLEARN
from sklearn.metrics import mean_squared_error
```
**Import data**
```
# Read in the raw temperature dataset
raw_global = pd.read_csv('GLB.Ts+dSST.csv', skiprows=1)
raw_global = raw_global.iloc[:,:13]
raw_global.head()
raw_global.tail()
```
**Data Preprocessing**
```
def clean_value(raw_value):
try:
return float(raw_value)
except:
return np.NaN
def preprocess_data(raw):
data_horizon = pd.date_range(start='1/1/1880', end='12/31/2019', freq='M')
data = pd.DataFrame(data_horizon, columns=['Date'])
#extract temperature data
temp_list = []
for idx in range(raw.shape[0]):
temp_list.extend(raw.iloc[idx,1:])
data['Temp'] = temp_list
#clean value
data['Temp'] = data['Temp'].apply(lambda x: clean_value(x))
data.fillna(method='ffill', inplace=True)
return data
global_t = preprocess_data(raw_global)
global_t.head()
global_t.tail()
```
**Data Visualization**
```
fig = px.line(global_t, x="Date", y="Temp", title='Global-mean monthly Combined Land-Surface Air and Sea-Surface Water Temperature Anomalies')
fig.show()
fig = px.line(global_t.resample('A', on='Date').mean().reset_index(), x="Date", y="Temp", title='Global-mean yearly Combined Land-Surface Air and Sea-Surface Water Temperature Anomalies')
fig.show()
```
Test stationarity
```
def test_stationarity(timeseries):
rolmean = timeseries.rolling(window=30).mean()
rolstd = timeseries.rolling(window=30).std()
plt.figure(figsize=(14,5))
sns.despine(left=True)
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best'); plt.title('Rolling Mean & Standard Deviation')
plt.show()
print ('<Results of Dickey-Fuller Test>')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4],
index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
test_stationarity(global_t.Temp.dropna())
```
Since the p-value > 0.05, we fail to reject the null hypothesis (H0): the data has a unit root and is non-stationary.
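A common remedy is differencing. As a quick illustrative check, we can first-difference the monthly series and re-run the same test; the differenced anomalies should come out stationary:
```
# First-difference the series and repeat the rolling-statistics / ADF check
test_stationarity(global_t.Temp.diff().dropna())
```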
**Time Series Prediction - SARIMA**
The Seasonal Autoregressive Integrated Moving Average (SARIMA) method models the next step in the sequence as a linear function of the differenced observations, errors, differenced seasonal observations, and seasonal errors at prior time steps.
It combines the ARIMA model with the ability to perform the same autoregression, differencing, and moving average modeling at the seasonal level.
The notation for the model involves specifying the order for the AR(p), I(d), and MA(q) models as parameters to an ARIMA function and AR(P), I(D), MA(Q) and m parameters at the seasonal level, e.g. SARIMA(p, d, q)(P, D, Q)m where “m” is the number of time steps in each season (the seasonal period). A SARIMA model can be used to develop AR, MA, ARMA and ARIMA models.
The method is suitable for univariate time series with trend and/or seasonal components.
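In `statsmodels`, that notation maps directly onto the `order` and `seasonal_order` arguments of `SARIMAX`. For example, a SARIMA(1, 1, 1)(1, 1, 1)12 for monthly data would be specified roughly as below (shown only to illustrate the notation; the prediction cell that follows uses its own settings):
```
# SARIMA(p, d, q)(P, D, Q)m -> SARIMAX(order=(p, d, q), seasonal_order=(P, D, Q, m))
example_model = SARIMAX(global_t['Temp'],
                        order=(1, 1, 1),               # non-seasonal AR, differencing, MA
                        seasonal_order=(1, 1, 1, 12))  # seasonal terms with a 12-month period
# example_fit = example_model.fit(disp=False)  # uncomment to actually fit (can be slow)
```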
```
def plot(y_true,y_pred):
# Plot
fig = go.Figure()
x = global_t['Date'][global_t.shape[0]-len(y_true):]
fig.add_trace(go.Scatter(x=x, y=y_true, mode='lines', name='actual'))
fig.add_trace(go.Scatter(x=x, y=y_pred, mode='lines', name='predicted'))
# Edit the layout
    fig.update_layout(title='Global-mean Temperature: Predicted vs. Actual',
xaxis_title='Month',
yaxis_title='Temperature')
fig.show()
def SARIMA_prediction(temp_data):
y_true = []
y_pred = []
temperature = temp_data['Temp'].tolist()
train = temperature[:-336]
test = temperature[len(train):]
#predict the latest 336 values (20% of data)
for idx in range(len(test)):
true_val = test[idx]
if len(y_pred)>0:
record = train+y_pred
else:
record = train
# fit model
model = SARIMAX(record, order=(1, 1, 1), seasonal_order=(1, 1, 1, 1))
model_fit = model.fit(disp=False,low_memory=True)
# make predictions
yhat = model_fit.predict(len(record), len(record))
# save value
y_true.append(true_val)
y_pred.extend(yhat)
print(mean_squared_error(y_true, y_pred))
plot(y_true,y_pred)
start_time = time.time()
SARIMA_prediction(global_t)
print("--- %s seconds ---" % (time.time() - start_time))
```
**Time Series Prediction - Prophet**
Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well.
```
def prophet_prediction(temp_data):
    # hold out the last 336 monthly values (28 years) as the forecast horizon
df = temp_data.iloc[:-336]
df = df.rename(columns={'Date':'ds', 'Temp':'y'})
#load prophet model
model = Prophet(weekly_seasonality=True)
model.fit(df)
#prediction
future = model.make_future_dataframe(periods=336, freq = 'm')
forecast = model.predict(future)
model.plot(forecast)
return forecast
start_time = time.time()
prophet_forecast = prophet_prediction(global_t)
print("--- %s seconds ---" % (time.time() - start_time))
prophet_forecast_last = prophet_forecast.iloc[prophet_forecast.shape[0]-336:]
global_t_last = global_t.iloc[global_t.shape[0]-336:]
mean_squared_error(global_t_last.Temp, prophet_forecast_last.yhat)
```
**Time series prediction - LSTM**
```
from keras.models import Sequential
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense, Activation, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.utils import shuffle
from keras.callbacks import EarlyStopping
earlyStop=EarlyStopping(monitor="val_loss",verbose=2,mode='min',patience=5)
```
**Data preparation**
```
temp_raw = np.array(global_t.Temp.astype("float32")).reshape(-1,1)
# Apply the MinMax scaler from sklearn to normalize data in the (0, 1) interval.
scaler = MinMaxScaler(feature_range = (0, 1))
temp_LSTM = scaler.fit_transform(temp_raw)
# Train/val/test split - using 60% of the data for training, 20% for validation, and 20% for testing.
ratio = 0.6
train_size = int(len(temp_LSTM) * ratio)
val_size = int(len(temp_LSTM) * 0.2)
test_size = len(temp_LSTM) - train_size - val_size
train, val, test = temp_LSTM[0:train_size, :], temp_LSTM[train_size:train_size+val_size, :], temp_LSTM[train_size+val_size:len(temp_LSTM), :]
print("Number of entries (training set, val set, test set): " + str((len(train), len(val), len(test))))
def create_dataset(dataset):
window_size = 1
data_X, data_Y = [], []
for i in range(len(dataset) - window_size - 1):
a = dataset[i:(i + window_size), 0]
data_X.append(a)
data_Y.append(dataset[i + window_size, 0])
return(np.array(data_X), np.array(data_Y))
# Create test and training sets for one-step-ahead regression.
train_X, train_Y = create_dataset(train)
val_X, val_Y = create_dataset(val)
test_X, test_Y = create_dataset(test)
# Reshape the input data into appropriate form for Keras.
train_X = np.reshape(train_X, (train_X.shape[0], 1,train_X.shape[1]))
val_X = np.reshape(val_X, (val_X.shape[0], 1,val_X.shape[1]))
test_X = np.reshape(test_X, (test_X.shape[0], 1,test_X.shape[1]))
print("Training data for Keras shape:")
print(train_X.shape)
```
**LSTM Model**
The LSTM architecture here consists of:
- One input layer.
- One LSTM layer of 4 blocks.
- One Dense layer to produce a single output.
- Use MSE as loss function.
```
def LSTM_modelone(train_X, train_Y, window_size):
model = Sequential()
model.add(LSTM(4,
input_shape = (1, window_size)))
model.add(Dense(1))
model.compile(loss = "mean_squared_error",
optimizer = "adam")
model.fit(train_X,
train_Y,
epochs = 100,
batch_size = 10,
verbose = 2,
validation_data=(val_X,val_Y),callbacks=[earlyStop])
return model
start_time = time.time()
LSTM_model1 = LSTM_modelone(train_X, train_Y, window_size=1)
print("--- %s seconds ---" % (time.time() - start_time))
def predict_and_score(model, X, Y):
# Make predictions on the original scale of the data.
pred = scaler.inverse_transform(model.predict(X))
# Prepare Y data to also be on the original scale for interpretability.
orig_data = scaler.inverse_transform([Y])
    # Calculate MSE.
score = mean_squared_error(orig_data[0], pred[:, 0])
return score
print("Test data score: %.3f MSE" % predict_and_score(LSTM_model1,test_X, test_Y))
```
The second model architecture is slightly more complex. Its elements are:
- Define the LSTM with 50 units in the first hidden layer and 1 neuron in the output layer
- Dropout 20%.
- Use the MSE loss function and the efficient Adam version of stochastic gradient descent.
- The model will be fit for 50 training epochs with a batch size of 5.
```
def LSTM_modeltwo(train_X, train_Y):
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())
model.fit(train_X, train_Y, epochs=50, batch_size=5, verbose=2, shuffle=False, validation_data=(val_X,val_Y),callbacks=[earlyStop])
return model
start_time = time.time()
LSTM_model2 = LSTM_modeltwo(train_X, train_Y)
print("--- %s seconds ---" % (time.time() - start_time))
print("Test data score: %.3f MSE" % predict_and_score(LSTM_model2,test_X, test_Y))
def predict_and_plot(model, X, Y):
# Make predictions on the original scale of the data.
pred = scaler.inverse_transform(model.predict(X))
# Prepare Y data to also be on the original scale for interpretability.
orig_data = scaler.inverse_transform([Y])
# Plot
fig = go.Figure()
x = global_t['Date'][global_t.shape[0]-len(orig_data[0]):]
fig.add_trace(go.Scatter(x=x, y=orig_data[0], mode='lines', name='actual'))
fig.add_trace(go.Scatter(x=x, y=pred[:, 0], mode='lines', name='predicted'))
# Edit the layout
fig.update_layout(title='Global Temperature: Predicted v.s. Actual',
xaxis_title='Month',
yaxis_title='Temperature')
fig.show()
predict_and_plot(LSTM_model2,test_X, test_Y)
```
**MLP Model**
```
def MLP_model(train_X, train_Y):
model = Sequential()
model.add(Dense(100, input_shape=(1,)))
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('linear'))
model.compile(optimizer='adam', loss='mse')
print(model.summary())
model.fit(train_X, train_Y, epochs=50, batch_size=10, verbose=2, shuffle=False, validation_data=(val_X,val_Y),callbacks=[earlyStop])
return model
start_time = time.time()
MLP_model_result = MLP_model(train_X, train_Y)
print("--- %s seconds ---" % (time.time() - start_time))
print("Test data score: %.3f MSE" % predict_and_score(MLP_model_result,test_X, test_Y))
```
<a href="https://colab.research.google.com/github/ewotawa/secure_private_ai/blob/master/Section_2_Federated_Learning_Final_Project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Federated Learning Final Project
## Overview
* See <a href="https://classroom.udacity.com/nanodegrees/nd185/parts/3fe1bb10-68d7-4d84-9c99-9539dedffad5/modules/28d685f0-0cb1-4f94-a8ea-2e16614ab421/lessons/c8fe481d-81ea-41be-8206-06d2deeb8575/concepts/a5fb4b4c-e38a-48de-b2a7-4e853c62acbe">video</a> for additional details.
* Do Federated Learning where the central server is not trusted with the raw gradients.
* In the final project notebook, you'll receive a dataset.
* Train on the dataset using Federated Learning.
* The gradients should not come up to the server in raw form.
* Instead, use the new .move() command to move all of the gradients to one of the workers, sum them up there, and then bring only that aggregated batch up to the central server.
* Idea: the central server never actually sees the raw gradient for any person.
* We'll look at secure aggregation in course 3.
* For now, do a larger-scale Federated Learning case where you handle the gradients in a special way.
## Approach
* Use the method illustrated in the "DEEP LEARNING" article referenced below. Update the code such that the MNIST model trains locally. Updated for my personal code style preferences.
* Per conversation in the SPAIC Slack channel, use of a federated data loader approach trains the model and keeps the disaggregated gradients off of the local machine. The aggregate model returns when model.get() is called.
* Contacted the team at OpenMined. They confirmed that PySyft currently does not work with GPUs, although updates are in progress. (7/18/2019).
## References
* <a href = "https://blog.openmined.org/upgrade-to-federated-learning-in-10-lines/">DEEP LEARNING -> FEDERATED LEARNING IN 10 LINES OF PYTORCH + PYSYFT</a>
* <a href ="https://github.com/udacity/private-ai/pull/10">added data for Federated Learning project</a>
* <a href="https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/Part%206%20-%20Federated%20Learning%20on%20MNIST%20using%20a%20CNN.ipynb">Part 6 - Federated Learning on MNIST using a CNN.ipynb</a>
* <a href="https://docs.google.com/spreadsheets/d/1x-QQK-3Wn86bvSbNTf2_p2FXVCqiic2QwjcArQEuQlg/edit#gid=0">Slack Channel's reference sheet </a>
* <a href="https://github.com/ucalyptus/Federated-Learning/blob/master/Federated%20Learning.ipynb">Federated Learning Example from Slack Channel reference sheet</a>
### Install libraries and dependencies
```
!pip install syft
import syft as sy
!pip install torch
!pip install torchvision
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
import numpy as np
hook = sy.TorchHook(torch) # <-- NEW: hook PyTorch ie add extra functionalities to support Federated Learning
vw00 = sy.VirtualWorker(hook, id="vw00")
vw01 = sy.VirtualWorker(hook, id="vw01")
aggr = sy.VirtualWorker(hook, id="aggr")
class Arguments():
def __init__(self):
self.batch_size = 64
self.test_batch_size = 1000
self.epochs = 10
self.lr = 0.01
self.momentum = 0.5
self.no_cuda = False
self.seed = 1
self.log_interval = 10
self.save_model = False
args = Arguments()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
# Note: removed **kwargs from end of federated_train_loader and test_loader definitions.
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
federated_train_loader = sy.FederatedDataLoader(datasets.MNIST('../data', train=True, download=True, transform=transform).federate((vw00, vw01)),
batch_size=args.batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=False, transform=transform),
batch_size=args.test_batch_size, shuffle=True)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(federated_train_loader): # <-- now it is a distributed dataset
model.send(data.location) # <-- NEW: send the model to the right location
# data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
model.get() # <-- NEW: get the model back
if batch_idx % args.log_interval == 0:
loss = loss.get() # <-- NEW: get the loss back
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * args.batch_size, len(train_loader) * args.batch_size, #batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
# data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# model = Net().to(device)
model = Net()
optimizer = optim.SGD(model.parameters(), lr=args.lr) # TODO momentum is not supported at the moment
for epoch in range(1, args.epochs + 1):
train(args, model, device, federated_train_loader, optimizer, epoch)
test(args, model, device, test_loader)
if (args.save_model):
torch.save(model.state_dict(), "mnist_cnn.pt")
```
```
%matplotlib inline
import matplotlib.pyplot as plt # for plotting
import numpy as np # for matrix and vector computations
import pandas as pd
import seaborn as sns
```
### Debugging
* Python array indices start from zero
* Vector/matrix operations work only with numpy arrays. Inspect matrix operations to make sure that you are adding and multiplying matrices of compatible dimensions; printing the dimensions of numpy arrays using the `shape` property will help you debug.
* If you want to do matrix multiplication, you need to use the `dot` function in numpy. For example, if A and B are two numpy matrices, then the matrix product AB is `np.dot(A, B)` (see the quick check below).
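For example, a quick shape sanity check before a matrix product looks like this (illustrative values only):
```
A = np.ones((5, 3))
B = np.ones((3, 2))
print(A.shape, B.shape)   # (5, 3) (3, 2) -> inner dimensions match
C = np.dot(A, B)          # matrix product with shape (5, 2)
print(C.shape)
```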
## Return a 5x5 Identity Matrix
```
A = np.eye(5) # using eye()
A
```
Implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities. You would like to use this data to help you select which city to expand to next.
The file Data/ex1data1.txt contains the dataset for our linear regression problem. The first column is the population of a city (in 10,000s) and the second column is the profit of a food truck in that city (in $10,000s). A negative value for profit indicates a loss.
## 1) Load the dataset
```
# Load the dataset
data = np.loadtxt('ex1data1.txt',delimiter=',')
X = data[:,0]
y = data[:,1]
# X and y are 1-D numpy arrays here
m = y.size # number of training samples
m
X.shape, y.shape, X.ndim, y.ndim
```
## 2) Plotting the Data
Before starting on any task, it is often useful to understand the data by visualizing it. This dataset has only two properties to plot (profit and population), so a simple scatter plot works well.
```
"""
Plots the data points x and y into a new figure. Plots the data
points and gives the figure axes labels of population and profit.
Parameters
----------
x : array_like
Data point values for x-axis.
y : array_like
Data point values for y-axis. Note x and y should have the same size.
----
You can use the 'ro' option with plot to have the markers
appear as red circles. Furthermore, you can make the markers larger by
using plot(..., 'ro', ms=10), where `ms` refers to marker size. You
can also set the marker edge color using the `mec` property.
"""
def plotData(x,y):
fig = plt.figure(figsize=(8,6))
plt.plot(x,y,'ro',ms=10,mec='k')
    plt.xlabel('Population of City in 10,000s')
    plt.ylabel('Profit in $10,000s')
plotData(X,y)
```
## 3) Gradient Descent
Fit the linear regression parameters $\theta$ to the dataset using gradient descent.
<a id="section2"></a>
### 3.1 Update Equations
The objective of linear regression is to minimize the cost function $J(\theta)$
$$ J(\theta) = \frac{1}{2m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$
where the hypothesis $h_\theta(x)$ is given by the linear model
$$ h_\theta(x) = \theta^Tx = \theta_0 + \theta_1 x_1$$
Recall that the parameters of your model are the $\theta_j$ values. These are
the values you will adjust to minimize cost $J(\theta)$. One way to do this is to
use the **batch gradient descent algorithm**. In batch gradient descent, each
iteration performs the update
$$ \theta_j = \theta_j - \alpha \frac{1}{m} \sum_{i=1}^m \left( h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} \qquad \text{simultaneously update } \theta_j \text{ for all } j$$
With each step of gradient descent, your parameters $\theta_j$ come closer to the optimal values that will achieve the lowest cost J($\theta$).
<div class="alert alert-block alert-warning">
**Implementation Note:** We store each sample as a row in the the $X$ matrix in Python `numpy`. To take into account the intercept term ($\theta_0$), we add an additional first column to $X$ and set it to all ones. This allows us to treat $\theta_0$ as simply another 'feature'.
</div>
```
# initially X contains only the feature x1 (population). Add x0 = 1, so X will now contain the features x0,x1
#### Add a column of ones to X. The numpy function stack() joins arrays along a given axis.
# The first axis (axis=0) refers to rows (training samples), and second axis (axis=1) refers to columns (features).
X = np.stack([np.ones(m),X],axis=1) # This cell is executed only once!
```
<a id="section2"></a>
### 3.2 Computing the cost $J(\theta)$
As you perform gradient descent to minimize the cost function $J(\theta)$, it is helpful to monitor the convergence by computing the cost. Implement a function to calculate $J(\theta)$ so you can check the convergence of your gradient descent implementation.
Remember that the variables $X$ and $y$ are not scalar values. $X$ is a matrix whose rows represent the samples from the training set (features), and $y$ (labels) is a vector whose elements are the target values for the corresponding rows of $X$.
<a id="computeCost"></a>
```
"""
Compute cost for linear regression. Computes the cost of using theta as the
parameter for linear regression to fit the data points in X and y.
Parameters
----------
X : array_like
        The input dataset of shape (m x n+1) dimensions, where m is the number of samples,
and n is the number of features. We assume a vector of one's already
appended to the features so we have n+1 columns.
y : array_like
The values of the function at each data point. This is a vector of
shape (m, ) i.e. (mx1) dimensions
theta : array_like
The parameters for the hypothesis/regression function. This is a vector of
shape (n+1, ) i.e. (n+1)x1 dimensions.
Returns
-------
J : float - The value of the regression cost function.
"""
def computeCost(X,y,theta):
m = y.size # no. of training samples
J = 0
h = np.dot(X,theta) # X and theta are matrices
J = (1/(2 * m)) * np.sum(np.square(np.dot(X, theta) - y))
return J
# evaluate the cost for a couple of test values of theta0 and theta1
J = computeCost(X,y ,theta=np.array([0.0,0.0])) # two values for theta0 and theta1
print(f"With theta = [0, 0] \nCost computed = {J:.2f}")
print()
J = computeCost(X,y ,theta=np.array([-1,2]))
print(f"With theta = [-1, 2] \nCost computed = {J:.2f}")
```
<a id="section3"></a>
### 3.3 Gradient descent
Complete a function which Implements gradient descent. Update $\theta$ with each iteration of the loop.
As you program, make sure you understand what you are trying to optimize and what is being updated. Keep in mind that the cost $J(\theta)$ is parameterized by the vector $\theta$, not $X$ and $y$. That is, we minimize the value of $J(\theta)$ by changing the values of the vector $\theta$, not by changing $X$ or $y$.
A good way to verify that gradient descent is working correctly is to look at the value of $J(\theta)$ and check that it is decreasing with each step.
```
"""
Performs gradient descent to learn `theta`. Updates theta by taking `num_iters`
gradient steps with learning rate `alpha`.
Parameters
----------
X : array_like
The input dataset of shape (m x n+1).
y : array_like
Value at given features. A vector of shape (m, ), i.e. (mx1) dimensions
theta : array_like
Initial values for the linear regression parameters.
A vector of shape (n+1, ), i.e. (n+1)x1 dimensions
alpha : float
The learning rate.
num_iters : int
The number of iterations for gradient descent.
Returns
-------
theta : array_like
The learned linear regression parameters. A vector of shape (n+1, ). This is the optimal theta
for which J is minimum
J_history : list
A python list for the values of the cost function after each iteration.
Instructions
------------
    Perform a single gradient step on the parameter vector theta.
While debugging, it can be useful to print out the values of
the cost function (computeCost) and gradient here.
"""
def gradient_descent(X,y,theta,alpha,num_iters):
m = y.size # or y.shape[0] # number of training samples
# make a copy of theta, to avoid changing the original array, since numpy arrays are passed by reference to functions
theta = theta.copy()
J_history = [] # Use a python list to store cost in every iteration
for i in range(num_iters):
theta = theta - (alpha/m) * (np.dot(X,theta) - y).dot(X)
# print(theta)
# save the cost J in every iteration
min_cost = computeCost(X,y,theta)
J_history.append(min_cost)
# print(J_history[i])
return theta, J_history # theta will return 2 values --> theta0, theta1
# initialize fitting parameters to zero
theta = np.zeros(2)
# some gradient descent settings
iterations = 1500
alpha = 0.01
theta, J_history = gradient_descent(X,y,theta,alpha,iterations)
print('Theta found by gradient descent: {:.4f}, {:.4f}'.format(*theta)) # unpack theta0, theta1 into the format string
```
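As a quick sanity check on the convergence claim above, we can plot the recorded cost history (a small illustrative cell, not part of the original exercise):
```
# The cost J(theta) should decrease with every iteration for a well-chosen learning rate
plt.figure(figsize=(8, 6))
plt.plot(np.arange(1, iterations + 1), J_history, lw=2)
plt.xlabel('Iteration')
plt.ylabel(r'Cost $J(\theta)$')
plt.title('Convergence of gradient descent');
```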
## 4) Plot the linear fit
```
plotData(X[:,1],y) # plot the samples using only the population column (skip the intercept column x0)
# Linear regression line/hypothesis line of best fit --> y = h(x) = theta0 + theta1*x
# x-values: the population feature; y-values: the fitted hypothesis h(x) = X.dot(theta)
plt.plot(X[:,1],np.dot(X,theta),ls='-')
plt.legend(['Training Data','Linear Regression']); # red markers: training data, line: fitted regression
```
## 5) Predict some values
```
# we now have the optimal theta
# Predict values for population sizes of 35,000 and 70,000
# Note that the first argument to the `numpy` function `dot` is a python list.
# `numpy` can internally convert **valid** python lists to numpy arrays when explicitly provided as arguments to `numpy` functions.
# population is in units of 10,000 and profit in units of $10,000, so 3.5 --> 35,000 people and 1 --> $10,000
predict1 = np.dot([1,3.5],theta)
print(f"For population = 35,000, we predict a profit of {predict1 * 10000:.2f}")
predict2 = np.dot([1,7],theta)
print(f"For population = 35,000, we predict a profit of {predict2 * 10000:.2f}")
```
<a href="https://colab.research.google.com/github/hf2000510/infectious_disease_modelling/blob/master/part_two.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Make sure to open in Colab to see the plots!
### Importing the libraries
```
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
!pip install mpld3
import mpld3
mpld3.enable_notebook()
```
### Plot Function
```
def plotseird(t, S, E, I, R, D=None, L=None, R0=None, Alpha=None, CFR=None):
f, ax = plt.subplots(1,1,figsize=(10,4))
ax.plot(t, S, 'b', alpha=0.7, linewidth=2, label='Susceptible')
ax.plot(t, E, 'y', alpha=0.7, linewidth=2, label='Exposed')
ax.plot(t, I, 'r', alpha=0.7, linewidth=2, label='Infected')
ax.plot(t, R, 'g', alpha=0.7, linewidth=2, label='Recovered')
if D is not None:
ax.plot(t, D, 'k', alpha=0.7, linewidth=2, label='Dead')
ax.plot(t, S+E+I+R+D, 'c--', alpha=0.7, linewidth=2, label='Total')
else:
ax.plot(t, S+E+I+R, 'c--', alpha=0.7, linewidth=2, label='Total')
ax.set_xlabel('Time (days)')
ax.yaxis.set_tick_params(length=0)
ax.xaxis.set_tick_params(length=0)
ax.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax.legend(borderpad=2.0)
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
if L is not None:
plt.title("Lockdown after {} days".format(L))
plt.show();
if R0 is not None or CFR is not None:
f = plt.figure(figsize=(12,4))
if R0 is not None:
# sp1
ax1 = f.add_subplot(121)
ax1.plot(t, R0, 'b--', alpha=0.7, linewidth=2, label='R_0')
ax1.set_xlabel('Time (days)')
ax1.title.set_text('R_0 over time')
# ax.set_ylabel('Number (1000s)')
# ax.set_ylim(0,1.2)
ax1.yaxis.set_tick_params(length=0)
ax1.xaxis.set_tick_params(length=0)
ax1.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax1.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
if Alpha is not None:
# sp2
ax2 = f.add_subplot(122)
ax2.plot(t, Alpha, 'r--', alpha=0.7, linewidth=2, label='alpha')
ax2.set_xlabel('Time (days)')
ax2.title.set_text('fatality rate over time')
# ax.set_ylabel('Number (1000s)')
# ax.set_ylim(0,1.2)
ax2.yaxis.set_tick_params(length=0)
ax2.xaxis.set_tick_params(length=0)
ax2.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax2.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
plt.show();
```
## Basic SIR Equations
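For reference, the function below implements the classic SIR system of ordinary differential equations:
$$\frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad \frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad \frac{dR}{dt} = \gamma I$$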
```
def deriv(y, t, N, beta, gamma):
S, I, R = y
dSdt = -beta * S * I / N
dIdt = beta * S * I / N - gamma * I
dRdt = gamma * I
return dSdt, dIdt, dRdt
```
## The Exposed-Compartment
```
def deriv(y, t, N, beta, gamma, delta):
S, E, I, R = y
dSdt = -beta * S * I / N
dEdt = beta * S * I / N - delta * E
dIdt = delta * E - gamma * I
dRdt = gamma * I
return dSdt, dEdt, dIdt, dRdt
```
### Variables that we define:
```
N = 1_000_000 # total population
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0 = 5.0
beta = R_0 * gamma # R_0 = beta / gamma, so beta = R_0 * gamma
S0, E0, I0, R0 = N-1, 1, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta))
S, E, I, R = ret.T
```
### Plot the result:
```
plotseird(t, S, E, I, R)
```
## Programming the Dead-Compartment
```
def deriv(y, t, N, beta, gamma, delta, alpha, rho):
S, E, I, R, D = y
dSdt = -beta * S * I / N
dEdt = beta * S * I / N - delta * E
dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
dRdt = (1 - alpha) * gamma * I
dDdt = alpha * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
```
### New variables:
```
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0 = 5.0
beta = R_0 * gamma # R_0 = beta / gamma, so beta = R_0 * gamma
alpha = 0.2 # 20% death rate
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha, rho))
S, E, I, R, D = ret.T
```
### Plot the result:
```
plotseird(t, S, E, I, R, D)
```
## Time-Dependent $R_{0}$
### Simple Approach: Single Lockdown
```
def deriv(y, t, N, beta, gamma, delta, alpha, rho):
S, E, I, R, D = y
dSdt = -beta(t) * S * I / N
dEdt = beta(t) * S * I / N - delta * E
dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
dRdt = (1 - alpha) * gamma * I
dDdt = alpha * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
L = 40
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
def R_0(t):
return 5.0 if t < L else 0.9
def beta(t):
return R_0(t) * gamma
alpha = 0.2 # 20% death rate
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha, rho))
S, E, I, R, D = ret.T
```
### Plot the result:
```
plotseird(t, S, E, I, R, D, L)
```
### Advanced Approach: logistic $R_{0}$
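The code below uses the following logistic form, with parameters matching the variables `R_0_start`, `R_0_end`, `k` and `x0` defined in the cell:
$$ R_0(t) = \frac{R_{0,\mathrm{start}} - R_{0,\mathrm{end}}}{1 + e^{-k(x_0 - t)}} + R_{0,\mathrm{end}} $$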
```
### we will use the logistic R in our model, because R probably never “jumps” from one value to another. Rather, it continuously changes.
def deriv(y, t, N, beta, gamma, delta, alpha, rho):
S, E, I, R, D = y
dSdt = -beta(t) * S * I / N
dEdt = beta(t) * S * I / N - delta * E
dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
dRdt = (1 - alpha) * gamma * I
dDdt = alpha * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0_start, k, x0, R_0_end = 5.0, 0.5, 50, 0.5
def logistic_R_0(t):
return (R_0_start-R_0_end) / (1 + np.exp(-k*(-t+x0))) + R_0_end
def beta(t):
return logistic_R_0(t) * gamma
alpha = 0.2 # 20% death rate
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha, rho))
S, E, I, R, D = ret.T
R0_over_time = [logistic_R_0(i) for i in range(len(t))] # to plot R_0 over time: get function values
```
### Plot the result:
```
plotseird(t, S, E, I, R, D, R0=R0_over_time)
```
## Resource- and Age-Dependent Fatality Rate
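In the model below, the fatality rate is no longer constant: it grows with the current share of infected people (a crude proxy for overloaded healthcare resources) on top of an age-weighted baseline. Written out, with $s$ the scaling factor defined as `s` in the code:
$$ \alpha(t) = s \cdot \frac{I(t)}{N} + \alpha_{\mathrm{opt}}, \qquad \alpha_{\mathrm{opt}} = \sum_{g} \alpha_g \, p_g $$
where $\alpha_g$ is the fatality rate of age group $g$ and $p_g$ its share of the population.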
```
def deriv(y, t, N, beta, gamma, delta, alpha_opt, rho):
S, E, I, R, D = y
def alpha(t):
return s * I/N + alpha_opt
dSdt = -beta(t) * S * I / N
dEdt = beta(t) * S * I / N - delta * E
dIdt = delta * E - (1 - alpha(t)) * gamma * I - alpha(t) * rho * I
dRdt = (1 - alpha(t)) * gamma * I
dDdt = alpha(t) * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
### New variables:
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0_start, k, x0, R_0_end = 5.0, 0.5, 50, 0.5
def logistic_R_0(t):
return (R_0_start-R_0_end) / (1 + np.exp(-k*(-t+x0))) + R_0_end
def beta(t):
return logistic_R_0(t) * gamma
alpha_by_agegroup = {"0-29": 0.01, "30-59": 0.05, "60-89": 0.2, "89+": 0.3}
proportion_of_agegroup = {"0-29": 0.1, "30-59": 0.3, "60-89": 0.4, "89+": 0.2}
s = 0.01
alpha_opt = sum(alpha_by_agegroup[i] * proportion_of_agegroup[i] for i in list(alpha_by_agegroup.keys()))
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha_opt, rho))
S, E, I, R, D = ret.T
R0_over_time = [logistic_R_0(i) for i in range(len(t))] # to plot R_0 over time: get function values
Alpha_over_time = [s * I[i]/N + alpha_opt for i in range(len(t))] # to plot alpha over time
```
### Plot the result:
```
plotseird(t, S, E, I, R, D, R0=R0_over_time, Alpha=Alpha_over_time)
```
<a href="https://colab.research.google.com/github/ayulockin/Explore-NFNet/blob/main/Train_Basline_With_Gradient_Clipping.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 🧰 Setups, Installations and Imports
```
%%capture
!pip install wandb --upgrade
!pip install albumentations
!git clone https://github.com/ayulockin/Explore-NFNet
import tensorflow as tf
print(tf.__version__)
import tensorflow_datasets as tfds
import sys
sys.path.append("Explore-NFNet")
import os
import cv2
import numpy as np
from functools import partial
import matplotlib.pyplot as plt
# Imports from the cloned repository
from models.resnet import resnet_v1
from models.mini_vgg import get_mini_vgg
# Augmentation related imports
import albumentations as A
# Seed everything for reproducibility
def seed_everything():
# Set the random seeds
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
np.random.seed(hash("improves reproducibility") % 2**32 - 1)
tf.random.set_seed(hash("by removing stochasticity") % 2**32 - 1)
seed_everything()
# Avoid TensorFlow to allocate all the GPU at once.
# Ref: https://www.tensorflow.org/guide/gpu
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
import wandb
from wandb.keras import WandbCallback
wandb.login()
DATASET_NAME = 'cifar10'
IMG_HEIGHT = 32
IMG_WIDTH = 32
NUM_CLASSES = 10
SHUFFLE_BUFFER = 1024
BATCH_SIZE = 256
EPOCHS = 100
AUTOTUNE = tf.data.experimental.AUTOTUNE
print(f'Global batch size is: {BATCH_SIZE}')
```
# ⛄ Download and Prepare Dataset
```
(train_ds, val_ds, test_ds), info = tfds.load(name=DATASET_NAME,
split=["train[:85%]", "train[85%:]", "test"],
with_info=True,
as_supervised=True)
@tf.function
def preprocess(image, label):
# preprocess image
image = tf.cast(image, tf.float32)
image = image/255.0
return image, label
# Define the augmentation policies. Note that they are applied sequentially with some probability p.
transforms = A.Compose([
A.HorizontalFlip(p=0.7),
A.Rotate(limit=30, p=0.7)
])
# Apply augmentation policies.
def aug_fn(image):
data = {"image":image}
aug_data = transforms(**data)
aug_img = aug_data["image"]
return aug_img
@tf.function
def apply_augmentation(image, label):
aug_img = tf.numpy_function(func=aug_fn, inp=[image], Tout=tf.float32)
aug_img.set_shape((IMG_HEIGHT, IMG_WIDTH, 3))
return aug_img, label
train_ds = (
train_ds
.shuffle(SHUFFLE_BUFFER)
.map(preprocess, num_parallel_calls=AUTOTUNE)
.map(apply_augmentation, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(preprocess, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(preprocess, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10,10))
for n in range(25):
ax = plt.subplot(5,5,n+1)
plt.imshow(image_batch[n])
# plt.title(f'{np.argmax(label_batch[n].numpy())}')
plt.title(f'{label_batch[n].numpy()}')
plt.axis('off')
image_batch, label_batch = next(iter(train_ds))
show_batch(image_batch, label_batch)
print(image_batch.shape, label_batch.shape)
```
# 🐤 Model
```
class ResNetModel(tf.keras.Model):
def __init__(self, resnet):
super(ResNetModel, self).__init__()
self.resnet = resnet
def train_step(self, data):
images, labels = data
with tf.GradientTape() as tape:
predictions = self.resnet(images)
loss = self.compiled_loss(labels, predictions)
trainable_params = self.resnet.trainable_variables
gradients = tape.gradient(loss, trainable_params)
gradients_clipped = [tf.clip_by_norm(g, 0.01) for g in gradients]
self.optimizer.apply_gradients(zip(gradients_clipped, trainable_params))
self.compiled_metrics.update_state(labels, predictions)
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
images, labels = data
predictions = self.resnet(images, training=False)
loss = self.compiled_loss(labels, predictions)
self.compiled_metrics.update_state(labels, predictions)
return {m.name: m.result() for m in self.metrics}
def save_weights(self, filepath):
self.resnet.save_weights(filepath=filepath, save_format="tf")
def call(self, inputs, *args, **kwargs):
return self.resnet(inputs)
tf.keras.backend.clear_session()
test_model = ResNetModel(resnet_v1((IMG_HEIGHT, IMG_WIDTH, 3), 20, num_classes=NUM_CLASSES, use_bn=False))
test_model.build((1, IMG_HEIGHT, IMG_WIDTH, 3))
test_model.summary()
print(f"Total learnable parameters: {test_model.count_params()/1e6} M")
```
# 📲 Callbacks
```
earlystopper = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience=10, verbose=0, mode='auto',
restore_best_weights=True
)
reducelronplateau = tf.keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.5,
patience=3, verbose=1
)
```
# 🚋 Train with W&B
```
tf.keras.backend.clear_session()
# Intialize model
model = ResNetModel(resnet_v1((IMG_HEIGHT, IMG_WIDTH, 3), 20, num_classes=NUM_CLASSES, use_bn=False))
model.compile('adam', 'sparse_categorical_crossentropy', metrics=['acc'])
# Intialize W&B run
run = wandb.init(entity='ayush-thakur', project='nfnet', job_type='train-baseline')
# Train model
model.fit(train_ds,
epochs=EPOCHS,
validation_data=val_ds,
callbacks=[WandbCallback(),
reducelronplateau,
earlystopper])
# Evaluate model on test set
loss, acc = model.evaluate(test_ds)
wandb.log({'Test Accuracy': round(acc, 3)})
# Close W&B run
run.finish()
```

# Hyperparams And Distributions
This page introduces the hyperparams and distributions in Neuraxle. You can find [Hyperparams Distribution API here](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html), and
[Hyperparameter Samples API here](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.space.html).
A hyperparameter is a parameter drawn from a prior distribution. In Neuraxle, we have a few built-in distributions, and we are also compatible with scipy distributions.
Create a [Uniform Distribution](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.Uniform):
```
from neuraxle.hyperparams.distributions import Uniform
hd = Uniform(
min_included=-10,
max_included=10,
null_default_value=0
)
```
Sample the random variable using [rvs](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.rvs):
```
sample = hd.rvs()
print(sample)
```
Nullify the random variable using [nullify](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.nullify):
```
nullified_sample = hd.nullify()
assert nullified_sample == hd.null_default_value
```
Get the probability distribution function value at `x` using [pdf](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.pdf):
```
pdf = hd.pdf(1)
print('pdf: {}'.format(pdf))
```
Get the cumulative probability distribution function value at `x` using [cdf](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.cdf)
```
cdf = hd.cdf(1)
print('cdf: {}'.format(cdf))
```
## Setting And Updating Hyperparams
In Neuraxle, each step has hyperparams of type [HyperparameterSamples](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.space.html#neuraxle.hyperparams.space.HyperparameterSamples), and spaces of type [HyperparameterSpace](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution).
Consider a simple pipeline that contains 2 MultiplyByN steps, and one PCA component inside a nested pipeline:
```
from sklearn.decomposition import PCA
from neuraxle.hyperparams.distributions import RandInt
from neuraxle.hyperparams.space import HyperparameterSpace, HyperparameterSamples
from neuraxle.pipeline import Pipeline
from neuraxle.steps.numpy import MultiplyByN
p = Pipeline([
('step1', MultiplyByN(2)),
('step2', MultiplyByN(2)),
Pipeline([
PCA(n_components=4)
])
])
```
We can set or update the hyperparams, and spaces by doing the following:
```
p.set_hyperparams(HyperparameterSamples({
'step1__multiply_by': 42,
'step2__multiply_by': -10,
'Pipeline__PCA__n_components': 2
}))
p.update_hyperparams(HyperparameterSamples({
'Pipeline__PCA__n_components': 3
}))
p.set_hyperparams_space(HyperparameterSpace({
'step1__multiply_by': RandInt(42, 50),
'step2__multiply_by': RandInt(-10, 0),
'Pipeline__PCA__n_components': RandInt(2, 3)
}))
```
We can sample the space of random variables:
```
samples = p.get_hyperparams_space().rvs()
assert 42 <= samples['step1__multiply_by'] <= 50
assert -10 <= samples['step2__multiply_by'] <= 0
assert samples['Pipeline__PCA__n_components'] in [2, 3]
```
We can get all hyperparams:
```
samples = p.get_hyperparams()
assert 42 <= samples['step1__multiply_by'] <= 50
assert -10 <= samples['step2__multiply_by'] <= 0
assert samples['Pipeline__PCA__n_components'] in [2, 3]
assert p['Pipeline']['PCA'].get_wrapped_sklearn_predictor().n_components in [2, 3]
```
## Neuraxle Custom Distributions
## Scipy Distributions
To define a scipy distribution that is compatible with Neuraxle, you need to wrap the scipy distribution with ScipyDistributionWrapper:
```
from neuraxle.hyperparams.scipy_distributions import ScipyDistributionWrapper, BaseContinuousDistribution, BaseDiscreteDistribution
from scipy.integrate import quad
from scipy.special import factorial
from scipy.stats import rv_continuous, norm, rv_discrete, rv_histogram, truncnorm, randint
import numpy as np
import math
hd = ScipyDistributionWrapper(
scipy_distribution=randint(low=0, high=10),
is_continuous=False,
null_default_value=0
)
```
### Discrete Distributions
For discrete distributions that inherit from [rv_discrete](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.html#scipy.stats.rv_discrete), you only need to implement `_pmf`. The rest is taken care of magically by scipy.
For example, here is a discrete Poisson distribution:
```
class Poisson(BaseDiscreteDistribution):
def __init__(self, min_included: float, max_included: float, null_default_value: float = None, mu=0.6):
super().__init__(
min_included=min_included,
max_included=max_included,
name='poisson',
null_default_value=null_default_value
)
self.mu = mu
def _pmf(self, x):
return math.exp(-self.mu) * self.mu ** x / factorial(x)
```
### Continuous Distributions
For continuous distributions that inherit from [rv_continuous](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html), you only need to implement the `_pdf` function. The rest is taken care of magically by scipy.
For example, here is a continuous Gaussian distribution:
```
class Gaussian(BaseContinuousDistribution):
def __init__(self, min_included: int, max_included: int, null_default_value: float = None):
self.max_included = max_included
self.min_included = min_included
BaseContinuousDistribution.__init__(
self,
name='gaussian',
min_included=min_included,
max_included=max_included,
null_default_value=null_default_value
)
def _pdf(self, x):
return math.exp(-x ** 2 / 2.) / np.sqrt(2.0 * np.pi)
```
### Custom Arguments
If you want to add more properties for calculating your distributions, just add them to `self`. They will be available in all of the scipy private methods you can override, like `_pmf` and `_pdf`.
```
class LogNormal(BaseContinuousDistribution):
def __init__(
self,
log2_space_mean: float,
log2_space_std: float,
hard_clip_min: float,
hard_clip_max: float,
null_default_value: float = None
):
if null_default_value is None:
null_default_value = hard_clip_min
if hard_clip_min is None:
hard_clip_min = np.nan
if hard_clip_max is None:
hard_clip_max = np.nan
self.log2_space_mean = log2_space_mean
self.log2_space_std = log2_space_std
super().__init__(
name='log_normal',
min_included=hard_clip_min,
max_included=hard_clip_max,
null_default_value=null_default_value
)
def _pdf(self, x):
if x <= 0:
return 0.
cdf_min = 0.
cdf_max = 1.
pdf_x = 1 / (x * math.log(2) * self.log2_space_std * math.sqrt(2 * math.pi)) * math.exp(
-(math.log2(x) - self.log2_space_mean) ** 2 / (2 * self.log2_space_std ** 2))
return pdf_x / (cdf_max - cdf_min)
```
### Scipy methods
All of the scipy distribution methods are available:
```
def get_many_samples_for(hd, num_trial):
return [hd.rvs() for _ in range(num_trial)]
samples = get_many_samples_for(hd, 1000)
for s in samples:
assert type(s) == int
hd = Gaussian(min_included=0, max_included=10, null_default_value=0)
assert 0.0 <= hd.rvs() <= 10.0
assert hd.pdf(10) < 0.001
assert hd.pdf(0) < 0.42
assert 0.55 > hd.cdf(5.0) > 0.45
assert hd.cdf(0) == 0.0
assert hd.logpdf(5) == -13.418938533204672
assert hd.logcdf(5) == -0.6931477538632531
assert hd.sf(5) == 0.5000002866515718
assert hd.logsf(5) == -0.693146607256966
assert np.all(hd.ppf([0.0, 0.01, 0.05, 0.1, 1 - 0.10, 1 - 0.05, 1 - 0.01, 1.0], 10))
assert np.isclose(hd.moment(2), 50.50000000091249)
assert hd.stats()[0]
assert hd.stats()[1]
assert np.array_equal(hd.entropy(), np.array(0.7094692666023363))
assert hd.median()
assert hd.mean() == 5.398942280397029
assert np.isclose(hd.std(), 4.620759921685374)
assert np.isclose(hd.var(), 21.35142225385382)
assert np.isclose(hd.expect(), 0.39894228040143276)
interval = hd.interval(alpha=[0.25, 0.50])
assert np.all(interval[0])
assert np.all(interval[1])
assert hd.support() == (0, 10)
```
## SKLearn Hyperparams
SKLearnWrapper wraps sklearn predictors so that they can be compatible with Neuraxle. When you set the hyperparams of an SKLearnWrapper, it automatically sets the params of the sklearn predictor for you:
```
from neuraxle.hyperparams.distributions import Choice
from neuraxle.hyperparams.distributions import RandInt
from neuraxle.hyperparams.space import HyperparameterSpace
from neuraxle.steps.sklearn import SKLearnWrapper
from sklearn.tree import DecisionTreeClassifier
decision_tree_classifier = SKLearnWrapper(
DecisionTreeClassifier(),
HyperparameterSpace({
'criterion': Choice(['gini', 'entropy']),
'splitter': Choice(['best', 'random']),
'min_samples_leaf': RandInt(2, 5),
'min_samples_split': RandInt(1, 3)
})
).set_hyperparams(HyperparameterSamples({
'criterion': 'gini',
'splitter': 'best',
'min_samples_leaf': 3,
'min_samples_split': 3
}))
```
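As a quick check (using only accessors that already appear earlier in this notebook), the wrapped sklearn predictor now reflects those values:
```
print(decision_tree_classifier.get_hyperparams())
print(decision_tree_classifier.get_wrapped_sklearn_predictor().min_samples_leaf)  # expected: 3
```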
# Hyperparameter Tuning using SageMaker Tensorflow Container
This tutorial focuses on how to create a convolutional neural network model to train the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using **SageMaker TensorFlow container**. It leverages hyperparameter tuning to kick off multiple training jobs with different hyperparameter combinations, to find the one with best model training result.
## Set up the environment
We will set up a few things before starting the workflow.
1. specify the s3 bucket and prefix where training data set and model artifacts will be stored
2. get the execution role which will be passed to sagemaker for accessing your resources such as s3 bucket
```
import sagemaker
import project_path
from lib import utils
bucket = '{{s3_workshop_bucket}}'
prefix = 'sagemaker/DEMO-hpo-tensorflow-high' # you can customize the prefix (subfolder) here
role = sagemaker.get_execution_role() # we are using the notebook instance role for training in this example
```
Now we'll import the Python libraries we'll need.
```
import boto3
from time import gmtime, strftime
from sagemaker.tensorflow import TensorFlow
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
```
## Download the MNIST dataset
```
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf
data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```
## Upload the data
We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value identifies the location -- we will use this later when we start the training job.
```
inputs = sagemaker.Session().upload_data(path='data', bucket=bucket, key_prefix=prefix+'/data/mnist')
print (inputs)
```
## Construct a script for distributed training
Here is the full code for the network model:
```
!cat '../scripts/mnist.py'
```
The script here is an adaptation of the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist). It provides a ```model_fn(features, labels, mode)```, which is used for training, evaluation and inference.
### A regular ```model_fn```
A regular **```model_fn```** follows the pattern:
1. [defines a neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L96)
- [applies the ```features``` in the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L178)
- [if the ```mode``` is ```PREDICT```, returns the output from the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L186)
- [calculates the loss function comparing the output with the ```labels```](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L188)
- [creates an optimizer and minimizes the loss function to improve the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L193)
- [returns the output, optimizer and loss function](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L205)
### Writing a ```model_fn``` for distributed training
When distributed training happens, the same neural network will be sent to the multiple training instances. Each instance will predict a batch of the dataset, calculate loss and minimize the optimizer. One entire loop of this process is called **training step**.
#### Synchronizing training steps
A [global step](https://www.tensorflow.org/api_docs/python/tf/train/global_step) is a global variable shared between the instances. It is necessary for distributed training, so that the optimizer keeps track of the number of **training steps** across runs:
```python
train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
```
That is the only required change for distributed training!
## Set up hyperparameter tuning job
*Note, with the default setting below, the hyperparameter tuning job can take about 30 minutes to complete.*
Now we will set up the hyperparameter tuning job using the SageMaker Python SDK, following the steps below:
* Create an estimator to set up the TensorFlow training job
* Define the ranges of the hyperparameters we plan to tune; in this example we are tuning "learning_rate"
* Define the objective metric for the tuning job to optimize
* Create a hyperparameter tuner with the above settings, as well as tuning resource configurations
Similar to training a single TensorFlow job in SageMaker, we define our TensorFlow estimator passing in the TensorFlow script, IAM role, and (per job) hardware configuration.
```
estimator = TensorFlow(entry_point='../scripts/mnist.py',
role=role,
framework_version='1.11.0',
training_steps=1000,
evaluation_steps=100,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
base_job_name='DEMO-hpo-tensorflow')
```
Once we've defined our estimator we can specify the hyperparameters we'd like to tune and their possible values. We have three different types of hyperparameters.
- Categorical parameters need to take one value from a discrete set. We define this by passing the list of possible values to `CategoricalParameter(list)`
- Continuous parameters can take any real number value between the minimum and maximum value, defined by `ContinuousParameter(min, max)`
- Integer parameters can take any integer value between the minimum and maximum value, defined by `IntegerParameter(min, max)`
*Note, if possible, it's almost always best to specify a value as the least restrictive type. For example, tuning learning rate as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning as a categorical parameter with values 0.01, 0.1, 0.15, or 0.2.*
```
hyperparameter_ranges = {'learning_rate': ContinuousParameter(0.01, 0.2)}
```
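For illustration only, here is how the other two parameter types could be declared if we tuned more hyperparameters. Note that `batch_size` and `training_steps` are hypothetical names here and are not necessarily read by `../scripts/mnist.py`:
```
extended_ranges = {'learning_rate': ContinuousParameter(0.01, 0.2),
                   'batch_size': CategoricalParameter([64, 128, 256]),
                   'training_steps': IntegerParameter(500, 2000)}
```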
Next we'll specify the objective metric that we'd like to tune and its definition, which includes the regular expression (regex) needed to extract that metric from the CloudWatch logs of the training job. In this particular case, our script emits a loss value that we will use as the objective metric. We also set objective_type to 'Minimize', so that hyperparameter tuning seeks to minimize the objective metric when searching for the best hyperparameter setting. By default, objective_type is set to 'Maximize'.
```
objective_metric_name = 'loss'
objective_type = 'Minimize'
metric_definitions = [{'Name': 'loss',
'Regex': 'loss = ([0-9\\.]+)'}]
```
Now, we'll create a `HyperparameterTuner` object, to which we pass:
- The TensorFlow estimator we created above
- Our hyperparameter ranges
- Objective metric name and definition
- Tuning resource configurations, such as the total number of training jobs to run and how many of them can run in parallel.
```
tuner = HyperparameterTuner(estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions,
max_jobs=9,
max_parallel_jobs=3,
objective_type=objective_type)
```
## Launch hyperparameter tuning job
And finally, we can start our hyperparameter tuning job by calling `.fit()` and passing in the S3 path to our train and test datasets.
After the hyperparameter tuning job is created, you should be able to describe it to see its progress in the next step, and you can go to the SageMaker console -> Jobs to check on the progress of the hyperparameter tuning job.
```
tuner.fit(inputs)
```
Let's just run a quick check of the hyperparameter tuning jobs status to make sure it started successfully.
```
boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
```
## Analyze tuning job results - after tuning job is completed
Please refer to "HPO_Analyze_TuningJob_Results.ipynb" to see example code to analyze the tuning job results.
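If you prefer to stay in this notebook, a minimal sketch of pulling the results into a DataFrame with the SageMaker SDK's analytics helper (assuming the tuning job has finished) could look like this:
```
# Illustrative sketch: summarize the completed tuning job's training jobs.
tuner_analytics = sagemaker.HyperparameterTuningJobAnalytics(tuner.latest_tuning_job.job_name)
df_results = tuner_analytics.dataframe()
df_results.sort_values('FinalObjectiveValue').head()
```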
## Deploy the best model
Now that we have got the best model, we can deploy it to an endpoint. Please refer to other SageMaker sample notebooks or SageMaker documentation to see how to deploy a model.
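As a rough sketch (the instance type and count below are illustrative choices), the tuner can deploy the model from its best training job directly:
```
# Illustrative only: deploy the best model found by the tuning job to an endpoint.
predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```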
## Common plotting pitfalls that get worse with large data
When working with large datasets, visualizations are often the only way available to understand the properties of that dataset -- there are simply too many data points to examine each one! Thus it is very important to be aware of some common plotting problems that are minor inconveniences with small datasets but very serious problems with larger ones.
We'll cover:
1. [Overplotting](#1.-Overplotting)
2. [Oversaturation](#2.-Oversaturation)
3. [Undersampling](#3.-Undersampling)
4. [Undersaturation](#4.-Undersaturation)
5. [Underutilized range](#5.-Underutilized-range)
6. [Nonuniform colormapping](#6.-Nonuniform-colormapping)
You can [skip to the end](#Summary) if you just want to see an illustration of these problems.
This notebook requires [HoloViews](http://holoviews.org), [colorcet](https://github.com/bokeh/colorcet), and matplotlib, and optionally scikit-image, which can be installed with:
```
conda install -c bokeh -c ioam holoviews colorcet matplotlib scikit-image
```
We'll first load the plotting libraries and set up some defaults:
```
import numpy as np
np.random.seed(42)
import holoviews as hv
from holoviews.operation.datashader import datashade
from holoviews import opts, dim
hv.extension('matplotlib')
from colorcet import fire
datashade.cmap=fire[50:]
opts.defaults(
opts.Image(cmap="gray_r", axiswise=True),
opts.Points(cmap="bwr", edgecolors='k', s=50, alpha=1.0), # Remove color_index=2
opts.RGB(bgcolor="black", show_grid=False),
opts.Scatter3D(color=dim('c'), fig_size=250, cmap='bwr', edgecolor='k', s=50, alpha=1.0)) #color_index=3
```
### 1. Overplotting
Let's consider plotting some 2D data points that come from two separate categories, here plotted as blue and red in **A** and **B** below. When the two categories are overlaid, the appearance of the result can be very different depending on which one is plotted first:
```
def blue_points(offset=0.5, pts=300):
    blues = (np.random.normal(offset, size=pts), np.random.normal(offset, size=pts), -1 * np.ones((pts)))
    return hv.Points(blues, vdims=['c']).opts(color=dim('c'))

def red_points(offset=0.5, pts=300):
    reds = (np.random.normal(-offset, size=pts), np.random.normal(-offset, size=pts), 1 * np.ones((pts)))
    return hv.Points(reds, vdims=['c']).opts(color=dim('c'))
blues, reds = blue_points(), red_points()
blues + reds + (reds * blues) + (blues * reds)
```
Plots **C** and **D** show the same distribution of points, yet they give a very different impression of which category is more common, which can lead to incorrect decisions based on this data. Of course, both are equally common in this case, so neither **C** nor **D** accurately reflects the data. The cause for this problem is simply occlusion:
```
hmap = hv.HoloMap({0:blues,0.000001:reds,1:blues,2:reds}, kdims=['level'])
hv.Scatter3D(hmap.table(), kdims=['x','y','level'], vdims=['c'])
```
Occlusion of data by other data is called **overplotting** or **overdrawing**, and it occurs whenever a datapoint or curve is plotted on top of another datapoint or curve, obscuring it. It's thus a problem not just for scatterplots, as here, but for curve plots, 3D surface plots, 3D bar graphs, and any other plot type where data can be obscured.
### 2. Oversaturation
You can reduce problems with overplotting by using transparency/opacity, via the alpha parameter provided to control opacity in most plotting programs. E.g. if alpha is 0.1, full color saturation will be achieved only when 10 points overlap, reducing the effects of plot ordering but making it harder to see individual points:
```
layout = blues + reds + (reds * blues) + (blues * reds)
layout.opts(opts.Points(s=50, alpha=0.1))
```
Here **C **and **D **look very similar (as they should, since the distributions are identical), but there are still a few locations with **oversaturation**, a problem that will occur when more than 10 points overlap. In this example the oversaturated points are located near the middle of the plot, but the only way to know whether they are there would be to plot both versions and compare, or to examine the pixel values to see if any have reached full saturation (a necessary but not sufficient condition for oversaturation). Locations where saturation has been reached have problems similar to overplotting, because only the last 10 points plotted will affect the final color (for alpha of 0.1).
Worse, even if one has set the alpha value to approximately or usually avoid oversaturation, as in the plot above, the correct value depends on the dataset. If there are more points overlapping in that particular region, a manually adjusted alpha setting that worked well for a previous dataset will systematically misrepresent the new dataset:
```
blues, reds = blue_points(pts=600), red_points(pts=600)
layout = blues + reds + (reds * blues) + (blues * reds)
layout.opts(opts.Points(alpha=0.1))
```
Here **C **and **D **again look qualitatively different, yet still represent the same distributions. Since we're assuming that the point of the visualization is to reveal the underlying dataset, having to tune visualization parameters manually based on the properties of the dataset itself is a serious problem.
To make it even more complicated, the correct alpha also depends on the dot size, because smaller dots have less overlap for the same dataset. With smaller dots, **C **and **D **look more similar, but the color of the dots is now difficult to see in all cases because the dots are too transparent for this size:
```
layout = blues + reds + (reds * blues) + (blues * reds)
layout.opts(opts.Points(s=10, alpha=0.1, edgecolor=None))
```
As you can see, it is very difficult to find settings for the dotsize and alpha parameters that correctly reveal the data, even for relatively small and obvious datasets like these. With larger datasets with unknown contents, it is difficult to detect that such problems are occuring, leading to false conclusions based on inappropriately visualized data.
### 3. Undersampling
With a single category instead of the multiple categories shown above, oversaturation simply obscures spatial differences in density. For instance, 10, 20, and 2000 single-category points overlapping will all look the same visually, for alpha=0.1. Let's again consider an example that has a sum of two normal distributions slightly offset from one another, but no longer using color to separate them into categories:
```
def gaussians(specs=[(1.5, 0, 1.0), (-1.5, 0, 1.0)], num=100):
    """
    A concatenated list of points taken from 2D Gaussian distributions.
    Each distribution is specified as a tuple (x,y,s), where x,y is the mean
    and s is the standard deviation. Defaults to two horizontally
    offset unit-mean Gaussians.
    """
    np.random.seed(1)
    dists = [(np.random.normal(x, s, num), np.random.normal(y, s, num)) for x, y, s in specs]
    return np.hstack([d[0] for d in dists]), np.hstack([d[1] for d in dists])
points = (hv.Points(gaussians(num=600), label="600 points", group="Small dots") +
hv.Points(gaussians(num=60000), label="60000 points", group="Small dots") +
hv.Points(gaussians(num=600), label="600 points", group="Tiny dots") +
hv.Points(gaussians(num=60000), label="60000 points", group="Tiny dots"))
points.opts(
opts.Points('Small_dots', s=1, alpha=1),
opts.Points('Tiny_dots', s=0.1, alpha=0.1))
```
Just as shown for the multiple-category case above, finding settings to avoid overplotting and oversaturation is difficult. The "Small dots" setting (size 1, full alpha) works fairly well for a sample of 600 points **A,** but it has serious overplotting issues for larger datasets, obscuring the shape and density of the distribution **B.** Using the "Tiny dots" setting (10 times smaller dots, alpha 0.1) works well for the larger dataset **D,** but not at all for the 600-point dataset **C.** Clearly, not all of these settings are accurately conveying the underlying distribution, as they all appear quite different from one another. Similar problems occur for the same size of dataset, but with greater or lesser levels of overlap between points, which of course varies with every new dataset.
In any case, as dataset size increases, at some point plotting a full scatterplot like any of these will become impractical with current plotting software. At this point, people often simply subsample their dataset, plotting 10,000 or perhaps 100,000 randomly selected datapoints. But as panel **A **shows, the shape of an **undersampled** distribution can be very difficult or impossible to make out, leading to incorrect conclusions about the distribution. Such problems can occur even when taking very large numbers of samples, if examining sparsely populated regions of the space, which will approximate panel **A **for some plot settings and panel **C **for others. The actual shape of the distribution is only visible if sufficient datapoints are available in that region *and* appropriate plot settings are used, as in **D,** but ensuring that both conditions are true is a quite difficult process of trial and error, making it very likely that important features of the dataset will be missed.
To avoid undersampling large datasets, researchers often use 2D histograms visualized as heatmaps, rather than scatterplots showing individual points. A heatmap has a fixed-size grid regardless of the dataset size, so that they can make use of all the data. Heatmaps effectively approximate a probability density function over the specified space, with coarser heatmaps averaging out noise or irrelevant variations to reveal an underlying distribution, and finer heatmaps able to represent more details in the distribution.
Let's look at some heatmaps with different numbers of bins for the same two-Gaussians distribution:
```
def heatmap(coords, bins=10, offset=0.0, transform=lambda d, m: d, label=None):
    """
    Given a set of coordinates, bins them into a 2d histogram grid
    of the specified size, and optionally transforms the counts
    and/or compresses them into a visible range starting at a
    specified offset between 0 and 1.0.
    """
    hist, xs, ys = np.histogram2d(coords[0], coords[1], bins=bins)
    counts = hist[:, ::-1].T
    transformed = transform(counts, counts != 0)
    span = transformed.max() - transformed.min()
    compressed = np.where(counts != 0, offset + (1.0 - offset) * transformed / span, 0)
    args = dict(label=label) if label else {}
    return hv.Image(compressed, bounds=(xs[-1], ys[-1], xs[1], ys[1]), **args)
hv.Layout([heatmap(gaussians(num=60000),bins) for bins in [8,20,200]])
```
As you can see, a too-coarse binning grid **A **cannot represent this distribution faithfully, but with enough bins **C,** the heatmap will approximate a tiny-dot scatterplot like plot **D **in the previous figure. For intermediate grid sizes **B **the heatmap can average out the effects of undersampling; **B **is actually a more faithful representation of the *distribution* than **C **is (which we know is two offset 2D Gaussians), while **C **more faithfully represents the *sampling* (i.e., the individual points drawn from this distribution). Thus choosing a good binning grid size for a heatmap does take some expertise and knowledge of the goals of the visualization, and it's always useful to look at multiple binning-grid spacings for comparison. Still, at least the binning parameter is something meaningful at the data level (how coarse a view of the data is desired?) rather than just a plotting detail (what size and transparency should I use for the points?) that must be determined arbitrarily.
In any case, at least in principle, the heatmap approach can entirely avoid the first three problems above: **overplotting** (since multiple data points sum arithmetically into the grid cell, without obscuring one another), **oversaturation** (because the minimum and maximum counts observed can automatically be mapped to the two ends of a visible color range), and **undersampling** (since the resulting plot size is independent of the number of data points, allowing it to use an unbounded amount of incoming data).
### 4. Undersaturation
Of course, heatmaps come with their own plotting pitfalls. One rarely appreciated issue common to both heatmaps and alpha-based scatterplots is **undersaturation**, where large numbers of data points can be missed entirely because they are spread over many different heatmap bins or many nearly transparent scatter points. To look at this problem, let's again consider a set of multiple 2D Gaussians, but this time with different amounts of spread (standard deviation):
```
dist = gaussians(specs=[(2,2,0.02), (2,-2,0.1), (-2,-2,0.5), (-2,2,1.0), (0,0,3)],num=10000)
hv.Points(dist) + hv.Points(dist).opts(s=0.1) + hv.Points(dist).opts(s=0.01, alpha=0.05)
```
Plots **A,** **B,** and **C **are all scatterplots for the same data, which is a sum of 5 Gaussian distributions at different locations and with different standard deviations:
1. Location (2,2): very narrow spread
2. Location (2,-2): narrow spread
3. Location (-2,-2): medium spread
4. Location (-2,2): large spread
5. Location (0,0): very large spread
In plot **A,** of course, the very large spread covers up everything else, completely obscuring the structure of this dataset by overplotting. Plots **B **and **C **reveal the structure better, but they required hand tuning and neither one is particularly satisfactory. In **B **there are four clearly visible Gaussians, but all but the largest appear to have the same density of points per pixel, which we know is not the case from how the dataset was constructed, and the smallest is nearly invisible. Each of the five Gaussians has the same number of data points (10000), but the second-largest looks like it has more than the others, and the narrowest one is likely to be overlooked altogether, which is thus a clear example of oversaturation obscuring important features. Yet if we try to combat the oversaturation by using transparency in **C,** we now get a clear problem with **undersaturation** -- the "very large spread" Gaussian is now essentially invisible. Again, there are just as many datapoints in that category, but we'd never even know they were there if only looking at **C.**
Similar problems occur for a heatmap view of the same data:
```
hv.Layout([heatmap(dist,bins) for bins in [8,20,200]])
```
Here the narrow-spread distributions lead to pixels with a very high count, and if the other pixels are linearly ramped into the available color range, from zero to that high count value, then the wider-spread values are obscured (as in **B **) or entirely invisible (as in **C **).
To avoid undersaturation, you can add an offset to ensure that low-count (but nonzero) bins are mapped into a visible color, with the remaining intensity scale used to indicate differences in counts:
```
hv.Layout([heatmap(dist,bins,offset=0.2) for bins in [8,20,200]]).cols(4)
```
Such mapping entirely avoids undersaturation, since all pixels are either clearly zero (in the background color, i.e. white in this case), or a non-background color taken from the colormap. The widest-spread Gaussian is now clearly visible in all cases.
However, the actual structure (5 Gaussians of different spreads) is still not visible. In **A **the problem is clearly too-coarse binning, but in **B **the binning is also somewhat too coarse for this data, since the "very narrow spread" and "narrow spread" Gaussians show up identically, each mapping entirely into a single bin (the two black pixels). **C **shouldn't suffer from too-coarse binning, yet it still looks more like a plot of the "very large spread" distribution alone, than a plot of these five distributions of different spreads, and it is thus still highly misleading despite the correction for undersaturation.
### 5. Underutilized range
So, what is the problem in plot **C **above? By construction, we've avoided the first four pitfalls: **overplotting**, **oversaturation**, **undersampling**, and **undersaturation**. But the problem is now more subtle: differences in datapoint density are not visible between the five Gaussians, because all or nearly all pixels end up being mapped into either the bottom end of the visible range (light gray), or the top end (black, used only for the single pixel holding the "very narrow spread" distribution). The entire rest of the visible colors in this gray colormap are unused, conveying no information to the viewer about the rich structure that we know this distribution contains. If the data were uniformly distributed over the range from minimum to maximum counts per pixel (0 to 10,000, in this case), then the above plot would work well, but that's not the case for this dataset or for most real-world datasets.
So, let's try transforming the data from its default linear representation (integer count values) into something that preserves relative differences in count values but maps them into visually distinct colors. A logarithmic transformation is one common choice:
```
hv.Layout([heatmap(dist,bins,offset=0.2,transform=lambda d,m: np.where(m,np.log1p(d),0)) for bins in [8,20,200]])
```
Aha! We can now see the full structure of the dataset, with all five Gaussians clearly visible in **B **and **C,** and the relative spreads also clearly visible in **C.**
We still have a problem, though. The choice of a logarithmic transform was fairly arbitrary, and it mainly works well because we happened to have used an approximately geometric progression of spread sizes when constructing the example. For large datasets with truly unknown structure, can we have a more principled approach to mapping the dataset values into a visible range?
Yes, if we think of the visualization problem in a different way. The underlying difficulty in plotting this dataset (as for very many real-world datasets) is that the values in each bin are numerically very different (ranging from 10,000, in the bin for the "very narrow spread" Gaussian, to 1 (for single datapoints from the "very large spread" Gaussian)). Given the 256 gray levels available in a normal monitor (and the similarly limited human ability to detect differences in gray values), numerically mapping the data values into the visible range is not going to work well. But given that we are already backing off from a direct numerical mapping in the above approaches for correcting undersaturation and for doing log transformations, what if we entirely abandon the numerical mapping approach, using the numbers only to form a partial ordering of the data values? Such an approach would be a rank-order plot, preserving order and not magnitudes. For 100 gray values, you can think of it as a percentile-based plot, with the lowest 1% of the data values mapping to the first visible gray value, the next 1% mapping to the next visible gray value, and so on to the top 1% of the data values mapping to the gray value 255 (black in this case). The actual data values would be ignored in such plots, but their relative magnitudes would still determine how they map onto colors on the screen, preserving the structure of the distribution rather than the numerical values.
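Before reaching for an image-processing library, here is a minimal NumPy sketch of the rank-order idea (illustrative only; it ignores ties and the zero-count background, which the histogram-equalization approach used below handles better):
```
import numpy as np

def rank_order_normalize(counts):
    """Map each pixel's count to its rank, normalized into [0, 1]."""
    flat = counts.ravel()
    ranks = flat.argsort().argsort()            # rank of each count value
    return (ranks / max(len(flat) - 1, 1)).reshape(counts.shape)
```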
We can approximate such a rank-order or percentile encoding using the histogram equalization function from an image-processing package, which makes sure that each gray level is used for about the same number of pixels in the plot:
```
try:
    from skimage.exposure import equalize_hist
    eq_hist = lambda d, m: equalize_hist(1000 * d, nbins=100000, mask=m)
except ImportError:
    eq_hist = lambda d, m: d
    print("scikit-image not installed; skipping histogram equalization")
hv.Layout([heatmap(dist,bins,transform=eq_hist) for bins in [8,20,200]])
```
Plot **C** now reveals the full structure that we know was in this dataset, i.e. five Gaussians with different spreads, with no arbitrary parameter choices. (Well, there is a "number of bins" parameter for building the histogram for equalizing, but for integer data like this even that parameter can be eliminated entirely.) The differences in counts between pixels are now very clearly visible, across the full (and very wide) range of counts in the original data.
Of course, we've lost the actual counts themselves, and so we can no longer tell just how many datapoints are in the "very narrow spread" pixel in this case. So plot **C** is accurately conveying the structure, but additional information would need to be provided to show the actual counts, by adding a color key mapping from the visible gray values into the actual counts and/or by providing hovering value information.
At this point, one could also consider explicitly highlighting hotspots so that they cannot be overlooked. In plots B and C above, the two highest-density pixels are mapped to the two darkest pixel colors, which can reveal problems with your monitor settings if they were adjusted to make dark text appear blacker. Thus on those monitors, the highest values may not be clearly distinguishable from each other or from nearby grey values, which is a possible downside to fully utilizing the dynamic range available. But once the data is reliably and automatically mapped into a repeatable, reliable, fully utilized range for display, making explicit adjustments (e.g. based on wanting to make hotspots particularly clear) can be done in a principled way that doesn't depend on the actual data distribution (e.g. by just making the top few pixel values into a different color, or by stretching out those portions of the color map to show the extremes more safely across different monitors). Before getting into such specialized manipulations, there's a big pitfall to avoid first:
### 6. Nonuniform colormapping
Let's say you've managed avoid pitfalls 1-5 somehow. However, there is one more problem waiting to catch you at the last stage, ruining all of your work eliminating the other issues: using a perceptually non-uniform colormap. A heatmap requires a colormap before it can be visualized, i.e., a lookup table from a data value (typically a normalized magnitude in the range 0 to 1) to a pixel color. The goal of a scientific visualization is to reveal the underlying properties of the data to your visual system, and to do so it is necessary to choose colors for each pixel that lead the viewer to perceive that data faithfully. Unfortunately, most of the colormaps in common use in plotting programs are highly *non*uniform.
For instance, in "jet" (the default colormap for matlab and matplotlib until 2015), a large range of data values will all appear in shades of green that are perceptually indistinguishable, and similarly for the yellow regions of their "hot" colormaps:

In this image, a good colormap would have "teeth" equally visible at all data values, as for the perceptually uniform equivalents from the [colorcet](https://github.com/bokeh/colorcet) package:

We can easily see these effects if we look at our example dataset after histogram equalization, where all the different data levels are known to be distributed evenly in the array of normalized magnitudes:
```
hv.Layout([heatmap(dist,200,transform=eq_hist,label=cmap).opts(cmap=cmap) for cmap in ["hot","fire"]]).cols(2)
```
Comparing **A ** to **B **it should be clear that the "fire" colormap is revealing much more of the data, accurately rendering the density differences between each of the different blobs. The unsuitable "hot" colormap is mapping all of the high density regions to perceptually indistinguishable shades of bright yellow/white, giving an "oversaturated" appearance even though we know the underlying heatmap array is *not* oversaturated (by construction). Luckily it is easy to avoid this problem; just use one of the 50 perceptually uniform colormaps available in the [colorcet](https://github.com/bokeh/colorcet) package, one of the four shipped with matplotlib [(viridis, plasma, inferno, or magma)](https://bids.github.io/colormap), or the Parula colormap shipped with Matlab.
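For reference, a tiny sketch of how these colormaps are exposed (colorcet provides each colormap both as a list of hex colors and, under the assumed `m_` prefix, as a matplotlib colormap object):
```
import colorcet as cc
print(len(cc.fire))   # the "fire" colormap as a list of 256 hex colors
print(cc.m_fire)      # the same colormap as a matplotlib colormap object
```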
## Summary
Starting with plots of specific datapoints, we showed how typical visualization techniques will systematically misrepresent the distribution of those points. Here's an example of each of those six problems, all for the same distribution:
```
layout = (hv.Points(dist,label="1. Overplotting") +
hv.Points(dist,label="2. Oversaturation").opts(s=0.1,alpha=0.5) +
hv.Points((dist[0][::200],dist[1][::200]),label="3. Undersampling").opts(s=2,alpha=0.5) +
hv.Points(dist,label="4. Undersaturation").opts(s=0.01,alpha=0.05) +
heatmap(dist,200,offset=0.2,label="5. Underutilized dynamic range") +
heatmap(dist,200,transform=eq_hist,label="6. Nonuniform colormapping").opts(cmap="hot"))
layout.opts(
opts.Points(axiswise=False),
opts.Layout(sublabel_format="", tight=True)).cols(3)
```
Here we could avoid each of these problems by hand, using trial and error based on our knowledge about the underlying dataset, since we created it. But for big data in general, these issues are major problems, because you don't know what the data *should* look like. Thus:
#### For big data, you don't know when the viz is lying
I.e., visualization is supposed to help you explore and understand your data, but if your visualizations are systematically misrepresenting your data because of **overplotting**, **oversaturation**, **undersampling**, **undersaturation**, **underutilized range**, and **nonuniform colormapping**, then you won't be able to discover the real qualities of your data and will be unable to make the right decisions.
Luckily, using the systematic approach outlined in this discussion, you can avoid *all* of these pitfalls, allowing you to render your data faithfully without requiring *any* "magic parameters" that depend on your dataset:
```
heatmap(dist,200,transform=eq_hist).opts(cmap="fire")
```
### [Datashader](https://github.com/bokeh/datashader)
The steps above show how to avoid the six main plotting pitfalls by hand, but it can be awkward and relatively slow to do so. Luckily there is a new Python library available to automate and optimize these steps, named [Datashader](https://github.com/bokeh/datashader). Datashader avoids users having to make dataset-dependent decisions and parameter settings when visualizing a new dataset. Datashader makes it practical to create accurate visualizations of datasets too large to understand directly, up to a billion points on a normal laptop and larger datasets on a compute cluster. As a simple teaser, the above steps can be expressed very concisely using the Datashader interface provided by [HoloViews](http://holoviews.org):
```
hv.output(size=200)
datashade(hv.Points(dist))
```
Without any change to the settings, the same command will work with dataset sizes too large for most plotting programs, like this 50-million-point version of the distribution:
```
dist = gaussians(specs=[(2,2,0.02), (2,-2,0.1), (-2,-2,0.5), (-2,2,1.0), (0,0,3)], num=10000000)
datashade(hv.Points(dist))
```
See the [Datashader web site](https://raw.githubusercontent.com/bokeh/datashader/master/examples/README.md) for details and examples to help you get started.
# Breast Cancer Diagnosis
In this notebook we will apply the LogitBoost algorithm to a toy dataset to classify cases of breast cancer as benign or malignant.
## Imports
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='darkgrid', palette='colorblind', color_codes=True)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
from sklearn.manifold import TSNE
from logitboost import LogitBoost
```
## Loading the Data
The breast cancer dataset imported from [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html) contains 569 samples with 30 real, positive features (including cancer mass attributes like mean radius, mean texture, mean perimeter, et cetera).
Of the samples, 212 are labeled "malignant" and 357 are labeled "benign".
We load this data into a 569-by-30 feature matrix and a 569-dimensional target vector.
Then we randomly shuffle the data and designate two thirds for training and one third for testing.
```
data = load_breast_cancer()
X = data.data
y = data.target_names[data.target]
n_classes = data.target_names.size  # number of distinct classes (benign/malignant)
# Shuffle data and split it into training/testing samples
test_size = 1 / 3
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size,
shuffle=True, stratify=y,
random_state=0)
```
## Visualizing the Training Set
Although the features are 30-dimensional, we can visualize the training set by using [t-distributed stochastic neighbor embedding](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) (t-SNE) to project the features onto a 2-dimensional space.
```
tsne = TSNE(n_components=2, random_state=0)
X_train_tsne = tsne.fit_transform(X_train)
plt.figure(figsize=(10, 8))
mask_benign = (y_train == 'benign')
mask_malignant = (y_train == 'malignant')
plt.scatter(X_train_tsne[mask_benign, 0], X_train_tsne[mask_benign, 1],
marker='s', c='g', label='benign', edgecolor='k', alpha=0.7)
plt.scatter(X_train_tsne[mask_malignant, 0], X_train_tsne[mask_malignant, 1],
marker='o', c='r', label='malignant', edgecolor='k', alpha=0.7)
plt.title('t-SNE plot of the training data')
plt.xlabel('1st embedding axis')
plt.ylabel('2nd embedding axis')
plt.legend(loc='best', frameon=True, shadow=True)
plt.tight_layout()
plt.show()
plt.close()
```
## Fitting the LogitBoost Model
Next, we initialize a LogitBoost classifier and fit it to the training data.
By default, LogitBoost uses decision stumps (decision trees with depth 1, i.e., a single split) as its base estimator.
```
lboost = LogitBoost(n_estimators=200, random_state=0)
lboost.fit(X_train, y_train)
```
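The weak learner can also be swapped out. The following is a sketch only (it assumes, as in the `logitboost` package's API, that `base_estimator` accepts a scikit-learn regressor); whether deeper trees actually help on this dataset is not claimed here.
```
from sklearn.tree import DecisionTreeRegressor

# Illustrative variant: depth-2 trees instead of the default decision stumps.
lboost_depth2 = LogitBoost(DecisionTreeRegressor(max_depth=2),
                           n_estimators=200, random_state=0)
lboost_depth2.fit(X_train, y_train)
```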
## Prediction Accuracy
As a first indicator of how well the model predicts the correct labels, we can check its accuracy score (number of correct predictions over the number of total predictions) on the training and test data.
If the classifier is good, then the accuracy score should be close to 1.
```
y_pred_train = lboost.predict(X_train)
y_pred_test = lboost.predict(X_test)
accuracy_train = accuracy_score(y_train, y_pred_train)
accuracy_test = accuracy_score(y_test, y_pred_test)
print('Training accuracy: %.4f' % accuracy_train)
print('Test accuracy: %.4f' % accuracy_test)
```
## Precision and Recall
We can also report our LogitBoost model's [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall).
```
report_train = classification_report(y_train, y_pred_train)
report_test = classification_report(y_test, y_pred_test)
print('Training\n%s' % report_train)
print('Testing\n%s' % report_test)
```
## Visualizing Accuracy During Boosting
```
iterations = np.arange(1, lboost.n_estimators + 1)
staged_accuracy_train = list(lboost.staged_score(X_train, y_train))
staged_accuracy_test = list(lboost.staged_score(X_test, y_test))
plt.figure(figsize=(10, 8))
plt.plot(iterations, staged_accuracy_train, label='Training', marker='.')
plt.plot(iterations, staged_accuracy_test, label='Test', marker='.')
plt.xlabel('Iteration')
plt.ylabel('Accuracy')
plt.title('Ensemble accuracy during each boosting iteration')
plt.legend(loc='best', shadow=True, frameon=True)
plt.tight_layout()
plt.show()
plt.close()
```
## Contribution of Each Estimator in the Ensemble
Like other ensemble models, the LogitBoost model can suffer from *over-specialization*: estimators added to the ensemble in later boosting iterations make relatively small or even negligible contributions toward improving the overall predictions on the training set.
This can be quantified by computing the mean of the absolute prediction of each estimator in the ensemble taken over the training set.
```
contrib_train = lboost.contributions(X_train)
plt.figure(figsize=(10, 8))
plt.plot(iterations, contrib_train, lw=2)
plt.xlabel('Estimator Number')
plt.ylabel('Average Absolute Contribution')
plt.title('Average absolute contribution of the estimators in the ensemble')
plt.show()
plt.close()
```
## Appendix: System Information
This is included for replicability.
```
# sys_info.py is a file in the same directory as these example notebooks:
# doc/source/examples
import sys_info
```
# Regression - the final preparation before battle!
> 🚀 In this practice session we will need: `numpy==1.21.2, pandas==1.3.3, matplotlib==3.4.3, scikit-learn==0.24.2, seaborn==0.11.2`
> 🚀 You can install them with: `!pip install numpy==1.21.2 pandas==1.3.3 matplotlib==3.4.3 scikit-learn==0.24.2 seaborn==0.11.2`
# Contents <a name="content"></a>
* [A lyrical introduction](#intro)
* [Our first real data](#real_data)
* [Single-variable (univariate) analysis](#uni)
* [Multi-variable (multivariate) analysis](#multi)
* [LSTAT - MEDV](#lstat_medv)
* [RM - MEDV](#rm_medv)
* [Preparing the preprocessing code](#Podgotovka_koda_predobrabotki)
* [fit()](#fit())
* [transform()](#transform())
* [Back to programming!](#Back_to_programming!)
* [Conclusion](#Zakljuchenie)
* [Review questions](#Voprosy_dlja_zakreplenija)
* [Useful links](#Poleznye_ssylki)
```
# Visualization settings
# If you are using a dark theme, it is better to make the text white
import matplotlib
import numpy as np
import pandas as pd
import seaborn as sns
import random
TEXT_COLOR = 'black'
matplotlib.rcParams['figure.figsize'] = (15, 10)
matplotlib.rcParams['text.color'] = TEXT_COLOR
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['lines.markersize'] = 15
matplotlib.rcParams['axes.labelcolor'] = TEXT_COLOR
matplotlib.rcParams['xtick.color'] = TEXT_COLOR
matplotlib.rcParams['ytick.color'] = TEXT_COLOR
sns.set_style('darkgrid')
# Fix the random number generator state for reproducibility
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
random.seed(RANDOM_SEED)
```
## A lyrical introduction <a name="intro"></a>
Hello again!
By this point we have learned a great deal and already know quite a lot! Still, there is no such thing as too much knowledge, because we are approaching our first combat mission!
Yes, soon you will have to work through a dataset on your own! True, we will cheat a little: in this practice we will get acquainted with that data, but only partially, so as not to take all the fun away from you!
Earlier we talked a lot about how to train a machine learning model, how to split the data, how to analyze the model, and so on. In data work this part is called "model training and analysis". In this practice we will talk about a completely new part of working with data and learn how to analyze the data itself.
Why do we need it? Well, simply training a model on the data is called a **baseline**. A **baseline** is usually the fastest and simplest solution that produces a result!
For example, suppose we have data on land prices in a city. The task is to predict prices for other plots of land based on this data. The simplest solution is to take the sum of the target values (the prices) and divide it by their count! That gives us the mean price in the data, and we can predict it every single time!
In this simple way we obtained a model that always predicts a constant value. Yes, it has some error, and yes, it does not look anything like the dependency in the data, but that is not the point!
What matters is that, having a baseline, you will know exactly which solution your model has to improve upon! You already have an MAE/RMSE to compare against - nothing but upsides!
> Note that in exactly this case the R2 score will equal 0, so values above zero mean a model is better than simply predicting the mean!
> 🤓 A **baseline solution** is a simple and quickly achievable solution, used later on to assess improvements in predictions while working with the data.
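As a minimal sketch of such a baseline (illustrative only - the Boston data is not loaded yet at this point in the notebook), scikit-learn even ships a ready-made estimator for it:
```
# A constant-mean baseline: always predicts the mean of the training targets.
from sklearn.dummy import DummyRegressor

baseline = DummyRegressor(strategy='mean')
# baseline.fit(X_train, y_train) would make predict() return mean(y_train),
# and its R2 on the training set is exactly 0 by construction.
```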
So where is all this going? So far we have only learned how to build baseline models.
And how do we learn to do better? This is exactly where the missing piece comes in, the one we are about to discuss! And that piece is called **data analysis**!
But why do we need it if the model does everything for us? It learns from the data, we remove overfitting with regularization, and just in case we check the metrics on the test set - what could be better?
Believe me, there is room to grow!
When working with real data there is a simple rule: it is not model complexity that decides who comes out on top, but the quality and quantity of the data!
> ⚠️ Once again: data matters more than models!
That is, it is important to understand what is happening with the model - whether it is overfitting or whether it needs more complexity (underfitting). But good data quality and quantity can give a much bigger boost in accuracy, because there will be less noise and fewer outliers, and the dependencies will be more pronounced.
So how do we make the data better if we already have a dataset and cannot make it any bigger?
The answer is simple - understand the data as well as possible and preprocess it, and to do that, analyze it first!
> ⚠️⚠️ A very important aspect is **understanding the data**. If you understand well what data you have and what each feature means, the chances are high that you will process and clean it better!
So let's **sum up**! Building baseline models on the data we have is a useful skill. But if we want to make our model even cooler and more effective, we need to analyze and prepare the data.
> ⚠️ All the new terms - **processing**, **cleaning**, and other operations on data - fall under the general notion of **data preparation** for the model. A baseline can be built on unprepared data and still solve the task (most likely poorly); data preparation aims to improve data quality so that the model trained on it captures the necessary dependencies without being affected by noise.
> ⚠️ To implement good **data preparation**, you need to carry out **data analysis** in order to understand the data better.
That's all talk, though - time to get down to business!
You will see for yourself why data analysis is sometimes far more interesting than simply training a model!
## Our first real data <a name="real_data"></a>
Get ready - we are about to load our first real data and start working with it. Can you feel the anticipation?
<p align="center"><img src="https://vk.com/sticker/1-2920-512-9" width=300/></p>
Wait, where do we even get this data?
Don't worry, you are not the only ones doing data science these days, so there are plenty of resources with all kinds of data - and we will knock on [Kaggle](https://www.kaggle.com/)'s door! First you need to register there, if you haven't done so already!
Next we need to grab the data we want - we will use [this dataset](https://www.kaggle.com/fedesoriano/the-boston-houseprice-data). After registering you will be able to download the CSV file `boston.csv`.
After that, everything depends on where you are working. If you are doing these practices on Google Colab, you need to upload the data file to Colab itself (there is a menu on the left for that).
If you are working locally, on your own machine (computer), it is enough to put the file next to the notebook!
> ✨ If you did everything correctly, the code below will run without problems. If not, ask your instructor for help!
```
df_src = pd.read_csv('boston.csv')
```
Once the data has loaded successfully, the first important step is to look at the size of the data and at the data itself!
```
df_src.shape
df_src.head(10)
df_src.info()
# And of course, immediately check the overall missing values in the data
df_src.isnull().sum()
```
Look - just a couple of operations, and we can already see some information about the data.
* First, we have 14 variables, at least one of which we plan to predict.
* Second, the entire dataset contains only 506 records (examples). That is not much, but it is enough for plenty of discussion!
There is an important quirk here, though: every column has a name, but they are all abbreviations! That is bad, because it makes the data harder to dissect and can hurt understanding. A quick search of the dataset page and the internet gives at least two sources with the following information about the data:
- https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html#:~:text=The%20Boston%20Housing%20Dataset,the%20area%20of%20Boston%20Mass
- https://scikit-learn.org/stable/datasets/toy_dataset.html#boston-house-prices-dataset
Column information:
- CRIM - per capita crime rate by town
- ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS - proportion of non-retail business acres per town
- CHAS - Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX - nitric oxides concentration (parts per 10 million)
- RM - average number of rooms per dwelling
- AGE - proportion of owner-occupied units built prior to 1940
- DIS - weighted distances to five Boston employment centres
- RAD - index of accessibility to radial highways
- TAX - full-value property-tax rate per $10,000
- PTRATIO - pupil-teacher ratio by town
- B - 1000(Bk - 0.63)^2 where Bk is the proportion of black people by town
- LSTAT - % lower status of the population
- MEDV - Median value of owner-occupied homes in $1000’s
Great - now we have a readable description of every column, and it will come in handy throughout the analysis!
Right away we can formulate the prediction task: we need to predict the **house price (MEDV)** from the 13 available features. It is not certain that we will use all of them, but that is what we have for now.
> Don't be scared - working with 13 variables, when so far we have only worked with one, is not as frightening as it seems. Besides, when we built a 15th-order polynomial regression we had as many as 15 features!
So where does data analysis begin? With the simplest thing - analyzing each variable on its own!
What do we want to see? When analyzing a single variable it is important to understand:
- what the variable represents
- whether it has missing values and how best to fill them
- whether the variable has obvious outliers
- what distribution the variable has and whether it is skewed
- and any other interesting things we happen to notice =)
In this practice we will walk through the most important variables; in a real task, however, you will have to analyze every variable! That is how you build a fuller picture of the data!
> ⚠️ This list is not exhaustive, but it tells us that any oddities and patterns in the data should be identified and analyzed to decide whether the observed effect is useful or whether it is better to remove it, so that the model has an easier time finding the underlying dependencies in the data.
## Single-variable analysis (univariate) <a name="uni"></a>
Let's start with the analysis called univariate. It has this name because we analyze each variable separately. Usually the simplest option is to plot the variable's distribution to understand its character.
As an example we will take the variable RM (the average number of rooms per dwelling).
```
sns.displot(df_src['RM'], kde=True, height=7, aspect=1.5)
```
What do we see on this plot?
The distribution of this variable is close to normal (Gauss-like).
The values lie roughly in the range of [3; 9] rooms.
Here we will put particular emphasis on the "normality" of the distribution, since normality comes in different flavors. We will see this when we analyze another variable.
So for this variable we can conclude the following:
* according to the missing-values table, the variable has no missing values
* the distribution is close to normal
* the values lie within the limits expected for what this variable describes - the number of rooms.
Not hard, right?
For the next variable we will deliberately pick one with an interesting effect:
```
sns.displot(df_src['DIS'], kde=True, height=7, aspect=1.5)
```
This variable is much harder to call normally distributed. Its mass is clearly **shifted to the left**. This is also called a **right tail**, since the right-hand part looks like a tail.
What should we do with variables like this?
Well, there are different approaches. At this point we are already talking about ways of modifying the data, which means we are starting to build a data-processing plan!
Two of the most obvious ways to correct the distribution are:
- correcting it with a logarithm (it fixes the leftward shift)
- using automated correction methods, for example [PowerTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
We will try the first method now; the second one you can figure out on your own when you charge into battle in the next practice!
```
dis_log_col = np.log(df_src['DIS'])
sns.displot(dis_log_col, kde=True, height=7, aspect=1.5)
```
As you can see, the center of the distribution has moved closer to the middle and the distribution itself now looks much more normal - success!
> 🔥 Not only in DS but in any area where you modify data: always check the result and compare it with your expectations! This is important, because without checking the intermediate result a problem may creep in that will cause a lot of headaches later!
> ⚠️ Correcting the distribution is very important for linear models. We are not dwelling on this now, but in the next independent practice be sure to compare the results with and without the correction!
As a result, the conclusion for this variable:
* no missing values
* *the distribution is skewed, so a correction is required*
It is important to write that last point down in our to-do list, because based on these findings we will carry out all of the data processing in one unified pass.
For one more example, let's take another variable and analyze a non-standard distribution:
```
sns.displot(df_src['CHAS'], kde=True, height=7, aspect=1.5)
```
One could say that the distribution is shifted to the left, but notice that the data contains only two values: 0 and 1. Let's check:
```
df_src['CHAS'].unique()
```
Indeed - so what should we do in this case?
Nothing at all: this distribution is bimodal, so we will not try to correct it.
Conclusion for this variable:
* no missing values
* the distribution is bimodal
For now we will do nothing with this variable!
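If you want to see how the two values are balanced, a quick illustrative check is enough:
```
# Counts of each CHAS value (0 = away from the river, 1 = next to it)
df_src['CHAS'].value_counts()
```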
We will leave the remaining variables off-screen so that you have something to work on too!
The outcome of single-variable analysis is a conclusion about the main characteristics of each variable. We will still learn other analysis approaches and plenty of interesting things, but for now it is enough to understand the following:
- does the variable have missing values (we will learn how to fill them later)?
- do we understand the essence of the variable, does it match its description, and do the values make sense?
- does the distribution need to be corrected?
## Multi-variable analysis (multivariate) <a name="multi"></a>
Now we move on to the tastier kind of analysis - relationships between variables!
And we will start by computing **correlations**!
We have already talked a lot about there being dependencies in data, but we have only observed them on plots. As with every method, it would be nice to have one that numerically confirms the presence of a dependency in the data! And I have one for you!
For the example we will take a couple of variables - the full analysis (of all the variables) you will carry out on your own!
```
# For the example we pick the following features
# We deliberately included the target variable to show how to analyze it together with the features
features = ['CRIM', 'LSTAT', 'RM', 'MEDV']
correlation_mtrx = df_src[features].corr()
correlation_mtrx
```
A table is nice, but, as usual, a plot is easier to take in =)
```
sns.heatmap(correlation_mtrx, annot=True, fmt='.2f')
```
Correlation is a way to show numerically whether two variables are related.
Let's try to analyze what we see here.
The variables RM and LSTAT have a close-to-high correlation with the target variable (MEDV) (a correlation is usually considered high at an absolute value of roughly 0.8-0.85 and above). This **may** mean that these variables influence the price more strongly than the CRIM feature does.
Why **may**? Because the correlation coefficient is just a number that might not reflect the full picture, so such observations should only make you think - never draw final conclusions based on correlation alone!
> 🤓 Correlation is always judged by its absolute value: it can be strongly positive as well as strongly negative. That is the case for the Pearson coefficient; there are other coefficients with a range of [0; 1], but that is a whole other story =)
Take a look at what correlation means in a more general overview of different situations:
<p align="center"><img src="https://raw.githubusercontent.com/kail4ek/ml_edu/master/assets/correlations.png" width=600/></p>
> ⚠️ High correlation between the features themselves is the effect known as **feature multicollinearity**. It is bad for the model: when features are strongly interrelated, the model may get confused when assigning weights to the independent variables - they are not called independent for nothing! One common practice is to keep only one feature of each correlated pair in the prediction data and remove the other.
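As a small illustrative sketch (not part of the exercises below), this is one way to flag highly correlated feature pairs; the 0.8 threshold here is an assumption, not a hard rule:
```
# Illustrative: list feature pairs whose absolute correlation exceeds a chosen threshold
import numpy as np

corr_abs = df_src[features].corr().abs()
upper = corr_abs.where(np.triu(np.ones(corr_abs.shape, dtype=bool), k=1))  # keep each pair once
pairs = upper.stack()
print(pairs[pairs > 0.8])  # candidates: keep one feature of each such pair, drop the other
```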
By default, the `.corr()` method computes the Pearson correlation coefficient, which captures linear dependencies well. Dig into the documentation to find out how to compute the Spearman correlation and print that matrix. Check how the coefficients change. How did the LSTAT-MEDV value change? Why?
```
# TODO - print the Spearman correlation matrix and analyze it
```
Great - just like that, we learned to analyze dependencies without inspecting the raw data.
Based on this we can form initial conclusions, but skipping the step of actually looking at the data (visualizing it) would be a very serious mistake. It is always important to visualize and inspect the data as much as possible - that way the analysis is more thorough and you learn more useful information about the data!
So let's use a handy plot for displaying the relationships between the variables:
```
sns.pairplot(df_src[features], diag_kind='auto', height=6)
```
What do we see on this plot?
The main diagonal shows the distribution of each variable itself, because plotting a variable against itself on a 2D scatter plot would just produce a line. The off-diagonal cells contain scatter plots of one variable against another.
Two conclusions can be drawn right away:
- LSTAT-MEDV has a nonlinear relationship (see how the decrease in MEDV slows down as LSTAT grows?)
- The RM-MEDV plot shows points that sit very "strangely". MEDV clearly grows as RM increases, but several points lie along a straight line regardless of RM. They need to be analyzed!
Let's get into the specifics!
### LSTAT - MEDV <a name="lstat_medv"></a>
Let's draw a scatter plot of the two variables:
```
sns.scatterplot(x='LSTAT', y='MEDV', data=df_src)
```
A nonlinear relationship clearly stands out here, so during preprocessing we will create a new feature - the square of LSTAT - motivated by this obvious nonlinearity. Let's add it to the plan!
### RM - MEDV <a name="rm_medv"></a>
Similarly, let's take a closer look at the scatter plot of these variables:
```
sns.scatterplot(x='RM', y='MEDV', data=df_src)
```
Look - we have two types of potential **outliers**.
* Some outliers lie along a straight line at a level of about MEDV ~= 50.
* Others deviate from the general trend in the ranges RM < 4 and (RM > 8 & MEDV < 30).
When handling outliers it is important to look at what the data actually contains, so let's print some examples and examine them:
```
outliers_1 = df_src[df_src['MEDV'] >= 50]
outliers_2 = df_src[(df_src['RM'] < 4) | ((df_src['RM'] > 8) & (df_src['MEDV'] < 30))]
outliers_1
outliers_2
```
Let's look at the outliers at the price level of 50, which sit very unusually in the plane.
The records show no obvious pattern, so it is hard to say right away that these are clear outliers. As a rule, outliers exhibit strong distortions that are also visible in the other variables.
If you look closely, the points that stand out are the ones with RM < 7, and they all have TAX = 666. If you plot the distribution of TAX (you will do this yourself), you will notice that the value 666 sits apart from the bulk of the data, yet there are as many as 130 records with this value, which is hard to call an outlier.
Nevertheless, the same value recurs among the points that sit apart from the main group, which suggests that this is indeed what they have in common.
One hypothesis we can make is **data censoring** - an approach in which amounts and other information that must be hidden are replaced in the data by some constant value.
Therefore, during processing we will remove these records, since censoring distorts the dependencies and may affect the results.
Let's try to clean up the data and see how the point distributions on the plots change:
> ⚠️ Data cleaning is a very selective process, so it is important to double-check everything to avoid mistakes, since it leaves you with less data.
> ⚠️ During cleaning, data records - rows - are removed.
```
outliers_mask_1 = df_src['MEDV'] == 50
outliers_mask_2 = df_src['RM'] < 4
outliers_mask_3 = (df_src['RM'] > 8) & (df_src['MEDV'] < 30)
outliers_mask = outliers_mask_1 | outliers_mask_2 | outliers_mask_3
df_cleaned = df_src.loc[~outliers_mask]
sns.pairplot(df_cleaned[features], diag_kind='auto', height=6)
```
As you can see, the plot looks cleaner and the RM-MEDV relationship has become more pronounced. We can even re-check the correlation:
> ⚠️ If you noticed that on the CRIM-MEDV plot many points sit at CRIM=0 - well done, attention to detail is great! In this case we do not treat them as outlier candidates: there are not that many of them, and the **meaning of the variable** also helps us - having many houses with a low crime level is perfectly normal.
```
sns.heatmap(df_cleaned[features].corr(), annot=True, fmt='.2f')
```
RM-MEDV used to be 0.7 and is now 0.73 - all thanks to cleaning the data!
As you can see, neither univariate nor multivariate analysis is anything super-scientific. As a rule, it is enough to look at the data and run it through a couple of computations (such as correlation) to start forming a clear picture.
Understanding the data also helps with its preparation and cleaning. For example, if the number of rooms (RM) in our data had the value -1, we would know that this is impossible and would also treat such records as outliers (see the sketch below).
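A minimal sketch of such a sanity check, driven purely by the meaning of the variable:
```
# Illustrative sanity check: a negative number of rooms is impossible,
# so any such row would be treated as an outlier
impossible_rm = df_src[df_src['RM'] < 0]
print(len(impossible_rm), "rows with RM < 0")  # expected to be 0 for this dataset
```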
To sum up, we have learned basic multivariate analysis, looked at how to detect outliers, and seen how to assess dependencies numerically - a great result, well done!
## Preparing the preprocessing code <a name="preproc"></a>
Besides the fact that each stage of the analysis tries out its own preparation, cleaning, and other processing, in the end it is important to put together a single piece of preprocessing code that is convenient to use and more or less universal (so it can be applied to new data).
Let's distinguish two stages:
* data cleaning
* preprocessing
Cleaning is done for the training process, to give the model cleaner data without outliers and unnecessary noise.
Preprocessing is done both for training and for handling new data.
> ⚠️ Remember that the ultimate goal of a machine learning model is not just to train and show a high metric, but to make predictions on new data - and to do that well.
That is why it is important to structure the preprocessing properly, so that later you do not have to wrestle with the code when it is time to deploy it in the cloud =)
The class paradigm in Python will help us with this!
But before that, let's quickly wrap up the data cleaning code:
```
# TODO - write a clean_dataset() function that takes a DataFrame as input and returns it cleaned
# NOTE - inside the function, select the outliers using the approach we have already worked out and return the cleaned dataset
# TEST
_test_df = pd.DataFrame({
'MEDV': [10, 20, 50, 50, 30, 10],
'RM': [5, 6, 7, 7, 3, 8],
})
_test_result = clean_dataset(_test_df)
pd.testing.assert_index_equal(pd.Index([0, 1, 5]), _test_result.index)
print("Well done!")
```
Great, the cleaning function is written; we will apply it only to our dataset, so its universality is not that important!
Now let's get to work on the class for our own preprocessing!
We'll start with the architecture - this is what our class will look like:
```
class DataPreprocessing:
def __init__(self):
pass
def fit(self, df):
pass
def transform(self, df):
return df
```
That's the whole class, nothing scary =)
It's just that its methods (function-like members) are not implemented yet, so it is too early to talk about the size of the code =)
Let's discuss what we have written so far and why these methods are needed:
### fit() <a name="fit"></a>
`.fit()` is the method responsible for collecting statistics from the data so that the data can be processed later. We will store the collected statistics in the class attributes.
What is *collecting statistics*?
It's simple. Recall how we scaled data last time with MinMaxScale. Essentially, we need to compute the minimum and maximum of the data and then apply the formula with those constants.
Now remember that we need to scale both the training set and the test set.
Let's consider the bad (*incorrect*) option: we compute the min-max on the training set and get, say, minimum = 10 and maximum = 100. We transform the training set and everything is fine.
Then we take the test set and compute the same thing (getting, say, minimum = 20 and maximum = 105). We transform the test set.
And then what?
Well, the model will train - training is just math - and the predictions will sort of work, but there is a **conceptual** mistake!
Namely: the model learns from data where a feature value of 1.0 corresponds to 100 in the original data (since the training maximum = 100). Then we pass in the test set, which also contains the value 1.0, but on the test set that value means 105.
What does this lead to?
The model will not notice anything and will make a prediction - and that prediction will be off! Even if unintentionally, we are confusing the model by feeding it data that means something quite different from what it was trained on.
So what can we do?
What if we find the minimum and maximum on the training set, remember them, and apply them to both the training and the test set! Then, in all the data (even new data), 1.0 will mean 100 and we will not confuse anyone!
> 🤓 Yes, in our case the test set will contain values greater than 1.0, but that is not a problem! What matters for scaling is bringing everything to the same order of magnitude, and what matters for correct processing is collecting the statistics on the training set (train) and then using them to transform both the training and the test set!
And so we arrive at the main rule of the `fit()-transform()` setup: `fit()` is always applied only to the train set! This function collects statistics, and they must be collected only on the training set - not on the full set (train+test), not on the test set (test), but only on the training set (train)!
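Here is a minimal sketch of this rule using sklearn's `MinMaxScaler` (the numbers are the toy min/max values from the example above):
```
# Illustrative: fit the scaler on train only, then transform both splits with the same statistics
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[10.0], [55.0], [100.0]])   # toy train column: min=10, max=100
test = np.array([[20.0], [105.0]])            # toy test column

scaler = MinMaxScaler()
scaler.fit(train)                  # statistics (min, max) come from train only
print(scaler.transform(train))     # [[0.], [0.5], [1.]]
print(scaler.transform(test))      # [[0.111...], [1.055...]] - a value above 1.0 is fine
```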
### transform() <a name="transform"></a>
This one is simpler. All the processing steps that require statistics collect them in `fit()`, and then we simply apply the whole transformation in `transform()`! That's it! =)
## Back to programming! <a name="prog"></a>
Great, we have figured out what each method is for! Let's try to write our own preprocessing class!
We will implement the following preprocessing:
- Correcting the distribution of the `DIS` feature with a logarithm
- A new feature `DIS_log` must be created and the old one removed
- Generating a polynomial feature for `LSTAT` named `LSTAT_poly_2`
- MinMaxScale - take a look at the [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) class
- Scale all the features
Essentially, this is a small portion of the preprocessing we planned based on the results of the analysis!
> 🔥 The transformer objects from `sklearn` work on the same principle we just discussed. So when working with them you can create the transformer objects right in the constructor of our class, call the transformers' `fit()` inside our `fit()` method, and likewise their `transform()` inside ours.
```
# TODO - implement the preprocessing described above
class DataPreprocessing:
def __init__(self):
pass
def fit(self, df):
        # Copy the input data so we do not modify it
df_copy = df.copy()
        # Note that here you need to generate the polynomial feature and apply the log correction so that MinMaxScaler is also fitted on them
pass
def transform(self, df):
        # transform() must also return a DataFrame!
return df
# TEST
_test_df = pd.DataFrame({'DIS': [2.3, 1.9, 0.4, 2.2], 'LSTAT': [0.1, 0.2, 0.3, 0.4], 'MORE_FEAT': [1, 2, 3, 4]}, index=[4, 6, 10, 12])
preproc = DataPreprocessing()
preproc.fit(_test_df)
_test_result = preproc.transform(_test_df)
_test_expected = pd.DataFrame({
'DIS_log': [1.0, 0.8907756387942631, 0.0, 0.9745873735075969],
'LSTAT': [0.0, 0.333, 0.666, 1.0],
'LSTAT_poly_2': [0.0, 0.2, 0.5333, 1.],
'MORE_FEAT': [0.0, 0.333, 0.666, 1.0]
}, index=_test_df.index)
pd.testing.assert_frame_equal(_test_result, _test_expected, check_like=True, atol=1e-3)
print("Well done!")
```
If you passed the test, you are doing great!!
As a result, this class can readily be used to prepare data for training the model and, even better, to prepare new data as it arrives!
Which means we have not even trained yet, but we are already ready to predict and show how well our model works! Aiming high!
## Conclusion <a name="conclusion"></a>
Having gone through this practice session, you have learned one very important fact (or maybe several).
**Data analysis is necessary and important!**
Of course, we have only seen a couple of techniques, but in the next practice session you will try them in action and see that they really do work!
## Review questions <a name="qa"></a>
And now a few questions to reinforce the material!
1. Why are classes useful in DS?
2. How is data preprocessing useful?
3. Is it risky to remove some records from the original data? When is it acceptable to do so?
4. On which split is the fit method applied?
5. On which split is the transform method applied?
# Useful links <a name='links'></a>
* [Linear Discriminant Analysis (LDA) by StatQuest](https://www.youtube.com/watch?v=azXCzI57Yfc)
* [Basic Statistics for Data Science on Medium](https://medium.com/mlearning-ai/important-statistical-concepts-for-data-scientists-54e09106b75e)
* [Quartiles for Beginners in DS on Medium](https://medium.com/@vinitasilaparasetty/quartiles-for-beginners-in-data-science-2ca5a640b07b)
* [Understanding Value of Correlations in DS on Medium](https://medium.com/fintechexplained/did-you-know-the-importance-of-finding-correlations-in-data-science-1fa3943debc2)
* [Correlation](https://luminousmen.com/post/data-science-correlation)
* [Fundamentals of Statistics](https://towardsdatascience.com/fundamentals-of-statistics-for-data-scientists-and-data-analysts-69d93a05aae7)
```
%pylab inline
import numpy as np
import matplotlib.pyplot as plt
# PyTorch imports
import torch
# This has neural network layer primitives that you can use to build things quickly
import torch.nn as nn
# This has things like activation functions and other useful nonlinearities
from torch.nn import functional as F
# This has various gradient descent algorithms
import torch.optim
# In order to take derivatives, we have to wrap things as a Variable or a Parameter.
# Variables are things like inputs to the model
# Parameters are things like weights
# If you make a child class of nn.Module, it automatically keeps tracks of all parameters declared during
# __init__ for you - really handy!
from torch.autograd import Variable
from torch.nn import Parameter
from IPython import display
import time
```
## Generative Adversarial Networks
Generative adversarial networks (GANs) are a method to learn to produce samples from high-dimensional distributions based only on a set of samples from that distribution. The basic idea is that you have two networks competing with each other in a shared game. One network (the Generator) must create samples from the target distribution, while the other network (the Discriminator) must correctly predict whether a given sample came from the Generator or from the actual data set.
For this game, the Nash equilibrium is for the Generator to produce samples exactly according to the probability density of the data distribution, and for the Discriminator to return the probability density of a given input sample. So a trained GAN in principle gives you both a way to sample from a distribution as well as a way to evaluate the local probability density around a sample.
In practice, the Generator and Discriminator may not converge to the Nash equilibrium, but will often oscillate around it, overspecialize to sub-regions of the distribution ('mode collapse'), etc. As such, there are a large family of algorithms designed to improve the convergence properties of the basic setup.
In this example, we'll just implement a basic GAN to reproduce some 2d distributions (so that the quality of the reconstruction can be easily checked).
```
# Some utility functions
def toFloatVar(x):
return Variable(torch.FloatTensor(x), requires_grad=False)
def toLongVar(x):
return Variable(torch.LongTensor(x), requires_grad=False)
```
## Generator network
First we'll specify the Generator. This network needs to produce a distribution of outcomes, not just an input-output relationship or single output, so we need to provide it a source of noise that it will transform into the target distribution. In essence, the Generator implements a transform from one probability distribution $p(z)$ to a target distribution (in a different set of variables) $q(x)$ - one sample at a time.
So basically the procedure is, we sample a random $z$ from $p(z)$ (which will just be a high-dimensional Gaussian), then apply the network to get $x = G(z)$.
```
class Generator(nn.Module):
def __init__(self, noiseDimension = 16, hiddenDimension = 64, targetDimension = 2):
super(Generator,self).__init__()
self.layer1 = nn.Linear(noiseDimension, hiddenDimension)
self.layer2 = nn.Linear(hiddenDimension, hiddenDimension)
self.layer3 = nn.Linear(hiddenDimension, hiddenDimension)
self.layer4 = nn.Linear(hiddenDimension, targetDimension)
self.noiseDimension = noiseDimension
# Each network will have its own optimizer, so we can train them at cross purposes to each-other
self.optimizer = torch.optim.Adam(self.parameters(), lr = 1e-3)
# For forward, we want to get samples based on specific values of the noise input
def forward(self, x):
z = F.relu(self.layer1(x))
z = F.relu(self.layer2(z))
z = F.relu(self.layer3(z))
z = self.layer4(z)
return z
# For convenience, lets also make a function that generates a batch of random samples
def sample(self, N=100):
z = toFloatVar(np.random.randn(N, self.noiseDimension))
return self.forward(z)
```
## Discriminator Network
The Discriminator network takes a sample either from the true dataset or from fakes made by the Generator, and should return a probability that the sample is real or fake.
```
class Discriminator(nn.Module):
def __init__(self, hiddenDimension = 64, targetDimension = 2):
super(Discriminator,self).__init__()
self.layer1 = nn.Linear(targetDimension, hiddenDimension)
self.layer2 = nn.Linear(hiddenDimension, hiddenDimension)
self.layer3 = nn.Linear(hiddenDimension, hiddenDimension)
self.layer4 = nn.Linear(hiddenDimension, 1)
# Each network will have its own optimizer, so we can train them at cross purposes to each-other
self.optimizer = torch.optim.Adam(self.parameters(), lr = 1e-3)
def forward(self, x):
z = F.relu(self.layer1(x))
z = F.relu(self.layer2(z))
z = F.relu(self.layer3(z))
# Clamp for numerical stability
z = torch.clamp( F.sigmoid(self.layer4(z)), 1e-6, 1-1e-6)
return z
```
## Training
The training procedure involves two steps: training the Discriminator and training the Generator. We'll do these separately for clarity, despite that introducing a bit of redundancy.
Training the discriminator:
- Form a batch which contains 50% samples from true distribution and 50% samples from the generator
- If $D()$ is the output of the discriminator and $x$ the true data, minimize the logistic loss: $L = -\log(D(x)) - \log(1-D(G(z)))$
- Update the discriminator weights only
Training the generator:
- Form a batch containing 100% samples from the generator
- Apply the discriminator to get $D(G(z))$
- Update the generator to maximize the discriminator's loss: $L = \log(1-D(G(z)))$.
```
def trainDiscriminator(data, generator, discriminator):
fakes = generator.sample(N=data.shape[0])
# Zero the discriminator gradient
discriminator.zero_grad()
# Get the fake batch and true batch
p_fakes = discriminator.forward(fakes)
p_true = discriminator.forward(data)
# Compute the loss
loss = torch.mean(-torch.log(p_true)) + torch.mean(-torch.log(1-p_fakes))
# Update the discriminator weights only
loss.backward()
discriminator.optimizer.step()
# Get the loss to follow training progress
return loss.data.numpy().copy()
# Training the generator doesn't require access to the dataset
# Careful though - training to completion on a fixed discriminator leads to mode collapse
# We have to train them together dynamically
def trainGenerator(generator, discriminator):
# Zero generator gradient
generator.zero_grad()
fakes = generator.sample(N=250)
p_fakes = discriminator.forward(fakes)
# Get the generator loss
loss = torch.mean(torch.log(1-p_fakes))
# Update generator weights
loss.backward()
generator.optimizer.step()
# Track generator loss for training
return loss.data.numpy().copy()
```
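One small variant worth knowing about (a sketch only; it is not used in the training loop below): many GAN implementations swap the generator objective $\log(1-D(G(z)))$ for the "non-saturating" form $-\log D(G(z))$, which provides stronger gradients early in training when the Discriminator confidently rejects fakes.
```
# Sketch of the "non-saturating" generator update (an alternative to trainGenerator above)
def trainGeneratorNonSaturating(generator, discriminator):
    generator.zero_grad()
    fakes = generator.sample(N=250)
    p_fakes = discriminator.forward(fakes)
    # Minimize -log(D(G(z))) instead of log(1 - D(G(z)))
    loss = torch.mean(-torch.log(p_fakes))
    loss.backward()
    generator.optimizer.step()
    return loss.data.numpy().copy()
```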
## Data distribution
We'll learn a simple bimodal distribution to test the GAN
```
def generateData(N):
# Generate which mode we're in
x = np.random.randint(2,size=(N,1))
# Generate Gaussian fluctuations around the mode
z = np.random.randn(N,2)*0.5
# Centers of the two modes
centers = np.array([[-1.5,0.5], [0.6, 1.3]])
return centers[x[:,0]] + z
data = generateData(250)
plt.scatter(data[:,0],data[:,1])
plt.show()
```
## Training the GAN
```
generator = Generator()
discriminator = Discriminator()
gen_loss = []
disc_loss = []
for epoch in range(1000):
# It's often better for the discriminator to be slightly better than the generator for stability
# So we'll use two steps here
dl = trainDiscriminator(toFloatVar(data), generator, discriminator)
dl = trainDiscriminator(toFloatVar(data), generator, discriminator)
gl = trainGenerator(generator, discriminator)
gen_loss.append(gl)
disc_loss.append(dl)
if epoch%5 == 0:
samples = generator.sample(N=250)
plt.clf()
plt.subplot(1,2,1)
plt.title("Generated Distribution")
plt.scatter(data[:,0],data[:,1])
plt.scatter(samples[:,0],samples[:,1])
plt.xlim(-4,2.5)
plt.ylim(-1.5,4)
plt.subplot(1,2,2)
plt.title("Training Loss")
plt.plot(disc_loss,label="Discriminator")
plt.plot(gen_loss,label="Generator")
plt.legend()
plt.gcf().set_size_inches((12,6))
display.clear_output(wait=True)
display.display(plt.gcf())
time.sleep(0.01)
```
# Data Cleaning And Feature Engineering
* The data is very dirty, so we have to clean it before analysis.
* It also has many missing values represented by -1 (fixing these is very important).
```
import pandas as pd
data=pd.read_csv('original_data.csv')
data.head()
data.shape
# dropping duplicates
data=data.drop_duplicates(data.columns)
data.shape
```
# Salary column
```
# dropping rows whose salary is -1, i.e. no salary provided
data=data[data['Salary Estimate'] != '-1']
data.shape
data.head(20)
# removing ₹ and commas, and replacing K with 000
data['Salary Estimate']=data['Salary Estimate'].apply(lambda x: x.replace('₹','').replace('K','000').replace(',',''))
data.head()
data.dtypes
data['Salary Estimate'][0:50]
# making another column with values 0/1
# 1 if the salary is hourly, else 0
data['hourly'] = data['Salary Estimate'].apply(lambda x: 1 if '/hr' in x.lower() else 0)
# making another column with values 0/1
# 1 if the salary is monthly, else 0
data['monthly'] = data['Salary Estimate'].apply(lambda x: 1 if '/mo' in x.lower() else 0)
#removing /hr and /mo
data['Salary Estimate']=data['Salary Estimate'].apply(lambda x: x.lower().replace('/hr','').replace('/mo',''))
#if needed in the future
data['min_salary'] = data['Salary Estimate'].apply(lambda x: (x.split('-')[0]))
#check point
data.to_csv('clean.csv',index=False)
df=pd.read_csv('clean.csv')
def avg_salary(x):
lst=x.split('-')
l=len(lst)
if l>1:
return (float(lst[1])+float(lst[0]))/2
else:
return float(lst[0])
df['avg_salary'] = df['Salary Estimate'].apply(avg_salary)
df.head()
df.shape
#hourly salary to annual
df['avg_salary'] = df.apply(lambda x: x.avg_salary*2000 if x.hourly ==1 else x.avg_salary, axis =1)
# monthly salary to annual
df['avg_salary'] = df.apply(lambda x: x.avg_salary*12 if x.monthly ==1 else x.avg_salary, axis =1)
```
# Company Name Column
```
#cleaning company name
df['Company Name']=df['Company Name'].apply(lambda x: x.split('\n')[0])
df.head()
```
# Founded column
```
data[data['Founded']==-1]
#adding new column company_age
#age of company
df['company_age'] = df.Founded.apply(lambda x: x if x <1 else 2020 - x)
```
# job description Column
```
import numpy as np
def clean_des(x):
try:
return x.replace('\n', ' ')
except AttributeError:
return np.NaN
# cleaning job description
# Job Description has NaN values
df['Job Description']=df['Job Description'].apply(clean_des)
df.tail()
```
# Job Title Column
```
df['Job Title'].value_counts()
def title_simplifier(title):
if 'data scientist' in title.lower() or 'data science' in title.lower():
return 'data scientist'
elif 'data engineer' in title.lower():
return 'data engineer'
elif 'analyst' in title.lower():
return 'analyst'
elif 'machine learning' in title.lower():
return 'machine learning engineer'
elif 'manager' in title.lower():
return 'manager'
elif 'director' in title.lower():
return 'director'
else:
return 'other'
# simplifying titles, since there are 282 unique values that mostly describe the same work
df['job_title_simplified'] = df['Job Title'].apply(title_simplifier)
df['job_title_simplified'].value_counts()
#if required for analysis
df['number_competitors'] = df['Competitors'].apply(lambda x: len(x.split(',')) if x != '-1' else 'not provided')
df.head()
```
# Revenue Column
* exploring the revenue column, as it can be an important feature in the analysis
```
# replace -1 values with NaN (missing value)
df = df.replace(to_replace = -1, value = np.nan)
#null value in revenue
#df[df['Revenue']=='Unknown / Non-Applicable']
# making a copy of Revenue so that changes to the new column do not affect the original Revenue column
df['revenue']=df['Revenue']
df.head()
df['revenue']=df['revenue'].apply(lambda x: x.replace('Unknown / Non-Applicable','-1'))
```
### cleaning revenue column.
```
# removing all characters that are not numbers
df['revenue']=df['revenue'].apply(lambda x: x.replace('₹','').replace('+','').replace('INR','').replace('()','').replace('billion',''))
# making another column with values 0/1
# 1 if revenue is in millions, else 0
df['Revenue_million'] = df['revenue'].apply(lambda x: 1 if 'million' in x.lower() else 0)
# replacing million
df['revenue']=df['revenue'].apply(lambda x: x.replace('million',''))
df['revenue']=df['revenue'].apply(lambda x: x.replace('to','-'))
```
### Making another column for the average revenue, since the original revenue values are ranges but we want a single value for analysis.
```
# some values are -1, so splitting on '-' can raise an error; that is why we use a try block
def avg_revenue(x):
lst=x.split('-')
l=len(lst)
if l>1:
try:
return (float(lst[1])+float(lst[0]))/2
except:
return np.nan
else:
return float(lst[0])
df['avg_revenue'] = df['revenue'].apply(avg_revenue)
#### making unit of average revenue as uniform
df['avg_revenue'] = df.apply(lambda x: x.avg_revenue/1000 if x.Revenue_million ==1 else x.avg_revenue, axis =1)
#check percentage of NaN data in every column
round((100*df.isnull().sum())/len(df.index),2)
```
#### Avg_Revenue has about 47% missing values. A common rule of thumb is to drop columns with more than 30% missing values, but Revenue can be an important column for the analysis, so we will fill the missing values using an advanced technique such as KNN imputation.
#### Since we are imputing these values, any analysis around revenue may be somewhat off; we will keep this in mind when we look at the effect of revenue on salary.
```
# import the required libraries for advanced imputation techniques
from sklearn.impute import KNNImputer
pd.set_option('display.max_rows',None)
X=df.drop(['Company Name', 'Competitors', 'Headquarters', 'Industry',
'Job Description', 'Job Title', 'Location','Founded','revenue',
'Salary Estimate', 'Sector', 'Size', 'Type of ownership', 'hourly',
'monthly', 'min_salary','Revenue','company_age','Rating','avg_salary',
'job_title_simplified', 'number_competitors', 'Revenue_million'],axis=1)
X
imputer = KNNImputer(n_neighbors=3)
df['avg_revenue']=imputer.fit_transform(X)
df['avg_revenue']=round(df['avg_revenue'])
df.head()
df.columns
df2=df.drop(columns=[ 'hourly', 'monthly', 'min_salary','number_competitors', 'revenue','Revenue_million'])
df2.head()
df2.to_csv('final_cleaned_data.csv',index=False)
```
<table>
<tr>
<td><img src='SystemLink_icon.png' /></td>
<td ><h1><strong>NI SystemLink Python API</strong></h1></td>
</tr>
</table>
## Test Monitor Service Example
***
The Test Monitor Service API provides functions to create, update, delete and query Test results and Test steps.
***
# Prerequisites
- The **NI SystemLink Server Test Module** needs to be installed in order to run this example
- The **NI SystemLink Client** needs to be installed on a system which has TestStand installed and is registered to the SystemLink server. Configure the SystemLink TestStand plugin reporting to enable publishing test results.
- Before you run this example, TestStand mock test results are needed:
- From **TestStand** open the **'Computer Motherboard Test Sequence.seq'**:
- Go to Help -> Find Examples and follow the instructions to open the Examples workspace (Examples.tsw)
- From the Workspace tab, expand **Demos** and select **Computer Motherboard Test**. Open one of the sequence files, based on your language of choice
- Run the sequence at least 10 times
- Make sure you fail several tests, on different components
# Summary
This notebook uses the Test Monitor Service API to import test and step results into Python. The data is used to do custom analytics.
- Get all the test results that were created from the 'Computer Motherboard Test Sequence.seq'
- Create a Pandas Dataframe with the information we want to process for each test
- Plot pass vs. fail tests
- Visualize test run vs. test duration
- Pareto graph (step type)
***
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from systemlink.testmonclient import TestMonitorClient, testmon_messages
testmonclient = TestMonitorClient(service_name='TestMonitorClient')
# Create pandas dataframe with the relevant test results information, to be used later
def get_dataframe_from_results(results):
return pd.concat([pd.DataFrame({'status': result.status.status_name,
'startedAt': result.started_at,
'updatedAt': result.updated_at,
'programName': result.program_name,
'id': result.id,
'systemId': result.system_id,
'operator': result.operator,
'serialNumber': result.serial_number,
'totalTimeInSeconds': result.total_time_in_seconds,
}, index=[idx]) for idx, result in enumerate(results)])
# Only query test results that belong to the 'Computer Motherboard Test Sequence.seq' test program
query = testmon_messages.ResultQuery(None, None, None, ['Computer Motherboard Test Sequence.seq'], None, None, None, None, None, None, None, None, None)
results, _ = testmonclient.query_results(query)
df_results = get_dataframe_from_results(results)
# Show the first elements of the dataframe, which holds the data we will use for further analysis
df_results[:2]
```
# Bar Plot of Test Results
Group the tests results by pass/fail. Create a bar plot to visualize the test runs by result.
```
# Visualize tests results (pass/fail)
bar_width = 0.4
opacity = 0.4
res = df_results.groupby('status').count()
failed = res['id']['Failed']
passed = res['id']['Passed']
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(7, 7))
plt.bar(1, passed, bar_width, alpha=opacity, color='b', label='Pass')
plt.bar(1.5, failed, bar_width, alpha=opacity, color='r', label='Fail')
plt.xticks([1, 1.5], ['Pass', 'Fail'], size='15')
plt.ylabel('Runs', size='15')
plt.title('Total Runs: ' + str(passed + failed), weight='bold', size='15')
plt.show()
```
# Plot Test Run vs. Duration
Visualize the test runs vs. duration, with red/green color indicating pass/fail.
```
# Visualize test failures vs duration
result_idx = np.arange(df_results.shape[0])
df_time = df_results[['totalTimeInSeconds', 'status']]
color = ['r' if status == 'Failed' else 'g' for status in df_time['status']]
fig = plt.figure(figsize=(10, 7))
plt.scatter(result_idx, df_time['totalTimeInSeconds'], s=150, c=color, alpha=0.5)
plt.title('Test Results - Duration', weight='bold', size='15')
plt.xlabel('Test Runs', size='15')
plt.ylabel('Time (seconds)', size='15')
plt.show()
```
# Pareto distribution
Get a Pandas Dataframe with all the step failures. Visualize the failures in a Pareto graph, which helps visualize the failure distribution, by step type.
```
# Pareto distribution of step failures visualization
# Create pandas dataframe with the step results information that we want for further processing
def get_failed_steps_dataframe(steps):
failed_steps = [step for step in steps if step.status.status_name == 'Failed' and step.step_type != 'SequenceCall']
return pd.concat([pd.DataFrame({'name': step.name,
'id': step.step_id,
'totalTimeInSeconds': step.total_time_in_seconds,
}, index=[idx]) for idx, step in enumerate(failed_steps)])
results_ids = [result.id for result in results]
step_query = testmon_messages.StepQuery(None, None, None, results_ids, None, None, None, None, None, None)
steps, _ = testmonclient.query_steps(step_query)
steps_df = get_failed_steps_dataframe(steps)
res = steps_df.groupby('name').count()
res = res.sort_values('id', ascending=False)
fig, ax1 = plt.subplots()
fig.set_size_inches(15, 7)
plt.title('Failures by Test', weight='bold', size='15')
plt.ylabel('Number of Runs', size='15')
plt.xlabel('Test Type', size='15')
ax1.get_xaxis().set_ticks([])
# Create the Pareto chart bars
previous_val = 0
cumulative = []
for idx, row in res.iterrows():
val = row['id']
cumulative.append(val + previous_val)
previous_val = val + previous_val
ax1.bar(idx, val, bar_width, alpha=opacity, label=idx)
# Add a legend
labels = list(steps_df['name'])
plt.legend(labels, loc='upper right')
# Cumulative line, in percentage
cumulative_percentage = cumulative/cumulative[-1] * 100
ax2 = ax1.twinx()
ax2.set_ylim([0, 100])
ax2.plot(cumulative_percentage)
plt.ylabel('Failure Percentage', size='15')
plt.show()
```
<a href="https://colab.research.google.com/github/satyajitghana/PadhAI-Course/blob/master/13_OverfittingAndRegularization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error, log_loss
from tqdm import tqdm_notebook
import seaborn as sns
sns.set()
from sklearn.preprocessing import OneHotEncoder
from sklearn.datasets import load_iris
from numpy.linalg import norm
my_cmap = 'inferno'
np.random.seed(0)
```
## Generate data
```
iris=load_iris()
data = iris.data[:, :2] # take only the first two features
labels = iris.target
plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap)
plt.show()
print("Data shape",data.shape)
print("Labels shape",labels.shape)
```
## Multi class classification
```
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0,test_size=0.2)
print(X_train.shape, X_val.shape, labels.shape)
enc = OneHotEncoder()
y_OH_train = enc.fit_transform(np.expand_dims(Y_train,1)).toarray()
y_OH_val = enc.fit_transform(np.expand_dims(Y_val,1)).toarray()
print(y_OH_train.shape, y_OH_val.shape)
```
## FF Class
```
class FFNetwork:
def __init__(self, num_hidden=2, init_method = 'xavier', activation_function = 'sigmoid', leaky_slope = 0.1):
self.params={}
self.num_layers=2
self.layer_sizes = [2, num_hidden, 3]
self.activation_function = activation_function
self.leaky_slope = leaky_slope
np.random.seed(0)
if init_method == "random":
for i in range(1,self.num_layers+1):
self.params["W"+str(i)] = np.random.randn(self.layer_sizes[i-1],self.layer_sizes[i])
self.params["B"+str(i)] = np.random.randn(1,self.layer_sizes[i])
elif init_method == "he":
for i in range(1,self.num_layers+1):
self.params["W"+str(i)] = np.random.randn(self.layer_sizes[i-1],self.layer_sizes[i])*np.sqrt(2/self.layer_sizes[i-1])
self.params["B"+str(i)] = np.random.randn(1,self.layer_sizes[i])
elif init_method == "xavier":
for i in range(1,self.num_layers+1):
self.params["W"+str(i)]=np.random.randn(self.layer_sizes[i-1],self.layer_sizes[i])*np.sqrt(1/self.layer_sizes[i-1])
self.params["B"+str(i)]=np.random.randn(1,self.layer_sizes[i])
self.gradients={}
self.update_params={}
self.prev_update_params={}
for i in range(1,self.num_layers+1):
self.update_params["v_w"+str(i)]=0
self.update_params["v_b"+str(i)]=0
self.update_params["m_b"+str(i)]=0
self.update_params["m_w"+str(i)]=0
self.prev_update_params["v_w"+str(i)]=0
self.prev_update_params["v_b"+str(i)]=0
def forward_activation(self, X):
if self.activation_function == "sigmoid":
return 1.0/(1.0 + np.exp(-X))
elif self.activation_function == "tanh":
return np.tanh(X)
elif self.activation_function == "relu":
return np.maximum(0,X)
elif self.activation_function == "leaky_relu":
return np.maximum(self.leaky_slope*X,X)
def grad_activation(self, X):
if self.activation_function == "sigmoid":
return X*(1-X)
elif self.activation_function == "tanh":
return (1-np.square(X))
elif self.activation_function == "relu":
return 1.0*(X>0)
elif self.activation_function == "leaky_relu":
d=np.zeros_like(X)
d[X<=0]=self.leaky_slope
d[X>0]=1
return d
def get_accuracy(self):
Y_pred_train = model.predict(X_train)
Y_pred_train = np.argmax(Y_pred_train,1)
Y_pred_val = model.predict(X_val)
Y_pred_val = np.argmax(Y_pred_val,1)
accuracy_train = accuracy_score(Y_pred_train, Y_train)
accuracy_val = accuracy_score(Y_pred_val, Y_val)
return accuracy_train,accuracy_val
def softmax(self, X):
exps = np.exp(X)
return exps / np.sum(exps, axis=1).reshape(-1,1)
def forward_pass(self, X, params = None):
if params is None:
params = self.params
self.A1 = np.matmul(X, params["W1"]) + params["B1"] # (N, 2) * (2, 2) -> (N, 2)
self.H1 = self.forward_activation(self.A1) # (N, 2)
self.A2 = np.matmul(self.H1, params["W2"]) + params["B2"] # (N, 2) * (2, 2) -> (N, 2)
self.H2 = self.softmax(self.A2) # (N, 2)
return self.H2
def grad(self, X, Y, params = None):
if params is None:
params = self.params
self.forward_pass(X, params)
m = X.shape[0]
self.gradients["dA2"] = self.H2 - Y # (N, 4) - (N, 4) -> (N, 4)
self.gradients["dW2"] = np.matmul(self.H1.T, self.gradients["dA2"]) # (2, N) * (N, 4) -> (2, 4)
self.gradients["dB2"] = np.sum(self.gradients["dA2"], axis=0).reshape(1, -1) # (N, 4) -> (1, 4)
self.gradients["dH1"] = np.matmul(self.gradients["dA2"], params["W2"].T) # (N, 4) * (4, 2) -> (N, 2)
self.gradients["dA1"] = np.multiply(self.gradients["dH1"], self.grad_activation(self.H1)) # (N, 2) .* (N, 2) -> (N, 2)
self.gradients["dW1"] = np.matmul(X.T, self.gradients["dA1"]) # (2, N) * (N, 2) -> (2, 2)
self.gradients["dB1"] = np.sum(self.gradients["dA1"], axis=0).reshape(1, -1) # (N, 2) -> (1, 2)
def fit(self, X, Y, epochs=1, algo= "GD",l2_norm=False, lambda_val=0.8, display_loss=False, eta=1):
train_accuracies={}
val_accuracies={}
if display_loss:
loss = []
weight_mag = []
for num_epoch in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
m = X.shape[0]
self.grad(X, Y)
for i in range(1,self.num_layers+1):
if l2_norm:
self.params["W"+str(i)] -= (eta * lambda_val)/m * self.params["W"+str(i)] + eta * (self.gradients["dW"+str(i)]/m)
else:
self.params["W"+str(i)] -= eta * (self.gradients["dW"+str(i)]/m)
self.params["B"+str(i)] -= eta * (self.gradients["dB"+str(i)]/m)
train_accuracy,val_accuracy=self.get_accuracy()
train_accuracies[num_epoch]=train_accuracy
val_accuracies[num_epoch]=val_accuracy
if display_loss:
Y_pred = self.predict(X)
loss.append(log_loss(np.argmax(Y, axis=1), Y_pred))
weight_mag.append((norm(self.params["W1"]) + norm(self.params["W2"]) + norm(self.params["B1"]) + norm(self.params["B2"]))/18)
plt.plot(list(train_accuracies.values()),label="Train accuracy")
plt.plot(list(val_accuracies.values()),label="Validation accuracy")
plt.plot(np.ones((epochs, 1))*0.9)
plt.plot(np.ones((epochs, 1))*0.33)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
if display_loss:
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('epochs')
ax1.set_ylabel('Log Loss', color=color)
ax1.plot(loss, '-o', color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('Weight Magnitude', color=color) # we already handled the x-label with ax1
ax2.plot(weight_mag, '-*', color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout()
plt.show()
def predict(self, X):
Y_pred = self.forward_pass(X)
return np.array(Y_pred).squeeze()
def print_accuracy():
Y_pred_train = model.predict(X_train)
Y_pred_train = np.argmax(Y_pred_train,1)
Y_pred_val = model.predict(X_val)
Y_pred_val = np.argmax(Y_pred_val,1)
accuracy_train = accuracy_score(Y_pred_train, Y_train)
accuracy_val = accuracy_score(Y_pred_val, Y_val)
print("Training accuracy", round(accuracy_train, 4))
print("Validation accuracy", round(accuracy_val, 4))
if False:
plt.scatter(X_train[:,0], X_train[:,1], c=Y_pred_train, cmap=my_cmap, s=15*(np.abs(np.sign(Y_pred_train-Y_train))+.1))
plt.show()
model = FFNetwork(num_hidden=1)
model.fit(X_train, y_OH_train, epochs=100, eta=0.1)
print_accuracy()
model = FFNetwork(num_hidden=2)
model.fit(X_train, y_OH_train, epochs=100, eta=1, display_loss=False)
print_accuracy()
model = FFNetwork(num_hidden=4)
model.fit(X_train, y_OH_train, epochs=400, eta=0.25, display_loss=False)
print_accuracy()
model = FFNetwork(num_hidden=8)
model.fit(X_train, y_OH_train, epochs=500, eta=0.2, display_loss=False)
print_accuracy()
model = FFNetwork(num_hidden=32)
model.fit(X_train, y_OH_train, epochs=500, eta=0.2, display_loss=False)
print_accuracy()
model = FFNetwork(num_hidden=64)
model.fit(X_train, y_OH_train, epochs=2000, eta=0.1, l2_norm=False)
print_accuracy()
```
## Add L2 Regularization
```
model = FFNetwork(num_hidden=64)
model.fit(X_train, y_OH_train, epochs=2000, eta=0.1, l2_norm=True, lambda_val=0.1, display_loss=True)
print_accuracy()
model = FFNetwork(num_hidden=64)
model.fit(X_train, y_OH_train, epochs=2000, eta=0.1, l2_norm=True, lambda_val=1, display_loss=True)
print_accuracy()
model = FFNetwork(num_hidden=64)
model.fit(X_train, y_OH_train, epochs=2000, eta=0.1, l2_norm=True, lambda_val=5, display_loss=True)
print_accuracy()
model = FFNetwork(num_hidden=64)
model.fit(X_train, y_OH_train, epochs=2000, eta=0.1, l2_norm=True, lambda_val=10, display_loss=True)
print_accuracy()
```
## Add noise to training data set
```
model = FFNetwork(num_hidden=64)
model.fit(X_train, y_OH_train, epochs=2000, eta=0.1, l2_norm=False)
print_accuracy()
for noise_fraction in [0.01, 0.05, 0.1, 0.15, 0.18, 0.2]:
print(noise_fraction)
X_train_noisy = X_train * (1 - noise_fraction*np.random.randn(X_train.shape[0], X_train.shape[1]))
model = FFNetwork(num_hidden=64)
model.fit(X_train_noisy, y_OH_train, epochs=2000, eta=0.1, l2_norm=False)
print_accuracy()
```
## Early stopping
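The cells below emulate early stopping simply by training for fewer epochs. A more typical pattern monitors validation accuracy and stops once it stops improving; the sketch below is illustrative only, with `train_one_epoch` and `evaluate` as assumed hooks rather than methods of the `FFNetwork` class above.
```
import copy

# Illustrative early-stopping loop with patience (train_one_epoch/evaluate are assumed hooks)
def fit_with_early_stopping(model, train_one_epoch, evaluate, max_epochs=2000, patience=50):
    best_val, best_params, since_improvement = -1.0, None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)              # one pass over the training data
        val_acc = evaluate(model)           # accuracy on the validation split
        if val_acc > best_val:
            best_val, best_params = val_acc, copy.deepcopy(model.params)
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= patience:   # stop once validation stops improving
            break
    model.params = best_params              # roll back to the best weights seen
    return best_val
```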
```
model = FFNetwork(num_hidden=32)
model.fit(X_train, y_OH_train, epochs=500, eta=0.2, display_loss=True)
print_accuracy()
model = FFNetwork(num_hidden=32)
model.fit(X_train, y_OH_train, epochs=100, eta=0.2, display_loss=True)
print_accuracy()
```
# Marginal Gaussianization
* Author: J. Emmanuel Johnson
* Email: [email protected]
In this demonstration, we will show how we can do the marginal Gaussianization on a 2D dataset using the Histogram transformation and Inverse CDF Gaussian distribution.
```
import os, sys
cwd = os.getcwd()
# sys.path.insert(0, f"{cwd}/../")
sys.path.insert(0, "/home/emmanuel/code/rbig")
from rbig.data import ToyData
from rbig.transform.gaussianization import MarginalGaussianization
from rbig.transform.gaussianization import HistogramGaussianization  # used in the Marginal Histogram Transformation section below
from rbig.transform import InverseGaussCDF
import numpy as np
from scipy import stats
# Plot Functions
import matplotlib.pyplot as plt
import seaborn as sns
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
## Data
For this example, we are looking at a 2D dataset.
```
def plot_2d_joint(data, color='blue', title='Original Data'):
fig = plt.figure(figsize=(5, 5))
g = sns.jointplot(x=data[:, 0], y=data[:, 1], kind='hex', color=color)
plt.xlabel('X')
plt.ylabel('Y')
plt.suptitle(title)
plt.tight_layout()
plt.show()
def plot_prob(data, probs, title='Probabilities'):
fig, ax = plt.subplots()
h = ax.scatter(data[:, 0], data[:, 1], s=1, c=probs, cmap='Reds')
ax.set_xlabel('X')
ax.set_ylabel('Y')
cbar = plt.colorbar(h, )
ax.set_title(title)
plt.show()
seed = 123
rng = np.random.RandomState(seed=seed)
dataset = 'rbig'
n_samples = 10_000
n_features = 2
noise = 0.25
random_state=1
clusters = 2
data = ToyData(
dataset=dataset,
n_samples=n_samples,
n_features=n_features,
noise=noise,
random_state=random_state,
clusters=clusters,
).generate_samples()
X = data[:, 0]
Y = data[:, 1]
plot_2d_joint(data, title='Original Data')
```
## Uniformization Transformation
```
from rbig.transform.uniformization import HistogramUniformization, KDEUniformization, MarginalUniformization
# from rbig.density.histogram import ScipyHistogram, QuantileHistogram
# from rbig.den
```
#### Initialize Uniformization Algorithm
```
# INITIALIZE UNIFORMIZATION ALGORITHM
#===
# uniform_clf = HistogramUniformization(bins=100, support_extension=10, alpha=1e-4, n_quantiles=None)
uniform_clf = KDEUniformization(n_quantiles=50, method='fft')
# density_clf = KDEScipy(n_quantiles=50, bw_method='scott', support_extension=10)
# density_clf = KDESklearn(n_quantiles=100, support_extension=10)
```
#### Add it to Marginal Transformation Algorithm
```
mg_uniformizer = MarginalUniformization(uniform_clf)
mg_uniformizer.fit(data)
X_trans = mg_uniformizer.transform(data)
plot_2d_joint(X_trans, title='Transformed Data')
data_approx = mg_uniformizer.inverse_transform(X_trans)
plot_2d_joint(data_approx, title='Inverse Transformed Data')
X_ldj = mg_uniformizer.log_abs_det_jacobian(data)
plot_2d_joint(X_ldj, title='Log Abs Det Jacobian')
plot_2d_joint(np.exp(X_ldj), title='Abs Det Jacobian')
plot_prob(data, X_ldj.sum(-1), title='Log Probabilities')
plot_prob(data, np.exp(X_ldj.sum(-1)), title='Probabilities')
```
## Marginal Gaussinization
```
from rbig.transform.uniformization import HistogramUniformization, KDEUniformization, MarginalUniformization
from rbig.transform.gaussianization import MarginalGaussianization
uniform_clf = HistogramUniformization(bins=100, support_extension=10, alpha=1e-4, n_quantiles=None)
uniform_clf = KDEUniformization(n_quantiles=50, method='fft', )
mg_gaussianizer = MarginalGaussianization(uniform_clf)
mg_gaussianizer.fit(data)
X_trans = mg_gaussianizer.transform(data)
plot_2d_joint(X_trans, title='Transformed Data')
data_approx = mg_gaussianizer.inverse_transform(X_trans)
plot_2d_joint(data_approx, title='Inverse Transformed Data')
X_ldj = mg_gaussianizer.log_abs_det_jacobian(data)
plot_2d_joint(X_ldj, title='Log Abs Det Jacobian')
plot_2d_joint(np.exp(X_ldj), title='Abs Det Jacobian')
X_lprob = mg_gaussianizer.score_samples(data)
plot_prob(data, X_lprob, title='Log Probabilities')
plot_prob(data, np.exp(X_lprob), title='Probabilities')
```
### Negative Log Likelihood
```
X_nll = mg_gaussianizer.score(data,)
print(f"Negative Log-Likelihood Score: {X_nll:.4f}")
```
## Marginal Histogram Transformation
So, for this transformation, we are going to transform our data from the current distribution to a marginally Gaussian distribution and then perform a rotation. In theory, if we do enough of these, we will eventually converge to a Gaussian distribution.
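As a rough, illustrative-only sketch of that iterate-and-rotate idea (written with scipy/sklearn rank-based marginals and PCA rotations, not with the `rbig` classes used in this notebook):
```
# Illustrative toy version of the Gaussianization iteration (not the rbig implementation)
from scipy import stats
from sklearn.decomposition import PCA

def toy_rbig(X, n_layers=10):
    Z = np.asarray(X, dtype=float).copy()
    for _ in range(n_layers):
        # marginal Gaussianization: empirical CDF per dimension, then inverse Gaussian CDF
        for d in range(Z.shape[1]):
            ranks = stats.rankdata(Z[:, d]) / (Z.shape[0] + 1)
            Z[:, d] = stats.norm.ppf(ranks)
        # rotation step
        Z = PCA().fit_transform(Z)
    return Z
```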
```
# parameters
nbins = 1_000 # number of bins to do the histogram transform
alpha = 1e-05 # adds some regularization (noise)
support_extension = 10
# initialize the transformer
mg_transformer = HistogramGaussianization(
nbins=nbins,
alpha=alpha
)
# fit the transformer to the data
mg_transformer.fit(data);
```
### 1. Forward Transformation
For this transformation, we will be applying the following:
$$\Psi(\mathbf{x}) = \Phi^{-1}(\mathbf{x})$$
where $\Phi^{-1}(\cdot)$ is the inverse CDF of the Gaussian distribution.
```
data_trans = mg_transformer.transform(data)
plot_2d_joint(data_trans, title='Transformed Data')
```
So clearly we can see that the transformation works. Both of the marginals are Gaussian distributed.
### 2. Inverse Transformation
For this step, we will apply the inverse transformation:
$$\Psi^{-1}(\mathbf{x}) = \Phi \left( \mathbf{x} \right)$$
where $\Phi(\cdot)$ is the CDF of the Gaussian distribution.
```
data_approx = mg_transformer.inverse_transform(data_trans)
# check that it is more or less equal
np.testing.assert_array_almost_equal(data_approx, data, decimal=5)
```
We see that this transformation is very close to the original. In fact, it agrees to about 5 decimal places (roughly 1e-5). The errors will mostly stem from the boundaries.
```
# Plot results
plot_2d_joint(data_approx, title='Inverse Transformed Data')
```
## Log Absolute Determinant Jacobian
Using the derivative of inverse-functions theorem, we can calculate the derivative like so:
$$\nabla_\mathbf{x} \Phi^{-1}(\mathbf{x}) = \frac{1}{\phi (\Phi^{-1} (x)) }$$
where $\phi(\cdot)$ is the PDF of the Gaussian distribution. Taking the log of these terms gives us:
$$ \log \nabla_\mathbf{x} \Phi^{-1}(\mathbf{x}) = - \log \phi (\Phi^{-1} (x))$$
```
X_slogdet = mg_transformer.log_abs_det_jacobian(data)
print(X_slogdet.min(), X_slogdet.max())
print(np.exp(X_slogdet).min(), np.exp(X_slogdet).max())
# plot the gradients
plot_2d_joint(np.exp(X_slogdet), title='Jacobian Data')
```
## Log Probability
$$\log p_\theta(\mathbf{x}) = \log p_\theta \left( \mathbf{z} \right) + \log \left| \nabla_\mathbf{x} \mathbf{z} \right|$$
where $\mathbf{z} = \Psi(\mathbf{x})$
```
# score samples
log_prob = mg_transformer.score_samples(data)
plot_prob(data, log_prob, title='Log Probabilities')
```
## Probability
This is the same as above but without the log scale:
$$p_\theta(\mathbf{x}) = p_\theta \left( \mathbf{z} \right) \left| \nabla_\mathbf{x} \mathbf{z} \right|$$
where $\mathbf{z} = \Psi(\mathbf{x})$
```
plot_prob(data, np.exp(log_prob), title='Probabilities')
```
## Negative Log-Likelihood
We take the negative of the expected value (mean) of the log probabilities.
$$\text{nll} = -\frac{1}{N} \sum_{n=1}^{N} \log p_\theta(\mathbf{x}_n)$$
```
score = mg_transformer.score(data)
print(f"Negative Log-Likelihood Score: {score:.4f}")
```
```
!pip install -q --upgrade jax jaxlib
from __future__ import print_function, division
import jax.numpy as np
from jax import grad, jit, vmap
from jax import random
key = random.PRNGKey(0)
```
# The Autodiff Cookbook
*alexbw@, mattjj@*
JAX has a pretty general automatic differentiation system. In this notebook, we'll go through a whole bunch of neat autodiff ideas that you can cherry pick for your own work, starting with the basics.
## Gradients
### Starting with `grad`
You can differentiate a function with `grad`:
```
grad_tanh = grad(np.tanh)
print(grad_tanh(2.0))
```
`grad` takes a function and returns a function. If you have a Python function `f` that evaluates the mathematical function $f$, then `grad(f)` is a Python function that evaluates the mathematical function $\nabla f$. That means `grad(f)(x)` represents the value $\nabla f(x)$.
Since `grad` operates on functions, you can apply it to its own output to differentiate as many times as you like:
```
print(grad(grad(np.tanh))(2.0))
print(grad(grad(grad(np.tanh)))(2.0))
```
Let's look at computing gradients with `grad` in a linear logistic regression model. First, the setup:
```
def sigmoid(x):
return 0.5 * (np.tanh(x / 2) + 1)
# Outputs probability of a label being true.
def predict(W, b, inputs):
return sigmoid(np.dot(inputs, W) + b)
# Build a toy dataset.
inputs = np.array([[0.52, 1.12, 0.77],
[0.88, -1.08, 0.15],
[0.52, 0.06, -1.30],
[0.74, -2.49, 1.39]])
targets = np.array([True, True, False, True])
# Training loss is the negative log-likelihood of the training examples.
def loss(W, b):
preds = predict(W, b, inputs)
label_probs = preds * targets + (1 - preds) * (1 - targets)
return -np.sum(np.log(label_probs))
# Initialize random model coefficients
key, W_key, b_key = random.split(key, 3)
W = random.normal(W_key, (3,))
b = random.normal(b_key, ())
```
Use the `grad` function with its `argnums` argument to differentiate a function with respect to positional arguments.
```
# Differentiate `loss` with respect to the first positional argument:
W_grad = grad(loss, argnums=0)(W, b)
print('W_grad', W_grad)
# Since argnums=0 is the default, this does the same thing:
W_grad = grad(loss)(W, b)
print('W_grad', W_grad)
# But we can choose different values too, and drop the keyword:
b_grad = grad(loss, 1)(W, b)
print('b_grad', b_grad)
# Including tuple values
W_grad, b_grad = grad(loss, (0, 1))(W, b)
print('W_grad', W_grad)
print('b_grad', b_grad)
```
This `grad` API has a direct correspondence to the excellent notation in Spivak's classic *Calculus on Manifolds* (1965), also used in Sussman and Wisdom's [*Structure and Interpretation of Classical Mechanics*](http://mitpress.mit.edu/sites/default/files/titles/content/sicm_edition_2/book.html) (2015) and their [*Functional Differential Geometry*](https://mitpress.mit.edu/books/functional-differential-geometry) (2013). Both books are open-access. See in particular the "Prologue" section of *Functional Differential Geometry* for a defense of this notation.
Essentially, when using the `argnums` argument, if `f` is a Python function for evaluating the mathematical function $f$, then the Python expression `grad(f, i)` evaluates to a Python function for evaluating $\partial_i f$.
### Differentiating with respect to nested lists, tuples, and dicts
Differentiating with respect to standard Python containers just works, so use tuples, lists, and dicts (and arbitrary nesting) however you like.
```
def loss2(params_dict):
preds = predict(params_dict['W'], params_dict['b'], inputs)
label_probs = preds * targets + (1 - preds) * (1 - targets)
return -np.sum(np.log(label_probs))
print(grad(loss2)({'W': W, 'b': b}))
```
You can [register your own container types](https://github.com/google/jax/issues/446#issuecomment-467105048) to work with not just `grad` but all the JAX transformations (`jit`, `vmap`, etc.).
### Evaluate a function and its gradient using `value_and_grad`
Another convenient function is `value_and_grad` for efficiently computing both a function's value as well as its gradient's value:
```
from jax import value_and_grad
loss_value, Wb_grad = value_and_grad(loss, (0, 1))(W, b)
print('loss value', loss_value)
print('loss value', loss(W, b))
```
### Checking against numerical differences
A great thing about derivatives is that they're straightforward to check with finite differences:
```
# Set a step size for finite differences calculations
eps = 1e-4
# Check b_grad with scalar finite differences
b_grad_numerical = (loss(W, b + eps / 2.) - loss(W, b - eps / 2.)) / eps
print('b_grad_numerical', b_grad_numerical)
print('b_grad_autodiff', grad(loss, 1)(W, b))
# Check W_grad with finite differences in a random direction
key, subkey = random.split(key)
vec = random.normal(subkey, W.shape)
unitvec = vec / np.sqrt(np.vdot(vec, vec))
W_grad_numerical = (loss(W + eps / 2. * unitvec, b) - loss(W - eps / 2. * unitvec, b)) / eps
print('W_dirderiv_numerical', W_grad_numerical)
print('W_dirderiv_autodiff', np.vdot(grad(loss)(W, b), unitvec))
```
JAX provides a simple convenience function that does essentially the same thing, but checks up to any order of differentiation that you like:
```
from jax.test_util import check_grads
check_grads(loss, (W, b), order=2) # check up to 2nd order derivatives
```
### Hessian-vector products with `grad`-of-`grad`
One thing we can do with higher-order `grad` is build a Hessian-vector product function. (Later on we'll write an even more efficient implementation that mixes both forward- and reverse-mode, but this one will use pure reverse-mode.)
A Hessian-vector product function can be useful in a [truncated Newton Conjugate-Gradient algorithm](https://en.wikipedia.org/wiki/Truncated_Newton_method) for minimizing smooth convex functions, or for studying the curvature of neural network training objectives (e.g. [1](https://arxiv.org/abs/1406.2572), [2](https://arxiv.org/abs/1811.07062), [3](https://arxiv.org/abs/1706.04454), [4](https://arxiv.org/abs/1802.03451)).
For a scalar-valued function $f : \mathbb{R}^n \to \mathbb{R}$, the Hessian at a point $x \in \mathbb{R}^n$ is written as $\partial^2 f(x)$. A Hessian-vector product function is then able to evaluate
$\qquad v \mapsto \partial^2 f(x) \cdot v$
for any $v \in \mathbb{R}^n$.
The trick is not to instantiate the full Hessian matrix: if $n$ is large, perhaps in the millions or billions in the context of neural networks, then that might be impossible to store.
Luckily, `grad` already gives us a way to write an efficient Hessian-vector product function. We just have to use the identity
$\qquad \partial^2 f (x) v = \partial [x \mapsto \partial f(x) \cdot v] = \partial g(x)$,
where $g(x) = \partial f(x) \cdot v$ is a new scalar-valued function that dots the gradient of $f$ at $x$ with the vector $v$. Notice that we're only ever differentiating scalar-valued functions of vector-valued arguments, which is exactly where we know `grad` is efficient.
In JAX code, we can just write this:
```
def hvp(f, x, v):
    return grad(lambda x: np.vdot(grad(f)(x), v))(x)
```
This example shows that you can freely use lexical closure, and JAX will never get perturbed or confused.
We'll check this implementation a few cells down, once we see how to compute dense Hessian matrices. We'll also write an even better version that uses both forward-mode and reverse-mode.
## Jacobians and Hessians using `jacfwd` and `jacrev`
You can compute full Jacobian matrices using the `jacfwd` and `jacrev` functions:
```
from jax import jacfwd, jacrev
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
J = jacfwd(f)(W)
print("jacfwd result, with shape", J.shape)
print(J)
J = jacrev(f)(W)
print("jacrev result, with shape", J.shape)
print(J)
```
These two functions compute the same values (up to machine numerics), but differ in their implementation: `jacfwd` uses forward-mode automatic differentiation, which is more efficient for "tall" Jacobian matrices, while `jacrev` uses reverse-mode, which is more efficient for "wide" Jacobian matrices. For matrices that are near-square, `jacfwd` probably has an edge over `jacrev`.
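As a rough illustration of that trade-off, here is a sketch with made-up "tall" and "wide" toy functions (they are hypothetical examples, not functions defined elsewhere in this notebook), comparing the Jacobian shapes each mode is suited for:
```
# A "tall" function: few inputs, many outputs, where forward-mode (jacfwd) is typically favored.
tall_fun = lambda x: np.sin(np.outer(np.arange(1000.), x)).ravel()
# A "wide" function: many inputs, few outputs, where reverse-mode (jacrev) is typically favored.
wide_fun = lambda x: np.stack([np.sum(np.sin(x)), np.sum(np.cos(x))])

print(jacfwd(tall_fun)(np.ones(3)).shape)     # (3000, 3): a tall Jacobian
print(jacrev(wide_fun)(np.ones(1000)).shape)  # (2, 1000): a wide Jacobian
```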
You can also use `jacfwd` and `jacrev` with container types:
```
def predict_dict(params, inputs):
return predict(params['W'], params['b'], inputs)
J_dict = jacrev(predict_dict)({'W': W, 'b': b}, inputs)
for k, v in J_dict.items():
print("Jacobian from {} to logits is".format(k))
print(v)
```
For more details on forward- and reverse-mode, as well as how to implement `jacfwd` and `jacrev` as efficiently as possible, read on!
Using a composition of two of these functions gives us a way to compute dense Hessian matrices:
```
def hessian(f):
return jacfwd(jacrev(f))
H = hessian(f)(W)
print("hessian, with shape", H.shape)
print(H)
```
This shape makes sense: if we start with a function $f : \mathbb{R}^n \to \mathbb{R}^m$, then at a point $x \in \mathbb{R}^n$ we expect to get the shapes
* $f(x) \in \mathbb{R}^m$, the value of $f$ at $x$,
* $\partial f(x) \in \mathbb{R}^{m \times n}$, the Jacobian matrix at $x$,
* $\partial^2 f(x) \in \mathbb{R}^{m \times n \times n}$, the Hessian at $x$,
and so on.
To implement `hessian`, we could have used `jacrev(jacrev(f))` or `jacrev(jacfwd(f))` or any other composition of the two. But forward-over-reverse is typically the most efficient. That's because in the inner Jacobian computation we're often differentiating a function with a wide Jacobian (maybe like a loss function $f : \mathbb{R}^n \to \mathbb{R}$), while in the outer Jacobian computation we're differentiating a function with a square Jacobian (since $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$), which is where forward-mode wins out.
## How it's made: two foundational autodiff functions
### Jacobian-Vector products (JVPs, aka forward-mode autodiff)
JAX includes efficient and general implementations of both forward- and reverse-mode automatic differentiation. The familiar `grad` function is built on reverse-mode, but to explain the difference in the two modes, and when each can be useful, we need a bit of math background.
#### JVPs in math
Mathematically, given a function $f : \mathbb{R}^n \to \mathbb{R}^m$, the Jacobian matrix of $f$ evaluated at an input point $x \in \mathbb{R}^n$, denoted $\partial f(x)$, is often thought of as a matrix in $\mathbb{R}^{m \times n}$:
$\qquad \partial f(x) \in \mathbb{R}^{m \times n}$.
But we can also think of $\partial f(x)$ as a linear map, which maps the tangent space of the domain of $f$ at the point $x$ (which is just another copy of $\mathbb{R}^n$) to the tangent space of the codomain of $f$ at the point $f(x)$ (a copy of $\mathbb{R}^m$):
$\qquad \partial f(x) : \mathbb{R}^n \to \mathbb{R}^m$.
This map is called the [pushforward map](https://en.wikipedia.org/wiki/Pushforward_(differential)) of $f$ at $x$. The Jacobian matrix is just the matrix for this linear map in a standard basis.
If we don't commit to one specific input point $x$, then we can think of the function $\partial f$ as first taking an input point and returning the Jacobian linear map at that input point:
$\qquad \partial f : \mathbb{R}^n \to \mathbb{R}^n \to \mathbb{R}^m$.
In particular, we can uncurry things so that given input point $x \in \mathbb{R}^n$ and a tangent vector $v \in \mathbb{R}^n$, we get back an output tangent vector in $\mathbb{R}^m$. We call that mapping, from $(x, v)$ pairs to output tangent vectors, the *Jacobian-vector product*, and write it as
$\qquad (x, v) \mapsto \partial f(x) v$
#### JVPs in JAX code
Back in Python code, JAX's `jvp` function models this transformation. Given a Python function that evaluates $f$, JAX's `jvp` is a way to get a Python function for evaluating $(x, v) \mapsto (f(x), \partial f(x) v)$.
```
from jax import jvp
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
key, subkey = random.split(key)
v = random.normal(subkey, W.shape)
# Push forward the vector `v` along `f` evaluated at `W`
y, u = jvp(f, (W,), (v,))
```
In terms of Haskell-like type signatures, we could write
```haskell
jvp :: (a -> b) -> a -> T a -> (b, T b)
```
where we use `T a` to denote the type of the tangent space for `a`. In words, `jvp` takes as arguments a function of type `a -> b`, a value of type `a`, and a tangent vector value of type `T a`. It gives back a pair consisting of a value of type `b` and an output tangent vector of type `T b`.
The `jvp`-transformed function is evaluated much like the original function, but paired up with each primal value of type `a` it pushes along tangent values of type `T a`. For each primitive numerical operation that the original function would have applied, the `jvp`-transformed function executes a "JVP rule" for that primitive that both evaluates the primitive on the primals and applies the primitive's JVP at those primal values.
That evaluation strategy has some immediate implications about computational complexity: since we evaluate JVPs as we go, we don't need to store anything for later, and so the memory cost is independent of the depth of the computation. In addition, the FLOP cost of the `jvp`-transformed function is about 2x the cost of just evaluating the function. Put another way, for a fixed primal point $x$, we can evaluate $v \mapsto \partial f(x) \cdot v$ for about the same cost as evaluating $f$.
That memory complexity sounds pretty compelling! So why don't we see forward-mode very often in machine learning?
To answer that, first think about how you could use a JVP to build a full Jacobian matrix. If we apply a JVP to a one-hot tangent vector, it reveals one column of the Jacobian matrix, corresponding to the nonzero entry we fed in. So we can build a full Jacobian one column at a time, and to get each column costs about the same as one function evaluation. That will be efficient for functions with "tall" Jacobians, but inefficient for "wide" Jacobians.
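To make that picture concrete, here's a small sketch that builds a full Jacobian column by column with `jvp` and one-hot tangent vectors, reusing the `f` and `W` defined above, and checks it against `jacfwd` (the helper `jac_by_columns` is an illustrative name, not a JAX API):
```
# Each one-hot tangent vector pushed through jvp reveals one column of the Jacobian.
def jac_by_columns(f, x):
    basis = np.eye(x.size, dtype=x.dtype).reshape((x.size,) + x.shape)
    columns = [jvp(f, (x,), (e,))[1] for e in basis]
    return np.stack(columns, axis=-1)

print(np.allclose(jac_by_columns(f, W), jacfwd(f)(W)))
```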
If you're doing gradient-based optimization in machine learning, you probably want to minimize a loss function from parameters in $\mathbb{R}^n$ to a scalar loss value in $\mathbb{R}$. That means the Jacobian of this function is a very wide matrix: $\partial f(x) \in \mathbb{R}^{1 \times n}$, which we often identify with the Gradient vector $\nabla f(x) \in \mathbb{R}^n$. Building that matrix one column at a time, with each call taking a similar number of FLOPs to evaluating the original function, sure seems inefficient! In particular, for training neural networks, where $f$ is a training loss function and $n$ can be in the millions or billions, this approach just won't scale.
To do better for functions like this, we just need to use reverse-mode.
### Vector-Jacobian products (VJPs, aka reverse-mode autodiff)
Where forward-mode gives us back a function for evaluating Jacobian-vector products, which we can then use to build Jacobian matrices one column at a time, reverse-mode is a way to get back a function for evaluating vector-Jacobian products (equivalently Jacobian-transpose-vector products), which we can use to build Jacobian matrices one row at a time.
#### VJPs in math
Let's again consider a function $f : \mathbb{R}^n \to \mathbb{R}^m$.
Starting from our notation for JVPs, the notation for VJPs is pretty simple:
$\qquad (x, v) \mapsto v \partial f(x)$,
where $v$ is an element of the cotangent space of $f$ at $x$ (isomorphic to another copy of $\mathbb{R}^m$). When being rigorous, we should think of $v$ as a linear map $v : \mathbb{R}^m \to \mathbb{R}$, and when we write $v \partial f(x)$ we mean function composition $v \circ \partial f(x)$, where the types work out because $\partial f(x) : \mathbb{R}^n \to \mathbb{R}^m$. But in the common case we can identify $v$ with a vector in $\mathbb{R}^m$ and use the two almost interchangeably, just like we might sometimes flip between "column vectors" and "row vectors" without much comment.
With that identification, we can alternatively think of the linear part of a VJP as the transpose (or adjoint conjugate) of the linear part of a JVP:
$\qquad (x, v) \mapsto \partial f(x)^\mathsf{T} v$.
For a given point $x$, we can write the signature as
$\qquad \partial f(x)^\mathsf{T} : \mathbb{R}^m \to \mathbb{R}^n$.
The corresponding map on cotangent spaces is often called the [pullback](https://en.wikipedia.org/wiki/Pullback_(differential_geometry))
of $f$ at $x$. The key for our purposes is that it goes from something that looks like the output of $f$ to something that looks like the input of $f$, just like we might expect from a transposed linear function.
#### VJPs in JAX code
Switching from math back to Python, the JAX function `vjp` can take a Python function for evaluating $f$ and give us back a Python function for evaluating the VJP $(x, v) \mapsto (f(x), v^\mathsf{T} \partial f(x))$.
```
from jax import vjp
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
y, vjp_fun = vjp(f, W)
key, subkey = random.split(key)
u = random.normal(subkey, y.shape)
# Pull back the covector `u` along `f` evaluated at `W`
v = vjp_fun(u)
```
In terms of Haskell-like type signatures, we could write
```haskell
vjp :: (a -> b) -> a -> (b, CT b -> CT a)
```
where we use `CT a` to denote the type for the cotangent space for `a`. In words, `vjp` takes as arguments a function of type `a -> b` and a point of type `a`, and gives back a pair consisting of a value of type `b` and a linear map of type `CT b -> CT a`.
This is great because it lets us build Jacobian matrices one row at a time, and the FLOP cost for evaluating $(x, v) \mapsto (f(x), v^\mathsf{T} \partial f(x))$ is only about twice the cost of evaluating $f$. In particular, if we want the gradient of a function $f : \mathbb{R}^n \to \mathbb{R}$, we can do it in just one call. That's how `grad` is efficient for gradient-based optimization, even for objectives like neural network training loss functions on millions or billions of parameters.
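As a sketch of that idea (an illustrative re-implementation, not JAX's actual internals), a `grad` for scalar-output functions can be written with a single VJP call by pulling back the cotangent 1.0; `our_grad` and `scalar_fun` below are made-up names for this illustration:
```
def our_grad(f):
    def gradfun(x):
        y, vjp_fun = vjp(f, x)
        g, = vjp_fun(np.ones_like(y))  # one pullback of the scalar cotangent gives the whole gradient
        return g
    return gradfun

scalar_fun = lambda w: np.sum(np.tanh(w)) ** 2
print(np.allclose(our_grad(scalar_fun)(W), grad(scalar_fun)(W)))
```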
There's a cost, though: though the FLOPs are friendly, memory scales with the depth of the computation. Also, the implementation is traditionally more complex than that of forward-mode, though JAX has some tricks up its sleeve (that's a story for a future notebook!).
For more on how reverse-mode works, see [this tutorial video from the Deep Learning Summer School in 2017](http://videolectures.net/deeplearning2017_johnson_automatic_differentiation/).
## Hessian-vector products using both forward- and reverse-mode
In a previous section, we implemented a Hessian-vector product function just using reverse-mode:
```
def hvp(f, x, v):
    return grad(lambda x: np.vdot(grad(f)(x), v))(x)
```
That's efficient, but we can do even better and save some memory by using forward-mode together with reverse-mode.
Mathematically, given a function $f : \mathbb{R}^n \to \mathbb{R}$ to differentiate, a point $x \in \mathbb{R}^n$ at which to linearize the function, and a vector $v \in \mathbb{R}^n$, the Hessian-vector product function we want is
$(x, v) \mapsto \partial^2 f(x) v$
Consider the helper function $g : \mathbb{R}^n \to \mathbb{R}^n$ defined to be the derivative (or gradient) of $f$, namely $g(x) = \partial f(x)$. All we need is its JVP, since that will give us
$(x, v) \mapsto \partial g(x) v = \partial^2 f(x) v$.
We can translate that almost directly into code:
```
from jax import jvp, grad
# forward-over-reverse
def hvp(f, primals, tangents):
return jvp(grad(f), primals, tangents)[1]
```
Even better, since we didn't have to call `np.dot` directly, this `hvp` function works with arrays of any shape and with arbitrary container types (like vectors stored as nested lists/dicts/tuples), and doesn't even have a dependence on `jax.numpy`.
Here's an example of how to use it:
```
def f(X):
return np.sum(np.tanh(X)**2)
key, subkey1, subkey2 = random.split(key, 3)
X = random.normal(subkey1, (30, 40))
V = random.normal(subkey2, (30, 40))
ans1 = hvp(f, (X,), (V,))
ans2 = np.tensordot(hessian(f)(X), V, 2)
print(np.allclose(ans1, ans2, 1e-4, 1e-4))
```
Another way you might consider writing this is using reverse-over-forward:
```
# reverse-over-forward
def hvp_revfwd(f, primals, tangents):
g = lambda primals: jvp(f, primals, tangents)[1]
return grad(g)(primals)
```
That's not quite as good, though, because forward-mode has less overhead than reverse-mode, and since the outer differentiation operator here has to differentiate a larger computation than the inner one, keeping forward-mode on the outside works best. For completeness, here's a reverse-over-reverse version along with a timing comparison of all the approaches:
```
# reverse-over-reverse, only works for single arguments
def hvp_revrev(f, primals, tangents):
x, = primals
v, = tangents
return grad(lambda x: np.vdot(grad(f)(x), v))(x)
print("Forward over reverse")
%timeit -n10 -r3 hvp(f, (X,), (V,))
print("Reverse over forward")
%timeit -n10 -r3 hvp_revfwd(f, (X,), (V,))
print("Reverse over reverse")
%timeit -n10 -r3 hvp_revrev(f, (X,), (V,))
print("Naive full Hessian materialization")
%timeit -n10 -r3 np.tensordot(hessian(f)(X), V, 2)
```
## Composing VJPs, JVPs, and `vmap`
### Jacobian-Matrix and Matrix-Jacobian products
Now that we have `jvp` and `vjp` transformations that give us functions to push-forward or pull-back single vectors at a time, we can use JAX's [`vmap` transformation](https://github.com/google/jax#auto-vectorization-with-vmap) to push and pull entire bases at once. In particular, we can use that to write fast matrix-Jacobian and Jacobian-matrix products.
```
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
# Pull back the covectors `m_i` along `f`, evaluated at `W`, for all `i`.
# First, use a list comprehension to loop over rows in the matrix M.
def loop_mjp(f, x, M):
y, vjp_fun = vjp(f, x)
return np.vstack([vjp_fun(mi) for mi in M])
# Now, use vmap to build a computation that does a single fast matrix-matrix
# multiply, rather than an outer loop over vector-matrix multiplies.
def vmap_mjp(f, x, M):
y, vjp_fun = vjp(f, x)
return vmap(vjp_fun)(M)
key = random.PRNGKey(0)
num_covecs = 128
U = random.normal(key, (num_covecs,) + y.shape)
loop_vs = loop_mjp(f, W, M=U)
print('Non-vmapped Matrix-Jacobian product')
%timeit -n10 -r3 loop_mjp(f, W, M=U)
print('\nVmapped Matrix-Jacobian product')
vmap_vs = vmap_mjp(f, W, M=U)
%timeit -n10 -r3 vmap_mjp(f, W, M=U)
assert np.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Matrix-Jacobian Products should be identical'
def loop_jmp(f, x, M):
    # jvp immediately returns the primal and tangent values as a tuple,
    # so we'll compute and select the tangents in a list comprehension
    return np.vstack([jvp(f, (x,), (si,))[1] for si in M])
def vmap_jmp(f, x, M):
    _jvp = lambda s: jvp(f, (x,), (s,))[1]
    return vmap(_jvp)(M)
num_vecs = 128
S = random.normal(key, (num_vecs,) + W.shape)
loop_vs = loop_jmp(f, W, M=S)
print('Non-vmapped Jacobian-Matrix product')
%timeit -n10 -r3 loop_jmp(f, W, M=S)
vmap_vs = vmap_jmp(f, W, M=S)
print('\nVmapped Jacobian-Matrix product')
%timeit -n10 -r3 vmap_jmp(f, W, M=S)
assert np.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Jacobian-Matrix products should be identical'
```
### The implementation of `jacfwd` and `jacrev`
Now that we've seen fast Jacobian-matrix and matrix-Jacobian products, it's not hard to guess how to write `jacfwd` and `jacrev`. We just use the same technique to push-forward or pull-back an entire standard basis (isomorphic to an identity matrix) at once.
```
from jax import jacrev as builtin_jacrev
def our_jacrev(f):
def jacfun(x):
y, vjp_fun = vjp(f, x)
# Use vmap to do a matrix-Jacobian product.
# Here, the matrix is the Euclidean basis, so we get all
# entries in the Jacobian at once.
J, = vmap(vjp_fun, in_axes=0)(np.eye(len(y)))
return J
return jacfun
assert np.allclose(builtin_jacrev(f)(W), our_jacrev(f)(W)), 'Incorrect reverse-mode Jacobian results!'
from jax import jacfwd as builtin_jacfwd
def our_jacfwd(f):
def jacfun(x):
_jvp = lambda s: jvp(f, (x,), (s,))[1]
Jt =vmap(_jvp, in_axes=1)(np.eye(len(x)))
return np.transpose(Jt)
return jacfun
assert np.allclose(builtin_jacfwd(f)(W), our_jacfwd(f)(W)), 'Incorrect forward-mode Jacobian results!'
```
Interestingly, [Autograd](https://github.com/hips/autograd) couldn't do this. Our [implementation of reverse-mode `jacobian` in Autograd](https://github.com/HIPS/autograd/blob/96a03f44da43cd7044c61ac945c483955deba957/autograd/differential_operators.py#L60) had to pull back one vector at a time with an outer-loop `map`. Pushing one vector at a time through the computation is much less efficient than batching it all together with `vmap`.
Another thing that Autograd couldn't do is `jit`. No matter how much Python dynamism you use in the function being differentiated, you can always use `jit` on the linear part of the computation. For example:
```
def f(x):
try:
if x < 3:
return 2 * x ** 3
else:
raise ValueError
except ValueError:
return np.pi * x
y, f_vjp = vjp(f, 4.)
print(jit(f_vjp)(1.))
```
## Complex numbers and differentiation
JAX is great at complex numbers and differentiation. To support both [holomorphic and non-holomorphic differentiation](https://en.wikipedia.org/wiki/Holomorphic_function), JAX follows [Autograd's convention](https://github.com/HIPS/autograd/blob/master/docs/tutorial.md#complex-numbers) for encoding complex derivatives.
Consider a complex-to-complex function $f: \mathbb{C} \to \mathbb{C}$ that we break down into its component real-to-real functions:
```
def f(z):
    x, y = real(z), imag(z)
    return u(x, y) + v(x, y) * 1j
```
That is, we've decomposed $f(z) = u(x, y) + v(x, y) i$ where $z = x + y i$. We define `grad(f)` to correspond to
```
def grad_f(z):
x, y = real(z), imag(z)
return grad(u, 0)(x, y) + grad(u, 1)(x, y) * 1j
```
In math symbols, that means we define $\partial f(z) \triangleq \partial_0 u(x, y) + \partial_1 u(x, y) i$. So we throw out $v$, ignoring the complex component function of $f$ entirely!
This convention covers three important cases:
1. If `f` evaluates a holomorphic function, then we get the usual complex derivative, since $\partial_0 u = \partial_1 v$ and $\partial_1 u = - \partial_0 v$.
2. If `f` evaluates the real-valued loss function of a complex parameter `x`, then we get a result that we can use in gradient-based optimization by taking steps in the direction of the conjugate of `grad(f)(x)`.
3. If `f` evaluates a real-to-real function, but its implementation uses complex primitives internally (some of which must be non-holomorphic, e.g. FFTs used in convolutions) then we get the same result that an implementation that only used real primitives would have given.
By throwing away `v` entirely, this convention does not handle the case where `f` evaluates a non-holomorphic function and you want to evaluate all of $\partial_0 u$, $\partial_1 u$, $\partial_0 v$, and $\partial_1 v$ at once. But in that case the answer would have to contain four real values, and so there's no way to express it as a single complex number.
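If you do need all four real partials of a non-holomorphic function, one workaround (a sketch, not an official API; `f_c`, `u_part`, and `v_part` are made-up names for a toy non-holomorphic example) is to expose the real and imaginary component functions yourself and differentiate each one:
```
f_c = lambda z: z * np.conj(z) + 2j * z          # a toy non-holomorphic complex function
u_part = lambda x, y: np.real(f_c(x + 1j * y))   # u(x, y)
v_part = lambda x, y: np.imag(f_c(x + 1j * y))   # v(x, y)

x0, y0 = 1.0, 2.0
print(grad(u_part, 0)(x0, y0), grad(u_part, 1)(x0, y0))  # partial_0 u, partial_1 u
print(grad(v_part, 0)(x0, y0), grad(v_part, 1)(x0, y0))  # partial_0 v, partial_1 v
```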
You should expect complex numbers to work everywhere in JAX. Here's differentiating through a Cholesky decomposition of a complex matrix:
```
A = np.array([[5., 2.+3j, 5j],
[2.-3j, 7., 1.+7j],
[-5j, 1.-7j, 12.]])
def f(X):
L = np.linalg.cholesky(X)
return np.sum((L - np.sin(L))**2)
grad(f)(A)
```
For primitives' JVP rules, writing the primals as $z = a + bi$ and the tangents as $t = c + di$, we define the Jacobian-vector product $t \mapsto \partial f(z) \cdot t$ as
$t \mapsto
\begin{matrix} \begin{bmatrix} 1 & 1 \end{bmatrix} \\ ~ \end{matrix}
\begin{bmatrix} \partial_0 u(a, b) & -\partial_0 v(a, b) \\ - \partial_1 u(a, b) i & \partial_1 v(a, b) i \end{bmatrix}
\begin{bmatrix} c \\ d \end{bmatrix}$.
See Chapter 4 of [Dougal's PhD thesis](https://dougalmaclaurin.com/phd-thesis.pdf) for more details.
# More advanced autodiff
In this notebook, we worked through some easy, and then progressively more complicated, applications of automatic differentiation in JAX. We hope you now feel that taking derivatives in JAX is easy and powerful.
There's a whole world of other autodiff tricks and functionality out there. Topics we didn't cover, but hope to cover in an "Advanced Autodiff Cookbook", include:
- Gauss-Newton Vector Products, linearizing once
- Custom VJPs and JVPs
- Efficient derivatives at fixed-points
- Estimating the trace of a Hessian using random Hessian-vector products.
- Forward-mode autodiff using only reverse-mode autodiff.
- Taking derivatives with respect to custom data types.
- Checkpointing (binomial checkpointing for efficient reverse-mode, not model snapshotting).
- Optimizing VJPs with Jacobian pre-accumulation.
# 1D Variability hypothesis testing for HBEC IFN experiment
```
import scanpy as sc
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
from pybedtools import BedTool
import pickle as pkl
%matplotlib inline
import sys
sys.path.append('/home/ssm-user/Github/scrna-parameter-estimation/dist/memento-0.0.6-py3.8.egg')
sys.path.append('/home/ssm-user/Github/misc-seq/miscseq/')
import encode
import memento
data_path = '/data_volume/memento/hbec/'
```
### Read the processed RNA data
Focus on the club and bc/club cells and type I interferons for now.
Encode the timestamps to integers.
```
adata = sc.read(data_path + 'HBEC_type_I_filtered_counts_deep.h5ad')
adata = adata[:, ~adata.var.index.str.startswith('MT-')].copy()
# adata.obs['cell_type'] = adata.obs['cell_type'].apply(lambda x: x if x != 'basal/club' else 'bc')
# adata.obs['cell_type'] = adata.obs['cell_type'].apply(lambda x: x if x != 'ionocyte/tuft' else 'ion-tuft')
```
```
converter = {'basal/club':'BC', 'basal':'B', 'ciliated':'C', 'goblet':'G', 'ionocyte/tuft':'IT', 'neuroendo':'N'}
adata.obs['ct'] = adata.obs['cell_type'].apply(lambda x: converter[x])
```
### Setup memento
```
def assign_q(batch):
if batch == 0:
return 0.387*0.25
elif batch == 1:
return 0.392*0.25
elif batch == 2:
return 0.436*0.25
else:
return 0.417*0.25
adata.obs['q'] = adata.obs['batch'].apply(assign_q)
memento.setup_memento(adata, q_column='q')
```
### Run memento for each subset, comparing to control
```
cts = ['C', 'B', 'BC']
tps = ['3', '6', '9', '24', '48']
stims = ['alpha', 'beta', 'gamma', 'lambda']
import os
done_files = os.listdir(data_path + 'binary_test_latest/')
for ct in cts:
for tp in tps:
for stim in stims:
            fname = '{}_{}_{}.h5ad'.format(ct, stim, tp)
if fname in done_files:
print('Skipping', fname)
continue
print('starting', ct, tp, stim)
adata_stim = adata.copy()[
adata.obs.ct.isin([ct]) & \
adata.obs.stim.isin(['control', stim]) & \
adata.obs.time.isin(['0',tp]), :].copy()
time_converter={0:0, int(tp):1}
adata_stim.obs['time_step'] = adata_stim.obs['time'].astype(int).apply(lambda x: time_converter[x])
memento.create_groups(adata_stim, label_columns=['time_step', 'donor'])
memento.compute_1d_moments(adata_stim, min_perc_group=.9)
memento.ht_1d_moments(
adata_stim,
formula_like='1 + time_step + donor',
treatment_col='time_step',
num_boot=10000,
verbose=1,
num_cpus=93,
resampling='permutation',
approx=True)
adata_stim.write(data_path + 'binary_test_latest/{}_{}_{}.h5ad'.format(ct, stim, tp))
```
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
```
# Using a pre-trained model
[`torchvision.models`](https://pytorch.org/vision/stable/models.html) offers a collection of well-known models from the *deep learning* literature
By default a model is loaded with random weights
If we pass `pretrained=True`, a trained model is downloaded
Models are available for classification, detection (localization), and segmentation
## Image classification model
torchvision has a vast number of classification models, including several versions of VGG, ResNet, AlexNet, GoogLeNet, and DenseNet, among others
We will load a [resnet18](https://arxiv.org/pdf/1512.03385.pdf) model [pre-trained](https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.resnet18) on [ImageNet](http://image-net.org/)
```
from torchvision import models
model = models.resnet18(pretrained=True, progress=True)
model.eval()
```
The pre-trained models expect images with
- three channels (RGB)
- at least 224x224 pixels
- pixel values between 0 and 1 (float)
- normalized with
normalize = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
```
from PIL import Image
import torch
from torchvision import transforms
img = Image.open("img/dog.jpg")
my_transform = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))])
# The classes with the highest probability are
probs = torch.nn.Softmax(dim=1)(model.forward(my_transform(img).unsqueeze(0)))
best = probs.argsort(descending=True)
display(best[0, :10],
probs[0, best[0, :10]])
```
What do these classes correspond to?
ImageNet classes: https://gist.github.com/ageitgey/4e1342c10a71981d0b491e1b8227328b
## Model for detecting objects in images
In addition to the classification models, torchvision also has models to
- Detect objects in an image: Faster RCNN
- Perform instance segmentation: Mask RCNN
- Perform semantic segmentation: FCN, DeepLab
- Classify video
Next we will try [Faster RCNN](https://arxiv.org/abs/1506.01497) to perform detection
This model was pre-trained on the [COCO](https://cocodataset.org/) dataset
The model returns a dictionary with
- 'boxes': The bounding boxes of the detected objects
- 'labels': The most likely class label of each object
- 'scores': The probability of each label
```
model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
transform = transforms.ToTensor()
img = Image.open("img/pelea.jpg") # Does not require color normalization
img_tensor = transform(img)
result = model(img_tensor.unsqueeze(0))[0]
def filter_results(result, threshold=0.9):
mask = result['scores'] > threshold
bbox = result['boxes'][mask].detach().cpu().numpy()
lbls = result['labels'][mask].detach().cpu().numpy()
return bbox, lbls
from PIL import ImageFont, ImageDraw
#fnt = ImageFont.truetype("arial.ttf", 20)
label2name = {1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle',
              8: 'truck', 18: 'dog'}
def draw_rectangles(img, bbox, lbls):
draw = ImageDraw.Draw(img)
for k in range(len(bbox)):
if lbls[k] in label2name.keys():
draw.rectangle(bbox[k], fill=None, outline='white', width=2)
draw.text([int(d) for d in bbox[k][:2]], label2name[lbls[k]], fill='white')
bbox, lbls = filter_results(result)
img = Image.open("img/pelea.jpg")
draw_rectangles(img, bbox, lbls)
display(img)
```
# Transfer Learning
Next we will use transfer learning to train an image classifier on a subset of the food 5k dataset
The goal is to classify whether or not an image corresponds to food
We store the images with the following folder structure
```
!ls img/food5k/
!ls img/food5k/train
!ls img/food5k/valid
```
With this we can use `torchvision.datasets.ImageFolder` to create the datasets very easily
Since we will use a pre-trained model, we must deliver the images resized to 224x224 and with normalized color
We will also use data augmentation on the training set
```
from torchvision import datasets
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
valid_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
train_dataset = datasets.ImageFolder('img/food5k/train', transform=train_transforms)
valid_dataset = datasets.ImageFolder('img/food5k/valid', transform=valid_transforms)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=256, shuffle=False)
for image, label in train_loader:
break
fig, ax = plt.subplots(1, 6, figsize=(9, 2), tight_layout=True)
for i in range(6):
ax[i].imshow(image[i].permute(1,2,0).numpy())
ax[i].axis('off')
ax[i].set_title(label[i].numpy())
```
We will use the ResNet18 model
```
model = models.resnet18(pretrained=True, progress=True)
# model = models.squeezenet1_1(pretrained=True, progress=True)
display(model)
```
In this case we will retrain only the last layer: `fc`
The remaining layers will be frozen
To freeze a layer we simply set `requires_grad=False` on its parameters
When we call `backward`, no gradient will be computed for these layers
```
# Freeze all the parameters
for param in model.parameters():
param.requires_grad = False
# Replace the output layer with a new one
model.fc = torch.nn.Linear(model.fc.in_features, 2) # For ResNet
#model.classifier = torch.nn.Sequential(torch.nn.Dropout(p=0.5, inplace=False),
#                                       torch.nn.Conv2d(512, 2, kernel_size=(1, 1), stride=(1, 1)),
#                                       torch.nn.ReLU(inplace=True),
#                                       torch.nn.AdaptiveAvgPool2d(output_size=(1, 1))) # For SqueezeNet
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
for x, y in train_loader:
optimizer.zero_grad()
yhat = model.forward(x)
loss = criterion(yhat, y)
loss.backward()
optimizer.step()
epoch_loss = 0.0
for x, y in valid_loader:
yhat = model.forward(x)
loss = criterion(yhat, y)
epoch_loss += loss.item()
print(f"{epoch}, {epoch_loss:0.4f}, {torch.sum(yhat.argmax(dim=1) == y).item()/100}")
targets, predictions = [], []
for mbdata, label in valid_loader:
logits = model.forward(mbdata)
predictions.append(logits.argmax(dim=1).detach().numpy())
targets.append(label.numpy())
predictions = np.concatenate(predictions)
targets = np.concatenate(targets)
from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(targets, predictions)
display(cm)
print(classification_report(targets, predictions))
```
How does the above compare to training a convolutional architecture from scratch?
As an example, the LeNet5 architecture is adapted to accept 224x224 color images. How much performance do we get by training for the same number of epochs?
```
import torch.nn as nn
class Lenet5(nn.Module):
def __init__(self):
super(type(self), self).__init__()
        # The input consists of 3x224x224 images
self.features = nn.Sequential(nn.Conv2d(3, 6, 5),
nn.ReLU(),
nn.MaxPool2d(3),
nn.Conv2d(6, 16, 5),
nn.ReLU(),
nn.MaxPool2d(3),
nn.Conv2d(16, 32, 5),
nn.ReLU(),
nn.MaxPool2d(3))
self.classifier = nn.Sequential(nn.Linear(32*6*6, 120),
nn.ReLU(),
nn.Linear(120, 84),
nn.ReLU(),
nn.Linear(84, 2))
def forward(self, x):
z = self.features(x)
#print(z.shape)
        # This has shape Mx32x6x6
z = z.view(-1, 32*6*6)
        # After flattening this has shape Mx1152 (32*6*6)
return self.classifier(z)
model = Lenet5()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
for x, y in train_loader:
optimizer.zero_grad()
yhat = model.forward(x)
loss = criterion(yhat, y)
loss.backward()
optimizer.step()
epoch_loss = 0.0
for x, y in valid_loader:
yhat = model.forward(x)
loss = criterion(yhat, y)
epoch_loss += loss.item()
print(f"{epoch}, {epoch_loss:0.4f}, {torch.sum(yhat.argmax(dim=1) == y).item()/100}")
targets, predictions = [], []
for mbdata, label in valid_loader:
logits = model.forward(mbdata)
predictions.append(logits.argmax(dim=1).detach().numpy())
targets.append(label.numpy())
predictions = np.concatenate(predictions)
targets = np.concatenate(targets)
from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(targets, predictions)
display(cm)
print(classification_report(targets, predictions))
```
# Summary
Aspects to consider when training neural networks
- Architectures: number and arrangement of layers, activation functions
- Loss functions, optimizers and their parameters (learning rate, momentum)
- Check convergence and overfitting:
    - Checkpointing: Save the latest model and the one with the lowest validation loss
    - Early stopping: Stop training if the validation error does not decrease for a certain number of epochs (see the sketch after this list)
- Parameter initialization: Try several training runs from different random initializations
- If the model overfits early
    - Reduce its complexity
    - Add regularization: data augmentation, weight decay, Dropout
- If I want to take advantage of a pre-trained model
    - Transfer learning
    - [Model zoo](https://modelzoo.co/)
    - [Papers with code](https://paperswithcode.com/)
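As a minimal sketch of checkpointing with early stopping (illustrative only; `model`, `valid_loader`, and `criterion` refer to the objects defined earlier in this notebook, and the patience value and file name are arbitrary choices):
```
best_loss, patience, epochs_without_improvement = float('inf'), 3, 0
for epoch in range(100):
    # ... run the usual training loop over train_loader here ...
    # Evaluate on the validation set
    valid_loss = 0.0
    with torch.no_grad():
        for x, y in valid_loader:
            valid_loss += criterion(model(x), y).item()
    if valid_loss < best_loss:
        # Checkpoint: keep the weights with the lowest validation loss so far
        best_loss, epochs_without_improvement = valid_loss, 0
        torch.save(model.state_dict(), 'best_model.pt')
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Early stopping at epoch {epoch}")
            break
```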
Agile strategy
> Develop fast and iterate: Start simple. Propose a solution, implement it, train and evaluate. Analyze the failures, modify and try again
Best of luck with your future developments!
# The JupyterLab Interface
The JupyterLab interface consists of a main work area containing tabs of documents and activities, a collapsible left sidebar, and a menu bar. The left sidebar contains a file browser, the list of running terminals and kernels, the table of contents, and the extension manager.

JupyterLab sessions always reside in a workspace. Workspaces contain the state of JupyterLab: the files that are currently open, the layout of the application areas and tabs, etc.
Reference: [https://jupyterlab.readthedocs.io/en/latest/user/interface.html](https://jupyterlab.readthedocs.io/en/latest/user/interface.html)
# Notebook
Currently you are looking at a Jupyter Notebook. A Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel, therefore it runs Python code.
Reference: [https://github.com/jupyter/notebook/blob/6.1.x/docs/source/examples/Notebook/Running%20Code.ipynb](https://github.com/jupyter/notebook/blob/6.1.x/docs/source/examples/Notebook/Running%20Code.ipynb)
## Notebook Cell Types
In a Jupyter Notebook we can have text cells and code cells.
In text cells we can write markdown ([Markdown cheat sheet](https://www.markdownguide.org/cheat-sheet/)).
In code cells we can write program code which is executed by the IPython kernel associated with the notebook.
Code cells have brackets `[ ]:` in front of them:
* `[ ]:` means that the cell is empty.
* `[*]:` means that the cell is currently being executed.
* `[1]:` here the number indicates the execution step in the notebook. This execution step is updated every time the cell is executed.
To render a text cell or execute a code cell you can press the run button in the toolbar above or press `Shift-Enter` on your keyboard.
```
2 + 2
# If we want to write text in a code cell we have to comment it out with '#'.
# Next we asign some variables.
a = 2
b = 2
c = a + b
print("a is", a)
print("b is", b)
print("a + b =", c)
```
# Displaying an Image
```
# import packages
from tifffile import imread
from matplotlib import pyplot as plt
img = imread('imgs/t000.tif')
plt.figure(figsize=(10, 10))
plt.imshow(img, cmap='gray')
```
# Python Basics
## If Statement
The if statement allows us to branch code and act on different conditions.
The statement has the following logic:
```
if condition_0:
code_0
elif condition_1:
code_1
else:
code_2
```
`code_0` is executed if `condition_0` holds true. If `condition_0` is false `condition_1` is evaluated and `code_1` is executed if `condition_1` is true. If both conditions evaluate to false `code_2` is executed.
__Note:__ `elif` and `else` are optional.
```
# Assign value to number
number = 3
# Test if the number is negative, zero or positive
if number < 0:
print("{} is a negative number.".format(number))
elif number == 0:
print("The number is zero.")
else:
print("{} is a positive number.".format(number))
# The following code is outside of the if statement and always executed.
print("Done")
```
## Functions
In Python we can define functions which can be reused in our code. It is good practice to define a function if we want to reuse the same code multiple times!
```
def categorize_number(number):
"""
Prints to standard output if the number is negative, zero or positive.
Parameter:
----------
number: The number to categorize.
"""
if number < 0:
print("{} is a negative number.".format(number))
elif number == 0:
print("The number is zero.")
else:
print("{} is a positive number.".format(number))
categorize_number(number=-2)
```
## Lists
In python we can easily define a list.
```
numbers = [-1, 0, 1, 2]
type(numbers)
```
## For Loop
If we want to apply some code (e.g. a function) to all elements of a list we can use a for loop.
```
for number in numbers:
print("Currently processing number = {}.".format(number))
categorize_number(number)
```
## Range
A typical use case is that we want to get all numbers from 0 up to a given number, e.g. 100. Luckily we don't have to type all 100 numbers into a list to iterate over them. We can just use the `range` function, which is part of Python.
```
for i in range(100):
categorize_number(number=i)
import this
```
# Large Scale Kernel Ridge Regression
```
import sys
sys.path.insert(0, '/Users/eman/Documents/code_projects/kernellib')
sys.path.insert(0, '/home/emmanuel/code/kernellib')
import numpy as np
from kernellib.large_scale import RKSKernelRidge, KernelRidge as RKernelRidge
from kernellib.utils import estimate_sigma, r_assessment
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
#### Sample Data
```
seed = 123
rng = np.random.RandomState(seed)
n_train, n_test = 10000, 1000
d_dimensions = 1
noise = 0.1
xtrain = rng.randn(n_train, d_dimensions)
ytrain = np.sin(xtrain) + noise * rng.randn(n_train, d_dimensions)
xtest = rng.randn(n_test, d_dimensions)
ytest = np.sin(xtest) + noise * rng.randn(n_test, d_dimensions)
# training
n_components = 10
alpha = 1e-3
sigma = estimate_sigma(xtrain)
```
## Random Kitchen Sinks Regression
In this method, I implement the Random Kitchen Sinks algorithm found [here](https://people.eecs.berkeley.edu/~brecht/kitchensinks.html) and [here](https://people.eecs.berkeley.edu/~brecht/kitchensinks.html). I don't try to transform the problem into a matrix approximation and then fit it into the KRR framework. This is largely because the RKS algorithm that they implement uses complex values that need to be present when solving and transforming the data. If the complex values are taken out before the transformation, the results are garbage. Furthermore, some experiments that I ran (see below) show that RKS as a transformer does not approximate the kernel matrix very well. Therefore, this algorithm comes as is. It's a shame that you cannot write the function as a transformer, but the phenomenal results that you obtain make it worth it in my opinion.
```
rks_model = RKSKernelRidge(n_components=n_components, alpha=alpha, sigma=sigma,
random_state=seed)
rks_model.fit(xtrain, ytrain)
y_pred = rks_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);
%timeit rks_model.fit(xtrain, ytrain);
%timeit rks_model.predict(xtest);
fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = rks_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Random Kitchen Sinks Approximation')
plt.show()
```
#### Cross Validation Compatibility
```
sigmaMin = np.log10(sigma*0.1);
sigmaMax = np.log10(sigma*10);
sigmas = np.logspace(sigmaMin,sigmaMax,20);
param_grid = {
'n_components': [1, 5, 10, 25],
'alpha': [1e0, 1e-1, 1e-2, 1e-3],
'sigma': sigmas
}
n_jobs = 24
cv = 3
rks_grid_model = GridSearchCV(RKSKernelRidge(random_state=seed),
param_grid=param_grid, n_jobs=n_jobs, cv=cv,
verbose=1)
rks_grid_model.fit(xtrain, ytrain);
y_pred = rks_grid_model.predict(xtest)
r_assessment(y_pred, ytest)
fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = rks_grid_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Random Kitchen Sinks Approximation w/ Grid Search')
plt.show()
```
## Nystrom Approximation
```
approximation = 'nystrom'
nys_model = RKernelRidge(n_components=n_components,
alpha=alpha,
sigma=sigma,
kernel='rbf',
random_state=seed,
approximation=approximation)
nys_model.fit(xtrain, ytrain);
y_pred = nys_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);
%timeit nys_model.fit(xtrain, ytrain);
%timeit nys_model.predict(xtest);
fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = nys_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Nystrom Approximation')
plt.show()
```
### Nystrom w/ Grid Search
```
sigmaMin = np.log10(sigma*0.1);
sigmaMax = np.log10(sigma*10);
sigmas = np.logspace(sigmaMin,sigmaMax,20);
param_grid = {
'kernel': ['rbf'],
'n_components': [1, 5, 10, 25],
'alpha': [1e0, 1e-1, 1e-2, 1e-3],
'sigma': sigmas
}
n_jobs = 24
cv = 3
nys_grid_model = GridSearchCV(RKernelRidge(random_state=seed,
approximation=approximation),
param_grid=param_grid, n_jobs=n_jobs, cv=cv,
verbose=1)
nys_grid_model.fit(xtrain, ytrain);
y_pred = nys_grid_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);
print('Best sigma:', nys_grid_model.best_estimator_.sigma)
print('Best alpha:',nys_grid_model.best_estimator_.alpha)
print('Best Number of features:', nys_grid_model.best_estimator_.n_components)
print('Best Kernel:', nys_grid_model.best_estimator_.kernel)
fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = nys_grid_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Nystrom Approximation w/ Grid Search')
plt.show()
```
## Randomized Nystrom Matrix Approximation
```
approximation = 'rnystrom'
k_rank = 10
rnys_model = RKernelRidge(n_components=n_components,
alpha=alpha,
sigma=sigma,
kernel='rbf',
random_state=seed,
approximation=approximation,
k_rank=k_rank)
rnys_model.fit(xtrain, ytrain);
y_pred = rnys_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);
%timeit rnys_model.fit(xtrain, ytrain);
%timeit rnys_model.predict(xtest);
fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = rnys_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Randomized Nystrom Approximation')
plt.show()
```
## Random Fourier Features Approximation
```
approximation = 'rff'
rff_model = RKernelRidge(n_components=n_components,
alpha=alpha,
sigma=sigma,
kernel='rbf',
random_state=seed,
approximation=approximation)
rff_model.fit(xtrain, ytrain);
y_pred = rff_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);
%timeit rff_model.fit(xtrain, ytrain);
%timeit rff_model.predict(xtest);
fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = rff_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Random Fourier Features')
plt.show()
```
### Fast Food
```
approximation = 'fastfood'
fastfood_model = RKernelRidge(n_components=n_components,
alpha=alpha,
sigma=sigma,
kernel='rbf',
random_state=seed,
approximation=approximation,
trade_off='mem')
fastfood_model.fit(xtrain, ytrain);
y_pred = fastfood_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);
%timeit fastfood_model.fit(xtrain, ytrain);
%timeit fastfood_model.predict(xtest);
fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = fastfood_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Fast Food')
plt.show()
```
### Timing Comparison
#### Number of Features
```
from sklearn.datasets import make_low_rank_matrix
import seaborn; seaborn.set()
m_range = (2 ** (np.arange(12.3, 20))).astype(int)
print(m_range.shape, m_range.min(), m_range.max())
from sklearn.datasets import make_regression
%%time
t_rks = list()
t_nys = list()
t_rnys = list()
t_rbf = list()
t_rff = list()
# training
n_components = 50
alpha = 1e-3
gamma = 1.0
for m in m_range:
xtrain, ytrain = make_regression(n_samples=m, n_features=2000,
n_informative=200, n_targets=1,
effective_rank=50, noise=0.2,
random_state=seed)
print(xtrain.shape)
# -------------------------------
# Random Kitchen Sinks)
# -------------------------------
rks_model = RKSKernelRidge(n_components=n_components, alpha=alpha,
gamma=gamma, random_state=seed)
t1 = %timeit -oq rks_model.fit(xtrain, ytrain)
# ------------------------------
# Nystrom
# ------------------------------
approximation = 'nystrom'
nys_model = RKernelRidge(n_components=n_components,
alpha=alpha,
gamma=gamma,
kernel='rbf',
random_state=seed,
approximation=approximation)
t2 = %timeit -oq nys_model.fit(xtrain, ytrain);
# ----------------------------
# Randomized Nystrom
# ----------------------------
approximation = 'rnystrom'
k_rank = n_components
rnys_model = RKernelRidge(n_components=n_components,
alpha=alpha,
gamma=gamma,
kernel='rbf',
random_state=seed,
approximation=approximation,
k_rank=k_rank)
t3 = %timeit -oq rnys_model.fit(xtrain, ytrain);
# -----------------------------------
# RBF Sampler (Random Kitchen Sinks)
# -----------------------------------
approximation = 'rks'
rks_model = RKernelRidge(n_components=n_components,
alpha=alpha,
gamma=gamma,
kernel='rbf',
random_state=seed,
approximation=approximation)
t4 = %timeit -oq rks_model.fit(xtrain, ytrain);
# -----------------------------
# Random Fourier Features
# -----------------------------
approximation = 'rff'
rff_model = RKernelRidge(n_components=n_components,
alpha=alpha,
gamma=gamma,
kernel='rbf',
random_state=seed,
approximation=approximation)
t5 = %timeit -oq rff_model.fit(xtrain, ytrain);
t_rks.append(t1.best)
t_nys.append(t2.best)
t_rnys.append(t3.best)
t_rbf.append(t4.best)
t_rff.append(t5.best)
plt.loglog(m_range, t_rks, label='Random Kitchen Sinks')
plt.loglog(m_range, t_rff, label='Random Fourier Features')
plt.loglog(m_range, t_nys, label='Nystrom')
plt.loglog(m_range, t_rnys, label='Randomized Nystrom')
plt.loglog(m_range, t_rbf, label='RBF Sampler')
plt.legend(loc='upper left')
plt.xlabel('Number of Elements')
plt.ylabel('Execution Time (secs)');
plt.plot(m_range, t_rks, label='Random Kitchen Sinks')
plt.plot(m_range, t_rff, label='Random Fourier Features')
plt.plot(m_range, t_nys, label='Nystrom')
plt.plot(m_range, t_rnys, label='Randomized Nystrom')
plt.plot(m_range, t_rbf, label='RBF Sampler')
plt.legend(loc='upper left')
plt.xlabel('Number of Elements')
plt.ylabel('Execution Time (secs)');
```
<a href="https://www.skills.network/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01"><img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DL0120ENedX/labs/Template%20for%20Instructional%20Hands-on%20Labs/images/IDSNlogo.png" width="400px" align="center"></a>
<h1 align="center"><font size="5">RESTRICTED BOLTZMANN MACHINES</font></h1>
<h3>Introduction</h3>
<b>Restricted Boltzmann Machine (RBM):</b> RBMs are shallow neural nets that learn to reconstruct data by themselves in an unsupervised fashion.
<h4>Why are RBMs important?</h4>
An RBM is a basic form of autoencoder. It can automatically extract <b>meaningful</b> features from a given input.
<h4>How does it work?</h4>
RBM is a 2-layer neural network. Simply put, the RBM takes the inputs and translates them into a set of binary values that represents them in the hidden layer. Then, these numbers can be translated back to reconstruct the inputs. Through several forward and backward passes, the RBM will be trained, and a trained RBM can reveal which features are the most important ones when detecting patterns.
<h4>What are the applications of an RBM?</h4>
RBM is useful for <a href='http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01'> Collaborative Filtering</a>, dimensionality reduction, classification, regression, feature learning, topic modeling and even <b>Deep Belief Networks</b>.
<h4>Is RBM a generative or Discriminative model?</h4>
RBM is a generative model. Let me explain it by first looking at the difference between discriminative and generative models:
<b>Discriminative:</b> Consider a classification problem where we want to learn to distinguish between Sedan cars (y = 1) and SUV cars (y = 0), based on some features of cars. Given a training set, an algorithm like logistic regression tries to find a straight line, or <i>decision boundary</i>, that separates the SUVs from the sedans.
<b>Generative:</b> looking at cars, we can build a model of what Sedan cars look like. Then, looking at SUVs, we can build a separate model of what SUV cars look like. Finally, to classify a new car, we can match the new car against the Sedan model, and match it against the SUV model, to see whether the new car looks more like the SUV or Sedan.
Generative Models specify a probability distribution over a dataset of input vectors. We can carry out both supervised and unsupervised tasks with generative models:
<ul>
<li>In an unsupervised task, we try to form a model for $P(x)$, where $P$ is the probability given $x$ as an input vector.</li>
<li>In the supervised task, we first form a model for $P(x|y)$, where $P$ is the probability of $x$ given $y$(the label for $x$). For example, if $y = 0$ indicates that a car is an SUV, and $y = 1$ indicates that a car is a sedan, then $p(x|y = 0)$ models the distribution of SUV features, and $p(x|y = 1)$ models the distribution of sedan features. If we manage to find $P(x|y)$ and $P(y)$, then we can use <b>Bayes rule</b> to estimate $P(y|x)$, because:
$$p(y|x) = \frac{p(x|y)p(y)}{p(x)}$$</li>
</ul>
Now the question is, can we build a generative model, and then use it to create synthetic data by directly sampling from the modeled probability distributions? Let's see.
<h2>Table of Contents</h2>
<ol>
<li><a href="https://#ref1">Initialization</a></li>
<li><a href="https://#ref2">RBM layers</a></li>
<li><a href="https://#ref3">What RBM can do after training?</a></li>
<li><a href="https://#ref4">How to train the model?</a></li>
<li><a href="https://#ref5">Learned features</a></li>
</ol>
<p></p>
</div>
<br>
<hr>
<a id="ref1"></a>
<h3>Initialization</h3>
First, we have to load the utility file which contains different utility functions that are not connected
in any way to the networks presented in the tutorials, but rather help in
processing the outputs into a more understandable way.
```
import urllib.request
with urllib.request.urlopen("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork/labs/Week4/data/utils.py") as url:
response = url.read()
target = open('utils.py', 'w')
target.write(response.decode('utf-8'))
target.close()
```
<h2>Installing TensorFlow </h2>
We will install TensorFlow version 2.2.0 and its required prerequisites. We will also install Pillow.
```
!pip install grpcio==1.24.3
!pip install tensorflow==2.2.0
!pip install pillow==8.1.0
```
<b>Notice:</b> This notebook has been created with TensorFlow version 2.2, and might not work with other versions. Therefore we check:
```
import tensorflow as tf
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">'+string+'</span>'))
if not tf.__version__ == '2.2.0':
printmd('<<<<<!!!!! ERROR !!!! please upgrade to TensorFlow 2.2.0, or restart your Kernel (Kernel->Restart & Clear Output)>>>>>')
```
Now, we load in all the packages that we use to create the net including the TensorFlow package:
```
import tensorflow as tf
import numpy as np
from PIL import Image
from utils import tile_raster_images
import matplotlib.pyplot as plt
%matplotlib inline
```
<hr>
<a id="ref2"></a>
<h3>RBM layers</h3>
An RBM has two layers. The first layer of the RBM is called the <b>visible</b> (or input) layer. Imagine that our toy example has only vectors with 7 values, so the visible layer must have $V=7$ input nodes.
The second layer is the <b>hidden</b> layer, which has $H$ neurons in our case. Each hidden node takes on values of either 0 or 1 (i.e., $h_i = 1$ or $h_i$ = 0), with a probability that is a logistic function of the inputs it receives from the other $V$ visible units, called for example, $p(h_i = 1)$. For our toy sample, we'll use 2 nodes in the hidden layer, so $H = 2$.
<center><img src="https://ibm.box.com/shared/static/eu26opvcefgls6vnwuo29uwp0nudmokh.png" alt="RBM Model" style="width: 400px;"></center>
Each node in the first layer also has a <b>bias</b>. We will denote the visible-layer bias as $v_{bias}$; it is a vector with one entry per each of the $V$ visible units.
The <b>bias</b> of the second layer is defined similarly as $h_{bias}$, with one entry per each of the $H$ hidden units.
```
v_bias = tf.Variable(tf.zeros([7]), tf.float32)
h_bias = tf.Variable(tf.zeros([2]), tf.float32)
```
We have to define weights between the input layer and hidden layer nodes. In the weight matrix, the number of rows is equal to the number of input nodes, and the number of columns is equal to the number of output nodes. We define a tensor $\mathbf{W}$ of shape (7, 2), where the number of visible neurons = 7 and the number of hidden neurons = 2.
```
W = tf.constant(np.random.normal(loc=0.0, scale=1.0, size=(7, 2)).astype(np.float32))
```
<hr>
<a id="ref3"></a>
<h3>What RBM can do after training?</h3>
Think of RBM as a model that has been trained on images from a dataset of many SUV and sedan cars. Also, imagine that the RBM network has only two hidden nodes, where one node encodes the weight and the other encodes the size.
In a sense, the different configurations represent different cars, where one is an SUV and the other is a sedan. During training, through many forward and backward passes, the RBM adjusts its weights to send a stronger signal to either the SUV node (0, 1) or the sedan node (1, 0) in the hidden layer, given the pixels of the images. Now, given an SUV in the hidden layer, which distribution of pixels should we expect? RBM can give you 2 things. First, it encodes your images in the hidden layer. Second, it gives you the probability of observing a case, given some hidden values.
<h3>The Inference Process</h3>
RBM has two phases:
<ul>
<li>Forward Pass</li>
<li>Backward Pass or Reconstruction</li>
</ul>
<b>Phase 1) Forward pass:</b>
Input one training sample (one image) $\mathbf{x}$ through all visible nodes, and pass it to all hidden nodes. Processing happens in each node in the hidden layer. This computation begins by making stochastic decisions about whether to transmit that input or not (i.e. to determine the state of each hidden unit). First, the probability vector is computed using the input feature vector $\mathbf{x}$, the weight matrix $\mathbf{W}$, and the bias term $h_{bias}$, as
$$p(h_j \mid \mathbf{x}) = \sigma\left( \sum_{i=1}^V W_{ij} x_i + h_{bias} \right),$$
where $\sigma(z) = (1+e^{-z})^{-1}$ is the logistic function.
So, what does $p({h_j})$ represent? It is the <b>probability distribution</b> of the hidden units. That is, the RBM uses the inputs $x_i$ to make predictions about hidden node activations. For example, imagine that the hidden node activation values are [0.51, 0.84] for the first training item. It tells you that the conditional probability for each hidden neuron for Phase 1 is:
$$p(h_{1} = 1 \mid \mathbf{v}) = 0.51$$
$$p(h_{2} = 1 \mid \mathbf{v}) = 0.84$$
As a result, for each row in the training set, a vector of probabilities is generated. In TensorFlow, this is a `tensor` with a shape of (1, 2).
We then turn unit $j$ on with probability $p(h_{j}|\mathbf{v})$, and turn it off with probability $1 - p(h_{j}|\mathbf{v})$, by generating a uniform random number vector $\mathbf{\xi}$ and comparing it to the activation probability as
<center>If $\xi_j < p(h_{j}|\mathbf{v})$, then $h_j=1$, else $h_j=0$.</center>
Therefore, the conditional probability of a configuration of $\mathbf{h}$ given $\mathbf{v}$ (for a training sample) is:
$$p(\mathbf{h} \mid \mathbf{v}) = \prod_{j=1}^H p(h_j \mid \mathbf{v}),$$
where $H$ is the number of hidden units.
Before we go further, let's look at a toy example for one case out of all inputs. Assume that we have a trained RBM, and a very simple input vector such as [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0].
Let's see what the output of forward pass would look like:
```
X = tf.constant([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]], tf.float32)
v_state = X
print ("Input: ", v_state)
h_bias = tf.constant([0.1, 0.1])
print ("hb: ", h_bias)
print ("w: ", W)
# Calculate the probabilities of turning the hidden units on:
h_prob = tf.nn.sigmoid(tf.matmul(v_state, W) + h_bias) #probabilities of the hidden units
print ("p(h|v): ", h_prob)
# Draw samples from the distribution:
h_state = tf.nn.relu(tf.sign(h_prob - tf.random.uniform(tf.shape(h_prob)))) #states
print ("h0 states:", h_state)
```
<b>Phase 2) Backward Pass (Reconstruction):</b>
The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers.
So, in the second phase (i.e. the reconstruction phase), the samples from the hidden layer (i.e. $\mathbf h$) become the input of the backward pass. The same weight matrix and the visible layer biases are passed through the sigmoid function. The reproduced output is a reconstruction, which is an approximation of the original input.
```
vb = tf.constant([0.1, 0.2, 0.1, 0.1, 0.1, 0.2, 0.1])
print ("b: ", vb)
v_prob = tf.nn.sigmoid(tf.matmul(h_state, tf.transpose(W)) + vb)
print ("p(vi∣h): ", v_prob)
v_state = tf.nn.relu(tf.sign(v_prob - tf.random.uniform(tf.shape(v_prob))))
print ("v probability states: ", v_state)
```
The RBM learns a probability distribution over the input, and then, after being trained, it can generate new samples from the learned probability distribution. As you know, a <b>probability distribution</b> is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.
The (conditional) probability distribution over the visible units $\mathbf v$ is given by
$$p(\mathbf{v} \mid \mathbf{h}) = \prod_{i=1}^V p(v_i \mid \mathbf{h}),$$
where
$$p(v_i \mid \mathbf{h}) = \sigma\left(\sum_{j=1}^H W_{ij} h_j + v_{bias} \right).$$
So, given the current state of the hidden units and weights, what is the probability of generating [1, 0, 0, 1, 0, 0, 0] in the reconstruction phase, based on the above <b>probability distribution</b> function?
```
inp = X
print("input X:" , inp.numpy())
print("probablity vector:" , v_prob[0].numpy())
v_probability = 1
for elm, p in zip(inp[0],v_prob[0]) :
if elm ==1:
v_probability *= p
else:
v_probability *= (1-p)
print("probability of generating X: " , v_probability.numpy())
```
How similar are the vectors $\mathbf{x}$ and $\mathbf{v}$? Of course, the reconstructed values most likely will not look anything like the input vector, because our network has not been trained yet. Our objective is to train the model in such a way that the input vector and the reconstructed vector become the same (or as close as possible). Therefore, the weights are adjusted based on how different the input values are from the ones that we just reconstructed.
<hr>
<h2>MNIST</h2>
We will be using the MNIST dataset to practice the usage of RBMs. The following cell loads the MNIST dataset.
```
#loading training and test data
mnist = tf.keras.datasets.mnist
(trX, trY), (teX, teY) = mnist.load_data()
# showing an example of the Flatten class and operation
from tensorflow.keras.layers import Flatten
flatten = Flatten(dtype='float32')
trX = flatten(trX/255.0)
# the labels (trY) are not used for unsupervised RBM training, so we leave them unchanged
```
Let's look at the dimensions of the images.
MNIST images have 784 pixels, so the visible layer must have 784 input nodes. For our case, we'll use 50 nodes in the hidden layer, so $H = 50$.
```
vb = tf.Variable(tf.zeros([784]), tf.float32)
hb = tf.Variable(tf.zeros([50]), tf.float32)
```
Let $\mathbf W$ be the Tensor of 784x50 (784 - number of visible neurons, 50 - number of hidden neurons) that represents weights between the neurons.
```
W = tf.Variable(tf.zeros([784,50]), tf.float32)
```
Let's define the visible layer:
```
v0_state = tf.Variable(tf.zeros([784]), tf.float32)
#testing to see if the matrix product works
tf.matmul( [v0_state], W)
```
Now, we can define the hidden layer:
```
#computing the hidden nodes probability vector and checking shape
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb) #probabilities of the hidden units
print("h0_state shape: " , tf.shape(h0_prob))
#defining a function to return only the generated hidden states
def hidden_layer(v0_state, W, hb):
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb) #probabilities of the hidden units
h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random.uniform(tf.shape(h0_prob)))) #sample_h_given_X
return h0_state
h0_state = hidden_layer(v0_state, W, hb)
print("first 15 hidden states: ", h0_state[0][0:15])
```
Now, we define the reconstruction part:
```
def reconstructed_output(h0_state, W, vb):
v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb)
v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random.uniform(tf.shape(v1_prob)))) #sample_v_given_h
return v1_state[0]
v1_state = reconstructed_output(h0_state, W, vb)
print("hidden state shape: ", h0_state.shape)
print("v0 state shape: ", v0_state.shape)
print("v1 state shape: ", v1_state.shape)
```
<h3>What is the objective function?</h3>
<b>Goal</b>: Maximize the likelihood of our data being drawn from the model's distribution.
<b>Calculate error:</b>
In each epoch, we compute the "error" as the mean of the squared differences between the input (step 1) and its reconstruction (step n),
i.e. the error shows how far the reconstruction is from the data.
<b>Note:</b> tf.reduce_mean computes the mean of elements across dimensions of a tensor.
```
def error(v0_state, v1_state):
return tf.reduce_mean(tf.square(v0_state - v1_state))
err = tf.reduce_mean(tf.square(v0_state - v1_state))
print("error" , err.numpy())
```
<a id="ref4"></a>
<h3>Training the Model</h3>
<b>Warning...</b> The following part is math-heavy, but you can skip it if you just want to run the cells in the next section.
As mentioned, we want to give a high probability to the input data we train on. So, in order to train an RBM, we have to maximize the product of the probabilities assigned to all rows $\mathbf{v}$ (images) in the training set $\mathbf{V}$ (a matrix, where each row is treated as a visible vector $\mathbf{v}$):
$$\arg \max_W \prod_{\mathbf{v}\in\mathbf{V}} p(\mathbf{v}),$$
which is equivalent to maximizing the expected sum of log probabilities,
$$\arg\max_W \; \mathbb{E} \left[ \sum_{\mathbf v\in \mathbf V}\log p(\mathbf v) \right].$$
So, we have to update the weights $W_{ij}$ to increase $p(\mathbf{v})$ for all $\mathbf{v}$ in our training data during training. To do so, we have to calculate the derivative
$$\frac{\partial \log p(\mathbf v)}{\partial W_{ij}}.$$
This cannot be computed easily with standard <b>stochastic gradient descent (SGD)</b>, so we use another approach, which has 2 steps:
<ol>
<li>Gibbs Sampling</li>
<li>Contrastive Divergence</li>
</ol>
<h3>Gibbs Sampling</h3>
<h4>Gibbs Sampling Step 1</h4>
Given an input vector $\mathbf{v}$, we are using $p(\mathbf{h}|\mathbf{v})$ to predict the hidden values $\mathbf{h}$.
$$p({h_j}|\mathbf v)= \sigma\left(\sum_{i=1}^V W_{ij} v_i + h_{bias} \right)$$
The samples are generated from this distribution by generating the uniform random variate vector $\mathbf{\xi} \sim U[0,1]$ of length $H$ and comparing to the computed probabilities as
<center>If $\xi_j < p(h_{j}|\mathbf{v})$, then $h_j=1$, else $h_j=0$.</center>
<h4>Gibbs Sampling Step 2</h4>
Then, knowing the hidden values, we use $p(\mathbf v| \mathbf h)$ for reconstructing new input values $\mathbf v$.
$$p({v_i}|\mathbf h)= \sigma\left(\sum_{j=1}^H W_{ij} h_j + v_{bias} \right)$$
The samples are generated from this distribution by generating a uniform random variate vector $\mathbf{\xi} \sim U[0,1]$ of length $V$ and comparing it to the computed probabilities as
<center>If $\xi_i < p(v_{i}|\mathbf{h})$, then $v_i=1$, else $v_i=0$.</center>
Let the vectors $\mathbf v_k$ and $\mathbf h_k$ denote the values at the $k$th iteration. In general, the $k$th state is generated as:
<b>Iteration</b> $k$:
$$\mathbf v_{k-1} \Rightarrow p(\mathbf h_{k-1}|\mathbf v_{k-1})\Rightarrow \mathbf h_{k-1}\Rightarrow p(\mathbf v_{k}|\mathbf h_{k-1})\Rightarrow \mathbf v_k$$
<h3>Contrastive Divergence (CD-k)</h3>
The update of the weight matrix is done during the Contrastive Divergence step.
The vectors $\mathbf v_{k-1}$ and $\mathbf v_k$ are used to calculate the hidden activations $\mathbf h_{k-1}$ and $\mathbf h_k$. The difference between the outer products of the visible vectors and the corresponding hidden activations gives the update matrix (data term minus reconstruction term, matching the training code below):
$$\Delta \mathbf W_k = \mathbf v_{k-1} \otimes \mathbf h_{k-1} - \mathbf v_k \otimes \mathbf h_k$$
Contrastive Divergence is thus a matrix of values that is computed and used to adjust the values of the $\mathbf W$ matrix. Changing $\mathbf W$ incrementally leads to training of the $\mathbf W$ values. Then, in each step (epoch), $\mathbf W$ is updated using the following rule:
$$\mathbf W_k = \mathbf W_{k-1} + \alpha \, \Delta \mathbf W_k$$
Reconstruction steps:
<ul>
<li> Get one data point from the data set, like <i>x</i>, and pass it through the following steps:</li>
<b>Iteration</b> $k=1$:
Sampling (starting with the input image)
$$\mathbf x = \mathbf v_0 \Rightarrow p(\mathbf h_0|\mathbf v_0)\Rightarrow \mathbf h_0 \Rightarrow p(\mathbf v_1|\mathbf h_0)\Rightarrow \mathbf v_1 \Rightarrow p(\mathbf h_1|\mathbf v_1)\Rightarrow \mathbf h_1$$
followed by the CD-k step
$$\Delta \mathbf W_1 = \mathbf v_0 \otimes \mathbf h_0 - \mathbf v_1 \otimes \mathbf h_1$$
$$\mathbf W_1 = \mathbf W_0 + \alpha \, \Delta \mathbf W_1$$
<li> $\mathbf v_1$ is the reconstruction of $\mathbf x$ (sent to the next iteration).</li>
<b>Iteration</b> $k=2$:
Sampling (starting with $\mathbf v_1$)
$$\mathbf v_1 \Rightarrow p(\mathbf h_1|\mathbf v_1)\Rightarrow \mathbf h_1\Rightarrow p(\mathbf v_2|\mathbf h_1)\Rightarrow \mathbf v_2 \Rightarrow p(\mathbf h_2|\mathbf v_2)\Rightarrow \mathbf h_2$$
followed by the CD-k step
$$\Delta \mathbf W_2 = \mathbf v_1 \otimes \mathbf h_1 - \mathbf v_2 \otimes \mathbf h_2$$
$$\mathbf W_2 = \mathbf W_1 + \alpha \, \Delta \mathbf W_2$$
<li> $\mathbf v_2$ is the reconstruction of $\mathbf v_1$ (sent to the next iteration).</li>
<b>Iteration</b> $k=K$:
Sampling (starting with $\mathbf v_{K-1}$)
$$\mathbf v_{K-1} \Rightarrow p(\mathbf h_{K-1}|\mathbf v_{K-1})\Rightarrow \mathbf h_{K-1}\Rightarrow p(\mathbf v_K|\mathbf h_{K-1})\Rightarrow \mathbf v_K \Rightarrow p(\mathbf h_K|\mathbf v_K)\Rightarrow \mathbf h_K$$
followed by the CD-k step
$$\Delta \mathbf W_K = \mathbf v_{K-1} \otimes \mathbf h_{K-1} - \mathbf v_K \otimes \mathbf h_K$$
$$\mathbf W_K = \mathbf W_{K-1} + \alpha \, \Delta \mathbf W_K$$
</ul>
<b>What is $\alpha$?</b>
Here, $\alpha$ is a small step size, also known as the "learning rate".
$K$ is adjustable, and good performance can be achieved with $K=1$, so that we just take one set of sampling steps per image.
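To make the link between the formulas and the code explicit, here is a compact sketch that collects one full CD-1 update into a single function. It only reuses the `hidden_layer` and `reconstructed_output` functions defined above; the training loop further below performs exactly these operations inline.
```
def cd1_update(v0, W, vb, hb, alpha):
    # positive phase: sample hidden states from the data
    h0 = hidden_layer(v0, W, hb)
    # negative phase: reconstruct the visible units and resample the hidden units
    v1 = reconstructed_output(h0, W, vb)
    h1 = hidden_layer(v1, W, hb)
    # CD-1 update: data term minus reconstruction term
    delta_W = tf.matmul(tf.transpose([v0]), h0) - tf.matmul(tf.transpose([v1]), h1)
    W = W + alpha * delta_W
    vb = vb + alpha * tf.reduce_mean(v0 - v1, 0)
    hb = hb + alpha * tf.reduce_mean(h0 - h1, 0)
    return W, vb, hb, v1
```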
```
h1_prob = tf.nn.sigmoid(tf.matmul([v1_state], W) + hb)
h1_state = tf.nn.relu(tf.sign(h1_prob - tf.random.uniform(tf.shape(h1_prob)))) #sample_h_given_X
```
Let's look at the error of the first run:
```
print("error: ", error(v0_state, v1_state))
#Parameters
alpha = 0.01
epochs = 1
batchsize = 200
weights = []
errors = []
batch_number = 0
K = 1
#creating datasets
train_ds = \
tf.data.Dataset.from_tensor_slices((trX, trY)).batch(batchsize)
for epoch in range(epochs):
for batch_x, batch_y in train_ds:
batch_number += 1
for i_sample in range(batchsize):
for k in range(K):
v0_state = batch_x[i_sample]
h0_state = hidden_layer(v0_state, W, hb)
v1_state = reconstructed_output(h0_state, W, vb)
h1_state = hidden_layer(v1_state, W, hb)
delta_W = tf.matmul(tf.transpose([v0_state]), h0_state) - tf.matmul(tf.transpose([v1_state]), h1_state)
W = W + alpha * delta_W
vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0)
hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0)
v0_state = v1_state
if i_sample == batchsize-1:
err = error(batch_x[i_sample], v1_state)
errors.append(err)
weights.append(W)
print ( 'Epoch: %d' % epoch,
"batch #: %i " % batch_number, "of %i" % int(60e3/batchsize),
"sample #: %i" % i_sample,
'reconstruction error: %f' % err)
```
Let's take a look at the errors at the end of each batch:
```
plt.plot(errors)
plt.xlabel("Batch Number")
plt.ylabel("Error")
plt.show()
```
What is the final weight matrix $W$ after training?
```
print(W.numpy()) # a weight matrix of shape (784, 50)
```
<a id="ref5"></a>
<h3>Learned features</h3>
We can take each hidden unit and visualize the connections between that hidden unit and each element in the input vector. In our case, we have 50 hidden units. Let's visualize them.
Let's plot the current weights: <b>tile_raster_images</b> helps in generating an easy-to-grasp image from a set of samples or weights. It transforms the transposed weight matrix (with one flattened image per row of size 784) into an array (of size $28\times28$) in which images are reshaped and laid out like tiles on a floor.
```
tile_raster_images(X=W.numpy().T, img_shape=(28, 28), tile_shape=(5, 10), tile_spacing=(1, 1))
import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline
image = Image.fromarray(tile_raster_images(X=W.numpy().T, img_shape=(28, 28) ,tile_shape=(5, 10), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
```
Each tile in the above visualization corresponds to a vector of connections between a hidden unit and the visible layer's units.
Let's look at one of the learned weight vectors, corresponding to one of the hidden units. In this particular square, the gray color represents weight = 0, and the whiter it is, the more positive the weights are (closer to 1). Conversely, the darker the pixels are, the more negative the weights. The positive pixels increase the probability of activating the hidden unit (after multiplying by the input/visible pixels), and negative pixels decrease the probability of the hidden unit being 1 (activated). Why is this important? It shows that this specific square (hidden unit) can detect a feature (e.g. a "/" shape) and whether it exists in the input.
```
from PIL import Image
image = Image.fromarray(tile_raster_images(X =W.numpy().T[10:11], img_shape=(28, 28),tile_shape=(1, 1), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (4.0, 4.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
```
Let's look at the reconstruction of an image now. Imagine that we have a corrupted image of the digit 3. Let's see if our trained network can fix it.
First we plot the image:
```
!wget -O destructed3.jpg https://ibm.box.com/shared/static/vvm1b63uvuxq88vbw9znpwu5ol380mco.jpg
img = Image.open('destructed3.jpg')
img
```
Now let's pass this image through the neural net:
```
# convert the image to a 1d numpy array
sample_case = np.array(img.convert('I').resize((28,28))).ravel().reshape((1, -1))/255.0
sample_case = tf.cast(sample_case, dtype=tf.float32)
```
Feed the sample case into the network and reconstruct the output:
```
hh0_p = tf.nn.sigmoid(tf.matmul(sample_case, W) + hb)
hh0_s = tf.round(hh0_p)
print("Probability nodes in hidden layer:" ,hh0_p)
print("activated nodes in hidden layer:" ,hh0_s)
# reconstruct
vv1_p = tf.nn.sigmoid(tf.matmul(hh0_s, tf.transpose(W)) + vb)
print(vv1_p)
```
Here we plot the reconstructed image:
```
img = Image.fromarray(tile_raster_images(X=vv1_p.numpy(), img_shape=(28, 28),tile_shape=(1, 1), tile_spacing=(1, 1)))
plt.rcParams['figure.figsize'] = (4.0, 4.0)
imgplot = plt.imshow(img)
imgplot.set_cmap('gray')
```
<hr>
## Want to learn more?
Also, you can use **Watson Studio** to run these notebooks faster with bigger datasets. **Watson Studio** is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, **Watson Studio** enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of **Watson Studio** users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX). This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies.
### Thanks for completing this lesson!
Notebook created by: <a href = "https://ca.linkedin.com/in/saeedaghabozorgi?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01">Saeed Aghabozorgi</a>
Updated to TF 2.X by <a href="https://ca.linkedin.com/in/nilmeier?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01"> Jerome Nilmeier</a><br />
<hr>
Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01).
# Burgers Optimization with a Differentiable Physics Gradient
To illustrate the process of computing gradients in a _differentiable physics_ (DP) setting, we target the same inverse problem (the reconstruction task) used for the PINN example in {doc}`physicalloss-code`. The choice of DP as a method has some immediate implications: we start with a discretized PDE, and the evolution of the system is now fully determined by the resulting numerical solver. Hence, the only real unknown is the initial state. We will still need to re-compute all the states between the initial and target state many times, just now we won't need an NN for this step. Instead, we can rely on our discretized model.
Also, as we choose an initial discretization for the DP approach, the unknown initial state consists of the sampling points of the involved physical fields, and we can simply represent these unknowns as floating point variables. Hence, even for the initial state we do not need to set up an NN. Thus, our Burgers reconstruction problem reduces to a gradient-based optimization without any NN when solving it with DP. Nonetheless, it's a very good starting point to illustrate the process.
First, we'll set up our discretized simulation. Here we can employ phiflow, as shown in the overview section on _Burgers forward simulations_.
[[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/diffphys-code-burgers.ipynb)
## Initialization
phiflow directly gives us a sequence of differentiable operations, provided that we don't use the _numpy_ backend.
The important step here is to include `phi.tf.flow` instead of `phi.flow` (for _pytorch_ you could use `phi.torch.flow`).
So, as a first step, let's set up some constants, and initialize a `velocity` field with zeros, and our constraint at $t=0.5$ (step 16), now as a `CenteredGrid` in phiflow. Both are using periodic boundary conditions (via `extrapolation.PERIODIC`) and a spatial discretization of $\Delta x = 1/128$.
```
#!pip install --upgrade --quiet phiflow
from phi.tf.flow import *
N = 128
DX = 2/N
STEPS = 32
DT = 1/STEPS
NU = 0.01/(N*np.pi)
# allocate velocity grid
velocity = CenteredGrid(0, extrapolation.PERIODIC, x=N, bounds=Box[-1:1])
# and a grid with the reference solution
REFERENCE_DATA = math.tensor([0.008612174447657694, 0.02584669669548606, 0.043136357266407785, 0.060491074685516746, 0.07793926183951633, 0.0954779141740818, 0.11311894389663882, 0.1308497114054023, 0.14867023658641343, 0.1665634396808965, 0.18452263429574314, 0.20253084411376132, 0.22057828799835133, 0.23865132431365316, 0.25673879161339097, 0.27483167307082423, 0.2929182325574904, 0.3109944766354339, 0.3290477753208284, 0.34707880794585116, 0.36507311960102307, 0.38303584302507954, 0.40094962955534186, 0.4188235294008765, 0.4366357052408043, 0.45439856841363885, 0.4720845505219581, 0.4897081943759776, 0.5072391070000235, 0.5247011051514834, 0.542067187709797, 0.5593576751669057, 0.5765465453632126, 0.5936507311857876, 0.6106452944663003, 0.6275435911624945, 0.6443221318186165, 0.6609900633731869, 0.67752574922899, 0.6939334022562877, 0.7101938106059631, 0.7263049537163667, 0.7422506131457406, 0.7580207366534812, 0.7736033721649875, 0.7889776974379873, 0.8041371279965555, 0.8190465276590387, 0.8337064887158392, 0.8480617965162781, 0.8621229412131242, 0.8758057344502199, 0.8891341984763013, 0.9019806505391214, 0.9143881632159129, 0.9261597966464793, 0.9373647624856912, 0.9476871303793314, 0.9572273019669029, 0.9654367940878237, 0.9724097482283165, 0.9767381835635638, 0.9669484658390122, 0.659083299684951, -0.659083180712816, -0.9669485121167052, -0.9767382069792288, -0.9724097635533602, -0.9654367970450167, -0.9572273263645859, -0.9476871280825523, -0.9373647681120841, -0.9261598056102645, -0.9143881718456056, -0.9019807055316369, -0.8891341634240081, -0.8758057205293912, -0.8621229450911845, -0.8480618138204272, -0.833706571569058, -0.8190466131476127, -0.8041372124868691, -0.7889777195422356, -0.7736033858767385, -0.758020740007683, -0.7422507481169578, -0.7263049162371344, -0.7101938950789042, -0.6939334061553678, -0.677525822052029, -0.6609901538934517, -0.6443222327338847, -0.6275436932970322, -0.6106454472814152, -0.5936507836778451, -0.5765466491708988, -0.5593578078967361, -0.5420672759411125, -0.5247011730988912, -0.5072391580614087, -0.4897082914472909, -0.47208460952428394, -0.4543985995006753, -0.4366355580500639, -0.41882350871539187, -0.40094955631843376, -0.38303594105786365, -0.36507302109186685, -0.3470786936847069, -0.3290476440540586, -0.31099441589505206, -0.2929180880304103, -0.27483158663081614, -0.2567388003912687, -0.2386513127155433, -0.22057831776499126, -0.20253089403524566, -0.18452269630486776, -0.1665634500729787, -0.14867027528284874, -0.13084990929476334, -0.1131191325854089, -0.09547794429803691, -0.07793928430794522, -0.06049114408297565, -0.0431364527809777, -0.025846763281087953, -0.00861212501518312] , math.spatial('x'))
SOLUTION_T16 = CenteredGrid( REFERENCE_DATA, extrapolation.PERIODIC, x=N, bounds=Box[-1:1])
```
We can verify that the fields of our simulation are now backed by TensorFlow.
```
type(velocity.values.native())
```
## Gradients
The `record_gradients` function of phiflow triggers the generation of a gradient tape to compute gradients of a simulation via `math.gradients(loss, values)`.
To use it for the Burgers case we need to specify a loss function: we want the solution at $t=0.5$ to match the reference data. Thus we simply compute an $L^2$ difference between step number 16 and our constraint array as `loss`. Afterwards, we evaluate the gradient of the initial velocity state `velocity` with respect to this loss.
```
velocities = [velocity]
with math.record_gradients(velocity.values):
for time_step in range(STEPS):
v1 = diffuse.explicit(1.0*velocities[-1], NU, DT, substeps=1)
v2 = advect.semi_lagrangian(v1, v1, DT)
velocities.append(v2)
loss = field.l2_loss(velocities[16] - SOLUTION_T16)*2./N # MSE
grad = math.gradients(loss, velocity.values)
print('Loss: %f' % (loss))
```
Because we're only constraining time step 16, we could actually omit steps 17 to 31 in this setup. They don't have any degrees of freedom and are not constrained in any way. However, for fairness regarding a comparison with the previous PINN case, we include them.
Note that we've done a lot of calculations here: first the 32 steps of our simulation, and then another 16 steps backwards from the loss. They were recorded by the gradient tape, and used to backpropagate the loss to the initial state of the simulation.
Not surprisingly, because we're starting from zero, there's also a significant initial error of ca. 0.38 for the 16th simulation step.
So what do we get as a gradient here? It has the same dimensions as the velocity, and we can easily visualize it:
Starting from the zero state for `velocity` (shown in blue), the first gradient is shown as a green line below. If you compare it with the solution it points in the opposite direction, as expected. The solution is much larger in magnitude, so we omit it here (see the next graph).
```
import pylab as plt
fig = plt.figure().gca()
pltx = np.linspace(-1,1,N)
# first gradient
fig.plot(pltx, grad.numpy('x') , lw=2, color='green', label="Gradient")
fig.plot(pltx, velocity.values.numpy('x'), lw=2, color='mediumblue', label="u at t=0")
plt.xlabel('x'); plt.ylabel('u'); plt.legend();
# some (optional) other fields to plot:
#fig.plot(pltx, (velocities[16]).values.numpy('x') , lw=2, color='cyan', label="u at t=0.5")
#fig.plot(pltx, (SOLUTION_T16).values.numpy('x') , lw=2, color='red', label="solution at t=0.5")
#fig.plot(pltx, (velocities[16] - SOLUTION_T16).values.numpy('x') , lw=2, color='blue', label="difference at t=0.5")
```
This gives us a "search direction" for each velocity variable. Based on a linear approximation, the gradient tells us how to change each of them to increase the loss function (gradients _always_ point "upwards"). Thus, we can use the gradient to run an optimization and find an initial state `velocity` that minimizes our loss.
## Optimization
Equipped with the gradient we can run a gradient descent optimization. Below, we're using a learning rate of `LR=5`, and we're re-evaluating the loss for the updated state to track convergence.
In the following code block, we're additionally saving the gradients in a list called `grads`, such that we can visualize them later on. For a regular optimization we could of course discard the gradient after performing an update of the velocity.
```
LR = 5.
grads=[]
for optim_step in range(5):
velocities = [velocity]
with math.record_gradients(velocity.values):
for time_step in range(STEPS):
v1 = diffuse.explicit(1.0*velocities[-1], NU, DT)
v2 = advect.semi_lagrangian(v1, v1, DT)
velocities.append(v2)
loss = field.l2_loss(velocities[16] - SOLUTION_T16)*2./N # MSE
print('Optimization step %d, loss: %f' % (optim_step,loss))
grads.append( math.gradients(loss, velocity.values) )
velocity = velocity - LR * grads[-1]
```
Now we can check how well the 16th state of the simulation actually matches the target after the 5 update steps. This is what the loss measures, after all. The next graph shows the constraints (i.e. the solution we'd like to obtain) in green, and the reconstructed state after the initial state `velocity` (which we have updated five times via the gradient by now) was advanced 16 times by the solver.
```
fig = plt.figure().gca()
# target constraint at t=0.5
fig.plot(pltx, SOLUTION_T16.values.numpy('x'), lw=2, color='forestgreen', label="Reference")
# optimized state of our simulation after 16 steps
fig.plot(pltx, velocities[16].values.numpy('x'), lw=2, color='mediumblue', label="Simulated velocity")
plt.xlabel('x'); plt.ylabel('u'); plt.legend(); plt.title("After 5 Optimization Steps at t=0.5");
```
This seems to be going in the right direction! It's definitely not perfect, but we've only computed 5 GD update steps so far. The two peaks with a positive velocity on the left side of the shock and the negative peak on the right side are starting to show.
This is a good indicator that the backpropagation of gradients through all of our 16 simulated steps is behaving correctly, and that it's driving the solution in the right direction. The graph above only hints at how powerful the setup is: the gradient that we obtain from each of the simulation steps (and each operation within them) can easily be chained together into more complex sequences. In the example above, we're backpropagating through all 16 steps of the simulation, and we could easily enlarge this "look-ahead" of the optimization with minor changes to the code.
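As a rough sketch of what such an extension could look like: if we had a second reference state, say a hypothetical `SOLUTION_T32` for the final time step (it does not exist in this notebook), we could simply add another term to the loss, and phiflow would backpropagate through all 32 steps without further changes.
```
# hypothetical sketch: SOLUTION_T32 is an assumed second reference state
velocities = [velocity]
with math.record_gradients(velocity.values):
    for time_step in range(STEPS):
        v1 = diffuse.explicit(1.0*velocities[-1], NU, DT)
        v2 = advect.semi_lagrangian(v1, v1, DT)
        velocities.append(v2)
    # two constraints at different points in time, same unknown initial state
    loss = ( field.l2_loss(velocities[16] - SOLUTION_T16)
           + field.l2_loss(velocities[32] - SOLUTION_T32) ) * 2./N
    grad = math.gradients(loss, velocity.values)
```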
## More optimization steps
Before moving on to more complex physics simulations, or involving NNs, let's finish the optimization task at hand, and run more steps to get a better solution.
```
import time
start = time.time()
for optim_step in range(45):
velocities = [velocity]
with math.record_gradients(velocity.values):
for time_step in range(STEPS):
v1 = diffuse.explicit(1.0*velocities[-1], NU, DT)
v2 = advect.semi_lagrangian(v1, v1, DT)
velocities.append(v2)
loss = field.l2_loss(velocities[16] - SOLUTION_T16)*2./N # MSE
if optim_step%5==0:
print('Optimization step %d, loss: %f' % (optim_step,loss))
grad = math.gradients(loss, velocity.values)
velocity = velocity - LR * grad
end = time.time()
print("Runtime {:.2f}s".format(end-start))
```
Thinking back to the PINN version from {doc}`physicalloss-code`, we have a much lower error here after only 50 steps (by ca. an order of magnitude), and the runtime is also lower (roughly by a factor of 1.5 to 2). This behavior stems from the fact that we only optimize the discretized initial state here, and the gradients are provided directly by the numerical solver rather than by a neural network that first needs to be trained.
Let's plot again how well our solution at $t=0.5$ (blue) matches the constraints (green) now:
```
fig = plt.figure().gca()
fig.plot(pltx, SOLUTION_T16.values.numpy('x'), lw=2, color='forestgreen', label="Reference")
fig.plot(pltx, velocities[16].values.numpy('x'), lw=2, color='mediumblue', label="Simulated velocity")
plt.xlabel('x'); plt.ylabel('u'); plt.legend(); plt.title("After 50 Optimization Steps at t=0.5");
```
Not bad. But how well is the initial state recovered via backpropagation through the 16 simulation steps? This is what we're changing, and because it's only indirectly constrained via the observation later in time, there is more room to deviate from a desired or expected solution.
This is shown in the next plot:
```
fig = plt.figure().gca()
pltx = np.linspace(-1,1,N)
# ground truth state at time=0
INITIAL_GT = np.asarray( [-np.sin(np.pi * x) for x in np.linspace(-1+DX/2,1-DX/2,N)] ) # 1D numpy array
fig.plot(pltx, INITIAL_GT.flatten() , lw=2, color='forestgreen', label="Ground truth initial state") # ground truth initial state of sim
fig.plot(pltx, velocity.values.numpy('x'), lw=2, color='mediumblue', label="Optimized initial state") # manual
plt.xlabel('x'); plt.ylabel('u'); plt.legend(); plt.title("Initial State After 50 Optimization Steps");
```
Naturally, this is a tougher task: the optimization receives direct feedback only about what the state at $t=0.5$ should look like, but due to the non-linear model equation, we typically have a large number of solutions that exactly or very nearly satisfy the constraints. Hence, our minimizer does not necessarily find the exact state we started from (we can observe some numerical oscillations from the diffusion operator here with the default settings). However, the solution is still quite close in this Burgers scenario.
Before measuring the overall error of the reconstruction, let's visualize the full evolution of our system over time as this also yields the solution in the form of a numpy array that we can compare to the other versions:
```
import pylab
def show_state(a):
a=np.expand_dims(a, axis=2)
for i in range(4):
a = np.concatenate( [a,a] , axis=2)
a = np.reshape( a, [a.shape[0],a.shape[1]*a.shape[2]] )
fig, axes = pylab.subplots(1, 1, figsize=(16, 5))
im = axes.imshow(a, origin='upper', cmap='inferno')
pylab.colorbar(im)
# get numpy versions of all states
vels = [ x.values.numpy('x,vector') for x in velocities]
# concatenate along vector/features dimension
vels = np.concatenate(vels, axis=-1)
# save for comparison with other methods
import os; os.makedirs("./temp",exist_ok=True)
np.savez_compressed("./temp/burgers-diffphys-solution.npz", np.reshape(vels,[N,STEPS+1])) # remove batch & channel dimension
show_state(vels)
```
## Physics-informed vs. differentiable physics reconstruction
Now we have both versions, the one with the PINN, and the DP version, so let's compare both reconstructions in more detail. (Note: The following cells expect that the Burgers-forward and PINN notebooks were executed in the same environment beforehand.)
Let's first look at the solutions side by side. The code below generates an image with 3 versions, from top to bottom: the "ground truth" (GT) solution as given by the regular forward simulation, in the middle the PINN reconstruction, and at the bottom the differentiable physics version.
```
# note, this requires previous runs of the forward-sim & PINN notebooks in the same environment
sol_gt=npfile=np.load("./temp/burgers-groundtruth-solution.npz")["arr_0"]
sol_pi=npfile=np.load("./temp/burgers-pinn-solution.npz")["arr_0"]
sol_dp=npfile=np.load("./temp/burgers-diffphys-solution.npz")["arr_0"]
divider = np.ones([10,33])*-1. # we'll sneak in a block of -1s to show a black divider in the image
sbs = np.concatenate( [sol_gt, divider, sol_pi, divider, sol_dp], axis=0)
print("\nSolutions Ground Truth (top), PINN (middle) , DiffPhys (bottom):")
show_state(np.reshape(sbs,[N*3+20,33,1]))
```
It's quite clearly visible here that the PINN solution (in the middle) recovers the overall shape of the solution, hence the temporal constraints are at least partially fulfilled. However, it doesn't manage to capture the amplitudes of the GT solution very well.
The reconstruction from the optimization with a differentiable solver (at the bottom) is much closer to the ground truth thanks to an improved flow of gradients over the whole course of the sequence. In addition, it can leverage the grid-based discretization for both forward as well as backward passes, and in this way provide a more accurate signal to the unknown initial state. It is nonetheless visible that the reconstruction lacks certain "sharper" features of the GT version, e.g., visible in the bottom left corner of the solution image.
Let's quantify these errors over the whole sequence:
```
err_pi = np.sum( np.abs(sol_pi-sol_gt)) / (STEPS*N)
err_dp = np.sum( np.abs(sol_dp-sol_gt)) / (STEPS*N)
print("MAE PINN: {:7.5f} \nMAE DP: {:7.5f}".format(err_pi,err_dp))
print("\nError GT to PINN (top) , DiffPhys (bottom):")
show_state(np.reshape( np.concatenate([sol_pi-sol_gt, divider, sol_dp-sol_gt],axis=0) ,[N*2+10,33,1]))
```
That's a pretty clear result: the PINN error is almost 4 times higher than the one from the Differentiable Physics (DP) reconstruction.
This difference also shows clearly in the jointly visualized image at the bottom: the magnitudes of the errors of the DP reconstruction are much closer to zero, as indicated by the purple color above.
A simple direct reconstruction problem like this one is always a good initial test for a DP solver. It can be tested independently before moving on to more complex setups, e.g., coupling it with an NN. If the direct optimization does not converge, there's probably still something fundamentally wrong, and there's no point involving an NN.
Now we have a first example to show similarities and differences of the two approaches. In the next section, we'll present a discussion of the findings so far, before moving to more complex cases in the following chapter.
## Next steps
As with the PINN version, there's a variety of things that can be improved and experimented with using the code above:
* You can try to adjust the training parameters to further improve the reconstruction.
* As for the PINN case, you can activate a different optimizer, and observe the changing (not necessarily improved) convergence behavior; a small momentum-based sketch follows after this list.
* Vary the number of steps, or the resolution of the simulation and reconstruction.
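For the optimizer variation, here is a minimal, untuned sketch of a momentum-based update on top of the same gradient computation. The learning rate `LR_M` and momentum factor `BETA` are arbitrary choices; the plain gradient descent version above is the one used for the results in this notebook.
```
# a minimal, untuned sketch: gradient descent with momentum instead of plain GD
velocity_m = CenteredGrid(0, extrapolation.PERIODIC, x=N, bounds=Box[-1:1])
momentum, BETA, LR_M = 0., 0.9, 2.

for optim_step in range(50):
    velocities = [velocity_m]
    with math.record_gradients(velocity_m.values):
        for time_step in range(STEPS):
            v1 = diffuse.explicit(1.0*velocities[-1], NU, DT)
            v2 = advect.semi_lagrangian(v1, v1, DT)
            velocities.append(v2)
        loss = field.l2_loss(velocities[16] - SOLUTION_T16)*2./N
        grad = math.gradients(loss, velocity_m.values)
    # exponential moving average of the gradients instead of the raw gradient
    momentum = BETA * momentum + (1. - BETA) * grad
    velocity_m = velocity_m - LR_M * momentum
```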
```
# Automatically reload imported modules that are changed outside this notebook
%load_ext autoreload
%autoreload 2
# More pixels in figures
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams["figure.dpi"] = 200
# Init PRNG with fixed seed for reproducibility
import numpy as np
np_rng = np.random.default_rng(1)
import tensorflow as tf
tf.random.set_seed(np_rng.integers(0, tf.int64.max))
```
# Common Voice spoken language identification with a neural network
**2020-11-08**
This example is a thorough, but simple walk-through on how to do everything from loading mp3-files containing speech to preprocessing and transforming the speech data into something we can feed to a neural network classifier.
Deep learning based speech analysis is a vast research topic and there are countless techniques that could possibly be applied to improve the results of this example.
This example tries to avoid going into too much detail into these techniques and instead focuses on getting an end-to-end classification pipeline up and running with a small dataset.
## Data
This example uses open speech data downloaded from the [Mozilla Common Voice](https://commonvoice.mozilla.org/en/datasets) project.
See the readme file for downloading the data.
In addition to the space needed for the downloaded data, you will need at least 10 GiB of free disk space for caching (can be disabled).
```
import urllib.parse
from IPython.display import display, Markdown
languages = """
et
mn
ta
tr
""".split()
languages = sorted(l.strip() for l in languages)
display(Markdown("### Languages"))
display(Markdown('\n'.join("* `{}`".format(l) for l in languages)))
bcp47_validator_url = 'https://schneegans.de/lv/?tags='
display(Markdown("See [this tool]({}) for a description of the BCP-47 language codes."
.format(bcp47_validator_url + urllib.parse.quote('\n'.join(languages)))))
```
## Loading the metadata
We start by preprocessing the Common Voice metadata files.
Update `datadir` and `workdir` to match your setup.
All output will be written to `workdir`.
```
import os
workdir = "/data/exp/cv4"
datadir = "/mnt/data/speech/common-voice/downloads/2020/cv-corpus"
print("work dir:", workdir)
print("data source dir:", datadir)
os.makedirs(workdir, exist_ok=True)
assert os.path.isdir(datadir), datadir + " does not exist"
```
Common Voice metadata is distributed as `tsv` files and all audio samples are mp3-files under `clips`.
```
dirs = sorted((f for f in os.scandir(datadir) if f.is_dir()), key=lambda f: f.name)
print(datadir)
for d in dirs:
if d.name in languages:
print(' ', d.name)
for f in os.scandir(d):
print(' ', f.name)
missing_languages = set(languages) - set(d.name for d in dirs)
assert missing_languages == set(), "missing languages: {}".format(missing_languages)
```
There's plenty of metadata, but it seems that the train-dev-test split has been predefined, so let's use that.
[pandas](https://pandas.pydata.org/pandas-docs/stable/index.html) makes it easy to read, filter, and manipulate metadata in tables.
Let's try to preprocess all metadata here so we don't have to worry about it later.
```
import pandas as pd
from IPython.display import display, Markdown
# Lexicographic order of labels as a fixed index target to label mapping
target2lang = tuple(sorted(languages))
lang2target = {lang: target for target, lang in enumerate(target2lang)}
print("lang2target:", lang2target)
print("target2lang:", target2lang)
def expand_metadata(row):
"""
Update dataframe row by generating a unique utterance id,
expanding the absolute path to the mp3 file,
and adding an integer target for the label.
"""
row.id = "{:s}_{:s}".format(
row.path.split(".mp3", 1)[0].split("common_voice_", 1)[1],
row.split)
row.path = os.path.join(datadir, row.lang, "clips", row.path)
row.target = lang2target[row.lang]
return row
def tsv_to_lang_dataframe(lang, split):
"""
Given a language and dataset split (train, dev, test),
load the Common Voice metadata tsv-file from disk into a pandas.DataFrame.
Preprocess all rows by dropping unneeded columns and adding new metadata.
"""
df = pd.read_csv(
os.path.join(datadir, lang, split + ".tsv"),
sep='\t',
# We only need these columns from the metadata
usecols=("client_id", "path", "sentence"))
# Add language label as column
df.insert(len(df.columns), "lang", lang)
# Add split name to every row for easier filtering
df.insert(len(df.columns), "split", split)
# Add placeholders for integer targets and utterance ids generated row-wise
df.insert(len(df.columns), "target", -1)
df.insert(len(df.columns), "id", "")
# Create new metadata columns
df = df.transform(expand_metadata, axis=1)
return df
split_names = ("train", "dev", "test")
# Concatenate metadata for all 4 languages into a single table for each split
splits = [pd.concat([tsv_to_lang_dataframe(lang, split) for lang in target2lang])
for split in split_names]
# Concatenate split metadata into a single table, indexed by utterance ids
meta = (pd.concat(splits)
.set_index("id", drop=True, verify_integrity=True)
.sort_index())
del splits
for split in split_names:
display(Markdown("### " + split))
display(meta[meta["split"]==split])
```
### Checking that all splits are disjoint by speaker
To ensure our neural network will learn what language is being spoken and not who is speaking, we want to test it on data that does not have any voices present in the training data.
The `client_id` should correspond to a unique, pseudonymized identifier for every speaker.
Let's check that all splits are disjoint by speaker id.
```
def assert_splits_disjoint_by_speaker(meta):
split2spk = {split: set(meta[meta["split"]==split].client_id.to_numpy())
for split in split_names}
for split, spk in split2spk.items():
print("split {} has {} speakers".format(split, len(spk)))
print()
print("asserting all are disjoint")
assert split2spk["train"] & split2spk["test"] == set(), "train and test, mutual speakers"
assert split2spk["train"] & split2spk["dev"] == set(), "train and dev, mutual speakers"
assert split2spk["dev"] & split2spk["test"] == set(), "dev and test, mutual speakers"
print("ok")
assert_splits_disjoint_by_speaker(meta)
```
We can see that none of the speakers are in two or more dataset splits.
We also see that the test set has a lot of unique speakers who are not in the training set.
This is good because we want to test that our neural network classifier knows how to classify input from unknown speakers.
### Checking that all audio files exist
```
for uttid, row in meta.iterrows():
assert os.path.exists(row["path"]), row["path"] + " does not exist"
print("ok")
```
## Balancing the language distribution
Let's see how many samples we have per language.
```
import seaborn as sns
sns.set(rc={'figure.figsize': (8, 6)})
ax = sns.countplot(
x="split",
order=split_names,
hue="lang",
hue_order=target2lang,
data=meta)
ax.set_title("Total amount of audio samples")
plt.show()
```
We can see that the amounts of samples with Mongolian, Tamil, and Turkish speech are quite balanced, but we have a significantly larger amount of Estonian speech.
More data is of course always better, but if there is too much of one label compared to the others, our neural network might overfit on this label.
But these are only the counts of audio files; how much speech do we have in total per language?
We need to read every file to get a reliable answer.
See also [SoX](http://sox.sourceforge.net/Main/HomePage) for a good command line tool.
```
import miniaudio
meta["duration"] = np.array([
miniaudio.mp3_get_file_info(path).duration for path in meta.path], np.float32)
meta
def plot_duration_distribution(data):
sns.set(rc={'figure.figsize': (8, 6)})
ax = sns.boxplot(
x="split",
order=split_names,
y="duration",
hue="lang",
hue_order=target2lang,
data=data)
ax.set_title("Median audio file duration in seconds")
plt.show()
ax = sns.barplot(
x="split",
order=split_names,
y="duration",
hue="lang",
hue_order=target2lang,
data=data,
ci=None,
estimator=np.sum)
ax.set_title("Total amount of audio in seconds")
plt.show()
plot_duration_distribution(meta)
```
The median length of Estonian samples is approx. 2.5 seconds greater compared to Turkish samples, which have the shortest median length.
We can also see that the total amount of Estonian speech is much larger compared to other languages in our datasets.
Notice also the significant amount of outliers with long durations in the Tamil and Turkish datasets.
Let's do simple random oversampling for the training split using this approach:
1. Select the target language according to maximum total amount of speech in seconds (Estonian).
2. Compute differences in total durations between the target language and the three other languages.
3. Compute median signal length by language.
4. Compute sample sizes by dividing the duration deltas with median signal lengths, separately for each language.
5. Draw samples with replacement from the metadata separately for each language.
6. Merge samples with rest of the metadata and verify there are no duplicate ids.
```
def random_oversampling(meta):
groupby_lang = meta[["lang", "duration"]].groupby("lang")
total_dur = groupby_lang.sum()
target_lang = total_dur.idxmax()[0]
print("target lang:", target_lang)
print("total durations:")
display(total_dur)
total_dur_delta = total_dur.loc[target_lang] - total_dur
print("total duration delta to target lang:")
display(total_dur_delta)
median_dur = groupby_lang.median()
print("median durations:")
display(median_dur)
sample_sizes = (total_dur_delta / median_dur).astype(np.int32)
print("median duration weighted sample sizes based on total duration differences:")
display(sample_sizes)
samples = []
for lang in groupby_lang.groups:
sample_size = sample_sizes.loc[lang][0]
sample = (meta[meta["lang"]==lang]
.sample(n=sample_size, replace=True, random_state=np_rng.bit_generator)
.reset_index()
.transform(update_sample_id, axis=1))
samples.append(sample)
return pd.concat(samples).set_index("id", drop=True, verify_integrity=True)
def update_sample_id(row):
row["id"] = "{}_copy_{}".format(row["id"], row.name)
return row
# Augment training set metadata
meta = pd.concat([random_oversampling(meta[meta["split"]=="train"]), meta]).sort_index()
assert not meta.isna().any(axis=None), "NaNs in metadata after augmentation"
plot_duration_distribution(meta)
assert_splits_disjoint_by_speaker(meta)
meta
```
Speech data augmentation is a common research topic.
There are [better](https://www.isca-speech.org/archive/interspeech_2015/papers/i15_3586.pdf) ways to augment data than the simple duplication of metadata rows we did here.
One approach that is easy to implement and might work well (although we won't be doing it here) is to take copies of the signals and make them randomly a bit faster or slower, as sketched below.
For example, draw speed ratios randomly from `[0.9, 1.1]` and resample each copied signal according to the drawn ratio.
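A minimal sketch of that idea, using the same `scipy.signal.resample` call as our `read_mp3` helper. The ratio range and the helper name are just illustrative choices.
```
import scipy.signal

def random_speed_change(signal, rng, low=0.9, high=1.1):
    # ratio > 1 speeds the speech up (shorter signal), ratio < 1 slows it down
    ratio = rng.uniform(low, high)
    new_len = int(round(len(signal) / ratio))
    return scipy.signal.resample(signal, new_len)

# e.g. one augmented copy of a signal loaded with read_mp3:
# augmented = random_speed_change(signal, np_rng)
```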
## Inspecting the audio
Let's take a look at the speech data and listen to a few randomly picked samples from each label.
We pick 2 random samples for each language from the training set.
```
samples = (meta[meta["split"]=="train"]
.groupby("lang")
.sample(n=2, random_state=np_rng.bit_generator))
samples
```
Then let's read the mp3 files from disk, plot the signals, and listen to the audio.
```
from IPython.display import display, Audio, HTML
import scipy.signal
def read_mp3(path, resample_rate=16000):
if isinstance(path, bytes):
# If path is a tf.string tensor, it will be in bytes
path = path.decode("utf-8")
f = miniaudio.mp3_read_file_f32(path)
# Downsample to target rate, 16 kHz is commonly used for speech data
new_len = round(len(f.samples) * float(resample_rate) / f.sample_rate)
signal = scipy.signal.resample(f.samples, new_len)
# Normalize to [-1, 1]
signal /= np.abs(signal).max()
return signal, resample_rate
def embed_audio(signal, rate):
display(Audio(data=signal, rate=rate, embed=True, normalize=False))
def plot_signal(data, figsize=(6, 0.5), **kwargs):
ax = sns.lineplot(data=data, lw=0.1, **kwargs)
ax.set_axis_off()
ax.margins(0)
plt.gcf().set_size_inches(*figsize)
plt.show()
def plot_separator():
display(HTML(data="<hr style='border: 2px solid'>"))
for sentence, lang, clip_path in samples[["sentence", "lang", "path"]].to_numpy():
signal, rate = read_mp3(clip_path)
plot_signal(signal)
print("length: {} sec".format(signal.size / rate))
print("lang:", lang)
print("sentence:", sentence)
embed_audio(signal, rate)
plot_separator()
```
One of the most challenging aspects of the Mozilla Common Voice dataset is that the audio quality varies greatly: different microphones, background noise, user is speaking close to the device or far away etc.
It is difficult to ensure that a neural network will learn to classify different languages as opposed to classifying distinct acoustic artefacts from specific microphones.
There's a [vast amount of research](https://www.isca-speech.org/archive/Interspeech_2020/) being done on developing techniques for solving these kind of problems.
However, these are well out of scope for this simple example and we won't be studying them here.
## Spectral representations
It is usually not possible (at least not yet in 2020) to detect languages directly from the waveform.
Instead, the [fast Fourier transform](https://en.wikipedia.org/wiki/Short-time_Fourier_transform) (FFT) is applied on small, overlapping windows of the signal to get a 2-dimensional representation of energies in different frequency bands.
See [this](https://wiki.aalto.fi/display/ITSP/Spectrogram+and+the+STFT) for further details.
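To make the idea concrete, a power spectrogram boils down to something like the following sketch with plain TensorFlow. The 25 ms window and 10 ms step are typical values and assumptions here; the `lidbox` helper used below may use different defaults.
```
import tensorflow as tf

def power_spectrogram(signal, rate, frame_ms=25, step_ms=10):
    frame_length = (frame_ms * rate) // 1000
    frame_step = (step_ms * rate) // 1000
    # complex STFT of shape (num_frames, num_freq_bins)
    stft = tf.signal.stft(tf.cast(signal, tf.float32), frame_length, frame_step)
    # squared magnitudes = power in each time-frequency bin
    return tf.math.square(tf.math.abs(stft))
```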
However, output from the FFT is usually not usable directly and must be refined.
Let's begin by selecting the first signal from our random sample and extracting its power spectrogram.
### Power spectrogram
```
from lidbox.features.audio import spectrograms
def plot_spectrogram(S, cmap="viridis", figsize=None, **kwargs):
if figsize is None:
figsize = S.shape[0]/50, S.shape[1]/50
ax = sns.heatmap(S.T, cbar=False, cmap=cmap, **kwargs)
ax.invert_yaxis()
ax.set_axis_off()
ax.margins(0)
plt.gcf().set_size_inches(*figsize)
plt.show()
sample = samples[["sentence", "lang", "path"]].to_numpy()[0]
sentence, lang, clip_path = sample
signal, rate = read_mp3(clip_path)
plot_signal(signal)
powspec = spectrograms([signal], rate)[0]
plot_spectrogram(powspec.numpy())
```
This representation is very sparse, with zeros everywhere except in the lowest frequency bands.
The main problem here is that the relative differences between energy values are very large, making it difficult to compare changes in energy.
These differences can be reduced by mapping the values onto a logarithmic scale.
The [decibel-scale](https://en.wikipedia.org/wiki/Decibel) is a common choice.
We will use the maximum value of `powspec` as the reference power ($\text{P}_0$).
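The decibel mapping itself is only a few lines; here is a NumPy sketch using the maximum power as the reference value $\text{P}_0$. The `power_to_db` helper from `lidbox` used below may differ in details such as the clipping constant.
```
import numpy as np

def power_to_db_np(S, eps=1e-10):
    ref = np.max(S)
    # 10 * log10(P / P0), with a small epsilon to avoid log(0)
    return 10.0 * np.log10(np.maximum(S, eps) / ref)
```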
### Decibel-scale spectrogram
```
from lidbox.features.audio import power_to_db
dbspec = power_to_db([powspec])[0]
plot_spectrogram(dbspec.numpy())
```
This is an improvement, but the representation is still rather sparse.
We also see that most speech information is in the lower bands, with a bit of energy in the higher frequencies.
A common approach is to "squeeze together" the y-axis of all frequency bands by using a different scale, such as the [Mel-scale](https://en.wikipedia.org/wiki/Mel_scale).
Lets "squeeze" the current 256 frequency bins into 40 Mel-bins.
### Log-scale Mel-spectrogram
**Note** that we are scaling different things here.
The Mel-scale warps the frequency bins (y-axis), while the logarithm is used to reduce relative differences between individual spectrogram values (pixels).
```
from lidbox.features.audio import linear_to_mel
def logmelspectrograms(signals, rate):
powspecs = spectrograms(signals, rate)
melspecs = linear_to_mel(powspecs, rate, num_mel_bins=40)
return tf.math.log(melspecs + 1e-6)
logmelspec = logmelspectrograms([signal], rate)[0]
plot_spectrogram(logmelspec.numpy())
```
One common normalization technique is frequency channel standardization, i.e. normalizing each frequency channel (over time) to zero mean and unit variance.
```
from lidbox.features import cmvn
logmelspec_mv = cmvn([logmelspec])[0]
plot_spectrogram(logmelspec_mv.numpy())
```
Or only mean-normalization if you think the variances contain important information.
```
logmelspec_m = cmvn([logmelspec], normalize_variance=False)[0]
plot_spectrogram(logmelspec_m.numpy())
```
## Cepstral representations
Another common representation are the Mel-frequency cepstral coefficients (MFCC), which are obtained by applying the [discrete cosine transform](https://en.wikipedia.org/wiki/Discrete_cosine_transform) on the log-scale Mel-spectrogram.
### MFCC
```
def plot_cepstra(X, figsize=None):
if not figsize:
figsize = (X.shape[0]/50, X.shape[1]/20)
plot_spectrogram(X, cmap="RdBu_r", figsize=figsize)
mfcc = tf.signal.mfccs_from_log_mel_spectrograms([logmelspec])[0]
plot_cepstra(mfcc.numpy())
```
Most of the information is concentrated in the lower coefficients.
It is common to drop the 0th coefficient and select a subset starting at 1, e.g. 1 to 20.
See [this post](http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/) for more details.
```
mfcc = mfcc[:,1:21]
plot_cepstra(mfcc.numpy())
```
Now we have a very compact representation, but most of the variance is still in the lower coefficients and overshadows the smaller changes in higher coefficients.
We can normalize the MFCC matrix by standardizing each coefficient (over the time axis) to zero mean and unit variance.
This is commonly called cepstral mean and variance normalization (CMVN).
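The operation itself is simple; here is a NumPy sketch of the idea for a single (time, coefficients) matrix. The `cmvn` helper from `lidbox` used below implements the same idea for batched tensors.
```
import numpy as np

def cmvn_np(X, eps=1e-8):
    # X has shape (num_frames, num_coefficients);
    # standardize each coefficient over the time axis
    mean = X.mean(axis=0, keepdims=True)
    std = X.std(axis=0, keepdims=True)
    return (X - mean) / (std + eps)
```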
### MFCC + CMVN
```
mfcc_cmvn = cmvn([mfcc])[0]
plot_cepstra(mfcc_cmvn.numpy())
```
### Which one is best?
Speech feature extraction is a large, active research topic and it is impossible to choose one representation that would work well in all situations.
Common choices in state-of-the-art spoken language identification are log-scale Mel-spectrograms and MFCCs, with different normalization approaches.
For example, [here](https://github.com/swshon/dialectID_e2e) is an experiment in Arabic dialect identification, where log-scale Mel-spectra (referred to as FBANK) produced slightly better results compared to MFCCs.
It is not obvious when to choose which representation, or if we should even use the FFT at all.
You can read [this post](https://haythamfayek.com/2016/04/21/speech-processing-for-machine-learning.html) for a more detailed discussion.
## Voice activity detection
It is common for speech datasets to contain audio samples with short segments of silence or sounds that are not speech.
Since these are usually irrelevant for making a language classification decision, we would prefer to discard such segments.
This is called voice activity detection (VAD) and it is another large, active research area.
[Here](https://wiki.aalto.fi/pages/viewpage.action?pageId=151500905) is a brief overview of VAD.
Non-speech segments can be either noise or silence.
Separating non-speech noise from speech is non-trivial but possible, for example with [neural networks](https://www.isca-speech.org/archive/Interspeech_2019/pdfs/1354.pdf).
Silence, on the other hand, shows up as near-zero values in our speech representations, since these segments have much lower energy compared to segments with speech.
Such non-speech segments are therefore easy to detect and discard, for example by comparing the energy of the segment to the average energy of the whole sample.
If the samples in our example do not contain much background noise, a simple energy-based VAD technique should be enough to drop all silent segments.
We'll use the [root mean square](https://en.wikipedia.org/wiki/Root_mean_square) (RMS) energy to detect short silence segments.
`lidbox` has a simple energy-based VAD function, which we will use as follows:
1. Divide the signal into non-overlapping 10 ms long windows.
2. Compute RMS of each window.
3. Reduce all window RMS values by averaging to get a single mean RMS value.
4. Set a decision threshold at 0.1 for marking silence windows. In other words, if the window RMS is less than 0.1 of the mean RMS, mark the window as silence.
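For reference, the RMS of a window containing $N$ samples $x_1, \dots, x_N$ is
$$
\text{RMS}(x) = \sqrt{\frac{1}{N}\sum_{n=1}^{N} x_n^2}.
$$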
```
from lidbox.features.audio import framewise_rms_energy_vad_decisions
import matplotlib.patches as patches
sentence, lang, clip_path = sample
signal, rate = read_mp3(clip_path)
window_ms = tf.constant(10, tf.int32)
window_frame_length = (window_ms * rate) // 1000
# Get binary VAD decisions for each 10 ms window
vad_1 = framewise_rms_energy_vad_decisions(
signal=signal,
sample_rate=rate,
frame_step_ms=window_ms,
strength=0.1)
# Plot unfiltered signal
sns.set(rc={'figure.figsize': (6, 0.5)})
ax = sns.lineplot(data=signal, lw=0.1, legend=None)
ax.set_axis_off()
ax.margins(0)
# Plot shaded area over samples marked as not speech (VAD == 0)
for x, is_speech in enumerate(vad_1.numpy()):
if not is_speech:
rect = patches.Rectangle(
(x*window_frame_length, -1),
window_frame_length,
2,
linewidth=0,
color='gray',
alpha=0.2)
ax.add_patch(rect)
plt.show()
print("lang:", lang)
print("sentence: '{}'".format(sentence))
embed_audio(signal, rate)
# Partition the signal into 10 ms windows to match the VAD decisions
windows = tf.signal.frame(signal, window_frame_length, window_frame_length)
# Filter signal with VAD decision == 1 (remove gray areas)
filtered_signal = tf.reshape(windows[vad_1], [-1])
plot_signal(filtered_signal)
print("dropped {:d} out of {:d} frames, leaving {:.3f} of the original signal".format(
signal.shape[0] - filtered_signal.shape[0],
signal.shape[0],
filtered_signal.shape[0]/signal.shape[0]))
embed_audio(filtered_signal, rate)
```
The filtered signal has less silence, but some of the pauses between words sound too short and unnatural.
We would prefer not to remove the short pauses that normally occur between words, so let's say all pauses shorter than 300 ms should not be filtered out.
Let's also move all VAD code into a function.
```
def remove_silence(signal, rate):
window_ms = tf.constant(10, tf.int32)
window_frames = (window_ms * rate) // 1000
# Get binary VAD decisions for each 10 ms window
vad_1 = framewise_rms_energy_vad_decisions(
signal=signal,
sample_rate=rate,
frame_step_ms=window_ms,
# Do not return VAD = 0 decisions for sequences shorter than 300 ms
min_non_speech_ms=300,
strength=0.1)
# Partition the signal into 10 ms windows to match the VAD decisions
windows = tf.signal.frame(signal, window_frames, window_frames)
# Filter signal with VAD decision == 1
return tf.reshape(windows[vad_1], [-1])
sentence, lang, clip_path = sample
signal, rate = read_mp3(clip_path)
filtered_signal = remove_silence(signal, rate)
plot_signal(filtered_signal)
print("dropped {:d} out of {:d} frames, leaving {:.3f} of the original signal".format(
signal.shape[0] - filtered_signal.shape[0],
signal.shape[0],
filtered_signal.shape[0]/signal.shape[0]))
print("lang:", lang)
print("sentence: '{}'".format(sentence))
embed_audio(filtered_signal, rate)
```
We dropped some silence segments but left most of the speech intact; perhaps this is enough for our example.
Although this VAD approach is simple and works OK for our data, it will not work for speech data that has non-speech sounds such as music or noise in the background.
For such data we might need more powerful VAD filters such as neural networks that have been trained on a speech vs non-speech classification task with large amounts of different noise.
But let's not add more complexity to our example.
We'll use the RMS based filter for all other signals too.
## Comparison of representations
Let's extract these features for all signals in our random sample.
```
for sentence, lang, clip_path in samples[["sentence", "lang", "path"]].to_numpy():
signal_before_vad, rate = read_mp3(clip_path)
signal = remove_silence(signal_before_vad, rate)
logmelspec = logmelspectrograms([signal], rate)[0]
logmelspec_mvn = cmvn([logmelspec], normalize_variance=False)[0]
mfcc = tf.signal.mfccs_from_log_mel_spectrograms([logmelspec])[0]
mfcc = mfcc[:,1:21]
mfcc_cmvn = cmvn([mfcc])[0]
plot_width = logmelspec.shape[0]/50
plot_signal(signal.numpy(), figsize=(plot_width, .6))
print("VAD: {} -> {} sec".format(
signal_before_vad.size / rate,
signal.numpy().size / rate))
print("lang:", lang)
print("sentence:", sentence)
embed_audio(signal.numpy(), rate)
plot_spectrogram(logmelspec_mvn.numpy(), figsize=(plot_width, 1.2))
plot_cepstra(mfcc_cmvn.numpy(), figsize=(plot_width, .6))
plot_separator()
```
## Loading the samples to a `tf.data.Dataset` iterator
Our dataset is relatively small (2.5 GiB) and we might be able to read all files into signals and keep them in main memory.
However, most speech datasets are much larger due to the amount of data needed for training neural network models that would be of any practical use.
We need some kind of lazy iteration or streaming solution that views only one part of the dataset at a time.
One such solution is to represent the dataset as a [TensorFlow iterator](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), which evaluates its contents only when they are needed, similar to the [MapReduce](https://en.wikipedia.org/wiki/MapReduce) programming model for big data.
The downside with lazy iteration or streaming is that we lose the capability of doing random access by row id.
However, this shouldn't be a problem since we can always keep the whole metadata table in memory and do random access on its rows whenever needed.
Another benefit of TensorFlow dataset iterators is that we can map arbitrary [`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function)s over the dataset and TensorFlow will automatically parallelize the computations and place them on different devices, such as the GPU.
The core architecture of `lidbox` has been organized around the `tf.data.Dataset` API, leaving all the heavy lifting for TensorFlow to handle.
But before we load all our speech data, let's warm up with our small random sample of 8 rows.
```
samples
```
Let's load it into a `tf.data.Dataset`.
```
def metadata_to_dataset_input(meta):
# Create a mapping from column names to all values under the column as tensors
return {
"id": tf.constant(meta.index, tf.string),
"path": tf.constant(meta.path, tf.string),
"lang": tf.constant(meta.lang, tf.string),
"target": tf.constant(meta.target, tf.int32),
"split": tf.constant(meta.split, tf.string),
}
sample_ds = tf.data.Dataset.from_tensor_slices(metadata_to_dataset_input(samples))
sample_ds
```
All elements produced by the `Dataset` iterator are `dict`s of (string, Tensor) pairs, where the string denotes the metadata type.
Although the `Dataset` object is primarily for automating large-scale data processing pipelines, it is easy to extract all elements as `numpy`-values:
```
for x in sample_ds.as_numpy_iterator():
display(x)
```
### Reading audio files
Let's load the signals by [mapping](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map) a file reading function over every element of the dataset.
We'll add a `tf.data.Dataset` function wrapper on top of `read_mp3`, which we defined earlier.
TensorFlow will infer the input and output values of the wrapper as tensors from the type signature of dataset elements.
We must use `tf.numpy_function` to allow calling the non-TensorFlow function `read_mp3` from inside the graph environment.
It might not be as efficient as using native TensorFlow ops, but reading a file has a lot of latency anyway, so this is not a big performance hit.
Besides, we can always hide the latency by reading several files in parallel.
```
def read_mp3_wrapper(x):
signal, sample_rate = tf.numpy_function(
# Function
read_mp3,
# Argument list
[x["path"]],
# Return value types
[tf.float32, tf.int64])
return dict(x, signal=signal, sample_rate=tf.cast(sample_rate, tf.int32))
for x in sample_ds.map(read_mp3_wrapper).as_numpy_iterator():
print("id: {}".format(x["id"].decode("utf-8")))
print("signal.shape: {}, sample rate: {}".format(x["signal"].shape, x["sample_rate"]))
print()
```
### Removing silence and extracting features
Organizing all preprocessing steps as functions that can be mapped over the `Dataset` object allows us to represent complex transformations easily.
```
def remove_silence_wrapper(x):
return dict(x, signal=remove_silence(x["signal"], x["sample_rate"]))
def batch_extract_features(x):
with tf.device("GPU"):
signals, rates = x["signal"], x["sample_rate"]
logmelspecs = logmelspectrograms(signals, rates[0])
logmelspecs_smn = cmvn(logmelspecs, normalize_variance=False)
mfccs = tf.signal.mfccs_from_log_mel_spectrograms(logmelspecs)
mfccs = mfccs[...,1:21]
mfccs_cmvn = cmvn(mfccs)
return dict(x, logmelspec=logmelspecs_smn, mfcc=mfccs_cmvn)
features_ds = (sample_ds.map(read_mp3_wrapper)
.map(remove_silence_wrapper)
.batch(1)
.map(batch_extract_features)
.unbatch())
for x in features_ds.as_numpy_iterator():
print(x["id"])
for k in ("signal", "logmelspec", "mfcc"):
print("{}.shape: {}".format(k, x[k].shape))
print()
```
### Inspecting dataset contents in TensorBoard
`lidbox` has a helper function for dumping element information into [`TensorBoard`](https://www.tensorflow.org/tensorboard) summaries.
This converts all 2D features into images, writes signals as audio summaries, and extracts utterance ids.
```
import lidbox.data.steps as ds_steps
cachedir = os.path.join(workdir, "cache")
_ = ds_steps.consume_to_tensorboard(
# Rename logmelspec as 'input', these will be plotted as images
ds=features_ds.map(lambda x: dict(x, input=x["logmelspec"])),
summary_dir=os.path.join(cachedir, "tensorboard", "data", "sample"),
config={"batch_size": 1, "image_size_multiplier": 4})
```
Open a terminal and launch TensorBoard to view the summaries written to `$workdir/cache/tensorboard/data/sample`:
```
tensorboard --logdir /data/exp/cv4/cache/tensorboard
```
Then open the url in a browser and inspect the contents.
You can leave the server running, since we'll log the training progress to the same directory.
## Loading all data
We'll now begin loading everything from disk and preparing a pipeline from mp3-filepaths to neural network input.
We'll use the autotune feature of `tf.data` to let TensorFlow figure out automatically how much of the pipeline should be split up into parallel calls.
```
import lidbox.data.steps as ds_steps
TF_AUTOTUNE = tf.data.experimental.AUTOTUNE
def signal_is_not_empty(x):
return tf.size(x["signal"]) > 0
def pipeline_from_metadata(data, shuffle=False):
if shuffle:
# Shuffle metadata to get an even distribution of labels
data = data.sample(frac=1, random_state=np_rng.bit_generator)
ds = (
# Initialize dataset from metadata
tf.data.Dataset.from_tensor_slices(metadata_to_dataset_input(data))
# Read mp3 files from disk in parallel
.map(read_mp3_wrapper, num_parallel_calls=TF_AUTOTUNE)
# Apply RMS VAD to drop silence from all signals
.map(remove_silence_wrapper, num_parallel_calls=TF_AUTOTUNE)
# Drop signals that VAD removed completely
.filter(signal_is_not_empty)
# Extract features in parallel
.batch(1)
.map(batch_extract_features, num_parallel_calls=TF_AUTOTUNE)
.unbatch()
)
return ds
# Mapping from dataset split names to tf.data.Dataset objects
split2ds = {
split: pipeline_from_metadata(meta[meta["split"]==split], shuffle=split=="train")
for split in split_names
}
```
### Testing pipeline performance
Note that we only constructed the pipeline with all steps we want to compute.
All TensorFlow ops are computed only when elements are requested from the iterator.
Let's iterate over the training dataset from first to last element to ensure the pipeline will not be a performance bottleneck during training.
```
_ = ds_steps.consume(split2ds["train"], log_interval=2000)
```
### Caching pipeline state
We can [cache](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) the iterator state as a single binary file at arbitrary stages.
This allows us to automatically skip all steps that precede the call to `tf.data.Dataset.cache`.
Let's cache the training dataset and iterate again over all elements to fill the cache.
**Note** that you will still be storing all data on the disk (4.6 GiB new data), so this optimization is a space-time tradeoff.
```
os.makedirs(os.path.join(cachedir, "data"))
split2ds["train"] = split2ds["train"].cache(os.path.join(cachedir, "data", "train"))
_ = ds_steps.consume(split2ds["train"], log_interval=2000)
```
If we iterate over the dataset again, TensorFlow should read all elements from the cache file.
```
_ = ds_steps.consume(split2ds["train"], log_interval=2000)
```
As a side note, if your training environment has fast read-write access to a file system configured for reading and writing very large files, this optimization can be a very significant performance improvement.
**Note** also that all usual problems related to cache invalidation apply.
When caching extracted features and metadata to disk, be extra careful in your experiments to ensure you are not interpreting results computed on data from some outdated cache.
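A simple (hypothetical) safeguard is to wipe the cached features whenever the feature extraction code changes, so a stale cache cannot silently leak into new experiments:
```
import os
import shutil

def clear_feature_cache(cachedir):
    # Delete cached feature files so they will be recomputed on the next run.
    shutil.rmtree(os.path.join(cachedir, "data"), ignore_errors=True)
    os.makedirs(os.path.join(cachedir, "data"), exist_ok=True)
```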
### Dumping a few batches to TensorBoard
Let's extract the first 100 elements of every split to TensorBoard.
```
for split, ds in split2ds.items():
_ = ds_steps.consume_to_tensorboard(
ds.map(lambda x: dict(x, input=x["logmelspec"])),
os.path.join(cachedir, "tensorboard", "data", split),
{"batch_size": 1,
"image_size_multiplier": 2,
"num_batches": 100},
exist_ok=True)
```
## Training a supervised, neural network language classifier
We have now configured an efficient data pipeline and extracted some data samples to summary files for TensorBoard.
It is time to train a classifier on the data.
### Drop metadata from dataset
During training, we only need a tuple of model input and targets.
We can therefore drop everything else from the dataset elements just before training starts.
This is also a good place to decide if we want to train on MFCCs or Mel-spectra.
```
model_input_type = "logmelspec"
def as_model_input(x):
return x[model_input_type], x["target"]
train_ds_demo = list(split2ds["train"]
.map(as_model_input)
.shuffle(100)
.take(6)
.as_numpy_iterator())
for input, target in train_ds_demo:
print(input.shape, target2lang[target])
if model_input_type == "mfcc":
plot_cepstra(input)
else:
plot_spectrogram(input)
plot_separator()
```
### Asserting all input is valid
Since the training dataset is cached, we can quickly iterate over all elements and check that we don't have any NaNs or negative targets.
```
def assert_finite(x, y):
tf.debugging.assert_all_finite(x, "non-finite input")
tf.debugging.assert_non_negative(y, "negative target")
return x, y
_ = ds_steps.consume(split2ds["train"].map(as_model_input).map(assert_finite), log_interval=5000)
```
It is also easy to compute stats on the dataset elements.
For example, finding the global minimum and maximum values of the inputs.
```
x_min = split2ds["train"].map(as_model_input).reduce(
tf.float32.max,
lambda acc, elem: tf.math.minimum(acc, tf.math.reduce_min(elem[0])))
x_max = split2ds["train"].map(as_model_input).reduce(
tf.float32.min,
lambda acc, elem: tf.math.maximum(acc, tf.math.reduce_max(elem[0])))
print("input tensor global minimum: {}, maximum: {}".format(x_min.numpy(), x_max.numpy()))
```
### Selecting a model architecture
`lidbox` provides a small set of neural network model architectures out of the box.
Many of these architectures have good results in the literature for different datasets.
These models have been implemented in Keras, so you could replace the model we are using here with anything you want.
The ["x-vector"](http://danielpovey.com/files/2018_odyssey_xvector_lid.pdf) architecture has worked well in speaker and language identification, so let's create an untrained Keras x-vector model.
One of its core features is learning fixed length vector representations (x-vectors) for input of arbitrary length.
These vectors are extracted from the first fully connected layer (`segment1`), without activation.
This opens up opportunities for doing all kinds of statistical analysis on these vectors, but that's out of scope for our example.
We'll try to regularize the network by adding frequency [channel dropout](https://dl.acm.org/doi/abs/10.1016/j.patrec.2017.09.023) with probability 0.8.
In other words, during training we set input rows randomly to zeros with probability 0.8.
This might avoid overfitting the network on frequency channels containing noise that is irrelevant for deciding the language.
```
import lidbox.models.xvector as xvector
def create_model(num_freq_bins, num_labels):
model = xvector.create([None, num_freq_bins], num_labels, channel_dropout_rate=0.8)
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
return model
model = create_model(
num_freq_bins=20 if model_input_type == "mfcc" else 40,
num_labels=len(target2lang))
model.summary()
```
### Channel dropout demo
Here's what happens to the input during training.
```
channel_dropout = tf.keras.layers.SpatialDropout1D(model.get_layer("channel_dropout").rate)
for input, target in train_ds_demo:
print(input.shape, target2lang[target])
input = channel_dropout(tf.expand_dims(input, 0), training=True)[0].numpy()
if model_input_type == "mfcc":
plot_cepstra(input)
else:
plot_spectrogram(input)
plot_separator()
```
### Training the classifier
The validation set is needed after every epoch, so we might as well cache it.
**Note** that this writes 2.5 GiB of additional data to disk the first time the validation set is iterated over, i.e. at the end of epoch 1.
Also, we can't use batches larger than one, since our inputs have different lengths (this could be addressed with [ragged tensors](https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/data/experimental/dense_to_ragged_batch)).
```
callbacks = [
# Write scalar metrics and network weights to TensorBoard
tf.keras.callbacks.TensorBoard(
log_dir=os.path.join(cachedir, "tensorboard", model.name),
update_freq="epoch",
write_images=True,
profile_batch=0,
),
# Stop training if validation loss has not improved from the global minimum in 10 epochs
tf.keras.callbacks.EarlyStopping(
monitor='val_loss',
patience=10,
),
# Write model weights to cache everytime we get a new global minimum loss value
tf.keras.callbacks.ModelCheckpoint(
os.path.join(cachedir, "model", model.name),
monitor='val_loss',
save_weights_only=True,
save_best_only=True,
verbose=1,
),
]
train_ds = split2ds["train"].map(as_model_input).shuffle(1000)
dev_ds = split2ds["dev"].cache(os.path.join(cachedir, "data", "dev")).map(as_model_input)
history = model.fit(
train_ds.batch(1),
validation_data=dev_ds.batch(1),
callbacks=callbacks,
verbose=2,
epochs=100)
```
## Evaluating the classifier
Let's run all test set samples through our trained model by loading the best weights from the cache.
```
from lidbox.util import predict_with_model
test_ds = split2ds["test"].map(lambda x: dict(x, input=x["logmelspec"])).batch(1)
_ = model.load_weights(os.path.join(cachedir, "model", model.name))
utt2pred = predict_with_model(model, test_ds)
test_meta = meta[meta["split"]=="test"]
assert not test_meta.join(utt2pred).isna().any(axis=None), "missing predictions"
test_meta = test_meta.join(utt2pred)
test_meta
```
### Average detection cost ($\text{C}_\text{avg}$)
The de facto standard metric for evaluating spoken language classifiers might be the *average detection cost* ($\text{C}_\text{avg}$), which has been refined to its current form during past [language recognition competitions](https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=925272).
`lidbox` provides this metric as a `tf.keras.Metric` subclass.
Scikit-learn provides other commonly used metrics so there is no need to manually compute those.
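As a rough reference (following the NIST LRE evaluation plans, assuming unit miss and false-alarm costs and a target prior $P_{\text{Target}}=0.5$; see the linked competition reports for the exact definition), for $N$ target languages it has the form
$$
C_{\text{avg}} = \frac{1}{N}\sum_{L_T}\left[ P_{\text{Target}}\, P_{\text{Miss}}(L_T) + \frac{1-P_{\text{Target}}}{N-1}\sum_{L_N \neq L_T} P_{\text{FA}}(L_T, L_N)\right].
$$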
```
from lidbox.util import classification_report
from lidbox.visualize import draw_confusion_matrix
true_sparse = test_meta.target.to_numpy(np.int32)
pred_dense = np.stack(test_meta.prediction)
pred_sparse = pred_dense.argmax(axis=1).astype(np.int32)
report = classification_report(true_sparse, pred_dense, lang2target)
for m in ("avg_detection_cost", "avg_equal_error_rate", "accuracy"):
print("{}: {:.3f}".format(m, report[m]))
lang_metrics = pd.DataFrame.from_dict({k: v for k, v in report.items() if k in lang2target})
lang_metrics["mean"] = lang_metrics.mean(axis=1)
display(lang_metrics.T)
fig, ax = draw_confusion_matrix(report["confusion_matrix"], lang2target)
```
## Conclusions
This was an example of simple, deep-learning-based spoken language identification of 4 different languages from the Mozilla Common Voice free speech datasets.
We managed to train a model that adequately recognizes languages spoken by the test set speakers.
However, there is clearly room for improvement.
We did simple random oversampling to balance the language distribution in the training set, but perhaps there are better ways to do this.
We also did not tune optimization hyperparameters or try different neural network architectures or layer combinations.
It might also be possible to increase robustness by audio feature engineering, such as [random FIR filtering](https://www.isca-speech.org/archive/Interspeech_2018/abstracts/1047.html) to simulate microphone differences.
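As an illustration of that last idea, here is a minimal sketch of waveform-level random FIR filtering (an assumption on our part, not something used in this tutorial):
```
import tensorflow as tf

def random_fir_augment(signal, num_taps=10, seed=(1, 2)):
    # Draw a short random impulse response and normalize its energy.
    taps = tf.random.stateless_normal([num_taps], seed=seed)
    taps = taps / tf.norm(taps)
    # tf.nn.conv1d expects [batch, width, channels] inputs and [width, in, out] filters.
    x = tf.reshape(signal, [1, -1, 1])
    kernel = tf.reshape(taps, [num_taps, 1, 1])
    filtered = tf.nn.conv1d(x, kernel, stride=1, padding="SAME")
    return tf.reshape(filtered, [-1])
```
Each training signal would be convolved with a different random filter, simulating small channel and microphone variations.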
# Mandala: self-managing experiments
## What is Mandala?
Mandala enables new, simpler patterns for working with complex and evolving
computational experiments.
It eliminates low-level code and decisions for how to save, load, query,
delete and otherwise organize results. To achieve this, it lets computational
code "manage itself" by organizing and addressing its own data storage.
```{admonition} Under construction
:class: warning
This project is under active development
```
### Features at a glance
- **concise**: code computations in pure Python (w/ control flow, collections,
...) -- results are automatically tracked and queriable
- **iterate rapidly**: add/edit parameters/logic and rerun code -- past results
are loaded on demand, and only new computations are executed
- **pattern-match against Python code**: query across complex, branching
projects by reusing computational code itself
### Quick start
#### Installation
```console
pip install git+https://github.com/amakelov/mandala
```
#### Recommended introductions
To build some understanding, check these out:
- 2-minute introduction: [intro to self-managing code](2mins)
- 10-minute introduction: [manage a small ML project](10mins)
#### Minimal working examples
If you want to jump right into code, below are a few minimal, somewhat
interesting examples to play with and extend:
```
from typing import List
from mandala.all import *
set_logging_level('warning')
# create a storage for results
storage = Storage(in_memory=True) # can also be persistent (on disk)
@op(storage) # memoization decorator
def inc(x) -> int:
return x + 1
@op(storage)
def mean(x:List[int]) -> float:
# you can operate on / return collections of memoized results
return sum(x) / len(x)
with run(storage): # calls inside `run` block are memoized
nums = [inc(i) for i in range(5)]
result = mean(nums) # memoization composes through lists without copying data
print(f'Mean of 5 nums: {result}')
# add logic/parameters directly on top of memoized code without re-doing past work
with run(storage, lazy=True):
nums = [inc(i) for i in range(10)]
result = mean(nums)
# walk over chains of calls without loading intermediate data
# to traverse storage and collect results flexibly
with run(storage, lazy=True):
nums = [inc(i) for i in range(10)]
result = mean(nums)
print(f'Reference to mean of 10 nums: {result}')
storage.attach(result) # load the value in-place
print(f'Loaded mean of 10 nums: {result}')
# pattern-match to memoized compositions of calls
with query(storage) as q:
# this may not make sense unless you read the tutorials
i = Query()
inc_i = inc(i).named('inc_i')
nums = MakeList(containing=inc_i, at_index=0).named('nums')
result = mean(nums).named('result')
df = q.get_table(inc_i, nums, result)
df
```
## Why Mandala?
### Advantages
Compared to other tools for tracking and managing computations, the features that
most set Mandala apart are the direct and concise patterns in which complex
Python code can interact with its own storage. This manifests in several ways:
- **Python code as interface to its own storage**: you just write the code to compute
what you want to compute (freely using Python's control flow and collections),
and directly add more parameters and logic to it over time. Mandala takes
care of the rest:
- **the organization of storage mirrors the structure of code**, and Mandala
provides you with the tools to make maximum use of this --
retracing memoized code with on-demand data loading, and declarative
code-based pattern-matching.
- this leads to **simple, intuitive and flexible ways to query and iterate on
experiments**, even when their logic gets quite complex -- without any data
organization efforts on your part.
- it also allows you to **query relationships between any variables in your
projects**, even when they are separated by many computational steps -- **without
explicitly annotating these relationships**.
- **refactor code and data will follow**: Mandala makes it easy to apply
familiar software refactorings to code *without* losing the relationship to
this code's existing results. This gives you high-level tools to manage the
complexity of both the code and its data as the project grows.
- **organize all results and their relationships**: Mandala manages all the
artifacts produced by computations, not just a set of human-readable
metrics. It lets you use pure Python idioms to
- compute with **data structures with shared substructure**
- **index and view data in multiple ways** and on multiple levels of analysis
without storage duplication. This gives you much flexibility in manipulating
the contents of storage to express your intent.
### Comparisons
Mandala takes inspiration from many other programming tools and concepts. Below
is an (incomplete but growing) list of comparisons with relevant tools:
- [algebraicjulia](https://www.algebraicjulia.org/):
[conjunctive](https://www.algebraicjulia.org/blog/post/2020/12/cset-conjunctive-queries/) [queries](https://www.algebraicjulia.org/blog/post/2020/11/sql-as-hypergraph/)
are integral to Mandala's declarative interface, and are generalized in
several ways to make them practical for complex experiments:
- a single table of values is used to enable polymorphism
- operations on lists/dicts are integrated with query construction
- queries can use the hierarchical structure of computations
- constraints can be partitioned (to avoid interaction) while using some
shared base (to enable code reuse)
- dynamic query generation can use conditionals to enable disjunctive
queries, and even loops (though this quickly becomes inefficient)
- [koji](https://arxiv.org/abs/1901.01908) and [content-addressable computation](https://research.protocol.ai/publications/ipfs-fan-a-function-addressable-computation-network/delarocha2021a.pdf):
Mandala uses causal hashing to
- ensure correct, deterministic and idempotent behavior;
- avoid hashing large (or unhashable) Python objects;
- avoid discrepancies between object hashes across library versions
Mandala can be thought of as a single-node, Python-only implementation of
general-purpose content-addressable computation with two extra features:
- hierarchical organization of computation,
- declarative queries
- [funsies](https://github.com/aspuru-guzik-group/funsies) is a workflow engine
for Python scripts that also uses causal hashing. Mandala differs by
integrating more closely with Python (by using functions instead of scripts as
the units of work), and thus enabling more fine-grained control and
expressiveness over what gets computed and how.
- [joblib.Memory](https://joblib.readthedocs.io/en/latest/memory.html#memory)
implements persistent memoization for Python functions that overcomes some of
the issues naive implementations have with large and complex Python objects.
Mandala augments `joblib.Memory` in some key ways:
- memoized calls can be queried/deleted declaratively
- collections and memoized functions calling other memoized functions can
reuse storage
- you can modify and refactor memoized functions while retaining connection to
memoized calls
- you can avoid the latency of hashing large/complex objects
- [incpy](https://dl.acm.org/doi/abs/10.1145/2001420.2001455?casa_token=ahM2UC4Uk-4AAAAA:9lZXVDS7nYEHzHPJk-UCTOAICGb2astAh2hrL00VB125nF6IGG90OwA-ujbe-cIg2hT4T1MOpbE2)
augments the Python interpreter with automatic persistent memoization. Mandala
also enables automatic persistent memoization, but it is different from
`incpy` in some key ways:
- uses decorators to explicitly designate memoized functions (which can be
good or bad depending on your goals)
- allows for lazy retracing of memoized calls
- provides additional features like the ones mentioned in the comparison with
`joblib.Memory`
### Philosophy
When can we declare data management for computational experiments a solved
problem? It's unclear how to turn this question into a measurable goal, but
there is a somewhat objective *lower bound* on how simple data management can
get:
> At the end of the day, we have to *at least* write down the (Python) code to express
> the computations we want to run, *regardless* of data management concerns.
> Can this be *all* the code we have to write, and *still* be able to achieve
> the goals of data management?
Mandala aims to bring us to this idealized lower bound. It adopts the view that
Python itself is flexible and expressive enough to capture our intentions about
experiments. There shouldn't be a ton of extra interfaces, concepts and syntax
between your thoughts, their expression in code, and its results.
By mirroring the structure of computational code in the organization of data,
and harmoniously extending Python's tools for capturing intention and managing
complexity, we can achieve a more flexible, natural and immediate way to
interact with computations.
This echoes the design goals of some other tools. For example,
[dask](https://dask.org) and [ray](https://ray.io) (both of which Mandala
integrates with) aim to let you write Python code the way you are used to, and
take care of parallelization for you.
## Limitations
This project is under active development, and not ready for production. Its goal
so far has been to demonstrate that certain high-level programming patterns are
viable by building a sufficiently useful working prototype. Limitations can be
summarized as follows:
- it is easy to get started, but effective use in complex projects requires some
getting used to;
- much of the code does what it does in very simple and often inefficient ways;
- interfaces and (more importantly) storage formats may change in backward
  incompatible ways;
- bugs likely still exist.
That being said, Mandala is already quite usable in many practical situations.
Below is a detailed outline of current limitations you should be aware of if you
consider using this library in your work.
### "Missing" features
There are some things you may be used to seeing in projects like this that
currently don't exist:
- **functions over scripts**: Mandala focuses on functions as the basic
building blocks of experiments as opposed to Python scripts. There is no
fundamental conceptual distinction between the two, but:
- functions provide a better-behaved interface, especially when it comes to
typing, refactoring, and hierarchical organization
- using functions makes it much easier to use
projects such as [ray](https://www.ray.io/) and [dask](https://dask.org/)
alongside Mandala
- if you don't need to do something extra complicated involving different
Python processes or virtual environments, it is easy to wrap a script as a
function that takes in some settings and resource descriptions (e.g., paths to
input files) and returns other resource descriptions (e.g., paths to output
files). However, the burden of refactoring the script's interface manually
and organizing its input/output resources would still be on you. So, always
use a function where you can.
- **no integration with git**: version control data is not automatically
  included in Mandala's records at this point, though this would be an easy
addition. There are other programming patterns available for working with
multiple versions of code.
- **no GUI**: for now, the library leans heavily towards using computational
code itself as a highly programmable interface to results, and visualization
is left to other tools.
### Acquiring best practices
Using some features effectively requires deeper understanding:
- **declarative queries**: It's possible to create underconstrained
pattern-matching queries which return a number of rows that grows
  multiplicatively with the number of rows in the memoization tables of the functions
in the query. Such queries may take a very long time or run out of RAM even
for moderately-sized projects (`sqlite` will usually complain about this at
the start of the query).
Certain ways to define and compose memoized functions promote such queries, so
a good understanding of this issue may be needed depending on the project.
- **deletions**: deleting anything from storage is subject to invariants that
  prevent "mysterious" objects (ones without a computational history tracing
  back to user inputs) from existing. This means that you must
understand well how deletion works to avoid deleting more things than you
really intend.
### Performance
The library has not been optimized much for performance. A few things to keep in
mind for now:
- When using disk-based persistence, Mandala introduces an overhead of a few 10s
of ms for each call to a memoized function, on top of any work to serialize
inputs/outputs and run the function.
- Storing and loading large collections can be slow (a list of 1000 integers
already leads to a visible ~1s delay)
# notebook for processing fully reduced m3 data "triplets"
This is a notebook for processing L0 / L1B / L2 triplets (i.e.,
the observations that got reduced).
## general notes
We process the reduced data in triplets simply to improve the metadata on the
L0 and L2 products. We convert L1B first to extract several attributes to fill
out their metadata. This data is scratched to disk in
[./directories/m3/m3_index.csv'](./directories/m3/m3_index.csv), because it
also serves as a useful user-facing index to the archive. A complete version
of this index is provided in this repository, but this index was originally
created during this conversion process, and will be recreated if you run it
again. This index is read into the ```m3_index variable``` below; its path is
also soft-coded in several ```m3_conversion``` classes, so make sure you
change that or feed them the correct path as an argument if you change this
location.
This notebook does not apply programmatic rules to iterate over the file
structure of the mirrored archive. It uses an index that was partly manually
generated:
[/src/directories/m3/m3_data_mappings.csv](/src/directories/m3/m3_data_mappings.csv).
This was manually manipulated to manage several small idiosyncrasies in the
PDS3 archive.
35 of the V3 L1B products in the PDS3 archive are duplicated: one copy in the
correct month-by-year directory, one copy in some incorrect month-by-year
directory. We pick the 'first' one in all cases (see the line
```pds3_label_file = input_directory + group_files[product_type][0]``` below).
Each pair's members have identical md5sums, so it *probably* doesn't matter
which member of the pair we use.
## performance tips
The most likely bottlenecks for this process are I/O throughput and CPU. We
recommend both using a high-throughput disk and parallelizing this, either
using ```pathos``` (vanilla Python ```multiprocessing``` will probably fail
during a pickling step) or simply by running multiple
copies of this notebook. If you do parallelize this process on a single
machine, note that working memory can suddenly catch you off-guard as a
constraint. While many of the M3 observational data files are small, some are
over 4 GB, and the method presented here requires them to be completely loaded
into memory in order to convert them to FITS and strip the prefix tables from
the L0 files. When passed ```clean=True```, the ```m3_converter```
observational data writer class constructors aggressively delete data after
using it, but this still results in a pretty high -- and spiky -- working
memory burden.
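For reference, here is a hypothetical sketch of the ```pathos``` approach (```convert_chunk``` is a placeholder standing in for the per-triplet loop in the cell below, not a function defined in this repository):
```
from more_itertools import distribute
from pathos.multiprocessing import ProcessingPool

def convert_chunk(groups):
    n_done = 0
    for group in groups:
        # ... run the same triplet-conversion steps as in the main loop below ...
        n_done += 1
    return n_done

# chunks = [list(c) for c in distribute(8, reduced_groups)]
# pool = ProcessingPool(nodes=8)
# counts = pool.map(convert_chunk, chunks)
```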
```
import datetime as dt
import os
from types import MappingProxyType
from more_itertools import distribute
import pandas as pd
import sh
from m3_bulk import basenamer, make_m3_triplet, \
m3_triplet_bundle_paths, crude_time_log, fix_end_object_tags
from m3_conversion import M3L0Converter, M3L1BConverter, M3L2Converter
from pvl.decoder import ParseError
m3_index = pd.read_csv('./directories/m3/m3_index.csv')
# directory of file mappings, grouped into m3 basename clusters
file_mappings = pd.read_csv('./directories/m3/m3_data_mappings.csv')
file_mappings["basename"] = file_mappings["filepath"].apply(basenamer)
basename_groups = list(file_mappings.groupby("basename"))
# what kind of files does each pds4 product have?
# paths to the locally-written versions are stored in the relevant attributes of
# the associated PDSVersionConverter instance.
pds4_filetypes = MappingProxyType({
'l0': ('pds4_label_file', 'clock_file', 'fits_image_file'),
'l1b': ('pds4_label_file', 'loc_file', 'tim_file', 'rdn_file', 'obs_file'),
'l2': ('pds4_label_file', 'sup_file', 'rfl_file')
})
# root directories of PDS3 and PDS4 data sets respectively
input_directory = '/home/ubuntu/m3_input/'
output_directory = '/home/ubuntu/m3_output/'
# all the triplets: what we are converting here.
reduced_groups = [group for group in basename_groups if len(group[1]) >= 3]
# the lonesome EDR images (groups with only a single file)
edr_groups = [group for group in basename_groups if len(group[1]) == 1]
triplet_product_types = ('l1b', 'l0', 'l2')
# initialize our mapping of product types to
# product-writer class constructors.
# MappingProxyType is just a safety mechanism
# to make sure constructors don't get messed with
converters = MappingProxyType({
'l0': M3L0Converter,
'l1b': M3L1BConverter,
'l2': M3L2Converter
})
writers = {} # dict to hold instances of the converter classes
# initialize iteration, control execution in whatever way
# this is a place to split your index up however you like
# if you're parallelizing using multiple copies of this
# notebook.
chunk_ix_of_this_notebook = 0
total_chunks = 40
chunks = distribute(total_chunks, reduced_groups)
# eagerly evaluate so we know how long it is,
# and what all is in it if we have an error
chunk = list(chunks[chunk_ix_of_this_notebook])
log_string = "_" + str(chunk_ix_of_this_notebook)
group_enumerator = enumerate(chunk)
for ix, group in group_enumerator:
print(ix, len(chunk))
print("beginning product conversion")
triplet_start_time = dt.datetime.now()
group_files = make_m3_triplet(group)
# what are the correct output paths (relative to
# the root of the pds4 bundle) for these products?
bundle_paths = m3_triplet_bundle_paths(group)
for product_type in triplet_product_types:
# read the PDS3 product and perform file conversions
pds3_label_file = input_directory + group_files[product_type][0]
try:
writers[product_type] = converters[product_type](
pds3_label_file, suppress_warnings=True, clean=True
)
except ParseError: # fix broken END_OBJECT tags in some of the target-mode files
print("fixing broken END_OBJECT tags")
temp_label_file = fix_end_object_tags(pds3_label_file)
writers[product_type] = converters[product_type](
temp_label_file, suppress_warnings=True, clean=True
)
os.remove(temp_label_file)
# write PDS4 label and product files
# don't actually need to shave the extra / here but...
# this would be more safely rewritten with PyFilesystem
# (see clem-conversion)
output_path = output_directory + bundle_paths[product_type][1:]
sh.mkdir("-p", output_path)
writers[product_type].write_pds4(output_path, write_product_files=True, clean=True)
# occasionally (slow but very useful) spot-check with validate tool
# note that this just invokes a one-line script at /usr/bin/validate
# that links to the local install of the PDS Validate Tool; this
# allows us to avoid throwing java stuff all over our environment
if ix % 20 == 1:
print("1-mod-20th triplet: running Validate Tool")
validate_results = sh.validate("-t", writers[product_type].pds4_label_file)
with open("validate_dump.txt", "a") as file:
file.write(validate_results.stdout.decode())
print("validated successfully")
# log transfer crudely
crude_time_log(
"m3_data_conversion_log" + log_string + ".csv",
writers[product_type],
str((dt.datetime.now() - triplet_start_time).total_seconds())
)
print(
"done with this triplet; total seconds "
+ str((dt.datetime.now() - triplet_start_time).total_seconds())
)
```
# Speed benchmarks
This is just a quick reference for how the running time of the program scales.
```
from __future__ import print_function
import pprint
import subprocess
import sys
sys.path.append('../')
# sys.path.append('/home/heberto/learning/attractor_sequences/benchmarking/')
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
%matplotlib inline
np.set_printoptions(suppress=True, precision=2)
sns.set(font_scale=2.0)
```
#### Git machinery
```
run_old_version = False
if run_old_version:
hash_when_file_was_written = '321620ef1b753fe42375bbf535c9ab941b72ae26'
hash_at_the_moment = subprocess.check_output(["git", 'rev-parse', 'HEAD']).strip()
print('Actual hash', hash_at_the_moment)
print('Hash of the commit used to run the simulation', hash_when_file_was_written)
subprocess.call(['git', 'checkout', hash_when_file_was_written])
```
#### Load the libraries
```
from benchmarking.standard_program import run_standard_program, calculate_succes_program, training_program
import timeit
def wrapper(func, *args, **kwargs):
def wrapped():
return func(*args, **kwargs)
return wrapped
```
## Standard program
#### Minicolumns
```
hypercolumns = 4
minicolumns_range = np.arange(10, 100, 5)
epochs = 1
times_minicolumns = []
for minicolumns in minicolumns_range:
function = wrapper(run_standard_program, hypercolumns=hypercolumns, minicolumns=minicolumns, epochs=epochs)
time = timeit.timeit(function, number=1)
times_minicolumns.append(time)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(minicolumns_range, times_minicolumns, '*-', markersize=14)
ax.set_xlabel('Minicolumns')
ax.set_ylabel('Seconds that the program ran');
```
#### Hypercolumns
```
hypercolumns_range = np.arange(4, 20, 2)
minicolumns = 20
epochs = 1
times_hypercolumns = []
for hypercolumns in hypercolumns_range:
function = wrapper(run_standard_program, hypercolumns, minicolumns, epochs)
time = timeit.timeit(function, number=1)
times_hypercolumns.append(time)
sns.set(font_scale=2.0)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(hypercolumns_range, times_hypercolumns, '*-', markersize=14)
ax.set_xlabel('Hypercolumns')
ax.set_ylabel('Seconds that the program ran');
```
#### Epochs
```
hypercolumns = 4
minicolumns = 20
epochs_range = np.arange(1, 10, 1)
times_epochs = []
for epochs in epochs_range:
function = wrapper(run_standard_program, hypercolumns, minicolumns, epochs)
time = timeit.timeit(function, number=1)
times_epochs.append(time)
sns.set(font_scale=2.0)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(epochs_range, times_epochs, '*-', markersize=14)
ax.set_xlabel('Epochs')
ax.set_ylabel('Seconds that the program ran')
```
#### Everything to compare
```
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
ax1.plot(minicolumns_range, times_minicolumns, '*-', markersize=14)
ax2.plot(hypercolumns_range, times_hypercolumns, '*-', markersize=14)
ax3.plot(epochs_range, times_epochs, '*-', markersize=14)
ax1.set_title('Minicolumn scaling')
ax2.set_title('Hypercolumn scaling')
ax3.set_title('Epoch scaling')
ax1.set_ylabel('Time (s)');
```
## Training and recalling times
Here we first run the standard program and then test how long it takes to run recalls and to calculate recall success.
```
hypercolumns = 4
minicolumns = 10
epochs = 3
manager = run_standard_program(hypercolumns, minicolumns, epochs)
```
#### Recall only
```
T_recall_range = np.arange(3, 20, 1)
time_recall = []
for T_recall in T_recall_range:
function = wrapper(training_program, manager=manager, T_recall=T_recall)
time = timeit.timeit(function, number=1)
time_recall.append(time)
# Plot
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(T_recall_range, time_recall, '*-', markersize=14)
ax.set_xlabel('T_recall')
ax.set_ylabel('Seconds that the program took to run')
ax.set_title('Normal recall profile')
plt.show()
```
#### Success recall
```
T_recall_range = np.arange(3, 20, 1)
time_success = []
for T_recall in T_recall_range:
function = wrapper(calculate_succes_program, manager=manager, T_recall=T_recall)
time = timeit.timeit(function, number=1)
time_success.append(time)
# Plot
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(T_recall_range, time_success, '*-', markersize=14, label='success calculation')
ax.plot(T_recall_range, time_recall, '*-', markersize=14, label='recall only')
ax.set_xlabel('T_recall')
ax.set_ylabel('Seconds that the program took to run')
ax.set_title('Recall Success profiling')
ax.legend()
plt.show()
```
<!--- <div style="text-align: center;">
<font size="5">
<b>Data-driven Design and Analyses of Structures and Materials (3dasm)</b>
</font>
</div>
<br>
</br>
<div style="text-align: center;">
<font size="5">
<b>Lecture 1</b>
</font>
</div>
<center>
<img src=docs/tudelft_logo.jpg width=550px>
</center>
<div style="text-align: center;">
<font size="4">
<b>Miguel A. Bessa | <a href = "mailto: [email protected]">[email protected]</a> | Associate Professor</b>
</font>
</div> -->
<img src=docs/tudelft_logo.jpg width=50%>
## Data-driven Design and Analyses of Structures and Materials (3dasm)
## Lecture 1
### Miguel A. Bessa | <a href = "mailto: [email protected]">[email protected]</a> | Associate Professor
## Introduction
**What:** A lecture of the "3dasm" course
**Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course)
**Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html)
**How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource.
* If working offline: Go through this notebook and read the book.
* If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book.
* If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book.
**Optional reference (the "bible" by the "bishop"... pun intended 😆) :** Bishop, Christopher M. *Pattern recognition and machine learning*. Springer Verlag, 2006.
**References/resources to create this notebook:**
* [Figure (Car stopping distance)](https://korkortonline.se/en/theory/reaction-braking-stopping/)
* Snippets of code from this awesome [repo](https://github.com/gerdm/prml) by Gerardo Duran-Martin that replicates many figures in Bishop's book
Apologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here.
## **OPTION 1**. Run this notebook **locally in your computer**:
1. Install miniconda3 [here](https://docs.conda.io/en/latest/miniconda.html)
2. Open a command window and create a virtual environment called "3dasm":
```
conda create -n 3dasm python=3 numpy scipy jupyter nb_conda matplotlib pandas scikit-learn rise tensorflow -c conda-forge
```
3. Install [git](https://github.com/git-guides/install-git), open command window & clone the repository to your computer:
```
git clone https://github.com/bessagroup/3dasm_course
```
4. Load jupyter notebook by typing in (anaconda) command window (it will open in your internet browser):
```
conda activate 3dasm
jupyter notebook
```
5. Open notebook (3dasm_course/Lectures/Lecture1/3dasm_Lecture1.ipynb)
**Short note:** My personal environment also has other packages that help me while teaching.
> conda install -n 3dasm -c conda-forge jupyter_contrib_nbextensions hide_code
Then in the 3dasm conda environment:
> jupyter nbextension install --py hide_code --sys-prefix
>
> jupyter nbextension enable --py hide_code
>
> jupyter serverextension enable --py hide_code
>
> jupyter nbextension enable splitcell/splitcell
## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle):
1. go to https://colab.research.google.com
2. login
3. File > Open notebook
4. click on Github (no need to login or authorize anything)
5. paste the git link: https://github.com/bessagroup/3dasm_course
6. click search and then click on the notebook (*3dasm_course/Lectures/Lecture1/3dasm_Lecture1.ipynb*)
```
# Basic plotting tools needed in Python.
import matplotlib.pyplot as plt # import plotting tools to create figures
import numpy as np # import numpy to handle a lot of things!
%config InlineBackend.figure_format = "retina" # render higher resolution images in the notebook
plt.style.use("seaborn") # style for plotting that comes from seaborn
plt.rcParams["figure.figsize"] = (8,4) # rescale figure size appropriately for slides
```
## Outline for today
* Introduction
- Taking a probabilistic perspective on machine learning
* Basics of univariate statistics
- Continuous random variables
- Probabilities vs probability densities
- Moments of a probability distribution
* The mindblowing Bayes' rule
- The rule that spawns almost every ML model (even when we don't realize it)
**Reading material**: This notebook + Chapter 2 until Section 2.3
## Get hyped about Artificial Intelligence...
```
from IPython.display import display, YouTubeVideo, HTML
YouTubeVideo('RNnZwvklwa8', width=512, height=288) # show that slides are interactive:
# rescale video to 768x432 and back to 512x288
```
**Well...** This class *might* not make you break the world (yet!). Let's focus on the fundamentals:
* Probabilistic perspective on machine learning
* Supervised learning (especially regression)
## Machine learning (ML)
* **ML definition**: A computer program that learns from experience $E$ wrt tasks $T$ such that the performance $P$ at those tasks improves with experience $E$.
* We'll treat ML from a **probabilistic perspective**:
- Treat all unknown quantities as **random variables**
* What are random variables?
- Variables endowed with probability distributions!
## The car stopping distance problem
<img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="50%" align="right">
<br></br>
Car stopping distance ${\color{red}y}$ as a function of its velocity ${\color{green}x}$ before it starts braking:
${\color{red}y} = {\color{blue}z} x + \frac{1}{2\mu g} {\color{green}x}^2 = {\color{blue}z} x + 0.1 {\color{green}x}^2$
- ${\color{blue}z}$ is the driver's reaction time (in seconds)
- $\mu$ is the road/tires coefficient of friction (assume $\mu=0.5$)
- $g$ is the acceleration of gravity (assume $g=10$ m/s$^2$).
## The car stopping distance problem
### How to obtain this formula?
$y = d_r + d_{b}$
where $d_r$ is the reaction distance, and $d_b$ is the braking distance.
### Reaction distance $d_r$
$d_r = z x$
with $z$ being the driver's reaction time, and $x$ being the velocity of the car at the start of braking.
## The car stopping distance problem
### Braking distance $d_b$
Kinetic energy of moving car:
$E = \frac{1}{2}m x^2$ where $m$ is the car mass.
Work done by braking:
$W = \mu m g d_b$ where $\mu$ is the coefficient of friction between the road and the tire, $g$ is the acceleration of gravity, and $d_b$ is the car braking distance.
The braking distance follows from $E=W$:
$d_b = \frac{1}{2\mu g}x^2$
Therefore, if we add the reacting distance $d_r$ to the braking distance $d_b$ we get the stopping distance $y$:
$$y = d_r + d_b = z x + \frac{1}{2\mu g} x^2$$
## The car stopping distance problem
<img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right">
$y = {\color{blue}z} x + 0.1 x^2$
The driver's reaction time ${\color{blue}z}$ is a **random variable (rv)**
* Every driver has its own reaction time $z$
* Assume the distribution associated to $z$ is Gaussian with **mean** $\mu_z=1.5$ seconds and **variance** $\sigma_z^2=0.5^2$ seconds$^2$
$$
z \sim \mathcal{N}(\mu_z=1.5,\sigma_z^2=0.5^2)
$$
where $\sim$ means "sampled from", and $\mathcal{N}$ indicates a Gaussian **probability density function (pdf)**
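As a quick illustration (not part of the original slides), "sampled from" literally means that we can draw reaction times, and hence stopping distances, for hypothetical drivers:
```
rng = np.random.default_rng(0)                       # reproducible random number generator
z_samples = rng.normal(loc=1.5, scale=0.5, size=5)   # 5 hypothetical reaction times (s)
x = 20.0                                             # velocity before braking, assumed in m/s
y_samples = z_samples * x + 0.1 * x**2               # corresponding stopping distances (m)
print(z_samples, y_samples)
```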
## Univariate Gaussian <a title="probability density function">pdf</a>
The gaussian <a title="probability density function">pdf</a> is defined as:
$$
\mathcal{N}(z | \mu_z, \sigma_z^2) = \frac{1}{\sqrt{2\pi\sigma_z^2}}e^{-\frac{1}{2\sigma_z^2}(z - \mu_z)^2}
$$
Alternatively, we can write it using the **precision** term $\lambda_z := 1 / \sigma_z^2$ instead of using $\sigma_z^2$:
$$
\mathcal{N}(z | \mu_z, \lambda_z^{-1}) = \frac{\lambda_z^{1/2}}{\sqrt{2\pi}}e^{-\frac{\lambda_z}{2}(z - \mu_z)^2}
$$
Anyway, recall what this <a title="probability density function">pdf</a> looks like...
```
def norm_pdf(z, mu_z, sigma_z2): return 1 / np.sqrt(2 * np.pi * sigma_z2) * np.exp(-(z - mu_z)**2 / (2 * sigma_z2))
zrange = np.linspace(-8, 4, 200) # create a list of 200 z points between z=-8 and z=4
fig, ax = plt.subplots() # create a plot
ax.plot(zrange, norm_pdf(zrange, 0, 1), label=r"$\mu_z=0; \ \sigma_z^2=1$") # plot norm_pdf(z|0,1)
ax.plot(zrange, norm_pdf(zrange, 1.5, 0.5**2), label=r"$\mu_z=1.5; \ \sigma_z^2=0.5^2$") # plot norm_pdf(z|1.5,0.5^2)
ax.plot(zrange, norm_pdf(zrange, -1, 2**2), label=r"$\mu_z=-1; \ \sigma_z^2=2^2$") # plot norm_pdf(z|-1,2^2)
ax.set_xlabel("z", fontsize=20) # create x-axis label with font size 20
ax.set_ylabel("probability density", fontsize=20) # create y-axis label with font size 20
ax.legend(fontsize=15) # create legend with font size 15
ax.set_title("Three different Gaussian pdfs", fontsize=20); # create title with font size 20
```
The <span style="color:green">green</span> curve shows the Gaussian <a title="probability density function">pdf</a> of the <a title="random variable">rv</a> $z$ **conditioned** on the mean $\mu_z=1.5$ and variance $\sigma_z^2=0.5^2$ for the car stopping distance problem.
## Univariate Gaussian <a title="probability density function">pdf</a>
$$
p(z) = \mathcal{N}(z | \mu_z, \sigma_z^2) = \frac{1}{\sqrt{2\pi\sigma_z^2}}e^{-\frac{1}{2\sigma_z^2}(z - \mu_z)^2}
$$
The output of this expression is the **PROBABILITY DENSITY** of $z$ **given** (or conditioned on) a particular $\mu_z$ and $\sigma_z^2$.
* **Important**: Probability Density $\neq$ Probability
So, what is a probability?
## Probability
The probability of an event $A$ is denoted by $\text{Pr}(A)$.
* $\text{Pr}(A)$ means the probability with which we believe event A is true
* An event $A$ is a binary variable saying whether or not some state of the world holds.
Probability is defined such that: $0 \leq \text{Pr}(A) \leq 1$
where $\text{Pr}(A)=1$ if the event will definitely happen and $\text{Pr}(A)=0$ if it definitely will not happen.
## Joint probability
**Joint probability** of two events: $\text{Pr}(A \wedge B)= \text{Pr}(A, B)$
If $A$ and $B$ are **independent**: $\text{Pr}(A, B)= \text{Pr}(A) \text{Pr}(B)$
For example, suppose $z_1$ and $z_2$ are chosen uniformly at random from the set $\mathcal{Z} = \{1, 2, 3, 4\}$.
Let $A$ be the event that $z_1 \in \{1, 2\}$ and $B$ be the event that **another** <a title="random variable">rv</a> denoted as $z_2 \in \{3\}$.
Then we have: $\text{Pr}(A, B) = \text{Pr}(A) \text{Pr}(B) = \frac{1}{2} \cdot \frac{1}{4}$.
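A quick Monte Carlo check (a minimal sketch; the sample size and seed are arbitrary):

```
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z1 = rng.integers(1, 5, size=n)   # uniform draws from {1, 2, 3, 4}
z2 = rng.integers(1, 5, size=n)   # independent draws for the second rv
A = np.isin(z1, [1, 2])           # event A: z1 in {1, 2}
B = (z2 == 3)                     # event B: z2 == 3
print((A & B).mean())             # close to 1/2 * 1/4 = 0.125
```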
## Probability of a union of two events
Probability of event $A$ or $B$ happening is: $\text{Pr}(A \vee B)= \text{Pr}(A) + \text{Pr}(B) - \text{Pr}(A \wedge B)$
If these events are mutually exclusive (they can't happen at the same time):
$$
\text{Pr}(A \vee B)= \text{Pr}(A) + \text{Pr}(B)
$$
For example, suppose an <a title="random variable">rv</a> denoted as $z_1$ is chosen uniformly at random from the set $\mathcal{Z} = \{1, 2, 3, 4\}$.
Let $A$ be the event that $z_1 \in \{1, 2\}$ and $B$ be the event that the **same** <a title="random variable">rv</a> $z_1 \in \{3\}$.
Then we have $\text{Pr}(A \vee B) = \frac{2}{4} + \frac{1}{4}$.
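The same kind of simulation (again only a sketch) confirms the union formula for these mutually exclusive events:

```
import numpy as np

rng = np.random.default_rng(0)
z1 = rng.integers(1, 5, size=100_000)   # uniform draws from {1, 2, 3, 4}
A = np.isin(z1, [1, 2])                 # event A: z1 in {1, 2}
B = (z1 == 3)                           # event B: z1 == 3 (cannot co-occur with A)
print((A | B).mean())                   # close to 2/4 + 1/4 = 0.75
```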
## Conditional probability of one event given another
We define the **conditional probability** of event $B$ happening given that $A$ has occurred as follows:
$$
\text{Pr}(B | A)= \frac{\text{Pr}(A,B)}{\text{Pr}(A)}
$$
This is not defined if $\text{Pr}(A) = 0$, since we cannot condition on an impossible event.
## Conditional independence of one event given another
We say that event $A$ is conditionally independent of event $B$ if we have $\text{Pr}(A | B)= \text{Pr}(A)$
This implies $\text{Pr}(B|A) = \text{Pr}(B)$. Hence, the joint probability becomes $\text{Pr}(A, B) = \text{Pr}(A) \text{Pr}(B)$
The book uses the notation $A \perp B$ to denote this property.
## Coming back to our car stopping distance problem
<img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right">
$y = {\color{blue}z} x + 0.1 x^2$
where $z$ is a **continuous** <a title="random variable">rv</a> such that $z \sim \mathcal{N}(\mu_z=1.5,\sigma_z^2=0.5^2)$.
* What is the probability of an event $Z$ defined by a reaction time $z \leq 0.52$ seconds?
$$
\text{Pr}(Z)=\text{Pr}(z \leq 0.52)= P(z=0.52)
$$
where $P(z)$ denotes the **cumulative distribution function (cdf)**. Note that <a title="cumulative distribution function">cdf</a> is denoted with a capital $P$.
Likewise, we can compute the probability of being in any interval as follows:
$\text{Pr}(a \leq z \leq b)= P(z=b)-P(z=a)$
* But how do we compute the cdf at a particular value $b$, e.g. $P(z=b)$?
## <a title="Cumulative distribution functions">Cdf's</a> result from <a title="probability density functions">pdf's</a>
A <a title="probability density functions">pdf</a> $p(z)$ is defined as the derivative of the <a title="cumulative distribution functions">cdf</a> $P(z)$:
$$
p(z)=\frac{d}{d z}P(z)
$$
So, given a <a title="probability density function">pdf</a> $p(z)$, we can compute the following probabilities:
$$\text{Pr}(z \leq b)=\int_{-\infty}^b p(z) dz = P(b)$$
$$\text{Pr}(z \geq a)=\int_a^{\infty} p(z) dz = 1 - P(a)$$
$$\text{Pr}(a \leq z \leq b)=\int_a^b p(z) dz = P(b) - P(a)$$
**IMPORTANT**: $\int_{-\infty}^{\infty} p(z) dz = 1$
### Some notes about <a title="probability density functions">pdf's</a>
The integration to unity is important!
$$\int_{-\infty}^{\infty} p(z) dz = 1$$
**Remember:** the integral of a <a title="probability density function">pdf</a> leads to a probability, and probabilities cannot be larger than 1.
For example, from this property we can derive the following:
$$
\int_{-\infty}^{\infty} p(z) dz = \int_{-\infty}^{a} p(z) dz + \int_{a}^{\infty} p(z) dz
$$
$$
\Rightarrow \text{Pr}(z \geq a)= 1 - \text{Pr}(z \leq a) = 1 - \text{P}(a) = 1 - \int_{-\infty}^a p(z) dz
$$
In some cases we will work with probability distributions that are **unnormalized**, so this comment is important!
* Being unnormalized means that the probability density of the distribution does not integrate to 1.
* In this case, we cannot call such function a <a title="probability density function">pdf</a>, even though its output is a probability density.
## <a title="Cumulative distribution functions">Cdf's</a> result from <a title="probability density functions">pdf's</a>
Key point?
* Given a <a title="probability density function">pdf</a> $p(z)$, we can compute the probability of a continuous <a title="random variable">rv</a> $z$ being in a finite interval as follows:
$$
\text{Pr}(a \leq z \leq b)=\int_a^b p(z) dz = P(b) - P(a)
$$
As the size of the interval gets smaller, we can write
$$
\text{Pr}\left(z - \frac{dz}{2} \leq z \leq z + \frac{dz}{2}\right) \approx p(z) dz
$$
Intuitively, this says that the probability of the <a title="random variable">rv</a> falling in a small interval around $z$ is approximately the density at $z$ times the width of the interval.
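We can check this numerically for the reaction-time distribution (a minimal sketch using `scipy.stats.norm`; the point and interval width are illustrative choices):

```
from scipy.stats import norm

mu_z, sigma_z = 1.5, 0.5
z0, dz = 1.0, 0.01                                   # illustrative point and small width
exact = norm.cdf(z0 + dz/2, mu_z, sigma_z) - norm.cdf(z0 - dz/2, mu_z, sigma_z)
approx = norm.pdf(z0, mu_z, sigma_z) * dz
print(exact, approx)                                 # the two numbers agree closely
```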
```
from scipy.stats import norm # import from scipy.stats the normal distribution
zrange = np.linspace(-3, 3, 100) # 100 values for plot
fig_std_norm, (ax1, ax2) = plt.subplots(1, 2) # create a plot with 2 subplots side-by-side
ax1.plot(zrange, norm.cdf(zrange, 0, 1), label=r"$\mu_z=0; \ \sigma_z=1$") # plot cdf of standard normal
ax1.set_xlabel("z", fontsize=20)
ax1.set_ylabel("probability", fontsize=20)
ax1.legend(fontsize=15)
ax1.set_title("Standard Gaussian cdf", fontsize=20)
ax2.plot(zrange, norm.pdf(zrange, 0, 1), label=r"$\mu_z=0; \ \sigma_z=1$") # plot pdf of standard normal
ax2.set_xlabel("z", fontsize=20)
ax2.set_ylabel("probability density", fontsize=20)
ax2.legend(fontsize=15)
ax2.set_title("Standard Gaussian pdf", fontsize=20)
fig_std_norm.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots)
```
## Note about scipy.stats
[scipy](https://docs.scipy.org/doc/scipy/index.html) is an open-source software for mathematics, science, and engineering. It's brilliant and widely used for many things!
**In particular**, [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html) is a simple module within scipy that has statistical functions and operations that are very useful. This way, we don't need to code all the functions ourselves. That's why we are using it to plot the cdf and pdf of the Gaussian distribution from now on, and we will use it for other things later.
* In case you are interested, scipy.stats has a nice [tutorial](https://docs.scipy.org/doc/scipy/tutorial/stats.html)
## Coming back to our car stopping distance problem
<img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right">
$y = {\color{blue}z} x + 0.1 x^2$
where $z$ is a continuous <a title="random variable">rv</a> such that $p(z)= \mathcal{N}(z | \mu_z=1.5,\sigma_z^2=0.5^2)$.
* What is the probability of an event $Z$ defined by a reaction time $z \leq 0.52$ seconds?
$$
\text{Pr}(Z) = \text{Pr}(z \leq 0.52) = P(z=0.52) = \int_{-\infty}^{0.52} p(z) dz
$$
```
Pr_Z = norm.cdf(0.52, 1.5, 0.5) # using scipy norm.cdf(z=0.52 | mu_z=1.5, sigma_z=0.5)
print("The probability of event Z is: Pr(Z) = ",round(Pr_Z,3))
z_value = 0.52 # z = 0.52 seconds
zrange = np.linspace(0, 3, 200) # 200 values for plot
fig_car_norm, (ax1, ax2) = plt.subplots(1, 2) # create subplot (two figures in 1)
ax1.plot(zrange, norm.cdf(zrange, 1.5, 0.5), label=r"$\mu_z=1.5; \ \sigma_z=0.5$") # Figure 1 is cdf
ax1.plot(z_value, norm.cdf(z_value, 1.5, 0.5), 'r*',markersize=15, linewidth=2,
label=u'$P(z=0.52~|~\mu_z=1.5, \sigma_z^2=0.5^2)$')
ax1.set_xlabel("z", fontsize=20)
ax1.set_ylabel("probability", fontsize=20)
ax1.legend(fontsize=15)
ax1.set_title("Gaussian cdf of $z$ for car problem", fontsize=20)
ax2.plot(zrange, norm.pdf(zrange, 1.5, 0.5), label=r"$\mu_z=1.5; \ \sigma_z=0.5$") # figure 2 is pdf
ax2.plot(z_value, norm.pdf(z_value, 1.5, 0.5), 'r*', markersize=15, linewidth=2,
label=u'$p(z=0.52~|~\mu_z=1.5, \sigma_z^2=0.5^2)$')
ax2.set_xlabel("z", fontsize=20)
ax2.set_ylabel("probability density", fontsize=20)
ax2.legend(fontsize=15)
ax2.set_title("Gaussian pdf of $z$ for car problem", fontsize=20)
fig_car_norm.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots)
```
### Why is the Gaussian distribution so widely used?
Several reasons:
1. It has two parameters which are easy to interpret, and which capture some of the most basic properties of a distribution, namely its mean and variance.
2. The central limit theorem (Sec. 2.8.6 of the book) tells us that sums of independent random variables have an approximately Gaussian distribution, making it a good choice for modeling residual errors or “noise”.
3. The Gaussian distribution makes the least number of assumptions (has maximum entropy), subject to the constraint of having a specified mean and variance (Sec. 3.4.4 of the book); this makes it a good default choice in many cases.
4. It has a simple mathematical form, which results in easy to implement, but often highly effective, methods.
## Car stopping distance problem
<img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right">
$y = {\color{blue}z} x + 0.1 x^2$
where $z$ is a continuous <a title="random variable">rv</a> such that $z \sim \mathcal{N}(\mu_z=1.5,\sigma_z^2=0.5^2)$.
* What is the **expected** value for the reaction time $z$?
This is not a trick question! It's the mean $\mu_z$, of course!
* But how do we compute the expected value for any distribution?
## Moments of a distribution
### First moment: Expected value or mean
The expected value (mean) of a distribution is the **first moment** of the distribution:
$$
\mathbb{E}[z]= \int_{\mathcal{Z}}z p(z) dz
$$
where $\mathcal{Z}$ indicates the support of the distribution (the $z$ domain).
* Often, $\mathcal{Z}$ is omitted, as it usually ranges from $-\infty$ to $\infty$
* The expected value $\mathbb{E}[z]$ is often denoted by $\mu_z$
As you might expect (pun intended 😆), the expected value is a linear operator:
$$
\mathbb{E}[az+b]= a\mathbb{E}[z] + b
$$
where $a$ and $b$ are fixed variables (NOT rv's).
Additionally, for a set of $n$ rv's, one can show that the expectation of their sum is as follows:
$\mathbb{E}\left[\sum_{i=1}^n z_i\right]= \sum_{i=1}^n \mathbb{E}[z_i]$
If they are **independent**, the expectation of their product is given by
$\mathbb{E}\left[\prod_{i=1}^n z_i\right]= \prod_{i=1}^n \mathbb{E}[z_i]$
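Both properties are easy to verify empirically (a minimal sketch; the constants $a$, $b$ and the distributions below are arbitrary illustrative choices):

```
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(1.5, 0.5, size=1_000_000)       # z ~ N(1.5, 0.5^2)
a, b = 2.0, -3.0                               # fixed constants
print(np.mean(a*z + b), a*np.mean(z) + b)      # both close to 2*1.5 - 3 = 0

z2 = rng.normal(1.0, 1.0, size=1_000_000)      # independent of z
print(np.mean(z*z2), np.mean(z)*np.mean(z2))   # both close to 1.5 * 1.0 = 1.5
```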
## Moments of a distribution
### Second moment (and relation to Variance)
The 2nd moment of a distribution $p(z)$ is:
$$
\mathbb{E}[z^2]= \int_{\mathcal{Z}}z^2 p(z) dz
$$
#### Variance can be obtained from the 1st and 2nd moments
The variance is a measure of the “spread” of the distribution:
$$
\mathbb{V}[z] = \mathbb{E}[(z-\mu_z)^2] = \int (z-\mu_z)^2 p(z) dz = \mathbb{E}[z^2] - \mu_z^2
$$
* It is often denoted by $\sigma_z^2$, the square of the standard deviation, i.e. $\sigma_z^2 = \mathbb{V}[z] = \mathbb{E}[(z-\mu_z)^2]$
#### Elaboration of the variance as a result of the first two moments of a distribution
$$
\begin{align}
\mathbb{V}[z] & = \mathbb{E}[(z-\mu_z)^2] \\
& = \int (z-\mu_z)^2 p(z) dz \\
& = \int z^2 p(z) dz + \mu_z^2 \int p(z) dz - 2\mu_z \int zp(z) dz \\
& = \mathbb{E}[z^2] - \mu_z^2
\end{align}
$$
where $\mu_z = \mathbb{E}[z]$ is the first moment, and $\mathbb{E}[z^2]$ is the second moment.
Therefore, we can also write the second moment of a distribution as
$$\mathbb{E}[z^2] = \sigma_z^2 + \mu_z^2$$
#### Variance and standard deviation properties
The standard deviation is defined as
$ \sigma_z = \text{std}[z] = \sqrt{\mathbb{V}[z]}$
The variance of a shifted and scaled version of a random variable is given by
$\mathbb{V}[a z + b] = a^2\mathbb{V}[z]$
where $a$ and $b$ are fixed variables (NOT rv's).
If we have a set of $n$ independent rv's, the variance of their sum is given by the sum of their variances
$$
\mathbb{V}\left[\sum_{i=1}^n z_i\right] = \sum_{i=1}^n \mathbb{V}[z_i]
$$
The variance of their product can also be derived, as follows:
$$
\begin{align}
\mathbb{V}\left[\prod_{i=1}^n z_i\right] & = \mathbb{E}\left[ \left(\prod_i z_i\right)^2 \right] - \left( \mathbb{E}\left[\prod_i z_i \right]\right)^2\\
& = \mathbb{E}\left[ \prod_i z_i^2 \right] - \left( \prod_i\mathbb{E}\left[ z_i \right]\right)^2\\
& = \prod_i \mathbb{E}\left[ z_i^2 \right] - \prod_i\left( \mathbb{E}\left[ z_i \right]\right)^2\\
& = \prod_i \left( \mathbb{V}\left[ z_i \right] +\left( \mathbb{E}\left[ z_i \right]\right)^2 \right)- \prod_i\left( \mathbb{E}\left[ z_i \right]\right)^2\\
& = \prod_i \left( \sigma_{z,\,i}^2 + \mu_{z,\,i}^2 \right)- \prod_i\mu_{z,\,i}^2 \\
\end{align}
$$
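A quick empirical check of the sum and product formulas (a minimal sketch with two arbitrary independent Gaussians):

```
import numpy as np

rng = np.random.default_rng(0)
z1 = rng.normal(0.0, 1.0, size=1_000_000)   # V[z1] = 1,  E[z1] = 0
z2 = rng.normal(2.0, 3.0, size=1_000_000)   # V[z2] = 9,  E[z2] = 2, independent of z1
print(np.var(z1 + z2))                      # close to 1 + 9 = 10
print(np.var(z1 * z2))                      # close to (1 + 0)*(9 + 4) - 0*4 = 13
```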
## Note about higher-order moments
* The $k$-th moment of a distribution $p(z)$ is defined as the expected value of the $k$-th power of $z$, i.e. $z^k$:
$$
\mathbb{E}[z^k]= \int_{\mathcal{Z}}z^k p(z) dz
$$
## Mode of a distribution
The mode of an <a title="random variable">rv</a> $z$ is the value of $z$ for which $p(z)$ is maximum.
Formally, this is written as,
$$ \mathbf{z}^* = \underset{z}{\mathrm{argmax}}~p(z)$$
If the distribution is multimodal, this may not be unique:
* That's why $\mathbf{z}^*$ is in **bold**, to denote that in general it is a vector that is retrieved!
* However, if the distribution is unimodal (one maximum), like the univariate Gaussian distribution, then it retrieves a scalar $z^*$
Note that even if there is a unique mode, this point may not be a good summary of the distribution.
## Mean vs mode for a non-symmetric distribution
```
# 1. Create a gamma pdf with parameter a = 2.0
from scipy.stats import gamma # import from scipy.stats the Gamma distribution
a = 2.0 # this is the only input parameter needed for this distribution
# Define the support of the distribution (its domain) by using the
# inverse of the cdf (called ppf) to get the lowest z of the plot that
# corresponds to Pr = 0.01 and the highest z of the plot that corresponds
# to Pr = 0.99:
zrange = np.linspace(gamma.ppf(0.01, a), gamma.ppf(0.99, a), 200)
mu_z, var_z = gamma.stats(2.0, moments='mv') # This computes the mean and variance of the pdf
fig_gamma_pdf, ax = plt.subplots() # a trick to save the figure for later use
ax.plot(zrange, gamma.pdf(zrange, a), label=r"$\Gamma(z|a=2.0)$")
ax.set_xlabel("z", fontsize=20)
ax.set_ylabel("probability density", fontsize=20)
ax.legend(fontsize=15)
ax.set_title("Gamma pdf for $a=2.0$", fontsize=20)
plt.close(fig_gamma_pdf) # do not plot the figure now. We will show it in a later cell
# 2. Plot the expected value (mean) for this pdf
ax.plot(mu_z, gamma.pdf(mu_z, a), 'r*', markersize=15, linewidth=2, label=u'$\mu_z = \mathbb{E}[z]$')
# 3. Calculate the mode and plot it
from scipy.optimize import minimize # import minimizer
# Finding the maximum of the gamma pdf can be done by minimizing
# the negative gamma pdf. So, we create a function that outputs
# the negative of the gamma pdf given the parameter a=2.0:
def neg_gamma_given_a(z): return -gamma.pdf(z,a)
# Use the default optimizer of scipy (L-BFGS) to find the
# maximum (by minimizing the negative gamma pdf). Note
# that we need to give an initial guess for the value of z,
# so we can use, for example, z=mu_z:
mode_z = minimize(neg_gamma_given_a,mu_z).x
ax.plot(mode_z, np.max(gamma.pdf(mode_z, a)),'g^', markersize=15,
linewidth=2,label=u'mode $\mathbf{z}^*=\mathrm{argmax}~p(z)$')
ax.legend() # show legend
# Code to generate this Gamma distribution hidden during presentation (it's shown as notes)
print('The mean is ',mu_z) # print the mean calculated for this gamma pdf
print('The mode is approximately ',mode_z) # print the mode
fig_gamma_pdf # show figure of this gamma pdf
```
## The amazing Bayes' rule
<font color='red'>Bayesian</font> <font color='blue'>inference</font> definition:
* <font color='blue'>Inference</font> means “the act of passing from sample data to generalizations, usually with calculated degrees of certainty”.
* <font color='red'>Bayesian</font> is used to refer to inference methods that represent “degrees of certainty” using probability theory, and which leverage Bayes’ rule to update the degree of certainty given data.
**Bayes’ rule** is a formula for computing the probability distribution over possible values of an unknown (or hidden) quantity $z$ given some observed data $y$:
$$
p(z|y) = \frac{p(y|z) p(z)}{p(y)}
$$
Bayes' rule follows automatically from the identity: $p(z|y) p(y) = p(y|z) p(z) = p(y,z) = p(z,y)$
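A tiny discrete illustration (a minimal sketch; the prior and likelihood numbers are made up for this example): suppose a binary hidden quantity $z$ has prior $p(z{=}1)=0.2$, and the data $y=1$ is observed with probability 0.6 when $z=1$ and 0.05 when $z=0$.

```
p_z = {0: 0.8, 1: 0.2}                 # prior p(z)
p_y1_given_z = {0: 0.05, 1: 0.6}       # likelihood p(y=1 | z)

# normalizing constant p(y=1) = sum_z p(y=1|z) p(z)
p_y1 = sum(p_y1_given_z[z] * p_z[z] for z in p_z)

# Bayes' rule: p(z | y=1) = p(y=1|z) p(z) / p(y=1)
posterior = {z: p_y1_given_z[z] * p_z[z] / p_y1 for z in p_z}
print(posterior)                       # {0: 0.25, 1: 0.75}
```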
## The amazing Bayes' rule
* I know... You don't find it very amazing (yet!).
* Wait until you realize that almost all ML methods can be derived from this simple formula
$$
p(z|y) = \frac{p(y|z) p(z)}{p(y)}
$$
### See you next class
Have fun!
<a href="https://colab.research.google.com/github/stephenbeckr/numerical-analysis-class/blob/master/Demos/Ch4_integration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Numerical Integration (quadrature)
- See also Prof. Brown's [integration notebook](https://github.com/cu-numcomp/numcomp-class/blob/master/Integration.ipynb) for CSCI-3656 [](https://colab.research.google.com/github/cu-numcomp/numcomp-class/blob/master/Integration.ipynb)
- Bengt Fornberg's talk [Gregory formulas and improving on the Trapezoidal rule](https://www.colorado.edu/amath/sites/default/files/attached-files/2019_unm_0.pdf)
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import BarycentricInterpolator as interp
# From Table 9.2 in Quarteroni, Sacco and Saleri "Numerical Mathematics" (Springer, 2000)
ClosedNewtonCotesWeights = { 1:[1/2,1/2], 2:[1/3,4/3,1/3], 3:[3/8,9/8,9/8,3/8], 4:[14/45, 64/45, 24/45, 64/45, 14/45],
5:[95/288, 375/288,250/288, 250/288, 375/288, 95/288], 6:[41/140,216/140,27/140,272/140,27/140,216/140,41/140]}
ClosedNewtonCotesNames = {1:"n=1, Trapezoid", 2:"n=2, Simpson's", 3:"n=3, Simpson's 3/8", 4:"n=4, Boole's", 5:"n=5", 6:"n=6"}
f = lambda x : np.cos(x)
F = lambda x : np.sin(x) # dF/dx = f
a,b = -1,2
# Other examples to try
# f = lambda x : x**(3/2)
# F = lambda x : 2/5*x**(5/2)
# a,b = 0,1
# f = lambda x : 1/(1+x**2) # aka Runge's function
# F = lambda x : np.arctan(x)
# a,b = -5,5
I = F(b) - F(a)
print("Integral I is {:.3f}".format(I))
x = np.linspace(a,b)
plt.fill_between( x, f(x), alpha=0.5);
plt.axvline(color='k');
plt.axhline(color='k');
```
### Try the Trapezoidal rule, n = 1
```
n = 1
print("Using the rule: ", ClosedNewtonCotesNames[n] )
weights = ClosedNewtonCotesWeights[n]
(nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h
I_estimate = h*np.dot( weights, f(nodes) )
p = interp(nodes,f(nodes))
x = np.linspace(a,b)
plt.fill_between( x, f(x), alpha=0.5);
plt.axvline(color='k');
plt.axhline(color='k');
plt.plot( x, p(x), 'r-', label="Interpolating polynomial" )
plt.legend()
print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate)))
nodes.tolist(),h,weights
```
### And Simpson's rule, n=2
```
n = 2
print("Using the rule: ", ClosedNewtonCotesNames[n] )
weights = ClosedNewtonCotesWeights[n]
(nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h
I_estimate = h*np.dot( weights, f(nodes) )
p = interp(nodes,f(nodes))
x = np.linspace(a,b)
plt.fill_between( x, f(x), alpha=0.5);
plt.axvline(color='k');
plt.axhline(color='k');
plt.plot( x, p(x), 'r-', label="Interpolating polynomial" )
plt.legend()
print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate)))
nodes.tolist(),h,weights
```
### n=3
```
n = 3
print("Using the rule: ", ClosedNewtonCotesNames[n] )
weights = ClosedNewtonCotesWeights[n]
(nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h
I_estimate = h*np.dot( weights, f(nodes) )
p = interp(nodes,f(nodes))
x = np.linspace(a,b)
plt.fill_between( x, f(x), alpha=0.5);
plt.axvline(color='k');
plt.axhline(color='k');
plt.plot( x, p(x), 'r-', label="Interpolating polynomial" )
plt.legend()
print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate)))
nodes.tolist(),h,weights
```
### n=4
```
n = 4
print("Using the rule: ", ClosedNewtonCotesNames[n] )
weights = ClosedNewtonCotesWeights[n]
(nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h
I_estimate = h*np.dot( weights, f(nodes) )
p = interp(nodes,f(nodes))
x = np.linspace(a,b)
plt.fill_between( x, f(x), alpha=0.5);
plt.axvline(color='k');
plt.axhline(color='k');
plt.plot( x, p(x), 'r-', label="Interpolating polynomial" )
plt.legend()
print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate)))
nodes.tolist(),h,weights
```
### n=5
```
n = 5
print("Using the rule: ", ClosedNewtonCotesNames[n] )
weights = ClosedNewtonCotesWeights[n]
(nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h
I_estimate = h*np.dot( weights, f(nodes) )
p = interp(nodes,f(nodes))
x = np.linspace(a,b)
plt.fill_between( x, f(x), alpha=0.5);
plt.axvline(color='k');
plt.axhline(color='k');
plt.plot( x, p(x), 'r-', label="Interpolating polynomial" )
plt.legend()
print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate)))
nodes.tolist(),h,weights
```
### n=6
```
n = 6
print("Using the rule: ", ClosedNewtonCotesNames[n] )
weights = ClosedNewtonCotesWeights[n]
(nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h
I_estimate = h*np.dot( weights, f(nodes) )
p = interp(nodes,f(nodes))
x = np.linspace(a,b)
plt.fill_between( x, f(x), alpha=0.5);
plt.axvline(color='k');
plt.axhline(color='k');
plt.plot( x, p(x), 'r-', label="Interpolating polynomial" )
plt.legend()
print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate)))
nodes.tolist(),h,weights
```
## Let's try different kinds of functions
```
def tryAllRules( f, F, a, b):
err = []
for n in range(1,6+1):
weights = ClosedNewtonCotesWeights[n]
(nodes,h) = np.linspace(a,b,n+1,retstep=True)
I_estimate = h*np.dot( weights, f(nodes) )
I = F(b) - F(a) # True answer
err.append( abs(I_estimate - I))
return np.array( err )
f = lambda x : np.cos(x)
F = lambda x : np.sin(x) # dF/dx = f
a,b = -1,2
err1 = tryAllRules( f, F, a, b)
# Other examples to try
f = lambda x : x**(3/2)
F = lambda x : 2/5*x**(5/2)
a,b = 0,1
err2 = tryAllRules( f, F, a, b)
f = lambda x : x**(11/2)
F = lambda x : 2/13*x**(13/2)
a,b = 0,1
err3 = tryAllRules( f, F, a, b)
# Runge's function
f = lambda x : 1/(1+x**2)
F = lambda x : np.arctan(x)
a,b = -5,5
err4 = tryAllRules( f, F, a, b)
print("Rows are different n, columns are different functions")
print(np.array2string( np.array([err1,err2,err3,err4]).T, precision=2))
```
### Let's examine Runge's function more closely
$$f(x) = \frac{1}{1+x^2}$$
Our error wasn't going down, but the function is $C^\infty(\mathbb{R})$. Did we make a mistake?
No, our formula was correct; the issue is that the $f'(\xi)$, $f''(\xi)$, etc. terms are very large. One way to think of this issue is that the function has a **singularity** (though it is on the imaginary axis, at $\pm i$).
(Btw, how do you pronounce Runge? It's German, and you can listen to native speakers say it [at Forvo](https://forvo.com/search/Runge/))
```
import sympy
from sympy.abc import x
from sympy import init_printing
from sympy.utilities.lambdify import lambdify
init_printing()
import matplotlib as mpl
mpl.rcParams['mathtext.fontset'] = 'cm'
mpl.rcParams.update({'font.size': 20})
f = lambda x : 1/(1+x**2)
F = lambda x : np.arctan(x)
a,b = -5,5
g = 1/(1+x**2) # symbolic version
gNumerical = lambdify(x,g) # avoid sympy plotting
xGrid = np.linspace(a,b,150)
plt.figure(figsize=(10,8))
plt.plot( xGrid, gNumerical(xGrid),label='$f(x)$' )
#k = 3 # order of derivative
for k in range(1,6):
dg = lambdify(x,sympy.diff(g,x,k))
plt.plot( xGrid, dg(xGrid), label="$f^{("+str(k)+")}(x)$");
plt.axvline(color='k');
plt.axhline(color='k');
#plt.legend(prop={'size': 20});
plt.legend()
plt.title("Runge's function");
#sympy.plot(g); # sympy plots are not so nice
# sympy.plot(sympy.diff(g,x,k));
```
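A common remedy (not used above; shown here only as a hedged sketch) is to subdivide $[a,b]$ and apply a low-order rule on each piece. The composite trapezoidal rule, for example, converges steadily on Runge's function even though the single-interval high-order rules did not:

```
import numpy as np

f = lambda x : 1/(1+x**2)     # Runge's function
F = lambda x : np.arctan(x)
a,b = -5,5
I = F(b) - F(a)               # true integral

for m in [2, 4, 8, 16, 32, 64]:
    xs = np.linspace(a, b, m + 1)
    I_est = np.trapz(f(xs), xs)    # composite trapezoidal rule with m subintervals
    print("m = {:3d}, abs. error = {:.2e}".format(m, abs(I_est - I)))
```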
```
# default_exp resimulation
```
# Match resimulation
> Simulating match outcomes based on the xG of individual shots
```
#hide
from nbdev.showdoc import *
#export
import collections
import itertools
import numpy as np
```
Use Poisson-Binomial distribution calculation from https://github.com/tsakim/poibin
It looks like [there are plans to package the code](https://github.com/tsakim/poibin/pull/8), but for now, just copy+paste the requisite class in here (original code is provided with MIT License).
```
#export
class PoiBin(object):
"""Poisson Binomial distribution for random variables.
This class implements the Poisson Binomial distribution for Bernoulli
trials with different success probabilities. The distribution describes
thus a random variable that is the sum of independent and not identically
distributed single Bernoulli random variables.
The class offers methods for calculating the probability mass function, the
cumulative distribution function, and p-values for right-sided testing.
"""
def __init__(self, probabilities):
"""Initialize the class and calculate the ``pmf`` and ``cdf``.
:param probabilities: sequence of success probabilities :math:`p_i \\in
[0, 1] \\forall i \\in [0, N]` for :math:`N` independent but not
identically distributed Bernoulli random variables
:type probabilities: numpy.array
"""
self.success_probabilities = np.array(probabilities)
self.number_trials = self.success_probabilities.size
self.check_input_prob()
self.omega = 2 * np.pi / (self.number_trials + 1)
self.pmf_list = self.get_pmf_xi()
self.cdf_list = self.get_cdf(self.pmf_list)
# ------------------------------------------------------------------------------
# Methods for the Poisson Binomial Distribution
# ------------------------------------------------------------------------------
def pmf(self, number_successes):
"""Calculate the probability mass function ``pmf`` for the input values.
The ``pmf`` is defined as
.. math::
pmf(k) = Pr(X = k), k = 0, 1, ..., n.
:param number_successes: number of successful trials for which the
probability mass function is calculated
:type number_successes: int or list of integers
"""
self.check_rv_input(number_successes)
return self.pmf_list[number_successes]
def cdf(self, number_successes):
"""Calculate the cumulative distribution function for the input values.
The cumulative distribution function ``cdf`` for a number ``k`` of
successes is defined as
.. math::
cdf(k) = Pr(X \\leq k), k = 0, 1, ..., n.
:param number_successes: number of successful trials for which the
cumulative distribution function is calculated
:type number_successes: int or list of integers
"""
self.check_rv_input(number_successes)
return self.cdf_list[number_successes]
def pval(self, number_successes):
"""Return the p-values corresponding to the input numbers of successes.
The p-values for right-sided testing are defined as
.. math::
pval(k) = Pr(X \\geq k ), k = 0, 1, ..., n.
.. note::
Since :math:`cdf(k) = Pr(X <= k)`, the function returns
.. math::
1 - cdf(X < k) & = 1 - cdf(X <= k - 1)
& = 1 - cdf(X <= k) + pmf(X = k),
k = 0, 1, .., n.
:param number_successes: number of successful trials for which the
p-value is calculated
:type number_successes: int, numpy.array, or list of integers
"""
self.check_rv_input(number_successes)
i = 0
try:
            isinstance(number_successes, collections.abc.Iterable)
pvalues = np.array(number_successes, dtype='float')
# if input is iterable (list, numpy.array):
for k in number_successes:
pvalues[i] = 1. - self.cdf(k) + self.pmf(k)
i += 1
return pvalues
except TypeError:
# if input is an integer:
if number_successes == 0:
return 1
else:
return 1 - self.cdf(number_successes - 1)
# ------------------------------------------------------------------------------
# Methods to obtain pmf and cdf
# ------------------------------------------------------------------------------
def get_cdf(self, event_probabilities):
"""Return the values of the cumulative density function.
Return a list which contains all the values of the cumulative
density function for :math:`i = 0, 1, ..., n`.
:param event_probabilities: array of single event probabilities
:type event_probabilities: numpy.array
"""
cdf = np.empty(self.number_trials + 1)
cdf[0] = event_probabilities[0]
for i in range(1, self.number_trials + 1):
cdf[i] = cdf[i - 1] + event_probabilities[i]
return cdf
def get_pmf_xi(self):
"""Return the values of the variable ``xi``.
The components ``xi`` make up the probability mass function, i.e.
:math:`\\xi(k) = pmf(k) = Pr(X = k)`.
"""
chi = np.empty(self.number_trials + 1, dtype=complex)
chi[0] = 1
half_number_trials = int(
self.number_trials / 2 + self.number_trials % 2)
# set first half of chis:
chi[1:half_number_trials + 1] = self.get_chi(
np.arange(1, half_number_trials + 1))
# set second half of chis:
chi[half_number_trials + 1:self.number_trials + 1] = np.conjugate(
chi[1:self.number_trials - half_number_trials + 1] [::-1])
chi /= self.number_trials + 1
xi = np.fft.fft(chi)
if self.check_xi_are_real(xi):
xi = xi.real
else:
raise TypeError("pmf / xi values have to be real.")
xi += np.finfo(type(xi[0])).eps
return xi
def get_chi(self, idx_array):
"""Return the values of ``chi`` for the specified indices.
:param idx_array: array of indices for which the ``chi`` values should
be calculated
:type idx_array: numpy.array
"""
# get_z:
exp_value = np.exp(self.omega * idx_array * 1j)
xy = 1 - self.success_probabilities + \
self.success_probabilities * exp_value[:, np.newaxis]
# sum over the principal values of the arguments of z:
argz_sum = np.arctan2(xy.imag, xy.real).sum(axis=1)
# get d value:
exparg = np.log(np.abs(xy)).sum(axis=1)
d_value = np.exp(exparg)
# get chi values:
chi = d_value * np.exp(argz_sum * 1j)
return chi
# ------------------------------------------------------------------------------
# Auxiliary functions
# ------------------------------------------------------------------------------
def check_rv_input(self, number_successes):
"""Assert that the input values ``number_successes`` are OK.
The input values ``number_successes`` for the random variable have to be
integers, greater or equal to 0, and smaller or equal to the total
number of trials ``self.number_trials``.
:param number_successes: number of successful trials
:type number_successes: int or list of integers """
try:
for k in number_successes:
assert (type(k) == int or type(k) == np.int64), \
"Values in input list must be integers"
assert k >= 0, 'Values in input list cannot be negative.'
assert k <= self.number_trials, \
'Values in input list must be smaller or equal to the ' \
'number of input probabilities "n"'
except TypeError:
assert (type(number_successes) == int or \
type(number_successes) == np.int64), \
'Input value must be an integer.'
assert number_successes >= 0, "Input value cannot be negative."
assert number_successes <= self.number_trials, \
'Input value cannot be greater than ' + str(self.number_trials)
return True
@staticmethod
def check_xi_are_real(xi_values):
"""Check whether all the ``xi``s have imaginary part equal to 0.
The probabilities :math:`\\xi(k) = pmf(k) = Pr(X = k)` have to be
positive and must have imaginary part equal to zero.
:param xi_values: single event probabilities
:type xi_values: complex
"""
return np.all(xi_values.imag <= np.finfo(float).eps)
def check_input_prob(self):
"""Check that all the input probabilities are in the interval [0, 1]."""
if self.success_probabilities.shape != (self.number_trials,):
raise ValueError(
"Input must be an one-dimensional array or a list.")
if not np.all(self.success_probabilities >= 0):
raise ValueError("Input probabilities have to be non negative.")
if not np.all(self.success_probabilities <= 1):
raise ValueError("Input probabilities have to be smaller than 1.")
#export
def poisson_binomial_pmf(probs, xs):
return PoiBin(probs).pmf(xs)
def resimulate_match(shots, up_to=26, min_xg=0.0001, **kwargs):
"""
'Resimulate' a match based on xG. Takes a list of maps, where each map
represents a shot has and has 'is_home' (bool) and 'xg' (float) keys.
"""
# Prevent potential underflow
home_xgs = [max(s['xg'], min_xg) for s in shots if s['is_home']]
away_xgs = [max(s['xg'], min_xg) for s in shots if not s['is_home']]
home_scores = list(range(min(len(home_xgs) + 1, up_to)))
away_scores = list(range(min(len(away_xgs) + 1, up_to)))
home_probs = dict(zip(home_scores, poisson_binomial_pmf(home_xgs, home_scores)))
away_probs = dict(zip(away_scores, poisson_binomial_pmf(away_xgs, away_scores)))
scores = []
for h, a in itertools.product(range(up_to), repeat=2):
home_prob = home_probs.get(h, 0)
away_prob = away_probs.get(a, 0)
scores.append({
'home_goals': h,
'away_goals': a,
'home_probability': home_prob,
'away_probability': away_prob,
'probability': home_prob*away_prob,
**kwargs
})
# Keep everything up to 4-4; filter out P == 0 results above that
return [
s for s in scores
if s['probability'] > 0
or (s['home_goals'] < 5 and s['away_goals'] < 5)
]
def extract_prob(probs, home_goals, away_goals):
filtered = [p for p in probs if p['home_goals'] == home_goals and p['away_goals'] == away_goals]
if len(filtered) == 0:
return 0
return filtered[0]['probability']
probs = resimulate_match([
{'is_home': True, 'xg': 0.1}
])
assert np.isclose(extract_prob(probs, 1, 0), 0.1)
shots = [
{"is_home": False, "xg": 0.030929630622267723},
{"is_home": False, "xg": 0.021505167707800865},
{"is_home": False, "xg": 0.013733051717281342},
{"is_home": False, "xg": 0.06314441561698914},
]
probs = resimulate_match(shots)
assert np.isclose(
extract_prob(probs, 0, 4),
    np.prod([s['xg'] for s in shots])
)
```
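A hedged usage sketch (not part of the original code above): summing the score-line probabilities returned by `resimulate_match` into home-win / draw / away-win probabilities.

```
def outcome_probabilities(scores):
    """Aggregate score-line probabilities into match-outcome probabilities."""
    home = sum(s['probability'] for s in scores if s['home_goals'] > s['away_goals'])
    draw = sum(s['probability'] for s in scores if s['home_goals'] == s['away_goals'])
    away = sum(s['probability'] for s in scores if s['home_goals'] < s['away_goals'])
    return {'home_win': home, 'draw': draw, 'away_win': away}

print(outcome_probabilities(probs))   # `probs` comes from the cell above
```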
# Document Classification & Clustering - Lecture
What could we do with the document-term-matrices (dtm[s]) created in the previous notebook? We could visualize them or train an algorithm to do some specific task. We have covered both classification and clustering before, so we won't focus on the particulars of algorithms. Instead we'll focus on the unique problems of dealing with text input for these models.
## Contents
* [Part 1](#p1): Vectorize a whole Corpus
* [Part 2](#p2): Tune the vectorizer
* [Part 3](#p3): Apply Vectorizer to Classification problem
* [Part 4](#p4): Introduce topic modeling on text data
**Business Case**: Your managers at Smartphone Inc. have asked you to develop a system to bucket text messages into two categories: **spam** and **not spam (ham)**. The system will be implemented on your company's products to help users identify suspicious texts.
# Spam Filter - Count Vectorization Method
```
import pandas as pd
import numpy as np
pd.set_option('display.max_colwidth', 200)
```
**Import the data and take a look at it**
```
def load():
url = "https://raw.githubusercontent.com/sokjc/BayesNotBaes/master/sms.tsv"
df = pd.read_csv(url, sep='\t', header=None,
names=['label', 'msg'])
df = df.rename(columns={"msg":"text"})
# encode target
df['label_num'] = df['label'].map({'ham': 0, 'spam': 1})
return df
pd.set_option('display.max_colwidth', 200)
df = load()
df.tail()
```
Notice that this text isn't as coherent as the job listings. We'll proceed like normal though.
What is the ratio of Spam to Ham messages?
```
df['label'].value_counts()
df['label'].value_counts(normalize=True)
```
**Model Validation - Train Test Split** (Cross Validation would be better here)
```
from sklearn.model_selection import train_test_split
X = df['text']
y = df['label_num']
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.2, random_state=812)
print(X_train.shape,
X_test.shape,
y_train.shape,
y_test.shape, sep='\n')
```
**Count Vectorizer**
Today we're just going to let Scikit-Learn do our text cleaning and preprocessing for us.
Let's run our vectorizer on our text messages and take a peek at the tokenization of the vocabulary.
```
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(max_features=None, ngram_range=(1,1),
stop_words='english')
vectorizer.fit(X_train)
print(vectorizer.get_feature_names()[300:325])
```
Now we'll complete the vectorization with `.transform()`
```
train_word_counts = vectorizer.transform(X_train)
# not necessary to save to a dataframe, but helpful for previewing
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(),
columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()
```
We also need to vectorize our `X_test` data, but **we need to use the same vocabulary as the training dataset**, so we'll just call `.transform()` on `X_test` to get our `X_test_vectorized`
```
test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(),
columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```
Let's run some classification models and see what kind of accuracy we can get!
# Model Selection
```
from sklearn.metrics import accuracy_score
def assess_model(model, X_train, X_test,
y_train, y_test, vect_type='Count'):
model.fit(X_train, y_train)
train_predictions = model.predict(X_train)
test_predictions = model.predict(X_test)
result = {}
result['model'] = str(model).split('(')[0]
result['acc_train'] = accuracy_score(y_train, train_predictions)
result['acc_test'] = accuracy_score(y_test, test_predictions)
result['vect_type'] = vect_type
print(result)
return result
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB # Multinomial Naive Bayes
from sklearn.ensemble import RandomForestClassifier
models = [LogisticRegression(random_state=42, solver='lbfgs'),
MultinomialNB(),
RandomForestClassifier()]
results = []
for model in models:
result = assess_model(
model,
X_train_vectorized, X_test_vectorized, y_train, y_test)
results.append(result)
pd.DataFrame.from_records(results)
```
# Spam Filter - TF-IDF Vectorization Method
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(
max_features=None, ngram_range=(1,1), stop_words='english')
# fit to train
vectorizer.fit(X_train)
print(vectorizer)
# apply to train
train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(),
columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()
# apply to test
test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(),
columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
models = [LogisticRegression(random_state=42, solver='lbfgs'),
MultinomialNB(),
RandomForestClassifier()]
for model in models:
result = assess_model(
model,
X_train_vectorized, X_test_vectorized, y_train, y_test,
vect_type='Tfidf')
results.append(result)
pd.DataFrame.from_records(results)
```
# Sentiment Analysis
The objective of **sentiment analysis** is to take a text phrase and determine if its sentiment is: Postive, Neutral, or Negative.
Suppose that you wanted to use NLP to classify reviews for your company's products as either positive, neutral, or negative. Maybe you don't trust the star ratings left by the users and you want an additional measure of sentiment from each review - maybe you would use this as a feature generation technique for additional modeling, or to identify disgruntled customers and reach out to them to improve your customer service, etc. Sentiment Analysis has also been used heavily in stock market price estimation by trying to track the sentiment of the tweets of individuals after breaking news comes out about a company.
Does every word in each review contribute to its overall sentiment? Not really. Stop words for example don't really tell us much about the overall sentiment of the text, so just like we did before, we will discard them.
### NLTK Movie Review Sentiment Analysis
`pip install -U nltk`
```
import random
import nltk
def load_movie_reviews():
from nltk.corpus import movie_reviews
nltk.download('movie_reviews')
nltk.download('stopwords')
print("Total reviews:", len(movie_reviews.fileids()))
print("Positive reviews:", len(movie_reviews.fileids('pos')))
print("Negative reviews:", len(movie_reviews.fileids('neg')))
# Get Reviews and randomize
reviews = [(list(movie_reviews.words(fileid)), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)]
random.shuffle(reviews)
documents = []
sentiments = []
for review in reviews:
# Add sentiment to list
if review[1] == "pos":
sentiments.append(1)
else:
sentiments.append(0)
# Add text to list
review_text = " ".join(review[0])
documents.append(review_text)
df = pd.DataFrame({"text": documents,
"sentiment": sentiments})
return df
df = load_movie_reviews()
df.head()
```
### Train Test Split
```
X = df['text']
y = df['sentiment']
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.2, random_state=42)
```
# Sentiment Analysis - CountVectorizer
## Generate vocabulary from train dataset
```
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(max_features=None, ngram_range=(1,1),
stop_words='english')
vectorizer.fit(X_train)
train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(),
columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()
test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```
### Model Selection
```
models = [LogisticRegression(random_state=42, solver='lbfgs'),
MultinomialNB(),
RandomForestClassifier()]
results = []
for model in models:
result = assess_model(
model,
X_train_vectorized, X_test_vectorized, y_train, y_test,
vect_type='Count')
results.append(result)
pd.DataFrame.from_records(results)
```
# Sentiment Analysis - tfidfVectorizer
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(max_features=2000, ngram_range=(1,2),
min_df = 5, max_df = .80,
stop_words='english')
vectorizer.fit(X_train)
train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()
test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(),
columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```
### Model Selection
```
for model in models:
result = assess_model(
model,
X_train_vectorized, X_test_vectorized, y_train, y_test,
vect_type='tfidf')
results.append(result)
pd.DataFrame.from_records(results)
```
# Using NLTK to clean the data
### Importing the data fresh to avoid variable collisions
```
df = load_movie_reviews()
```
### Cleaning function to apply to each document
```
from nltk.corpus import stopwords
import string
# turn a doc into clean tokens
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# remove punctuation from each token
table = str.maketrans('', '', string.punctuation)
tokens = [w.translate(table) for w in tokens]
# remove remaining tokens that are not alphabetic
tokens = [word for word in tokens if word.isalpha()]
# filter out stop words
stop_words = set(stopwords.words('english'))
tokens = [w for w in tokens if not w in stop_words]
# filter out short tokens
tokens = [word for word in tokens if len(word) > 1]
return tokens
df_nltk = pd.DataFrame()
df_nltk['text'] = df.text.apply(clean_doc)
df_nltk['sentiment'] = df.sentiment
df_nltk.head()
```
### Reformat reviews for sklearn
```
documents = []
for review in df_nltk.text:
review = " ".join(review)
documents.append(review)
sentiment = list(df_nltk.sentiment)
new_df = pd.DataFrame({'text': documents, 'sentiment': sentiment})
new_df.head()
```
### Train Test Split
```
X = new_df.text
y = new_df.sentiment
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.2, random_state=42)
```
### Vectorize the reviews
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(max_features=None, ngram_range=(1,1),
stop_words='english')
vectorizer.fit(X_train)
train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(),
columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()
test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```
### Model Selection
```
models = [LogisticRegression(random_state=42, solver='lbfgs'),
MultinomialNB(),
RandomForestClassifier()]
results = []
for model in models:
result = assess_model(
model,
X_train_vectorized, X_test_vectorized, y_train, y_test,
vect_type='Tfidf')
results.append(result)
pd.DataFrame.from_records(results)
# import xgboost as xgb
from xgboost.sklearn import XGBClassifier

clf = XGBClassifier(
    # hyperparameters left at their defaults here
    n_jobs = -1,
)

clf.fit(X_train_vectorized, y_train, eval_metric = 'auc')

# evaluate the fitted model on the held-out test set
print('XGBoost test accuracy:', accuracy_score(y_test, clf.predict(X_test_vectorized)))
```
# PyTorch: Tabular Classify Binary

```
import torch
import torch.nn as nn
from torch import optim
import torchmetrics
from sklearn.preprocessing import LabelBinarizer, StandardScaler
import aiqc
from aiqc import datum
```
---
## Example Data
Reference [Example Datasets](example_datasets.ipynb) for more information.
```
df = datum.to_pandas('sonar.csv')
df.head()
```
---
## a) High-Level API
Reference [High-Level API Docs](api_high_level.ipynb) for more information including how to work with non-tabular data.
```
splitset = aiqc.Pipeline.Tabular.make(
df_or_path = df
, dtype = None
, feature_cols_excluded = 'object'
, feature_interpolaters = None
, feature_window = None
, feature_encoders = dict(
sklearn_preprocess = StandardScaler()
, dtypes = ['float64']
)
, feature_reshape_indices = None
, label_column = 'object'
, label_interpolater = None
, label_encoder = dict(sklearn_preprocess = LabelBinarizer(sparse_output=False))
, size_test = 0.12
, size_validation = 0.22
, fold_count = None
, bin_count = None
)
def fn_build(features_shape, label_shape, **hp):
model = nn.Sequential(
nn.Linear(features_shape[0], 12),
nn.BatchNorm1d(12,12),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(12, label_shape[0]),
nn.Sigmoid()
)
return model
def fn_train(model, loser, optimizer, samples_train, samples_evaluate, **hp):
## --- Prepare mini batches for analysis ---
batched_features, batched_labels = aiqc.torch_batcher(
samples_train['features'], samples_train['labels'],
batch_size=5, enforce_sameSize=False, allow_1Sample=False
)
## --- Metrics ---
acc = torchmetrics.Accuracy()
# Mirrors `keras.model.History.history` object.
history = {
'loss':list(), 'accuracy': list(),
'val_loss':list(), 'val_accuracy':list()
}
## --- Training loop ---
epochs = hp['epoch_count']
for epoch in range(epochs):
## --- Batch training ---
for i, batch in enumerate(batched_features):
# Make raw (unlabeled) predictions.
batch_probability = model(batched_features[i])
batch_loss = loser(batch_probability, batched_labels[i])
# Backpropagation.
optimizer.zero_grad()
batch_loss.backward()
optimizer.step()
## --- Epoch metrics ---
# Overall performance on training data.
train_probability = model(samples_train['features'])
train_loss = loser(train_probability, samples_train['labels'])
train_acc = acc(train_probability, samples_train['labels'].to(torch.short))
history['loss'].append(float(train_loss))
history['accuracy'].append(float(train_acc))
# Performance on evaluation data.
eval_probability = model(samples_evaluate['features'])
eval_loss = loser(eval_probability, samples_evaluate['labels'])
eval_acc = acc(eval_probability, samples_evaluate['labels'].to(torch.short))
history['val_loss'].append(float(eval_loss))
history['val_accuracy'].append(float(eval_acc))
return model, history
```
Optional, will be automatically selected based on `analysis_type` if left as `None`.
```
def fn_optimize(model, **hp):
optimizer = optim.Adamax(
model.parameters()
, lr=hp['learning_rate']
)
return optimizer
hyperparameters = {
"learning_rate": [0.01, 0.005]
, "epoch_count": [50]
}
queue = aiqc.Experiment.make(
library = "pytorch"
, analysis_type = "classification_binary"
, fn_build = fn_build
, fn_train = fn_train
, splitset_id = splitset.id
, repeat_count = 2
, hide_test = False
, hyperparameters = hyperparameters
, fn_lose = None #optional/ automated
, fn_optimize = fn_optimize #optional/ automated
, fn_predict = None #optional/ automated
, foldset_id = None
)
queue.run_jobs()
```
For more information on visualization of performance metrics, reference the [Visualization & Metrics](visualization.html) documentation.
---
## b) Low-Level API
Reference [Low-Level API Docs](api_low_level.ipynb) for more information including how to work with non-tabular data and defining optimizers.
```
dataset = aiqc.Dataset.Tabular.from_pandas(df)
label_column = 'object'
label = dataset.make_label(columns=[label_column])
labelcoder = label.make_labelcoder(
sklearn_preprocess = LabelBinarizer(sparse_output=False)
)
feature = dataset.make_feature(exclude_columns=[label_column])
encoderset = feature.make_encoderset()
featurecoder_0 = encoderset.make_featurecoder(
sklearn_preprocess = StandardScaler()
, dtypes = ['float64']
)
splitset = aiqc.Splitset.make(
feature_ids = [feature.id]
, label_id = label.id
, size_test = 0.22
, size_validation = 0.12
)
def fn_build(features_shape, label_shape, **hp):
model = nn.Sequential(
nn.Linear(features_shape[0], 12),
nn.BatchNorm1d(12,12),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(12, label_shape[0]),
nn.Sigmoid()
)
return model
def fn_train(model, loser, optimizer, samples_train, samples_evaluate, **hp):
## --- Prepare mini batches for analysis ---
batched_features, batched_labels = aiqc.torch_batcher(
samples_train['features'], samples_train['labels'],
batch_size=5, enforce_sameSize=False, allow_1Sample=False
)
## --- Metrics ---
acc = torchmetrics.Accuracy()
# Mirrors `keras.model.History.history` object.
history = {
'loss':list(), 'accuracy': list(),
'val_loss':list(), 'val_accuracy':list()
}
## --- Training loop ---
epochs = hp['epoch_count']
for epoch in range(epochs):
## --- Batch training ---
for i, batch in enumerate(batched_features):
# Make raw (unlabeled) predictions.
batch_probability = model(batched_features[i])
batch_loss = loser(batch_probability, batched_labels[i])
# Backpropagation.
optimizer.zero_grad()
batch_loss.backward()
optimizer.step()
## --- Epoch metrics ---
# Overall performance on training data.
train_probability = model(samples_train['features'])
train_loss = loser(train_probability, samples_train['labels'])
train_acc = acc(train_probability, samples_train['labels'].to(torch.short))
history['loss'].append(float(train_loss))
history['accuracy'].append(float(train_acc))
# Performance on evaluation data.
eval_probability = model(samples_evaluate['features'])
eval_loss = loser(eval_probability, samples_evaluate['labels'])
eval_acc = acc(eval_probability, samples_evaluate['labels'].to(torch.short))
history['val_loss'].append(float(eval_loss))
history['val_accuracy'].append(float(eval_acc))
return model, history
```
Optional, will be automatically selected based on `analysis_type` if left as `None`.
```
def fn_optimize(model, **hp):
optimizer = optim.Adamax(
model.parameters()
, lr=hp['learning_rate']
)
return optimizer
hyperparameters = {
"learning_rate": [0.01, 0.005]
, "epoch_count": [50]
}
algorithm = aiqc.Algorithm.make(
library = "pytorch"
, analysis_type = "classification_binary"
, fn_build = fn_build
, fn_train = fn_train
, fn_optimize = fn_optimize
)
hyperparamset = algorithm.make_hyperparamset(
hyperparameters = hyperparameters
)
queue = algorithm.make_queue(
splitset_id = splitset.id
, hyperparamset_id = hyperparamset.id
, repeat_count = 1
)
queue.run_jobs()
```
For more information on visualization of performance metrics, reference the [Visualization & Metrics](visualization.html) documentation.
# Hinge Loss
In this project you will be implementing linear classifiers, beginning with the Perceptron algorithm. You will begin by writing your loss function, a hinge-loss function. For this function you are given the parameters of your model, θ and θ_0.
Additionally, you are given a feature matrix in which the rows are feature vectors and the columns are individual features, and a vector of labels representing the actual sentiment of the corresponding feature vector.
1. First, implement the basic hinge loss calculation on a single data point. Instead of the entire feature matrix, you are given one row, representing the feature vector of a single data sample, and its label of +1 or -1 representing the ground truth sentiment of that data sample.
`def hinge_loss_single(feature_vector, label, theta, theta_0)`
- `feature_vector` - A numpy array describing the given data point.
- `label` - A real valued number, the correct classification of the data point.
- `theta` - A numpy array describing the linear classifier.
- `theta_0` - A real valued number representing the offset parameter.

Returns: A real number representing the hinge loss associated with the given data point and parameters.
```
import numpy as np
feature_vector= np.array([1, 2])
label= 1
theta= np.array([-1, 1])
theta_0= -0.2
def hinge_loss_single(feature_vector, label, theta, theta_0):
    # margin = label * (x . theta + theta_0); the loss is zero when the margin is at least 1
    if label * (np.dot(feature_vector, theta) + theta_0) >= 1:
        loss = 0
    else:
        loss = 1 - (label * (np.dot(theta, feature_vector) + theta_0))
    return loss
```
# The Complete Hinge Loss
Now it's time to implement the complete hinge loss for a full set of data. Your input will be a full feature matrix this time, and you will have a vector of corresponding labels. The kth row of the feature matrix corresponds to the kth element of the labels vector. This function should return the appropriate loss of the classifier on the given dataset.
```
def hinge_loss_full(feature_matrix, labels, theta, theta_0):
total_loss=[]
for i, x in enumerate(feature_matrix):
if (labels[i]*(np.dot(theta, feature_matrix[i]) + theta_0)) >= 1:
loss= 0
else:
loss= 1 - (labels[i]*(np.dot(theta, feature_matrix[i])+ theta_0))
total_loss.append(loss)
return sum(total_loss)/len(feature_matrix)
feature_matrix = np.array([[1, 2], [1, -1]])
label, theta, theta_0 = np.array([1, 1]), np.array([-1, 1]), -0.2
hinge_loss_full(feature_matrix, label, theta, theta_0)
```
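For reference, a vectorized sketch that is equivalent to the loop above (assuming the same inputs) and avoids the explicit Python loop:

```
def hinge_loss_full_vectorized(feature_matrix, labels, theta, theta_0):
    # one margin per data point: label * (x . theta + theta_0)
    margins = labels * (feature_matrix @ theta + theta_0)
    # hinge loss is max(0, 1 - margin), averaged over the dataset
    return np.maximum(0.0, 1.0 - margins).mean()

print(hinge_loss_full_vectorized(feature_matrix, label, theta, theta_0))  # same result as above
```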
(Feedforward)=
# Chapter 8 -- Feedforward
Let's take a look at how feedforward is processed in a three layers neural net.
<img src="images/feedForward.PNG" width="500">
Figure 8.1
From the figure 8.1 above, we know that the two input values for the first and the second neuron in the hidden layer are
$$
h_1^{(1)} = w_{11}^{(1)}*x_1 + w_{21}^{(1)}*x_2 + w_{31}^{(1)}*x_3+ w_{41}^{(1)}*1
$$ (eq8_1)
$$
h_2^{(1)} = w_{12}^{(1)}*x_1 + w_{22}^{(1)}*x_2 + w_{32}^{(1)}*x_3+ w_{42}^{(1)}*1
$$ (eq8_2)
where the $w^{(n)}_{4m}$ term is the bias term expressed as a weight.
To simplify the two equations above, we can use matrix notation
$$
H^{(1)} = [h_1^{(1)} \;\; h_2^{(1)}] = [x_1 \;\; x_2 \;\; x_3 \;\; 1]
\begin{bmatrix}
w^{(1)}_{11} & w^{(1)}_{12} \\
w^{(1)}_{21} & w^{(1)}_{22} \\
w^{(1)}_{31} & w^{(1)}_{32} \\
w^{(1)}_{41} & w^{(1)}_{42}
\end{bmatrix}
$$ (eq8_3)
Applying the activation function $\sigma$ to these two values gives the outputs of the hidden layer
$$
\sigma(H^{(1)}) = [\sigma(h_1^{(1)}) \;\; \sigma( h_2^{(1)})]
$$ (eq8_4)
These, in turn, become the input values for the next layer (the output layer)
$$
h^{(2)} = w^{(2)}_{11}* \sigma(h^{(1)}_1)+w^{(2)}_{21} *\sigma(h^{(1)}_2)+w^{(2)}_{31}*1
$$ (eq8_5)
Again, we can simplify this equation by using matrix notation
$$
H^{(2)} = [\sigma(h_1^{(1)}) \;\;\sigma(h_2^{(1)}) \; \; 1]
\begin{bmatrix}
w^{(2)}_{11} \\
w^{(2)}_{21} \\
w^{(2)}_{31}
\end{bmatrix}
$$ (eq8_6)
Then we pass this value $h^{(2)}$ through the sigmoid function in the output layer to obtain the prediction
$$
\hat{y} = \sigma(h^{(2)})
$$ (eq8_7)
Putting the equations of all three layers together, we have
$$
\hat{y} = \sigma(\sigma([x_1 \;\; x_2 \;\; x_3 \;\; 1]
\begin{bmatrix}
w^{(1)}_{11} & w^{(1)}_{12} \\
w^{(1)}_{21} & w^{(1)}_{22} \\
w^{(1)}_{31} & w^{(1)}_{32} \\
w^{(1)}_{41} & w^{(1)}_{42}
\end{bmatrix})
\begin{bmatrix}
w^{(2)}_{11} \\
w^{(2)}_{21} \\
w^{(2)}_{31}
\end{bmatrix})
$$ (eq8_8)
Or, absorbing the bias terms (the constant 1 appended to each layer's input) into the weight matrices, we can simplify it to
$$
\hat{y} = \sigma(\sigma(xW^{(1)})W^{(2)})
$$ (eq8_9)
This is the feedforward process: given the known weights $W$ and the input $x$, we calculate the prediction $\hat{y}$.
Finally, it's easy to write code computing the output from a Network instance. We begin by defining the sigmoid function:
```
import numpy as np

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))
```
Note that when the input `z` is a vector or a NumPy array, NumPy automatically applies the sigmoid function elementwise, that is, in vectorized form.
We then add a feedforward method to the Network class, which, given an input a for the network, returns the corresponding output:
```
def feedforward(self, a):
    """Return the output of the network for the input vector ``a``."""
    # Apply each layer's weights and bias in turn, followed by the sigmoid
    for b, w in zip(self.biases, self.weights):
        a = sigmoid(np.dot(w, a)+b)
    return a
```
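As a quick illustration (not from the original text), here is a minimal stand-alone sketch of the same feedforward computation for the 3-2-1 network of Figure 8.1, with randomly chosen weights and the biases kept separate, as in the `Network` class:
```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weight matrices for the 3-2-1 net of Figure 8.1, biases kept separate.
# Shapes follow the (output_size, input_size) convention used by feedforward().
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), rng.normal(size=(2, 1))   # input -> hidden
W2, b2 = rng.normal(size=(1, 2)), rng.normal(size=(1, 1))   # hidden -> output

x = np.array([[0.5], [0.1], [0.9]])   # column vector of inputs x1, x2, x3

# Feedforward as in eq. (8.9): y_hat = sigma(sigma(x W1) W2),
# written here with the weights on the left and the biases added explicitly.
hidden = sigmoid(np.dot(W1, x) + b1)
y_hat = sigmoid(np.dot(W2, hidden) + b2)
print(y_hat)
```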

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/Spark%20v2.7.6%20Notebooks/21.Gender_Classifier.ipynb)
# 21. Gender Classifier
**Gender Classifier** detects the gender of the patient in the clinical document.
It can classify the documents into `Female`, `Male` and `Unknown`.
- '**classifierdl_gender_sbert**' (works with the licensed `sbiobert_base_cased_mli` embeddings)
It has been trained on more than four thousand clinical documents (radiology reports, pathology reports, clinical visits, etc.) that were annotated internally.
## Colab Setup
```
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh -p 2.4.4
import json
import os
from pyspark.ml import Pipeline,PipelineModel
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import sparknlp
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print (sparknlp.version())
print (sparknlp_jsl.version())
spark
# if you want to start the session with custom params as in the start function above
# (note: `version` and `jsl_version` must be set to the desired Spark NLP and
#  Spark NLP for Healthcare versions before calling this function)
def start(secret):
builder = SparkSession.builder \
.appName("Spark NLP Licensed") \
.master("local[*]") \
.config("spark.driver.memory", "16G") \
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
.config("spark.kryoserializer.buffer.max", "2000M") \
.config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.11:"+version) \
.config("spark.jars", "https://pypi.johnsnowlabs.com/"+secret+"/spark-nlp-jsl-"+jsl_version+".jar")
return builder.getOrCreate()
#spark = start(secret)
```
# Gender Classifier Pipeline with **sbert**
```
document = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sbert_embedder = BertSentenceEmbeddings().pretrained("sbiobert_base_cased_mli", 'en', 'clinical/models')\
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")\
.setMaxSentenceLength(512)
gender_classifier = ClassifierDLModel.pretrained( 'classifierdl_gender_sbert', 'en', 'clinical/models') \
.setInputCols(["document", "sentence_embeddings"]) \
.setOutputCol("class")
gender_pred_pipeline_sbert = Pipeline(stages = [
document,
sbert_embedder,
gender_classifier
])
empty_data = spark.createDataFrame([[""]]).toDF("text")
model_sbert = gender_pred_pipeline_sbert.fit(empty_data)
text ="""social history: shows that does not smoke cigarettes or drink alcohol,lives in a nursing home.family history: shows a family history of breast cancer."""
gender_pipeline_sbert = LightPipeline(model_sbert)
result = gender_pipeline_sbert.annotate(text)
result['class'][0]
```
### Sample Clinical Notes
```
text1 = '''social history: shows that does not smoke cigarettes or drink alcohol,lives in a nursing home.
family history: shows a family history of breast cancer.'''
result = gender_pipeline_sbert.annotate(text1)
result['class'][0]
text2 = '''The patient is a 48- year-old, with severe mitral stenosis diagnosed by echocardiography, moderate
aortic insufficiency and moderate to severe pulmonary hypertension who is being evaluated as a part of a preoperative
workup for mitral and possible aortic valve repair or replacement.'''
result = gender_pipeline_sbert.annotate(text2)
result['class'][0]
text3 = '''HISTORY: The patient is a 57-year-old XX, who I initially saw in the office on 12/27/07, as a referral from the Tomball Breast Center.
On 12/21/07, the patient underwent image-guided needle core biopsy of a 1.5 cm lesion at the 7 o'clock position of the left breast (inferomedial).
The biopsy returned showing infiltrating ductal carcinoma high histologic grade.
The patient stated that xx had recently felt and her physician had felt a palpable mass in that area prior to her breast imaging.'''
result = gender_pipeline_sbert.annotate(text3)
result['class'][0]
text4 = '''The patient states that xx has been overweight for approximately 35 years and has tried multiple weight loss modalities in
the past including Weight Watchers, NutriSystem, Jenny Craig, TOPS, cabbage diet, grape fruit diet, Slim-Fast, Richard Simmons,
as well as over-the-counter measures without any long-term sustainable weight loss.
At the time of presentation to the practice, xx is 5 feet 6 inches tall with a weight of 285.4 pounds and a body mass index of 46.
xx has obesity-related comorbidities, which includes hypertension and hypercholesterolemia.'''
result = gender_pipeline_sbert.annotate(text4)
result['class'][0]
text5 = '''Prostate gland showing moderately differentiated infiltrating adenocarcinoma,
Gleason 3 + 2 extending to the apex involving both lobes of the prostate, mainly right.'''
result = gender_pipeline_sbert.annotate(text5)
result['class'][0]
text6 = '''SKIN: The patient has significant subcutaneous emphysema of the upper chest and
anterior neck area although he states that the subcutaneous emphysema has improved significantly since yesterday.'''
result = gender_pipeline_sbert.annotate(text6)
result['class'][0]
text7 = '''INDICATION: The patient is a 42-year-old XX who is five days out from transanal excision of a benign anterior base lesion.
xx presents today with diarrhea and bleeding. Digital exam reveals bright red blood on the finger.
xx is for exam under anesthesia and control of hemorrhage at this time.
'''
result = gender_pipeline_sbert.annotate(text7)
result['class'][0]
text8 = '''INDICATION: ___ year old patient with complicated medical history of paraplegia
and chronic indwelling foley, recurrent MDR UTIs, hx Gallbladder fossa
abscess,type 2 DM, HTN, CAD, DVT s/p left AKA complicated complicated by
respiratory failure requiring tracheostomy and PEG placement, right ischium
osteomyelitis due to chronic pressure ulcers with acute shortness of breath...'''
result = gender_pipeline_sbert.annotate(text8)
result['class'][0]
```
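The individual calls above can also be wrapped in a small convenience loop. This is a minimal sketch that simply reuses the `gender_pipeline_sbert` LightPipeline and the `text1` … `text8` notes defined above:
```
sample_notes = [text1, text2, text3, text4, text5, text6, text7, text8]

# Print the predicted gender class for each sample note
for i, note in enumerate(sample_notes, start=1):
    result = gender_pipeline_sbert.annotate(note)
    print("text{}: {}".format(i, result['class'][0]))
```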
## Eng+Wales well-mixed example model
This is the inference notebook with an increased inference window. There are various model variants, encoded by `expt_params_local` and `model_local`, which are shared by the notebooks in a given directory.
Outputs of this notebook:
(same as `inf` notebook with added `tWin` label in filename)
NOTE carefully: the `Im` compartment holds cumulative deaths; this compartment is called `D` elsewhere.
### Start notebook
(the following line is for efficient parallel processing)
```
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import pyross
import time
import pandas as pd
import matplotlib.image as mpimg
import pickle
import os
import pprint
import scipy.stats
# comment these before commit
#print(pyross.__file__)
#print(os.getcwd())
from ew_fns import *
import expt_params_local
import model_local
```
### switches etc
```
verboseMod=False ## print ancillary info about the model? (would usually be False, for brevity)
## Calculate things, or load from files ?
doInf = False ## do inference, or load it ?
doHes = True ## compute the Hessian? (may take a few minutes)
## time unit is one week
daysPerWeek = 7.0
## these are params that might be varied in different expts
exptParams = expt_params_local.getLocalParams()
## over-ride params for inference window
exptParams['timeLast'] = 11
exptParams['forecastTime'] = 11-exptParams['timeLast']
exptParams['pikFileRoot'] += '-tWin11'
pprint.pprint(exptParams)
## this is used for filename handling throughout
pikFileRoot = exptParams['pikFileRoot']
```
### convenient settings
```
np.set_printoptions(precision=3)
pltAuto = True
plt.rcParams.update({'figure.autolayout': pltAuto})
plt.rcParams.update({'font.size': 14})
```
## LOAD MODEL
```
loadModel = model_local.loadModel(exptParams,daysPerWeek,verboseMod)
## should use a dictionary but...
[ numCohorts, fi, N, Ni, model_spec, estimator, contactBasis, interventionFn,
modParams, priorsAll, initPriorsLinMode, obsDeath, fltrDeath,
simTime, deathCumulativeDat ] = loadModel
```
### Inspect most likely trajectory for model with prior mean params
```
x0_lin = estimator.get_mean_inits(initPriorsLinMode, obsDeath[0], fltrDeath)
guessTraj = estimator.integrate( x0_lin, exptParams['timeZero'], simTime, simTime+1)
## plots
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
indClass = model_spec['classes'].index(lab)
totClass = np.sum(guessTraj[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.show() ; plt.close()
indClass = model_spec['classes'].index('Im')
plt.yscale('log')
for coh in range(numCohorts):
plt.plot( N*guessTraj[:,coh+indClass*numCohorts],label='m{c:d}'.format(c=coh) )
plt.xlabel('time in weeks')
plt.ylabel('cumul deaths by age cohort')
plt.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
```
## INFERENCE
parameter count
* 32 for age-dependent Ai and Af (or beta and Af)
* 2 (step-like) or 3 (NPI-with-easing) for lockdown time and width (+easing param)
* 1 for projection of initial condition along mode
* 5 for initial condition in oldest cohort
* 5 for the gammas
* 1 for beta in late stage
total: 46 (step-like) or 47 (with-easing)
The following CMA-ES computation takes some minutes, depending on compute power; it should use multiple CPUs efficiently if they are available. The result will vary (slightly) with the random seed, which can be controlled by passing `cma_random_seed` to `latent_infer`.
```
def runInf() :
infResult = estimator.latent_infer(obsDeath, fltrDeath, simTime,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
tangent=False,
verbose=True,
enable_global=True,
enable_local =True,
**exptParams['infOptions'],
)
return infResult
if doInf:
## do the computation
elapsedInf = time.time()
infResult = runInf()
elapsedInf = time.time() - elapsedInf
print('** elapsed time',elapsedInf/60.0,'mins')
# save the answer
opFile = pikFileRoot + "-inf.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([infResult,elapsedInf],f)
else:
## load a saved computation
print(' Load data')
# here we load the data
# (this may be the file that we just saved, it is deliberately outside the if: else:)
ipFile = pikFileRoot + "-inf.pik"
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
[infResult,elapsedInf] = pickle.load(f)
```
#### unpack results
```
epiParamsMAP = infResult['params_dict']
conParamsMAP = infResult['control_params_dict']
x0_MAP = infResult['x0']
CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
**conParamsMAP)
logPinf = -estimator.minus_logp_red(epiParamsMAP, x0_MAP, obsDeath, fltrDeath, simTime,
CM_MAP, tangent=False)
print('** measuredLikelihood',logPinf)
print('** logPosterior ',infResult['log_posterior'])
print('** logLikelihood',infResult['log_likelihood'])
```
#### MAP dominant trajectory
```
estimator.set_params(epiParamsMAP)
estimator.set_contact_matrix(CM_MAP)
trajMAP = estimator.integrate( x0_MAP, exptParams['timeZero'], simTime, simTime+1)
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
indClass = model_spec['classes'].index(lab)
totClass = np.sum(trajMAP[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
fig,axs = plt.subplots(1,2,figsize=(10,4.5))
cohRanges = [ [x,x+4] for x in range(0,75,5) ]
#print(cohRanges)
cohLabs = ["{l:d}-{u:d}".format(l=low,u=up) for [low,up] in cohRanges ]
cohLabs.append("75+")
ax = axs[0]
ax.set_title('MAP (average dynamics)')
mSize = 3
minY = 0.12
maxY = 1.0
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
ax.set_ylabel('cumulative M (by cohort)')
ax.set_xlabel('time/weeks')
for coh in reversed(list(range(numCohorts))) :
ax.plot( N*trajMAP[:,coh+indClass*numCohorts],'o-',label=cohLabs[coh],ms=mSize )
maxY = np.maximum( maxY, np.max(N*trajMAP[:,coh+indClass*numCohorts]))
#ax.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
maxY *= 1.6
ax.set_ylim(bottom=minY,top=maxY)
#plt.show() ; plt.close()
ax = axs[1]
ax.set_title('data')
ax.set_xlabel('time/weeks')
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
for coh in reversed(list(range(numCohorts))) :
ax.plot( N*obsDeath[:,coh],'o-',label=cohLabs[coh],ms=mSize )
## keep the same as other panel
ax.set_ylim(bottom=minY,top=maxY)
ax.legend(fontsize=10,bbox_to_anchor=(1, 1.0))
#plt.show() ; plt.close()
#plt.savefig('ageMAPandData.png')
plt.show(fig)
```
#### sanity check : plot the prior and inf value for one or two params
```
(likFun,priFun,dim) = pyross.evidence.latent_get_parameters(estimator,
obsDeath, fltrDeath, simTime,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
tangent=False,
)
def showInfPrior(xLab) :
fig = plt.figure(figsize=(4,4))
dimFlat = np.size(infResult['flat_params'])
## magic to work out the index of this param in flat_params
jj = infResult['param_keys'].index(xLab)
xInd = infResult['param_guess_range'][jj]
## get the range
xVals = np.linspace( *priorsAll[xLab]['bounds'], 100 )
#print(infResult['flat_params'][xInd])
pVals = []
checkVals = []
for xx in xVals :
flatP = np.zeros( dimFlat )
flatP[xInd] = xx
pdfAll = np.exp( priFun.logpdf(flatP) )
pVals.append( pdfAll[xInd] )
#checkVals.append( scipy.stats.norm.pdf(xx,loc=0.2,scale=0.1) )
plt.plot(xVals,pVals,'-',label='prior')
infVal = infResult['flat_params'][xInd]
infPdf = np.exp( priFun.logpdf(infResult['flat_params']) )[xInd]
plt.plot([infVal],[infPdf],'ro',label='inf')
plt.xlabel(xLab)
upperLim = 1.05*np.max(pVals)
plt.ylim(0,upperLim)
#plt.plot(xVals,checkVals)
plt.legend()
plt.show(fig) ; plt.close()
#print('**params\n',infResult['flat_params'])
#print('**logPrior\n',priFun.logpdf(infResult['flat_params']))
showInfPrior('gammaE')
```
## Hessian matrix of log-posterior
(this can take a few minutes, it does not make use of multiple cores)
```
if doHes:
## this eps amounts to a perturbation of approx 1% on each param
## (1/4) power of machine epsilon is standard for second deriv
xx = infResult['flat_params']
eps = 100 * xx*( np.spacing(xx)/xx )**(0.25)
#print('**params\n',infResult['flat_params'])
#print('** rel eps\n',eps/infResult['flat_params'])
CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
**conParamsMAP)
estimator.set_params(epiParamsMAP)
estimator.set_contact_matrix(CM_MAP)
start = time.time()
hessian = estimator.latent_hessian(obs=obsDeath, fltr=fltrDeath,
Tf=simTime, generator=contactBasis,
infer_result=infResult,
intervention_fun=interventionFn,
eps=eps, tangent=False, fd_method="central",
inter_steps=0)
end = time.time()
print('time',(end-start)/60,'mins')
opFile = pikFileRoot + "-hess.npy"
print('opf',opFile)
with open(opFile, 'wb') as f:
np.save(f,hessian)
else :
print('Load hessian')
# reload in all cases (even if we just saved it)
ipFile = pikFileRoot + "-hess.npy"
try:
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
hessian = np.load(f)
except (OSError, IOError) :
print('... error loading hessian')
hessian = None
#print(hessian)
print("** param vals")
print(infResult['flat_params'],'\n')
if hessian is not None :
print("** naive uncertainty v1 : reciprocal sqrt diagonal elements (x2)")
print( 2/np.sqrt(np.diagonal(hessian)) ,'\n')
print("** naive uncertainty v2 : sqrt diagonal elements of inverse (x2)")
print( 2*np.sqrt(np.diagonal(np.linalg.inv(hessian))) ,'\n')
```
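With the Hessian available, one standard further use is a Laplace (Gaussian) approximation of the log-evidence around the MAP point. The following is a minimal sketch, not part of the original workflow, assuming `hessian` is the Hessian of the negative log-posterior at the MAP (as suggested by the uncertainty estimates above):
```
if hessian is not None:
    dimFlat = np.size(infResult['flat_params'])
    signDet, logDetHess = np.linalg.slogdet(hessian)
    if signDet > 0:
        ## Laplace approximation: log Z ~ log p(MAP) + (d/2) log(2 pi) - (1/2) log det H
        logEvidenceLaplace = ( infResult['log_posterior']
                               + 0.5 * dimFlat * np.log(2.0 * np.pi)
                               - 0.5 * logDetHess )
        print('** Laplace approx to log-evidence', logEvidenceLaplace)
    else:
        print('Hessian is not positive definite at the MAP point')
```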
# Import required libraries
```
# Import numpy, pandas for data manipulation
import numpy as np
import pandas as pd
# Import matplotlib, seaborn for visualization
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
# Import the data
weather_data = pd.read_csv('weather.csv')
weather_data.head()
rain_df = weather_data[['Date','Rainfall']]
rain_df.head()
rain_df.shape
rain_df.info()
```
**Using the first 50 values**
```
rain_df = rain_df.loc[:49]
rain_df.head()
rain_df.shape
# Convert the time column into datetime
rain_df['Date'] = pd.to_datetime(rain_df['Date'])
rain_df['Date'].head()
rain_df.info()
# fill the empty row
rain_df = rain_df.fillna(rain_df['Rainfall'].mean())
rain_df.head()
```
### Dataset Explanation
```
rain_df.describe()
# Output the maximum and minimum rain date
print(rain_df.loc[rain_df["Rainfall"] == rain_df["Rainfall"].max()])
print(rain_df.loc[rain_df["Rainfall"] == rain_df["Rainfall"].min()])
# Set the Date column as the index
rain_df.set_index("Date", inplace=True)
```
### Data Visualization
```
# Plot the daily rainfall
plt.figure(figsize=(16,10), dpi=100)
plt.plot(rain_df.index, rain_df.Rainfall, color='tab:red')
plt.gca().set(title="Daily Rain", xlabel='Date', ylabel="rain value")
plt.show()
# Apply the Moving Average function by a subset of size 10 days.
rain_df_mean = rain_df.Rainfall.rolling(window=10).mean()
rain_df_mean.plot(figsize=(16,10))
plt.show()
from statsmodels.tsa.seasonal import seasonal_decompose
# Additive Decomposition
result_add = seasonal_decompose(rain_df.Rainfall, model='additive', extrapolate_trend=0)
# Plot
plt.rcParams.update({'figure.figsize': (10,10)})
result_add.plot().suptitle('Additive Decomposition', fontsize=22)
plt.show()
```
### Baseline Model
```
# Shift the current rain to the next day.
predicted_df = rain_df["Rainfall"].to_frame().shift(1).rename(columns = {"Rainfall": "rain_pred" })
actual_df = rain_df["Rainfall"].to_frame().rename(columns = {"Rainfall": "rain_actual" })
# Concatenate the actual and predicted rain
one_step_df = pd.concat([actual_df,predicted_df],axis=1)
# Select from the second row, because there is no prediction for today due to shifting.
one_step_df = one_step_df[1:]
one_step_df.head(10)
```
> Here you can see that we have two columns: the **actual rain** column and the **predicted rain** column, which we will use in the next model.
We can validate how well our model performs by looking at the Root Mean Squared Error (RMSE) between the predicted and actual rainfall.
```
from sklearn.metrics import mean_squared_error as MSE
from math import sqrt
# Calculate the RMSE
rain_pred_err = MSE(one_step_df.rain_actual, one_step_df.rain_pred, squared=False)
print("The RMSE is",rain_pred_err)
```
> Our RMSE value of 4.002 (around 4) is pretty good for a baseline model.
## Using SARIMA model
### Parameter Selection
#### Grid Search
We are going to apply one of the most commonly used methods for time-series forecasting, known as SARIMA, which stands for Seasonal Autoregressive Integrated Moving Average. SARIMA models are denoted with the notation SARIMA(p,d,q)(P,D,Q,s), where the non-seasonal (p,d,q) parameters account for the trend and noise in the data and the seasonal (P,D,Q,s) parameters capture the seasonality.
We will use a "grid search" to iteratively explore different combinations of parameters. For each combination of parameters, we fit a new SARIMA model with the SARIMAX() function from the statsmodels module and assess its overall quality.
```
import itertools
import statsmodels.api as sm
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
# Generate all different combinations of p, q and q triplets
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
mod = sm.tsa.statespace.SARIMAX(one_step_df.rain_actual,
order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print('SARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
except:
continue
```
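Rather than reading the AIC values off the printout, the grid-search results can be collected and sorted so that the lowest-AIC combination is picked automatically. This is a minimal sketch under the same setup (variable names are illustrative):
```
import statsmodels.api as sm

aic_results = []
for param in pdq:
    for param_seasonal in seasonal_pdq:
        try:
            mod = sm.tsa.statespace.SARIMAX(one_step_df.rain_actual,
                                            order=param,
                                            seasonal_order=param_seasonal,
                                            enforce_stationarity=False,
                                            enforce_invertibility=False)
            fit_res = mod.fit(disp=False)
            aic_results.append((fit_res.aic, param, param_seasonal))
        except Exception:
            continue

# The tuple with the smallest AIC gives the preferred parameter combination
best_aic, best_order, best_seasonal_order = min(aic_results)
print('Best SARIMA{}x{} - AIC:{}'.format(best_order, best_seasonal_order, best_aic))
```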
### Fitting the Model
```
import warnings
warnings.filterwarnings("ignore") # specify to ignore warning messages
# Import the statsmodels library for using SARIMAX model
import statsmodels.api as sm
# Fit the SARIMAX model using optimal parameters
mod = sm.tsa.statespace.SARIMAX(one_step_df.rain_actual,
order=(1,1,1),
seasonal_order=(1,1,1,12),
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
results.summary()
```
**Predictions**
```
pred = results.predict(start=0,end=49)[1:]
pred
pred = results.get_prediction(start=0,end = 49, dynamic=False)
pred_ci = pred.conf_int()
pred_ci.head()
print(pred)
ax = one_step_df.rain_actual.plot(label='observed',figsize=(16,10))
# Overlay the one-step ahead forecast and its confidence interval
pred.predicted_mean.plot(ax=ax, label='one-step ahead forecast', alpha=.7)
ax.fill_between(pred_ci.index,
                pred_ci.iloc[:, 0],
                pred_ci.iloc[:, 1], color='k', alpha=.2)
ax.set_xlabel('Date')
ax.set_ylabel('Rainfall')
plt.ylim([0,2.0])
plt.legend()
plt.show()
```
### Forecast Diagnostic
It is also useful to quantify the accuracy of our forecasts. We will use the MSE (Mean Squared Error): for each predicted value, we compute its squared distance to the true value and then average these over all predictions.
```
y_forecasted = pred.predicted_mean[:49]
y_truth = one_step_df.rain_actual
print(y_forecasted.shape)
print(y_truth.shape)
# Compute the mean square error
mse = MSE(y_truth, y_forecasted, squared=True)
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
```
Amazing! Our forecast model forecasts the rainfall with a mean squared error of only 25.85.
In the weather forecasting field, an error of this magnitude seems promising, as there are many other factors that contribute to changes in rainfall, including but not limited to wind speed, air pressure, etc.
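For easier comparison with the baseline RMSE of about 4.0 computed earlier, the same forecast error can also be reported on the original scale. A small sketch reusing `y_truth` and `y_forecasted` from the cell above:
```
# Root mean squared error of the one-step ahead SARIMA forecasts
rmse = MSE(y_truth, y_forecasted, squared=False)
print('The Root Mean Squared Error of our forecasts is {}'.format(round(rmse, 2)))
```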
### Validating the Dynamic Forecast
In this case, we only use information from the time series up to a certain point, and after that, forecasts are generated using values from previous forecasted time points.
```
pred_dynamic = results.get_prediction(start=0,end = 49, dynamic=True, full_results=True)
pred_dynamic_ci = pred_dynamic.conf_int()
pred_dynamic_ci.head()
```
Once again, we plot the real and forecasted values of the average daily rain to assess how well we did:
```
ax = one_step_df.rain_actual.plot(label='observed', figsize=(15, 11))
pred_dynamic.predicted_mean.plot(label='Dynamic Forecast', ax=ax)
ax.fill_between(pred_dynamic_ci.index,
pred_dynamic_ci.iloc[:, 0],
pred_dynamic_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('Rainfall')
plt.ylim([0,2.0])
plt.legend()
plt.show()
```
> In this case, the model seems to predict the rain inaccurately, with major fluctuations between the true value and the predicted value.
### Forecast Diagnostic
```
# Extract the predicted and true values of our time series
y_forecasted = pred_dynamic.predicted_mean[:49]
y_truth = one_step_df.rain_actual
# Compute the root mean squared error
rmse = MSE(y_truth, y_forecasted, squared=False)
print('The Root Mean Squared Error of our forecasts is {}'.format(round(rmse, 2)))
```
The **predicted** values obtained from the dynamic forecasts yield an RMSE of 3.68. Note that dynamic forecasts rely on less of the observed history than one-step-ahead forecasts, so larger errors are generally to be expected.
# Conclusion
I described how to implement a seasonal SARIMA model in Python. I made extensive use of the pandas and statsmodels libraries and showed how to run model diagnostics, as well as how to produce rainfall forecasts.
Recall the assumption made in section 2.2 (Baseline Model): the results reinforce our belief that the rainfall today depends on the rainfall yesterday, the rainfall yesterday depends on the rainfall of the day before, and so on.
It is generally best to use the history up to the point at which we would like to make **predictions**. This holds especially for weather forecasting, where today's rainfall does not change much from yesterday's and the transition to another season signals itself through rainfall only gradually, unless there are disastrous factors such as storms, droughts, etc.