# Simulate and Generate Empirical Distributions in Python
## Mini-Lab: Simulations, Empirical Distributions, Sampling
Welcome to your next mini-lab! Go ahead and run the following cell to get started. You can do that by clicking on the cell and then clicking `Run` on the top bar. You can also just press `Shift` + `Enter` to run the cell.
```
from datascience import *
import numpy as np
import random
import otter
grader = otter.Notebook("m6_l1_tests")
```
Let's continue our analysis of COVID-19 data with the same false negative and false positive rates of 10% and 5%. For the first task, let's try to create a sample population of 10,000 people. Let's say that 20% of this population has COVID-19. Replace the `...` in the function below to create this sample population. The `create_population` function takes in an input `n` and returns a table with `n` rows. Each row has either `positive` or `negative` as its value, indicating whether or not that individual has COVID-19.
For random number generation, feel free to look up the [NumPy documentation](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.random.html) or the [Python random documentation](https://docs.python.org/3.8/library/random.html).
```
def create_population(n):
test_results = ...
for ...:
random_num = ...
if ...:
disease_result = ...
else:
disease_result = ...
test_results = np.append(test_results, disease_result)
return Table().with_column("COVID-19", test_results)
covid_population = create_population(...)
covid_population.show(5)
# There is a chance that this test may fail even with a correct solution due to randomness!
# Run the above cell again and run the grader again if you think this is the case.
grader.check("q1")
```
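If you get stuck, here is one possible way the blanks could be filled in. This is only a hedged sketch, not the official solution: it assumes `np.random.random()` for the random draw, the 20% prevalence stated above, and a population of 10,000.
```python
# A possible solution sketch (assumptions: 20% prevalence, np.random.random()
# for the uniform draw; the autograder may accept other equivalent approaches).
def create_population(n):
    test_results = make_array()              # empty array from the datascience package
    for i in np.arange(n):
        random_num = np.random.random()      # uniform float in [0, 1)
        if random_num < 0.2:                 # 20% of the population has COVID-19
            disease_result = "positive"
        else:
            disease_result = "negative"
        test_results = np.append(test_results, disease_result)
    return Table().with_column("COVID-19", test_results)

covid_population = create_population(10000)
```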
Given this population, let's go ahead and randomly test 1000 members. Complete `test_population` below by replacing the `...` with functional code. This function takes in a `population`, which is a `datascience` table, and a number `n`, where `n` is the number of people that we are testing. Inside the function, we add a column to this table called `Test Results`, which contains the test result for each person in the sample based on the false negative and false positive rates given earlier. There is another function called `test_individuals` that simplifies `test_population`. You will use `test_individuals` within `test_population`.
```
def test_population(population, n):
population = ...
test_results = population.apply(test_individuals, "COVID-19")
population = population.with_column(...)
return population
def test_individuals(individual):
random_num = ...
if individual == "positive":
if ...:
return ...
else:
return ...
else:
if ...:
return ...
else:
return ...
covid_sample = ...
covid_sample.show(5)
# There is a chance that this test may fail even with a correct solution due to randomness!
# Run the above cell again and run the grader again if you think this is the case.
grader.check("q2")
```
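Again, purely as a hedged sketch of one way to complete the exercise: it assumes the 10% false negative and 5% false positive rates from the prompt, that the sample is drawn without replacement, and that the result labels are `"test positive"` / `"test negative"` (the exact labels the autograder expects are not given here).
```python
# Sketch only; the labels, sampling choice, and thresholds are assumptions.
def test_individuals(individual):
    random_num = np.random.random()
    if individual == "positive":
        if random_num < 0.10:                # 10% false negative rate
            return "test negative"
        else:
            return "test positive"
    else:
        if random_num < 0.05:                # 5% false positive rate
            return "test positive"
        else:
            return "test negative"

def test_population(population, n):
    population = population.sample(n, with_replacement=False)
    test_results = population.apply(test_individuals, "COVID-19")
    population = population.with_column("Test Results", test_results)
    return population

covid_sample = test_population(covid_population, 1000)
```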
Now that we've simulated a population and sampled this population, let's take a look at our results. We'll pivot first by the `COVID-19` column and then by the `Test Results` column to look at how well our COVID-19 test does using "real-life" figures.
```
covid_sample.pivot("COVID-19", "Test Results")
```
You'll see that though our test correctly identifies the disease most of the time, there are still some instances where our test gets it wrong. It is impossible for a test to have both a 0% false negative rate and a 0% false positive rate. In the case of this disease and testing, which should we prioritize: driving down the false positive rate or driving down the false negative rate? Is there a reason why one should be prioritized over the other? There is no simple answer to these questions, and as data scientists, we'll have to grapple with these issues ourselves and navigate the complex web we call life.
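If you want to put numbers on that trade-off, here is a small hedged sketch that estimates the empirical false negative and false positive rates from the sample; it assumes the `"test positive"` / `"test negative"` labels used in the sketch above.
```python
# Hedged sketch: empirical error rates in the sample (labels are assumptions).
has_covid = covid_sample.where("COVID-19", "positive")
no_covid = covid_sample.where("COVID-19", "negative")

false_negative_rate = np.count_nonzero(has_covid.column("Test Results") == "test negative") / has_covid.num_rows
false_positive_rate = np.count_nonzero(no_covid.column("Test Results") == "test positive") / no_covid.num_rows
print("Empirical false negative rate:", false_negative_rate)
print("Empirical false positive rate:", false_positive_rate)
```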
Congratulations on finishing! Run the next cell to make sure that you passed all of the test cases.
```
grader.check_all()
```
|
github_jupyter
|
Cognizant Data Science Summit 2020 : July 1, 2020
Yogesh Deshpande [157456]
# Week 1 challenge - Python
Description
The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard. (Source : https://en.wikipedia.org/wiki/Eight_queens_puzzle )
Challenge
The challenge is to generate one correct sequence through Genetic Programming. The sequence has to be 8 numbers between 0 and 7. Each number represents the position where a queen is placed: it is the row number in the corresponding column, as illustrated below.
0 3 4 5 6 1 2 4
• 0 is the row number in column 0 where the queen is placed
• 3 is the row number in column 1 where the queen is placed
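For intuition, here is a small helper (not part of the original challenge code; `print_board` is a hypothetical name) that renders such a sequence as a board, with `Q` marking the queen in each column:
```python
# Hypothetical visualisation helper, for illustration only.
def print_board(sequence):
    n = len(sequence)
    for row in range(n):
        print(" ".join("Q" if sequence[col] == row else "." for col in range(n)))

print_board([0, 3, 4, 5, 6, 1, 2, 4])
```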
# Initialize variables and function definitions
```
import random
# Set the variables as per the problem statement
NumberofQueens = 8
InitialPopulation = 1000000 # Number of chromosomes in the initial population, out of which one or more may be solutions
NumberofIterations = 1000 # Number of generations to check for possible solution
def create_chromozone(NumberofQueens):
chromozone = []
for gene in range(NumberofQueens):
chromozone.append(random.randint(0, NumberofQueens-1))
return chromozone
#print(chromozone)
# Unit testing
# create_chromozone(NumberofQueens)
def create_population(NumberofQueens, InitialPopulation):
Population = []
for chromozone in range(InitialPopulation):
Population.append(create_chromozone(NumberofQueens))
#print(Population)
return Population
# Unit testing
#create_population(NumberofQueens, InitialPopulation)
def fitness_calculation(chromosome, maxFitness):
horizontal_collisions = sum([chromosome.count(i) - 1 for i in chromosome])/2
diagonal_collisions = 0
for record in range(1,len(chromosome)+1):
column1 = record-1
row1 = chromosome[column1]
for i in range (column1+1, len(chromosome)):
column2 = i
row2 = chromosome[i]
deltaRow = abs(row1 - row2)
deltaCol = abs(column1 - column2)
if (deltaRow == deltaCol):
#print("######## Collision detected ##############")
diagonal_collisions = diagonal_collisions + 1
#print("Horizontal Collisions are {} and Diagonal are {} ".format(horizontal_collisions, diagonal_collisions))
fitness_score = maxFitness - (horizontal_collisions + diagonal_collisions)
#print("The fitness score is {}".format(fitness_score))
return fitness_score
#Unit Test
#fitness_calculation([4, 1, 5, 8, 2, 7, 3, 6], 28)
def strength_of_chromosome(chromosome, maxFitness):
return fitness_calculation(chromosome, maxFitness) / maxFitness
#Unit Test
#strength_of_chromosome([1, 1, 1, 1, 1, 1, 1, 1], 28)
#strength_of_chromosome([4, 1, 5, 8, 2, 7, 3, 6], 28)
```
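As a quick sanity check (added for illustration; it reuses the functions defined above and the example sequence from the problem statement):
```python
# The perfect score for 8 queens is C(8, 2) = 28 non-attacking pairs;
# horizontal and diagonal collisions are subtracted from that maximum.
max_fitness = (8 * 7) / 2
print(fitness_calculation([0, 3, 4, 5, 6, 1, 2, 4], max_fitness))
```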
# Main Program for solution to get a 8-Queen sequence
```
# Main Program
if __name__ == "__main__":
# Calculate the target fitness
TargetFitness = (NumberofQueens * (NumberofQueens - 1)) /2
print("Maximum score to achieve is = {}".format(TargetFitness))
# Initial population
Population = create_population(NumberofQueens, InitialPopulation)
generation_counter = 0
for iteration in range(NumberofIterations):
MaxPopulationScore = max([fitness_calculation(chromozone, TargetFitness) for chromozone in Population])
print("generation counter = {}, MaxPopulationScore = {}".format(generation_counter, MaxPopulationScore))
if (MaxPopulationScore != TargetFitness):
# If the current population has no score matching target score, continue with next generation
generation_counter = generation_counter + 1
else:
# Target score is achieved at this stage
break
print("Solved in generation {}".format(generation_counter+1))
for chromosome in Population:
if (fitness_calculation(chromosome, TargetFitness) == TargetFitness):
print("Solution =======> {}".format(chromosome))
create_chromozone(8)
create_chromozone(8)
```
|
github_jupyter
|
# Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
## Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
```
## Explore the Data
Play around with view_sentence_range to view different parts of the data.
```
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```
## Implement Preprocessing Function
### Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.
You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
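As a concrete illustration of the expected behaviour (the vocabularies below are made up for this example, not the project's actual ones):
```python
# Toy vocabularies, for illustration only.
toy_source_vocab_to_int = {'hello': 0, 'world': 1}
toy_target_vocab_to_int = {'<EOS>': 1, 'bonjour': 2, 'monde': 3}

source_line = 'hello world'
target_line = 'bonjour monde'

source_ids = [toy_source_vocab_to_int[word] for word in source_line.split()]
target_ids = [toy_target_vocab_to_int[word] for word in target_line.split()] + [toy_target_vocab_to_int['<EOS>']]
print(source_ids, target_ids)   # [0, 1] [2, 3, 1]  <- note the trailing <EOS> id
```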
```
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
source_id_text = []
target_id_text = []
for line in source_text.splitlines():
source_id_text.append([source_vocab_to_int[word] for word in line.split()])
EOS = target_vocab_to_int['<EOS>']
for line in target_text.splitlines():
target_id_text.append([target_vocab_to_int[word] for word in line.split()] + [EOS])
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
```
### Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
```
### Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
## Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- `model_inputs`
- `process_decoder_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`
### Input
Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1
- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
```
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
input_ = tf.placeholder(tf.int32, shape=[None, None], name="input")
targets = tf.placeholder(tf.int32, shape=[None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, shape=None)
keep_prob = tf.placeholder(tf.float32, shape=None, name="keep_prob")
target_sequence_length = tf.placeholder(tf.int32, shape=[None], name="target_sequence_length")
max_target_len = tf.reduce_max(target_sequence_length, name="max_target_len")
source_sequence_length = tf.placeholder(tf.int32, shape=[None], name="source_sequence_length")
return input_, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
```
### Process Decoder Input
Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
```
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
batches = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1, 1])
padding = tf.fill(dims=[batch_size, 1], value=target_vocab_to_int['<GO>'])
preprocessed_target_data = tf.concat(values=[padding, batches], axis=1)
return preprocessed_target_data
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
```
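To make the transformation concrete, here is a small NumPy-only illustration (not the TensorFlow implementation above; the toy batch and `<GO>` id are made up) of what dropping the last word id and prepending the `<GO>` id does:
```python
import numpy as np

# Hypothetical toy batch and <GO> id, for illustration only.
go_id = 1
target_batch = np.array([[4, 5, 6, 2],     # e.g. "le chien court <EOS>"
                         [7, 8, 2, 0]])    # e.g. "il dort <EOS> <PAD>"

decoder_input = np.concatenate(
    [np.full((target_batch.shape[0], 1), go_id),  # prepend <GO> to every row
     target_batch[:, :-1]],                       # drop the last column
    axis=1)
print(decoder_input)
# [[1 4 5 6]
#  [1 7 8 2]]
```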
### Encoding
Implement `encoding_layer()` to create an Encoder RNN layer:
* Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence)
* Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper)
* Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
```
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
def build_cell():
lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)])
output, state = tf.nn.dynamic_rnn(cell, embed_input, source_sequence_length, dtype=tf.float32)
return output, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
```
### Decoding - Training
Create a training decoding layer:
* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
```
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
encoder_state,
output_layer)
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_summary_length)
return training_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
```
### Decoding - Inference
Create an inference decoder:
* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
```
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
encoder_state,
output_layer)
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
```
### Build the Decoding Layer
Implement `decoding_layer()` to create a Decoder RNN layer.
* Embed the target sequences
* Construct the decoder LSTM cell (just like you constructed the encoder cell above)
* Create an output layer to map the outputs of the decoder to the elements of our vocabulary
* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.
* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.
Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
```
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
def build_cell():
lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
dec_cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)])
start_of_sequence_id = target_vocab_to_int["<GO>"]
end_of_sequence_id = target_vocab_to_int['<EOS>']
vocab_size = len(target_vocab_to_int)
dec_embeddings = tf.Variable(tf.random_uniform([vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
with tf.variable_scope("decode"):# as decoding_scope:
training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
# decoding_scope.reuse_variables
with tf.variable_scope("decode", reuse=True):
inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
```
### Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to the input data for the encoder.
- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.
- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.
- Apply embedding to the target data for the decoder.
- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function.
```
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
_, encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_decoder_output, inference_decoder_output = decoding_layer(dec_input,
encoder_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
```
## Neural Network Training
### Hyperparameters
Tune the following parameters:
- Set `epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `num_layers` to the number of layers.
- Set `encoding_embedding_size` to the size of the embedding for the encoder.
- Set `decoding_embedding_size` to the size of the embedding for the decoder.
- Set `learning_rate` to the learning rate.
- Set `keep_probability` to the dropout keep probability.
- Set `display_step` to the number of batches between debug output statements.
```
# Number of Epochs
epochs = 8
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.0005
# Dropout Keep Probability
keep_probability = 0.75
display_step = 20
```
### Build the Graph
Build the graph using the neural network you implemented.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
```
Batch and pad the source and target sequences
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
```
### Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
```
### Save Parameters
Save the `batch_size` and `save_path` parameters for inference.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
```
# Checkpoint
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
```
## Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary, to the `<UNK>` word id.
```
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
index = [vocab_to_int.get(word.lower(), vocab_to_int["<UNK>"]) for word in sentence.split()]
return index
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
```
## Translate
This will translate `translate_sentence` from English to French.
```
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
```
## Imperfect Translation
You might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.
You can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.
## Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
|
github_jupyter
|
# 🔬 Sequence Comparison of DNA using `BioPython`
### 🦠 `Covid-19`, `SARS`, `MERS`, and `Ebola`
#### Analysis Techniques:
* Compare their DNA sequence and Protein (Amino Acid) sequence
* GC Content
* Freq of Each Amino Acids
* Find similarity between them
* Alignment
* hamming distance
* 3D structure of each
| DNA Sequence | Datasource |
|:-----------------|:--------------------------------------------------------------|
| Latest Sequence | https://www.ncbi.nlm.nih.gov/genbank/sars-cov-2-seqs/ |
| Wuhan-Hu-1 | https://www.ncbi.nlm.nih.gov/nuccore/MN908947.3?report=fasta |
| Covid19 | https://www.ncbi.nlm.nih.gov/nuccore/NC_045512.2?report=fasta |
| SARS | https://www.ncbi.nlm.nih.gov/nuccore/NC_004718.3?report=fasta |
| MERS | https://www.ncbi.nlm.nih.gov/nuccore/NC_019843.3?report=fasta |
| EBOLA | https://www.ncbi.nlm.nih.gov/nuccore/NC_002549.1?report=fasta |
### 1. Analysis Techniques
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from Bio.Seq import Seq
# Create our sequence
seq1 = Seq('ACTCGA')
seq2 = Seq('AC')
```
#### GC Contents In DNA
* `GC-content` (or guanine-cytosine content) is the **percentage of nitrogenous bases** in a DNA or RNA molecule that are either guanine (`G`) or cytosine (`C`)
#### Usefulness
* In polymerase chain reaction (PCR) experiments, the GC-content of short oligonucleotides known as primers is often used to predict their **annealing temperature** to the template DNA.
* A `high` GC-content level indicates a relatively higher melting temperature.
* DNA with `low` GC-content is less stable than DNA with high GC-content
> Question: which sequence is more stable when heat is applied?
```
from Bio.SeqUtils import GC
# Check GC (guanine-cytosine) percentage in sequence
print(f"{GC(seq1)}% \t({seq1})")
print(f"{GC(seq2)}% \t({seq2})")
```
### Sequence Alignment
* `Global alignment` finds the best concordance/agreement between all characters in two sequences
* `Local Alignment` finds just the subsequences that align the best
```
from Bio import pairwise2
from Bio.pairwise2 import format_alignment
print('seq1 =', seq1, '\nseq2 =', seq2, '\n\n')
# Global alignment
alignments = pairwise2.align.globalxx(seq1, seq2)
print(f'Alignments found: {len(alignments)}')
print(*alignments)
# Print nicely
print(format_alignment(*alignments[0]))
# 2nd alignment
print(format_alignment(*alignments[1]))
# To see all possible alignments
for a in alignments:
print(format_alignment(*a), '\n')
# Get the number of possible sequence alignments
alignment_score = pairwise2.align.globalxx(seq1,seq2,one_alignment_only=True,score_only=True)
alignment_score
```
#### Sequence Similarity
* Similarity = (number of matching nucleotides / total number of nucleotides) * 100%
```
alignment_score/len(seq1)*100
```
### Hamming Distance: `How Many Subsitutions are Required to Match Two Sequences?`
* Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different.
* In other words, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other
* It is used for error detection or error correction
* It is used to quantify the similarity of DNA sequences
#### Edit Distance
* A way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other (e.g. the Levenshtein distance); a minimal implementation sketch is given below.
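Edit distance is mentioned above but not implemented in this notebook, so here is a minimal dynamic-programming Levenshtein sketch (an illustration added for completeness, not part of the original analysis):
```python
# Minimal Levenshtein (edit) distance, classic dynamic programming.
def levenshtein_distance(s1, s2):
    """Minimum number of insertions, deletions and substitutions to turn s1 into s2."""
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, start=1):
        curr = [i]
        for j, c2 in enumerate(s2, start=1):
            cost = 0 if c1 == c2 else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

levenshtein_distance('ACTCGA', 'AC')   # 4
```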
```
def hamming_distance(lhs, rhs):
return len([(x,y) for x,y in zip(lhs,rhs) if x != y])
hamming_distance('TT', 'ACCTA')
def hammer_time(s1, s2, verbose=True):
"""Take two nucleotide sequences s1 and s2, and display
the possible alignments and hamming distance.
"""
if verbose:
print('s1 =', s1, '\ns2 =', s2, '\n\n')
print('Hamming Distance:', hamming_distance(s1, s2), '\n(min substitutions for sequences to match)')
print('\nAlignment Options:\n\n')
alignments = pairwise2.align.globalxx(s1, s2)
for a in alignments:
print(format_alignment(*a), '\n')
s1 = 'ACTCGAA'
s2 = 'ACGA'
hammer_time(s1, s2)
```
### Dot Plot
* A dot plot is a graphical method that allows the **comparison of two biological sequences** and identifies regions of **close similarity** between them.
* Simplest explanation: put a dot wherever sequences are identical
#### Usefulness
Dot plots can also be used to visually inspect sequences for
- Direct or inverted repeats
- Regions with low sequence complexity
- Similar regions
- Repeated sequences
- Sequence rearrangements
- RNA structures
- Gene order
Acknowledgement: https://stackoverflow.com/questions/40822400/how-to-create-a-dotplot-of-two-dna-sequence-in-python
```
def delta(x,y):
return 0 if x == y else 1
def M(seq1,seq2,i,j,k):
return sum(delta(x,y) for x,y in zip(seq1[i:i+k],seq2[j:j+k]))
def makeMatrix(seq1,seq2,k):
n = len(seq1)
m = len(seq2)
return [[M(seq1,seq2,i,j,k) for j in range(m-k+1)] for i in range(n-k+1)]
def plotMatrix(M,t, seq1, seq2, nonblank = chr(0x25A0), blank = ' '):
print(' |' + seq2)
print('-'*(2 + len(seq2)))
for label,row in zip(seq1,M):
line = ''.join(nonblank if s < t else blank for s in row)
print(label + '|' + line)
def dotplot(seq1,seq2,k = 1,t = 1):
M = makeMatrix(seq1,seq2,k)
plotMatrix(M, t, seq1,seq2) #experiment with character choice
# The dot plot: put a dot where the two sequences are identical
s1 = 'ACTCGA'
s2 = 'AC'
dotplot(s1, s2)
# Identical proteins will show a diagonal line.
s1 = 'ACCTAG'
s2 = 'ACCTAG'
dotplot(s1, s2)
print('\n\n')
hammer_time(s1, s2, verbose=False)
```
# 🔬 2. Comparative Analysis of Virus DNA
### 🦠 `Covid-19`, `SARS`, `MERS`, `Ebola`
* Covid19(`SARS-CoV2`) is a novel coronavirus identified as the cause of coronavirus disease 2019 (COVID-19) that began in Wuhan, China in late 2019 and spread worldwide.
* MERS(`MERS-CoV`) was identified in 2012 as the cause of Middle East respiratory syndrome (MERS).
* SARS(`SARS-CoV`) was identified in 2002 as the cause of an outbreak of severe acute respiratory syndrome (SARS).
#### `fasta` DNA Sequence Files
* Covid19 : https://www.rcsb.org/3d-view/6LU7
* SARS: https://www.ncbi.nlm.nih.gov/nuccore/NC_004718.3?report=fasta
* MERS: https://www.ncbi.nlm.nih.gov/nuccore/NC_019843.3?report=fasta
* EBOLA:https://www.rcsb.org/structure/6HS4
```
import pandas as pd
import numpy as np
from Bio import SeqIO
covid = SeqIO.read("../data/01_COVID_MN908947.3.fasta","fasta")
mers = SeqIO.read("../data/02_MERS_NC_019843.3.fasta","fasta")
sars = SeqIO.read("../data/03_SARS_rcsb_pdb_5XES.fasta","fasta")
ebola = SeqIO.read("../data/04_EBOLA_rcsb_pdb_6HS4.fasta","fasta")
# Convert imports to BioPython sequences
covid_seq = covid.seq
mers_seq = mers.seq
sars_seq = sars.seq
ebola_seq = ebola.seq
# Create dataframe
df = pd.DataFrame({'name': ['COVID19', 'MERS', 'SARS', 'EBOLA'],
'seq': [covid_seq, mers_seq, sars_seq, ebola_seq]})
df
```
#### Length of Each Genome
```
df['len'] = df.seq.apply(lambda x: len(x))
df[['name', 'len']].sort_values('len', ascending=False) \
.style.bar(color='#cde8F6', vmin=0, width=100, align='left')
```
* `MERS`, `COVID` and `SARS` all have about the same genome length (30,000 base pairs)
#### Which of them is more heat stable?
```
# Check the GC content
df['gc_content'] = df.seq.apply(lambda x: GC(x))
df[['name', 'gc_content']].sort_values('gc_content', ascending=False) \
.style.bar(color='#cde8F6', vmin=0)
```
* `MERS` is the most stable with a GC of `41.24` followed by Ebola
#### Translate RNA into proteins
How long is the translated protein (amino acid) sequence for each DNA sequence?
```
# Translate the RNA into Proteins
df['proteins'] = df.seq.apply(lambda s: len(s.translate()))
df[['name', 'proteins']].sort_values('proteins', ascending=False) \
.style.bar(color='#cde8F6', vmin=0)
```
#### How Many Amino Acids are Created?
```
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from collections import Counter
# Method 1: translate each genome into its protein sequence, then analyse it
covid_protein = covid_seq.translate()
mers_protein = mers_seq.translate()
sars_protein = sars_seq.translate()
ebola_protein = ebola_seq.translate()
covid_analysed = ProteinAnalysis(str(covid_protein))
mers_analysed = ProteinAnalysis(str(mers_protein))
sars_analysed = ProteinAnalysis(str(sars_protein))
ebola_analysed = ProteinAnalysis(str(ebola_protein))
# Check for the Frequence of AA
covid_analysed.count_amino_acids()
# Method 2
from collections import Counter
# Find the Amino Acid Frequency
df['aa_freq'] = df.seq.apply(lambda s: Counter(s.translate()))
df
```
#### Most Common Amino Acid
```
# For Covid
df[df.name=='COVID19'].aa_freq.values[0].most_common(10)
# Plot the Amino Acids of COVID-19
aa = df[df.name=='COVID19'].aa_freq.values[0]
plt.bar(aa.keys(), aa.values())
# All viruses -- same chart (not stacked)
for virus in df.name:
aa = df[df.name==virus].aa_freq.values[0]
plt.bar(aa.keys(), aa.values())
plt.show()
```
### Dot Plots of Opening Sequences
```
# COVID and MERS
dotplot(covid_seq[0:10],mers_seq[0:10])
# COVID and SARS
n = 10
dotplot(covid_seq[0:n],sars_seq[0:n])
# Plotting function to illustrate deeper matches
def dotplotx(seq1, seq2, n):
seq1=seq1[0:n]
seq2=seq2[0:n]
plt.imshow(np.array(makeMatrix(seq1,seq2,1)))
# on x-axis list all sequences of seq 2
xt=plt.xticks(np.arange(len(list(seq2))),list(seq2))
# on y-axis list all sequences of seq 1
yt=plt.yticks(np.arange(len(list(seq1))),list(seq1))
plt.show()
dotplotx(covid_seq, sars_seq, n=100)
```
Notice the large diagonal line for the second half of the first 100 nucleotides - indicating these are the same for `COVID19` and `SARS`
```
dotplotx(covid_seq, ebola_seq, n=100)
```
No corresponding matches for `EBOLA` and `COVID`
#### Calculate Pairwise Alignment for the First 100 Nucleotides
```
def pairwise_alignment(s1, s2, n):
if n == 'full': n = min(len(s1), len(s2))
alignment = pairwise2.align.globalxx(s1[0:n], s2[0:n], one_alignment_only=True, score_only=True)
print(f'Pairwise alignment: {alignment:.0f}/{n} ({(alignment/n)*100:0.1f}%)')
# SARS and COVID
pairwise_alignment(covid_seq, sars_seq, n=100)
pairwise_alignment(covid_seq, sars_seq, n=10000)
pairwise_alignment(covid_seq, sars_seq, n=len(sars_seq))
```
* `82.9`% of the COVID19 genome is exactly the same as SARS
```
pairwise_alignment(covid_seq, mers_seq, n='full')
pairwise_alignment(covid_seq, ebola_seq, n='full')
```
* `COVID19` and `SARS` have a `82.9`% similarity. Both are of the same genus and belong to `Sars_Cov`.
* `COVID19` and `EBOLA` have a `65.3`% similarity since they are from a different family of virus
### Example of the Opening Sequence of `COVID19` and `SARS`
Sequencing found similar structure from `40:100` so lets use our functions to visualise it.
```
s1 = covid_seq[40:100]
s2 = sars_seq[40:100]
print('Similarity matrix (look for diagonal)')
dotplotx(s1, s2, n=100)
print('Possible alignment pathways: \n\n')
hammer_time(s1, s2, verbose=False)
```
|
github_jupyter
|
<span style="font-size:20pt;color:blue">Add title here</span>
This is a sample notebook for interactive stopped-flow data analysis. You do <b>NOT</b> need to understand the Python language to use this program. By replacing file names and options with your own, you can easily produce figures and interactively adjust plotting options.
It is strongly recommended to keep this file for reference and to make edits on a duplicate of this file.
# import libraries and define functions
<span style="color:red">Press Ctrl+Enter to run sections</span>
```
# import mpld3
# mpld3.enable_notebook()
%matplotlib widget
from sf_utils import *
from uv_utils import *
```
# compare multiple inputs on selected lines
In many cases, the same trace may be repeated several times. This section loads the repeated measurements and compares them on selected kinetic traces.
```
rcParams['figure.figsize'] = [6, 4.5]
csvfiles = [
'average-sample-10s-1.csv',
'average-sample-10s-2.csv',
'average-sample-10s-3.csv',
]
sfData = SFData.quickload_csv(csvfiles)
sfData.plot_selected_kinetics()
display(widgets.HBox([sfData.add_logx_button(), sfData.plot_scan_wavelength_button()]))
sfData.plot_interactive_buttons_for_kinetics()
```
# save averaged files
```
csvfiles = [
'average-sample-10s-1.csv',
'average-sample-10s-2.csv',
'average-sample-10s-3.csv',
]
save_average(csvfiles)
```
# plot full spectra with more options
## plot with table
```
df = pd.DataFrame(
columns=['csvfile', 'legend', 'shift', 'scale', 'color', 'timepoint'],
data=[
['average-sample-10s-ave.csv', '0.003s', 0, 1, 'black', 0.003],
['average-sample-10s-ave.csv', '0.01s', 0, 1, 'red', 0.01],
['average-sample-10s-ave.csv', '0.1s', 0, 1, 'blue', 0.1],
['average-sample-10s-ave.csv', '1s', 0, 1, 'orange', 1],
['average-sample-10s-ave.csv', '10s', 0, 1, 'green', 10],
]
)
base = pd.DataFrame(
columns=['csvfile', 'timepoint'],
data=[
['average-sample-10s-ave.csv', 0.002],
]
)
# plot_from_df(df, valuetype='timepoint') # no subtraction
plot_from_df(df, valuetype='timepoint', base=base) # subtract spectra at certain timepoint
```
## plot with variables (lower level API)
```
rcParams['figure.figsize'] = [6, 4.5]
fig = plt.figure()
axis = fig.gca()
csvfiles = [
'average-sample-10s-ave.csv',
'average-sample-10s-ave.csv',
'average-sample-10s-ave.csv',
'average-sample-10s-ave.csv',
'average-sample-10s-ave.csv',
]
dfs = list(map(load_data, csvfiles))
df_base = dfs[0].iloc[1,:]
dfs = [df - df_base for df in dfs]
timepoints = [
[0.003],
[0.01],
[0.1],
[1],
[10]
]
legends = [
['0.003s'],
['0.01s'],
['0.1s'],
['1s'],
['10s']
]
shifts = [
[0],
[0],
[0],
[0],
[0]
]
scales = [
[1],
[1],
[1],
[1],
[1],
]
colors = [
['black'],
['red'],
['blue'],
['orange'],
['green']
]
sfData = SFData(
axis=axis,
dfs=dfs,
colors=colors,
legends=legends,
scales=scales,
shifts=shifts,
xlabel='Time (s)',
ylabel='Abs',
)
sfData.plot_selected_spectra(timepoints)
# display(widgets.HBox([sfData.plot_scan_timepoint_button()]))
sfData.plot_interactive_buttons_for_spectra()
```
# plot kinetic curves with more options
## plot with table
```
df = pd.DataFrame(
columns=['csvfile', 'legend', 'shift', 'scale', 'color', 'wavelength'],
data=[
['average-sample-10s-ave.csv', '350nm', 0, 1, 'black', 350],
['average-sample-10s-ave.csv', '400nm', 0, 1, 'red', 400],
['average-sample-10s-ave.csv', '450nm', 0, 1, 'blue', 450],
]
)
base = pd.DataFrame(
columns=['csvfile', 'timepoint'],
data=[
['average-sample-10s-ave.csv', 0.002],
]
)
# plot_from_df(df, valuetype='wavelength')
plot_from_df(df, valuetype='wavelength', base=base)
```
## plot with variables (lower level API)
```
rcParams['figure.figsize'] = [6, 4.5]
fig = plt.figure()
axis = fig.gca()
csvfiles = [
'average-sample-10s-ave.csv',
'average-sample-10s-ave.csv',
'average-sample-10s-ave.csv',
]
dfs = list(map(load_data, csvfiles))
df_base = dfs[0].iloc[1,:]
dfs = [df - df_base for df in dfs]
wavelengths = [
[350],
[400],
[450],
]
legends = [
['350 nm'],
['400 nm'],
['450 nm'],
]
shifts = [
[0],
[0],
[0],
]
scales = [
[1],
[1],
[1],
]
colors = [
['black'],
['red'],
['blue'],
]
sfData = SFData(
axis=axis,
dfs=dfs,
colors=colors,
legends=legends,
scales=scales,
shifts=shifts,
xlabel='Time (s)',
ylabel='Abs',
)
sfData.plot_selected_kinetics(wavelengths)
display(sfData.add_logx_button())
# display(widgets.HBox([sfData.add_logx_button(), sfData.plot_scan_wavelength_button()]))
sfData.plot_interactive_buttons_for_kinetics()
```
# overview - kinetic curve and full spectra
<span style="color:red">Warning: this section is slow</span>
```
rcParams['figure.figsize'] = [9, 4.5]
csvfile = 'average-sample-10s-ave.csv'
df = load_data(csvfile)
(row, col) = (1, 2)
fig, axs = plt.subplots(row, col, sharex=False, sharey=False)
axis1 = axs[0] # first axis
axis2 = axs[1] # second axis
plot_all_kinetic(df, axis1)
plot_all_spectra(df, axis2)
```
# overview - difference spectra
<span style="color:red">Warning: this section is slow</span>
```
rcParams['figure.figsize'] = [9, 4.5]
df = load_data(csvfile)
baseCurve = df.iloc[1,:] # select the second time point as baseline
df1 = df - baseCurve
(row, col) = (1, 2)
fig, axs = plt.subplots(row, col, sharex=False, sharey=False)
axis1 = axs[0] # first axis
axis2 = axs[1] # second axis
plot_all_kinetic(df1, axis1)
plot_all_spectra(df1, axis2)
```
# export kintek input files
```
csvfile = 'kintek-sample-1-10s.csv'
wavelengths = [440]
kintekFileName = 'kintek-sample-1-10s-data.txt'
export_kintek(csvfile, wavelengths, kintekFileName)
csvfile = 'kintek-sample-2-10s.csv'
wavelengths = [440]
kintekFileName = 'kintek-sample-2-10s-data.txt'
export_kintek(csvfile, wavelengths, kintekFileName)
```
# plot original kinetic and kintek simulation
```
rcParams['figure.figsize'] = [6, 4.5]
rcParams.update({'xtick.labelsize': 14})
rcParams.update({'ytick.labelsize': 14})
rcParams.update({'axes.labelsize':16})
rcParams.update({'legend.frameon': False})
rcParams.update({'legend.fontsize': 14})
simfiles = [
'kintek-sample-1-10s-data.txt',
'kintek-sample-2-10s-data.txt',
'kintek-sample-1-10s-sim.txt',
'kintek-sample-2-10s-sim.txt',
]
dfs = list(map(read_kintek_simulation, simfiles))
df = pd.concat(dfs, axis=1)
df = df[df.index > 0.002] # filter the value range to plot
df = df[df.index < 0.8] # filter the value range to plot
aPlot = AdjustablePlot.quickload_df(df)
aPlot.colors = ['red', '#0080ff', 'black', 'black']
aPlot.shifts = [0.007, 0, 0.007, 0]
aPlot.legends = ['1', '2', '1-sim', '2-sim']
aPlot.plot()
aPlot.axis.set_xscale('log')
aPlot.axis.set_xlim([0.001, 1])
# aPlot.axis.set_ylim([-0.019, 0.029])
for i in range(2):
line = aPlot.axis.lines[i]
line.set_marker('.')
line.set_linewidth(0)
line.set_markersize(5)
aPlot.axis.lines[-1].set_linestyle('dashed')
aPlot.plot_interactive_buttons()
_ = aPlot.axis.legend().set_draggable(True)
_ = aPlot.axis.set_xlabel('Time (s)')
_ = aPlot.axis.set_ylabel('ΔAbs')
```
# plot UV-Vis titration data
## plot titration
```
rcParams['figure.figsize'] = [6, 4.5]
uv_filenames = [
'titration_and_UV/2.0.CSV',
'titration_and_UV/2.1.CSV',
'titration_and_UV/2.2.CSV',
'titration_and_UV/2.3.CSV',
'titration_and_UV/2.4.CSV',
'titration_and_UV/2.5.CSV',
'titration_and_UV/2.6.CSV',
'titration_and_UV/2.7.CSV',
'titration_and_UV/2.8.CSV',
'titration_and_UV/2.9.CSV',
]
df = read_multiple_uv_to_df(uv_filenames)
aPlot = AdjustablePlot.quickload_df(df)
aPlot.colors = color_range('red', 'black', len(uv_filenames))
# calculate shift on each spectra to remove baseline floating issue
# aPlot.shifts = shift_to_align_wavelength(df, wavelength=1000)
aPlot.legends = ['%5.1f eq aKG' % (0.5*i) for i in range(len(uv_filenames))]
aPlot.plot()
aPlot.plot_interactive_buttons()
aPlot.axis.set_xlim([320, 1100])
aPlot.axis.set_ylim([-0.2, 1.2])
aPlot.axis.set_title('Titration: PIsnB + 4Fe + 4TyrNC + n*0.5aKG')
aPlot.axis.set_xlabel('wavelength (nm)')
aPlot.axis.set_ylabel('Abs')
```
## subtraction
```
# subtract base
base = aPlot.df.iloc[:,[0]]
aPlot.df = aPlot.df - base.values
# plot in a new figure
aPlot.axis = plt.figure().gca()
aPlot.plot()
aPlot.plot_interactive_buttons()
aPlot.axis.set_xlim([350, 1100])
aPlot.axis.set_ylim([-0.02, 0.15])
aPlot.axis.set_title('Titration: PIsnB + 4Fe + 4TyrNC + n*0.5aKG')
aPlot.axis.set_xlabel('wavelength (nm)')
aPlot.axis.set_ylabel('ΔAbs')
```
## remove baseline shift
```
# subtract base
base = aPlot.df.iloc[:,[0]]
aPlot.df = aPlot.df - base.values
# remove baseline shift
aPlot.shifts = shift_to_align_wavelength(aPlot.df, wavelength=800)
# plot in a new figure
aPlot.axis = plt.figure().gca()
aPlot.plot()
aPlot.plot_interactive_buttons()
aPlot.axis.set_xlim([350, 1100])
aPlot.axis.set_ylim([-0.02, 0.12])
aPlot.axis.set_title('Titration: PIsnB + 4Fe + 4TyrNC + n*0.5aKG')
aPlot.axis.set_xlabel('wavelength (nm)')
aPlot.axis.set_ylabel('ΔAbs')
```
## plot trend at certain x value
```
df_trace512 = aPlot.df.iloc[[get_index_of_closest_x_value(aPlot.df, 512)], :].transpose()
df_trace512.index = [0.5*i for i in range(len(uv_filenames))]
trace512Plot = AdjustablePlot.quickload_df(df_trace512)
trace512Plot.plot()
trace512Plot.plot_interactive_buttons()
trace512Plot.axis.lines[0].set_marker('o')
trace512Plot.axis.legend().set_draggable(True)
trace512Plot.axis.set_xlabel('equivalent of aKG')
trace512Plot.axis.set_ylabel('ΔAbs at 512 nm')
```
# Appendix: more options
```
# global settings
# two semantics are equivalent
# more options can be found at https://matplotlib.org/users/customizing.html
# set figure size
rcParams['figure.figsize'] = [9, 4.5]
# set tick pointing inwards or outwards
rcParams['xtick.direction'] = 'in'
rcParams['ytick.direction'] = 'in'
# set visibility of minor ticks
rcParams['xtick.minor.visible'] = True
rcParams['ytick.minor.visible'] = True
# set ticks on top and right axes
rcParams['xtick.top'] = True
rcParams['ytick.right'] = True
# set better layout for multiple plots in a figure
# https://matplotlib.org/3.1.1/tutorials/intermediate/tight_layout_guide.html
rcParams.update({'figure.autolayout': True})
# set x and y label size
rcParams.update({'axes.labelsize': 20})
# set tick label size
rcParams.update({'xtick.labelsize': 12})
rcParams.update({'ytick.labelsize': 12})
# turn on/off frame of legend box
rcParams.update({'legend.frameon': False})
# set legend fontsize
rcParams.update({'legend.fontsize': 10})
```
---
# Monetary Economics: Chapter 5
### Preliminaries
```
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
import matplotlib.pyplot as plt
from pysolve.model import Model
from pysolve.utils import is_close,round_solution
```
### Model LP1
```
def create_lp1_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.set_param_default(0)
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('G', desc='Government goods')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + ' +
'BLs(-1)) - (T + Rb(-1)*Bcb(-1)) - (BLs - BLs(-1))*Pbl')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + chi * (Pble - Pbl) / Pbl')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pbl')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = Pblbar')
return model
lp1_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938}
lp1_exogenous = {'G': 20,
'Rbar': 0.03,
'Pblbar': 20}
lp1_variables = {'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20}
```
### Scenario: Interest rate shock
```
lp1 = create_lp1_model()
lp1.set_values(lp1_parameters)
lp1.set_values(lp1_exogenous)
lp1.set_values(lp1_variables)
for _ in xrange(15):
lp1.solve(iterations=100, threshold=1e-6)
# shock the system
lp1.set_values({'Rbar': 0.04,
'Pblbar': 15})
for _ in xrange(45):
lp1.solve(iterations=100, threshold=1e-6)
```
###### Figure 5.2
```
caption = '''
Figure 5.2 Evolution of the wealth to disposable income ratio, following an increase
in both the short-term and long-term interest rates, with model LP1'''
data = [s['V']/s['YDr'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.89, 1.01)
axes.plot(data, 'k')
# add labels
plt.text(20, 0.98, 'Wealth to disposable income ratio')
fig.text(0.1, -.05, caption);
```
###### Figure 5.3
```
caption = '''
Figure 5.3 Evolution of the wealth to disposable income ratio, following an increase
in both the short-term and long-term interest rates, with model LP1'''
ydrdata = [s['YDr'] for s in lp1.solutions[5:]]
cdata = [s['C'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(92.5, 101.5)
axes.plot(ydrdata, 'k')
axes.plot(cdata, linestyle='--', color='r')
# add labels
plt.text(16, 98, 'Disposable')
plt.text(16, 97.6, 'income')
plt.text(22, 95, 'Consumption')
fig.text(0.1, -.05, caption);
```
###### Figure 5.4
```
caption = '''
Figure 5.4 Evolution of the bonds to wealth ratio and the bills to wealth ratio,
following an increase from 3% to 4% in the short-term interest rate, while the
long-term interest rate moves from 5% to 6.67%, with model LP1'''
bhdata = [s['Bh']/s['V'] for s in lp1.solutions[5:]]
pdata = [s['Pbl']*s['BLh']/s['V'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.382, 0.408)
axes.plot(bhdata, 'k')
axes.plot(pdata, linestyle='--', color='r')
# add labels
plt.text(14, 0.3978, 'Bonds to wealth ratio')
plt.text(17, 0.39, 'Bills to wealth ratio')
fig.text(0.1, -.05, caption);
```
### Model LP2
```
def create_lp2_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('TP', desc='Target proportion in households portfolio')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.var('z1', desc='Switch parameter')
model.var('z2', desc='Switch parameter')
model.set_param_default(0)
model.param('add', desc='Random shock to expectations')
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('beta', desc='Adjustment parameter in price of bills')
model.param('betae', desc='Adjustment parameter in expectations')
model.param('bot', desc='Bottom value for TP')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('top', desc='Top value for TP')
model.param('G', desc='Government goods')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' +
' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)')
model.add('z1 = if_true(TP > top)')
model.add('z2 = if_true(TP < bot)')
model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))')
return model
lp2_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'beta': 0.01,
'betae': 0.5,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938,
'bot': 0.495,
'top': 0.505 }
lp2_exogenous = {'G': 20,
'Rbar': 0.03,
'Pblbar': 20,
'add': 0}
lp2_variables = {'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20,
'Pble': 20,
'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh)
'z1': 0,
'z2': 0}
```
### Scenario: interest rate shock
```
lp2_bill = create_lp2_model()
lp2_bill.set_values(lp2_parameters)
lp2_bill.set_values(lp2_exogenous)
lp2_bill.set_values(lp2_variables)
lp2_bill.set_values({'z1': lp2_bill.evaluate('if_true(TP > top)'),
'z2': lp2_bill.evaluate('if_true(TP < bot)')})
for _ in xrange(10):
lp2_bill.solve(iterations=100, threshold=1e-4)
# shock the system
lp2_bill.set_values({'Rbar': 0.035})
for _ in xrange(45):
lp2_bill.solve(iterations=100, threshold=1e-4)
```
###### Figure 5.5
```
caption = '''
Figure 5.5 Evolution of the long-term interest rate (the bond yield), following an
increase in the short-term interest rate (the bill rate), as a result of the response of
the central bank and the Treasury, with Model LP2.'''
rbdata = [s['Rb'] for s in lp2_bill.solutions[5:]]
pbldata = [1./s['Pbl'] for s in lp2_bill.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.set_ylim(0.029, 0.036)
axes.plot(rbdata, linestyle='--', color='r')
axes2 = axes.twinx()
axes2.spines['top'].set_visible(False)
axes2.set_ylim(0.0495, 0.052)
axes2.plot(pbldata, 'k')
# add labels
plt.text(12, 0.0518, 'Short-term interest rate')
plt.text(15, 0.0513, 'Long-term interest rate')
fig.text(0.05, 1.05, 'Bill rate')
fig.text(1.15, 1.05, 'Bond yield')
fig.text(0.1, -.1, caption);
```
###### Figure 5.6
```
caption = '''
Figure 5.6 Evolution of the target proportion (TP), that is the share of bonds in the
government debt held by households, following an increase in the short-term interest
rate (the bill rate) and the response of the central bank and of the Treasury,
with Model LP2'''
tpdata = [s['TP'] for s in lp2_bill.solutions[5:]]
topdata = [s['top'] for s in lp2_bill.solutions[5:]]
botdata = [s['bot'] for s in lp2_bill.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.set_ylim(0.490, 0.506)
axes.plot(topdata, color='k')
axes.plot(botdata, color='k')
axes.plot(tpdata, linestyle='--', color='r')
# add labels
plt.text(30, 0.5055, 'Ceiling of target range')
plt.text(30, 0.494, 'Floor of target range')
plt.text(10, 0.493, 'Share of bonds')
plt.text(10, 0.4922, 'in government debt')
plt.text(10, 0.4914, 'held by households')
fig.text(0.1, -.15, caption);
```
### Scenario: Shock to the bond price expectations
```
lp2_bond = create_lp2_model()
lp2_bond.set_values(lp2_parameters)
lp2_bond.set_values(lp2_exogenous)
lp2_bond.set_values(lp2_variables)
lp2_bond.set_values({'z1': 'if_true(TP > top)',
'z2': 'if_true(TP < bot)'})
for _ in xrange(10):
lp2_bond.solve(iterations=100, threshold=1e-5)
# shock the system
lp2_bond.set_values({'add': -3})
lp2_bond.solve(iterations=100, threshold=1e-5)
lp2_bond.set_values({'add': 0})
for _ in xrange(43):
lp2_bond.solve(iterations=100, threshold=1e-4)
```
###### Figure 5.7
```
caption = '''
Figure 5.7 Evolution of the long-term interest rate, following an anticipated fall in
the price of bonds, as a consequence of the response of the central bank and of the
Treasury, with Model LP2'''
pbldata = [1./s['Pbl'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.0497, 0.0512)
axes.plot(pbldata, linestyle='--', color='k')
# add labels
plt.text(15, 0.0509, 'Long-term interest rate')
fig.text(0.1, -.1, caption);
```
###### Figure 5.8
```
caption = '''
Figure 5.8 Evolution of the expected and actual bond prices, following an anticipated
fall in the price of bonds, as a consequence of the response of the central bank and of
the Treasury, with Model LP2'''
pbldata = [s['Pbl'] for s in lp2_bond.solutions[5:]]
pbledata = [s['Pble'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(16.5, 21)
axes.plot(pbldata, linestyle='--', color='k')
axes.plot(pbledata, linestyle='-', color='r')
# add labels
plt.text(8, 20, 'Actual price of bonds')
plt.text(10, 19, 'Expected price of bonds')
fig.text(0.1, -.1, caption);
```
###### Figure 5.9
```
caption = '''
Figure 5.9 Evolution of the target proportion (TP), that is the share of bonds in the
government debt held by households, following an anticipated fall in the price of
bonds, as a consequence of the response of the central bank and of the Treasury, with
Model LP2'''
tpdata = [s['TP'] for s in lp2_bond.solutions[5:]]
topdata = [s['top'] for s in lp2_bond.solutions[5:]]
botdata = [s['bot'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.47, 0.52)
axes.plot(tpdata, linestyle='--', color='r')
axes.plot(botdata, linestyle='-', color='k')
axes.plot(topdata, linestyle='-', color='k')
# add labels
plt.text(30, 0.508, 'Ceiling of target range')
plt.text(30, 0.491, 'Floor of target range')
plt.text(10, 0.49, 'Share of bonds in')
plt.text(10, 0.487, 'government debt')
plt.text(10, 0.484, 'held by households')
fig.text(0.1, -.15, caption);
```
### Scenario: Model LP1, propensity to consume shock
```
lp1_alpha = create_lp1_model()
lp1_alpha.set_values(lp1_parameters)
lp1_alpha.set_values(lp1_exogenous)
lp1_alpha.set_values(lp1_variables)
for _ in xrange(10):
lp1_alpha.solve(iterations=100, threshold=1e-6)
# shock the system
lp1_alpha.set_values({'alpha1': 0.7})
for _ in xrange(45):
lp1_alpha.solve(iterations=100, threshold=1e-6)
```
### Model LP3
```
def create_lp3_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('PSBR', desc='Public sector borrowing requirement (PSBR)')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('TP', desc='Target proportion in households portfolio')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.var('z1', desc='Switch parameter')
model.var('z2', desc='Switch parameter')
model.var('z3', desc='Switch parameter')
model.var('z4', desc='Switch parameter')
# no longer exogenous
model.var('G', desc='Government goods')
model.set_param_default(0)
model.param('add', desc='Random shock to expectations')
model.param('add2', desc='Addition to the government expenditure setting rule')
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('beta', desc='Adjustment parameter in price of bills')
model.param('betae', desc='Adjustment parameter in expectations')
model.param('bot', desc='Bottom value for TP')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('top', desc='Top value for TP')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' +
' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)')
model.add('z1 = if_true(TP > top)')
model.add('z2 = if_true(TP < bot)')
model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))')
model.add('PSBR = (G + Rb*Bs(-1) + BLs(-1)) - (T + Rb*Bcb(-1))')
model.add('z3 = if_true((PSBR(-1)/Y(-1)) > 0.03)')
model.add('z4 = if_true((PSBR(-1)/Y(-1)) < -0.03)')
model.add('G = G(-1) - (z3 + z4)*PSBR(-1) + add2')
return model
lp3_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'beta': 0.01,
'betae': 0.5,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938,
'bot': 0.495,
'top': 0.505 }
lp3_exogenous = {'Rbar': 0.03,
'Pblbar': 20,
'add': 0,
'add2': 0}
lp3_variables = {'G': 20,
'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20,
'Pble': 20,
'PSBR': 0,
'Y': 115.8,
'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh)
'z1': 0,
'z2': 0,
'z3': 0,
'z4': 0}
```
### Scenario: LP3, decrease in propensity to consume
```
lp3_alpha = create_lp3_model()
lp3_alpha.set_values(lp3_parameters)
lp3_alpha.set_values(lp3_exogenous)
lp3_alpha.set_values(lp3_variables)
for _ in xrange(10):
lp3_alpha.solve(iterations=100, threshold=1e-6)
# shock the system
lp3_alpha.set_values({'alpha1': 0.7})
for _ in xrange(45):
lp3_alpha.solve(iterations=100, threshold=1e-6)
```
###### Figure 5.10
```
caption = '''
Figure 5.10 Evolution of national income (GDP), following a sharp decrease in the
propensity to consume out of current income, with Model LP1'''
ydata = [s['Y'] for s in lp1_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(90, 128)
axes.plot(ydata, linestyle='--', color='k')
# add labels
plt.text(20, 110, 'Gross Domestic Product')
fig.text(0.1, -.05, caption);
```
###### Figure 5.11
```
caption = '''
Figure 5.11 Evolution of national income (GDP), following a sharp decrease in the
propensity to consume out of current income, with Model LP3'''
ydata = [s['Y'] for s in lp3_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(90, 128)
axes.plot(ydata, linestyle='--', color='k')
# add labels
plt.text(20, 110, 'Gross Domestic Product')
fig.text(0.1, -.05, caption);
```
###### Figure 5.12
```
caption = '''
Figure 5.12 Evolution of pure government expenditures and of the government deficit
to national income ratio (the PSBR to GDP ratio), following a sharp decrease in the
propensity to consume out of current income, with Model LP3'''
gdata = [s['G'] for s in lp3_alpha.solutions[5:]]
ratiodata = [s['PSBR']/s['Y'] for s in lp3_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off')
axes.spines['top'].set_visible(False)
axes.set_ylim(16, 20.5)
axes.plot(gdata, linestyle='--', color='r')
plt.text(5, 20.4, 'Pure government')
plt.text(5, 20.15, 'expenditures (LHS)')
plt.text(30, 18, 'Deficit to national')
plt.text(30, 17.75, 'income ratio (RHS)')
axes2 = axes.twinx()
axes2.tick_params(top='off')
axes2.spines['top'].set_visible(False)
axes2.set_ylim(-.01, 0.04)
axes2.plot(ratiodata, linestyle='-', color='b')
# add labels
fig.text(0.1, 1.05, 'G')
fig.text(0.9, 1.05, 'PSBR to Y ratio')
fig.text(0.1, -.1, caption);
```
---
# `Cannabis (drug)`
#### `INFORMATION`:
### Everything we need to know about marijuana (cannabis)
>`Cannabis, also known as marijuana among other names, is a psychoactive drug from the Cannabis plant used for medical or recreational purposes. The main psychoactive part of cannabis is tetrahydrocannabinol (THC), one of 483 known compounds in the plant, including at least 65 other cannabinoids. Cannabis can be used by smoking, vaporizing, within food, or as an extract`
>[For more information](https://www.medicalnewstoday.com/articles/246392.php)
[VIDEO](https://youtu.be/GhTYI3DeNgA)

```
import pandas as pd
import missingno
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("dark")
import warnings
warnings.simplefilter("ignore")
import numpy as np
df = pd.read_csv("cannabis.csv");df.head()
```
`Understanding attributes`
| Attribute Name | Info |
| -------------- | ---------------------------------------------- |
| Strain Name | Given name of the strain |
| Type | Type of strain (namely indica, sativa, hybrid) |
| Rating | User rating |
| Effects | Different effects obtained |
| Description | Other background info |
### UNDERSTANDING DATA
e.g., finding missing values and looking at basic statistics
```
df.shape  # showing (rows, columns)
df.info()  # getting basic information like datatypes
# we can clearly see there are some missing values in Flavor and Description
missingno.matrix(df);
df.isnull().sum()  # see null values
print(df.Type.value_counts())  # counting the occurrence of each type
sns.countplot(df.Type);  # displaying it as a graph
plt.ylabel("distribution")
sns.distplot(df.Rating);
#by this we can see that all the types have rating more than 3.5
#finding max rating to each type
df.groupby(["Type"])["Rating"].max()
#finding min rating to each type
df.groupby(["Type"])["Rating"].min()
#mean rating
df.groupby(["Type"])["Rating"].mean()
#Now we will extract the values in Effects and Flavor and pass to a new column
effect = pd.DataFrame(df.Effects.str.split(',',4).tolist(),
columns = ['Eone','Etwo','Ethree','Efour','Efive'])
flavors = pd.DataFrame(df.Flavor.str.split(',',n=2,expand=True).values.tolist(),
columns = ['Fone','Ftwo','Fthree'])
df = pd.concat([df, effect], axis=1)
df = pd.concat([df, flavors], axis=1)#concating the two dataframes
#for more information plz visit
#link => http://pandas.pydata.org/pandas-docs/stable/merging.html
df.columns
#finding top 5 effects
print(df.Eone.value_counts().head())
plt.figure(figsize=(15,10))
sns.boxplot(x = "Eone",y = "Rating",hue="Type",data=df[df.Rating > 3.5]);
#finding top 5 flavors
df.Fone.value_counts().head()
plt.figure(figsize=(15,10))
plt.xticks(rotation=90)
sns.countplot(x = "Fone",data=df);
```
---
##### week1-Q1.
What does the analogy “AI is the new electricity” refer to?
1. Through the “smart grid”, AI is delivering a new wave of electricity.
2. AI runs on computers and is thus powered by electricity, but it is letting computers do things not possible before.
3. Similar to electricity starting about 100 years ago, AI is transforming multiple industries.
4. AI is powering personal devices in our homes and offices, similar to electricity.
<span style="color:blue">
Answer: 3.
AI is transforming many fields from the car industry to agriculture to supply-chain.
</span>
##### week1-Q2.
Which of these are reasons for Deep Learning recently taking off? (Check the three options that apply.)
1. Deep learning has resulted in significant improvements in important applications such as online advertising, speech recognition, and image recognition.
2. Neural Networks are a brand new field.
3. We have access to a lot more data.
4. We have access to a lot more computational power.
<span style="color:blue">
Answer: 1,3,4.
The digitalization of our society has played a huge role in this. The development of hardware, perhaps especially GPU computing, has significantly improved deep learning algorithms' performance.
</span>
##### week1-Q3.
Recall this diagram of iterating over different ML ideas. Which of the statements below are true? (Check all that apply.)
<img src="images/cycle.png" alt="cycle.png" style="width:300px"/>
1. Being able to try out ideas quickly allows deep learning engineers to iterate more quickly.
2. Faster computation can help speed up how long a team takes to iterate to a good idea.
3. It is faster to train on a big dataset than a small dataset.
4. Recent progress in deep learning algorithms has allowed us to train good models faster (even without changing the CPU/GPU hardware).
<span style="color:blue">
Answer: 1,2,4.
For example, we discussed how switching from sigmoid to ReLU activation functions allows faster training.
</span>
##### week1-Q4.
When an experienced deep learning engineer works on a new problem, they can usually use insight from previous problems to train a good model on the first try, without needing to iterate multiple times through different models. True/False?
<span style="color:blue">
Answer: False.
Finding the characteristics of a model is key to have good performance. Although experience can help, it requires multiple iterations to build a good model.
</span>
##### week1-Q5.
ReLU activation function?

##### week1-Q6.
Images for cat recognition is an example of “structured” data, because it is represented as a structured array in a computer. True/False?
<span style="color:blue">
Answer: False.
Images for cat recognition is an example of “unstructured” data.
</span>
##### week1-Q7.
A demographic dataset with statistics on different cities' population, GDP per capita, economic growth is an example of “unstructured” data because it contains data coming from different sources. True/False?
<span style="color:blue">
Answer: False.
A demographic dataset with statistics on different cities' population, GDP per capita, economic growth is an example of “structured” data by opposition to image, audio or text datasets.
</span>
##### week1-Q8.
Why is an RNN (Recurrent Neural Network) used for machine translation, say translating English to French? (Check all that apply.)
1. It can be trained as a supervised learning problem.
2. It is strictly more powerful than a Convolutional Neural Network (CNN).
3. It is applicable when the input/output is a sequence (e.g., a sequence of words).
4. RNNs represent the recurrent process of Idea->Code->Experiment->Idea->....
<span style="color:blue">
Answer: 1,3.
An RNN can map from a sequence of English words to a sequence of French words.
</span>
##### week1-Q9.
In this diagram which we hand-drew in lecture, what do the horizontal axis (x-axis) and vertical axis (y-axis) represent?
<img src="images/networks.png" alt="networks.png" style="width:550px"/>
<span style="color:blue">
Answer: x-axis is the amount of data.
y-axis (vertical axis) is the performance of the algorithm.
</span>
##### week1-Q10.
Assuming the trends described in the previous question's figure are accurate (and hoping you got the axis labels right), which of the following are true? (Check all that apply.)
1. Decreasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly.
2. Increasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly.
3. Increasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly.
4. Decreasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly.
<span style="color:blue">
Answer: 2,3.
According to the trends in the figure above, big networks usually perform better than small networks.
Bringing more data to a model is almost always beneficial.
</span>
##### week2-Q3.
Suppose img is a `(32,32,3)` array, representing a 32x32 image with 3 color channels red, green and blue. How do you reshape this into a column vector?
<span style="color:blue">
Answer: x = img.reshape((32\*32\*3,1)).
</span>
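A quick check of that reshape, assuming `img` is a NumPy array of shape (32, 32, 3):
```
import numpy as np

img = np.random.randn(32, 32, 3)    # a dummy 32x32 RGB image
x = img.reshape((32 * 32 * 3, 1))   # unroll into a column vector
print(x.shape)                      # (3072, 1)
```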
##### week2-Q4.
Consider the two following random arrays "a" and "b".
What will be the shape of "c"?
```
import numpy as np
a = np.random.randn(2, 3) # a.shape = (2, 3)
b = np.random.randn(2, 1) # b.shape = (2, 1)
c = a + b
print(c.shape)
```
<span style="color:blue">
Answer: c.shape = (2, 3). This is broadcasting: b (a column vector) is copied 3 times so that it can be added to each column of a.
</span>
##### week2-Q5.
Consider the two following random arrays "a" and "b".
What will be the shape of "c"?
```
a = np.random.randn(4, 3) # a.shape = (4, 3)
b = np.random.randn(3, 2) # b.shape = (3, 2)
# operands could not be broadcast together with shapes (4,3) (3,2)
# print((a*b).shape)
print((np.dot(a,b)).shape)
```
<span style="color:blue">
Answer: The computation cannot happen because the sizes don't match. It's going to be "Error"!
In numpy the "*" operator indicates element-wise multiplication. It is different from "np.dot()". If you would try "c = np.dot(a,b)" you would get c.shape = (4, 2).
</span>
##### week2-Q7.
Recall that "np.dot(a,b)" performs a matrix multiplication on a and b, whereas "a*b" performs an element-wise multiplication.
Consider the two following random arrays "a" and "b":
```
a = np.random.randn(12288, 150) # a.shape = (12288, 150)
b = np.random.randn(150, 45) # b.shape = (150, 45)
c = np.dot(a,b)
print(c.shape)
```
<span style="color:blue">
Answer: Remember that np.dot(a, b) has shape (number of rows of a, number of columns of b). The sizes match because "number of columns of a = 150 = number of rows of b".
</span>
##### week2-Q8.
Consider the following code snippet. How do you vectorize this?
```
a = np.random.randn(3,4)
b = np.random.randn(4,1)
for i in range(3):
for j in range(4):
c[i][j] = a[i][j] + b[j]
print(c.shape)
c = a + b.T
print(c.shape)
```
##### week2-Q9.
Consider the following code:
```
a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a*b
print(c.shape)
```
What will be c? (If you’re not sure, feel free to run this in python to find out).
<span style="color:blue">
Answer: This will invoke broadcasting, so b is copied three times to become (3,3), and ∗ is an element-wise product so c.shape will be (3, 3).
</span>
##### week3-Q5.
Consider the following code:
```
import numpy as np
A = np.random.randn(4,3)
B = np.sum(A, axis=1, keepdims=True)
print(B.shape)
```
<span style="color:blue">
Answer: We use (keepdims = True) to make sure that A.shape is (4,1) and not (4, ). It makes our code more rigorous.
</span>
##### week2-Q10.
Consider the following computation graph.
<img src="images/computation.png" alt="computation.png" style="width:600px"/>
What is the output J?
<span style="color:blue">
Answer: `J = (a - 1) * (b + c)`
`J = u + v - w = a*b + a*c - (b + c) = a * (b + c) - (b + c) = (a - 1) * (b + c)`.
</span>
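A quick numeric check of the simplification, assuming the graph computes u = a*b, v = a*c and w = b + c as in the derivation above:
```
import numpy as np

a, b, c = 3.0, 2.0, -1.0
u, v, w = a * b, a * c, b + c
J_graph = u + v - w                       # value computed by the graph
J_simplified = (a - 1) * (b + c)          # simplified expression
print(np.isclose(J_graph, J_simplified))  # True
```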
##### week2-Q2.
Which of these is the "Logistic Loss"?
<span style="color:blue">
Answer: $\mathcal{L}^{(i)}(\hat{y}^{(i)}, y^{(i)}) = -\left(y^{(i)}\log(\hat{y}^{(i)}) + (1- y^{(i)})\log(1-\hat{y}^{(i)})\right)$
</span>
##### week4-Q6.
How many layers does this network have?
<img src="images/n4.png" alt="n4.png" style="width:400px" />
<span style="color:blue">
Answer: The number of layers $L$ is 4. The number of hidden layers is 3. As seen in lecture, the number of layers is counted as the number of hidden layers + 1. The input and output layers are not counted as hidden layers.
</span>
##### week3-Q1.
Which of the following are true? (Check all that apply.)
- $a^{[2]}$ denotes the activation vector of the $2^{nd}$ layer. <span style="color:blue">(True)</span>
- $a^{[2]}_4$ is the activation output by the $4^{th}$ neuron of the $2^{nd}$ layer. <span style="color:blue">(True)</span>
- $a^{[2](12)}$ denotes the activation vector of the $2^{nd}$ layer for the $12^{th}$ training example.<span style="color:blue">(True)</span>
- $X$ is a matrix in which each column is one training example. <span style="color:blue">(True)</span>
##### week2-Q1.
What does a neuron compute?
<span style="color:blue">
Answer: A neuron computes a linear function `(z = Wx + b)` followed by an activation function.
The output of a neuron is `a = g(Wx + b)` where `g` is the activation function (sigmoid, tanh, ReLU, ...).
</span>
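A minimal sketch of a single neuron, with sigmoid chosen as the activation and made-up shapes:
```
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.random.randn(3, 1)    # 3 input features
W = np.random.randn(1, 3)    # weights of one neuron
b = 0.5                      # bias
a = sigmoid(W @ x + b)       # a = g(Wx + b)
print(a.shape)               # (1, 1)
```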
##### week3-Q3.
Vectorized implementation of forward propagation for layer $l$, where $1 \leq l \leq L$?
<span style="color:blue">
Answer:
$$Z^{[l]} = W^{[l]} A^{[l-1]}+ b^{[l]}$$
$$A^{[l]} = g^{[l]}(Z^{[l]})$$
</span>
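For illustration, a minimal NumPy sketch of one layer of this vectorized forward pass ($g^{[l]}$ chosen as tanh, shapes made up):
```
import numpy as np

def layer_forward(A_prev, W, b, g=np.tanh):
    # Z[l] = W[l] A[l-1] + b[l];  A[l] = g[l](Z[l])
    Z = W @ A_prev + b
    return g(Z)

A0 = np.random.randn(2, 5)   # 2 input features, 5 examples
W1 = np.random.randn(4, 2)   # layer 1 has 4 units
b1 = np.zeros((4, 1))        # broadcast across the 5 examples
A1 = layer_forward(A0, W1, b1)
print(A1.shape)              # (4, 5)
```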
##### week4-Q4.
Vectorization allows you to compute forward propagation in an L-layer neural network without an explicit for-loop (or any other explicit iterative loop) over the layers l=1, 2, …,L. True/False?
<span style="color:blue">
Answer: False.
Forward propagation propagates the input through the layers, although for shallow networks we may just write all the lines ($a^{[2]} = g^{[2]}(z^{[2]})$, $z^{[2]}= W^{[2]}a^{[1]}+b^{[2]}$, ...) in a deeper network, we cannot avoid a for loop iterating over the layers: ($a^{[l]} = g^{[l]}(z^{[l]})$, $z^{[l]} = W^{[l]}a^{[l-1]} + b^{[l]}$, ...).
</span>
##### week4-Q5.
Assume we store the values for $n^{[l]}$ in an array called layer_dims, as follows: layer_dims = $[n_x,4,3,2,1]$. So layer 1 has 4 hidden units, layer 2 has 3 hidden units, and so on. Which of the following for-loops will allow you to initialize the parameters for the model?
```python
for i in range(1, len(layer_dims)):
    parameter['W' + str(i)] = np.random.randn(layer_dims[i], layer_dims[i-1]) * 0.01
    parameter['b' + str(i)] = np.random.randn(layer_dims[i], 1) * 0.01
```
##### week4-Q10.
Whereas the previous question used a specific network, in the general case what is the dimension of $W^{[l]}$, the weight matrix associated with layer $l$?
<span style="color:blue">
Answer: $W^{[l]}$ has shape $(n^{[l]}, n^{[l-1]})$.
</span>
##### week4-Q9.
Which of the following statements are True? (Check all that apply).
<img src="./images/n2.png" alt="n2.png" style="width: 450px;"/>
1. $W^{[1]}$ will have shape (4, 4). <span style="color:blue">True, shape of $W^{[l]}$ is $(n^{[l]}, n^{[l-1]})$</span>
2. $W^{[2]}$ will have shape (3, 4). <span style="color:blue">True, shape of $W^{[l]}$ is $(n^{[l]}, n^{[l-1]})$</span>
3. $W^{[3]}$ will have shape (1, 3). <span style="color:blue">True, shape of $W^{[l]}$ is $(n^{[l]}, n^{[l-1]})$</span>
4. $b^{[1]}$ will have shape (4, 1). <span style="color:blue">True, shape of $b^{[l]}$ is $(n^{[l]}, 1)$</span>
5. $b^{[2]}$ will have shape (3, 1). <span style="color:blue">True, shape of $b^{[l]}$ is $(n^{[l]}, 1)$</span>
6. $b^{[3]}$ will have shape (1, 1). <span style="color:blue">True, shape of $b^{[l]}$ is $(n^{[l]}, 1)$</span>
##### week3-Q9.
Consider the following 1 hidden layer neural network:

Which of the following statements are True? (Check all that apply).
- $W^{[1]}$ will have shape (4, 2) <span style="color:blue">(True)</span>
- $W^{[2]}$ will have shape (1, 4) <span style="color:blue">(True)</span>
- $b^{[1]}$ will have shape (4, 1) <span style="color:blue">(True)</span>
- $b^{[2]}$ will have shape (1, 1) <span style="color:blue">(True)</span>
##### week2-Q6.
Suppose you have $n_x$ input features per example. Recall that $X = [x^{(1)} x^{(2)} ... x^{(m)}]$. What is the dimension of X?
<span style="color:blue">
Answer: $(n_x, m)$
</span>
##### week3-Q10.
In the same network as the previous question, what are the dimensions of $Z^{[1]}$ and $A^{[1]}$?
<span style="color:blue">
Answer: $Z^{[1]}$ and $A^{[1]}$ are (4, m).
</span>
##### week3-Q4.
You are building a binary classifier for recognizing cucumbers (y=1) vs. watermelons (y=0). Which one of these activation functions would you recommend using for the output layer?
<span style="color:blue">
Answer: sigmoid.
Sigmoid outputs a value between 0 and 1 which makes it a very good choice for binary classification. You can classify as 0 if the output is less than 0.5 and classify as 1 if the output is more than 0.5. It can be done with tanh as well but it is less convenient as the output is between -1 and 1.
</span>
##### week3-Q2.
The tanh activation usually works better than sigmoid activation function for hidden units because the mean of its output is closer to zero, and so it centers the data better for the next layer. True/False?
<span style="color:blue">
Answer: True, as seen in lecture the output of the tanh is between -1 and 1, it thus centers the data which makes the learning simpler for the next layer.
</span>
##### week3-Q8.
You have built a network using the tanh activation for all the hidden units. You initialize the weights to relative large values, using `np.random.randn(..,..)*1000`. What will happen?
<span style="color:blue">
Answer: This will cause the inputs of the tanh to also be very large, thus causing gradients to be close to zero. The optimization algorithm will thus become slow. `tanh` becomes flat for large values, this leads its gradient to be close to zero. This slows down the optimization algorithm.
</span>
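Since the derivative of tanh is $1 - \tanh^2(z)$, a quick check shows how the gradient vanishes for large inputs:
```
import numpy as np

z = np.array([0.1, 1.0, 10.0, 100.0])
grad = 1 - np.tanh(z) ** 2   # derivative of tanh at z
print(grad)                  # shrinks towards 0 as |z| grows
```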
##### week3-Q6.
Suppose you have built a neural network. You decide to initialize the weights and biases to be zero. Which of the following statements is true?
<span style="color:blue">
Answer: Each neuron in the first hidden layer will perform the same computation. So even after multiple iterations of gradient descent each neuron in the layer will be computing the same thing as other neurons.
</span>
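A minimal illustration of that symmetry at the first forward pass (gradient descent then preserves it, as stated above):
```
import numpy as np

X = np.random.randn(3, 5)      # 3 input features, 5 examples
W1 = np.zeros((4, 3))          # 4 hidden units, all weights initialized to zero
b1 = np.zeros((4, 1))
A1 = np.tanh(W1 @ X + b1)      # every hidden unit produces the same output row
print(np.allclose(A1, A1[0]))  # True
```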
##### week3-Q7.
Logistic regression’s weights w should be initialized randomly rather than to all zeros, because if you initialize to all zeros, then logistic regression will fail to learn a useful decision boundary because it will fail to “break symmetry”, True/False?
<span style="color:blue">
Answer: False.
Logistic Regression doesn't have a hidden layer. If you initialize the weights to zeros, the first example x fed in the logistic regression will output zero but the derivatives of the Logistic Regression depend on the input x (because there's no hidden layer) which is not zero. So at the second iteration, the weights values follow x's distribution and are different from each other if x is not a constant vector.
</span>
##### week4-Q1.
What is the "cache" used for in our implementation of forward propagation and backward propagation?
<span style="color:blue">
Answer: We use it to pass variables computed during forward propagation to the corresponding backward propagation step. It contains useful values for backward propagation to compute derivatives. The "cache" records values from the forward propagation units and sends it to the backward propagation units because it is needed to compute the chain rule derivatives.
</span>
##### week4-Q2.
Among the following, which ones are "hyperparameters"? (Check all that apply.)
<span style="color:blue">
Answer: learning rate $\alpha$, number of layers $L$ in the neural network, number of iterations, size of the hidden layers $n^{[l]}$.
</span>
<span style="color:red">
Not hyperparameters: bias vectors $b^{[l]}$, weight matrices $W^{[l]}$, activation values $a^{[l]}$.
</span>
##### week4-Q3.
Which of the following statements is true?
1. The deeper layers of a neural network are typically computing more complex features of the input than the earlier layers.
2. The earlier layers of a neural network are typically computing more complex features of the input than the deeper layers.
<span style="color:blue">
Answer: 1.
</span>
##### week4-Q7.
During forward propagation, in the forward function for a layer l you need to know what is the activation function in a layer (Sigmoid, tanh, ReLU, etc.). During backpropagation, the corresponding backward function also needs to know what is the activation function for layer l, since the gradient depends on it. True/False?
<span style="color:blue">
Answer: True, as you've seen in the week 3 each activation has a different derivative. Thus, during backpropagation you need to know which activation was used in the forward propagation to be able to compute the correct derivative.
</span>
##### week4-Q8.
There are certain functions with the following properties:
(i) To compute the function using a shallow network circuit, you will need a large network (where we measure size by the number of logic gates in the network), but (ii) To compute it using a deep network circuit, you need only an exponentially smaller network. True/False?
<span style="color:blue">
Answer: True.
</span>

---
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
sns.set_style("whitegrid")
plt.style.use("fivethirtyeight")
df = pd.read_csv('diabetes.csv')
df[0:10]
pd.set_option("display.float", "{:.2f}".format)
df.describe()
df.info()
missing_values_count = df.isnull().sum()
total_cells = np.prod(df.shape)
total_missing = missing_values_count.sum()
percentage_missing = (total_missing/total_cells)*100
print(percentage_missing)
from sklearn.ensemble import RandomForestRegressor
x = df.copy()
y = x.pop('Outcome')
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(x,y,test_size=0.20,random_state=0)
from sklearn.metrics import accuracy_score
```
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,Y_train)
Y_pred_lr = lr.predict(X_test)
score_lr = round(accuracy_score(Y_pred_lr,Y_test)*100,2)
print("The accuracy score achieved using Logistic Regression is: "+str(score_lr)+" %")
```
## Naive Bayes
```
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X_train,Y_train)
Y_pred_nb = nb.predict(X_test)
score_nb = round(accuracy_score(Y_pred_nb,Y_test)*100,2)
print("The accuracy score achieved using Naive Bayes is: "+str(score_nb)+" %")
```
## Support Vector Machine
```
from sklearn import svm
sv = svm.SVC(kernel='linear')
sv.fit(X_train, Y_train)
Y_pred_svm = sv.predict(X_test)
score_svm = round(accuracy_score(Y_pred_svm,Y_test)*100,2)
print("The accuracy score achieved using Linear SVM is: "+str(score_svm)+" %")
```
## KNN
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train,Y_train)
Y_pred_knn=knn.predict(X_test)
score_knn = round(accuracy_score(Y_pred_knn,Y_test)*100,2)
print("The accuracy score achieved using KNN is: "+str(score_knn)+" %")
```
## XGBoost
```
import xgboost as xgb
xgb_model = xgb.XGBClassifier(objective="binary:logistic", random_state=42)
xgb_model.fit(X_train, Y_train)
Y_pred_xgb = xgb_model.predict(X_test)
score_xgb = round(accuracy_score(Y_pred_xgb,Y_test)*100,2)
print("The accuracy score achieved using XGBoost is: "+str(score_xgb)+" %")
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
# Neural Network
```
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import Dense
model = Sequential()
model.add(Dense(11,activation='relu',input_dim=8))
model.add(Dense(1,activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
history = model.fit(X_train,Y_train, validation_data=(X_test, Y_test),epochs=200, batch_size=10)
import matplotlib.pyplot as plt
%matplotlib inline
# Model accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Model Loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
Y_pred_nn = model.predict(X_test)
rounded = [round(x[0]) for x in Y_pred_nn]
Y_pred_nn = rounded
score_nn = round(accuracy_score(Y_pred_nn,Y_test)*100,2)
print("The accuracy score achieved using Neural Network is: "+str(score_nn)+" %")
```
# Convolutional Neural Network
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten,BatchNormalization
from tensorflow.keras.layers import Conv1D, MaxPool1D
from tensorflow.keras.optimizers import Adam
print(tf.__version__)
X_train.shape
X_test.shape
x_train = X_train.reshape(614,8,1)
x_test = X_test.reshape(154,8,1)
epochs = 100
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=2, activation='relu', input_shape=(8,1)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Conv1D(filters=32, kernel_size=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Conv1D(filters=32, kernel_size=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.00005),metrics=['accuracy'])
hists = model.fit(x_train, Y_train,validation_data=(x_test, Y_test), epochs=200, verbose=1)
# Model accuracy
plt.plot(hists.history['accuracy'])
plt.plot(hists.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Model Loss
plt.plot(hists.history['loss'])
plt.plot(hists.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Predicting the Test set results
y_pred_cnn = model.predict(x_test)
rounded = [round(x[0]) for x in y_pred_cnn]
Y_pred_cnn = rounded
score_cnn = round(accuracy_score(Y_pred_cnn,Y_test)*100,2)
print("The accuracy score achieved using artificial Neural Network is: "+str(score_cnn)+" %")
```
# Artificial Neural Network
```
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(activation="relu", input_dim=8, units=7, kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(activation="sigmoid", input_dim=8, units=1, kernel_initializer="uniform"))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the training set
hist = classifier.fit(X_train, Y_train,validation_data=(X_test, Y_test), batch_size=10, epochs=500)
# Model accuracy
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Model Loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Predicting the Test set results
y_pred_ann = classifier.predict(X_test)
rounded = [round(x[0]) for x in y_pred_ann]
Y_pred_ann = rounded
score_ann = round(accuracy_score(Y_pred_ann,Y_test)*100,2)
print("The accuracy score achieved using artificial Neural Network is: "+str(score_ann)+" %")
```
## Model with best score
```
scores = [score_lr,score_nb,score_svm,score_knn,score_xgb,score_nn,score_ann,score_cnn]
algorithms = ["Logistic Regression","Naive Bayes","Support Vector Machine","K-Nearest Neighbors","XGBoost","Neural Network","Art. Neural Network","Conv. Neural Network"]
for i in range(len(algorithms)):
print("The accuracy score achieved using "+algorithms[i]+" is: "+str(scores[i])+" %")
sns.set(rc={'figure.figsize':(15,7)})
plt.xlabel("Algorithms")
plt.ylabel("Accuracy score")
sns.barplot(algorithms,scores)
```
---
```
import sys
sys.path.append('../scripts/')
from mcl import *
from kf import *
class EstimatedLandmark(Landmark):
def __init__(self):
super().__init__(0,0)
self.cov = None
def draw(self, ax, elems):
if self.cov is None:
return
        ## draw a blue star at the estimated position ##
c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color="blue")
elems.append(c)
elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10))
        ## draw the error ellipse ##
e = sigma_ellipse(self.pos, self.cov, 3)
elems.append(ax.add_patch(e))
class MapParticle(Particle):
def __init__(self, init_pose, weight, landmark_num):
super().__init__(init_pose, weight)
self.map = Map()
for i in range(landmark_num):
self.map.append_landmark(EstimatedLandmark())
def init_landmark_estimation(self, landmark, z, distance_dev_rate, direction_dev):
landmark.pos = z[0]*np.array([np.cos(self.pose[2] + z[1]), np.sin(self.pose[2] + z[1])]).T + self.pose[0:2]
        H = matH(self.pose, landmark.pos)[0:2,0:2]  # take the 2x2 block H[0:2,0:2] of the Kalman filter's H matrix
Q = matQ(distance_dev_rate*z[0], direction_dev)
landmark.cov = np.linalg.inv(H.T.dot( np.linalg.inv(Q) ).dot(H))
def observation_update_landmark(self, landmark, z, distance_dev_rate, direction_dev): ###fastslam4landestm
        estm_z = IdealCamera.observation_function(self.pose, landmark.pos)  # measurement predicted from the estimated landmark position
        if estm_z[0] < 0.01:  # skip when the estimated position is too close, since the computation becomes unstable
            return
        H = - matH(self.pose, landmark.pos)[0:2,0:2]  # the sign must be consistent here
Q = matQ(distance_dev_rate*estm_z[0], direction_dev)
K = landmark.cov.dot(H.T).dot( np.linalg.inv(Q + H.dot(landmark.cov).dot(H.T)) )
landmark.pos = K.dot(z - estm_z) + landmark.pos
landmark.cov = (np.eye(2) - K.dot(H)).dot(landmark.cov)
def observation_update(self, observation, distance_dev_rate, direction_dev): ###fastslam4obsupdate
for d in observation:
z = d[0]
landmark = self.map.landmarks[d[1]]
if landmark.cov is None:
self.init_landmark_estimation(landmark, z, distance_dev_rate, direction_dev)
            else:  # added
self.observation_update_landmark(landmark, z, distance_dev_rate, direction_dev)
class FastSlam(Mcl):
def __init__(self, init_pose, particle_num, landmark_num, motion_noise_stds={"nn":0.19, "no":0.001, "on":0.13, "oo":0.2},\
distance_dev_rate=0.14, direction_dev=0.05):
super().__init__(None, init_pose, particle_num, motion_noise_stds, distance_dev_rate, direction_dev)
self.particles = [MapParticle(init_pose, 1.0/particle_num, landmark_num) for i in range(particle_num)]
self.ml = self.particles[0]
def observation_update(self, observation):
for p in self.particles:
            p.observation_update(observation, self.distance_dev_rate, self.direction_dev)  # self.map argument removed
self.set_ml()
self.resampling()
def draw(self, ax, elems):
super().draw(ax, elems)
self.ml.map.draw(ax, elems)
def trial():
time_interval = 0.1
world = World(30, time_interval, debug=False)
    ### create the true map ###
m = Map()
for ln in [(-4,2), (2,-3), (3,3)]: m.append_landmark(Landmark(*ln))
world.append(m)
    ### create the robot ###
init_pose = np.array([0,0,0]).T
pf = FastSlam(init_pose,100, len(m.landmarks))
a = EstimationAgent(time_interval, 0.2, 10.0/180*math.pi, pf)
r = Robot(init_pose, sensor=Camera(m), agent=a, color="red")
world.append(r)
world.draw()
trial()
#a.estimator.particles[10].map.landmarks[2].cov
#math.sqrt(0.0025)
```
---
Put the `coveval` folder on our path so its modules can be imported easily:
```
import sys
sys.path.append('../')
```
# Load data
```
from coveval import utils
from coveval.connectors import generic
```
Let's load some data for the state of New York and look at the number of daily fatalities:
```
df_reported = utils.get_outbreak_data_usa(prefix='reported_').loc['US-NY']
df_predicted = generic.load_predictions('../data/demo/predictions.json', prefix='predicted_')
data = utils.add_outbreak_data(df_predicted, df_reported)[['reported_deathIncrease', 'predicted_incDeath']]
_ = utils.show_data(df=data,
cols=['reported_deathIncrease', 'predicted_incDeath'],
t_min='2020-02',
t_max='2021-01',
colors={'cols':{'reported_deathIncrease': '#85C5FF', 'predicted_incDeath': '#FFAD66'}},
linewidths={'reported_deathIncrease': 3, 'predicted_incDeath': 3},
show_leg={'reported_deathIncrease': 'reported', 'predicted_incDeath': 'predicted'},
y_label='daily fatalities',
x_label='date',
figsize=(11,5))
```
# Smooth reported values
```
from coveval.core import smoothing
```
We want to smooth out noise in the reported data due to reporting errors and delays. To do so we can use, for instance, the "missed case" smoother.
It is also useful to smooth out high-frequency noise in the predictions made by stochastic models, since it does not correspond to a useful signal and pollutes trend comparisons. A simple low-pass filter is appropriate here.
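As a purely generic illustration of what such a low-pass filter does (this is not the `coveval` API, which is used in the next cell), a Gaussian kernel from `scipy` damps day-to-day noise while preserving the underlying trend:
```
# Generic sketch only: a Gaussian kernel acts as a simple low-pass filter.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
trend = np.linspace(0, 100, 120)                 # slow underlying signal
noisy = trend + rng.normal(scale=10, size=120)   # high-frequency "reporting" noise
smoothed = gaussian_filter1d(noisy, sigma=2)     # same sigma as the smoother below
```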
```
# define smoothers
smoothers = {'missed': smoothing.missed_cases(cost_missing=.1, cost_der1=10, cost_der2=1),
'gaussian': smoothing.gaussian(sigma=2)}
# smooth reported data
col = 'reported_deathIncrease'
s_name = 'missed'
smoothers[s_name].smooth_df(data, col, inplace=True)
data.rename(columns={col + '_smoothed' : col + '_smoothed_' + s_name}, inplace=True)
# smooth predictions
col = 'predicted_incDeath'
s_name = 'gaussian'
smoothers[s_name].smooth_df(data, col, inplace=True)
data.rename(columns={col + '_smoothed' : col + '_smoothed_' + s_name}, inplace=True)
_ = utils.show_data(data,
cols=['reported_deathIncrease_smoothed_missed','predicted_incDeath_smoothed_gaussian'],
scatter=['reported_deathIncrease'],
colors={'cols':{'reported_deathIncrease_smoothed_missed': '#85C5FF',
'predicted_incDeath_smoothed_gaussian': '#FFAD66'}},
date_auto=False,
t_min='2020-03',
t_max='2020-06-20',
show_leg={'reported_deathIncrease': 'reported',
'reported_deathIncrease_smoothed_missed': 'reported smoothed "missed"',
'predicted_incDeath_smoothed_gaussian': 'predicted smoothed "gaussian"'},
y_label='daily fatalities',
x_label='date',
figsize=(11,5))
```
# Normalise predicted values
```
from coveval.core import normalising
```
The goal here is to avoid repeatedly punishing predictions made by a model due to e.g. the model getting the start of the outbreak wrong.
```
normaliser_scaling = normalising.dynamic_scaling()
normaliser_scaling.normalise_df(df=data,
col_truth='reported_deathIncrease_smoothed_missed',
col_pred='predicted_incDeath_smoothed_gaussian',
inplace=True)
# let's store the difference between the truth and the normalised predictions
data['predicted_incDeath_smoothed_gaussian_norm_match'] = data['reported_deathIncrease_smoothed_missed'] - data['predicted_incDeath_smoothed_gaussian_norm']
fig = utils.show_normalisation(data,
truth='reported_deathIncrease',
pred_raw='predicted_incDeath_smoothed_gaussian',
pred_norm='predicted_incDeath_smoothed_gaussian_norm',
pred_match='predicted_incDeath_smoothed_gaussian_norm_match',
truth_smoothed='reported_deathIncrease_smoothed_missed')
```
# Compare to truth
```
from coveval.core import losses
```
Now we can use a simple Poisson loss to judge how well each normalised prediction compares to the reported data and compute an overall score that can be compared with that of other models.
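For reference, for a predicted daily rate $\lambda$ and a reported count $k$, the Poisson negative log-likelihood is $\ell(\lambda, k) = \lambda - k \log \lambda + \log k!$, where the last term does not depend on $\lambda$ and is often dropped; the overall score computed below is simply the mean of these per-day losses. (Whether `losses.poisson` keeps the constant term is an implementation detail of `coveval` not shown here.)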
```
poisson_loss = losses.poisson()
poisson_loss.compute_df(df=data,
col_truth='reported_deathIncrease_smoothed_missed',
col_pred='predicted_incDeath_smoothed_gaussian_norm',
inplace=True)
data['predicted_incDeath_smoothed_gaussian_norm_loss'].mean()
```
# All in one: scorer
```
from coveval.scoring import scorer
```
The scorer object performs the above operations in a single call, with the exception of smoothing the predictions, since for some models this is not necessary.
```
default_scorer = scorer(smoother=smoothers['missed'],
normaliser=normaliser_scaling,
loss=poisson_loss)
results = default_scorer.score_df(df=data,
col_truth='reported_deathIncrease',
col_pred='predicted_incDeath_smoothed_gaussian')
# the average loss is the score (so the closer to 0 the better)
results['score']
```
|
github_jupyter
|
```
# default_exp dl_101
```
# Deep learning 101 with Pytorch and fastai
> Some code and text snippets have been extracted from the book ["Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD"](https://course.fast.ai/), and from these blog posts [[ref1](https://muellerzr.github.io/fastblog/2021/02/14/Pytorchtofastai.html)].
```
#hide
from nbdev.showdoc import *
from fastcore.all import *
# export
import torch
from torch.utils.data import TensorDataset
import matplotlib.pyplot as plt
import torch.nn as nn
from wwdl.utils import *
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
device
```
## Linear regression model in Pytorch
### Datasets and Dataloaders
We'll create a dataset that contains $(x, y)$ pairs sampled from the linear function $y = ax + b + \epsilon$. To do this, we'll use PyTorch's `TensorDataset`.
A PyTorch tensor is nearly the same thing as a NumPy array. The vast majority of methods and operators supported by NumPy on these structures are also supported by PyTorch, but PyTorch tensors have additional capabilities. One major capability is that these structures can live on the GPU, in which case their computation will be optimized for the GPU and can run much faster (given lots of values to work on). In addition, PyTorch can automatically calculate derivatives of these operations, including combinations of operations. These two things are critical for deep learning.
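Here is a minimal sketch of those two capabilities, reusing the `device` defined above:
```
# Move a tensor to the GPU (when available) and let autograd compute derivatives.
t = torch.tensor([2.0, 3.0], requires_grad=True, device=device)
f = (t ** 2).sum()   # f = t0^2 + t1^2
f.backward()         # autograd fills t.grad with df/dt = 2*t
t.grad               # tensor([4., 6.]), living on `device`
```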
```
# export
def linear_function_dataset(a, b, n=100, show_plot=False):
r"""
    Creates a PyTorch `TensorDataset` with `n` random samples of the
    linear function y = `a`*x + `b`. `show_plot` decides whether or not to
plot the dataset
"""
x = torch.randn(n, 1)
y = a*x + b + 0.1*torch.randn(n, 1)
if show_plot:
show_TensorFunction1D(x, y, marker='.')
return TensorDataset(x, y)
a = 2
b = 3
n = 100
data = linear_function_dataset(a, b, n, show_plot=True)
test_eq(type(data), TensorDataset)
```
In every machine/deep learning experiment, we need to have at least two datasets:
- training: used to train the model
- validation: used to validate the model after each training step. It allows to detect overfitting and adjust the hyperparameters of the model properly
```
train_ds = linear_function_dataset(a, b, n=100, show_plot=True)
valid_ds = linear_function_dataset(a, b, n=20, show_plot=True)
```
A dataloader combines a dataset and a sampler that samples data into **batches**, and provides an iterable over the given dataset.
```
bs = 10
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=bs, shuffle=False)
for i, data in enumerate(train_dl, 1):
x, y = data
print(f'batch {i}: x={x.shape} ({x.device}), y={y.shape} ({y.device})')
```
### Defining a linear regression model in Pytorch
The class `torch.nn.Module` is the base structure for all models in Pytorch. It mostly helps to register all the trainable parameters. A module is an object of a class that inherits from the PyTorch `nn.Module` class.
To implement an `nn.Module` you just need to:
- Make sure the superclass `__init__` is called first when you initialize it.
- Define any parameters of the model as attributes with `nn.Parameter`. To tell `Module` that we want to treat a tensor as a parameter, we have to wrap it in the `nn.Parameter` class. All PyTorch modules use `nn.Parameter` for any trainable parameters. This class doesn't actually add any functionality (other than automatically setting `requires_grad=True` on the wrapped tensor). It's only used as a "marker" to show what to include in `parameters()`.
- Define a forward function that returns the output of your model.
```
#export
class LinRegModel(nn.Module):
def __init__(self):
super().__init__()
self.a = nn.Parameter(torch.randn(1))
self.b = nn.Parameter(torch.randn(1))
def forward(self, x): return self.a*x + self.b
model = LinRegModel()
pa, pb = model.parameters()
pa, pa.shape, pb, pb.shape
```
Objects of this class behave identically to standard Python functions, in that you can call them using parentheses and they will return the activations of a model.
```
x = torch.randn(10, 1)
out = model(x)
x, x.shape, out, out.shape
```
### Loss function and optimizer
The loss is what the machine uses as its measure of performance to decide how to update the model parameters. For a regression problem the loss function is simple enough: we'll just use the Mean Squared Error (MSE).
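For reference, over a batch of $n$ pairs the MSE is $\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2$, where $\hat{y}_i$ is the model output and $y_i$ the target; `nn.MSELoss()` computes exactly this mean by default.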
```
loss_func = nn.MSELoss()
loss_func(x, out)
```
We have data, a model, and a loss function; we only need one more thing before we can fit a model, and that's an optimizer.
```
opt_func = torch.optim.SGD(model.parameters(), lr = 1e-3)
```
### Training loop
During training, we need to push our model and our batches to the GPU. Calling `cuda()` (or `.to(device)`) on a model or a tensor moves all of its parameters to the GPU:
```
model = model.to(device)
```
To train a model, we will need to compute all the gradients of a given loss with respect to its parameters, which is known as the *backward pass*. The *forward pass* is where we compute the output of the model on a given input, based on the matrix products. PyTorch computes all the gradients we need with a magic call to `loss.backward`. The backward pass is the chain rule applied multiple times, computing the gradients from the output of our model and going back, one layer at a time.
In Pytorch, each basic function we need to differentiate is written as a `torch.autograd.Function` object that has a `forward` and a `backward` method. PyTorch will then keep track of any computation we do so it can properly run the backward pass, unless we set the `requires_grad` attribute of our tensors to `False`.
For minibatch gradient descent (the usual way of training in deep learning), we calculate gradients on batches. Before moving onto the next batch, we modify our model's parameters based on the gradients. For each iteration through our dataset (which would be called an **epoch**), the optimizer would perform as many updates as we have batches.
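Concretely, for plain SGD with learning rate $\eta$ (the `lr` passed to the optimizer), each of these updates sets $\theta \leftarrow \theta - \eta \, \partial L / \partial \theta$ for every parameter $\theta$, where $L$ is the loss on the current batch.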
There are two important methods in a Pytorch optimizer:
- `zero_grad`: In PyTorch, we need to set the gradients to zero before starting to do backpropagation because PyTorch accumulates the gradients on subsequent backward passes. `zero_grad` just loops through the parameters of the model and sets the gradients to zero. It also calls `detach_`, which removes any history of gradient computation, since it won't be needed after `zero_grad`.
- `step`: performs a single optimization step, updating each parameter in the direction opposite to its gradient, scaled by the learning rate.
```
n_epochs = 10
# export
def train(model, device, train_dl, loss_func, opt_func, epoch_idx):
r"""
Train `model` for one epoch, whose index is given in `epoch_idx`. The
training loop will iterate through all the batches of `train_dl`, using
    the loss function given in `loss_func` and the optimizer given in `opt_func`
"""
running_loss = 0.0
batches_processed = 0
for batch_idx, (x, y) in enumerate(train_dl, 1):
x, y = x.to(device), y.to(device) # Push data to GPU
opt_func.zero_grad() # Reset gradients
# Forward pass
output = model(x)
loss = loss_func(output, y)
# Backward pass
loss.backward()
# Optimizer step
opt_func.step()
# print statistics
running_loss += loss.item()
batches_processed += 1
    print(f'Train loss [Epoch {epoch_idx}]: {running_loss/batches_processed : .2f}')
for epoch in range(1, n_epochs+1):
train(model, device, train_dl, loss_func, opt_func, epoch)
```
We can see how the parameters of the regression model are getting closer to the truth values `a` and `b` from the linear function.
```
L(model.named_parameters())
```
### Validating the model
Validating the model requires only a forward pass; it's just inference. Disabling gradient calculation with `torch.no_grad()` is useful for inference, when you are sure that you will not call `Tensor.backward()`.
```
#export
def validate(model, device, dl):
running_loss = 0.
total_batches = 0
with torch.no_grad():
        for x, y in dl:
x, y = x.to(device), y.to(device)
output = model(x)
loss = loss_func(output, y)
running_loss += loss.item()
total_batches += 1
print(f'Valid loss: {running_loss/total_batches : .2f}')
validate(model, device, valid_dl)
```
In order to spot overfitting, it is useful to validate the model after each training epoch.
```
for epoch in range(1, n_epochs +1):
train(model, device, train_dl, loss_func, opt_func, epoch)
validate(model, device, valid_dl)
```
## Abstracting the manual training loop: moving from Pytorch to fastai
```
from fastai.basics import *
from fastai.callback.progress import ProgressCallback
```
We can entirely replace the custom training loop with fastai's. That means you can get rid of `train()`, `validate()`, and the epoch loop in the original code, and replace it all with a couple of lines.
fastai's training loop lives in a `Learner`. The `Learner` is the glue that merges everything together (datasets, dataloaders, model and optimizer) and enables training by just calling a `fit` function.
fastai's `Learner` expects a `DataLoaders` object to be used, rather than simply one `DataLoader`, so let's make that. We could just do `dls = DataLoaders(train_dl, valid_dl)` to keep the PyTorch dataloaders. However, by using fastai `DataLoader`s instead, created directly from the `TensorDataset` objects, we get some automation, such as automatically pushing the data to the GPU.
```
dls = DataLoaders.from_dsets(train_ds, valid_ds, bs=10)
learn = Learner(dls, model=LinRegModel(), loss_func=nn.MSELoss(), opt_func=SGD)
```
Now we have everything needed to do a basic `fit`:
```
learn.fit(10, lr=1e-3)
```
Having a Learner allows us to easily gather the model predictions for the validation set, which we can use for visualisation and analysis.
```
inputs, preds, outputs = learn.get_preds(with_input=True)
inputs.shape, preds.shape, outputs.shape
show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.')
```
## Building a simple neural network
For the next example, we will create the dataset by sampling values from the nonlinear function $y(x) = -\frac{1}{100}x^7 - x^4 - 2x^2 - 4x + 1$
```
# export
def nonlinear_function_dataset(n=100, show_plot=False):
r"""
    Creates a PyTorch `TensorDataset` with `n` random samples of the
nonlinear function y = (-1/100)*x**7 -x**4 -2*x**2 -4*x + 1 with a bit
of noise. `show_plot` decides whether or not to plot the dataset
"""
x = torch.rand(n, 1)*20 - 10 # Random values between [-10 and 10]
y = (-1/100)*x**7 -x**4 -2*x**2 -4*x + 1 + 0.1*torch.randn(n, 1)
if show_plot:
show_TensorFunction1D(x, y, marker='.')
return TensorDataset(x, y)
n = 100
ds = nonlinear_function_dataset(n, show_plot=True)
x, y = ds.tensors
test_eq(x.shape, y.shape)
```
We will create the training and validation datasets, and build the DataLoaders from them, this time directly in fastai, using the `DataLoaders.from_dsets` method.
```
train_ds = nonlinear_function_dataset(n=1000)
valid_ds = nonlinear_function_dataset(n=200)
```
Normalization in deep learning is used to make optimization easier by smoothing the loss surface of the network. We will normalize the targets based on the mean and std of the training dataset.
```
norm_mean = train_ds.tensors[1].mean()
norm_std = train_ds.tensors[1].std()
train_ds_norm = TensorDataset(train_ds.tensors[0],
(train_ds.tensors[1] - norm_mean)/norm_std)
valid_ds_norm = TensorDataset(valid_ds.tensors[0],
(valid_ds.tensors[1] - norm_mean)/norm_std)
dls = DataLoaders.from_dsets(train_ds_norm, valid_ds_norm, bs = 32)
```
We will build a Multi Layer Perceptron with 3 hidden layers. These networks are also known as Feed-Forward Neural Networks. The layers of this type of network are known as Fully Connected Layers because, between every subsequent pair of layers, all the neurons are connected to each other.
<img alt="Neural network architecture" caption="Neural network" src="https://i.imgur.com/5ZWPtRS.png">
The easiest way of wrapping several layers in Pytorch is using the `nn.Sequential` module. It creates a module with a `forward` method that will call each of the listed layers or functions in turn, without us having to do the loop manually in the forward pass.
```
#export
class MLP3(nn.Module):
r"""
Multilayer perceptron with 3 hidden layers, with sizes `nh1`, `nh2` and
`nh3` respectively.
"""
def __init__(self, n_in=1, nh1=200, nh2=100, nh3=50, n_out=1):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(n_in, nh1),
nn.ReLU(),
nn.Linear(nh1, nh2),
nn.ReLU(),
nn.Linear(nh2, nh3),
nn.ReLU(),
nn.Linear(nh3, n_out)
)
def forward(self, x): return self.layers(x)
x, y = dls.one_batch()
model = MLP3()
output = model(x)
output.shape
learn = Learner(dls, MLP3(), loss_func=nn.MSELoss(), opt_func=Adam)
learn.fit(10, lr=1e-3)
inputs, preds, outputs = learn.get_preds(with_input = True)
show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.')
```
Let's compare these results with those obtained by our previous linear regression model.
```
learn_lin = Learner(dls, LinRegModel(), loss_func=nn.MSELoss(), opt_func=Adam)
learn_lin.fit(20, lr=1e-3)
inputs, preds, outputs = learn_lin.get_preds(with_input = True)
show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.')
```
## Export
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
|
github_jupyter
|
# CNN - Example 01
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
### Load Keras Dataset
```
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```
#### Visualize data
```
print(x_train.shape)
single_image = x_train[0]
print(single_image.shape)
plt.imshow(single_image)
```
### Pre-Process data
#### One Hot encode
```
# One-hot encode the labels; otherwise the network would treat them as a continuous regression target
from tensorflow.keras.utils import to_categorical
print("Shape before one hot encoding" +str(y_train.shape))
y_example = to_categorical(y_train)
print(y_example)
print("Shape after one hot encoding" +str(y_train.shape))
y_example[0]
y_cat_test = to_categorical(y_test,10)
y_cat_train = to_categorical(y_train,10)
```
#### Normalize the images
```
x_train = x_train/255
x_test = x_test/255
scaled_single = x_train[0]
plt.imshow(scaled_single)
```
#### Reshape the images
```
# Reshape to include channel dimension (in this case, 1 channel)
# x_train.shape
x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)
```
### Image data augmentation
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
help(ImageDataGenerator)
```
```
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
datagen.fit(x_train)
it = datagen.flow(x_train, y_cat_train, batch_size=32)
# Preparing the Samples and Plot for displaying output
for i in range(9):
# preparing the subplot
plt.subplot(330 + 1 + i)
# generating images in batches
batch = it.next()
# Remember to convert these images to unsigned integers for viewing
image = batch[0][0].astype('uint8')
# Plotting the data
plt.imshow(image)
# Displaying the figure
plt.show()
```
### Model # 1
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, Flatten
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(4,4), input_shape=(28, 28, 1), activation='relu',))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
```
Note: if `y` is not one-hot encoded, use `loss='sparse_categorical_crossentropy'` instead.
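As a hypothetical illustration (not used in the rest of this notebook, which sticks with the one-hot labels), compiling with the sparse loss would look like this and would then take the integer labels `y_train`/`y_test` directly:
```
# Hypothetical alternative: integer labels with the sparse loss, instead of the
# one-hot encoded y_cat_train/y_cat_test used below.
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
# model.fit(x_train, y_train, ...) would then be called with integer labels.
```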
```
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy', 'categorical_accuracy'])
# we can add in additional metrics https://keras.io/metrics/
model.summary()
```
#### Add Early Stopping
```
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2)
```
##### Training using one hot encoding
```
# fits the model on batches with real-time data augmentation:
history = model.fit(datagen.flow(x_train, y_cat_train, batch_size=32),
epochs=10,
steps_per_epoch=len(x_train) / 32,
validation_data=(x_test,y_cat_test),
callbacks=[early_stop])
```
#### Save model
```
from tensorflow.keras.models import load_model
model_file = 'D:\\Sandbox\\Github\\MODELS\\' + '01_mnist.h5'
model.save(model_file)
```
#### Retrieve model
```
model = load_model(model_file)
```
#### Evaluate
Rule of thumb:
1. High bias: accuracy = 80%, val-accuracy = 78% (2% gap)
2. High variance: accuracy = 98%, val-accuracy = 80% (18% gap)
3. High bias and high variance: accuracy = 80%, val-accuracy = 60% (20% gap)
4. Low bias and low variance: accuracy = 98%, val-accuracy = 96% (2% gap)
#### Eval - Train
```
model.metrics_names
pd.DataFrame(history.history).head()
#pd.DataFrame(model.history.history).head()
pd.DataFrame(history.history).plot()
losses = pd.DataFrame(history.history)
losses[['loss','val_loss']].plot()
losses[['accuracy','val_accuracy']].plot()
# Plot loss per iteration
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
# Plot accuracy per iteration
plt.plot(history.history['accuracy'], label='acc')
plt.plot(history.history['val_accuracy'], label='val_acc')
plt.legend()
```
#### Eval - Test
```
test_metrics = model.evaluate(x_test,y_cat_test,verbose=1)
print('Loss on test dataset:', test_metrics[0])
print('Accuracy on test dataset:', test_metrics[1])
print("Loss and Accuracy on Train dataset:")
pd.DataFrame(history.history).tail(1)
```
As it turns out, the accuracy on the test dataset is smaller than the accuracy on the training dataset.
This is completely normal, since the model was trained on the `train_dataset`.
When the model sees images it has never seen during training, (that is, from the `test_dataset`),
we can expect performance to go down.
#### Prediction
```
y_prediction = np.argmax(model.predict(x_test), axis=-1)
```
#### Reports
```
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test, y_prediction))
print(confusion_matrix(y_test, y_prediction))
```
- Recall (sensitivity): important when false negatives are costly, e.g. fraud detection, where you want to catch every real fraud case.
- Precision: important when false positives are costly, e.g. sentiment analysis or spam filtering, where incorrectly flagging an item is expensive.
- F1 score: the harmonic mean of precision and recall; higher is better when comparing two or more models.
- Accuracy: higher is better.
- Error: 1 - accuracy.
Ideally we want both precision and recall to be 1, but there is a trade-off between them: improving one usually degrades the other.
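As a quick numeric illustration of these quantities (reusing `y_test` and `y_prediction` from above; the macro average is one reasonable choice for this 10-class problem):
```
# Summary scores behind the classification_report printed above.
from sklearn.metrics import precision_score, recall_score, f1_score
precision = precision_score(y_test, y_prediction, average='macro')
recall = recall_score(y_test, y_prediction, average='macro')
f1 = f1_score(y_test, y_prediction, average='macro')
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```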
```
import seaborn as sns
plt.figure(figsize=(10,6))
sns.heatmap(confusion_matrix(y_test,y_prediction),annot=True)
```
#### Predictions go wrong!
```
# Show some misclassified examples
misclassified_idx = np.where(y_prediction != y_test)[0]
i = np.random.choice(misclassified_idx)
plt.imshow(x_test[i].reshape(28,28), cmap='gray')
plt.title("True label: %s Predicted: %s" % (y_test[i], y_prediction[i]));
```
#### Final thoughts
```
print("Percentage of wrong predcitions : " + str(len(misclassified_idx)/len(y_prediction)*100) + " %")
print("Models maximum accuracy : " + str(np.max(history.history['accuracy'])*100) + " %")
print("Models maximum validation accuracy : " + str(np.max(history.history['val_accuracy'])*100) + " %")
```
The model has low bias and high variance, with a gap of more than 29%. The recall is also poor. Image augmentation doesn't help here: augmentation with rotation and tilting hurts because each digit has a unique, orientation-dependent shape.
|
github_jupyter
|
## Data Distillation
In this notebook we train models using data distillation: a model trained on the labelled images is used to label the unlabelled ones by averaging its predictions over several augmented versions of each image (flips, blur, gamma correction), and the images whose averaged prediction exceeds a confidence threshold are added to the training set before retraining.
```
from google.colab import drive
drive.mount('/content/drive')
from google.colab import files
uploaded = files.upload()
!unzip dataset.zip -d dataset
import warnings
import os
import shutil
import glob
import random
import cv2
from fastai.vision import *
from fastai.utils.mem import *
warnings.filterwarnings("ignore", category=UserWarning, module="torch.nn.functional")
dataset="dataset"
classesPaths=sorted(glob.glob(dataset+'/*'))
classes=[pt.split(os.sep)[-1] for pt in classesPaths if os.path.isdir(pt)]
images=[pt for pt in classesPaths if not os.path.isdir(pt)]
os.makedirs(dataset+'/train')
os.makedirs(dataset+'/valid')
os.makedirs(dataset+'/images')
for im in images:
shutil.move(im,dataset+'/images/')
for cl in classes:
os.mkdir(dataset+'/train/'+cl)
images=sorted(glob.glob(dataset+'/'+cl+'/*'))
for i in range(int(len(images)*0.75)):
images=sorted(glob.glob(dataset+'/'+cl+'/*'))
j=random.randint(0,len(images)-1)
shutil.move(images[j],dataset+'/train/'+cl)
os.mkdir(dataset+'/valid/'+cl)
images=sorted(glob.glob(dataset+'/'+cl+'/*'))
for i in range(len(images)):
shutil.move(images[i],dataset+'/valid/'+cl)
def learn_with_model(dataset,model):
data=ImageDataBunch.from_folder(dataset,
ds_tfms=get_transforms(), size=224,bs=32).normalize(imagenet_stats)
learn = cnn_learner(data, model, metrics=accuracy)
learn.fit_one_cycle(2)
learn.unfreeze()
learn.lr_find()
lr=learn.recorder.lrs[np.argmin(learn.recorder.losses)]
if lr<1e-05:
lr=1e-03
learn.fit_one_cycle(8,max_lr=slice(lr/100,lr))
return learn,data
def moda(lista):
tam=len(lista[0][2])
x=np.zeros(tam)
for l in lista:
x=x+l[2].numpy()
x=x/len(lista)
maximo=x.argmax()
return maximo, x[maximo]
def omniData(dataset,learn,th):
images=sorted(glob.glob(dataset+"/images/*"))
for image in images:
im=cv2.imread(image,1)
im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
lista=[]
n=Image(pil2tensor(im, dtype=np.float32).div_(255))
pn=learn.predict(n)
lista.append(pn)
h_im=cv2.flip(im,0)
h=Image(pil2tensor(h_im, dtype=np.float32).div_(255))
ph=learn.predict(h)
lista.append(ph)
v_im=cv2.flip(im,1)
v=Image(pil2tensor(v_im, dtype=np.float32).div_(255))
pv=learn.predict(v)
lista.append(pv)
b_im=cv2.flip(im,-1)
b=Image(pil2tensor(b_im, dtype=np.float32).div_(255))
pb=learn.predict(b)
lista.append(pb)
blur_im=cv2.blur(im,(5,5))
blur=Image(pil2tensor(blur_im, dtype=np.float32).div_(255))
pblur=learn.predict(blur)
lista.append(pblur)
invGamma=1.0
table=np.array([((i/255.0)**invGamma)*255 for i in np.arange(0,256)]).astype('uint8')
gamma_im=cv2.LUT(im,table)
gamma=Image(pil2tensor(gamma_im, dtype=np.float32).div_(255))
pgamma=learn.predict(gamma)
lista.append(pgamma)
gblur_im=cv2.GaussianBlur(im,(5,5),cv2.BORDER_DEFAULT)
gblur=Image(pil2tensor(gblur_im, dtype=np.float32).div_(255))
pgblur=learn.predict(gblur)
lista.append(pgblur)
mod, predMax=moda(lista)
if predMax>th:
shutil.copyfile(image,dataset+'/train/'+data.classes[mod]+'/'+data.classes[mod]+'_'+image.split('/')[-1])
os.remove(image)
print(image+" --> "+dataset+'/train/'+data.classes[mod]+'/'+data.classes[mod]+'_'+image.split('/')[-1])
learner_resnet50,data=learn_with_model(dataset,models.resnet50)
shutil.copytree(dataset, 'dataset_resnet50')
omniData('dataset_resnet50',learner_resnet50,0)
learnerDD_resnet50,data=learn_with_model('dataset_resnet50',models.resnet50)
learnerDD_resnet50.export('/content/drive/My Drive/learnerDD_resnet50.pkl')
learner_resnet34,data=learn_with_model(dataset,models.resnet34)
shutil.copytree(dataset, 'dataset_resnet34')
omniData('dataset_resnet34',learner_resnet34,0)
learnerDD_resnet34,data=learn_with_model('dataset_resnet34',models.resnet34)
learnerDD_resnet34.export('/content/drive/My Drive/learnerDD_resnet34.pkl')
learner_resnet101,data=learn_with_model(dataset,models.resnet101)
shutil.copytree(dataset, 'dataset_resnet101')
omniData('dataset_resnet101',learner_resnet101,0)
learnerDD_resnet101,data=learn_with_model('dataset_resnet101',models.resnet101)
learnerDD_resnet101.export('/content/drive/My Drive/learnerDD_resnet101.pkl')
```
|
github_jupyter
|
```
# default_exp utils_blitz
```
# utils_blitz
> API details.
```
#export
#hide
from blitz.modules import BayesianLinear
from blitz.modules import BayesianEmbedding, BayesianConv1d, BayesianConv2d, BayesianConv3d
from blitz.modules.base_bayesian_module import BayesianModule
from torch import nn
import torch
from fastcore.basics import patch
import warnings
@patch
def extra_repr(self: BayesianLinear):
return f"Shape: {list(self.weight_sampler.mu.shape)}"
@patch
def extra_repr(self: BayesianConv1d):
    return f"in_channels={self.in_channels}, out_channels={self.out_channels}, kernel_size={self.kernel_size}, stride={self.stride}"
@patch
def extra_repr(self: BayesianConv2d):
    return f"in_channels={self.in_channels}, out_channels={self.out_channels}, kernel_size={self.kernel_size}, stride={self.stride}"
@patch
def extra_repr(self: BayesianConv3d):
    return f"in_channels={self.in_channels}, out_channels={self.out_channels}, kernel_size={self.kernel_size}, stride={self.stride}"
#export
def convert_layer_to_bayesian(layer, config: dict):
if isinstance(layer, torch.nn.Linear):
new_layer = BayesianLinear(
layer.in_features,
layer.out_features,
prior_sigma_1=config["prior_sigma_1"],
prior_sigma_2=config["prior_sigma_2"],
prior_pi=config["prior_pi"],
posterior_mu_init=config["posterior_mu_init"],
posterior_rho_init=config["posterior_rho_init"],
)
elif isinstance(layer, nn.Embedding):
new_layer = BayesianEmbedding(
layer.num_embeddings,
layer.embedding_dim,
prior_sigma_1=config["prior_sigma_1"],
prior_sigma_2=config["prior_sigma_2"],
prior_pi=config["prior_pi"],
posterior_mu_init=config["posterior_mu_init"],
posterior_rho_init=config["posterior_rho_init"],
)
elif isinstance(layer, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
matching_class = BayesianConv1d
kernel_size = layer.kernel_size[0]
if type(layer) == nn.Conv2d:
kernel_size = layer.kernel_size
matching_class = BayesianConv2d
elif type(layer) == nn.Conv3d:
matching_class = BayesianConv3d
kernel_size = layer.kernel_size
new_layer = matching_class(
layer.in_channels,
layer.out_channels,
kernel_size=kernel_size,
groups=layer.groups,
padding=layer.padding,
dilation=layer.dilation,
prior_sigma_1=config["prior_sigma_1"],
prior_sigma_2=config["prior_sigma_2"],
prior_pi=config["prior_pi"],
posterior_mu_init=config["posterior_mu_init"],
posterior_rho_init=config["posterior_rho_init"],
)
    else:
        warnings.warn(
            f"Could not find correct type for conversion of layer {layer} with type {type(layer)}"
        )
        new_layer = layer
return new_layer
config = {"prior_sigma_1":0.1,
"prior_sigma_2":0.4,
"prior_pi":1,
"posterior_mu_init":0,
"posterior_rho_init":-7}
convert_layer_to_bayesian(nn.Linear(10,2), config)
convert_layer_to_bayesian(nn.Embedding(10,2), config)
convert_layer_to_bayesian(nn.Conv1d(1,2,3), config)
convert_layer_to_bayesian(nn.Conv2d(1,2,(3,3)), config)
convert_layer_to_bayesian(nn.Conv3d(1, 2, (3, 3, 3)), config)
#export
def convert_to_bayesian_model(model, config: dict):
for p in model.named_children():
cur_layer_name = p[0]
cur_layer = p[1]
if len(list(cur_layer.named_children())) > 0:
convert_to_bayesian_model(cur_layer, config)
elif not isinstance(cur_layer, BayesianModule):
new_layer = convert_layer_to_bayesian(cur_layer, config)
setattr(model, cur_layer_name, new_layer)
return model
convert_to_bayesian_model(nn.Sequential(nn.Linear(3,4), nn.Linear(4,1)), config)
#export
def set_train_mode(model, mode):
if isinstance(model, BayesianModule):
model.freeze = not mode
for module in model.children():
set_train_mode(module, mode)
```
|
github_jupyter
|
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import statistics
rep5_04_002_data = pd.read_csv('proc_rep5_04_002.csv')
del rep5_04_002_data['Unnamed: 0']
rep5_04_002_data
rgg_rgg_data = rep5_04_002_data.copy()
rgg_rand_data = rep5_04_002_data.copy()
rand_rgg_data = rep5_04_002_data.copy()
rand_rand_data = rep5_04_002_data.copy()
rgg_rgg_drop_list = []
rgg_rand_drop_list = []
rand_rgg_drop_list = []
rand_rand_drop_list = []
for i in range(400):
if i % 4 == 0:
rgg_rand_drop_list.append(i)
rand_rgg_drop_list.append(i)
rand_rand_drop_list.append(i)
elif i % 4 == 1:
rgg_rgg_drop_list.append(i)
rand_rgg_drop_list.append(i)
rand_rand_drop_list.append(i)
elif i % 4 == 2:
rgg_rgg_drop_list.append(i)
rgg_rand_drop_list.append(i)
rand_rand_drop_list.append(i)
elif i % 4 == 3:
rgg_rgg_drop_list.append(i)
rgg_rand_drop_list.append(i)
rand_rgg_drop_list.append(i)
rgg_rgg_data = rgg_rgg_data.drop(rgg_rgg_drop_list)
rgg_rand_data = rgg_rand_data.drop(rgg_rand_drop_list)
rand_rgg_data = rand_rgg_data.drop(rand_rgg_drop_list)
rand_rand_data = rand_rand_data.drop(rand_rand_drop_list)
rgg_rgg_data = rgg_rgg_data.reset_index(drop=True)
rgg_rand_data = rgg_rand_data.reset_index(drop=True)
rand_rgg_data = rand_rgg_data.reset_index(drop=True)
rand_rand_data = rand_rand_data.reset_index(drop=True)
rgg_rgg_data
rgg_rgg_for_rand_data
rgg_rgg_dict = {}
rgg_rand_dict = {}
rand_rgg_dict = {}
rand_rand_dict = {}
for i in range(49):
target = [i*5 + 0, i*5 + 1, i*5 + 2, i*5 + 3, i*5 + 4]
temp_rgg_rgg = rgg_rgg_data[i*5 + 0 : i*5 + 5]
temp_rgg_rand = rgg_rand_data[i*5 + 0 : i*5 + 5]
temp_rand_rgg = rand_rgg_data[i*5 + 0 : i*5 + 5]
temp_rand_rand = rand_rand_data[i*5 + 0 : i*5 + 5]
if i == 0:
rgg_rgg_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())]
rgg_rgg_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())]
rgg_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())]
rgg_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())]
rand_rgg_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())]
rand_rgg_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())]
rand_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())]
rand_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())]
else:
rgg_rgg_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist()))
rgg_rgg_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist()))
rgg_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist()))
rgg_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist()))
rand_rgg_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist()))
rand_rgg_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist()))
rand_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist()))
rand_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist()))
rgg_rgg_for_rand_dict = {}
rgg_rand_for_rand_dict = {}
rand_rgg_for_rand_dict = {}
rand_rand_for_rand_dict = {}
for i in range(21):
target = [i*5 + 0, i*5 + 1, i*5 + 2, i*5 + 3, i*5 + 4]
temp_rgg_rgg = rgg_rgg_for_rand_data[i*5 + 0 : i*5 + 5]
temp_rgg_rand = rgg_rand_for_rand_data[i*5 + 0 : i*5 + 5]
temp_rand_rgg = rand_rgg_for_rand_data[i*5 + 0 : i*5 + 5]
temp_rand_rand = rand_rand_for_rand_data[i*5 + 0 : i*5 + 5]
if i == 0:
rgg_rgg_for_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())]
rgg_rgg_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())]
rgg_rand_for_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())]
rgg_rand_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())]
rand_rgg_for_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())]
rand_rgg_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())]
rand_rand_for_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())]
rand_rand_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())]
else:
rgg_rgg_for_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist()))
rgg_rgg_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist()))
rgg_rand_for_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist()))
rgg_rand_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist()))
rand_rgg_for_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist()))
rand_rgg_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist()))
rand_rand_for_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist()))
rand_rand_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist()))
plt.plot(rgg_rgg_dict['intra_thres'], rgg_rgg_dict['alive_nodes'])
plt.plot(rgg_rgg_dict['intra_thres'], rgg_rand_dict['alive_nodes'])
plt.plot(rgg_rgg_dict['intra_thres'], rand_rgg_dict['alive_nodes'])
plt.plot(rgg_rgg_dict['intra_thres'], rand_rand_dict['alive_nodes'])
plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand'])
plt.title('Mean Alive nodes')
plt.show()
plt.plot(rgg_rgg_for_rand_dict['intra_thres'], rgg_rgg_for_rand_dict['alive_nodes'])
plt.plot(rgg_rgg_for_rand_dict['intra_thres'], rgg_rand_for_rand_dict['alive_nodes'])
plt.plot(rgg_rgg_for_rand_dict['intra_thres'], rand_rgg_for_rand_dict['alive_nodes'])
plt.plot(rgg_rgg_for_rand_dict['intra_thres'], rand_rand_for_rand_dict['alive_nodes'])
plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand'])
plt.title('Mean Alive nodes')
plt.show()
step_nums = []
step_nums.append(statistics.mean(rgg_rgg_data['cas_steps'].values.tolist()))
step_nums.append(statistics.mean(rgg_rand_data['cas_steps'].values.tolist()))
step_nums.append(statistics.mean(rand_rgg_data['cas_steps'].values.tolist()))
step_nums.append(statistics.mean(rand_rand_data['cas_steps'].values.tolist()))
index = np.arange(4)
graph_types = ['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']
plt.bar(index, step_nums, width=0.3, color='gray')
plt.xticks(index, graph_types)
plt.title('Number of steps')
plt.savefig('The number of steps.png')
plt.show()
rgg_rgg_isol = []
rgg_rgg_unsupp = []
rgg_rand_isol = []
rgg_rand_unsupp = []
rand_rgg_isol = []
rand_rgg_unsupp = []
rand_rand_isol = []
rand_rand_unsupp =[]
index = 1
for col_name in rgg_rgg_data:
if col_name == ('step%d_isol' % index):
rgg_rgg_isol.append(statistics.mean(rgg_rgg_data[col_name].values.tolist()))
if col_name == ('step%d_unsupp' % index):
rgg_rgg_unsupp.append(statistics.mean(rgg_rgg_data[col_name].values.tolist()))
index += 1
index = 1
for col_name in rgg_rand_data:
if col_name == ('step%d_isol' % index):
rgg_rand_isol.append(statistics.mean(rgg_rand_data[col_name].values.tolist()))
if col_name == ('step%d_unsupp' % index):
rgg_rand_unsupp.append(statistics.mean(rgg_rand_data[col_name].values.tolist()))
index += 1
index = 1
for col_name in rand_rgg_data:
if col_name == ('step%d_isol' % index):
rand_rgg_isol.append(statistics.mean(rand_rgg_data[col_name].values.tolist()))
if col_name == ('step%d_unsupp' % index):
rand_rgg_unsupp.append(statistics.mean(rand_rgg_data[col_name].values.tolist()))
index += 1
index = 1
for col_name in rand_rand_data:
if col_name == ('step%d_isol' % index):
rand_rand_isol.append(statistics.mean(rand_rand_data[col_name].values.tolist()))
if col_name == ('step%d_unsupp' % index):
rand_rand_unsupp.append(statistics.mean(rand_rand_data[col_name].values.tolist()))
index += 1
print(len(rgg_rgg_isol))
print(len(rgg_rgg_unsupp))
print(len(rgg_rand_isol))
print(len(rgg_rand_unsupp))
print(len(rand_rgg_isol))
print(len(rand_rgg_unsupp))
print(len(rand_rand_isol))
print(len(rand_rand_unsupp))
cum_rgg_rgg_isol = []
cum_rgg_rgg_unsupp = []
cum_rgg_rand_isol = []
cum_rgg_rand_unsupp = []
cum_rand_rgg_isol = []
cum_rand_rgg_unsupp = []
cum_rand_rand_isol = []
cum_rand_rand_unsupp = []
total = []
for i in range(len(rgg_rgg_isol)):
if i == 0:
total.append(rgg_rgg_isol[i])
total.append(rgg_rgg_unsupp[i])
else:
total[0] += rgg_rgg_isol[i]
total[1] += rgg_rgg_unsupp[i]
cum_rgg_rgg_isol.append(total[0])
cum_rgg_rgg_unsupp.append(total[1])
total = []
for i in range(len(rgg_rand_isol)):
if i == 0:
total.append(rgg_rand_isol[i])
total.append(rgg_rand_unsupp[i])
else:
total[0] += rgg_rand_isol[i]
total[1] += rgg_rand_unsupp[i]
cum_rgg_rand_isol.append(total[0])
cum_rgg_rand_unsupp.append(total[1])
total = []
for i in range(len(rand_rgg_isol)):
if i == 0:
total.append(rand_rgg_isol[i])
total.append(rand_rgg_unsupp[i])
else:
total[0] += rand_rgg_isol[i]
total[1] += rand_rgg_unsupp[i]
cum_rand_rgg_isol.append(total[0])
cum_rand_rgg_unsupp.append(total[1])
total = []
for i in range(len(rand_rand_isol)):
if i == 0:
total.append(rand_rand_isol[i])
total.append(rand_rand_unsupp[i])
else:
total[0] += rand_rand_isol[i]
total[1] += rand_rand_unsupp[i]
cum_rand_rand_isol.append(total[0])
cum_rand_rand_unsupp.append(total[1])
```
## Isolation vs Unsupport
```
plt.plot(range(len(cum_rgg_rgg_isol)), cum_rgg_rgg_isol)
plt.plot(range(len(cum_rgg_rgg_isol)), cum_rgg_rgg_unsupp)
plt.legend(['rgg_rgg_isol','rgg_rgg_unsupp'])
plt.title('Isolation vs Unsupport: RGG-RGG')
plt.savefig('Isolation vs Unsupport_RGG-RGG.png')
plt.show()
plt.plot(range(len(cum_rgg_rand_isol)), cum_rgg_rand_isol)
plt.plot(range(len(cum_rgg_rand_isol)), cum_rgg_rand_unsupp)
plt.legend(['rgg_rand_isol','rgg_rand_unsupp'])
plt.title('Isolation vs Unsupport: RGG-Rand')
plt.savefig('Isolation vs Unsupport_RGG-Rand.png')
plt.show()
plt.plot(range(len(cum_rand_rgg_isol)), cum_rand_rgg_isol)
plt.plot(range(len(cum_rand_rgg_isol)), cum_rand_rgg_unsupp)
plt.legend(['rand_rgg_isol','rand_rgg_unsupp'])
plt.title('Isolation vs Unsupport: Rand-RGG')
plt.savefig('Isolation vs Unsupport_Rand-RGG.png')
plt.show()
plt.plot(range(len(cum_rand_rand_isol)), cum_rand_rand_isol)
plt.plot(range(len(cum_rand_rand_isol)), cum_rand_rand_unsupp)
plt.legend(['rand_rand_isol','rand_rand_unsupp'])
plt.title('Isolation vs Unsupport: Rand-Rand')
plt.savefig('Isolation vs Unsupport_Rand-Rand.png')
plt.show()
df_len = []
df_len.append(list(rgg_rgg_isol))
df_len.append(list(rgg_rand_isol))
df_len.append(list(rand_rgg_isol))
df_len.append(list(rand_rand_isol))
max_df_len = max(df_len, key=len)
x_val = list(range(len(max_df_len)))
proc_isol = []
proc_unsupp = []
proc_isol.append(cum_rgg_rgg_isol)
proc_isol.append(cum_rgg_rand_isol)
proc_isol.append(cum_rand_rgg_isol)
proc_isol.append(cum_rand_rand_isol)
proc_unsupp.append(cum_rgg_rgg_unsupp)
proc_unsupp.append(cum_rgg_rand_unsupp)
proc_unsupp.append(cum_rand_rgg_unsupp)
proc_unsupp.append(cum_rand_rand_unsupp)
for x in x_val:
if len(rgg_rgg_isol) <= x:
proc_isol[0].append(cum_rgg_rgg_isol[len(rgg_rgg_isol) - 1])
proc_unsupp[0].append(cum_rgg_rgg_unsupp[len(rgg_rgg_isol) - 1])
if len(rgg_rand_isol) <= x:
proc_isol[1].append(cum_rgg_rand_isol[len(rgg_rand_isol) - 1])
proc_unsupp[1].append(cum_rgg_rand_unsupp[len(rgg_rand_isol) - 1])
if len(rand_rgg_isol) <= x:
proc_isol[2].append(cum_rand_rgg_isol[len(rand_rgg_isol) - 1])
proc_unsupp[2].append(cum_rand_rgg_unsupp[len(rand_rgg_isol) - 1])
if len(rand_rand_isol) <= x:
proc_isol[3].append(cum_rand_rand_isol[len(rand_rand_isol) - 1])
proc_unsupp[3].append(cum_rand_rand_unsupp[len(rand_rand_isol) - 1])
plt.plot(x_val, proc_isol[0])
plt.plot(x_val, proc_isol[1])
plt.plot(x_val, proc_isol[2])
plt.plot(x_val, proc_isol[3])
plt.legend(['rgg_rgg_isol','rgg_rand_isol', 'rand_rgg_isol', 'rand_rand_isol'])
plt.title('Isolation trend')
plt.show()
plt.plot(x_val, proc_unsupp[0])
plt.plot(x_val, proc_unsupp[1])
plt.plot(x_val, proc_unsupp[2])
plt.plot(x_val, proc_unsupp[3])
plt.legend(['rgg_rgg_unsupp','rgg_rand_unsupp', 'rand_rgg_unsupp', 'rand_rand_unsupp'])
plt.title('Unsupport trend')
plt.show()
```
## Pie Chart
```
init_death = 150
labels = ['Alive nodes', 'Initial death', 'Dead nodes from isolation', 'Dead nodes from unsupport']
alive = []
alive.append(statistics.mean(rgg_rgg_data['alive_nodes']))
alive.append(statistics.mean(rgg_rand_data['alive_nodes']))
alive.append(statistics.mean(rand_rgg_data['alive_nodes']))
alive.append(statistics.mean(rand_rand_data['alive_nodes']))
tot_isol = []
tot_isol.append(statistics.mean(rgg_rgg_data['tot_isol_node']))
tot_isol.append(statistics.mean(rgg_rand_data['tot_isol_node']))
tot_isol.append(statistics.mean(rand_rgg_data['tot_isol_node']))
tot_isol.append(statistics.mean(rand_rand_data['tot_isol_node']))
tot_unsupp = []
tot_unsupp.append(statistics.mean(rgg_rgg_data['tot_unsupp_node']))
tot_unsupp.append(statistics.mean(rgg_rand_data['tot_unsupp_node']))
tot_unsupp.append(statistics.mean(rand_rgg_data['tot_unsupp_node']))
tot_unsupp.append(statistics.mean(rand_rand_data['tot_unsupp_node']))
deaths = [alive[0], init_death, tot_isol[0], tot_unsupp[0]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('RGG-RGG death trend')
plt.show()
deaths = [alive[1], init_death, tot_isol[1], tot_unsupp[1]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('RGG-Rand death trend')
plt.show()
deaths = [alive[2], init_death, tot_isol[2], tot_unsupp[2]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('Rand-RGG death trend')
plt.show()
deaths = [alive[3], init_death, tot_isol[3], tot_unsupp[3]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('Rand-Rand death trend')
plt.show()
```
## Compute the number of nodes
```
x_val = np.arange(4)
labels = ['initial', 'final']
plt.bar(x_val, alive)
plt.xticks(x_val, graph_types)
plt.title('Alive nodes')
plt.savefig('alive nodes.png')
plt.show()
```
## Compare the number of edges
```
init_intra = []
init_intra.append(statistics.mean(rgg_rgg_data['init_intra_edge']))
init_intra.append(statistics.mean(rgg_rand_data['init_intra_edge']))
init_intra.append(statistics.mean(rand_rgg_data['init_intra_edge']))
init_intra.append(statistics.mean(rand_rand_data['init_intra_edge']))
init_inter = []
init_inter.append(statistics.mean(rgg_rgg_data['init_inter_edge']))
init_inter.append(statistics.mean(rgg_rand_data['init_inter_edge']))
init_inter.append(statistics.mean(rand_rgg_data['init_inter_edge']))
init_inter.append(statistics.mean(rand_rand_data['init_inter_edge']))
init_supp = []
init_supp.append(statistics.mean(rgg_rgg_data['init_supp_edge']))
init_supp.append(statistics.mean(rgg_rand_data['init_supp_edge']))
init_supp.append(statistics.mean(rand_rgg_data['init_supp_edge']))
init_supp.append(statistics.mean(rand_rand_data['init_supp_edge']))
fin_intra = []
fin_intra.append(statistics.mean(rgg_rgg_data['fin_intra_edge']))
fin_intra.append(statistics.mean(rgg_rand_data['fin_intra_edge']))
fin_intra.append(statistics.mean(rand_rgg_data['fin_intra_edge']))
fin_intra.append(statistics.mean(rand_rand_data['fin_intra_edge']))
fin_inter = []
fin_inter.append(statistics.mean(rgg_rgg_data['fin_inter_edge']))
fin_inter.append(statistics.mean(rgg_rand_data['fin_inter_edge']))
fin_inter.append(statistics.mean(rand_rgg_data['fin_inter_edge']))
fin_inter.append(statistics.mean(rand_rand_data['fin_inter_edge']))
fin_supp = []
fin_supp.append(statistics.mean(rgg_rgg_data['fin_supp_edge']))
fin_supp.append(statistics.mean(rgg_rand_data['fin_supp_edge']))
fin_supp.append(statistics.mean(rand_rgg_data['fin_supp_edge']))
fin_supp.append(statistics.mean(rand_rand_data['fin_supp_edge']))
plt.bar(x_val-0.1, init_intra, width=0.2)
plt.bar(x_val+0.1, fin_intra, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_intra_edge vs Final_intra_edge')
plt.show()
plt.bar(x_val-0.1, init_inter, width=0.2)
plt.bar(x_val+0.1, fin_inter, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_inter_edge vs Final_inter_edge')
plt.show()
plt.bar(x_val-0.1, init_supp, width=0.2)
plt.bar(x_val+0.1, fin_supp, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_support_edge vs Final_support_edge')
plt.show()
```
## Network Analysis
```
init_far = []
init_far.append(statistics.mean(rgg_rgg_data['init_far_node']))
init_far.append(statistics.mean(rgg_rand_data['init_far_node']))
init_far.append(statistics.mean(rand_rgg_data['init_far_node']))
init_far.append(statistics.mean(rand_rand_data['init_far_node']))
fin_far = []
fin_far.append(statistics.mean(rgg_rgg_data['fin_far_node']))
fin_far.append(statistics.mean(rgg_rand_data['fin_far_node']))
fin_far.append(statistics.mean(rand_rgg_data['fin_far_node']))
fin_far.append(statistics.mean(rand_rand_data['fin_far_node']))
plt.bar(x_val-0.1, init_far, width=0.2)
plt.bar(x_val+0.1, fin_far, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_far_node vs Final_far_node')
plt.show()
init_clust = []
init_clust.append(statistics.mean(rgg_rgg_data['init_clust']))
init_clust.append(statistics.mean(rgg_rand_data['init_clust']))
init_clust.append(statistics.mean(rand_rgg_data['init_clust']))
init_clust.append(statistics.mean(rand_rand_data['init_clust']))
fin_clust = []
fin_clust.append(statistics.mean(rgg_rgg_data['fin_clust']))
fin_clust.append(statistics.mean(rgg_rand_data['fin_clust']))
fin_clust.append(statistics.mean(rand_rgg_data['fin_clust']))
fin_clust.append(statistics.mean(rand_rand_data['fin_clust']))
plt.bar(x_val-0.1, init_clust, width=0.2)
plt.bar(x_val+0.1, fin_clust, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_clustering_coefficient vs Final_clustering_coefficient')
plt.show()
init_mean_deg = []
init_mean_deg.append(statistics.mean(rgg_rgg_data['init_mean_deg']))
init_mean_deg.append(statistics.mean(rgg_rand_data['init_mean_deg']))
init_mean_deg.append(statistics.mean(rand_rgg_data['init_mean_deg']))
init_mean_deg.append(statistics.mean(rand_rand_data['init_mean_deg']))
fin_mean_deg = []
fin_mean_deg.append(statistics.mean(rgg_rgg_data['fin_mean_deg']))
fin_mean_deg.append(statistics.mean(rgg_rand_data['fin_mean_deg']))
fin_mean_deg.append(statistics.mean(rand_rgg_data['fin_mean_deg']))
fin_mean_deg.append(statistics.mean(rand_rand_data['fin_mean_deg']))
plt.bar(x_val-0.1, init_mean_deg, width=0.2)
plt.bar(x_val+0.1, fin_mean_deg, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_mean_degree vs Final_mean_degree')
plt.show()
init_larg_comp = []
init_larg_comp.append(statistics.mean(rgg_rgg_data['init_larg_comp']))
init_larg_comp.append(statistics.mean(rgg_rand_data['init_larg_comp']))
init_larg_comp.append(statistics.mean(rand_rgg_data['init_larg_comp']))
init_larg_comp.append(statistics.mean(rand_rand_data['init_larg_comp']))
fin_larg_comp = []
fin_larg_comp.append(statistics.mean(rgg_rgg_data['fin_larg_comp']))
fin_larg_comp.append(statistics.mean(rgg_rand_data['fin_larg_comp']))
fin_larg_comp.append(statistics.mean(rand_rgg_data['fin_larg_comp']))
fin_larg_comp.append(statistics.mean(rand_rand_data['fin_larg_comp']))
plt.bar(x_val-0.1, init_larg_comp, width=0.2)
plt.bar(x_val+0.1, fin_larg_comp, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_largest_component_size vs Final_largest_component_size')
plt.show()
deg_assort = []
a = rgg_rgg_data['deg_assort'].fillna(0)
b = rgg_rand_data['deg_assort'].fillna(0)
c = rand_rgg_data['deg_assort'].fillna(0)
d = rand_rand_data['deg_assort'].fillna(0)
deg_assort.append(statistics.mean(a))
deg_assort.append(statistics.mean(b))
deg_assort.append(statistics.mean(c))
deg_assort.append(statistics.mean(d))
plt.bar(x_val, deg_assort)
plt.xticks(x_val, graph_types)
plt.title('Degree Assortativity')
plt.show()
dist_deg_cent = []
dist_deg_cent.append(statistics.mean(rgg_rgg_data['dist_deg_cent']))
dist_deg_cent.append(statistics.mean(rgg_rand_data['dist_deg_cent']))
dist_deg_cent.append(statistics.mean(rand_rgg_data['dist_deg_cent']))
dist_deg_cent.append(statistics.mean(rand_rand_data['dist_deg_cent']))
plt.bar(x_val, dist_deg_cent)
plt.xticks(x_val, graph_types)
plt.title('Distance to degree centre from the attack point')
plt.show()
dist_bet_cent = []
dist_bet_cent.append(statistics.mean(rgg_rgg_data['dist_bet_cent']))
dist_bet_cent.append(statistics.mean(rgg_rand_data['dist_bet_cent']))
dist_bet_cent.append(statistics.mean(rand_rgg_data['dist_bet_cent']))
dist_bet_cent.append(statistics.mean(rand_rand_data['dist_bet_cent']))
plt.bar(x_val, dist_bet_cent)
plt.xticks(x_val, graph_types)
plt.title('Distance to betweenness centre from the attack point')
plt.show()
```
|
github_jupyter
|
# Convolutional Neural Network Example with Layer-by-Layer Visualization
```
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
print ("当前TensorFlow版本为 [%s]" % (tf.__version__))
print ("所有包载入完毕")
```
## Load MNIST
```
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print ("MNIST ready")
```
## Define the Model
```
# NETWORK TOPOLOGIES
n_input = 784
n_channel = 64
n_classes = 10
# INPUTS AND OUTPUTS
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# NETWORK PARAMETERS
stddev = 0.1
weights = {
'c1': tf.Variable(tf.random_normal([7, 7, 1, n_channel], stddev=stddev)),
'd1': tf.Variable(tf.random_normal([14*14*64, n_classes], stddev=stddev))
}
biases = {
'c1': tf.Variable(tf.random_normal([n_channel], stddev=stddev)),
'd1': tf.Variable(tf.random_normal([n_classes], stddev=stddev))
}
print ("NETWORK READY")
```
## Define the Graph
```
# MODEL
def CNN(_x, _w, _b):
# RESHAPE
_x_r = tf.reshape(_x, shape=[-1, 28, 28, 1])
# CONVOLUTION
_conv1 = tf.nn.conv2d(_x_r, _w['c1'], strides=[1, 1, 1, 1], padding='SAME')
# ADD BIAS
_conv2 = tf.nn.bias_add(_conv1, _b['c1'])
# RELU
_conv3 = tf.nn.relu(_conv2)
# MAX-POOL
_pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# VECTORIZE
_dense = tf.reshape(_pool, [-1, _w['d1'].get_shape().as_list()[0]])
# DENSE
_logit = tf.add(tf.matmul(_dense, _w['d1']), _b['d1'])
_out = {
'x_r': _x_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3
, 'pool': _pool, 'dense': _dense, 'logit': _logit
}
return _out
# PREDICTION
cnnout = CNN(x, weights, biases)
# LOSS AND OPTIMIZER
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=y, logits=cnnout['logit']))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
corr = tf.equal(tf.argmax(cnnout['logit'], 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(corr, "float"))
# INITIALIZER
init = tf.global_variables_initializer()
print ("FUNCTIONS READY")
```
## Saving the Model
```
savedir = "nets/cnn_mnist_simple/"
saver = tf.train.Saver(max_to_keep=3)
save_step = 4
if not os.path.exists(savedir):
os.makedirs(savedir)
print ("SAVER READY")
```
## Run
```
# PARAMETERS
training_epochs = 20
batch_size = 100
display_step = 4
# LAUNCH THE GRAPH
sess = tf.Session()
sess.run(init)
# OPTIMIZE
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# ITERATION
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
feeds = {x: batch_xs, y: batch_ys}
sess.run(optm, feed_dict=feeds)
avg_cost += sess.run(cost, feed_dict=feeds)
avg_cost = avg_cost / total_batch
# DISPLAY
if (epoch+1) % display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch+1, training_epochs, avg_cost))
feeds = {x: batch_xs, y: batch_ys}
train_acc = sess.run(accr, feed_dict=feeds)
print ("TRAIN ACCURACY: %.3f" % (train_acc))
feeds = {x: mnist.test.images, y: mnist.test.labels}
test_acc = sess.run(accr, feed_dict=feeds)
print ("TEST ACCURACY: %.3f" % (test_acc))
# SAVE
if (epoch+1) % save_step == 0:
savename = savedir+"net-"+str(epoch+1)+".ckpt"
saver.save(sess, savename)
print ("[%s] SAVED." % (savename))
print ("OPTIMIZATION FINISHED")
```
## Restore
```
do_restore = 0
if do_restore == 1:
sess = tf.Session()
epoch = 20
savename = savedir+"net-"+str(epoch)+".ckpt"
saver.restore(sess, savename)
print ("NETWORK RESTORED")
else:
print ("DO NOTHING")
```
## How the CNN Works
```
input_r = sess.run(cnnout['x_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(cnnout['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(cnnout['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(cnnout['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(cnnout['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(cnnout['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(cnnout['logit'], feed_dict={x: trainimg[0:1, :]})
```
## Input
```
print ("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print ("Label is %d" % (label))
# PLOT
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
```
# CONV (Convolution Layer)
```
print ("SIZE OF 'CONV1' IS %s" % (conv1.shape,))
for i in range(3):
plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv1")
plt.colorbar()
plt.show()
```
## CONV + BIAS
```
print ("SIZE OF 'CONV2' IS %s" % (conv2.shape,))
for i in range(3):
plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv2")
plt.colorbar()
plt.show()
```
## CONV + BIAS + RELU
```
print ("SIZE OF 'CONV3' IS %s" % (conv3.shape,))
for i in range(3):
plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv3")
plt.colorbar()
plt.show()
```
## POOL
```
print ("SIZE OF 'POOL' IS %s" % (pool.shape,))
for i in range(3):
plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th pool")
plt.colorbar()
plt.show()
```
## DENSE
```
print ("SIZE OF 'DENSE' IS %s" % (dense.shape,))
print ("SIZE OF 'OUT' IS %s" % (out.shape,))
plt.matshow(out, cmap=plt.get_cmap('gray'))
plt.title("OUT")
plt.colorbar()
plt.show()
```
## CONVOLUTION FILTER (Convolution Kernels)
```
wc1 = sess.run(weights['c1'])
print ("SIZE OF 'WC1' IS %s" % (wc1.shape,))
for i in range(3):
plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv filter")
plt.colorbar()
plt.show()
```
# TV Script Generation
In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).
## Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
```
## Explore the Data
Play around with `view_sentence_range` to view different parts of the data.
```
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```
## Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`
```
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
    # Build the vocabulary once so both mappings enumerate the words in the same order
    vocab = set(text)
    vocab_to_int = {word: i for i, word in enumerate(vocab)}
    int_to_vocab = {i: word for i, word in enumerate(vocab)}
    return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
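As a quick optional sanity check (the toy word list below is made up and is not part of the project tests), the two dictionaries should round-trip every word in the vocabulary:
```
word_list = "moe opens the tavern".split()
v2i, i2v = create_lookup_tables(word_list)
assert all(i2v[v2i[word]] == word for word in word_list)
```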
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a space delimiter around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
return {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
"\n": '||Return||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
## Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
### Check the Version of TensorFlow and Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
### Input
Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple `(Input, Targets, LearningRate)`
```
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')
learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
```
### Build RNN Cell and Initialize
Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).
- The RNN size should be set using `rnn_size`
- Initialize the cell state using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function
- Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)
Return the cell and initial state in the following tuple `(Cell, InitialState)`
```
def get_init_cell(batch_size, rnn_size, keep_prob=0.8, layers=3):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
    # Build a separate LSTM cell (with dropout) for each layer; reusing one cell
    # object via [cell] * layers would share weights and fails in newer TF 1.x releases.
    def make_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    multi = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(layers)])
init_state = multi.zero_state(batch_size, tf.float32)
init_state = tf.identity(init_state, 'initial_state')
return multi, init_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
```
### Word Embedding
Apply embedding to `input_data` using TensorFlow. Return the embedded sequence.
```
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
```
### Build RNN
You created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.
- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
- Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)
Return the outputs and final state in the following tuple `(Outputs, FinalState)`
```
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, 'final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
```
### Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.
- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.
- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
```
def build_nn(cell, rnn_size, input_data, vocab_size):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
```
### Batches
Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:
- The first element is a single batch of **input** with the shape `[batch size, sequence length]`
- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`
If you can't fill the last batch with enough data, drop the last batch.
For example, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)` would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
```
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
    n_batches = len(int_text) // (batch_size * seq_length)
    result = []
    for i in range(n_batches):
        inputs = []
        targets = []
        for j in range(batch_size):
            # Row j of every batch walks through its own contiguous slice of the text:
            # sequence j starts at j * n_batches * seq_length, and batch i advances by i * seq_length.
            idx = j * n_batches * seq_length + i * seq_length
            inputs.append(int_text[idx:idx + seq_length])
            targets.append(int_text[idx + 1:idx + seq_length + 1])
        result.append([inputs, targets])
    # Shape: (number of batches, 2, batch size, sequence length)
    return np.array(result)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
```
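As an optional check (not part of the graded tests), the example from the description above can be reproduced directly:
```
demo = get_batches(list(range(1, 16)), 2, 3)
print(demo.shape)    # expected: (2, 2, 2, 3)
print(demo[0][0])    # expected: [[1 2 3] [7 8 9]]
```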
## Neural Network Training
### Hyperparameters
Tune the following parameters:
- Set `num_epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `seq_length` to the length of sequence.
- Set `learning_rate` to the learning rate.
- Set `show_every_n_batches` to the number of batches the neural network should print progress.
```
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 50
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
```
### Build the Graph
Build the graph using the neural network you implemented.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
```
## Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
```
## Save Parameters
Save `seq_length` and `save_dir` for generating a new TV script.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
```
# Checkpoint
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
```
## Implement Generate Functions
### Get Tensors
Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)`
```
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
inputs = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, init_state, final_state, probs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
```
### Choose Word
Implement the `pick_word()` function to select the next word using `probabilities`.
```
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
```
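An optional variation (my own suggestion, not required by the tests): sampling the next word from the probability distribution instead of always taking the argmax tends to produce less repetitive scripts.
```
def pick_word_sampled(probabilities, int_to_vocab):
    """Sample the next word id in proportion to its probability (illustrative alternative)."""
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]
```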
## Generate TV Script
This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.
```
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
```
# The TV Script is Nonsensical
It's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
```
%run startup.py
%%javascript
$.getScript('./assets/js/ipython_notebook_toc.js')
```
# A Decision Tree of Observable Operators
## Part 1: NEW Observables.
> source: http://reactivex.io/documentation/operators.html#tree.
> (transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, [axiros](http://www.axiros.com))
**This tree can help you find the ReactiveX Observable operator you’re looking for.**
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
## Usage
There are no behind-the-scenes imports or configuration except [`startup.py`](./edit/startup.py), which defines output helper functions, mainly:
- `rst, reset_start_time`: resets a global timer, in order to have use cases starting from 0.
- `subs(observable)`: subscribes to an observable, printing notifications with time, thread, value
All other code is explicitly given in the notebook.
Since all initialisation of tools happens in the first cell, you always have to run the first cell again after an IPython kernel restart.
**All other cells are autonomous.**
In the use case functions, in contrast to the official examples we simply use **`rand`** quite often (mapped to `randint(0, 100)`), to demonstrate when/how often observable sequences are generated and when their result is buffered for various subscribers.
*When in doubt then run the cell again, you might have been "lucky" and got the same random.*
### RxJS
The (bold printed) operator functions are linked to the [official documentation](http://reactivex.io/documentation/operators.html#tree) and created roughly analogous to the **RxJS** examples. The rest of the TOC lines links to anchors within the notebooks.
### Output
When the output is not in marble format we display it like so:
```
new subscription on stream 276507289
3.4 M [next] 1.4: {'answer': 42}
3.5 T1 [cmpl] 1.6: fin
```
where the lines are synchronously `print`ed as they happen. "M" and "T1" are thread names ("M" is the main thread).
For each use case in `reset_start_time()` (alias `rst`), a global timer is set to 0 and we show the offset to it, in *milliseconds* & with one decimal value and also the offset to the start of stream subscription. In the example 3.4, 3.5 are millis since global counter reset, while 1.4, 1.6 are offsets to start of subscription.
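To make these conventions concrete, here is a minimal sketch of what such helpers could look like (an illustration only; the real `rst`/`subs` live in [`startup.py`](./edit/startup.py) and also print the offset relative to subscription start). The names `my_rst`/`my_subs` are chosen to avoid shadowing the real helpers:
```
import time, threading

_t_zero = [time.time()]

def my_rst(*args, **kwargs):
    # reset the global timer, analogous to reset_start_time / rst
    _t_zero[0] = time.time()

def my_subs(observable, name='observer'):
    # subscribe and print each notification with elapsed millis and thread name
    def stamp(tag, value):
        ms = (time.time() - _t_zero[0]) * 1000.0
        print('%5.1f %s [%s] %s: %s' % (ms, threading.current_thread().name, tag, name, value))
    return observable.subscribe(
        on_next=lambda v: stamp('next', v),
        on_error=lambda e: stamp('err ', e),
        on_completed=lambda: stamp('cmpl', 'fin'))
```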
# I want to create a **NEW** Observable...
## ... that emits a particular item: **[just](http://reactivex.io/documentation/operators/just.html) **
```
reset_start_time(O.just)
stream = O.just({'answer': rand()})
disposable = subs(stream)
sleep(0.5)
disposable = subs(stream) # same answer
# all stream ops work, its a real stream:
disposable = subs(stream.map(lambda x: x.get('answer', 0) * 2))
```
## ...that was returned from a function *called at subscribe-time*: **[start](http://reactivex.io/documentation/operators/start.html)**
```
print('There is a little API difference to RxJS, see Remarks:\n')
rst(O.start)
def f():
log('function called')
return rand()
stream = O.start(func=f)
d = subs(stream)
d = subs(stream)
header("Exceptions are handled correctly (an observable should never except):")
def breaking_f():
return 1 / 0
stream = O.start(func=breaking_f)
d = subs(stream)
d = subs(stream)
# startasync: only in python3 and possibly here(?) http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.Future
#stream = O.start_async(f)
#d = subs(stream)
```
## ...that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time: **[from](http://reactivex.io/documentation/operators/from.html)**
```
rst(O.from_iterable)
def f():
log('function called')
return rand()
# aliases: O.from_, O.from_list
# 1.: From a tuple:
stream = O.from_iterable((1,2,rand()))
d = subs(stream)
# d = subs(stream) # same result
# 2. from a generator
gen = (rand() for j in range(3))
stream = O.from_iterable(gen)
d = subs(stream)
rst(O.from_callback)
# in my words: In the on_next of the subscriber you'll have the original arguments,
# potentially objects, e.g. user original http requests.
# i.e. you could merge those with the result stream of a backend call to
# a webservice or db and send the request.response back to the user then.
def g(f, a, b):
f(a, b)
log('called f')
stream = O.from_callback(lambda a, b, f: g(f, a, b))('fu', 'bar')
d = subs(stream.delay(200))
# d = subs(stream.delay(200)) # does NOT work
```
## ...after a specified delay: **[timer](http://reactivex.io/documentation/operators/timer.html)**
```
rst()
# start a stream of 0, 1, 2, .. after 200 ms, with a delay of 100 ms:
stream = O.timer(200, 100).time_interval()\
.map(lambda x: 'val:%s dt:%s' % (x.value, x.interval))\
.take(3)
d = subs(stream, name='observer1')
# intermix directly with another one
d = subs(stream, name='observer2')
```
## ...that emits a sequence of items repeatedly: **[repeat](http://reactivex.io/documentation/operators/repeat.html) **
```
rst(O.repeat)
# repeat is over *values*, not function calls. Use generate or create for function calls!
subs(O.repeat({'rand': time.time()}, 3))
header('do while:')
l = []
def condition(x):
l.append(1)
return True if len(l) < 2 else False
stream = O.just(42).do_while(condition)
d = subs(stream)
```
## ...from scratch, with custom logic and cleanup (calling a function again and again): **[create](http://reactivex.io/documentation/operators/create.html) **
```
rx = O.create
rst(rx)
def f(obs):
# this function is called for every observer
obs.on_next(rand())
obs.on_next(rand())
obs.on_completed()
def cleanup():
log('cleaning up...')
return cleanup
stream = O.create(f).delay(200) # the delay causes the cleanup called before the subs gets the vals
d = subs(stream)
d = subs(stream)
sleep(0.5)
rst(title='Exceptions are handled nicely')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f)
d = subs(stream)
d = subs(stream)
rst(title='Feature or Bug?')
print('(where are the first two values?)')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f).delay(100)
d = subs(stream)
d = subs(stream)
# I think it's an (amazing) feature: it prevents processing the results of functions that fail later(!)
rx = O.generate
rst(rx)
"""The basic form of generate takes four parameters:
the first item to emit
a function to test an item to determine whether to emit it (true) or terminate the Observable (false)
a function to generate the next item to test and emit based on the value of the previous item
a function to transform items before emitting them
"""
def generator_based_on_previous(x): return x + 1.1
def doubler(x): return 2 * x
d = subs(rx(0, lambda x: x < 4, generator_based_on_previous, doubler))
rx = O.generate_with_relative_time
rst(rx)
stream = rx(1, lambda x: x < 4, lambda x: x + 1, lambda x: x, lambda t: 100)
d = subs(stream)
```
## ...for each observer that subscribes OR according to a condition at subscription time: **[defer / if_then](http://reactivex.io/documentation/operators/defer.html) **
```
rst(O.defer)
# plural! (unique per subscription)
streams = O.defer(lambda: O.just(rand()))
d = subs(streams)
d = subs(streams) # gets other values - created by subscription!
# evaluating a condition at subscription time in order to decide which of two streams to take.
rst(O.if_then)
cond = True
def should_run():
return cond
streams = O.if_then(should_run, O.return_value(43), O.return_value(56))
d = subs(streams)
log('condition will now evaluate falsy:')
cond = False
streams = O.if_then(should_run, O.return_value(43), O.return_value(rand()))
d = subs(streams)
d = subs(streams)
```
## ...that emits a sequence of integers: **[range](http://reactivex.io/documentation/operators/range.html) **
```
rst(O.range)
d = subs(O.range(0, 3))
```
### ...at particular intervals of time: **[interval](http://reactivex.io/documentation/operators/interval.html) **
(you can `.publish()` it to get an easy "hot" observable)
```
rst(O.interval)
d = subs(O.interval(100).time_interval()\
.map(lambda x, v: '%(interval)s %(value)s' \
% ItemGetter(x)).take(3))
```
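A short sketch of the `.publish()` hint above: publishing turns the cold interval into a connectable ("hot") observable, so both observers share one underlying timer, and values only start flowing once `connect()` is called.
```
rst(O.interval)
published = O.interval(100).take(3).publish()
d = subs(published, name='hot observer1')
d = subs(published, name='hot observer2')
published.connect()   # nothing is emitted until connect()
```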
### ...after a specified delay (see timer)
## ...that completes without emitting items: **[empty](http://reactivex.io/documentation/operators/empty-never-throw.html) **
```
rst(O.empty)
d = subs(O.empty())
```
## ...that does nothing at all: **[never](http://reactivex.io/documentation/operators/empty-never-throw.html) **
```
rst(O.never)
d = subs(O.never())
```
## ...that excepts: **[throw](http://reactivex.io/documentation/operators/empty-never-throw.html) **
```
rst(O.on_error)
d = subs(O.on_error(ZeroDivisionError))
```
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
%matplotlib inline
```
# Quick Introduction to pandas
**Learning Objectives:**
* Gain an introduction to the `DataFrame` and `Series` data structures of the *pandas* library
* Access and manipulate data within a `DataFrame` and `Series`
* Import CSV data into a *pandas* `DataFrame`
* Reindex a `DataFrame` to shuffle data
[*pandas*](http://pandas.pydata.org/) is a column-oriented data analysis API. It's a great tool for handling and analyzing input data, and many ML frameworks support *pandas* data structures as inputs.
Although a comprehensive introduction to the *pandas* API would span many pages, the core concepts are fairly straightforward, and we'll present them below. For a more complete reference, the [*pandas* docs site](http://pandas.pydata.org/pandas-docs/stable/index.html) contains extensive documentation and many tutorials.
## Basic Concepts
The following line imports the *pandas* API and prints the API version:
```
import pandas as pd
pd.__version__
```
The primary data structures in *pandas* are implemented as two classes:
* **`DataFrame`**, which you can imagine as a relational data table, with rows and named columns.
* **`Series`**, which is a single column. A `DataFrame` contains one or more `Series` and a name for each `Series`.
The data frame is a commonly used abstraction for data manipulation. Similar implementations exist in [Spark](https://spark.apache.org/) and [R](https://www.r-project.org/about.html).
One way to create a `Series` is to construct a `Series` object. For example:
```
pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
```
`DataFrame` objects can be created by passing a `dict` mapping `string` column names to their respective `Series`. If the `Series` don't match in length, missing values are filled with special [NA/NaN](http://pandas.pydata.org/pandas-docs/stable/missing_data.html) values. Example:
```
city_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
population = pd.Series([852469, 1015785, 485199])
pd.DataFrame({ 'City name': city_names, 'Population': population })
```
But most of the time, you load an entire file into a `DataFrame`. The following example loads a file with California housing data. Run the following cell to load the data and create feature definitions:
```
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe.describe()
```
The example above used `DataFrame.describe` to show interesting statistics about a `DataFrame`. Another useful function is `DataFrame.head`, which displays the first few records of a `DataFrame`:
```
california_housing_dataframe.head()
```
Another powerful feature of *pandas* is graphing. For example, `DataFrame.hist` lets you quickly study the distribution of values in a column:
```
california_housing_dataframe.hist('housing_median_age')
```
## Accessing Data
You can access `DataFrame` data using familiar Python dict/list operations:
```
cities = pd.DataFrame({ 'City name': city_names, 'Population': population })
print(type(cities['City name']))
cities['City name']
print(type(cities['City name'][1]))
cities['City name'][1]
print(type(cities[0:2]))
cities[0:2]
```
In addition, *pandas* provides an extremely rich API for advanced [indexing and selection](http://pandas.pydata.org/pandas-docs/stable/indexing.html) that is too extensive to be covered here.
## Manipulating Data
You may apply Python's basic arithmetic operations to `Series`. For example:
```
population / 1000.
```
[NumPy](http://www.numpy.org/) is a popular toolkit for scientific computing. *pandas* `Series` can be used as arguments to most NumPy functions:
```
import numpy as np
np.log(population)
```
For more complex single-column transformations, you can use `Series.apply`. Like the Python [map function](https://docs.python.org/2/library/functions.html#map),
`Series.apply` accepts as an argument a [lambda function](https://docs.python.org/2/tutorial/controlflow.html#lambda-expressions), which is applied to each value.
The example below creates a new `Series` that indicates whether `population` is over one million:
```
population.apply(lambda val: val > 1000000)
```
Modifying `DataFrames` is also straightforward. For example, the following code adds two `Series` to an existing `DataFrame`:
```
cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])
cities['Population density'] = cities['Population'] / cities['Area square miles']
cities
```
## Exercise #1
Modify the `cities` table by adding a new boolean column that is True if and only if *both* of the following are True:
* The city is named after a saint.
* The city has an area greater than 50 square miles.
**Note:** Boolean `Series` are combined using the bitwise, rather than the traditional boolean, operators. For example, when performing *logical and*, use `&` instead of `and`.
**Hint:** "San" in Spanish means "saint."
```
cities['check'] = cities['City name'].apply(lambda x: x[:3]=='San') & \
cities['Area square miles'].apply(lambda x: x > 50)
cities
```
### Solution
Click below for a solution.
```
cities['Is wide and has saint name'] = (cities['Area square miles'] > 50) & cities['City name'].apply(lambda name: name.startswith('San'))
cities
```
## Indexes
Both `Series` and `DataFrame` objects also define an `index` property that assigns an identifier value to each `Series` item or `DataFrame` row.
By default, at construction, *pandas* assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change when data is reordered.
```
city_names.index
cities.index
```
Call `DataFrame.reindex` to manually reorder the rows. For example, the following has the same effect as sorting by city name:
```
cities.reindex([2, 0, 1])
```
Reindexing is a great way to shuffle (randomize) a `DataFrame`. In the example below, we take the index, which is array-like, and pass it to NumPy's `random.permutation` function, which returns a shuffled copy of its values. Calling `reindex` with this shuffled array causes the `DataFrame` rows to be shuffled in the same way.
Try running the following cell multiple times!
```
cities.reindex(np.random.permutation(cities.index))
```
For more information, see the [Index documentation](http://pandas.pydata.org/pandas-docs/stable/indexing.html#index-objects).
## Exercise #2
The `reindex` method allows index values that are not in the original `DataFrame`'s index values. Try it and see what happens if you use such values! Why do you think this is allowed?
```
cities.reindex([2, -1, 8])
```
### Solution
Click below for the solution.
If your `reindex` input array includes values not in the original `DataFrame` index values, `reindex` will add new rows for these "missing" indices and populate all corresponding columns with `NaN` values:
```
cities.reindex([0, 4, 5, 2])
```
This behavior is desirable because indexes are often strings pulled from the actual data (see the [*pandas* reindex
documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html) for an example
in which the index values are browser names).
In this case, allowing "missing" indices makes it easy to reindex using an external list, as you don't have to worry about
sanitizing the input.
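To make this concrete, here is a small illustration (the "San Diego" label below is made up and not in the table): after indexing the frame by city name, reindexing with an external list that contains an unknown city simply yields a `NaN` row for it.
```
named_cities = cities.set_index('City name')
named_cities.reindex(['San Jose', 'San Diego', 'Sacramento'])
```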
```
import phys
import phys.newton
import phys.light
import numpy as np
import time
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
class ScatterDeleteStep2(phys.Step):
def __init__(self, n, A):
self.n = n
self.A = A
self.built = False
def run(self, sim):
if self.built != True:
skip = phys.CLInput(name="photon_check", type="obj_action", code="if type(obj) != phys.light.PhotonObject:\n \t\t continue")
d0, d1, d2 = tuple([phys.CLInput(name="d" + str(x), type="obj", obj_attr="dr[" + str(x) + "]") for x in range(0, 3)])
rand = phys.CLInput(name="rand", type="obj_def", obj_def="np.random.random()")
            A_, n_ = phys.CLInput(name="A", type="const", const_value=str(self.A)), phys.CLInput(name="n", type="const", const_value=str(self.n))
pht = phys.CLInput(name="pht", type="obj_track", obj_track="obj")
res = phys.CLOutput(name="res", ctype="int")
kernel = """
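        // pcoll below is the per-step collision probability: the constants A and n
        // multiplied by the length |dr| the photon travelled during this step.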
int gid = get_global_id(0);
double norm = sqrt(pow(d0[gid], 2) + pow(d1[gid], 2) + pow(d2[gid], 2));
double pcoll = A * n * norm;
if (pcoll >= rand[gid]){
// Mark for removal.
res[gid] = 1;
} else {
res[gid] = 0;
}
"""
self.prog = phys.CLProgram(sim, "test", kernel)
self.prog.prep_metadata = [skip, d0, d1, d2, rand, pht, A_, n_]
self.prog.output_metadata = [res]
self.prog.build_kernel()
self.built = True
out = self.prog.run()
for idx, x in enumerate(out["res"]):
if x == 1:
sim.remove_obj(self.prog.pht[idx])
def new_sim(step, n_photons):
    # n_photons controls how many photons are generated, so the benchmark below
    # actually scales with the N it reports in the plot titles.
    sim = phys.Simulation({"cl_on": True})
    sim.add_objs(phys.light.generate_photons_from_E(np.linspace(phys.Measurement(5e-19, "J**1"), phys.Measurement(1e-18, "J**1"), n_photons)))
    sim.exit = lambda cond: len(cond.objects) == 0
    sim.add_step(0, phys.UpdateTimeStep(lambda s: phys.Measurement(np.double(0.001), "s**1")))
    sim.add_step(1, phys.newton.NewtonianKinematicsStep())
    A = np.double(0.001)
    n = np.double(0.001)
    sim.add_step(2, step(n, A))
    sim.add_step(3, phys.light.ScatterMeasureStep(None, True))
    return sim
orig, new = [], []
ns = np.floor(10 ** np.linspace(2, 5, 9))
for i in ns:
print("Testing old " + str(i))
o = new_sim(phys.light.ScatterDeleteStepReference, int(i))
o.start()
o.join()
orig.append(o.run_time)
plt.plot(o.ts, [x[1] for x in o.steps[3].data], label="n")
plt.ylabel("Photons")
plt.xlabel("Time (s)")
plt.title("Photon Count vs. Time (s) w/ old, N = " + str(i))
plt.show()
print("Testing new " + str(i))
n = new_sim(ScatterDeleteStep2, int(i))
n.start()
n.join()
new.append(n.run_time)
plt.plot(ns, new, label="New")
plt.plot(ns, orig, label="Original")
plt.legend()
plt.xlabel("$N_\gamma$")
plt.ylabel("Time (s)")
plt.title("$N_\gamma$ vs. Time")
plt.show()
o.ts[0].size
```
# Testing cosmogan
Aug 25, 2020
Borrowing pieces of code from :
- https://github.com/pytorch/tutorials/blob/11569e0db3599ac214b03e01956c2971b02c64ce/beginner_source/dcgan_faces_tutorial.py
- https://github.com/exalearn/epiCorvid/tree/master/cGAN
```
import os
import random
import logging
import sys
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
from torchsummary import summary
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
import argparse
import time
from datetime import datetime
import glob
import pickle
import yaml
import collections
import torch.distributed as dist
import socket
%matplotlib widget
```
## Modules
```
def try_barrier():
"""Attempt a barrier but ignore any exceptions"""
print('BAR %d'%rank)
try:
dist.barrier()
except:
pass
def _get_sync_file():
"""Logic for naming sync file using slurm env variables"""
#sync_file_dir = '%s/pytorch-sync-files' % os.environ['SCRATCH']
sync_file_dir = '/global/homes/b/balewski/prjs/tmp/local-sync-files'
os.makedirs(sync_file_dir, exist_ok=True)
sync_file = 'file://%s/pytorch_sync.%s.%s' % (
sync_file_dir, os.environ['SLURM_JOB_ID'], os.environ['SLURM_STEP_ID'])
return sync_file
def f_load_config(config_file):
with open(config_file) as f:
config = yaml.load(f, Loader=yaml.SafeLoader)
return config
### Transformation functions for image pixel values
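# For non-negative pixel values, f_transform maps [0, inf) into [-1, 1), matching the
# Tanh output range of the generator; f_invtransform is its algebraic inverse.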
def f_transform(x):
return 2.*x/(x + 4.) - 1.
def f_invtransform(s):
return 4.*(1. + s)/(1. - s)
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
# Generator Code
class View(nn.Module):
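    # Reshape helper: turns the flat Linear output back into a (batch, ngf*8, 8, 8)
    # feature map before the transposed convolutions in Generator.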
def __init__(self, shape):
super(View, self).__init__()
self.shape = shape
def forward(self, x):
return x.view(*self.shape)
class Generator(nn.Module):
def __init__(self, gdict):
super(Generator, self).__init__()
## Define new variables from dict
keys=['ngpu','nz','nc','ngf','kernel_size','stride','g_padding']
ngpu, nz,nc,ngf,kernel_size,stride,g_padding=list(collections.OrderedDict({key:gdict[key] for key in keys}).values())
self.main = nn.Sequential(
# nn.ConvTranspose2d(in_channels, out_channels, kernel_size,stride,padding,output_padding,groups,bias, Dilation,padding_mode)
nn.Linear(nz,nc*ngf*8*8*8),# 32768
nn.BatchNorm2d(nc,eps=1e-05, momentum=0.9, affine=True),
nn.ReLU(inplace=True),
View(shape=[-1,ngf*8,8,8]),
nn.ConvTranspose2d(ngf * 8, ngf * 4, kernel_size, stride, g_padding, output_padding=1, bias=False),
nn.BatchNorm2d(ngf*4,eps=1e-05, momentum=0.9, affine=True),
nn.ReLU(inplace=True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, kernel_size, stride, g_padding, 1, bias=False),
nn.BatchNorm2d(ngf*2,eps=1e-05, momentum=0.9, affine=True),
nn.ReLU(inplace=True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, kernel_size, stride, g_padding, 1, bias=False),
nn.BatchNorm2d(ngf,eps=1e-05, momentum=0.9, affine=True),
nn.ReLU(inplace=True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, kernel_size, stride,g_padding, 1, bias=False),
nn.Tanh()
)
def forward(self, ip):
return self.main(ip)
class Discriminator(nn.Module):
def __init__(self, gdict):
super(Discriminator, self).__init__()
## Define new variables from dict
keys=['ngpu','nz','nc','ndf','kernel_size','stride','d_padding']
ngpu, nz,nc,ndf,kernel_size,stride,d_padding=list(collections.OrderedDict({key:gdict[key] for key in keys}).values())
self.main = nn.Sequential(
# input is (nc) x 64 x 64
# nn.Conv2d(in_channels, out_channels, kernel_size,stride,padding,output_padding,groups,bias, Dilation,padding_mode)
nn.Conv2d(nc, ndf,kernel_size, stride, d_padding, bias=True),
nn.BatchNorm2d(ndf,eps=1e-05, momentum=0.9, affine=True),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, kernel_size, stride, d_padding, bias=True),
nn.BatchNorm2d(ndf * 2,eps=1e-05, momentum=0.9, affine=True),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, kernel_size, stride, d_padding, bias=True),
nn.BatchNorm2d(ndf * 4,eps=1e-05, momentum=0.9, affine=True),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, kernel_size, stride, d_padding, bias=True),
nn.BatchNorm2d(ndf * 8,eps=1e-05, momentum=0.9, affine=True),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Flatten(),
nn.Linear(nc*ndf*8*8*8, 1)
# nn.Sigmoid()
)
def forward(self, ip):
return self.main(ip)
def f_gen_images(gdict,netG,optimizerG,ip_fname,op_loc,op_strg='inf_img_',op_size=500):
'''Generate images for best saved models
Arguments: gdict, netG, optimizerG,
ip_fname: name of input file
op_strg: [string name for output file]
op_size: Number of images to generate
'''
nz,device=gdict['nz'],gdict['device']
try:
if torch.cuda.is_available(): checkpoint=torch.load(ip_fname)
else: checkpoint=torch.load(ip_fname,map_location=torch.device('cpu'))
except Exception as e:
print(e)
print("skipping generation of images for ",ip_fname)
return
## Load checkpoint
if gdict['multi-gpu']:
netG.module.load_state_dict(checkpoint['G_state'])
else:
netG.load_state_dict(checkpoint['G_state'])
## Load other stuff
iters=checkpoint['iters']
epoch=checkpoint['epoch']
optimizerG.load_state_dict(checkpoint['optimizerG_state_dict'])
# Generate batch of latent vectors
noise = torch.randn(op_size, 1, 1, nz, device=device)
# Generate fake image batch with G
netG.eval() ## This is required before running inference
gen = netG(noise)
gen_images=gen.detach().cpu().numpy()[:,:,:,:]
print(gen_images.shape)
op_fname='%s_epoch-%s_step-%s.npy'%(op_strg,epoch,iters)
np.save(op_loc+op_fname,gen_images)
print("Image saved in ",op_fname)
def f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc):
''' Checkpoint model '''
if gdict['multi-gpu']: ## Dataparallel
torch.save({'epoch':epoch,'iters':iters,'best_chi1':best_chi1,'best_chi2':best_chi2,
'G_state':netG.module.state_dict(),'D_state':netD.module.state_dict(),'optimizerG_state_dict':optimizerG.state_dict(),
'optimizerD_state_dict':optimizerD.state_dict()}, save_loc)
else :
torch.save({'epoch':epoch,'iters':iters,'best_chi1':best_chi1,'best_chi2':best_chi2,
'G_state':netG.state_dict(),'D_state':netD.state_dict(),'optimizerG_state_dict':optimizerG.state_dict(),
'optimizerD_state_dict':optimizerD.state_dict()}, save_loc)
def f_load_checkpoint(ip_fname,netG,netD,optimizerG,optimizerD,gdict):
''' Load saved checkpoint
Also loads step, epoch, best_chi1, best_chi2'''
try:
checkpoint=torch.load(ip_fname)
except Exception as e:
print(e)
print("skipping generation of images for ",ip_fname)
raise SystemError
## Load checkpoint
if gdict['multi-gpu']:
netG.module.load_state_dict(checkpoint['G_state'])
netD.module.load_state_dict(checkpoint['D_state'])
else:
netG.load_state_dict(checkpoint['G_state'])
netD.load_state_dict(checkpoint['D_state'])
optimizerD.load_state_dict(checkpoint['optimizerD_state_dict'])
optimizerG.load_state_dict(checkpoint['optimizerG_state_dict'])
iters=checkpoint['iters']
epoch=checkpoint['epoch']
best_chi1=checkpoint['best_chi1']
best_chi2=checkpoint['best_chi2']
netG.train()
netD.train()
return iters,epoch,best_chi1,best_chi2
####################
### Pytorch code ###
####################
def f_torch_radial_profile(img, center=(None,None)):
''' Module to compute radial profile of a 2D image
Bincount causes issues with backprop, so not using this code
'''
y,x=torch.meshgrid(torch.arange(0,img.shape[0]),torch.arange(0,img.shape[1])) # Get a grid of x and y values
if center[0]==None and center[1]==None:
center = torch.Tensor([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0]) # compute centers
# get radial values of every pair of points
r = torch.sqrt((x - center[0])**2 + (y - center[1])**2)
r= r.int()
# print(r.shape,img.shape)
# Compute histogram of r values
tbin=torch.bincount(torch.reshape(r,(-1,)),weights=torch.reshape(img,(-1,)).type(torch.DoubleTensor))
nr = torch.bincount(torch.reshape(r,(-1,)))
radialprofile = tbin / nr
return radialprofile[1:-1]
def f_torch_get_azimuthalAverage_with_batch(image, center=None): ### Not used in this code.
"""
Calculate the azimuthally averaged radial profile. Only use if you need to combine batches
image - The 2D image
center - The [x,y] pixel coordinates used as the center. The default is
None, which then uses the center of the image (including
             fractional pixels).
source: https://www.astrobetter.com/blog/2010/03/03/fourier-transforms-of-images-in-python/
"""
batch, channel, height, width = image.shape
# Create a grid of points with x and y coordinates
y, x = np.indices([height,width])
if not center:
center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0])
# Get the radial coordinate for every grid point. Array has the shape of image
r = torch.tensor(np.hypot(x - center[0], y - center[1]))
# Get sorted radii
ind = torch.argsort(torch.reshape(r, (batch, channel,-1)))
r_sorted = torch.gather(torch.reshape(r, (batch, channel, -1,)),2, ind)
i_sorted = torch.gather(torch.reshape(image, (batch, channel, -1,)),2, ind)
# Get the integer part of the radii (bin size = 1)
r_int=r_sorted.to(torch.int32)
# Find all pixels that fall within each radial bin.
deltar = r_int[:,:,1:] - r_int[:,:,:-1] # Assumes all radii represented
rind = torch.reshape(torch.where(deltar)[2], (batch, -1)) # location of changes in radius
rind=torch.unsqueeze(rind,1)
nr = (rind[:,:,1:] - rind[:,:,:-1]).type(torch.float) # number of radius bin
# Cumulative sum to figure out sums for each radius bin
csum = torch.cumsum(i_sorted, axis=-1)
# print(csum.shape,rind.shape,nr.shape)
tbin = torch.gather(csum, 2, rind[:,:,1:]) - torch.gather(csum, 2, rind[:,:,:-1])
radial_prof = tbin / nr
return radial_prof
def f_get_rad(img):
''' Get the radial tensor for use in f_torch_get_azimuthalAverage '''
height,width=img.shape[-2:]
# Create a grid of points with x and y coordinates
y, x = np.indices([height,width])
center=[]
if not center:
center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0])
# Get the radial coordinate for every grid point. Array has the shape of image
r = torch.tensor(np.hypot(x - center[0], y - center[1]))
# Get sorted radii
ind = torch.argsort(torch.reshape(r, (-1,)))
return r.detach(),ind.detach()
def f_torch_get_azimuthalAverage(image,r,ind):
"""
Calculate the azimuthally averaged radial profile.
image - The 2D image
center - The [x,y] pixel coordinates used as the center. The default is
None, which then uses the center of the image (including
             fractional pixels).
source: https://www.astrobetter.com/blog/2010/03/03/fourier-transforms-of-images-in-python/
"""
# height, width = image.shape
# # Create a grid of points with x and y coordinates
# y, x = np.indices([height,width])
# if not center:
# center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0])
# # Get the radial coordinate for every grid point. Array has the shape of image
# r = torch.tensor(np.hypot(x - center[0], y - center[1]))
# # Get sorted radii
# ind = torch.argsort(torch.reshape(r, (-1,)))
r_sorted = torch.gather(torch.reshape(r, ( -1,)),0, ind)
i_sorted = torch.gather(torch.reshape(image, ( -1,)),0, ind)
# Get the integer part of the radii (bin size = 1)
r_int=r_sorted.to(torch.int32)
# Find all pixels that fall within each radial bin.
deltar = r_int[1:] - r_int[:-1] # Assumes all radii represented
rind = torch.reshape(torch.where(deltar)[0], (-1,)) # location of changes in radius
nr = (rind[1:] - rind[:-1]).type(torch.float) # number of radius bin
# Cumulative sum to figure out sums for each radius bin
csum = torch.cumsum(i_sorted, axis=-1)
tbin = torch.gather(csum, 0, rind[1:]) - torch.gather(csum, 0, rind[:-1])
radial_prof = tbin / nr
return radial_prof
def f_torch_fftshift(real, imag):
for dim in range(0, len(real.size())):
real = torch.roll(real, dims=dim, shifts=real.size(dim)//2)
imag = torch.roll(imag, dims=dim, shifts=imag.size(dim)//2)
return real, imag
def f_torch_compute_spectrum(arr,r,ind):
GLOBAL_MEAN=1.0
arr=(arr-GLOBAL_MEAN)/(GLOBAL_MEAN)
y1=torch.rfft(arr,signal_ndim=2,onesided=False)
real,imag=f_torch_fftshift(y1[:,:,0],y1[:,:,1]) ## last index is real/imag part
y2=real**2+imag**2 ## Absolute value of each complex number
# print(y2.shape)
z1=f_torch_get_azimuthalAverage(y2,r,ind) ## Compute radial profile
return z1
def f_torch_compute_batch_spectrum(arr,r,ind):
batch_pk=torch.stack([f_torch_compute_spectrum(i,r,ind) for i in arr])
return batch_pk
def f_torch_image_spectrum(x,num_channels,r,ind):
'''
Data has to be in the form (batch,channel,x,y)
'''
mean=[[] for i in range(num_channels)]
sdev=[[] for i in range(num_channels)]
for i in range(num_channels):
arr=x[:,i,:,:]
batch_pk=f_torch_compute_batch_spectrum(arr,r,ind)
mean[i]=torch.mean(batch_pk,axis=0)
# sdev[i]=torch.std(batch_pk,axis=0)/np.sqrt(batch_pk.shape[0])
# sdev[i]=torch.std(batch_pk,axis=0)
sdev[i]=torch.var(batch_pk,axis=0)
mean=torch.stack(mean)
sdev=torch.stack(sdev)
return mean,sdev
def f_compute_hist(data,bins):
try:
hist_data=torch.histc(data,bins=bins)
## A kind of normalization of histograms: divide by total sum
hist_data=(hist_data*bins)/torch.sum(hist_data)
except Exception as e:
print(e)
hist_data=torch.zeros(bins)
return hist_data
### Losses
def loss_spectrum(spec_mean,spec_mean_ref,spec_std,spec_std_ref,image_size,lambda1):
''' Loss function for the spectrum : mean + variance
Log(sum( batch value - expect value) ^ 2 )) '''
idx=int(image_size/2) ### For the spectrum, use only N/2 indices for loss calc.
    ### Warning: the first index is the channel number. For multiple channels, you are averaging over them, which is fine.
spec_mean=torch.log(torch.mean(torch.pow(spec_mean[:,:idx]-spec_mean_ref[:,:idx],2)))
spec_sdev=torch.log(torch.mean(torch.pow(spec_std[:,:idx]-spec_std_ref[:,:idx],2)))
lambda1=lambda1;
lambda2=lambda1;
ans=lambda1*spec_mean+lambda2*spec_sdev
if torch.isnan(spec_sdev).any(): print("spec loss with nan",ans)
return ans
def loss_hist(hist_sample,hist_ref):
lambda1=1.0
return lambda1*torch.log(torch.mean(torch.pow(hist_sample-hist_ref,2)))
# def f_size(ip):
# p=2;s=2
# # return (ip + 2 * 0 - 1 * (p-1) -1 )/ s + 1
# return (ip-1)*s - 2 * p + 1 *(5-1)+ 1 + 1
# f_size(128)
# logging.basicConfig(filename=save_dir+'/log.log',filemode='w',format='%(name)s - %(levelname)s - %(message)s')
```
## Main code
```
def f_train_loop(dataloader,metrics_df,gdict):
''' Train single epoch '''
## Define new variables from dict
keys=['image_size','start_epoch','epochs','iters','best_chi1','best_chi2','save_dir','device','flip_prob','nz','batchsize','bns']
image_size,start_epoch,epochs,iters,best_chi1,best_chi2,save_dir,device,flip_prob,nz,batchsize,bns=list(collections.OrderedDict({key:gdict[key] for key in keys}).values())
for epoch in range(start_epoch,epochs):
t_epoch_start=time.time()
for count, data in enumerate(dataloader, 0):
####### Train GAN ########
netG.train(); netD.train(); ### Need to add these after inference and before training
tme1=time.time()
### Update D network: maximize log(D(x)) + log(1 - D(G(z)))
netD.zero_grad()
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
real_label = torch.full((b_size,), 1, device=device)
fake_label = torch.full((b_size,), 0, device=device)
g_label = torch.full((b_size,), 1, device=device) ## No flipping for Generator labels
# Flip labels with probability flip_prob
for idx in np.random.choice(np.arange(b_size),size=int(np.ceil(b_size*flip_prob))):
real_label[idx]=0; fake_label[idx]=1
# Generate fake image batch with G
noise = torch.randn(b_size, 1, 1, nz, device=device)
fake = netG(noise)
# Forward pass real batch through D
output = netD(real_cpu).view(-1)
errD_real = criterion(output, real_label)
errD_real.backward()
D_x = output.mean().item()
            # Forward pass fake batch through D
output = netD(fake.detach()).view(-1)
errD_fake = criterion(output, fake_label)
errD_fake.backward()
D_G_z1 = output.mean().item()
errD = errD_real + errD_fake
optimizerD.step()
###Update G network: maximize log(D(G(z)))
netG.zero_grad()
output = netD(fake).view(-1)
errG_adv = criterion(output, g_label)
# Histogram pixel intensity loss
hist_gen=f_compute_hist(fake,bins=bns)
hist_loss=loss_hist(hist_gen,hist_val.to(device))
# Add spectral loss
mean,sdev=f_torch_image_spectrum(f_invtransform(fake),1,r.to(device),ind.to(device))
spec_loss=loss_spectrum(mean,mean_spec_val.to(device),sdev,sdev_spec_val.to(device),image_size,gdict['lambda1'])
if gdict['spec_loss_flag']: errG=errG_adv+spec_loss
else: errG=errG_adv
if torch.isnan(errG).any():
logging.info(errG)
raise SystemError
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
optimizerG.step()
tme2=time.time()
####### Store metrics ########
# Output training stats
if count % gdict['checkpoint_size'] == 0:
logging.info('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_adv: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch, epochs, count, len(dataloader), errD.item(), errG_adv.item(),errG.item(), D_x, D_G_z1, D_G_z2)),
logging.info("Spec loss: %s,\t hist loss: %s"%(spec_loss.item(),hist_loss.item())),
logging.info("Training time for step %s : %s"%(iters, tme2-tme1))
# Save metrics
cols=['step','epoch','Dreal','Dfake','Dfull','G_adv','G_full','spec_loss','hist_loss','D(x)','D_G_z1','D_G_z2','time']
vals=[iters,epoch,errD_real.item(),errD_fake.item(),errD.item(),errG_adv.item(),errG.item(),spec_loss.item(),hist_loss.item(),D_x,D_G_z1,D_G_z2,tme2-tme1]
for col,val in zip(cols,vals): metrics_df.loc[iters,col]=val
### Checkpoint the best model
checkpoint=True
iters += 1 ### Model has been updated, so update iters before saving metrics and model.
### Compute validation metrics for updated model
netG.eval()
with torch.no_grad():
#fake = netG(fixed_noise).detach().cpu()
fake = netG(fixed_noise)
hist_gen=f_compute_hist(fake,bins=bns)
hist_chi=loss_hist(hist_gen,hist_val.to(device))
mean,sdev=f_torch_image_spectrum(f_invtransform(fake),1,r.to(device),ind.to(device))
spec_chi=loss_spectrum(mean,mean_spec_val.to(device),sdev,sdev_spec_val.to(device),image_size,gdict['lambda1'])
# Storing chi for next step
for col,val in zip(['spec_chi','hist_chi'],[spec_chi.item(),hist_chi.item()]): metrics_df.loc[iters,col]=val
# Checkpoint model for continuing run
if count == len(dataloader)-1: ## Check point at last step of epoch
f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_last.tar')
if (checkpoint and (epoch > 1)): # Choose best models by metric
if hist_chi< best_chi1:
f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_best_hist.tar')
best_chi1=hist_chi.item()
logging.info("Saving best hist model at epoch %s, step %s."%(epoch,iters))
if spec_chi< best_chi2:
f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_best_spec.tar')
best_chi2=spec_chi.item()
logging.info("Saving best spec model at epoch %s, step %s"%(epoch,iters))
if iters in gdict['save_steps_list']:
f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_{0}.tar'.format(iters))
logging.info("Saving given-step at epoch %s, step %s."%(epoch,iters))
# Save G's output on fixed_noise
if ((iters % gdict['checkpoint_size'] == 0) or ((epoch == epochs-1) and (count == len(dataloader)-1))):
netG.eval()
with torch.no_grad():
fake = netG(fixed_noise).detach().cpu()
img_arr=np.array(fake[:,:,:,:])
fname='gen_img_epoch-%s_step-%s'%(epoch,iters)
np.save(save_dir+'/images/'+fname,img_arr)
t_epoch_end=time.time()
logging.info("Time taken for epoch %s: %s"%(epoch,t_epoch_end-t_epoch_start))
# Save Metrics to file after each epoch
metrics_df.to_pickle(save_dir+'/df_metrics.pkle')
logging.info("best chis: {0}, {1}".format(best_chi1,best_chi2))
def f_init_gdict(gdict,config_dict):
''' Initialize the global dictionary gdict with values in config file'''
keys1=['workers','nc','nz','ngf','ndf','beta1','kernel_size','stride','g_padding','d_padding','flip_prob']
keys2=['image_size','checkpoint_size','num_imgs','ip_fname','op_loc']
for key in keys1: gdict[key]=config_dict['training'][key]
for key in keys2: gdict[key]=config_dict['data'][key]
device='cuda'
if device=='cuda':
rank = int(os.environ['SLURM_PROCID'])
world_size = int(os.environ['SLURM_NTASKS'])
locRank=int(os.environ['SLURM_LOCALID'])
else:
rank=0; world_size = 1; locRank=0
host=socket.gethostname()
verb=rank==0
print('M:myRank=',rank,'world_size =',world_size,'verb=',verb,host,'locRank=',locRank )
masterIP=os.getenv('MASTER_ADDR')
if masterIP==None:
assert device=='cuda' # must speciffy MASTER_ADDR
sync_file = _get_sync_file()
if verb: print('use sync_file =',sync_file)
else:
sync_file='env://'
masterPort=os.getenv('MASTER_PORT')
if verb: print('use masterIP',masterIP,masterPort)
assert masterPort!=None
if verb:
print('imported PyTorch ver:',torch.__version__)
dist.init_process_group(backend='nccl', init_method=sync_file, world_size=world_size, rank=rank)
print("M:after dist.init_process_group")
inp_dim=280
fc_dim=20
out_dim=10
epochs=15
batch_size=16*1024//world_size # local batch size
steps=16
num_eve=steps*batch_size
learning_rate = 0.02
num_cpus=5 # to load the data in parallel , -c10 locks 5 phys cores
# Initialize model
torch.manual_seed(0)
model = JanModel(inp_dim,fc_dim,out_dim)
if device=='cuda':
torch.cuda.set_device(locRank)
model.cuda(locRank)
# define loss function
loss_fn = nn.MSELoss().cuda(locRank)
# Initialize optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Wrap the model
# This reproduces the model onto the GPU for the process.
if device=='cuda':
model = nn.parallel.DistributedDataParallel(model, device_ids=[locRank])
# - - - - DATA PREP - - - - -
if verb: print('\nM: generate data and train, num_eve=',num_eve)
X,Y=create_dataset(num_eve,inp_dim,out_dim)
if verb: print('\nCreate torch-Dataset instance')
trainDst=JanDataset(X,Y)
if verb: print('\nCreate torch-DataLoader instance & test it, num_cpus=',num_cpus)
trainLdr = DataLoader(trainDst, batch_size=batch_size, shuffle=True, num_workers=num_cpus,pin_memory=True)
# - - - - DATA READY - - - - -
# Note, intentionally I do not use torch.utils.data.distributed.DistributedSampler(..)
# because I want to control manually what data will be sent to which GPU - here data are generated on CPU matched to GPU
if verb:
print('\n print one batch of training data ')
xx, yy = next(iter(trainLdr))
print('test_dataLoader: X:',xx.shape,'Y:',yy.shape)
print('Y[:,]',yy[:,0])
print('\n= = = = Prepare for the treaining = = =\n')
print('\n\nM: torchsummary.summary(model):'); print(model)
inp_size=(inp_dim,) # input_size=(channels, H, W)) if CNN is first
if device=='cuda':
model=model.to(device) # re-cast model on device, data will be cast later
```
## Start
```
if __name__=="__main__":
torch.backends.cudnn.benchmark=True
# torch.autograd.set_detect_anomaly(True)
t0=time.time()
#################################
# args=f_parse_args()
# Manually add args ( different for jupyter notebook)
args=argparse.Namespace()
args.config='1_main_code/config_128.yaml'
args.ngpu=1
args.batchsize=128
args.spec_loss_flag=True
args.checkpoint_size=50
args.epochs=10
args.learn_rate=0.0002
args.mode='fresh'
# args.mode='continue'
# args.ip_fldr='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/20201211_093818_nb_test/'
args.run_suffix='nb_test'
args.deterministic=False
args.seed='234373'
args.lambda1=0.1
args.save_steps_list=[5,10]
### Set up ###
config_file=args.config
config_dict=f_load_config(config_file)
    # Initialize variables
gdict={}
f_init_gdict(gdict,config_dict)
## Add args variables to gdict
for key in ['ngpu','batchsize','mode','spec_loss_flag','epochs','learn_rate','lambda1','save_steps_list']:
gdict[key]=vars(args)[key]
###### Set up directories #######
if gdict['mode']=='fresh':
# Create prefix for foldername
fldr_name=datetime.now().strftime('%Y%m%d_%H%M%S') ## time format
gdict['save_dir']=gdict['op_loc']+fldr_name+'_'+args.run_suffix
if not os.path.exists(gdict['save_dir']):
os.makedirs(gdict['save_dir']+'/models')
os.makedirs(gdict['save_dir']+'/images')
elif gdict['mode']=='continue': ## For checkpointed runs
gdict['save_dir']=args.ip_fldr
### Read loss data
with open (gdict['save_dir']+'df_metrics.pkle','rb') as f:
metrics_dict=pickle.load(f)
# ### Write all logging.info statements to stdout and log file (different for jpt notebooks)
# logfile=gdict['save_dir']+'/log.log'
# logging.basicConfig(level=logging.DEBUG, filename=logfile, filemode="a+", format="%(asctime)-15s %(levelname)-8s %(message)s")
# Lg = logging.getLogger()
# Lg.setLevel(logging.DEBUG)
# lg_handler_file = logging.FileHandler(logfile)
# lg_handler_stdout = logging.StreamHandler(sys.stdout)
# Lg.addHandler(lg_handler_file)
# Lg.addHandler(lg_handler_stdout)
# logging.info('Args: {0}'.format(args))
# logging.info(config_dict)
# logging.info('Start: %s'%(datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
# if gdict['spec_loss_flag']: logging.info("Using Spectral loss")
### Override (different for jpt notebooks)
gdict['num_imgs']=2000
## Special declarations
gdict['bns']=50
gdict['device']=torch.device("cuda" if (torch.cuda.is_available() and gdict['ngpu'] > 0) else "cpu")
gdict['ngpu']=torch.cuda.device_count()
gdict['multi-gpu']=True if (gdict['device'].type == 'cuda') and (gdict['ngpu'] > 1) else False
print(gdict)
### Initialize random seed
if args.seed=='random': manualSeed = np.random.randint(1, 10000)
else: manualSeed=int(args.seed)
logging.info("Seed:{0}".format(manualSeed))
random.seed(manualSeed)
np.random.seed(manualSeed)
torch.manual_seed(manualSeed)
torch.cuda.manual_seed_all(manualSeed)
logging.info('Device:{0}'.format(gdict['device']))
if args.deterministic:
logging.info("Running with deterministic sequence. Performance will be slower")
torch.backends.cudnn.deterministic=True
# torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
#################################
####### Read data and precompute ######
img=np.load(gdict['ip_fname'],mmap_mode='r')[:gdict['num_imgs']].transpose(0,1,2,3)
t_img=torch.from_numpy(img)
print("%s, %s"%(img.shape,t_img.shape))
dataset=TensorDataset(t_img)
dataloader=DataLoader(dataset,batch_size=gdict['batchsize'],shuffle=True,num_workers=0,drop_last=True)
# Precompute metrics with validation data for computing losses
with torch.no_grad():
val_img=np.load(gdict['ip_fname'])[-3000:].transpose(0,1,2,3)
t_val_img=torch.from_numpy(val_img).to(gdict['device'])
# Precompute radial coordinates
r,ind=f_get_rad(img)
r=r.to(gdict['device']); ind=ind.to(gdict['device'])
# Stored mean and std of spectrum for full input data once
mean_spec_val,sdev_spec_val=f_torch_image_spectrum(f_invtransform(t_val_img),1,r,ind)
hist_val=f_compute_hist(t_val_img,bins=gdict['bns'])
del val_img; del t_val_img; del img; del t_img
#################################
###### Build Networks ###
# Define Models
print("Building GAN networks")
# Create Generator
netG = Generator(gdict).to(gdict['device'])
netG.apply(weights_init)
# print(netG)
summary(netG,(1,1,64))
# Create Discriminator
netD = Discriminator(gdict).to(gdict['device'])
netD.apply(weights_init)
# print(netD)
summary(netD,(1,128,128))
print("Number of GPUs used %s"%(gdict['ngpu']))
if (gdict['multi-gpu']):
netG = nn.DataParallel(netG, list(range(gdict['ngpu'])))
netD = nn.DataParallel(netD, list(range(gdict['ngpu'])))
#### Initialize networks ####
# criterion = nn.BCELoss()
criterion = nn.BCEWithLogitsLoss()
if gdict['mode']=='fresh':
optimizerD = optim.Adam(netD.parameters(), lr=gdict['learn_rate'], betas=(gdict['beta1'], 0.999),eps=1e-7)
optimizerG = optim.Adam(netG.parameters(), lr=gdict['learn_rate'], betas=(gdict['beta1'], 0.999),eps=1e-7)
### Initialize variables
iters,start_epoch,best_chi1,best_chi2=0,0,1e10,1e10
### Load network weights for continuing run
elif gdict['mode']=='continue':
iters,start_epoch,best_chi1,best_chi2=f_load_checkpoint(gdict['save_dir']+'/models/checkpoint_last.tar',netG,netD,optimizerG,optimizerD,gdict)
logging.info("Continuing existing run. Loading checkpoint with epoch {0} and step {1}".format(start_epoch,iters))
start_epoch+=1 ## Start with the next epoch
## Add to gdict
for key,val in zip(['best_chi1','best_chi2','iters','start_epoch'],[best_chi1,best_chi2,iters,start_epoch]): gdict[key]=val
print(gdict)
fixed_noise = torch.randn(gdict['batchsize'], 1, 1, gdict['nz'], device=gdict['device']) #Latent vectors to view G progress
if __name__=="__main__":
#################################
### Set up metrics dataframe
cols=['step','epoch','Dreal','Dfake','Dfull','G_adv','G_full','spec_loss','hist_loss','spec_chi','hist_chi','D(x)','D_G_z1','D_G_z2','time']
# size=int(len(dataloader) * epochs)+1
metrics_df=pd.DataFrame(columns=cols)
#################################
########## Train loop and save metrics and images ######
print("Starting Training Loop...")
f_train_loop(dataloader,metrics_df,gdict)
## Generate images for best saved models ######
op_loc=gdict['save_dir']+'/images/'
ip_fname=gdict['save_dir']+'/models/checkpoint_best_spec.tar'
f_gen_images(gdict,netG,optimizerG,ip_fname,op_loc,op_strg='best_spec',op_size=200)
ip_fname=gdict['save_dir']+'/models/checkpoint_best_hist.tar'
f_gen_images(gdict,netG,optimizerG,ip_fname,op_loc,op_strg='best_hist',op_size=200)
tf=time.time()
print("Total time %s"%(tf-t0))
print('End: %s'%(datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
# metrics_df.plot('step','time')
metrics_df
gdict
```
|
github_jupyter
|
## Programming Exercise 1 - Linear Regression
- [warmUpExercise](#warmUpExercise)
- [Linear regression with one variable](#Linear-regression-with-one-variable)
- [Gradient Descent](#Gradient-Descent)
```
# %load ../../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from mpl_toolkits.mplot3d import axes3d
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
#%config InlineBackend.figure_formats = {'pdf',}
%matplotlib inline
import seaborn as sns
sns.set_context('notebook')
sns.set_style('white')
```
#### warmUpExercise
```
def warmUpExercise():
return(np.identity(5))
warmUpExercise()
```
### Linear regression with one variable
```
data = np.loadtxt('data/ex1data1.txt', delimiter=',')
X = np.c_[np.ones(data.shape[0]),data[:,0]]
y = np.c_[data[:,1]]
plt.scatter(X[:,1], y, s=30, c='r', marker='x', linewidths=1)
plt.xlim(4,24)
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s');
```
#### Gradient Descent
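For reference, the cost function and update rule implemented in the cell below are the standard ones for batch gradient descent on linear regression:

$$
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2,
\qquad
\theta := \theta - \frac{\alpha}{m}\, X^{\top} \left( X\theta - y \right).
$$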
```
def computeCost(X, y, theta=[[0],[0]]):
m = y.size
J = 0
h = X.dot(theta)
J = 1/(2*m)*np.sum(np.square(h-y))
return(J)
computeCost(X,y)
def gradientDescent(X, y, theta=[[0],[0]], alpha=0.01, num_iters=1500):
m = y.size
J_history = np.zeros(num_iters)
for iter in np.arange(num_iters):
h = X.dot(theta)
theta = theta - alpha*(1/m)*(X.T.dot(h-y))
J_history[iter] = computeCost(X, y, theta)
return(theta, J_history)
# theta for minimized cost J
theta , Cost_J = gradientDescent(X, y)
print('theta: ',theta.ravel())
plt.plot(Cost_J)
plt.ylabel('Cost J')
plt.xlabel('Iterations');
xx = np.arange(5,23)
yy = theta[0]+theta[1]*xx
# Plot gradient descent
plt.scatter(X[:,1], y, s=30, c='r', marker='x', linewidths=1)
plt.plot(xx,yy, label='Linear regression (Gradient descent)')
# Compare with Scikit-learn Linear regression
regr = LinearRegression()
regr.fit(X[:,1].reshape(-1,1), y.ravel())
plt.plot(xx, regr.intercept_+regr.coef_*xx, label='Linear regression (Scikit-learn GLM)')
plt.xlim(4,24)
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.legend(loc=4);
# Predict profit for a city with population of 35000 and 70000
print(theta.T.dot([1, 3.5])*10000)
print(theta.T.dot([1, 7])*10000)
# Create grid coordinates for plotting
B0 = np.linspace(-10, 10, 50)
B1 = np.linspace(-1, 4, 50)
xx, yy = np.meshgrid(B0, B1, indexing='xy')
Z = np.zeros((B0.size,B1.size))
# Calculate Z-values (Cost) based on grid of coefficients
for (i,j),v in np.ndenumerate(Z):
Z[i,j] = computeCost(X,y, theta=[[xx[i,j]], [yy[i,j]]])
fig = plt.figure(figsize=(15,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122, projection='3d')
# Left plot
CS = ax1.contour(xx, yy, Z, np.logspace(-2, 3, 20), cmap=plt.cm.jet)
ax1.scatter(theta[0],theta[1], c='r')
# Right plot
ax2.plot_surface(xx, yy, Z, rstride=1, cstride=1, alpha=0.6, cmap=plt.cm.jet)
ax2.set_zlabel('Cost')
ax2.set_zlim(Z.min(),Z.max())
ax2.view_init(elev=15, azim=230)
# settings common to both plots
for ax in fig.axes:
ax.set_xlabel(r'$\theta_0$', fontsize=17)
ax.set_ylabel(r'$\theta_1$', fontsize=17)
```
|
github_jupyter
|
# BEL to Natural Language
**Author:** [Charles Tapley Hoyt](https://github.com/cthoyt/)
**Estimated Run Time:** 5 seconds
This notebook shows how the PyBEL-INDRA integration can be used to turn a BEL graph into natural language. Special thanks to John Bachman and Ben Gyori for all of their efforts in making this possible.
To view the interactive Javascript output in this notebook, open in the [Jupyter NBViewer](http://nbviewer.jupyter.org/github/pybel/pybel-notebooks/blob/master/BEL%20to%20Natural%20Language.ipynb).
## Imports
```
import sys
import time
import indra
import indra.util.get_version
import ndex2
import pybel
from indra.assemblers.english_assembler import EnglishAssembler
from indra.sources.bel.bel_api import process_pybel_graph
from pybel.examples import sialic_acid_graph
from pybel_tools.visualization import to_jupyter
```
## Environment
```
print(sys.version)
print(time.asctime())
```
## Dependencies
```
pybel.utils.get_version()
indra.util.get_version.get_version()
```
# Data
The [Sialic Acid graph](http://pybel.readthedocs.io/en/latest/examples.html#pybel.examples.sialic_acid_example.pybel.examples.sialic_acid_graph) is used as an example.
```
to_jupyter(sialic_acid_graph)
```
# Conversion
The PyBEL BELGraph instance is converted to INDRA statements with the function [`process_pybel_graph`](http://indra.readthedocs.io/en/latest/modules/sources/bel/index.html#indra.sources.bel.bel_api.process_pybel_graph). It returns an instance of [`PybelProcessor`](http://indra.readthedocs.io/en/latest/modules/sources/bel/index.html#module-indra.sources.bel.pybel_processor), which stores the INDRA statements.
```
pbp = process_pybel_graph(sialic_acid_graph)
```
A list of INDRA statements is extracted from the BEL graph and stored in the field [`PybelProcessor.statements`](http://indra.readthedocs.io/en/latest/modules/sources/bel/index.html#indra.sources.bel.pybel_processor.PybelProcessor.statements). Note that INDRA is built to consider mechanistic information, and therefore excludes most associative relationships.
```
stmts = pbp.statements
stmts
```
The list of INDRA statements is converted to plain english using the [`EnglishAssembler`](http://indra.readthedocs.io/en/latest/modules/assemblers/english_assembler.html#indra.assemblers.english_assembler.EnglishAssembler).
```
asm = EnglishAssembler(stmts)
print(asm.make_model(), sep='\n')
```
# Conclusion
While knowledge assembly is indeed difficult and precarious, the true scientific task is to use the resulting assemblies to generate mechanistic hypotheses. By far the most common way is for a scientist to use their intuition and choose an explanatory subgraph or pathway. This notebook has demonstrated that after this has been done, the results can be serialized to English prose in a precise manner.
|
github_jupyter
|
# Basic Examples with Different Protocols
## Prerequisites
* A kubernetes cluster with kubectl configured
* curl
* grpcurl
* pygmentize
## Examples
* [Seldon Protocol](#Seldon-Protocol-Model)
* [Tensorflow Protocol](#Tensorflow-Protocol-Model)
* [KFServing V2 Protocol](#KFServing-V2-Protocol-Model)
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to setup Seldon Core with an ingress - either Ambassador or Istio.
Then port-forward to that ingress on localhost:8003 in a separate terminal either with:
* Ambassador: `kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080`
* Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80`
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
import json
import time
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writetemplate(line, cell):
with open(line, 'w') as f:
f.write(cell.format(**globals()))
VERSION=!cat ../version.txt
VERSION=VERSION[0]
VERSION
```
## Seldon Protocol Model
We will deploy a REST model that uses the SELDON Protocol namely by specifying the attribute `protocol: seldon`
```
%%writetemplate resources/model_seldon.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-seldon
spec:
protocol: seldon
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:{VERSION}
name: classifier
graph:
name: classifier
type: MODEL
name: model
replicas: 1
!kubectl apply -f resources/model_seldon.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0]]}}' \
-rpc-header seldon:example-seldon -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
d=json.loads("".join(X))
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
!kubectl delete -f resources/model_seldon.yaml
```
## Tensorflow Protocol Model
We will deploy a model that uses the TENSORFLOW Protocol namely by specifying the attribute `protocol: tensorflow`
```
%%writefile resources/model_tfserving.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-tfserving
spec:
protocol: tensorflow
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=halfplustwo
- --model_base_path=gs://seldon-models/tfserving/half_plus_two
image: tensorflow/serving
name: halfplustwo
ports:
- containerPort: 8501
name: http
protocol: TCP
- containerPort: 8500
name: grpc
protocol: TCP
graph:
name: halfplustwo
type: MODEL
endpoint:
httpPort: 8501
grpcPort: 8500
name: model
replicas: 1
!kubectl apply -f resources/model_tfserving.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-tfserving \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-tfserving -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8003/seldon/seldon/example-tfserving/v1/models/halfplustwo/:predict \
-H "Content-Type: application/json"
d=json.loads("".join(X))
print(d)
assert(d["predictions"][0] == 2.5)
X=!cd ../executor/proto && grpcurl \
-d '{"model_spec":{"name":"halfplustwo"},"inputs":{"x":{"dtype": 1, "tensor_shape": {"dim":[{"size": 3}]}, "floatVal" : [1.0, 2.0, 3.0]}}}' \
-rpc-header seldon:example-tfserving -rpc-header namespace:seldon \
-plaintext -proto ./prediction_service.proto \
0.0.0.0:8003 tensorflow.serving.PredictionService/Predict
d=json.loads("".join(X))
print(d)
assert(d["outputs"]["x"]["floatVal"][0] == 2.5)
!kubectl delete -f resources/model_tfserving.yaml
```
## KFServing V2 Protocol Model
We will deploy a REST model that uses the KFServing V2 Protocol namely by specifying the attribute `protocol: kfserving`
```
%%writefile resources/model_v2.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: triton
spec:
protocol: kfserving
predictors:
- graph:
children: []
implementation: TRITON_SERVER
modelUri: gs://seldon-models/trtis/simple-model
name: simple
name: simple
replicas: 1
!kubectl apply -f resources/model_v2.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=triton -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep triton -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"inputs":[{"name":"INPUT0","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]}]}' \
-X POST http://0.0.0.0:8003/seldon/seldon/triton/v2/models/simple/infer \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["outputs"][0]["data"][0]==2)
X=!cd ../executor/api/grpc/kfserving/inference && \
grpcurl -d '{"model_name":"simple","inputs":[{"name":"INPUT0","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]}]}' \
-plaintext -proto ./grpc_service.proto \
-rpc-header seldon:triton -rpc-header namespace:seldon \
0.0.0.0:8003 inference.GRPCInferenceService/ModelInfer
X="".join(X)
print(X)
!kubectl delete -f resources/model_v2.yaml
```
|
github_jupyter
|
# NumPy Array Basics - Multi-dimensional Arrays
```
import sys
print(sys.version)
import numpy as np
print(np.__version__)
npa = np.arange(25)
npa
```
We learned in the last video how to generate arrays, now let’s generate multidimensional arrays. These are, as you might guess, arrays with multiple dimensions.
We can create these by reshaping arrays. One of the simplest ways is to just reshape an array with the reshape command. That gives us an x by x array.
```
npa.reshape((5,5))
```
We can also use the zeros command.
```
npa2 = np.zeros((5,5))
npa2
```
To get the size of the array we can use the size attribute.
```
npa2.size
```
To get the shape of the array we can use the shape attribute.
```
npa2.shape
```
To get the number of dimensions we use the ndim attribute.
```
npa2.ndim
```
We can create as many dimensions as we need to, here's 3 dimensions.
```
np.arange(8).reshape(2,2,2)
```
Here's 4 dimensions
```
np.zeros((4,4,4,4))
np.arange(16).reshape(2,2,2,2)
```
For the most part we’ll be working with 2 dimensions.
```
npa2
npa
```
Now we can really see the power of vectorization, let’s create two random 2 dimensional arrays.
Now I’m going to set the random seed. This basically makes your random number generation reproducible.
```
np.random.seed(10)
```
let’s try some random number generation and then we can perform some matrix comparisons.
```
npa2 = np.random.random_integers(1,10,25).reshape(5,5)
npa2
npa3 = np.random.random_integers(1,10,25).reshape(5,5)
npa3
```
We can do this comparison element-wise, for example with greater than.
```
npa2 > npa3
```
We can also count the positions where they are equal by summing the boolean array.
```
(npa2 == npa3).sum()
```
Or we can count, column by column, where one is greater than or equal to the other.
We can do that with sum, or we can get the overall total by summing that result.
```
sum(npa2 >= npa3)
sum(npa2 >= npa3).sum()
```
We can also get the minimums and maximums like we did with single-dimensional arrays, either overall or along specific axes.
```
npa2.min()
npa2.min(axis=1)
npa2.max(axis=0)
```
There are plenty of other functions that NumPy has. We can transpose with the .T property or the transpose method.
```
npa2.T
npa2.transpose()
npa2.T == npa2.transpose()
```
We can also multiply this transpose by the original array, for example. This is an element-by-element multiplication.
```
npa2.T * npa2
```
We can flatten these arrays in several different ways.
We can flatten it, which returns a new copy that we can change without affecting the original.
```
np2 = npa2.flatten()
np2
```
Or we can ravel it, which returns the original array's data in a flattened form (a view rather than a copy when possible).
```
r = npa2.ravel()
r
np2[0] = 25
npa2
```
With ravel, if we change a value in the raveled array, that change shows up in the original n-dimensional array as well.
```
r[0] = 25
npa2
```
Now we can use some other helpful functions like cumsum and cumprod to get the cumulative sums and products. This works for any dimensional array.
```
npa2.cumsum()
npa2.cumprod()
```
That really covers a lot of the basic functions you're going to use or need when working with pandas, but it is worth being aware that NumPy is a very deep library that does a lot more than I've covered here. I wanted to cover these basics because they're going to come up when we're working with pandas. I'm sure this has felt fairly academic at this point, but I can promise you that it provides a valuable foundation for pandas.
If there's anything you have questions about, feel free to ask along the side and I can create some appendix videos to help you along.
|
github_jupyter
|
<a href="https://colab.research.google.com/github/RSid8/SMM4H21/blob/main/Task1a.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Importing the Libraries and Models
```
from google.colab import drive
drive.mount('/content/drive')
!pip install fairseq
!git clone https://github.com/pytorch/fairseq
%cd fairseq
%%shell
wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz
tar -xzvf roberta.large.tar.gz
import torch
from tqdm import tqdm
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
!ls
tokens = roberta.encode('Hello world!')
assert tokens.tolist() == [0, 31414, 232, 328, 2]
roberta.decode(tokens) # 'Hello world!'
```
# Preprocessing the data for training
```
import pandas as pd
df = pd.read_csv("/content/drive/MyDrive/UPENN/Task1a/train.tsv", sep = '\t')
df_tweets = pd.read_csv('/content/drive/MyDrive/UPENN/Task1a/tweets.tsv', sep = '\t')
df_class = pd.read_csv('/content/drive/MyDrive/UPENN/Task1a/class.tsv', sep = '\t')
df.columns = ["tweet_id", "tweet", "label"]
df_valid = pd.merge(df_tweets, df_class, on = 'tweet_id')
df = pd.concat([df, df_valid], axis=0)
df.head()
df.label.value_counts()
df.tweet_id.nunique()
import numpy as np
# count = df['tweet'].str.split().apply(len).value_counts()
# count.index = count.index.astype(str) + ' words:'
# count.sort_index(inplace=True)
# count
a = np.array(df['tweet'].str.split().apply(len))
print(f'Longest sentence {a.max()}, smallest sentence {a.min()}, average sentence length {a.mean()}')
# something is wrong in example - 11986
index_names = df[df['tweet'].str.split().apply(len)>35].index
df.drop(index_names, inplace=True)
df.tweet_id.nunique()
df['label'].replace({"NoADE":0, "ADE":1}, inplace=True)
df.head()
import os
import random
from glob import glob
import sklearn
from sklearn.model_selection import train_test_split
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
X_train,X_val, Y_train, Y_val= train_test_split(df['tweet'], df['label'], test_size = 0.1, random_state = 21)
X_train.reset_index(drop=True, inplace = True)
X_val.reset_index(drop=True, inplace = True)
Y_train.reset_index(drop=True, inplace = True)
Y_val.reset_index(drop=True, inplace=True)
# define oversampling strategy
over = RandomOverSampler(sampling_strategy=0.1)
# define undersampling strategy
under = RandomUnderSampler(sampling_strategy=0.5)
X_train = X_train.values.reshape(-1, 1)
X_train, Y_train = over.fit_resample(X_train, Y_train)
X_train, Y_train = under.fit_resample(X_train, Y_train)
print(Counter(Y_train))
print(X_train[0][0])
for split in ['train', 'val']:
out_fname = 'train' if split == 'train' else 'val'
f1 = open(os.path.join("/content/drive/MyDrive/UPENN/Task1a", out_fname+'.input0'), 'w')
f2 = open(os.path.join("/content/drive/MyDrive/UPENN/Task1a", out_fname+'.label'), 'w')
if split=='train':
for i in range(len(X_train)):
f1.write(str(X_train[i][0])+'\n')
f2.write(str(Y_train[i])+'\n')
else:
for i in range(len(X_val)):
f1.write(X_val[i]+'\n')
f2.write(str(Y_val[i])+'\n')
f1.close()
f2.close()
```
# Tokenize the data and Finetune Roberta
```
%%shell
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
for SPLIT in train val; do
python -m examples.roberta.multiprocessing_bpe_encoder \
--encoder-json encoder.json \
--vocab-bpe vocab.bpe \
--inputs "/content/drive/MyDrive/UPENN/Task1a/$SPLIT.input0" \
--outputs "/content/drive/MyDrive/UPENN/Task1a/$SPLIT.input0.bpe" \
--workers 60 \
--keep-empty
done
%%shell
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt'
%%bash
fairseq-preprocess \
--only-source \
--trainpref "/content/drive/MyDrive/UPENN/Task1a/train.input0.bpe" \
--validpref "/content/drive/MyDrive/UPENN/Task1a/val.input0.bpe" \
--destdir "/content/drive/MyDrive/UPENN/Task1a-bin/input0" \
--workers 60 \
--srcdict dict.txt
%%bash
fairseq-preprocess \
--only-source \
--trainpref "/content/drive/MyDrive/UPENN/Task1a/train.label" \
--validpref "/content/drive/MyDrive/UPENN/Task1a/val.label" \
--destdir "/content/drive/MyDrive/UPENN/Task1a-bin/label" \
--workers 60
%%shell
TOTAL_NUM_UPDATES=3614 # 10 epochs through UPENN for bsz 32
WARMUP_UPDATES=217 # 6 percent of the number of updates
LR=1e-05 # Peak LR for polynomial LR scheduler.
HEAD_NAME=task1a_head # Custom name for the classification head.
NUM_CLASSES=2 # Number of classes for the classification task.
MAX_SENTENCES=8 # Batch size.
ROBERTA_PATH=/content/fairseq/roberta.large/model.pt #/content/fairseq/checkpoint/checkpoint_best.pt
CUDA_VISIBLE_DEVICES=0 fairseq-train /content/drive/MyDrive/UPENN/Task1a-bin/ \
--restore-file $ROBERTA_PATH \
--max-positions 512 \
--batch-size $MAX_SENTENCES \
--max-tokens 4400 \
--task sentence_prediction \
--reset-optimizer --reset-dataloader --reset-meters \
--required-batch-size-multiple 1 \
--init-token 0 --separator-token 2 \
--arch roberta_large \
--criterion sentence_prediction \
--classification-head-name $HEAD_NAME \
--num-classes $NUM_CLASSES \
--dropout 0.1 --attention-dropout 0.1 \
--weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
--clip-norm 0.0 \
--lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
--fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
--max-epoch 6 \
--best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
--shorten-method "truncate" \
--find-unused-parameters \
--update-freq 4
!cp checkpoints/checkpoint_best.pt /content/drive/MyDrive/UPENN/checkpoints/ckpt_6_fin_rob.pt
%ls /content/drive/MyDrive/UPENN/checkpoints/
```
# Testing the Validation Split
```
from fairseq.models.roberta import RobertaModel
roberta = RobertaModel.from_pretrained(
'checkpoints',
checkpoint_file='checkpoint_best.pt',
data_name_or_path='/content/drive/MyDrive/UPENN/Task1a-bin'
)
roberta.eval() # disable dropout
label_fn = lambda label: roberta.task.label_dictionary.string(
[label + roberta.task.label_dictionary.nspecial]
)
preds, labels = [], []
for i in tqdm(range(len(X_val)), total=len(X_val)):
tokens = roberta.encode(X_val[i])
pred = label_fn(roberta.predict('task1a_head',tokens).argmax().item())
preds.append(pred)
labels.append(Y_val[i])
import pandas as pd
from sklearn.metrics import classification_report
df_preds=pd.read_csv("/content/val_final.tsv", sep='\t')
df_label=pd.read_csv("/content/class.tsv", sep='\t')
df=df_preds.merge(df_label, on="tweet_id")
df.columns=["tweet_id","preds","label"]
df['label'].replace({"NoADE":0, "ADE":1}, inplace=True)
df['preds'].replace({"NoADE":0, "ADE":1}, inplace=True)
df.head()
import pandas as pd
from sklearn.metrics import classification_report
preds=df["preds"]
labels=df["label"]
report = classification_report(labels, list(map(int, preds)))
print(report)
!rm checkpoints/checkpoint1.pt checkpoints/checkpoint2.pt checkpoints/checkpoint3.pt checkpoints/checkpoint4.pt
```
# Running on the Validation Set
```
df_tweets = pd.read_csv('/content/tweets.tsv', sep = '\t')
df_class = pd.read_csv('/content/class.tsv', sep = '\t')
df_valid = pd.merge(df_tweets, df_class, on = 'tweet_id')
df_valid['label'].replace({"NoADE":0, "ADE":1}, inplace=True)
df_valid.head()
df_test = pd.read_csv('/content/drive/MyDrive/UPENN/test_tweets.tsv', sep='\t')
index_names = df_test[df_test['tweet'].str.split().apply(len)>35].index
df_test.drop(index_names, inplace=True)
df_test.tweet_id.nunique()
label_fn = lambda label: roberta.task.label_dictionary.string(
[label + roberta.task.label_dictionary.nspecial]
)
preds, id = [], []
for index, row in tqdm(df_test.iterrows(), total=len(df_test)):
tokens = roberta.encode(row["tweet"])
pred = label_fn(roberta.predict('task1a_head',tokens).argmax().item())
preds.append(pred)
id.append(row["tweet_id"])
df_1a = pd.DataFrame(list(zip(id, preds)), columns = ['tweet_id', 'label'])
df_1a['label']=df_1a['label'].replace({0:"NoADE", 1:"ADE"})
df_1a.reset_index(drop=True, inplace=True)
df_1a.head()
df_1a.to_csv("/content/drive/MyDrive/UPENN/1a_sub2.tsv", sep='\t')
from sklearn.metrics import classification_report
report = classification_report(labels, list(map(int, preds)))
print(report)
print(Counter(preds))
df_preds = pd.DataFrame(preds, columns = ['Predictions'])
df_id = pd.DataFrame(df_valid['tweet_id'], columns = ['tweet_id'])
df_results = pd.concat([df_id, df_preds], join = 'outer', axis = 1)
df_results.head()
df_results.to_csv('/content/val.tsv', sep = '\t')
len(df_id)
import pandas as pd
df = pd.read_csv('/content/drive/MyDrive/UPENN/1a_sub2.tsv', sep = '\t')
df.drop(["Unnamed: 0"], axis=1, inplace=True)
df.columns = ["tweet_id", "label"]
df['label'].replace({0:"NoADE", 1:"ADE"}, inplace=True)
df.reset_index(drop=True, inplace=True)
df = df[df.label=="ADE"]
df.head()
df.to_csv('/content/test_sub2.tsv', sep = '\t', index= False)
import pandas as pd
df_1a = pd.read_csv('/content/test_sub2.tsv', sep = '\t')
df_1b = pd.read_csv('/content/1b_new.tsv', sep = '\t')
df_1b.drop(["Unnamed: 0"], axis=1, inplace=True)
df_1b.head()
df_1 = df_1a.merge(df_1b, on = 'tweet_id')
df_1.columns = ["tweet_id", "label", "start", "end", "span"]
df_1["start"] = df_1["start"].astype(int)
df_1["end"] = df_1["end"].astype(int)
df_1.dropna(axis=0, inplace=True)
df_1 = df_1[df_1.label=="ADE"]
df_1.head()
df_1.to_csv('/content/testb_final.tsv', sep = '\t', index= False)
```
|
github_jupyter
|
```
import intake
import xarray as xr
import os
import pandas as pd
import numpy as np
import zarr
import rhg_compute_tools.kubernetes as rhgk
import warnings
warnings.filterwarnings("ignore")
write_direc = '/gcs/rhg-data/climate/downscaled/workdir'
client, cluster = rhgk.get_standard_cluster()
cluster
```
get some CMIP6 data from GCS
here we're going to get daily `tmax` from `IPSL` for historical and SSP370 runs. The ensemble member `r1i1p1f1` isn't available in GCS so we're using `r4i1p1f1` instead.
Note that the `activity_id` for historical runs is `CMIP`, not `ScenarioMIP` as it is for the ssp-rcp scenarios.
```
activity_id = 'ScenarioMIP'
experiment_id = 'ssp370'
table_id = 'day'
variable_id = 'tasmax'
source_id = 'IPSL-CM6A-LR'
institution_id = 'NCAR'
member_id = 'r4i1p1f1'
```
first we'll take a look at what our options are
```
df_cmip6 = pd.read_csv('https://cmip6.storage.googleapis.com/cmip6-zarr-consolidated-stores-noQC.csv', dtype={'version': 'unicode'})
len(df_cmip6)
df_subset_future = df_cmip6.loc[(df_cmip6['activity_id'] == activity_id) & (df_cmip6['experiment_id'] == experiment_id)
& (df_cmip6['table_id'] == table_id) & (df_cmip6['variable_id'] == variable_id)
& (df_cmip6['source_id'] == source_id) & (df_cmip6['member_id'] == member_id)]
df_subset_future
df_subset_hist = df_cmip6.loc[(df_cmip6['experiment_id'] == 'historical')
& (df_cmip6['table_id'] == table_id) & (df_cmip6['variable_id'] == variable_id)
& (df_cmip6['source_id'] == source_id) & (df_cmip6['member_id'] == member_id)]
df_subset_hist
```
now let's actually pull the data
```
# search the cmip6 catalog
col = intake.open_esm_datastore("https://storage.googleapis.com/cmip6/pangeo-cmip6.json")
cat = col.search(activity_id=['CMIP', activity_id],
experiment_id=['historical', experiment_id], table_id=table_id, variable_id=variable_id,
source_id=source_id, member_id=member_id)
ds_model = {}
ds_model['historical'] = cat['CMIP.IPSL.IPSL-CM6A-LR.historical.day.gr'].to_dask().isel(member_id=0
).squeeze(drop=True).drop(['member_id',
'height',
'time_bounds'])
ds_model['ssp370'] = cat['ScenarioMIP.IPSL.IPSL-CM6A-LR.ssp370.day.gr'].to_dask().isel(member_id=0
).squeeze(drop=True).drop(['member_id',
'height',
'time_bounds'])
ds_model['historical']
```
rechunk in space for global bias correction
```
chunks = {'lat': 10, 'lon': 10, 'time': -1}
ds_model['historical'] = ds_model['historical'].chunk(chunks)
ds_model['historical'] = ds_model['historical'].persist()
ds_model['historical'] = ds_model['historical'].load()
ds_model['ssp370'] = ds_model['ssp370'].chunk(chunks)
ds_model['ssp370'] = ds_model['ssp370'].persist()
ds_model['historical'].to_zarr(os.path.join(write_direc, 'cmip6_test_model_historical'),
consolidated=True, compute=False, mode='w')
ds_test = xr.open_zarr(os.path.join(write_direc, 'cmip6_test_model_historical.zarr'))
ds_test
ds_test.info
ds_model['historical'].to_zarr(os.path.join(write_direc, 'cmip6_test_model_historical'), mode='w')
ds_model['ssp370'].to_netcdf(os.path.join(write_direc, 'cmip6_test_model_ssp370.nc'))
```
read in the zarr stores and see how hard it is to rechunk them in time instead of space for computing weights
```
ds_hist = zarr.open(os.path.join(write_direc, 'cmip6_test_model_historical.zarr'), mode='r')
ds_hist
ds_hist.info
```
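The cell above only reopens the store; a minimal sketch of the time-rechunking step itself is below (not executed here — the store path and chunk sizes are illustrative assumptions):
```
import os
import xarray as xr

# reopen the spatially-chunked store written above (path assumed)
ds_hist = xr.open_zarr(os.path.join(write_direc, 'cmip6_test_model_historical.zarr'))

# the store is currently chunked in space (10x10 lat/lon blocks, full time series);
# for computing weights we instead want chunks split along time, with each chunk
# holding the full spatial field, e.g. roughly one year of daily maps per chunk
ds_hist_time = ds_hist.chunk({'time': 365, 'lat': -1, 'lon': -1})

# materialize the rechunked dataset on the dask cluster
ds_hist_time = ds_hist_time.persist()
```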
|
github_jupyter
|
# Gymnasion Data Processing
Here I'm going to mine some chunk of Project Gutenberg texts for `(adj,noun)` and `(noun,verb,object)` relations using mostly SpaCy and textacy. Extracting them is easy. Filtering out the chaff is not so easy.
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from tqdm import tqdm
import json
from collections import defaultdict
from nltk import ngrams
from textacy import extract
import spacy
nlp = spacy.load('en')
```
Load in some randomly chosen Gutenberg texts.
```
import os
gb_files = [f for f in os.listdir("/Users/kyle/Documents/gutenberg_samples/") if f.startswith('gb_')]
```
Define a function to extract `(adj,noun)` relations.
```
def extract_adj2nouns(tempspacy):
"""
For a sentence like "I ate the small frog." returns [(small, frog)].
lemmatizes the noun, lowers the adjective
"""
nouns = ["NN","NNS"]
adj_noun_tuples = []
for token in tempspacy:
if token.dep_=="amod":
if token.head.tag_ in nouns:
adj_noun_tuples.append((token.text.lower(),token.head.lemma_))
return adj_noun_tuples
extract_adj2nouns(nlp(u"The small frogs were not the only ones there. The dog walked itself through the house."))
```
Textacy extracts `(s,v,o)` triples.
```
for triple in extract.subject_verb_object_triples(nlp(u"The husband ignored his wife.")):
print triple
```
I want to loop through a bunch of Gutenberg texts that I've randomly downloaded with the Gutenberg python package.
```
from langdetect import detect ## to make sure texts are english
from unidecode import unidecode ## to crudely deal with text encoding issues
noun2adj = defaultdict(list)
noun2object = defaultdict(list)
noun2adj_tuples = []
svo_triples = []
errors = 0
for fy in tqdm(gb_files[:1000]):
with open("/Users/kyle/Documents/gutenberg_samples/"+fy,'r') as f:
tempdata = f.read()
try:
if detect(tempdata)=="en": ## check english
tempspacy = nlp(tempdata.decode('utf-8'))
### adjectives
try:
for pair in extract_adj2nouns(tempspacy):
noun2adj_tuples.append(pair)
except:
pass
### svo triples
try:
gutenberg_svo_triples = extract.subject_verb_object_triples(tempspacy)
for trip in gutenberg_svo_triples:
svo_triples.append(trip)
except:
pass
except:
errors+=1
```
How many pairs (not unique) do I have of `(adj,noun)` relations?
```
len(noun2adj_tuples)
```
Of `(s,v,o)` relations?
```
len(svo_triples)
```
## Inspecting the data so far...
### `(adj, noun)` relations
```
import random
random.sample(noun2adj_tuples,20)
```
Another way to inspect data: frequency distributions.
```
from nltk import FreqDist as fd
ADJ_noun_fd = fd([a for a,n in noun2adj_tuples])
adj_NOUN_fd = fd([n for a,n in noun2adj_tuples])
ADJ_noun_fd.most_common(40)
adj_NOUN_fd.most_common(40)
```
#### Ideas...
So there are really two problems. Looking at the frequency distribution tells me that some of the most common adjectives (e.g. "few", "other") are undesirable, because they aren't closely tied to a noun: knowing that leaves are `green` is more useful than knowing that leaves can be `other`. (Also, certain common nouns are probably not as interesting, especially ones like `other`.) I have two intuitions: 1) really common relationships between adjectives and nouns are less interesting/desirable than less common ones, and 2) at the same time, I really don't want `(adj,noun)` pairs that are totally aberrant. Regarding the second point, I could filter out any adjective that doesn't occur at least `n` times in modification of a certain noun, but that really penalizes uncommon nouns (which won't have many adjectives modifying them); a quick sketch of that rejected filter is below.
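For concreteness, the count-threshold filter I'm arguing against would look something like this sketch (the threshold `n` is an arbitrary illustrative value):
```
from collections import Counter

def min_count_filter(pairs, n=3):
    """Keep an (adj, noun) pair only if that exact pair occurs at least n times."""
    counts = Counter(pairs)
    return [p for p in pairs if counts[p] >= n]

## the drawback: a noun that only appears a handful of times can never accumulate
## n occurrences of any single adjective, so all of its pairs get dropped
```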
My plan:
1. Filter out relations containing the most adjectives as well as a handful of annoying nouns
2. Filter out those relations between words that are not strongly related according to a word2vec model
A handmade list of nouns to exclude.
```
ADJS_nouns_to_exclude = [word for word,count in ADJ_noun_fd.most_common(40)]
print ADJS_nouns_to_exclude
from nltk.corpus import stopwords
stops = stopwords.words('english')
stops = stops + ["whose","less","thee","thine","thy","thou","one"] ##adjectives nouns
stops = stops + ["time","thing","one","way","part","something"] ##annoying nouns
noun2adj = defaultdict(list)
for a,n in noun2adj_tuples:
if n not in stops:
if (a not in ADJS_nouns_to_exclude) and (a not in stops):
noun2adj[n].append(a)
import gensim
word2vec_path = "/Users/kyle/Desktop/GoogleNews-vectors-negative300.bin"
model = gensim.models.KeyedVectors.load_word2vec_format(word2vec_path, binary=True)
```
Below I define a pretty confusing loop that goes through the dictionary I just made and filters out those `(adj,noun)` pairs whose words are unrelated according to a word2vec model. (Here I'm using just cosine similarity, but I could probably instead measure the probability of the text according to the model.)
```
new_noun2adj = defaultdict(list)
for k in tqdm(noun2adj.keys()):
adjs = []
for adj in noun2adj[k]:
try:
adjs.append((adj,model.similarity(adj,k)))
except:
pass
adjs.sort(key = lambda x: x[1], reverse=True)
adjs_ = []
for a,score in adjs:
if a not in adjs_:
adjs_.append(a)
## this is some weird hand-crafted logic to filter adjectives belonging to rare and common nouns differently...
## the idea is to only take the cream of the crop when there are a lot of options --- i.e. when the noun is common
if len(adjs_)>20:
for adj in adjs_[:10]:
new_noun2adj[k].append(adj)
elif len(adjs_)>10:
for adj in adjs_[:10]:
new_noun2adj[k].append(adj)
elif len(adjs_)>2:
adj = adjs_[0]
new_noun2adj[k].append(adj)
else:
pass
new_noun2adj['hat']
with open("data/noun2adj.json","w") as f:
json.dump(new_noun2adj,f)
```
### `(s,v,o)` triples...
```
svo_triples_reformatted = [(s.lemma_,v.lemma_,o.text) for s,v,o, in svo_triples]
```
Inspect data.
```
random.sample(svo_triples_reformatted,20)
Svo_fd = fd([s for s,v,o in svo_triples_reformatted])
sVo_fd = fd([v for s,v,o in svo_triples_reformatted])
svO_fd = fd([o for s,v,o, in svo_triples_reformatted])
topS = [word for word,count in Svo_fd.most_common(40)]
print topS
topV = [word for word,count in sVo_fd.most_common(40)]
print topV
topO = [word for word,count in svO_fd.most_common(40)]
print topO
```
The loop below filters out an `(s,v,o)` triple if any one of its elements meets certain exclusionary conditions.
```
svo_triples_filtered = []
for s,v,o, in svo_triples_reformatted:
Sval,Vval,Oval=False,False,False
if len(s.split())==1: ## make sure it's not a complicated noun chunk
if s.lower() not in stops: ## make sure it's not a stopword
Sval=True
if v not in topV: ## make sure it's not really common
if len(v.split())==1: ## make sure it's not a complicated verb chunk
if v.lower() not in stops: ## make sure it's not a stopwords
Vval=True
if len(o.split())==1: ### make sure it's not a complicated noun chunk
if o.lower() not in stops: ### make sure it's not a stopword
if o.lower()==o: ## this is kind of a hack to exclude proper nouns
if o.endswith("ing")==False: ## filter out annoying present participles
Oval=True
if (Sval,Vval,Oval)==(True,True,True):
svo_triples_filtered.append((s,v,o))
noun2v_o = defaultdict(list)
for s,v,o in svo_triples_filtered:
noun2v_o[s].append((v,o))
noun2v_o["king"]
```
Again, filter out those `(s,v,o)` combinations in which the `v` and `o` are not similar according to word2vec model.
```
new_noun2v_o = defaultdict(list)
for k in tqdm(noun2v_o.keys()):
vos = []
for verb,obj in noun2v_o[k]:
try:
vos.append((verb,obj,model.similarity(obj,verb)))
except:
pass
vos.sort(key = lambda x: x[2], reverse=True)
##again, logic to handle rare and common nouns differently
if len(vos)>20:
for verb,obj,value in vos[:10]:
new_noun2v_o[k].append((verb,obj))
elif len(vos)>10:
for verb,obj,value in vos[:5]:
new_noun2v_o[k].append((verb,obj))
elif len(vos)>2:
verb,obj,value = vos[0]
new_noun2v_o[k].append((verb,obj))
else:
pass
new_noun2v_o["seed"]
with open("data/noun2v_o.json","w") as f:
json.dump(new_noun2v_o,f)
```
|
github_jupyter
|
# T1129 - Shared Modules
Adversaries may abuse shared modules to execute malicious payloads. The Windows module loader can be instructed to load DLLs from arbitrary local paths and arbitrary Universal Naming Convention (UNC) network paths. This functionality resides in NTDLL.dll and is part of the Windows [Native API](https://attack.mitre.org/techniques/T1106) which is called from functions like <code>CreateProcess</code>, <code>LoadLibrary</code>, etc. of the Win32 API. (Citation: Wikipedia Windows Library Files)
The module loader can load DLLs:
* via specification of the (fully-qualified or relative) DLL pathname in the IMPORT directory;
* via EXPORT forwarded to another DLL, specified with (fully-qualified or relative) pathname (but without extension);
* via an NTFS junction or symlink program.exe.local with the fully-qualified or relative pathname of a directory containing the DLLs specified in the IMPORT directory or forwarded EXPORTs;
* via <code><file name="filename.extension" loadFrom="fully-qualified or relative pathname"></code> in an embedded or external "application manifest". The file name refers to an entry in the IMPORT directory or a forwarded EXPORT.
Adversaries may use this functionality as a way to execute arbitrary code on a victim system. For example, malware may execute shared modules to load additional components or features.
## Atomic Tests:
Currently, no tests are available for this technique.
## Detection
Monitoring DLL module loads may generate a significant amount of data and may not be directly useful for defense unless collected under specific circumstances, since benign use of Windows module load functions is common and may be difficult to distinguish from malicious behavior. Legitimate software will likely only need to load routine, bundled DLL modules or Windows system DLLs such that deviation from known module loads may be suspicious. Limiting DLL module loads to <code>%SystemRoot%</code> and <code>%ProgramFiles%</code> directories will protect against module loads from unsafe paths.
Correlation of other events with behavior surrounding module loads using API monitoring and suspicious DLLs written to disk will provide additional context to an event that may assist in determining if it is due to malicious behavior.
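As a concrete illustration of the path-based triage described above, here is a minimal sketch. It assumes module-load events (for example, Sysmon "Image loaded" events) have already been collected into Python dictionaries; the field names and the allow-listed roots are illustrative assumptions, not any particular product's schema:
```
import os

# allow-listed roots for expected module loads (assumed defaults if the
# environment variables are not set)
SAFE_PREFIXES = (
    os.environ.get("SystemRoot", r"C:\Windows").lower(),
    os.environ.get("ProgramFiles", r"C:\Program Files").lower(),
)

def suspicious_module_loads(events):
    """Return module-load events whose DLL path is outside the allow-listed roots."""
    flagged = []
    for event in events:
        module_path = os.path.normpath(event["module_path"]).lower()
        if not module_path.startswith(SAFE_PREFIXES):
            flagged.append(event)
    return flagged

# hypothetical example event:
# suspicious_module_loads([{"image": r"C:\Users\bob\app.exe",
#                           "module_path": r"C:\Users\bob\helper.dll"}])
```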
## Shield Active Defense
### Software Manipulation
Make changes to a system's software properties and functions to achieve a desired effect.
Software Manipulation allows a defender to alter or replace elements of the operating system, file system, or any other software installed and executed on a system.
#### Opportunity
There is an opportunity for the defender to observe the adversary and control what they can see, what effects they can have, and/or what data they can access.
#### Use Case
A defender can modify system calls to break communications, route things to decoy systems, prevent full execution, etc.
#### Procedures
Hook the Win32 Sleep() function so that it always performs a Sleep(1) instead of the intended duration. This can increase the speed at which dynamic analysis can be performed when a normal malicious file sleeps for long periods before attempting additional capabilities.
Hook the Win32 NetUserChangePassword() and modify it such that the new password is different from the one provided. The data passed into the function is encrypted along with the modified new password, then logged so a defender can get alerted about the change as well as decrypt the new password for use.
Alter the output of an adversary's profiling commands to make newly-built systems look like the operating system was installed months earlier.
Alter the output of adversary recon commands to not show important assets, such as a file server containing sensitive data.
|
github_jupyter
|
# <center>Master M2 MVA 2017/2018 - Graphical models - HWK 3</center>
### <center>WANG Yifan && CHU Xiao</center>
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from scipy.stats import multivariate_normal as norm
import warnings
warnings.filterwarnings("ignore")
# Data loading
data_path = 'classification_data_HWK3/'
train = np.loadtxt(data_path + 'EMGaussian.data')
test = np.loadtxt(data_path + 'EMGaussian.test')
print(train.shape, test.shape)
plt.scatter(train[0:100,0], train[0:100,1])
plt.show()
```
## Question 1
The code is implemented in class `HMM`, in function `gamma_` ( for $p(q_t |u_1 ,... , u_T )$) and `ksi_` ( for $p(q_t , q_{t+1} |u_1 ,..., u_T )$).
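Concretely, with $\alpha_t(q) = p(u_1, \dots, u_t, q_t = q)$ from the forward recursion and $\beta_t(q) = p(u_{t+1}, \dots, u_T \mid q_t = q)$ from the backward recursion, the two smoothed posteriors computed by these functions are

$$
\gamma_t(q) = p(q_t = q \mid u_1, \dots, u_T) = \frac{\alpha_t(q)\,\beta_t(q)}{\sum_{q'} \alpha_t(q')\,\beta_t(q')},
\qquad
\xi_t(i, j) = p(q_t = i,\, q_{t+1} = j \mid u_1, \dots, u_T) = \frac{\alpha_t(i)\, p(q_{t+1}=j \mid q_t=i)\, p(u_{t+1} \mid q_{t+1}=j)\, \beta_{t+1}(j)}{p(u_1, \dots, u_T)}.
$$

In the code, the transition probability $p(q_{t+1}=j \mid q_t=i)$ is stored as `A[j, i]`, and all of these quantities are manipulated in the log domain (with the usual max-subtraction trick) for numerical stability.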
```
class HMM(object):
def __init__(self, K, A, means, covs, pi):
"""
Args:
K (int): number of states
            A: transition matrix, A[j, i] = p(q_{t+1}=j | q_t=i)
pi (K,1): prior p(q)
means: means of guassian distributions
covs: covariances of gaussian distributions
"""
self.K = K
self.A = A
self.pi = pi
self.means = means
self.covs = covs
def p_(self, z, u):
""" Gaussian emission probability ~ N(means, covs)
Args:
z: latent variable, 0...K-1
u: observation
"""
return norm.pdf(u, self.means[z], self.covs[z])
def emission_(self, u):
""" Compute p(u|q=0...K-1)
u: observation
q: latent variable
Return:
proba (K, 1)
"""
eps = 1e-30
proba = np.asarray([self.p_(z, u) for z in range(self.K)]).reshape(-1,1) + eps
return proba
def alpha(self, data):
""" p(u_1...u_t, q_t)
Return:
alpha (K, T)
logalpha (K, T)
"""
T = len(data)
eps = 1e-30
logalpha = np.zeros((self.K, T))
logalpha[:, 0] = (np.log(self.emission_(data[0]) + eps) + np.log(self.pi) + eps).reshape(-1)
for t in range(1, T):
logalpha_max = logalpha[:, t-1].max()
p = np.exp(logalpha[:, t-1] - logalpha_max).reshape(-1,1)
logalpha[:, t] = (np.log(self.emission_(data[t]) + eps) \
+ np.log(self.A.dot(p) + eps) + logalpha_max).reshape(-1)
alpha = np.exp(logalpha)
return alpha, logalpha
def beta(self, data):
""" p(u_{t+1}...u_T|q_t)
Return:
beta (K, T)
logbeta (K, T)
"""
T = len(data)
eps = 1e-30
logbeta = np.zeros((self.K, T))
        logbeta[:, T-1] = np.zeros(self.K)  # beta_T(q) = 1 for every state, so log beta_T = 0
for t in range(1, T):
t = T - t -1 # T-2 ... 0
logbeta_max = logbeta[:, t+1].max()
p = np.exp((logbeta[:, t+1] - logbeta_max).reshape(-1,1) + np.log(self.emission_(data[t+1]) + eps)).reshape(-1,1)
logbeta[:, t] = (np.log(self.A.T.dot(p) + eps) + logbeta_max).reshape(-1)
beta = np.exp(logbeta)
return beta, logbeta
def gamma_(self, data):
""" Marginal posterior distribution of all latent variable q_t=0..T-1: p(q_t|U)
Return:
gamma (K, T)
"""
T = len(data)
_, logalpha = self.alpha(data)
_, logbeta = self.beta(data)
gamma = np.zeros((self.K, T))
for t in range(T):
log_alpha_beta = logalpha[:,t] + logbeta[:,t]
log_alpha_beta_max = np.max(log_alpha_beta)
# p(q_t, U)
p = np.exp(log_alpha_beta-log_alpha_beta_max)
gamma[:, t] = p/np.sum(p)
return gamma
def ksi_(self, data):
""" Joint posterior distribution of two successive latent variables: ksi[i,j] = p(q_t=i, q_t+1=j|U)
Return:
ksi (K, K, T-1)
"""
T = len(data)
_, logalpha = self.alpha(data)
_, logbeta = self.beta(data)
ksi = np.zeros((self.K, self.K, T-1))
log_ksi = np.zeros((self.K, self.K, T-1))
for t in range(T-1):
for i in range(self.K):
for j in range(self.K):
log_alpha_beta = logalpha[:, t] + logbeta[:, t]
log_alpha_beta_max = log_alpha_beta.max()
log_p = log_alpha_beta_max + np.log(np.sum(np.exp(log_alpha_beta - log_alpha_beta_max)))
log_ksi[i, j, t] = -log_p + logalpha[i, t] + logbeta[j, t+1] + np.log(self.A[j, i]) \
+ np.log(self.p_(j, data[t+1]))
ksi[i, j, t] = np.exp(log_ksi[i, j, t])
return ksi, log_ksi
def smoothing(self, data):
""" p(q_t|U)
Return:
gamma (K, T)
"""
return self.gamma_(data)
def lower_bound(self, data):
"""Compute lower bound of complete log likelihood
"""
ll = 0
eps = 1e-30
T = len(data)
gamma = self.gamma_(data)
ksi, _ = self.ksi_(data)
ll += np.sum(gamma[:,0].reshape(-1,1) * np.log(self.pi + eps))
for t in range(T-1):
ll += np.sum(ksi[:,:,t].reshape(self.K, self.K).T * np.log(self.A + eps))
for t in range(1, T):
ll += np.sum(gamma[:,t].reshape(-1,1) * np.log(self.emission_(data[t]) + eps))
return ll
def log_likelihood(self, data):
""" Compute the log likelihood of the observations
"""
T = len(data)
_, logalpha = self.alpha(data)
_, logbeta = self.beta(data)
mx = (logalpha[:,0] + logbeta[:,0]).max()
ll = np.log(np.sum(np.exp(logalpha[:, 0] + logbeta[:, 0] - mx))) + mx
return ll
def train(self, data, max_iter=100, verbal=True, validation=None):
"""
Args:
data: (T, D), training data, D is the feature dimension
max_iter: int, maximal number of iterations
verbal: boolean, if True, print the log likelihood at every iteration
validation: None or (T, D), if provided, its log likelihood will be computed and returned
Return:
lls: list, log likelihoods of training data
lls_valid: list, log likelihoods of validation dataset
"""
i = 0
eps = 1e-4
lls = [self.log_likelihood(data)]
lls_valid = [] if validation is None else [self.log_likelihood(validation)]
if verbal:
print("\tTrain log likelihood: {1}".format(i, lls[0]))
if validation is not None:
print("\tValid log likelihood: {0}".format(lls_valid[0]))
while i < max_iter:
i += 1
self.train_step(data)
ll = self.log_likelihood(data)
if len(lls) > 2 and (ll - lls[-1]) < eps:
break
lls.append(ll)
if verbal:
print("Iteration {0}:\n\tTrain log likelihood: {1}".format(i, ll))
if validation is not None:
ll_valid = self.log_likelihood(validation)
lls_valid.append(ll_valid)
if verbal:
print("\tValid log likelihood: {0}".format(ll_valid))
return lls, lls_valid
def train_step(self, data):
""" Perform EM algorithm for one step
Args:
data: (T, D), training data, D is the feature dimension
"""
T = len(data)
# E-step
gamma = self.gamma_(data)
ksi, _ = self.ksi_(data)
# M-step
self.pi = (gamma[:,0] / gamma[:,0].sum()).reshape(-1,1)
for j in range(self.K):
for k in range(self.K):
self.A[k, j] = ksi[j, k, :].sum()/np.sum(ksi[j, :, :])
for k in range(self.K):
self.means[k] = gamma[k,:].dot(data)/gamma[k,:].sum() # (1,T)*(T,D) -> (1,D)
self.covs[k] = np.sum([gamma[k,n]*(data[n]-self.means[k]).reshape(-1, 1).dot((data[n]-self.means[k]).reshape(1,-1)) for n in range(T)], 0)/gamma[k,:].sum()
def decode(self, data):
""" Viterbi algorithm (forward)
Args:
data: (T, D), training data, D is the feature dimension
"""
# Initialization
T = len(data)
eps = 1e-30
maxProb = np.zeros((self.K, T))
prev_state = np.zeros((self.K, T))
# Find the index which maximizes tmp_proba
for t in range(T):
if (t==0):
maxProb[:,0] = np.log(self.pi + eps).reshape(-1)
else:
for i in range(self.K):
tmp_proba = maxProb[:,t-1] + np.log(self.A[i,:].T + eps) + np.log(self.emission_(data[t-1]) + eps).reshape(-1)
maxValue = np.max(tmp_proba)
maxIndex = np.argmax(tmp_proba)
maxProb[i,t] = maxValue
prev_state[i,t] = maxIndex
tmp_proba = maxProb[:,T-1] + np.log(self.emission_(data[T-1]) + eps).reshape(-1)
maxValue = np.max(tmp_proba)
maxIndex = np.argmax(tmp_proba)
# Find the best path
state_index_path = np.zeros(T, dtype=int)
state_index_path[T-1] = maxIndex
for t in range(T-2,-1,-1):
state_index_path[t] = prev_state[state_index_path[t+1],t+1]
# # Viterbi algorithm (backward)
# T = len(data)
# log_viterbi = np.zeros((self.K, T))
# log_post_viterbi = np.zeros((self.K, T))
# viterbi_path = np.zeros((self.K, T), dtype=int)
# for t in range(T-1,-1,-1):
# if t == T-1:
# log_post_viterbi[:, t] = np.zeros(self.K)
# else:
# mxvalue = np.max(log_viterbi[:, t + 1])
# p = np.exp(log_viterbi[:, t + 1] - mxvalue)
# max_x = [np.max(A.T[i, :] * p) for i in range(self.K)]
# viterbi_path[:, t] = [np.argmax(self.A.T[i, :] * p) for i in range(self.K)]
# log_post_viterbi[:, t] = np.log(max_x) + mxvalue
# log_viterbi[:, t] = log_post_viterbi[:, t] + np.log(self.emission_(data[t])).reshape(-1)
# state_index_path = np.ones(T, dtype=int) * -1
# z = np.argmax(log_viterbi[:, 0])
# state_index_path[0] = z
# for t in range(T - 1):
# z = viterbi_path[z, t]
# state_index_path[t+1] = z
# return state_index_path
return state_index_path
# GMM classifier
class GMM(object):
def __init__(self, k, covariance_type='full'):
self.k = k
self.mus = None
self.alpha2 = None
self.sigmas = None
self.resp = None
self.pis = None
self.clusters = {}
self.labels = None
self.label_history = []
self.covariance_type = covariance_type
def train(self, X, init="kmeans"):
n, d = X.shape
centers = None
# initialize
if init == "kmeans":
clf = KMeans(self.k)
clf.train(X)
self.mus = clf.centers
self.labels = clf.labels
self.pis = np.array([len(clf.clusters[i])/n for i in range(self.k)])
if self.covariance_type == 'spherical':
self.alpha2 = np.array([np.sum((np.array(clf.clusters[i]) - self.mus[i]) ** 2)/len(clf.clusters[i])/2. for i in range(self.k)])
self.sigmas = np.array([self.alpha2[i] * np.eye(d) for i in range(self.k)])
elif self.covariance_type == 'full':
self.sigmas = np.array([np.cov(np.array(clf.clusters[k]).T) for k in range(self.k)])
self.resp = np.zeros((self.k, n))
for i in range(self.k):
self.resp[i] = np.array(gamma(X, i, self.k, self.pis, self.mus, self.sigmas))
t = 0
resp = self.resp.copy()
pis = self.pis.copy()
mus = self.mus.copy()
if self.covariance_type == 'spherical':
alpha2 = self.alpha2.copy()
sigmas = self.sigmas.copy()
while t < 30:
t += 1
# update
for i in range(self.k):
pis[i] = np.mean(self.resp[i])
mus[i] = np.sum(X * self.resp[i][:, np.newaxis], 0)/np.sum(self.resp[i])
if self.covariance_type == 'spherical':
alpha2[i] = np.sum([(X[j] - self.mus[i]) ** 2 * self.resp[i,j] for j in range(n)])/np.sum(self.resp[i])/2.
sigmas[i] = alpha2[i] * np.eye(d)
elif self.covariance_type == 'full':
sigmas[i] = np.sum([(X[j] - self.mus[i]).reshape(-1,1).dot((X[j] - self.mus[i]).reshape(1,-1)) * self.resp[i,j] for j in range(n)], 0)/np.sum(self.resp[i])
for i in range(self.k):
resp[i] = np.array(gamma(X, i, self.k, pis, mus, sigmas))
self.resp = resp.copy()
self.pis = pis.copy()
self.mus = mus.copy()
if self.covariance_type == 'spherical':
self.alpha2 = alpha2.copy()
self.sigmas = sigmas.copy()
self.labels = np.zeros(n, dtype=int)
for i in range(n):
self.labels[i] = np.argmax(self.resp[:, i])
def test(self, X):
n, d = X.shape
resp = np.zeros((self.k, n))
for i in range(self.k):
resp[i] = np.array(gamma(X, i, self.k, self.pis, self.mus, self.sigmas))
labels = np.zeros(n)
for i in range(n):
labels[i] = np.argmax(resp[:, i])
return labels.astype(np.int32), resp
def log_likelihood(self, X):
n, d = X.shape
_, resp = self.test(X)
return np.sum([[resp[k,i] * np.log(self.pis[k] * norm.pdf(X[i], self.mus[k], self.sigmas[k])) for k in range(self.k)] for i in range(n)])
# K-means classifier
class KMeans(object):
def __init__(self, k):
self.k = k
self.centers = None
self.clusters = {}
self.labels = None
self.inertia = None
self.label_history = []
def train(self, X, init="random"):
n = X.shape[0]
centers = None
# initialize
if init == "random":
self.centers = X[np.random.choice(n, self.k, replace=False)]
elif init == 'kmeans++':
# TODO: implement K-means++
pass
while (centers is None or np.abs(centers - self.centers).max() > 1e-5):
# old centers
centers = self.centers.copy()
for i in range(self.k):
self.clusters[i] = []
labels = []
for x in X:
dis = np.sum((centers - x)**2, 1)
label = np.argmin(dis)
self.clusters[label].append(x)
labels.append(label)
self.labels = np.array(labels)
self.label_history.append(self.labels)
# new centers
for i in range(self.k):
self.centers[i] = np.mean(np.array(self.clusters[i]), 0)
def gamma(X, k, K, pis, mus, sigmas):
""" Responsibilities
"""
return (pis[k]* norm.pdf(X, mus[k], sigmas[k]))/(np.sum([pis[i]* norm.pdf(X, mus[i], sigmas[i]) for i in range(K)], 0))
```
## Question 2
Represent $p(q_t \mid u_1, \dots, u_T)$ for each of the 4 states as a function of time for the first 100 data points in `EMGaussienne.test`.
```
A = np.diag([1./2 - 1./6]*4) + np.ones((4,4)) * 1./6
pi = np.ones((4,1))/4.
# pre-train GMM
clf = GMM(4, covariance_type='full')
clf.train(test)
# train HMM
hmm = HMM(K=4, A=A, pi=pi, means=clf.mus, covs=clf.sigmas)
smoothing = hmm.smoothing(test)
print(smoothing.shape)
for i in range(4):
plt.scatter(range(100), smoothing[i, :100])
plt.legend(['state 1', 'state 2', 'state 3', 'state 4'])
plt.show()
```
## Question 3
Derive the estimation equations of the EM algorithm.
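Without reproducing the full derivation here, the resulting M-step updates are the standard Baum–Welch re-estimation formulas (added as a reference; they match what `train_step` implements below, using the $\gamma_t$ and $\xi_t$ from Question 1):
$$\hat\pi_i = \gamma_1(i), \qquad \hat a_{i \to j} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)},$$
$$\hat\mu_k = \frac{\sum_{t=1}^{T} \gamma_t(k)\, u_t}{\sum_{t=1}^{T} \gamma_t(k)}, \qquad \hat\Sigma_k = \frac{\sum_{t=1}^{T} \gamma_t(k)\, (u_t - \hat\mu_k)(u_t - \hat\mu_k)^\top}{\sum_{t=1}^{T} \gamma_t(k)}.$$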
## Question 4
Implement the EM algorithm to learn the parameters of the model ($\pi$, $A$, $\mu_k$, $\Sigma_k$, $k = 1...4$). The means and covariances could be initialized with the ones obtained in the previous homework. Learn the model from the training data in “EMGaussienne.data”.
```
A = np.diag([1./2 - 1./6]*4) + np.ones((4,4)) * 1./6
pi = np.ones((4,1))/4.
clf = GMM(4, covariance_type='full')
clf.train(train)
# train HMM
hmm = HMM(K=4, A=A, pi=pi, means=clf.mus, covs=clf.sigmas)
ll, ll_valid = hmm.train(train, max_iter=20, verbal=True, validation=test)
```
## Question 5
Plot the log-likelihood on the train data “EMGaussienne.data” and on the test data “EMGaussienne.test” as a function of the iterations of the algorithm. Comment.
```
plt.plot(ll)
plt.plot(ll_valid)
plt.legend(['EMGaussienne.data', 'EMGaussienne.test'])
plt.title("Log-likelihood on train and test data")
plt.xlabel("iteration")
plt.ylabel("log-likelihood")
plt.show()
```
## Question 6
Return in a table the values of the log-likelihoods of the Gaussian mixture models and of the HMM on the train and on the test data.
```
# GMM
print("GMM-train:", clf.log_likelihood(train))
print("GMM-test:", clf.log_likelihood(test))
# HMM
print("HMM-train:", hmm.log_likelihood(train))
print("HMM-test:", hmm.log_likelihood(test))
```
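The cell above only prints the four numbers; if an actual table is wanted, a minimal sketch using pandas (not imported elsewhere in this notebook, so this is an optional addition) could reuse the fitted `clf` and `hmm` objects:
```
import pandas as pd

# Summarize train/test log-likelihoods of both models in a small table
summary = pd.DataFrame(
    {"train": [clf.log_likelihood(train), hmm.log_likelihood(train)],
     "test": [clf.log_likelihood(test), hmm.log_likelihood(test)]},
    index=["GMM", "HMM"])
summary
```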
## Question 8
Implement Viterbi decoding.
```
viterbi_path = hmm.decode(train)
plt.figure()
plt.title("Most likely sequence of states (Viterbi algorithm)")
plt.scatter(train[:,0], train[:,1], c=viterbi_path)
plt.scatter(hmm.means[:,0], hmm.means[:,1], color = "red")
plt.show()
plt.figure()
plt.title("Most likely sequence of states (Viterbi algorithm)")
plt.scatter(train[0:100,0], train[0:100,1], c=viterbi_path[0:100])
plt.scatter(hmm.means[:,0], hmm.means[:,1], color = "red")
plt.show()
```
## Question 9
For the data points in the test file “EMGaussienne.test”, compute the marginal probability $p(q_t \mid u_1, \dots, u_T)$ for each point to be in state $\{1,2,3,4\}$ for the parameters learned on the training set.
```
gamma_test = hmm.smoothing(test)
plt.figure(figsize=(15,5))
plt.title("The smoothing distribution (test file)")
plt.imshow(1-gamma_test[:,0:100], cmap="gray",origin="lower")
plt.xlabel("T")
plt.ylabel("States")
plt.show()
```
## Question 10
For each of these same 100 points, compute their most likely state according to the marginal probability computed in the previous question.
```
state_smoothing = np.argmax(gamma_test, axis=0)
plt.figure(figsize=(12,3))
plt.title("Most likely states (Smoothing distribution)")
plt.scatter(np.arange(100), state_smoothing[0:100]+1)
plt.xlabel("T")
plt.ylabel("States")
plt.show()
```
## Question 11
Run Viterbi on the test data. Compare the most likely sequence of states obtained for the 100 first data points with the sequence of states obtained in the previous question.
```
viterbi_test = hmm.decode(test)
plt.figure(figsize=(12,3))
plt.title("Most likely states (Viterbi algorithm)")
plt.scatter(np.arange(100), viterbi_test[0:100]+1)
plt.xlabel("T")
plt.ylabel("States")
plt.show()
```
|
github_jupyter
|
# Predicting Heart Disease using Machine Learning
This notebook uses various Python based machine learning and data science libraries in an attempt to build a machine learning model capable of predicting whether or not someone has a Heart Disease based on their medical attributes.
We're going to take the following approach:
1. [Problem Definition](#definition)
2. [Data](#data)
3. [Evaluation](#evaluation)
4. [Features](#features)
5. [Modelling](#modelling)
6. [Experimentation](#experimentation)
## <a name="definition">1. Problem Definition</a>
In a statement,
> Given clinical parameters about a patient, can we predict whether or not they have heart disease?
## <a name="data">2. Data</a>
[Heart Disease UCI - Original Version](https://archive.ics.uci.edu/ml/datasets/heart+disease)
[Heart Disease UCI - Kaggle Version](https://www.kaggle.com/ronitf/heart-disease-uci)
## <a name="evaluation">3.Evaluation</a>
> If we can reach 95% of accuracy at predicting whether or not a patient has heart disease during the proof of concept, we'll pursue the project.
## <a name="features">4.Features</a>
The following are the features we'll use to predict our target variable (heart disease or no heart disease).
1. age - age in years
2. sex - (1 = male; 0 = female)
3. cp - chest pain type
* 0: Typical angina: chest pain related to decreased blood supply to the heart
* 1: Atypical angina: chest pain not related to heart
* 2: Non-anginal pain: typically esophageal spasms (non heart related)
* 3: Asymptomatic: chest pain not showing signs of disease
4. trestbps - resting blood pressure (in mm Hg on admission to the hospital)
* anything above 130-140 is typically cause for concern
5. chol - serum cholestoral in mg/dl
* serum = LDL + HDL + .2 * triglycerides
* above 200 is cause for concern
6. fbs - (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
* '>126' mg/dL signals diabetes
7. restecg - resting electrocardiographic results
* 0: Nothing to note
* 1: ST-T Wave abnormality
- can range from mild symptoms to severe problems
- signals non-normal heart beat
* 2: Possible or definite left ventricular hypertrophy
- Enlarged heart's main pumping chamber
8. thalach - maximum heart rate achieved
9. exang - exercise induced angina (1 = yes; 0 = no)
10. oldpeak - ST depression induced by exercise relative to rest
* looks at stress of heart during exercise
* unhealthy heart will stress more
11. slope - the slope of the peak exercise ST segment
* 0: Upsloping: better heart rate with exercise (uncommon)
* 1: Flatsloping: minimal change (typical healthy heart)
* 2: Downsloping: signs of unhealthy heart
12. ca - number of major vessels (0-3) colored by flourosopy
* colored vessel means the doctor can see the blood passing through
* the more blood movement the better (no clots)
13. thal - thalium stress result
* 1,3: normal
* 6: fixed defect: used to be defect but ok now
* 7: reversible defect: no proper blood movement when exercising
14. target - have disease or not (1=yes, 0=no) (= the predicted attribute)
**Note:** No personally identifiable information (PII) can be found in the dataset.
```
# Regular EDA and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Models from scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
# Model Evaluations
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.metrics import plot_roc_curve, plot_confusion_matrix
```
---------
## Load data
```
df = pd.read_csv('data/heart-disease.csv')
df.head()
```
--------
## Exploratory Data Analysis (EDA)
1. What question(s) are we trying to solve?
2. What kind of data do we have and how do we treat different types?
3. What are the missing data and how are we going to handle them?
4. What are the outliers, why we care about them and how are we going to handle them?
5. How can we add, change or remove features to get more out of the data?
```
df.tail()
df.info()
# check if there is any missing data
df.isnull().sum()
# how many classes are in target variable?
df['target'].value_counts()
# visualization of classes
# sns.countplot(x=df['target']);
df['target'].value_counts().plot.bar(color=['salmon', 'lightblue']);
plt.xlabel('0: No Disease, 1: Heart Disease')
plt.ylabel('Count');
```
It seems there are 2 classes and the dataset is fairly balanced.
-------
## Finding Patterns in data
```
df.describe().transpose()
```
### Heart disease frequency according to Sex
```
df['sex'].value_counts()
```
There are 207 males and 96 females, so the dataset contains noticeably more males than females; we need to keep that in the back of our mind.
(1 = male; 0 = female)
```
pd.crosstab(df['sex'], df['target'])
72/(24+72), 93/(114+93)
```
We can see that, in this dataset, about 75% of the females have heart disease (72 out of 96), compared with about 45% of the males (93 out of 207).
```
# visualize the data
# pd.crosstab(df['sex'], df['target']).plot(kind='bar', color=['salmon', 'lightblue']);
pd.crosstab(df['sex'], df['target']).plot(kind='bar');
plt.title('Heart disease frequency by Sex')
plt.xlabel('0: Female, 1: Male')
plt.ylabel('Count')
plt.legend(['No Disease', 'Heart Disease']);
plt.xticks(rotation=0);
```
### Age Vs Max. Heart Rate for people who have Heart Disease
```
df.columns
plt.figure(figsize=(10, 7))
# positive cases
sns.scatterplot(data=df, x=df.age[df.target==1], y=df.thalach[df.target==1], color='salmon', s=50, alpha=0.8);
# negative cases
sns.scatterplot(data=df, x=df.age[df.target==0], y=df.thalach[df.target==0], color='lightblue', s=50, alpha=0.8)
plt.title('Heart Disease in function of Age and Max Heart Rate')
plt.xlabel('Age')
plt.ylabel('Max Heart Rate');
plt.legend(['Heart Disease', 'No Disease']);
```
### Distribution of Age
```
sns.histplot(data=df, x=df['age'], bins=30);
```
### Heart Disease Frequency per Chest Pain level
cp - chest pain type
* 0: Typical angina: chest pain related to decreased blood supply to the heart
* 1: Atypical angina: chest pain not related to heart
* 2: Non-anginal pain: typically esophageal spasms (non heart related)
* 3: Asymptomatic: chest pain not showing signs of disease
```
pd.crosstab(df['target'], df['cp'])
pd.crosstab(df['cp'], df['target']).plot(kind='bar', color=['lightblue', 'salmon']);
plt.title('Heart Disease Frequency per Chest Pain level')
plt.xlabel('Chest Pain Level')
plt.ylabel('Count')
plt.legend(['No Disease', 'Heart Disease'])
plt.xticks(rotation=0);
```
### Correlation between independent variables
```
df.corr()['target'][:-1]
# visualization
corr_matrix = df.corr()
plt.figure(figsize=(12, 8))
sns.heatmap(corr_matrix, annot=True, linewidth=0.5, fmt='.2f', cmap='viridis_r');
```
As per the heatmap above, `cp` (chest pain type) has the highest positive correlation with the target variable among the features, followed by `thalach` (maximum heart rate).
On the other hand, `exang` (exercise-induced angina) and `oldpeak` (ST depression induced by exercise relative to rest) have the lowest (most negative) correlation with the target variable.
--------
## <a name="modelling">5. Modelling</a>
```
df.head(2)
# split features and labels
X = df.drop('target', axis=1)
y = df['target']
X.head(2)
y.head(2)
# split into training, testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
```
As there is no missing data and no categorical values to convert to numerical ones, we will continue to build models and train them.
### Model Training
We will try 3 different models.
1. Logistic Regression
2. K-Nearest Neighbours Classifier
3. Random Forest Classifier
```
# put models in dictionary
models = {
'LogisticRegression': LogisticRegression(max_iter=1000),
'KNN': KNeighborsClassifier(),
'RandomForestClassifer': RandomForestClassifier()
}
# create function to fit and score model
def fit_and_score(models, X_train, X_test, y_train, y_test):
"""
Fits and evaluates the given machine learning models.
models: a dictionary of different scikit-learn machine learning models
X_train: training data (no labels)
X_test: testing data (no labels)
y_train: training labels
y_test : testing labels
returns model scores dictionary.
"""
# set random seed
np.random.seed(42)
# make dictionary to keep scores
model_scores = {}
# loop through models to fit and score
for model_name, model in models.items():
model.fit(X_train, y_train) # fit model
score = model.score(X_test, y_test) # get score
model_scores[model_name] = score # put score for each model
return model_scores
# fit and score
model_scores = fit_and_score(models, X_train, X_test, y_train, y_test)
model_scores
```
### Model Comparison
```
model_compare = pd.DataFrame(model_scores, index=['accuracy'])
model_compare.head()
model_compare.T.plot(kind='bar');
```
---------
## <a name="experimentation">6.Experimentation</a>
### Tuning or Improving our models
Now we've got baseline models, and we might want to experiment to improve the results.
We will be doing:
* Hyperparameter tuning
* Feature Importance
* Confusion Matrix
* Cross Validation
* Precision
* Recall
* F1 Score
* Classification Report
* ROC curve
* Area Under the Curve (AUC)
### Hyperparameter Tuning
1. [Hyperparameter Tuning - Manually](#manually)
2. [Hyperparameter Tuning - using RandomizedSearchCV](#randomized)
3. [Hyperparameter Tuning - using GridSearchCV](#grid)
### <a name='manually'>Hyperparameter Tuning - Manually</a>
```
train_scores = []
test_scores = []
```
### KNN
```
# create a different values of parameters
neighbours = range(1, 21)
# instantiate instance
knn = KNeighborsClassifier()
# loop through different n_neighbors
for i in neighbours:
# set param
knn.set_params(n_neighbors=i)
# fit model
knn.fit(X_train, y_train)
# get score
train_scores.append(knn.score(X_train, y_train))
test_scores.append(knn.score(X_test, y_test))
plt.plot(neighbours, train_scores, label='Train Score')
plt.plot(neighbours, test_scores, label='Test Score');
plt.xticks(np.arange(1,21,1))
plt.legend();
plt.xlabel('n_neighbor')
plt.ylabel('score');
print(f"Maximum KNN score on the test data: {max(test_scores) * 100:.2f}%")
```
-----
## <a name='randomized'>Hyperparameter Tuning - using RandomizedSearchCV</a>
We are going to tune the following models using RandomizedSearchCV.
* Logistic Regression
* RandomForest Classifier
```
# help(LogisticRegression)
np.logspace(-4, 4, 20)
# help(RandomForestClassifier)
```
#### Create Hyperparameter Grid
```
# create hyperparameter grid for Logistic Regression
log_reg_grid = {
'C': np.logspace(-4, 4, 20),
'solver': ['liblinear']
}
# create hyperparameter grid for Random Forest Classifier
rf_grid = {
'n_estimators': np.arange(10, 1000, 50),
'max_depth': [None, 3, 5, 10],
'min_samples_split': np.arange(2, 20, 2),
'min_samples_leaf': np.arange(1, 20, 2)
}
```
#### Create RandomizedSearchCV with created Hyperparameter Grid (Logistic Regression)
```
np.random.seed(42)
# set up random hyperparameter search for Logistic Regression
rs_log_reg = RandomizedSearchCV(LogisticRegression(),
log_reg_grid,
cv=5,
n_iter=20,
verbose=True)
# fit random hyperparameter search model for Logistic Regression
rs_log_reg.fit(X_train, y_train)
# check best parameters
rs_log_reg.best_params_
# check the score
rs_log_reg.score(X_test, y_test)
# comparing with baseline scores
model_scores
```
#### Create RandomizedSearchCV with created Hyperparameter Grid (Random Forest Classifier)
```
np.random.seed(42)
# set up random hyperparameter search for RandomForestClassifier
rs_rf = RandomizedSearchCV(RandomForestClassifier(), rf_grid, cv=5, n_iter=20, verbose=True)
# fit random hyperparameter search model
rs_rf.fit(X_train, y_train)
# check best parameters
rs_rf.best_params_
# check the score
rs_rf.score(X_test, y_test)
# comparing with baseline scores
model_scores
```
**We can see that between LogisticRegression and RandomForestClassifier using RandomizedSearchCV, LogisticRegression score is better.**
**So we will explore using LogisticRegression with GridSearchCV to further improve the performance.**
---------
## <a name='grid'>Hyperparameter Tuning - using GridSearchCV</a>
We are going to tune the following models using GridSearchCV.
* Logistic Regression
```
# create hyperparameter grid for Logistic Regression
log_reg_grid = {
'C': np.logspace(-4, 4, 20),
'solver': ['liblinear']
}
# set up grid hyperparameter search for Logistic Regression
gs_log_reg = GridSearchCV(LogisticRegression(),
log_reg_grid,
cv=5,
verbose=True)
# train the model
gs_log_reg.fit(X_train, y_train)
# get best parameters
gs_log_reg.best_params_
# get the score
gs_log_reg.score(X_test, y_test)
```
---------
### Evaluating Models
Evaluating our tuned machine learning classifiers, beyond accuracy
* ROC and AUC score
* Confusion Matrix, Plot Confusion Matrix
* Classification Report
* Precision
* Recall
* F1
```
# make predictions
y_preds = gs_log_reg.predict(X_test)
# ROC curve and AUC
plot_roc_curve(gs_log_reg, X_test, y_test);
confusion_matrix(y_test, y_preds)
plot_confusion_matrix(gs_log_reg, X_test, y_test);
print(classification_report(y_test, y_preds))
```
**NOTE: The `classification report` above only covers ONE train/test split of the data.**
**So we may want to use cross-validated precision, recall and F1 score to get the full picture.**
--------
## Calculate evaluation metrics using Cross Validated Precision, Recall and F1 score
- we will use `cross_val_score` for this with different `scoring` parameters.
- we will create a new model and validate it on the whole dataset.
```
# check current best parameter
gs_log_reg.best_params_
# create a new classifier with current best parameter
clf = LogisticRegression(C=0.23357214690901212, solver='liblinear')
# Cross Validated Accuracy
cv_accuracy = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
cv_accuracy
# mean of cross-validated accuracy
cv_accuracy = np.mean(cv_accuracy)
cv_accuracy
# Cross Validated Precision
cv_precision = cross_val_score(clf, X, y, scoring='precision', cv=5)
cv_precision = np.mean(cv_precision)
cv_precision
# Cross Validated Recall
cv_recall = cross_val_score(clf, X, y, scoring='recall', cv=5)
cv_recall = np.mean(cv_recall)
cv_recall
# Cross Validated F1
cv_f1 = cross_val_score(clf, X, y, scoring='f1', cv=5)
cv_f1 = np.mean(cv_f1)
cv_f1
# Visualize cross-validated metrics
cv_metrics = pd.DataFrame({'Accuracy': cv_accuracy,
'Precision': cv_precision,
'Recall': cv_recall,
'F1': cv_f1},
index=[0])
cv_metrics.T.plot.bar(legend=False);
plt.title('Cross Validated Classification Metrics')
plt.xticks(rotation=30);
```
-----------
## Feature Importance
Feature importance is another way of asking, "which features contributed most to the outcomes of the model, and how did they contribute?"
Finding Feature Importance is different for each machine learning model.
### Finding Feature Importance for Logistic Regression
```
model = LogisticRegression(C=0.23357214690901212, solver='liblinear')
model.fit(X_train, y_train)
# check Coefficient of features
model.coef_
df.head(2)
# Match coef's of features to columns name
feature_dict = dict(zip(df.columns, list(model.coef_[0])))
feature_dict
```
**NOTE: Unlike correlation, which is computed during EDA, the coefficients are model-driven.**
We only get these `coef_` values after the model has been fitted.
```
# Visualize Feature Importance
feature_df = pd.DataFrame(feature_dict, index=[0])
feature_df.T.plot.bar(title='Feature Importance of Logistic Regression', legend=False);
pd.crosstab(df['slope'], df['target'])
```
Based on the `coef_` values, the higher the value of `slope`, the higher the value the model tends to predict (moving from 0 towards 1, i.e. more likely to have heart disease).
-------
```
pd.crosstab(df['sex'], df['target'])
72/24
93/114
```
Based on the `coef_` values, the higher the value of `sex` (0 => 1), the lower the value the model tends to predict.
Example:
For sex = 0 (female), the ratio of target 1 to target 0 is 72/24 = 3.0.
For sex = 1 (male), the ratio of target 1 to target 0 is 93/114 ≈ 0.82.
So the ratio decreases from 3.0 to about 0.82 as sex goes from 0 to 1, which is consistent with the negative coefficient.
-------
## Additional Experimentation
To improve our evaluation metrics, we can
* collect more data.
* try different models like XGBoost or CatBoost (see the sketch below).
* improve the current model with additional hyperparameter tuning
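As one example of the second point, here is a minimal sketch of trying a gradient-boosted model. It assumes the `xgboost` package is installed (it is not used anywhere else in this notebook), reuses the existing `X_train`/`X_test` split, and the hyperparameters shown are illustrative rather than tuned:
```
# Hypothetical follow-up experiment: gradient boosting with XGBoost
from xgboost import XGBClassifier

xgb_clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
xgb_clf.fit(X_train, y_train)

# Compare against the baseline and tuned scores computed earlier
print('XGBoost test accuracy:', xgb_clf.score(X_test, y_test))
```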
```
# save the model
# note: cross_val_score above does not fit `clf` in place, so fit it on the training data before saving
from joblib import dump
clf.fit(X_train, y_train)
dump(clf, 'model/mdl_logistic_regression')
```
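For completeness, a minimal sketch of loading the saved model back and using it (this is an addition; it assumes the file path used above, that the estimator was fitted before being saved, and that `X_test` is still in memory):
```
# reload the saved model and sanity-check it on the held-out test set
from joblib import load

loaded_model = load('model/mdl_logistic_regression')
print('Loaded model test accuracy:', loaded_model.score(X_test, y_test))
```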
|
github_jupyter
|
<pre>
Torch : Manipulating vectors like dot product, addition etc and using GPU
Numpy : Manipuliting vectors
Pandas : Reading CSV file
Matplotlib : Plotting figure
</pre>
```
import numpy as np
import torch
import pandas as pd
from matplotlib import pyplot as plt
```
<pre>
O
O
O
O
O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O |
O |
O |
O |
O |
| |
| |
| |
| |
| |
| |
| |
Visible Hidden/Feature
Layer Layer
(n_v) (n_h)
RBM : A class that initialize RBM with default values
</pre>
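As a quick reminder of the model the class below implements (this recap is an addition and not part of the original notebook text): for a Bernoulli–Bernoulli RBM with weights $W$ of shape $(n_v, n_h)$, visible bias $a$ and hidden bias $b$, the conditionals used for Gibbs sampling are
$$p(h_j = 1 \mid v) = \sigma\Big(b_j + \sum_i v_i W_{ij}\Big), \qquad p(v_i = 1 \mid h) = \sigma\Big(a_i + \sum_j W_{ij} h_j\Big),$$
with $\sigma(x) = 1/(1+e^{-x})$. Contrastive divergence approximates the log-likelihood gradient by the difference between the "positive" statistics $\langle v h^\top \rangle_{\text{data}}$ and the "negative" statistics $\langle v h^\top \rangle_{k\text{-step Gibbs}}$.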
<pre>
Parameters
n_v : Number of visible inputs
Initialized to 0, then set to the number of inputs when data is fitted
n_h : Number of features to extract
Must be set by user
k : Sampling steps for contrastive divergence
Default value is 2 steps
epochs : Number of epochs for training RBM
Must be set by user
mini_batch_size : Size of mini batch for training
Must be set by user
alpha : Learning rate for updating parameters of RBM
Default value is 0.001
momentum : Reduces large jumps for updating parameters
weight_decay : Reduces the value of weights after every step of contrastive divergence
data : Data to be fitted for RBM
Must be given by user, otherwise there is nothing to fit
</pre>
```
class RBM():
# Parameters
# n_v : Number of visible inputs
# Initialized to 0, then set to the number of inputs when data is fitted
# n_h : Number of features to extract
# Must be set by user
# k : Sampling steps for contrastive divergence
# Default value is 2 steps
# epochs : Number of epochs for training RBM
# Must be set by user
# mini_batch_size : Size of mini batch for training
# Must be set by user
# alpha : Learning rate for updating parameters of RBM
# Default value is 0.001
# momentum : Reduces large jumps for updating parameters
# weight_decay : Reduces the value of weights after every step of contrastive divergence
# data : Data to be fitted for RBM
# Must be given by user, otherwise there is nothing to fit
def __init__(self, n_v=0, n_h=0, k=2, epochs=15, mini_batch_size=64, alpha=0.001, momentum=0.9, weight_decay=0.001):
self.number_features = 0
self.n_v = n_v
self.n_h = self.number_features
self.k = k
self.alpha = alpha
self.momentum = momentum
self.weight_decay = weight_decay
self.mini_batch_size = mini_batch_size
self.epochs = epochs
self.data = torch.randn(1, device="cuda")
# fit method is called to fit RBM for provided data
# First, data is converted in range of 0-1 cuda float tensors by dividing it by their maximum value
# Here, after calling this method, n_v is reinitialized to number of input values present in data
# number_features must be given by user before calling this method
# w Tensor of weights of RBM
# (n_v x n_h) Randomly initialized from a small Gaussian (std 0.1)
# a Tensor of bias for visible units
# (n_v x 1) Initialized to 0.5
# b Tensor of bias for hidden units
# (n_h x 1) Initialized by 1's
# w_moment Momentum value for weights
# (n_v x n_h) Initialized by zeros
# a_moment Momentum values for visible units
# (n_v x 1) Initialized by zeros
# b_moment Momentum values for hidden units
# (n_h x 1) Initialized by zeros
def fit(self):
self.data /= self.data.max()
self.data = self.data.type(torch.cuda.FloatTensor)
self.n_v = len(self.data[0])
self.n_h = self.number_features
self.w = torch.randn(self.n_v, self.n_h, device="cuda") * 0.1
self.a = torch.ones(self.n_v, device="cuda") * 0.5
self.b = torch.ones(self.n_h, device="cuda")
self.w_moment = torch.zeros(self.n_v, self.n_h, device="cuda")
self.a_moment = torch.zeros(self.n_v, device="cuda")
self.b_moment = torch.zeros(self.n_h, device="cuda")
self.train()
# train This method splits dataset into mini_batch and run for given epoch number of times
def train(self):
for epoch_no in range(self.epochs):
ep_error = 0
for i in range(0, len(self.data), self.mini_batch_size):
mini_batch = self.data[i:i+self.mini_batch_size]
ep_error += self.contrastive_divergence(mini_batch)
print("Epoch Number : ", epoch_no, " Error : ", ep_error.item())
# contrastive_divergence Performs contrastive divergence using the Gibbs sampling algorithm
# p_h_0 Probabilities of hidden units given the input visible units
# h_0 Activated hidden units sampled as 0/1 (Bernoulli) from these probabilities
# g_0 Positive associations of RBM
# wv_a Sampled hidden units used to start the Gibbs chain
# p_v_h Probability of visible neurons being activated given values of hidden neurons
# p_h_v Probability of hidden neurons being activated given values of visible neurons
# p_v_k Probabilities of visible units after k steps of Gibbs sampling
# p_h_k Probabilities of hidden units after k steps of Gibbs sampling
# g_k Negative associations of RBM
# error Reconstruction error for given mini_batch
def contrastive_divergence(self, v):
p_h_0 = self.sample_hidden(v)
h_0 = (p_h_0 >= torch.rand(self.n_h, device="cuda")).float()
g_0 = v.transpose(0, 1).mm(h_0)
wv_a = h_0
# Gibbs Sampling step
for step in range(self.k):
p_v_h = self.sample_visible(wv_a)
p_h_v = self.sample_hidden(p_v_h)
wv_a = (p_h_v >= torch.rand(self.n_h, device="cuda")).float()
p_v_k = p_v_h
p_h_k = p_h_v
g_k = p_v_k.transpose(0, 1).mm(p_h_k)
self.update_parameters(g_0, g_k, v, p_v_k, p_h_0, p_h_k)
error = torch.sum((v - p_v_k)**2)
return error
# p_h_v : Probability of hidden neurons being activated given values of visible neurons
# p_v_h : Probability of visible neurons being activated given values of hidden neurons
#-----------------------------------Bernoulli-Bernoulli RBM--------------------------------------------
# p_h_v = sigmoid( visible x weight + hidden_bias )
# p_v_h = sigmoid( hidden x weight.t + visible_bias )
#------------------------------------------------------------------------------------------------------
def sample_hidden(self, p_v_h): # Bernoulli-Bernoulli RBM
wv = p_v_h.mm(self.w)
wv_a = wv + self.b
p_h_v = torch.sigmoid(wv_a)
return p_h_v
def sample_visible(self, p_h_v): # Bernoulli-Bernoulli RBM
wh = p_h_v.mm(self.w.transpose(0, 1))
wh_b = wh + self.a
p_v_h = torch.sigmoid(wh_b)
return p_v_h
# delta_w = ( positive_associations - negative_associations ) + momentum * previous delta_w
# delta_a = sum( input - visible_probabilities_after_k_steps ) + momentum * previous delta_a
# delta_b = sum( initial_hidden_probabilities - hidden_probabilities_after_k_steps ) + momentum * previous delta_b
# each parameter is then increased by alpha * delta / batch_size, and the weights are additionally shrunk by weight_decay
def update_parameters(self, g_0, g_k, v, p_v_k, p_h_0, p_h_k):
self.w_moment *= self.momentum
del_w = (g_0 - g_k) + self.w_moment
self.a_moment *= self.momentum
del_a = torch.sum(v - p_v_k, dim=0) + self.a_moment
self.b_moment *= self.momentum
del_b = torch.sum(p_h_0 - p_h_k, dim=0) + self.b_moment
batch_size = v.size(0)
self.w += del_w * self.alpha / batch_size
self.a += del_a * self.alpha / batch_size
self.b += del_b * self.alpha / batch_size
self.w -= (self.w * self.weight_decay)
self.w_moment = del_w
self.a_moment = del_a
self.b_moment = del_b
dataset = pd.read_csv("/home/pushpull/mount/intHdd/dataset/mnist/mnist_train.csv", header=None)
data = torch.tensor(np.array(dataset)[:, 1:], device="cuda")
mnist = RBM()
mnist.data = data
mnist.number_features = 300
mnist.fit()
```
|
github_jupyter
|
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Train your first neural network: basic classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/basic_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand all the details, this is a fast-paced overview of a complete TensorFlow program with the details explained as we go.
This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow.
```
!pip install tf-nightly-2.0-preview
from __future__ import absolute_import, division, print_function
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
```
## Import the Fashion MNIST dataset
This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc) in an identical format to the articles of clothing we'll use here.
This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow, just import and load the data:
```
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```
Loading the dataset returns four NumPy arrays:
* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.
* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.
The images are 28x28 NumPy arrays, with pixel values ranging between 0 and 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:
```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
```
train_images.shape
```
Likewise, there are 60,000 labels in the training set:
```
len(train_labels)
```
Each label is an integer between 0 and 9:
```
train_labels
```
There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:
```
test_images.shape
```
And the test set contains 10,000 image labels:
```
len(test_labels)
```
## Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
```
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
```
We scale these values to a range of 0 to 1 before feeding to the neural network model. For this, we divide the values by 255. It's important that the *training set* and the *testing set* are preprocessed in the same way:
```
train_images = train_images / 255.0
test_images = test_images / 255.0
```
Display the first 25 images from the *training set* and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network.
```
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
```
## Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
### Setup the layers
The basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them. And, hopefully, these representations are more meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, like `tf.keras.layers.Dense`, have parameters that are learned during training.
```
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
```
The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely-connected, or fully-connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node *softmax* layer—this returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.
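To sanity-check the shapes and parameter counts just described, you can print a summary of the model defined above (a small addition using plain `tf.keras`; it assumes the `model` object from the previous cell):
```
# Layer-by-layer output shapes and parameter counts of the model above.
# Flatten has no parameters, the first Dense layer has 784*128 + 128 = 100,480,
# and the 10-unit softmax layer has 128*10 + 10 = 1,290.
model.summary()
```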
### Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:
* *Loss function* —This measures how accurate the model is during training. We want to minimize this function to "steer" the model in the right direction.
* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.
* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.
```
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
## Train the model
Training the neural network model requires the following steps:
1. Feed the training data to the model—in this example, the `train_images` and `train_labels` arrays.
2. The model learns to associate images and labels.
3. We ask the model to make predictions about a test set—in this example, the `test_images` array. We verify that the predictions match the labels from the `test_labels` array.
To start training, call the `model.fit` method—the model is "fit" to the training data:
```
model.fit(train_images, train_labels, epochs=5)
```
As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.
## Evaluate accuracy
Next, compare how the model performs on the test dataset:
```
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy:', test_acc)
```
It turns out, the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*. Overfitting is when a machine learning model performs worse on new data than on its training data.
## Make predictions
With the model trained, we can use it to make predictions about some images.
```
predictions = model.predict(test_images)
```
Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
```
predictions[0]
```
A prediction is an array of 10 numbers. These describe the "confidence" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value:
```
np.argmax(predictions[0])
```
So the model is most confident that this image is an ankle boot, or `class_names[9]`. And we can check the test label to see this is correct:
```
test_labels[0]
```
We can graph this to look at the full set of 10 class predictions:
```
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
```
Let's look at the 0th image, predictions, and prediction array.
```
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
```
Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident.
```
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
```
Finally, use the trained model to make a prediction about a single image.
```
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
```
`tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. So even though we're using a single image, we need to add it to a list:
```
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
```
Now predict the image:
```
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
```
`model.predict` returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch:
```
np.argmax(predictions_single[0])
```
And, as before, the model predicts a label of 9.
|
github_jupyter
|
# 6. Pandas Introduction
In the previous chapters, we have learned how to handle Numpy arrays that can be used to efficiently perform numerical calculations. Those arrays are however homogeneous structures i.e. they can only contain one type of data. Also, even if we have a single type of data, the different rows or columns of an array do not have labels, making it difficult to track what they contain. For such cases, we need a structure closer to a table as can be found in Excel, and these structures are implemented by the package Pandas.
But why can't we simply use Excel then? While Excel is practical to browse through data, it is very cumbersome to use to combine, re-arrange and thoroughly analyze data: code is hidden and difficult to share, there's no version control, it's difficult to automate tasks, the manual clicking around leads to mistakes etc.
In the next chapters, you will learn how to handle tabular data with Pandas, a Python package widely used in the scientific and data science areas. You will learn how to create and import tables, how to combine them, modify them, do statistical analysis on them and finally how to use them to easily create complex visualizations.
So that you see where this leads, we start with a short example of how Pandas can be used in a project. We look here at data provided openly by the Swiss National Science Foundation about grants attributed since 1975.
```
import numpy as np
import pandas as pd
import seaborn as sns
```
## 6.1 Importing data
Before anything, we need access to the data that can be found [here](https://opendata.swiss/de/dataset/p3-export-projects-people-and-publications). We can either manually download them and then use the path to read the data or directly use the url. The latter has the advantage that if you have an evolving source of data, these will always be up to date:
```
# local import
projects = pd.read_csv('Data/P3_GrantExport.csv',sep = ';')
# import from url
#projects = pd.read_csv('http://p3.snf.ch/P3Export/P3_GrantExport.csv',sep = ';')
```
Then we can have a brief look at the table itself, which Jupyter displays in a formatted way, and limit the view to the first 5 rows using ```head()```:
```
projects.head(5)
```
## 6.2 Exploring data
Pandas offers a variety of tools to compile information about data, and that compilation can be done very efficiently without the need for loops, conditionals etc.
For example we can quickly count how many times each University appear in that table. We just use the ```value_counts()``` method for that:
```
projects['University'].value_counts().head(10)
```
Then we can very easily plot the resulting information, either directly with Pandas or with a more advanced library like Seaborn, plotnine or Altair.
Here first with plain Pandas (using Matplotlib under the hood):
```
projects['University'].value_counts().head(10).plot(kind='bar')
```
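Since Seaborn is mentioned above as an alternative, here is a minimal sketch of the same top-10 chart drawn with it (an addition for illustration; it reuses the `projects` table and the `sns` import from the beginning, and the `n_grants` column name is just a label chosen here):
```
# Same "top 10 universities" information, plotted with Seaborn instead of plain Pandas
top10 = projects['University'].value_counts().head(10).reset_index()
top10.columns = ['University', 'n_grants']  # normalize column names across Pandas versions
sns.barplot(data=top10, x='n_grants', y='University');
```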
## 6.3 Handling different data types
Unlike Numpy arrays, Pandas can handle a variety of different data types in a dataframe. For example it is very efficient at dealing with dates. We see that our table contains e.g. a ```Start Date```. We can turn this string into an actual date:
```
projects['start'] = pd.to_datetime(projects['Start Date'])
projects['year'] = projects.start.apply(lambda x: x.year)
projects.loc[0].start
projects.loc[0].year
```
## 6.4 Data wrangling, aggregation and statistics
Pandas is very efficient at wrangling and aggregating data, i.e. grouping several elements of a table to calculate statistics on them. For example we first need here to convert the ```Approved Amount``` to a numeric value. Certain rows contain text (e.g. "not applicable") and we force the conversion:
```
projects['Approved Amount'] = pd.to_numeric(projects['Approved Amount'], errors = 'coerce')
```
Then we want to extract the top-level field without subfields, e.g. "Humanities" instead of "Humanities and Social Sciences;Theology & religion". For that we can create a custom function and apply it to an entire column:
```
science_types = ['Humanities', 'Mathematics','Biology']
projects['Field'] = projects['Discipline Name Hierarchy'].apply(
lambda el: next((y for y in [x for x in science_types if x in el] if y is not None),None) if not pd.isna(el) else el)
```
Then we group the data by discipline and year, and calculate the mean of each group:
```
aggregated = projects.groupby(['Institution Country', 'year','Field'], as_index=False).mean()
```
Finally we can use Seaborn to plot the data by "Field" using just keywords to indicate what the axes and colours should mean (following some principles of the grammar of graphics):
```
sns.lineplot(data = aggregated, x = 'year', y='Approved Amount', hue='Field');
```
Note that here, axis labelling, colouring, legend and confidence intervals have all been generated automatically based on the content of the dataframe.
We see a drastic increase around 2010: let's have a closer look. Here we again group the data by year and funding type and calculate the total funding:
```
grouped = projects.groupby(['year','Funding Instrument Hierarchy']).agg(
total_sum=pd.NamedAgg(column='Approved Amount', aggfunc='sum')).reset_index()
grouped
```
Now, for each year we keep only the 5 largest funding types to be able to plot them:
```
group_sorted = grouped.groupby('year',as_index=False).apply(lambda x: (x.groupby('Funding Instrument Hierarchy')
.sum()
.sort_values('total_sum', ascending=False))
.head(5)).reset_index()
```
Finally, we only keep the years around the jump (2006–2011):
```
instruments_by_year = group_sorted[(group_sorted.year > 2005) & (group_sorted.year < 2012)]
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
sns.barplot(data=instruments_by_year,
x='year', y='total_sum', hue='Funding Instrument Hierarchy')
```
We see that the main change is the sudden increase in funding for national research programs.
|
github_jupyter
|
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
<div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 0. Setup</h2></div>
This first step to the tutorial will make sure your system is set up to do all the remaining sections, with all software installed and all data downloaded as needed. The [index](index.ipynb) provided some links you might want to examine before you start.
## Getting set up
Please consult [holoviz.org](http://holoviz.org/installation.html) for the full instructions on installing the software used in these tutorials. Here is the condensed version of those instructions, assuming you have already downloaded and installed [Anaconda](https://www.anaconda.com/download) or [Miniconda](https://conda.io/miniconda.html) and have opened a command prompt in a Conda environment:
```
conda install anaconda-project
anaconda-project download pyviz/holoviz_tutorial
cd holoviz_tutorial # You may need to delete this directory if you've run the command above before
anaconda-project run jupyter notebook
```
If you prefer JupyterLab to the default (classic) notebook interface, you can replace "notebook" with "lab".
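For example, with the same `holoviz_tutorial` project downloaded as above, the last command would become:
```
anaconda-project run jupyter lab
```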
Once your chosen environment is running, navigate to `tutorial/00_Setup.ipynb` (i.e. this notebook) and run the following cell to test the key imports needed for this tutorial. If it completes without errors your environment should be ready to go:
```
import datashader as ds, bokeh, holoviews as hv # noqa
from distutils.version import LooseVersion
min_versions = dict(ds='0.13.0', bokeh='2.3.2', hv='1.14.4')
for lib, ver in min_versions.items():
v = globals()[lib].__version__
if LooseVersion(v) < LooseVersion(ver):
print("Error: expected {}={}, got {}".format(lib,ver,v))
```
And you should see the HoloViews, Bokeh, and Matplotlib logos after running the following cell:
```
hv.extension('bokeh', 'matplotlib')
```
## Downloading sample data
Lastly, let's make sure the datasets needed are available. First, check that the large earthquake dataset was downloaded correctly during the `anaconda-project run` command:
```
import os
from pyct import cmd
if not os.path.isfile('../data/earthquakes-projected.parq'):
cmd.fetch_data(name='holoviz', path='..') # Alternative way to fetch the data
```
Make sure that you have the SNAPPY dependency required to read these data:
```
try:
import pandas as pd
columns = ['depth', 'id', 'latitude', 'longitude', 'mag', 'place', 'time', 'type']
data = pd.read_parquet('../data/earthquakes-projected.parq', columns=columns, engine='fastparquet')
data.head()
except RuntimeError as e:
print('The data cannot be read: %s' % e)
```
If you don't see any error messages above, you should be good to go! Now that you are set up, you can continue with the [rest of the tutorial sections](01_Overview.ipynb).
|
github_jupyter
|
### Seminar: Spectrogram Madness

#### Today you're finally gonna deal with speech! We'll walk you through all the main steps of the speech processing pipeline and you'll get to do voice-warping. It's gonna be fun! ....and creepy. Very creepy.
```
from IPython.display import display, Audio
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import librosa
display(Audio("sample1.wav"))
display(Audio("sample2.wav"))
display(Audio("welcome.wav"))
amplitudes, sample_rate = librosa.core.load("sample1.wav")
display(Audio(amplitudes, rate=sample_rate))
print(sample_rate)
print("Length: {} seconds at sample rate {}".format(amplitudes.shape[0] / sample_rate, sample_rate))
plt.figure(figsize=[16, 4])
plt.title("First 10^4 out of {} amplitudes".format(len(amplitudes)))
plt.plot(amplitudes[:10000]);
```
### Task 1: Mel-Spectrogram (5 points)
As you can see, the amplitudes follow periodic patterns with different frequencies. However, it is very difficult to process these amplitudes directly because there are so many of them! A typical WAV file contains 22050 amplitudes per second, which is already way above a typical sequence length for other NLP applications. Hence, we need to compress this information to something manageable.
A typical solution is to use a __spectrogram__: instead of saving thousands of amplitudes, we can perform a Fourier transformation to find which frequencies are prevalent at each point in time. More formally, a spectrogram applies the [Short-Time Fourier Transform (STFT)](https://en.wikipedia.org/wiki/Short-time_Fourier_transform) to small overlapping windows of the amplitude time-series:
<img src="https://www.researchgate.net/profile/Phillip_Lobel/publication/267827408/figure/fig2/AS:295457826852866@1447454043380/Spectrograms-and-Oscillograms-This-is-an-oscillogram-and-spectrogram-of-the-boatwhistle.png" width="480px">
However, this spectrogram may contain extraordinarily large numbers that can destabilize neural networks. Therefore the standard approach is to convert the spectrogram into a __mel-spectrogram__ by mapping the frequencies to the [Mel-frequency spectrum](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum).
Hence, the algorithm to compute spectrogram of amplitudes $y$ becomes:
1. Compute Short-Time Fourier Transform (STFT): apply fourier transform to overlapping windows
2. Build a spectrogram: $S_{ij} = |\text{STFT}(y)_{ij}|^2$
3. Convert spectrogram to a Mel basis
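For orientation, here is a compact sketch of these three steps using librosa's built-in helpers (the function name is just for illustration, and the parameter values mirror the defaults used later in this notebook; the exercise below asks you to build the same pipeline from lower-level pieces):
```
import numpy as np
import librosa

def melspec_reference(amplitudes, sample_rate=22050, n_mels=128,
                      window_length=2048, hop_length=512, fmin=1, fmax=8192):
    # Step 1: Short-Time Fourier Transform over overlapping windows
    stft = librosa.stft(amplitudes, n_fft=window_length, hop_length=hop_length)
    # Step 2: power spectrogram
    spectrogram = np.abs(stft) ** 2
    # Step 3: project the spectrogram onto the mel basis
    mel_basis = librosa.filters.mel(sample_rate, n_fft=window_length,
                                    n_mels=n_mels, fmin=fmin, fmax=fmax)
    return mel_basis @ spectrogram
```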
```
# Some helpers:
# 1. slice time-series into overlapping windows
def slice_into_frames(amplitudes, window_length, hop_length):
return librosa.core.spectrum.util.frame(
np.pad(amplitudes, int(window_length // 2), mode='reflect'),
frame_length=window_length, hop_length=hop_length)
# output shape: [window_length, num_windows]
dummy_amps = amplitudes[2048: 6144]
dummy_frames = slice_into_frames(dummy_amps, 2048, 512)
print(amplitudes.shape)
plt.figure(figsize=[16, 4])
plt.subplot(121, title='Whole audio sequence', ylim=[-3, 3])
plt.plot(dummy_amps)
plt.subplot(122, title='Overlapping frames', yticks=[])
for i, frame in enumerate(dummy_frames.T):
plt.plot(frame + 10 - i);
# 2. Weights for window transform. Before performing FFT you can scale amplitudes by a set of weights
# The weights we're gonna use are large in the middle of the window and small on the sides
dummy_window_length = 3000
dummy_weights_window = librosa.core.spectrum.get_window('hann', dummy_window_length, fftbins=True)
plt.plot(dummy_weights_window); plt.plot([1500, 1500], [0, 1.1], label='window center'); plt.legend()
# 3. Fast Fourier Transform in Numpy. Note: this function can process several inputs at once (mind the axis!)
dummy_fft = np.fft.rfft(dummy_amps[:3000, None] * dummy_weights_window[:, None], axis=0) # complex[sequence_length, num_sequences]
plt.plot(np.real(dummy_fft)[:, 0])
print(dummy_fft.shape)
```
Okay, now it's time to combine everything into a __S__hort-__T__ime __F__ourier __T__ransform
```
def get_STFT(amplitudes, window_length, hop_length):
""" Compute short-time Fourier Transform """
# slice amplitudes into overlapping frames [window_length, num_frames]
frames = slice_into_frames(amplitudes, window_length, hop_length)
# get weights for fourier transform, float[window_length]
weights_window = <YOUR CODE>
    # apply fourier transform to frames scaled by weights
stft = <YOUR CODE>
return stft
stft = get_STFT(amplitudes, window_length=2048, hop_length=512)
plt.plot(abs(stft)[0])
def get_spectrogram(amplitudes, sample_rate=22050, n_mels=128,
window_length=2048, hop_length=512, fmin=1, fmax=8192):
"""
Implement mel-spectrogram as described above.
:param amplitudes: float [num_amplitudes], time-series of sound amplitude, same as above
:param sample rate: num amplitudes per second
:param n_mels: spectrogram channels
:param window_length: length of a patch to which you apply FFT
:param hop_length: interval between consecutive windows
    :param fmin: minimal frequency
    :param fmax: maximal frequency
    :returns: mel-spectrogram [n_mels, duration]
"""
# Step I: compute Short-Time Fourier Transform
stft = <YOUR CODE>
assert stft.shape == (window_length // 2 + 1, len(amplitudes) // hop_length + 1)
# Step II: convert stft to a spectrogram
spectrogram = <YOUR CODE>
return spectrogram
```
#### The Mel Basis
The Mel-scale is a perceptual scale which represents how sensitive humans are to various sounds. We will use it to compress and transform our spectrograms.
```
mel_basis = librosa.filters.mel(22050, n_fft=2048,
n_mels=128, fmin=1, fmax=8192)
plt.figure(figsize=[16, 10])
plt.title("Mel Basis"); plt.xlabel("Frequence"); plt.ylabel("Mel-Basis")
plt.imshow(np.log(mel_basis),origin='lower', cmap=plt.cm.hot,interpolation='nearest', aspect='auto')
plt.colorbar(use_gridspec=True)
# Check how well the original frequency basis can be recovered from the mel basis
mat= np.matmul(mel_basis.T, mel_basis)
plt.figure(figsize=[16, 10])
plt.title("recovered frequence Basis"); plt.xlabel("Frequence"); plt.ylabel("Frequency")
plt.imshow(np.log(mat),origin='lower', cmap=plt.cm.hot,interpolation='nearest', aspect='auto')
plt.colorbar(use_gridspec=True)
def get_melspectrogram(amplitudes, sample_rate=22050, n_mels=128,
window_length=2048, hop_length=512, fmin=1, fmax=8192):
spectrogram = get_spectrogram(amplitudes, sample_rate=sample_rate, n_mels=n_mels,
window_length=window_length, hop_length=hop_length, fmin=fmin, fmax=fmax)
# Step III: convert spectrogram into Mel basis (multiplying by transformation matrix)
mel_basis = librosa.filters.mel(sample_rate, n_fft=window_length,
n_mels=n_mels, fmin=fmin, fmax=fmax)
# -- matrix [n_mels, window_length / 2 + 1]
mel_spectrogram = <YOUR_CODE>
assert mel_spectrogram.shape == (n_mels, len(amplitudes) // hop_length + 1)
return mel_spectrogram
amplitudes1, s1 = librosa.core.load("./sample1.wav")
amplitudes2, s2 = librosa.core.load("./sample2.wav")
print(s1)
ref1 = librosa.feature.melspectrogram(amplitudes1, sr=sample_rate, n_mels=128, fmin=1, fmax=8192)
ref2 = librosa.feature.melspectrogram(amplitudes2, sr=sample_rate, n_mels=128, fmin=1, fmax=8192)
assert np.allclose(get_melspectrogram(amplitudes1), ref1, rtol=1e-4, atol=1e-4)
assert np.allclose(get_melspectrogram(amplitudes2), ref2, rtol=1e-4, atol=1e-4)
plt.figure(figsize=[16, 4])
plt.subplot(1, 2, 1)
plt.title("That's no moon - it's a space station!"); plt.xlabel("Time"); plt.ylabel("Frequency")
plt.imshow(np.log10(get_melspectrogram(amplitudes1)),origin='lower', vmin=-10, vmax=5, cmap=plt.cm.hot)
plt.colorbar(use_gridspec=True)
plt.subplot(1, 2, 2)
plt.title("Help me, Obi Wan Kenobi. You're my only hope."); plt.xlabel("Time"); plt.ylabel("Frequency")
plt.imshow(np.log10(get_melspectrogram(amplitudes2)),origin='lower', vmin=-10, vmax=5, cmap=plt.cm.hot);
plt.colorbar(use_gridspec=True)
# note that the second spectrogram has higher mean frequency corresponding to the difference in gender
```
### Task 2 - Griffin-Lim Algorithm - 5 Points
In this task you are going to reconstruct the original audio signal from a spectrogram using the __Griffin-Lim Algorithm (GLA)__ . The Griffin-Lim Algorithm is a phase reconstruction method based on the redundancy of the short-time Fourier transform. It promotes the consistency of a spectrogram by iterating two projections, where a spectrogram is said to be consistent when its inter-bin dependency owing to the redundancy of STFT is retained. GLA is based only on the consistency and does not take any prior knowledge about the target signal into account.
This algorithm recovers a __complex-valued spectrogram__ that is consistent and maintains the given amplitude $\mathbf{A}$, by the following alternating projection procedure. Initialize a random "reconstructed" signal $\mathbf{x}$ and obtain its STFT
$$\mathbf{X} = \text{STFT}(\mathbf{x})$$
Then we __discard__ the magnitude of $\mathbf{X}$ and keep only its phase $\mathbf{\phi}$ (which is random at the first iteration). Using this phase and the given magnitude $\mathbf{A}$ we construct a new complex-valued spectrogram $\mathbf{\tilde X}$ using Euler's formula
$$\mathbf{\tilde X} = \mathbf{A}\cdot e^{j\mathbf{\phi}}$$
Then we reconstruct the signal $\mathbf{\tilde x}$ using an __inverse STFT__:
$$\mathbf{\tilde x} = \text{iSTFT}(\mathbf{\tilde X})$$
We update our value of the signal reconstruction:
$$ \mathbf{x} = \mathbf{\tilde x} $$
Finally, we iterate this procedure multiple times and return the final $\mathbf{x}$.
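As a sanity reference, the same alternating projection can be sketched in a few lines with librosa's built-in `stft`/`istft` (the function name is just for illustration; the task below asks you to build it from your own `get_STFT`/`get_iSTFT`):
```
import numpy as np
import librosa

def griffin_lim_reference(magnitude, n_iter=50, window_length=2048, hop_length=512):
    # magnitude: STFT magnitude A, shape [1 + window_length//2, num_frames]
    n_samples = (magnitude.shape[1] - 1) * hop_length
    x = np.random.randn(n_samples)                # random initial "reconstructed" signal
    for _ in range(n_iter):
        X = librosa.stft(x, n_fft=window_length, hop_length=hop_length)
        phase = np.angle(X)                       # discard the magnitude, keep only the phase
        X_tilde = magnitude * np.exp(1j * phase)  # A * e^{j * phi}
        x = librosa.istft(X_tilde, hop_length=hop_length, win_length=window_length)
    return x
```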
```
# STEP 1: Reconstruct your Spectrogram from the Mel-Spectrogram
def inv_mel_spectrogram(mel_spectrogram, sample_rate=22050, n_mels=128,
window_length=2048, hop_length=512, fmin=1, fmax=8192):
mel_basis = librosa.filters.mel(sample_rate, n_fft=window_length,
n_mels=n_mels, fmin=fmin, fmax=fmax)
inv_mel_basis = <INSERT YOUR CODE>
    spectrogram = <INSERT YOUR CODE>
return spectrogram
amplitudes, sample_rate = librosa.core.load("welcome.wav")
display(Audio(amplitudes, rate=sample_rate))
true_spec = get_spectrogram(amplitudes)
mel_spec = get_melspectrogram(amplitudes, window_length=2048, hop_length=512)
#!!! Here you can modify your Mel-Spectrogram. Let your twisted imagination fly wild here !!!
#mel_spec[40:50,:]=0 # Zero out some freqs
# mel_spec[10:124,:] = mel_spec[0:114,:] # #Pitch-up
# mel_spec[0:10,:]=0
# mel_spec[0:114,:] = mel_spec[10:124,:] # #Pitch-down
# mel_spec[114:124,:]=0
#mel_spec[:,:] = mel_spec[:,::-1] #Time reverse
#mel_spec[64:,:] = mel_spec[:64,:] # Trippy Shit
#mel_spec[:,:] = mel_spec[::-1,:] # Aliens are here
#mel_spec[64:,:] = mel_spec[:64,:] # Trippy Shit
#mel_spec[:,:] = mel_spec[::-1,::-1] # Say hello to your friendly neighborhood Chaos God
#!!! END MADNESS !!!
#Convert Back to Spec
spec = inv_mel_spectrogram(mel_spec, window_length=2048, hop_length=512)
scale_1 = 1.0 / np.amax(true_spec)
scale_2 = 1.0 / np.amax(spec)
plt.figure(figsize=[16, 4])
plt.subplot(1, 2, 1)
plt.title("Welcome...!"); plt.xlabel("Time"); plt.ylabel("Frequency")
plt.imshow((true_spec*scale_1)**0.125,origin='lower',interpolation='nearest', cmap=plt.cm.hot, aspect='auto')
plt.colorbar(use_gridspec=True)
plt.subplot(1, 2, 2)
plt.title("Xkdfsas...!"); plt.xlabel("Time"); plt.ylabel("Frequency")
plt.imshow((spec*scale_2)**0.125,origin='lower',interpolation='nearest', cmap=plt.cm.hot, aspect='auto')
plt.colorbar(use_gridspec=True)
plt.figure(figsize=[16, 10])
plt.title("Xkdfsas...!"); plt.xlabel("Time"); plt.ylabel("Frequency")
plt.imshow((mel_spec**0.125),origin='lower',interpolation='nearest', cmap=plt.cm.hot, aspect='auto')
plt.colorbar(use_gridspec=True)
# Let's examine how to take an inverse FFT
dummy_window_length = 3000
dummy_weights_window = librosa.core.spectrum.get_window('hann', dummy_window_length, fftbins=True)
dummy_fft = np.fft.rfft(dummy_amps[:3000, None] * dummy_weights_window[:, None], axis=0) # complex[sequence_length, num_sequences]
print(dummy_fft.shape)
rec_dummy_amps = dummy_weights_window*np.real(np.fft.irfft(dummy_fft[:,0]))
plt.plot(dummy_amps[:3000])
plt.plot(rec_dummy_amps[:3000])
plt.legend(['Original', 'Reconstructed'])
# Step II: Reconstruct amplitude samples from STFT
def get_iSTFT(spectrogram, window_length, hop_length):
""" Compute inverse short-time Fourier Transform """
# get weights for fourier transform, float[window_length]
window = librosa.core.spectrum.get_window('hann', window_length, fftbins=True)
time_slices = spectrogram.shape[1]
len_samples = int(time_slices*hop_length+window_length)
x = np.zeros(len_samples)
    # apply inverse fourier transform to frames scaled by weights and save into x
amplitudes = <YOUR CODE>
# Trim the array to correct length from both sides
x = <YOUR_CODE>
return x
# Step III: Implement the Griffin-Lim Algorithm
def griffin_lim(power_spectrogram, window_size, hop_length, iterations, seed=1, verbose=True):
"""Reconstruct an audio signal from a magnitude spectrogram.
Given a power spectrogram as input, reconstruct
the audio signal and return it using the Griffin-Lim algorithm from the paper:
"Signal estimation from modified short-time fourier transform" by Griffin and Lim,
in IEEE transactions on Acoustics, Speech, and Signal Processing. Vol ASSP-32, No. 2, April 1984.
Args:
power_spectrogram (2-dim Numpy array): The power spectrogram. The rows correspond to the time slices
and the columns correspond to frequency bins.
window_size (int): The FFT size, which should be a power of 2.
        hop_length (int): The hop size in samples.
iterations (int): Number of iterations for the Griffin-Lim algorithm. Typically a few hundred
is sufficient.
Returns:
The reconstructed time domain signal as a 1-dim Numpy array.
"""
time_slices = power_spectrogram.shape[1]
len_samples = int(time_slices*hop_length-hop_length)
# Obtain STFT magnitude from Spectrogram
magnitude_spectrogram = <YOUR CODE>
# Initialize the reconstructed signal to noise.
np.random.seed(seed)
x_reconstruct = np.random.randn(len_samples)
for n in range(iterations):
        # Get the STFT of the current reconstructed signal
reconstruction_spectrogram = <YOUR_CODE>
        # Obtain the angle part of the STFT. Hint: use np.angle
reconstruction_angle = <YOUR_CODE>
# Discard magnitude part of the reconstruction and use the supplied magnitude spectrogram instead.
proposal_spectrogram = <YOUR_CODE>
        assert proposal_spectrogram.dtype == np.complex128
# Save previous construction
prev_x = x_reconstruct
# Reconstruct signal
x_reconstruct = <YOUR CODE>
# Measure RMSE
diff = np.sqrt(sum((x_reconstruct - prev_x)**2)/x_reconstruct.size)
if verbose:
            # HINT: This should decrease over multiple iterations. If it's not, your code doesn't work right!
# Use this to debug your code!
print('Reconstruction iteration: {}/{} RMSE: {} '.format(n, iterations, diff))
return x_reconstruct
rec_amplitudes1 = griffin_lim(true_spec, 2048, 512, 1, verbose=False)
display(Audio(rec_amplitudes1, rate=sample_rate))
rec_amplitudes2 = griffin_lim(true_spec, 2048, 512, 50, verbose=False)
display(Audio(rec_amplitudes2, rate=sample_rate))
rec_amplitudes3 = griffin_lim(spec, 2048, 512, 1, verbose=False)
display(Audio(rec_amplitudes3, rate=sample_rate))
rec_amplitudes4 = griffin_lim(spec, 2048, 512, 50, verbose=False)
display(Audio(rec_amplitudes4, rate=sample_rate))
# THIS IS AN EXAMPLE OF WHAT YOU ARE SUPPOSED TO GET
# Remember to apply sqrt to the power spectrogram to get the magnitude.
# Let's try this on a real spectrogram
ref_amplitudes1 = librosa.griffinlim(np.sqrt(true_spec), n_iter=1, hop_length=512, win_length=2048)
display(Audio(ref_amplitudes1, rate=sample_rate))
ref_amplitudes2 = librosa.griffinlim(np.sqrt(true_spec), n_iter=50, hop_length=512, win_length=2048)
display(Audio(ref_amplitudes2, rate=sample_rate))
# Now let's try this on a reconstructed spectrogram
ref_amplitudes3 = librosa.griffinlim(np.sqrt(spec), n_iter=1, hop_length=512, win_length=2048)
display(Audio(ref_amplitudes3, rate=sample_rate))
ref_amplitudes4 = librosa.griffinlim(np.sqrt(spec), n_iter=50, hop_length=512, win_length=2048)
display(Audio(ref_amplitudes4, rate=sample_rate))
```
|
github_jupyter
|
# Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called [Cart-Pole](https://gym.openai.com/envs/CartPole-v0). In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.

We can simulate this game using [OpenAI Gym](https://gym.openai.com/). First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
```
import gym
import tensorflow as tf
import numpy as np
```
>**Note:** Make sure you have OpenAI Gym cloned into the same directory as this notebook. I've included `gym` as a submodule, so you can run `git submodule update --init --recursive` to pull the contents into the `gym` repo.
```
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
```
We interact with the simulation through `env`. To show the simulation running, you can use `env.render()` to render one frame. Passing in an action as an integer to `env.step` will generate the next step in the simulation. You can see how many actions are possible from `env.action_space` and to get a random action you can use `env.action_space.sample()`. This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
```
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
```
To shut the window showing the simulation, use `env.close()`.
If you ran the simulation above, we can look at the rewards:
```
print(rewards[-20:])
```
The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
## Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max_{a'}{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Previously we used this equation to learn values for a Q-_table_. However, for this game there is a huge number of states available. The state has four values: the position and velocity of the cart, and the angle and angular velocity of the pole. These are all real-valued numbers, so ignoring floating-point precision, there are practically infinitely many states. Instead of using a table, we'll replace it with a neural network that approximates the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$, is calculated by passing a state into the network. The network uses fully connected hidden layers and outputs one Q-value for each available action.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max_{a'}{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
```
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
```
## Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a `Memory` object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a `Memory` object. If you're unfamiliar with `deque`, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
```
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
```
## Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an **$\epsilon$-greedy policy**.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called _exploitation_. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
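Concretely, the action selection at each step looks roughly like the sketch below. The `choose_action` name is just for illustration, and `sess`, `mainQN`, and `env` refer to the TensorFlow session, Q-network, and Gym environment defined elsewhere in this notebook; the decay schedule mirrors the hyperparameters used in the training loop further below.
```
import numpy as np

def choose_action(state, step, sess, mainQN, env,
                  explore_start=1.0, explore_stop=0.01, decay_rate=0.0001):
    # exploration probability decays exponentially with the global step count
    explore_p = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * step)
    if explore_p > np.random.rand():
        # explore: take a random action
        return env.action_space.sample()
    # exploit: take the action with the highest predicted Q-value
    feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
    Qs = sess.run(mainQN.output, feed_dict=feed)
    return np.argmax(Qs)
```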
## Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in _episodes_. One *episode* is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once that goal is met. The game ends if the pole tilts too far over, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
* Initialize the memory $D$
* Initialize the action-value network $Q$ with random weights
* **For** episode = 1, $M$ **do**
* **For** $t$, $T$ **do**
* With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
* Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
* Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
* Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
* Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
* Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
* **endfor**
* **endfor**
## Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're also tuning the simulation.
```
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
```
## Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
```
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
```
## Training
Below we'll train our agent. If you want to watch it train, uncomment the `env.render()` line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
```
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
```
## Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
```
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
```
## Testing
Let's check out how our trained agent plays the game.
```
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
```
## Extending this
So, Cart-Pole is a pretty simple game. However, the same model can be used to train an agent to play something much more complicated like Pong or Space Invaders. Instead of a state like we're using here though, you'd want to use convolutional layers to get the state from the screen images.

I'll leave it as a challenge for you to use deep Q-learning to train an agent to play Atari games. Here's the original paper which will get you started: http://www.davidqiu.com:8888/research/nature14236.pdf.
|
github_jupyter
|
# NASA Data Exploration
```
raw_data_dir = '../data/raw'
processed_data_dir = '../data/processed'
figsize_width = 12
figsize_height = 8
output_dpi = 72
# Imports
import os
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
# Load Data
nasa_temp_file = os.path.join(raw_data_dir, 'nasa_temperature_anomaly.txt')
nasa_sea_file = os.path.join(raw_data_dir, 'nasa_sea_level.txt')
nasa_co2_file = os.path.join(raw_data_dir, 'nasa_carbon_dioxide_levels.txt')
# Variable Setup
default_fig_size = (figsize_width, figsize_height)
# - Process temperature data
temp_data = pd.read_csv(nasa_temp_file, sep='\t', header=None)
temp_data.columns = ['Year', 'Annual Mean', 'Lowess Smoothing']
temp_data.set_index('Year', inplace=True)
fig, ax = plt.subplots(figsize=default_fig_size)
temp_data.plot(ax=ax)
ax.grid(True, linestyle='--', color='grey', alpha=0.6)
ax.set_title('Global Temperature Anomaly Data', fontweight='bold')
ax.set_xlabel('')
ax.set_ylabel('Temperature Anomaly ($\degree$C)')
ax.legend()
plt.show();
# - Process Sea-level File
# -- Figure out header rows
with open(nasa_sea_file, 'r') as fin:
all_lines = fin.readlines()
header_lines = np.array([1 for x in all_lines if x.startswith('HDR')]).sum()
sea_level_data = pd.read_csv(nasa_sea_file, delim_whitespace=True,
skiprows=header_lines-1).reset_index()
sea_level_data.columns = ['Altimeter Type', 'File Cycle', 'Year Fraction',
'N Observations', 'N Weighted Observations', 'GMSL',
'Std GMSL', 'GMSL (smoothed)', 'GMSL (GIA Applied)',
'Std GMSL (GIA Applied)', 'GMSL (GIA, smoothed)',
'GMSL (GIA, smoothed, filtered)']
sea_level_data.set_index('Year Fraction', inplace=True)
fig, ax = plt.subplots(figsize=default_fig_size)
sea_level_var = sea_level_data.loc[:, 'GMSL (GIA, smoothed, filtered)'] \
- sea_level_data.loc[:, 'GMSL (GIA, smoothed, filtered)'].iloc[0]
sea_level_var.plot(ax=ax)
ax.grid(True, color='grey', alpha=0.6, linestyle='--')
ax.set_title('Global Sea-Level Height Change over Time', fontweight='bold')
ax.set_xlabel('')
ax.set_ylabel('Sea Height Change (mm)')
ax.legend(loc='upper left')
plt.show();
# - Process Carbon Dioxide Data
with open(nasa_co2_file, 'r') as fin:
all_lines = fin.readlines()
header_lines = np.array([1 for x in all_lines if x.startswith('#')]).sum()
co2_data = pd.read_csv(nasa_co2_file, skiprows=header_lines, header=None,
delim_whitespace=True)
co2_data[co2_data == -99.99] = np.nan
co2_data.columns = ['Year', 'Month', 'Year Fraction', 'Average', 'Interpolated',
'Trend', 'N Days']
co2_data.set_index(['Year', 'Month'], inplace=True)
new_idx = [datetime(x[0], x[1], 1) for x in co2_data.index]
co2_data.index = new_idx
co2_data.index.name = 'Date'
# - Plot
fig, ax = plt.subplots(figsize=default_fig_size)
co2_data.loc[:, 'Average'].plot(ax=ax)
ax.grid(True, linestyle='--', color='grey', alpha=0.6)
ax.set_xlabel('')
ax.set_ylabel('$CO_2$ Level (ppm)')
ax.set_title('Global Carbon Dioxide Level over Time', fontweight='bold')
plt.show();
```
|
github_jupyter
|
## Train a model with Iris data using XGBoost algorithm
### Model is trained with XGBoost installed in notebook instance
### In the later examples, we will train using SageMaker's XGBoost algorithm
```
# Install xgboost in notebook instance.
#### Command to install xgboost
!pip install xgboost==1.2
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools
import xgboost as xgb
from sklearn import preprocessing
from sklearn.metrics import classification_report, confusion_matrix
column_list_file = 'iris_train_column_list.txt'
train_file = 'iris_train.csv'
validation_file = 'iris_validation.csv'
columns = ''
with open(column_list_file,'r') as f:
columns = f.read().split(',')
columns
# Encode Class Labels to integers
# Labeled Classes
labels=[0,1,2]
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
le = preprocessing.LabelEncoder()
le.fit(classes)
# Specify the column names as the file does not have column header
df_train = pd.read_csv(train_file,names=columns)
df_validation = pd.read_csv(validation_file,names=columns)
df_train.head()
df_validation.head()
X_train = df_train.iloc[:,1:] # Features: 1st column onwards
y_train = df_train.iloc[:,0].ravel() # Target: 0th column
X_validation = df_validation.iloc[:,1:]
y_validation = df_validation.iloc[:,0].ravel()
# Launch a classifier
# XGBoost Training Parameter Reference:
# https://xgboost.readthedocs.io/en/latest/parameter.html
classifier = xgb.XGBClassifier(objective="multi:softmax",
num_class=3,
n_estimators=100)
classifier
classifier.fit(X_train,
y_train,
eval_set = [(X_train, y_train), (X_validation, y_validation)],
eval_metric=['mlogloss'],
early_stopping_rounds=10)
# early_stopping_rounds - needs to be passed in as a hyperparameter in SageMaker XGBoost implementation
# "The model trains until the validation score stops improving.
# Validation error needs to decrease at least every early_stopping_rounds to continue training.
# Amazon SageMaker hosting uses the best model for inference."
eval_result = classifier.evals_result()
training_rounds = range(len(eval_result['validation_0']['mlogloss']))
print(training_rounds)
plt.scatter(x=training_rounds,y=eval_result['validation_0']['mlogloss'],label='Training Error')
plt.scatter(x=training_rounds,y=eval_result['validation_1']['mlogloss'],label='Validation Error')
plt.grid(True)
plt.xlabel('Iteration')
plt.ylabel('LogLoss')
plt.title('Training Vs Validation Error')
plt.legend()
plt.show()
xgb.plot_importance(classifier)
plt.show()
df = pd.read_csv(validation_file,names=columns)
df.head()
X_test = df.iloc[:,1:]
print(X_test[:5])
result = classifier.predict(X_test)
result[:5]
df['predicted_class'] = result #le.inverse_transform(result)
df.head()
# Compare performance of Actual and Model 1 Prediction
plt.figure()
plt.scatter(df.index,df['encoded_class'],label='Actual')
plt.scatter(df.index,df['predicted_class'],label='Predicted',marker='^')
plt.legend(loc=4)
plt.yticks([0,1,2])
plt.xlabel('Sample')
plt.ylabel('Class')
plt.show()
```
<h2>Confusion Matrix</h2>
A confusion matrix is a table that summarizes the performance of a classification model.<br><br>
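For instance, on a tiny set of hypothetical labels (just to illustrate the layout), the matrix counts how often each true class (rows) was predicted as each class (columns):
```
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 2, 2]  # hypothetical true labels
y_pred = [0, 1, 1, 1, 2, 0]  # hypothetical predicted labels
# rows correspond to the true class, columns to the predicted class
print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2]))
# [[1 1 0]
#  [0 2 0]
#  [1 0 1]]
```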
```
# Reference:
# https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
#print("Normalized confusion matrix")
#else:
# print('Confusion matrix, without normalization')
#print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
# Compute confusion matrix
cnf_matrix = confusion_matrix(df['encoded_class'],
df['predicted_class'],labels=labels)
cnf_matrix
# Plot confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=classes,
title='Confusion matrix - Count')
# Plot confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=classes,
title='Confusion matrix - Count',normalize=True)
print(classification_report(
df['encoded_class'],
df['predicted_class'],
labels=labels,
target_names=classes))
```
|
github_jupyter
|
# Handling uncertainty with quantile regression
```
%matplotlib inline
```
[Quantile regression](https://www.wikiwand.com/en/Quantile_regression) is useful when you're not so much interested in the accuracy of your model, but rather you want your model to be good at ranking observations correctly. The typical way to perform quantile regression is to use a special loss function, namely the quantile loss. The quantile loss takes a parameter, $\alpha$ (alpha), which indicates which quantile the model should be targeting. In the case of $\alpha = 0.5$, this is equivalent to asking the model to predict the median value of the target, rather than the mean value that the squared loss targets.
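For reference, the quantile (pinball) loss for a single observation can be written as the sketch below; up to implementation details, this is the quantity that `optim.losses.Quantile` minimizes (the function name here is just for illustration):
```
def pinball_loss(y_true, y_pred, alpha):
    # under-predictions are penalized by alpha, over-predictions by (1 - alpha)
    diff = y_true - y_pred
    return max(alpha * diff, (alpha - 1) * diff)
```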
A nice thing we can do with quantile regression is to produce a prediction interval for each prediction. Indeed, if we predict the lower and upper quantiles of the target then we will be able to obtain a "trust region" within which the true value is likely to fall. Of course, the likelihood will depend on the chosen quantiles. For a slightly more detailed explanation see [this](https://medium.com/the-artificial-impostor/quantile-regression-part-1-e25bdd8d9d43) blog post.
As an example, let us take the [simple time series model we built in another notebook](building-a-simple-time-series-model.md). Instead of predicting the mean value of the target distribution, we will predict the 5th, 50th, and 95th quantiles. This will require training three separate models, so we will encapsulate the model building logic in a function called `make_model`. We also have to slightly adapt the training loop, but not by much. Finally, we will draw the prediction interval along with the predictions for the 50th quantile (i.e. the median) and the true values.
```
import calendar
import math
import matplotlib.pyplot as plt
from river import compose
from river import datasets
from river import linear_model
from river import metrics
from river import optim
from river import preprocessing
from river import stats
from river import time_series
def get_ordinal_date(x):
return {'ordinal_date': x['month'].toordinal()}
def get_month_distances(x):
return {
calendar.month_name[month]: math.exp(-(x['month'].month - month) ** 2)
for month in range(1, 13)
}
def make_model(alpha):
extract_features = compose.TransformerUnion(get_ordinal_date, get_month_distances)
scale = preprocessing.StandardScaler()
learn = linear_model.LinearRegression(
intercept_lr=0,
optimizer=optim.SGD(3),
loss=optim.losses.Quantile(alpha=alpha)
)
model = extract_features | scale | learn
model = time_series.Detrender(regressor=model, window_size=12)
return model
metric = metrics.MAE()
models = {
'lower': make_model(alpha=0.05),
'center': make_model(alpha=0.5),
'upper': make_model(alpha=0.95)
}
dates = []
y_trues = []
y_preds = {
'lower': [],
'center': [],
'upper': []
}
for x, y in datasets.AirlinePassengers():
y_trues.append(y)
dates.append(x['month'])
for name, model in models.items():
y_preds[name].append(model.predict_one(x))
model.learn_one(x, y)
# Update the error metric
metric.update(y, y_preds['center'][-1])
# Plot the results
fig, ax = plt.subplots(figsize=(10, 6))
ax.grid(alpha=0.75)
ax.plot(dates, y_trues, lw=3, color='#2ecc71', alpha=0.8, label='Truth')
ax.plot(dates, y_preds['center'], lw=3, color='#e74c3c', alpha=0.8, label='Prediction')
ax.fill_between(dates, y_preds['lower'], y_preds['upper'], color='#e74c3c', alpha=0.3, label='Prediction interval')
ax.legend()
ax.set_title(metric);
```
An important thing to note is that the prediction interval we obtained should not be confused with a confidence interval. Simply put, a prediction interval represents uncertainty for where the true value lies, whereas a confidence interval encapsulates the uncertainty on the prediction. You can find out more by reading [this](https://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals) CrossValidated post.
|
github_jupyter
|
<a id='1'></a>
# 1. Import packages
```
from keras.models import Sequential, Model
from keras.layers import *
from keras.layers.advanced_activations import LeakyReLU
from keras.activations import relu
from keras.initializers import RandomNormal
from keras.applications import *
import keras.backend as K
from tensorflow.contrib.distributions import Beta
import tensorflow as tf
from keras.optimizers import Adam
from image_augmentation import random_transform
from image_augmentation import random_warp
from utils import get_image_paths, load_images, stack_images
from pixel_shuffler import PixelShuffler
import time
import numpy as np
from PIL import Image
import cv2
import glob
from random import randint, shuffle
from IPython.display import clear_output
from IPython.display import display
import matplotlib.pyplot as plt
%matplotlib inline
```
<a id='4'></a>
# 4. Config
mixup paper: https://arxiv.org/abs/1710.09412
Default training data directories: `./faceA/` and `./faceB/`
```
K.set_learning_phase(0)
channel_axis=-1
channel_first = False
IMAGE_SHAPE = (64, 64, 3)
nc_in = 3 # number of input channels of generators
nc_D_inp = 6 # number of input channels of discriminators
use_self_attn = False
w_l2 = 1e-4 # weight decay
batchSize = 8
# Path of training images
img_dirA = './faceA/*.*'
img_dirB = './faceB/*.*'
```
<a id='5'></a>
# 5. Define models
```
class Scale(Layer):
'''
Code borrows from https://github.com/flyyufelix/cnn_finetune
'''
def __init__(self, weights=None, axis=-1, gamma_init='zero', **kwargs):
self.axis = axis
self.gamma_init = initializers.get(gamma_init)
self.initial_weights = weights
super(Scale, self).__init__(**kwargs)
def build(self, input_shape):
self.input_spec = [InputSpec(shape=input_shape)]
# Compatibility with TensorFlow >= 1.0.0
self.gamma = K.variable(self.gamma_init((1,)), name='{}_gamma'.format(self.name))
self.trainable_weights = [self.gamma]
if self.initial_weights is not None:
self.set_weights(self.initial_weights)
del self.initial_weights
def call(self, x, mask=None):
return self.gamma * x
def get_config(self):
config = {"axis": self.axis}
base_config = super(Scale, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def self_attn_block(inp, nc):
'''
Code borrows from https://github.com/taki0112/Self-Attention-GAN-Tensorflow
'''
assert nc//8 > 0, f"Input channels must be >= 8, but got nc={nc}"
x = inp
shape_x = x.get_shape().as_list()
f = Conv2D(nc//8, 1, kernel_initializer=conv_init)(x)
g = Conv2D(nc//8, 1, kernel_initializer=conv_init)(x)
h = Conv2D(nc, 1, kernel_initializer=conv_init)(x)
shape_f = f.get_shape().as_list()
shape_g = g.get_shape().as_list()
shape_h = h.get_shape().as_list()
flat_f = Reshape((-1, shape_f[-1]))(f)
flat_g = Reshape((-1, shape_g[-1]))(g)
flat_h = Reshape((-1, shape_h[-1]))(h)
s = Lambda(lambda x: tf.matmul(x[0], x[1], transpose_b=True))([flat_g, flat_f])
beta = Softmax(axis=-1)(s)
o = Lambda(lambda x: tf.matmul(x[0], x[1]))([beta, flat_h])
o = Reshape(shape_x[1:])(o)
o = Scale()(o)
out = add([o, inp])
return out
def conv_block(input_tensor, f):
x = input_tensor
x = Conv2D(f, kernel_size=3, strides=2, kernel_regularizer=regularizers.l2(w_l2),
kernel_initializer=conv_init, use_bias=False, padding="same")(x)
x = Activation("relu")(x)
return x
def conv_block_d(input_tensor, f, use_instance_norm=False):
x = input_tensor
x = Conv2D(f, kernel_size=4, strides=2, kernel_regularizer=regularizers.l2(w_l2),
kernel_initializer=conv_init, use_bias=False, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
return x
def res_block(input_tensor, f):
x = input_tensor
x = Conv2D(f, kernel_size=3, kernel_regularizer=regularizers.l2(w_l2),
kernel_initializer=conv_init, use_bias=False, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
x = Conv2D(f, kernel_size=3, kernel_regularizer=regularizers.l2(w_l2),
kernel_initializer=conv_init, use_bias=False, padding="same")(x)
x = add([x, input_tensor])
x = LeakyReLU(alpha=0.2)(x)
return x
def upscale_ps(filters, use_norm=True):
def block(x):
x = Conv2D(filters*4, kernel_size=3, kernel_regularizer=regularizers.l2(w_l2),
kernel_initializer=RandomNormal(0, 0.02), padding='same')(x)
x = LeakyReLU(0.2)(x)
x = PixelShuffler()(x)
return x
return block
def Discriminator(nc_in, input_size=64):
inp = Input(shape=(input_size, input_size, nc_in))
#x = GaussianNoise(0.05)(inp)
x = conv_block_d(inp, 64, False)
x = conv_block_d(x, 128, False)
x = self_attn_block(x, 128) if use_self_attn else x
x = conv_block_d(x, 256, False)
x = self_attn_block(x, 256) if use_self_attn else x
out = Conv2D(1, kernel_size=4, kernel_initializer=conv_init, use_bias=False, padding="same")(x)
return Model(inputs=[inp], outputs=out)
def Encoder(nc_in=3, input_size=64):
inp = Input(shape=(input_size, input_size, nc_in))
x = Conv2D(64, kernel_size=5, kernel_initializer=conv_init, use_bias=False, padding="same")(inp)
x = conv_block(x,128)
x = conv_block(x,256)
x = self_attn_block(x, 256) if use_self_attn else x
x = conv_block(x,512)
x = self_attn_block(x, 512) if use_self_attn else x
x = conv_block(x,1024)
x = Dense(1024)(Flatten()(x))
x = Dense(4*4*1024)(x)
x = Reshape((4, 4, 1024))(x)
out = upscale_ps(512)(x)
return Model(inputs=inp, outputs=out)
def Decoder_ps(nc_in=512, input_size=8):
input_ = Input(shape=(input_size, input_size, nc_in))
x = input_
x = upscale_ps(256)(x)
x = upscale_ps(128)(x)
x = self_attn_block(x, 128) if use_self_attn else x
x = upscale_ps(64)(x)
x = res_block(x, 64)
x = self_attn_block(x, 64) if use_self_attn else x
#x = Conv2D(4, kernel_size=5, padding='same')(x)
alpha = Conv2D(1, kernel_size=5, padding='same', activation="sigmoid")(x)
rgb = Conv2D(3, kernel_size=5, padding='same', activation="tanh")(x)
out = concatenate([alpha, rgb])
return Model(input_, out)
encoder = Encoder()
decoder_A = Decoder_ps()
decoder_B = Decoder_ps()
x = Input(shape=IMAGE_SHAPE)
netGA = Model(x, decoder_A(encoder(x)))
netGB = Model(x, decoder_B(encoder(x)))
netDA = Discriminator(nc_D_inp)
netDB = Discriminator(nc_D_inp)
```
<a id='6'></a>
# 6. Load Models
```
try:
encoder.load_weights("models/encoder.h5")
decoder_A.load_weights("models/decoder_A.h5")
decoder_B.load_weights("models/decoder_B.h5")
#netDA.load_weights("models/netDA.h5")
#netDB.load_weights("models/netDB.h5")
print ("model loaded.")
except:
print ("Weights file not found.")
pass
```
<a id='7'></a>
# 7. Define Inputs/Outputs Variables
- `distorted_A`: A (batch_size, 64, 64, 3) tensor, input of generator_A (netGA).
- `distorted_B`: A (batch_size, 64, 64, 3) tensor, input of generator_B (netGB).
- `fake_A`: A (batch_size, 64, 64, 3) tensor, output of generator_A (netGA).
- `fake_B`: A (batch_size, 64, 64, 3) tensor, output of generator_B (netGB).
- `mask_A`: A (batch_size, 64, 64, 1) tensor, mask output of generator_A (netGA).
- `mask_B`: A (batch_size, 64, 64, 1) tensor, mask output of generator_B (netGB).
- `path_A`: A function that takes distorted_A as input and outputs fake_A.
- `path_B`: A function that takes distorted_B as input and outputs fake_B.
- `path_mask_A`: A function that takes distorted_A as input and outputs mask_A.
- `path_mask_B`: A function that takes distorted_B as input and outputs mask_B.
- `path_abgr_A`: A function that takes distorted_A as input and outputs concat([mask_A, fake_A]).
- `path_abgr_B`: A function that takes distorted_B as input and outputs concat([mask_B, fake_B]).
- `real_A`: A (batch_size, 64, 64, 3) tensor, target images for generator_A given input distorted_A.
- `real_B`: A (batch_size, 64, 64, 3) tensor, target images for generator_B given input distorted_B.
```
def cycle_variables(netG):
distorted_input = netG.inputs[0]
fake_output = netG.outputs[0]
alpha = Lambda(lambda x: x[:,:,:, :1])(fake_output)
rgb = Lambda(lambda x: x[:,:,:, 1:])(fake_output)
masked_fake_output = alpha * rgb + (1-alpha) * distorted_input
fn_generate = K.function([distorted_input], [masked_fake_output])
fn_mask = K.function([distorted_input], [concatenate([alpha, alpha, alpha])])
fn_abgr = K.function([distorted_input], [concatenate([alpha, rgb])])
return distorted_input, fake_output, alpha, fn_generate, fn_mask, fn_abgr
distorted_A, fake_A, mask_A, path_A, path_mask_A, path_abgr_A = cycle_variables(netGA)
distorted_B, fake_B, mask_B, path_B, path_mask_B, path_abgr_B = cycle_variables(netGB)
real_A = Input(shape=IMAGE_SHAPE)
real_B = Input(shape=IMAGE_SHAPE)
```
<a id='11'></a>
# 11. Helper Function: face_swap()
This function is provided for those who don't have enough VRAM to run dlib's CNN and GAN model at the same time.
Inputs:
- `img`: An RGB face image of any size.
- `path_func`: A function that is either `path_abgr_A` or `path_abgr_B`.

Outputs:
- `result_img`: An RGB swapped face image after masking.
- `result_mask`: A single-channel uint8 mask image.
```
def swap_face(img, path_func):
input_size = img.shape
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) # generator expects BGR input
ae_input = cv2.resize(img, (64,64))/255. * 2 - 1
result = np.squeeze(np.array([path_func([[ae_input]])]))
result_a = result[:,:,0] * 255
result_a = cv2.resize(result_a, (input_size[1],input_size[0]))[...,np.newaxis]
result_bgr = np.clip( (result[:,:,1:] + 1) * 255 / 2, 0, 255)
result_bgr = cv2.resize(result_bgr, (input_size[1],input_size[0]))
result = (result_a/255 * result_bgr + (1 - result_a/255) * img).astype('uint8')
result = cv2.cvtColor(result, cv2.COLOR_BGR2RGB)
result = cv2.resize(result, (input_size[1],input_size[0]))
result_a = np.expand_dims(cv2.resize(result_a, (input_size[1],input_size[0])), axis=2)
return result, result_a
whom2whom = "BtoA" # default trainsforming faceB to faceA
if whom2whom == "AtoB":
path_func = path_abgr_B
elif whom2whom == "BtoA":
path_func = path_abgr_A
else:
print ("whom2whom should be either AtoB or BtoA")
input_img = plt.imread("./IMAGE_FILENAME.jpg")
plt.imshow(input_img)
result_img, result_mask = swap_face(input_img, path_func)
plt.imshow(result_img)
plt.imshow(result_mask[:, :, 0]) # cmap='gray'
```
|
github_jupyter
|
### Privatizing Histograms
Sometimes we want to release the counts of individual outcomes in a dataset.
When plotted, this makes a histogram.
The library currently has two approaches:
1. Known category set `make_count_by_categories`
2. Unknown category set `make_count_by`
The next code block just handles boilerplate: imports, data loading, plotting.
```
import os
from opendp.meas import *
from opendp.mod import enable_features, binary_search_chain, Measurement, Transformation
from opendp.trans import *
from opendp.typing import *
enable_features("contrib")
max_influence = 1
budget = (1., 1e-8)
# public information
col_names = ["age", "sex", "educ", "race", "income", "married"]
data_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')
size = 1000
with open(data_path) as input_data:
data = input_data.read()
def plot_histogram(sensitive_counts, released_counts):
"""Plot a histogram that compares true data against released data"""
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
fig = plt.figure()
ax = fig.add_axes([1,1,1,1])
plt.ylim([0,225])
tick_spacing = 1.
ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
plt.xlim(0,15)
width = .4
ax.bar(list([x+width for x in range(0, len(sensitive_counts))]), sensitive_counts, width=width, label='True Value')
ax.bar(list([x+2*width for x in range(0, len(released_counts))]), released_counts, width=width, label='DP Value')
ax.legend()
plt.title('Histogram of Education Level')
plt.xlabel('Years of Education')
plt.ylabel('Count')
plt.show()
```
### Private histogram via `make_count_by_categories`
This approach is only applicable if the set of potential values that the data may take on is public information.
If this information is not available, then use `make_count_by` instead.
It typically has greater utility than `make_count_by` until the size of the category set becomes greater than the dataset size.
In this data, we know that the category set is public information:
strings consisting of the numbers between 1 and 20.
The counting aggregator computes a vector of counts in the same order as the input categories.
It also includes one extra count at the end of the vector,
consisting of the number of elements that were not members of the category set.
You'll notice that `make_base_geometric` has an additional argument that explicitly sets the type of the domain, `D`.
It defaults to `AllDomain[int]` which works in situations where the mechanism is noising a scalar.
However, in this situation, we are noising a vector of scalars,
and thus the appropriate domain is `VectorDomain[AllDomain[int]]`.
```
# public information
categories = list(map(str, range(1, 20)))
histogram = (
make_split_dataframe(separator=",", col_names=col_names) >>
make_select_column(key="educ", TOA=str) >>
# Compute counts for each of the categories and null
make_count_by_categories(categories=categories)
)
noisy_histogram = binary_search_chain(
lambda s: histogram >> make_base_geometric(scale=s, D=VectorDomain[AllDomain[int]]),
d_in=max_influence, d_out=budget[0])
sensitive_counts = histogram(data)
released_counts = noisy_histogram(data)
print("Educational level counts:\n", sensitive_counts[:-1])
print("DP Educational level counts:\n", released_counts[:-1])
print("DP estimate for the number of records that were not a member of the category set:", released_counts[-1])
plot_histogram(sensitive_counts, released_counts)
```
### Private histogram via `make_count_by`
This approach is applicable when the set of categories is unknown or very large.
Any categories with a noisy count less than a given threshold will be censored from the final release.
The noise scale influences the epsilon parameter of the budget, and the threshold influences the delta parameter in the budget.
`ptr` stands for Propose-Test-Release, a framework for censoring queries for which the local sensitivity is greater than some threshold.
Any category with a count sufficiently small is censored from the release.
It is sometimes referred to as a "stability histogram" because it only releases counts for "stable" categories that exist in all datasets that are considered "neighboring" to your private dataset.
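To make the censoring step concrete, here is a purely illustrative sketch (this is not the OpenDP API; the real censoring happens inside the mechanism): only categories whose noisy count clears the threshold survive into the release.
```
# Illustrative only: keep categories whose noisy count is at least the threshold.
def censor(noisy_counts: dict, threshold: float) -> dict:
    return {k: v for k, v in noisy_counts.items() if v >= threshold}

print(censor({"11": 120.3, "rare_value": 2.1}, threshold=50.0))  # {'11': 120.3}
```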
I start out by defining a function that finds the tightest noise scale and threshold for which the stability histogram is (d_in, d_out)-close.
You may find this useful for your application.
```
def make_base_ptr_budget(
preprocess: Transformation,
d_in, d_out,
TK: RuntimeTypeDescriptor) -> Measurement:
"""Make a stability histogram that respects a given d_in, d_out.
:param preprocess: Transformation
:param d_in: Input distance to satisfy
:param d_out: Output distance to satisfy
:param TK: Type of Key (hashable)
"""
from opendp.mod import binary_search_param
def privatize(s, t=1e8):
return preprocess >> make_base_ptr(scale=s, threshold=t, TK=TK)
s = binary_search_param(lambda s: privatize(s=s), d_in, d_out)
t = binary_search_param(lambda t: privatize(s=s, t=t), d_in, d_out)
return privatize(s=s, t=t)
```
I now use the `make_base_ptr_budget` function defined above to release a private histogram on the education data.
The stability mechanism, as currently written, samples from a continuous noise distribution.
If you haven't already, please read about [floating-point behavior in the docs](https://docs.opendp.org/en/latest/user/measurement-constructors.html#floating-point).
```
from opendp.mod import enable_features
enable_features("floating-point")
preprocess = (
make_split_dataframe(separator=",", col_names=col_names) >>
make_select_column(key="educ", TOA=str) >>
make_count_by(MO=L1Distance[float], TK=str, TV=float)
)
noisy_histogram = make_base_ptr_budget(
preprocess,
d_in=max_influence, d_out=budget,
TK=str)
sensitive_counts = histogram(data)
released_counts = noisy_histogram(data)
# postprocess to make the results easier to compare
postprocessed_counts = {k: round(v) for k, v in released_counts.items()}
print("Educational level counts:\n", sensitive_counts)
print("DP Educational level counts:\n", postprocessed_counts)
def as_array(data):
return [data.get(k, 0) for k in categories]
plot_histogram(sensitive_counts, as_array(released_counts))
```
|
github_jupyter
|
## Set up the dependencies
```
# for reading and validating data
import emeval.input.spec_details as eisd
import emeval.input.phone_view as eipv
import emeval.input.eval_view as eiev
# Visualization helpers
import emeval.viz.phone_view as ezpv
import emeval.viz.eval_view as ezev
# Metrics helpers
import emeval.metrics.segmentation as ems
# For plots
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle
%matplotlib inline
# For maps
import folium
import branca.element as bre
# For easier debugging while working on modules
import importlib
import pandas as pd
import numpy as np
pd.options.display.float_format = '{:.6f}'.format
import arrow
THIRTY_MINUTES = 30 * 60
TIME_THRESHOLD = THIRTY_MINUTES
importlib.reload(ems)
```
## The spec
The spec defines what experiments were done, and over which time ranges. Once the experiment is complete, most of the structure is read back from the data, but we use the spec to validate that it all worked correctly. The spec also contains the ground truth for the legs. Here, we read the specs for the three timelines (LA, San Jose, and UC Berkeley).
```
DATASTORE_URL = "http://cardshark.cs.berkeley.edu"
AUTHOR_EMAIL = "[email protected]"
sd_la = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "unimodal_trip_car_bike_mtv_la")
sd_sj = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "car_scooter_brex_san_jose")
sd_ucb = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "train_bus_ebike_mtv_ucb")
```
## The views
There are two main views for the data - the phone view and the evaluation view.
### Phone view
In the phone view, the phone is primary, and then there is a tree that you can traverse to get the data that you want. Traversing that tree typically involves nested for loops; here's an example of loading the phone view and traversing it. You can replace the print statements with real code. When you are ready to check this in, please move the function to one of the Python modules so that we can invoke it more generally.
```
importlib.reload(eipv)
pv_la = eipv.PhoneView(sd_la)
pv_sj = eipv.PhoneView(sd_sj)
pv_ucb = eipv.PhoneView(sd_ucb)
```
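As a minimal sketch of the nested-loop traversal described above (using keys such as `role`, `evaluation_ranges` and `trip_id` that appear later in this notebook), the tree can be walked like this; the print statements are placeholders for real analysis code.
```
# Walk the phone view tree: OS -> phone label -> evaluation ranges
for phone_os, phone_map in pv_la.map().items():
    for phone_label, phone_detail_map in phone_map.items():
        print(phone_os, phone_label, phone_detail_map["role"])
        for r in phone_detail_map["evaluation_ranges"]:
            print("  ", r["trip_id"])
```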
## Number of detected trips versus ground truth trips
Checks to see how many spurious transitions there were
```
importlib.reload(ems)
ems.fill_sensed_trip_ranges(pv_la)
ems.fill_sensed_trip_ranges(pv_sj)
ems.fill_sensed_trip_ranges(pv_ucb)
pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][4]["trip_id"]
```
### Start and end times mismatch
```
curr_run = pv_la.map()["android"]["ucb-sdb-android-2"]["evaluation_ranges"][0]
print(curr_run.keys())
ems.find_matching_segments(curr_run["evaluation_trip_ranges"], "trip_id", curr_run["sensed_trip_ranges"])
[1,2,3][1:2]
def get_tradeoff_entries(pv):
tradeoff_entry_list = []
for phone_os, phone_map in pv.map().items():
print(15 * "=*")
print(phone_os, phone_map.keys())
for phone_label, phone_detail_map in phone_map.items():
print(4 * ' ', 15 * "-*")
print(4 * ' ', phone_label, phone_detail_map.keys())
if "control" in phone_detail_map["role"]:
print("Ignoring %s phone %s since they are always on" % (phone_detail_map["role"], phone_label))
continue
# this spec does not have any calibration ranges, but evaluation ranges are actually cooler
for r in phone_detail_map["evaluation_ranges"]:
print(8 * ' ', 30 * "=")
print(8 * ' ',r.keys())
print(8 * ' ',r["trip_id"], r["eval_common_trip_id"], r["eval_role"], len(r["evaluation_trip_ranges"]))
bcs = r["battery_df"]["battery_level_pct"]
delta_battery = bcs.iloc[0] - bcs.iloc[-1]
print("Battery starts at %d, ends at %d, drain = %d" % (bcs.iloc[0], bcs.iloc[-1], delta_battery))
sensed_trips = len(r["sensed_trip_ranges"])
visit_reports = len(r["visit_sensed_trip_ranges"])
matching_trip_map = ems.find_matching_segments(r["evaluation_trip_ranges"], "trip_id", r["sensed_trip_ranges"])
print(matching_trip_map)
for trip in r["evaluation_trip_ranges"]:
sensed_trip_range = matching_trip_map[trip["trip_id"]]
results = ems.get_count_start_end_ts_diff(trip, sensed_trip_range)
print("Got results %s" % results)
tradeoff_entry = {"phone_os": phone_os, "phone_label": phone_label,
"timeline": pv.spec_details.curr_spec["id"],
"range_id": r["trip_id"],
"run": r["trip_run"], "duration": r["duration"],
"role": r["eval_role_base"], "battery_drain": delta_battery,
"trip_count": sensed_trips, "visit_reports": visit_reports,
"trip_id": trip["trip_id"]}
tradeoff_entry.update(results)
tradeoff_entry_list.append(tradeoff_entry)
return tradeoff_entry_list
# We are not going to look at battery life at the evaluation trip level; we will end with evaluation range
# since we want to capture the overall drain for the timeline
tradeoff_entries_list = []
tradeoff_entries_list.extend(get_tradeoff_entries(pv_la))
tradeoff_entries_list.extend(get_tradeoff_entries(pv_sj))
tradeoff_entries_list.extend(get_tradeoff_entries(pv_ucb))
tradeoff_df = pd.DataFrame(tradeoff_entries_list)
r2q_map = {"power_control": 0, "HAMFDC": 1, "MAHFDC": 2, "HAHFDC": 3, "accuracy_control": 4}
q2r_map = {0: "power", 1: "HAMFDC", 2: "MAHFDC", 3: "HAHFDC", 4: "accuracy"}
# Make a number so that we can get the plots to come out in order
tradeoff_df["quality"] = tradeoff_df.role.apply(lambda r: r2q_map[r])
tradeoff_df["count_diff"] = tradeoff_df[["count"]] - 1
import itertools
```
## Trip count analysis
### Scatter plot
```
ifig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,4))
errorboxes = []
for key, df in tradeoff_df.query("phone_os == 'android'").groupby(["role", "timeline"]):
print(key, df)
tcd = df.trip_count
bd = df.battery_drain
print("Plotting rect with params %s, %d, %d" % (str((tcd.min(), bd.min())),
tcd.max() - tcd.min(),
bd.max() - bd.min()))
print(tcd.min(), tcd.max(), tcd.std())
xerror = np.array([[tcd.min(), tcd.max()]])
print(xerror.shape)
ax.errorbar(x=tcd.mean(), y=bd.mean(), xerr=[[tcd.min()], [tcd.max()]], yerr=[[bd.min()], [bd.max()]], label=key)
plt.legend()
```
### Timeline + trip specific variation
How many sensed trips matched to each ground truth trip?
```
ifig, ax_array = plt.subplots(nrows=2,ncols=3,figsize=(9,6), sharex=False, sharey=True)
timeline_list = ["train_bus_ebike_mtv_ucb", "car_scooter_brex_san_jose", "unimodal_trip_car_bike_mtv_la"]
for i, tl in enumerate(timeline_list):
tradeoff_df.query("timeline == @tl & phone_os == 'android'").boxplot(ax = ax_array[0][i], column=["count_diff"], by=["quality"])
ax_array[0][i].set_title(tl)
tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[1][i], column=["count_diff"], by=["quality"])
ax_array[1][i].set_title("")
# tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[2][i], column=["visit_reports"], by=["quality"])
# ax_array[2][i].set_title("")
# print(android_ax_returned.shape, ios_ax_returned.shape)
for i, ax in enumerate(ax_array[0]):
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
for i, ax in enumerate(ax_array[1]):
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
# for ax in ax_array[1]:
# ax.set_xticklabels(q2r_ios_list[1:])
# ax.set_xlabel("")
# for ax in ax_array[2]:
# ax.set_xticklabels(q2r_ios_list[1:])
# ax.set_xlabel("")
ax_array[0][0].set_ylabel("Difference in trip counts (android)")
ax_array[1][0].set_ylabel("Difference in trip counts (ios)")
# ax_array[2][0].set_ylabel("Difference in visit reports (ios)")
ifig.suptitle("Trip count differences v/s configured quality over multiple timelines")
# ifig.tight_layout()
```
### Timeline specific variation
```
def plot_count_with_errors(ax_array, phone_os):
for i, (tl, trip_gt) in enumerate(timeline_trip_gt.items()):
ax_array[i].bar(0, trip_gt)
for q in range(1,4):
curr_df = tradeoff_df.query("timeline == @tl & phone_os == @phone_os & quality == @q")
print("%s %s %s values = %s %s %s" % (phone_os, tl, q2r_map[q], curr_df.trip_count.min(), curr_df.trip_count.mean(), curr_df.trip_count.max()))
lower_error = curr_df.trip_count.mean() - curr_df.trip_count.min()
upper_error = curr_df.trip_count.max() - curr_df.trip_count.mean()
ax_array[i].bar(x=q, height=curr_df.trip_count.mean(),
yerr=[[lower_error], [upper_error]])
print("%s %s %s errors = %s %s %s" % (phone_os, tl, q2r_map[q], lower_error, curr_df.trip_count.mean(), upper_error))
ax_array[i].set_title(tl)
ifig, ax_array = plt.subplots(nrows=2,ncols=3,figsize=(10,5), sharex=False, sharey=True)
timeline_trip_gt = {"train_bus_ebike_mtv_ucb": 3,
"car_scooter_brex_san_jose": 2,
"unimodal_trip_car_bike_mtv_la": 2}
plot_count_with_errors(ax_array[0], "android")
plot_count_with_errors(ax_array[1], "ios")
for ax in ax_array[0]:
ax.set_xticks(range(0,4))
ax.set_xticklabels(["truth"] + [q2r_map[r] for r in range(1,4)])
ax.set_yticks(range(0,tradeoff_df.trip_count.max(),3))
for ax in ax_array[1]:
ax.set_xticks(range(0,4))
ax.set_xticklabels(["truth"] + [q2r_map[r] for r in range(1,4)])
ax.set_yticks(range(0,tradeoff_df.trip_count.max(),3))
ax_array[0,0].set_ylabel("nTrips (android)")
ax_array[1,0].set_ylabel("nTrips (ios)")
ifig.tight_layout(pad=0.85)
```
## Start end results
```
for r, df in tradeoff_df.query("timeline == @tl & phone_os == 'android'").groupby("role"):
print(r, df.trip_count.mean() , df.trip_count.min(), df.trip_count.max())
```
The HAHFDC phone ran out of battery on all three runs of the `train_bus_ebike_mtv_ucb` timeline, so the trips never ended. Let's remove those so that they don't obfuscate the values from the other runs.
```
out_of_battery_phones = tradeoff_df.query("timeline=='train_bus_ebike_mtv_ucb' & role=='HAHFDC' & trip_id=='berkeley_to_mtv_SF_express_bus_0' & phone_os == 'android'")
for i in out_of_battery_phones.index:
tradeoff_df.loc[i,"end_diff_mins"] = float('nan')
tradeoff_df.query("timeline=='train_bus_ebike_mtv_ucb' & role=='HAHFDC' & trip_id=='berkeley_to_mtv_SF_express_bus_0' & phone_os == 'android'")
```
### Overall results
```
ifig, ax_array = plt.subplots(nrows=1,ncols=4,figsize=(16,4), sharex=False, sharey=True)
tradeoff_df.query("phone_os == 'android'").boxplot(ax = ax_array[0], column=["start_diff_mins"], by=["quality"])
ax_array[0].set_title("start time (android)")
tradeoff_df.query("phone_os == 'android'").boxplot(ax = ax_array[1], column=["end_diff_mins"], by=["quality"])
ax_array[1].set_title("end time (android)")
tradeoff_df.query("phone_os == 'ios'").boxplot(ax = ax_array[2], column=["start_diff_mins"], by=["quality"])
ax_array[2].set_title("start_time (ios)")
tradeoff_df.query("phone_os == 'ios'").boxplot(ax = ax_array[3], column=["end_diff_mins"], by=["quality"])
ax_array[3].set_title("end_time (ios)")
# print(android_ax_returned.shape, ios_ax_returned.shape)
ax_array[0].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[0].get_xticklabels()])
ax_array[1].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[1].get_xticklabels()])
ax_array[2].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[2].get_xticklabels()])
ax_array[3].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[3].get_xticklabels()])
for ax in ax_array:
ax.set_xlabel("")
ax_array[1].text(0.55,25,"Excluding trips where battery ran out")
ax_array[0].set_ylabel("Diff (mins)")
ifig.suptitle("Trip start end accuracy v/s configured quality")
ifig.tight_layout(pad=1.7)
```
### Timeline specific
```
ifig, ax_array = plt.subplots(nrows=4,ncols=3,figsize=(10,10), sharex=False, sharey=True)
timeline_list = ["train_bus_ebike_mtv_ucb", "car_scooter_brex_san_jose", "unimodal_trip_car_bike_mtv_la"]
for i, tl in enumerate(timeline_list):
tradeoff_df.query("timeline == @tl & phone_os == 'android'").boxplot(ax = ax_array[0][i], column=["start_diff_mins"], by=["quality"])
ax_array[0][i].set_title(tl)
tradeoff_df.query("timeline == @tl & phone_os == 'android'").boxplot(ax = ax_array[1][i], column=["end_diff_mins"], by=["quality"])
ax_array[1][i].set_title("")
tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[2][i], column=["start_diff_mins"], by=["quality"])
ax_array[2][i].set_title("")
tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[3][i], column=["end_diff_mins"], by=["quality"])
ax_array[3][i].set_title("")
# print(android_ax_returned.shape, ios_ax_returned.shape)
for ax in ax_array[0]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
for ax in ax_array[1]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
ax_array[1,0].text(0.55,25,"Excluding trips where battery ran out")
for ax in ax_array[2]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
for ax in ax_array[3]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
ax_array[0][0].set_ylabel("Start time diff (android)")
ax_array[1][0].set_ylabel("End time diff (android)")
ax_array[2][0].set_ylabel("Start time diff (ios)")
ax_array[3][0].set_ylabel("End time diff (ios)")
ifig.suptitle("Trip start end accuracy (mins) v/s configured quality over multiple timelines")
# ifig.tight_layout(pad=2.5)
```
## Outlier checks
We can have unexpected values for both time and count. Unfortunately, there is no overlap between the two (the intersection is zero), so we will look at a random sample from each case.
```
expected_legs = "&".join(["not (trip_id == 'bus trip with e-scooter access_0' & count == 2)",
"not (trip_id == 'mtv_to_berkeley_sf_bart_0' & count == 3)"])
count_outliers = tradeoff_df.query("count > 1 & %s" % expected_legs)
count_outliers[["phone_os", "range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]].head()
tradeoff_df.query("count < 1 & role == 'HAHFDC'")
time_outliers = tradeoff_df.query("start_diff_mins == 30 | end_diff_mins == 30")
time_outliers[["phone_os", "range_id", "trip_id", "run", "role", "start_diff_mins", "end_diff_mins"]].head()
print(len(time_outliers.index.union(count_outliers.index)), len(time_outliers.index.intersection(count_outliers.index)))
time_outliers.sample(n=3, random_state=1)[["phone_os", "range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]]
count_outliers.sample(n=3, random_state=1)[["phone_os", "range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]]
fmt = lambda ts: arrow.get(ts).to("America/Los_Angeles")
def check_outlier(eval_range, trip_idx, mismatch_key):
eval_trip_range = eval_range["evaluation_trip_ranges"][trip_idx]
print("Trip %s, ground truth experiment for metric %s, %s, trip %s" % (eval_range["trip_id"], mismatch_key, fmt(eval_range[mismatch_key]), fmt(eval_trip_range[mismatch_key])))
print(eval_trip_range["transition_df"][["transition", "fmt_time"]])
print("**** For entire experiment ***")
print(eval_range["transition_df"][["transition", "fmt_time"]])
if mismatch_key == "end_ts":
# print("Transitions after trip end")
# print(eval_range["transition_df"].query("ts > %s" % eval_trip_range["end_ts"])[["transition", "fmt_time"]])
return ezpv.display_map_detail_from_df(eval_trip_range["location_df"])
else:
return ezpv.display_map_detail_from_df(eval_trip_range["location_df"])
```
##### MAHFDC is just terrible
It looks like with MAHFDC, we essentially get no trip ends on android. Let's investigate these a bit further.
- run 0: trip never ended: trip actually ended just before next trip started `15:01:26`. And then next trip had geofence exit, but we didn't detect it because it never ended, so we didn't create a sensed range for it.
- run 1: trip ended but after 30 mins: similar behavior; trip ended just before next trip started `15:49:39`.
```
tradeoff_df.query("phone_os == 'android' & role == 'MAHFDC' & timeline == 'car_scooter_brex_san_jose'")[["range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]]
FMT_STRING = "HH:mm:SS"
for t in pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][3]["evaluation_trip_ranges"]:
print(sd_sj.fmt(t["start_ts"], FMT_STRING), "->", sd_sj.fmt(t["end_ts"], FMT_STRING))
pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][3]["transition_df"]
FMT_STRING = "HH:mm:SS"
for t in pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][4]["evaluation_trip_ranges"]:
print(sd_sj.fmt(t["start_ts"], FMT_STRING), "->", sd_sj.fmt(t["end_ts"], FMT_STRING))
pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][4]["transition_df"]
```
##### Visit detection kicked in almost at the end of the trip
```
# 44 ios suburb_city_driving_weekend_0 1 HAMFDC 0 30.000000 30.000000
check_outlier(pv_la.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][4], 0, "start_ts")
```
##### Trip end never detected
Trip ended at 14:11, experiment ended at 14:45. No stopped_moving for the last trip.
```
# 65 android bus trip with e-scooter access_0 2 HAMFDC 1 3.632239 30.000000
check_outlier(pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][2], 1, "end_ts")
```
##### Trip end detection errors on iOS
Original experiment, explanation for the outliers on the HAHFDC and MAHFDC first runs to San Jose
- HAHFDC: Trip end detected 1.5 hours after real end, but before next trip start
- MAHFDC: Trip end detected 5 hours after real end, at the end of the next trip
- MAHFDC: Clearly this was not even detected as a separate trip, so this is correct. There was a spurious trip from `17:42:22` - `17:44:22` which ended up matching this. But clearly because of the missing trip end detection, both the previous trip and this one were incorrect. You can click on the points at the Mountain View library to confirm when the trip ended.
```
fig = bre.Figure()
fig.add_subplot(1,3,1).add_child(check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][0], 0, "end_ts"))
fig.add_subplot(1,3,2).add_child(check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][0], 0, "end_ts"))
fig.add_subplot(1,3,3).add_child(check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][0], 1, "start_ts"))
# check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][0], 0, "end_ts")
```
##### No geofence exit ever detected
On the middle trip of the second round of data collection to the San Jose library, we got no geofence exits. The entire list of transitions is
```
transition fmt_time
3 T_VISIT_ENDED 2019-08-06T11:29:20.573817-07:00
6 T_VISIT_STARTED 2019-08-06T11:29:20.911773-07:00
8 T_VISIT_ENDED 2019-08-06T11:35:38.250980-07:00
9 T_VISIT_STARTED 2019-08-06T12:00:05.445936-07:00
12 T_TRIP_ENDED 2019-08-06T12:00:07.093790-07:00
15 T_VISIT_ENDED 2019-08-06T15:59:13.998068-07:00
18 T_VISIT_STARTED 2019-08-06T17:12:38.808743-07:00
21 T_TRIP_ENDED 2019-08-06T17:12:40.504285-07:00
```
We did get visit notifications, so we did track location points (albeit after a long time), and we did get the trip end notifications, but we have no sensed trips. We had to handle this in the code as well.
```
check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][4], 0, "start_ts")
```
##### No geofence exit ever detected (continued)
This is the same evaluation range as above; here we check the end of the other trip in that range. Again, we got visit notifications and trip end notifications, but no sensed trips.
```
# 81 ios bus trip with e-scooter access_0 1 HAHFDC 0 30.000000 30.000000
check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][4], 1, "end_ts")
```
### 7 mapped trips for one
This is essentially from the time that I wandered around looking for the bikeshare bike. This raises the question of whether I should filter out the points within the polygon in this case too. Overall, I think not. The only part within the polygon that we don't guarantee is the ground truth trajectory. We still do have the ground truth of the trip/section start and end, and there really is no reason why we should have had so many "trips" when I was walking around. I certainly didn't wait for too long while walking and this was not semantically a "trip" by any stretch of the imagination.
```
# 113 android berkeley_to_mtv_SF_express_bus_0 2 HAMFDC 7 2.528077 3.356611
check_outlier(pv_ucb.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][2], 2, "end_ts")
```
### Trip split into two in medium accuracy *only*
Actual trip ends at `14:21`. In medium accuracy, detected trips were `14:12:15 -> 14:17:33` and `14:22:14 -> 14:24:15`. This was after we reached the destination, but there is a large gap because we basically got no points for a large part of the trip. This seems correct - it looks like iOS is just prematurely detecting the trip end in the MA case.
```
# 127 ios walk_urban_university_0 1 MAHFDC 2 4.002549 2.352913
fig = bre.Figure()
def compare_med_high_accuracy():
trip_idx = 1
mismatch_key = "end_ts"
ha_range = pv_ucb.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][1]
ha_trip_range = ha_range["evaluation_trip_ranges"][trip_idx]
eval_range = pv_ucb.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][1]
eval_trip_range = eval_range["evaluation_trip_ranges"][trip_idx]
print("Trip %s, ground truth experiment for metric %s, %s, trip %s, high accuracy %s" %
(eval_range["trip_id"], mismatch_key,
fmt(eval_range[mismatch_key]), fmt(eval_trip_range[mismatch_key]), fmt(ha_trip_range[mismatch_key])))
print(eval_trip_range["transition_df"][["transition", "fmt_time"]])
print("**** Expanded ***")
print(eval_range["transition_df"].query("%s < ts < %s" %
((eval_trip_range["end_ts"] - 30*60), (eval_trip_range["end_ts"] + 30*60)))[["transition", "fmt_time"]])
fig = bre.Figure()
fig.add_subplot(1,2,1).add_child(ezpv.display_map_detail_from_df(ha_trip_range["location_df"]))
fig.add_subplot(1,2,2).add_child(ezpv.display_map_detail_from_df(eval_trip_range["location_df"]))
return fig
compare_med_high_accuracy()
[{'start_ts': fmt(1564089135.368705), 'end_ts': fmt(1564089453.8783798)},
{'start_ts': fmt(1564089734.305933), 'end_ts': fmt(1564089855.8683748)}]
```
### We just didn't detect any trip ends in the middle
We only detected a trip end at the Mountain View station. This is arguably more correct than the multiple trips that we get with a dwell time.
```
# 120 ios mtv_to_berkeley_sf_bart_0 2 HAHFDC 2 3.175024 1.046759
check_outlier(pv_ucb.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][2], 0, "end_ts")
```
|
github_jupyter
|
# The art of using pipelines
Pipelines are a natural way to think about a machine learning system. Indeed with some practice a data scientist can visualise data "flowing" through a series of steps. The input is typically some raw data which has to be processed in some manner. The goal is to represent the data in such a way that it can be ingested by a machine learning algorithm. Along the way some steps will extract features, while others will normalize the data and remove undesirable elements. Pipelines are simple, and yet they are a powerful way of designing sophisticated machine learning systems.
Both [scikit-learn](https://stackoverflow.com/questions/33091376/python-what-is-exactly-sklearn-pipeline-pipeline) and [pandas](https://tomaugspurger.github.io/method-chaining) make it possible to use pipelines. However it's quite rare to see pipelines being used in practice (at least on Kaggle). Sometimes you get to see people using scikit-learn's `pipeline` module, however the `pipe` method from `pandas` is sadly underappreciated. A big reason why pipelines are not given much love is that it's easier to think of batch learning in terms of a script or a notebook. Indeed many people doing data science seem to prefer a procedural style to a declarative style. Moreover in practice pipelines can be a bit rigid if one wishes to do non-orthodox operations.
Although pipelines may be a bit of an odd fit for batch learning, they make complete sense when they are used for online learning. Indeed the UNIX philosophy has advocated the use of pipelines for data processing for many decades. If you can visualise data as a stream of observations then using pipelines should make a lot of sense to you. We'll attempt to convince you by writing a machine learning algorithm in a procedural way and then converting it to a declarative pipeline in small steps. Hopefully by the end you'll be convinced, or not!
In this notebook we'll manipulate data from the [Kaggle Recruit Restaurants Visitor Forecasting competition](https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting). The data is directly available through `river`'s `datasets` module.
```
from pprint import pprint
from river import datasets
for x, y in datasets.Restaurants():
pprint(x)
pprint(y)
break
```
We'll start by building and running a model using a procedural coding style. The performance of the model doesn't matter, we're simply interested in the design of the model.
```
from river import feature_extraction
from river import linear_model
from river import metrics
from river import preprocessing
from river import stats
means = (
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
scaler = preprocessing.StandardScaler()
lin_reg = linear_model.LinearRegression()
metric = metrics.MAE()
for x, y in datasets.Restaurants():
# Derive date features
x['weekday'] = x['date'].weekday()
x['is_weekend'] = x['date'].weekday() in (5, 6)
# Process the rolling means of the target
for mean in means:
x = {**x, **mean.transform_one(x)}
mean.learn_one(x, y)
# Remove the key/value pairs that aren't features
for key in ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']:
x.pop(key)
# Rescale the data
x = scaler.learn_one(x).transform_one(x)
# Fit the linear regression
y_pred = lin_reg.predict_one(x)
lin_reg.learn_one(x, y)
# Update the metric using the out-of-fold prediction
metric.update(y, y_pred)
print(metric)
```
We're not using many features. We can print the last `x` to get an idea of the features (don't forget they've been scaled!)
```
pprint(x)
```
The above chunk of code is quite explicit but it's a bit verbose. The whole point of libraries such as `river` is to make life easier for users. Moreover there's too much space for users to mess up the order in which things are done, which increases the chance of there being target leakage. We'll now rewrite our model in a declarative fashion using a pipeline *à la sklearn*.
```
from river import compose
def get_date_features(x):
weekday = x['date'].weekday()
return {'weekday': weekday, 'is_weekend': weekday in (5, 6)}
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
metric = metrics.MAE()
for x, y in datasets.Restaurants():
# Make a prediction without using the target
y_pred = model.predict_one(x)
# Update the model using the target
model.learn_one(x, y)
# Update the metric using the out-of-fold prediction
metric.update(y, y_pred)
print(metric)
```
We use a `Pipeline` to arrange each step in a sequential order. A `TransformerUnion` is used to merge multiple feature extractors into a single transformer. The `for` loop is now much shorter and is thus easier to grok: we get the out-of-fold prediction, we fit the model, and finally we update the metric. This way of evaluating a model is typical of online learning, and so it has been wrapped inside a function called `progressive_val_score`, which is part of the `evaluate` module. We can use it to replace the `for` loop.
```
from river import evaluate
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
evaluate.progressive_val_score(dataset=datasets.Restaurants(), model=model, metric=metrics.MAE())
```
Notice that you couldn't have used the `progressive_val_score` method if you wrote the model in a procedural manner.
Our code is getting shorter, but it's still a bit difficult on the eyes. Indeed there is a lot of boilerplate code associated with pipelines that can get tedious to write. However `river` has some special tricks up its sleeve to save you from a lot of pain.
The first trick is that the name of each step in the pipeline can be omitted. If no name is given for a step then `river` automatically infers one.
```
model = compose.Pipeline(
compose.TransformerUnion(
compose.FuncTransformer(get_date_features),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Under the hood a `Pipeline` inherits from `collections.OrderedDict`. Indeed this makes sense because if you think about it a `Pipeline` is simply a sequence of steps where each step has a name. The reason we mention this is because it means you can manipulate a `Pipeline` the same way you would manipulate an ordinary `dict`. For instance we can print the name of each step by iterating over `model.steps`.
```
for name in model.steps:
print(name)
```
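To make the dict-like behaviour concrete, here is a small sketch; it assumes `model.steps` behaves like the ordered mapping of step names to steps described above.
```
# Treat the steps like an ordinary (ordered) dict: the keys are the step names.
step_names = list(model.steps)
print(step_names)                         # all step names, in order
print(type(model.steps[step_names[-1]]))  # the final step (the linear regression)
```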
The first step is a `TransformerUnion`, and its string representation contains the string representation of each of its elements. Not having to write names saves some time and space and is certainly less tedious.
The next trick is that we can use mathematical operators to compose our pipeline. For example we can use the `+` operator to merge `Transformer`s into a `TransformerUnion`.
```
model = compose.Pipeline(
compose.FuncTransformer(get_date_features) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Likewise we can use the `|` operator to assemble steps into a `Pipeline`.
```
model = (
compose.FuncTransformer(get_date_features) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
to_discard = ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']
model = model | compose.Discard(*to_discard) | preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Hopefully you'll agree that this is a powerful way to express machine learning pipelines. For some people this should be quite reminiscent of the UNIX pipe operator. One final trick we want to mention is that functions are automatically wrapped with a `FuncTransformer`, which can be quite handy.
```
model = get_date_features
for n in [7, 14, 21]:
model += feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(n))
model |= compose.Discard(*to_discard)
model |= preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Naturally some may prefer the procedural style we first used because they find it easier to work with. It all depends on your style and you should use what you feel comfortable with. However we encourage you to use operators because we believe that this will increase the readability of your code, which is very important. To each their own!
Before finishing we can take an interactive look at our pipeline.
```
model
```
|
github_jupyter
|
# Explore the classification results
This notebook will guide you through different visualizations of the test set evaluation of any of the presented models.
In a first step you can select the result file of any of the models you want to explore.
```
model = 'vgg_results_sample.csv' #should be placed in the /eval/ folder
```
Then we will import some packages and set up some basic variables.
```
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from skimage.io import imread
from sklearn.metrics import accuracy_score, confusion_matrix
from utils import plot_confusion_matrix, class_names, plots
%matplotlib inline
# get the path of the data and eval folder
root_dir = os.path.abspath(os.path.join(sys.path[0], os.pardir))
eval_dir = os.path.join(root_dir, 'eval')
data_dir = os.path.join(root_dir, 'data')
```
**Next** we will load the evaluation results and compute different **accuracy** values.
1. The per-image accuracy. This should not be confused with the per-fish accuracy. The per-image accuracy means how many of the samples (image + corresponding features) in the test set were predicted with the correct class.
2. The per-fish accuracy if the results of the different images of one fish are combined by the `mode` of the single-image predictions.
3. The per-fish accuracy if the final prediction is derived from the class with the highest overall probability when we sum up all class probabilities of the single images.
To get the different per-fish accuracies we can group the data by `video_name` and `label`, because each combination of `video_name` and `label` stands for one individual fish.
The columns `prediction_max_prob` and `prediction_mode` contain the predicted class label for each individual fish derived through the methods stated above.
```
#load the evaluation data
df = pd.read_csv(os.path.join(eval_dir, model), sep=';')
#compute the per-image accuracy
acc_per_img = accuracy_score(df['label'], df['pred'])
# get the per-fish results by grouping the dataframe and taking first entry of each group as the columns for the
# per-fish prediction are the same for all images of one fish
fish = df.groupby(['video_name', 'label']).first().reset_index()
# calculate the per-fish accuracies
acc_fish_mode = accuracy_score(fish['label'], fish['prediction_mode'])
acc_fish_max_prob = accuracy_score(fish['label'], fish['prediction_max_prob'])
# print the results
print("From a total of %i images, %.2f percent of the images were classified correctly." %(len(df), acc_per_img*100))
print("If combined by the mode of %i fish individuals %.2f were classified correctly." %(len(fish), acc_fish_mode*100))
print("If derived from the max probability of the summed class probabilities, out of %i fish %.2f were classified correctly." %(len(fish), acc_fish_max_prob*100))
```
As we are interested in the per-fish accuracy, we can see that combining the classification results of many images of one fish helps to raise the overall prediction accuracy. Of the two methods, this is best done through the `prediction_max_prob` method.
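For illustration, the `prediction_max_prob` column could in principle be recomputed from the per-image scores. The sketch below assumes the per-image class probabilities live in columns named `score0` ... `score6` (the naming used further down in this notebook) and that the class labels are the integers 0-6.
```
# Sketch only: sum the per-image class scores for each fish and take the argmax.
score_cols = ['score' + str(i) for i in range(7)]  # 7 classes assumed
summed_scores = df.groupby(['video_name', 'label'])[score_cols].sum()
per_fish_pred = summed_scores.values.argmax(axis=1)
print(per_fish_pred[:10])
```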
**Next** we will display the **confusion matrix**. The confusion matrix displays the true class vs. the predicted class. This might help to understand which classes the model can separate and between which it makes the most mispredictions.
By changing the first line, you can select which confusion matrix should be displayed. You can choose between ['per-img', 'mode', 'max_prob'], referring to the 3 different accuracies from above.
```
method = 'max_prob' # chose one of ['per-img', 'mode', 'max_prob']
#compute confusion matrix
if method == 'per-img':
cm = confusion_matrix(df['label'], df['pred'])
elif method == 'mode':
cm = confusion_matrix(fish['label'], fish['prediction_mode'])
elif method == 'max_prob':
cm = confusion_matrix(fish['label'], fish['prediction_max_prob'])
else:
raise ValueError("Select a valid method. Must be one of ['per-img', 'mode', 'max_prob']")
#plot confusion matrix
plot_confusion_matrix(cm, [n.split(',')[1] for n in class_names])
```
You will see for all of the combinations you can select (4 models and 3 different methods) that the models make most of their misclassifications between similar-looking (at least somehow) fish species. For example, you can see that most misclassifications of brown trouts were made with rainbow trouts (and vice versa), and the same for chub and common nase. From this we can conclude that the models understand the content of the images/features they are confronted with.
**Next** we will look at some random images of each class where the model was absolutely sure about its prediction and the predicted class was indeed the true class.
```
#number of total classes and images per class we will plot
num_classes = 7
n_view = 4
#iterate over each class and find the images the model assigned the highest class probability to (to the true class)
for i in range(num_classes):
corrects = np.where((df['label'] == i) & (df['pred'] == i))
df_corrects = df.loc[corrects]
df_corrects.sort_values('score'+str(i), inplace = True, ascending = False)
#print number of correct images per class
print("Found %i correct %s" %(len(df_corrects), class_names[i]))
#plot images
plots(df_corrects, df_corrects.index[:n_view], data_dir)
```
To some degree, these images should make sense. Most of them show specific characteristics of each species that are enough to classify them correctly. Some might not make much sense for us as human observers but might make sense to the model.
E.g. we had plenty of rainbow trouts in turbid water, so the model seems to have learned that a salmonid-looking fish in turbid water is more likely to be a rainbow than a brown trout.
**Next** we can visualize the opposite: the images of each class that the model was most uncertain about.
```
for i in range(num_classes):
fish = df.loc[df['label'] == i]
fish.is_copy = False
fish.sort_values('score'+str(i), inplace = True, ascending = True)
plots(fish, fish.index[:n_view], data_dir)
```
These results may make you wonder why, for some of these images, the model predicted the wrong class although the fish can be clearly seen. To be honest, I don't know the answer and could only guess for some of the cases. I think one key point could be the size of the dataset, and one could expect that a bigger sample size per class would yield better results.
On the other hand, one can also see many images of bad quality, turbid water etc., where even for us humans it might be hard to classify the fish from this one single image.
Also note that these are per-image prediction results. A lot of these single-image misclassifications don't influence the per-fish predictions.
|
github_jupyter
|
#### Setup
```
# standard imports
import numpy as np
import torch
import matplotlib.pyplot as plt
from torch import optim
from ipdb import set_trace
from datetime import datetime
# jupyter setup
%matplotlib inline
%load_ext autoreload
%autoreload 2
# own modules
from dataloader import CAL_Dataset
from net import get_model
from dataloader import get_data, get_mini_data, load_json, save_json
from train import fit, custom_loss, validate
from metrics import calc_metrics
# paths
data_path = './dataset/'
```
uncomment the cell below if you want your experiments to always yield the same results
```
# manualSeed = 42
# np.random.seed(manualSeed)
# torch.manual_seed(manualSeed)
# # if you are using GPU
# torch.cuda.manual_seed(manualSeed)
# torch.cuda.manual_seed_all(manualSeed)
# torch.backends.cudnn.enabled = False
# torch.backends.cudnn.benchmark = False
# torch.backends.cudnn.deterministic = True
```
#### Training
Initialize the model. Possible values for the task block type: MLP, LSTM, GRU, TempConv.
```
params = {'name': 'tempconv', 'type_': 'TempConv', 'lr': 3e-4, 'n_h': 128, 'p':0.5, 'seq_len':5}
save_json(params, f"models/{params['name']}")
model, opt = get_model(params)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```
get the data loaders. `get_mini_data` returns only a subset of the training data, which we can use to check whether the model is able to overfit
```
train_dl, valid_dl = get_data(data_path, model.params.seq_len, batch_size=16)
# train_dl, valid_dl = get_mini_data(data_path, model.params.seq_len, batch_size=16, l=4000)
```
Train the model. We automatically save the model with the lowest val_loss. If you want to continue the training and keep the loss history, just pass it as an additional argument as shown below.
```
model, val_hist = fit(1, model, custom_loss, opt, train_dl, valid_dl)
# model, val_hist = fit(1, model, custom_loss, opt, train_dl, valid_dl, val_hist=val_hist)
```
uncomment the following two cells if the feature extractor should also be trained
```
# for name,param in model.named_parameters():
# param.requires_grad = True
# opt = optim.Adam(model.parameters())
# model, val_hist = fit(1, model, custom_loss, opt, train_dl, valid_dl)
plt.plot(val_hist)
```
#### evaluate the model
reload model
```
name = 'gru'
params = load_json(f"models/{name}")
model, _ = get_model(params)
model.load_state_dict(torch.load(f"./models/{name}.pth"));
model.eval().to(device);
_, valid_dl = get_data(data_path, model.params.seq_len, batch_size=16)
```
run evaluation on full val set
```
_, all_preds, all_labels = validate(model, valid_dl, custom_loss)
calc_metrics(all_preds, all_labels)
```
#### plot results
```
# for convenience, we can pass an integer instead of the full string
int2key = {0: 'red_light', 1:'hazard_stop', 2:'speed_sign',
3:'relative_angle', 4: 'center_distance', 5: 'veh_distance'}
def plot_preds(k, all_preds, all_labels, start=0, delta=1000):
if isinstance(k, int): k = int2key[k]
# get preds and labels
class_labels = ['red_light', 'hazard_stop', 'speed_sign']
pred = np.argmax(all_preds[k], axis=1) if k in class_labels else all_preds[k]
label = all_labels[k][:, 1] if k in class_labels else all_labels[k]
plt.plot(pred[start:start+delta], 'r--', label='Prediction', linewidth=2.0)
plt.plot(label[start:start+delta], 'g', label='Ground Truth', linewidth=2.0)
plt.legend()
plt.grid()
plt.show()
plot_preds(5, all_preds, all_labels, start=0, delta=4000)
```
#### param search
```
from numpy.random import choice
np.random.seed()
params = {'name': 'tempconv', 'type_': 'TempConv', 'lr': 3e-4, 'n_h': 128, 'p':0.5, 'seq_len':5}
def get_random_NN_parameters():
params = {}
params['type_'] = choice(['MLP', 'GRU', 'LSTM', 'TempConv'])
params['name'] = datetime.now().strftime("%Y_%m_%d_%H_%M")
params['lr'] = np.random.uniform(1e-5, 1e-2)
params['n_h'] = np.random.randint(5, 200)
params['p'] = np.random.uniform(0.25, 0.75)
params['seq_len'] = np.random.randint(1, 15)
return params
while True:
params = get_random_NN_parameters()
print('PARAMS: {}'.format(params))
# instantiate the model
model, opt = get_model(params)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
save_json(params, f"models/{params['name']}")
# get the data loaders
train_dl, valid_dl = get_data(data_path, model.params.seq_len, batch_size=16)
# start the training
model, val_hist = fit(5, model, custom_loss, opt, train_dl, valid_dl)
for name,param in model.named_parameters():
param.requires_grad = True
opt = optim.Adam(model.parameters())
model, val_hist = fit(5, model, custom_loss, opt, train_dl, valid_dl, val_hist=val_hist)
```
|
github_jupyter
|
```
%matplotlib widget
import os
import sys
sys.path.insert(0, os.getenv('HOME')+'/pycode/MscThesis/')
import pandas as pd
from amftrack.util import get_dates_datetime, get_dirname, get_plate_number, get_postion_number
import ast
from amftrack.plotutil import plot_t_tp1
from scipy import sparse
from datetime import datetime
from amftrack.pipeline.functions.node_id import orient
import pickle
import scipy.io as sio
from pymatreader import read_mat
from matplotlib import colors
import cv2
import imageio
import matplotlib.pyplot as plt
import numpy as np
from skimage.filters import frangi
from skimage import filters
from random import choice
import scipy.sparse
import os
from amftrack.pipeline.functions.extract_graph import from_sparse_to_graph, generate_nx_graph, sparse_to_doc
from skimage.feature import hessian_matrix_det
from amftrack.pipeline.functions.experiment_class_surf import Experiment
from amftrack.pipeline.paths.directory import run_parallel, find_state, directory_scratch, directory_project, path_code
from amftrack.notebooks.analysis.data_info import *
import matplotlib.patches as mpatches
from statsmodels.stats import weightstats as stests
window=800
results={}
for treatment in treatments.keys():
insts = treatments[treatment]
for inst in insts:
results[inst] = pickle.load(open(f'{path_code}/MscThesis/Results/straight_{window}_{inst}.pick', "rb"))
column_names = ["plate","inst", "treatment", "angle", "curvature","density","growth","speed","straightness","t","hyph"]
infos = pd.DataFrame(columns=column_names)
for treatment in treatments.keys():
insts = treatments[treatment]
for inst in insts:
angles, curvatures, densities,growths,speeds,tortuosities,ts,hyphs = results[inst]
for i,angle in enumerate(angles):
new_line = pd.DataFrame(
{ "plate": [plate_number[inst]],
"inst": [inst],
"treatment": [treatment],
"angle": [angle],
"curvature": [curvatures[i]],
"density": [densities[i]],
"growth": [growths[i]],
"speed": [speeds[i]],
"straightness": [tortuosities[i]],
"t": [ts[i]],
"hyph": [hyphs[i]],
}
) # index 0 for
# mothers need to be modified to resolve multi mother issue
infos = infos.append(new_line, ignore_index=True)
corrected = infos.loc[infos["straightness"] <= 1]
corrected
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
bplot1 = corrected.boxplot(column = ['speed'],by="plate",figsize =(9,8),ax =ax,patch_artist=True, showfliers=False)
colors = ['lightblue']+ ['pink'] +['lightgreen']
for i,(artist, col) in enumerate(zip(ax.artists, colors)):
artist.set_edgecolor(col)
artist.set_facecolor(col)
ax.set_xlabel('Plate')
ax.set_ylabel('Speed')
ax.set_ylim(0.9)
plt.show()
max_speeds = []
total_growth = []
for treatment in treatments.keys():
insts = treatments[treatment]
for inst in insts:
inst_tab = corrected.loc[corrected["inst"]==inst]
for hyph in set(inst_tab['hyph']):
max_speeds.append(np.max(inst_tab.loc[inst_tab['hyph']==hyph]['speed']))
total_growth.append(np.sum(inst_tab.loc[inst_tab['hyph']==hyph]['growth']))
len(max_speeds)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
ax.scatter(np.log(total_growth),max_speeds)
# ax.set_xlim(100,300)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/wisrovi/pyimagesearch-buy/blob/main/visual_logging_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Visual-logging, my new favorite tool for debugging OpenCV and Python apps
### by [PyImageSearch.com](http://www.pyimagesearch.com)
## Welcome to **[PyImageSearch Plus](http://pyimg.co/plus)** Jupyter Notebooks!
This notebook is associated with the [Visual-logging, my new favorite tool for debugging OpenCV and Python apps](https://www.pyimagesearch.com/2014/12/22/visual-logging-new-favorite-tool-debugging-opencv-python-apps/) blog post published on 2014-12-22.
Only the code for the blog post is here. Most codeblocks have a 1:1 relationship with what you find in the blog post with two exceptions: (1) Python classes are not separate files as they are typically organized with PyImageSearch projects, and (2) Command Line Argument parsing is replaced with an `args` dictionary that you can manipulate as needed.
We recommend that you execute (press ▶️) the code block-by-block, as-is, before adjusting parameters and `args` inputs. Once you've verified that the code is working, you are welcome to hack with it and learn from manipulating inputs, settings, and parameters. For more information on using Jupyter and Colab, please refer to these resources:
* [Jupyter Notebook User Interface](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html#notebook-user-interface)
* [Overview of Google Colaboratory Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)
As a reminder, these PyImageSearch Plus Jupyter Notebooks are not for sharing; please refer to the **Copyright** directly below and **Code License Agreement** in the last cell of this notebook.
Happy hacking!
*Adrian*
<hr>
***Copyright:*** *The contents of this Jupyter Notebook, unless otherwise indicated, are Copyright 2020 Adrian Rosebrock, PyimageSearch.com. All rights reserved. Content like this is made possible by the time invested by the authors. If you received this Jupyter Notebook and did not purchase it, please consider making future content possible by joining PyImageSearch Plus at http://pyimg.co/plus/ today.*
### Install the necessary packages
```
!pip install visual-logging
```
### Download the code zip file
```
!wget https://www.pyimagesearch.com/wp-content/uploads/2014/12/visual-logging-example.zip
!unzip -qq visual-logging-example.zip
%cd visual-logging-example
```
## Blog Post Code
### Import Packages
```
# import the necessary packages
from matplotlib import pyplot as plt
from logging import FileHandler
from vlogging import VisualRecord
import logging
import cv2
```
### visual-logging, my new favorite tool for debugging OpenCV and Python apps
```
# open the logging file
logger = logging.getLogger("visual_logging_example")
fh = FileHandler("demo.html", mode = "w")
# set the logger attributes
logger.setLevel(logging.DEBUG)
logger.addHandler(fh)
# load our example image and convert it to grayscale
image = cv2.imread("lex.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# loop over some varying sigma sizes
for s in range(3, 11, 2):
# blur the image and detect edges
blurred = cv2.GaussianBlur(image, (s, s), 0)
edged = cv2.Canny(blurred, 75, 200)
logger.debug(VisualRecord(("Detected edges using sigma = %d" % (s)),
[blurred, edged], fmt = "png"))
#@title Display `demo.html`
import IPython
IPython.display.HTML(filename="demo.html")
```
For a detailed walkthrough of the concepts and code, be sure to refer to the full tutorial, [*Visual-logging, my new favorite tool for debugging OpenCV and Python apps*](https://www.pyimagesearch.com/2014/12/22/visual-logging-new-favorite-tool-debugging-opencv-python-apps/) published on 2014-12-22.
# Code License Agreement
```
Copyright (c) 2020 PyImageSearch.com
SIMPLE VERSION
Feel free to use this code for your own projects, whether they are
purely educational, for fun, or for profit. THE EXCEPTION BEING if
you are developing a course, book, or other educational product.
Under *NO CIRCUMSTANCE* may you use this code for your own paid
educational or self-promotional ventures without written consent
from Adrian Rosebrock and PyImageSearch.com.
LONGER, FORMAL VERSION
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files
(the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
Notwithstanding the foregoing, you may not use, copy, modify, merge,
publish, distribute, sublicense, create a derivative work, and/or
sell copies of the Software in any work that is designed, intended,
or marketed for pedagogical or instructional purposes related to
programming, coding, application development, or information
technology. Permission for such use, copying, modification, and
merger, publication, distribution, sub-licensing, creation of
derivative works, or sale is expressly withheld.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
|
github_jupyter
|
# The Python ecosystem
## Why Python?
### Python in a nutshell
[Python](https://www.python.org) is a multi-purpose programming language created in 1989 by [Guido van Rossum](https://en.wikipedia.org/wiki/Guido_van_Rossum) and developed under an open source license.
It has the following characteristics:
- multi-paradigms (procedural, fonctional, object-oriented);
- dynamic types;
- automatic memory management;
- and much more!
### The Python syntax
For more examples, see the [Python cheatsheet](../tools/python_cheatsheet).
```
def hello(name):
    print(f"Hello, {name}")

friends = ["Lou", "David", "Iggy"]
for friend in friends:
    hello(friend)
```
### Introduction to Data Science
- Main objective: extract insight from data.
- The expression was coined in 1997 in the statistics community.
- "A Data Scientist is a statistician that lives in San Francisco".
- 2012: "Sexiest job of the 21st century" (Harvard Business Review).
- [Controversy](https://en.wikipedia.org/wiki/Data_science#Relationship_to_statistics) on the expression's real usefulness.
[](https://en.wikipedia.org/wiki/Data_science)
[](http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram)
[](http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram)
### Python, a standard for ML and Data Science
- Language qualities (ease of use, simplicity, versatility).
- Involvement of the scientific and academical communities.
- Rich ecosystem of dedicated open source libraries.
## Essential Python tools
### Anaconda
[Anaconda](https://www.anaconda.com/distribution/) is a scientific distribution including Python and many (1500+) specialized packages. It is the easiest way to set up a working environment for ML and Data Science with Python.
[](https://www.anaconda.com/distribution/)
### Jupyter Notebook
The Jupyter Notebook is an open-source web application that allows users to create and manage documents (_.ipynb_ files) that may contain live code, equations, visualizations and text.
It has become the *de facto* standard for sharing research results in numerical fields.
[](https://jupyter.org/)
### Google Colaboratory
Cloud environment for executing Jupyter notebooks through CPU, GPU or TPU.
[](https://colab.research.google.com)
### NumPy
[NumPy](https://numpy.org/) is a Python library providing support for multi-dimensional arrays, along with a large collection of mathematical functions to operate on these arrays.
It is the fundamental package for scientific computing in Python.
```
# Import the NumPy package under the alias "np"
import numpy as np
x = np.array([1, 4, 2, 5, 3])
print(x[:2])
print(x[2:])
print(np.sort(x))
```
### pandas
[pandas](https://pandas.pydata.org/) is a Python library providing high-performance, easy-to-use data structures and data analysis tools.
The primary data structures in **pandas** are implemented as two classes:
- **DataFrame**, which you can imagine as a relational data table, with rows and named columns.
- **Series**, which is a single column. A DataFrame contains one or more Series and a name for each Series.
The DataFrame is a commonly used abstraction for data manipulation.
```
import pandas as pd
# Create a DataFrame object containing two Series
pop = pd.Series({"CAL": 38332521, "TEX": 26448193, "NY": 19651127})
area = pd.Series({"CAL": 423967, "TEX": 695662, "NY": 141297})
pd.DataFrame({"population": pop, "area": area})
```
### Matplotlib and Seaborn
[Matplotlib](https://matplotlib.org/) is a Python library for 2D plotting. [Seaborn](https://seaborn.pydata.org) is another visualization library that improves presentation of matplotlib-generated graphics.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Setup plots (should be done on a separate cell for better results)
%matplotlib inline
plt.rcParams["figure.figsize"] = 10, 8
%config InlineBackend.figure_format = "retina"
sns.set()
# Plot a single function
x = np.linspace(0, 10, 30)
plt.plot(x, np.cos(x), label="cosine")
plt.plot(x, np.sin(x), '-ok', label="sine")
plt.legend()
plt.show()
```
### scikit-learn
[scikit-learn](https://scikit-learn.org) is a multi-purpose library built on top of NumPy and Matplotlib, providing dozens of built-in ML algorithms and models.
It is the Swiss army knife of Machine Learning.
Fun fact: scikit-learn was originally created by [INRIA](https://www.inria.fr).
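For a flavor of the API, here is a minimal sketch of the usual fit/predict/score workflow (the iris toy dataset and the logistic-regression estimator are arbitrary choices for illustration):
```
# Train and evaluate a simple classifier with scikit-learn's fit/score API
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # mean accuracy on the held-out split
```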
### Keras
[Keras](https://keras.io/) is a high-level, user-friendly API for creating and training neural nets.
Once compatible with several back-end engines (Theano, CNTK...), Keras is now the official high-level API of [TensorFlow](https://www.tensorflow.org/), Google's Machine Learning platform.
The [2.3.0 release](https://github.com/keras-team/keras/releases/tag/2.3.0) (Sept. 2019) was the last major release of multi-backend Keras.
See [this notebook](https://colab.research.google.com/drive/1UCJt8EYjlzCs1H1d1X0iDGYJsHKwu-NO) for an introduction to TF+Keras.
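A minimal sketch of the `tf.keras` workflow (layer sizes and the random placeholder data are arbitrary choices for illustration):
```
# Define, compile and fit a tiny fully-connected network with tf.keras
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

X = np.random.rand(100, 4)             # placeholder features
y = np.random.randint(0, 3, size=100)  # placeholder integer labels
model.fit(X, y, epochs=3, verbose=0)
```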
### PyTorch
[PyTorch](https://pytorch.org) is a Machine Learning platform supported by Facebook and competing with [TensorFlow](https://www.tensorflow.org/) for the hearts and minds of ML practitioners worldwide. It provides:
- an array manipulation API similar to NumPy;
- an autodifferentiation engine for computing gradients;
- a neural network API.
It is based on previous work, notably [Torch](http://torch.ch/) and [Chainer](https://chainer.org/).
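A minimal sketch showing the NumPy-like tensor API together with the autodifferentiation engine (values are arbitrary):
```
# Compute a gradient with PyTorch's autograd engine
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2 + x3^2
y.backward()         # populates x.grad with dy/dx
print(x.grad)        # tensor([2., 4., 6.])
```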
```
#coding:utf-8
import sys
import numpy as np
sys.path.append("..")
import argparse
from train_models.mtcnn_model import P_Net, R_Net, O_Net
from prepare_data.loader import TestLoader
from Detection.detector import Detector
from Detection.fcn_detector import FcnDetector
from Detection.MtcnnDetector import MtcnnDetector
import cv2
import os
data_dir = '../../DATA/WIDER_val/images'
anno_file = 'wider_face_val.txt'
def read_gt_bbox(raw_list):
    list_len = len(raw_list)
    bbox_num = (list_len - 1) // 4
    idx = 1
    bboxes = np.zeros((bbox_num, 4), dtype=int)
    for i in range(4):
        for j in range(bbox_num):
            bboxes[j][i] = int(raw_list[idx])
            idx += 1
    return bboxes

def get_image_info(anno_file):
    f = open(anno_file, 'r')
    image_info = []
    for line in f:
        ct_list = line.strip().split(' ')
        path = ct_list[0]
        path_list = path.split('\\')
        event = path_list[0]
        name = path_list[1]
        # print(event, name)
        bboxes = read_gt_bbox(ct_list)
        image_info.append([event, name, bboxes])
    print('total number of images in validation set: ', len(image_info))
    return image_info
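# --- Detection configuration --------------------------------------------
# MTCNN is a three-stage cascade: PNet proposes candidate face windows,
# RNet refines them, and ONet outputs the final boxes (and landmarks).
# `test_mode` selects how far down the cascade detection runs, `thresh`
# holds the per-stage confidence thresholds, and `prefix`/`epoch` point
# to the saved checkpoints restored for each network below.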
test_mode = "ONet"
thresh = [0.6,0.5,0.4]
min_face_size = 24
stride = 2
slide_window = False
shuffle = False
vis = False
detectors = [None, None, None]
prefix = ['../data/MTCNN_model/PNet_landmark/PNet', '../data/MTCNN_model/RNet_landmark/RNet', '../data/MTCNN_model/ONet_landmark/ONet']
epoch = [18, 14, 16]
batch_size = [2048, 256, 16]
model_path = ['%s-%s' % (x, y) for x, y in zip(prefix, epoch)]
if slide_window:
    PNet = Detector(P_Net, 12, batch_size[0], model_path[0])
else:
    PNet = FcnDetector(P_Net, model_path[0])
detectors[0] = PNet
# load rnet model
if test_mode in ["RNet", "ONet"]:
    RNet = Detector(R_Net, 24, batch_size[1], model_path[1])
    detectors[1] = RNet
# load onet model
if test_mode == "ONet":
    ONet = Detector(O_Net, 48, batch_size[2], model_path[2])
    detectors[2] = ONet
mtcnn_detector = MtcnnDetector(detectors=detectors, min_face_size=min_face_size,
                               stride=stride, threshold=thresh, slide_window=slide_window)
image_info = get_image_info(anno_file)
str1='aaa'
str2 = 'bbb'
str3 = 'aaa'
print(str1 != str2)
print (str1 == str3)
a ='asdfasdf.jpg'
a.split('.jpg')
current_event = ''
save_path = ''
for item in image_info:
image_file_name = os.path.join(data_dir,item[0],item[1])
if current_event != item[0]:
current_event = item[0]
save_path = os.path.join('../../DATA',item[0])
if not os.path.exists(save_path):
os.mkdir(save_path)
f_name= item[1].split('.jpg')[0]
dets_file_name = os.path.join(save_path,f_name + '.txt')
img = cv2.imread(image_file_name)
all_boxes,_ = mtcnn_detector.detect_single_image(img)
```
# Coronagraph Basics
This set of exercises guides the user through a step-by-step process of simulating NIRCam coronagraphic observations of the HR 8799 exoplanetary system. The goal is to familiarize the user with basic `pynrc` classes and functions relevant to coronagraphy.
```
# If running Python 2.x, makes print and division act like Python 3
from __future__ import print_function, division
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Enable inline plotting at lower left
%matplotlib inline
from IPython.display import display, Latex, clear_output
```
We will start by first importing `pynrc` along with the `obs_hci` (High Contrast Imaging) class, which lives in the `pynrc.obs_nircam` module.
```
import pynrc
from pynrc import nrc_utils # Variety of useful functions and classes
from pynrc.obs_nircam import obs_hci # High-contrast imaging observation class
# Disable informational messages and only include warnings and higher
pynrc.setup_logging(level='WARN')
```
## Source Definitions
The `obs_hci` class first requires two arguments describing the spectra of the science and reference sources (`sp_sci` and `sp_ref`, respectively). Each argument should be a Pysynphot spectrum already normalized to some known flux. `pynrc` includes built-in functions for generating spectra; the user may use either of these or supply their own, as long as it meets the requirements.
1. The `pynrc.stellar_spectrum` function provides the simplest way to define a new spectrum:
```python
bp_k = pynrc.bp_2mass('k') # Define bandpass to normalize spectrum
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k)
```
You can also be more specific about the stellar properties with `Teff`, `metallicity`, and `log_g` keywords.
```python
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k,
Teff=7430, metallicity=-0.47, log_g=4.35)
```
2. Alternatively, the `pynrc.source_spectrum` class ingests spectral information of a given target and generates a model fit to the known photometric SED. Two model routines can be fit. The first is a very simple scale factor that is applied to the input spectrum, while the second takes the input spectrum and adds an IR excess modeled as a modified blackbody function. The user can find the relevant photometric data at http://vizier.u-strasbg.fr/vizier/sed/ and click download data as a VOTable.
```
# Define 2MASS Ks bandpass and source information
bp_k = pynrc.bp_2mass('k')
# Science source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band
args_sources = [('HR 8799', 39.0, 30, 'F0V', 7430, -0.47, 4.35, 5.24, bp_k)]
# References source, sptype, Teff, [Fe/H], log_g, mag, band
ref_sources = [('HD 220657', 'F8III', 5888, -0.01, 3.22, 3.04, bp_k)]
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci = args_sources[0]
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = ref_sources[0]
# For the purposes of simplicity, we will use pynrc.stellar_spectrum()
sp_sci = pynrc.stellar_spectrum(spt_sci, mag_sci, 'vegamag', bp_sci,
Teff=Teff_sci, metallicity=feh_sci, log_g=logg_sci)
sp_sci.name = name_sci
# And the reference source
sp_ref = pynrc.stellar_spectrum(spt_ref, mag_ref, 'vegamag', bp_ref,
Teff=Teff_ref, metallicity=feh_ref, log_g=logg_ref)
sp_ref.name = name_ref
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
for sp in [sp_sci, sp_ref]:
    w = sp.wave / 1e4
    ind = (w>=xr[0]) & (w<=xr[1])
    sp.convert('Jy')
    f = sp.flux / np.interp(4.0, w, sp.flux)
    ax.semilogy(w[ind], f[ind], lw=1.5, label=sp.name)
    ax.set_ylabel('Flux (Jy) normalized at 4 $\mu m$')
    sp.convert('flam')
ax.set_xlim(xr)
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('Spectral Sources')
# Overplot Filter Bandpass
bp = pynrc.read_filter('F444W', 'CIRCLYOT', 'MASK430R')
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
```
## Initialize Observation
Now we will initialize the high-contrast imaging class `pynrc.obs_hci` using the spectral objects and various other settings. The `obs_hci` object is a subclass of the more generalized `NIRCam` class. It implements new settings and functions specific to high-contrast imaging observations for coronagraphy and direct imaging.
For this tutorial, we want to observe these targets using the `MASK430R` coronagraph in the `F444W` filter. All circular coronagraphic masks such as the `430R` (R=round) should be paired with the `CIRCLYOT` pupil element, whereas wedge/bar masks are paired with `WEDGELYOT` pupil. Observations in the LW channel are most commonly observed in `WINDOW` mode with a 320x320 detector subarray size. Full detector sizes are also available.
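As a quick reference (an illustrative lookup only, not a `pynrc` data structure), the mask/pupil pairing rule can be summarized as:
```python
# Illustrative mask -> Lyot stop pairing for NIRCam coronagraphy (not part of the pynrc API)
mask_to_pupil = {
    'MASK210R': 'CIRCLYOT', 'MASK335R': 'CIRCLYOT', 'MASK430R': 'CIRCLYOT',  # round masks
    'MASKSWB': 'WEDGELYOT', 'MASKLWB': 'WEDGELYOT',                          # bar/wedge masks
}
```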
The PSF simulation size (`fov_pix` keyword) should also be of similar size to the subarray window (we recommend avoiding anything above `fov_pix=1024` due to computation time and memory usage). Use odd numbers to center the PSF in the middle of a pixel. If `fov_pix` is specified as even, then PSFs get centered at pixel corners. This distinction really only matters for unocculted observations (i.e., where the PSF flux is concentrated in a tight central core).
We also need to specify a WFE drift value (`wfe_ref_drift` parameter), which defines the anticipated drift in nm between the science and reference sources. For the moment, let's initialize it with a value of 0 nm. This avoids an initially long process by which `pynrc` calculates changes made to the PSF over a wide range of drift values.
Extended disk models can also be specified upon initialization using the `disk_hdu` keyword.
```
filt, mask, pupil = ('F444W', 'MASK430R', 'CIRCLYOT')
wind_mode, subsize = ('WINDOW', 320)
fov_pix, oversample = (320, 2)
wfe_ref_drift = 0
obs = pynrc.obs_hci(sp_sci, sp_ref, dist_sci, filter=filt, mask=mask, pupil=pupil,
wfe_ref_drift=wfe_ref_drift, fov_pix=fov_pix, oversample=oversample,
wind_mode=wind_mode, xpix=subsize, ypix=subsize, verbose=True)
```
All information for the reference observation is stored in the attribute `obs.nrc_ref`, which is simply its own isolated `NIRCam` (`nrc_hci`) class. After initialization, any updates made to the primary `obs` instrument configuration (e.g., filters, detector size, etc.) must also be made inside the `obs.nrc_ref` class; that is to say, changes do not automatically propagate. In many ways, it's best to think of these as two separate classes,
```python
obs_sci = obs
obs_ref = obs.nrc_ref
```
with some linked references between the two.
Now that we've successfully initialized the obs_hci observations, let's specify the `wfe_ref_drift`. If this is your first time, then the `nrc_utils.wfed_coeff` function is called to determine a relationship between PSFs in the presence of WFE drift. This relationship is saved to disk in the `PYNRC_DATA` directory as a set of polynomial coefficients. Future calculations utilize these coefficients to quickly generate a new PSF for any arbitrary drift value.
```
# WFE drift amount between rolls
# This only gets called during gen_roll_image()
# and temporarily updates obs.wfe_drift to create
# a new PSF.
obs.wfe_roll_drift = 2
# Drift amount between Roll 1 and reference
# This is simply a link to obs.nrc_ref.wfe_drift
obs.wfe_ref_drift = 10
```
## Exposure Settings
Optimization of exposure settings is demonstrated in another tutorial, so we will not repeat that process here. We can assume the optimization was performed elsewhere and chose the `DEEP8` pattern with 16 groups and 5 total integrations. These settings apply to each roll position of the science observation as well as to the reference observation.
```
# Update both the science and reference observations
obs.update_detectors(read_mode='DEEP8', ngroup=16, nint=5, verbose=True)
obs.nrc_ref.update_detectors(read_mode='DEEP8', ngroup=16, nint=5)
```
## Add Planets
There are four known giant planets orbiting HR 8799 at various locations. Ideally, we would like to position them at their predicted locations on the anticipated observation date. For this case, we choose a plausible observation date of November 1, 2019. To convert between $(x,y)$ and $(r,\theta)$, use the `nrc_utils.xy_to_rtheta` and `nrc_utils.rtheta_to_xy` functions.
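For reference, the conversion itself is just the standard Cartesian/polar relation; the sketch below is not the `nrc_utils` implementation (which may adopt a different position-angle convention), only a reminder of the math:
```python
import numpy as np

def xy_to_rtheta_sketch(x, y):
    """Cartesian offsets (arcsec) -> separation (arcsec) and angle (deg)."""
    r = np.sqrt(x**2 + y**2)
    theta = np.degrees(np.arctan2(y, x))
    return r, theta

def rtheta_to_xy_sketch(r, theta):
    """Separation (arcsec) and angle (deg) -> Cartesian offsets (arcsec)."""
    return r * np.cos(np.radians(theta)), r * np.sin(np.radians(theta))
```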
When adding the planets, it doesn't matter too much which exoplanet model spectrum we decide to use since the spectra are still fairly unconstrained at these wavelengths. We do know roughly the planets' luminosities, so we can simply choose some reasonable model and renormalize it to the appropriate filter brightness. Currently, the only exoplanet spectral models available to `pynrc` are those from Spiegel & Burrows (2012).
```
# Projected locations for date 11/01/2019
# These are preliminary positions, but within constrained orbital parameters
loc_list = [(-1.57, 0.64), (0.42, 0.87), (0.5, -0.45), (0.35, 0.20)]
# Estimated magnitudes within F444W filter
pmags = [16.0, 15.0, 14.6, 14.7]
# Add planet information to observation class.
# These are stored in obs.planets.
# Can be cleared using obs.kill_planets().
obs.kill_planets()
for i, loc in enumerate(loc_list):
    obs.add_planet(mass=10, entropy=13, age=age, xy=loc, runits='arcsec',
                   renorm_args=(pmags[i], 'vegamag', obs.bandpass))
# Generate and plot a noiseless slope image to make sure things look right
PA1 = 85
im_planets = obs.gen_planets_image(PA_offset=PA1)
from matplotlib.patches import Circle
from pynrc.nrc_utils import (coron_ap_locs, build_mask_detid, fshift, pad_or_cut_to_size)
fig, ax = plt.subplots(figsize=(6,6))
xasec = obs.det_info['xpix'] * obs.pix_scale
yasec = obs.det_info['ypix'] * obs.pix_scale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
xylim = 4
vmin = 0
vmax = 0.5*im_planets.max()
ax.imshow(im_planets, extent=extent, vmin=vmin, vmax=vmax)
# Overlay the coronagraphic mask
detid = obs.Detectors[0].detid
im_mask = obs.mask_images[detid]
# Do some masked transparency overlays
masked = np.ma.masked_where(im_mask>0.99, im_mask)
#ax.imshow(1-masked, extent=extent, alpha=0.5)
ax.imshow(1-masked, extent=extent, alpha=0.3, cmap='Greys_r', vmin=-0.5)
xc_off = obs.bar_offset
for loc in loc_list:
    xc, yc = loc
    xc, yc = nrc_utils.xy_rot(xc, yc, PA1)
    xc += xc_off
    circle = Circle((xc,yc), radius=xylim/15., alpha=0.7, lw=1, edgecolor='red', facecolor='none')
    ax.add_artist(circle)
xlim = ylim = np.array([-1,1])*xylim
xlim = xlim + xc_off
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.set_title('{} planets -- {} {}'.format(sp_sci.name, obs.filter, obs.mask))
color = 'grey'
ax.tick_params(axis='both', color=color, which='both')
for k in ax.spines.keys():
    ax.spines[k].set_color(color)
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, angle=PA1,
position=(0.25,0.9), label1='E', label2='N')
fig.tight_layout()
```
As we can see, even with "perfect PSF subtraction" and no noise, it's difficult to make out planet e. This is primarily due to its location relative to the occulting mask reducing throughput along with confusion of bright diffraction spots from nearby sources.
## Estimated Performance
Now we are ready to determine contrast performance and sensitivites as a function of distance from the star.
### 1. Roll-Subtracted Images
First, we will create a quick simulated roll-subtracted image using the `gen_roll_image` method. For the selected observation date of 11/1/2019, APT shows a PA range of 84$^{\circ}$ to 96$^{\circ}$. So, we'll assume Roll 1 has PA1=85, while Roll 2 has PA2=95. In this case, "roll subtraction" simply creates two science images observed at different parallactic angles, then subtracts the same reference observation from each. The two results are then de-rotated to a common PA=0 and averaged.
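Conceptually, that procedure reduces to a few lines; the sketch below (using `scipy.ndimage.rotate` for the de-rotation) is only an illustration of the idea, not the actual `pynrc` implementation, which also handles oversampling, shifts, and image alignment:
```python
import numpy as np
from scipy.ndimage import rotate

def roll_subtract_sketch(roll1, roll2, ref, pa1, pa2):
    """Reference-subtract each roll, de-rotate to a common PA=0, and average."""
    diff1 = roll1 - ref
    diff2 = roll2 - ref
    derot1 = rotate(diff1, -pa1, reshape=False)  # sign convention is illustrative
    derot2 = rotate(diff2, -pa2, reshape=False)
    return 0.5 * (derot1 + derot2)
```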
There is also the option to create ADI images, where the other roll position becomes the reference star by setting `no_ref=True`.
```
# Cycle through a few WFE drift values
wfe_list = [0,5,10]
# PA values for each roll
PA1, PA2 = (85, 95)
# A dictionary of HDULists
hdul_dict = {}
for i, wfe_drift in enumerate(wfe_list):
    print(wfe_drift)
    # Update the WFE reference drift value
    obs.wfe_ref_drift = wfe_drift
    # Set the final output image to be oversampled
    hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2)
    hdul_dict[wfe_drift] = hdulist
from pynrc.obs_nircam import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
    ax = axes[j]
    hdul = hdul_dict[wfe_drift]
    plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=8)
    # Location of planet
    for loc in loc_list:
        circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
        ax.add_artist(circle)
    ax.set_title('$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
    nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.7), label1='E', label2='N')
fig.suptitle('{} -- {} {}'.format(name_sci, obs.filter, obs.mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
```
**Note:** At first glance, it appears as if the innermost Planet e is getting brighter with increased WFE drift, which would be understandably confusing. However, upon further investigation, there just happens to be a bright residual speckle that lines up well with Planet e when observed at this specific parallactic angle. This was verified by adjusting the observed PA as well as removing the planets from the simulations.
### 2. Contrast Curves
Next, we will cycle through a few WFE drift values to get an idea of potential predicted sensitivity curves. The `calc_contrast` method returns a tuple of three arrays:
1. The radius in arcsec.
2. The n-sigma contrast.
3. The n-sigma magnitude sensitivity limit (vega mag).
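For a quick look at a single curve, the returned tuple can be unpacked and plotted directly (a minimal sketch; the cells below use the `plot_contrasts` helper functions instead):
```python
rr, contrast, sens_mag = obs.calc_contrast(roll_angle=10, nsig=5)
plt.semilogy(rr, contrast)
plt.xlabel('Separation (arcsec)')
plt.ylabel('5-sigma contrast')
```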
```
# Cycle through varying levels of WFE drift and calculate contrasts
wfe_list = [0,5,10]
nsig = 5
# PA values for each roll
PA1, PA2 = (85, 95)
roll_angle = np.abs(PA2 - PA1)
curves = []
for i, wfe_drift in enumerate(wfe_list):
    print(wfe_drift)
    # Generate series of observations for each filter
    obs.wfe_ref_drift = wfe_drift
    # Generate contrast curves
    result = obs.calc_contrast(roll_angle=roll_angle, nsig=nsig)
    curves.append(result)
from pynrc.obs_nircam import plot_contrasts, plot_planet_patches, plot_contrasts_mjup
import matplotlib.patches as mpatches
# fig, ax = plt.subplots(figsize=(8,5))
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
xr=[0,5]
yr=[24,8]
# 1a. Plot contrast curves and set x/y limits
ax = axes[0]
ax, ax2, ax3 = plot_contrasts(curves, nsig, wfe_list, obs=obs,
xr=xr, yr=yr, ax=ax, return_axes=True)
# 1b. Plot the locations of exoplanet companions
label = 'Companions ({})'.format(filt)
planet_dist = [np.sqrt(x**2+y**2) for x,y in loc_list]
ax.plot(planet_dist, pmags, marker='o', ls='None', label=label, color='k', zorder=10)
# 1c. Plot Spiegel & Burrows (2012) exoplanet fluxes (Hot Start)
plot_planet_patches(ax, obs, age=age, entropy=13, av_vals=None)
ax.legend(ncol=2)
# 2. Plot in terms of MJup using COND models
ax = axes[1]
plot_contrasts_mjup(curves, nsig, wfe_list, obs=obs, age=age,
ax=ax, twin_ax=True, xr=xr, yr=None)
ax.set_yscale('log')
ax.set_ylim([0.08,100])
ax.legend(loc='upper right', title='COND ({:.0f} Myr)'.format(age))
fig.suptitle('{} ({} + {})'.format(name_sci, obs.filter, obs.mask), fontsize=16)
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
```
The innermost Planet e is right on the edge of the detection threshold as suggested by the simulated images.
### 3. Saturation Levels
Create an image showing level of saturation for each pixel. For NIRCam, saturation is important to track for purposes of accurate slope fits and persistence correction. In this case, we will plot the saturation levels both at `NGROUP=2` and `NGROUP=obs.det_info['ngroup']`. Saturation is defined at 80% well level, but can be modified using the `well_fill` keyword.
We want to perform this analysis for both science and reference targets.
```
# Saturation limits
ng_max = obs.det_info['ngroup']
sp_flat = pynrc.stellar_spectrum('flat')
print('NGROUP=2')
_ = obs.sat_limits(sp=sp_flat,ngroup=2,verbose=True)
print('')
print('NGROUP={}'.format(ng_max))
_ = obs.sat_limits(sp=sp_flat,ngroup=ng_max,verbose=True)
mag_sci = obs.star_flux('vegamag')
mag_ref = obs.star_flux('vegamag', sp=obs.sp_ref)
print('')
print('{} flux at {}: {:0.2f} mags'.format(obs.sp_sci.name, obs.filter, mag_sci))
print('{} flux at {}: {:0.2f} mags'.format(obs.sp_ref.name, obs.filter, mag_ref))
```
In this case, we don't expect HR 8799 to saturate. However, the reference source should have some saturated pixels before the end of an integration.
```
# Well level of each pixel for science source
sci_levels1 = obs.saturation_levels(ngroup=2)
sci_levels2 = obs.saturation_levels(ngroup=ng_max)
# Which pixels are saturated?
sci_mask1 = sci_levels1 > 0.8
sci_mask2 = sci_levels2 > 0.8
# Well level of each pixel for reference source
ref_levels1 = obs.saturation_levels(ngroup=2, do_ref=True)
ref_levels2 = obs.saturation_levels(ngroup=ng_max, do_ref=True)
# Which pixels are saturated?
ref_mask1 = ref_levels1 > 0.8
ref_mask2 = ref_levels2 > 0.8
# How many saturated pixels?
nsat1_sci = len(sci_levels1[sci_mask1])
nsat2_sci = len(sci_levels2[sci_mask2])
print(obs.sp_sci.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_sci))
print('{} saturated pixel at NGROUP={}'.format(nsat2_sci,ng_max))
# How many saturated pixels?
nsat1_ref = len(ref_levels1[ref_mask1])
nsat2_ref = len(ref_levels2[ref_mask2])
print('')
print(obs.sp_ref.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_ref))
print('{} saturated pixel at NGROUP={}'.format(nsat2_ref,ng_max))
# Saturation Mask for science target
nsat1, nsat2 = (nsat1_sci, nsat2_sci)
sat_mask1, sat_mask2 = (sci_mask1, sci_mask2)
sp = obs.sp_sci
nrc = obs
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
    fig, axes = plt.subplots(1,2, figsize=(10,5))
    xasec = nrc.det_info['xpix'] * nrc.pix_scale
    yasec = nrc.det_info['ypix'] * nrc.pix_scale
    extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
    axes[0].imshow(sat_mask1, extent=extent)
    axes[1].imshow(sat_mask2, extent=extent)
    axes[0].set_title('{} Saturation (NGROUP=2)'.format(sp.name))
    axes[1].set_title('{} Saturation (NGROUP={})'.format(sp.name,ng_max))
    for ax in axes:
        ax.set_xlabel('Arcsec')
        ax.set_ylabel('Arcsec')
        ax.tick_params(axis='both', color='white', which='both')
        for k in ax.spines.keys():
            ax.spines[k].set_color('white')
    fig.tight_layout()
else:
    print('No saturation detected.')
# Saturation Mask for reference
nsat1, nsat2 = (nsat1_ref, nsat2_ref)
sat_mask1, sat_mask2 = (ref_mask1, ref_mask2)
sp = obs.sp_ref
nrc = obs.nrc_ref
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
    fig, axes = plt.subplots(1,2, figsize=(10,5))
    xasec = nrc.det_info['xpix'] * nrc.pix_scale
    yasec = nrc.det_info['ypix'] * nrc.pix_scale
    extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
    axes[0].imshow(sat_mask1, extent=extent)
    axes[1].imshow(sat_mask2, extent=extent)
    axes[0].set_title('{} Saturation (NGROUP=2)'.format(sp.name))
    axes[1].set_title('{} Saturation (NGROUP={})'.format(sp.name,ng_max))
    for ax in axes:
        ax.set_xlabel('Arcsec')
        ax.set_ylabel('Arcsec')
        ax.tick_params(axis='both', color='white', which='both')
        for k in ax.spines.keys():
            ax.spines[k].set_color('white')
    fig.tight_layout()
else:
    print('No saturation detected.')
```
```
# ignore this
%matplotlib inline
%load_ext music21.ipython21
```
# User's Guide, Chapter 15: Keys and KeySignatures
Music21 has two main objects for working with keys: the :class:`~music21.key.KeySignature` object, which handles the spelling of key signatures and the :class:`~music21.key.Key` object which does everything a KeySignature object does but also knows more advanced aspects of tonal harmony. We'll go through the basics of each one here.
We start, like always, by importing music21:
```
from music21 import *
```
Now let's get a couple of different key signatures, representing different numbers of sharps:
```
ks2 = key.KeySignature(2)
ks2.sharps
ks7 = key.KeySignature(7)
ks7
```
We can get a list of which pitches (as :class:`~music21.pitch.Pitch` objects) are altered by the key signature with the `.alteredPitches` property:
```
ks2.alteredPitches
```
There's also a method that lets us see what the accidental is for any given step:
```
ks2.accidentalByStep('C')
ks2.accidentalByStep('E') is None
```
Notice that we give a string of just a letter name from C-B. This won't work:
```
ks2.accidentalByStep('C#')
```
We can create key signatures with absurd numbers of sharps and get strange accidentals:
```
ks12 = key.KeySignature(12)
ks12.accidentalByStep('F')
```
These absurd key signatures display in some programs (such as Lilypond) and are exported into MusicXML but do not display in most MusicXML readers.
Key Signatures transpose like Pitches and Notes, taking each of the notes and moving it:
```
ks4 = ks2.transpose('M2')
ks4
```
And the number of sharps can be changed after the fact:
```
ks4.sharps = 0
ks4
```
We can get the Major or Minor scale corresponding to the Key Signature:
```
ks2.getScale('major')
ks2.getScale('minor')
```
We'll see what we can do with scales in a bit.
If we put a KeySignature into a Stream, we can see it:
```
m = stream.Measure()
m.insert(0, meter.TimeSignature('3/4'))
m.insert(0, ks2)
d = note.Note('D')
c = note.Note('C')
fis = note.Note('F#') # German name
m.append([d, c, fis])
m.show()
```
Note that the Note 'C' is treated as C-natural and thus needs the natural sign in front of it. The Note F# however does not need a natural sign to be displayed. The process of calling `.show()` on the stream made a copy of the notes and set the `.pitch.accidental.displayStatus` on the F# to `False` and created an accidental for the C note with a natural and a displayStatus of True. Then the copies were discarded, so we don't see them here:
```
fis.pitch.accidental.displayStatus
```
But we could instead call `.makeNotation(inPlace=True)` or `.makeAccidentals(inPlace=True)` on the Measure to do this manually:
```
m.makeAccidentals(inPlace=True)
fis.pitch.accidental.displayStatus
c.pitch.accidental, c.pitch.accidental.displayStatus
```
If we have a `Measure` (not just any `Stream`) we can also set the KeySignature for the beginning of the measure with the Measure object's `.keySignature` property:
```
m.keySignature = key.KeySignature(4)
m.show()
```
Of course life isn't all about sharps; it'd be a pretty terrible KeySignature object if we couldn't have flats. To do it, just specify the number of flats as a negative number. So -1 = one flat, -2 = two flats. Or if you have the number as a positive already, just multiply by -1.
```
eroicaFlats = 3
ksEroica = key.KeySignature(-1 * eroicaFlats)
ksEroica
ksEroica.sharps
```
There is no `.flats` routine:
```
ksEroica.flats
```
## Example: Adjusting notes to fit the Key Signature
Here's a nice study: suppose you had a score like this:
```
m1 = stream.Measure()
m1.timeSignature = meter.TimeSignature('2/4')
m1.keySignature = key.KeySignature(-5)
m1.append([note.Note('D'), note.Note('A')])
m2 = stream.Measure()
m2.append([note.Note('B-'), note.Note('G#')])
p = stream.Part()
p.append([m1, m2])
p.show()
```
Let's pretend that this was played by a young oboe player who was having trouble with the strange key signature. She got the B-flat right, and remembered to play some accidental on the G, but didn't do very well overall. Let's fix these notes so that they fit with the key signature.
Now we could simply do something like this for each note:
```
m1.notes[0].pitch.accidental = pitch.Accidental('flat')
```
But that wouldn't be as great as getting the notes from the Key itself. Let's do that with the accidentalByStep routine:
```
ks = m1.keySignature
for n in p.recurse().notes: # we need to recurse because the notes are in measures...
    nStep = n.pitch.step
    rightAccidental = ks.accidentalByStep(nStep)
    n.pitch.accidental = rightAccidental
p.show()
```
Yep, now our student is ready to play the concert! Though wouldn't this be an easier key?
```
p.transpose(1).show()
```
## Key objects
A Key is a lot like a KeySignature, but much more powerful. Unlike a KeySignature, which we initialize with the number of sharps and flats, we initialize a Key with a tonic string or Pitch:
```
kD = key.Key('D')
kD
bFlat = pitch.Pitch('B-')
kBflat = key.Key(bFlat)
kBflat
```
By default, keys are major, but we can make minor keys by specifying 'minor' as the second argument:
```
kd = key.Key('D', 'minor')
kd
```
Note that the key is represented as lowercase ('d minor' as opposed to 'D minor'). This is a clue as to a shortcut for making minor keys:
```
kg = key.Key('g')
kg
```
We can also take KeySignatures and turn them into Keys by using the `asKey(mode)` method on them:
```
(ksEroica.asKey('major'), ksEroica.asKey('minor'))
```
(In the latter case we should probably have called the variable ksFifthSymphony...)
We can also make church modes:
```
amixy = key.Key('a', 'mixolydian')
amixy
```
If you've forgotten how many sharps or flats are in the key of A mixolydian, you'll be happy to know that all the properties and methods of KeySignatures are also available to Keys:
```
amixy.sharps
amixy.alteredPitches
amixy.transpose('M3')
aDarkKey = key.Key('B--', 'locrian')
aDarkKey.alteredPitches
```
(as a music historian and someone who specializes in history of music theory, I am contractually obliged to mention that "locrian" is not a historic mode and doesn't really exist in actual music before the 20th c. But it's fun to play with).
Keys know their `.mode`:
```
kg.mode, amixy.mode
```
They also know their tonic pitches:
```
kg.tonic, amixy.tonic
```
For major and minor keys, we can get the relative (minor or major) and parallel (minor or major) keys simply:
```
kg.relative
kg.parallel
```
And because two keys are equal if their modes and tonics are the same, this is true:
```
kg.relative.relative == kg
```
This is pretty helpful from time to time:
```
kg.tonicPitchNameWithCase
kg.parallel.tonicPitchNameWithCase
```
Some analysis routines produce keys:
```
bach = corpus.parse('bwv66.6')
bach.id = 'bach66'
bach.analyze('key')
```
The keys from these routines have two extra cool features. They have a certainty measure:
```
fis = bach.analyze('key')
fis.correlationCoefficient
fis.tonalCertainty()
```
Here are some of the other keys that the Bach piece could have been in:
```
fis.alternateInterpretations[0:4]
```
And the least likely:
```
fis.alternateInterpretations[-3:]
c = bach.measures(1, 4).chordify()
for ch in c.recurse().getElementsByClass('Chord'):
    ch.closedPosition(inPlace=True, forceOctave=4)
c.show()
```
Yeah, that passes the smell test to me!
So, how does it know what the key is? The key analysis routines are a variation of the famous (well at least in the small world of computational music theory) algorithm developed by Carol Krumhansl and Mark A. Schmuckler called probe-tone key finding. The distribution of pitches used in the piece are compared to sample distributions of pitches for major and minor keys and the closest matches are reported. (see http://rnhart.net/articles/key-finding/ for more details). `Music21` can be asked to use the sample distributions of several authors, including Krumhansl and Schmuckler's original weights:
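The core computation is easy to sketch with NumPy: build a (duration-weighted) pitch-class histogram of the piece and correlate it against a key profile rotated to each of the twelve possible tonics. The weights below approximate the published Krumhansl-Kessler major profile and are for illustration only; the exact numbers music21 uses may differ:
```
import numpy as np

# Approximate Krumhansl-Kessler major-key profile, indexed C, C#, D, ..., B
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def major_key_correlations(pc_histogram):
    '''Correlate a 12-bin pitch-class histogram against each transposed major profile.'''
    scores = []
    for tonic in range(12):
        profile = np.roll(MAJOR_PROFILE, tonic)  # profile for the key whose tonic is `tonic`
        scores.append(np.corrcoef(pc_histogram, profile)[0, 1])
    return scores  # the index of the maximum is the most likely major tonic
```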
```
bach.analyze('key.krumhanslschmuckler')
```
Though the `key` returned by `.analyze('key')` and `.analyze('key.krumhanslschmuckler')` are the same, the correlationCoefficient is somewhat different. `fis` is the analysis from `.analyze('key')`.
```
fisNew = bach.analyze('key.krumhanslschmuckler')
fisCC = round(fis.correlationCoefficient, 3)
fisNewCC = round(fisNew.correlationCoefficient, 3)
(fisCC, fisNewCC)
```
Calling `.analyze()` on a Stream calls :func:`music21.analysis.discrete.analyzeStream` which then calls an appropriate Class there.
There is another way of looking at the key of a piece and that is looking at differently sized windows of analysis on the piece and seeing what happens every quarter note, every half note, every measure, every two measures, etc. to the top. This plot was created by Jared Sadoian and is explained in the `analysis.windowed` module:
```
bach.flatten().plot('pianoroll')
```
A Key object is derived from a KeySignature object and also a Scale object, which we will explain more about later.
```
k = key.Key('E-')
k.classes
```
But for now, a few methods that are present on scales that might end up being useful for Keys as well include:
```
k.pitchFromDegree(2)
```
(octaves in 4 and 5 are chosen just to give some ordering to the pitches)
```
k.solfeg('G')
```
## Key Context and Note Spelling
`Key` and `KeySignature` objects affect how notes are spelled in some situations. Let's set up a simple situation of a F-natural whole note in D major and then B-flat minor.
```
s = stream.Stream()
s.append(key.Key('D'))
s.append(note.Note('F', type='whole'))
s.append(key.Key('b-', 'minor'))
s.append(note.Note('F', type='whole'))
s2 = s.makeNotation()
s2.show()
```
When we transpose each note up a half step (`n.transpose(1)`), music21 understands that the first F-natural should become F-sharp, while the second one will fit better as a G-flat.
```
for n in s2.recurse().notes:
    n.transpose(1, inPlace=True)
s2.show()
```
## Example: Prepare a vocal exercise in all major keys, ascending by step.
Let's create a simple exercise in playing or singing thirds. I think I remember this from the [First Division Band Method](https://www.google.com/search?q=First+Division+Band+Method&tbm=isch) "Blue Book":
```
pitchStream = stream.Part()
pitchStream.insert(0, meter.TimeSignature('4/4'))
for step in ('c', 'e', 'd', 'f', 'e', 'g', 'f', 'a',
             'g', 'e', 'f', 'd', 'c', 'e', 'c'):
    n = note.Note(step, type='eighth')
    n.pitch.octave = 4
    pitchStream.append(n)
pitchStream.notes[-1].duration.type = 'quarter'
pitchStream.makeMeasures(inPlace=True)
pitchStream.show()
```
This melody does not have a key associated with it. Let's put a Key of C Major at the beginning of the piece:
```
k = key.Key('C')
pitchStream.measure(1).insert(0, k)
pitchStream.show()
```
Note that putting the key of C into the Stream doesn't change what it looks like when we show the Stream, since there are no sharps or flats. But what makes the difference between an instrumental and a vocal exercise is the act of transposition. When we transpose the `Key` object up 1 semitone, to D-flat major, it will show up:
```
k.transpose(1, inPlace=True)
pitchStream.show()
```
Now the key signature is D-flat, but the notes are still in C-major, so we should transpose them also:
```
for n in pitchStream.recurse().notes:
    n.transpose(1, inPlace=True)
pitchStream.show()
```
Notice that we choose a semitone transposition and not a diatonic transposition such as minor second (`"m2"`); minor second would work just as well in this case, but then to do another half-step up, we would need to remember to transpose by an augmented unison (`"A1"`) so that D-flat became D-natural and not E-double-flat. The semitone transposition is smart enough to make sure that the `Key` object remains between six flats and six sharps. Not only that, but the notes will match the best spelling for the current key signature.
```
k.transpose(1, inPlace=True)
for n in pitchStream.recurse().notes:
    n.transpose(1, inPlace=True)
pitchStream.show()

k.transpose(1, inPlace=True)
for n in pitchStream.recurse().notes:
    n.transpose(1, inPlace=True)
pitchStream.show()
```
So, we can make a nice, ascending vocal exercise by varying the transposition amount from 0 to 7 (or however high you can sing) and putting each of the two-measure excerpts together into one Part.
We will introduce the tinyNotation format here, which will be described in the next chapter:
```
out = stream.Part()
for i in range(0, 8):
    pitchStream = converter.parse("tinyNotation: 4/4 c8 e d f e g f a g e f d c e c4")
    if i != 0:
        # remove redundant clefs and time signature
        trebleClef = pitchStream.recurse().getElementsByClass('Clef')[0]
        fourFour = pitchStream.recurse().getElementsByClass('TimeSignature')[0]
        pitchStream.remove(trebleClef, recurse=True)
        pitchStream.remove(fourFour, recurse=True)
    if i % 2 == 0:
        # add a line break at the beginning of every other line:
        pitchStream.measure(1).insert(0, layout.SystemLayout(isNew=True))
    k = key.Key('C')
    pitchStream.measure(1).insert(0, k)
    k.transpose(i, inPlace=True)
    for n in pitchStream.recurse().notes:
        n.transpose(i, inPlace=True)
    for el in pitchStream:
        out.append(el)
out.show()
```
And we can listen to it as well:
```
out.show('midi')
```
That's enough about keys for now, let's move on to a fast way of getting small amounts of music into music21, with :ref:`Chapter 16, Tiny Notation <usersGuide_16_tinyNotation>`
CER002 - Download existing Root CA certificate
==============================================
Use this notebook to download a generated Root CA certificate from a
cluster that installed one using:
- [CER001 - Generate a Root CA
certificate](../cert-management/cer001-create-root-ca.ipynb)
And then to upload the generated Root CA to another cluster use:
- [CER003 - Upload existing Root CA
certificate](../cert-management/cer003-upload-existing-root-ca.ipynb)
If needed, use these notebooks to view and set the Kubernetes
configuration context appropriately to enable downloading the Root CA
from a Big Data Cluster in one Kubernetes cluster, and to upload it to a
Big Data Cluster in another Kubernetes cluster.
- [TSG010 - Get configuration
contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb)
- [SOP011 - Set kubernetes configuration
context](../common/sop011-set-kubernetes-context.ipynb)
Steps
-----
### Parameters
```
local_folder_name = "mssql-cluster-root-ca"
test_cert_store_root = "/var/opt/secrets/test-certificates"
```
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportabilty, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
if which_binary == None:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
return output
else:
return
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("cer002-download-existing-root-ca.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
```
### Get the Kubernetes namespace for the big data cluster
Get the namespace of the Big Data Cluster using the kubectl command line interface.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
```
### Get name of the ‘Running’ `controller` `pod`
```
# Place the name of the 'Running' controller pod in variable `controller`
controller = run(f'kubectl get pod --selector=app=controller -n {namespace} -o jsonpath={{.items[0].metadata.name}} --field-selector=status.phase=Running', return_output=True)
print(f"Controller pod name: {controller}")
```
### Create a temporary folder to hold Root CA certificate
```
import os
import tempfile
import shutil
path = os.path.join(tempfile.gettempdir(), local_folder_name)
if os.path.isdir(path):
    shutil.rmtree(path)
os.mkdir(path)
```
### Copy Root CA certificate from `controller` `pod`
```
import os
cwd = os.getcwd()
os.chdir(path) # Workaround kubectl bug on Windows, can't put c:\ on kubectl cp cmd line
run(f'kubectl cp {controller}:{test_cert_store_root}/cacert.pem cacert.pem -c controller -n {namespace}')
run(f'kubectl cp {controller}:{test_cert_store_root}/cakey.pem cakey.pem -c controller -n {namespace}')
os.chdir(cwd)
print('Notebook execution complete.')
```
Related
-------
- [CER001 - Generate a Root CA
certificate](../cert-management/cer001-create-root-ca.ipynb)
- [CER003 - Upload existing Root CA
certificate](../cert-management/cer003-upload-existing-root-ca.ipynb)
- [CER010 - Install generated Root CA
locally](../cert-management/cer010-install-generated-root-ca-locally.ipynb)
# Exploratory Data Analysis Case Study -
##### Conducted by Nirbhay Tandon & Naveen Sharma
## 1.Import libraries and set required parameters
```
#import all the libraries and modules
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import re
from scipy import stats
# Enable autocomplete in Jupyter Notebook.
%config IPCompleter.greedy=True

# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
import os
## Set the max display columns to None so that pandas doesn't sandwich the output
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 40)
```
### Reading and analysing Data
```
applicationData=pd.read_csv("./application_data.csv")
applicationData.head()
```
## 2. Data Inspection
```
#shape of application_data.csv data
applicationData.shape
#take information about the data
applicationData.info()
#get the information about the numerical data
applicationData.describe()
## print the column names for application_data.csv
applicationData.columns
## print the various datatypes of application_data.csv
applicationData.dtypes
```
## 3. Data Cleaning & Quality Check
In this section we will perform various checks and balances on the application_data.csv file.
We will:
* Perform a check for the number of missing/null values on each column
* Perform a check for the percentage of missing/null values of each column
* Drop the columns that have a high percentage of null values, i.e. over 60%
* Print the names of the dropped columns
* Verify that the columns were dropped by comparing the shape of the new dataframe created
* For columns with around 13% of null values we will discuss the best way to handle the missing/null values in the columns
* Check the data types of these columns and determine if they are categorical in nature or not
* Check the data types for all the columns in the dataframe and convert them to numerical data types if required
* Check for any outliers in any 3 numerical columns and treat them accordingly
* Create bins for continuous variables and analyse them
```
### Let us create a utility function that generates a table of null-value counts and percentages for a dataframe.
### We will utilize this function extensively throughout the notebook.
def generateNullValuesPercentageTable(dataframe):
totalNullValues = dataframe.isnull().sum().sort_values(ascending=False)
percentageOfNullValues = round((dataframe.isnull().sum()*100/len(dataframe)).sort_values(ascending=False),2)
columnNamesWithPrcntgOfNullValues = pd.concat([totalNullValues, percentageOfNullValues], axis=1, keys=['Total Null Values', 'Percentage of Null Values'])
return columnNamesWithPrcntgOfNullValues
## Check the number of null values in each column and display them in
## descending order along with the percentage of null values
generateNullValuesPercentageTable(applicationData)
### Assess the shape of the dataframe before dropping
### columns with a high percentage of
### null values
print("The Initial shape of the DataFrame is: ", applicationData.shape)
#Drop all the columns where the
## percentage of missing values is above 60% in application_data.csv
droppedColumns = applicationData.columns[applicationData.isnull().mean() > 0.60]
applicationDataAfterDroppedColumns = applicationData.drop(droppedColumns, axis = 1)
print("The new shape of the DataFrame is: ", applicationDataAfterDroppedColumns.shape)
## analysing the dataframe is correct after dropping columns
applicationDataAfterDroppedColumns.head()
```
### Observation:
As you can see, the shape of the data has changed from (307511, 122) to (307511, 105), which means we have dropped 17 columns that had over 60% null values. The dropped columns are listed below.
```
print("The columns that have been dropped are: ", droppedColumns)
## print the percentage of columns with null values in the
## new data frame after the columns have been dropped
generateNullValuesPercentageTable(applicationDataAfterDroppedColumns)
#### Check dataframe shape to confirm no other columns were dropped
applicationDataAfterDroppedColumns.shape
```
### Observation:
As you can see above, there are still a few columns that have above 30% null/missing values. We can deal with those null/missing values using various methods of imputation.
##### Some key points:
- The columns with above 60% null values have been dropped successfully
- The column with the highest percentage of null values after the drop is "LANDAREA_MEDI" with 59.38% null values, whereas earlier it was "COMMONAREA_MEDI" with 69.87% null values
- The new shape of the dataframe is (307511, 105)
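Before deciding on an imputation method, it can help to inspect the candidate fill values for one of the remaining columns. A minimal sketch on the dataframe built above (the column chosen here is just an illustration):
```
## Compare candidate imputation values (mean / median / mode) for one column
col = 'EXT_SOURCE_2'
print("Null count:", applicationDataAfterDroppedColumns[col].isnull().sum())
print("Mean:  ", applicationDataAfterDroppedColumns[col].mean())
print("Median:", applicationDataAfterDroppedColumns[col].median())
print("Mode:  ", applicationDataAfterDroppedColumns[col].mode()[0])
```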
Checking the dataframe after dropping the high-null columns
```
applicationDataAfterDroppedColumns.head()
### Analyzing Columns with null values around 14% to determine
### what might be the best way to impute such values
listOfColumnsWithLessValuesOfNull = applicationDataAfterDroppedColumns.columns[applicationDataAfterDroppedColumns.isnull().mean() < 0.14]
applicationDataWithLessPrcntgOfNulls = applicationDataAfterDroppedColumns.loc[:, listOfColumnsWithLessValuesOfNull]
print(applicationDataWithLessPrcntgOfNulls.shape)
applicationDataWithLessPrcntgOfNulls.head(20)
### Analysing columns with around 13.5% null values
columnsToDescribe = ['AMT_REQ_CREDIT_BUREAU_QRT', 'AMT_REQ_CREDIT_BUREAU_YEAR', 'AMT_REQ_CREDIT_BUREAU_MON', 'AMT_REQ_CREDIT_BUREAU_DAY', 'AMT_REQ_CREDIT_BUREAU_HOUR','AMT_REQ_CREDIT_BUREAU_WEEK', 'OBS_30_CNT_SOCIAL_CIRCLE', 'OBS_60_CNT_SOCIAL_CIRCLE', 'EXT_SOURCE_2']
applicationDataAfterDroppedColumns[columnsToDescribe].describe()
### Let us plot a boxplot to see the various variables
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(40,25))
sns.boxplot(data=applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_YEAR, ax=axes[0][0])
axes[0][0].set_title('AMT_REQ_CREDIT_BUREAU_YEAR')
sns.boxplot(data=applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_MON, ax=axes[0][1])
axes[0][1].set_title('AMT_REQ_CREDIT_BUREAU_MON')
sns.boxplot(data=applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_DAY, ax=axes[1][0])
axes[1][0].set_title('AMT_REQ_CREDIT_BUREAU_DAY')
sns.boxplot(applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_HOUR, ax=axes[1][1])
axes[1][1].set_title('AMT_REQ_CREDIT_BUREAU_HOUR')
sns.boxplot(applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_WEEK, ax=axes[2][0])
axes[2][0].set_title('AMT_REQ_CREDIT_BUREAU_WEEK')
plt.show()
```
### Observation
As you can see above, when we take a look at the columns that have a low number of null values, the shape of the data changes to (307511, 71) compared to (307511, 105). We lose 34 columns in the process.
Checking columns having a low share of null values (around 13% or so) and deciding the best metric to impute the missing/null values, based on whether the column/variable is 'Categorical' or 'Continuous' (a short verification sketch follows this list):
- AMT_REQ_CREDIT_BUREAU_HOUR (99.4% of the values are 0.0, with 4.0 and 3.0 being outliers. It is safe to impute the missing values with 0.0)
- AMT_REQ_CREDIT_BUREAU_DAY (99.4% of the values are 0.0, with 9.0 and 8.0 being outliers. It is safe to impute the missing values with 0.0)
- AMT_REQ_CREDIT_BUREAU_WEEK (96.8% of the values are 0.0, with 8.0 and 7.0 being outliers. It is safe to impute the missing values with 0.0)
- AMT_REQ_CREDIT_BUREAU_MON (83.6% of the values are 0.0. It is safe to impute the missing values with the mode: 0.0)
- AMT_REQ_CREDIT_BUREAU_YEAR (It seems fine to use the median value 1.0 here for imputing the missing values)
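As a quick sanity check on the percentages quoted above, the normalised value counts can be inspected directly; a small sketch:
```
## Share of each value in the credit-bureau enquiry columns (NaNs excluded)
for col in ['AMT_REQ_CREDIT_BUREAU_HOUR', 'AMT_REQ_CREDIT_BUREAU_DAY',
            'AMT_REQ_CREDIT_BUREAU_WEEK', 'AMT_REQ_CREDIT_BUREAU_MON']:
    print(col)
    print(applicationDataAfterDroppedColumns[col].value_counts(normalize=True).head(3), '\n')
```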
```
### Checking for categorical data
categoricalDataColumns = applicationDataAfterDroppedColumns.nunique().sort_values()
categoricalDataColumns
```
### Observation:
Given the number of columns with only a few unique values, we will convert all columns with up to 5 unique values into categorical columns
```
listOfColumnsWithMaxTenUniqueValues = [i for i in applicationDataAfterDroppedColumns.columns if applicationDataAfterDroppedColumns[i].nunique() <= 5]
for col in listOfColumnsWithMaxTenUniqueValues:
applicationDataAfterDroppedColumns[col] = applicationDataAfterDroppedColumns[col].astype('category')
applicationDataAfterDroppedColumns.shape
applicationDataAfterDroppedColumns.head()
## Check for datatypes of all columns in the new dataframe
applicationDataAfterDroppedColumns.info()
```
### Observation:
We notice above that after dropping the null columns we still have:
- 43 Categorical
- 48 Float
- 6 Integer
- 8 Object data types
```
## Convert the categorical data columns into individual columns with numeric values for better analysis
## we will do this using one-hot-encoding method
convertedCategoricalColumnsDataframe = pd.get_dummies(applicationDataAfterDroppedColumns, columns=listOfColumnsWithMaxTenUniqueValues, prefix=listOfColumnsWithMaxTenUniqueValues)
convertedCategoricalColumnsDataframe.head()
## Converting these columns has changed the shape of the data to
print("Shape of Application Data after categorical column conversion: ", convertedCategoricalColumnsDataframe.shape)
```
### Observation
As you can see above, we have successfully converted the various categorical datatypes into their own columns.
The new shape of the data is (307511, 158) compared to (307511, 105). We have introduced 53 new columns. These will help us identify the best possible method to use for imputing values.
```
### Count the number of missing values in the new dataframe
generateNullValuesPercentageTable(convertedCategoricalColumnsDataframe)
```
### Observation
Let us take the following columns: AMT_REQ_CREDIT_BUREAU_YEAR, AMT_REQ_CREDIT_BUREAU_MON, OBS_30_CNT_SOCIAL_CIRCLE, OBS_60_CNT_SOCIAL_CIRCLE, EXT_SOURCE_2.
Determine their datatypes and, using the describe output above, identify which values can be used to impute the nulls in these columns.
```
listOfCols = ['AMT_REQ_CREDIT_BUREAU_YEAR', 'AMT_REQ_CREDIT_BUREAU_MON', 'OBS_30_CNT_SOCIAL_CIRCLE', 'OBS_60_CNT_SOCIAL_CIRCLE', 'EXT_SOURCE_2']
convertedCategoricalColumnsDataframe[listOfCols].dtypes
applicationDataAfterDroppedColumns['AMT_REQ_CREDIT_BUREAU_HOUR'].fillna(0.0, inplace = True)
applicationDataAfterDroppedColumns['AMT_REQ_CREDIT_BUREAU_HOUR'] = applicationDataAfterDroppedColumns['AMT_REQ_CREDIT_BUREAU_HOUR'].astype(int)
## convert DAYS_BIRTH to years
def func_age_yrs(x):
return round(abs(x/365),0)
applicationDataAfterDroppedColumns['DAYS_BIRTH'] = applicationDataAfterDroppedColumns['DAYS_BIRTH'].apply(func_age_yrs)
```
### Observation
In all the selected columns the median is a reasonable value to impute with. The medians all correspond to 0.00 except for EXT_SOURCE_2, where the mean (5.143927e-01) and the median (5.659614e-01) are roughly similar, so either of those values could be used for imputation.
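A minimal sketch of the median-based fill described above, done on a copy of the dataframe so it does not interfere with the outlier treatment that follows:
```
## Sketch: median-impute the selected columns on a copy of the dataframe
medianImputeCols = ['AMT_REQ_CREDIT_BUREAU_YEAR', 'AMT_REQ_CREDIT_BUREAU_MON',
                    'OBS_30_CNT_SOCIAL_CIRCLE', 'OBS_60_CNT_SOCIAL_CIRCLE', 'EXT_SOURCE_2']
imputedCopy = applicationDataAfterDroppedColumns.copy()
for col in medianImputeCols:
    imputedCopy[col] = imputedCopy[col].fillna(imputedCopy[col].median())
## Confirm that no nulls remain in the imputed copy
imputedCopy[medianImputeCols].isnull().sum()
```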
Let us now check for outliers on 6 numerical columns.
For this we can use our dataset from after we dropped the columns with over 60% null values.
```
### We will use boxplots to handle the outliers on AMT_CREDIT, AMT_ANNUITY, AMT_GOODS_PRICE
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
The box plots clearly show many outliers that have to be removed for a better analysis. In the next part of the code we remove them with the function "remove_outlier", which accepts a dataframe and a column name (the column in which we want to remove outliers) as arguments and returns the dataframe with the outliers removed.
Analysing outliers in Numeric variables and Handling/Treating them with appropriate methods.
- AMT_REQ_CREDIT_BUREAU_HOUR (99.4% of the values are 0.0, with '4' and '3' being outliers; these should be retained)
Considering that this is the number of enquiries the company made to the credit bureau, the outliers could mean the company was extremely cautious in deciding whether to grant credit to this particular client. This might indicate a 'High Risk' client and can influence the Target variable, so it is better to retain these outlier values.
- AMT_INCOME_TOTAL (clearly 117000000.0 is an outlier here)
This outlier can be dropped so that it does not skew the analysis; we can use the IQR to remove it.
- DAYS_BIRTH (there is no outlier in this column)
- DAYS_EMPLOYED (clearly 1001 is an outlier here and should be deleted; 18% of the column values are 1001)
Since this represents the number of years of employment as of the application date, these rows should be deleted. Values between 40 and 49 years of employment look questionable as well, but let us not drop them for now, allowing for exceptional cases.
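The shares quoted above can be verified directly from the data; a short sketch (the exact sentinel value in DAYS_EMPLOYED may differ between versions of the dataset, so we simply look at the dominant values):
```
## Check the dominant value share for DAYS_EMPLOYED and the income maximum
print(applicationDataAfterDroppedColumns['DAYS_EMPLOYED'].value_counts(normalize=True).head(3))
print("Max AMT_INCOME_TOTAL:", applicationDataAfterDroppedColumns['AMT_INCOME_TOTAL'].max())
```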
Another way to see the distribution is by using a distribution plot.
```
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.distplot(applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.distplot(applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.distplot(applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.distplot(applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
As you can see from the distplots above, a few of the columns are far from normally distributed.
The 'DAYS_EMPLOYED' column is heavily skewed towards the negative side of the plot.
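The skew can also be quantified rather than judged only from the plots; a short sketch:
```
## Skewness of the plotted columns (negative values indicate a left-skewed distribution)
skewCols = ['AMT_CREDIT', 'AMT_ANNUITY', 'AMT_GOODS_PRICE',
            'AMT_INCOME_TOTAL', 'DAYS_BIRTH', 'DAYS_EMPLOYED']
applicationDataAfterDroppedColumns[skewCols].skew()
```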
```
#Function for removing outliers using the IQR rule
def remove_outlier(df, col_name):
    q1 = df[col_name].quantile(0.25)
    q3 = df[col_name].quantile(0.75)
    iqr = q3 - q1  # Interquartile range
    lower = q1 - 1.5 * iqr
    upper = q3 + 1.5 * iqr
    # Keep only the rows that fall inside the whiskers
    dfOutput = df.loc[(df[col_name] > lower) & (df[col_name] < upper)]
    return dfOutput
cols=['AMT_CREDIT','AMT_ANNUITY', 'AMT_GOODS_PRICE', 'AMT_INCOME_TOTAL', 'DAYS_EMPLOYED']
for i in cols:
applicationDataAfterDroppedColumns=remove_outlier(applicationDataAfterDroppedColumns,i)
applicationDataAfterDroppedColumns.head()
### Plot the box plot again after removing outliers
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
After dropping the outliers we observe that very few outlier points remain on the box plots above.
```
### Plotting the distribution plot after removing the outliers
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.distplot(applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.distplot(applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.distplot(applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.distplot(applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
Based on the distplots above you can see that there is a marked difference in the minimum values for various columns, particularly the DAYS_EMPLOYED column, where the minimum value increased from around -7500 to -6000. This shows that the treatment of outliers was successful.
```
applicationDataAfterDroppedColumns.shape
```
### Observation
We observe that after removing the outliers the boxplots show a slight shift in the maximum ranges.
The distribution plots give a more significant display of the changes. There is a significant reduction in the maximum ranges on the x-axis for the variables we treated.
As we can see above, after treating the outliers for various columns the shape of our dataset has changed significantly. The shape of the dataframe after dropping columns with high number of null values was (307511, 105) & after treating for outliers is (209624, 105).
Let us now create bins for 3 different continous variables and plot them. We will use AMT_INCOME_TOTAL, AMT_CREDIT & DAYS_BIRTH to create our bins.
```
## Creating bins for Income range based on AMT_INCOME_TOTAL
bins=[0,100000,200000,300000,400000,500000,600000,20000000]
range_period=['0-100000','100000-200000','200000-300000','300000-400000','400000-500000','500000-600000','600000 and above']
applicationDataAfterDroppedColumns['Income_amount_range']=pd.cut(applicationDataAfterDroppedColumns['AMT_INCOME_TOTAL'],bins,labels=range_period)
plotIncomeAmountRange = applicationDataAfterDroppedColumns['Income_amount_range'].value_counts().plot(kind='bar', title='Income Range Bins Plot')
plotIncomeAmountRange.set_xlabel('Income Range Bins')
plotIncomeAmountRange.set_ylabel('Count')
```
### Observation
As you can clearly see from the plot above:
- Most people earn between 100000-200000
- The number of people who earn between 200000-300000 is less than half the number in the 100000-200000 range
- No one earns above 300000.
```
#create bins for credit anount
bins=[0,50000,100000,150000,200000,250000,300000,400000]
range_period=['0-50000','50000-100000','100000-150000','150000-200000','200000-250000','250000-300000','300000-400000']
applicationDataAfterDroppedColumns['credit_amount_range']=pd.cut(applicationDataAfterDroppedColumns['AMT_CREDIT'],bins,labels=range_period)
plotCreditAmountRange = applicationDataAfterDroppedColumns['credit_amount_range'].value_counts().plot(kind='bar', title='Credit Amount Range Plots')
plotCreditAmountRange.set_xlabel('Credit Amount Range Bins')
plotCreditAmountRange.set_ylabel('Count')
```
### Observation
As you can see from the plots above
- Very few people borrow between 0-50000
- The highest number of people borrow between 250000-300000
```
##Creating bins for age range for DAYS_BIRTH in years
bins = [10, 20, 30, 40, 50, 60, 70, 80]
labels = ['10-20','21-30','31-40','41-50','51-60','61-70','71-80']
applicationDataAfterDroppedColumns['BINNED_AGE'] = pd.cut(applicationDataAfterDroppedColumns['DAYS_BIRTH'], bins=bins,labels=labels)
plotAgeRange = applicationDataAfterDroppedColumns['BINNED_AGE'].value_counts().plot(kind='bar', title='Age Range Plot')
plotAgeRange.set_xlabel('Age Range')
plotAgeRange.set_ylabel('Count')
```
### Observation
- People between the ages of 71-80 & 10-20 are not borrowing any money.
- For people in the age range of 10-20, no borrowing could suggest that children/teenagers/young adults could have just opened new bank accounts with their parents or have just joined university so do not have a need of borrowing money
- People between the ages of 31-40 account for a significantly higher number of borrowers; this could be suggestive of various personal expenses, and it would be beneficial for the firm to identify why they borrow more so that it can introduce newer products at more competitive interest rates for these customers
# 4. Data Analysis
In this section we will perform indepth analysis on the application_data.csv file.
This will be achieved by:
- Checking the imbalance percentage in the dataset
- Dividing the dataset based on the "TARGET" column into 2 separate dataframes
- Performing univariate analysis for categorical variables on both Target = 0 & Target = 1 columns
- Identifying the correlation between the numerical columns for both Target = 0 & Target = 1 columns
- Comparing the results across continous variables
- Performing bivariate analysis for numerical variables on both Target = 0 & Target = 1 columns
## Selecting relevant columns from 'applicationDataAfterDroppedColumns' which would be used for EDA further
- Selecting only the relevant columns(25 or so) from 'applicationDataAfterDroppedColumns' i.e. removing those columns which aren't relevant for analysis out of a total of 105 columns
```
applicationDataWithRelevantColumns = applicationDataAfterDroppedColumns.loc[:,['SK_ID_CURR',
'TARGET',
'NAME_CONTRACT_TYPE',
'CODE_GENDER',
'FLAG_OWN_CAR',
'FLAG_OWN_REALTY',
'CNT_CHILDREN',
'AMT_INCOME_TOTAL',
'AMT_CREDIT',
'AMT_ANNUITY',
'AMT_GOODS_PRICE',
'NAME_INCOME_TYPE',
'NAME_EDUCATION_TYPE',
'NAME_FAMILY_STATUS',
'NAME_HOUSING_TYPE',
'REGION_POPULATION_RELATIVE',
'BINNED_AGE',
'DAYS_EMPLOYED',
'DAYS_REGISTRATION',
'DAYS_ID_PUBLISH',
'FLAG_CONT_MOBILE',
'OCCUPATION_TYPE',
'CNT_FAM_MEMBERS',
'REGION_RATING_CLIENT',
'REGION_RATING_CLIENT_W_CITY',
'ORGANIZATION_TYPE',
'AMT_REQ_CREDIT_BUREAU_HOUR',
'AMT_REQ_CREDIT_BUREAU_DAY']]
```
We will now use applicationDataWithRelevantColumns as our dataframe to run further analysis
```
### Checking shape of the new dataframe
applicationDataWithRelevantColumns.shape
applicationDataWithRelevantColumns['CODE_GENDER'].value_counts()
```
Since the number of Females is higher than Males, we can safely impute XNA values with F.
```
applicationDataWithRelevantColumns.loc[applicationDataWithRelevantColumns['CODE_GENDER']=='XNA','CODE_GENDER']='F'
applicationDataWithRelevantColumns['CODE_GENDER'].value_counts()
#Check the total percentage of target value as 0 and 1.
imbalancePercentage = applicationDataWithRelevantColumns['TARGET'].value_counts()*100/len(applicationDataAfterDroppedColumns)
imbalancePercentage
imbalancePercentage.plot(kind='bar',rot=0)
```
### Observation
We can easily see that this data is highly imbalanced: rows with target value 0 make up about 90.61% and rows with target value 1 only about 9.39%.
This also means that only about 9.4% of all the loan applicants default while paying back their loans.
```
#Splitting the data based on target values
one_df = applicationDataWithRelevantColumns.loc[applicationDataWithRelevantColumns['TARGET']==1]
zero_df = applicationDataWithRelevantColumns.loc[applicationDataWithRelevantColumns['TARGET']==0]
## Inspecting data with TARGET = 1
one_df.head()
one_df.info()
one_df.shape
## Inspecting data with TARGET = 0
zero_df.head()
zero_df.describe()
zero_df.shape
zero_df.info()
```
We will now use the following columns to perform Univariate & Bivariate analysis
- CODE_GENDER
- NAME_CONTRACT_TYPE
- NAME_INCOME_TYPE
- NAME_EDUCATION_TYPE
- NAME_FAMILY_STATUS
- NAME_HOUSING_TYPE
- OCCUPATION_TYPE
- ORGANIZATION_TYPE
### Univariate Analysis:-
Univariate Analysis on one_df dataset
```
#Univariate Analysis for categorical variable 'CODE_GENDER' in dataframe one_df.
sns.countplot(x ='CODE_GENDER', data = one_df)
plt.title('Number of applications by Gender')
plt.ylabel('Number of Applications')
plt.xlabel('Gender')
plt.show()
```
### Observation
As you can see above the number of Female applicants is higher than the number of Male applicants.
```
#Univariate Analysis for categorical variable 'NAME_EDUCATION_TYPE' in dataframe T1.
sns.countplot(x ='NAME_EDUCATION_TYPE', data = one_df)
plt.title("Number of applications by Client's Education Level")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Education Level")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
From the plot above we can infer that:
- The highest number of applications for credit were made by people having Secondary/ secondary special education and these people defaulted on being able to pay back their loans. This could mean that they face trouble in being able to manage their money effectively or have jobs that pay less/are contractual in nature
- People with higher education also applied for a credit and defaulted on their loans
```
#Univariate Analysis for categorical variable 'NAME_CONTRACT_TYPE' in dataframe one_df.
sns.countplot(x ='NAME_CONTRACT_TYPE', data = one_df)
plt.title('Number of applications by Contract Type')
plt.ylabel('Number of Applications')
plt.xlabel('Contract Type')
plt.show()
```
### Observation
- A high number of applicants who defaulted applied for cash loans
```
#Univariate Analysis for categorical variable 'NAME_INCOME_TYPE' in dataframe one_df.
sns.countplot(x ='NAME_INCOME_TYPE', data = one_df)
plt.title("Number of applications by Client's Income Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Income Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Mostly working professionals apply for credit and are also the ones that default on being able to payback the loans on time
- State servants have a very low number of defaulters
```
#Univariate Analysis for categorical variable 'NAME_FAMILY_STATUS' in dataframe one_df.
sns.countplot(x ='NAME_FAMILY_STATUS', data = one_df)
plt.title("Number of applications by Client's Family Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Family Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Married applicants make a higher number of applications as compared to other categories
- It would be beneficial for the bank to introduce newer products for people in such a category to attract more customers
```
#Univariate Analysis for categorical variable 'NAME_HOUSING_TYPE' in dataframe one_df.
sns.countplot(x ='NAME_HOUSING_TYPE', data = one_df)
plt.title("Number of applications by Client's Housing Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Housing Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- People who live in their own apartment/house apply for loans almost 160 times more than those who live with their parents.
- People living in office apartments default significantly less. This could be because their houses are rent free or they pay minimum charges to live in the house.
```
#Univariate Analysis for categorical variable 'OCCUPATION_TYPE' in dataframe one_df.
sns.countplot(x ='OCCUPATION_TYPE', data = one_df)
plt.title("Number of applications by Client's Occupation Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Occupation Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Labourers apply for a lot of loans and default on repaying them. This could be because of the contractual nature of their work and the unsteady, low income they might earn from their daily jobs
- IT & HR Staff make very few applications for credit and default the least on their loan applications. This could be, in stark contrast to the labourers, because of the stable job & salaried nature of their work. Thus enabling them to be better at handling monthly expenses.
```
# Since there are sub-categories like "type 1", "type 2" etc. under categories such as Business Entity, Trade etc.,
# there are a lot of categories, making the data difficult to analyse.
# It's better to drop the sub-types and keep just the main category.
org_type_replacements = {
    **{f"Business Entity Type {i}": "Business Entity" for i in range(1, 4)},
    **{f"Trade: type {i}": "Trade" for i in range(1, 8)},
    **{f"Transport: type {i}": "Transport" for i in range(1, 5)},
    **{f"Industry: type {i}": "Industry" for i in range(1, 14)},
}
one_df.ORGANIZATION_TYPE = one_df.ORGANIZATION_TYPE.replace(org_type_replacements)
one_df['ORGANIZATION_TYPE'].value_counts()
#Univariate Analysis for categorical variable 'ORGANIZATION_TYPE' in dataframe one_df.
plt.figure(figsize = (14,14))
sns.countplot(x ='ORGANIZATION_TYPE', data = one_df)
plt.title("Number of applications by Client's Organization Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Organization Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Based on the plot above we can see that Business Entity employees have the maximum number of loan applications
- Religious people, priests etc. don't seem to be making any credit applications at all
- Self-employed people also make a lot of loan applications. This could be to boost their business or to repay other loans.
##### Continuous - Continuous Bivariate Analysis for one_df dataframe
```
## Plotting cont-cont Client Income vs Credit Amount
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="AMT_CREDIT",
hue="CODE_GENDER", style="CODE_GENDER", data=one_df)
plt.xlabel('Income of client')
plt.ylabel('Credit Amount of loan')
plt.title('Client Income vs Credit Amount')
plt.show()
```
### Observation
- We do see some outliers here, wherein females with an income of less than 50000 have applied for loans with a credit amount of approximately 1300000
- Most of the loans seem to be concentrated between credit amounts of 200000 & 6000000 for incomes ranging from 50000-150000
```
## Plotting cont-cont Client Income vs Region population
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="REGION_POPULATION_RELATIVE",
hue="CODE_GENDER", style="CODE_GENDER", data=one_df)
plt.xlabel('Income of client')
plt.ylabel('Population of region where client lives')
plt.title('Client Income vs Region population')
plt.show()
```
### Observation
- Very few people live in highly dense/populated regions
- Most of the clients live in regions with a population density between 0.00 and 0.04
##### Univariate analysis for zero_df dataframe
```
#Univariate Analysis for categorical variable 'CODE_GENDER' in dataframe zero_df.
sns.countplot(x ='CODE_GENDER', data = zero_df)
plt.title('Number of applications by Gender')
plt.ylabel('Number of Applications')
plt.xlabel('Gender')
plt.show()
```
### Observation
As you can see above the number of Female applicants is higher than the number of Male applicants.
```
#Univariate Analysis for categorical variable 'NAME_CONTRACT_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_CONTRACT_TYPE', data = zero_df)
plt.title('Number of applications by Contract Type')
plt.ylabel('Number of Applications')
plt.xlabel('Contract Type')
plt.show()
```
### Observation
Applicants prefer to apply more for cash loans rather than revolving loans
```
#Univariate Analysis for categorical variable 'NAME_INCOME_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_INCOME_TYPE', data = zero_df)
plt.title("Number of applications by Client's Income Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Income Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Working people make the most applications and are able to successfully repay their loans as well.
- The number of applications from Students, Pensioners, Businessmen and applicants on Maternity leave is close to 0. This could be due to a multitude of reasons.
```
#Univariate Analysis for categorical variable 'NAME_EDUCATION_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_EDUCATION_TYPE', data = zero_df)
plt.title("Number of applications by Client's Education Level")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Education Level")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
From the plot above we can infer that:
- The highest number of applications for credit were made by people having Secondary/ secondary special education and these people did not default on being able to pay back their loans.
- People with higher education also applied for a credit and were able to repay them successfully
```
#Univariate Analysis for categorical variable 'NAME_FAMILY_STATUS' in dataframe zero_df.
sns.countplot(x ='NAME_FAMILY_STATUS', data = zero_df)
plt.title("Number of applications by Client's Family Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Family Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
From the plot above we can infer that:
- Married people apply for credit the most.
- Married people are able to repay their loans without any defaults as well
```
#Univariate Analysis for categorical variable 'NAME_HOUSING_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_HOUSING_TYPE', data = zero_df)
plt.title("Number of applications by Client's Housing Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Housing Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- People who live in their own apartment/house apply for loans almost 160 times more than those who live with their parents.
- People living in office apartments apply for loans significantly less. This could be because their houses are rent free or they pay minimum charges to live in the house.
- People in rented apartments apply for loans significantly less. This could be because the added expense of paying rent and other utility bills leaves them with less capital to pay back a loan.
```
#Univariate Analysis for categorical variable 'OCCUPATION_TYPE' in dataframe zero_df.
sns.countplot(x ='OCCUPATION_TYPE', data = zero_df)
plt.title("Number of applications by Client's Occupation Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Occupation Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Labourers apply for a lot of loans.
- IT & HR Staff make very few applications for credit. This could be, in stark contrast to the labourers, because of the stable job & salaried nature of their work. Thus enabling them to be better at handling monthly expenses.
```
# Collapse the organization sub-categories into their main category for zero_df as well,
# using the same mapping as for one_df above.
org_type_replacements = {
    **{f"Business Entity Type {i}": "Business Entity" for i in range(1, 4)},
    **{f"Trade: type {i}": "Trade" for i in range(1, 8)},
    **{f"Transport: type {i}": "Transport" for i in range(1, 5)},
    **{f"Industry: type {i}": "Industry" for i in range(1, 14)},
}
zero_df.ORGANIZATION_TYPE = zero_df.ORGANIZATION_TYPE.replace(org_type_replacements)
zero_df['ORGANIZATION_TYPE'].value_counts()
#Univariate Analysis for categorical variable 'ORGANIZATION_TYPE' in dataframe zero_df.
plt.figure(figsize = (14,14))
sns.countplot(x ='ORGANIZATION_TYPE', data = zero_df)
plt.title("Number of applications by Client's Organization Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Organization Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Based on the plot above we can see that Business Entity employees have the maximum number of loan applications
- Religious people, priests etc. don't seem to be making a lot of credit applications at all. They are able to repay their loans on time as well.
- Self-employed people also make a lot of loan applications. This could be to boost their business or to repay other loans.
### Bivariate Analysis for zero_df
```
### Let us create a helper function to help with
### plotting various graphs
def uniplot(df, col, title, hue=None):
    # Common styling for all count plots used below
    sns.set_style('whitegrid')
    sns.set_context('talk')
    plt.rcParams["axes.labelsize"] = 20
    plt.rcParams['axes.titlesize'] = 22
    plt.rcParams['axes.titlepad'] = 30
    # Widen the figure based on the number of categories and hue levels
    temp = pd.Series(data=hue)
    fig, ax = plt.subplots()
    width = len(df[col].unique()) + 7 + 4 * len(temp.unique())
    fig.set_size_inches(width, 8)
    plt.xticks(rotation=45)
    plt.title(title)
    ax = sns.countplot(data=df, x=col, order=df[col].value_counts().index, hue=hue,
                       palette='magma')
    plt.show()
# Plotting for income range
uniplot(zero_df,col='NAME_INCOME_TYPE',title='Distribution of Income type',hue='CODE_GENDER')
```
### Observation
- For the income types ‘Working’, ‘Commercial associate’ and ‘State Servant’ the number of credit applications is higher than for the others.
- Females make more credit applications than males in all of these categories.
```
uniplot(zero_df,col='NAME_CONTRACT_TYPE',title='Distribution of contract type',hue='CODE_GENDER')
```
### Observation
- The ‘Cash loans’ contract type has a higher number of credits than the ‘Revolving loans’ type.
- Here too, females apply for credit a lot more than males.
```
uniplot(zero_df,col='NAME_FAMILY_STATUS',title='Distribution of Family status',hue='CODE_GENDER')
```
### Observation
- As observed above the number of married females applying for loans is almost 3.5 times the number of single females.
- No male widowers are applying for credit
```
uniplot(zero_df,col='NAME_EDUCATION_TYPE',title='Distribution of education level',hue='CODE_GENDER')
```
### Observation
- No person with an 'Academic Degree' is applying for a loan
- The number of females with 'Higher Education' that apply for a loan is almost double the number of males for the same category
```
uniplot(zero_df,col='NAME_HOUSING_TYPE',title='Distribution of Housing Type',hue='CODE_GENDER')
```
### Observation
- Females living in their own apartments/houses apply for more loans and are able to successfully payback.
- A very small number of females living in Co-op apartments apply for loans
```
uniplot(zero_df,col='OCCUPATION_TYPE',title='Distribution Occupation Type',hue='CODE_GENDER')
```
### Observation
- Male Labourers & Drivers take more loans and are able to successfully pay them back on time.
- Female Care staff & Sales staff also take loans and pay them back on time
### Bivariate Analysis on one_df
Later in this section we also compute the correlation between numerical columns for the records having TARGET value 1.
```
uniplot(one_df,col='NAME_INCOME_TYPE',title='Distribution of Income type',hue='CODE_GENDER')
```
### Observation
- For the income types ‘Working’, ‘Commercial associate’ and ‘State Servant’ the number of credit applications is higher than for the others.
- Females make more credit applications than males in all of these categories.
```
uniplot(one_df,col='NAME_CONTRACT_TYPE',title='Distribution of contract type',hue='CODE_GENDER')
```
### Observation
- The ‘Cash loans’ contract type has a higher number of credits than the ‘Revolving loans’ type.
- Here too, females apply for credit a lot more than males.
- Since one_df holds the defaulters, females also account for a large share of the defaults in this group, in line with their higher application volume
```
uniplot(one_df,col='NAME_FAMILY_STATUS',title='Distribution of Family status',hue='CODE_GENDER')
```
### Observation
- As observed above the number of married females applying for loans is almost 3.5 times the number of single females.
- No male widowers are applying for credit
- Unmarried/single males are more likely than females to apply for loans and then be unable to pay them back
- A very small number of male widowers are unable to pay back their loans
```
uniplot(one_df,col='NAME_EDUCATION_TYPE',title='Distribution of education level',hue='CODE_GENDER')
```
### Observation
- Males with lower secondary education make more loan applications and default more compared to females
- There is very little difference between the number of defaulters for males and females with secondary education compared to the non-defaulters we saw above
```
uniplot(one_df,col='NAME_HOUSING_TYPE',title='Distribution of Housing Type',hue='CODE_GENDER')
```
### Observation
- Males living with their parents tend to apply and default more on their loans
- Almost an equal number of males and females default on loans if they are living in rented apartments
```
uniplot(one_df,col='OCCUPATION_TYPE',title='Distribution Occupation Type',hue='CODE_GENDER')
```
### Observations
- The number of male applicants who default on paying back their loans is almost double the amount of female applicants
- Irrespective of gender, managers seem to default on their loans equally
#### Categorical vs Numerical Analysis
```
# Box plotting for Credit amount for zero_df based on education type and family status
plt.figure(figsize=(40,20))
plt.xticks(rotation=45)
sns.boxplot(data =zero_df, x='NAME_EDUCATION_TYPE',y='AMT_CREDIT', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Credit amount vs Education Status')
plt.show()
```
### Observation
- Widows with secondary education have a very high median credit amount borrowing
- Widows with an academic degree have a higher median borrowing than any other category
- People in civil marriages, those who are separated, and widows with secondary education have similar median values and usually borrow around 400000
```
# Box plotting for Income amount for zero_df based on their education type & family status
plt.figure(figsize=(40,20))
plt.xticks(rotation=45)
plt.yscale('log')
sns.boxplot(data =zero_df, x='NAME_EDUCATION_TYPE',y='AMT_INCOME_TOTAL', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Income amount vs Education Status')
plt.show()
```
### Observation
- Except widows, the median earning for all other family status types with an incomplete higher education is the same
- Median income for all family status categories is the same for people with a secondary education
```
# Box plotting for Credit amount for one_df
plt.figure(figsize=(16,12))
plt.xticks(rotation=45)
sns.boxplot(data =one_df, x='NAME_EDUCATION_TYPE',y='AMT_CREDIT', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Credit amount vs Education Status')
plt.show()
```
### Observation
- Widows with secondary education have a very high median credit amount and default on paying back loans as well; it would be better to be wary of lending to them
- Married people have a consistently high median across all categories of education except secondary education
```
# Box plotting for Income amount for one_df
plt.figure(figsize=(40,20))
plt.xticks(rotation=45)
plt.yscale('log')
sns.boxplot(data =one_df, x='NAME_EDUCATION_TYPE',y='AMT_INCOME_TOTAL', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Income amount vs Education Status')
plt.show()
```
### Observation
- The median income for all family status types is the same for people with education type as Secondary/secondary special
- The median income for widows is the lowest across all the education types
```
### Perform correlation between CNT_CHILDREN, AMT_INCOME_TOTAL, AMT_CREDIT, AMT_GOODS_PRICE, REGION_POPULATION_RELATIVE
### and AMT_ANNUITY. Then make correlation matrix across the one_df dataframe
columns=['CNT_CHILDREN','AMT_INCOME_TOTAL','AMT_CREDIT','AMT_GOODS_PRICE','REGION_POPULATION_RELATIVE', 'AMT_ANNUITY']
corr=one_df[columns].corr()
corr.style.background_gradient(cmap='coolwarm')
```
### Observation
In the heatmap above, the closer the colour is to red, the stronger the relationship; the closer to blue, the weaker the relationship.
As we can see from the correlation matrix, there is a very close relationship between AMT_GOODS_PRICE & AMT_CREDIT.
AMT_ANNUITY & AMT_CREDIT have a medium/strong relationship, and AMT_ANNUITY has a similar relationship with AMT_GOODS_PRICE.
```
### Sorting based on the correlation and extracting top 10 relationships on the defaulters in one_df
corrOneDf = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool)).unstack().reset_index()
corrOneDf.columns = ['VAR1','VAR2','Correlation']
corrOneDf.sort_values('Correlation', ascending = False).nlargest(10, 'Correlation')
```
### Observation
In the correlation matrix, we can identify:
Columns with high correlation:
1. AMT_GOODS_PRICE and AMT_CREDIT
Columns with medium correlation:
1. REGION_POPULATION_RELATIVE and AMT_INCOME_TOTAL
2. REGION_POPULATION_RELATIVE and AMT_GOODS_PRICE
3. REGION_POPULATION_RELATIVE and AMT_CREDIT
Columns with low correlation:
1. AMT_INCOME_TOTAL and CNT_CHILDREN
We also observed that the top 10 correlation pairs are (VAR1, VAR2, Correlation):
- AMT_GOODS_PRICE, AMT_CREDIT: 0.981276
- AMT_ANNUITY, AMT_CREDIT: 0.748446
- AMT_ANNUITY, AMT_GOODS_PRICE: 0.747315
- AMT_ANNUITY, AMT_INCOME_TOTAL: 0.390809
- AMT_GOODS_PRICE, AMT_INCOME_TOTAL: 0.317123
- AMT_CREDIT, AMT_INCOME_TOTAL: 0.313347
- REGION_POPULATION_RELATIVE, AMT_INCOME_TOTAL: 0.141307
- AMT_ANNUITY, REGION_POPULATION_RELATIVE: 0.065024
- REGION_POPULATION_RELATIVE, AMT_GOODS_PRICE: 0.055120
- REGION_POPULATION_RELATIVE, AMT_CREDIT: 0.050097
Now perform the correlation between numerical columns for the records having TARGET value 0.
```
#Perform correlation between CNT_CHILDREN, AMT_INCOME_TOTAL, AMT_CREDIT, AMT_GOODS_PRICE and REGION_POPULATION_RELATIVE
#Then make correlation matrix
corrZero=zero_df[columns].corr()
corrZero.style.background_gradient(cmap='coolwarm')
```
### Observation
In the heatmap above, the closer the colour is to red, the stronger the relationship; the closer to blue, the weaker the relationship.
As we can see from the correlation matrix, there is a very close relationship between AMT_GOODS_PRICE & AMT_CREDIT.
AMT_ANNUITY & AMT_CREDIT have a medium/strong relationship, and AMT_ANNUITY has a similar relationship with AMT_GOODS_PRICE.
This relationship is consistent with the one we saw for the defaulters in the one_df dataframe, confirming that the relationships are consistent across TARGET values.
```
corrZeroDf = corrZero.where(np.triu(np.ones(corrZero.shape), k=1).astype(bool)).unstack().reset_index()
corrZeroDf.columns = ['VAR1','VAR2','Correlation']
corrZeroDf.sort_values('Correlation', ascending = False).nlargest(10, 'Correlation')
```
In the correlation matrix, we can identify:
Columns with high correlation:
1. AMT_GOODS_PRICE and AMT_CREDIT
Columns with medium correlation:
1. AMT_INCOME_TOTAL and AMT_CREDIT
2. AMT_INCOME_TOTAL and AMT_GOODS_PRICE
Columns with low correlation:
1. AMT_GOODS_PRICE and CNT_CHILDREN
We also observed that the top 10 correlation pairs are (VAR1, VAR2, Correlation):
- AMT_GOODS_PRICE, AMT_CREDIT: 0.981276
- AMT_ANNUITY, AMT_CREDIT: 0.748446
- AMT_ANNUITY, AMT_GOODS_PRICE: 0.747315
- AMT_ANNUITY, AMT_INCOME_TOTAL: 0.390809
- AMT_GOODS_PRICE, AMT_INCOME_TOTAL: 0.317123
- AMT_CREDIT, AMT_INCOME_TOTAL: 0.313347
- REGION_POPULATION_RELATIVE, AMT_INCOME_TOTAL: 0.141307
- AMT_ANNUITY, REGION_POPULATION_RELATIVE: 0.065024
- REGION_POPULATION_RELATIVE, AMT_GOODS_PRICE: 0.055120
- REGION_POPULATION_RELATIVE, AMT_CREDIT: 0.050097
#### Key Observation
We also observed that the top correlated pair is the same in both data frames (zero_df & one_df):
AMT_GOODS_PRICE and AMT_CREDIT at 0.981276
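To make this comparison explicit, the two correlation tables built above (corrOneDf and corrZeroDf) can be merged and their differences inspected; a small sketch:
```
## Compare pairwise correlations for defaulters vs non-defaulters
corrComparison = corrOneDf.merge(corrZeroDf, on=['VAR1', 'VAR2'],
                                 suffixes=('_target1', '_target0'))
corrComparison['diff'] = (corrComparison['Correlation_target1']
                          - corrComparison['Correlation_target0']).abs()
corrComparison.sort_values('diff', ascending=False).head(10)
```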
### Analysing Numerical Data
```
#Box plot on the numerical columns having TARGET value as 1
plt.figure(figsize=(25,25))
plt.subplot(2,2,1)
plt.title('CHILDREN COUNT')
sns.boxplot(one_df['CNT_CHILDREN'])
plt.subplot(2,2,2)
plt.title('AMT_INCOME_TOTAL')
sns.boxplot(one_df['AMT_INCOME_TOTAL'])
plt.subplot(2,2,3)
plt.title('AMT_CREDIT')
sns.boxplot(one_df['AMT_CREDIT'])
plt.subplot(2,2,4)
plt.title('AMT_GOODS_PRICE')
sns.boxplot(one_df['AMT_GOODS_PRICE'])
plt.show()
```
### Observation
- From the box plots above we can safely say that having children has no visible impact on whether someone defaults on their loan
- The amount of credit taken by the defaulters is roughly around 450000
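The figure quoted above can be cross-checked against the medians directly; a short sketch comparing defaulters and non-defaulters:
```
## Median credit and income for defaulters (one_df) vs non-defaulters (zero_df)
pd.DataFrame({
    'defaulters': one_df[['AMT_CREDIT', 'AMT_INCOME_TOTAL']].median(),
    'non_defaulters': zero_df[['AMT_CREDIT', 'AMT_INCOME_TOTAL']].median()
})
```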
```
#Box plot on the numerical columns having TARGET value as 0
plt.figure(figsize=(25,25))
plt.subplot(2,2,1)
plt.title('CHILDREN COUNT')
sns.boxplot(zero_df['CNT_CHILDREN'])
plt.subplot(2,2,2)
plt.title('AMT_INCOME_TOTAL')
sns.boxplot(zero_df['AMT_INCOME_TOTAL'])
plt.subplot(2,2,3)
plt.title('AMT_CREDIT')
sns.boxplot(zero_df['AMT_CREDIT'])
plt.subplot(2,2,4)
plt.title('AMT_GOODS_PRICE')
sns.boxplot(zero_df['AMT_GOODS_PRICE'])
plt.show()
```
### Observation
- From the box plots above we can safely say that having children has no visible impact on a person's ability to repay their loan
- The amount of credit taken is roughly around 450000 for the non-defaulters as well
- There are no outliers in the goods price amount
- The income median lies just below 150000
### Bivariate Analysis on zero_df for continuous - continuous (Target value =0)
```
## Plotting cont-cont Client Income vs Credit Amount
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="AMT_CREDIT",
hue="CODE_GENDER", style="CODE_GENDER", data=zero_df)
plt.xlabel('Income of client')
plt.ylabel('Credit Amount of loan')
plt.title('Client Income vs Credit Amount')
plt.show()
```
### Observation
- We do see some outliers here, wherein females with an income of less than 50000 have applied for loans with a credit amount of approximately 1300000
```
## Plotting cont-cont Client Income vs Region population
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="REGION_POPULATION_RELATIVE",
hue="CODE_GENDER", style="CODE_GENDER", data=zero_df)
plt.xlabel('Income of client')
plt.ylabel('Population of region where client lives')
plt.title('Client Income vs Region population')
plt.show()
```
### Observation
- Very few people live in highly dense/populated regions (density > 0.07)
- Most of the clients live in regions with a population density between 0.00 and 0.04
# 5. Previous Application Data
Read the dataset file previous_application.csv, which contains the previous loan applications of the customers.
```
previousApplicationData=pd.read_csv("./previous_application.csv")
previousApplicationData.head()
```
### Analysing previous application data
```
previousApplicationData.shape
previousApplicationData.describe()
previousApplicationData.columns
previousApplicationData.dtypes
### Join the previous application data and application data files using merge
mergedApplicationDataAndPreviousData = pd.merge(applicationDataWithRelevantColumns, previousApplicationData, how='left', on=['SK_ID_CURR'])
mergedApplicationDataAndPreviousData.head()
```
### Observation
We merge on the 'SK_ID_CURR' column because there are duplicate IDs in SK_ID_CURR in previousApplicationData, while in the application_data file all the values are unique.
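The duplication claim can be verified before merging; a minimal sketch:
```
## SK_ID_CURR is unique in the application data but repeats in the previous-application data
print("Unique in application data:", applicationDataWithRelevantColumns['SK_ID_CURR'].is_unique)
print("Duplicate IDs in previous applications:",
      previousApplicationData['SK_ID_CURR'].duplicated().sum())
```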
```
mergedApplicationDataAndPreviousData.shape
mergedApplicationDataAndPreviousData.NAME_CONTRACT_STATUS.value_counts(normalize=True)
```
### Analysis
We will be focusing on analysing the NAME_CONTRACT_STATUS Column and the various relationships based on that.
## Univariate Analysis
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution of contract status type', hue=None)
```
### Observation
- A large number of applications were approved for the clients
- Some clients who received the offer did not use their loan offers
- The number of refused & cancelled applications is roughly the same
## Bivariate Analysis
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution Occupation Type',hue='NAME_INCOME_TYPE')
```
### Observation
Based on the plot above we can conclude that:
- Working professionals have the highest number of approved loan applications.
- Working professionals also have the highest number of refused or cancelled loan applications
- Students, pensioners, businessmen and applicants on maternity leave have statistically low or no application status data present
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution based on Gender',hue='CODE_GENDER')
```
### Observation
- Female applicants make more applications and have a higher number of applications approved
- They also have a higher number of applications refused or cancelled
- Across all statuses the counts for male applicants are lower than for females. This could be because of the lower number of males present in the dataset.
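The gender mix in the merged data can be checked to support this; a short sketch:
```
## Share of each gender in the merged data
mergedApplicationDataAndPreviousData['CODE_GENDER'].value_counts(normalize=True)
```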
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution Target',hue='TARGET')
```
### Observation
- Based on the target column, we see that a high number of applicants who have a history of being able to repay their loans are approved for new loans
- A very low number of defaulters are approved for new loans. This means that the bank is following a cautious approach to defaulters
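A normalised crosstab makes the split of each contract status by TARGET explicit; a minimal sketch:
```
## Contract status vs repayment history (TARGET), as row-wise proportions
pd.crosstab(mergedApplicationDataAndPreviousData['NAME_CONTRACT_STATUS'],
            mergedApplicationDataAndPreviousData['TARGET'], normalize='index')
```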
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution based on Family Status',hue='NAME_FAMILY_STATUS')
```
### Observation
- A large number of married people make loan applications & are approved for loans
- Separated individuals have a very low number of applications in the unused-offer category
- For single/not married people, the number of refused or cancelled applications is less than half the number of approved ones.
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution based Application Start Day',hue='WEEKDAY_APPR_PROCESS_START')
```
### Observation
- Most applicants start their loan applications on a Saturday and are successfully approved
- Applicants who start their applications on a Friday have a higher chance of being rejected or cancelling compared with Saturday and Sunday (see the quick check below)
- The number of cancelled applications is highest on Monday. This could suggest that after starting the application on the weekend, the client changed their mind on a workday.
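The status proportions per start day can be checked directly (a sketch, not part of the original plots):
```
pd.crosstab(mergedApplicationDataAndPreviousData.WEEKDAY_APPR_PROCESS_START,
            mergedApplicationDataAndPreviousData.NAME_CONTRACT_STATUS,
            normalize='index').round(2)
```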
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution of Age on Loans',hue='BINNED_AGE')
```
### Observation
- People between the ages of 31-40 apply for the most loans and have consistently higher counts across all application statuses
- People above the age of 71 and below 20 don't make any loan applications
- People aged 31-40 could be applying for more loans because they are married or living with a partner
```
plt.figure(figsize=(40,25))
sns.catplot(x="NAME_CONTRACT_STATUS", hue="TARGET", col="CODE_GENDER",
data=mergedApplicationDataAndPreviousData, kind="count")
```
### Observation
- The female population has a high chance of getting loans approved
- Cancellation of loans by females is significant across both defaulters and non-defaulters
### Continuous & Categorical Plots
```
### Plotting the relationship between NAME_CONTRACT_STATUS vs AMT_CREDIT_x
### from the merged application data and splitting on the basis of family status
plt.figure(figsize=(40,25))
plt.xticks(rotation=45)
sns.boxplot(data =mergedApplicationDataAndPreviousData, x='NAME_CONTRACT_STATUS',y='AMT_CREDIT_x', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Credit amount vs Application Status based on Family Status')
plt.show()
```
### Observation
- Married people take higher amounts of credit and have the highest median credit amount among approved applications (the medians are tabulated below)
- Applicants in civil marriages, widows and separated applicants have broadly similar median values across all application statuses
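The medians behind the boxplot can be tabulated directly (a sketch, using the same columns as the plot above):
```
mergedApplicationDataAndPreviousData.groupby(
    ['NAME_CONTRACT_STATUS', 'NAME_FAMILY_STATUS'])['AMT_CREDIT_x'].median().unstack()
```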
```
### Plotting the relationship between NAME_CONTRACT_STATUS vs AMT_INCOME_TOTAL
### from the merged application data and splitting on the basis of family status
plt.figure(figsize=(40,25))
plt.xticks(rotation=45)
plt.yscale('log')
sns.boxplot(data =mergedApplicationDataAndPreviousData, x='NAME_CONTRACT_STATUS',y='AMT_INCOME_TOTAL', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Income amount vs Application status based on Family Status')
plt.show()
```
### Observation
- People who are married, in civil marriages or single/not married earn consistently well across all application status types
- Their median incomes are also roughly the same
- Widows earn less than all the other categories
### Continuous & Continuous Plots
```
plt.figure(figsize=(30,20))
plt.scatter(mergedApplicationDataAndPreviousData.AMT_APPLICATION, mergedApplicationDataAndPreviousData.AMT_CREDIT_y)
plt.title("Final Amount Approved vs Credit Amount Applied")
plt.xlabel("Credit Amount applied by Client")
plt.ylabel("Final Amount approved by Bank")
plt.show()
```
### Observation
- The credit amount applied for vs the final amount approved shows a good linear relationship up to about 2,000,000
- Beyond 2,000,000, however, we see a good number of outliers where the approved amount is considerably lower than the amount applied for (the correlation is quantified below)
- Applications with a credit amount above 3,500,000 are quite rare, and the chances that the full amount will be approved are low
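The strength of that linear relationship can be quantified with a simple correlation (a sketch; the column names are the ones used in the scatter plot above):
```
mergedApplicationDataAndPreviousData[['AMT_APPLICATION', 'AMT_CREDIT_y']].corr()
```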
# Conclusion
Through this case study we have drawn the following conclusions:
- The most popular day for making applications is Saturday. The bank could focus on keeping offices open longer on Saturdays to help applicants complete their applications.
- The most popular age group for taking loans or credit is 31-40, which accounts for the largest number of applications. The firm should focus on exploring more lucrative options for clients in that age range; they could be offered lower interest rates, longer repayment holidays, etc.
- Married people have the highest chance of making a loan application and being approved for a loan.
- Because of the imbalance in the data, females appear to make the most loan applications. They also have a higher chance of getting approved and of repaying their loans on time.
- Widows with secondary education have a very high median credit amount borrowed and also default on paying back loans. It would be better to be wary of lending to them.
- Male labourers have a high number of applications and also a high number of defaults compared to females. It would be better for the bank to assess whether borrowers in this occupation type could be helped with staged loans or with loans at a lower interest rate than the other categories.
- Applications with a credit amount above 3,500,000 are quite rare, and the chances that the full amount will be approved are low.
- Cancellation of loans by females is significant across both defaulters and non-defaulters.
```
# Compare AMT_ANNUITY missing values before and after dropping columns
sns.boxplot(data= applicationData.AMT_ANNUITY.head(500000).isnull())
plt.title('AMT_ANNUITY')
plt.show()
print(applicationDataAfterDroppedColumns.AMT_ANNUITY.head(500000).isnull().sum())
print(applicationData.AMT_ANNUITY.head(500000).isnull().sum())
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna())
plt.show()
```
# END OF FILE
# Introduction to NumPy
Forked from [Lecture 2](https://github.com/jrjohansson/scientific-python-lectures/blob/master/Lecture-2-Numpy.ipynb) of [Scientific Python Lectures](http://github.com/jrjohansson/scientific-python-lectures) by [J.R. Johansson](http://jrjohansson.github.io/)
```
%matplotlib inline
import traceback
import matplotlib.pyplot as plt
import numpy as np
```
### Why NumPy?
```
%%time
total = 0
for i in range(100000):
total += i
%%time
total = np.arange(100000).sum()
%%time
l = list(range(0, 1000000))
ltimes5 = [x * 5 for x in l]
%%time
l = np.arange(1000000)
ltimes5 = l * 5
```
## Introduction
The `numpy` package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
To use `numpy` you need to import the module, using for example:
```
import numpy as np
```
In the `numpy` package the terminology used for vectors, matrices and higher-dimensional data sets is *array*.
## Creating `numpy` arrays
There are a number of ways to initialize new numpy arrays, for example from
* a Python list or tuples
* using functions that are dedicated to generating numpy arrays, such as `arange`, `linspace`, etc.
* reading data from files
### From lists
For example, to create new vector and matrix arrays from Python lists we can use the `numpy.array` function.
```
# a vector: the argument to the array function is a Python list
v = np.array([1,2,3,4])
v
# a matrix: the argument to the array function is a nested Python list
M = np.array([[1, 2], [3, 4]])
M
```
The `v` and `M` objects are both of the type `ndarray` that the `numpy` module provides.
```
type(v), type(M)
```
The difference between the `v` and `M` arrays is only their shapes. We can get information about the shape of an array by using the `ndarray.shape` property.
```
v.shape
M.shape
```
The number of elements in the array is available through the `ndarray.size` property:
```
M.size
```
Equivalently, we could use the function `numpy.shape` and `numpy.size`
```
np.shape(M)
np.size(M)
```
So far the `numpy.ndarray` looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
There are several reasons:
* Python lists are very general. They can contain any kind of object. They are dynamically typed. They do not support mathematical functions such as matrix and dot multiplications, etc. Implementing such functions for Python lists would not be very efficient because of the dynamic typing.
* Numpy arrays are **statically typed** and **homogeneous**. The type of the elements is determined when the array is created.
* Numpy arrays are memory efficient.
* Because of the static typing, mathematical functions such as multiplication and addition of `numpy` arrays can be implemented efficiently in a compiled language (C and Fortran are used). A short example of the behavioural difference follows below.
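The practical difference shows up immediately in arithmetic: `+` concatenates Python lists but adds NumPy arrays element-wise.
```
[1, 2, 3] + [4, 5, 6]                      # list concatenation: [1, 2, 3, 4, 5, 6]
np.array([1, 2, 3]) + np.array([4, 5, 6])  # element-wise addition: array([5, 7, 9])
```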
Using the `dtype` (data type) property of an `ndarray`, we can see what type the data of an array has:
```
M.dtype
```
We get an error if we try to assign a value of the wrong type to an element in a numpy array:
```
try:
M[0,0] = "hello"
except ValueError as e:
print(traceback.format_exc())
```
If we want, we can explicitly define the type of the array data when we create it, using the `dtype` keyword argument:
```
M = np.array([[1, 2], [3, 4]], dtype=complex)
M
```
Common data types that can be used with `dtype` are: `int`, `float`, `complex`, `bool`, `object`, etc.
We can also explicitly define the bit size of the data types, for example: `int64`, `int16`, `float128`, `complex128`.
### Using array-generating functions
For larger arrays it is impractical to initialize the data manually, using explicit Python lists. Instead we can use one of the many functions in `numpy` that generate arrays of different forms. Some of the more common are:
#### arange
```
# create a range
x = np.arange(0, 10, 1) # arguments: start, stop, step
x
x = np.arange(-1, 1, 0.1)
x
```
#### linspace and logspace
```
# using linspace, both end points ARE included
np.linspace(0, 10, 25)
np.logspace(0, 10, 10, base=np.e)
```
#### mgrid
```
x, y = np.mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y
```
#### random data
```
# uniform random numbers in [0,1]
np.random.rand(5,5)
# standard normal distributed random numbers
np.random.randn(5,5)
```
#### diag
```
# a diagonal matrix
np.diag([1,2,3])
# diagonal with offset from the main diagonal
np.diag([1,2,3], k=1)
```
#### zeros and ones
```
np.zeros((3,3))
np.ones((3,3))
```
## File I/O
### Comma-separated values (CSV)
A very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). To read data from such files into Numpy arrays we can use the `numpy.genfromtxt` function. For example,
```
!head ../data/stockholm_td_adj.dat
data = np.genfromtxt('../data/stockholm_td_adj.dat')
data.shape
fig, ax = plt.subplots(figsize=(14,4))
ax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,5])
ax.axis('tight')
ax.set_title('temperatures in Stockholm')
ax.set_xlabel('year')
ax.set_ylabel('temperature (C)');
```
Using `numpy.savetxt` we can store a Numpy array to a file in CSV format:
```
M = np.random.rand(3,3)
M
np.savetxt("../data/random-matrix.csv", M)
!cat ../data/random-matrix.csv
np.savetxt("../data/random-matrix.csv", M, fmt='%.5f') # fmt specifies the format
!cat ../data/random-matrix.csv
```
### Numpy's native file format
Useful when storing and reading back numpy array data. Use the functions `numpy.save` and `numpy.load`:
```
np.save("../data/random-matrix.npy", M)
!file ../data/random-matrix.npy
np.load("../data/random-matrix.npy")
```
## More properties of the numpy arrays
```
M.itemsize # bytes per element
M.nbytes # number of bytes
M.ndim # number of dimensions
```
## Manipulating arrays
### Indexing
We can index elements in an array using square brackets and indices:
```
# v is a vector, and has only one dimension, taking one index
v[0]
# M is a matrix, or a 2 dimensional array, taking two indices
M[1,1]
```
If we omit an index of a multidimensional array it returns the whole row (or, in general, an N-1 dimensional array)
```
M
M[1]
```
The same thing can be achieved with using `:` instead of an index:
```
M[1,:] # row 1
M[:,1] # column 1
```
We can assign new values to elements in an array using indexing:
```
M[0,0] = 1
M
# also works for rows and columns
M[1,:] = 0
M[:,2] = -1
M
```
### Index slicing
Index slicing is the technical name for the syntax `M[lower:upper:step]` to extract part of an array:
```
A = np.array([1,2,3,4,5])
A
A[1:3]
```
Array slices are *views* of the original array: if they are assigned a new value, the original array from which the slice was extracted is modified:
```
A[1:3] = [-2,-3]
A
```
We can omit any of the three parameters in `M[lower:upper:step]`:
```
A[::] # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3
```
Negative indices count from the end of the array (positive indices from the beginning):
```
A = np.array([1,2,3,4,5])
A[-1] # the last element in the array
A[-3:] # the last three elements
```
Index slicing works exactly the same way for multidimensional arrays:
```
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
A
# a block from the original array
A[1:4, 1:4]
# strides
A[::2, ::2]
```
### Fancy indexing
Fancy indexing is the name for when an array or list is used in place of an index:
```
row_indices = [1, 2, 3]
A[row_indices]
col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]
```
We can also use index masks: if the index mask is a Numpy array of data type `bool`, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element:
```
B = np.array([n for n in range(5)])
B
row_mask = np.array([True, False, True, False, False])
B[row_mask]
# same thing
row_mask = np.array([1,0,1,0,0], dtype=bool)
B[row_mask]
```
This feature is very useful to conditionally select elements from an array, using for example comparison operators:
```
x = np.arange(0, 10, 0.5)
x
mask = (5 < x) * (x < 7.5)
mask
x[mask]
```
## Functions for extracting data from arrays and creating arrays
### where
The index mask can be converted to position index using the `where` function
```
indices = np.where(mask)
indices
x[indices] # this indexing is equivalent to the fancy indexing x[mask]
```
### diag
With the diag function we can also extract the diagonal and subdiagonals of an array:
```
np.diag(A)
np.diag(A, -1)
```
### take
The `take` function is similar to fancy indexing described above:
```
v2 = np.arange(-3,3)
v2
row_indices = [1, 3, 5]
v2[row_indices] # fancy indexing
v2.take(row_indices)
```
But `take` also works on lists and other objects:
```
np.take([-3, -2, -1, 0, 1, 2], row_indices)
```
### choose
Constructs an array by picking elements from several arrays:
```
which = [1, 0, 1, 0]
choices = [[-2,-2,-2,-2], [5,5,5,5]]
np.choose(which, choices)
```
## Linear algebra
Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.
### Scalar-array operations
We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
```
v1 = np.arange(0, 5)
v1 * 2
v1 + 2
A * 2, A + 2
```
### Element-wise array-array operations
When we add, subtract, multiply and divide arrays with each other, the default behaviour is **element-wise** operations:
```
A * A # element-wise multiplication
v1 * v1
```
If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:
```
A.shape, v1.shape
A * v1
```
### Matrix algebra
What about matrix multiplication? There are two ways. We can either use the `dot` function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
```
np.dot(A, A)
```
Python 3 has a new operator for using infix notation with matrix multiplication.
```
A @ A
np.dot(A, v1)
np.dot(v1, v1)
```
Alternatively, we can cast the array objects to the type `matrix`. This changes the behavior of the standard arithmetic operators `+, -, *` to use matrix algebra.
```
M = np.matrix(A)
v = np.matrix(v1).T # make it a column vector
v
M * M
M * v
# inner product
v.T * v
# with matrix objects, standard matrix algebra applies
v + M*v
```
If we try to add, subtract or multiply objects with incompatible shapes we get an error:
```
v = np.matrix([1,2,3,4,5,6]).T
M.shape, v.shape
import traceback
try:
M * v
except ValueError as e:
print(traceback.format_exc())
```
See also the related functions: `inner`, `outer`, `cross`, `kron`, `tensordot`. Try for example `help(np.kron)`.
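For instance, `outer` and `kron` both build larger arrays from two inputs (using the `v1` vector defined earlier):
```
np.outer(v1, v1)                      # outer product: a 5x5 matrix of pairwise products
np.kron(np.eye(2), np.ones((2, 2)))   # Kronecker product: a 4x4 block matrix
```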
### Array/Matrix transformations
Above we have used the `.T` to transpose the matrix object `v`. We could also have used the `transpose` function to accomplish the same thing.
Other mathematical functions that transform matrix objects are:
```
C = np.matrix([[1j, 2j], [3j, 4j]])
C
np.conjugate(C)
```
Hermitian conjugate: transpose + conjugate
```
C.H
```
We can extract the real and imaginary parts of complex-valued arrays using `real` and `imag`:
```
np.real(C) # same as: C.real
np.imag(C) # same as: C.imag
```
Or the complex argument and absolute value
```
np.angle(C+1) # heads up MATLAB Users, angle is used instead of arg
abs(C)
```
### Matrix computations
#### Inverse
```
np.linalg.inv(C) # equivalent to C.I
C.I * C
```
#### Determinant
```
np.linalg.det(C)
np.linalg.det(C.I)
```
### Data processing
Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays.
For example, let's calculate some properties from the Stockholm temperature dataset used above.
```
# reminder, the temperature dataset is stored in the data variable:
np.shape(data)
```
#### mean
```
# the temperature data is in column 3
np.mean(data[:,3])
```
The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.
#### standard deviations and variance
```
np.std(data[:,3]), np.var(data[:,3])
```
#### min and max
```
# lowest daily average temperature
data[:,3].min()
# highest daily average temperature
data[:,3].max()
```
#### sum, prod, and trace
```
d = np.arange(0, 10)
d
# sum up all elements
np.sum(d)
# product of all elements
np.prod(d+1)
# cummulative sum
np.cumsum(d)
# cummulative product
np.cumprod(d+1)
# same as: diag(A).sum()
np.trace(A)
```
### Computations on subsets of arrays
We can compute with subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above).
For example, let's go back to the temperature dataset:
```
!head -n 3 ../data/stockholm_td_adj.dat
```
The data format is: year, month, day, daily average temperature, low, high, location.
If we are interested in the average temperature only in a particular month, say February, then we can create an index mask and use it to select only the data for that month using:
```
np.unique(data[:,1]) # the month column takes values from 1 to 12
mask_feb = data[:,1] == 2
# the temperature data is in column 3
np.mean(data[mask_feb,3])
```
With these tools we have very powerful data processing capabilities at our disposal. For example, to extract the average monthly average temperatures for each month of the year only takes a few lines of code:
```
months = np.arange(1,13)
monthly_mean = [np.mean(data[data[:,1] == month, 3]) for month in months]
fig, ax = plt.subplots()
ax.bar(months, monthly_mean)
ax.set_xlabel("Month")
ax.set_ylabel("Monthly avg. temp.");
```
### Calculations with higher-dimensional data
When functions such as `min`, `max`, etc. are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the `axis` argument we can specify how these functions should behave:
```
m = np.random.rand(3,3)
m
# global max
m.max()
# max in each column
m.max(axis=0)
# max in each row
m.max(axis=1)
```
Many other functions and methods in the `array` and `matrix` classes accept the same (optional) `axis` keyword argument.
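For example, `sum` and `mean` follow the same convention:
```
m.sum(axis=0)    # column sums
m.mean(axis=1)   # row means
```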
## Reshaping, resizing and stacking arrays
The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.
```
A
n, m = A.shape
B = A.reshape((1,n*m))
B
B[0,0:5] = 5 # modify the array
B
A # and the original variable is also changed. B is only a different view of the same data
```
We can also use the function `flatten` to make a higher-dimensional array into a vector. But this function creates a copy of the data.
```
B = A.flatten()
B
B[0:5] = 10
B
A # now A has not changed, because B's data is a copy of A's, not referring to the same data
```
## Adding a new dimension: newaxis
With `newaxis`, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:
```
v = np.array([1,2,3])
v.shape
# make a column matrix of the vector v
v[:, np.newaxis]
# column matrix
v[:, np.newaxis].shape
# row matrix
v[np.newaxis, :].shape
```
## Stacking and repeating arrays
Using function `repeat`, `tile`, `vstack`, `hstack`, and `concatenate` we can create larger vectors and matrices from smaller ones:
### tile and repeat
```
a = np.array([[1, 2], [3, 4]])
# repeat each element 3 times
np.repeat(a, 3)
# tile the matrix 3 times
np.tile(a, 3)
```
### concatenate
```
b = np.array([[5, 6]])
np.concatenate((a, b), axis=0)
np.concatenate((a, b.T), axis=1)
```
### hstack and vstack
```
np.vstack((a,b))
np.hstack((a,b.T))
```
## Copy and "deep copy"
To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).
```
A = np.array([[1, 2], [3, 4]])
A
# now B is referring to the same array data as A
B = A
# changing B affects A
B[0,0] = 10
B
A
```
If we want to avoid this behavior, so that when we get a new completely independent object `B` copied from `A`, then we need to do a so-called "deep copy" using the function `copy`:
```
B = np.copy(A)
# now, if we modify B, A is not affected
B[0,0] = -5
B
A
```
## Iterating over array elements
Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB/R), iterations are really slow compared to vectorized operations.
However, sometimes iterations are unavoidable. For such cases, the Python `for` loop is the most convenient way to iterate over an array:
```
v = np.array([1,2,3,4])
for element in v:
print(element)
M = np.array([[1,2], [3,4]])
for row in M:
print("row", row)
for element in row:
print(element)
```
When we need to iterate over each element of an array and modify its elements, it is convenient to use the `enumerate` function to obtain both the element and its index in the `for` loop:
```
for row_idx, row in enumerate(M):
print("row_idx", row_idx, "row", row)
for col_idx, element in enumerate(row):
print("col_idx", col_idx, "element", element)
# update the matrix M: square each element
M[row_idx, col_idx] = element ** 2
# each element in M is now squared
M
```
## Vectorizing functions
As mentioned several times by now, to get good performance we should try to avoid looping over elements in our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized algorithm is to make sure that the functions we write work with vector inputs.
```
def theta(x):
"""
    Scalar implementation of the Heaviside step function.
"""
if x >= 0:
return 1
else:
return 0
try:
theta(np.array([-3,-2,-1,0,1,2,3]))
except Exception as e:
print(traceback.format_exc())
```
OK, that didn't work because we didn't write the `theta` function so that it can handle a vector input...
To get a vectorized version of `theta` we can use the Numpy function `vectorize`. In many cases it can automatically vectorize a function:
```
theta_vec = np.vectorize(theta)
%%time
theta_vec(np.array([-3,-2,-1,0,1,2,3]))
```
We can also implement the function to accept a vector input from the beginning (requires more effort but might give better performance):
```
def theta(x):
"""
    Vector-aware implementation of the Heaviside step function.
"""
return 1 * (x >= 0)
%%time
theta(np.array([-3,-2,-1,0,1,2,3]))
# still works for scalars as well
theta(-1.2), theta(2.6)
```
## Using arrays in conditions
When using arrays in conditions, for example `if` statements and other boolean expressions, one needs to use `any` or `all`, which requires that any or all elements in the array evaluate to `True`:
```
M
if (M > 5).any():
print("at least one element in M is larger than 5")
else:
print("no element in M is larger than 5")
if (M > 5).all():
print("all elements in M are larger than 5")
else:
print("all elements in M are not larger than 5")
```
## Type casting
Since Numpy arrays are *statically typed*, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the `astype` function (see also the similar `asarray` function). This always creates a new array of the new type:
```
M.dtype
M2 = M.astype(float)
M2
M2.dtype
M3 = M.astype(bool)
M3
```
## Further reading
* http://numpy.scipy.org - Official Numpy Documentation
* http://scipy.org/Tentative_NumPy_Tutorial - Official Numpy Quickstart Tutorial (highly recommended)
* http://www.scipy-lectures.org/intro/numpy/index.html - Scipy Lectures: Lecture 1.3
## Versions
```
%reload_ext version_information
%version_information numpy
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Load CSV data with tf.data
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/csv"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/zh-cn/tutorials/load_data/csv.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/zh-cn/tutorials/load_data/csv.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/site/zh-cn/tutorials/load_data/csv.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: The TensorFlow community translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate and reflect the latest [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [[email protected] Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).
This tutorial provides an example of how to load CSV-formatted data into a `tf.data.Dataset`.
The data used in this tutorial are the Titanic passenger lists. The model will predict the likelihood that a passenger survived based on characteristics like age, gender, ticket class, and whether the person was travelling alone.
## Setup
```
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import functools
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
```
## Load data
To start, let's look at the top of the CSV file to see how it is formatted.
```
!head {train_file_path}
```
As you can see, the columns in the CSV file are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first line, pass them as a list of strings to the `column_names` argument of the `make_csv_dataset` function.
```python
CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
dataset = tf.data.experimental.make_csv_dataset(
...,
column_names=CSV_COLUMNS,
...)
```
This example uses all of the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use and pass it into the (optional) `select_columns` argument of the constructor.
```python
dataset = tf.data.experimental.make_csv_dataset(
...,
select_columns = columns_to_use,
...)
```
You also need to explicitly identify the column that contains the values the model is supposed to predict.
```
LABEL_COLUMN = 'survived'
LABELS = [0, 1]
```
Now read the CSV data from the file and create a dataset.
(For the full documentation, see `tf.data.experimental.make_csv_dataset`.)
```
def get_dataset(file_path):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
      batch_size=12, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True)
return dataset
raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)
```
Each item in the dataset is a batch, represented as a tuple of (*many examples*, *many labels*). The data from the examples is organized in column-major tensors (rather than row-major tensors), each with as many elements as the batch size (12 in this case).
It might help to see an example for yourself.
```
examples, labels = next(iter(raw_train_data)) # first batch
print("EXAMPLES: \n", examples, "\n")
print("LABELS: \n", labels)
```
## Data preprocessing
### Categorical data
Some of the columns in the CSV data are categorical columns, i.e. columns where the content should be one of a limited set of options.
Use the `tf.feature_column` API to create a collection with a `tf.feature_column.indicator_column` for each categorical column.
```
CATEGORIES = {
'sex': ['male', 'female'],
'class' : ['First', 'Second', 'Third'],
'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
'embark_town' : ['Cherbourg', 'Southhampton', 'Queenstown'],
'alone' : ['y', 'n']
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab)
categorical_columns.append(tf.feature_column.indicator_column(cat_col))
# See what you just created
categorical_columns
```
This will become part of the data-processing input later when you build the model.
### Continuous data
Continuous data needs to be normalized.
Write a function that normalizes the values and reshapes them into a two-dimensional tensor.
```
def process_continuous_data(mean, data):
  # Normalize the data
data = tf.cast(data, tf.float32) * 1/(2*mean)
return tf.reshape(data, [-1, 1])
```
Now create a collection of numeric columns. The `tf.feature_column.numeric_column` API accepts a `normalizer_fn` argument. Using [`functools.partial`](https://docs.python.org/3/library/functools.html#functools.partial), we bind each column's mean into the normalization function that is passed in.
```
MEANS = {
'age' : 29.631308,
'n_siblings_spouses' : 0.545455,
'parch' : 0.379585,
'fare' : 34.385399
}
numerical_columns = []
for feature in MEANS.keys():
num_col = tf.feature_column.numeric_column(feature, normalizer_fn=functools.partial(process_continuous_data, MEANS[feature]))
numerical_columns.append(num_col)
# See what you just created.
numerical_columns
```
The mean-based normalization used here requires knowing the mean of each column ahead of time. To compute normalized values over a continuous stream of data, use [TensorFlow Transform](https://www.tensorflow.org/tfx/transform/get_started).
### Create a preprocessing layer
Combine the two feature-column collections and pass them to `tf.keras.layers.DenseFeatures` to create an input layer that performs the preprocessing.
```
preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numerical_columns)
```
## Build the model
Build a `tf.keras.Sequential` starting with the `preprocessing_layer`.
```
model = tf.keras.Sequential([
preprocessing_layer,
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(
loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
```
## Train, evaluate, and predict
Now the model can be instantiated and trained.
```
train_data = raw_train_data.shuffle(500)
test_data = raw_test_data
model.fit(train_data, epochs=20)
```
Once the model is trained, you can check its accuracy on the `test_data` set.
```
test_loss, test_accuracy = model.evaluate(test_data)
print('\n\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy))
```
Use `tf.keras.Model.predict` to infer labels on a batch or a dataset of batches.
```
predictions = model.predict(test_data)
# Show some results
for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):
print("Predicted survival: {:.2%}".format(prediction[0]),
" | Actual outcome: ",
("SURVIVED" if bool(survived) else "DIED"))
```
<img width="100" src="https://carbonplan-assets.s3.amazonaws.com/monogram/dark-small.png" style="margin-left:0px;margin-top:20px"/>
# Forest Emissions Tracking - Validation
_CarbonPlan ClimateTrace Team_
This notebook compares our estimates of country-level forest emissions to prior estimates from other
groups. The notebook currently compares against:
- Global Forest Watch (Zarin et al. 2016)
- Global Carbon Project (Friedlingstein et al. 2020)
```
import geopandas
import pandas as pd
from io import StringIO
import matplotlib.pyplot as plt
import numpy as np
from carbonplan_styles.mpl import set_theme
set_theme()
axis_name_size = 12
# country shapes from GADM36
countries = geopandas.read_file("s3://carbonplan-climatetrace/inputs/shapes/countries.shp")
# CarbonPlan's emissions
emissions = pd.read_csv("s3://carbonplan-climatetrace/v0.4/country_rollups_emissions.csv")
agb = pd.read_csv("s3://carbonplan-climatetrace/v0.4/country_rollups_agb.csv")
# Input data
# ----------
# GFW emissions
gfw_emissions = pd.read_excel(
"s3://carbonplan-climatetrace/validation/gfw_global_emissions.xlsx",
sheet_name="Country co2 emissions",
).dropna(axis=0)
gfw_emissions = gfw_emissions[gfw_emissions["threshold"] == 10] # select threshold
# rename
gfw_emissions.loc[gfw_emissions.country == "Republic of Congo", "country"] = "Congo"
gfw_emissions.loc[
gfw_emissions.country == "Bolivia", "country"
] = "Bolivia (Plurinational State of)"
gfw_emissions.loc[gfw_emissions.country == "Brunei", "country"] = "Brunei Darussalam"
gfw_emissions.loc[gfw_emissions.country == "Côte d'Ivoire", "country"] = "Côte dIvoire"
gfw_emissions.loc[gfw_emissions.country == "Laos", "country"] = "Lao Peoples Democratic Republic"
gfw_emissions.loc[gfw_emissions.country == "Swaziland", "country"] = "Eswatini"
gfw_emissions.loc[gfw_emissions.country == "Tanzania", "country"] = "United Republic of Tanzania"
gfw_emissions.loc[
gfw_emissions.country == "Venezuela", "country"
] = "Venezuela (Bolivarian Republic of)"
gfw_emissions.loc[gfw_emissions.country == "Vietnam", "country"] = "Viet Nam"
gfw_emissions.loc[
gfw_emissions.country == "Virgin Islands, U.S.", "country"
] = "United States Virgin Islands"
gfw_emissions.loc[gfw_emissions.country == "Zimbabwe", "country"] = "Zimbabwe)"
emissions.groupby("begin_date").sum().mean() / 1e9
# Merge emissions dataframes with countries GeoDataFrame
gfw_countries = countries.merge(gfw_emissions.rename(columns={"country": "name"}), on="name")
trace_countries = countries.merge(emissions.rename(columns={"iso3_country": "alpha3"}), on="alpha3")
agb_countries = countries.merge(agb.rename(columns={"iso3_country": "alpha3"}), on="alpha3")
agb = pd.merge(
left=agb_countries.rename(columns={"agb": "trace_agb"}),
right=gfw_countries[["alpha3", "abg_co2_stock_2000__Mg"]].rename(
columns={"abg_co2_stock_2000__Mg": "gfw_agb_co2"}
),
on="alpha3",
)
agb["trace_agb_co2"] = agb.trace_agb * 0.5 * 3.67
agb["trace_agb_co2"] = agb.trace_agb_co2 / 1e6
agb["gfw_agb_co2"] = agb.gfw_agb_co2 / 1e6
agb = agb[["name", "alpha3", "geometry", "trace_agb_co2", "gfw_agb_co2"]]
# reformat to "wide" format (time x country)
trace_wide = (
emissions.drop(columns=["end_date"])
.pivot(index="begin_date", columns="iso3_country")
.droplevel(0, axis=1)
)
trace_wide.index = pd.to_datetime(trace_wide.index)
gfw_wide = gfw_emissions.set_index("country").filter(regex="whrc_aboveground_co2_emissions_Mg_.*").T
gfw_wide.index = [pd.to_datetime(f"{l[-4:]}-01-01") for l in gfw_wide.index]
gfw_wide.head()
df = pd.read_csv("s3://carbonplan-climatetrace/v0.4/country_rollups_emissions_from_clearing.csv")
df.head()
df.loc[df.iso3_country == "AGO"].tCO2eq / 1e6
```
## Part 1 - Compare time-averaged country emissions (tropics only)
```
# Create a new dataframe with average emissions
avg_emissions = countries.set_index("alpha3")
avg_emissions["trace"] = trace_wide.mean().transpose() / 1e6
# avg_emissions["trace"] = trace_wide.loc['2020-01-01'] / 1e6
avg_emissions = avg_emissions.reset_index().set_index("name")
avg_emissions["gfw"] = gfw_wide.mean().transpose() / 1e6
# avg_emissions["gfw"] = gfw_wide.loc['2020-01-01'] / 1e6
avg_emissions = avg_emissions.dropna()
len(avg_emissions)
from sklearn.metrics import r2_score
r2_score(avg_emissions.gfw, avg_emissions.trace)
avg_emissions["me"] = avg_emissions.trace - avg_emissions.gfw
avg_emissions["mae"] = (avg_emissions.trace - avg_emissions.gfw).abs()
avg_emissions["mape"] = (avg_emissions.trace - avg_emissions.gfw).abs() / avg_emissions.gfw * 100
avg_emissions = avg_emissions.replace(np.inf, np.nan)
avg_emissions.mean().round(2)
sub = avg_emissions.loc[(avg_emissions.mape > 1) & (avg_emissions.gfw > 1)]
sub
(avg_emissions.gfw > 1).mean()
top20 = avg_emissions.sort_values(by="mae", ascending=False).head(20)
names = {
"Democratic Republic of the Congo": "DRC",
"Lao Peoples Democratic Republic": "Laos",
"Bolivia (Plurinational State of)": "Bolivia",
"Côte dIvoire": "Côte d'Ivoire",
"United Republic of Tanzania": "Tanzania",
"Viet Nam": "Vietnam",
"Venezuela (Bolivarian Republic of)": "Venezuela",
}
plt.figure(figsize=(12, 10))
for i, row in top20.reset_index()[["name", "alpha3"]].iterrows():
plt.subplot(5, 4, i + 1)
name = row["name"]
alpha3 = row["alpha3"]
plt.plot(gfw_wide[name].index, gfw_wide[name].values / 1e6, label="Zarin et al.")
plt.plot(trace_wide[alpha3].index, trace_wide[alpha3].values / 1e6, label="CarbonPlan")
plt.xticks(["2001-01-01", "2010-01-01", "2020-01-01"], [2001, 2010, 2020])
if name in names:
name = names[name]
plt.title(name, fontsize=axis_name_size)
if i > 3:
plt.ylim(0, 200)
if i == 8:
plt.ylabel("Emissions [Mt CO2 / yr]", fontsize=axis_name_size)
ax = plt.gca()
fig = plt.gcf()
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc="upper center", ncol=2, bbox_to_anchor=(0.5, 1.03))
plt.tight_layout()
plt.savefig("top20_time_series.png", bbox_inches="tight")
plt.show()
plt.close()
# Scatter Plot
xmin = 1e-6
xmax = 1e4
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.plot([xmin, xmax], [xmin, xmax], "0.5")
avg_emissions.plot.scatter("gfw", "trace", ax=plt.gca())
plt.gca().set_xscale("log")
plt.gca().set_yscale("log")
plt.ylabel("CarbonPlan [Mt CO$_2$ / yr]", fontsize=axis_name_size)
plt.xlabel("Zarin [Mt CO$_2$ / yr]", fontsize=axis_name_size)
plt.xlim(xmin, xmax)
plt.ylim(xmin, xmax)
plt.title("a) Forest related carbon emissions", fontsize=axis_name_size)
xmin = 1e-4
xmax = 1e6
plt.subplot(1, 2, 2)
plt.plot([xmin, xmax], [xmin, xmax], "0.5")
agb.plot.scatter("gfw_agb_co2", "trace_agb_co2", ax=plt.gca())
plt.gca().set_xscale("log")
plt.gca().set_yscale("log")
plt.ylabel("CarbonPlan [Mt CO$_2$]", fontsize=axis_name_size)
plt.xlabel("Zarin [Mt CO$_2$]", fontsize=axis_name_size)
plt.xlim(xmin, xmax)
plt.ylim(xmin, xmax)
plt.title("b) Forest AGB stock in 2000", fontsize=axis_name_size)
plt.tight_layout()
plt.savefig("gfw_scatter.png")
```
## Part 2 - Maps of Tropical Emissions
```
from mpl_toolkits.axes_grid1 import make_axes_locatable
plt.figure(figsize=(14, 8))
plt.subplot(2, 1, 1)
kwargs = dict(
legend=True,
legend_kwds={
"orientation": "vertical",
"label": "Emissions [Mt CO$_2$ / yr]",
},
lw=0.25,
cmap="Reds",
vmin=0,
vmax=1000,
)
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="2%", pad=0.2)
avg_emissions.plot("trace", ax=ax, cax=cax, **kwargs)
ax.set_title("Forest related carbon emissions from CarbonPlan", fontsize=axis_name_size)
ax.set_xlabel("Longitude", fontsize=axis_name_size)
ax.set_ylabel("Latitude", fontsize=axis_name_size)
plt.subplot(2, 1, 2)
kwargs = dict(
legend=True,
legend_kwds={
"orientation": "vertical",
"label": "Emissions Difference [%]",
},
lw=0.25,
cmap="RdBu_r",
vmin=-20,
vmax=20,
)
avg_emissions["pdiff"] = (
(avg_emissions["trace"] - avg_emissions["gfw"]) / avg_emissions["gfw"]
) * 100
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="2%", pad=0.2)
avg_emissions.plot("pdiff", ax=ax, cax=cax, **kwargs)
ax.set_title("% difference from Zarin", fontsize=axis_name_size)
ax.set_xlabel("Longitude", fontsize=axis_name_size)
ax.set_ylabel("Latitude", fontsize=axis_name_size)
plt.tight_layout()
plt.savefig("gfw_map.png")
```
## Part 3 - Compare fire emissions
```
# CarbonPlan's emissions
emissions = {}
versions = ["v0.4"]
for version in versions:
for mechanism in ["fire"]:
emissions[version + "-" + mechanism] = pd.read_csv(
"s3://carbonplan-climatetrace/{}/country_rollups_emissions_from_{}.csv".format(
version, mechanism
)
)
# Blue Sky Fire emissions
emissions["Blue Sky"] = pd.read_csv("forest-fires_bsa.csv")
emissions[f"{version}-fire"]
emissions["Blue Sky"]
version = "v0.4"
comparison = pd.merge(
emissions[f"{version}-fire"].rename({"tCO2eq": "CarbonPlan"}, axis=1),
emissions["Blue Sky"].rename({"tCO2": "BSA"}, axis=1),
how="inner", # "left",
left_on=["iso3_country", "begin_date"],
right_on=["iso3_country", "begin_date"],
)
comparison["BSA"] /= 1e6
comparison["CarbonPlan"] /= 1e6
comparison["year"] = pd.to_datetime(comparison.begin_date).dt.year
comparison["BSA"] = comparison.BSA.fillna(0)
r2_score(comparison.BSA, comparison.CarbonPlan)
(comparison.CarbonPlan - comparison.BSA).mean()
(comparison.CarbonPlan <= comparison.BSA).mean()
len(comparison.iso3_country.unique())
xmin = 1e-4
xmax = 1e4
plt.figure(figsize=(5, 5))
plt.plot([xmin, xmax], [xmin, xmax], "0.5")
comparison.plot.scatter("BSA", "CarbonPlan", ax=plt.gca())
plt.gca().set_xscale("log")
plt.gca().set_yscale("log")
plt.ylabel("CarbonPlan [Mt CO$_2$ / yr]", fontsize=axis_name_size)
plt.xlabel("BSA [Mt CO$_2$ / yr]", fontsize=axis_name_size)
plt.yticks()
plt.xlim(xmin, xmax)
plt.ylim(xmin, xmax)
plt.title("Forest fire emissions", fontsize=axis_name_size)
plt.savefig("bsa_scatter.png", bbox_inches="tight")
avg_yr = comparison.groupby("iso3_country").mean()
xmin = 1e-4
xmax = 1e4
plt.figure(figsize=(5, 5))
plt.plot([xmin, xmax], [xmin, xmax], "0.5")
avg_yr.plot.scatter("BSA", "CarbonPlan", ax=plt.gca())
plt.gca().set_xscale("log")
plt.gca().set_yscale("log")
plt.ylabel("CarbonPlan [Mt CO$_2$ / yr]", fontsize=axis_name_size)
plt.xlabel("BSA [Mt CO$_2$ / yr]", fontsize=axis_name_size)
plt.xlim(xmin, xmax)
plt.ylim(xmin, xmax)
plt.title("Forest fire emissions", fontsize=axis_name_size)
plt.tight_layout()
plt.savefig("bsa_scatter_avg.png")
comparison.head()
comparison.loc[comparison.iso3_country.isin(["RUS", "USA"])]
comparison.loc[comparison.iso3_country.isin(["BRA"])]
emissions["Mt CO2"] = emissions.tCO2eq / 1e6
sub = emissions.loc[(emissions.iso3_country == "LKA"), ["begin_date", "Mt CO2", "iso3_country"]]
sub["year"] = pd.to_datetime(sub.begin_date).dt.year
plt.plot(sub.year, sub["Mt CO2"], "o-")
plt.xticks([2001, 2005, 2010, 2015, 2020], [2001, 2005, 2010, 2015, 2020])
plt.ylabel("Mt CO2")
plt.grid()
sub[["iso3_country", "year", "Mt CO2"]]
```
# Object Oriented Programming (OOP)
### classes and attributes
```
# definition of a class object
class vec3:
pass
# instance of the vec3 class object
a = vec3()
# add some attributes to the instance a
a.x = 1
a.y = 2
a.z = 2.5
print(a)
print(a.z)
print(a.__dict__)
# another instance of the vec3 class object
b = vec3()
print(b)
print(b.__dict__)
class vec2:
pass
print(isinstance(a, vec3))
print(isinstance(b, vec3))
print(isinstance(a, vec2))
# all vec3 instances should have the attributes x, y, z
class vec3:
# attributes
x = 1
y = 2
z = 2.5
a = vec3()
b = vec3()
print(a, a.__dict__)
print(b, b.__dict__)
# !!! Neither a nor b has x, y, or z! Huh?
# the class vec3 owns x, y and z!
print(vec3.__dict__)
# but a and b still have access to x, y and z
print(a.x, a.y, a.z)
print(b.x, b.y, b.z)
# this changes z for all vec3 instances
vec3.z = 3
print(vec3.__dict__)
print(a.x, a.y, a.z)
print(b.x, b.y, b.z)
# what if we change z only for a?
a.z = 7
print(vec3.__dict__)
print(a.x, a.y, a.z)
print(b.x, b.y, b.z)
# a now has both a class level attribute z and its own attribute z!
print(a.__dict__)
# if we get rid of a.z, a will default back to vec3.z
del a.__dict__['z']
print(a.__dict__)
print(a.x, a.y, a.z)
```
### initialization
```
# class initialization
class vec3:
""" __init__() is a method of vec3 (i.e. a function belonging to the class vec3).
It is called whenever we create a new instance of a vec3 object.
"""
def __init__(self):
self.x = 10
self.y = 20
self.z = 30
a = vec3()
print(vec3.__dict__)
print(a.__dict__)
# a and b are two separate instances of the vec3 object
b = vec3()
print(a)
print(b)
a.x = 5
print(a.__dict__)
print(b.__dict__)
# passing arguments during class instantiation
class vec3:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
a = vec3(2, 4, 6)
print(a.__dict__)
class vec3:
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
a = vec3()
b = vec3(1, 2, 3)
print(a.__dict__)
print(b.__dict__)
```
### methods
```
# class methods are just functions (e.g. __init__) that are wrapped up into the class
class vec3:
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
def translate(self, dx, dy, dz):
self.x += dx
self.y += dy
self.z += dz
a = vec3(1, 2, 3)
print(a.__dict__)
a.translate(10, 10, -10)
print(a.__dict__)
# two ways to call a class method
a = vec3(1, 2, 3)
vec3.translate(a, 10, 10, -10)
print(a.__dict__)
a = vec3(1, 2, 3)
a.translate(10, 10, -10)
print(a.__dict__)
```
### special methods
```
class vec3:
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
def __repr__(self):
return f"({self.x}, {self.y}, {self.z})"
def translate(self, dx, dy, dz):
self.x += dx
self.y += dy
self.z += dz
a = vec3(1, 2, 3)
print(a)
```
### inheritance
```
# vec4 inherits all of vec3's functionality
class vec4(vec3):
pass
a = vec4()
print(a.__dict__)
a = vec4(1, 2, 3)
print(a.__dict__)
a.translate(10, 10, -10)
print(a.__dict__)
print(issubclass(vec4, vec2))
print(issubclass(vec4, vec3))
# vec4 extends vec3's functionality
class vec4(vec3):
""" vec4 instantiation will use this __init__ instead of vec3's
"""
def __init__(self):
self.w = 1
a = vec4()
print(a.__dict__)
class vec4(vec3):
def __init__(self, x=0, y=0, z=0, w=1):
vec3.__init__(self, x, y, z) # or you could use `super().__init__(x, y, z)`
self.w = w
a = vec4()
print(a.__dict__)
a = vec4(1, 2, 3)
print(a.__dict__)
a = vec4(1, 2, 3, 0)
print(a.__dict__)
print(help(vec4))
print(help(vec3))
# all classes inherit from builtins.object by default
class tmp1(object):
pass
class tmp2():
pass
print(help(tmp1))
print(help(tmp2))
```
### Exercise: Create a class
### Diabetes dataset
```
import numpy as np
from sklearn import datasets
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
y -= y.mean()
features = "age sex bmi map tc ldl hdl tch ltg glu".split()
```
### OLS regression class
```
from sklearn.linear_model import LinearRegression
class MyLinearRegression:
def __init__(self, X, y):
self.data = X
self.target = y
self.model = LinearRegression()
self.fit()
def fit(self):
self.model.fit(self.data, self.target)
return self.predict(self.data)
def predict(self, X):
return self.model.predict(X)
def params(self):
return self.model.coef_
def getMSE(self, X, y):
return np.mean((y - self.predict(X))**2)
def getR2(self, X, y):
return self.model.score(X, y)
mymodel = MyLinearRegression(X, y)
print(mymodel.params())
print(f"MSE = {mymodel.getMSE(X, y)}")
print(f"R^2 = {mymodel.getR2(X, y)}")
```
### Exercise: Add a method to MyLinearRegression that automatically loads the diabetes data set.
### Exercise: Add a method to MyLinearRegression that plots the slope factors in a bar graph.
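One possible sketch for the plotting exercise (assuming matplotlib is available; the subclass and method names here are just suggestions, not part of the original notebook):
```
import matplotlib.pyplot as plt

class MyLinearRegressionWithPlot(MyLinearRegression):
    def plot_params(self, feature_names):
        # bar chart of the fitted slope coefficients
        plt.bar(feature_names, self.params())
        plt.ylabel("coefficient")
        plt.xticks(rotation=45)
        plt.show()

MyLinearRegressionWithPlot(X, y).plot_params(features)
```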
### Exercise: Ridge regression class
```
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_validate, GridSearchCV
class MyRidgeRegression:
    def __init__(self, X, y, alphas):
        pass  # TODO: fit a Ridge model, e.g. selecting alpha with GridSearchCV
```
### Exercise: KNN regression class
```
from sklearn import neighbors
class MyKNNRegression:
    def __init__(self, X, y, K):
        pass  # TODO: fit a neighbors.KNeighborsRegressor with n_neighbors=K
```
```
from py2neo import Graph,Node,Relationship
import pandas as pd
import os
import QUANTAXIS as QA
import datetime
import numpy as np
import statsmodels.formula.api as sml
from QAStrategy.qastockbase import QAStrategyStockBase
import matplotlib.pyplot as plt
import scipy.stats as scs
import matplotlib.mlab as mlab
from easyquant.indicator.base import *
import json
from easyquant import MongoIo
import statsmodels.api as sm
from multiprocessing import Process, Pool, cpu_count, Manager
mongo = MongoIo()
def tdx_base_func(data, code_list = None):
"""
    Prepare buy/sell signal flags for the given price data
"""
# highs = data.high
# start_t = datetime.datetime.now()
# print("begin-tdx_base_func:", start_t)
if len(data) < 10:
data = data.copy()
data['bflg'] = 0
data['sflg'] = 0
return data
CLOSE=data.close
C=data.close
# df_macd = MACD(C,12,26,9)
# mtj1 = IFAND(df_macd.DIFF < 0, df_macd.DEA < 0, 1, 0)
# mtj2 = IFAND(mtj1, df_macd.MACD < 0, 1, 0)
    花 = SLOPE(EMA(C, 3), 3)   # slope of the 3-day EMA (fast line)
    神 = SLOPE(EMA(C, 7), 7)   # slope of the 7-day EMA (slow line)
    买 = IFAND(COUNT(花 < 神, 5)==4 , 花 >= 神,1,0)   # buy signal: fast slope lagged the slow one in 4 of the last 5 bars and has now turned up
    卖 = IFAND(COUNT(花 >= 神, 5)==4, 花 < 神,1,0)    # sell signal: fast slope led the slow one in 4 of the last 5 bars and has now turned down
    钻石 = IFAND(CROSS(花, 神), CLOSE / REF(CLOSE, 1) > 1.03, 1, 0)   # "diamond": fast crosses above slow with a >3% daily gain
    买股 = IFAND(买, 钻石,1,0)   # final buy flag: buy signal and diamond together
# 买股 = IFAND(mtj2, 买股1, 1, 0)
# AND(CROSS(花, 神)
# AND
# CLOSE / REF(CLOSE, 1) > 1.03);
# return pd.DataFrame({'FLG': 后炮}).iloc[-1]['FLG']
# return 后炮.iloc[-1]
    # slope
data = data.copy()
# data['bflg'] = IF(REF(后炮,1) > 0, 1, 0)
data['bflg'] = 买股
data['sflg'] = 卖
# print("code=%s, bflg=%s" % (code, data['bflg'].iloc[-1]))
# data['beta'] = 0
# data['R2'] = 0
# beta_rsquared = np.zeros((len(data), 2),)
#
# for i in range(N - 1, len(highs) - 1):
# #for i in range(len(highs))[N:]:
# df_ne = data.iloc[i - N + 1:i + 1, :]
# model = sml.ols(formula='high~low', data = df_ne)
# result = model.fit()
#
# # beta = low
# beta_rsquared[i + 1, 0] = result.params[1]
# beta_rsquared[i + 1, 1] = result.rsquared
#
# data[['beta', 'R2']] = beta_rsquared
    # daily return
data['ret'] = data.close.pct_change(1)
    # standard score
# data['beta_norm'] = (data['beta'] - data.beta.rolling(M).mean().shift(1)) / data.beta.rolling(M).std().shift(1)
#
# beta_norm = data.columns.get_loc('beta_norm')
# beta = data.columns.get_loc('beta')
# for i in range(min(M, len(highs))):
# data.iat[i, beta_norm] = (data.iat[i, beta] - data.iloc[:i - 1, beta].mean()) / data.iloc[:i - 1, beta].std() if (data.iloc[:i - 1, beta].std() != 0) else np.nan
# data.iat[2, beta_norm] = 0
# data['RSRS_R2'] = data.beta_norm * data.R2
# data = data.fillna(0)
#
    # # right-skewed standard score
# data['beta_right'] = data.RSRS_R2 * data.beta
# if code == '000732':
# print(data.tail(22))
return data
def buy_sell_fun(price, S1=1.0, S2=0.8):
"""
    Slope-indicator trading strategy (standard-score based buy/sell simulation)
"""
data = price.copy()
    data['flag'] = 0 # buy/sell flag
    data['position'] = 0 # position flag
    data['hold_price'] = 0 # holding price
bflag = data.columns.get_loc('bflg')
sflag = data.columns.get_loc('sflg')
# beta = data.columns.get_loc('beta')
flag = data.columns.get_loc('flag')
position_col = data.columns.get_loc('position')
close_col = data.columns.get_loc('close')
high_col = data.columns.get_loc('high')
open_col = data.columns.get_loc('open')
hold_price_col = data.columns.get_loc('hold_price')
    position = 0 # whether we hold a position: 1 = holding, 0 = flat
for i in range(1,data.shape[0] - 1):
        # open a position
if data.iat[i, bflag] > 0 and position == 0:
data.iat[i, flag] = 1
data.iat[i, position_col] = 1
data.iat[i, hold_price_col] = data.iat[i, open_col]
data.iat[i + 1, position_col] = 1
data.iat[i + 1, hold_price_col] = data.iat[i, open_col]
position = 1
print("buy : date=%s code=%s price=%.2f" % (data.iloc[i].name[0], data.iloc[i].name[1], data.iloc[i].close))
code = data.iloc[i].name[1]
price = data.iloc[i].close
# qa_order.send_order('BUY', 'OPEN', code=code, price=price, volume=1000)
# order.send_order('SELL', 'CLOSE', code=code, price=price, volume=1000)
        # close the position
# elif data.iat[i, bflag] == S2 and position == 1:
elif data.iat[i, position_col] > 0 and position == 1:
cprice = data.iat[i, close_col]
# oprice = data.iat[i, open_col]
hole_price = data.iat[i, hold_price_col]
high_price = data.iat[i, high_col]
if cprice < hole_price * 0.95:# or cprice > hprice * 1.2:
data.iat[i, flag] = -1
data.iat[i + 1, position_col] = 0
data.iat[i + 1, hold_price_col] = 0
position = 0
print("sell : code=%s date=%s price=%.2f" % (data.iloc[i].name[0], data.iloc[i].name[1], data.iloc[i].close))
code = data.iloc[i].name[1]
price = data.iloc[i].close
# order.send_order('BUY', 'OPEN', code=code, price=price, volume=1000)
# qa_order.send_order('SELL', 'CLOSE', code=code, price=price, volume=1000)
elif cprice > hole_price * 1.1 and high_price / cprice > 1.05:
data.iat[i, flag] = -1
data.iat[i + 1, position_col] = 0
data.iat[i + 1, hold_price_col] = 0
position = 0
print("sell : code=%s date=%s price=%.2f" % (data.iloc[i].name[0], data.iloc[i].name[1], data.iloc[i].close))
code = data.iloc[i].name[1]
price = data.iloc[i].close
# order.send_order('BUY', 'OPEN', code=code, price=price, volume=1000)
# qa_order.send_order('SELL', 'CLOSE', code=code, price=price, volume=1000)
elif cprice > hole_price * 1.2 and high_price / cprice > 1.06:
data.iat[i, flag] = -1
data.iat[i + 1, position_col] = 0
data.iat[i + 1, hold_price_col] = 0
position = 0
print("sell : code=%s date=%s price=%.2f" % (data.iloc[i].name[0], data.iloc[i].name[1], data.iloc[i].close))
code = data.iloc[i].name[1]
price = data.iloc[i].close
# order.send_order('BUY', 'OPEN', code=code, price=price, volume=1000)
# qa_order.send_order('SELL', 'CLOSE', code=code, price=price, volume=1000)
elif data.iat[i, sflag] > 0:
data.iat[i, flag] = -1
data.iat[i + 1, position_col] = 0
data.iat[i + 1, hold_price_col] = 0
position = 0
print("sell : code=%s date=%s price=%.2f" % (data.iloc[i].name[0], data.iloc[i].name[1], data.iloc[i].close))
code = data.iloc[i].name[1]
price = data.iloc[i].close
# order.send_order('BUY', 'OPEN', code=code, price=price, volume=1000)
# qa_order.send_order('SELL', 'CLOSE', code=code, price=price, volume=1000)
else:
data.iat[i + 1, position_col] = data.iat[i, position_col]
data.iat[i + 1, hold_price_col] = data.iat[i, hold_price_col]
        # hold the current position
else:
data.iat[i + 1, position_col] = data.iat[i, position_col]
data.iat[i + 1, hold_price_col] = data.iat[i, hold_price_col]
data['nav'] = (1+data.close.pct_change(1).fillna(0) * data.position).cumprod()
data['nav1'] = data.close * data.position
return data
df=mongo.get_stock_day('600718')
df.tail()
df1=tdx_base_func(df)
data1=buy_sell_fun(df1)
df1.tail()
data1.loc['2018-04-10':]
a = np.array([10,11,13,15,12,7,14])
ap = np.array([1,1,1,1,1,0,0])
b = np.array([1.2,1.1,1.8,1.5,1.2,0.7,1.4])
# a = np.array([[10,11,13,15,12,7,14],[10,11,18,15,12,7,14]])
dfn=pd.Series(a)
dfb=pd.Series(b)
df=pd.DataFrame()
df['a']=pd.Series(a)
df['ap']=pd.Series(ap)
df
bc1=(1+dfn.pct_change(1).fillna(0)).cumprod()
bc2=(1+dfb.pct_change(1).fillna(0)).cumprod()
bc2
result01=bc2
round(float(max([(result01.iloc[idx] - result01.iloc[idx::].min()) / result01.iloc[idx] for idx in range(len(result01))])), 2)
bc1
```
# Integrating TAO Models in DeepStream
In the first of two notebooks, we will be building a 4-class object detection pipeline as shown in the illustration below using Nvidia's TrafficCamNet pretrained model, directly downloaded from NGC.
Note: This notebook has code inspired from a sample application provided by NVIDIA in a GitHub repository. You can find this repository [here](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps).
## The Pipeline

We notice there are multiple DeepStream plugins used in the pipeline. Let us have a look at them and try to understand them.
## NVIDIA DeepStream Plugins
### Nvinfer
The nvinfer plugin provides [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html)-based inference for detection and tracking. The low-level library (libnvds_infer) operates either on float RGB or BGR planar data with dimensions of Network Height and Network Width. The plugin accepts NV12/RGBA data from upstream components like the decoder, muxer, and dewarper.
The Gst-nvinfer plugin also performs preprocessing operations like format conversion, scaling, mean subtraction, and produces final float RGB/BGR planar data which is passed to the low-level library. The low-level library uses the TensorRT engine for inferencing. It outputs each classified object’s class and each detected object’s bounding boxes (Bboxes) after clustering.

### Nvvidconv
We create the nvvidconv plugin to perform color format conversions, which are required to make the data ready for the nvosd plugin.

### Nvosd
The nvosd plugin draws bounding boxes, text, and RoI (Regions of Interest) polygons (Polygons are presented as a set of lines). The plugin accepts an RGBA buffer with attached metadata from the upstream component. It
draws bounding boxes, which may be shaded depending on the configuration (e.g. width, color, and opacity) of a given bounding box. It also draws text and RoI polygons at specified locations in the frame. Text and polygon parameters are configurable through metadata.

With this idea in mind, let us get started on building the pipeline.
# Building the pipeline

```
# Import Required Libraries
import sys
sys.path.append('../source_code')
import gi
import time
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, GLib
from common.bus_call import bus_call
import pyds
# Defining the Class Labels
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
# Defining the input output video file
INPUT_VIDEO_NAME = '../videos/sample_720p.h264'
OUTPUT_VIDEO_NAME = "../videos/out.mp4"
```
We define a function `make_elm_or_print_err()` to create our elements and report any errors if the creation fails.
Elements are created using the `Gst.ElementFactory.make()` function as part of Gstreamer library.
```
## Make Element or Print Error and any other detail
def make_elm_or_print_err(factoryname, name, printedname, detail=""):
print("Creating", printedname)
elm = Gst.ElementFactory.make(factoryname, name)
if not elm:
sys.stderr.write("Unable to create " + printedname + " \n")
if detail:
sys.stderr.write(detail)
return elm
```
#### Initialise GStreamer and Create an Empty Pipeline
```
# Standard GStreamer initialization
Gst.init(None)
# Create Gstreamer elements
# Create Pipeline element that will form a connection of other elements
print("Creating Pipeline \n ")
pipeline = Gst.Pipeline()
if not pipeline:
sys.stderr.write(" Unable to create Pipeline \n")
```
#### Create Elements that are required for our pipeline
```
# Creating elements required for the pipeline
# Source element for reading from file
source = make_elm_or_print_err("filesrc", "file-source","Source")
# Parse the data since the input is an elementary .h264 stream
h264parser = make_elm_or_print_err("h264parse", "h264-parser","h264 parse")
# For hardware accelerated decoding of the stream
decoder = make_elm_or_print_err("nvv4l2decoder", "nvv4l2-decoder","Nvv4l2 Decoder")
# Form batches from one or more sources
streammux = make_elm_or_print_err("nvstreammux", "Stream-muxer",'NvStreamMux')
# Run inference on the decoded stream, this property is set through a configuration file later
pgie = make_elm_or_print_err("nvinfer", "primary-inference" ,"pgie")
# Convert output stream to formatted buffer accepted by Nvosd
nvvidconv = make_elm_or_print_err("nvvideoconvert", "convertor","nvvidconv")
# Draw on the buffer
nvosd = make_elm_or_print_err("nvdsosd", "onscreendisplay","nvosd")
# Encode and save the OSD output
queue = make_elm_or_print_err("queue", "queue", "Queue")
# Convert output for saving
nvvidconv2 = make_elm_or_print_err("nvvideoconvert", "convertor2","nvvidconv2")
# Save as video file
encoder = make_elm_or_print_err("avenc_mpeg4", "encoder", "Encoder")
# Parse output from encoder
codeparser = make_elm_or_print_err("mpeg4videoparse", "mpeg4-parser", 'Code Parser')
# Create a container
container = make_elm_or_print_err("qtmux", "qtmux", "Container")
# Create sink for storing the output
sink = make_elm_or_print_err("filesink", "filesink", "Sink")
```
Now that we have created the elements, we can set various properties for our pipeline.
### Understanding the configuration file
We set a `config-file-path` for our nvinfer (inference plugin), and it points to the file `config_infer_primary_trafficcamnet.txt`.
You can have a look at the [file](../configs/config_infer_primary_trafficcamnet.txt).
Here are some parts of the configuration file:
```
# Copyright (c) 2020 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=../models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
labelfile-path=labels_trafficnet.txt
int8-calib-file=../models/trafficcamnet/trafficnet_int8.bin
model-engine-file=../models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.2
#minBoxes=3
```
Here we define all the parameters of our model. In this example we use the model file `resnet18_trafficcamnet_pruned`. `Nvinfer` builds a TensorRT engine specific to the host GPU to accelerate its inference performance.
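If you want to inspect or tweak one of these properties programmatically (for example, switching `network-mode` between FP16 and INT8) before building the pipeline, the file can be read with Python's `configparser`. This is only an optional sketch; note that `configparser` does not preserve comments if you rewrite the file.
```
import configparser

# Sketch: read the nvinfer config and inspect or change a property.
cfg_path = "../configs/config_infer_primary_trafficcamnet.txt"
cfg = configparser.ConfigParser()
cfg.read(cfg_path)

print(dict(cfg["property"]))            # all model parameters as a dict
cfg["property"]["network-mode"] = "1"   # 0=FP32, 1=INT8, 2=FP16

# Writing the file back is optional; comments in the original file are not preserved.
# with open(cfg_path, "w") as f:
#     cfg.write(f)
```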
```
# Set properties for elements
print("Playing file %s" %INPUT_VIDEO_NAME)
# Set input file
source.set_property('location', INPUT_VIDEO_NAME)
# Set input height, width, and batch size
streammux.set_property('width', 1920)
streammux.set_property('height', 1080)
streammux.set_property('batch-size', 1)
# Set timer (in microseconds) to wait after the first buffer is available
# to push the batch even if batch is never completely formed
streammux.set_property('batched-push-timeout', 4000000)
# Set configuration files for Nvinfer
pgie.set_property('config-file-path', "../configs/config_infer_primary_trafficcamnet.txt")
# Set encoder bitrate for output video
encoder.set_property("bitrate", 2000000)
# Set output file location, disable sync and async
sink.set_property("location", OUTPUT_VIDEO_NAME)
sink.set_property("sync", 0)
sink.set_property("async", 0)
```
We now link all the elements in the required order and create a GStreamer bus to feed all messages through it.
```
# Add and link all elements to the pipeline
# Adding elements
print("Adding elements to Pipeline \n")
pipeline.add(source)
pipeline.add(h264parser)
pipeline.add(decoder)
pipeline.add(streammux)
pipeline.add(pgie)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
pipeline.add(queue)
pipeline.add(nvvidconv2)
pipeline.add(encoder)
pipeline.add(codeparser)
pipeline.add(container)
pipeline.add(sink)
# Linking elements
# Order: source -> h264parser -> decoder -> streammux -> pgie ->
# -> vidconv -> osd -> queue -> vidconv2 -> encoder -> parser ->
# -> container -> sink
print("Linking elements in the Pipeline \n")
source.link(h264parser)
h264parser.link(decoder)
sinkpad = streammux.get_request_pad("sink_0")
if not sinkpad:
sys.stderr.write(" Unable to get the sink pad of streammux \n")
# Create source pad from Decoder
srcpad = decoder.get_static_pad("src")
if not srcpad:
sys.stderr.write(" Unable to get source pad of decoder \n")
srcpad.link(sinkpad)
streammux.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(queue)
queue.link(nvvidconv2)
nvvidconv2.link(encoder)
encoder.link(codeparser)
codeparser.link(container)
container.link(sink)
# Create an event loop and feed GStreamer bus messages to it
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)
```
## Working with the Metadata
Our pipeline now carries the metadata forward, but we have not done anything with it so far. As mentioned in the pipeline diagram above, we will now create a callback function that writes relevant data on the frame, and attach it as a probe to the sink pad of the `nvosd` element so that it is called for every buffer.
```
# Working with metadata
def osd_sink_pad_buffer_probe(pad,info,u_data):
obj_counter = {
PGIE_CLASS_ID_VEHICLE:0,
PGIE_CLASS_ID_PERSON:0,
PGIE_CLASS_ID_BICYCLE:0,
PGIE_CLASS_ID_ROADSIGN:0
}
# Reset frame number and number of rectangles to zero
frame_number=0
num_rects=0
gst_buffer = info.get_buffer()
if not gst_buffer:
print("Unable to get GstBuffer ")
return
# Retrieve metadata from gst_buffer
# Note: since we use the pyds shared object library,
# the input is the C address of gst_buffer
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
try:
frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
except StopIteration:
break
# Get frame number, number of rectangles to draw and object metadata
frame_number=frame_meta.frame_num
num_rects = frame_meta.num_obj_meta
l_obj=frame_meta.obj_meta_list
while l_obj is not None:
try:
obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
except StopIteration:
break
# Increment the counter for this object's class and set the box border color (RGBA)
obj_counter[obj_meta.class_id] += 1
obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
try:
l_obj=l_obj.next
except StopIteration:
break
# Setting metadata display configuration
# Acquire display meta object
display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
display_meta.num_labels = 1
py_nvosd_text_params = display_meta.text_params[0]
# Set display text to be shown on screen
py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])
# Set where the string will appear
py_nvosd_text_params.x_offset = 10
py_nvosd_text_params.y_offset = 12
# Font, font colour and font size
py_nvosd_text_params.font_params.font_name = "Serif"
py_nvosd_text_params.font_params.font_size = 10
# Set color (We used white)
py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)
# Set text background colour (We used black)
py_nvosd_text_params.set_bg_clr = 1
py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
# Print the display text in the console as well
print(pyds.get_string(py_nvosd_text_params.display_text))
pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
try:
l_frame=l_frame.next
except StopIteration:
break
return Gst.PadProbeReturn.OK
# Adding probe to sinkpad of the OSD element
osdsinkpad = nvosd.get_static_pad("sink")
if not osdsinkpad:
sys.stderr.write(" Unable to get sink pad of nvosd \n")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
```
Now with everything defined, we can start the playback and listen to the events.
```
# Start the pipeline
print("Starting pipeline \n")
start_time = time.time()
pipeline.set_state(Gst.State.PLAYING)
try:
loop.run()
except:
pass
# Cleanup
pipeline.set_state(Gst.State.NULL)
print("--- %s seconds ---" % (time.time() - start_time))
```
The video output is not in a format that can be displayed in this notebook. To circumvent this, we convert the output to a Jupyter-readable format using the shell command `ffmpeg`.
```
# Convert video profile to be compatible with the Notebook
!ffmpeg -loglevel panic -y -an -i ../videos/out.mp4 -vcodec libx264 -pix_fmt yuv420p -profile:v baseline -level 3 ../videos/output.mp4
```
Finally, we display the output in the notebook by creating an HTML video element.
```
# Display the Output
from IPython.display import HTML
HTML("""
<video width="640" height="480" controls>
<source src="../videos/output.mp4" type="video/mp4">
</video>
""".format())
```
In the next notebook, we will learn about object tracking and build an attribute classification pipeline on top of the primary inference built in this notebook.
|
github_jupyter
|
# Importing libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression, BayesianRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn import linear_model
```
# Importing first databases
Radars.csv contains all cars, trucks, motorcycles, and buses that come through São Paulo's radar system.
```
df_base = pd.read_csv(r"D:\\Users\\guilh\\Documents\\GitHub\\Dados_CET\\Marco_2018_nAg\\2_nAg.csv", index_col= "Data")
df_base.head()
```
### Reading the columns
* **Radar** is the number of identification of a street section
* **Lane** goes from 1 to 6 in most radars; low lane numbers are closer to the center of the freeway, while high lane numbers are "local" lanes, to the right
* **Register** represents each vehicle
* **Types** are: motorcycle = 0, car = 1, bus = 2, or truck = 3
* **Classes** are: *light* (motorcycle and car) = 0 or *heavy* (bus and truck) = 1
* **Speeds** are in kilometers per hour
* **Radar_Lane** identifies each lane of a single radar (useful for merging dataframes)
```
# Preprocessing
df = df_base[["Numero Agrupado", "Faixa", "Registro", "Especie", "Classe", "Velocidade"]]
# turns speed from dm/s to km/h
df["Velocidade"] = df["Velocidade"] * 0.36
df.index.names = ["Date"]
df["Radar_Lane"] = df["Numero Agrupado"].astype(str) + df["Faixa"].astype(str)
# renaming columns to english
df.columns = ["Radar", "Lane", "Register", "Type", "Class", "Speed [km/h]", "Radar_Lane"]
df.head()
```
### Lane types database
Helps to tell the **use of each lane**.
"Tipo" indicates which lanes all types of vehicles can use (*mix_use*) and which are for buses only (*exclusive_bus*).
```
lane_types = pd.read_excel(r"D:\Users\guilh\Documents\[POLI]_6_Semestre\IC\2021\codigos olimpio\\Faixa Tipo.xlsx", usecols = ["Num_agrupado","faixa", "Num_fx","tipo"],engine='openpyxl')
lane_types.head()
```
### Merge dataframes
To identify the type of each lane: exclusive for buses, or mixed use.
```
df_merged = lane_types[["Num_fx", "tipo"]].merge(df, left_on = "Num_fx", right_on = "Radar_Lane", how="right")
df_merged["Lane_use"] = df_merged["tipo"].map({"mista":"mix_use", "onibus": "exclusive_bus"})
df_merged = df_merged[["Radar", "Lane", "Register", "Type", "Class", "Speed [km/h]", "Lane_use"]]
df_merged.head()
```
### Looking for NaNs
As shown below, NaNs account for less than 1% of the rows (actually, less than 0.2%).
Given this, there is little loss in dropping the NaNs.
```
print(df_merged.isna().mean() *100)
df_merged.dropna(inplace=True)
```
### Selection of Lanes
Using only the data from mix_use lanes, split the data by lane to create a comparison.
The maximum number of lanes is 6, but only a few roads have all 6 lanes, so lane 6 can be excluded from the analysis.
```
lanes = df_merged.loc[df_merged["Lane_use"] == "mix_use"]
lane_1 = lanes.loc[lanes["Lane"] == 1]
lane_2 = lanes.loc[lanes["Lane"] == 2]
lane_3 = lanes.loc[lanes["Lane"] == 3]
lane_4 = lanes.loc[lanes["Lane"] == 4]
lane_5 = lanes.loc[lanes["Lane"] == 5]
lane_6 = lanes.loc[lanes["Lane"] == 6]
print(lane_1.shape, lane_2.shape, lane_3.shape, lane_4.shape, lane_5.shape, lane_6.shape)
```
### Plotting the means
```
means = []
for lane in [lane_1,lane_2,lane_3,lane_4,lane_5]:
means.append(lane["Speed [km/h]"].mean())
means = [ round(elem, 2) for elem in means ]
fig, ax = plt.subplots()
rects = ax.bar([1,2,3,4,5],means, width= 0.5)
ax.set_ylabel("Speed [km/h]")
ax.set_xlabel("Lanes")
ax.set_title('Speeds per lane')
def autolabel(rects):
"""Attach a text label above each bar in *rects*, displaying its height."""
for rect in rects:
height = rect.get_height()
ax.annotate('{}'.format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 3), # 3 points vertical offset
textcoords="offset points",
ha='center', va='bottom')
autolabel(rects)
plt.show()
```
# How can we predict the speed of a new vehicle?
```
df_regression = df_base[["Numero Agrupado", "Faixa", "Registro", "Especie", "Classe", "Velocidade", "Comprimento"]]
df_regression.loc[:,"Comprimento"] = df_regression.loc[:,"Comprimento"] /10
df_regression.loc[:,"Velocidade"] = df_regression.loc[:,"Velocidade"] * 0.36
```
### Reading the columns
The regression dataframe keeps the same columns described above (Radar, Lane, Register, Type, Class, Speed), with one addition:
* **Length** is the vehicle length (the raw `Comprimento` value divided by 10)
```
df_regression.columns = ["Radar", "Lane", "Register", "Type", "Class", "Speed [km/h]", "Length"]
Validation = df_regression.loc[df_regression["Speed [km/h]"].isna()]
X = df_regression[["Lane", "Type", "Class", "Length"]].dropna()
X = pd.concat([pd.get_dummies(X[["Lane", "Type", "Class"]].astype("object")),X["Length"]], axis=1)
y = df_regression["Speed [km/h]"].dropna()
X.head()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
lr = LinearRegression(normalize=True)
lr.fit(X_train, y_train)
pred = lr.predict(X_test)
print(lr.score(X_train, y_train),lr.score(X_test, y_test))
corrMatrix = df_regression[["Lane", "Type", "Speed [km/h]","Length"]].corr()
display (corrMatrix)
sns.heatmap(corrMatrix, annot=True)
plt.show()
```
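The `Validation` dataframe above holds the rows whose speed is missing. As a rough sketch (not part of the original analysis), those speeds could be predicted with the fitted model by building the same dummy-encoded features and aligning them with the training columns:
```
# Sketch: predict the missing speeds, assuming Lane, Type, Class, and Length are present
X_val = pd.concat(
    [pd.get_dummies(Validation[["Lane", "Type", "Class"]].astype("object")),
     Validation["Length"]],
    axis=1,
).reindex(columns=X.columns, fill_value=0)  # align dummy columns with the training set
Validation = Validation.assign(**{"Predicted speed [km/h]": lr.predict(X_val)})
Validation.head()
```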
|
github_jupyter
|
```
import matplotlib as mpl
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import nltk
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
# configuring matplotlib
plt.rcParams['axes.titlesize'] = 24
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['figure.figsize'] = (15, 10)
RANDOM_STATE = 42
np.random.seed(RANDOM_STATE)
```
## For augmenting the dataset
### Random Deletion
```
# random deletion using list.pop()
import random
p = [1, 23,4 ,5, 34, 35, 23, 54, 645, 53]
random.randrange(len(p))
def delete_random(text):
text = text.split(" ")
random_index = random.randrange(len(text))
text.pop(random_index)
text = " ".join(text)
return text
delete_random('I feel guilty when when I realize that I consider material things more important than caring for my relatives. I feel very self-centered.')
```
### Random swap
```
# Random swap
def swap_random(text):
text = text.split(" ")
idx = range(len(text))
i1, i2 = random.sample(idx, 2)
text[i1], text[i2] = text[i2], text[i1]
text = " ".join(text)
return text
swap_random("I feel guilty when when I realize that I consider material things more important than caring for my relatives. I feel very self-centered.")
```
### Lemmatization
Note: the `lemmatize` helper below actually applies Porter stemming (NLTK's `PorterStemmer`), which serves a similar text-normalization purpose.
```
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem import PorterStemmer
porter = PorterStemmer()
def lemmatize(text):
sentences = sent_tokenize(text)
stem_sentence=[]
for sent in sentences:
token_words=word_tokenize(sent)
for word in token_words:
stem_sentence.append(porter.stem(word))
stem_sentence.append(" ")
return "".join(stem_sentence)
lemmatize("I feel guilty when when I realize that I consider material things more important than caring for my relatives. I feel very self-centered.")
raw_data = pd.read_csv('../data/raw/ISEAR.csv', header=None)
raw_data.head(15)
raw_data.columns = ['index', 'sentiment', 'text']
raw_data.set_index('index')
raw_data.head()
raw_data['text'][6]
```
### Remove newline character
```
raw_data['text'] = raw_data['text'].apply(lambda x: x.replace('\n', ''))
raw_data['text'][6]
```
## Convert text to lowercase
```
raw_data['text'] = raw_data['text'].apply( lambda x: x.lower())
raw_data.head()
```
### Dividing into train and test set
```
# Dividing data into train and test sets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
X, y = raw_data['text'], raw_data['sentiment']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=RANDOM_STATE, stratify=y)
# X_train, X_test = list(X_train), list(y_train)
X_train.head()
# Lemmatize X_train
X_train = X_train.apply(lemmatize)
# Apply random swap and random deletion to X_train
X_train_original = X_train
y_train_original = y_train
X_train_swapped = X_train.apply(swap_random)
y_train_swapped = y_train
X_train_deleted = X_train.apply(delete_random)
y_train_deleted = y_train
y_train_original.shape, X_train_swapped.shape, X_train_deleted.shape
X_train_combined = X_train_original.append(X_train_swapped)
X_train_combined = X_train_combined.append(X_train_deleted)
y_train_combined = y_train_original.append(y_train_swapped)
y_train_combined = y_train_combined.append(y_train_deleted)
X_train_combined.shape, y_train_combined.shape
```
### Vectorizing the training and testing features separately
```
vectorizer = CountVectorizer(
analyzer = 'word',
stop_words = 'english', # removes common english words
ngram_range = (2, 2), # extracting bigrams
lowercase = True,
)
tfidf_transformer = TfidfTransformer()
features_train = vectorizer.fit_transform(
X_train_combined
)
features_train = tfidf_transformer.fit_transform(features_train)
features_train = features_train.toarray() # for easy usage
# for testing features
features_test = vectorizer.transform(
X_test
)
features_test = tfidf_transformer.transform(features_test)
features_test = features_test.toarray() # for easy usage
```
## Encoding the training and testing labels separately using the same label encoder
```
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
y_train = le.fit_transform(y_train_combined)
# encoding the labels of the test set
y_test = le.transform(y_test)
y_test, y_train
```
## Making the classifier
```
from sklearn.linear_model import SGDClassifier
classifier = SGDClassifier(random_state=RANDOM_STATE)
y_pred = classifier.fit(features_train, y_train).predict(features_test)
accuracy = accuracy_score(y_test, y_pred)
accuracy
```
### MNB
```
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
y_pred = classifier.fit(features_train, y_train).predict(features_test)
accuracy = accuracy_score(y_test, y_pred)
accuracy
my_colors = [(0.5,0.4,0.5), (0.75, 0.75, 0.25)]*7 # <-- make two custom RGBs and repeat/alternate them over all the bar elements.
raw_data['sentiment'].value_counts().plot(kind='bar', stacked=True, color=my_colors)
plt.savefig('../images/sentiment_distribution.png')
```
From the graph above, it is clear that all sentiment classes have almost the same number of instances.
```
def make_wordcloud(texts, stopwords=STOPWORDS):
texts = texts.lower()
sw = set(stopwords)
wordcloud = WordCloud(stopwords=stopwords, background_color="white").generate(texts)
return wordcloud
# def plot_wordclouds(dataframe, subplot_rows, subplot_columns):
rows = 4
columns = 3
fig = plt.figure()
p = 0
for col in raw_data['sentiment'].unique():
temp_df = raw_data[raw_data['sentiment']==col]
temp_df_texts = " ".join(text for text in temp_df['text'])
wordcloud = make_wordcloud(temp_df_texts)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.title(col)
image_name = '../images/'+ col+ '_wordcloud.png'
plt.savefig(image_name)
plt.show()
```
From the plots above, we see that words like friend, mother, and felt are common in all the texts, so we will need to remove them.
## Finding the most frequent words in each sentiment class
```
frequent_words = []
def get_most_common_words(dataframe):
for col in dataframe['sentiment'].unique():
temp_df = dataframe[raw_data['sentiment']==col]
temp_df_texts = " ".join(text for text in temp_df['text'])
temp_df_texts = temp_df_texts.lower()
wordcloud = make_wordcloud(temp_df_texts)
frequent_words.append(list(wordcloud.words_.keys())[:50])
return frequent_words
most_frequent_words = get_most_common_words(raw_data)
print(len(most_frequent_words))
p =set(most_frequent_words[0])
for i in range(1, len(most_frequent_words)):
print(i)
p.intersection_update(set(most_frequent_words[i]))
print(p)
```
The words printed above are the most frequent across all classes, so they can also be removed from the text.
```
p = " ".join(list(p))
most_frequent_wordcloud = make_wordcloud(p)
plt.imshow(most_frequent_wordcloud, interpolation='bilinear')
plt.axis("off")
plt.title('Most frequent words')
image_name = '../images/'+ 'most_frequent_words'+ '_wordcloud.png'
plt.savefig(image_name)
plt.show()
raw_data['text_length'] = raw_data['text'].apply(lambda x: len(x.split(' ')))
raw_data.head()
raw_data['text_length'].plot.hist()
plt.title('Distribution of text length')
plt.savefig('../images/distribution_of_text_length.png')
stopwords = list(STOPWORDS) + list(p)
```
## Converting all the text to lowercase
```
raw_data['text'] = raw_data['text'].apply( lambda x: x.lower())
raw_data.head()
vectorizer = CountVectorizer(
analyzer = 'word',
stop_words = 'english', # removes common english words
ngram_range = (2, 2), # extracting bigrams
lowercase = True,
)
features = vectorizer.fit_transform(
raw_data['text']
)
tfidf_transformer = TfidfTransformer()
features = tfidf_transformer.fit_transform(features)
```
## Saving the CountVectorizer and TF-IDF transformer
```
import pickle
# Save the label encoder as pickel object
output = open('../models/encoder_and_vectorizer/tf_idf_transformer.pkl', 'wb')
pickle.dump(tfidf_transformer, output)
output.close()
features_nd = features.toarray() # for easy usage
# print(features_nd.shape)
# raw_data['text_vectorized'] = list(features_nd)
# print(raw_data['text_vectorized'].shape)
# raw_data.head()
output = open('../models/encoder_and_vectorizer/vectorizer.pkl', 'wb')
pickle.dump(vectorizer, output)
output.close()
```
The vectorizer will also need to be saved, because we will need the same vectorizer to transform new text when making predictions.
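As a small sketch of why this matters, here is how the saved vectorizer and TF-IDF transformer could be loaded back to featurize new, unseen text (the paths are the ones used above; the example sentence is made up):
```
import pickle

# Reload the fitted vectorizer and TF-IDF transformer
with open('../models/encoder_and_vectorizer/vectorizer.pkl', 'rb') as f:
    loaded_vectorizer = pickle.load(f)
with open('../models/encoder_and_vectorizer/tf_idf_transformer.pkl', 'rb') as f:
    loaded_tfidf = pickle.load(f)

new_texts = ["i felt joy when my friend visited me after a long time"]
new_features = loaded_tfidf.transform(loaded_vectorizer.transform(new_texts)).toarray()
new_features.shape
```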
```
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
raw_data['sentiment_encoded'] = le.fit_transform(raw_data['sentiment'])
# raw_data = raw_data[['sentiment_encoded','text_vectorized']]
```
Save the label encoder as a pickle or in some other form. A useful refactor is a function that takes a column name as input, encodes the column, saves the label encoder, and then returns the new column values, as sketched below.
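A minimal sketch of such a helper (the function name and usage are illustrative, not part of the original code):
```
import pickle
from sklearn import preprocessing

def encode_and_save_column(dataframe, column_name, encoder_path):
    """Fit a LabelEncoder on one column, pickle it, and return the encoded values."""
    encoder = preprocessing.LabelEncoder()
    encoded_values = encoder.fit_transform(dataframe[column_name])
    with open(encoder_path, 'wb') as f:
        pickle.dump(encoder, f)
    return encoded_values

# Example usage (hypothetical):
# raw_data['sentiment_encoded'] = encode_and_save_column(
#     raw_data, 'sentiment', '../models/encoder_and_vectorizer/label_encoder.pkl')
```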
## Saving label encoder to a file
```
# Save the label encoder as pickel object
output = open('../models/encoder_and_vectorizer/label_encoder.pkl', 'wb')
pickle.dump(le, output)
output.close()
# Saving the processed data
# raw_data.to_csv('../data/processed/sentiment_features.csv')
```
## Making the actual model
```
# Dividing data into train and test sets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
X, y = features_nd, raw_data['sentiment_encoded']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=RANDOM_STATE, stratify=y)
# X_train, X_test = list(X_train), list(y_train)
```
### KNN model
```
from sklearn import neighbors
knn=neighbors.KNeighborsClassifier()
# we create an instance of Neighbours Classifier and fit the data.
knn.fit(X_train, y_train)
predicted_results = knn.predict(X_test)
accuracy = accuracy_score(y_test, predicted_results)
accuracy
```
### Naive Bayes
```
gnb = GaussianNB()
y_pred = gnb.fit(X_train, y_train).predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
accuracy
```
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=100, random_state=RANDOM_STATE)
y_pred = classifier.fit(X_train, y_train).predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
accuracy
```
### SGD
```
from sklearn.linear_model import SGDClassifier
classifier = SGDClassifier(random_state=RANDOM_STATE)
y_pred = classifier.fit(X_train, y_train).predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
accuracy
```
## Random Search with SGD
```
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform
clf = SGDClassifier()
distributions = dict(
loss=['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'],
learning_rate=['optimal', 'invscaling', 'adaptive'],
eta0=uniform(loc=1e-7, scale=1e-2)
)
random_search_cv = RandomizedSearchCV(
estimator=clf,
param_distributions=distributions,
cv=5,
n_iter=50
)
random_search_cv.fit(X_train, y_train)
! ls
```
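Once the random search finishes, you would typically inspect the best hyperparameters and evaluate the tuned model on the held-out test set. A brief sketch:
```
# Inspect the search results and score the best model on the test set
print(random_search_cv.best_params_)
print(random_search_cv.best_score_)  # mean cross-validated accuracy of the best setting
best_clf = random_search_cv.best_estimator_
print(accuracy_score(y_test, best_clf.predict(X_test)))
```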
|
github_jupyter
|
Copyright 2020 Verily Life Sciences LLC
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file or at
https://developers.google.com/open-source/licenses/bsd
# Trial Specification Demo
The first step to use the Baseline Site Selection Tool is to specify your trial.
All data in the Baseline Site Selection Tool is stored in [xarray.DataArray](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html) datasets. This is a [convenient data structure](http://xarray.pydata.org/en/stable/why-xarray.html) for storing multidimensional arrays with different labels, coordinates, or attributes. You don't need any expertise with xr.Datasets to use the Baseline Site Selection Tool. The goal of this notebook is to walk you through the construction of the dataset that contains the specification of your trial.
This notebook has several sections:
1. **Define the Trial**. In this section you will load all aspects of your trial, including the trial sites, the expected recruitment demographics for each trial site (e.g. from a census) as well as the rules for how the trial will be carried out.
2. **Load Incidence Forecasts**. In this section you will load forecasts for COVID-19 incidence at the locations of your trial. We highly recommend using forecasts that are as local as possible for the sites of the trial. There is significant variation in COVID-19 incidence among counties in the same state, and taking the state (province) average can be highly misleading. Here we include code to preload county-level forecasts from the US Centers for Disease Control and Prevention (CDC). The trial planner should include whatever forecasts they find most compelling.
3. **Simulate the Trial**. Given the incidence forecasts and the trial rules, the third section will simulate the trial.
4. **Optimize the Trial**. Given the parameters of the trial within our control, the next section asks whether we can set those parameters to make the trial meet our objective criteria, for example most likely to succeed or to succeed as quickly as possible. We have written a set of optimization routines for optimizing different types of trials.
We write out different trial plans, which you can then examine interactively in the second notebook in the Baseline Site Selection Tool. That notebook lets you visualize how the trial is proceeding at a per site level and experiment with what will happen when you turn up or down different sites.
If you have questions about how to implement these steps for your clinical trial, or there are variations in the trial specification that are not captured with this framework, please contact [email protected] for additional help.
## Imports
```
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
import functools
import importlib.resources
import numpy as np
import os
import pandas as pd
pd.plotting.register_matplotlib_converters()
import xarray as xr
from IPython.display import display
# bsst imports
from bsst import demo_data
from bsst import io as bsst_io
from bsst import util
from bsst import optimization
from bsst import sim
from bsst import sim_scenarios
from bsst import public_data
```
## Helper methods for visualization
```
def plot_participants(participants):
time = participants.time.values
util.sum_all_but_dims(['time'], participants).cumsum('time').plot()
plt.title('Participants recruited (both control and treatment arm)')
plt.xlim(time[0], time[-1])
plt.ylim(bottom=0)
plt.show()
def plot_events(events):
time = events.time.values
events.cumsum('time').plot.line(x='time', color='k', alpha=.02, add_legend=False)
for analysis, num_events in c.needed_control_arm_events.to_series().items():
plt.axhline(num_events, linestyle='--')
plt.text(time[0], num_events, analysis, ha='left', va='bottom')
plt.ylim(0, 120)
plt.xlim(time[0], time[-1])
plt.title(f'Control arm events\n{events.scenario.size} simulated scenarios')
plt.show()
def plot_success(c, events):
time = c.time.values
success_day = xr.DataArray(util.success_day(c.needed_control_arm_events, events),
coords=(events.scenario, c.analysis))
fig, axes = plt.subplots(c.analysis.size, 1, sharex=True)
step = max(1, int(np.timedelta64(3, 'D') / (time[1] - time[0])))
bins = mpl.units.registry[np.datetime64].convert(time[::step], None, None)
for analysis, ax in zip(c.analysis.values, axes):
success_days = success_day.sel(analysis=analysis).values
np.where(np.isnat(success_days), np.datetime64('2050-06-01'), success_days)
ax.hist(success_days, bins=bins, density=True)
ax.yaxis.set_visible(False)
# subtract time[0] to make into timedelta64s so that we can take a mean/median
median = np.median(success_days - time[0]) + time[0]
median = pd.to_datetime(median).date()
ax.axvline(median, color='r')
ax.text(time[0], 0, f'{analysis}\n{median} median', ha='left', va='bottom')
plt.xlabel('Date when sufficient statistical power is achieved')
plt.xlim(time[0], time[-1])
plt.xticks(rotation=35)
plt.show()
```
# 1. Define the trial
## Choose the sites
A trial specification consists of a list of sites, together with various properties of the sites.
For this demo, we read demonstration data embedded in the Baseline Site Selection Tool Python package. Specifically, this information is loaded from the file `demo_data/site_list1.csv`. Each row of this file contains the name of a site, as well as detailed information about that site. In this illustrative example, we pick sites in real US counties. Each column contains the following information:
* `opencovid_key`. This is a key that specifies the location within [COVID-19 Open Data](https://github.com/GoogleCloudPlatform/covid-19-open-data). It is required by this schema because it is how we join the incidence forecasts to the site locations.
* `capacity`, the number of participants the site can recruit each week, including both control arm and treatment arms. For simplicity, we assume this is constant over time, but variable recruitment rates are also supported. (See the construction of the `site_capacity` array below).
* `start_date`. This is the first date on which the site can recruit participants.
* The proportion of the population in various demographic categories. For this example, we consider categories for age (`over_60`), ethnicity (`black`, `hisp_lat`), and comorbidities (`smokers`, `diabetes`, `obese`). **Here we just fill in demographic information with random numbers.** We assume different categories are independent, but the data structure supports complex beliefs about how different categories intersect, how much each site can enrich for different categories, and different infection risks for different categories. These are represented in the factors `population_fraction`, `participant_fraction`, `incidence_scaler`, and `incidence_to_event_factor` below. In a practical situation, we recommend that the trial planner uses accurate estimates of the populations for the different sites they are drawing from.
```
with importlib.resources.path(demo_data, 'site_list1.csv') as p:
demo_data_file_path = os.fspath(p)
site_df = pd.read_csv(demo_data_file_path, index_col=0)
site_df.index.name = 'location'
site_df['start_date'] = pd.to_datetime(site_df['start_date'])
display(site_df)
# Add in information we have about each county.
site_df = pd.concat([site_df, public_data.us_county_data().loc[site_df.opencovid_key].set_index(site_df.index)], axis=1)
```
## Choose trial parameters
The trial requires a number of parameters that have to be specified to be able to simulate what will happen in the trial: These include:
* `trial_size_cap`: the maximum number of participants in the trial (includes both control and treatment arms)
* `start_day` and `end_day`: the boundaries of the time period we will simulate.
* `proportion_control_arm`: what proportion of participants are in the control arm. It's assumed that the control arm is uniformly distributed across locations and time (e.g. at each location on each day, half of the recruited participants are assigned to the control arm).
* `needed_control_arm_events`: the number of events required in the *control* arm of the trial at various intermediate analysis points. For this example we assume intermediate analyses which would demonstrate a vaccine efficacy of about 55%, 65%, 75%, 85%, or 95%.
* `observation_delay`: how long after a participant is recruited before they contribute an event. This is measured in the same time units as your incidence forecasts. Here we assume 28 days.
* `site_capacity` and `site_activation`: the number of participants each site could recruit *if* it were activated, and whether each site is activated at any given time. Here we assume each site as a constant weekly capacity, but time dependence can be included (e.g. to model ramp up of recruitment).
* `population_fraction`, `participant_fraction`, and `incidence_scaler`: the proportion of the general population and the proportion of participants who fall into different demographic categories at each location, and the infection risk factor for each category. These three are required to translate an overall incidence forecast for the population into the incidence forecast for your control arm.
* `incidence_to_event_factor`: what proportion of infections lead to a clinical event. We assume a constant 0.6, but you can specify different values for different demographic categories.
These factors are specified in the datastructure below.
```
start_day = np.datetime64('2021-05-15')
end_day = np.datetime64('2021-10-01')
time_resolution = np.timedelta64(1, 'D')
time = np.arange(start_day, end_day + time_resolution, time_resolution)
c = xr.Dataset(coords=dict(time=time))
c['proportion_control_arm'] = 0.5
# Assume some intermediate analyses.
frac_control = float(c.proportion_control_arm)
efficacy = np.array([.55, .65, .75, .85, .95])
ctrl_events = util.needed_control_arm_events(efficacy, frac_control)
vaccine_events = (1 - efficacy) * ctrl_events * (1 - frac_control) / frac_control
ctrl_events, vaccine_events = np.round(ctrl_events), np.round(vaccine_events)
efficacy = 1 - (vaccine_events / ctrl_events)
total_events = ctrl_events + vaccine_events
analysis_names = [
f'{int(t)} total events @{int(100 * e)}% VE' for t, e in zip(total_events, efficacy)
]
c['needed_control_arm_events'] = xr.DataArray(
ctrl_events, dims=('analysis',)).assign_coords(analysis=analysis_names)
c['recruitment_type'] = 'default'
c['observation_delay'] = int(np.timedelta64(28, 'D') / time_resolution) # 28 days
c['trial_size_cap'] = 30000
# convert weekly capacity to capacity per time step
site_capacity = site_df.capacity.to_xarray() * time_resolution / np.timedelta64(7, 'D')
site_capacity = site_capacity.broadcast_like(c.time).astype('float')
# Can't recruit before the activation date
activation_date = site_df.start_date.to_xarray()
for l in activation_date.location.values:
date = activation_date.loc[l]
site_capacity.loc[site_capacity.time < date, l] = 0.0
c['site_capacity'] = site_capacity.transpose('location', 'time')
c['site_activation'] = xr.ones_like(c.site_capacity)
# For the sake of simplicity, this code assumes black and hisp_lat are
# non-overlapping, and that obese/smokers/diabetes are non-overlapping.
frac_and_scalar = util.fraction_and_incidence_scaler
fraction_scalers = [
frac_and_scalar(site_df, 'age', ['over_60'], [1], 'under_60'),
frac_and_scalar(site_df, 'ethnicity', ['black', 'hisp_lat'], [1, 1],
'other'),
frac_and_scalar(site_df, 'comorbidity', ['smokers', 'diabetes', 'obese'],
[1, 1, 1], 'none')
]
fractions, incidence_scalers = zip(*fraction_scalers)
# We assume that different categories are independent (e.g. the proportion of
# smokers over 60 is the same as the proportion of smokers under 60)
c['population_fraction'] = functools.reduce(lambda x, y: x * y, fractions)
# We assume the participants are drawn uniformly from the population.
c['participant_fraction'] = c['population_fraction']
# Assume some boosted incidence risk for subpopulations. We pick random numbers
# here, but in actual use you'd put your best estimate for the incidence risk
# of each demographic category.
# Since we assume participants are uniformly drawn from the county population,
# this actually doesn't end up affecting the estimated number of clinical events.
c['incidence_scaler'] = functools.reduce(lambda x, y: x * y,
incidence_scalers)
c.incidence_scaler.loc[dict(age='over_60')] = 1 + 2 * np.random.random()
c.incidence_scaler.loc[dict(comorbidity=['smokers', 'diabetes', 'obese'])] = 1 + 2 * np.random.random()
c.incidence_scaler.loc[dict(ethnicity=['black', 'hisp_lat'])] = 1 + 2 * np.random.random()
# We assume a constant incidence_to_event_factor.
c['incidence_to_event_factor'] = 0.6 * xr.ones_like(c.incidence_scaler)
util.add_empty_history(c)
```
# 2. Load incidence forecasts
We load historical incidence data from [COVID-19 Open Data](https://github.com/GoogleCloudPlatform/covid-19-open-data) and forecasts from [COVID-19 Forecast Hub](https://github.com/reichlab/covid19-forecast-hub).
We note that there are a set of caveats when using the CDC models that should be considered when using these for trial planning:
* Forecasts are only available for US counties. Hence, these forecasts will only work for US-only trials. Trials with sites outside the US will need to supplement these forecasts.
* Forecasts only go out for four weeks. Trials take much longer than four weeks to complete, when measured from site selection to logging the required number of cases in the control arm. For simplicity, here we extrapolate incidence as *constant* after the last point of the forecast. Here we extrapolate out to October 1, 2021.
* The forecasts from the CDC are provided with quantile estimates. Our method depends on getting *representative forecasts* from the model: we need a set of sample forecasts for each site which represent the set of scenarios that can occur. Ideally these scenarios will be equally probable so that we can compute probabilities by averaging over samples. To get samples from quantiles, we interpolate/extrapolate to get 100 evenly spaced quantile estimates, which we treat as representative samples.
You can of course replace these forecasts with whatever represents your beliefs and uncertainty about what will happen.
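To make the quantile-to-sample step concrete, here is a toy illustration (not the package's implementation, and with made-up numbers) of interpolating a handful of quantile estimates onto 100 evenly spaced quantile levels:
```
import numpy as np

# Made-up quantile forecasts for one site and one week (e.g. reported cases)
quantile_levels = np.array([0.025, 0.25, 0.5, 0.75, 0.975])
quantile_values = np.array([10., 40., 60., 90., 150.])

# Interpolate onto 100 evenly spaced quantile levels; values outside the reported
# range are clamped to the extreme quantiles.
sample_levels = np.linspace(0.005, 0.995, 100)
pseudo_samples = np.interp(sample_levels, quantile_levels, quantile_values)
pseudo_samples[:5], pseudo_samples.mean()
```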
```
# Extrapolate out a bit extra to ensure we're within bounds when we interpolate later.
full_pred = public_data.fetch_cdc_forecasts([('COVIDhub-ensemble', '2021-05-10'),
('COVIDhub-baseline', '2021-05-10')],
end_date=c.time.values[-1] + np.timedelta64(15, 'D'),
num_samples=50)
full_gt = public_data.fetch_opencovid_incidence()
# Suppose we only have ground truth through 2021-05-09.
full_gt = full_gt.sel(time=slice(None, np.datetime64('2021-05-09')))
# Include more historical incidence here for context. It will be trimmed off when
# we construct scenarios to simulate. The funny backwards range is to ensure that if
# we use weekly instead of daily resolution, we use the same day of the week as c.
time = np.arange(c.time.values[-1], np.datetime64('2021-04-01'), -time_resolution)[::-1]
incidence_model = public_data.assemble_forecast(full_gt, full_pred, site_df, time)
locs = np.random.choice(c.location.values, size=5, replace=False)
incidence_model.sel(location=locs).plot.line(x='time', color='k', alpha=.1, add_legend=False, col='location', row='model')
plt.ylim(0.0, 1e-3)
plt.suptitle('Forecast incidence at a sampling of sites', y=1.0)
pass
```
# 3. Simulate the trial
Now that we've specified how the trial works, we can compute how the trial will turn out given the incidence forecasts you've specified. We do this by first sampling what incidence will be at all locations simultaneously. For any given fully specified scenario, we compute how many participants will be under observation at any given time in any given location (in any given combination of demographic buckets); then, based on the specified local incidence, we compute how many will become infected and how many will produce clinical events.
Here we assume that the incidence trajectories of different locations are drawn at random from the available forecasts. Other scenario-generation methods in `sim_scenarios` support more complex approaches. For example, we may be highly uncertain about the incidence at each site, but believe that if incidence is high at a site, then it will also be high at geographically nearby sites. If this is the case, then the simulation should not choose forecasts independently at each site, but instead should take these correlations into account. The scenario-generating methods in `sim_scenarios` allow us to do that.
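As a toy illustration of that idea (not the `sim_scenarios` API), one could draw a single forecast sample per geographic cluster and reuse it for every site in the cluster, so that nearby sites share an incidence trajectory:
```
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_samples, n_time = 6, 100, 50
# Hypothetical forecast samples with shape (site, sample, time)
forecasts = rng.gamma(2.0, 1e-4, size=(n_sites, n_samples, n_time))
# Hypothetical cluster labels: sites 0-2 in one region, sites 3-5 in another
cluster = np.array([0, 0, 0, 1, 1, 1])

def one_correlated_scenario(forecasts, cluster, rng):
    # Pick one sample index per cluster and reuse it for all sites in that cluster
    idx = {c: rng.integers(forecasts.shape[1]) for c in np.unique(cluster)}
    return np.stack([forecasts[s, idx[cluster[s]]] for s in range(len(cluster))])

scenario = one_correlated_scenario(forecasts, cluster, rng)  # shape (site, time)
```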
```
# incidence_flattened: rolls together all the models you've included in your ensemble, treating them as independent samples.
incidence_flattened = sim_scenarios.get_incidence_flattened(incidence_model, c)
# incidence_scenarios: chooses scenarios given the incidence curves and your chosen method of scenario-generation.
incidence_scenarios = sim_scenarios.generate_scenarios_independently(incidence_flattened, num_scenarios=100)
# compute the number of participants recruited under your trial rule
participants = sim.recruitment(c)
# compute the number of control arm events under your trial rules and incidence_scenarios.
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
# plot events and label different vaccine efficacies
plot_events(events)
# plot histograms of time to success
plot_success(c, events)
sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100)
!mkdir -p demo_data
bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_all_site_on.nc')
```
# 4. Optimize the trial
The simulations above supposed that all sites are activated as soon as possible (i.e. `site_activation` is identically 1). Now that we have shown the ability to simulate the outcome of the trial, we can turn it into a mathematical optimization problem.
**Given the parameters of the trial within our control, how can we set those parameters to make the trial most likely to succeed or to succeed as quickly as possible?**
We imagine the main levers of control are which sites to activate or which sites to prioritize activating, and this is what is implemented here.
However, the framework we have developed is very general and could be extended to just about anything you control which you can predict the impact of. For example,
* If you can estimate the impact of money spent boosting recruitment of high-risk participants, we could use those estimates to help figure out how to best allocate a fixed budget.
* If you had requirements for the number of people infected in different demographic groups, we could use those to help figure out how to best allocate doses between sites with different population characteristics.
The optimization algorithms are implemented in [JAX](https://github.com/google/jax), a python library that makes it possible to differentiate through native python and numpy functions. The flexibility of the language makes it possible to compose a variety of trial optimization scenarios and then to write algorithms that find optima. There are a number of technical details in how the optimization algorithms are written that will be discussed elsewhere.
### Example: Optimizing Static site activations
Suppose that the only variable we can control is which sites should be activated, and we have to make this decision at the beginning of the trial. This decision is then set in stone for the duration of the trial. To calculate this we proceed as follows:
The optimization routine takes in the trial plan, encoded in the xarray dataset `c`, as well as the `incidence_scenarios`, and finds the sites that should be activated to minimize the time to success of the trial. The algorithm modifies `c` *in place*, so that after it runs, the trial plan `c` has its site activations set on or off in accordance with the optimization.
```
%time optimization.optimize_static_activation(c, incidence_scenarios)
```
#### Plot the resulting sites
Now we can plot the activations for the resulting sites. Only a subset of the original sites are activated in the optimized plan. Comparing the distributions for the time to success for the optimized sites to those in the original trial plan (all sites activated), the optimized plan will save a bit of time if the vaccine efficacy is low. If the vaccine efficacy is high, then just getting as many participants as possible as quickly as possible is optimal.
```
all_sites = c.location.values
activated_sites = c.location.values[c.site_activation.mean('time') == 1]
# Simulate the results with this activation scheme.
print(f'\n\n{len(activated_sites)} of {len(all_sites)} activated')
participants = sim.recruitment(c)
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
plot_events(events)
plot_success(c, events)
df = (participants.sum(['location', 'time', 'comorbidity']) / participants.sum()).to_pandas()
display(df.style.set_caption('Proportion of participants by age and ethnicity'))
sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100)
!mkdir -p demo_data
bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_optimized_static.nc')
```
### Example: Custom loss penalizing site activation and promoting diverse participants
Suppose we want to factor in considerations aside from how quickly the trial succeeds. In this example, we assume that activating sites is expensive, so we'd like to activate as few of them as possible, so long as it doesn't delay the success of the trial too much. Similarly, we assume that it's valuable to have a larger proportion of elderly, black, or hispanic participants, and we're willing to activate sites which can recruit from these demographic groups, even if doing so delays success a bit.
```
def loss_fn(c):
# sum over location, time, comorbidity
# remaining dimensions are [age, ethnicity]
participants = c.participants.sum(axis=0).sum(axis=0).sum(axis=-1)
total_participants = participants.sum()
return (
optimization.negative_mean_successiness(c) # demonstrate efficacy fast
+ 0.2 * c.site_activation.mean() # turning on sites is costly
- 0.5 * participants[1:, :].sum() / total_participants # we want people over 60
- 0.5 * participants[:, 1:].sum() / total_participants # we want blacks and hispanics
)
%time optimization.optimize_static_activation(c, incidence_scenarios, loss_fn)
```
#### Plot the resulting sites
This time only 53 of 146 sites are activated. The slower recruitment costs us 1-2 weeks until the trial succeeds (depending on vaccine efficacy). In exchange, we don't need to activate as many sites, and we end up with a greater proportion of participants who are elderly, black, or hispanic (dropping from 55.7% to 45.6% young white).
```
all_sites = c.location.values
activated_sites = c.location.values[c.site_activation.mean('time') == 1]
# Simulate the results with this activation scheme.
print(f'\n\n{len(activated_sites)} of {len(all_sites)} activated')
participants = sim.recruitment(c)
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
plot_events(events)
plot_success(c, events)
df = (participants.sum(['location', 'time', 'comorbidity']) / participants.sum()).to_pandas()
display(df.style.set_caption('Proportion of participants by age and ethnicity'))
```
### Example: prioritizing sites
Suppose we can activate up to 20 sites each week for 10 weeks. How do we prioritize them?
```
# We put all sites in one group. We also support prioritizing sites within groupings.
# For example, if you can activate 2 sites per state per week, sites would be grouped
# according to the state they're in.
site_to_group = pd.Series(['all_sites'] * len(site_df), index=site_df.index)
decision_dates = c.time.values[:70:7]
allowed_activations = pd.DataFrame([[20] * len(decision_dates)], index=['all_sites'], columns=decision_dates)
parameterizer = optimization.PivotTableActivation(c, site_to_group, allowed_activations, can_deactivate=False)
optimization.optimize_params(c, incidence_scenarios, parameterizer)
c['site_activation'] = c.site_activation.round() # each site has to be on or off at each time
df = c.site_activation.to_pandas()
df.columns = [pd.to_datetime(x).date() for x in df.columns]
sns.heatmap(df, cbar=False)
plt.title('Which sites are activated when')
plt.show()
participants = sim.recruitment(c)
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
plot_events(events)
plot_success(c, events)
sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100)
!mkdir -p demo_data
bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_prioritized.nc')
```
|
github_jupyter
|
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# AutoML natural language text classification model
## Installation
Install the latest version of AutoML SDK.
```
! pip3 install google-cloud-automl
```
Install the Google *cloud-storage* library as well.
```
! pip3 install google-cloud-storage
```
### Restart the Kernel
Once you've installed the AutoML SDK and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU run-time
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your GCP project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a GCP project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the AutoML APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [Google Cloud SDK](https://cloud.google.com/sdk) is already installed in AutoML Notebooks.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
#### Project ID
**If you don't know your project ID**, try to get your project ID using `gcloud` command by executing the second cell below.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see [Region support for AutoML services]()
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your GCP account
**If you are using AutoML Notebooks**, your environment is already
authenticated. Skip this step.
*Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.*
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
This tutorial is designed to use training data from a public Cloud Storage bucket and a Cloud Storage bucket in your project for your batch predictions. You may alternatively use your own training data stored in your own Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
```
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION gs://$BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al gs://$BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import AutoML SDK
Import the AutoML SDK into our Python environment.
```
import json
import time
from google.cloud import automl
from google.protobuf.json_format import MessageToJson
```
#### AutoML constants
Set up the following constants for AutoML:
- `PARENT`: The AutoML location root path for dataset, model, and endpoint resources.
```
# AutoML location root path for your dataset, model, and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
## Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
```
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
! gsutil cat $IMPORT_FILE | head -n 10
```
*Example output*:
```
I went on a successful date with someone I felt sympathy and connection with.,affection
I was happy when my son got 90% marks in his examination,affection
I went to the gym this morning and did yoga.,exercise
We had a serious talk with some friends of ours who have been flaky lately. They understood and we had a good evening hanging out.,bonding
I went with grandchildren to butterfly display at Crohn Conservatory,affection
I meditated last night.,leisure
"I made a new recipe for peasant bread, and it came out spectacular!",achievement
I got gift from my elder brother which was really surprising me,affection
YESTERDAY MY MOMS BIRTHDAY SO I ENJOYED,enjoy_the_moment
Watching cupcake wars with my three teen children,affection
```
## Create a dataset
### [projects.locations.datasets.create](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.datasets/create)
#### Request
```
dataset = {
"display_name": "happiness_" + TIMESTAMP,
"text_classification_dataset_metadata": {"classification_type": "MULTICLASS"},
}
print(
MessageToJson(
automl.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "happiness_20210228224317",
"textClassificationDatasetMetadata": {
"classificationType": "MULTICLASS"
}
}
}
```
#### Call
```
request = clients["automl"].create_dataset(parent=PARENT, dataset=dataset)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/datasets/TCN2705019056410329088"
}
```
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
```
### [projects.locations.datasets.importData](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.datasets/importData)
#### Request
```
input_config = {"gcs_source": {"input_uris": [IMPORT_FILE]}}
print(
MessageToJson(
automl.ImportDataRequest(name=dataset_id, input_config=input_config).__dict__[
"_pb"
]
)
)
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/datasets/TCN2705019056410329088",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://cloud-ml-data/NL-classification/happiness.csv"
]
}
}
}
```
#### Call
```
request = clients["automl"].import_data(name=dataset_id, input_config=input_config)
```
#### Response
```
result = request.result()
print(MessageToJson(result))
```
*Example output*:
```
{}
```
## Train a model
### [projects.locations.models.create](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.models/create)
#### Request
```
model = automl.Model(
display_name="happiness_" + TIMESTAMP,
dataset_id=dataset_short_id,
text_classification_model_metadata=automl.TextClassificationModelMetadata(),
)
print(
MessageToJson(automl.CreateModelRequest(parent=PARENT, model=model).__dict__["_pb"])
)
```
*Example output*:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "happiness_20210228224317",
"datasetId": "TCN2705019056410329088",
"textClassificationModelMetadata": {}
}
}
```
#### Call
```
request = clients["automl"].create_model(parent=PARENT, model=model)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720"
}
```
```
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split("/")[-1]
print(model_short_id)
```
## Evaluate the model
### [projects.locations.models.modelEvaluations.list](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.models.modelEvaluations/list)
#### Call
```
request = clients["automl"].list_model_evaluations(parent=model_id, filter="")
```
#### Response
```
evaluations_list = [
json.loads(MessageToJson(me.__dict__["_pb"])) for me in request.model_evaluation
]
print(json.dumps(evaluations_list, indent=2))
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name
```
*Example output*:
```
[
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720/modelEvaluations/1436745357261371663",
"annotationSpecId": "3130761503557287936",
"createTime": "2021-03-01T02:56:28.878044Z",
"evaluatedExampleCount": 1193,
"classificationEvaluationMetrics": {
"auPrc": 0.99065405,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.01424979,
"f1Score": 0.028099174
},
{
"confidenceThreshold": 0.05,
"recall": 1.0,
"precision": 0.5862069,
"f1Score": 0.73913044
},
{
"confidenceThreshold": 0.94,
"recall": 0.64705884,
"precision": 1.0,
"f1Score": 0.7857143
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.999,
"recall": 0.21372032,
"precision": 1.0,
"f1Score": 0.35217392
},
{
"confidenceThreshold": 1.0,
"recall": 0.0026385225,
"precision": 1.0,
"f1Score": 0.005263158
}
],
"logLoss": 0.14686257
},
"displayName": "achievement"
}
]
```
### [projects.locations.models.modelEvaluations.get](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.models.modelEvaluations/get)
#### Call
```
request = clients["automl"].get_model_evaluation(name=evaluation_slice)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720/modelEvaluations/1436745357261371663",
"annotationSpecId": "3130761503557287936",
"createTime": "2021-03-01T02:56:28.878044Z",
"evaluatedExampleCount": 1193,
"classificationEvaluationMetrics": {
"auPrc": 0.99065405,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.01424979,
"f1Score": 0.028099174
},
{
"confidenceThreshold": 0.05,
"recall": 1.0,
"precision": 0.5862069,
"f1Score": 0.73913044
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.999,
"recall": 0.23529412,
"precision": 1.0,
"f1Score": 0.3809524
},
{
"confidenceThreshold": 1.0,
"precision": 1.0
}
],
"logLoss": 0.005436425
},
"displayName": "exercise"
}
```
## Make batch predictions
### Prepare files for batch prediction
```
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
import json
import tensorflow as tf
test_item_uri = "gs://" + BUCKET_NAME + "/test.txt"
with tf.io.gfile.GFile(test_item_uri, "w") as f:
f.write(test_item + "\n")
gcs_input_uri = "gs://" + BUCKET_NAME + "/batch.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
f.write(test_item_uri + "\n")
! gsutil cat $gcs_input_uri
! gsutil cat $test_item_uri
```
*Example output*:
```
gs://migration-ucaip-trainingaip-20210228224317/test.txt
I went on a successful date with someone I felt sympathy and connection with.
```
### [projects.locations.models.batchPredict](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.models/batchPredict)
#### Request
```
input_config = {"gcs_source": {"input_uris": [gcs_input_uri]}}
output_config = {
"gcs_destination": {"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"}
}
print(
MessageToJson(
automl.BatchPredictRequest(
name=model_id, input_config=input_config, output_config=output_config
).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210228224317/batch.csv"
]
}
},
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210228224317/batch_output/"
}
}
}
```
#### Call
```
request = clients["prediction"].batch_predict(
name=model_id, input_config=input_config, output_config=output_config
)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{}
```
```
destination_uri = output_config["gcs_destination"]["output_uri_prefix"][:-1]
! gsutil ls $destination_uri/*
! gsutil cat $destination_uri/prediction*/*.jsonl
```
*Example output*:
```
gs://migration-ucaip-trainingaip-20210228224317/batch_output/prediction-happiness_20210228224317-2021-03-01T02:57:02.004934Z/text_classification_1.jsonl
gs://migration-ucaip-trainingaip-20210228224317/batch_output/prediction-happiness_20210228224317-2021-03-01T02:57:02.004934Z/text_classification_2.jsonl
{"textSnippet":{"contentUri":"gs://migration-ucaip-trainingaip-20210228224317/test.txt"},"annotations":[{"annotationSpecId":"5436604512770981888","classification":{"score":0.93047273},"displayName":"affection"},{"annotationSpecId":"3707222255860711424","classification":{"score":0.002518793},"displayName":"achievement"},{"annotationSpecId":"7742447521984675840","classification":{"score":1.3182563E-4},"displayName":"enjoy_the_moment"},{"annotationSpecId":"824918494343593984","classification":{"score":0.06613126},"displayName":"bonding"},{"annotationSpecId":"1977839998950440960","classification":{"score":1.5267624E-5},"displayName":"leisure"},{"annotationSpecId":"8318908274288099328","classification":{"score":8.887557E-6},"displayName":"nature"},{"annotationSpecId":"3130761503557287936","classification":{"score":7.2130124E-4},"displayName":"exercise"}]}
```
## Make online predictions
### [projects.locations.models.deploy](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.models/deploy)
#### Call
```
request = clients["automl"].deploy_model(name=model_id)
```
#### Response
```
result = request.result()
print(MessageToJson(result))
```
*Example output*:
```
{}
```
### [projects.locations.models.predict](https://cloud.google.com/automl/docs/reference/rest/v1beta1/projects.locations.models/predict)
### Prepare data item for online prediction
```
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
```
#### Request
```
payload = {"text_snippet": {"content": test_item, "mime_type": "text/plain"}}
request = automl.PredictRequest(name=model_id, payload=payload)
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720",
"payload": {
"textSnippet": {
"content": "I went on a successful date with someone I felt sympathy and connection with.",
"mimeType": "text/plain"
}
}
}
```
#### Call
```
request = clients["prediction"].predict(request=request)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"payload": [
{
"annotationSpecId": "5436604512770981888",
"classification": {
"score": 0.9272586
},
"displayName": "affection"
},
{
"annotationSpecId": "824918494343593984",
"classification": {
"score": 0.068884976
},
"displayName": "bonding"
},
{
"annotationSpecId": "3707222255860711424",
"classification": {
"score": 0.0028119811
},
"displayName": "achievement"
},
{
"annotationSpecId": "3130761503557287936",
"classification": {
"score": 0.0008869726
},
"displayName": "exercise"
},
{
"annotationSpecId": "7742447521984675840",
"classification": {
"score": 0.00013229548
},
"displayName": "enjoy_the_moment"
},
{
"annotationSpecId": "1977839998950440960",
"classification": {
"score": 1.5584701e-05
},
"displayName": "leisure"
},
{
"annotationSpecId": "8318908274288099328",
"classification": {
"score": 9.5975e-06
},
"displayName": "nature"
}
]
}
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
```
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients["automl"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients["automl"].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
```
```
from matplotlib_venn import venn2, venn3
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Packages reachable through the two top maintainers, plus the set of all packages
isaacs = pd.read_json('isaacs-reach.json', typ='series')
setA = set(isaacs)
mathias = pd.read_json('mathias-reach.json', typ='series')
setB = set(mathias)
packages = pd.read_json('latestPackages.json', typ='series')
setC = set(packages)
# print(setA,setB,setC)
# Venn diagram of the overlap between the two maintainers' reach and all packages
plt.figure(figsize=(8,6), dpi=100)
venn3([setA, setB, setC], ('isaacs', 'mathias', 'All packages'))
plt.savefig("top2_reach_maintainer.png")
import matplotlib.ticker as mtick
import pandas as pd
import matplotlib.pyplot as plt
ranking = pd.read_json('optimal_ranking_2018_all.json', typ='series')
# for i, x in to100Percent.iteritems():
# if i > 5:
# amountTop5 = x
# break
# print(amountTop5, amountTop5 / 667224)
# max = 0
# index = 0
# for i, x in to100Percent.iteritems():
# if x > max:
# max = x
# index = i
# print('full reach at maintainer count {} with {} packages'.format(index, max))
plt.rcParams.update({'font.size': 14})
print(len(ranking["IntersectionCounts"]))
plt.figure(figsize=(10,6), dpi=100)
plt.plot(ranking["IntersectionCounts"])
# plt.axvline(index, color='r', linestyle='--')
plt.ylim(0,500000)
plt.xlabel('Number of maintainers ordered by optimal reach to reach all packages with a dependency')
plt.ylabel('Reached packages')
# plt.annotate("Full reach", (index, max), xytext=(+10, +30), textcoords='offset points', fontsize=10,
# arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.gca().set_yticklabels(['{:.0f}%'.format(x/667224*100) for x in plt.gca().get_yticks()])
plt.savefig("complete_reach_maintainer.png")
import matplotlib.ticker as mtick
import pandas as pd
import matplotlib.pyplot as plt
ranking = pd.read_json('optimalRanking_2018_top100.json', typ='series')
plt.figure(figsize=(8,6), dpi=100)
plt.plot(ranking["IntersectionCounts"], '.')
plt.ylim(0,400000)
plt.xlabel('Number of maintainers ordered by optimal reach')
plt.ylabel('Reached packages')
plt.gca().set_yticklabels(['{:.0f}%'.format(x/667224*100) for x in plt.gca().get_yticks()])
plt.savefig("top100_reach_maintainer.png")
import matplotlib.pyplot as plt
x = [2012,2013,2014,2015,2016,2017,2018]
packageCount = [5165, 18574, 50393, 112373, 218744, 390190, 605079]
reachCount = [2261, 9462, 27216, 63555, 124638, 210433, 322578]
percentage = []
for i in range(7):
percentage.append(reachCount[i] / packageCount[i])
plt.figure(figsize=(8,6), dpi=100)
plt.plot(x, percentage, '--bo')
plt.xlabel('Year')
plt.ylabel('Reach of Top 20 Maintainers (% of All Packages)')
plt.rcParams.update({'font.size': 14})
plt.gca().set_yticklabels(['{:.2f}%'.format(x*100) for x in plt.gca().get_yticks()])
plt.savefig("top20_maintainer_reach_evolution.png")
```
```
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import random
import backwardcompatibilityml.loss as bcloss
import backwardcompatibilityml.scores as scores
# Initialize random seed
random.seed(123)
torch.manual_seed(456)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
%matplotlib inline
n_epochs = 3
batch_size_train = 64
batch_size_test = 1000
learning_rate = 0.01
momentum = 0.5
log_interval = 10
torch.backends.cudnn.enabled = False
train_loader = list(torch.utils.data.DataLoader(
torchvision.datasets.MNIST('datasets/', train=True, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))
])),
batch_size=batch_size_train, shuffle=True))
test_loader = list(torch.utils.data.DataLoader(
torchvision.datasets.MNIST('datasets/', train=False, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))
])),
batch_size=batch_size_test, shuffle=True))
train_loader_a = train_loader[:int(len(train_loader)/2)]
train_loader_b = train_loader[int(len(train_loader)/2):]
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(train_loader_a[0][0][i][0], cmap='gray', interpolation='none')
plt.title("Ground Truth: {}".format(train_loader_a[0][1][i]))
plt.xticks([])
plt.yticks([])
fig
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return x, F.softmax(x, dim=1), F.log_softmax(x, dim=1)
network = Net()
optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum)
train_losses = []
train_counter = []
test_losses = []
test_counter = [i*len(train_loader_a)*batch_size_train for i in range(n_epochs + 1)]
def train(epoch):
network.train()
for batch_idx, (data, target) in enumerate(train_loader_a):
optimizer.zero_grad()
_, _, output = network(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader_a)*batch_size_train,
100. * batch_idx / len(train_loader_a), loss.item()))
train_losses.append(loss.item())
train_counter.append(
(batch_idx*64) + ((epoch-1)*len(train_loader_a)*batch_size_train))
def test():
network.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
_, _, output = network(data)
test_loss += F.nll_loss(output, target, reduction="sum").item()
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).sum()
test_loss /= len(train_loader_a)*batch_size_train
test_losses.append(test_loss)
print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(train_loader_a)*batch_size_train,
100. * correct / (len(train_loader_a)*batch_size_train)))
test()
for epoch in range(1, n_epochs + 1):
train(epoch)
test()
fig = plt.figure()
plt.plot(train_counter, train_losses, color='blue')
plt.scatter(test_counter, test_losses, color='red')
plt.legend(['Train Loss', 'Test Loss'], loc='upper right')
plt.xlabel('number of training examples seen')
plt.ylabel('negative log likelihood loss')
fig
with torch.no_grad():
_, _, output = network(test_loader[0][0])
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(test_loader[0][0][i][0], cmap='gray', interpolation='none')
plt.title("Prediction: {}".format(
output.data.max(1, keepdim=True)[1][i].item()))
plt.xticks([])
plt.yticks([])
fig
import copy
import importlib
importlib.reload(bcloss)
# h1 is the frozen reference model; h2 is the copy that will be trained further
h1 = copy.deepcopy(network)
h2 = copy.deepcopy(network)
h1.eval()
new_optimizer = optim.SGD(h2.parameters(), lr=learning_rate, momentum=momentum)
# Strict-imitation loss adds a penalty (weighted by lambda_c) for diverging from h1's behavior
lambda_c = 1.0
si_loss = bcloss.StrictImitationCrossEntropyLoss(h1, h2, lambda_c)
update_train_losses = []
update_train_counter = []
update_test_losses = []
update_test_counter = [i*len(train_loader_b)*batch_size_train for i in range(n_epochs + 1)]
def train_update(epoch):
for batch_idx, (data, target) in enumerate(train_loader_b):
new_optimizer.zero_grad()
loss = si_loss(data, target)
loss.backward()
new_optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader_b)*batch_size_train,
100. * batch_idx / len(train_loader_b), loss.item()))
update_train_losses.append(loss.item())
update_train_counter.append(
(batch_idx*64) + ((epoch-1)*len(train_loader_b)*batch_size_train))
def test_update():
h2.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
_, _, output = h2(data)
test_loss += F.nll_loss(output, target, reduction="sum").item()
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).sum()
test_loss /= len(train_loader_b)*batch_size_train
update_test_losses.append(test_loss)
print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(train_loader_b)*batch_size_train,
100. * correct / (len(train_loader_b)*batch_size_train)))
test_update()
for epoch in range(1, n_epochs + 1):
train_update(epoch)
test_update()
fig = plt.figure()
plt.plot(update_train_counter, update_train_losses, color='blue')
plt.scatter(update_test_counter, update_test_losses, color='red')
plt.legend(['Train Loss', 'Test Loss'], loc='upper right')
plt.xlabel('number of training examples seen')
plt.ylabel('negative log likelihood loss')
fig
h2.eval()
h1.eval()
test_index = 5
with torch.no_grad():
_, _, h1_output = h1(test_loader[test_index][0])
_, _, h2_output = h2(test_loader[test_index][0])
h1_labels = h1_output.data.max(1)[1]
h2_labels = h2_output.data.max(1)[1]
expected_labels = test_loader[test_index][1]
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(test_loader[test_index][0][i][0], cmap='gray', interpolation='none')
plt.title("Prediction: {}".format(
h2_labels[i].item()))
plt.xticks([])
plt.yticks([])
fig
trust_compatibility = scores.trust_compatibility_score(h1_labels, h2_labels, expected_labels)
error_compatibility = scores.error_compatibility_score(h1_labels, h2_labels, expected_labels)
print(f"Error Compatibility Score: {error_compatibility}")
print(f"Trust Compatibility Score: {trust_compatibility}")
```
# First-Class Functions
Functions are first-class objects.
## First-class objects
A first-class object:
- is created at runtime
- can be assigned to a variable or to an element of a data structure
- can be passed as an argument to a function
- can be returned as the result of a function
```
def factorial(n):
'''return n!'''
return 1 if n < 2 else n * factorial(n-1)
# Treat the function as an object and pass it to another function:
list(map(factorial, range(11)))
dir(factorial)
```
## Callable Objects
### Seven kinds in total:
- User-defined functions: created with def or lambda
- Built-in functions
- Built-in methods
- Methods: functions defined in the body of a class
- Classes: \_\_new\_\_ creates an instance and \_\_init\_\_ initializes it; if \_\_call\_\_ is defined, instances can be invoked as functions
- Class instances: if the class defines \_\_call\_\_, its instances can be invoked as functions
- Generator functions: functions or methods that use yield; they return a generator object (Chapter 14)
```
# callable
import random
class BingoCage:
def __init__(self, items):
self._items = list(items)
random.shuffle(self._items)
def pick(self):
try:
return self._items.pop()
except IndexError:
raise LookupError('pick from an empty BingoCage')
def __call__(self):
'''
An instance is callable only if its class defines the __call__ method
'''
return self.pick()
bingo = BingoCage(range(3))
bingo.pick()
bingo()
callable(bingo)
dir(BingoCage)
```
## Function introspection
```
class C: pass  # a user-defined class
obj = C()  # an instance of that class
def func(): pass  # a user-defined function
sorted(set(dir(func)) - set(dir(obj)))  # set difference: attributes functions have that plain instances lack
```
### Function-specific attributes
| Name | Type | Description |
|:--------------------|:----|:----|
| \_\_annotations\_\_ | dict | Annotations for parameters and the return value |
| \_\_call\_\_ | method-wrapper | Implements the () operator, i.e. the callable protocol |
| \_\_closure\_\_ | tuple | The function closure, i.e. bindings for free variables (often None) |
| \_\_code\_\_ | code | Function metadata and the function body compiled into bytecode |
| \_\_defaults\_\_ | tuple | Default values for the formal parameters |
| \_\_get\_\_ | method-wrapper | Implements the read-only descriptor protocol (Chapter 20) |
| \_\_globals\_\_ | dict | Global variables of the module where the function is defined |
| \_\_kwdefaults\_\_ | dict | Default values for keyword-only parameters |
| \_\_name\_\_ | str | The function name |
| \_\_qualname\_\_ | str | The qualified function name |
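As a quick sketch (not part of the original notes), these attributes can be read directly off any function object:
```
def make_adder(n):
    def adder(x, *, verbose=False):
        return x + n
    return adder

add3 = make_adder(3)
add3.__name__        # 'adder'
add3.__qualname__    # 'make_adder.<locals>.adder'
add3.__kwdefaults__  # {'verbose': False}
add3.__closure__     # (<cell ...>,) -- the binding of the free variable n
```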
## Function parameters
- Keyword-only parameters
- Use \* to capture any number of unnamed positional arguments into a tuple
- Use \*\* to capture any number of named keyword arguments into a dict
```
def tag(name, *content, cls=None, **attrs):
"""生成一个或多个HTML标签"""
if cls is not None:
attrs['class'] = cls
if attrs:
attr_str = ''.join(' %s="%s"'%(attr, value)
for attr, value
in sorted(attrs.items()))
else:
attr_str = ''
if content:
return '\n'.join('<%s%s>%s</%s>'%(name, attr_str, c, name)
for c in content)
else:
return '<%s%s />'%(name, attr_str)
# A single positional argument produces an empty tag with that name.
# name = 'br'
tag('br')
# Any arguments after the first are captured by *content and stored in a tuple.
# name = 'p'
# content = ('hello')
tag('p', 'hello')
# name = 'p'
# content = ('hello', 'world')
tag('p', 'hello', 'world')
# Keyword arguments not explicitly named in the tag signature are captured by **attrs and stored in a dict.
# name = 'p'
# content = ('hello')
# attrs['id'] = 33
tag('p', 'hello', id=33)
# The cls parameter can only be passed as a keyword argument.
# name = 'p'
# content = ('hello', 'world')
# cls = 'sidebar'
print(tag('p', 'hello', 'world', cls='sidebar'))
# When calling tag, even the first positional parameter can be passed as a keyword argument.
# name = 'img'
# attrs['content'] = 'testing'
tag(content='testing', name="img")
# Prefixing my_tag with ** passes every item in the dict as a separate argument;
# keys matching named parameters bind to them, and the rest are captured by **attrs.
# name = 'img'
# attrs['title'] = 'Sunset Buolevard'
# attrs['src'] = 'sunset.jpg'
# cls = 'framed'
my_tag = {'name':'img', 'title':'Sunset Buolevard',
'src':'sunset.jpg', 'cls':'framed'}
tag(**my_tag)
```
To make parameters keyword-only when defining a function (like *cls* above), place them after the parameter prefixed with \*. If you do not want to support a variable number of positional arguments but still want keyword-only parameters, put a bare \* by itself in the signature, as shown below:
```
def f(a, *, b):
return a,b
f(1, b=2)  # b must be passed as a keyword argument
```
# Getting information about parameters
```
def clip(text, max_len = 80):
"""
Truncate the text at the first space before or after max_len
"""
end = None
if len(text) > max_len:
space_before = text.rfind(' ', 0, max_len)
if space_before >= 0:
end = space_before
else:
space_after = text.rfind(' ', max_len)
if space_after >= 0:
end = space_after
if end == None:
end = len(text)
return text[:end].rstrip()
clip('I will learn python by myself every night.', 18)
'''
Function object attributes:
__defaults__: a tuple of default values for positional and keyword parameters
__kwdefaults__: default values for keyword-only parameters
Default values can only be matched to parameters by their position in the
__defaults__ tuple, so you have to scan from last to first to pair them up.
In this example clip has two parameters, text and max_len, and one default
value, 80, so it must belong to the last parameter, max_len. This is counter-intuitive.
'''
clip.__defaults__
"""
__code__.co_varnames: the function's parameter names, but local variable names are included as well
"""
clip.__code__.co_varnames
"""
__code__.co_argcount: combined with co_varnames above, this tells you which names are parameters.
Note that the count does not include parameters prefixed with * or **.
"""
clip.__code__.co_argcount
```
## Extracting the function signature with the inspect module
```
from inspect import signature
sig = signature(clip)
sig
for name, param in sig.parameters.items():
print (param.kind, ":", name, "=", param.default)
```
If a parameter has no default, inspect.\_empty is returned, because None is itself a valid default value.
| kind | meaning |
| :--- | ------: |
| POSITIONAL_OR_KEYWORD | A parameter that may be passed positionally or as a keyword (most parameters) |
| VAR_POSITIONAL | A tuple of positional arguments |
| VAR_KEYWORD | A dict of keyword arguments |
| KEYWORD_ONLY | A keyword-only parameter (new in Python 3) |
| POSITIONAL_ONLY | A positional-only parameter; not supported by Python function declaration syntax |
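A small sketch (not in the original notes) showing which kind each parameter of a mixed signature gets:
```
from inspect import signature

def demo(a, b=1, *args, c, d=2, **kwargs):
    pass

for name, param in signature(demo).parameters.items():
    print(name, '->', param.kind)
# a -> POSITIONAL_OR_KEYWORD
# b -> POSITIONAL_OR_KEYWORD
# args -> VAR_POSITIONAL
# c -> KEYWORD_ONLY
# d -> KEYWORD_ONLY
# kwargs -> VAR_KEYWORD
```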
# Function annotations
```
"""
With no annotations set, an empty dict is returned
"""
clip.__annotations__
def clip_ann(text:str, max_len:'int > 0' = 80) -> str:
"""
Truncate the text at the first space before or after max_len
"""
end = None
if len(text) > max_len:
space_before = text.rfind(' ', 0, max_len)
if space_before >= 0:
end = space_before
else:
space_after = text.rfind(' ', max_len)
if space_after >= 0:
end = space_after
if end == None:
end = len(text)
return text[:end].rstrip()
clip_ann.__annotations__
```
The only thing Python does with annotations is store them in the function's `__annotations__` attribute. That's it: Python does no checking, no enforcement, no validation, nothing at all. In other words, annotations mean nothing to the Python interpreter. They are just metadata that tools such as IDEs, frameworks, and decorators can use.
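A tiny sketch that makes the point concrete: the annotation is stored but never enforced.
```
def double(x: int) -> int:
    return x * 2

double('ab')            # returns 'abab' despite the int annotation
double.__annotations__  # {'x': <class 'int'>, 'return': <class 'int'>}
```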
# Functional programming
Two packages are involved:
- operator
- functools
## operator
### Using mul in place of the * (multiplication) operator
```
# How to use an operator as a function
# The traditional approach:
from functools import reduce
def fact(n):
return reduce(lambda a,b: a*b, range(1,n+1))
fact(5)
# With the operator module, the anonymous function is no longer needed
from functools import reduce
from operator import mul
def fact_op(n):
return reduce(mul, range(1,n+1))
fact_op(5)
```
### Using itemgetter in place of the [ ] indexing operator
```
metro_data = [
('Tokyo', 'JP', 36.933, (35.689722, 139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),
('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),
]
from operator import itemgetter
for city in sorted(metro_data, key = itemgetter(1)):
print(city)
# itemgetter(1) here is equivalent to lambda fields: fields[1]
cc_name = itemgetter(1,0)  # returns a tuple of the extracted values
for city in metro_data:
print( cc_name(city) )
```
### Using attrgetter to read a named attribute, like var.attr (the dot operator)
```
from collections import namedtuple
metro_data = [
('Tokyo', 'JP', 36.933, (35.689722, 139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),
('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),
]
LatLog = namedtuple('LatLong','lat long')
Metropolis = namedtuple('Metropolis', 'name cc pop coord')
metro_areas = [Metropolis(name, cc, pop, LatLog(lat, long)) for name, cc
, pop, (lat, long) in metro_data]
# The dot operator
metro_areas[0].coord.lat
# Use an operator-module function in place of the operator
from operator import attrgetter
name_lat = attrgetter('name', 'coord.lat')
# Sort by the lat coordinate
for city in sorted(metro_areas, key = attrgetter('coord.lat')):
print(name_lat(city))
```
### methodcaller calls a named method on its argument
```
from operator import methodcaller
s = 'The time has come'
# upcase can be thought of as a function created on the fly
upcase = methodcaller('upper')  # specify the method name
upcase(s)
# Freeze arguments: replace(s, ' ', '-') ---> partial application
hiphenate = methodcaller('replace', ' ','-')
hiphenate(s)
```
### The functions available in operator
```
import operator
funcs = [name for name in dir(operator) if not name.startswith('_')]
for func in funcs:
print(func)
```
## functools
### functools.partial freezes arguments
The first argument to partial is a callable, followed by any number of positional and keyword arguments to bind.
```
from operator import mul
from functools import partial
# Freeze one argument of mul(a, b) to the value 3
triple = partial(mul, 3)
triple(7)
# Handy where an API only accepts a one-argument function
list(map(triple,range(1,10)))
```
### Normalizing a function with functools.partial
```
# For a call we use frequently with the same arguments, freezing it into a variable is more convenient
import unicodedata, functools
# Extract the frequently used call unicodedata.normalize('NFC', s)
nfc = functools.partial(unicodedata.normalize, 'NFC')
s1 ='café'
s2 = 'cafe\u0301'
s1 == s2
nfc(s1) == nfc(s2)
def tag(name, *content, cls=None, **attrs):
"""生成一个或多个HTML标签"""
if cls is not None:
attrs['class'] = cls
if attrs:
attr_str = ''.join(' %s="%s"'%(attr, value)
for attr, value
in sorted(attrs.items()))
else:
attr_str = ''
if content:
return '\n'.join('<%s%s>%s</%s>'%(name, attr_str, c, name)
for c in content)
else:
return '<%s%s />'%(name, attr_str)
# Partially freezing tag's arguments makes it more convenient to use:
from functools import partial
picture = partial(tag, 'img', cls='pic-frame')
picture(src = 'wum.jpeg')
picture
picture.func
tag
picture.args
picture.keywords
```
## Dependencies
```
import json, warnings, shutil, glob
from jigsaw_utility_scripts import *
from scripts_step_lr_schedulers import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
## TPU configuration
```
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Load data
```
database_base_path = '/kaggle/input/jigsaw-data-split-roberta-192-ratio-2-clean-tail6/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
usecols=['comment_text', 'toxic', 'lang'])
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print('Validation samples: %d' % len(valid_df))
display(valid_df.head())
base_data_path = 'fold_1/'
fold_n = 1
# Unzip files
!tar -xf /kaggle/input/jigsaw-data-split-roberta-192-ratio-2-clean-tail6/fold_1.tar.gz
```
# Model parameters
```
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 128,
"EPOCHS": 4,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": None,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
```
## Learning rate schedule
```
lr_min = 1e-7
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
step_size = len(k_fold[k_fold[f'fold_{fold_n}'] == 'train']) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 1
decay = .9997
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps, hold_max_steps,
lr_start, lr_max, lr_min, decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
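The `exponential_schedule_with_warmup` helper comes from `scripts_step_lr_schedulers` and is not reproduced in this notebook. The sketch below only illustrates the schedule shape assumed above (linear warmup to the peak, an optional hold, then exponential decay down to a floor); it is not the actual TF-compatible helper.
```
def lr_schedule_sketch(step, warmup_steps, hold_max_steps,
                       lr_start, lr_max, lr_min, decay):
    """Plain-Python illustration: linear warmup, optional hold, exponential decay."""
    if step < warmup_steps:
        # Linear warmup from lr_start to lr_max
        return lr_start + (lr_max - lr_start) * step / max(warmup_steps, 1)
    if step < warmup_steps + hold_max_steps:
        # Hold at the peak learning rate
        return lr_max
    # Exponential decay towards the lr_min floor
    return max(lr_min, lr_max * decay ** (step - warmup_steps - hold_max_steps))

# Shape check with values similar to the cell above
sample = [lr_schedule_sketch(s, 500, 0, 1e-7, 1e-5, 1e-7, 0.9997) for s in range(0, 5000, 100)]
```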
# Model
```
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
cls_token = last_hidden_state[:, 0, :]
output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
model = Model(inputs=[input_ids, attention_mask], outputs=output)
return model
```
# Train
```
# Load data
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train_int.npy').reshape(x_train.shape[1], 1).astype(np.float32)
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy').reshape(x_valid_ml.shape[1], 1).astype(np.float32)
#################### ADD TAIL ####################
x_train_tail = np.load(base_data_path + 'x_train_tail.npy')
y_train_tail = np.load(base_data_path + 'y_train_int_tail.npy').reshape(x_train_tail.shape[1], 1).astype(np.float32)
x_train = np.hstack([x_train, x_train_tail])
y_train = np.vstack([y_train, y_train_tail])
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid_ml.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED))
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
train_data_iter = iter(train_dist_ds)
valid_data_iter = iter(valid_dist_ds)
# Step functions
@tf.function
def train_step(data_iter):
def train_step_fn(x, y):
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss = loss_fn(y, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_auc.update_state(y, probabilities)
train_loss.update_state(loss)
for _ in tf.range(step_size):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss = loss_fn(y, probabilities)
valid_auc.update_state(y, probabilities)
valid_loss.update_state(loss)
for _ in tf.range(valid_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
# Train model
with strategy.scope():
model = model_fn(config['MAX_LEN'])
optimizer = optimizers.Adam(learning_rate=lambda:
exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps, hold_max_steps, lr_start,
lr_max, lr_min, decay))
loss_fn = losses.binary_crossentropy
train_auc = metrics.AUC()
valid_auc = metrics.AUC()
train_loss = metrics.Sum()
valid_loss = metrics.Sum()
metrics_dict = {'loss': train_loss, 'auc': train_auc,
'val_loss': valid_loss, 'val_auc': valid_auc}
history = custom_fit(model, metrics_dict, train_step, valid_step, train_data_iter, valid_data_iter,
step_size, valid_step_size, config['BATCH_SIZE'], config['EPOCHS'],
config['ES_PATIENCE'], save_last=False)
# model.save_weights('model.h5')
# Make predictions
x_train = np.load(base_data_path + 'x_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml_eval = np.load(database_base_path + 'x_valid.npy')
train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
k_fold.loc[k_fold[f'fold_{fold_n}'] == 'train', f'pred_{fold_n}'] = np.round(train_preds)
k_fold.loc[k_fold[f'fold_{fold_n}'] == 'validation', f'pred_{fold_n}'] = np.round(valid_preds)
valid_df[f'pred_{fold_n}'] = valid_ml_preds
# Fine-tune on validation set
#################### ADD TAIL ####################
x_valid_ml_tail = np.hstack([x_valid_ml, np.load(database_base_path + 'x_valid_tail.npy')])
y_valid_ml_tail = np.vstack([y_valid_ml, y_valid_ml])
valid_step_size_tail = x_valid_ml_tail.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_ml_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_valid_ml_tail, y_valid_ml_tail,
config['BATCH_SIZE'], AUTO, seed=SEED))
train_ml_data_iter = iter(train_ml_dist_ds)
history_ml = custom_fit(model, metrics_dict, train_step, valid_step, train_ml_data_iter, valid_data_iter,
valid_step_size_tail, valid_step_size, config['BATCH_SIZE'], 1,
config['ES_PATIENCE'], save_last=False)
# Join history
for key in history_ml.keys():
history[key] += history_ml[key]
model.save_weights('model_ml.h5')
# Make predictions
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
valid_df[f'pred_ml_{fold_n}'] = valid_ml_preds
### Delete data dir
shutil.rmtree(base_data_path)
```
## Model loss graph
```
plot_metrics(history)
```
# Model evaluation
```
display(evaluate_model_single_fold(k_fold, fold_n, label_col='toxic_int').style.applymap(color_map))
```
# Confusion matrix
```
train_set = k_fold[k_fold[f'fold_{fold_n}'] == 'train']
validation_set = k_fold[k_fold[f'fold_{fold_n}'] == 'validation']
plot_confusion_matrix(train_set['toxic_int'], train_set[f'pred_{fold_n}'],
validation_set['toxic_int'], validation_set[f'pred_{fold_n}'])
```
# Model evaluation by language
```
display(evaluate_model_single_fold_lang(valid_df, fold_n).style.applymap(color_map))
# ML fine-tuned preds
display(evaluate_model_single_fold_lang(valid_df, fold_n, pred_col='pred_ml').style.applymap(color_map))
```
# Visualize predictions
```
pd.set_option('max_colwidth', 120)
print('English validation set')
display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
print('Multilingual validation set')
display(valid_df[['comment_text', 'toxic'] + [c for c in valid_df.columns if c.startswith('pred')]].head(10))
```
# Test set predictions
```
x_test = np.load(database_base_path + 'x_test.npy')
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO))
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
display(submission.describe())
display(submission.head(10))
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# How to use EstimatorStep in AML Pipeline
This notebook shows how to use the EstimatorStep with Azure Machine Learning Pipelines. Estimator is a convenient object in Azure Machine Learning that wraps run configuration information to help simplify the tasks of specifying how a script is executed.
## Prerequisite:
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](https://aka.ms/pl-config) to:
* install the AML SDK
* create a workspace and its configuration file (`config.json`)
Let's get started. First let's import some Python libraries.
```
import azureml.core
# check core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)
```
## Initialize workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
## Create or Attach existing AmlCompute
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.
If we cannot find a cluster with the given name, we will create a new one here: an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:
1. create the configuration (this step is local and only takes a second)
2. create the cluster (this step will take about **20 seconds**)
3. provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about **3-5 minutes** and provides only sparse output along the way. Please make sure to wait until the call returns before moving to the next cell
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "cpu-cluster"
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', max_nodes=4)
# create the cluster
cpu_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it uses the scale settings for the cluster
cpu_cluster.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# use get_status() to get a detailed status for the current cluster.
print(cpu_cluster.get_status().serialize())
```
Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named 'cpu-cluster' of type `AmlCompute`.
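For example (a minimal sketch; the exact output depends on your workspace):
```
# List the compute targets registered in the workspace;
# 'cpu-cluster' should appear as an AmlCompute entry.
for name, target in ws.compute_targets.items():
    print(name, target.type, target.provisioning_state)
```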
## Use a simple script
We have already created a simple "hello world" script. This is the script that we will submit through the estimator pattern. It prints a hello-world message and, if the Azure ML SDK is installed, it also logs a list of values ([Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number)).
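The contents of `dummy_train.py` are not reproduced in this notebook; a minimal sketch of such a script might look like the following (the argument names mirror the ones passed to the `EstimatorStep` later, and the logging is attempted only if the Azure ML SDK is available):
```
# dummy_train.py -- illustrative sketch, not the actual script in the repo
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--datadir', default=None, help='input data reference')
parser.add_argument('--output', default=None, help='output folder')
args = parser.parse_args()

print('Hello world! datadir={}, output={}'.format(args.datadir, args.output))

try:
    from azureml.core import Run
    run = Run.get_context()
    # Log a short list of Fibonacci numbers if the SDK is installed
    run.log_list('Fibonacci', [0, 1, 1, 2, 3, 5, 8, 13, 21, 34])
except ImportError:
    print('Azure ML SDK not installed; skipping logging.')
```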
## Build an Estimator object
Estimator by default will attempt to use Docker-based execution. You can also enable Docker and let estimator pick the default CPU image supplied by Azure ML for execution. You can target an AmlCompute cluster (or any other supported compute target types). You can also customize the conda environment by adding conda and/or pip packages.
> Note: The arguments to the entry script used in the Estimator object should be specified as *list* using
'estimator_entry_script_arguments' parameter when instantiating EstimatorStep. Estimator object's parameter
'script_params' accepts a dictionary. However 'estimator_entry_script_arguments' parameter expects arguments as
a list.
> Estimator object initialization involves specifying a list of DataReference objects in its 'inputs' parameter.
In Pipelines, a step can take another step's output or DataReferences as input. So when creating an EstimatorStep,
the parameters 'inputs' and 'outputs' need to be set explicitly and that will override 'inputs' parameter
specified in the Estimator object.
> The best practice is to use separate folders for scripts and its dependent files for each step and specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step.
```
from azureml.core import Datastore
from azureml.data.data_reference import DataReference
from azureml.pipeline.core import PipelineData
def_blob_store = Datastore(ws, "workspaceblobstore")
input_data = DataReference(
datastore=def_blob_store,
data_reference_name="input_data",
path_on_datastore="20newsgroups/20news.pkl")
output = PipelineData("output", datastore=def_blob_store)
source_directory = 'estimator_train'
from azureml.train.estimator import Estimator
est = Estimator(source_directory=source_directory,
compute_target=cpu_cluster,
entry_script='dummy_train.py',
conda_packages=['scikit-learn'])
```
## Create an EstimatorStep
[EstimatorStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.estimator_step.estimatorstep?view=azure-ml-py) adds a step to run Estimator in a Pipeline.
- **name:** Name of the step
- **estimator:** Estimator object
- **estimator_entry_script_arguments:** A list of command-line arguments to pass to the estimator's entry script
- **runconfig_pipeline_params:** Override runconfig properties at runtime using key-value pairs each with name of the runconfig property and PipelineParameter for that property
- **inputs:** Inputs
- **outputs:** Output is list of PipelineData
- **compute_target:** Compute target to use
- **allow_reuse:** Whether the step should reuse previous results when run with the same settings/inputs. If this is false, a new run will always be generated for this step during pipeline execution.
- **version:** Optional version tag to denote a change in functionality for the step
```
from azureml.pipeline.steps import EstimatorStep
est_step = EstimatorStep(name="Estimator_Train",
estimator=est,
estimator_entry_script_arguments=["--datadir", input_data, "--output", output],
runconfig_pipeline_params=None,
inputs=[input_data],
outputs=[output],
compute_target=cpu_cluster)
```
## Build and Submit the Experiment
```
from azureml.pipeline.core import Pipeline
from azureml.core import Experiment
pipeline = Pipeline(workspace=ws, steps=[est_step])
pipeline_run = Experiment(ws, 'Estimator_sample').submit(pipeline)
```
## View Run Details
```
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
```
# Softmax exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
This exercise is analogous to the SVM exercise. You will:
- implement a fully-vectorized **loss function** for the Softmax classifier
- implement the fully-vectorized expression for its **analytic gradient**
- **check your implementation** with numerical gradient
- use a validation set to **tune the learning rate and regularization** strength
- **optimize** the loss function with **SGD**
- **visualize** the final learned weights
```
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
print 'dev data shape: ', X_dev.shape
print 'dev labels shape: ', y_dev.shape
```
## Softmax Classifier
Your code for this section will all be written inside **cs231n/classifiers/softmax.py**.
```
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print 'loss: %f' % loss
print 'sanity check: %f' % (-np.log(0.1))
```
## Inline Question 1:
**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**
**Your answer:** *Fill this in*
```
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'Loss difference: %f' % np.abs(loss_naive - loss_vectorized)
print 'Gradient difference: %f' % grad_difference
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [5e4, 1e8]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifer in best_softmax. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'softmax on raw pixels final test set accuracy: %f' % (test_accuracy, )
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
```
# Poisson Regression, Gradient Descent
In this notebook, we will show how to use gradient descent to solve a [Poisson regression model](https://en.wikipedia.org/wiki/Poisson_regression). A Poisson regression model takes on the following form.
$\operatorname{E}(Y\mid\mathbf{x})=e^{\boldsymbol{\theta}' \mathbf{x}}$
where
* $x$ is a vector of input values
* $\theta$ is a vector of weights (the coefficients)
* $y$ is the expected value of the output, i.e. the rate parameter of a [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution), typically denoted as $\lambda$
Note that [Scikit-Learn](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model) does not provide a solver for Poisson regression models, but [statsmodels](http://www.statsmodels.org/dev/generated/statsmodels.discrete.discrete_model.Poisson.html) does, though examples for the latter are [thin](https://datascience.stackexchange.com/questions/23143/poisson-regression-options-in-python).
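For completeness, here is a minimal, hypothetical sketch of fitting a Poisson regression with statsmodels on toy count data; `X_toy`, `y_toy`, and the use of `sm.GLM` with a Poisson family are illustrative choices, not the data simulated below.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(37)
X_toy = sm.add_constant(rng.normal(size=(500, 2)))              # intercept + 2 features
y_toy = rng.poisson(np.exp(X_toy @ np.array([1.0, 0.5, 0.2])))  # counts with a log link
print(sm.GLM(y_toy, X_toy, family=sm.families.Poisson()).fit().params)  # roughly [1.0, 0.5, 0.2]
```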
## Simulate data
Now, let's simulate the data. Note that the coefficients are $[1, 0.5, 0.2]$ and that there is error $\epsilon \sim \mathcal{N}(0, 1)$ added to the simulated data.
$y=e^{1 + 0.5x_1 + 0.2x_2 + \epsilon}$
In this notebook, the score is denoted as $z$ and $z = 1 + 0.5x_1 + 0.2x_2 + \epsilon$. Additionally, $y$ is the mean for a Poisson distribution. The variables $X_1$ and $X_2$ are independently sampled from their own normal distribution $\mathcal{N}(0, 1)$.
After we simulate the data, we will plot the distribution of the scores and means. Note that the expected value of the output $y$ is 5.2.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from numpy.random import normal
from scipy.stats import poisson
np.random.seed(37)
sns.set(color_codes=True)
n = 10000
X = np.hstack([
np.array([1 for _ in range(n)]).reshape(n, 1),
normal(0.0, 1.0, n).reshape(n, 1),
normal(0.0, 1.0, n).reshape(n, 1)
])
z = np.dot(X, np.array([1.0, 0.5, 0.2])) + normal(0.0, 1.0, n)
y = np.exp(z)
```
## Visualize data
```
fig, ax = plt.subplots(1, 2, figsize=(20, 5))
sns.kdeplot(z, ax=ax[0])
ax[0].set_title(r'Distribution of Scores')
ax[0].set_xlabel('score')
ax[0].set_ylabel('probability')
sns.kdeplot(y, ax=ax[1])
ax[1].set_title(r'Distribution of Means')
ax[1].set_xlabel('mean')
ax[1].set_ylabel('probability')
```
## Solve for the Poisson regression model weights
Now we learn the weights of the Poisson regression model using gradient descent. Notice that the loss function we use here is the same squared-error loss as in Ordinary Least Squares (OLS) regression, applied to the exponentiated predictions $\hat{y}_i = e^{\boldsymbol{\theta}'\mathbf{x}_i}$:
$$L(\theta) = \frac{1}{n}\sum_{i=1}^n (\hat{y}_i - y_i)^2$$
We do not have to worry about writing out the gradient of the loss function since we are using [Autograd](https://github.com/HIPS/autograd).
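For reference, the gradient that Autograd computes for this squared-error loss can also be written out by hand using the chain rule (with $\hat{y}_i = e^{\boldsymbol{\theta}'\mathbf{x}_i}$), which is a useful sanity check:
$$\nabla_{\boldsymbol{\theta}} L(\boldsymbol{\theta}) = \frac{2}{n}\sum_{i=1}^n \left(\hat{y}_i - y_i\right)\hat{y}_i\,\mathbf{x}_i$$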
```
import autograd.numpy as np
from autograd import grad
from autograd.numpy import exp, log, sqrt
# define the loss function
def loss(w, X, y):
y_pred = np.exp(np.dot(X, w))
loss = ((y_pred - y) ** 2.0)
return loss.mean(axis=None)
#the magic line that gives you the gradient of the loss function
loss_grad = grad(loss)
def learn_weights(X, y, alpha=0.05, max_iter=30000, debug=False):
w = np.array([0.0 for _ in range(X.shape[1])])
if debug is True:
print('initial weights = {}'.format(w))
loss_trace = []
weight_trace = []
for i in range(max_iter):
        grad_value = loss_grad(w, X, y)  # gradient of the loss w.r.t. the weights
        w = w - (grad_value * alpha)
        if i % 2000 == 0 and debug is True:
            print('{}: gradient = {}, weights = {}'.format(i, grad_value, w))
        loss_trace.append(grad_value)
        weight_trace.append(w)
if debug is True:
print('intercept + weights: {}'.format(w))
loss_trace = np.array(loss_trace)
weight_trace = np.array(weight_trace)
return w, loss_trace, weight_trace
def plot_traces(w, loss_trace, weight_trace, alpha):
fig, ax = plt.subplots(1, 2, figsize=(20, 5))
    ax[0].set_title(r'Gradient of the loss per weight over iterations, $\alpha=${}'.format(alpha))
    ax[0].set_xlabel('iteration')
    ax[0].set_ylabel('gradient')
ax[0].plot(loss_trace[:, 0], label=r'$\beta$')
ax[0].plot(loss_trace[:, 1], label=r'$x_0$')
ax[0].plot(loss_trace[:, 2], label=r'$x_1$')
ax[0].legend()
ax[1].set_title(r'Weight learning over iterations, $\alpha=${}'.format(alpha))
ax[1].set_xlabel('iteration')
ax[1].set_ylabel('weight')
ax[1].plot(weight_trace[:, 0], label=r'$\beta={:.2f}$'.format(w[0]))
ax[1].plot(weight_trace[:, 1], label=r'$x_0={:.2f}$'.format(w[1]))
ax[1].plot(weight_trace[:, 2], label=r'$x_1={:.2f}$'.format(w[2]))
ax[1].legend()
```
We try learning the coefficients with different learning rates $\alpha$. Note the behavior of the traces of the gradient and the weights for different $\alpha$. The loss function was the same squared-error loss used for OLS regression, whereas the canonical Poisson regression loss (the negative log-likelihood) is defined differently. Nevertheless, we still get acceptable results.
### Use gradient descent with $\alpha=0.001$
```
alpha = 0.001
w, loss_trace, weight_trace = learn_weights(X, y, alpha=alpha, max_iter=1000)
plot_traces(w, loss_trace, weight_trace, alpha=alpha)
print(w)
```
### Use gradient descent with $\alpha=0.005$
```
alpha = 0.005
w, loss_trace, weight_trace = learn_weights(X, y, alpha=alpha, max_iter=200)
plot_traces(w, loss_trace, weight_trace, alpha=alpha)
print(w)
```
### Use gradient descent with $\alpha=0.01$
```
alpha = 0.01
w, loss_trace, weight_trace = learn_weights(X, y, alpha=alpha, max_iter=200)
plot_traces(w, loss_trace, weight_trace, alpha=alpha)
print(w)
```
|
github_jupyter
|
# Unwetter Simulator
```
import os
os.chdir('..')
f'Working directory: {os.getcwd()}'
from unwetter import db, map
from datetime import datetime
from unwetter import config
config.SEVERITY_FILTER = ['Severe', 'Extreme']
config.STATES_FILTER = ['NW']
config.URGENCY_FILTER = ['Immediate']
severities = {
'Minor': 'Wetterwarnung',
'Moderate': 'Markante Wetterwarnung',
'Severe': '🔴 Amtliche Unwetterwarnung',
'Extreme': '🔴 Amtliche Extreme Unwetterwarnung',
}
search_start = datetime(2019, 6, 19, 11, 0)
search_end = datetime(2019, 6, 19, 22, 0)
search_filter = {
'$and': [
{
'sent': {
'$gt': search_start,
},
},
{
'sent': {
'$lt': search_end,
},
},
]
}
events = list(db.collection.find(search_filter).sort([('sent', 1)]))
len(events)
len([e for e in events if e['published']])
for e in events:
e['published'] = False
def mock_by_ids(ids):
return [event for event in events if event['id'] in ids]
def mock_publish(ids):
for event in events:
if event['id'] in ids:
event['published'] = True
from unwetter.dwd import special_type
def mock_has_changes(event, old_events):
if not any(t['published'] for t in old_events):
extended_references = set()
extended_references.update(event.get('extended_references', event['references']))
for old_event in old_events:
if 'extended_references' in old_event:
extended_references.update(old_event['extended_references'])
elif 'references' in old_event:
extended_references.update(old_event['references'])
event['extended_references'] = sorted(extended_references, reverse=True)
old_events = mock_by_ids(extended_references)
event['has_changes'] = [
{
'id': old_event['id'],
'changed': mock_changes(event, old_event),
'published': old_event['published'],
}
for old_event in old_events
]
event['special_type'] = special_type(event, old_events)
return event
from datetime import datetime, timedelta
from unwetter.generate.blocks import expires, district_list, state_for_cell_id, region_list, dates
from unwetter.generate.helpers import upper_first, local_time
STATES_FILTER = config.STATES_FILTER
def mock_changes_old(event, old_event):
"""
Generate a list of changes between two events
:param event:
:param old_event:
:return: str
"""
text = ''
simple_fields = {
'severity': 'Warnstufe',
'event': 'Wetterphänomen',
'certainty': 'Wahrscheinlichkeit',
}
for field in simple_fields:
if old_event.get(field) != event.get(field):
if field == 'severity' and event[field] in ['Minor', 'Moderate']:
text += f'{simple_fields[field]}: Herabstufung auf {severities[event[field]]}\n\n'
elif field == 'severity':
text += f'{simple_fields[field]}: {severities[event[field]]} ' \
f'(zuvor "{severities[old_event[field]]}")\n\n'
else:
text += f'{simple_fields[field]}: {event[field]} ' \
f'(zuvor "{old_event.get(field, "Nicht angegeben")}")\n\n'
    # Editorial request: only check whether the expires time changed, since every update has a new onset time
if abs(event['onset'] - event['sent']) > timedelta(minutes=2) and dates(old_event) != dates(event):
text += f'Gültigkeit: {dates(event)} (zuvor "{dates(old_event)}")\n\n'
elif expires(old_event) != expires(event):
text += f'Ende der Gültigkeit: {expires(event)} (zuvor "{expires(old_event)}")\n\n'
if district_list(old_event) != district_list(event):
districts_now = {
district['name'] for district in event['districts']
if state_for_cell_id(district['warn_cell_id']) in STATES_FILTER
}
districts_before = {
district['name'] for district in old_event['districts']
if state_for_cell_id(district['warn_cell_id']) in STATES_FILTER
}
added = districts_now - districts_before
removed = districts_before - districts_now
if added:
text += f'Neue Kreise/Städte: {", ".join(sorted(added))}\n'
if removed:
text += f'Nicht mehr betroffene Kreise/Städte: {", ".join(sorted(removed))}\n'
if region_list(old_event) != region_list(event):
text += f'Regionale Zuordnung: {upper_first(region_list(event))} ' \
f'(zuvor: "{upper_first(region_list(old_event))}")\n\n'
else:
text += f'Regionale Zuordnung unverändert: {upper_first(region_list(event))}\n\n'
'''
# Editorial choice --> No relevant information due to relatively small area --> Thus, no update
elif commune_list(old_event) != commune_list(event):
text += 'Regionale Zuordnung: Änderung der betroffenen Gemeinden\n\n'
'''
return text
from datetime import datetime, timedelta
import re
STATES_FILTER = config.STATES_FILTER
def mock_changes(event, old_event):
"""
Generate a list of changes between two events
:param event:
:param old_event:
:return: bool
"""
if any(event.get(field) != old_event.get(field) for field in ['severity', 'certainty']):
return True
# Notify about big hail sizes
if 'Hagel' not in event['parameters']:
if event['event'] != old_event['event'].replace(' und HAGEL', ''):
return True
else:
hail_re = r'^.*?(\d+).*?cm'
hail_size_now = int(re.match(hail_re, event['parameters']['Hagel']).group(1))
hail_size_before = int(re.match(hail_re, old_event['parameters'].get('Hagel', '0 cm')).group(1))
if hail_size_now >= 3 and hail_size_before < 3:
return True
else:
if event['event'].replace(' und HAGEL', '') != old_event['event'].replace(' und HAGEL', ''):
return True
if abs(event['onset'] - event['sent']) > timedelta(minutes=2) and event['sent'] - event['onset'] < timedelta(minutes=2) and old_event['onset'] != event['onset']:
return True
elif old_event['expires'] != event['expires']:
return True
if len(set(r[0] for r in event['regions']) - set(r[0] for r in old_event['regions'])) > 0:
return True
districts_now = {
district['name'] for district in event['districts']
if state_for_cell_id(district['warn_cell_id']) in STATES_FILTER
}
districts_before = {
district['name'] for district in old_event['districts']
if state_for_cell_id(district['warn_cell_id']) in STATES_FILTER
}
added = districts_now - districts_before
if len(districts_before) <= 3 and added:
return True
return False
from unwetter.config import filter_event
def mock_update(new_events):
filtered = []
for event in new_events:
if filter_event(event):
if event['msg_type'] in ['Alert', 'Cancel']:
filtered.append(event)
elif any(t['changed'] and t['published'] for t in event['has_changes']):
filtered.append(event)
elif event['special_type'] == 'UpdateAlert':
filtered.append(event)
elif not any(t['changed'] and t['published'] for t in event['has_changes']):
continue
else:
print(f'Event was not filtered 1: {event["id"]}')
mock_publish([event['id'] for event in filtered])
return filtered
from unwetter.generate.blocks import changes
current_sent = events[0]['sent']
bins = []
current_bin = []
for event in events:
if event['sent'] != current_sent:
current_sent = event['sent']
bins.append(current_bin)
current_bin = []
current_bin.append(event)
bins.append(current_bin)
processed = []
for bin in bins:
for event in bin:
if 'references' in event:
old_events = mock_by_ids(event.get('extended_references', event['references']))
mock_has_changes(event, old_events)
processed.append(mock_update(bin))
sum(len(bin) for bin in processed)
[print(event['event'], event['sent'] + timedelta(hours=2), event.get('special_type'), event['id']) for bin in processed for event in bin]
```
|
github_jupyter
|
# MLI BYOR: Custom Explainers
This notebook is a demo of MLI **bring your own explainer recipe** (BYOR) Python API.
**Ad-hoc OOTB and/or custom explainer run** scenario:
* **Upload** interpretation recipe.
* Determine recipe upload job **status**.
* **Run** ad-hoc recipe run job.
* Determine ad-hoc recipe job **status**.
* **Get** explainer type for given job.
* **List** explanation types for given explainer run.
* **List** explanation representations (formats) URLs for given explanation type.
* **Download** explanation representation from ^ URL.
* **Download** interpretation recipe job result **data**.
**Interpretation explainers run** scenario:
* **List** available/compatible explainers.
* **Choose** subset or all ^ compatible explainers.
* **Run** interpretation job.
* Determine **status** of interpretation job.
* Determine **status** per explainer job (running within the scope of interpretation).
* **List** explainer types which were ran within interpretation.
* **List** explainer runs for given explainer type within the interpretation.
* **List** explanation types for given explainer run.
* **List** explanation representations (formats) URLs for given explanation type.
* **Download** explanation representation from ^ URL.
### Virtual Environment and Dependencies
Prepare and activate virtual environment for this notebook by running:
```
. .env/bin/activate
pip install ipykernel
ipython kernel install --user --name=dai
```
```
import os
import pprint
import time
from random import randint
from h2oaicore.mli.oss.commons import MimeType, MliJobStatus
from h2oaicore.messages import (
CommonDaiExplainerParameters,
CommonExplainerParameters,
DatasetReference,
Explainer,
ExplainerDescriptor,
ExplainerJobStatus,
ExplainerRunJob,
ExplainersRunJob,
InterpretationJob,
ModelReference,
)
import h2o
import h2oai_client
from h2oai_client import Client
```
**Connect** to DAI server:
```
# connect to Driverless AI server - make sure to use the same
# user name and password as when signing in through the GUI
hostname = '127.0.0.1'
address = 'http://' + hostname + ':12345'
username = 'h2oai'
password = 'h2oai'
# h2oai = Client("http://localhost:12345", "h2oai", "h2oai")
h2oai = Client(address = address, username = username, password = password)
```
# Upload
Upload BYOR interpretation [recipe](http://192.168.59.141/mli-byor/mli_byor_foo.py) to Driverless AI server.
```
# URL of the recipe to upload
# Custom Morris sensitivity analysis
URL_BYOR_EXPLAINER = "https://h2o-public-test-data.s3.amazonaws.com/recipes/explainers/morris_sensitivity_explainer.py"
BYOR_EXPLAINER_NAME = "Morris Sensitivity Analysis"
```
**Upload recipe** to DAI server:
```
recipe_job_key = h2oai.create_custom_recipe_from_url(URL_BYOR_EXPLAINER)
recipe_job_key
recipe_job = h2oai._wait_for_recipe_load(recipe_job_key)
pprint.pprint(recipe_job.dump())
if recipe_job.entity.explainers:
uploaded_explainer_id:str = recipe_job.entity.explainers[0].id
else:
# explainer already deployed (look it up)
explainers = h2oai.list_explainers(
experiment_types=None,
explanation_scopes=None,
dai_model_key=None,
keywords=None,
explainer_filter=[]
)
uploaded_explainer_id = [explainer.id for explainer in explainers if BYOR_EXPLAINER_NAME == explainer.name][0]
print(f"Uploaded recipe ID: '{uploaded_explainer_id}'")
```
Driverless AI **model** and **dataset**:
```
# *) hardcoded DAI keys
# DATASET_KEY = "f12f69b4-475b-11ea-bf67-9cb6d06b189b"
# MODEL_KEY = "f268e364-475b-11ea-bf67-9cb6d06b189b"
# *) lookup compatible DAI keys
compatible_models = h2oai.list_explainable_models(
explainer_id=uploaded_explainer_id, offset=0, size=30
)
if compatible_models.models:
MODEL_KEY = compatible_models.models[0].key
DATASET_KEY = compatible_models.models[0].parameters.dataset.key
else:
raise RuntimeError("No compatible models found: please train an IID regression/binomial experiment")
target_col = h2oai.get_model_summary(key=MODEL_KEY).parameters.target_col
print(f"Model : {MODEL_KEY}\nDataset: {DATASET_KEY}\nTarget : {target_col}")
```
# List Explainers
List custom and OOTB recipes.
```
# list available server recipes
explainers = h2oai.list_explainers(
experiment_types=None,
explanation_scopes=None,
dai_model_key=None,
keywords=None,
explainer_filter=[]
)
for e in explainers:
pprint.pprint(e.dump())
# list server recipes for given experiment type
explainers = h2oai.list_explainers(
experiment_types=["binomial"],
explanation_scopes=None,
dai_model_key=None,
keywords=None,
explainer_filter=[]
)
for e in explainers:
pprint.pprint(e.dump())
# list server recipes compatible with given DAI model
explainers = h2oai.list_explainers(
dai_model_key=MODEL_KEY,
experiment_types=None,
explanation_scopes=None,
keywords=None,
explainer_filter=[]
)
for e in explainers:
pprint.pprint(e.dump())
```
# Ad-hoc Run of Built-in Explainer Recipe
Run OOTB explainer recipe shipped w/ DAI server:
```
sa_explainer_id = [explainer.id for explainer in explainers if "SA explainer" == explainer.name][0]
sa_explainer_id
# prepare explanation parameters
explanation_params=Client.build_common_dai_explainer_params(
target_col=target_col,
model_key=MODEL_KEY,
dataset_key=DATASET_KEY,
)
explanation_params.dump()
# run explainer
explainer_id = sa_explainer_id
print(f"Running OOTB explainer: {explainer_id}")
run_job = h2oai.run_explainers(
explainers=[Explainer(
explainer_id=explainer_id,
explainer_params="",
)],
params=explanation_params,
)
run_job.dump()
# wait for explainer to finish
explainer_job_statuses = h2oai.wait_for_explainers(run_job.mli_key)
for job_status in explainer_job_statuses:
pprint.pprint(job_status.dump())
mli_key = job_status.mli_key
explainer_job_key = job_status.explainer_job_key
explainer_job = job_status.explainer_job
```
## Get Recipe Result
```
# get recipe result FORMATS/TYPES (representations of recipe output)
explainer_job.entity.can_explain
explainer_job.entity.explanation_scopes
for explanation in explainer_job.entity.explanations: pprint.pprint(explanation.dump())
# choose the most suitable format (if more than one) and get the result
BASE_URL = f"{address}/files/"
for explanation in explainer_job.entity.explanations:
for e_format in explanation.formats:
server_path: str = h2oai.get_explainer_result_url_path(
mli_key=mli_key,
explainer_job_key=explainer_job_key,
explanation_type=explanation.explanation_type,
explanation_format=e_format
)
print(f"Explanation {explanation.explanation_type}:\n {e_format}:\n {BASE_URL}{server_path}")
download_dir = "/tmp"
h2oai.download(server_path, download_dir)
!ls -l {download_dir}/explanation.zip
```
The URL above can be used to **download** the chosen recipe result representation. The explainer log can be downloaded from:
```
server_path = h2oai.get_explainer_run_log_url_path(
mli_key=mli_key,
explainer_job_key=explainer_job_key,
)
print(f"{BASE_URL}{server_path}")
```
# Ad-hoc Run of Custom Explainer Recipe
Running previously uploaded custom explainer.
```
# run custom explainer - use previously uploaded recipe ID
if uploaded_explainer_id:
explainer_id = uploaded_explainer_id
else:
# explainer has been uploaded before
explainers = h2oai.list_explainers(
time_series=False,
dai_model_key=MODEL_KEY,
experiment_types=None,
explanation_scopes=None,
keywords=None,
)
for e in explainers:
if e.name == BYOR_EXPLAINER_NAME:
explainer_id = e.id
print(f"Running CUSTOM explainer: {explainer_id}")
run_job = h2oai.run_explainers(
explainers=[
Explainer(explainer_id=explainer_id, explainer_params=None)
],
params=Client.build_common_dai_explainer_params(
target_col=target_col,
model_key=MODEL_KEY,
dataset_key=DATASET_KEY,
),
)
run_job.dump()
# wait for explainer to finish
explainer_job_statuses = h2oai.wait_for_explainers(run_job.mli_key)
for job_status in explainer_job_statuses:
pprint.pprint(job_status.dump())
server_path = h2oai.get_explainer_result_url_path(
mli_key=job_status.mli_key,
explainer_job_key=job_status.explainer_job_key,
explanation_type='global-feature-importance',
explanation_format='application/json'
)
print(f"{BASE_URL}{server_path}")
```
The URL above can be used to **download** the chosen **custom** recipe result representation.
# Explain (Model) with All Compatible or Selected Explainers
```
# get IDs of previously listed recipes
explainer_ids = [explainer.id for explainer in explainers]
explainer_ids
# run explainers: list of IDs OR empty list
# - empty explainer IDs list means "run all model COMPATIBLE explainers with default parameters")
print(f"All explainers:\n{explainer_ids}")
run_job: ExplainersRunJob = h2oai.run_explainers(
explainers=[],
params=Client.build_common_dai_explainer_params(
target_col=target_col,
model_key=MODEL_KEY,
dataset_key=DATASET_KEY,
),
)
run_job.dump()
# check interpretation job status (legacy RPC API)
i_job: InterpretationJob = h2oai.get_interpretation_job(run_job.mli_key)
# note per-explainer (subtask) ID and display name
i_job.dump()
# check particular sub-job status (existing RPC API reused)
h2oai.get_explainer_job_status(run_job.mli_key, run_job.explainer_job_keys[0]).dump()
# check sub-jobs statuses (existing RPC API reused)
job_statuses = h2oai.get_explainer_job_statuses(run_job.mli_key, run_job.explainer_job_keys)
for js in job_statuses:
pprint.pprint(js.dump())
# wait for ALL explainers to finish
explainer_statuses=h2oai.wait_for_explainers(run_job.mli_key)
explainer_statuses
# download explanation type in desired format
DOWNLOAD_DIR = f"/tmp/interpretation_run_{randint(0,1_000_000)}"
explainer_job_key = explainer_statuses[0].explainer_job_key
explanations = h2oai.list_explainer_results(explainer_job_key=explainer_job_key).explanations
# explanations
for explanation in explanations:
    # explanation's formats
for explanation_format in explanation.formats:
# format's URL
result_path: str = h2oai.get_explainer_result_url_path(
mli_key=run_job.mli_key,
explainer_job_key=explainer_job_key,
explanation_type=explanation.explanation_type,
explanation_format=explanation_format,
)
# where to download
EXPLANATION_DIR = f"{DOWNLOAD_DIR}/explanation_{randint(0,1_000_000)}"
os.makedirs(EXPLANATION_DIR, exist_ok=True)
# download
h2oai.download(result_path, EXPLANATION_DIR)
print(
f"Explanation {explanation.explanation_type}:\n"
f" {explanation_format}:\n"
f" {BASE_URL}{result_path}"
)
print(f"\nDownloaded explanations in {DOWNLOAD_DIR}:")
!ls -l {DOWNLOAD_DIR}
```
|
github_jupyter
|
# Sensitivity Analysis
```
import os
import itertools
import random
import pandas as pd
import numpy as np
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
import sys
sys.path.insert(0, '../utils')
import model_utils
import geoutils
import logging
import warnings
logging.getLogger().setLevel(logging.ERROR)
warnings.filterwarnings("ignore")
SEED = 42
%load_ext autoreload
%autoreload 2
```
## File Locations
```
lr_index, rf_index, svc_index = 7, 3, 3
output_dir = "../outputs/"
neg_samples_strs = ['10k', '30k', '50k']
neg_samples_dirs = [
output_dir + '10k_results/',
output_dir + '30k_results/',
output_dir + '50k_results/'
]
model_types = [
'logistic_regression',
'random_forest',
'linear_svc'
]
```
## Load Results
```
results_dict = model_utils.load_neg_sample_results(model_types, neg_samples_strs, neg_samples_dirs)
results_dict['logistic_regression']['10k_per_area'][0]['pixel_preds'][lr_index].head(3)
```
## Generate Sensitivity Analysis Matrix
```
lr_area_dict = {
'10k LR' : results_dict['logistic_regression']['10k_per_area'],
'30k LR' : results_dict['logistic_regression']['30k_per_area'],
'50k LR' : results_dict['logistic_regression']['50k_per_area'],
}
model_utils.generate_iou_matrix_per_area(
lr_area_dict, lr_index, model_utils.AREA_CODES, percent=0.20
)
rf_area_dict = {
'10k RF' : results_dict['random_forest']['10k_per_area'],
'30k RF' : results_dict['random_forest']['30k_per_area'],
'50k RF' : results_dict['random_forest']['50k_per_area'],
}
model_utils.generate_iou_matrix_per_area(
rf_area_dict, rf_index, model_utils.AREA_CODES, percent=0.20
)
svc_area_dict = {
'10k SVC' : results_dict['linear_svc']['10k_per_area'],
'30k SVC' : results_dict['linear_svc']['30k_per_area'],
'50k SVC' : results_dict['linear_svc']['50k_per_area'],
}
model_utils.generate_iou_matrix_per_area(
svc_area_dict, svc_index, model_utils.AREA_CODES, percent=0.20
)
```
## Sensitivity Analysis on Unseen Test Set
```
from tqdm import tqdm
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import (
MinMaxScaler,
StandardScaler
)
SEED = 42
```
### File Locations
```
version = '20200509'
data_dir = "../data/"
input_file = data_dir + '{}_dataset.csv'.format(version)
output_dir = "../outputs/sensitivity/"
tmp_dir = data_dir + 'tmp/'
images_dir = data_dir + 'images/'
indices_dir = data_dir + 'indices/'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
#!gsutil -q -m cp gs://immap-images/20200525/medellin_*.tif {images_dir}
#!gsutil -q -m cp gs://immap-indices/20200525/indices_medellin_*.tif {indices_dir}
#!gsutil -q -m cp gs://immap-images/20200518/cali_*.tif {images_dir}
#!gsutil -q -m cp gs://immap-indices/20200518/indices_cali_*.tif {indices_dir}
#!gsutil -q -m cp gs://immap-images/20200508/malambo_*.tif {images_dir}
#!gsutil -q -m cp gs://immap-indices/20200508/indices_malambo_*.tif {indices_dir}
```
### Load Data
```
raw_data = pd.read_csv(input_file).reset_index(drop=True)
print('Data dimensions: {}'.format(raw_data.shape))
raw_data.head(3)
```
### Check Hyperparameters of Best Model
```
print('Logistic Regression Parameters: {}'.format(
results_dict['logistic_regression']['30k']['labels'][lr_index]
))
print('Random Forest Parameters: {}'.format(
results_dict['random_forest']['30k']['labels'][rf_index]
))
```
### Instantiate Models
```
lr = LogisticRegression(penalty='l1', C=1.0)
rf = RandomForestClassifier(
n_estimators=800,
max_depth=12,
min_samples_split=15,
min_samples_leaf=2,
random_state=42
)
neg_samples_list = [10000, 30000, 50000]
models, model_strs = [lr, rf], ['LR', 'RF']
areas = ['medellin', 'cali', 'malambo']
area_dict = geoutils.get_filepaths(areas, images_dir, indices_dir)
```
### Run Model for 10k, 30k, and 50k Negative Samples
```
for num_neg_samples, neg_samples_str in zip(neg_samples_list, neg_samples_strs):
for model, model_str in zip(models, model_strs):
model, features = model_utils.train_model(model, raw_data, num_neg_samples, SEED)
for area in areas:
output = output_dir + '{}_{}_{}_{}.tif'.format(version, area, model_str, neg_samples_str)
geoutils.get_preds_windowing(
area=area,
area_dict=area_dict,
model=model,
tmp_dir=tmp_dir,
best_features=features,
output=output,
grid_blocks=9,
threshold=0
)
for file in os.listdir(output_dir):
if '.ipynb' not in file:
out_file = output_dir + file
!gsutil -q cp {out_file} gs://immap-results/probmaps/
```
## Test on Unseen Data
```
import geopandas as gpd
areas = ['medellin', 'malambo', 'cali']
data_dir = "../data/"
grid_dirs = [data_dir + 'grids/grid-' + area + '.gpkg' for area in areas]
grid_gpkgs = {area: gpd.read_file(file) for area, file in zip(areas, grid_dirs)}
grid_gpkgs['medellin'].head(3)
lr_area_dict = {
'10k LR' : {area: {'grid_preds' : [grid_gpkgs[area][['id', 'LR_10k_mean']]]} for area in areas},
'30k LR' : {area: {'grid_preds' : [grid_gpkgs[area][['id', 'LR_30k_mean']]]} for area in areas},
'50k LR' : {area: {'grid_preds' : [grid_gpkgs[area][['id', 'LR_50k_mean']]]} for area in areas},
}
model_utils.generate_iou_matrix_per_area(
lr_area_dict, 0, areas, percent=0.20, nrows=1, ncols=3, figsize=(8,2.5)
)
rf_area_dict = {
'10k RF' : {area: {'grid_preds' : [grid_gpkgs[area][['id', 'RF_10K_mean']]]} for area in areas},
'30k RF' : {area: {'grid_preds' : [grid_gpkgs[area][['id', 'RF_30K_mean']]]} for area in areas},
'50k RF' : {area: {'grid_preds' : [grid_gpkgs[area][['id', 'RF_50K_mean']]]} for area in areas},
}
model_utils.generate_iou_matrix_per_area(
rf_area_dict, 0, areas, percent=0.20, nrows=1, ncols=3, figsize=(8,2.5)
)
```
|
github_jupyter
|
## KF Basics - Part I
### Introduction
#### What is the need to describe belief in terms of PDF's?
This is because robot environments are stochastic. A robot's environment may contain unexpected things (say, a cow standing next to a Tesla), so a robot and its environment cannot be modelled deterministically (e.g. as a function of something like time t). In the real world, sensors are also error prone, so a measurement is better described by a set of values it can take, with a mean and variance. Hence, we always have to model beliefs around some mean and associated variance.
#### What is Expectation of a Random Variables?
Expectation is the probability-weighted average of the values a random variable can take:
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$
In the continous form,
$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$
```
import numpy as np
import random
x=[3,1,2]
p=[0.3,0.3,0.4]  # probabilities must sum to 1
E_x=np.sum(np.multiply(x,p))
print(E_x)
```
#### What is the advantage of representing the belief as a unimodal as opposed to multimodal?
A unimodal belief makes sense because we cannot act on a belief that places a car in two locations at once; a multimodal belief would be confusing and the information would not be directly useful.
### Variance, Covariance and Correlation
#### Variance
Variance is the spread of the data. The mean alone doesn't tell us much **about** the data; the variance tells the rest of the **story**, namely how spread out the data is.
$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$
```
x=np.random.randn(10)
np.var(x)
```
#### Covariance
This is for a multivariate distribution. For example, a robot in 2-D space can take values in both x and y. To describe them, a normal distribution with mean in both x and y is needed.
For a multivariate distribution, mean $\mu$ can be represented as a matrix,
$$
\mu = \begin{bmatrix}\mu_1\\\mu_2\\ \vdots \\\mu_n\end{bmatrix}
$$
Similarly, variance can also be represented.
But an important concept is that, just as every variable or dimension has a variance in its own values, there is also a measure of how two variables **vary together**. This measure of how two datasets are related to each other is the covariance, which is closely tied to **correlation**.
For example, as height increases weight also generally increases. These variables are correlated. They are positively correlated because as one variable gets larger so does the other.
We use a **covariance matrix** to denote covariances of a multivariate normal distribution:
$$
\Sigma = \begin{bmatrix}
\sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\
\sigma_{21} &\sigma_2^2 & \cdots & \sigma_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2
\end{bmatrix}
$$
**Diagonal** - variance of each individual variable.
**Off-Diagonal** - covariance between the ith and jth variables.
$$\begin{aligned}VAR(X) = \sigma_x^2 &= \frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2\\
COV(X, Y) = \sigma_{xy} &= \frac{1}{n}\sum_{i=1}^n(x_i-\mu_x)(y_i-\mu_y)\end{aligned}$$
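As a quick sanity check (a minimal sketch with made-up height/weight-style numbers), the population covariance formula above matches the off-diagonal entry returned by `np.cov` with `bias=True`:
```python
import numpy as np

heights = np.array([160, 170, 180, 190])
weights = np.array([55, 65, 70, 85])
cov_hw = np.mean((heights - heights.mean()) * (weights - weights.mean()))
print(cov_hw, np.cov(heights, weights, bias=True)[0, 1])  # both 118.75
```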
```
x=np.random.random((3,3))
np.cov(x)
```
Covariance treating the data as a **sample**, normalizing by $\frac{1}{N-1}$:
```
x_cor=np.random.rand(1,10)
y_cor=np.random.rand(1,10)
np.cov(x_cor,y_cor)
```
Covariance treating the data as the **population**, normalizing by $\frac{1}{N}$:
```
np.cov(x_cor,y_cor,bias=1)
```
### Gaussians
#### Central Limit Theorem
According to this theorem, the average of n independent, identically distributed random variables tends to follow a normal distribution as we increase the sample size (generally, for n >= 30).
```
import matplotlib.pyplot as plt
import random
a=np.zeros((100,))
for i in range(100):
x=[random.uniform(1,10) for _ in range(1000)]
a[i]=np.sum(x,axis=0)/1000
plt.hist(a)
```
#### Gaussian Distribution
A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:
$$
f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]
$$
Its support is $(-\infty, \infty)$.
This is just a function of the mean ($\mu$) and standard deviation ($\sigma$), and it is what gives the normal distribution its characteristic **bell curve**.
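As a quick check, the density formula above can be implemented directly and compared against `scipy.stats.norm.pdf` (a minimal sketch):
```python
import numpy as np
from scipy.stats import norm

def gaussian_pdf(x, mu, sigma):
    # direct implementation of the density above
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

print(gaussian_pdf(1.0, 0.0, 2.0), norm.pdf(1.0, 0.0, 2.0))  # both ~0.176
```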
```
import matplotlib.mlab as mlab
import math
import scipy.stats
mu = 0
variance = 5
sigma = math.sqrt(variance)
x = np.linspace(mu - 5*sigma, mu + 5*sigma, 100)
plt.plot(x,scipy.stats.norm.pdf(x, mu, sigma))
plt.show()
```
#### Why do we need Gaussian distributions?
In the real world it is difficult to work with a multimodal distribution, because we cannot place the robot's belief at two separate locations at once; this becomes confusing and, in practice, impossible to act on.
A Gaussian probability distribution lets us drive the robot with a single mode: a peak at the mean with some variance around it.
### Gaussian Properties
**Multiplication**
For the measurement update in a Bayes filter, the algorithm tells us to multiply the prior $P(X_t)$ and the measurement likelihood $P(Z_t \mid X_t)$ to calculate the posterior:
$$P(X \mid Z) = \frac{P(Z \mid X)P(X)}{P(Z)}$$
Here both terms in the numerator, $P(Z \mid X)$ and $P(X)$, are Gaussian, say $N(\mu_1, \sigma_1^2)$ and $N(\mu_2, \sigma_2^2)$ respectively. Their product is another (unnormalised) Gaussian.
The new mean is
$$\mu_\mathtt{new} = \frac{\mu_1 \sigma_2^2 + \mu_2 \sigma_1^2}{\sigma_1^2+\sigma_2^2}$$
and the new variance is
$$\sigma_\mathtt{new}^2 = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}$$
```
import matplotlib.mlab as mlab
import math
mu1 = 0
variance1 = 2
sigma = math.sqrt(variance1)
x1 = np.linspace(mu1 - 3*sigma, mu1 + 3*sigma, 100)
plt.plot(x1,scipy.stats.norm.pdf(x1, mu1, sigma),label='prior')
mu2 = 10
variance2 = 2
sigma = math.sqrt(variance2)
x2 = np.linspace(mu2 - 3*sigma, mu2 + 3*sigma, 100)
plt.plot(x2,scipy.stats.norm.pdf(x2, mu2, sigma),"g-",label='measurement')
mu_new=(mu1*variance2+mu2*variance1)/(variance1+variance2)
print("New mean is at: ",mu_new)
var_new=(variance1*variance2)/(variance1+variance2)
print("New variance is: ",var_new)
sigma = math.sqrt(var_new)
x3 = np.linspace(mu_new - 3*sigma, mu_new + 3*sigma, 100)
plt.plot(x3,scipy.stats.norm.pdf(x3, mu_new, sigma),label="posterior")  # pass the std dev, not the variance
plt.legend(loc='upper left')
plt.xlim(-10,20)
plt.show()
```
**Addition**
The motion step involves adding the motion uncertainty to the current belief (it has to obey the Law of Total Probability). For two Gaussians this amounts to simple arithmetic addition of their means and variances:
$$\begin{gathered}\mu_x = \mu_p + \mu_z \\
\sigma_x^2 = \sigma_z^2+\sigma_p^2\end{gathered}$$
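A minimal 1-D sketch of these two rules, written as the kind of update (multiply) and predict (add) helpers a Kalman filter uses; the function names here are just for illustration:
```python
def gaussian_multiply(mu1, var1, mu2, var2):
    # measurement update: product of two Gaussians (renormalised)
    mean = (mu1 * var2 + mu2 * var1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

def gaussian_add(mu1, var1, mu2, var2):
    # motion/predict step: sum of two independent Gaussian variables
    return mu1 + mu2, var1 + var2

print(gaussian_multiply(0, 2, 10, 2))  # (5.0, 1.0) -- matches the multiplication plot above
print(gaussian_add(5, 1, 5, 1))        # (10, 2)
```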
```
import matplotlib.mlab as mlab
import math
mu1 = 5
variance1 = 1
sigma = math.sqrt(variance1)
x1 = np.linspace(mu1 - 3*sigma, mu1 + 3*sigma, 100)
plt.plot(x1,scipy.stats.norm.pdf(x1, mu1, sigma),label='prior')
mu2 = 10
variance2 = 1
sigma = math.sqrt(variance2)
x2 = np.linspace(mu2 - 3*sigma, mu2 + 3*sigma, 100)
plt.plot(x2,scipy.stats.norm.pdf(x2, mu2, sigma),"g-",label='measurement')
mu_new=mu1+mu2
print("New mean is at: ",mu_new)
var_new=(variance1+variance2)
print("New variance is: ",var_new)
sigma = math.sqrt(var_new)
x3 = np.linspace(mu_new - 3*sigma, mu_new + 3*sigma, 100)
plt.plot(x3,scipy.stats.norm.pdf(x3, mu_new, sigma),label="posterior")  # pass the std dev, not the variance
plt.legend(loc='upper left')
plt.xlim(-10,20)
plt.show()
#Example from:
#https://scipython.com/blog/visualizing-the-bivariate-gaussian-distribution/
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
# Our 2-dimensional distribution will be over variables X and Y
N = 60
X = np.linspace(-3, 3, N)
Y = np.linspace(-3, 4, N)
X, Y = np.meshgrid(X, Y)
# Mean vector and covariance matrix
mu = np.array([0., 1.])
Sigma = np.array([[ 1. , -0.5], [-0.5, 1.5]])
# Pack X and Y into a single 3-dimensional array
pos = np.empty(X.shape + (2,))
pos[:, :, 0] = X
pos[:, :, 1] = Y
def multivariate_gaussian(pos, mu, Sigma):
"""Return the multivariate Gaussian distribution on array pos.
pos is an array constructed by packing the meshed arrays of variables
x_1, x_2, x_3, ..., x_k into its _last_ dimension.
"""
n = mu.shape[0]
Sigma_det = np.linalg.det(Sigma)
Sigma_inv = np.linalg.inv(Sigma)
N = np.sqrt((2*np.pi)**n * Sigma_det)
# This einsum call calculates (x-mu)T.Sigma-1.(x-mu) in a vectorized
# way across all the input variables.
fac = np.einsum('...k,kl,...l->...', pos-mu, Sigma_inv, pos-mu)
return np.exp(-fac / 2) / N
# The distribution on the variables X, Y packed into pos.
Z = multivariate_gaussian(pos, mu, Sigma)
# Create a surface plot and projected filled contour plot under it.
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(X, Y, Z, rstride=3, cstride=3, linewidth=1, antialiased=True,
cmap=cm.viridis)
cset = ax.contourf(X, Y, Z, zdir='z', offset=-0.15, cmap=cm.viridis)
# Adjust the limits, ticks and view angle
ax.set_zlim(-0.15,0.2)
ax.set_zticks(np.linspace(0,0.2,5))
ax.view_init(27, -21)
plt.show()
```
This is a 3D surface plot of the bivariate Gaussian, with the lower contour plot showing its 2D projection. The innermost ellipse corresponds to the highest peak, that is, the maximum probability density for a given (X, Y) value.
**NumPy einsum examples**
```
a = np.arange(25).reshape(5,5)
b = np.arange(5)
c = np.arange(6).reshape(2,3)
print(a)
print(b)
print(c)
# 'ii' sums over the repeated index i, i.e. the trace (sum of the diagonal)
np.einsum('ii', a)
# 'ii->i' takes the diagonal and returns it as a 1-D array
np.einsum('ii->i',a)
# 'ij,j' multiplies a (axes 'ij') and b (axis 'j') element-wise along j
# and sums over j, i.e. the matrix-vector product a @ b
np.einsum('ij,j',a, b)
A = np.arange(3).reshape(3,1)
B = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
C=np.multiply(A,B)
np.sum(C,axis=1)
D = np.array([0,1,2])
E = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
np.einsum('i,ij->i',D,E)
from scipy.stats import multivariate_normal
x, y = np.mgrid[-5:5:.1, -5:5:.1]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = x; pos[:, :, 1] = y
rv = multivariate_normal([0.5, -0.2], [[2.0, 0.9], [0.9, 0.5]])
plt.contourf(x, y, rv.pdf(pos))
```
### References:
1. Roger Labbe's [repo](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python) on Kalman Filters. (Majority of the examples in the notes are from this)
2. Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard and Dieter Fox, MIT Press.
3. Scipy [Documentation](https://scipython.com/blog/visualizing-the-bivariate-gaussian-distribution/)
|
github_jupyter
|
```
import json
import pathlib
import numpy as np
import sklearn
import yaml
from sklearn.preprocessing import normalize
from numba import jit
from utils import get_weight_path_in_current_system
def load_features() -> dict:
datasets = ("cifar10", "cifar100", "ag_news")
epochs = (500, 500, 100)
features = {}
for dataset, epoch in zip(datasets, epochs):
base_dir = pathlib.Path("../results/{}/analysis/save_unnormalised_feature/".format(dataset))
for config_path in base_dir.glob("**/config.yaml"):
with open(config_path) as f:
config = yaml.load(f, Loader=yaml.FullLoader)
seed = config["experiment"]["seed"]
if config["experiment"]["use_projection_head"]:
extractor = "Head"
else:
extractor = "Without Head"
self_sup_path = pathlib.Path(
get_weight_path_in_current_system(config["experiment"]["target_weight_file"])).parent
with open(self_sup_path / ".hydra" / "config.yaml") as f:
config = yaml.load(f, Loader=yaml.FullLoader)
num_mini_batches = config["experiment"]["batches"]
path = config_path.parent.parent
d = dataset.replace("100", "").replace("10", "")
y_train = np.load(path / "epoch_{}-{}.pt.label.train.npy".format(epoch, d))
X_train_0 = np.load(path / "epoch_{}-{}.pt.feature.0.train.npy".format(epoch, d))
X_train_1 = np.load(path / "epoch_{}-{}.pt.feature.1.train.npy".format(epoch, d))
d_name = dataset
if "augmentation_type" in config["dataset"]:
d_name = "{}-{}".format(dataset, config["dataset"]["augmentation_type"])
if d_name not in features:
features[d_name] = {}
if extractor not in features[d_name]:
features[d_name][extractor] = {}
if seed not in features[d_name][extractor]:
features[d_name][extractor][seed] = {}
features[d_name][extractor][seed][num_mini_batches] = (
X_train_0,
X_train_1,
y_train
)
return features
features = load_features()
@jit(nopython=True, parallel=True)
def compute_bound(c, y_train, X_train_0, X_train_1):
target_ids = y_train == c
X_train_0_c = X_train_0[target_ids]
X_train_1_c = X_train_1[target_ids]
cos_sim = X_train_0_c.dot(X_train_1_c.T)
n = np.sum(target_ids)
bounds_by_sample = np.abs(cos_sim - np.diag(cos_sim)).sum(axis=0) / (n - 1)
return bounds_by_sample
upper_bound_collision = {}
for dataset, f_d in features.items():
upper_bound_collision[dataset] = {}
for head_info, f_d_h in f_d.items():
upper_bound_collision[dataset][head_info] = {}
for seed, f_d_h_s in f_d_h.items():
negs = list(sorted(f_d_h_s))
for i, neg in enumerate(negs):
if neg not in upper_bound_collision[dataset][head_info]:
upper_bound_collision[dataset][head_info][neg] = []
X_train_0, X_train_1, y_train = f_d_h[seed][neg]
C = len(np.unique(y_train))
X_train_0 = sklearn.preprocessing.normalize(X_train_0, axis=1)
X_train_1 = sklearn.preprocessing.normalize(X_train_1, axis=1)
upper_bounds = []
for c in range(C):
upper_bounds.append(
compute_bound(c, y_train, X_train_0, X_train_1)
)
upper_bound = np.array(upper_bounds).flatten().mean()
print(dataset, head_info, seed, neg, upper_bound)
upper_bound_collision[dataset][head_info][neg].append(float(upper_bound))
with open("upper_bound_collision.json", "w") as f:
json.dump(upper_bound_collision, f)
```
|
github_jupyter
|
```
"""
created by Arj at 16:28 BST
#Section
Investigating the challenge notebook and running its code.
#Subsection
Running a simulated qubit with errors
"""
import matplotlib.pyplot as plt
import numpy as np
from qctrlvisualizer import get_qctrl_style, plot_controls
from qctrl import Qctrl
plt.style.use(get_qctrl_style())
qctrl = Qctrl()
def simulate_more_realistic_qubit(
duration=1, values=np.array([np.pi]), shots=1024, repetitions=1
):
# 1. Limits for drive amplitudes
assert np.amax(values) <= 1.0
assert np.amin(values) >= -1.0
max_drive_amplitude = 2 * np.pi * 20 # MHz
# 2. Dephasing error
dephasing_error = -2 * 2 * np.pi # MHz
# 3. Amplitude error
amplitude_i_error = 0.98
amplitude_q_error = 1.03
# 4. Control line bandwidth limit
cut_off_frequency = 2 * np.pi * 10 # MHz
resample_segment_count = 1000
# 5. SPAM error confusion matrix
confusion_matrix = np.array([[0.99, 0.01], [0.02, 0.98]])
# Lowering operator
b = np.array([[0, 1], [0, 0]])
# Number operator
n = np.diag([0, 1])
# Initial state
initial_state = np.array([[1], [0]])
with qctrl.create_graph() as graph:
# Apply 1. max Rabi rate.
values = values * max_drive_amplitude
# Apply 3. amplitude errors.
values_i = np.real(values) * amplitude_i_error
values_q = np.imag(values) * amplitude_q_error
values = values_i + 1j * values_q
# Apply 4. bandwidth limits
drive_unfiltered = qctrl.operations.pwc_signal(duration=duration, values=values)
drive_filtered = qctrl.operations.convolve_pwc(
pwc=drive_unfiltered,
kernel_integral=qctrl.operations.sinc_integral_function(cut_off_frequency),
)
drive = qctrl.operations.discretize_stf(
drive_filtered, duration=duration, segments_count=resample_segment_count
)
# Construct microwave drive
drive_term = qctrl.operations.pwc_operator_hermitian_part(
qctrl.operations.pwc_operator(signal=drive, operator=b)
)
# Construct 2. dephasing term.
dephasing_term = qctrl.operations.constant_pwc_operator(
operator=dephasing_error * n,
duration=duration,
)
# Construct Hamiltonian.
hamiltonian = qctrl.operations.pwc_sum(
[
drive_term,
dephasing_term,
]
)
# Solve Schrodinger's equation and get total unitary at the end
unitary = qctrl.operations.time_evolution_operators_pwc(
hamiltonian=hamiltonian,
sample_times=np.array([duration]),
)[-1]
unitary.name = "unitary"
# Repeat final unitary
repeated_unitary = np.eye(2)
for _ in range(repetitions):
repeated_unitary = repeated_unitary @ unitary
repeated_unitary.name = "repeated_unitary"
# Calculate final state.
state = repeated_unitary @ initial_state
# Calculate final populations.
populations = qctrl.operations.abs(state[:, 0]) ** 2
# Normalize populations
norm = qctrl.operations.sum(populations)
populations = populations / norm
populations.name = "populations"
# Evaluate graph.
result = qctrl.functions.calculate_graph(
graph=graph,
output_node_names=["unitary", "repeated_unitary", "populations"],
)
# Extract outputs.
unitary = result.output["unitary"]["value"]
repeated_unitary = result.output["repeated_unitary"]["value"]
populations = result.output["populations"]["value"]
# Sample projective measurements.
true_measurements = np.random.choice(2, size=shots, p=populations)
measurements = np.array(
[np.random.choice(2, p=confusion_matrix[m]) for m in true_measurements]
)
results = {"unitary": unitary, "measurements": measurements}
return results
max_rabi_rate = 20 * 2 * np.pi # MHz
not_duration = np.pi / (max_rabi_rate) # us
h_duration = np.pi / (2 * max_rabi_rate) # us
shots = 1024
values = np.array([1.0])
not_results = simulate_more_realistic_qubit(
duration=not_duration, values=values, shots=shots
)
h_results = simulate_more_realistic_qubit(
duration=h_duration, values=values, shots=shots
)
error_norm = (
lambda operate_a, operator_b: 1
- np.abs(np.trace((operate_a.conj().T @ operator_b)) / 2) ** 2
)
def estimate_probability_of_one(measurements):
size = len(measurements)
probability = np.mean(measurements)
standard_error = np.std(measurements) / np.sqrt(size)
return (probability, standard_error)
realised_not_gate = not_results["unitary"]
ideal_not_gate = np.array([[0, -1j], [-1j, 0]])
not_error = error_norm(realised_not_gate, ideal_not_gate)
realised_h_gate = h_results["unitary"]
ideal_h_gate = (1 / np.sqrt(2)) * np.array([[1, -1j], [-1j, 1]])
h_error = error_norm(realised_h_gate, ideal_h_gate)
not_measurements = not_results["measurements"]
h_measurements = h_results["measurements"]
not_probability, not_standard_error = estimate_probability_of_one(not_measurements)
h_probability, h_standard_error = estimate_probability_of_one(h_measurements)
print("Realised NOT Gate:")
print(realised_not_gate)
print("Ideal NOT Gate:")
print(ideal_not_gate)
print("NOT Gate Error:" + str(not_error))
print("NOT estimated probability of getting 1:" + str(not_probability))
print("NOT estimate standard error:" + str(not_standard_error) + "\n")
print("Realised H Gate:")
print(realised_h_gate)
print("Ideal H Gate:")
print(ideal_h_gate)
print("H Gate Error:" + str(h_error))
print("H estimated probability of getting 1:" + str(h_probability))
print("H estimate standard error:" + str(h_standard_error))
# Now using the CLHO
# Define standard matrices.
sigma_x = np.array([[0, 1], [1, 0]], dtype=np.complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=np.complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=np.complex)
# Define control parameters.
duration = 1e-6 # s
# Define standard deviation of the errors in the experimental results.
sigma = 0.01
# Create a random unknown operator.
rng = np.random.default_rng(seed=10)
phi = rng.uniform(-np.pi, np.pi)
u = rng.uniform(-1, 1)
Q_unknown = (
u * sigma_z + np.sqrt(1 - u ** 2) * (np.cos(phi) * sigma_x + np.sin(phi) * sigma_y)
) / 4
def run_experiments(omegas):
"""
Simulates a series of experiments where controls `omegas` attempt to apply
an X gate to a system. The result of each experiment is the infidelity plus
a Gaussian error.
In your actual implementation, this function would run the experiment with
the parameters passed. Note that the simulation handles multiple test points,
while your experimental implementation might need to queue the test point
requests to obtain one at a time from the apparatus.
"""
# Create the graph with the dynamics of the system.
with qctrl.create_graph() as graph:
signal = qctrl.operations.pwc_signal(values=omegas, duration=duration)
hamiltonian = qctrl.operations.pwc_operator(
signal=signal,
operator=0.5 * (sigma_x + Q_unknown),
)
qctrl.operations.infidelity_pwc(
hamiltonian=hamiltonian,
target_operator=qctrl.operations.target(operator=sigma_x),
name="infidelities",
)
# Run the simulation.
result = qctrl.functions.calculate_graph(
graph=graph,
output_node_names=["infidelities"],
)
# Add error to the measurement.
error_values = rng.normal(loc=0, scale=sigma, size=len(omegas))
infidelities = result.output["infidelities"]["value"] + error_values
# Return only infidelities between 0 and 1.
return np.clip(infidelities, 0, 1)
# Define the number of test points obtained per run.
test_point_count = 20
# Define number of segments in the control.
segment_count = 10
# Define parameters as a set of controls with piecewise constant segments.
parameter_set = (
np.pi
/ duration
* (np.linspace(-1, 1, test_point_count)[:, None])
* np.ones((test_point_count, segment_count))
)
# Obtain a set of initial experimental results.
experiment_results = run_experiments(parameter_set)
# Define initialization object for the automated closed-loop optimization.
length_scale_bound = qctrl.types.closed_loop_optimization_step.BoxConstraint(
lower_bound=1e-5,
upper_bound=1e5,
)
bound = qctrl.types.closed_loop_optimization_step.BoxConstraint(
lower_bound=-5 * np.pi / duration,
upper_bound=5 * np.pi / duration,
)
initializer = qctrl.types.closed_loop_optimization_step.GaussianProcessInitializer(
length_scale_bounds=[length_scale_bound] * segment_count,
bounds=[bound] * segment_count,
rng_seed=0,
)
# Define state object for the closed-loop optimization.
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
gaussian_process_initializer=initializer,
)
```
|
github_jupyter
|
```
import os
import pandas as pd
def load_data(path):
full_path = os.path.join(os.path.realpath('..'), path)
df = pd.read_csv(full_path, header=0, index_col=0)
print("Dataset has {} rows, {} columns.".format(*df.shape))
return df
df_train = load_data('data/raw/train.csv')
df_test = load_data('data/raw/test.csv')
```
## Data cleaning
```
# fill NaN with string "unknown"
df_train.fillna('unknown',inplace=True)
df_test.fillna('unknown',inplace=True)
```
## Create features
```
def create_features(df):
"Create features as seen in EDA"
    print("Dataframe has {} rows and {} columns.".format(*df.shape))
# Uppercase count
df['processed'] = df['comment_text'].str.split()
print("Counting uppercases...")
df['uppercase_count'] = df['processed'].apply(lambda x: sum(1 for t in x if t.isupper() and len(t)>2))
    print("Dataframe has {} rows and {} columns.".format(*df.shape))
# Bad words
print("Counting bad words...")
path = 'data/external/badwords.txt'
bad_words = []
f = open(os.path.join(os.path.realpath('..'), path), mode='rt', encoding='utf-8')
for line in f:
words = line.split(', ')
for word in words:
word = word.replace('\n', '')
bad_words.append(word)
f.close()
df['bad_words'] = df['processed'].apply(lambda x: sum(1 for t in x if t in bad_words))
    print("Dataframe has {} rows and {} columns.".format(*df.shape))
# Count of typos
from enchant.checker import SpellChecker
def typo_count(corpus):
"Count the number of errors found by pyenchant"
count = []
for row in corpus:
chkr = SpellChecker("en_US")
chkr.set_text(row)
i = 0
for err in chkr:
i += 1
count.append(i)
return count
print("Counting typos...")
df['typos'] = typo_count(df.comment_text)
    print("Dataframe has {} rows and {} columns.".format(*df.shape))
# Doc length
print("Counting length of each comment...")
df['length'] = [len(t) for t in df['processed']]
    print("Dataframe has {} rows and {} columns.".format(*df.shape))
# Drop processed (helper column)
df = df.drop(['processed'], axis=1)
    print("Dataframe has {} rows and {} columns.".format(*df.shape))
return df
df_train = create_features(df_train)
df_test = create_features(df_test)
```
## Spell check - TBC
```
import enchant
from enchant.checker import SpellChecker
def spellcheck(corpus):
"Spellcheck using pyenchant"
for row in corpus:
chkr = SpellChecker("en_US")
chkr.set_text(row)
for err in chkr:
sug = err.suggest()[0]
err.replace(sug)
print(err.word, sug)
row = chkr.get_text()
return corpus
spellcheck(df_train.comment_text[:5])
```
## Output
```
# save list to file
def save_list(lines, filename):
    # convert lines to a single blob of text
data = '\n'.join(lines)
# open file
file = open(filename, 'w')
# write text
file.write(data)
# close file
file.close()
def save_df(df, path):
full_path = os.path.join(os.path.realpath('..'), path)
df.to_csv(full_path, header=True, index=True)
print('Dataframe ({}, {}) saved as csv.'.format(*df.shape))
save_df(df_train, 'data/processed/train.csv')
save_df(df_test, 'data/processed/test.csv')
```
|
github_jupyter
|
## Sentiment Analysis with MXNet and Gluon
This tutorial will show how to train and test a Sentiment Analysis (Text Classification) model on SageMaker using MXNet and the Gluon API.
```
import os
import boto3
import sagemaker
from sagemaker.mxnet import MXNet
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
## Download training and test data
In this notebook, we will train the **Sentiment Analysis** model on [SST-2 dataset (Stanford Sentiment Treebank 2)](https://nlp.stanford.edu/sentiment/index.html). The dataset consists of movie reviews with one sentence per review. Classification involves detecting positive/negative reviews.
We will download the preprocessed version of this dataset from the links below. Each line in the dataset has space separated tokens, the first token being the label: 1 for positive and 0 for negative.
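For instance, a single line from these files could be split into a label and its tokens like this (the sentence shown is a made-up example, not taken from the dataset):
```python
line = "1 a deeply moving film ."    # hypothetical line: label, then tokens
parts = line.split()
label, tokens = int(parts[0]), parts[1:]
print(label, tokens)                 # 1 ['a', 'deeply', 'moving', 'film', '.']
```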
```
%%bash
mkdir data
curl https://raw.githubusercontent.com/saurabh3949/Text-Classification-Datasets/master/stsa.binary.phrases.train > data/train
curl https://raw.githubusercontent.com/saurabh3949/Text-Classification-Datasets/master/stsa.binary.test > data/test
```
## Uploading the data
We use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value `inputs` identifies the location -- we will use this later when we start the training job.
```
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-sentiment')
```
## Implement the training function
We need to provide a training script that can run on the SageMaker platform. The training scripts are essentially the same as one you would write for local training, except that you need to provide a `train` function. When SageMaker calls your function, it will pass in arguments that describe the training environment. Check the script below to see how this works.
The script here is a simplified implementation of ["Bag of Tricks for Efficient Text Classification"](https://arxiv.org/abs/1607.01759), as implemented by Facebook's [FastText](https://github.com/facebookresearch/fastText/) for text classification. The model maps each word to a vector and averages vectors of all the words in a sentence to form a hidden representation of the sentence, which is inputted to a softmax classification layer. Please refer to the paper for more details.
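As a rough illustration (this is not the contents of `sentiment.py`, which the next cell prints), the architecture described above -- embed each word, average the word vectors, and feed the sentence vector to a softmax classification layer -- could be sketched in Gluon as follows; all sizes are placeholder values:
```python
from mxnet import nd
from mxnet.gluon import nn

class MeanPoolTextClassifier(nn.HybridBlock):
    """Bag-of-embeddings classifier: embed tokens, average them, classify."""
    def __init__(self, vocab_size, embed_size, num_classes, **kwargs):
        super(MeanPoolTextClassifier, self).__init__(**kwargs)
        with self.name_scope():
            self.embedding = nn.Embedding(vocab_size, embed_size)
            self.output = nn.Dense(num_classes)

    def hybrid_forward(self, F, x):
        embeds = self.embedding(x)         # (batch, seq_len, embed_size)
        sentence = F.mean(embeds, axis=1)  # average the word vectors per sentence
        return self.output(sentence)       # logits for the softmax layer

net = MeanPoolTextClassifier(vocab_size=10000, embed_size=50, num_classes=2)
net.initialize()
print(net(nd.ones((8, 20))).shape)  # dummy batch: 8 sentences of 20 token ids -> (8, 2)
```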
```
!cat 'sentiment.py'
```
## Run the training script on SageMaker
The ```MXNet``` class allows us to run our training function on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on a single ml.c4.xlarge instance.
```
m = MXNet('sentiment.py',
role=role,
train_instance_count=1,
train_instance_type='ml.c4.xlarge',
framework_version='1.4.0',
py_version='py2',
distributions={'parameter_server': {'enabled': True}},
hyperparameters={'batch-size': 8,
'epochs': 2,
'learning-rate': 0.01,
'embedding-size': 50,
'log-interval': 1000})
```
After we've constructed our `MXNet` object, we can fit it using the data we uploaded to S3. SageMaker makes sure our data is available in the local filesystem, so our training script can simply read the data from disk.
```
m.fit(inputs)
```
As can be seen from the logs, we get > 80% accuracy on the test set using the above hyperparameters.
After training, we use the MXNet object to build and deploy an MXNetPredictor object. This creates a SageMaker endpoint that we can use to perform inference.
This allows us to perform inference on a JSON-encoded array of strings.
```
predictor = m.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')
```
The predictor runs inference on our input data and returns the predicted sentiment (1 for positive and 0 for negative).
```
data = ["this movie was extremely good .",
"the plot was very boring .",
"this film is so slick , superficial and trend-hoppy .",
"i just could not watch it till the end .",
"the movie was so enthralling !"]
response = predictor.predict(data)
print(response)
```
## Cleanup
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
predictor.delete_endpoint()
```
|
github_jupyter
|
```
from warnings import filterwarnings
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
from sklearn.linear_model import LinearRegression
%load_ext lab_black
%load_ext watermark
filterwarnings("ignore")
```
# A Simple Regression
From [Codes for Unit 1](https://www2.isye.gatech.edu/isye6420/supporting.html).
Associated lecture video: Unit 1 Lesson 4
You don't usually need to set inits in PyMC. The default method of generating inits is 'jitter+adapt_diag', which chooses them based on the model and input data while adding some randomness.
If you do want to set an initial value, pass a dictionary to the start parameter of pm.sample.
```python
inits = {
"alpha": np.array(0.0),
"beta": np.array(0.0)
}
trace = pm.sample(2000, start=inits)
```
```
X = np.array([1, 2, 3, 4, 5])
y = np.array([1, 3, 3, 3, 5])
x_bar = np.mean(X)
with pm.Model() as m:
# priors
alpha = pm.Normal("alpha", sigma=100)
beta = pm.Normal("beta", sigma=100)
# using precision for direct comparison with BUGS output
tau = pm.Gamma("tau", alpha=0.001, beta=0.001)
sigma = 1 / pm.math.sqrt(tau)
mu = alpha + beta * (X - x_bar)
likelihood = pm.Normal("likelihood", mu=mu, sigma=sigma, observed=y)
# start sampling
trace = pm.sample(
3000, # samples
chains=4,
tune=500,
init="jitter+adapt_diag",
random_seed=1,
cores=4, # parallel processing of chains
return_inferencedata=True, # return arviz inferencedata object
)
```
PyMC uses the tuning steps specified in the `pm.sample` call to adjust various parameters in the No-U-Turn Sampler [(NUTS) algorithm](https://arxiv.org/abs/1111.4246), which is a form of Hamiltonian Monte Carlo. BUGS also silently uses different types of tuning depending on the algorithm it [chooses](https://www.york.ac.uk/depts/maths/histstat/pml1/bayes/winbugsinfo/cowles_winbugs.pdf). The professor often burns some number of samples in his examples. Note that this burn-in is separate from the tuning phase for both programs!
For some more detail on tuning, see [this post](https://colcarroll.github.io/hmc_tuning_talk/).
```
# this burns the first 500 samples
trace_burned = trace.sel(draw=slice(500, None))
```
Arviz has a variety of functions to view the results of the model. One of the most useful is `az.summary`. Professor Vidakovic arbitrarily asks for the 95% credible set (also called the highest density interval), so we can specify `hdi_prob=0.95` to get that. This is the HPD, or minimum-width, credible set.
```
az.summary(trace_burned, hdi_prob=0.95)
```
You can also get the HDIs directly:
```
az.hdi(trace_burned, hdi_prob=0.95)["beta"].values
```
There are a variety of plots available. Commonly used to diagnose problems are the trace (see [When Traceplots go Bad](https://jpreszler.rbind.io/post/2019-09-28-bad-traceplots/)) and rank plots (see the Maybe it's time to let traceplots die section from [this post](https://statmodeling.stat.columbia.edu/2019/03/19/maybe-its-time-to-let-the-old-ways-die-or-we-broke-r-hat-so-now-we-have-to-fix-it/)).
```
az.plot_trace(trace_burned)
plt.show()
az.plot_rank(trace_burned)
plt.show()
```
There are many ways to manipulate Arviz [InferenceData](https://arviz-devs.github.io/arviz/api/generated/arviz.InferenceData.html) objects to calculate statistics after sampling is complete.
```
# alpha - beta * x.bar
intercept = (
trace_burned.posterior.alpha.mean() - trace_burned.posterior.beta.mean() * x_bar
)
intercept.values
```
OpenBugs results:
| | mean | sd | MC_error | val2.5pc | median | val97.5pc | start | sample |
|-------|--------|--------|----------|----------|--------|-----------|-------|--------|
| alpha | 2.995 | 0.5388 | 0.005863 | 1.947 | 3.008 | 4.015 | 1000 | 9001 |
| beta | 0.7963 | 0.3669 | 0.003795 | 0.08055 | 0.7936 | 1.526 | 1000 | 9001 |
| tau | 1.88 | 1.524 | 0.02414 | 0.1416 | 1.484 | 5.79 | 1000 | 9001 |
Sometimes you might want to do a sanity check with classical regression. If your Bayesian regression has noninformative priors, the results should be close.
```
reg = LinearRegression().fit(X.reshape(-1, 1), y)
# compare with intercept and beta from above
reg.intercept_, reg.coef_
%watermark --iversions -v
```
|
github_jupyter
|
# CSX46
## Class session 6: BFS
Objective: write and test a function that can compute single-vertex shortest paths in an unweighted simple graph. Compare to the results that we get using `igraph.Graph.get_shortest_paths()`.
We're going to need several packages for this notebook; let's import them first
```
import random
import igraph
import numpy as np
import math
import collections
```
Let's set the random number seed to 1337 using `random.seed`, so that we all start with the same graph structure, and then make a simple 10-vertex random (Barabasi-Albert model) graph. Fixing the seed keeps the graph reproducible (we want to know that the "hub" vertex will be vertex 2, and we will test your BFS function starting at that "hub" vertex).
Let's plot the graph, using `bbox=[0,0,200,200]` so it is not huge, and using `vertex_label=` to display the vertex IDs.
Let's look at an adjacency list representation of the graph, using the method `igraph.Graph.get_adjlist`
Let's look at the degrees of the vertices using the `igraph.Graph.degree` method and the `enumerate` built-in function and list comprehension:
OK, let's implement a function to compute shortest-path (geodesic path) distances to all vertices in the graph, starting at a single vertex `p_vertex`. We'll implement the
breadth-first search (BFS) algorithm in order to compute these geodesic path distances.
We'll start by implementing the queue data structure "by hand" with our own `read_ptr` and `write_ptr` exactly as described on page 320 of Newman's book. Newman says to use an "array" to implement the queue. As it turns out, Python's native `list` data type is internally implemented as a (resizeable) array, so we can just use a `list` here. We'll call our function `bfs_single_vertex_newman`.
```
# compute N, the number of vertices by calling len() on the VertexSet obtained from graph.vs()
# initialize "queue" array (length N, containing np.nan)
# initialize distances array (length N, containing np.nan)
# set "p_vertex" entry of distances array to be 0
# put p_vertex at position 0 of the queue array, then set read_ptr = 0 and write_ptr = 1
# while write_ptr is greater than read_ptr:
# obtain the vertex ID of the entry at index "read_ptr" in the queue array, as cur_vertex_num
# increment read_ptr
# get the distance to cur_vertex_num, from the "distances" array
# get the neighbors of vertex cur_vertex_num in the graph, using the igraph "neighbors" func
# for each vertex_neighbor in the array vertex_neighbors
# if the distances[vertex_neighbor] is nan:
# (1) set the distance to vertex_neighbor (in "distances" vector) to the distance to
# cur_vertex_num, plus one
# (2) add neighbor to the queue
# put vertex_neighbor at position write_ptr in the queue array
# increment write_ptr
# end-while
# return "distances"
```
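For reference, here is one way the pseudocode above could be turned into working code. Treat it as a hedged sketch rather than the graded solution; it relies on the `numpy` and `igraph` imports from the top of this notebook.
```python
def bfs_single_vertex_newman(graph, p_vertex):
    N = len(graph.vs())
    queue = np.full(N, np.nan)       # array-based queue, as in Newman
    distances = np.full(N, np.nan)   # geodesic distances from p_vertex
    distances[p_vertex] = 0
    queue[0] = p_vertex
    read_ptr, write_ptr = 0, 1
    while write_ptr > read_ptr:
        cur_vertex_num = int(queue[read_ptr])
        read_ptr += 1
        cur_dist = distances[cur_vertex_num]
        for vertex_neighbor in graph.neighbors(cur_vertex_num):
            if np.isnan(distances[vertex_neighbor]):
                distances[vertex_neighbor] = cur_dist + 1
                queue[write_ptr] = vertex_neighbor
                write_ptr += 1
    return distances
```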
Let's test out our implementation of `bfs_single_vertex_newman`, on vertex 0 of the graph. Do the results make sense?
Now let's re-implement the single-vertex BFS distance function using a convenient queue data structure, `collections.deque` (note, `deque` is actually a *double-ended* queue, so it is a bit more fancy than we need, but that's OK, we just will only be using its methods `popleft` and `append`)
```
# compute N, the number of vertices by calling len() on the VertexSet obtained from graph.vs()
# create a deque data structure called "queue" and initialize it to contain p_vertex
# while the queue is not empty:
# pop vertex_id off of the left of the queue
# get the vertex_id entry of the distances vector, call it "vertex_dist"
# for each neighbor_id of vertex_id:
# if the neighbor_id entry of the distances vector is nan:
# set the neighbor_id entry of the distances vector to vertex_dist + 1
# append neighbor_id to the queue
# return "distances"
```
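Again for reference, a hedged sketch of the deque-based version described by the pseudocode above:
```python
def bfs_single_vertex(graph, p_vertex):
    N = len(graph.vs())
    distances = np.full(N, np.nan)
    distances[p_vertex] = 0
    queue = collections.deque([p_vertex])
    while queue:
        vertex_id = queue.popleft()
        vertex_dist = distances[vertex_id]
        for neighbor_id in graph.neighbors(vertex_id):
            if np.isnan(distances[neighbor_id]):
                distances[neighbor_id] = vertex_dist + 1
                queue.append(neighbor_id)
    return distances
```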
Compare the code implementations of `bfs_single_vertex_newman` and `bfs_single_vertex`. Which is easier to read and understand?
Test out your function `bfs_single_vertex` on vertex 0. Do we get the same result as when we used `bfs_single_vertex_newman`?
If the graph was a lot bigger, how could we systematically check that the results of `bfs_single_vertex` (from vertex 0) are correctly calculated? We can use the `igraph.Graph.get_shortest_paths` method, and specify `v=0`. Let's look at the results of calling `get_shortest_paths` with `v=0`:
So, clearly, we need to calculate the length of the list of vertices in each entry of this ragged list. But the *path* length is one less than the length of the list of vertices, so we have to subtract one in order to get the correct path length. Now we are ready to compare our BFS-based single-vertex geodesic distances with the results from calling `igraph.Graph.get_shortest_paths`:
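A sketch of that comparison, assuming your graph object is named `g` and `bfs_single_vertex` is defined as above:
```python
igraph_dists = np.array([len(path) - 1 for path in g.get_shortest_paths(v=0)])
print(np.array_equal(igraph_dists, bfs_single_vertex(g, 0)))  # should print True
```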
Now let's implement a function that can compute a numpy.matrix of geodesic path distances for all pairs of vertices. The pythonic way to do this is probably to use the list-of-lists constructor for np.array, and to use list comprehension.
```
def sp_matrix(p_graph):
return FILL IN HERE
```
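One hedged way to fill in the list-comprehension version, assuming the `bfs_single_vertex` sketch above:
```python
def sp_matrix(p_graph):
    N = len(p_graph.vs())
    return np.array([bfs_single_vertex(p_graph, v) for v in range(N)])
```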
How about if we want to implement it using a plain old for loop?
```
def sp_matrix_forloop(p_graph):
N = FILL IN HERE
geo_dists = FILL IN HERE
FILL IN HERE
return geo_dists
```
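And a hedged sketch of the plain for-loop version:
```python
def sp_matrix_forloop(p_graph):
    N = len(p_graph.vs())
    geo_dists = np.zeros((N, N))
    for v in range(N):
        geo_dists[v, :] = bfs_single_vertex(p_graph, v)
    return geo_dists
```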
Let's run it on our little ten-vertex graph:
|
github_jupyter
|
# Robot Class
In this project, we'll be localizing a robot in a 2D grid world. The basis for simultaneous localization and mapping (SLAM) is to gather information from a robot's sensors and motions over time, and then use information about measurements and motion to re-construct a map of the world.
### Uncertainty
As you've learned, robot motion and sensors have some uncertainty associated with them. For example, imagine a car driving uphill and downhill; the speedometer reading will likely overestimate the speed of the car going uphill and underestimate the speed of the car going downhill because it cannot perfectly account for gravity. Similarly, we cannot perfectly predict the *motion* of a robot. A robot is likely to slightly overshoot or undershoot a target location.
In this notebook, we'll look at the `robot` class that is *partially* given to you for the upcoming SLAM notebook. First, we'll create a robot and move it around a 2D grid world. Then, **you'll be tasked with defining a `sense` function for this robot that allows it to sense landmarks in a given world**! It's important that you understand how this robot moves, senses, and how it keeps track of different landmarks that it sees in a 2D grid world, so that you can work with its movement and sensor data.
---
Before we start analyzing robot motion, let's load in our resources and define the `robot` class. You can see that this class initializes the robot's position and adds measures of uncertainty for motion. You'll also see a `sense()` function which is not yet implemented, and you will learn more about that later in this notebook.
```
# import some resources
import numpy as np
import matplotlib.pyplot as plt
import random
%matplotlib inline
# the robot class
class robot:
# --------
# init:
# creates a robot with the specified parameters and initializes
# the location (self.x, self.y) to the center of the world
#
def __init__(self, world_size = 100.0, measurement_range = 30.0,
motion_noise = 1.0, measurement_noise = 1.0):
self.measurement_noise = 0.0
self.world_size = world_size
self.measurement_range = measurement_range
self.x = world_size / 2.0
self.y = world_size / 2.0
self.motion_noise = motion_noise
self.measurement_noise = measurement_noise
self.landmarks = []
self.num_landmarks = 0
# returns a random float in [-1.0, 1.0)
def rand(self):
return random.random() * 2.0 - 1.0
# --------
# move: attempts to move robot by dx, dy. If outside world
# boundary, then the move does nothing and instead returns failure
#
def move(self, dx, dy):
x = self.x + dx + self.rand() * self.motion_noise
y = self.y + dy + self.rand() * self.motion_noise
if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size:
return False
else:
self.x = x
self.y = y
return True
# --------
# sense: returns x- and y- distances to landmarks within visibility range
# because not all landmarks may be in this range, the list of measurements
# is of variable length. Set measurement_range to -1 if you want all
# landmarks to be visible at all times
#
## TODO: complete the sense function
def sense(self):
''' This function does not take in any parameters, instead it references internal variables
(such as self.landmarks) to measure the distance between the robot and any landmarks
that the robot can see (that are within its measurement range).
This function returns a list of landmark indices, and the measured distances (dx, dy)
between the robot's position and said landmarks.
This function should account for measurement_noise and measurement_range.
One item in the returned list should be in the form: [landmark_index, dx, dy].
'''
measurements = []
## TODO: iterate through all of the landmarks in a world
for i, lm in enumerate(self.landmarks):
## 1. compute dx and dy, the distances between the robot and the landmark
dx = lm[0] - self.x
dy = lm[1] - self.y
## TODO: For each landmark
## 2. account for measurement noise by *adding* a noise component to dx and dy
dx += self.rand() * self.measurement_noise
dy += self.rand() * self.measurement_noise
## - The noise component should be a random value between [-1.0, 1.0)*measurement_noise
## - Feel free to use the function self.rand() to help calculate this noise component
## - It may help to reference the `move` function for noise calculation
## 3. If either of the distances, dx or dy, fall outside of the internal var, measurement_range
## then we cannot record them; if they do fall in the range, then add them to the measurements list
## as list.append([index, dx, dy]), this format is important for data creation done later
if abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range:
measurements.append([i, dx, dy])
## TODO: return the final, complete list of measurements
return measurements
# --------
# make_landmarks:
# make random landmarks located in the world
#
def make_landmarks(self, num_landmarks):
self.landmarks = []
for i in range(num_landmarks):
self.landmarks.append([round(random.random() * self.world_size),
round(random.random() * self.world_size)])
self.num_landmarks = num_landmarks
# called when print(robot) is called; prints the robot's location
def __repr__(self):
return 'Robot: [x=%.5f y=%.5f]' % (self.x, self.y)
```
## Define a world and a robot
Next, let's instantiate a robot object. As you can see in `__init__` above, the robot class takes in a number of parameters including a world size and some values that indicate the sensing and movement capabilities of the robot.
In the next example, we define a small 10x10 square world, a measurement range that is half that of the world and small values for motion and measurement noise. These values will typically be about 10 times larger, but we just want to demonstrate this behavior on a small scale. You are also free to change these values and note what happens as your robot moves!
```
world_size = 10.0 # size of world (square)
measurement_range = 5.0 # range at which we can sense landmarks
motion_noise = 0.2 # noise in robot motion
measurement_noise = 0.2 # noise in the measurements
# instantiate a robot, r
r = robot(world_size, measurement_range, motion_noise, measurement_noise)
# print out the location of r
print(r)
```
## Visualizing the World
In the given example, we can see/print out that the robot is in the middle of the 10x10 world at (x, y) = (5.0, 5.0), which is exactly what we expect!
However, it's kind of hard to imagine this robot in the center of a world without visualizing the grid itself, so in the next cell we provide a helper visualization function, `display_world`, that will display a grid world in a plot and draw a red `o` at the location of our robot, `r`. The details of how this function works can be found in the `helpers.py` file in the home directory; you do not have to change anything in this `helpers.py` file.
```
# import helper function
from helpers import display_world
# define figure size
plt.rcParams["figure.figsize"] = (5,5)
# call display_world and display the robot in it's grid world
print(r)
display_world(int(world_size), [r.x, r.y])
```
## Movement
Now you can really picture where the robot is in the world! Next, let's call the robot's `move` function. We'll ask it to move some distance `(dx, dy)` and we'll see that this motion is not perfect by the placement of our robot `o` and by the printed out position of `r`.
Try changing the values of `dx` and `dy` and/or running this cell multiple times; see how the robot moves and how the uncertainty in robot motion accumulates over multiple movements.
#### For a `dx` = 1, does the robot move *exactly* one spot to the right? What about `dx` = -1? What happens if you try to move the robot past the boundaries of the world?
```
# choose values of dx and dy (negative works, too)
dx = 1
dy = 2
r.move(dx, dy)
# print out the exact location
print(r)
# display the world after movement, note that this is the same call as before
# the robot tracks its own movement
display_world(int(world_size), [r.x, r.y])
```
## Landmarks
Next, let's create landmarks, which are measurable features in the map. You can think of landmarks as things like notable buildings, or something smaller such as a tree, rock, or other feature.
The robot class has a function `make_landmarks` which randomly generates locations for the specified number of landmarks. Try changing `num_landmarks` or running this cell multiple times to see where these landmarks appear. We have to pass these locations as a third argument to the `display_world` function, and the list of landmark locations is accessed via `r.landmarks`, similar to how we access the robot's position.
Each landmark is displayed as a purple `x` in the grid world, and we also print out the exact `[x, y]` locations of these landmarks at the end of this cell.
```
# create any number of landmarks
num_landmarks = 3
r.make_landmarks(num_landmarks)
# print out our robot's exact location
print(r)
# display the world including these landmarks
display_world(int(world_size), [r.x, r.y], r.landmarks)
# print the locations of the landmarks
print('Landmark locations [x,y]: ', r.landmarks)
```
## Sense
Once we have some landmarks to sense, we need to be able to tell our robot to *try* to sense how far they are away from it. It will be up to you to code the `sense` function in our robot class.
The `sense` function uses only internal class parameters and returns a list of the measured/sensed x and y distances to the landmarks it senses within the specified `measurement_range`.
### TODO: Implement the `sense` function
Follow the `##TODO's` in the class code above to complete the `sense` function for the robot class. Once you have tested out your code, please **copy your complete `sense` code to the `robot_class.py` file in the home directory**. By placing this complete code in the `robot_class` Python file, we will be able to reference this class in a later notebook.
The measurements have the format `[i, dx, dy]`, where `i` is the landmark index (0, 1, 2, ...) and `dx` and `dy` are the measured distances between the robot's location (x, y) and the landmark's location (x, y). These distances will not be perfect, since our sense function has some associated `measurement_noise`.
---
In the example in the following cell, we have given our robot a range of `5.0`, so any landmarks within that range of our robot's location should appear in a list of measurements. Not all landmarks are guaranteed to be in our visibility range, so this list will be variable in length.
*Note: the robot's location is often called the **pose** or `[Pxi, Pyi]` and the landmark locations are often written as `[Lxi, Lyi]`. You'll see this notation in the next notebook.*
```
# try to sense any surrounding landmarks
measurements = r.sense()
# this will print out an empty list if `sense` has not been implemented
print(measurements)
```
**Refer back to the grid map above. Do these measurements make sense to you? Are all the landmarks captured in this list (why/why not)?**
---
## Data
#### Putting it all together
To perform SLAM, we'll collect a series of robot sensor measurements and motions, in that order, over a defined period of time. Then we'll use only this data to re-construct the map of the world with the robot and landmark locations. You can think of SLAM as performing what we've done in this notebook, only backwards. Instead of defining a world and robot and creating movement and sensor data, it will be up to you to use movement and sensor measurements to reconstruct the world!
In the next notebook, you'll see this list of movements and measurements (which you'll use to re-construct the world) listed in a structure called `data`. This is an array that holds sensor measurements and movements in a specific order, which will be useful to call upon when you have to extract this data and form constraint matrices and vectors.
`data` is constructed over a series of time steps as follows:
```
data = []
# after a robot first senses, then moves (one time step)
# that data is appended like so:
data.append([measurements, [dx, dy]])
# for our example movement and measurement
print(data)
# in this example, we have only created one time step (0)
time_step = 0
# so you can access robot measurements:
print('Measurements: ', data[time_step][0])
# and its motion for a given time step:
print('Motion: ', data[time_step][1])
```
### Final robot class
Before moving on to the last notebook in this series, please make sure that you have copied your final, completed `sense` function into the `robot_class.py` file in the home directory. We will be using this file in the final implementation of slam!
|
github_jupyter
|
## Classwork
Is the process $(X_n)$ a martingale with respect to the filtration $\mathcal{F}_n$?
1. $z_1,z_2,\ldots,z_n$ are independent with $z_i\sim N(0,49)$, and $X_n=\sum_{i=1}^n z_i$. Filtration: $\mathcal{F}_n=\sigma(z_1,z_2,\ldots,z_n);$
2. $z_1,z_2,\ldots,z_n$ are independent with $z_i\sim U[0,1]$, and $X_n=\sum_{i=1}^n z_i$. Filtration: $\mathcal{F}_n=\sigma(z_1,z_2,\ldots,z_n);$
3. There is a deck of 52 cards with 4 suits. I turn the cards over one by one and see each card I reveal. Let $X_n$ be the fraction of aces in the remaining deck after $n$ cards have been revealed, and let $\mathcal{F}_n$ mean that I know the cards revealed so far. Let us consider which values $X_0$ and $X_{51}$ can take.
$X_0=\dfrac4{52}.$
After the 51st card has been revealed, $X_{51}$ equals either 1 (the last card is an ace) or 0 (the last card is not an ace). The probability that the last card is an ace equals $\dfrac4{52}$, since there are 4 aces among the 52 cards.
| Outcome | Not an ace | Ace |
|----------|----------------|-------------|
| $X_{51}$ | $0$ | $1$ |
| $p$ |$\dfrac{48}{52}$|$\dfrac4{52}$|
d) How many elements are there in $\mathcal{F_1}$ and $\mathcal{F_2}$? Which is larger: the number of elementary particles in the Universe or the number of elements in $\mathcal{F_2}$?
### Solution:
In each case we need to check the two conditions from the definition of a martingale.
**a) First case:**
Condition 1: I know $z_1,z_2,\ldots,z_n$, and since $X_n=\sum_{i=1}^n z_i$, I know $X_n$.
Condition 2: $E(X_{n+1}|\mathcal{F}_n) = E(z_1+z_2+\ldots+z_{n+1}|z_1,z_2,\ldots,z_n) =$ (I know $z_1,z_2,\ldots,z_n$, so they can be pulled out) $= z_1+z_2+\ldots+z_n + E(z_{n+1}|z_1,z_2,\ldots,z_n) = z_1+z_2+\ldots+z_n+E (z_{n+1})=z_1+z_2+\ldots+z_n=X_n.$
Notes on condition 2: $E (z_{n+1}) = 0$, since $z_i \sim N(0,49).$
$E (z_{n+1}|z_1,z_2,\ldots,z_n)=E (z_{n+1})$, since the random variables $z_1,z_2,\ldots,z_{n+1}$ are independent.
Both conditions hold, so the process $(X_n)$ is a martingale with respect to the filtration $\mathcal{F}_n.$
**b) Second case:**
Condition 1: I know $z_1,z_2,\ldots,z_n$, and since $X_n=\sum_{i=1}^n z_i$, I know $X_n.$
Condition 2: $E (X_{n+1}|\mathcal{F}_n)=E (z_1+z_2+\ldots+z_{n+1}|z_1,z_2,\ldots,z_n) =$ (I know $z_1,z_2,\ldots,z_n$, so they can be pulled out) $= z_1+z_2+\ldots+z_n+E (z_{n+1}|z_1,z_2,\ldots,z_n) = z_1+z_2+\ldots+z_n+E (z_{n+1}) = z_1+z_2+\ldots+z_n+\dfrac{0+1}{2}=X_n+\dfrac12 \ne X_n.$
Condition 2 fails, so in this case the process $(X_n)$ is not a martingale.
**c) Third case:**
Condition 1: satisfied, because if I see the revealed cards, I can compute the fraction of aces among the unrevealed ones, i.e. I can compute $X_n.$
Condition 2:
Let us predict the fraction of aces once the next card is revealed: $E (X_{n+1}|\mathcal{F}_n).$
Currently: $n$ cards are revealed and $52-n$ are face down.
Fraction of aces among the face-down cards: $X_n.$
Number of face-down aces: $X_n(52-n).$
The probability that the $(n+1)$-th revealed card is an ace equals the fraction of aces among the face-down cards, i.e. $X_n$. If that card is an ace, then after it is revealed the fraction of aces becomes $X_{n+1}=\dfrac{(52-n)X_n-1}{51-n}$. If the revealed card is not an ace, then $X_{n+1}=\dfrac{(52-n)X_n}{51-n}$. The table below lists the ace fractions and the probabilities of the outcomes.
| Outcome | Ace | Not an ace |
|---------|---------------------------|-------------------------|
|$X_{n+1}$|$\dfrac{(52-n)X_n-1}{51-n}$|$\dfrac{(52-n)X_n}{51-n}$|
| $p$ | $X_n$ | $1-X_n$ |
$E (X_{n+1}|\mathcal{F}_n) = X_n\dfrac{(52-n)X_n-1}{51-n}+(1-X_n)\dfrac{(52-n)X_n}{51-n} = \dfrac{52X_n^2-nX_n^2-X_n+52X_n-52X_n^2-nX_n+nX_n^2}{51-n}=\dfrac{51X_n-nX_n}{51-n}=X_n$
Condition 2 holds.
Both conditions hold, so the process $(X_n)$ is a martingale with respect to the filtration $\mathcal{F}_n.$
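As an empirical sanity check of the third case (not part of the original notes), the short simulation below shuffles a deck many times and compares the average ace fraction before and after one more card is revealed. Since a martingale satisfies $E(X_{n+1})=E(X_n)$ by the tower property, the two averages should be close; the function name and simulation size are arbitrary choices.
```python
import random

def ace_fraction_check(n_open, n_sim=100_000):
    deck = [1] * 4 + [0] * 48  # 1 = ace, 0 = not an ace
    mean_next, mean_now = 0.0, 0.0
    for _ in range(n_sim):
        random.shuffle(deck)
        remaining = deck[n_open:]  # face-down cards after n_open reveals
        mean_now += sum(remaining) / len(remaining)            # X_n
        mean_next += sum(remaining[1:]) / len(remaining[1:])   # X_{n+1}
    return mean_next / n_sim, mean_now / n_sim

print(ace_fraction_check(10))  # both averages should be close to 4/52
```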
**d) Last part**
$\mathcal{F_1}$ is built from 52 elementary events (for example, card #1 is the queen of $\clubsuit$, card #2 is the ace of $\spadesuit$, and so on). Each event is either included in a set or not, so $card\mathcal{F_1}=2^{52}.$
$card\mathcal{F_2}=2^{C_{52}^1C_{51}^1}=2^{52\cdot51} \approx (4 \cdot 10^{15})^{51}=4^{51} \cdot 10^{15\cdot51}$
The number of elementary particles in the Universe is $\approx 10^{81}$
$4^{51}\cdot 10^{15\cdot51} \gg 10^{81}$
**Exercise**
$z_1,z_2,\ldots,z_n$ are independent with $z_i\sim U[0,1]$, and $X_n=\sum_{i=1}^n z_i.$ Filtration: $\mathcal{F}_n=\sigma(X_1,X_2,\ldots,X_n).$ Consider the process $M_n=a^{X_n}$. Find a number $a$ such that $(M_n)$ is a martingale with respect to the filtration $\mathcal{F}_n$.
**Solution**
A trivial case is $a = 1$: then $(M_n) = (1,1,1,1,\ldots)$, so $E (M_{n+1}|\mathcal{F}_n)=1=M_n$, and $(M_n)$ is a martingale.
Now let us try to find $a \ne 1$. To do this, we check the two conditions from the definition of a martingale.
Condition 1: $M_n$ is measurable with respect to $\mathcal{F}_n$ for any known $a.$
Condition 2: $E (M_{n+1}|\mathcal{F}_n)=E (a^{X_{n+1}}|\mathcal{F}_n)=E (a^{z_1+z_2+\ldots+z_{n+1}}|\mathcal{F}_n) =$ (I know $z_1,z_2,\ldots,z_n$, so they can be pulled out) $= a^{z_1+z_2+\ldots+z_n}E (a^{z_{n+1}}|\mathcal{F}_n)=a^{X_n}E (a^{z_{n+1}}|\mathcal{F}_n) =$ (since the random variable $z_{n+1}$ is independent of $z_1,z_2,\ldots,z_n$) $= M_nE (a^{z_{n+1}})$, which by the definition of a martingale must equal $M_n.$
Hence
$E (a^{z_{n+1}})=1$
$E (a^{z_{n+1}})=\int\limits_0^1 a^t\,dt=1$
$\int\limits_0^1 e^{t\cdot\ln a}\,dt=\left. \dfrac{e^{t\cdot\ln a}}{\ln a}\right|_0^1=\dfrac{e^{\ln a}}{\ln a}-\dfrac1{\ln a}=\dfrac{e^{\ln a}-1}{\ln a}=\dfrac{a-1}{\ln a} = 1$ $\Rightarrow$
$\Rightarrow a-1=\ln a$
This equation has the unique solution $a = 1.$
We conclude:
The process $M_n=a^{X_n}$ is a martingale with respect to the filtration $\mathcal{F}_n$ only when $a = 1.$
# Martingales (continued). Stopping time.{#12 Martingals. Stopping time}
## Martingales (continued)
### Problem
### Exercise:
It is known that $M_t$ is a martingale. What are $E(M_{t+1}|\mathcal{F_{t+1}})$, $E(M_{t+2}|\mathcal{F_t})$, and also $E(M_{t+k}|\mathcal{F_t})$ (for $k \geqslant 0$)?
### Solution:
1) By the definition of a martingale ($M_{t+1}$ is $\mathcal{F_{t+1}}$-measurable): $E(M_{t+1}|\mathcal{F_{t+1}})=M_{t+1}.$
**Important property:** $\mathcal{F_t} \subseteq \mathcal{F_{t+1}}.$
2) $E(M_{t+2}|\mathcal{F_t})=E[E(M_{t+2}|\mathcal{F_{t+1}})|\mathcal{F_t}]=$ (by the law of iterated expectations) $=E(M_{t+1}|\mathcal{F_t})=M_t.$
3) $E(M_{t+k}|\mathcal{F_t})=M_t$ for $k \geqslant 0.$
## Stopping time
**Definition:**
A random variable $T$ is called a stopping time with respect to the filtration $\mathcal{F_t}$ if:
1) Intuitively: when $T$ occurs, we can tell that it has occurred;
2) Formally:
2.1) $T$ takes values in $\{0,1,2,3,\ldots\} \cup \{+\infty\}$;
2.2) The event $(T=k)$ belongs to $\mathcal{F_k}$ for every $k$.
### Problems:
#### Problem 1:
Let $X_t$ be a symmetric random walk,
$X_t=D_1+D_2+\ldots+D_t$, where the $D_i$ are independent and take the values $\pm 1$ with equal probability.
Filtration:
$\mathcal{F_t}=\sigma(X_1,X_2,\ldots,X_t)$ (we observe the values of the random walk).
Consider the random variables:
$T_1=\min\{t\mid X_t=100\}$,
$T_2=T_1+1$,
$T_3=T_1-1.$
Which of these are stopping times?
#### Solution:
$T_1=\min\{t\mid X_t=100\}$ — yes, a stopping time. At the moment it occurs, we can say for certain whether or not it has occurred.
$T_2=T_1+1$ — yes, a stopping time. If $T_1$ has occurred, then we know that $T_2$ occurs at the next step.
$T_3=T_1-1$ — no: to recognize the moment $T_3$ we would need to know one step ahead that the walk is about to hit 100.
Intuitive explanation: a grandmother tells her grandson, "Come visit me when the moment $T$ arrives," while the grandson only sees the values of $X$. Will the grandson arrive at his grandmother's on time?
Answer: $T_1$, $T_2.$
#### Problem 2:
We draw cards from a deck one at a time and see the values drawn.
$T_1$ — the draw of the second ace — is a stopping time,
$T_1/2$ is not a stopping time.
## Stopped process
**Definition:**
Let $X_t$ be a stochastic process and $T$ a stopping time.
The process $Y_t=X_{\min\{t,T\}}$ is called the stopped process of $X_t$.
### Examples:
#### Example 1:
Let $X_t$ be a symmetric random walk,
$\tau=\min\{t\mid X_t=20\}.$
Draw two trajectories of $X_t$ and the corresponding trajectories of $Y_t=X_{\min\{t,\tau\}}.$

When $\min\{t,\tau\}=t$, i.e. $t \leqslant \tau$, we have $Y_t = X_t$;
when $t > \tau$, the stopped process stays frozen at the stopping level: $Y_t = X_\tau.$

|
github_jupyter
|
# 2.3 Least Squares and Nearest Neighbors
### 2.3.3 From Least Squares to Nearest Neighbors
1. Generates 10 means $m_k$ from a bivariate Gaussian distribution for each color:
- $N((1, 0)^T, \textbf{I})$ for <span style="color: blue">BLUE</span>
- $N((0, 1)^T, \textbf{I})$ for <span style="color: orange">ORANGE</span>
2. For each color generates 100 observations as following:
- For each observation it picks $m_k$ at random with probability 1/10.
- Then generates a $N(m_k,\textbf{I}/5)$
```
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
sample_size = 100
def generate_data(size, mean):
identity = np.identity(2)
m = np.random.multivariate_normal(mean, identity, 10)
return np.array([
np.random.multivariate_normal(random.choice(m), identity / 5)
for _ in range(size)
])
def plot_data(orange_data, blue_data):
axes.plot(orange_data[:, 0], orange_data[:, 1], 'o', color='orange')
axes.plot(blue_data[:, 0], blue_data[:, 1], 'o', color='blue')
blue_data = generate_data(sample_size, [1, 0])
orange_data = generate_data(sample_size, [0, 1])
data_x = np.r_[blue_data, orange_data]
data_y = np.r_[np.zeros(sample_size), np.ones(sample_size)]
# plotting
fig = plt.figure(figsize = (8, 8))
axes = fig.add_subplot(1, 1, 1)
plot_data(orange_data, blue_data)
plt.show()
```
### 2.3.1 Linear Models and Least Squares
$$\hat{Y} = \hat{\beta_0} + \sum_{j=1}^{p} X_j\hat{\beta_j}$$
where $\hat{\beta_0}$ is the intercept, also known as the *bias*. It is convenient to include the constant variable 1 in $X$ and $\hat{\beta_0}$ in the vector of coefficients $\hat{\beta}$, and then write:
$$\hat{Y} = X^T\hat{\beta} $$
#### Residual sum of squares
How to fit the linear model to a set of training data? Pick the coefficients $\beta$ to minimize the *residual sum of squares*:
$$RSS(\beta) = \sum_{i=1}^{N} (y_i - x_i^T\beta) ^ 2 = (\textbf{y} - \textbf{X}\beta)^T (\textbf{y} - \textbf{X}\beta)$$
where $\textbf{X}$ is an $N \times p$ matrix with each row an input vector, and $\textbf{y}$ is an $N$-vector of the outputs in the training set. Differentiating w.r.t. $\beta$ we get the normal equations:
$$\mathbf{X}^T(\mathbf{y} - \mathbf{X}\beta) = 0$$
If $\mathbf{X}^T\mathbf{X}$ is nonsingular, then the unique solution is given by:
$$\hat{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$
```
class LinearRegression:
def fit(self, X, y):
X = np.c_[np.ones((X.shape[0], 1)), X]
self.beta = np.linalg.inv(X.T @ X) @ X.T @ y
return self
def predict(self, x):
return np.dot(self.beta, np.r_[1, x])
model = LinearRegression().fit(data_x, data_y)
print("beta = ", model.beta)
```
#### Example of the linear model in a classification context
The fitted values $\hat{Y}$ are converted to a fitted class variable $\hat{G}$ according to the rule:
$$
\begin{equation}
\hat{G} = \begin{cases}
\text{ORANGE} & \text{ if } \hat{Y} \gt 0.5 \\
\text{BLUE } & \text{ if } \hat{Y} \leq 0.5
\end{cases}
\end{equation}
$$
```
from itertools import filterfalse, product
def plot_grid(orange_grid, blue_grid):
axes.plot(orange_grid[:, 0], orange_grid[:, 1], '.', zorder = 0.001,
color='orange', alpha = 0.3, scalex = False, scaley = False)
axes.plot(blue_grid[:, 0], blue_grid[:, 1], '.', zorder = 0.001,
color='blue', alpha = 0.3, scalex = False, scaley = False)
plot_xlim = axes.get_xlim()
plot_ylim = axes.get_ylim()
grid = np.array([*product(np.linspace(*plot_xlim, 50), np.linspace(*plot_ylim, 50))])
is_orange = lambda x: model.predict(x) > 0.5
orange_grid = np.array([*filter(is_orange, grid)])
blue_grid = np.array([*filterfalse(is_orange, grid)])
axes.clear()
axes.set_title("Linear Regression of 0/1 Response")
plot_data(orange_data, blue_data)
plot_grid(orange_grid, blue_grid)
find_y = lambda x: (0.5 - model.beta[0] - x * model.beta[1]) / model.beta[2]
axes.plot(plot_xlim, [*map(find_y, plot_xlim)], color = 'black',
scalex = False, scaley = False)
fig
```
### 2.3.2 Nearest-Neighbor Methods
$$\hat{Y}(x) = \frac{1}{k} \sum_{x_i \in N_k(x)} y_i$$
where $N_k(x)$ is the neighborhood of $x$ defined by the $k$ closest points $x_i$ in the training sample.
```
class KNeighborsRegressor:
def __init__(self, k):
self._k = k
def fit(self, X, y):
self._X = X
self._y = y
return self
def predict(self, x):
X, y, k = self._X, self._y, self._k
distances = ((X - x) ** 2).sum(axis=1)
return np.mean(y[distances.argpartition(k)[:k]])
def plot_k_nearest_neighbors(k):
model = KNeighborsRegressor(k).fit(data_x, data_y)
is_orange = lambda x: model.predict(x) > 0.5
orange_grid = np.array([*filter(is_orange, grid)])
blue_grid = np.array([*filterfalse(is_orange, grid)])
axes.clear()
axes.set_title(str(k) + "-Nearest Neighbor Classifier")
plot_data(orange_data, blue_data)
plot_grid(orange_grid, blue_grid)
plot_k_nearest_neighbors(1)
fig
```
It appears that k-nearest-neighbors has a single parameter (*k*); however, the effective number of parameters is N/k, which is generally bigger than the p parameters in least-squares fits. **Note:** if the neighborhoods were nonoverlapping, there would be N/k neighborhoods and we would fit one parameter (a mean) in each neighborhood.
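As a quick numeric illustration of that comparison, using the `sample_size` defined earlier in this notebook:
```python
# effective number of parameters for k-NN vs. the p = 3 coefficients of the linear fit
N = 2 * sample_size  # 200 training points (100 per class)
for k in (1, 15):
    print("k = %d: effective parameters ~ %.1f" % (k, N / k))
print("least squares: p = 3 (intercept + two slopes)")
```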
```
plot_k_nearest_neighbors(15)
fig
```
|
github_jupyter
|
```
import re
import json
import matplotlib.pylab as plt
import numpy as np
import glob
%matplotlib inline
all_test_acc = []
all_test_err = []
all_train_loss = []
all_test_loss = []
all_cardinalities = []
all_depths = []
all_widths = []
for file in glob.glob('logs_cardinality/Cifar2/*.txt'):
with open(file) as logs:
next(logs)
test_acc = []
test_err = []
train_loss = []
test_loss = []
i = 0
for line in logs:
i += 1
if i % 2 != 0:
for t in re.finditer(r"\{.*\}", line):
try:
data = json.loads(t.group())
train_loss.append(data['train_loss'])
test_loss.append(data['test_loss'])
test_acc.append(data['test_accuracy'])
test_err.append((1-data['test_accuracy'])*100)
cardinality = data['cardinality']
depth = data['depth']
width = data['base_width']
except ValueError:
pass
all_test_acc.append(test_acc)
all_test_err.append(test_err)
all_train_loss.append(train_loss)
all_test_loss.append(test_loss)
all_cardinalities.append(cardinality)
all_depths.append(depth)
all_widths.append(width)
epochs = np.arange(0, 300, 2)
# keep each ordering in its own list so the three plots below do not overwrite each other
ordered_test_err_card = []
ordered_test_err_card.append(all_test_err[all_cardinalities.index(1)])
ordered_test_err_card.append(all_test_err[all_cardinalities.index(2)])
ordered_test_err_card.append(all_test_err[all_cardinalities.index(4)])
ordered_test_err_card.append(all_test_err[all_cardinalities.index(8)])
ordered_test_err_card.append(all_test_err[all_cardinalities.index(16)])
all_cardinalities = sorted(all_cardinalities)
ordered_test_err_depth = []
ordered_test_err_depth.append(all_test_err[all_depths.index(20)])
ordered_test_err_depth.append(all_test_err[all_depths.index(29)])
all_depths = sorted(all_depths)
ordered_test_err_width = []
ordered_test_err_width.append(all_test_err[all_widths.index(32)])
ordered_test_err_width.append(all_test_err[all_widths.index(64)])
all_widths = sorted(all_widths)
for file_no in range(0, 3):
plt.plot(epochs, ordered_test_err_card[file_no])
plt.legend([cardinality for cardinality in all_cardinalities[0:3]], loc='upper right')
plt.xlabel('epochs \n\n (f)')
plt.ylabel('top-1 error(%)')
plt.show()
for file_no in range(0, 2):
plt.plot(epochs, ordered_test_err_depth[file_no])
plt.legend([depth for depth in all_depths], loc='upper right')
plt.xlabel('epochs \n\n (c)')
plt.ylabel('top-1 error(%)')
# plt.title('(a)')
plt.show()
for file_no in range(0, 2):
plt.plot(epochs, ordered_test_err_width[file_no])
plt.legend([width for width in all_widths], loc='upper right')
plt.xlabel('epochs \n\n (a)')
plt.ylabel('top-1 error(%)')
plt.show()
cardinalities = [1, 2, 4, 8, 16]
params = [5.6, 9.8, 18.3, 34.4, 68.1]
text = ['1x64d', '2x64d', '4x64d', '8x64d', '16x64d']
cifar29 = [[0.786, 0.797, 0.803, 0.83, 0.823], [0.886, 0.887, 0.86, 0.914, 0.92], [0.939, 0.939, 0.941, 0.946, 0.946]]
fig = plt.figure()
ax = fig.add_subplot(111)
y = [(1-val)*100 for val in cifar29[2]]
ax.plot(params, y, 'x-')
plt.xlabel('# of parameters (M)')
plt.ylabel('test error (%)')
for i, txt in enumerate(text):
ax.annotate(txt, (params[i], y[i]))
plt.title('CIFAR 2 Dataset')
```
|
github_jupyter
|
# Using an external master clock for hardware control of a stage-scanning high NA oblique plane microscope
Tutorial provided by [qi2lab](https://www.shepherdlaboratory.org).
This tutorial uses Pycro-Manager to rapidly acquire terabyte-scale volumetric images using external hardware triggering of a stage scan optimized, high numerical aperture (NA) oblique plane microscope (OPM). The microscope that this notebook controls is described in detail in this [preprint](https://www.biorxiv.org/content/10.1101/2020.04.07.030569v2), under the *stage scan OPM* section in the methods.
This high NA OPM allows for versatile, high-resolution, and large field-of-view single molecule imaging. The main application is quantifying 3D spatial gene expression in millions of cells or large pieces of intact tissue using iterative RNA-FISH (see examples [here](https://www.nature.com/articles/s41598-018-22297-7) and [here](https://www.nature.com/articles/s41598-019-43943-8)). Because the fluidics controller for the iterative labeling is also controlled via Python (code not provided here), using Pycro-Manager greatly simplifies controlling these complex experiments.
The tutorial highlights the use of the `post_camera_hook_fn` and `post_hardware_hook_fn` functionality to allow an external controller to synchronize the microscope acquisition (external master). This is different from the standard hardware sequencing functionality in Pycro-Manager, where the acquisition engine sets up sequenceable hardware and the camera serves as the master clock.
The tutorial also discusses how to structure the events and avoid timeouts when acquiring more than 10 million events per acquisition.
## Microscope hardware
Briefly, the stage scan high NA OPM is built around a [bespoke tertiary objective](https://andrewgyork.github.io/high_na_single_objective_lightsheet/) designed by Alfred Millet-Sikking and Andrew York at Calico Labs. Stage scanning is performed by an ASI scan optimized XY stage, an ASI FTP Z stage, and an ASI Tiger controller with a programmable logic card. Excitation light is provided by a Coherent OBIS Laser Box. A custom Teensy based DAC synchronizes laser emission and a galvanometer mirror to the scan stage motion to eliminate motion blur. Emitted fluorescence is imaged by a Photometrics Prime BSI.
The ASI Tiger controller is the master clock in this experiment. The custom Teensy DAC is setup in a closed loop with the Photometrics camera. This controller is detailed in a previous [publication](https://www.nature.com/articles/s41467-017-00514-7) on adaptive light sheet microscopy.
The code to orthogonally deskew the acquired data and place it into a BigDataViewer HDF5 file that can be read stitched and fused using BigStitcher is found at the qi2lab (www.github.com/qi2lab/OPM/).
## Initial setup
### Imports
```
from pycromanager import Bridge, Acquisition
import numpy as np
from pathlib import Path
from time import sleep
```
### Create bridge to Micro-Manager
```
with Bridge() as bridge:
core = bridge.get_core()
```
## Define pycromanager specific hook functions for externally controlled hardware acquisition
### Post camera hook function to start external controller
This is run once after the camera is put into active mode in the sequence acquisition. The stage starts moving on this command and outputs a TTL pulse to the camera when it passes the preset initial position. This TTL starts the camera running at the set exposure time using internal timing. The camera then acts as the master clock for the galvo/laser controller using its own "exposure out" signal.
```
def post_camera_hook_(event,bridge,event_queue):
"""
Run a set of commands after the camera is started
:param event: current list of events, each a dictionary, to run in this hardware sequence
:type event: list
:param bridge: pycro-manager java bridge
:type bridge: pycromanager.core.Bridge
:param event_queue: thread-safe event queue
:type event_queue: multiprocessing.Queue
:return: event_queue
"""
# acquire core from bridge
core=bridge.get_core()
# send Tiger command to start constant speed scan
command='1SCAN'
core.set_property('TigerCommHub','SerialCommand',command)
return event
```
### Post hardware setup function to make sure external controller is ready
This is run once after the acquisition engine sets up the non-sequenceable hardware, such as the height (z) axis stage and the channel.
```
def post_hardware_hook(event,bridge,event_queue):
"""
Run a set of commands after the hardware setup calls by acquisition engine are finished
:param event: current list of events, each a dictionary, to run in this hardware sequence
:type event: list
:param bridge: pycro-manager java bridge
:type bridge: pycromanager.core.Bridge
:param event_queue: thread-safe event queue
:type event_queue: multiprocessing.Queue
:return: event_queue
"""
# acquire core from bridge
core = bridge.get_core()
# turn on 'transmit repeated commands' for Tiger
core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','No')
# check to make sure Tiger is not busy
ready='B'
while(ready!='N'):
command = 'STATUS'
core.set_property('TigerCommHub','SerialCommand',command)
ready = core.get_property('TigerCommHub','SerialResponse')
sleep(.500)
# turn off 'transmit repeated commands' for Tiger
core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','Yes')
return event
```
## Acquisition parameters set by user
### Select laser channels and powers
```
# lasers to use
# 0 -> inactive
# 1 -> active
state_405 = 0
state_488 = 0
state_561 = 1
state_635 = 0
state_730 = 0
# laser powers (0 -> 100%)
power_405 = 0
power_488 = 0
power_561 = 0
power_635 = 0
power_730 = 0
# construct arrays for laser information
channel_states = [state_405,state_488,state_561,state_635,state_730]
channel_powers = [power_405,power_488,power_561,power_635,power_730]
```
### Camera parameters
```
# FOV parameters.
# x size (256) is the Rayleigh length of oblique light sheet excitation
# y size (1600) is the high quality lateral extent of the remote image system (~180 microns)
# camera is oriented so that cropping the x size limits the number of readout rows and therefore lowering readout time
ROI = [1024, 0, 256, 1600] #unit: pixels
# camera exposure
exposure_ms = 5 #unit: ms
# camera pixel size
pixel_size_um = .115 #unit: um
```
### Stage scan parameters
The user defines these by interactively moving the XY and Z stages around the sample. At the edges of the sample, the user records the positions.
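One convenient way to record those positions (a hedged helper, not part of the original notebook) is to query the Micro-Manager core while moving the stages, using the same `Bridge` pattern as the rest of this tutorial; it assumes the default XY and focus (Z) stage devices are configured in Micro-Manager.
```python
from pycromanager import Bridge

with Bridge() as bridge:
    core = bridge.get_core()
    # report the current stage positions so they can be copied into the parameters below
    print('scan/tile (x, y) position (um):', core.get_x_position(), core.get_y_position())
    print('height (z) position (um):', core.get_position())
```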
```
# distance between adjacent images.
scan_axis_step_um = 0.2 #unit: um
# scan axis limits. Use stage positions reported by Micromanager
scan_axis_start_um = 0. #unit: um
scan_axis_end_um = 5000. #unit: um
# tile axis limits. Use stage positions reported by Micromanager
tile_axis_start_um = 0. #unit: um
tile_axis_end_um = 5000. #unit: um
# height axis limits. Use stage positions reported by Micromanager
height_axis_start_um = 0.#unit: um
height_axis_end_um = 30. #unit: um
```
### Path to save acquisition data
```
save_directory = Path('/path/to/save')
save_name = 'test'
```
## Setup hardware for stage scanning sample through oblique digitally scanned light sheet
### Calculate stage limits and speeds from user provided scan parameters
Here, the number of events along the scan (x) axis in each acquisition, the overlap between adjacent strips along the tile (y) axis, and the overlap between adjacent strips along the height (z) axis are all calculated.
```
# scan axis setup
scan_axis_step_mm = scan_axis_step_um / 1000. #unit: mm
scan_axis_start_mm = scan_axis_start_um / 1000. #unit: mm
scan_axis_end_mm = scan_axis_end_um / 1000. #unit: mm
scan_axis_range_um = np.abs(scan_axis_end_um-scan_axis_start_um) # unit: um
scan_axis_range_mm = scan_axis_range_um / 1000 #unit: mm
# assumption: use the requested exposure as the effective per-frame time (the true camera readout may differ)
actual_readout_ms = exposure_ms #unit: ms
actual_exposure_s = actual_readout_ms / 1000. #unit: s
scan_axis_speed = np.round(scan_axis_step_mm / actual_exposure_s,2) #unit: mm/s
scan_axis_positions = np.rint(scan_axis_range_mm / scan_axis_step_mm).astype(int) #unit: number of positions
# tile axis setup
tile_axis_overlap=0.2 #unit: percentage
tile_axis_range_um = np.abs(tile_axis_end_um - tile_axis_start_um) #unit: um
tile_axis_range_mm = tile_axis_range_um / 1000 #unit: mm
tile_axis_ROI = ROI[3]*pixel_size_um #unit: um
tile_axis_step_um = np.round((tile_axis_ROI) * (1-tile_axis_overlap),2) #unit: um
tile_axis_step_mm = tile_axis_step_um / 1000 #unit: mm
tile_axis_positions = np.rint(tile_axis_range_mm / tile_axis_step_mm).astype(int) #unit: number of positions
# if tile_axis_positions rounded to zero, make sure acquisition visits at least one position
if tile_axis_positions == 0:
tile_axis_positions=1
# height axis setup
# this is more complicated, because the excitation is an oblique light sheet
# the height of the scan is the length of the ROI in the tilted direction * sin(tilt angle)
height_axis_overlap=0.2 #unit: percentage
height_axis_range_um = np.abs(height_axis_end_um-height_axis_start_um) #unit: um
height_axis_range_mm = height_axis_range_um / 1000 #unit: mm
height_axis_ROI = ROI[2]*pixel_size_um*np.sin(30*(np.pi/180.)) #unit: um
height_axis_step_um = np.round((height_axis_ROI)*(1-height_axis_overlap),2) #unit: um
height_axis_step_mm = height_axis_step_um / 1000 #unit: mm
height_axis_positions = np.rint(height_axis_range_mm / height_axis_step_mm).astype(int) #unit: number of positions
# if height_axis_positions rounded to zero, make sure acquisition visits at least one position
if height_axis_positions==0:
height_axis_positions=1
```
### Setup Coherent laser box from user provided laser parameters
```
with Bridge() as bridge:
core = bridge.get_core()
# turn off lasers
# this relies on a Micro-Manager configuration group that sets all lasers to "off" state
core.set_config('Coherent-State','off')
core.wait_for_config('Coherent-State','off')
# set lasers to user defined power
core.set_property('Coherent-Scientific Remote','Laser 405-100C - PowerSetpoint (%)',channel_powers[0])
core.set_property('Coherent-Scientific Remote','Laser 488-150C - PowerSetpoint (%)',channel_powers[1])
core.set_property('Coherent-Scientific Remote','Laser OBIS LS 561-150 - PowerSetpoint (%)',channel_powers[2])
core.set_property('Coherent-Scientific Remote','Laser 637-140C - PowerSetpoint (%)',channel_powers[3])
core.set_property('Coherent-Scientific Remote','Laser 730-30C - PowerSetpoint (%)',channel_powers[4])
```
### Setup Photometrics camera for low-noise readout and triggering
The camera input trigger is set to `Trigger first` mode to allow for external control and the output trigger is set to `Rolling Shutter` mode to ensure that laser light is only delivered when the entire chip is exposed. The custom Teensy DAC waits for the signal from the camera to go HIGH and then sweeps a Gaussian pencil beam once across the field-of-view. It then rapidly resets and scans again upon the next trigger. The Teensy additionally blanks the Coherent laser box emission between frames.
```
with Bridge() as bridge:
core = bridge.get_core()
# set camera into 16bit readout mode
core.set_property('Camera','ReadoutRate','100MHz 16bit')
# give camera time to change modes
sleep(5)
# set camera into low noise readout mode
core.set_property('Camera','Gain','2-CMS')
# give camera time to change modes
sleep(5)
# set camera to give an exposure out signal
# this signal is used by the custom DAC to synchronize blanking and a digitally swept light sheet
core.set_property('Camera','ExposureOut','Rolling Shutter')
# give camera time to change modes
sleep(5)
# change camera timeout.
# this is necessary because the acquisition engine can take a long time to setup with millions of events
# on the first run
core.set_property('Camera','Trigger Timeout (secs)',300)
# give camera time to change modes
sleep(5)
# set camera to internal trigger
core.set_property('Camera','TriggerMode','Internal Trigger')
# give camera time to change modes
sleep(5)
```
### Setup ASI stage control cards and programmable logic card in the Tiger controller
Hardware is set up for a constant-speed scan along the `x` direction, lateral tiling along the `y` direction, and height tiling along the `z` direction. The programmable logic card sends a signal to the camera to start acquiring once the scan (x) axis reaches the desired speed and crosses the user-defined start position.
Documentation for the specific commands to setup the constant speed stage scan on the Tiger controller is at the following links,
- [SCAN](http://asiimaging.com/docs/commands/scan)
- [SCANR](http://asiimaging.com/docs/commands/scanr)
- [SCANV](http://www.asiimaging.com/docs/commands/scanv)
Documentation for the programmable logic card is found [here](http://www.asiimaging.com/docs/tiger_programmable_logic_card?s[]=plc).
The Tiger is polled after each command to make sure that it is ready to receive another command.
```
with Bridge() as bridge:
core = bridge.get_core()
# Setup the PLC to output external TTL when an internal signal is received from the stage scanning card
plcName = 'PLogic:E:36'
propPosition = 'PointerPosition'
propCellConfig = 'EditCellConfig'
addrOutputBNC3 = 35
addrStageSync = 46 # TTL5 on Tiger backplane = stage sync signal
core.set_property(plcName, propPosition, addrOutputBNC3)
core.set_property(plcName, propCellConfig, addrStageSync)
# turn on 'transmit repeated commands' for Tiger
core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','No')
# set tile (y) axis speed to 25% of maximum for all moves
command = 'SPEED Y=.25'
core.set_property('TigerCommHub','SerialCommand',command)
# check to make sure Tiger is not busy
ready='B'
while(ready!='N'):
command = 'STATUS'
core.set_property('TigerCommHub','SerialCommand',command)
ready = core.get_property('TigerCommHub','SerialResponse')
sleep(.500)
# set scan (x) axis speed to 25% of maximum for non-sequenced moves
command = 'SPEED X=.25'
core.set_property('TigerCommHub','SerialCommand',command)
# check to make sure Tiger is not busy
ready='B'
while(ready!='N'):
command = 'STATUS'
core.set_property('TigerCommHub','SerialCommand',command)
ready = core.get_property('TigerCommHub','SerialResponse')
sleep(.500)
# turn off 'transmit repeated commands' for Tiger
core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','Yes')
# turn on 'transmit repeated commands' for Tiger
core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','No')
# set scan (x) axis speed to correct speed for constant speed movement of scan (x) axis
# expects mm/s
command = 'SPEED X='+str(scan_axis_speed)
core.set_property('TigerCommHub','SerialCommand',command)
# check to make sure Tiger is not busy
ready='B'
while(ready!='N'):
command = 'STATUS'
core.set_property('TigerCommHub','SerialCommand',command)
ready = core.get_property('TigerCommHub','SerialResponse')
sleep(.500)
# set scan (x) axis to true 1D scan with no backlash
command = '1SCAN X? Y=0 Z=9 F=0'
core.set_property('TigerCommHub','SerialCommand',command)
# check to make sure Tiger is not busy
ready='B'
while(ready!='N'):
command = 'STATUS'
core.set_property('TigerCommHub','SerialCommand',command)
ready = core.get_property('TigerCommHub','SerialResponse')
sleep(.500)
# set range and return speed (25% of max) for constant speed movement of scan (x) axis
# expects mm
command = '1SCANR X='+str(scan_axis_start_mm)+' Y='+str(scan_axis_end_mm)+' R=25'
core.set_property('TigerCommHub','SerialCommand',command)
# check to make sure Tiger is not busy
ready='B'
while(ready!='N'):
command = 'STATUS'
core.set_property('TigerCommHub','SerialCommand',command)
ready = core.get_property('TigerCommHub','SerialResponse')
sleep(.500)
# turn off 'transmit repeated commands' for Tiger
core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','Yes')
```
## Setup and run the acquisition
### Change core timeout
This is necessary because of the large, slow XY stage moves.
```
with Bridge() as bridge:
core = bridge.get_core()
# change core timeout for long stage moves
core.set_property('Core','TimeoutMs',20000)
```
### Move stage hardware to initial positions
```
with Bridge() as bridge:
core = bridge.get_core()
# move scan (x) and tile (y) stages to starting positions
core.set_xy_position(scan_axis_start_um,tile_axis_start_um)
# assumption: query the default XY stage device name from the core rather than using an undefined xy_stage variable
core.wait_for_device(core.get_xy_stage_device())
# move height (z) stage to starting position
core.set_position(height_axis_start_um)
# assumption: query the default focus (Z) stage device name from the core rather than using an undefined z_stage variable
core.wait_for_device(core.get_focus_device())
```
### Create event structure
The external controller handles all of the events in `x` for a given `yzc` position. To make sure that Pycro-Manager structures the acquisition this way, the value of the stage positions for `x` are kept constant for all events at a given `yzc` position. This gives the order of the loops used to create the event structure as `yzcx`.
```
# empty event dictionary
events = []
# loop over all tile (y) positions.
for y in range(tile_axis_positions):
# update tile (y) axis position
tile_position_um = tile_axis_start_um+(tile_axis_step_um*y)
# loop over all height (z) positions
for z in range(height_axis_positions):
# update height (z) axis position
height_position_um = height_axis_start_um+(height_axis_step_um*z)
# loop over all channels (c)
for c in range(len(channel_states)):
# create events for all scan (x) axis positions.
# The acquistion engine knows that this is a hardware triggered sequence because
# the physical x position does not change when specifying the large number of x events
for x in range(scan_axis_positions):
# only create events if user sets laser to active
# this relies on a Micromanager group 'Coherent-State' that has individual entries that correspond
# the correct on/off state of each laser. Laser blanking and synchronization are handled by the
# custom Teensy DAC controller.
if channel_states[c]==1:
if (c==0):
evt = { 'axes': {'x': x, 'y':y, 'z':z}, 'x': scan_axis_start_um, 'y': tile_position_um,
'z': height_position_um, 'channel' : {'group': 'Coherent-State', 'config': '405nm'}}
elif (c==1):
evt = { 'axes': {'x': x, 'y':y, 'z':z}, 'x': scan_axis_start_um, 'y': tile_position_um,
'z': height_position_um, 'channel' : {'group': 'Coherent-State', 'config': '488nm'}}
elif (c==2):
evt = { 'axes': {'x': x, 'y':y, 'z':z}, 'x': scan_axis_start_um, 'y': tile_position_um,
'z': height_position_um, 'channel' : {'group': 'Coherent-State', 'config': '561nm'}}
elif (c==3):
evt = { 'axes': {'x': x, 'y':y, 'z':z}, 'x': scan_axis_start_um, 'y': tile_position_um,
'z': height_position_um, 'channel' : {'group': 'Coherent-State', 'config': '637nm'}}
elif (c==4):
evt = { 'axes': {'x': x, 'y':y, 'z':z}, 'x': scan_axis_start_um, 'y': tile_position_um,
'z': height_position_um, 'channel' : {'group': 'Coherent-State', 'config': '730nm'}}
events.append(evt)
```
### Run acquisition
- The camera is set to `Trigger first` mode. In this mode, the camera waits for an external trigger and then runs using the internal timing.
- The acquisition is set up and started. The initial acquisition setup by Pycro-manager and the Java acquisition engine takes a few minutes and requires a significant amount of RAM allocated to ImageJ; 40 GB of RAM seems acceptable. The circular buffer is only allocated 2 GB, because the computer for this experiment has an SSD array capable of writing up to 600 MBps.
- At each `yzc` position, the ASI Tiger controller supplies the external master signal when the scan (x) axis has ramped up to the correct constant speed and crossed `scan_axis_start_um`. The speed is defined by `scan_axis_speed = scan_axis_step_um / camera_exposure_ms` (see the short sketch after this list). Acquired images are placed into the `x` axis of the Acquisition without Pycro-Manager interacting with the hardware.
- Once the full acquisition is completed, all lasers are set to `off` and the camera is placed back in `Internal Trigger` mode.
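For reference, here is a minimal sketch of that speed calculation with placeholder numbers; the real `scan_axis_step_um` and `camera_exposure_ms` values are defined in the setup cells of this notebook.
```
# placeholder values -- the real ones are set in the setup cells above
scan_axis_step_um = 0.2     # stage travel between camera triggers (um)
camera_exposure_ms = 2.0    # camera exposure time (ms)

# speed the scan axis must hold so that exactly one image is acquired per step
scan_axis_speed = scan_axis_step_um / camera_exposure_ms  # um/ms, i.e. mm/s
print(scan_axis_speed)  # 0.1 mm/s for these placeholder numbers
```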
```
with Bridge() as bridge:
core = bridge.get_core()
# set camera to trigger first mode for stage synchronization
# give camera time to change modes
core.set_property('Camera','TriggerMode','Trigger first')
sleep(5)
# run acquisition
# the acquisition needs to write data at roughly 100-500 MBps depending on frame rate and ROI
# so the display is set to off and no multi-resolution calculations are done
with Acquisition(directory=save_directory, name=save_name, post_hardware_hook_fn=post_hardware_hook,
post_camera_hook_fn=post_camera_hook, show_display=False, max_multi_res_index=0) as acq:
acq.acquire(events)
# turn off lasers
core.set_config('Coherent-State','off')
core.wait_for_config('Coherent-State','off')
# set camera to internal trigger
core.set_property('Camera','TriggerMode','Internal Trigger')
# give camera time to change modes
sleep(5)
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import os
import sys
from pathlib import Path
ROOT_DIR = os.path.abspath(os.path.join(Path().absolute(), os.pardir))
sys.path.insert(1, ROOT_DIR)
import numpy as np
import scipy
import matplotlib.pyplot as plt
from frequency_response import FrequencyResponse
from biquad import peaking, low_shelf, high_shelf, digital_coeffs
harman_overear = FrequencyResponse.read_from_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_over-ear_2018.csv'))
fig, ax = harman_overear.plot_graph(show=False, color='C0')
fs = 48000
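# build a low-shelf filter at 105 Hz (Q = 0.71, +6 dB) and evaluate its response
# on the frequency grid of the Harman over-ear target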
a0, a1, a2, b0, b1, b2 = low_shelf(105.0, 0.71, 6, fs=fs)
shelf = digital_coeffs(harman_overear.frequency, fs, a0, a1, a2, b0, b1, b2)
shelf = FrequencyResponse(name='Shelf', frequency=harman_overear.frequency.copy(), raw=shelf)
shelf.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_overear_wo_bass = FrequencyResponse(
name='Harman over-ear target 2018 without bass',
frequency=harman_overear.frequency.copy(),
raw=harman_overear.raw - shelf.raw
)
harman_overear_wo_bass.plot_graph(fig=fig, ax=ax, color='C2', show=False)
ax.legend(['Harman over-ear 2018', 'Low shelf', 'Harman over-ear 2018 without bass shelf'])
ax.set_ylim([-4, 10])
plt.show()
harman_inear = FrequencyResponse.read_from_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_in-ear_2019v2.csv'))
fig, ax = harman_inear.plot_graph(show=False, color='C0')
fs = 48000
a0, a1, a2, b0, b1, b2 = low_shelf(105.0, 0.71, 9, fs=fs)
shelf = digital_coeffs(harman_inear.frequency, fs, a0, a1, a2, b0, b1, b2)
shelf = FrequencyResponse(name='Shelf', frequency=harman_inear.frequency.copy(), raw=shelf)
shelf.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_inear_wo_bass = FrequencyResponse(
name='Harman in-ear target 2019 without bass',
frequency=harman_inear.frequency.copy(),
raw=harman_inear.raw - shelf.raw
)
harman_inear_wo_bass.plot_graph(fig=fig, ax=ax, color='C2', show=False)
ax.legend(['Harman in-ear 2019', 'Low shelf', 'Harman in-ear target 2019 without bass'])
ax.set_ylim([-4, 10])
plt.show()
fig, ax = harman_overear.plot_graph(show=False, color='C0')
harman_overear_wo_bass.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_overear_4_bass = harman_overear_wo_bass.copy()
harman_overear_4_bass.raw += digital_coeffs(harman_overear_4_bass.frequency, fs, *low_shelf(105, 0.71, 4, fs=fs))
harman_overear_4_bass.plot_graph(fig=fig, ax=ax, show=False, color='C2')
ax.legend(['Harman over-ear 2018', 'Harman over-ear 2018 without bass', 'Harman over-ear 2018 with 4 dB bass'])
ax.set_ylim([-4, 10])
ax.set_title('Harman over-ear')
plt.show()
fig, ax = harman_inear.plot_graph(show=False, color='C0')
harman_inear_wo_bass.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_inear_6_bass = harman_inear_wo_bass.copy()
harman_inear_6_bass.raw += digital_coeffs(harman_inear_6_bass.frequency, fs, *low_shelf(105, 0.71, 6, fs=fs))
harman_inear_6_bass.plot_graph(fig=fig, ax=ax, show=False, color='C2')
ax.legend(['Harman in-ear 2019', 'Harman in-ear 2019 without bass', 'Harman in-ear 2019 with 6 dB bass'])
ax.set_ylim([-4, 10])
ax.set_title('Harman in-ear')
plt.show()
# WARNING: These will overwrite the files
harman_overear_wo_bass.write_to_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_over-ear_2018_wo_bass.csv'))
harman_overear_wo_bass.plot_graph(file_path=os.path.join(ROOT_DIR, 'compensation', 'harman_over-ear_2018_wo_bass.png'), color='C0')
harman_inear_wo_bass.write_to_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_in-ear_2019v2_wo_bass.csv'))
harman_inear_wo_bass.plot_graph(file_path=os.path.join(ROOT_DIR, 'compensation', 'harman_in-ear_2019v2_wo_bass.png'), color='C0')
```
|
github_jupyter
|
# Direct Grib Read
If you have installed more recent versions of pygrib, you can ingest grib mosaics directly without conversion to netCDF. This speeds up the ingest by ~15-20 seconds. This notebook will also demonstrate how to use MMM-Py with cartopy, and how to download near-realtime data from NCEP.
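To make the "direct read" concrete, here is a minimal pygrib sketch that opens a single grib2 file and pulls out the data array and coordinate grids; the file name is a placeholder (for example, one of the files returned by `download_files` below), and MMM-Py handles this step internally when you hand grib files to `MosaicTile`.
```
import pygrib

# placeholder file name -- substitute a real MRMS grib2 mosaic file
grbs = pygrib.open('mrms_mosaic_example.grib2')
grb = grbs[1]                 # first grib message in the file
refl = grb.values             # 2-D data array for this level
lats, lons = grb.latlons()    # matching latitude/longitude grids
grbs.close()
```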
```
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
import pandas as pd
import glob
import mmmpy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.io.img_tiles import StamenTerrain
import pygrib
import os
import pyart
%matplotlib inline
```
### Download MRMS directly from NCEP
```
def download_files(input_dt, max_seconds=300):
"""
This function takes an input datetime object, and will try to match with the closest mosaics in time
that are available at NCEP. Note that NCEP does not archive much beyond 24 hours of data.
Parameters
----------
input_dt : datetime.datetime object
input datetime object, will try to find closest file in time on NCEP server
Other Parameters
----------------
max_seconds : int or float
Maximum number of seconds difference tolerated between input and selected datetimes,
before file matching will fail
Returns
-------
files : 1-D ndarray of strings
Array of mosaic file names, ready for ingest into MMM-Py
"""
baseurl = 'http://mrms.ncep.noaa.gov/data/3DReflPlus/'
page1 = pd.read_html(baseurl)
directories = np.array(page1[0][0][3:-1]) # May need to change indices depending on pandas version
urllist = []
files = []
for i, d in enumerate(directories):
print(baseurl + d)
page2 = pd.read_html(baseurl + d)
filelist = np.array(page2[0][0][3:-1]) # May need to change indices depending on pandas version
dts = []
for filen in filelist:
# Will need to change in event of a name change
dts.append(dt.datetime.strptime(filen[32:47], '%Y%m%d-%H%M%S'))
dts = np.array(dts)
diff = np.abs((dts - input_dt))
if np.min(diff).total_seconds() <= max_seconds:
urllist.append(baseurl + d + filelist[np.argmin(diff)])
files.append(filelist[np.argmin(diff)])
for url in urllist:
print(url)
os.system('wget ' + url)
return np.array(files)
files = download_files(dt.datetime.utcnow())
```
### Direct ingest of grib into MMM-Py
```
mosaic = mmmpy.MosaicTile(files)
mosaic.diag()
```
### Plot with cartopy
```
tiler = StamenTerrain()
ext = [-130, -65, 20, 50]
fig = plt.figure(figsize=(12, 6))
projection = ccrs.PlateCarree() # ShadedReliefESRI().crs
ax = plt.axes(projection=projection)
ax.set_extent(ext)
ax.add_image(tiler, 3)
# Create a feature for States/Admin 1 regions at 1:10m from Natural Earth
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
ax.add_feature(states_provinces, edgecolor='gray')
# Create a feature for Countries 0 regions at 1:10m from Natural Earth
countries = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_0_boundary_lines_land',
scale='50m',
facecolor='none')
ax.add_feature(countries, edgecolor='k')
ax.coastlines(resolution='50m')
mosaic.get_comp()
valmask = np.ma.masked_where(mosaic.mrefl3d_comp <= 0, mosaic.mrefl3d_comp)
cs = plt.pcolormesh(mosaic.Longitude, mosaic.Latitude, valmask, vmin=0, vmax=55,
cmap='pyart_Carbone42', transform=projection)
plt.colorbar(cs, label='Composite Reflectivity (dBZ)',
orientation='horizontal', pad=0.05, shrink=0.75, fraction=0.05, aspect=30)
plt.title(dt.datetime.utcfromtimestamp(mosaic.Time).strftime('%m/%d/%Y %H:%M UTC'))
```
|
github_jupyter
|
```
# header files
import torch
import torch.nn as nn
import torchvision
import numpy as np
from torch.utils.tensorboard import SummaryWriter
from google.colab import drive
drive.mount('/content/drive')
np.random.seed(1234)
torch.manual_seed(1234)
torch.cuda.manual_seed(1234)
# define transforms
train_transforms = torchvision.transforms.Compose([torchvision.transforms.RandomRotation(30),
torchvision.transforms.Resize((224, 224)),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# datasets
train_data = torchvision.datasets.ImageFolder("/content/drive/My Drive/train_images/", transform=train_transforms)
val_data = torchvision.datasets.ImageFolder("/content/drive/My Drive/val_images/", transform=train_transforms)
print(len(train_data))
print(len(val_data))
# load the data
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True, num_workers=16)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=32, shuffle=False, num_workers=16)
class Convolution(torch.nn.Sequential):
# init method
def __init__(self, in_channels, out_channels, kernel_size, strides, padding):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.strides = strides
self.padding = padding
self.add_module("conv", torch.nn.Conv2d(self.in_channels, self.out_channels, kernel_size=self.kernel_size, stride=self.strides, padding=self.padding))
self.add_module("norm", torch.nn.BatchNorm2d(self.out_channels))
self.add_module("act", torch.nn.ReLU(inplace=True))
# define VGG19 network
class VGG19(torch.nn.Module):
# init method
def __init__(self, num_classes=2):
super(VGG19, self).__init__()
self.features = nn.Sequential(
# first cnn block
Convolution(3, 64, 3, 1, 1),
Convolution(64, 64, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# second cnn block
Convolution(64, 128, 3, 1, 1),
Convolution(128, 128, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# third cnn block
Convolution(128, 256, 3, 1, 1),
Convolution(256, 256, 3, 1, 1),
Convolution(256, 256, 3, 1, 1),
Convolution(256, 256, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# fourth cnn block
Convolution(256, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# fifth cnn block
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.avgpool = nn.AdaptiveAvgPool2d(7)
self.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
nn.ReLU(inplace = True),
nn.Dropout(0.5),
nn.Linear(4096, 4096),
nn.ReLU(inplace = True),
nn.Dropout(0.5),
nn.Linear(4096, num_classes),
)
# forward step
def forward(self, x):
x = self.features(x)
x = self.avgpool(x)
x = x.view(x.shape[0], -1)
x = self.classifier(x)
return x
# Cross-Entropy loss with Label Smoothing
class CrossEntropyLabelSmoothingLoss(nn.Module):
def __init__(self, smoothing=0.0):
super(CrossEntropyLabelSmoothingLoss, self).__init__()
self.smoothing = smoothing
def forward(self, pred, target):
log_prob = torch.nn.functional.log_softmax(pred, dim=-1)
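    # spread `smoothing` probability mass evenly over the non-target classes;
    # the remaining (1 - smoothing) mass is placed on the target class below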
    weight = pred.new_ones(pred.size()) * (self.smoothing/(pred.size(-1)-1.))
weight.scatter_(-1, target.unsqueeze(-1), (1.-self.smoothing))
loss = (-weight * log_prob).sum(dim=-1).mean()
return loss
# define loss (smoothing=0 is equivalent to standard Cross-Entropy loss)
criterion = CrossEntropyLabelSmoothingLoss(0.0)
# load model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = VGG19()
model.to(device)
# load tensorboard
%load_ext tensorboard
%tensorboard --logdir logs
# optimizer to be used
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)
best_metric = -1
best_metric_epoch = -1
writer = SummaryWriter("./logs/")
# train and validate
for epoch in range(0, 100):
# train
model.train()
training_loss = 0.0
total = 0
correct = 0
for i, (input, target) in enumerate(train_loader):
input = input.to(device)
target = target.to(device)
optimizer.zero_grad()
output = model(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
training_loss = training_loss + loss.item()
_, predicted = output.max(1)
total += target.size(0)
correct += predicted.eq(target).sum().item()
training_loss = training_loss/float(len(train_loader))
training_accuracy = str(100.0*(float(correct)/float(total)))
writer.add_scalar("Loss/train", float(training_loss), epoch)
writer.add_scalar("Accuracy/train", float(training_accuracy), epoch)
# validate
model.eval()
valid_loss = 0.0
total = 0
correct = 0
for i, (input, target) in enumerate(val_loader):
with torch.no_grad():
input = input.to(device)
target = target.to(device)
output = model(input)
loss = criterion(output, target)
_, predicted = output.max(1)
total += target.size(0)
correct += predicted.eq(target).sum().item()
valid_loss = valid_loss + loss.item()
valid_loss = valid_loss/float(len(val_loader))
valid_accuracy = str(100.0*(float(correct)/float(total)))
writer.add_scalar("Loss/val", float(valid_loss), epoch)
writer.add_scalar("Accuracy/val", float(valid_accuracy), epoch)
# store best model
if(float(valid_accuracy)>best_metric and epoch>=10):
best_metric = float(valid_accuracy)
best_metric_epoch = epoch
torch.save(model.state_dict(), "best_model_vgg19.pth")
print()
print("Epoch" + str(epoch) + ":")
print("Training Accuracy: " + str(training_accuracy) + " Validation Accuracy: " + str(valid_accuracy))
print("Training Loss: " + str(training_loss) + " Validation Loss: " + str(valid_loss))
print()
```
|
github_jupyter
|
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
np.random.seed(0)
from statistics import mean
```
Since this chapter focuses on evaluating algorithms, we postpone implementing the learning algorithms ourselves and use sklearn for the learning algorithms.
```
import sklearn
```
The training data here is a sine function with an added Gaussian noise term $N(\varepsilon|0,0.05)$.
```
size = 100
max_degree = 11
x_data = np.random.rand(size) * np.pi * 2
var_data = np.random.normal(loc=0,scale=0.1,size=size)
sin_data = np.sin(x_data) + var_data
plt.ylim(-1.2,1.2)
plt.scatter(x_data,sin_data)
```
We use polynomial regression as the learning algorithm.
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
```
2.2.2: **MSE**: a measure of how good the approximation is.
$$MSE=\int (y(x;D) - h(x))^2p(x)dx=E\{(y(x;D)-h(x))^2\}$$
```
def MSE(y,t):
return np.sum(np.square(y-t))/y.size
MSE(np.array([10,3,3]),np.array([1,2,3]))
```
2.2.1 (1) **Holdout method**:
Split the available data into two parts, using one part for training and the other for testing.
A sufficient amount of test data is required.
```
%%time
def holdout_method(x,y,per=0.8,value_func=MSE,degree=11):
index = np.random.permutation(x.size)
index_train,index_test = np.split(index,[int(x.size*per)])
#plt.scatter(x_data[index_train],sin_data[index_train])
test_score_list = []
train_score_list = []
for i in range(1,degree):
pf = PolynomialFeatures(degree=i, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(x[index_train].reshape(-1,1), y[index_train])
pred_y_test = pl.predict(x[index_test].reshape(-1,1))
pred_y_train = pl.predict(x[index_train].reshape(-1,1))
score_train = value_func(pred_y_train,y[index_train])
score_test = value_func(pred_y_test,y[index_test])
train_score_list.append(score_train)
test_score_list.append(score_test)
return train_score_list,test_score_list
hold_train_score_list,hold_test_score_list = holdout_method(x_data,sin_data,degree=max_degree)
plt.plot(np.array(range(1,max_degree)),np.array(hold_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(hold_test_score_list),color='r')
```
(2) **Cross-validation**: Split the data of each class into n groups, train on n-1 of the groups, test on the remaining group, and evaluate performance by averaging the error over all n runs.
```
def cross_validation(x,y,value_func=MSE,split_num=5,degree=1):
assert x.size % split_num==0,"You must use divisible number"
n = x.size / split_num
train_scores =[]
test_scores =[]
for i in range(split_num):
indices = [int(i*n),int(i*n+n)]
train_x_1,test_x,train_x_2=np.split(x,indices)
train_y_1,test_y,train_y_2=np.split(y,indices)
train_x = np.concatenate([train_x_1,train_x_2])
train_y = np.concatenate([train_y_1,train_y_2])
pf = PolynomialFeatures(degree=degree, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(train_x.reshape(-1,1), train_y)
pred_y_test = pl.predict(np.array(test_x).reshape(-1,1))
pred_y_train = pl.predict(np.array(train_x).reshape(-1,1))
score_train = value_func(pred_y_train,train_y)
#print(score_train)
score_test = value_func(pred_y_test,test_y)
#print(len(test_y))
train_scores.append(score_train)
test_scores.append(score_test)
return mean(train_scores),mean(test_scores)
cross_test_score_list = []
cross_train_score_list = []
for i in range(1,max_degree):
tra,tes = cross_validation(x_data,sin_data,degree=i)
cross_train_score_list.append(tra)
cross_test_score_list.append(tes)
plt.plot(np.array(range(1,max_degree)),np.array(cross_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(cross_test_score_list),color='r')
```
(3) **Leave-one-out method**: A special case of cross-validation in which the number of groups equals the number of data points.
```
def leave_one_out(x,y,value_func=MSE,size=size,degree=1):
return cross_validation(x,y,value_func,split_num=size,degree=degree)
leave_test_score_list = []
leave_train_score_list = []
for i in range(1,max_degree):
tra,tes = leave_one_out(x_data,sin_data,degree=i)
leave_train_score_list.append(tra)
leave_test_score_list.append(tes)
plt.plot(np.array(range(1,max_degree)),np.array(leave_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(leave_test_score_list),color='r')
plt.plot(np.array(range(1,max_degree)),np.array(hold_train_score_list),color='y')
plt.plot(np.array(range(1,max_degree)),np.array(hold_test_score_list),color='m')
plt.plot(np.array(range(1,max_degree)),np.array(cross_train_score_list),color='k')
plt.plot(np.array(range(1,max_degree)),np.array(cross_test_score_list),color='c')
plt.plot(np.array(range(1,max_degree)),np.array(leave_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(leave_test_score_list),color='r')
```
(4) **Bootstrap method**: Draw N samples with replacement to create a bootstrap sample, and from it estimate
$bias=\varepsilon(N^*,N^*)-\varepsilon(N^*,N)$.
Repeat this several times and use the average of the results as the bias estimate $\overline{bias}$.
The final error estimate is then
$\varepsilon = \varepsilon(N,N)-\overline{bias}$.
```
def bootstrap(x,y,value_func=MSE,trial=50,degree=1):
biases=[]
for i in range(trial):
boot_ind = np.random.choice(range(x.size),size=x.size,replace=True)
pf = PolynomialFeatures(degree=degree, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(x[boot_ind].reshape(-1,1), y[boot_ind])
pred_y_boot = pl.predict(x[boot_ind].reshape(-1,1))
pred_y_base = pl.predict(x.reshape(-1,1))
score_boot = value_func(pred_y_boot,y[boot_ind])
#print(score_train)
score_base = value_func(pred_y_base,y)
bias = score_base - score_boot
#print(bias)
biases.append(bias)
pf = PolynomialFeatures(degree=degree, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(x.reshape(-1,1), y)
pred_y_base = pl.predict(x.reshape(-1,1))
score_base = value_func(pred_y_base,y)
return score_base + mean(biases)
boot_score_list = []
for i in range(1,max_degree):
boot_score = bootstrap(x_data,sin_data,degree=i)
boot_score_list.append(boot_score)
plt.plot(np.array(range(1,max_degree)),np.array(boot_score_list),color='b')
```
|
github_jupyter
|
___
<a href='https://www.udemy.com/user/joseportilla/'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Content Copyright by Pierian Data</em></center>
# Warmup Project Exercise
## Simple War Game
Before we launch into the OOP Milestone 2 Project, let's walk together through using OOP for a more robust and complex application, such as a game. We will use Python OOP to simulate a simplified version of the game War. Two players will each start off with half the deck, then they each remove a card, compare which card has the highest value, and the player with the higher card wins both cards. In the event of a tie, the players go to "war": each player lays down additional cards and the comparison is repeated.
## Single Card Class
### Creating a Card Class with outside variables
Here we will use some outside variables that we know don't change regardless of the situation, such as a deck of cards. Regardless of what round, match, or game we're playing, we'll still need the same deck of cards.
```
# We'll use this later
import random
suits = ('Hearts', 'Diamonds', 'Spades', 'Clubs')
ranks = ('Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine', 'Ten', 'Jack', 'Queen', 'King', 'Ace')
values = {'Two':2, 'Three':3, 'Four':4, 'Five':5, 'Six':6, 'Seven':7, 'Eight':8,
'Nine':9, 'Ten':10, 'Jack':11, 'Queen':12, 'King':13, 'Ace':14}
class Card:
def __init__(self,suit,rank):
self.suit = suit
self.rank = rank
self.value = values[rank]
def __str__(self):
return self.rank + ' of ' + self.suit
```
Create an example card
```
suits[0]
ranks[0]
two_hearts = Card(suits[0],ranks[0])
two_hearts
print(two_hearts)
two_hearts.rank
two_hearts.value
values[two_hearts.rank]
```
## Deck Class
### Using a class within another class
We just created a single card, but how can we create an entire Deck of cards? Let's explore doing this with a class that utilizes the Card class.
A Deck will be made up of multiple Cards, which means we will actually use the Card class within the \_\_init__ of the Deck class.
```
class Deck:
def __init__(self):
# Note this only happens once upon creation of a new Deck
self.all_cards = []
for suit in suits:
for rank in ranks:
# This assumes the Card class has already been defined!
self.all_cards.append(Card(suit,rank))
def shuffle(self):
# Note this doesn't return anything
random.shuffle(self.all_cards)
def deal_one(self):
# Note we remove one card from the list of all_cards
return self.all_cards.pop()
```
### Create a Deck
```
mydeck = Deck()
len(mydeck.all_cards)
mydeck.all_cards[0]
print(mydeck.all_cards[0])
mydeck.shuffle()
print(mydeck.all_cards[0])
my_card = mydeck.deal_one()
print(my_card)
```
# Player Class
Let's create a Player class. A player should be able to hold instances of Cards, and they should also be able to add and remove cards from their hand. We want the Player class to be flexible enough to add either one card or many cards, so we'll use a simple if check to keep it all in the same method.
We'll keep this all in mind as we create the methods for the Player class.
### Player Class
```
class Player:
def __init__(self,name):
self.name = name
# A new player has no cards
self.all_cards = []
def remove_one(self):
# Note we remove one card from the list of all_cards
# We state 0 to remove from the "top" of the deck
# We'll imagine index -1 as the bottom of the deck
return self.all_cards.pop(0)
def add_cards(self,new_cards):
if type(new_cards) == type([]):
self.all_cards.extend(new_cards)
else:
self.all_cards.append(new_cards)
def __str__(self):
return f'Player {self.name} has {len(self.all_cards)} cards.'
jose = Player("Jose")
jose
print(jose)
two_hearts
jose.add_cards(two_hearts)
print(jose)
jose.add_cards([two_hearts,two_hearts,two_hearts])
print(jose)
```
## War Game Logic
```
player_one = Player("One")
player_two = Player("Two")
```
## Setup New Game
```
new_deck = Deck()
new_deck.shuffle()
```
### Split the Deck between players
```
len(new_deck.all_cards)/2
for x in range(26):
player_one.add_cards(new_deck.deal_one())
player_two.add_cards(new_deck.deal_one())
len(new_deck.all_cards)
len(player_one.all_cards)
len(player_two.all_cards)
```
## Play the Game
```
import pdb
game_on = True
round_num = 0
while game_on:
round_num += 1
print(f"Round {round_num}")
# Check to see if a player is out of cards:
if len(player_one.all_cards) == 0:
print("Player One out of cards! Game Over")
print("Player Two Wins!")
game_on = False
break
if len(player_two.all_cards) == 0:
print("Player Two out of cards! Game Over")
print("Player One Wins!")
game_on = False
break
# Otherwise, the game is still on!
# Start a new round and reset current cards "on the table"
player_one_cards = []
player_one_cards.append(player_one.remove_one())
player_two_cards = []
player_two_cards.append(player_two.remove_one())
at_war = True
while at_war:
if player_one_cards[-1].value > player_two_cards[-1].value:
# Player One gets the cards
player_one.add_cards(player_one_cards)
player_one.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
# Player Two Has higher Card
elif player_one_cards[-1].value < player_two_cards[-1].value:
# Player Two gets the cards
player_two.add_cards(player_one_cards)
player_two.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
else:
print('WAR!')
# This occurs when the cards are equal.
# We'll grab another card each and continue the current war.
# First check to see if player has enough cards
# Check to see if a player is out of cards:
if len(player_one.all_cards) < 5:
print("Player One unable to play war! Game Over at War")
print("Player Two Wins! Player One Loses!")
game_on = False
break
elif len(player_two.all_cards) < 5:
print("Player Two unable to play war! Game Over at War")
print("Player One Wins! Player One Loses!")
game_on = False
break
# Otherwise, we're still at war, so we'll add the next cards
else:
for num in range(5):
player_one_cards.append(player_one.remove_one())
player_two_cards.append(player_two.remove_one())
```
## Game Setup in One Cell
```
player_one = Player("One")
player_two = Player("Two")
new_deck = Deck()
new_deck.shuffle()
for x in range(26):
player_one.add_cards(new_deck.deal_one())
player_two.add_cards(new_deck.deal_one())
game_on = True
round_num = 0
while game_on:
round_num += 1
print(f"Round {round_num}")
# Check to see if a player is out of cards:
if len(player_one.all_cards) == 0:
print("Player One out of cards! Game Over")
print("Player Two Wins!")
game_on = False
break
if len(player_two.all_cards) == 0:
print("Player Two out of cards! Game Over")
print("Player One Wins!")
game_on = False
break
# Otherwise, the game is still on!
# Start a new round and reset current cards "on the table"
player_one_cards = []
player_one_cards.append(player_one.remove_one())
player_two_cards = []
player_two_cards.append(player_two.remove_one())
at_war = True
while at_war:
if player_one_cards[-1].value > player_two_cards[-1].value:
# Player One gets the cards
player_one.add_cards(player_one_cards)
player_one.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
# Player Two Has higher Card
elif player_one_cards[-1].value < player_two_cards[-1].value:
# Player Two gets the cards
player_two.add_cards(player_one_cards)
player_two.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
else:
print('WAR!')
# This occurs when the cards are equal.
# We'll grab another card each and continue the current war.
# First check to see if player has enough cards
# Check to see if a player is out of cards:
if len(player_one.all_cards) < 5:
print("Player One unable to play war! Game Over at War")
print("Player Two Wins! Player One Loses!")
game_on = False
break
elif len(player_two.all_cards) < 5:
print("Player Two unable to play war! Game Over at War")
print("Player One Wins! Player One Loses!")
game_on = False
break
# Otherwise, we're still at war, so we'll add the next cards
else:
for num in range(5):
player_one_cards.append(player_one.remove_one())
player_two_cards.append(player_two.remove_one())
len(player_one.all_cards)
len(player_two.all_cards)
print(player_one_cards[-1])
print(player_two_cards[-1])
```
## Great Work!
Other links that may interest you:
* https://www.reddit.com/r/learnpython/comments/7ay83p/war_card_game/
* https://codereview.stackexchange.com/questions/131174/war-card-game-using-classes
* https://gist.github.com/damianesteban/6896120
* https://lethain.com/war-card-game-in-python/
* https://hectorpefo.github.io/2017-09-13-Card-Wars/
* https://www.wimpyprogrammer.com/the-statistics-of-war-the-card-game
|
github_jupyter
|
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
```

<h1 align='center'>Stats Can Notebook Template: Quick Dataset Exploration</h1>
<h4 align='center'>Laura Gutierrez Funderburk $\mid$ Stats Can Notebook</h4>
<h2 align='center'>Abstract</h2>
This notebook may be used to quickly explore most data sets from Stats Can. To explore the contents of a dataset, simply visit https://www150.statcan.gc.ca/n1/en/type/data?MM=1 and select a "Table".
To select a table, copy the string next to Table, under the data set name. Here is an example.

In this case, the data set's table is 10-10-0122-01.
Simply copy and paste that table in the box below, and press the Download Dataset button.
```
%run -i ./StatsCan/helpers.py
%run -i ./StatsCan/scwds.py
%run -i ./StatsCan/sc.py
from ipywidgets import widgets, VBox, HBox, Button
from ipywidgets import Button, Layout, widgets
from IPython.display import display, Javascript, Markdown, HTML
import datetime as dt
import qgrid as q
import pandas as pd
import json
import datetime
import qgrid
from tqdm import tnrange, tqdm_notebook
from time import sleep
import sys
grid_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'filterable': True,
'sortable': False,
'highlightSelectedRow': True}
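# button callbacks: re-execute the next cells of the notebook via the classic
# notebook's Javascript API (rerun_cell runs the next 2 cells, run_4cell the next 4)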
def rerun_cell( b ):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+3)'))
def run_4cell( b ):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+5)'))
style = {'description_width': 'initial'}
```
<h2 align='center'>Downloading Stats Can Data</h2>
To download a full dataset, enter a product ID and press the Download Dataset button.
```
prod_ID = widgets.Text(
value="10-10-0122-01",
placeholder='ProductID value',
description='productID value',
disabled=False,
style=style
)
DS_button = widgets.Button(
button_style='success',
description="Download Dataset",
layout=Layout(width='15%', height='30px'),
style=style
)
DS_button.on_click( run_4cell )
display(prod_ID)
display(DS_button)
# # Download data
productId = prod_ID.value
if "-" not in productId:
if len(productId)!=10:
print("WARNING: THIS IS LIKELY A NUMBER NOT ASSOCIATED WITH A DATA TABLE. VERIFY AND TRY AGAIN")
sys.exit(1)
else:
if len(productId.split("-")) !=4:
print("WARNING: THIS IS LIKELY A NUMBER NOT ASSOCIATED WITH A DATA TABLE. VERIFY AND TRY AGAIN")
sys.exit(1)
download_tables(str(productId))
def download_and_store_json(productId):
with open(str(productId) +'.json') as f:
data = json.load(f)
f.close()
return data
import zipfile
def read_data_compute_df(productID):
zf = zipfile.ZipFile('./' + str(productID) + '-eng.zip')
df = pd.read_csv(zf.open(str(productID)+'.csv'))
return df
# Example
#data = download_and_store_json(productId)
# Example, we will select the study we downloaded previously
df_fullDATA = zip_table_to_dataframe(productId)
cols = list(df_fullDATA.loc[:,'REF_DATE':'UOM'])+ ['SCALAR_FACTOR'] + ['VALUE']
df_less = df_fullDATA[cols]
df_less2 = df_less.drop(["DGUID"], axis=1)
df_less2.head()
iteration_nr = df_less2.shape[1]
categories = []
for i in range(iteration_nr-1):
categories.append(df_less2.iloc[:,i].unique())
all_the_widgets = []
for i in range(len(categories)):
if i==0:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Start Date:',
style = style,
disabled=False
)
b_category = widgets.Dropdown(
value = categories[i][-1],
options = categories[i],
description ='End Date:',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
all_the_widgets.append(b_category)
elif i==1:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Location:',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
elif i==len(categories)-1:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Scalar factor:',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
elif i==len(categories)-2:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Units of Measure :',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
else:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Subcategory ' + str(i),
style = style,
disabled=False
)
all_the_widgets.append(a_category)
```
<h2 align='center'>Select Data Subsets: One-Dimensional Plotting</h2>
Use the menu below to select a category within the full dataset that you are interested in exploring.
Choose a start and end date to plot results.
If there is data available, it will appear under the headers.
Be careful to select dataframes that actually contain data!
Use the Preview Dataset button to help you preview the data.
```
CD_button = widgets.Button(
button_style='success',
description="Preview Dataset",
layout=Layout(width='15%', height='30px'),
style=style
)
CD_button.on_click( run_4cell )
tab3 = VBox(children=[HBox(children=all_the_widgets[0:3]),
HBox(children=all_the_widgets[3:5]),
HBox(children=all_the_widgets[5:len(all_the_widgets)]),
CD_button])
tab = widgets.Tab(children=[tab3])
tab.set_title(0, 'Load Data Subset')
display(tab)
df_sub = df_less2[(df_less2["REF_DATE"]>=all_the_widgets[0].value) &
(df_less2["REF_DATE"]<=all_the_widgets[1].value) &
(df_less2["GEO"]==all_the_widgets[2].value) &
(df_less2["UOM"]==all_the_widgets[-2].value) &
(df_less2["SCALAR_FACTOR"]==all_the_widgets[-1].value) ]
df_sub.head()
# TO HANDLE THE REST OF THE COLUMNS, SIMPLY SUBSTITUTE VALUES
col_name = df_sub.columns[2]
# weather_data = pd.read_csv("DATA.csv",sep=',')
col_name
df_sub_final = df_sub[(df_sub[col_name]==all_the_widgets[3].value)]
import matplotlib.pyplot as plt
%matplotlib inline
fig1 = plt.figure(facecolor='w',figsize=(18,18))
plt.subplot(3, 3, 1)
plt.axis('off');
plt.subplot(3, 3, 2)
plt.plot(df_sub_final["REF_DATE"],df_sub_final["VALUE"],'b--',label='Value')
#plt.plot(df_20_USA["REF_DATE"],df_20_USA["VALUE"],'r--',label='U.S. dollar, daily average')
plt.xlabel('Year-Month', fontsize=20)
plt.ylabel('Value',fontsize=20)
plt.title(str(all_the_widgets[3].value) + ", "+ str(all_the_widgets[2].value),fontsize=20)
plt.xticks(rotation=90)
plt.grid(True)
plt.subplot(3, 3, 3);
plt.axis('off');
```
<h2 align='center'>References</h2>
Statistics Canada.
https://www150.statcan.gc.ca/n1/en/type/data?MM=1
# 
|
github_jupyter
|
# H2O Tutorial: EEG Eye State Classification
Author: Erin LeDell
Contact: [email protected]
This tutorial steps through a quick introduction to H2O's R API. The goal of this tutorial is to introduce, through a complete example, H2O's capabilities from R.
Most of the functionality of R's `data.frame` uses exactly the same syntax for an `H2OFrame`, so if you are comfortable with R, data frame manipulation will come naturally to you in H2O. The modeling syntax in the H2O R API may also remind you of other machine learning packages in R.
References: [H2O R API documentation](http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Rdoc.html), the [H2O Documentation landing page](http://www.h2o.ai/docs/) and [H2O general documentation](http://h2o-release.s3.amazonaws.com/h2o/latest_stable_doc.html).
## Install H2O in R
### Prerequisites
This tutorial assumes you have R installed. The `h2o` R package has a few dependencies which can be installed using CRAN. The packages that are required (which also have their own dependencies) can be installed in R as follows:
```r
pkgs <- c("methods","statmod","stats","graphics","RCurl","jsonlite","tools","utils")
for (pkg in pkgs) {
if (! (pkg %in% rownames(installed.packages()))) { install.packages(pkg) }
}
```
### Install h2o
Once the dependencies are installed, you can install H2O. We will use the latest stable version of the `h2o` R package, which at the time of writing is H2O v3.8.0.4 (aka "Tukey-4"). The latest stable version can be installed using the commands on the [H2O R Installation](http://www.h2o.ai/download/h2o/r) page.
## Start up an H2O cluster
After the R package is installed, we can start up an H2O cluster. In a R terminal, we load the `h2o` package and start up an H2O cluster as follows:
```
library(h2o)
# Start an H2O Cluster on your local machine
h2o.init(nthreads = -1) #nthreads = -1 uses all cores on your machine
```
If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows:
```
# This will not actually do anything since it's a fake IP address
# h2o.init(ip="123.45.67.89", port=54321)
```
## Download EEG Data
The following code downloads a copy of the [EEG Eye State](http://archive.ics.uci.edu/ml/datasets/EEG+Eye+State#) dataset. All data is from one continuous EEG measurement with the [Emotiv EEG Neuroheadset](https://emotiv.com/epoc.php). The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.

We can import the data directly into H2O using the `h2o.importFile` function in the R API. The import path can be a URL, a local path, a path to an HDFS file, or a file on Amazon S3.
```
#csv_url <- "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
csv_url <- "https://h2o-public-test-data.s3.amazonaws.com/eeg_eyestate_splits.csv"
data <- h2o.importFile(csv_url)
```
## Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame:
```
dim(data)
```
Now let's take a look at the top of the frame:
```
head(data)
```
The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). I randomly divided the dataset into three partitions: train (60%), valid (20%) and test (20%) and marked which split each row belongs to in the "split" column.
Let's take a look at the column names. The first 14 names correspond to the EEG sensor positions on the headset.
```
names(data)
```
To select a subset of the columns to look at, typical R data.frame indexing applies:
```
columns <- c('AF3', 'eyeDetection', 'split')
head(data[columns])
```
Now let's select a single column, for example -- the response column, and look at the data more closely:
```
y <- 'eyeDetection'
data[y]
```
It looks like a binary response, but let's validate that assumption:
```
h2o.unique(data[y])
```
If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default.
Therefore, we should convert the response column to a more efficient "factor" representation (called "enum" in Java) -- in this case it is a categorical variable with two levels, 0 and 1. If the only column in my data that is categorical is the response, I typically don't bother specifying the column type during the parse, and instead use this one-liner to convert it afterwards:
```
data[y] <- as.factor(data[y])
```
Now we can check that there are two levels in our response column:
```
h2o.nlevels(data[y])
```
We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are:
```
h2o.levels(data[y])
```
We may want to check if there are any missing values, so let's look for NAs in our dataset. For all the supervised H2O algorithms, H2O will handle missing values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not missing any of the training labels.
To figure out which, if any, values are missing, we can use the `h2o.nacnt` (NA count) method on any H2OFrame (or column). The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to an H2OFrame also apply to a single column.
```
h2o.nacnt(data[y])
```
Great, no missing labels. :-)
Out of curiosity, let's see if there is any missing data in any of the columns of this frame:
```
h2o.nacnt(data)
```
Each column returns a zero, so there are no missing values in any of the columns.
The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution:
```
h2o.table(data[y])
```
Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Let's calculate the percentage that each class represents:
```
n <- nrow(data) # Total number of training samples
h2o.table(data[y])['Count']/n
```
### Split H2O Frame into a train and test set
So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, validation set and a test set.
If you want H2O to do the splitting for you, you can use the `split_frame` method. However, we have explicit splits that we want (for reproducibility reasons), so we can just subset the Frame to get the partitions we want.
Subset the `data` H2O Frame on the "split" column:
```
train <- data[data['split']=="train",]
nrow(train)
valid <- data[data['split']=="valid",]
nrow(valid)
test <- data[data['split']=="test",]
nrow(test)
```
## Machine Learning in H2O
We will do a quick demo of the H2O software using a Gradient Boosting Machine (GBM). The goal of this problem is to train a model to predict eye state (open vs closed) from EEG data.
### Train and Test a GBM model
In the steps above, we have already created the training set and validation set, so the next step is to specify the predictor set and response variable.
#### Specify the predictor set and response
As with any machine learning algorithm, we need to specify the response and predictor columns in the training set.
The `x` argument should be a vector of predictor names in the training frame, and `y` specifies the response column. We have already set `y <- "eyeDetection"` above, but we still need to specify `x`.
```
names(train)
x <- setdiff(names(train), c("eyeDetection", "split"))  #Remove the eyeDetection and split columns
x
```
Now that we have specified `x` and `y`, we can train the GBM model using a few non-default model parameters. Since we are predicting a binary response, we set `distribution = "bernoulli"`.
```
model <- h2o.gbm(x = x, y = y,
training_frame = train,
validation_frame = valid,
distribution = "bernoulli",
ntrees = 100,
max_depth = 4,
learn_rate = 0.1)
```
### Inspect Model
The type of results shown when you print a model, are determined by the following:
- Model class of the estimator (e.g. GBM, RF, GLM, DL)
- The type of machine learning problem (e.g. binary classification, multiclass classification, regression)
- The data you specify (e.g. `training_frame` only, `training_frame` and `validation_frame`, or `training_frame` and `nfolds`)
Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a `validation_frame`. Since this is a binary classification task, we are shown the relevant performance metrics, which include MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score.
The scoring history is also printed, which shows the performance metrics over some increment such as "number of trees" in the case of GBM and RF.
Lastly, for tree-based methods (GBM and RF), we also print variable importance.
```
print(model)
```
### Model Performance on a Test Set
Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we just ran the model once, so our validation set (passed as `validation_frame`), could have also served as a "test set." We technically have already created test set predictions and evaluated test set performance.
However, when performing model selection over a variety of model parameters, it is common for users to train a variety of models (using different parameters) using the training set, `train`, and a validation set, `valid`. Once the user selects the best model (based on validation set performance), the true test of model performance is performed by making a final set of predictions on the held-out (never been used before) test set, `test`.
You can use the `h2o.performance` function to evaluate the model on a new dataset. The results are stored in an object of class `"H2OBinomialMetrics"`.
```
perf <- h2o.performance(model = model, newdata = test)
class(perf)
```
Individual model performance metrics can be extracted using functions like `h2o.r2`, `h2o.auc` and `h2o.mse`. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC).
```
h2o.r2(perf)
h2o.auc(perf)
h2o.mse(perf)
```
### Cross-validated Performance
To perform k-fold cross-validation, you use the same code as above, but you specify `nfolds` as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row.
Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the `nfolds` argument.
When performing cross-validation, you can still pass a `validation_frame`, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below by adding `nfolds = 5` to the same GBM call used above.
```
cvmodel <- h2o.gbm(x = x, y = y,
training_frame = train,
validation_frame = valid,
distribution = "bernoulli",
ntrees = 100,
max_depth = 4,
learn_rate = 0.1,
nfolds = 5)
```
This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the `h2o.auc` function again, and you can specify `train` or `xval` as `TRUE` to get the correct metric.
```
print(h2o.auc(cvmodel, train = TRUE))
print(h2o.auc(cvmodel, xval = TRUE))
```
### Grid Search
One way of evaluting models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:
- `ntrees`: Number of trees
- `max_depth`: Maximum depth of a tree
- `learn_rate`: Learning rate in the GBM
We will define a grid as follows:
```
ntrees_opt <- c(5,50,100)
max_depth_opt <- c(2,3,5)
learn_rate_opt <- c(0.1,0.2)
hyper_params = list('ntrees' = ntrees_opt,
'max_depth' = max_depth_opt,
'learn_rate' = learn_rate_opt)
```
The `h2o.grid` function can be used to train a `"H2OGrid"` object for any of the H2O algorithms (specified by the `"algorithm"` argument).
```
gs <- h2o.grid(algorithm = "gbm",
grid_id = "eeg_demo_gbm_grid",
hyper_params = hyper_params,
x = x, y = y,
training_frame = train,
validation_frame = valid)
```
### Compare Models
```
print(gs)
```
By default, grids of models will return the grid results sorted by (increasing) logloss on the validation set. However, if we are interested in sorting on another model performance metric, we can do that using the `h2o.getGrid` function as follows:
```
# print out the auc for all of the models
auc_table <- h2o.getGrid(grid_id = "eeg_demo_gbm_grid", sort_by = "auc", decreasing = TRUE)
print(auc_table)
```
The "best" model in terms of validation set AUC is listed first in auc_table.
```
best_model <- h2o.getModel(auc_table@model_ids[[1]])
h2o.auc(best_model, valid = TRUE) #Validation AUC for best model
```
The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC.
```
best_perf <- h2o.performance(model = best_model, newdata = test)
h2o.auc(best_perf)
```
The test set AUC is approximately 0.97. Not bad!!
|
github_jupyter
|
<a href="https://colab.research.google.com/github/issdl/from-data-to-solution-2021/blob/main/4_metrics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Metrics
## Imports
```
import numpy as np
np.random.seed(2021)
import random
random.seed(2021)
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
```
## Create Toy Datasets
```
def pc(db): # print count
print("Database contains {} negative and {} positive samples".format(db.count(0), db.count(1)))
length = 100
# Balanced
db_balanced = [0] * (length//2) + [1] * (length//2)
pc(db_balanced)
# More positives
amount = random.uniform(0.9, 0.99)
db_positives = [1] * int(length*amount) + [0] * int(length*(1-amount)+1)
pc(db_positives)
# More negatives
amount = random.uniform(0.9, 0.99)
db_negatives = [0] * int(length*amount) + [1] * int(length*(1-amount)+1)
pc(db_negatives)
```
## Dummy model
```
top_no = 95
def dummy_model(data, threshold):
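    # return predictions that agree with the labels for the first `threshold`
    # samples and for indices above `top_no`; every other prediction is flipped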
correct=0
output=[]
for i, d in enumerate(data):
if i < threshold or i > top_no :
output.append(d)
correct+=1
else:
output.append(abs(1-d))
return output
```
### *Balanced dataset*
```
balanced_threshold = 80
out_balanced = dummy_model(db_balanced, balanced_threshold)
print('Labels:')
printmd('{}**{}**{}'.format(db_balanced[:balanced_threshold], db_balanced[balanced_threshold:top_no], db_balanced[top_no+1:],))
print('Predictions:')
printmd('{}**{}**{}'.format(out_balanced[:balanced_threshold], out_balanced[balanced_threshold:top_no], out_balanced[top_no+1:],))
```
### *More positives*
```
positives_threshold = 80
out_positives = dummy_model(db_positives, positives_threshold)
print('Labels:')
printmd('{}**{}**{}'.format(db_positives[:positives_threshold], db_positives[positives_threshold:top_no], db_positives[top_no+1:]))
print('Predictions:')
printmd('{}**{}**{}'.format(out_positives[:positives_threshold], out_positives[positives_threshold:top_no], out_positives[top_no+1:]))
```
### *More negatives*
```
negatives_threshold = 80
out_negatives = dummy_model(db_negatives, negatives_threshold)
print('Labels:')
printmd('{}**{}**{}'.format(db_negatives[:negatives_threshold], db_negatives[negatives_threshold:top_no], db_negatives[top_no+1:]))
print('Predictions:')
printmd('{}**{}**{}'.format(out_negatives[:negatives_threshold], out_negatives[negatives_threshold:top_no], out_negatives[top_no+1:]))
```
## Metrics
### **Accuracy**
Tasks:
* Create method implementing accuracy metric
*Balanced dataset*
```
from sklearn.metrics import accuracy_score
## Implement method implementing accuracy metric
def acc(labels, predictions):
    ## START
    correct = sum(1 for l, p in zip(labels, predictions) if l == p)
    return correct / len(labels)
    ## END
printmd('Accuracy custom {}'.format(acc(db_balanced, out_balanced)))
printmd('Accuracy sklearn {}'.format(accuracy_score(db_balanced, out_balanced)))
```
*More positives*
```
printmd('Accuracy custom {}'.format(acc(db_positives, out_positives)))
printmd('Accuracy sklearn {}'.format(accuracy_score(db_positives, out_positives)))
```
*More negatives*
```
printmd('Accuracy custom {}'.format(acc(db_negatives, out_negatives)))
printmd('Accuracy sklearn {}'.format(accuracy_score(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('Accuracy {}'.format(accuracy_score(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
```
printmd('Accuracy {}'.format(accuracy_score(db_negatives, np.zeros(length))))
```
### **Confusion Matrix**
```
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
```
*Balanced dataset*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_balanced, out_balanced), display_labels=[0,1])
cmd.plot()
```
*More positives*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_positives, out_positives), display_labels=[0,1])
cmd.plot()
```
*More negatives*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_negatives, out_negatives), display_labels=[0,1])
cmd.plot()
```
*More positives - all positive predictions*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_positives, np.ones(length)), display_labels=[0,1])
cmd.plot()
```
*More negatives - all negative predictions*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_negatives, np.zeros(length)), display_labels=[0,1])
cmd.plot()
```
### **Precision**
Tasks:
* Create method implementing precision metric
```
from sklearn.metrics import precision_score
## Create method implementing precision metric
def precision(labels, predictions):
    ## START
    tp = sum(1 for l, p in zip(labels, predictions) if p == 1 and l == 1)
    fp = sum(1 for l, p in zip(labels, predictions) if p == 1 and l == 0)
    return tp / (tp + fp) if (tp + fp) > 0 else 0
    ## END
```
*Balanced dataset*
```
printmd('Precision custom {}'.format(precision(db_balanced, out_balanced)))
printmd('Precision sklearn {}'.format(precision_score(db_balanced, out_balanced)))
```
*More positives*
```
printmd('Precision custom {}'.format(precision(db_positives, out_positives)))
printmd('Precision sklearn {}'.format(precision_score(db_positives, out_positives)))
```
*More negatives*
```
printmd('Precision custom {}'.format(precision(db_negatives, out_negatives)))
printmd('Precision sklearn {}'.format(precision_score(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('Precision custom {}'.format(precision(db_positives, np.ones(length))))
printmd('Precision sklearn {}'.format(precision_score(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
```
printmd('Precision custom {}'.format(precision(db_negatives, np.zeros(length))))
printmd('Precision sklearn {}'.format(precision_score(db_negatives, np.zeros(length))))
```
### **Recall**
Tasks:
* Create method implementing recall metric
```
from sklearn.metrics import recall_score
## Create method implementing recall metric
def recall(labels, predictions):
    ## START
    tp = sum(1 for l, p in zip(labels, predictions) if p == 1 and l == 1)
    fn = sum(1 for l, p in zip(labels, predictions) if p == 0 and l == 1)
    return tp / (tp + fn) if (tp + fn) > 0 else 0
    ## END
```
*Balanced dataset*
```
printmd('Recall custom {}'.format(recall(db_balanced, out_balanced)))
printmd('Recall sklearn {}'.format(recall_score(db_balanced, out_balanced)))
```
*More positives*
```
printmd('Recall custom {}'.format(recall(db_positives, out_positives)))
printmd('Recall sklearn {}'.format(recall_score(db_positives, out_positives)))
```
*More negatives*
```
printmd('Recall custom {}'.format(recall(db_negatives, out_negatives)))
printmd('Recall sklearn {}'.format(recall_score(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('Recall custom {}'.format(recall(db_positives, np.ones(length))))
printmd('Recall sklearn {}'.format(recall_score(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
```
printmd('Recall custom {}'.format(recall(db_negatives, np.zeros(length))))
printmd('Recall sklearn {}'.format(recall_score(db_negatives, np.zeros(length))))
```
### **False Positive Rate = 1 - Specificity**
```
def fpr(labels, predictions):
assert len(labels)==len(predictions)
fp=0
tn=0
#fpr=fp/(fp+tn)
for i, p in enumerate(predictions):
if p == labels[i] and p == 0:
tn+=1
elif p != labels[i] and p == 1:
fp+=1
if (fp+tn)==0:
return 0
return fp/(fp+tn)
```
*Balanced dataset*
```
printmd('fpr {}'.format(fpr(db_balanced, out_balanced)))
```
*More positives*
```
printmd('fpr {}'.format(fpr(db_positives, out_positives)))
```
*More negatives*
```
printmd('fpr {}'.format(fpr(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('fpr {}'.format(fpr(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
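```
printmd('fpr {}'.format(fpr(db_negatives, np.zeros(length))))
```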
### **True Positive Rate = Recall = Sensitivity**
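The true positive rate is the same quantity as the recall implemented above, so the recall results already cover this metric.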
### **F1 Score**
```
from sklearn.metrics import f1_score
def f1():
pass
```
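A possible sketch of the harmonic mean F1 = 2PR / (P + R), assuming the custom `precision` and `recall` methods above have been completed (`f1_reference` is an illustrative name; it returns 0 when both precision and recall are 0):
```
def f1_reference(labels, predictions):
    p = precision(labels, predictions)
    r = recall(labels, predictions)
    # F1 is the harmonic mean of precision and recall
    if (p + r) == 0:
        return 0
    return 2 * p * r / (p + r)
```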
*Balanced dataset*
```
printmd('F1 sklearn {}'.format(f1_score(db_balanced, out_balanced)))
```
*More positives*
```
printmd('F1 sklearn {}'.format(f1_score(db_positives, out_positives)))
printmd('F1 sklearn weighted {}'.format(f1_score(db_positives, out_positives, average='weighted')))
```
*More negatives*
```
printmd('F1 sklearn {}'.format(f1_score(db_negatives, out_negatives)))
printmd('F1 sklearn weighted {}'.format(f1_score(db_negatives, out_negatives, average='weighted')))
```
*More positives - all positive predictions*
```
printmd('F1 sklearn {}'.format(f1_score(db_positives, np.ones(length))))
printmd('F1 sklearn weighted {}'.format(f1_score(db_positives, np.ones(length), average='weighted')))
```
*More negatives - all negative predictions*
```
printmd('F1 sklearn {}'.format(f1_score(db_negatives, np.zeros(length))))
printmd('F1 sklearn weighted {}'.format(f1_score(db_negatives, np.zeros(length), average='weighted')))
```
|
github_jupyter
|
```
import pandas as pd
import numpy as np
import os
import matplotlib
import matplotlib.pyplot as plt
from xgboost.sklearn import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, roc_auc_score, make_scorer, accuracy_score
from xgboost import XGBClassifier, plot_importance
import math
main_df = pd.read_csv(os.path.join('data', 'unpacked_genres.csv')).drop('Unnamed: 0', axis=1)
lang_df = pd.read_csv(os.path.join('data', 'languages_parsed.csv')).drop('Unnamed: 0', axis=1)
main_df.head()
lang_df.columns
main_df['id'] = main_df['id'].astype('str')
lang_df['id'] = lang_df['id'].astype('str')
lang_df = lang_df[['id', u'numlang', u'cn', u'da', u'de',
u'en', u'es', u'fr', u'hi', u'it', u'ja', u'ko', u'ml', u'ru', u'ta',
u'zh']]
all_df = pd.merge(main_df, lang_df, on='id')
all_df.columns
all_df.to_csv(os.path.join('data', 'final.csv'))
all_df = all_df.drop(['production_countries', 'spoken_languages', 'original_language'], axis=1)
all_df.to_csv(os.path.join('data', 'final.csv'))
all_df.head()
all_df.drop('original_language', axis=1).to_csv(os.path.join('data', 'final.csv'))
df = pd.read_csv(os.path.join('data', 'final.csv'))
X = df.drop(['revenue', 'id', 'likes', 'dislikes'], axis=1)
y = df.revenue
reg = XGBRegressor()
X_train, X_test, y_train, y_test = train_test_split(X, y)
reg.fit(X_train, y_train)
print(math.sqrt(mean_squared_error(y_test, reg.predict(X_test))))
print(reg.predict(df[df['id'] == 862].drop(['id', 'revenue', 'likes', 'dislikes'], axis=1)))
X.columns
Xp = X.drop([u'cn',
u'da', u'de', u'es', u'fr', u'hi', u'it', u'ja', u'ko', u'ml',
u'ru', u'ta', u'zh'], axis=1)
Xp.head()
reg = XGBRegressor()
X_train, X_test, y_train, y_test = train_test_split(X, y)
reg.fit(X_train, y_train)
print(math.sqrt(mean_squared_error(y_test, reg.predict(X_test))))
import seaborn as sns
sns.heatmap(X.corr())
df.columns
sns.heatmap(df.drop([u'cn', u'da', u'de', u'es',
u'fr', u'hi', u'it', u'ja', u'ko', u'ml', u'ru', u'ta', u'zh'], axis=1).corr())
df.revenue.hist()
profit = []
for i in range(len(df)):
profit.append(df['revenue'][i] - df['budget'][i])
df['profit'] = profit
len(df[df['profit'] < 0])
isProfitable = []
for i in range(len(df)):
isProfitable.append(df['profit'][i] > 0)
df['isProfitable'] = isProfitable
df = pd.read_csv(os.path.join('data', 'final_clf.csv')).drop('Unnamed: 0', axis=1)
X = df.drop(['id', 'revenue', 'TV Movie', 'profit', 'isProfitable'], axis=1)
y = df.isProfitable.astype('int')
X_train, X_test, y_train, y_test = train_test_split(X, y)
clf = XGBClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
plot_importance(clf)
plt.show()
roc_auc_score(y_test, np.array(clf.predict_proba(X_test))[:,1])
roc_auc_score(y, np.array(clf.predict_proba(X))[:,1])
from sklearn.model_selection import GridSearchCV
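# Grid search over tree depth and number of estimators, refitting on the AUC score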
grid_params = {
'max_depth': range(5, 15, 3),
'n_estimators': range(50, 200, 25)
}
scoring = {'AUC': 'roc_auc', 'Accuracy': make_scorer(accuracy_score)}
clf = GridSearchCV(XGBClassifier(), param_grid=grid_params, scoring=scoring, cv=5, refit='AUC')
clf.fit(X, y)
best_clf = clf.best_estimator_
df.columns
X = df.drop(['id', 'revenue', 'TV Movie', 'profit', 'isProfitable'], axis=1)
y = df.isProfitable.astype('int')
X_train, X_test, y_train, y_test = train_test_split(X, y)
clf = XGBClassifier()
roc_auc_score(y, np.array(best_clf.predict_proba(X))[:,1])
plot_importance(best_clf)
plt.show()
from xgboost import plot_tree
df.daysSinceStart.plot.hist()
df['isProfitable'] = df['isProfitable'].astype('int')
len(df[df['isProfitable'] == 0])
1421.0/(len(df)-1421.0)
df.to_csv(os.path.join('data', 'final_clf.csv'))
```
|
github_jupyter
|
```
import re
import os
import urllib.request
import requests
from contextlib import closing
from robobrowser import RoboBrowser
class ProgressBar(object):
"""
    Link: https://www.zhihu.com/question/41132103/answer/93438156
    Source: Zhihu
"""
def __init__(self, title, count=0.0, run_status=None, fin_status=None, total=100.0, unit='', sep='/', chunk_size=1.0):
super(ProgressBar, self).__init__()
self.info = "【%s】 %s %.2f %s %s %.2f %s"
self.title = title
self.total = total
self.count = count
self.chunk_size = chunk_size
self.status = run_status or ""
        self.fin_status = fin_status or " " * len(self.status)
self.unit = unit
self.seq = sep
def __get_info(self):
"""【razorback】 下载完成 3751.50 KB / 3751.50 KB """
_info = self.info % (self.title, self.status, self.count/self.chunk_size, self.unit, self.seq, self.total/self.chunk_size, self.unit)
return _info
def refresh(self, count=1, status=None):
self.count += count
self.status = status or self.status
end_str = "\r"
if self.count >= self.total:
end_str = '\n'
self.status = status or self.fin_status
print(self.__get_info(), end=end_str)
path = './'
def download_video_by_url(url, path, vid_title):
outfile = os.path.join(path,vid_title+'.mp4')
with closing(requests.get(url, stream=True)) as response:
chunk_size = 1024
content_size = int(response.headers['content-length'])
        progress = ProgressBar(vid_title, total=content_size, unit="KB", chunk_size=chunk_size, run_status="Downloading", fin_status="Download complete")
assert response.status_code == 200
with open(outfile, "wb") as file:
for data in response.iter_content(chunk_size=chunk_size):
file.write(data)
progress.refresh(count=len(data))
return True
url = 'http://91porn.com/view_video.php?viewkey=4d65b13fa47b2afb51b8'
br = RoboBrowser(history=True,parser='lxml')
br.open(url)
lang = br.get_forms()[0]
lang['session_language'].options = ['cn_CN']
lang['session_language'].value = 'cn_CN'
br.submit_form(lang)
vid_title = br.find('div',{'id':'viewvideo-title'}).text.strip()
print(vid_title)
vid_id = re.findall(r'\d{6}',br.find('a',{'href':'#featureVideo'}).attrs['onclick'])[0]
vid_real_url = 'http://192.240.120.34//mp43/{}.mp4'.format(vid_id)
urllib.request.urlretrieve(vid_real_url,'{}.mp4'.format(vid_title))
if download_video_by_url(vid_real_url, path, vid_title):
    print('Download successful! Cherish life, stay away from porn, gambling and drugs!')
hot_videos = {}
br = RoboBrowser(history=True,parser='lxml')
url = 'http://91porn.com/v.php?category=rf&viewtype=basic&page=1'
br.open(url)
lang = br.get_forms()[0]
lang['session_language'].options = ['cn_CN']
lang['session_language'].value = 'cn_CN'
br.submit_form(lang)
# get every video's information
videos = br.find_all('div',{'class':'listchannel'})
# get their titles and urls
videos_dict = dict([(i.find('a').find('img')['title'],i.find('a')['href']) for i in videos])
hot_videos.update(videos_dict)
for i,j in enumerate(hot_videos.keys()):
print(i,j)
```
|
github_jupyter
|
# Character-based LSTM
## Grab all Chesterton texts from Gutenberg
```
from nltk.corpus import gutenberg
gutenberg.fileids()
text = ''
for txt in gutenberg.fileids():
if 'chesterton' in txt:
text += gutenberg.raw(txt).lower()
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
'corpus length: {} total chars: {}'.format(len(text), len(chars))
print(text[:100])
```
## Create the Training set
Build the training dataset: take 40 characters and save the 41st character. We teach the model that a given 40-character sequence should generate the 41st character. A step size of 3 gives overlapping sequences, so we get many more 40/41 samples.
```
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i+maxlen])
next_chars.append(text[i + maxlen])
print("sequences: ", len(sentences))
print(sentences[0])
print(sentences[1])
print(next_chars[0])
```
One-hot encode
```
import numpy as np
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
```
## Create the Model
```
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()
```
## Train the Model
```
epochs = 2
batch_size = 128
model.fit(X, y, batch_size=batch_size, epochs=epochs)
```
## Generate new sequence
```
import random
def sample(preds, temperature=1.0):
    # Rescale the predicted probabilities with the temperature:
    # low temperatures sharpen the distribution, high temperatures flatten it.
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    # Draw one character index from the re-normalized distribution.
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
import sys
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
|
github_jupyter
|
Check coefficients for integration schemes - they should all line up nicely for values in the middle and vary smoothly
```
from bokeh import plotting, io, models, palettes
io.output_notebook()
import numpy
from maxr.integrator import history
nmax = 5
figures = []
palette = palettes.Category10[3]
for n in range(1, nmax):
fig = plotting.figure(height=100, width=600,
active_drag='pan', active_scroll='wheel_zoom')
for order, color in zip((1, 2, 3), palette):
try:
coeffs = history.coefficients(n, order=order)
ticks = range(len(coeffs))
fig.line(ticks, coeffs, alpha=0.9, color=color)
fig.circle(ticks, coeffs, alpha=0.9, color=color)
except ValueError:
# Skip orders if we don't have enough coefficients to calculate these
continue
fig.yaxis.axis_label = 'n={0}'.format(n)
fig.toolbar.logo = None
fig.toolbar_location = None
figures.append(fig)
# Set up scaling
if len(figures) == 1:
figures[0].x_range = models.Range1d(0, nmax - 1)
figures[0].y_range = models.Range1d(0, 2)
else:
figures[-1].x_range = figures[0].x_range
figures[-1].y_range = figures[0].y_range
io.show(models.Column(*figures))
```
Define some timesteps to integrate over
```
tmin, tmax = 0, 30
ts = numpy.linspace(tmin, tmax, 1000)
```
Check we can integrate things!
```
expected = -1.2492166377597749
abs(history.integrator(numpy.sin(ts), ts) - expected) < 1e-5
```
Turn this into a history integrator for a python function
```
def evaluate_history_integral(f, ts, order=1):
""" Evaluate the history integral for a given driving function f
"""
return numpy.array([0] + [
history.integrator(f(ts[:idx+1]), ts[:idx+1], order=order)
for idx in range(1, len(ts))])
results = evaluate_history_integral(numpy.sin, ts)
figure = plotting.figure(height=300)
figure.line(ts, results)
figure.title.text = "∫sin(t)/√(t-𝜏)d𝜏"
io.show(figure)
```
Check accuracy of convergence. We use a sinusoidal forcing and plot the response
$$
\int_0^{t} \frac{\sin{(\tau)}}{\sqrt{t - \tau}}d\tau = \sqrt{2 \pi}\left[C{\left(\sqrt{\frac{2t}{\pi}}\right)}\sin{t} - S{\left(\sqrt{\frac{2t}{\pi}}\right)}\cos{t}\right]
$$
where $C$ is the Fresnel C (cos) integral, and $S$ is the Fresnel $S$ (sin) integral. Note the solution in the paper is **WRONG**
```
from scipy.special import fresnel
def solution(t):
ssc, csc = fresnel(numpy.sqrt(2 * t / numpy.pi))
return numpy.sqrt(2 * numpy.pi) * (
csc * numpy.sin(t) - ssc * numpy.cos(t))
```
Show the solution
```
figure = plotting.figure(height=300)
figure.line(ts, numpy.sin(ts), legend='Source function sin(t)', color=palette[1], alpha=0.7)
figure.line(ts, solution(ts), legend='Analytic ∫sin(t)/√(t-𝜏)d𝜏', color=palette[0], alpha=0.7)
figure.line(ts, evaluate_history_integral(numpy.sin, ts), legend='Numerical ∫sin(t)/√(t-𝜏)d𝜏', color=palette[2], alpha=0.7)
io.show(figure)
```
and try integration numerically
```
nsteps = 30
order = 3
tmin = 0
tmax = 40
# Evaluate solution
ts = numpy.linspace(tmin, tmax, nsteps)
numeric = evaluate_history_integral(numpy.sin, ts, order=order)
exact = solution(ts)
figure = plotting.figure(height=300)
figure.line(ts, exact, legend='Analytic', color=palette[0], alpha=0.7)
figure.line(ts, numeric, legend='Numerical', color=palette[2], alpha=0.7)
io.show(figure)
numpy.mean(numeric - exact)
```
Now we loop through by order and compute the error
```
from collections import defaultdict
# Set up steps
nstepstep = 50
nsteps = numpy.arange(nstepstep, 500, nstepstep)
spacing = 10 / (nsteps - 1)
# Calculate error
error = defaultdict(list)
for order in (1, 2, 3):
for N in nsteps:
ts = numpy.linspace(0, tmax, N)
err = evaluate_history_integral(numpy.sin, ts, order=order) - solution(ts)
error[order].append(abs(err).max())
# Convert to arrays
for key, value in error.items():
error[key] = numpy.asarray(value)
```
We can plot how the error changes with spacing
```
figure = plotting.figure(height=300, x_axis_type='log', y_axis_type='log')
for order, color in zip((1, 2, 3), palette):
figure.line(spacing, error[order], legend='Order = {0}'.format(order),
color=color, alpha=0.9)
figure.xaxis.axis_label = 'Timestep (𝛿t)'
figure.yaxis.axis_label = 'Error (𝜀)'
figure.legend.location = 'bottom_right'
io.show(figure)
```
Check that we get reasonable scaling (it should be about $\epsilon\sim\delta t ^{\text{order} + 1}$)
```
def slope(rise, run):
return (rise[1:] - rise[0]) / (run[1:] - run[0])
figure = plotting.figure(height=300, x_axis_type='log')
for order, color in zip((1, 2, 3), palette):
figure.line(spacing[1:],
slope(numpy.log(error[order]), numpy.log(spacing)),
legend='Order = {0}'.format(order),
color=color, alpha=0.9)
figure.xaxis.axis_label = 'Timestep (𝛿t)'
figure.yaxis.axis_label = 'Scaling exponent'
figure.legend.location = 'center_right'
io.show(figure)
```
|
github_jupyter
|
### Notebook for the Udacity Project "Write A Data Science Blog Post"
#### Dataset used: "TripAdvisor Restaurants Info for 31 Euro-Cities"
https://www.kaggle.com/damienbeneschi/krakow-ta-restaurans-data-raw
https://www.kaggle.com/damienbeneschi/krakow-ta-restaurans-data-raw/downloads/krakow-ta-restaurans-data-raw.zip/5
## 1.: Business Understanding according to CRISP-DM
I was in south-western Poland recently, and while searching for a good place to eat on Google Maps I noticed that there were a lot of restaurants with really good ratings and reviews in the 4+ region, in cities as well as in the countryside. This got me thinking, because in my hometown Munich there are also many great places, but also a lot that sit in the not-so-good region around 3 stars. In general, ratings seemed to be better there compared to what I know. So I thought, maybe people just rate more mildly there. Then I had my first lunch at one of those 4+ places, and not only was the staff very friendly and the food nicely presented, it also tasted amazing at a decent price tag. Okay, I was lucky, I thought. On the evening of the same day I tried another place and had the same great experience.
I had even more great eats. So is the quality of Polish restaurants on average better than the quality of Bavarian ones? Subjectively… yes, it seemed so. But what does data science say? Are there differences in average ratings and numbers of ratings between regions? To answer this question, I used the TripAdvisor Restaurants Info for 31 Euro-Cities dataset from Kaggle, which contains the TripAdvisor reviews and ratings for 111,927 restaurants in 31 European cities.
## Problem Definition / Research Questions:
- RQ 1: Are there differences in average ratings and number of ratings between cities?
- RQ 2: Are there more vegetarian-friendly cities and if so, are they locally concentrated?
- RQ 3: Is local cuisine rated better than foreign cuisine and if so, is there a difference between cities?
```
# Import Statements
import pandas as pd
import numpy as np
# Load in dataset
data_raw = pd.read_csv("TA_restaurants_curated.csv")
```
## 2.: Data Understanding according to CRISP-DM
In the following, we have a look at the raw data of the dataset.
```
# Having a first look at the data
data_raw.head()
data_raw.describe()
# Which cities are included in the dataset?
cities = data_raw.City.unique()
cities
# Manually add the name of the local cuisines into an array (needed for RQ3)
local_cuisine = ['Dutch', 'Greek', 'Spanish', 'German', 'Eastern European', 'Belgian', 'Hungarian', 'Danish', 'Irish', 'Scottish', 'Swiss', 'German', 'Scandinavian', 'Polish', 'Portuguese', 'Slovenian', 'British', 'European', 'French', 'Spanish', 'Italian', 'German', 'Portuguese', 'Norwegian', 'French', 'Czech', 'Italian', 'Swedish', 'Austrian', 'Polish', 'Swiss']
```
As I live in Munich, I want to have a closer look at the data for the city of Munich. So I will filter for the Munich data and take a first look at it.
```
# Function to return data for a specific city
def getRawData(city):
'''Returns the data for a specific city, which is given to the function via the city argument.'''
    data_raw_city = data_raw[(data_raw.City == city)]
return data_raw_city
# Filter for Munich data and have a first look
city = "Munich"
data_raw_city = getRawData(city)
data_raw_city.head(10)
data_raw_city.tail(10)
data_raw_city.describe()
```
### Dealing with missing data:
It can be seen that some restaurants, especially the last ones, don't have any ranking, rating, price range or reviews. How should we deal with that data? I have chosen to ignore those restaurants in the relevant questions. If, for example, the average rating of a city's restaurants is needed, I only use the restaurants that actually have a rating; the restaurants without a rating are ignored.
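A minimal sketch of this filtering step, using the `data_raw` frame loaded above:
```
# Keep only the restaurants that actually have a rating
rated_only = data_raw[data_raw['Rating'].notnull()]
print('{} of {} restaurants have a rating'.format(len(rated_only), len(data_raw)))
```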
## 3. and 4.: Data Preparation and Modeling according to CRISP-DM
### Calculate the data for RQ 1 - 3
In the following code, the data is first prepared by selecting only the relevant, non-NaN entries. Afterwards, the data is modelled by calculating the relevant statistical figures.
```
# Loop through entries for each city
# Create empty lists
num_entries = []
num_rated = []
perc_rated = []
avg_num_ratings = []
avg_rating = []
avg_veg_available = []
avg_loc_available = []
avg_loc_rating = []
avg_non_loc_rating = []
diff_loc_rating = []
total_local_rating = []
total_non_local_rating = []
# Initialize city number
n_city = -1
for city in cities:
n_city = n_city + 1
# Compute Data for RQ1
# Select data for one city
data_1city = data_raw[(data_raw.City == city)]
ratings = data_1city.Rating
data_1city_non_NaN = data_1city[data_1city['Rating'].notnull()]
ratings_non_NaN = data_1city_non_NaN.Rating
# Compute Data for RQ2 & RQ3
# Initialize lists for the current city
veg_available = []
loc_available = []
rating_local = []
rating_non_local = []
data_1city_stl_non_Nan = data_1city[data_1city['Cuisine Style'].notnull()]
# Iterate through every restaurant and check if they offer vegetarian/vegan food.
for i in range(len(data_1city_stl_non_Nan)):
veg_true = 0
styles = data_1city_stl_non_Nan.iloc[i, 3]
if 'Vegetarian' in styles:
veg_true = 1
#print('Veg Found')
elif 'Vegan' in styles:
veg_true = 1
veg_available.append(veg_true)
# For RQ3 check if the current restaurant offers local food and add the rating to the respective list.
loc_true = 0
if local_cuisine[n_city] in styles:
loc_true = 1
if ~np.isnan(data_1city_stl_non_Nan.iloc[i, 5]):
rating_local.append(data_1city_stl_non_Nan.iloc[i, 5])
total_local_rating.append(data_1city_stl_non_Nan.iloc[i, 5])
else:
if ~np.isnan(data_1city_stl_non_Nan.iloc[i, 5]):
rating_non_local.append(data_1city_stl_non_Nan.iloc[i, 5])
total_non_local_rating.append(data_1city_stl_non_Nan.iloc[i, 5])
loc_available.append(loc_true)
    # Add to lists / calculate aggregated values
num_entries.append(len(data_1city))
num_rated.append(len(data_1city_non_NaN))
perc_rated.append(len(data_1city_non_NaN) / len(data_1city))
avg_num_ratings.append(np.mean(data_1city_non_NaN['Number of Reviews']))
avg_rating.append(np.mean(data_1city_non_NaN['Rating']))
avg_veg_available.append(np.mean(veg_available))
avg_loc_available.append(np.mean(loc_available))
avg_loc_rating.append(np.mean(rating_local))
avg_non_loc_rating.append(np.mean(rating_non_local))
diff_loc_rating.append(np.mean(rating_local) - np.mean(rating_non_local))
# Create Dataframe
data_RQ1 = pd.DataFrame({'City': cities, 'Local_Cuisine': local_cuisine, 'Num_Entries': num_entries, 'Num_Rated': num_rated, 'Perc_Rated': perc_rated, 'Avg_Num_Ratings': avg_num_ratings, 'Avg_Rating': avg_rating, 'Avg_Veg_Av': avg_veg_available, 'Avg_Loc_Av': avg_loc_available, 'Avg_loc_rating': avg_loc_rating, 'Avg_non_loc_rating': avg_non_loc_rating, 'Diff_loc_rating': diff_loc_rating})
# Show the before computed data for RQ 1, 2 and 3.
data_RQ1.head(31)
```
## 5.: Evaluate the Results according to CRISP-DM
In the following, the relevant plots and statistical figures are shown for every research question in order to interpret the results. After the plots, the results are discussed.
### RQ 1: Are there differences in average ratings and number of ratings between cities?
```
data_RQ1.plot.bar(x='City', y='Avg_Rating', rot=0, figsize=(30,6))
print('Lowest Average Rating: {:.3f}'.format(min(data_RQ1.Avg_Rating)))
print('Highest Average Rating: {:.3f}'.format(max(data_RQ1.Avg_Rating)))
print('Difference from lowest to highest average Rating: {:.3f}'.format(max(data_RQ1.Avg_Rating) - min(data_RQ1.Avg_Rating)))
```
#### As can clearly be seen, there is a difference in average ratings by city. The highest average rating is 4.232 for the city of Rome, the lowest 3.797 for the city of Madrid. An interesting follow-up question would be whether the general quality of restaurants is better in Rome or whether reviewers simply give better ratings in Rome compared to Madrid. Another, more vague explanation would be that TripAdvisor is used more often by tourists than by locals, and that tourists rate Italian food better because they are more used to it, as it is better known around the world than Spanish food.
```
data_RQ1.plot.bar(x='City', y='Avg_Num_Ratings', rot=0, figsize=(30,6))
print('Lowest Average Number of Ratings: {:.3f}'.format(min(data_RQ1.Avg_Num_Ratings)))
print('Highest Average Number of Ratings: {:.3f}'.format(max(data_RQ1.Avg_Num_Ratings)))
print('Difference from lowest to highest number of Ratings: {:.3f}'.format(max(data_RQ1.Avg_Num_Ratings) - min(data_RQ1.Avg_Num_Ratings)))
```
#### Also with the number of ratings it can be noted that there definitely is a difference. The highest average number of ratings, 293.896, is (again) seen in the city of Rome, while Hamburg, with 45.942, has the lowest average number of ratings. That is a difference of close to 248, meaning Rome has about 6 times the average number of ratings of Hamburg, which can't be explained by the difference in inhabitants alone (2,872,800 for Rome and 1,841,179 for Hamburg, according to Wikipedia). Other explanations would be that certain regions are more rating-friendly, prefer TripAdvisor to other tools such as Google Maps, or that the probably higher number of tourists in Rome uses TripAdvisor more often.
### RQ 2: Are there more vegetarian-friendly cities and if so, are they locally concentrated?
```
data_RQ1.plot.bar(x='City', y='Avg_Veg_Av', rot=0, figsize=(30,6))
print('Lowest Average Number of Vegetarian/Vegan Available: {:.3f}'.format(min(data_RQ1.Avg_Veg_Av)))
print('Highest Average Number of Vegetarian/Vegan Available: {:.3f}'.format(max(data_RQ1.Avg_Veg_Av)))
print('Difference from lowest to highest number: {:.3f}'.format(max(data_RQ1.Avg_Veg_Av) - min(data_RQ1.Avg_Veg_Av)))
```
#### It seems that there are also great differences in the share of restaurants with a vegetarian/vegan option: Edinburgh has the highest share of restaurants that offer one, with 56.9%, while Lyon, with 12.9%, is a lot less veg-friendly. A clear local pattern cannot be distinguished.
### RQ 3: Is local cuisine rated better than foreign cusine and if so, is there a difference between cities?
```
data_RQ1.plot.bar(x='City', y='Avg_Loc_Av', rot=0, figsize=(30,6))
data_RQ1.plot.bar(x='City', y='Avg_loc_rating', rot=0, figsize=(30,6))
data_RQ1.plot.bar(x='City', y='Avg_non_loc_rating', rot=0, figsize=(30,6))
data_RQ1.plot.bar(x='City', y='Diff_loc_rating', rot=0, figsize=(30,6))
print('Lowest Rating Difference: {:.3f}'.format(min(data_RQ1.Diff_loc_rating)))
print('Highest Rating Difference: {:.3f}'.format(max(data_RQ1.Diff_loc_rating)))
print('Average Total Rating Difference: {:.3f}'.format(np.mean(data_RQ1.Diff_loc_rating)))
print()
print('Total Local Ratings: {}'.format(len(total_local_rating)))
print('Total Local Rating Mean: {}'.format(np.mean(total_local_rating)))
print('Total Non-Local Ratings: {}'.format(len(total_non_local_rating)))
print('Total Non-Local Rating Mean: {}'.format(np.mean(total_non_local_rating)))
print('Total Non-Local Rating Mean Difference: {}'.format(np.mean(total_local_rating) - np.mean(total_non_local_rating)))
```
#### Although there is a difference, with local restaurants being rated better than restaurants not serving local food (aggregated difference 0.026 / total difference 0.0155), it is quite small and not necessarily statistically significant in general. Yet it is interesting to notice that for some cities the hypothesis holds: especially Copenhagen, Edinburgh, Helsinki, Ljubljana and Lyon show more pronounced differences with local restaurants being favored, while in cities like Barcelona, Berlin, Bratislava, Brussels and Prague local restaurants are rated lower; in the case of Bratislava the difference is greater than 0.2.
So, again, this can have multiple reasons. It is possible that people who use TripAdvisor, who are often tourists, prefer certain cuisines they are familiar with. It is also possible that certain local cuisines are "easier" for non-locals. Other explanations are conceivable.
|
github_jupyter
|