Lambda School Data Science
*Unit 4, Sprint 3, Module 1*
---
# Recurrent Neural Networks (RNNs) and Long Short Term Memory (LSTM) (Prepare)
<img src="https://media.giphy.com/media/l2JJu8U8SoHhQEnoQ/giphy.gif" width=480 height=356>
<br></br>
<br></br>
## Learning Objectives
- <a href="#p1">Part 1: </a>Describe Neural Networks used for modeling sequences
- <a href="#p2">Part 2: </a>Apply an LSTM to a text generation problem using Keras
## Overview
> "Yesterday's just a memory - tomorrow is never what it's supposed to be." -- Bob Dylan
Wish you could save [Time In A Bottle](https://www.youtube.com/watch?v=AnWWj6xOleY)? With statistics you can do the next best thing - understand how data varies over time (or any sequential order), and use the order/time dimension predictively.
A sequence is just any enumerated collection - order counts, and repetition is allowed. Python lists are a good elemental example - `[1, 2, 2, -1]` is a valid list, and is different from `[1, 2, -1, 2]`. The data structures we tend to use (e.g. NumPy arrays) are often built on this fundamental structure.
A time series is data where you have not just the order but some actual continuous marker for where they lie "in time" - this could be a date, a timestamp, [Unix time](https://en.wikipedia.org/wiki/Unix_time), or something else. All time series are also sequences, and for some techniques you may just consider their order and not "how far apart" the entries are (if you have particularly consistent data collected at regular intervals it may not matter).
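As a small illustration (using made-up values and plain Python lists), the same readings can be stored either as a bare sequence or as a time series by pairing each value with a Unix timestamp:
```
# A sequence: order matters and repetition is allowed
readings = [1, 2, 2, -1]

# A time series: the same values, each paired with a marker of where it sits in time
# (here hourly Unix timestamps, i.e. seconds since 1970-01-01 UTC)
series = [(1546300800, 1), (1546304400, 2), (1546308000, 2), (1546311600, -1)]

# Dropping the time markers recovers the plain sequence
assert [value for _, value in series] == readings
```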
# Neural Networks for Sequences (Learn)
## Overview
There's plenty more to "traditional" time series, but the latest and greatest technique for sequence data is recurrent neural networks. A recurrence relation in math is an equation that uses recursion to define a sequence - a famous example is the Fibonacci numbers:
$F_n = F_{n-1} + F_{n-2}$
For formal math you also need a base case $F_0=1, F_1=1$, and then the rest builds from there. But for neural networks what we're really talking about are loops:
*(Figure: a recurrent network, shown rolled up with its loop on the left and unrolled over time on the right.)*
The hidden layers have edges (output) going back to their own input - this loop means that for any time `t` the training is at least partly based on the output from time `t-1`. The entire network is being represented on the left, and you can unfold the network explicitly to see how it behaves at any given `t`.
Different units can have this "loop", but a particularly successful one is the long short-term memory unit (LSTM):
*(Figure: the internal structure of an LSTM unit, including the cell state $c_t$.)*
There's a lot going on here - in a nutshell, the calculus still works out and backpropagation can still be implemented. The advantage (and namesake) of LSTM is that it can generally put more weight on recent (short-term) events while not completely losing older (long-term) information.
After enough iterations, a plain recurrent network starts computing gradients for earlier time steps that are so small they are effectively zero - this is the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem), and it is what the LSTM addresses. Pay special attention to the cell state $c_t$ and how it passes through the unit to get an intuition for how this problem is solved.
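To make the loop concrete, here is a minimal NumPy sketch of a single recurrent (vanilla RNN) step applied over a short sequence; the sizes, weights, and inputs are random placeholders rather than a trained model. Each hidden state depends on the previous one, and the repeated multiplications by `Wh` inside `tanh` are also the mechanism by which gradients can shrink toward zero over many steps.
```
import numpy as np

n_inputs, n_hidden, T = 3, 4, 5               # toy sizes: input dim, hidden dim, sequence length
rng = np.random.default_rng(0)
Wx = rng.normal(size=(n_hidden, n_inputs))    # input-to-hidden weights
Wh = rng.normal(size=(n_hidden, n_hidden))    # hidden-to-hidden ("loop") weights
b = np.zeros(n_hidden)

xs = rng.normal(size=(T, n_inputs))           # a toy input sequence
h = np.zeros(n_hidden)                        # initial hidden state
for t in range(T):
    # the recurrence: the state at time t is computed from the state at time t-1
    h = np.tanh(Wx @ xs[t] + Wh @ h + b)
print(h)                                      # final hidden state after unrolling T steps
```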
So why are these cool? One particularly compelling application is actually not time series but language modeling - language is inherently ordered data (letters/words go one after another, and the order *matters*). [The Unreasonable Effectiveness of Recurrent Neural Networks](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) is a famous blog post on this topic and is well worth reading.
For our purposes, let's use TensorFlow and Keras to train RNNs with natural language. Resources:
- https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py
- https://keras.io/layers/recurrent/#lstm
- http://adventuresinmachinelearning.com/keras-lstm-tutorial/
Note that `tensorflow.contrib` [also has an implementation of RNN/LSTM](https://www.tensorflow.org/tutorials/sequences/recurrent), though `contrib` was removed in TensorFlow 2.x, so prefer the `tensorflow.keras` layers used above.
## Follow Along
Sequences come in many shapes and forms from stock prices to text. We'll focus on text, because modeling text as a sequence is a strength of Neural Networks. Let's start with a simple classification task using a TensorFlow tutorial.
### RNN/LSTM Sentiment Classification with Keras
```
'''
#Trains an LSTM model on the IMDB sentiment classification task.
The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
**Notes**
- RNNs are tricky. Choice of batch size is important,
choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
from what you see with CNNs/MLPs/etc.
'''
from __future__ import print_function
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import LSTM
from tensorflow.keras.datasets import imdb
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
x_train[0]
print('Pad Sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape: ', x_train.shape)
print('x_test shape: ', x_test.shape)
x_train[0]
y_train[0]
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
unicorns = model.fit(x_train, y_train,
batch_size=1024,
epochs=2,
validation_data=(x_test,y_test))
import matplotlib.pyplot as plt
# Plot training & validation loss values
plt.plot(unicorns.history['loss'])
plt.plot(unicorns.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show();
```
## Challenge
You will be expected to use a Keras LSTM for a classification task on the *Sprint Challenge*.
# LSTM Text generation with Keras (Learn)
## Overview
What else can we do with LSTMs? Since we're analyzing the *sequence*, we can do more than classify - we can *generate* text. I've pulled some news stories using [newspaper](https://github.com/codelucas/newspaper/).
This example is drawn from the Keras [documentation](https://keras.io/examples/lstm_text_generation/).
```
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.optimizers import RMSprop
import numpy as np
import random
import sys
import os
data_files = os.listdir('./articles')
# Read in Data
data = []
for file in data_files:
    if file[-3:] == 'txt':
        with open(f'./articles/{file}', 'r', encoding='utf-8') as f:
            data.append(f.read())
len(data)
data[-1]
# Encode Data as Chars
# Gather all text
# Why? 1. See all possible characters 2. For training / splitting later
text = " ".join(data)
# Unique Characters
chars = list(set(text))
# Lookup Tables
char_int = {c:i for i, c in enumerate(chars)}
int_char = {i:c for i, c in enumerate(chars)}
len(chars)
# Create the sequence data
maxlen = 40
step = 5
encoded = [char_int[c] for c in text]
sequences = [] # Each element is 40 chars long
next_char = [] # One element for each sequence
for i in range(0, len(encoded) - maxlen, step):
    sequences.append(encoded[i : i + maxlen])
    next_char.append(encoded[i + maxlen])
print('sequences: ', len(sequences))
sequences[0]
# Create x & y
x = np.zeros((len(sequences), maxlen, len(chars)), dtype=bool)  # np.bool is deprecated; plain bool behaves the same
y = np.zeros((len(sequences), len(chars)), dtype=bool)
for i, sequence in enumerate(sequences):
    for t, char in enumerate(sequence):
        x[i, t, char] = 1
    y[i, next_char[i]] = 1
x.shape
data[0][:45]
x[0][1]
int_char[78]
y.shape
# build the model: a single LSTM
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars)), dropout=.2))
model.add(Dense(len(chars), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
def sample(preds):
    # helper function to sample an index from a probability array
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / 1
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
def on_epoch_end(epoch, _):
    # Function invoked at end of each epoch. Prints generated text.
    print()
    print('----- Generating text after Epoch: %d' % epoch)
    start_index = random.randint(0, len(text) - maxlen - 1)
    generated = ''
    sentence = text[start_index: start_index + maxlen]
    generated += sentence
    print('----- Generating with seed: "' + sentence + '"')
    sys.stdout.write(generated)
    for i in range(400):
        x_pred = np.zeros((1, maxlen, len(chars)))
        for t, char in enumerate(sentence):
            x_pred[0, t, char_int[char]] = 1
        preds = model.predict(x_pred, verbose=0)[0]
        next_index = sample(preds)
        next_char = int_char[next_index]
        sentence = sentence[1:] + next_char
        sys.stdout.write(next_char)
        sys.stdout.flush()
    print()
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
# fit the model
model.fit(x, y,
batch_size=1024,
epochs=10,
callbacks=[print_callback])
```
## Challenge
You will be expected to use a Keras LSTM to generate text on today's assignment.
# Review
- <a href="#p1">Part 1: </a>Describe Neural Networks used for modeling sequences
* Sequence Problems:
- Time Series (like Stock Prices, Weather, etc.)
- Text Classification
- Text Generation
- And many more! :D
* LSTMs are generally preferred over plain (vanilla) RNNs for most problems
* LSTM models typically use a single hidden LSTM layer, although other architectures are possible
* Keras has LSTMs/RNN layer types implemented nicely
- <a href="#p2">Part 2: </a>Apply an LSTM to a text generation problem using Keras
* Shape of input data is very important (see the short shape sketch after this list)
* Can take a while to train
* You can use it to write movie scripts. :P
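To make the point about input shape concrete, here is a minimal sketch (the sizes are toy numbers, not from the lesson's data) of the 3-D input that Keras recurrent layers expect, `(batch, timesteps, features)`:
```
import numpy as np
from tensorflow.keras.layers import LSTM

batch, timesteps, features = 32, 40, 57        # toy sizes
x = np.random.rand(batch, timesteps, features).astype("float32")

# An LSTM layer consumes (batch, timesteps, features) and, by default,
# returns only the last hidden state, of shape (batch, units)
out = LSTM(128)(x)
print(out.shape)                               # (32, 128)
```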
# Solving Vehicle Routing Problem with Amazon SageMaker RL
In this notebook, you will see how reinforcement learning can be used to solve a Vehicle Routing Problem (VRP). Given one or more vehicles and a set of locations, VRP tries to find the route that reaches all locations with minimal operational cost. This problem has been of great interest for decades, as it has wide application in logistics, parcel delivery, and more. It has many variants that characterize different constraints or features; an online and stochastic version is considered in this example.
## Problem Statement
Consider a delivery driver using a phone app; orders arrive on the app in a dynamic manner. Each order has a delivery charge known to the driver at the time of order creation, and it is assigned to a location in the city. The city is a grid map and consists of mutually exclusive zones that generate orders at different rates and rewards. The orders have a delivery time limit; the timer starts at order creation and is the same for all orders. The driver has to accept an order and pick up the package from a given location prior to delivery. The vehicle has a capacity limit, but the driver can accept unlimited orders and plan their route accordingly. The driver’s goal is to maximize the total net reward.
This formulation is known as stochastic and dynamic capacitated vehicle routing problem with pickup and delivery, time windows and service guarantee.
<img src="images/rl_vehicle_routing.png" width="500" align="center"/>
At each time step, the RL agent is aware of the following information:
- Pickup location
- Driver info: driver's position, capacity left
- Order info: order's location, order's status (open, accepted, picked up or delivered), time elapsed since each order’s generation, order dollar value
At each time step, the RL agent can take one of the following actions:
- Accept an order
- Pick up an accepted order
- Go to a customer's node for delivery
- Head to a specific pickup location
- Wait and stay unmoved
During training, the agent cannot perform the following invalid actions: (i) pick up an order when its remaining capacity is 0; (ii) pick up an order that is not yet accepted; (iii) deliver an order that is not yet picked up.
At each time step, the reward is defined as the difference between total value of all delivered orders and cost:
- Total value of all delivered orders is divided into 3 equal parts -- when the order gets accepted, picked up, and delivered respectively
- Cost includes time cost and moving cost. Both are per time step
- A large penalty is imposed if the agent accepts an order but fails to deliver within the delivery limit
## Using Amazon SageMaker for RL
Amazon SageMaker allows you to train your RL agents in cloud machines using docker containers. You do not have to worry about setting up your machines with the RL toolkits and deep learning frameworks. You can easily switch between many different machines setup for you, including powerful GPU machines that give a big speedup. You can also choose to use multiple machines in a cluster to further speedup training, often necessary for production level loads.
## Pre-requisites
### Roles and permissions
To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
```
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import HTML
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object, wait_for_training_job_to_complete
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
```
### Setup S3 bucket
Set up the linkage and authentication to the S3 bucket that you want to use for checkpoints and metadata.
```
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket)
print("S3 bucket path: {}".format(s3_output_path))
```
### Define Variables
We define variables such as the job prefix for the training jobs *and the image path for the container (only when this is BYOC).*
```
# create unique job name
job_name_prefix = 'rl-vehicle-routing'
```
### Configure settings
You can run your RL training jobs on a SageMaker notebook instance or on your own machine. In both of these scenarios, you can run the following in either `local` or `SageMaker` modes. The `local` mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`.
```
local_mode = False
if local_mode:
    instance_type = 'local'
else:
    # If on SageMaker, pick the instance type
    instance_type = "ml.m5.xlarge"
```
### Create an IAM role
Either get the execution role when running from a SageMaker notebook instance `role = sagemaker.get_execution_role()` or, when running from local notebook instance, use utils method `role = get_execution_role()` to create an execution role.
```
try:
    role = sagemaker.get_execution_role()
except:
    role = get_execution_role()
print("Using IAM role arn: {}".format(role))
```
### Install docker for `local` mode
In order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker or docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install the dependencies.
Note that you can only run a single local notebook at a time.
```
# only run from SageMaker notebook instance
if local_mode:
    !/bin/bash ./common/setup.sh
```
## Set up the environment
The environment is defined in a Python file called `vrp_env.py`, which is in the `src` directory.
The environment also implements the `init()`, `step()` and `reset()` functions that describe how the environment behaves. This is consistent with the OpenAI Gym interface for defining an environment; a rough skeleton of this interface is sketched after the list below.
1. init() - initialize the environment in a pre-defined state
2. step() - take an action on the environment
3. reset() - restart the environment on a new episode
4. [if applicable] render() - get a rendered image of the environment in its current state
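The actual environment file is not reproduced here, but as a rough, hypothetical sketch of what a Gym-style environment with this interface looks like (the class name, spaces, and dynamics below are illustrative placeholders, not the real VRP environment; `init()` above corresponds to Python's `__init__`):
```
import gym
import numpy as np
from gym import spaces

class ToyEnv(gym.Env):
    """Minimal Gym-style environment skeleton (illustrative only)."""

    def __init__(self):
        # initialize the environment in a pre-defined state
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)
        self.state = np.zeros(4, dtype=np.float32)

    def reset(self):
        # restart the environment on a new episode and return the first observation
        self.state = self.observation_space.sample()
        return self.state

    def step(self, action):
        # take an action and return (observation, reward, done, info)
        self.state = self.observation_space.sample()
        reward = 1.0 if action == 1 else 0.0   # placeholder reward
        done = bool(np.random.rand() < 0.05)   # placeholder termination
        return self.state, reward, done, {}
```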
```
# uncomment the following line to see the environment
# !pygmentize src/vrp_env.py
```
## Write the training code
The training code is written in the file `train_vehicle_routing_problem.py`, which is also uploaded in the `src` directory.
First import the environment files and the preset files, and then define the main() function.
```
!pygmentize src/train_vehicle_routing_problem.py
```
## Train the RL model using the Python SDK Script mode
If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The [RLEstimator](https://sagemaker.readthedocs.io/en/stable/sagemaker.rl.html) is used for training RL jobs.
1. Specify the source directory where the gym environment and training code is uploaded.
2. Specify the entry point as the training code
3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.
4. Define the training parameters such as the instance count, base job name, and S3 path for output.
5. Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET or the RLRAY_PRESET can be used to specify the RL agent algorithm you want to use.
6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
### Define Metric
A list of dictionaries that defines the metric(s) used to evaluate the training jobs. Each dictionary contains two keys: ‘Name’ for the name of the metric, and ‘Regex’ for the regular expression used to extract the metric from the logs.
```
metric_definitions = [{'Name': 'episode_reward_mean',
'Regex': 'episode_reward_mean: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'episode_reward_max',
'Regex': 'episode_reward_max: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'episode_reward_min',
'Regex': 'episode_reward_min: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'}]
```
### Define Estimator
This Estimator executes an RLEstimator script in a managed Reinforcement Learning (RL) execution environment within a SageMaker Training Job. The managed RL environment is an Amazon-built Docker container that executes functions defined in the supplied entry_point Python script.
```
train_entry_point = "train_vehicle_routing_problem.py"
train_job_max_duration_in_seconds = 60 * 15
estimator = RLEstimator(entry_point= train_entry_point,
source_dir="src",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.RAY,
toolkit_version='0.6.5',
framework=RLFramework.TENSORFLOW,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
metric_definitions=metric_definitions,
max_run=train_job_max_duration_in_seconds,
hyperparameters={}
)
estimator.fit(wait=local_mode)
job_name = estimator.latest_training_job.job_name
print("Training job: %s" % job_name)
```
## Visualization
RL training can take a long time. So while it's running there are a variety of ways we can track progress of the running training job. Some intermediate output gets saved to S3 during training, so we'll set up to capture that.
```
s3_url = "s3://{}/{}".format(s3_bucket,job_name)
intermediate_folder_key = "{}/output/intermediate/".format(job_name)
intermediate_url = "s3://{}/{}training/".format(s3_bucket, intermediate_folder_key)
print("S3 job path: {}".format(s3_url))
print("Intermediate folder path: {}".format(intermediate_url))
```
### Plot metrics for training job
We can see the reward metric of the training as it's running, using algorithm metrics that are recorded in CloudWatch metrics. We can plot this to see the performance of the model over time.
```
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
if not local_mode:
    wait_for_training_job_to_complete(job_name)  # Wait for the job to finish
    df = TrainingJobAnalytics(job_name, ['episode_reward_mean']).dataframe()
    df_min = TrainingJobAnalytics(job_name, ['episode_reward_min']).dataframe()
    df_max = TrainingJobAnalytics(job_name, ['episode_reward_max']).dataframe()
    df['rl_reward_mean'] = df['value']
    df['rl_reward_min'] = df_min['value']
    df['rl_reward_max'] = df_max['value']
    num_metrics = len(df)
    if num_metrics == 0:
        print("No algorithm metrics found in CloudWatch")
    else:
        plt = df.plot(x='timestamp', y=['rl_reward_mean'], figsize=(18,6), fontsize=18, legend=True, style='-', color=['b','r','g'])
        plt.fill_between(df.timestamp, df.rl_reward_min, df.rl_reward_max, color='b', alpha=0.2)
        plt.set_ylabel('Mean reward per episode', fontsize=20)
        plt.set_xlabel('Training time (s)', fontsize=20)
        plt.legend(loc=4, prop={'size': 20})
else:
    print("Can't plot metrics in local mode.")
```
#### Monitor training progress
You can repeatedly run the visualization cells to get the latest metrics as the training job proceeds.
## Training Results
You can let the training job run longer by specifying `train_max_run` in `RLEstimator`. The figure below illustrates the reward function of the RL policy vs. that of a MIP baseline. The experiments are conducted on a p3.2x instance. For more details on the environment setup and how different parameters are set, please refer to [ORL: Reinforcement Learning Benchmarks for Online Stochastic Optimization
Problems](https://arxiv.org/pdf/1911.10641.pdf).
<img src="images/rl_vehicle_routing_result.png" width="800" align="center"/>
# Practice Assignment: Understanding Distributions Through Sampling
** *This assignment is optional, and I encourage you to share your solutions with me and your peers in the discussion forums!* **
To complete this assignment, create a code cell that:
* Creates a number of subplots using the `pyplot subplots` or `matplotlib gridspec` functionality.
* Creates an animation, pulling between 100 and 1000 samples from each of the random variables (`x1`, `x2`, `x3`, `x4`) for each plot and plotting this as we did in the lecture on animation.
* **Bonus:** Go above and beyond and "wow" your classmates (and me!) by looking into matplotlib widgets and adding a widget which allows for parameterization of the distributions behind the sampling animations.
Tips:
* Before you start, think about the different ways you can create this visualization to be as interesting and effective as possible.
* Take a look at the histograms below to get an idea of what the random variables look like, as well as their positioning with respect to one another. This is just a guide, so be creative in how you lay things out!
* Try to keep the length of your animation reasonable (roughly between 10 and 30 seconds).
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
# generate 4 random variables from the normal, gamma, exponential, and uniform distributions
x1 = np.random.normal(-2.5, 1, 10000)
x2 = np.random.gamma(2, 1.5, 10000)
x3 = np.random.exponential(2, 10000)+7
x4 = np.random.uniform(14,20, 10000)
# plot the histograms
plt.figure(figsize=(9,3))
plt.hist(x1, normed=True, bins=20, alpha=0.5)
plt.hist(x2, normed=True, bins=20, alpha=0.5)
plt.hist(x3, normed=True, bins=20, alpha=0.5)
plt.hist(x4, normed=True, bins=20, alpha=0.5);
plt.axis([-7,21,0,0.6])
plt.text(x1.mean()-1.5, 0.5, 'x1\nNormal')
plt.text(x2.mean()-1.5, 0.5, 'x2\nGamma')
plt.text(x3.mean()-1.5, 0.5, 'x3\nExponential')
plt.text(x4.mean()-1.5, 0.5, 'x4\nUniform')
fig, (ax1,ax2,ax3,ax4) = plt.subplots(1, 4, sharey=True, figsize=(9,2.5))
# plot the histograms
ax1.hist(x1, normed=True, bins=20, alpha=0.5, color='purple')
ax2.hist(x2, normed=True, bins=20, alpha=0.5, color='orange')
ax3.hist(x3, normed=True, bins=20, alpha=0.5, color='green')
ax4.hist(x4, normed=True, bins=20, alpha=0.5, color='skyblue');
ax1.text(x1.mean()-1.5, 0.5, 'x1\nNormal')
ax2.text(x2.mean()-1.5, 0.5, 'x2\nGamma')
ax3.text(x3.mean()-1.5, 0.5, 'x3\nExponential')
ax4.text(x4.mean()-1.5, 0.5, 'x4\nUniform')
# Adjust the subplot layout, because the logit one may take more space
# than usual, due to y-tick labels like "1 - 10^{-3}"
plt.subplots_adjust(top=0.7, bottom=0.08, left=0.10, right=0.95, wspace=0.35)
plt.show()
import matplotlib.animation as animation
import numpy as np
n = 10000
fig, (ax1,ax2,ax3,ax4) = plt.subplots(1, 4, sharey=True, figsize=(9,2.5))
plt.subplots_adjust(top=0.75, bottom=0.08, left=0.10, right=0.95, wspace=0.4)
def updateData(curr):
    if curr <= 2:
        return
    for ax in (ax1, ax2, ax3, ax4):
        ax.clear()
    ax1.hist(x1[:curr], normed=True, bins=np.linspace(-6,1, num=21), alpha=0.5, color='purple')
    ax2.hist(x2[:curr], normed=True, bins=np.linspace(0,15,num=21), alpha=0.5, color='orange')
    ax3.hist(x3[:curr], normed=True, bins=np.linspace(7,20,num=21), alpha=0.5, color='green')
    ax4.hist(x4[:curr], normed=True, bins=np.linspace(14,20,num=21), alpha=0.5, color='skyblue')
    ax1.set_title('x1\nNormal')
    ax2.set_title('x2\nGamma')
    ax3.set_title('x3\nExponential')
    ax4.set_title('x4\nUniform')
simulation = animation.FuncAnimation(fig, updateData, interval=50, repeat=False)
plt.show()
```
# What?
- Numpy!
- https://numpy.org/
- https://numpy.org/devdocs/user/quickstart.html
## So What?
- Numpy is one of the main reasons why Python is so powerful and popular for scientific computing
- Super fast: NumPy arrays are implemented in C, which makes operations on them very fast
- Numpy is the most popular linear algebra library for Python
- Provides loop-like behavior w/o the overhead of loops or list comprehensions (vectorized operations)
- Provides list + loop + conditional behavior for filtering arrays
## Now What?
- Start working with numpy arrays! `np.array([1, 2, 3])` to create a numpy array!
- We'll start using built-in numpy functions all the time:
- min, max, mean, sum, std
- np.median,
- Learn to use some vectorized operations
- Learn how to create arrays of booleans to filter results
```
import numpy as np
%%timeit
[x ** 2 for x in range(1, 1_000_000)]
%%timeit
np.arange(1, 1_000_000) ** 2
370 / 3.04
```
C is "closer to the metal" than Python
Assembly is closer to the metal than C
and Processor instruction sets == are the metal!
```
# Let's make our first numpy array!
x = [1, 2, 3, 4, 5]
x = np.array(x)
x, type(x)
# shape tells us the shape of our n-dimensional array
x.shape
matrix = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 0, 9]
])
matrix
type(matrix)
# List index syntax works on numpy arrays!
matrix[0]
matrix[0][0]
matrix[0, 0]
first_row = matrix[0]
first_row
first_element = first_row[0]
first_element
x, x[1:3]
a = np.array(range(1, 100))
a
a.sum(), a.mean(), a.min(), a.max(), a.std()
# median is different (and there's some other functions that work similarly)
np.median(a)
b = np.array([2, 3, 4, 5])
should_include_elements = np.array([False, True, False, True])
b[should_include_elements]
should_include_elements = np.array([False, True, True])
matrix[should_include_elements]
```
## Arrays of Booleans == Beating Heart of Filtering/Transforming Arrays
- This is how we can do loop-like stuff w/o loops
- This "spell" is called "boolean masking", and folks may also call it "array filtering" or "indexing"
```
x = np.array([1, 2, 3, 4, 5])
x == 3
x < -9
x > 3
x % 2 == 0
only_threes = x == 3
x[only_threes]
# We don't need the extra variable, however, we can do the following:
# I read this code almost like SQL in my head:
# Select X where X is equal to 3
x[x == 3]
# Select X where X is less than zero
x[x < 0]
# Select x where x is greater than 3
x[x > 3]
# Select x where x divided by two leaves no remainder
evens_from_x = x[x % 2 == 0]
evens_from_x
# In the Python admissions test, there was a question called "remove_evens" where you write a function that removes evens
# In base Python, this is a loop w/ a conditional and another operation to append to a list, or a list comprehension
def remove_evens(x):
    x = np.array(x)
    return x[x % 2 != 0]  # keep only the values that are not even
y = remove_evens([2, 3, 34, 5, 6, 24, 442, 24, 12, 3, 24, 3, 3, 23, 23, 23, 10])
y
x = np.array([1, 2, "3", 4])
x
```
## Intro to Vectorization
- Loop like behavior on a array w/o the loop
```
x = np.zeros(10)
x
x + 1
# Let's make an array of random integers
# start is inclusive, end is exclusive
# So the following line is like rolling a 20 sided die
x = np.random.randint(1, 21)
x
# 3rd argument is the size (or the number of random numbers)
x = np.random.randint(1, 21, 10)
x
# Let's make Python fall on its face
a = [1, 2, 3]
# In Python, how would we add one to every item on this list?
[n + 1 for n in a]
x + 1
# elementwise division
x / 10
# Vector addition
x + x
# scalar-vector multiplication along w/ vector subtraction
x - 2*x
# There's many more linear algebra features
np.dot(x, x)
np.linalg.norm(x)
original_array = [1, 2, 3, 4, 5]
array_with_one_added = []
for n in original_array:
    array_with_one_added.append(n + 1)
array_with_one_added
np.array(original_array) + 1
beatles = np.array(["Ringo", "George", "Paul", "John"])
beatles == "Ringo"
beatles[beatles != "Ringo"]
beatles[beatles == "Ringo"]
matrix * 2
# Inner product
np.dot(x, x)
np.random.randn(10)
np.random.randint(1, 7, 6)
```
# Introduction to coordinate descent: theory and applications
## Problem statement and the key assumption
$$
\min_{x \in \mathbb{R}^n} f(x)
$$
- $f$ is a convex function
- If $f(x + \varepsilon e_i) \geq f(x)$ holds along every coordinate, does that mean $x$ is a minimum point?
- If $f$ is smooth, then yes, by the first-order criterion $f'(x) = 0$
- If $f$ is nonsmooth, then no, because the condition can hold at "corner" points that are not minimum points
- If $f$ is nonsmooth but composite with a separable nonsmooth part, that is,
$$
f(x) = g(x) + \sum_{i=1}^n h_i(x_i),
$$
then yes. Why?
- For any $y$ and any $x$ at which the coordinate-wise optimality condition holds, we have
$$
f(y) - f(x) = g(y) - g(x) + \sum_{i=1}^n (h_i(y_i) - h_i(x_i)) \geq \langle g'(x), y - x \rangle + \sum_{i=1}^n (h_i(y_i) - h_i(x_i)) = \sum_{i=1}^n [g'_i(x)(y_i - x_i) + h_i(y_i) - h_i(x_i)] \geq 0
$$
- Hence, for functions of this form the minimum can be searched for coordinate by coordinate, and the result is still a minimum point
### Computational remarks
- With a sequential sweep, the update of coordinate $i+1$ uses the already-updated values of coordinates $1, 2, \ldots, i$
- Recall the difference between the Jacobi and Gauss-Seidel methods for solving linear systems!
- The order in which the coordinates are chosen matters
- The cost of updating the full vector is $\sim$ the cost of updating its $n$ components one by one, i.e. coordinate-wise updates of the decision variable do not require operating with the full gradient!
## A simple example
- $f(x) = \frac12 \|Ax - b\|_2^2$, where $A \in \mathbb{R}^{m \times n}$ and $m \gg n$
- Pick some coordinate $i$
- Then the coordinate-wise optimality condition is $[f'(x)]_i = A^{\top}_i(Ax - b) = A^{\top}_i(A_{-i} x_{-i} + A_ix_i - b) = 0$
- Hence $x_i = \dfrac{A^{\top}_i (b - A_{-i} x_{-i})}{\|A_i\|_2^2}$, which costs $O(nm)$, comparable to computing the full gradient. Can we do better?
- Yes, we can! To see this, note that
$$
x_i = \dfrac{A^{\top}_i (b - A_{-i} x_{-i})}{\|A_i\|_2^2} = \dfrac{A^{\top}_i (b - Ax + A_{i}x_i)}{\|A_i\|_2^2} = x_i - \dfrac{A^{\top}_i r}{\|A_i\|_2^2},
$$
where $r = Ax - b$
- Updating $r$ costs $\mathcal{O}(m)$, and computing $A^{\top}_i r$ costs $\mathcal{O}(m)$
- As a result, updating a single coordinate costs $\mathcal{O}(m)$, so updating all $n$ coordinates is comparable to computing the full gradient, $\mathcal{O}(mn)$
## How should the coordinates be chosen?
- Cyclically, from 1 to $n$
- By a random permutation
- The Gauss-Southwell rule: $i = \arg\max_k |f'_k(x)|$, potentially more expensive than the other options
```
import numpy as np
import matplotlib.pyplot as plt
plt.rc("text", usetex=True)
m = 1000
n = 200
A = np.random.randn(m, n)
print(np.linalg.cond(A.T @ A))
x_true = np.random.randn(n)
b = A @ x_true + 1e-3 * np.random.randn(m)
def coordinate_descent_lsq(x0, num_iter, sampler="sequential"):
    conv = [x0]
    x = x0.copy()
    r = A @ x0 - b
    grad = A.T @ r
    if sampler == "sequential" or sampler == "GS":
        perm = np.arange(x.shape[0])
    elif sampler == "random":
        perm = np.random.permutation(x.shape[0])
    else:
        raise ValueError("Unknown sampler!")
    for i in range(num_iter):
        for idx in perm:
            if sampler == "GS":
                idx = np.argmax(np.abs(grad))
            new_x_idx = x[idx] - A[:, idx] @ r / (A[:, idx] @ A[:, idx])
            r = r + A[:, idx] * (new_x_idx - x[idx])
            if sampler == "GS":
                grad = A.T @ r
            x[idx] = new_x_idx
        if sampler == "random":
            perm = np.random.permutation(x.shape[0])
        conv.append(x.copy())
        # print(np.linalg.norm(A @ x - b))
    return x, conv
x0 = np.random.randn(n)
num_iter = 100
x_cd_seq, conv_cd_seq = coordinate_descent_lsq(x0, num_iter)
x_cd_rand, conv_cd_rand = coordinate_descent_lsq(x0, num_iter, "random")
x_cd_gs, conv_cd_gs = coordinate_descent_lsq(x0, num_iter, "GS")
# !pip install git+https://github.com/amkatrutsa/liboptpy
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
def f(x):
    res = A @ x - b
    return 0.5 * res @ res
def gradf(x):
    res = A @ x - b
    return A.T @ res
L = np.max(np.linalg.eigvalsh(A.T @ A))
gd = methods.fo.GradientDescent(f, gradf, ss.ConstantStepSize(1 / L))
x_gd = gd.solve(x0=x0, max_iter=num_iter)
acc_gd = methods.fo.AcceleratedGD(f, gradf, ss.ConstantStepSize(1 / L))
x_accgd = acc_gd.solve(x0=x0, max_iter=num_iter)
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_cd_rand], label="Random")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_cd_seq], label="Sequential")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_cd_gs], label="GS")
plt.semilogy([np.linalg.norm(A @ x - b) for x in gd.get_convergence()], label="GD")
plt.semilogy([np.linalg.norm(A @ x - b) for x in acc_gd.get_convergence()], label="Nesterov")
plt.legend(fontsize=20)
plt.xlabel("Number of iterations", fontsize=24)
plt.ylabel("$\|Ax - b\|_2$", fontsize=24)
plt.grid(True)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.show()
plt.semilogy([np.linalg.norm(x - x_true) for x in conv_cd_rand], label="Random")
plt.semilogy([np.linalg.norm(x - x_true) for x in conv_cd_seq], label="Sequential")
plt.semilogy([np.linalg.norm(x - x_true) for x in conv_cd_gs], label="GS")
plt.semilogy([np.linalg.norm(x - x_true) for x in gd.get_convergence()], label="GD")
plt.semilogy([np.linalg.norm(x - x_true) for x in acc_gd.get_convergence()], label="Nesterov")
plt.legend(fontsize=20)
plt.xlabel("Number of iterations", fontsize=24)
plt.ylabel("$\|x - x^*\|_2$", fontsize=24)
plt.grid(True)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.show()
```
## Convergence
- Sublinear for convex smooth functions with Lipschitz gradient
- Linear for strongly convex functions
- A direct analogy with gradient descent
- But with many practical subtleties; the typical rates are sketched after this list
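As a rough sketch of the standard rates for randomized coordinate descent with uniform coordinate sampling (constants simplified; here $L_{\max}$ denotes the largest coordinate-wise Lipschitz constant of the gradient, $R$ bounds the distance from $x_0$ to a minimizer, and $\mu$ is the strong convexity parameter):
$$
\mathbb{E}[f(x_k)] - f^* = O\left(\frac{n L_{\max} R^2}{k}\right) \quad \text{(convex case)}, \qquad
\mathbb{E}[f(x_k)] - f^* \leq \left(1 - \frac{\mu}{n L_{\max}}\right)^k \left(f(x_0) - f^*\right) \quad \text{(strongly convex case)}
$$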
## Typical applications
- Lasso (again); a sketch of the Lasso coordinate update follows this list
- The SMO method for training SVMs, which is block coordinate descent with block size 2
- Inference in graphical models
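For reference, here is a minimal sketch of the Lasso coordinate update (assuming the same least-squares notation as above plus an $\ell_1$ penalty with weight `lam`; the function names and arguments are illustrative, not part of the notebook). Each coordinate is minimized exactly, which amounts to the residual-based least-squares update followed by soft-thresholding.
```
import numpy as np

def soft_threshold(z, t):
    # prox of t*|.|: shrink z toward zero by t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coordinate_descent_lasso(A, b, lam, num_iter=100):
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                        # maintain the residual, as in the least-squares example
    col_sq_norms = (A * A).sum(axis=0)
    for _ in range(num_iter):
        for i in range(n):
            # exact minimization of 0.5*||Ax - b||^2 + lam*||x||_1 over coordinate i
            z = x[i] - A[:, i] @ r / col_sq_norms[i]
            x_new = soft_threshold(z, lam / col_sq_norms[i])
            r += A[:, i] * (x_new - x[i])
            x[i] = x_new
    return x
```
With `lam = 0` this reduces to the least-squares coordinate update implemented earlier in the notebook.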
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime as dt
import tensorflow as tf
from tensorflow import keras
# Load the TensorBoard notebook extension
# %load_ext tensorboard
# Not required in vs code
# https://devblogs.microsoft.com/python/python-in-visual-studio-code-february-2021-release/
data= pd.read_csv("../DATA/fake_reg.csv")
data.head()
data.info()
data.isnull().sum()
data.describe()
data.plot(kind="hist", subplots=True)
sns.scatterplot(x="feature1", y="price", data=data)
sns.scatterplot(x="feature2", y="price", data=data)
sns.heatmap(data.corr(),annot=True, mask=np.triu(data.corr()))
# Feature 2 is more highly correlated with price than feature 1
x= data.drop("price", axis=1)
y=data.price
x.shape, y.shape
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test= train_test_split(x,y,test_size=0.3, random_state=42)
from sklearn.preprocessing import MinMaxScaler
scaler= MinMaxScaler()
x_train= scaler.fit_transform(x_train)
x_test= scaler.transform(x_test)
def get_callback_logdir():
    curr_time = dt.datetime.now().strftime("%Y%m%d-%H%M%S")
    log_dir = "./logs/fit/" + curr_time
    callback = keras.callbacks.TensorBoard(log_dir=log_dir)
    return callback
model1= keras.Sequential([
keras.layers.Dense(units=2, activation=None),
keras.layers.Dense(units=1)
])
model1.compile(loss=keras.losses.mse, optimizer= keras.optimizers.SGD(learning_rate=0.001), metrics=["mse"])
model1_history= model1.fit(x_train,y_train, epochs=100, validation_split=0.2, callbacks=[get_callback_logdir()])
plt.plot(model1_history.history["mse"])
# changing the layer and adding Neurons
tf.random.set_seed(42)
model2= keras.Sequential([
keras.layers.Dense(units=4, activation="relu"),
keras.layers.Dense(units=4, activation="relu"),
keras.layers.Dense(units=2, activation="relu"),
keras.layers.Dense(units=1)
], name="model-2")
model2.compile(loss=keras.losses.mse, optimizer= keras.optimizers.Adam(learning_rate=0.01), metrics=["mse"])
model2_history= model2.fit(x_train,y_train, epochs=150, validation_split=0.23 ) #callbacks=[get_callback_logdir()]
plt.plot(model2_history.history["mse"])
model2.evaluate(x_test,y_test), model2.evaluate(x_train,y_train)
from sklearn.metrics import mean_squared_error as mse, r2_score as r2
preds= model2.predict(x_test)
mse(y_test, preds), r2(y_test, preds)
sns.kdeplot(y_test-preds.flat)
# preds.shape, y_test.shape
# Predicting new data
new_gem = [[998,1000]]
model2.predict(scaler.transform(new_gem))
# Seeing the model
# https://www.tensorflow.org/api_docs/python/tf/keras/utils/plot_model
from tensorflow.keras.utils import plot_model
plot_model(model2, show_shapes=True)
#saving the model
model2.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
# Loading the same model
from tensorflow.keras.models import load_model
later_model = load_model('my_model.h5')
later_model.predict(scaler.transform(new_gem))
# tensorboard --logdir logs/fit
# %load_ext tensorboard
# %tensorboard --logdir logs
```
# Time Series with Trend and Seasonality Data Generator
This notebook presents a function for generating time series data that exhibits trend and seasonality. The following code block defines the function. The time series data for each product can also be written to a *csv* file in a user-specified sub-folder.
```
def time_series_generator(products = 1,
periods = 12,
seasons = 4,
seasonal_likelihood = 0.25,
trend_likelihood = 0.75,
b_range = (5000, 20000),
m_range = (-100, 100),
noise_sd = 100,
save_to_files = False,
directory = ''):
'''
This function is able to generate time series data for a user-specified
number of products that includes both trend and seasonality. The function
returns a Pandas DataFrame object that includes all of the generated data.
    In addition, users may save the data for each product to a comma-separated
    file by setting the save_to_files argument to True. The
directory argument may be used to create a new directory for the data files.
Arguments
products: the number of products to generate time series for
periods: the length of the time series to generate
seasons: the number of seasons (the periods argument should be an integer
multiple of the seasons)
seasonal_likelihood: a floating point value between 0 and 1 that specifies
the probability that a time series includes seasonality
trend_likelihood: a floating point value between 0 and 1 that specifies
the probability that a time series includes trend
b_range: a tuple of two integers (low, high), where low specifies the minimum
      intercept value for the linear equation for trend and high specifies the
maximum intercept value for the linear equation for trend (value = m*period + b).
The b value for each time series is randomly generated between these values.
m_range: a tuple of two integers (low, high), where low specifies the minimum
slope value for the linear equation for trend and high specifies the
maximum slope value for the linear equation for trend (value = m*period + b).
The m value for each time series is randomly generated between these values.
noise_sd: the standard deviation for the random noise added to each period. It is
assumed that the noise is normally distributed with a mean of zero.
save_to_files: True or false to indicate whether or not the time series data should
be saved to csv files (one for each product)
directory: a string specifying the directory in which the data files will be
stored when save_to_files is set to True
Returns
df: a dataframe that includes the time series data for all products
Dependencies
This function depends on the NumPy and Pandas packages
Example:
>>> data = time_series_generator(products = 4,
periods = 6,
seasons = 2,
seasonal_likelihood = 0.75,
trend_likelihood = 0.75,
b_range = (5000, 20000),
m_range = (-100, 100),
noise_sd = 100,
save_to_files = False,
directory = '')
>>> print(data)
        Product  Period    Value
0 1 1 6002.0
1 1 2 5307.0
2 1 3 6472.0
3 1 4 5408.0
4 1 5 6679.0
5 1 6 5628.0
6 2 1 20023.0
7 2 2 15773.0
8 2 3 20043.0
9 2 4 15921.0
10 2 5 20040.0
11 2 6 15945.0
12 3 1 16315.0
13 3 2 16333.0
14 3 3 16441.0
15 3 4 16452.0
16 3 5 16386.0
17 3 6 16239.0
18 4 1 13977.0
19 4 2 13961.0
20 4 3 13866.0
21 4 4 13977.0
22 4 5 13916.0
23 4 6 13859.0
'''
import numpy as np
import pandas as pd
    if save_to_files:
        import os
        # Build an OS-independent path and create the directory if it does not exist
        full_path = os.path.join(os.getcwd(), directory)
        if not os.path.isdir(full_path):
            os.mkdir(full_path)
df = None
for product in range(products):
is_seasonal = False
if(np.random.rand() <= seasonal_likelihood):
is_seasonal = True
is_trending = False
if(np.random.rand() <= trend_likelihood):
is_trending = True
b = np.random.randint(low = b_range[0], high = b_range[1])
m = np.random.randint(low = m_range[0], high = m_range[1])
values = []
if is_trending and is_seasonal:
seasonal_indices = np.random.rand(seasons) * 0.20 + 0.90
seasonal_indices = seasonal_indices/seasonal_indices.mean()
for period in range(periods):
season = period % (seasons)
value = m*period + b
value = seasonal_indices[season] * value
value = value + np.random.normal(loc = 0, scale = noise_sd)
values.append(np.floor(value))
elif is_trending:
for period in range(periods):
value = m*period + b
value = value + np.random.normal(loc = 0, scale = noise_sd)
values.append(np.floor(value))
elif is_seasonal:
seasonal_indices = np.random.random(seasons) * 0.50 + 0.75
seasonal_indices = seasonal_indices/seasonal_indices.mean()
for period in range(periods):
season = period % (seasons)
value = b
value = seasonal_indices[season] * value
value = value + np.random.normal(loc = 0, scale = noise_sd)
values.append(np.floor(value))
else:
for period in range(periods):
value = b
value = value + np.random.normal(loc = 0, scale = noise_sd)
values.append(np.floor(value))
my_dict = {'Product':[product + 1]*periods,
'Period': [(i + 1) for i in range(periods)],
'Value': values}
if save_to_files:
filename = directory + '/' + f'Product_{product + 1}.csv'
pd.DataFrame.from_dict(my_dict).to_csv(filename, index = False)
        if df is None:
            df = pd.DataFrame.from_dict(my_dict)
        else:
            # DataFrame.append was removed in pandas 2.0; pd.concat is the supported replacement
            df = pd.concat([df, pd.DataFrame.from_dict(my_dict)], ignore_index = True)
return df
```
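As a quick sanity check, here is a minimal usage sketch (not part of the original notebook) that generates a few products and plots them; it assumes matplotlib is available and relies only on the Product, Period and Value columns returned above.
```
import matplotlib.pyplot as plt

# Generate 3 products over 24 periods with 4 seasons per cycle
demo = time_series_generator(products=3, periods=24, seasons=4,
                             seasonal_likelihood=0.75, trend_likelihood=0.75)

# Plot each product's series to eyeball trend and seasonality
fig, ax = plt.subplots(figsize=(10, 4))
for product, grp in demo.groupby('Product'):
    ax.plot(grp['Period'], grp['Value'], marker='o', label=f'Product {product}')
ax.set_xlabel('Period')
ax.set_ylabel('Value')
ax.legend()
plt.show()
```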
## Feature engineering
### How to merge these tables
1. Similar to a SQL inner join:
use the Store id to first join train, store_info, and each store's state;
then use weather.merge
```
Merge DataFrame objects by performing a database-style join operation by
columns or indexes.
```
```
# Left join on Store id, keeping the left table
store_train_join_info = train.merge(store,how='left',on='Store')
# Left join on State: state abbreviation and full name
store_state_name = store_states.merge(state_names,how='left',left_on='State',right_on='State')
# Join the weather data on the full state name
store_state_name.merge(weather,how='left',left_on='StateName',right_on='file')
```
The combined store info, state and weather table looks like:
```
store 1 state 2013-1-1 weather info
store 1 state 2013-1-2 weather info
store 1 state 2013-1-3 weather info
...
store 1 state 2015-1-1 weather info
store 2 state 2013-1-1 weather info
store 2 state 2013-1-2 weather info
store 2 state 2013-1-3 weather info
...
store 2 state 2015-1-1 weather info
```
Finally, join train and store_info_all on 'Store' and 'Date'.
2. googletrend is given per week and state. Join the trend data to the stores by `state and week`.
3. weather gives the daily conditions for each state, so join weather on `date and state`.
### Feature encoding
[Reference](https://blog.csdn.net/hshuihui/article/details/53259710)
How should categorical features be handled?
How should numerical features be handled?
The most familiar tool is pandas' get_dummies for one-hot encoding.
sklearn also offers many feature-encoding utilities.
The example below one-hot encodes both numerical and string columns with sklearn.
```
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import LabelBinarizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import *
import numpy as np
dataset = pd.DataFrame({'pet': ['cat', 'dog', 'dog', 'fish'],
'age': [4 , 6, 3, 3],
'salary':[4, 5, 1, 1],
                        'lvl':[4, 5, np.nan, 1]})
dataset.head()
```
#### get_dummies
get_dummies also works on numerical columns
```
dataset.head()
d = pd.get_dummies(dataset,columns=['age','salary','pet','lvl']);
d.head()
## NaN is treated as [0,0,...,0]
```
dummy_na=True also encodes NaN as its own indicator column
```
d = pd.get_dummies(dataset,columns=['age','salary','pet','lvl'],dummy_na=True);
d.head()
```
#### Usage in sklearn: the input is a 2D array and the output is also a 2D ndarray, which is much more cumbersome than pandas
* cannot handle string features
* cannot handle NaN
```
age= OneHotEncoder(sparse = False).fit_transform( dataset[['age']])
# pet= OneHotEncoder(sparse = False).fit_transform( dataset[['pet']])
salary= OneHotEncoder(sparse = False).fit_transform( dataset[['salary']])
# lvl = OneHotEncoder(sparse = False).fit_transform( dataset[['lvl']])
d = np.hstack( (age,salary))
```
LabelEncoder is for string labels: ['a','b','c'] ---> [0,1,2]
Apply LabelEncoder first and then OneHotEncoder to get the final result
```
numi_pet = LabelEncoder().fit_transform(dataset[['pet']])
print(numi_pet)
pet = OneHotEncoder(sparse = False).fit_transform( numi_pet.reshape(-1,1))
pet
```
Equivalent to using LabelBinarizer
```
pet = LabelBinarizer().fit_transform( numi_pet.reshape(-1,1))
pet
```
### proc_df
proc_df takes a data frame df and splits off the response variable, and
changes the df into an entirely numeric dataframe.
It splits off y and turns every remaining feature in the df into a numeric value, which is very convenient.
The features are not `one-hot` encoded here, because the next step is `entity embedding`.
```
df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yl = np.log(y)
```
df and y are the outputs after feature encoding; proc_df also returns a mapper (the DataFrameMapper wrapped inside) which can be reused directly on the test set
```
df_test, _, nas, mapper = proc_df(joined_test, 'Sales', do_scale=True, skip_flds=['Id'],
mapper=mapper, na_dict=nas)
```
## Rossmann example
```
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.structured import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
from IPython.display import HTML
import datetime
PATH='../data/rossmann/'
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
# for t in tables: display(t.head())
train, store, store_states, state_names, googletrend, weather, test = tables
```
### Merging the tables
```
state_names.head()
store_state_name = store_states.merge(state_names,how='left',left_on='State',right_on='State')
store_state_name.shape
store_state_name.head()
weather.shape
```
The weather table's file column can be joined to StateName
```
weather.head()
state_weather = store_state_name.merge(weather,how='left',left_on='StateName',right_on='file')
state_weather.shape
```
Each row now has the form: store 1, state, weather for date 2013-01-01
```
state_weather.head()
store.shape
store.head()
```
Join state_weather with store
```
store_info_all = store.merge(state_weather,how='left',on='Store')
store_info_all.shape
store_info_all.head()
store_info_all[['Date','Store']].head()
train.shape
train.head()
```
Join store_info_all with train; note that we join on both Date and Store
```
store_all_with_train =train.merge(store_info_all,how='left',on=['Store','Date'],suffixes='_s')
store_all_with_test =test.merge(store_info_all,how='left',on=['Store','Date'],suffixes='_s')
train.columns
store_all_with_train.shape
test.columns
```
test has no Customers column?
```
store_all_with_test.shape
store_all_with_train.columns
```
Sort the data by Store and Date
```
store_all_with_train.sort_values(by=['Store','Date'],ascending=True)
store_all_with_test.sort_values(by=['Store','Date'],ascending=True)
```
Finally, handle googletrend. It depends on week and state, so we need to extract the state information from the raw data.
Extract the suffix of the file column as State; it should be the state abbreviation.
```
def get_state(s): return s[-2:]
googletrend['State']=googletrend.file.apply(get_state)
googletrend.head()
a='2012-12-02 - 2012-12-08'
a.split(' - ')[0]
```
Two approaches:
1. Split week into a start and an end date, then expand to one row per day; note we only need data from `2013-01-01` to `2015-07-31`
2. Convert week start - week end to a week-of-year marker, compute the same marker from the Date in store_all_with_train, and then join!
```
googletrend['WeekStart']=googletrend.week.apply(lambda x: x.split(' - ')[0] )
googletrend['WeekEnd']=googletrend.week.apply(lambda x: x.split(' - ')[1] )
max(store_all_with_train['Date'])
min(store_all_with_train['Date'])
min(googletrend['WeekStart'])
max(googletrend['WeekStart'])
googletrend.head()
```
Add a Date column
```
googletrend['Date']=googletrend['WeekStart']
line_loc = googletrend.loc[0]
line_loc
nextday = lambda x: x +datetime.timedelta(days=1)
line_loc = googletrend.loc[0]
weekstart= datetime.datetime.strptime (line_loc['WeekStart'],"%Y-%m-%d")
weekend= datetime.datetime.strptime (line_loc['WeekEnd'],"%Y-%m-%d")
delta_day = weekend-weekstart
delta_days=delta_day.days;delta_days
nextday(weekstart).strftime("%Y-%m-%d")
len(googletrend)
googletrend.shape
weeks = len(googletrend);weeks
```
Appending rows in pandas like this is very slow; is there a better way? (A vectorized sketch follows the loop below.)
```
# 每个week start-week end扩充为6个date
for j in range(weeks):
cp_row = googletrend.loc[j]
# date
date= datetime.datetime.strptime (cp_row['WeekStart'],"%Y-%m-%d")
for i in range(delta_days):
# for each row duplicate 6 times
date = nextday(date);
# print(date)
cp_row['Date'] = date.strftime("%Y-%m-%d")
# googletrend.append(cp_row)
googletrend = googletrend.append(cp_row)
googletrend.shape
googletrend = googletrend.reset_index(drop=True)
googletrend.sort_values(by=['State','week'],ascending=True)
```
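A possibly faster alternative to the row-by-row append above (a sketch only, not from the original notebook): expand each week into daily rows with pd.date_range and DataFrame.explode. It assumes the WeekStart/WeekEnd columns created earlier and pandas >= 0.25 for explode.
```
# One row per day of each week, without appending rows in a Python loop
gt = googletrend.copy()
gt['Date'] = gt.apply(
    lambda r: pd.date_range(r['WeekStart'], r['WeekEnd']).strftime('%Y-%m-%d').tolist(),
    axis=1)
gt = gt.explode('Date').reset_index(drop=True)
gt.shape
```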
state_names has no 'NI', only 'HB,NI', so googletrend's 'NI' must be converted to 'HB,NI'
```
state_names['State'].unique()
googletrend['State'].unique()
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
finals = store_all_with_train.merge(googletrend,how='left',on=['State','Date'])
finals_test = store_all_with_test.merge(googletrend,how='left',on=['State','Date'])
```
Add the Google trend for all of Germany (DE) as an extra feature
```
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
trend_de.sort_values(['Date'])
```
Just join on Date; it is already unique
```
finals = finals.merge(trend_de,how='left',on=['Date'],suffixes=('', '_DE'))
finals_test = finals_test.merge(trend_de,how='left',on=['Date'],suffixes=('', '_DE'))
finals_test.shape
finals.shape
State=finals['StateName']
```
Drop the redundant columns
```
finals.columns
finals = finals.drop(labels=['Customers','WeekStart','WeekEnd','week','file_y','file_x','StateName',
'file', 'week_DE', 'State_DE', 'WeekStart_DE', 'WeekEnd_DE' ],axis=1)
finals_test = finals_test.drop(labels=['Id','WeekStart','WeekEnd','week','file_y','file_x','StateName',
'file', 'week_DE', 'State_DE', 'WeekStart_DE', 'WeekEnd_DE' ],axis=1)
finals = finals.sort_values(by=['Date','Store']).reset_index(drop=True)
finals_test = finals_test.sort_values(by=['Date','Store']).reset_index(drop=True)
finals.shape
finals_test.shape
finals.to_feather(f'{PATH}finals.df')
finals_test.to_feather(f'{PATH}finals_test.df')
```
### Feature engineering
#### load from file
```
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.structured import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
from IPython.display import HTML
import datetime
PATH='../data/rossmann/'
train = pd.read_feather(f'{PATH}finals.df')
test = pd.read_feather(f'{PATH}finals_test.df')
```
#### Extracting date features!
Generate new features from Date: `'Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start',
'Is_year_end', 'Is_year_start', 'Elapsed'`, which make the temporal trends easier to see
```
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
```
#### Special features
Watch the columns whose dtype is object; their values may be problematic
```
train.dtypes
train['Year'].unique()
train['Date'].unique()
```
Note that Date has become datetime64, which makes date arithmetic convenient
```
train['StateHoliday'].unique()
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
train['StateHoliday'].unique()
train['StateHoliday'] = train['StateHoliday'].astype(int)
train['StoreType'].unique()
train['Assortment'].unique()
train['PromoInterval'].unique()
train['Events'].unique()
```
One-hot encode them? No rush: the deep learning model will use embeddings instead
#### fillna
```
train.isnull().sum()
test.isnull().sum()
train['CompetitionDistance']= train['CompetitionDistance'].fillna(train['CompetitionDistance'].median());
test['CompetitionDistance']= test['CompetitionDistance'].fillna(test['CompetitionDistance'].median())
```
How should the NaNs in the competitors' opening dates be filled? Count them all as starting from January 1, 1900!
```
for df in (train,test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
```
Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of apply() in mapping a function across dataframe values.
```
for df in (train,test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
min(train['CompetitionOpenSinceYear'])
max(train['CompetitionDaysOpen'])
```
We'll replace some erroneous / outlying data.
```
for df in (train,test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
```
We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
```
for df in (train,test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
train.CompetitionMonthsOpen.unique()
train[ ['CompetitionOpenSinceYear','CompetitionDaysOpen','CompetitionMonthsOpen','CompetitionOpenSinceMonth']].sort_values('CompetitionOpenSinceYear')
```
This "age" fill is still a bit questionable?
Same process for Promo dates.
```
for df in (test,train):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (train,test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
```
Not every NaN needs to be filled! A NaN can simply be its own category, which both one-hot encoding and embeddings can handle
```
test['Open']=test.Open.fillna(1);
test['Open'].unique()
min(train['Date'])
max(train['Date'])
```
#### Feature extraction
Extract temporal relationships between rows, such as the number of days before/after a promo day or a state holiday, since those windows are peak sales periods.
For example, turn holiday [1 0 0 0 0 0 1 1 1 0 0 0 0 0] into the feature [ -5 -4 -3 -2 -1 0 0 0 1 2 3 4 5 ...]
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've created a class to handle this type of data.
We'll define a function `get_elapsed` for cumulative counting across a sorted dataframe. Given a particular field `fld` to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
```
def get_elapsed(df , fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[pre+fld] = res
```
Use Store and Date as the sort keys
```
train = train.sort_values(['Store','Date'])
test = test.sort_values(['Store','Date'])
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
```
step1: AfterPromo [ 0 0 0 1 1 1 0 0 0 0 1 1 ] - > [ NaN NaN NaN 0 0 0 1 2 3 4 0 0 ]
```
get_elapsed(train, 'Promo', 'After')
get_elapsed(train, 'StateHoliday', 'After')
get_elapsed(train, 'SchoolHoliday', 'After')
get_elapsed(test, 'Promo', 'After')
get_elapsed(test, 'StateHoliday', 'After')
get_elapsed(test, 'SchoolHoliday', 'After')
train.shape
train.columns
```
Reverse the date order within each store
step2: BeforePromo [ 0 0 0 1 1 1 0 0 0 0 1 1 ] - > [ -3 -2 -1 0 0 0 -4 -3 -2 -1 0 0 ]
```
train = train.sort_values(['Store','Date'],ascending=[True,False])
test = test.sort_values(['Store','Date'],ascending=[True,False])
get_elapsed(train, 'Promo', 'Before')
get_elapsed(train, 'StateHoliday', 'Before')
get_elapsed(train, 'SchoolHoliday', 'Before')
get_elapsed(test, 'Promo', 'Before')
get_elapsed(test, 'StateHoliday', 'Before')
get_elapsed(test, 'SchoolHoliday', 'Before')
```
Restore the ordering by Store and Date
```
train = train.sort_values(['Store','Date'])
test = test.sort_values(['Store','Date'])
```
fill Na
```
for o in ['Before', 'After']:
for p in ['SchoolHoliday', 'StateHoliday', 'Promo']:
a = o+p
train[a] = train[a].fillna(0).astype(int)
test[a] = test[a].fillna(0).astype(int)
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
```
Rolling sum by week: for each day, sum the promo days over the preceding week:
step 3 :
BeforePromo [ 0 0 0 1 1 1 0 0 0 0 1 1 0 0 ] - > [ 0 0 0 1 2 3 3 3 3 3 3 3 2 2 ]
```
train[ ['Date','Store']+columns].head()
roll_train_df = train[columns]
roll_test_df = test[columns]
roll_test_df.columns
bwd_train = roll_train_df.rolling(7,min_periods=1).sum()
bwd_test = roll_test_df.rolling(7,min_periods=1).sum()
fwd_train = roll_train_df.sort_index(ascending=False).rolling(7,min_periods=1).sum()
fwd_test = roll_test_df.sort_index(ascending=False).rolling(7,min_periods=1).sum()
```
Merge back into train and test!
```
columns
# Assign the backward- and forward-looking rolling sums to separate suffixed columns
for c in columns:
    train[c + '_bwd'] = bwd_train[c]
    train[c + '_fwd'] = fwd_train[c]
    test[c + '_bwd'] = bwd_test[c]
    test[c + '_fwd'] = fwd_test[c]
train.shape
test.shape
train.columns
train.reset_index(drop=True,inplace=True)
test.reset_index(drop=True,inplace=True)
train.to_feather(f'{PATH}train.df')
test.to_feather(f'{PATH}test.df')
```
## Rossmann with a subset of the features
```
train = pd.read_feather(f'{PATH}train.df')
test = pd.read_feather(f'{PATH}test.df')
```
Features are split into categorical and numerical; the categorical ones are handled with embeddings. Why not use all of the features???
```
train.shape
train.columns
cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen',
'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
'State', 'Week', 'Events', 'Promo_fwd', 'Promo_bwd', 'StateHoliday_fwd', 'StateHoliday_bwd',
'SchoolHoliday_fwd', 'SchoolHoliday_bwd']
contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
len(cat_vars)
len(contin_vars)
dep = 'Sales'
joined = train[cat_vars+contin_vars+[dep, 'Date']].copy()
Id = pd.read_csv('testId.csv',names=['Id'])
joined_test = test[cat_vars+contin_vars+[ 'Date']].copy()
joined_test[dep] = 0
joined_test.shape
for v in cat_vars:
joined[v] = joined[v].astype('category').cat.as_ordered()
joined_test[v] = joined_test[v].astype('category').cat.as_ordered()
for v in contin_vars:
joined[v] = joined[v].astype('float32')
joined_test[v] = joined_test[v].astype('float32')
n = len(joined);n
```
Validation Set
```
idxs = get_cv_idxs(n, val_pct=150000/n)
joined_samp = joined.iloc[idxs].set_index("Date")
samp_size = len(joined_samp); samp_size
samp_size = n
joined_samp = joined.set_index("Date")
joined_samp.head(2)
```
Numericalize the categorical features in joined_samp so they can be embedded; split off y and keep the mapper to reuse on the test set
```
df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yl = np.log1p(y)
joined_test = joined_test.set_index("Date")
df.head()
df_test, _, nas, mapper = proc_df(joined_test, 'Sales', do_scale=True,
mapper=mapper, na_dict=nas)
df.head(2)
```
Use a contiguous time range as the validation set; this better matches the nature of time series.
```
val_idx = np.flatnonzero(
(df.index<=datetime.datetime(2014,9,17)) & (df.index>=datetime.datetime(2014,8,1)))
```
## Fast AI Models
```
def inv_y(a): return np.exp(a)
def exp_rmspe(y_pred, targ):
targ = inv_y(targ)
pct_var = (targ - inv_y(y_pred))/targ
return math.sqrt((pct_var**2).mean())
max_log_y = np.max(yl)
y_range = (0, max_log_y*1.2)
md = ColumnarModelData.from_data_frame(PATH, val_idx, df, yl.astype(np.float32), cat_flds=cat_vars, bs=128,
test_df=df_test)
```
The feature cardinalities before embedding
```
cat_sz = [(c, len(joined_samp[c].cat.categories)+1) for c in cat_vars];cat_sz
```
Determine the embedding sizes
```
emb_szs = [(c, min(50, (c+1)//2)) for _,c in cat_sz];emb_szs
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3
m.lr_find()
m.sched.plot(100)
lr = 0.0001
# m.fit(lr, 3, metrics=[exp_rmspe])
# m.fit(lr, 2, metrics=[exp_rmspe], cycle_len=4)
```
```
# Load libraries and functions
%load_ext autoreload
%autoreload 2
%matplotlib inline
RANDOM_STATE = 42 # Pseudo-random state
from utils import *
sns.set_palette("tab10") # Default seaborn theme
# Extra libraries for this notebook
import cmprsk
from cmprsk import utils
from cmprsk.cmprsk import cuminc
import scikit_posthocs as sph
from statannot import add_stat_annotation
# Upload dataset
fn_vae_data = glob.glob('./Updated*.pkl')
latest_fn_vae_data = max(fn_vae_data, key=os.path.getctime)
print("Loading... ",latest_fn_vae_data)
with open(latest_fn_vae_data, "rb") as f:
vae_data_main = pickle.load(f)
print("Done")
```
# Define functions
```
######### Posthoc analysis for multiple groups by chi-square test
def get_asterisks_for_pval(p_val):
"""Receives the p-value and returns asterisks string."""
if p_val > 0.05:
p_text = "ns" # above threshold => not significant
elif p_val < 1e-4:
p_text = '****'
elif p_val < 1e-3:
p_text = '***'
elif p_val < 1e-2:
p_text = '**'
else:
p_text = '*'
return p_text
def chisq_and_posthoc_corrected(df): #df is a contingency table
"""Receives a dataframe and performs chi2 test and then post hoc.
Prints the p-values and corrected p-values (after FDR correction)"""
# start by running chi2 test on the matrix
chi2, p, dof, ex = chi(df, correction=True)
print(f"Chi2 result of the contingency table: {chi2}, p-value: {p}")
# post-hoc
all_combinations = list(combinations(df.index, 2)) # gathering all combinations for post-hoc chi2
p_vals = []
print("Significance results:")
for comb in all_combinations:
new_df = df[(df.index == comb[0]) | (df.index == comb[1])]
chi2, p, dof, ex = chi(new_df, correction=True)
p_vals.append(p)
# print(f"For {comb}: {p}") # uncorrected
# checking significance
# correction for multiple testing
reject_list, corrected_p_vals = multipletests(p_vals, method='fdr_bh')[:2]
for p_val, corr_p_val, reject, comb in zip(p_vals, corrected_p_vals, reject_list, all_combinations):
print(f"{comb}: p_value: {p_val:5f}; corrected: {corr_p_val:5f} ({get_asterisks_for_pval(p_val)})")
```
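A minimal usage sketch for the helper above (not from the original analysis): build a toy contingency table of counts and run the omnibus chi-square test plus the FDR-corrected pairwise comparisons. The group labels and counts below are made up for illustration, and the call assumes the same environment as above (utils providing chi, combinations and multipletests).
```
# Toy contingency table: rows are groups, columns are outcome counts
toy = pd.DataFrame({'survived': [120, 95, 80], 'died': [30, 45, 20]},
                   index=['Group A', 'Group B', 'Group C'])
chisq_and_posthoc_corrected(toy)
```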
# Hospital LOS
```
# Select outcome data for ICU admissions and individuals
# Group attribution is selected by hierarchy
df_admissions = vae_data_main[['los', 'day_in_icu_max', 'ID_subid', 'ID', 'outcome_death', 'date', 'group']]
df_admissions = df_admissions.groupby('ID_subid').agg({'los': max, 'day_in_icu_max':max, 'group':max,
'date': min, 'ID':max, 'outcome_death':max,})
df_admissions.date = df_admissions.date.dt.year
df_individuals = df_admissions.copy()
df_individuals = df_individuals.groupby('ID').agg({'los': max, 'day_in_icu_max':max, 'group':max,
'date': min, 'outcome_death':max,})
#Drop Dual HARTI data - not included in the analysis due to small sample size
df_admissions = df_admissions.loc[~(df_admissions.group == "Dual HARTI")]
df_individuals = df_individuals.loc[~(df_individuals.group == "Dual HARTI")]
# Display descriptive data by groups for LOS
df_individuals[['los', 'group']].groupby('group').describe().T
# Compare groups by ANOVA (normal distribution assumption)
lm = smf.ols('los ~ group', data=df_individuals).fit()
anova = sm.stats.anova_lm(lm)
print(anova)
# Compare groups by Kruskal test (non-parametric)
data = [df_individuals.loc[ids, 'los'].values for ids in df_individuals.groupby('group').groups.values()]
H, p = stats.kruskal(*data)
print('\nKruskal test p-value: ', p)
# Compare groups pairwise (non-parametric Conover test)
sph.posthoc_conover(df_individuals, val_col='los', group_col='group', p_adjust ='holm')
```
### Dynamics of hospital LOS by groups and years
```
# Calculate numbers for LOS
medians = {}
for group in df_individuals.group.unique():
m = []
a = df_individuals[(df_individuals.group==group)]
for i in range(2011,2021):
b = a[(a.date == i)].los.median()
m.append(b)
medians[group] = m
los = pd.DataFrame.from_dict(medians).T
# test significance of outcome dynamics by years
pvals = []
for col in los.index:
a = linregress(los.T[col], np.arange(len(los.T[col]))).pvalue
pvals.append(a)
los = los.assign(pvalues = pvals)
los
# Get p-values
def get_p_suffix(x):
pval = los.pvalues.dropna().to_dict().get(x, None)
if pval is not None:
return f'{x} ($p={pval:.03f}$)'
return x
data = df_individuals.copy()
data.group = data.group.apply(get_p_suffix)
# Plot boxplots by years and groups
colors_sns = ['medium blue', 'orange', 'light purple', 'light red']
sns.set_palette(sns.xkcd_palette(colors_sns))
fig, ax = plt.subplots(1, figsize=(15, 7))
sns.boxplot(x='date', y='los', hue='group', data=data, ax=ax,
showfliers=False,
hue_order=data.group.unique()[[2,3,0,1]],
)
ax.set_title('Length of hospital stay in 4 groups by years')
ax.set_ylabel('Length of hospital stay, days')
ax.set_xlabel('')
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
plt.tight_layout()
plt.savefig('pictures/los_years.pdf', bbox_inches="tight", dpi=600)
```
# ICU LOS
```
# Display descriptive data by groups for ICU LOS
df_admissions[['day_in_icu_max', 'group']].groupby('group').describe().T
# Compare groups by ANOVA (normal distribution assumption)
lm = smf.ols('day_in_icu_max ~ group', data=df_admissions).fit()
anova = sm.stats.anova_lm(lm)
print(anova)
# Compare groups by Kruskal test (non-parametric)
data = [df_admissions.loc[ids, 'day_in_icu_max'].values for ids in df_admissions.groupby('group').groups.values()]
H, p = stats.kruskal(*data)
print('\nKruskal test p-value: ', p)
# Compare groups pairwise (non-parametric Conover test)
sph.posthoc_conover(df_admissions, val_col='day_in_icu_max', group_col='group', p_adjust ='holm')
```
### Dynamics of ICU LOS by groups and years
```
# Calculate numbers for ICU LOS
medians = {}
for group in df_admissions.group.unique():
m = []
a = df_admissions[(df_admissions.group==group)]
for i in range(2011,2021):
b = a[(a.date == i)].day_in_icu_max.median()
m.append(b)
medians[group] = m
losicu = pd.DataFrame.from_dict(medians).T
# test significance of outcome dynamics by years
pvals = []
for col in losicu.index:
a = linregress(losicu.T[col], np.arange(len(losicu.T[col]))).pvalue
pvals.append(a)
losicu = losicu.assign(pvalues = pvals)
losicu
# Get p-values
def get_p_suffix(x):
pval = losicu.pvalues.dropna().to_dict().get(x, None)
if pval is not None:
return f'{x} ($p={pval:.03f}$)'
return x
data = df_admissions.copy()
data.group = data.group.apply(get_p_suffix)
# Plot boxplots by years and groups
colors_sns = ['medium blue', 'orange', 'light purple', 'light red']
sns.set_palette(sns.xkcd_palette(colors_sns))
fig, ax = plt.subplots(1, figsize=(15, 7))
sns.boxplot(x='date', y='day_in_icu_max', hue='group', data=data, ax=ax,
showfliers=False,
hue_order=data.group.unique()[[2,3,0,1]],
)
ax.set_title('Length of ICU stay in 4 groups by years')
ax.set_ylabel('Length of ICU stay, days')
ax.set_xlabel('')
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
plt.tight_layout()
plt.savefig('pictures/los_icu_years.pdf', bbox_inches="tight", dpi=600)
```
### Plot hospital LOS, ICU LOS and mortality by groups
```
# Define comparisons
colors_sns = ['medium blue', 'orange', 'light purple', 'light red']
sns.set_palette(sns.xkcd_palette(colors_sns))
fig, [ax, ax1, ax2] = plt.subplots(1,3, figsize=(17.5, 7))
boxpairs=[('VA-HARTI', 'NVA-HARTI'), ('VA-HARTI', 'Other HAI'), ('VA-HARTI', 'No HAI'),
('NVA-HARTI', 'No HAI'), ('NVA-HARTI', 'Other HAI')]
order = ['VA-HARTI', 'NVA-HARTI', 'Other HAI', 'No HAI']
# LOS
sns.boxplot(x='group', y='los', data=df_individuals, ax=ax, showfliers=False, order=order)
# Add p-value annotation
pvals_los_all = sph.posthoc_conover(df_individuals, val_col='los', group_col='group', p_adjust ='holm')
pvalues_los = []
for i in boxpairs:
pvalues_los.append(pvals_los_all.loc[i])
add_stat_annotation(ax=ax, data=df_individuals, x='group', y='los', order=order, box_pairs=boxpairs,
perform_stat_test=False, pvalues=pvalues_los,
test=None, text_format='star',
loc='outside', verbose=0, text_offset=1)
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
ax.set_xlabel('')
ax.set_xticklabels(['VA-HARTI', 'NVA-HARTI', 'Other HAI', 'No HAI'])
ax.set_ylabel('Length of hospital stay, days')
# ICU LOS
sns.boxplot(x='group', y='day_in_icu_max', data=df_admissions, ax=ax1, showfliers=False, order=order)
# Add p-value annotation
pvals_iculos_all = sph.posthoc_conover(df_admissions, val_col='day_in_icu_max', group_col='group', p_adjust ='holm')
pvalues_iculos = []
for i in boxpairs:
pvalues_iculos.append(pvals_iculos_all.loc[i])
add_stat_annotation(ax=ax1, data=df_admissions, x='group', y='day_in_icu_max', order=order, box_pairs=boxpairs,
perform_stat_test=False, pvalues=pvalues_iculos,
test=None, text_format='star',
loc='outside', verbose=0, text_offset=1)
ax1.minorticks_on()
ax1.grid(linestyle='dotted', which='both', axis='y')
ax1.set_xlabel('')
ax1.set_ylabel('Length of ICU stay, days')
# Mortality rate
sns.pointplot(x='group', y="outcome_death", data=df_individuals, join=False, ax=ax2,
order=order, capsize=.2)
# Add p-value annotation
add_stat_annotation(ax=ax2, data=df_individuals, x='group', y='outcome_death', order=order,
box_pairs=[('No HAI', 'VA-HARTI')],
perform_stat_test=False,
pvalues= [0.000001],
test=None, text_format='star',
line_offset_to_box=1.6,
loc='outside',
verbose=0, text_offset=2
)
ax2.minorticks_on()
ax2.grid(linestyle='dotted', which='both', axis='y')
ax2.set_xlabel('')
ax2.set_xticklabels(['VA-HARTI', 'NVA-HARTI', 'Other HAI', 'No HAI'])
ax2.set_ylabel('Crude in-hospital mortality')
plt.tight_layout()
plt.savefig('./pictures/outcomes_all.pdf', dpi=600)
```
# Mortality
```
# Print overall mortality
print('All patients mortality rate: ', df_individuals.outcome_death.mean())
cil, cir = ci(df_individuals.outcome_death.sum(), len(df_individuals))
print("All patients mortality 95% CI: ", cil, cir)
# Plot proportion dead with 95% CI
plt.rcParams['ytick.right'] = plt.rcParams['ytick.labelright'] = False
fig, ax = plt.subplots(1, figsize=(7,7))
sns.pointplot(x='date', y="outcome_death", data=df_individuals, ax=ax,
capsize=.03,
scale=1,
errwidth = 1.7,
markers='o', linestyles='dotted',
join=True
)
m = []
for i in range(2011, 2021):
b = df_individuals[(df_individuals.date == i)]
val = b.outcome_death.mean()
m.append(val)
pval = linregress(m, np.arange(len(m))).pvalue
ax.text(0,0.03, 'p-value = '+ "%.4f" % pval, fontsize=14)
ax.legend(['Mortality'], fontsize=14)
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
ax.tick_params(axis='y', which='both', right=False, left=True)
ax.set_title('Mortality by years, full study population')
ax.set_ylabel('Crude in-hospital mortality', fontsize=12)
ax.set_xlabel('')
ax.set_ylim(0,0.28)
print(m)
plt.tight_layout()
plt.savefig('./pictures/outcome_mortality_summary.pdf', dpi=600)
# Describe mortality by groups
mortality = {}
for group in df_individuals.group.unique():
mortality[group] = {}
a = df_individuals[(df_individuals.group==group)]
mortality[group]['n'] = a.outcome_death.sum()
mortality[group]['mortality'] = a.outcome_death.mean()
cil, cir = ci(a.outcome_death.sum(), len(a))
mortality[group]['cil'] = cil
mortality[group]['cir'] = cir
mortality = pd.DataFrame.from_dict(mortality)
mortality
# test difference in groups
df_individuals.reset_index(level=0, inplace=True)
contigency= pd.crosstab(df_individuals[['ID', 'group']].groupby('ID').max()['group'],
df_individuals[['ID', 'outcome_death']].groupby('ID').max()['outcome_death'])
# Compare mortality in groups by chi-sq test. Pairwise comparison
chisq_and_posthoc_corrected(contigency)
```
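The `ci` helper used above comes from utils and is not shown in this notebook; a rough stand-in, assuming a normal-approximation (Wald) binomial confidence interval, might look like the sketch below. The real helper may well use a different method (e.g. Wilson or Clopper-Pearson).
```
import numpy as np

def ci_approx(successes, total, z=1.96):
    """Normal-approximation 95% CI for a proportion (assumed stand-in for utils.ci)."""
    p = successes / total
    half_width = z * np.sqrt(p * (1 - p) / total)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Example: 150 deaths among 1000 patients
ci_approx(150, 1000)
```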
### Dynamics of mortality by groups and years
```
# Calculate numbers for mortality by years
medians = {}
for group in df_individuals.group.unique():
m = []
a = df_individuals[(df_individuals.group==group)]
for i in range(2011,2021):
b = a[(a.date == i)]
val = b.outcome_death.sum() / len(b)
m.append(val)
medians[group] = m
mortality_years = pd.DataFrame.from_dict(medians).T
# test significance of outcome dynamics by years
pvals = []
for col in mortality_years.index:
a = linregress(mortality_years.T[col], np.arange(len(mortality_years.T[col]))).pvalue
pvals.append(a)
mortality_years = mortality_years.assign(pvalues = pvals)
mortality_years
# Define data; add p-value to legend items
def get_p_suffix(x, g_dict=None):
pval = mortality_years.pvalues.dropna().to_dict().get(x, None)
if pval is not None:
return f'{x} ($p={pval:.03f}$)'
return x
if not 'No HAI' in mortality_years.index:
mortality_years.index = mortality_years.index.map({v: k for k, v in groups_dict.items()})
data = df_individuals.copy()
data.group = data.group.apply(get_p_suffix)
# Plot proportion dead with 95% CI
fig, ax = plt.subplots(1, figsize=(15,7))
sns.pointplot(x='date', y="outcome_death", data=data, ax=ax,
hue='group',
hue_order=data.group.unique()[[2,3,0,1]],
dodge=0.3,
capsize=.03,
scale=1.3,
errwidth = 1.7,
join=False
)
ax.legend(fontsize=14)
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
ax.set_xlabel('')
ax.set_ylabel('Crude in-hospital mortality')
plt.tight_layout()
plt.savefig('./pictures/outcome_mortality.pdf', dpi=600)
```
________
|
github_jupyter
|
# Load libraries and functions
%load_ext autoreload
%autoreload 2
%matplotlib inline
RANDOM_STATE = 42 # Pseudo-random state
from utils import *
sns.set_palette("tab10") # Default seaborn theme
# Extra libraries for this notebook
import cmprsk
from cmprsk import utils
from cmprsk.cmprsk import cuminc
import scikit_posthocs as sph
from statannot import add_stat_annotation
# Upload dataset
fn_vae_data = glob.glob('./Updated*.pkl')
latest_fn_vae_data = max(fn_vae_data, key=os.path.getctime)
print("Loading... ",latest_fn_vae_data)
with open(latest_fn_vae_data, "rb") as f:
vae_data_main = pickle.load(f)
print("Done")
######### Posthoc analysis for multiple groups by chi-square test
def get_asterisks_for_pval(p_val):
"""Receives the p-value and returns asterisks string."""
if p_val > 0.05:
p_text = "ns" # above threshold => not significant
elif p_val < 1e-4:
p_text = '****'
elif p_val < 1e-3:
p_text = '***'
elif p_val < 1e-2:
p_text = '**'
else:
p_text = '*'
return p_text
def chisq_and_posthoc_corrected(df): #df is a contingency table
"""Receives a dataframe and performs chi2 test and then post hoc.
Prints the p-values and corrected p-values (after FDR correction)"""
# start by running chi2 test on the matrix
chi2, p, dof, ex = chi(df, correction=True)
print(f"Chi2 result of the contingency table: {chi2}, p-value: {p}")
# post-hoc
all_combinations = list(combinations(df.index, 2)) # gathering all combinations for post-hoc chi2
p_vals = []
print("Significance results:")
for comb in all_combinations:
new_df = df[(df.index == comb[0]) | (df.index == comb[1])]
chi2, p, dof, ex = chi(new_df, correction=True)
p_vals.append(p)
# print(f"For {comb}: {p}") # uncorrected
# checking significance
# correction for multiple testing
reject_list, corrected_p_vals = multipletests(p_vals, method='fdr_bh')[:2]
for p_val, corr_p_val, reject, comb in zip(p_vals, corrected_p_vals, reject_list, all_combinations):
print(f"{comb}: p_value: {p_val:5f}; corrected: {corr_p_val:5f} ({get_asterisks_for_pval(p_val)})")
# Select outcome data for ICU admissions and individuals
# Group attribution is selected by hierarchy
df_admissions = vae_data_main[['los', 'day_in_icu_max', 'ID_subid', 'ID', 'outcome_death', 'date', 'group']]
df_admissions = df_admissions.groupby('ID_subid').agg({'los': max, 'day_in_icu_max':max, 'group':max,
'date': min, 'ID':max, 'outcome_death':max,})
df_admissions.date = df_admissions.date.dt.year
df_individuals = df_admissions.copy()
df_individuals = df_individuals.groupby('ID').agg({'los': max, 'day_in_icu_max':max, 'group':max,
'date': min, 'outcome_death':max,})
#Drop Dual HARTI data - not included in the analysis due to small sample size
df_admissions = df_admissions.loc[~(df_admissions.group == "Dual HARTI")]
df_individuals = df_individuals.loc[~(df_individuals.group == "Dual HARTI")]
# Display descriptive data by groups for LOS
df_individuals[['los', 'group']].groupby('group').describe().T
# Compare groups by ANOVA (normal distribution assumption)
lm = smf.ols('los ~ group', data=df_individuals).fit()
anova = sm.stats.anova_lm(lm)
print(anova)
# Compare groups by Kruskal test (non-parametric)
data = [df_individuals.loc[ids, 'los'].values for ids in df_individuals.groupby('group').groups.values()]
H, p = stats.kruskal(*data)
print('\nKruskal test p-value: ', p)
# Compare groups pairwise (non-parametric Conover test)
sph.posthoc_conover(df_individuals, val_col='los', group_col='group', p_adjust ='holm')
# Calculate numbers for LOS
medians = {}
for group in df_individuals.group.unique():
m = []
a = df_individuals[(df_individuals.group==group)]
for i in range(2011,2021):
b = a[(a.date == i)].los.median()
m.append(b)
medians[group] = m
los = pd.DataFrame.from_dict(medians).T
# test significance of outcome dynamics by years
pvals = []
for col in los.index:
a = linregress(los.T[col], np.arange(len(los.T[col]))).pvalue
pvals.append(a)
los = los.assign(pvalues = pvals)
los
# Get p-values
def get_p_suffix(x):
pval = los.pvalues.dropna().to_dict().get(x, None)
if pval is not None:
return f'{x} ($p={pval:.03f}$)'
return x
data = df_individuals.copy()
data.group = data.group.apply(get_p_suffix)
# Plot boxplots by years and groups
colors_sns = ['medium blue', 'orange', 'light purple', 'light red']
sns.set_palette(sns.xkcd_palette(colors_sns))
fig, ax = plt.subplots(1, figsize=(15, 7))
sns.boxplot(x='date', y='los', hue='group', data=data, ax=ax,
showfliers=False,
hue_order=data.group.unique()[[2,3,0,1]],
)
ax.set_title('Length of hospital stay in 4 groups by years')
ax.set_ylabel('Length of hospital stay, days')
ax.set_xlabel('')
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
plt.tight_layout()
plt.savefig('pictures/los_years.pdf', bbox_inches="tight", dpi=600)
# Display desriptive data by groups for ICU LOS
df_admissions[['day_in_icu_max', 'group']].groupby('group').describe().T
# Compare groups by ANOVA (normal distribution assumption)
lm = smf.ols('day_in_icu_max ~ group', data=df_admissions).fit()
anova = sm.stats.anova_lm(lm)
print(anova)
# Compare groups by Kruskal test (non-parametric)
data = [df_admissions.loc[ids, 'day_in_icu_max'].values for ids in df_admissions.groupby('group').groups.values()]
H, p = stats.kruskal(*data)
print('\nKruskal test p-value: ', p)
# Compare groups pairwise (non-parametric Conover test)
sph.posthoc_conover(df_admissions, val_col='day_in_icu_max', group_col='group', p_adjust ='holm')
# Calculate numbers for ICU LOS
medians = {}
for group in df_admissions.group.unique():
m = []
a = df_admissions[(df_admissions.group==group)]
for i in range(2011,2021):
b = a[(a.date == i)].day_in_icu_max.median()
m.append(b)
medians[group] = m
losicu = pd.DataFrame.from_dict(medians).T
# test significance of outcome dynamics by years
pvals = []
for col in losicu.index:
a = linregress(losicu.T[col], np.arange(len(losicu.T[col]))).pvalue
pvals.append(a)
losicu = losicu.assign(pvalues = pvals)
losicu
# Get p-values
def get_p_suffix(x):
pval = losicu.pvalues.dropna().to_dict().get(x, None)
if pval is not None:
return f'{x} ($p={pval:.03f}$)'
return x
data = df_admissions.copy()
data.group = data.group.apply(get_p_suffix)
# Plot boxplots by years and groups
colors_sns = ['medium blue', 'orange', 'light purple', 'light red']
sns.set_palette(sns.xkcd_palette(colors_sns))
fig, ax = plt.subplots(1, figsize=(15, 7))
sns.boxplot(x='date', y='day_in_icu_max', hue='group', data=data, ax=ax,
showfliers=False,
hue_order=data.group.unique()[[2,3,0,1]],
)
ax.set_title('Length of ICU stay in 4 groups by years')
ax.set_ylabel('Length of ICU stay, days')
ax.set_xlabel('')
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
plt.tight_layout()
plt.savefig('pictures/los_icu_years.pdf', bbox_inches="tight", dpi=600)
# Define comparisons
colors_sns = ['medium blue', 'orange', 'light purple', 'light red']
sns.set_palette(sns.xkcd_palette(colors_sns))
fig, [ax, ax1, ax2] = plt.subplots(1,3, figsize=(17.5, 7))
boxpairs=[('VA-HARTI', 'NVA-HARTI'), ('VA-HARTI', 'Other HAI'), ('VA-HARTI', 'No HAI'),
('NVA-HARTI', 'No HAI'), ('NVA-HARTI', 'Other HAI')]
order = ['VA-HARTI', 'NVA-HARTI', 'Other HAI', 'No HAI']
# LOS
sns.boxplot(x='group', y='los', data=df_individuals, ax=ax, showfliers=False, order=order)
# Add p-value annotation
pvals_los_all = sph.posthoc_conover(df_individuals, val_col='los', group_col='group', p_adjust ='holm')
pvalues_los = []
for i in boxpairs:
pvalues_los.append(pvals_los_all.loc[i])
add_stat_annotation(ax=ax, data=df_individuals, x='group', y='los', order=order, box_pairs=boxpairs,
perform_stat_test=False, pvalues=pvalues_los,
test=None, text_format='star',
loc='outside', verbose=0, text_offset=1)
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
ax.set_xlabel('')
ax.set_xticklabels(['VA-HARTI', 'NVA-HARTI', 'Other HAI', 'No HAI'])
ax.set_ylabel('Length of hospital stay, days')
# ICU LOS
sns.boxplot(x='group', y='day_in_icu_max', data=df_admissions, ax=ax1, showfliers=False, order=order)
# Add p-value annotation
pvals_iculos_all = sph.posthoc_conover(df_admissions, val_col='day_in_icu_max', group_col='group', p_adjust ='holm')
pvalues_iculos = []
for i in boxpairs:
pvalues_iculos.append(pvals_iculos_all.loc[i])
add_stat_annotation(ax=ax1, data=df_admissions, x='group', y='day_in_icu_max', order=order, box_pairs=boxpairs,
perform_stat_test=False, pvalues=pvalues_iculos,
test=None, text_format='star',
loc='outside', verbose=0, text_offset=1)
ax1.minorticks_on()
ax1.grid(linestyle='dotted', which='both', axis='y')
ax1.set_xlabel('')
ax1.set_ylabel('Length of ICU stay, days')
# Mortality rate
sns.pointplot(x='group', y="outcome_death", data=df_individuals, join=False, ax=ax2,
order=order, capsize=.2)
# Add p-value annotation
add_stat_annotation(ax=ax2, data=df_individuals, x='group', y='outcome_death', order=order,
box_pairs=[('No HAI', 'VA-HARTI')],
perform_stat_test=False,
pvalues= [0.000001],
test=None, text_format='star',
line_offset_to_box=1.6,
loc='outside',
verbose=0, text_offset=2
)
ax2.minorticks_on()
ax2.grid(linestyle='dotted', which='both', axis='y')
ax2.set_xlabel('')
ax2.set_xticklabels(['VA-HARTI', 'NVA-HARTI', 'Other HAI', 'No HAI'])
ax2.set_ylabel('Crude in-hospital mortality')
plt.tight_layout()
plt.savefig('./pictures/outcomes_all.pdf', dpi=600)
# Print overall mortality
print('All patients mortality rate: ', df_individuals.outcome_death.mean())
cil, cir = ci(df_individuals.outcome_death.sum(), len(df_individuals))
print("All patients mortality 95% CI: ", cil, cir)
# Plot proroption dead with 95% CI
plt.rcParams['ytick.right'] = plt.rcParams['ytick.labelright'] = False
fig, ax = plt.subplots(1, figsize=(7,7))
sns.pointplot(x='date', y="outcome_death", data=df_individuals, ax=ax,
capsize=.03,
scale=1,
errwidth = 1.7,
markers='o', linestyles='dotted',
join=True
)
m = []
for i in range(2011, 2021):
b = df_individuals[(df_individuals.date == i)]
val = b.outcome_death.mean()
m.append(val)
pval = linregress(m, np.arange(len(m))).pvalue
ax.text(0,0.03, 'p-value = '+ "%.4f" % pval, fontsize=14)
ax.legend(['Mortality'], fontsize=14)
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
ax.tick_params(axis='y', which='both', right=False, left=True)
ax.set_title('Mortality by years, full study population')
ax.set_ylabel('Crude in-hospital mortality', fontsize=12)
ax.set_xlabel('')
ax.set_ylim(0,0.28)
print(m)
plt.tight_layout()
plt.savefig('./pictures/outcome_mortality_summary.pdf', dpi=600)
# Describe mortality by groups
mortality = {}
for group in df_individuals.group.unique():
mortality[group] = {}
a = df_individuals[(df_individuals.group==group)]
mortality[group]['n'] = a.outcome_death.sum()
mortality[group]['mortality'] = a.outcome_death.mean()
cil, cir = ci(a.outcome_death.sum(), len(a))
mortality[group]['cil'] = cil
mortality[group]['cir'] = cir
mortality = pd.DataFrame.from_dict(mortality)
mortality
# test difference in groups
df_individuals.reset_index(level=0, inplace=True)
contigency= pd.crosstab(df_individuals[['ID', 'group']].groupby('ID').max()['group'],
df_individuals[['ID', 'outcome_death']].groupby('ID').max()['outcome_death'])
# Compare mortality in groups by chi-sq test. Pairwise comparison
chisq_and_posthoc_corrected(contigency)
# Calculate numbers for mortality by years
medians = {}
for group in df_individuals.group.unique():
m = []
a = df_individuals[(df_individuals.group==group)]
for i in range(2011,2021):
b = a[(a.date == i)]
val = b.outcome_death.sum() / len(b)
m.append(val)
medians[group] = m
mortality_years = pd.DataFrame.from_dict(medians).T
# test significance of outcome dynamics by years
pvals = []
for col in mortality_years.index:
a = linregress(mortality_years.T[col], np.arange(len(mortality_years.T[col]))).pvalue
pvals.append(a)
mortality_years = mortality_years.assign(pvalues = pvals)
mortality_years
# Define data; add p-value to legend items
def get_p_suffix(x, g_dict=None):
pval = mortality_years.pvalues.dropna().to_dict().get(x, None)
if pval is not None:
return f'{x} ($p={pval:.03f}$)'
return x
if not 'No HAI' in mortality_years.index:
mortality_years.index = mortality_years.index.map({v: k for k, v in groups_dict.items()})
data = df_individuals.copy()
data.group = data.group.apply(get_p_suffix)
# Plot proportion dead with 95% CI
fig, ax = plt.subplots(1, figsize=(15,7))
sns.pointplot(x='date', y="outcome_death", data=data, ax=ax,
hue='group',
hue_order=data.group.unique()[[2,3,0,1]],
dodge=0.3,
capsize=.03,
scale=1.3,
errwidth = 1.7,
join=False
)
ax.legend(fontsize=14)
ax.minorticks_on()
ax.grid(linestyle='dotted', which='both', axis='y')
ax.set_xlabel('')
ax.set_ylabel('Crude in-hospital mortality')
plt.tight_layout()
plt.savefig('./pictures/outcome_mortality.pdf', dpi=600)
# Install dependency
```
!pip install simpletransformers
```
# Import ClassificationModel
- `simpletransformers` provides high-level abstractions over `torch` and the Hugging Face `transformers` library
```
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
```
# Define a function to load the FNC data and (optionally) split it into training and validation sets
```
import os
import csv
import pandas as pd
import numpy as np
from tqdm import tqdm
from sklearn.model_selection import train_test_split
FNC1_DATA_PATH = '/content/drive/MyDrive/fnc-1'
STANCE_2_ID = {'agree': 0, 'disagree': 1, 'discuss': 2, 'unrelated': 3}
SENTENCE_PAIR_COLS = ['text_a', 'text_b', 'labels']
def combine_headline_body_and_split_train_val(body_path, headline_path, split=True, body_dict=None):
    # Use a fresh dict per call instead of a shared mutable default argument
    if body_dict is None:
        body_dict = {}
body_csv_df = pd.read_csv(body_path)
df = body_csv_df.reset_index()
for index, row in body_csv_df.iterrows():
body_dict[row["Body ID"]] = row["articleBody"]
headlines, bodies, labels = [], [], []
headline_csv_df = pd.read_csv(headline_path)
df = headline_csv_df.reset_index()
for index, row in headline_csv_df.iterrows():
headlines.append(row["Headline"])
bodies.append(body_dict[row["Body ID"]])
labels.append(STANCE_2_ID[row["Stance"]])
combined_df = pd.DataFrame(list(zip(headlines, bodies, labels)), columns=SENTENCE_PAIR_COLS)
if not split:
labels_df = pd.Series(combined_df['labels']).to_numpy()
return combined_df, labels_df
train_df, val_df = train_test_split(combined_df)
return train_df, val_df, pd.Series(val_df['labels']).to_numpy()
```
# Define Evaluate Model Function to calculate F1 scores and accuracy
```
from sklearn.metrics import f1_score
LABELS = [0, 1, 2, 3]
RELATED = [0, 1, 2]
CONFUSION_MATRIX = [[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]]
def calc_f1(real_labels, predicted_labels):
f1_macro = f1_score(real_labels, predicted_labels, average='macro')
f1_classwise = f1_score(real_labels, predicted_labels, average=None, labels=[0, 1, 2, 3])
return f1_macro, f1_classwise
def calculate_accuracy(predicted_labels, real_labels):
    # Use a fresh confusion matrix per call so counts do not accumulate across evaluations
    cm = [[0] * len(LABELS) for _ in LABELS]
for i, (g, t) in enumerate(zip(predicted_labels, real_labels)):
cm[g][t] += 1
hit, total = 0, 0
for i, row in enumerate(cm):
hit += row[i]
total += sum(row)
return (hit / total)*100
def evaluate_model(model, eval_df, real_labels):
    # Evaluate on the given dataframe and compare against the matching labels
    _, outputs, _ = model.eval_model(eval_df)
    predictions = np.argmax(outputs, axis=1)
    print(calc_f1(real_labels, predictions))
    print(calculate_accuracy(predictions, real_labels))
```
# Load Training and Val Data and Labels
```
train_df, val_df, labels_val = combine_headline_body_and_split_train_val(
os.path.join(FNC1_DATA_PATH, 'train_bodies.csv'),
os.path.join(FNC1_DATA_PATH, 'train_stances.csv'),
)
```
# Load Competition Test Data and Labels
```
test_df, test_labels = combine_headline_body_and_split_train_val(
os.path.join(FNC1_DATA_PATH, 'competition_test_bodies.csv'),
os.path.join(FNC1_DATA_PATH, 'competition_test_stances.csv'),
split=False
)
```
# Train and Tune Model with BERT
```
bert_model = ClassificationModel(
'bert',
    'bert-base-uncased',  # 'bert-base' alone is not a valid pretrained model name
use_cuda=True,
num_labels=4,
args={
'fp16': True,
# Tune hyperparameter 3e-4, 1e-4, 5e-5, 3e-5
'learning_rate':3e-5,
'num_train_epochs': 4,
'reprocess_input_data': True,
'overwrite_output_dir': True,
'process_count': 10,
'train_batch_size': 8,
'eval_batch_size': 8,
'max_seq_length': 512
# 'output_dir': ''
})
# TRAIN
bert_model.train_model(train_df)
evaluate_model(bert_model, test_df, test_labels)
# TUNE
bert_model.train_model(val_df)
evaluate_model(bert_model, val_df, labels_val)
```
# Train and Tune Model with RoBERTa
```
roberta_model = ClassificationModel(
'roberta',
'roberta-base',
use_cuda=True,
num_labels=4,
args={
'fp16': True,
# Tune hyperparameter 3e-4, 1e-4, 5e-5, 3e-5
'learning_rate':5e-5,
'num_train_epochs': 4,
'reprocess_input_data': True,
'overwrite_output_dir': True,
'process_count': 10,
'train_batch_size': 8,
'eval_batch_size': 8,
'max_seq_length': 512,
# 'output_dir': ''
})
# TRAIN
roberta_model.train_model(train_df)
evaluate_model(roberta_model, test_df, test_labels)
# TUNE
roberta_model.train_model(val_df)
evaluate_model(roberta_model, val_df, labels_val)
```
## Generate Competition Submission Prediction
```
import os
import csv
import pandas as pd
from tqdm import tqdm
from sklearn.model_selection import train_test_split
FNC1_DATA_PATH = '/content/drive/MyDrive/fnc-1'
body_dict = {}
body_csv_df = pd.read_csv(os.path.join(FNC1_DATA_PATH, 'competition_test_bodies.csv'))
df = body_csv_df.reset_index()
for index, row in body_csv_df.iterrows():
body_dict[row["Body ID"]] = row["articleBody"]
headlines, bodies, combined_headline_bodies = [], [], []
headline_csv_df = pd.read_csv(os.path.join(FNC1_DATA_PATH, 'competition_test_stances_unlabeled.csv'))
df = headline_csv_df.reset_index()
for index, row in headline_csv_df.iterrows():
headlines.append(row["Headline"])
bodies.append(row["Body ID"])
    combined_headline_bodies.append([row["Headline"], body_dict[row["Body ID"]]])  # append a [text_a, text_b] pair
predictions, raw_outputs = roberta_model.predict(combined_headline_bodies)
```
# Store and format submission csv
```
df = pd.DataFrame(list(zip(headlines, bodies, predictions)), columns=['Headline', 'Body ID', 'Stance'])
df['Stance'] = df['Stance'].replace({0: 'agree', 1: 'disagree', 2: 'discuss', 3: 'unrelated'})
df.to_csv('answer.csv', index=False, encoding='utf-8') # From pandas library
```
# Predicting Product Success When Review Data Is Available
_**Using XGBoost to Predict Whether Sales will Exceed the "Hit" Threshold**_
---
---
## Contents
1. [Background](#Background)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Host](#Host)
1. [Evaluation](#Evaluation)
1. [Extensions](#Extensions)
## Background
Word of mouth in the form of user reviews, critic reviews, social media comments, etc. often can provide insights about whether a product ultimately will be a success. In the video game industry in particular, reviews and ratings can have a large impact on a game's success. However, not all games with bad reviews fail, and not all games with good reviews turn out to be hits. To predict hit games, machine learning algorithms potentially can take advantage of various relevant data attributes in addition to reviews.
For this notebook, we will work with the data set [Video Game Sales with Ratings](https://www.kaggle.com/rush4ratio/video-game-sales-with-ratings) from Kaggle. This [Metacritic](http://www.metacritic.com/browse/games/release-date/available) data includes attributes for user reviews as well as critic reviews, sales, ESRB ratings, among others. Both user reviews and critic reviews are in the form of ratings scores, on a scale of 0 to 10 or 0 to 100. Although this is convenient, a significant issue with the data set is that it is relatively small.
Dealing with a small data set such as this one is a common problem in machine learning. This problem often is compounded by imbalances between the classes in the small data set. In such situations, using an ensemble learner can be a good choice. This notebook will focus on using XGBoost, a popular ensemble learner, to build a classifier to determine whether a game will be a hit.
## Setup
_This notebook was created and tested on an ml.m4.xlarge notebook instance._
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the `get_execution_role()` call with the appropriate full IAM role arn string(s).
```
bucket = '<your_s3_bucket_name_here>'
prefix = 'sagemaker/DEMO-videogames-xgboost'
# Define IAM role
import sagemaker
role = sagemaker.get_execution_role()
```
Next we'll import the Python libraries we'll need.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import Image
from IPython.display import display
from sklearn.datasets import dump_svmlight_file
from time import gmtime, strftime
import sys
import math
import json
import boto3
```
---
## Data
Before proceeding further, you'll need to sign in to Kaggle or create a Kaggle account if you don't have one. Then **upload the raw CSV data set from the above Kaggle link to the S3 bucket and prefix you specified above**. The raw_data_filename specified below is the name of the data file from Kaggle, but you should alter it if the name changes. Let's download the data from your S3 bucket to your notebook instance, where it will appear in the same directory as this notebook. Then we'll take an initial look at the data.
```
raw_data_filename = 'Video_Games_Sales_as_at_22_Dec_2016.csv'
s3 = boto3.resource('s3')
s3.Bucket(bucket).download_file(prefix + '/' + raw_data_filename, 'raw_data.csv')
data = pd.read_csv('./raw_data.csv')
pd.set_option('display.max_rows', 20)
data
```
Before proceeding further, we need to decide upon a target to predict. Video game development budgets can run into the tens of millions of dollars, so it is critical for game publishers to publish "hit" games to recoup their costs and make a profit. As a proxy for what constitutes a "hit" game, we will set a target of greater than 1 million units in global sales.
```
data['y'] = (data['Global_Sales'] > 1)
```
With our target now defined, let's take a look at the imbalance between the "hit" and "not a hit" classes:
```
plt.bar(['not a hit', 'hit'], data['y'].value_counts())
plt.show()
```
Not surprisingly, only a small fraction of games can be considered "hits" under our metric. Next, we'll choose features that have predictive power for our target. We'll begin by plotting review scores versus global sales to check our hunch that such scores have an impact on sales. Logarithmic scale is used for clarity.
```
viz = data.filter(['User_Score','Critic_Score', 'Global_Sales'], axis=1)
viz['User_Score'] = pd.Series(viz['User_Score'].apply(pd.to_numeric, errors='coerce'))
viz['User_Score'] = viz['User_Score'].mask(np.isnan(viz["User_Score"]), viz['Critic_Score'] / 10.0)
viz.plot(kind='scatter', logx=True, logy=True, x='Critic_Score', y='Global_Sales')
viz.plot(kind='scatter', logx=True, logy=True, x='User_Score', y='Global_Sales')
plt.show()
```
Our intuition about the relationship between review scores and sales seems justified. We also note in passing that other relevant features can be extracted from the data set. For example, the ESRB rating has an impact since games with an "E" for everyone rating typically reach a wider audience than games with an age-restricted "M" for mature rating, though depending on another feature, the genre (such as shooter or action), M-rated games also can be huge hits. Our model hopefully will learn these relationships and others.
Next, looking at the columns of features of this data set, we can identify several that should be excluded. For example, there are five columns that specify sales numbers: these numbers are directly related to the target we're trying to predict, so these columns should be dropped. Other features may be irrelevant, such as the name of the game.
```
data = data.drop(['Name', 'Year_of_Release', 'NA_Sales', 'EU_Sales', 'JP_Sales', 'Other_Sales', 'Global_Sales', 'Critic_Count', 'User_Count', 'Developer'], axis=1)
```
With the number of columns reduced, now is a good time to check how many columns are missing data:
```
data.isnull().sum()
```
As noted in Kaggle's overview of this data set, many review ratings are missing. Unfortunately, since those are crucial features that we are relying on for our predictions, and there is no reliable way of imputing so many of them, we'll need to drop rows missing those features.
```
data = data.dropna()
```
Now we need to resolve a problem we see in the User_Score column: it contains some 'tbd' string values, so it obviously is not numeric. User_Score is more properly a numeric rather than categorical feature, so we'll need to convert it from string type to numeric, and temporarily fill in NaNs for the tbds. Next, we must decide what to do with these new NaNs in the User_Score column. We've already thrown out a large number of rows, so if we can salvage these rows, we should. As a first approximation, we'll take the value in the Critic_Score column and divide by 10 since the user scores tend to track the critic scores (though on a scale of 0 to 10 instead of 0 to 100).
```
data['User_Score'] = data['User_Score'].apply(pd.to_numeric, errors='coerce')
data['User_Score'] = data['User_Score'].mask(np.isnan(data["User_Score"]), data['Critic_Score'] / 10.0)
```
Let's do some final preprocessing of the data, including converting the categorical features into numeric using the one-hot encoding method.
```
data['y'] = data['y'].apply(lambda y: 'yes' if y == True else 'no')
model_data = pd.get_dummies(data)
```
To help prevent overfitting the model, we'll randomly split the data into three groups. Specifically, the model will be trained on 70% of the data. It will then be evaluated on 20% of the data to give us an estimate of the accuracy we hope to have on "new" data. As a final testing dataset, the remaining 10% will be held out until the end.
```
train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))])
```
The SageMaker XGBoost algorithm accepts data in libSVM format, which `dump_svmlight_file` produces from the features and the target variable passed as separate arguments. To avoid any misalignment issues due to random reordering, this conversion is done after the train/validation/test split in the cell above. As a last step before training, we'll copy the resulting files to S3 as input for SageMaker's managed training.
```
dump_svmlight_file(X=train_data.drop(['y_no', 'y_yes'], axis=1), y=train_data['y_yes'], f='train.libsvm')
dump_svmlight_file(X=validation_data.drop(['y_no', 'y_yes'], axis=1), y=validation_data['y_yes'], f='validation.libsvm')
dump_svmlight_file(X=test_data.drop(['y_no', 'y_yes'], axis=1), y=test_data['y_yes'], f='test.libsvm')
boto3.Session().resource('s3').Bucket(bucket).Object(prefix + '/train/train.libsvm').upload_file('train.libsvm')
boto3.Session().resource('s3').Bucket(bucket).Object(prefix + '/validation/validation.libsvm').upload_file('validation.libsvm')
```
---
## Train
Our data is now ready to be used to train a XGBoost model. The XGBoost algorithm has many tunable hyperparameters. Some of these hyperparameters are listed below; initially we'll only use a few of them.
- `max_depth`: Maximum depth of a tree. As a cautionary note, a value too small could underfit the data, while increasing it will make the model more complex and thus more likely to overfit the data (in other words, the classic bias-variance tradeoff).
- `eta`: Step size shrinkage used in updates to prevent overfitting.
- `eval_metric`: Evaluation metric(s) for validation data. For data sets such as this one with imbalanced classes, we'll use the AUC metric.
- `scale_pos_weight`: Controls the balance of positive and negative weights, again useful for data sets having imbalanced classes.
First we'll setup the parameters for a training job, then create a training job with those parameters and run it.
```
job_name = 'DEMO-videogames-xgboost-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'xgboost')
create_training_params = \
{
"RoleArn": role,
"TrainingJobName": job_name,
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.c4.xlarge",
"VolumeSizeInGB": 10
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/train".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/validation".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
],
"OutputDataConfig": {
"S3OutputPath": "s3://{}/{}/xgboost-video-games/output".format(bucket, prefix)
},
"HyperParameters": {
"max_depth":"3",
"eta":"0.1",
"eval_metric":"auc",
"scale_pos_weight":"2.0",
"subsample":"0.5",
"objective":"binary:logistic",
"num_round":"100"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 60 * 60
}
}
%%time
sm = boto3.client('sagemaker')
sm.create_training_job(**create_training_params)
status = sm.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
try:
sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)
finally:
status = sm.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print("Training job ended with status: " + status)
if status == 'Failed':
message = sm.describe_training_job(TrainingJobName=job_name)['FailureReason']
print('Training failed with the following error: {}'.format(message))
raise Exception('Training job failed')
```
---
## Host
Now that we've trained the XGBoost algorithm on our data, let's prepare the model for hosting on a SageMaker real-time endpoint. We will:
1. Point to the scoring container
1. Point to the model.tar.gz that came from training
1. Create the hosting model
```
create_model_response = sm.create_model(
ModelName=job_name,
ExecutionRoleArn=role,
PrimaryContainer={
'Image': container,
'ModelDataUrl': sm.describe_training_job(TrainingJobName=job_name)['ModelArtifacts']['S3ModelArtifacts']})
print(create_model_response['ModelArn'])
```
Next, we'll configure our hosting endpoint. Here we specify:
1. EC2 instance type to use for hosting
1. The initial number of instances
1. Our hosting model name
After the endpoint has been configured, we'll create the endpoint itself.
```
xgboost_endpoint_config = 'DEMO-videogames-xgboost-config-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(xgboost_endpoint_config)
create_endpoint_config_response = sm.create_endpoint_config(
EndpointConfigName=xgboost_endpoint_config,
ProductionVariants=[{
'InstanceType': 'ml.t2.medium',
'InitialInstanceCount': 1,
'ModelName': job_name,
'VariantName': 'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
%%time
xgboost_endpoint = 'DEMO-videogames-xgboost-endpoint-' + strftime("%Y%m%d%H%M", gmtime())
print(xgboost_endpoint)
create_endpoint_response = sm.create_endpoint(
EndpointName=xgboost_endpoint,
EndpointConfigName=xgboost_endpoint_config)
print(create_endpoint_response['EndpointArn'])
resp = sm.describe_endpoint(EndpointName=xgboost_endpoint)
status = resp['EndpointStatus']
print("Status: " + status)
try:
sm.get_waiter('endpoint_in_service').wait(EndpointName=xgboost_endpoint)
finally:
resp = sm.describe_endpoint(EndpointName=xgboost_endpoint)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
if status != 'InService':
message = sm.describe_endpoint(EndpointName=xgboost_endpoint)['FailureReason']
print('Endpoint creation failed with the following error: {}'.format(message))
raise Exception('Endpoint creation did not succeed')
```
---
## Evaluation
Now that we have our hosted endpoint, we can generate predictions from it. More specifically, let's generate predictions from our test data set to understand how well our model generalizes to data it has not seen yet.
There are many ways to compare the performance of a machine learning model. We'll start simply by comparing actual to predicted values of whether the game was a "hit" (`1`) or not (`0`). Then we'll produce a confusion matrix, which shows how many test data points were predicted by the model in each category versus how many test data points actually belonged in each category.
```
runtime = boto3.client('runtime.sagemaker')
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [round(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
%%time
import json
with open('test.libsvm', 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, xgboost_endpoint, 'text/x-libsvm')
print ('\nerror rate=%f' % ( sum(1 for i in range(len(preds)) if preds[i]!=labels[i]) /float(len(preds))))
pd.crosstab(index=np.array(labels), columns=np.array(preds))
```
Of the 132 games in the test set that actually are "hits" by our metric, the model correctly identified 73, while the overall error rate is 13%. The number of false negatives versus true positives can be shifted substantially in favor of true positives by increasing the hyperparameter `scale_pos_weight`. Of course, this increase comes at the expense of reduced accuracy/increased error rate and more false positives. How to make this trade-off ultimately is a business decision based on the relative costs of false positives, false negatives, etc.
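To make the trade-off concrete, here is a small sketch (not part of the original notebook) that computes precision and recall for the "hit" class from the `labels` and `preds` lists above; raising `scale_pos_weight` would generally push recall up at the cost of precision.
```
from sklearn.metrics import precision_score, recall_score

# Treat "hit" (label 1) as the positive class
y_true = np.array(labels)
y_pred = np.array(preds)

# Precision: of the games predicted to be hits, how many actually were?
print('precision: %.3f' % precision_score(y_true, y_pred))
# Recall: of the actual hits, how many did the model catch?
print('recall:    %.3f' % recall_score(y_true, y_pred))
```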
---
## Extensions
This XGBoost model is just the starting point for predicting whether a game will be a hit based on reviews and other features. There are several possible avenues for improving the model's performance. First, of course, would be to collect more data and, if possible, fill in the existing missing fields with actual information. Another possibility is further hyperparameter tuning, with Amazon SageMaker's Hyperparameter Optimization service. And, although ensemble learners often do well with imbalanced data sets, it could be worth exploring techniques for mitigating imbalances such as downsampling, synthetic data augmentation, and other approaches.
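As a rough sketch of the downsampling idea (an illustration only, with an arbitrary 3:1 ratio), one could rebalance `model_data` before the train/validation/test split:
```
# Sketch only: downsample the majority ("not a hit") class to roughly 3:1
hits = model_data[model_data['y_yes'] == 1]
non_hits = model_data[model_data['y_yes'] == 0].sample(n=3 * len(hits), random_state=1729)
balanced_data = pd.concat([hits, non_hits]).sample(frac=1, random_state=1729)
print(balanced_data['y_yes'].value_counts())
```
Whether such rebalancing actually helps would need to be validated against the same held-out test set.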
```
sm.delete_endpoint(EndpointName=xgboost_endpoint)
```
# Ray RLlib Multi-Armed Bandits - Linear Thompson Sampling
© 2019-2021, Anyscale. All Rights Reserved

This lesson uses a second exploration strategy we discussed briefly in lesson [02 Exploration vs. Exploitation Strategies](02-Exploration-vs-Exploitation-Strategies.ipynb), _Thompson Sampling_, with a linear variant, [LinTS](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-thompson-sampling-contrib-lints).
## Wheel Bandit
We'll use it on the `Wheel Bandit` problem ([RLlib discrete.py source code](https://github.com/ray-project/ray/blob/master/rllib/contrib/bandits/envs/discrete.py)), which is an artificial problem designed to force exploration. It is described in the paper [Deep Bayesian Bandits Showdown](https://arxiv.org/abs/1802.09127) (see _The Wheel Bandit_ section). The paper uses it to model 2D contexts, but it can be generalized to more than two dimensions.
You can visualize this problem as a wheel (circle) with four other regions around it. An exploration parameter $\delta$ defines a threshold: if the norm of the context vector is less than or equal to $\delta$ (inside the “wheel”), the leader action (conventionally numbered `1`) is taken. Otherwise, the other four actions are explored.
From figure 3 in [Deep Bayesian Bandits Showdown](https://arxiv.org/abs/1802.09127), the Wheel Bandit can be visualized this way:

The radius of the entire colored circle is 1.0, while the radius of the blue "core" is $\delta$.
Contexts are sampled randomly within the unit circle (radius 1.0). The optimal action for the blue, red, green, black, or yellow region is the action 1, 2, 3, 4, or 5, respectively. In other words, if the context is in the blue region, radius < $\delta$, action 1 is optimal, if it is in the upper-right-hand quadrant with radius between $\delta$ and 1.0, then action 2 is optimal, etc.
The parameter $\delta$ controls how aggressively we explore. The reward $r$ for each action and context combination are based on a normal distribution as follows:
Action 1 offers the reward, $r \sim \mathcal{N}({\mu_1,\sigma^2})$, independent of context.
Actions 2-5 offer the reward, $r \sim \mathcal{N}({\mu_2,\sigma^2})$ where $\mu_2 < \mu_1$, _when they are suboptimal choices_. When they are optimal, the reward is $r \sim \mathcal{N}({\mu_3,\sigma^2})$ where $\mu_3 \gg \mu_1$.
In addition to $\delta$, the parameters $\mu_1$, $\mu_2$ $\mu_3$, and $\sigma$ are configurable. The default values for these parameters in the paper and in the [RLlib implementation](https://github.com/ray-project/ray/blob/master/rllib/contrib/bandits/envs/discrete.py) are as follows:
```python
DEFAULT_CONFIG_WHEEL = {
"delta": 0.5,
"mu_1": 1.2,
"mu_2": 1.0,
"mu_3": 50.0,
"std": 0.01 # sigma
}
```
Note that the probability of a context randomly falling in the high-reward region (not blue) is $1 - \delta^2$. Therefore, the difficulty of the problem increases with $\delta$, and algorithms used with this bandit are more likely to get stuck repeatedly selecting action 1 for large $\delta$.
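To make the geometry concrete, here is a small sketch of the sampling rule described above (my own illustration, not RLlib code). Only the mapping of the upper-right quadrant to action 2 is stated explicitly in the description, so the mapping of actions 3-5 to the remaining quadrants is an assumption:
```python
import numpy as np

def sample_context_and_optimal_action(delta=0.5, seed=None):
    rng = np.random.default_rng(seed)
    # Rejection-sample a 2D context uniformly from the unit circle
    while True:
        context = rng.uniform(-1.0, 1.0, size=2)
        if np.linalg.norm(context) <= 1.0:
            break
    # Inside the inner circle of radius delta, the "leader" action 1 is optimal
    if np.linalg.norm(context) <= delta:
        return context, 1
    # Otherwise the optimal action depends on the quadrant (actions 2-5);
    # upper-right -> 2 as in the text, the rest of the mapping is illustrative
    x, y = context
    if x >= 0 and y >= 0:
        return context, 2
    elif x < 0 and y >= 0:
        return context, 3
    elif x < 0 and y < 0:
        return context, 4
    return context, 5

print(sample_context_and_optimal_action(delta=0.5, seed=42))
```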
## Use Wheel Bandit with Thompson Sampling
Note the import in the next cell of `LinTSTrainer` and how it is used below when setting up the _Tune_ job. For the `LinUCB` example in the [previous lesson](04-Linear-Upper-Confidence-bound.ipynb), we didn't import the corresponding `LinUCBTrainer`, but passed a "magic" string to Tune, `contrib/LinUCB`, which RLlib already knows how to associate with the corresponding `LinUCBTrainer` implementation. Passing the class explicitly, as we do here, is an alternative. The [RLlib environments documentation](https://docs.ray.io/en/latest/rllib-env.html) discusses these techniques.
```
import time
import numpy as np
import pandas as pd
import ray
from ray.rllib.contrib.bandits.agents import LinTSTrainer
from ray.rllib.contrib.bandits.agents.lin_ts import TS_CONFIG
from ray.rllib.contrib.bandits.envs import WheelBanditEnv
wbe = WheelBanditEnv()
wbe.config
```
The effective number of training time steps will be `training_iterations * timesteps_per_iteration == 2,000`, since `timesteps_per_iteration` is `100` by default.
```
TS_CONFIG["env"] = WheelBanditEnv
training_iterations = 20
print("Running training for %s time steps" % training_iterations)
```
What's in the standard config object for _LinTS_, anyway?
```
TS_CONFIG
```
Initialize Ray...
```
ray.init(ignore_reinit_error=True)
analysis = ray.tune.run(
LinTSTrainer,
config=TS_CONFIG,
stop={"training_iteration": training_iterations},
num_samples=2,
checkpoint_at_end=True,
verbose=1
)
```
How long did it take?
```
stats = analysis.stats()
secs = stats["timestamp"] - stats["start_time"]
print(f'{secs:7.2f} seconds, {secs/60.0:7.2f} minutes')
```
Analyze cumulative regrets of the trials
```
df = pd.DataFrame()
for key, df_trial in analysis.trial_dataframes.items():
df = df.append(df_trial, ignore_index=True)
regrets = df \
.groupby("info/num_steps_trained")["info/learner/default_policy/cumulative_regret"] \
.aggregate(["mean", "max", "min", "std"])
regrets
regrets.plot(y="mean", title="Cumulative Regrets")
```
As always, here is an [image](../../images/rllib/LinTS-Cumulative-Regret-05.png) from a previous run. How similar is your graph? We have observed a great deal of variability from one run to the next, more than we have seen with _LinUCB_. This suggests that extra caution is required when using _LinTS_ to ensure that good results are achieved.
Here is how you can restore a trainer from a checkpoint:
```
trial = analysis.trials[0]
trainer = LinTSTrainer(config=TS_CONFIG)
trainer.restore(trial.checkpoint.value)
```
Get model to plot arm weights distribution
```
model = trainer.get_policy().model
means = [model.arms[i].theta.numpy() for i in range(5)]
covs = [model.arms[i].covariance.numpy() for i in range(5)]
model, means, covs
```
Plot the weight distributions for the different arms:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
colors = ["blue", "black", "green", "red", "yellow"]
labels = ["arm{}".format(i) for i in range(5)]
for i in range(0, 5):
x, y = np.random.multivariate_normal(means[i] / 30, covs[i], 5000).T
plt.scatter(x, y, color=colors[i])
plt.show()
```
Here's an [image](../../images/rllib/LinTS-Weight-Distribution-of-Arms-05.png) from a previous run. How similar is your graph?
## Exercise 1
Experiment with different $\delta$ values, for example 0.7 and 0.9. What do the cumulative regret and weights graphs look like?
You can set the $\delta$ value like this:
```python
TS_CONFIG["delta"] = 0.7
```
See the [solutions notebook](solutions/Multi-Armed-Bandits-Solutions.ipynb) for discussion of this exercise.
```
ray.shutdown()
```
# Diagnosing validity of causal effects on decision trees
One of the biggest issues in causal inference problems is confounding: the influence that explanatory variables may have on both treatment assignment and the target. To draw conclusions about the causality of a treatment, we must isolate its effect, controlling for the effects of other variables. In a perfect world, we would observe the effect of a treatment on identical individuals, inferring the treatment effect as the difference in their outcomes.
Sounds like an impossible task. However, ML can help us with it, providing us with individuals that are "identical enough". A simple and effective way to do this is using a decision tree. I've [shown before](https://gdmarmerola.github.io/decision-tree-counterfactual/) that decision trees can be good causal inference models with some small adaptations, and tried to make this knowledge accessible through the [cfml_tools](https://github.com/gdmarmerola/cfml_tools) package. They are not the best model out there; however, they show good results and are interpretable, simple and fast.
The methodology is as follows: we build a decision tree to solve a regression or classification problem from explanatory variables `X` to target `y`, and then compare outcomes for every treatment `W` at each leaf node to build counterfactual predictions. It yields good performance on fklearn's [causal inference problem](https://fklearn.readthedocs.io/en/latest/examples/causal_inference.html) out-of-the-box. Recursive partitioning performed by the tree will create clusters with individuals that are "identical enough" and enable us to perform counterfactual predictions.
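In a nutshell, the prediction step boils down to a group-by on leaf membership. A minimal sketch of the idea (not the actual `cfml_tools` implementation) could look like this:
```
# Minimal sketch of the leaf-wise counterfactual idea (not the cfml_tools code):
# fit a tree from X to y, then average the outcome per treatment inside each leaf
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

def leafwise_counterfactuals(X, W, y, min_samples_leaf=100):
    tree = DecisionTreeRegressor(min_samples_leaf=min_samples_leaf).fit(X, y)
    df = pd.DataFrame({'leaf': tree.apply(X), 'W': np.asarray(W), 'y': np.asarray(y)})
    # rows are leaves, columns are treatment levels, values are average outcomes
    return df.groupby(['leaf', 'W'])['y'].mean().unstack()
```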
But that is not always true. Not all partitions are born equal, and thus we need some support to diagnose where inference is valid and where it may be biased. In this Notebook, we'll use the `.run_leaf_diagnostics()` method from `DecisionTreeCounterfactual` which helps us diagnose that, and check if our counterfactual estimates are reasonable and unconfounded.
```
# autoreload
%load_ext autoreload
%autoreload 2
# changing working directory
import sys
sys.path.append("../")
%matplotlib inline
# basics
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# better plots
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
def df_to_markdown(df, float_format='%.2g'):
"""
Export a pandas.DataFrame to markdown-formatted text.
DataFrame should not contain any `|` characters.
"""
from os import linesep
return linesep.join([
'|'.join(df.columns),
'|'.join(4 * '-' for i in df.columns),
df.to_csv(sep='|', index=False, header=False, float_format=float_format)
]).replace('|', ' | ')
```
## Data: `make_confounded_data` from `fklearn`
Nubank's `fklearn` module provides a nice causal inference problem generator, so we're going to use the same data generating process and example from its [documentation](https://fklearn.readthedocs.io/en/latest/examples/causal_inference.html).
```
# run the generator function to create confounded data
from cfml_tools.utils import make_confounded_data
df_rnd, df_obs, df_cf = make_confounded_data(500000)
print(df_to_markdown(df_obs.head(5)))
```
We have five features: `sex`, `age`, `severity`, `medication` and `recovery`. We want to estimate the impact of `medication` on `recovery`. So, our *target* variable is `recovery`, our *treatment* variable is `medication` and the rest are our *explanatory* variables. Additionally, the function outputs three data frames: `df_rnd`, where treatment assingment is random, `df_obs`, where treatment assingment is confounded and `df_cf`, which is the counterfactual dataframe, with the treatment indicator flipped.
The real effect is $\frac{E[y | W=1]}{E[y | W=0]} = \exp(-1) = 0.368$. We use `df_obs` to show the effects of confounding.
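As a quick sanity check (an aside, not part of the original analysis), a naive, unadjusted estimate should land near the true effect on `df_rnd` but drift away from it on `df_obs`:
```
# Naive, unadjusted effect estimate: ratio of mean recovery, treated vs. untreated
def naive_effect(df):
    means = df.groupby(df['medication'].astype(int))['recovery'].mean()
    return means.loc[1] / means.loc[0]

print('naive effect (randomized assignment):', naive_effect(df_rnd))
print('naive effect (confounded assignment):', naive_effect(df_obs))
print('true effect:', np.exp(-1))
```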
## Decision Trees as causal inference models
Let us do a quick refresher on the usage of the `DecisionTreeCounterfactual` method. First, we organize data in `X`, `W` and `y` format:
```
# organizing data into X, W and y
X = df_obs[['sex','age','severity']]
W = df_obs['medication'].astype(int)
y = df_obs['recovery']
```
Then, we fit the counterfactual model. The default setting is a minimum of 100 samples per leaf, which (in my experience) would be reasonable for most cases.
```
# importing cfml-tools
from cfml_tools.tree import DecisionTreeCounterfactual
# instance of DecisionTreeCounterfactual
dtcf = DecisionTreeCounterfactual(save_explanatory=True)
# fitting data to our model
dtcf.fit(X, W, y)
```
We then predict the counterfactuals for all our individuals. We get the dataframe in the `counterfactuals` variable, which predicts outcomes for both `W = 0` and `W = 1`.
We can see some NaNs in the dataframe. That's because for some individuals there are not enough treated or untreated samples at the leaf node to estimate the counterfactuals, controlled by the parameter `min_sample_effect`. When this parameter is high, we are conservative, getting more NaNs but less variance in counterfactual estimation.
```
# let us predict counterfactuals for these guys
counterfactuals = dtcf.predict(X)
counterfactuals.iloc[5:10]
```
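As a quick aside, you can count how many individuals end up without a counterfactual estimate for each treatment column:
```
# count missing counterfactual estimates for each treatment column (0 and 1)
print(counterfactuals['y_hat'].isnull().sum())
```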
We check whether the underlying regression from `X` to `y` generalizes well, with reasonable R2 scores:
```
# validating model using 5-fold CV
cv_scores = dtcf.get_cross_val_scores(X, y)
print(cv_scores)
```
We want to keep this as high as possible, so we can trust that the model "strips away" the effects from `X` to `y`.
Then we can observe the inferred treatment effect, which the model retrieves nicely:
```
# importing matplotlib
import numpy as np
import matplotlib.pyplot as plt
# treatment effects
treatment_effects = counterfactuals['y_hat'][1]/counterfactuals['y_hat'][0]
# plotting effects
plt.style.use('bmh')
plt.figure(figsize=(12,4), dpi=200)
plt.hist(treatment_effects, bins=100);
plt.axvline(np.exp(-1), color='r', label='truth={}'.format(np.round(np.exp(-1), 3)))
plt.axvline(treatment_effects.mean(), color='k', label='predicted={}'.format(np.round(treatment_effects.mean(),3)))
plt.xlim(0.25, 0.50)
plt.legend()
plt.show()
```
The inference is good, but not perfect. We can observe that some estimates are well above the real values. Also, in a real causal inference setting the true effect would not be accessible as an observed quantity. That's why we perform CV only to diagnose the model's generalization power from `X` to `y`. For the treatment effect, we have to trust theory and have a set of diagnostic tools at our disposal. That's why we built `.run_leaf_diagnostics()` :). We'll show it in the next section.
## Digging deeper with *leaf diagnostics*
The `.run_leaf_diagnostics()` method provides valuable information to diagnose the countefactual predictions of the model. It performs analysis over the leaf nodes, testing if they really are the clusters containing the "identical enough" individuals we need.
We run the method and get the following dataframe:
```
# running leaf diagnostics
leaf_diagnostics_df = dtcf.run_leaf_diagnostics()
leaf_diagnostics_df.head()
```
The dataframe provides a diagnostic on leaves with enough samples for counterfactual inference, showing some interesting quantities:
* average outcomes across treatments (`avg_outcome`)
* explanatory variable distribution across treatments (`percentile_*` variables)
* a confounding score for each variable, meaning how much we can predict the treatment from explanatory variables inside leaf nodes using a linear model (`confounding_score`)
The most important column is the `confounding_score`, which tells us whether treatment assignment within a leaf can be predicted from the explanatory variables (i.e., whether it is not random). It is actually the AUC of a linear model that tries to tell treated from untreated individuals apart within each leaf. Let us check how our model fares on this score:
```
# avg confounding score
confounding_score_mean = leaf_diagnostics_df['confounding_score'].median()
# plotting
plt.figure(figsize=(12,6), dpi=120)
plt.axvline(
confounding_score_mean,
linestyle='dashed',
color='black',
label=f'Median confounding score: {confounding_score_mean:.3f}'
)
plt.hist(leaf_diagnostics_df['confounding_score'], bins=100);
plt.legend()
```
Cool. This is a good result, as within more than half of the leaves treated and untreated individuals are virtually indistinguishable (AUC less than 0.6). However, we can see that some leaves have a high confounding score. In those cases, our individuals may not be "identical enough" and inference may be biased.
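For intuition, the confounding score of a single leaf could be computed roughly like this (a sketch of the idea, not the exact `cfml_tools` implementation; it uses the training dataframe stored by `save_explanatory=True`):
```
# Rough sketch: within one leaf, try to predict the treatment from the
# explanatory variables and report the AUC of that linear classifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def leaf_confounding_score(leaf_df):
    X_leaf = leaf_df[['sex', 'age', 'severity']]
    W_leaf = leaf_df['W']
    probs = LogisticRegression(max_iter=1000).fit(X_leaf, W_leaf).predict_proba(X_leaf)[:, 1]
    return roc_auc_score(W_leaf, probs)

# e.g.: leaf_confounding_score(dtcf.train_df.loc[lambda x: x.leaf == 7678])
```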
We actually can measure how biased inference can be. Let us plot how the confounding score impacts the estimated effect.
```
# adding effect to df
leaf_diagnostics_df = (
leaf_diagnostics_df
.assign(effect=lambda x: x['avg_outcome'][1]/x['avg_outcome'][0])
)
# real effect of fklearn's toy problem
real_effect = np.exp(-1)
# plotting
fig, ax = plt.subplots(1, 1, figsize=(12,6), dpi=120)
plt.axhline(
real_effect,
linestyle='dashed',
color='black',
label='True effect'
)
leaf_diagnostics_df.plot(
x='confounding_score',
y='effect',
kind='scatter',
color='tomato',
s=10,
alpha=0.5,
ax=ax
)
plt.legend()
```
As we can see, there's a slight bias that presents itself with higher confounding scores:
```
effect_low_confounding = (
leaf_diagnostics_df
.loc[lambda x: x.confounding_score < 0.6]
['effect'].mean()
)
effect_high_confounding = (
leaf_diagnostics_df
.loc[lambda x: x.confounding_score > 0.8]
['effect'].mean()
)
print(f'Effect for leaves with confounding score < 0.6: {effect_low_confounding:.3f}')
print(f'Effect for leaves with confounding score > 0.8: {effect_high_confounding:.3f}')
```
Let's examine the leaf with highest confounding score:
```
# leaf with highest confounding score
leaf_diagnostics_df.sort_values('effect', ascending=False).head(1)
```
It shows a confounding score of 0.983, so we can almost perfectly tell treated and untreated individuals apart using the explanatory variables. The effect is grossly underestimated, as $\log(0.665) = -0.408$ versus the true $-1$. Checking the feature percentiles, we can see a big difference in `severity` and `sex` between treated and untreated individuals.
We can check these differences further by leveraging the stored training dataframe and building a boxplot:
```
# getting the individuals from biased leaf
biased_leaf_samples = dtcf.train_df.loc[lambda x: x.leaf == 7678]
# plotting
fig, ax = plt.subplots(1, 4, figsize=(16, 5), dpi=150)
biased_leaf_samples.boxplot('age','W', ax=ax[0])
biased_leaf_samples.boxplot('sex','W', ax=ax[1])
biased_leaf_samples.boxplot('severity','W', ax=ax[2])
biased_leaf_samples.boxplot('y','W', ax=ax[3])
```
As we can see, ages are comparable but `sex` and `severity` are not. In an unbiased leaf, such as 263 (with a confounding score of 0.54), the boxplots show much more homogeneous populations:
```
# getting the individuals from an unbiased leaf
unbiased_leaf_samples = dtcf.train_df.loc[lambda x: x.leaf == 263]
# plotting
fig, ax = plt.subplots(1, 4, figsize=(16, 5), dpi=150)
unbiased_leaf_samples.boxplot('age','W', ax=ax[0])
unbiased_leaf_samples.boxplot('sex','W', ax=ax[1])
unbiased_leaf_samples.boxplot('severity','W', ax=ax[2])
unbiased_leaf_samples.boxplot('y','W', ax=ax[3])
```
And that's it! I hope this post helps you in your journey to accurate, unbiased counterfactual predictions. All feedback is appreciated!
# Solving the $\mathbb{p}^\top$ integral
In this notebook we validate our expressions for the vector $\mathbb{p}^\top$.
```
%matplotlib inline
%run notebook_setup.py
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import quad
from IPython.display import Latex
from scipy.special import binom
np.seterr(invalid="ignore", divide="ignore");
```
## Limits of integration
In the paper, we presented expressions for the points of intersection between the occultor limb and the occulted limb, and between the occultor limb and the terminator, both parameterized in terms of the angle $\phi$:
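For the first branch, a quick derivation: the occulted limb satisfies $x^2 + y^2 = 1$ and the occultor limb satisfies $x^2 + (y - b_o)^2 = r_o^2$. Subtracting the two equations gives $y = \frac{1 - r_o^2 + b_o^2}{2 b_o}$, and since points on the occultor limb are parameterized as $(x, y) = (r_o\cos\phi,\, b_o + r_o\sin\phi)$, the intersections satisfy

$$\sin\phi = \frac{1 - b_o^2 - r_o^2}{2 b_o r_o},$$

which is exactly the argument of the `arcsin` in `compute_phi` below. The occultor/terminator intersection has no such simple closed form, which is why the code solves a quartic in $x$.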
```
def compute_phi(b, theta, bo, ro):
"""
Return the limits of integration for the pT integral.
Note that we're in the F' frame, so theta is actually theta'.
"""
# Occultor/occulted intersection
sign = np.array([1, -1])
phi0 = 0.5 * np.pi + sign * (
np.arcsin((1 - ro ** 2 - bo ** 2) / (2 * bo * ro)) - 0.5 * np.pi
)
# Occultor/terminator intersection
# Must solve a quartic!
xo = bo * np.sin(theta)
yo = bo * np.cos(theta)
A = (1 - b ** 2) ** 2
B = -4 * xo * (1 - b ** 2)
C = -2 * (
b ** 4
+ ro ** 2
- 3 * xo ** 2
- yo ** 2
- b ** 2 * (1 + ro ** 2 - xo ** 2 + yo ** 2)
)
D = -4 * xo * (b ** 2 - ro ** 2 + xo ** 2 + yo ** 2)
E = (
b ** 4
- 2 * b ** 2 * (ro ** 2 - xo ** 2 + yo ** 2)
+ (ro ** 2 - xo ** 2 - yo ** 2) ** 2
)
x = np.roots([A, B, C, D, E])
# Exclude imaginary roots
x = np.array([xi.real for xi in x if np.abs(xi.imag) < 1e-8])
# Exclude roots not on the terminator
y = b * np.sqrt(1 - x ** 2)
xprime = x * np.cos(theta) - y * np.sin(theta)
yprime = x * np.sin(theta) + y * np.cos(theta)
good = np.abs(xprime ** 2 + (yprime - bo) ** 2 - ro ** 2) < 1e-8
x = x[good]
phi1 = theta + np.arctan2(b * np.sqrt(1 - x ** 2) - yo, x - xo)
return np.append(phi0, phi1)
```
Let's verify that these expressions give us the correct points of intersection for the example in Figure 16:
```
def plot(b, theta, bo, ro, phi):
"""Plot the occultor, the occulted body, and the day/night terminator in frame F'."""
# Equation of a rotated ellipse
x0 = np.linspace(-1, 1, 1000)
y0 = b * np.sqrt(1 - x0 ** 2)
x = x0 * np.cos(theta) - y0 * np.sin(theta)
y = x0 * np.sin(theta) + y0 * np.cos(theta)
# Plot the curves
fig, ax = plt.subplots(1, figsize=(4, 4))
ax.add_artist(plt.Circle((0, 0), 1, ec="k", fc="none"))
ax.add_artist(plt.Circle((0, bo), ro, ec="k", fc="none"))
ax.plot(x, y)
# Indicate the angles
ax.plot(0, bo, "k.")
ax.plot([0, ro], [bo, bo], "k--", lw=1)
for i, phi_i in enumerate(phi):
x = ro * np.cos(phi_i)
y = bo + ro * np.sin(phi_i)
ax.plot([0, x], [bo, y], "k-", lw=1, alpha=0.5)
ax.plot(x, y, "C1o")
ax.annotate(
i + 1,
xy=(x, y),
xycoords="data",
xytext=(20 * x, 20 * (y - bo)),
textcoords="offset points",
va="center",
ha="center",
)
# Appearance
ax.set_aspect(1)
ax.set_xlim(-1.01, 1.01)
ax.set_ylim(-1.01, 1.25)
ax.axis("off")
return ax
# input
b = 0.5
theta = 75 * np.pi / 180
bo = 0.5
ro = 0.7
# compute the angles
phi = compute_phi(b, theta, bo, ro)
# Plot
ax = plot(b, theta, bo, ro, phi);
```
The angle $\phi$ is measured counter-clockwise from the line $y = b_o$ (dashed line in the plot). Note that one of the points of intersection (#1) isn't relevant in our case, since it's on the nightside of the planet. The integral we want to compute is the line integral along the occultor limb between points #2 and #3.
```
phi = phi[[1, 2]]
```
Here are the two angles for future reference:
```
for i in range(2):
display(Latex(r"${:.2f}^\circ$".format(phi[i] * 180 / np.pi)))
```
## Numerical evaluation
Now that we can compute `phi`, let's borrow some code from [Greens.ipynb](Greens.ipynb), where we showed how to evaluate $\mathbb{p}^\top$ numerically via Green's theorem. We'll compare our analytic solution to this numerical version.
```
def G(n):
"""
Return the anti-exterior derivative of the nth term of the Green's basis.
This is a two-dimensional (Gx, Gy) vector of functions of x and y.
"""
# Get the mu, nu indices
l = int(np.floor(np.sqrt(n)))
m = n - l * l - l
mu = l - m
nu = l + m
# NOTE: The abs prevents NaNs when the argument of the sqrt is
# zero but floating point error causes it to be ~ -eps.
z = lambda x, y: np.maximum(1e-12, np.sqrt(np.abs(1 - x ** 2 - y ** 2)))
if nu % 2 == 0:
G = [lambda x, y: 0, lambda x, y: x ** (0.5 * (mu + 2)) * y ** (0.5 * nu)]
elif (l == 1) and (m == 0):
def G0(x, y):
z_ = z(x, y)
if z_ > 1 - 1e-8:
return -0.5 * y
else:
return (1 - z_ ** 3) / (3 * (1 - z_ ** 2)) * (-y)
def G1(x, y):
z_ = z(x, y)
if z_ > 1 - 1e-8:
return 0.5 * x
else:
return (1 - z_ ** 3) / (3 * (1 - z_ ** 2)) * x
G = [G0, G1]
elif (mu == 1) and (l % 2 == 0):
G = [lambda x, y: x ** (l - 2) * z(x, y) ** 3, lambda x, y: 0]
elif (mu == 1) and (l % 2 != 0):
G = [lambda x, y: x ** (l - 3) * y * z(x, y) ** 3, lambda x, y: 0]
else:
G = [
lambda x, y: 0,
lambda x, y: x ** (0.5 * (mu - 3))
* y ** (0.5 * (nu - 1))
* z(x, y) ** 3,
]
return G
def primitive(x, y, dx, dy, theta1, theta2, n=0):
"""A general primitive integral computed numerically."""
def func(theta):
Gx, Gy = G(n)
return Gx(x(theta), y(theta)) * dx(theta) + Gy(x(theta), y(theta)) * dy(theta)
res, _ = quad(func, theta1, theta2, epsabs=1e-12, epsrel=1e-12)
return res
def pT_numerical(deg, phi, bo, ro):
"""Compute the pT vector numerically from its integral definition."""
N = (deg + 1) ** 2
pT = np.zeros(N)
for n in range(N):
for phi1, phi2 in phi.reshape(-1, 2):
x = lambda phi: ro * np.cos(phi)
y = lambda phi: bo + ro * np.sin(phi)
dx = lambda phi: -ro * np.sin(phi)
dy = lambda phi: ro * np.cos(phi)
pT[n] += primitive(x, y, dx, dy, phi1, phi2, n)
return pT
```
## Analytic evaluation
Now let's code up the expression derived in the paper in terms of the vectors $\mathbb{i}$, $\mathbb{j}$, $\mathbb{u}$, and $\mathbb{w}$. We show how to evaluate these vectors analytically in the notebooks
[I.ipynb](I.ipynb), [J.ipynb](J.ipynb), [U.ipynb](U.ipynb), and [W.ipynb](W.ipynb), respectively, so here we'll evaluate them numerically for simplicity.
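Concretely, reading off the quadrature routines below, with $\alpha = \phi/2 + \pi/4$ and $k^2 = \frac{1 - b_o^2 - r_o^2 + 2 b_o r_o}{4 b_o r_o}$ the helper integrals take the form

$$i_v = \int \sin^{2v}\alpha \,\mathrm{d}\alpha, \qquad j_v = \int \sin^{2v}\alpha \left(1 - \frac{\sin^2\alpha}{k^2}\right)^{\frac{3}{2}} \mathrm{d}\alpha,$$

$$u_v = \int \cos\alpha \,\sin^{2v+1}\alpha \,\mathrm{d}\alpha, \qquad w_v = \int \cos\alpha \,\sin^{2v+1}\alpha \left(1 - \frac{\sin^2\alpha}{k^2}\right)^{\frac{3}{2}} \mathrm{d}\alpha,$$

each evaluated over the $\alpha$ intervals defined by consecutive pairs of the angles in `phi`.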
```
def A(u, v, i, bo, ro):
"""Compute the Vieta coefficient A_{u, v, i}(bo, ro)."""
j1 = max(0, u - i)
j2 = min(u + v - i, u)
delta = (bo - ro) / (2 * ro)
return sum(
[
float(binom(u, j))
* float(binom(v, u + v - i - j))
* (-1) ** (u + j)
* delta ** (u + v - i - j)
for j in range(j1, j2 + 1)
]
)
def V(u, v, w, bo, ro, x):
"""Compute the Vieta operator V(x)."""
res = 0
u = int(u)
v = int(v)
w = int(w)
for i in range(u + v + 1):
res += A(u, v, i, bo, ro) * x[u + w + i]
return res
def computeI(vmax, phi):
"""
The vector i evaluated by direct numerical integration.
"""
alpha = 0.5 * phi + 0.25 * np.pi
i = np.zeros(vmax + 1)
for v in range(vmax + 1):
for k in range(0, len(alpha), 2):
func = lambda x: np.sin(x) ** (2 * v)
i[v] += quad(func, alpha[k], alpha[k + 1], epsabs=1e-12, epsrel=1e-12,)[0]
return i
def computeJ(vmax, bo, ro, phi):
"""
The vector j evaluated by direct numerical integration.
"""
alpha = 0.5 * phi + 0.25 * np.pi
k2 = (1 - bo ** 2 - ro ** 2 + 2 * bo * ro) / (4 * bo * ro)
j = np.zeros(vmax + 1)
for v in range(vmax + 1):
for i in range(0, len(alpha), 2):
func = lambda x: np.sin(x) ** (2 * v) * (1 - np.sin(x + 0j) ** 2 / k2) ** 1.5
j[v] += quad(func, alpha[i], alpha[i + 1], epsabs=1e-12, epsrel=1e-12,)[0]
return j
def computeU(vmax, phi):
"""
The vector u evaluated by direct numerical integration.
"""
alpha = 0.5 * phi + 0.25 * np.pi
u = np.zeros(vmax + 1)
for v in range(vmax + 1):
for i in range(0, len(alpha), 2):
func = lambda x: np.cos(x) * np.sin(x) ** (2 * v + 1)
u[v] += quad(func, alpha[i], alpha[i + 1], epsabs=1e-12, epsrel=1e-12,)[0]
return u
def computeW(vmax, bo, ro, phi):
"""
The vector w evaluated by direct numerical integration.
"""
alpha = 0.5 * phi + 0.25 * np.pi
k2 = (1 - bo ** 2 - ro ** 2 + 2 * bo * ro) / (4 * bo * ro)
w = np.zeros(vmax + 1)
for v in range(vmax + 1):
for i in range(0, len(alpha), 2):
func = (
lambda x: np.cos(x)
* np.sin(x) ** (2 * v + 1)
* (1 + 0j - np.sin(x) ** 2 / k2) ** 1.5
)
w[v] += quad(func, alpha[i], alpha[i + 1], epsabs=1e-12, epsrel=1e-12,)[0]
return w
def pT(deg, phi, bo, ro):
"""
Compute the pT integral, evaluated in terms of the integrals i, j, u, and w.
"""
# Initialize
N = (deg + 1) ** 2
pT = np.zeros(N) * np.nan
# Pre-compute the helper integrals
I = computeI(deg + 3, phi)
J = computeJ(deg + 1, bo, ro, phi)
U = computeU(2 * deg + 5, phi)
W = computeW(deg, bo, ro, phi)
# Pre-compute the p2 term
p2 = 0
for i in range(0, len(phi), 2):
def func(phi):
sinphi = np.sin(phi)
z = np.sqrt(1 - ro ** 2 - bo ** 2 - 2 * bo * ro * sinphi)
return (1.0 - z ** 3) / (1.0 - z ** 2) * (ro + bo * sinphi) * ro / 3.0
p2 += quad(func, phi[i], phi[i + 1], epsabs=1e-12, epsrel=1e-12,)[0]
for n in range(N):
# Get the mu, nu indices
l = int(np.floor(np.sqrt(n)))
m = n - l * l - l
mu = l - m
nu = l + m
# Cases!
if mu % 2 == 0:
c = 2 * (2 * ro) ** (l + 2)
if (mu / 2) % 2 == 0:
pT[n] = c * V((mu + 4) / 4, nu / 2, 0, bo, ro, I)
elif (mu / 2) % 2 != 0:
pT[n] = c * V((mu + 2) / 4, nu / 2, 0, bo, ro, U)
elif mu == nu == 1:
pT[n] = p2
else:
beta = (1 - (bo - ro) ** 2) ** 1.5
c = beta * (2 * ro) ** (l - 1)
if mu == 1:
if l % 2 == 0:
pT[n] = c * (
V((l - 2) / 2, 0, 0, bo, ro, J)
- 2 * V((l - 2) / 2, 0, 1, bo, ro, J)
)
elif ((l % 2) != 0) and (l != 1):
pT[n] = c * (
V((l - 3) / 2, 1, 0, bo, ro, J)
- 2 * V((l - 3) / 2, 1, 1, bo, ro, J)
)
elif mu > 1:
if ((mu - 1) / 2) % 2 == 0:
pT[n] = c * 2 * V((mu - 1) / 4, (nu - 1) / 2, 0, bo, ro, J)
elif ((mu - 1) / 2) % 2 != 0:
pT[n] = c * 2 * V((mu - 1) / 4, (nu - 1) / 2, 0, bo, ro, W)
return pT
```
Finally, let's show that the numerical and analytic expressions agree up to degree 5 for the example in Figure 16:
```
plt.plot(pT(5, phi, bo, ro), label="analytic")
plt.plot(pT_numerical(5, phi, bo, ro), "C1o", label="numerical")
plt.legend()
plt.xlabel(r"$n$", fontsize=20)
plt.ylabel(r"$\mathbb{p}_n^\top$", fontsize=20);
```
The difference between the two is at the level of machine precision:
```
plt.plot(np.abs(pT(5, phi, bo, ro) - pT_numerical(5, phi, bo, ro)), "k-")
plt.xlabel(r"$n$", fontsize=20)
plt.yscale("log")
plt.ylabel(r"difference", fontsize=20);
```
NOTE: Before running this notebook, you may need to install the packages below if using JupyterLab:
```bash
conda install -c conda-forge nodejs
jupyter labextension install @jupyter-widgets/jupyterlab-manager
```
```
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import spectrum
from pyleoclim import Spectral
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter, FormatStrFormatter
import LMRt
from tqdm import tqdm
from IPython.display import display
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
def run_wwz(series, xlim=[0, 0.05], factor=1e3, label='WWZ', c=1e-3, loglog=False, title=None):
time = series.index.values
signal = series.values
tau = np.linspace(np.min(time), np.max(time), 21)
res_psd = Spectral.wwz_psd(signal, time, tau=tau, c=c, nMC=0, standardize=False)
med = np.nanmedian(res_psd.psd)
print('median PSD:', med)
threshold = factor*med
print('thread PSD:', threshold)
activated_freqs = res_psd.freqs[res_psd.psd > threshold]
print('window:', np.max(activated_freqs) - np.min(activated_freqs))
sns.set(style='ticks', font_scale=1.5)
fig = plt.figure(figsize=[10, 10])
ax_signal = plt.subplot(2, 1, 1)
ax_signal.plot(time, signal, label='signal')
ax_signal.spines['right'].set_visible(False)
ax_signal.spines['top'].set_visible(False)
ax_signal.set_ylabel('Value')
ax_signal.set_xlabel('Time')
if title:
ax_signal.set_title(title)
ax_spec = plt.subplot(2, 1, 2)
if loglog:
ax_spec.loglog(res_psd.freqs, res_psd.psd, lw=3, label=label)
else:
ax_spec.plot(res_psd.freqs, res_psd.psd, lw=3, label=label)
ax_spec.set_xlim(xlim)
ax_spec.axhline(y=threshold, ls='--', lw=2, color='k')
ax_spec.get_xaxis().set_major_formatter(ScalarFormatter())
ax_spec.xaxis.set_major_formatter(FormatStrFormatter('%g'))
ax_spec.set_ylabel('Spectral Density')
ax_spec.set_xlabel('Frequency')
ax_spec.legend(frameon=False)
ax_spec.spines['right'].set_visible(False)
ax_spec.spines['top'].set_visible(False)
# return fig, res_psd
# create a DataFrame to store the data
df = pd.DataFrame()
time1 = np.arange(1000)
f1 = 1/50
signal1 = np.cos(2*np.pi*f1*time1)
time2 = np.arange(1000, 2001)
f2 = 1/60
signal2 = np.cos(2*np.pi*f2*time2)
signal = np.concatenate([signal1, signal2])
time = np.concatenate([time1, time2])
# series = pd.Series(signal, index=time)  # two-segment version: 1000 yrs at f=1/50 followed by 1000 yrs at f=1/60
series = pd.Series(signal1, index=time1)  # NOTE: only the first, single-frequency segment is analyzed below
df['two_freqs'] = series
def interact_wwz(c, factor):
print(f'c={c}, factor={factor}')
return run_wwz(series, c=c, factor=factor)
c = widgets.FloatSlider(min=1e-4, max=0.01, step=1e-4)
factor = widgets.FloatSlider(min=1e3, max=1e5, step=1e3)
widgets.interact(interact_wwz, c=c, factor=factor)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data = pd.read_csv("../data/phones.csv")
data.info()
data = data.drop(['Unnamed: 0', 'Beşinci Arka Kamera Çözünürlüğü','2.Yardımcı İşlemci','İkinci Ön Kamera Çözünürlüğü',
'Kamera Zoom','1.Yardımcı İşlemci','Suya Dayanıklılık Seviyesi','Toza Dayanıklılık Seviyesi'], axis=1)
data.columns
data.head(10)
plt.figure(figsize=(16,16))
plt.subplot(2,1,1)
sns.countplot(x='Marka',data = data,order = data['Marka'].value_counts().index)
plt.xticks(rotation = 90)
plt.show()
# Prices are Turkish-formatted strings (e.g. "1.299,00"): strip the decimal part, then the thousands separators
data["Price"] = data.Price.str.replace(r",\d+$", "", regex=True)
data["Price"] = data.Price.str.replace(".", "", regex=False)
data["Price"] = data.Price.astype(float)
data["Price"]
data["Ön Kamera Çözünürlüğü"]
data=data.rename(columns = {'Ön Kamera Çözünürlüğü':'On_Kamera_cozunurlu','Arka Kamera Çözünürlüğü':'Arka_Kamera_Cozunurlugu','Bellek Kapasitesi':'Bellek_Kapasitesi','Batarya Kapasitesi':'Batarya_Kapasitesi','Ekran Boyutu':'Ekran_Boyutu','Ekran Çözünürlüğü':'Ekran_Cozunurlugu'})
data["On_Kamera_cozunurlu"] = data.On_Kamera_cozunurlu.str.replace('MP', '')
data["Arka_Kamera_Cozunurlugu"]
data["Arka_Kamera_Cozunurlugu"] = data.Arka_Kamera_Cozunurlugu.str.replace('MP', '')
data["Bellek_Kapasitesi"]
data["Bellek_Kapasitesi"] = data.Bellek_Kapasitesi.str.replace('GB', '')
data["Batarya_Kapasitesi"]
data["Batarya_Kapasitesi"] = data.Batarya_Kapasitesi.str.replace('mAh', '')
data["Batarya_Kapasitesi"] = data.Batarya_Kapasitesi.str.replace('mAH ve üzeri', '')
data["Dahili_Hafiza"]
data["On_Kamera_cozunurlu"] = data.On_Kamera_cozunurlu.astype(float)
data["Arka_Kamera_Cozunurlugu"] = data.Arka_Kamera_Cozunurlugu.astype(float)
data["Bellek_Kapasitesi"] = data.Bellek_Kapasitesi.astype(float)
data["Batarya_Kapasitesi"] = data.Batarya_Kapasitesi.astype(float)
data["Dahili_Hafiza"] = data.Dahili_Hafiza.astype(float)
plt.figure(figsize=(16,16))
plt.subplot(2,1,1)
sns.countplot(x='İşletim Sistemi',data = data,order = data['İşletim Sistemi'].value_counts().index)
plt.xticks(rotation = 90)
plt.show()
print("Max Fiyat: "+str(data["Price"].max()).replace('0.0', '') + " TL --> " + str(data["URL"].loc[data["Price"].idxmax()]))
print("Max Ön Kamera: "+str(data["On_Kamera_cozunurlu"].max()) + " MP --> " + str(data["URL"].loc[data["On_Kamera_cozunurlu"].idxmax()]))
print("Max Arka Kamera: "+str(data["Arka_Kamera_Cozunurlugu"].max()) + " MP --> "+ str(data["URL"].loc[data["Arka_Kamera_Cozunurlugu"].idxmax()]))
print("Max Bellek Kapasitesi: "+str(data["Bellek_Kapasitesi"].max()) + " GB --> "+ str(data["URL"].loc[data["Bellek_Kapasitesi"].idxmax()]))
print("Max Batarya Kapasitesi: "+str(data["Batarya_Kapasitesi"].max()) + " mAh --> "+ str(data["URL"].loc[data["Batarya_Kapasitesi"].idxmax()]))
print("Max Dahili Hafıza: "+str(data["Dahili_Hafiza"].max()) + " GB --> "+ str(data["URL"].loc[data["Dahili_Hafiza"].idxmax()]))
print("Min Fiyat: "+str(data["Price"].min()).replace('0.0', '') + " TL --> " + str(data["URL"].loc[data["Price"].idxmin()]))
print("Min Ön Kamera: "+str(data["On_Kamera_cozunurlu"].min()) + " MP --> " + str(data["URL"].loc[data["On_Kamera_cozunurlu"].idxmin()]))
print("Min Arka Kamera: "+str(data["Arka_Kamera_Cozunurlugu"].min()) + " MP --> "+ str(data["URL"].loc[data["Arka_Kamera_Cozunurlugu"].idxmin()]))
print("Min Bellek Kapasitesi: "+str(data["Bellek_Kapasitesi"].min()) + " GB --> "+ str(data["URL"].loc[data["Bellek_Kapasitesi"].idxmin()]))
print("Min Batarya Kapasitesi: "+str(data["Batarya_Kapasitesi"].min()) + " mAh --> "+ str(data["URL"].loc[data["Batarya_Kapasitesi"].idxmin()]))
print("Min Dahili Hafıza: "+str(data["Dahili_Hafiza"].min()) + " GB --> "+ str(data["URL"].loc[data["Dahili_Hafiza"].idxmin()]))
```
# Retrieve Chatbot
## Chatbot using the Poly-encoder Transformer architecture (Humeau et al., 2019) for retrieval
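In a nutshell, and as a rough paraphrase of Humeau et al. (2019) and of the implementation below: the context tokens are summarized into a small set of "code" vectors by attending learned query embeddings over the context encoder's outputs; each candidate response is reduced to a single vector (its `[CLS]` output); the candidate then attends over the codes to produce a context vector, and the retrieval score is the dot product between the two, roughly $\mathrm{score}(c, r) = y_r \cdot \mathrm{Attn}\big(y_r, \{y^{c}_1, \dots, y^{c}_m\}\big)$. Training treats the other responses in the batch as negatives.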
```
# This notebook is based on :
# https://aritter.github.io/CS-7650/
# This Project was developed at the Georgia Institute of Technology by Ashutosh Baheti ([email protected]),
# borrowing from the Neural Machine Translation Project (Project 2)
# of the UC Berkeley NLP course https://cal-cs288.github.io/sp20/
# Imports
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import numpy as np
import csv
import random
import re
import os
import logging  # used by make_dir_if_not_exists below
import unicodedata
import codecs
from io import open
import itertools
import math
import pickle
import statistics
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence
import tqdm
import nltk
import gc
gc.collect()
import pandas as pd
import numpy as np
import sys
from functools import partial
import time
bert_model_name = 'distilbert-base-uncased'
# Bert Imports
from transformers import DistilBertTokenizer, DistilBertModel
#bert_model = DistilBertModel.from_pretrained(bert_model_name)
tokenizer = DistilBertTokenizer.from_pretrained(bert_model_name)
# Utils
def make_dir_if_not_exists(directory):
if not os.path.exists(directory):
logging.info("Creating new directory: {}".format(directory))
os.makedirs(directory)
def print_list(l, K=None):
# If K is given then only print first K
for i, e in enumerate(l):
if i == K:
break
print(e)
print()
def remove_multiple_spaces(string):
return re.sub(r'\s+', ' ', string).strip()
def save_in_pickle(save_object, save_file):
with open(save_file, "wb") as pickle_out:
pickle.dump(save_object, pickle_out)
def load_from_pickle(pickle_file):
with open(pickle_file, "rb") as pickle_in:
return pickle.load(pickle_in)
def save_in_txt(list_of_strings, save_file):
with open(save_file, "w") as writer:
for line in list_of_strings:
line = line.strip()
writer.write(f"{line}\n")
def load_from_txt(txt_file):
with open(txt_file, "r") as reader:
all_lines = list()
for line in reader:
line = line.strip()
all_lines.append(line)
return all_lines
# Check CUDA
print(torch.cuda.is_available())
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print("Using device:", device)
```
## Load Data
### Cornell Movie Database
```
# Loading the pre-processed conversational exchanges (source-target pairs) from pickle data files
all_conversations = load_from_pickle("../data/cornell_movie/processed_CMDC.pkl")
# Extract 100 conversations from the end for evaluation and keep the rest for training
eval_conversations = all_conversations[-100:]
all_conversations = all_conversations[:-100]
# Logging data stats
print(f"Number of Training Conversation Pairs = {len(all_conversations)}")
print(f"Number of Evaluation Conversation Pairs = {len(eval_conversations)}")
```
#### Building the vocabulary
```
pad_word = "<pad>"
bos_word = "<s>"
eos_word = "</s>"
unk_word = "<unk>"
pad_id = 0
bos_id = 1
eos_id = 2
unk_id = 3
def normalize_sentence(s):
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
s = re.sub(r"\s+", r" ", s).strip()
return s
class Vocabulary:
def __init__(self):
self.word_to_id = {pad_word: pad_id, bos_word: bos_id, eos_word:eos_id, unk_word: unk_id}
self.word_count = {}
self.id_to_word = {pad_id: pad_word, bos_id: bos_word, eos_id: eos_word, unk_id: unk_word}
self.num_words = 4
def get_ids_from_sentence(self, sentence):
sentence = normalize_sentence(sentence)
sent_ids = [bos_id] + [self.word_to_id[word] if word in self.word_to_id \
else unk_id for word in sentence.split()] + \
[eos_id]
return sent_ids
def tokenized_sentence(self, sentence):
sent_ids = self.get_ids_from_sentence(sentence)
return [self.id_to_word[word_id] for word_id in sent_ids]
def decode_sentence_from_ids(self, sent_ids):
words = list()
for i, word_id in enumerate(sent_ids):
if word_id in [bos_id, eos_id, pad_id]:
# Skip these words
continue
else:
words.append(self.id_to_word[word_id])
return ' '.join(words)
def add_words_from_sentence(self, sentence):
sentence = normalize_sentence(sentence)
for word in sentence.split():
if word not in self.word_to_id:
# add this word to the vocabulary
self.word_to_id[word] = self.num_words
self.id_to_word[self.num_words] = word
self.word_count[word] = 1
self.num_words += 1
else:
# update the word count
self.word_count[word] += 1
vocab = Vocabulary()
for src, tgt in all_conversations:
vocab.add_words_from_sentence(src)
vocab.add_words_from_sentence(tgt)
print(f"Total words in the vocabulary = {vocab.num_words}")
```
## Dataset Preparation
```
def transformer_collate_fn(batch, tokenizer):
bert_vocab = tokenizer.get_vocab()
bert_pad_token = bert_vocab['[PAD]']
bert_unk_token = bert_vocab['[UNK]']
bert_cls_token = bert_vocab['[CLS]']
inputs, masks_input, outputs, masks_output = [], [], [], []
for data in batch:
tokenizer_input = tokenizer([data[0]])
tokenized_sent = tokenizer_input['input_ids'][0]
mask_input = tokenizer_input['attention_mask'][0]
inputs.append(torch.tensor(tokenized_sent))
tokenizer_output = tokenizer([data[1]])
tokenized_sent = tokenizer_output['input_ids'][0]
mask_output = tokenizer_output['attention_mask'][0]
outputs.append(torch.tensor(tokenized_sent))
masks_input.append(torch.tensor(mask_input))
masks_output.append(torch.tensor(mask_output))
inputs = pad_sequence(inputs, batch_first=True, padding_value=bert_pad_token)
outputs = pad_sequence(outputs, batch_first=True, padding_value=bert_pad_token)
masks_input = pad_sequence(masks_input, batch_first=True, padding_value=0.0)
masks_output = pad_sequence(masks_output, batch_first=True, padding_value=0.0)
return inputs, masks_input, outputs, masks_output
# create a PyTorch DataLoader over the training conversation pairs
batch_size=5
train_dataloader = DataLoader(all_conversations,batch_size=batch_size,collate_fn=partial(transformer_collate_fn, tokenizer=tokenizer), shuffle = True)
tokenizer.batch_decode(transformer_collate_fn(all_conversations[0:10],tokenizer)[0], skip_special_tokens=True)
tokenizer.batch_decode(transformer_collate_fn(all_conversations[0:10],tokenizer)[2], skip_special_tokens=True)
```
## Polyencoder Model
```
#torch.cuda.empty_cache()
#bert1 = DistilBertModel.from_pretrained(bert_model_name)
#bert2 = DistilBertModel.from_pretrained(bert_model_name)
bert = DistilBertModel.from_pretrained(bert_model_name)
#Double Bert
class RetrieverPolyencoder(nn.Module):
def __init__(self, contextBert, candidateBert, vocab, max_len = 300, hidden_dim = 768, out_dim = 64, num_layers = 2, dropout=0.1, device=device):
super().__init__()
self.device = device
self.hidden_dim = hidden_dim
self.max_len = max_len
self.out_dim = out_dim
# Context layers
self.contextBert = contextBert
self.contextDropout = nn.Dropout(dropout)
# Candidates layers
self.candidatesBert = candidateBert
self.pos_emb = nn.Embedding(self.max_len, self.hidden_dim)
self.candidatesDropout = nn.Dropout(dropout)
self.att_dropout = nn.Dropout(dropout)
def attention(self, q, k, v, vMask=None):
w = torch.matmul(q, k.transpose(-1, -2))
if vMask is not None:
w *= vMask.unsqueeze(1)
w = F.softmax(w, -1)
w = self.att_dropout(w)
score = torch.matmul(w, v)
return score
def score(self, context, context_mask, responses, responses_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size, nb_cand, seq_len = responses.shape
# Context
context_encoded = self.contextBert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
responses_encoded = self.candidatesBert(responses.view(-1,responses.shape[2]), responses_mask.view(-1,responses.shape[2]))[0][:,0,:]
responses_encoded = responses_encoded.view(batch_size,nb_cand,-1)
context_emb = self.attention(responses_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*responses_encoded).sum(-1)
return dot_product
def compute_loss(self, context, context_mask, response, response_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size = context.shape[0]
# Context
context_encoded = self.contextBert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
response_encoded = self.candidatesBert(response, response_mask)[0][:,0,:]
response_encoded = response_encoded.unsqueeze(0).expand(batch_size, batch_size, response_encoded.shape[1])
context_emb = self.attention(response_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*response_encoded).sum(-1)
mask = torch.eye(batch_size).to(self.device)
loss = F.log_softmax(dot_product, dim=-1) * mask
loss = (-loss.sum(dim=1)).mean()
return loss
#Simple Bert
class RetrieverPolyencoder_single(nn.Module):
def __init__(self, Bert, max_len = 300, hidden_dim = 768, out_dim = 64, num_layers = 2, dropout=0.1, device=device):
super().__init__()
self.device = device
self.hidden_dim = hidden_dim
self.max_len = max_len
self.out_dim = out_dim
self.bert = Bert
# Context layers
self.contextDropout = nn.Dropout(dropout)
# Candidates layers
self.pos_emb = nn.Embedding(self.max_len, self.hidden_dim)
self.candidatesDropout = nn.Dropout(dropout)
self.att_dropout = nn.Dropout(dropout)
def attention(self, q, k, v, vMask=None):
w = torch.matmul(q, k.transpose(-1, -2))
if vMask is not None:
w *= vMask.unsqueeze(1)
w = F.softmax(w, -1)
w = self.att_dropout(w)
score = torch.matmul(w, v)
return score
def score(self, context, context_mask, responses, responses_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size, nb_cand, seq_len = responses.shape
# Context
context_encoded = self.bert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
responses_encoded = self.bert(responses.view(-1,responses.shape[2]), responses_mask.view(-1,responses.shape[2]))[0][:,0,:]
responses_encoded = responses_encoded.view(batch_size,nb_cand,-1)
context_emb = self.attention(responses_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*responses_encoded).sum(-1)
return dot_product
def compute_loss(self, context, context_mask, response, response_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size = context.shape[0]
# Context
context_encoded = self.bert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
response_encoded = self.bert(response, response_mask)[0][:,0,:]
response_encoded = response_encoded.unsqueeze(0).expand(batch_size, batch_size, response_encoded.shape[1])
context_emb = self.attention(response_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*response_encoded).sum(-1)
mask = torch.eye(batch_size).to(self.device)
loss = F.log_softmax(dot_product, dim=-1) * mask
loss = (-loss.sum(dim=1)).mean()
return loss
#Bi-encoder
class RetrieverBiencoder(nn.Module):
def __init__(self, bert):
super().__init__()
self.bert = bert
def score(self, context, context_mask, responses, responses_mask):
        # responses: (batch_size, num_candidates, seq_len)
        batch_size, num_cand, seq_len = responses.shape
        context_vec = self.bert(context, context_mask)[0][:, 0, :]  # [bs, dim]
        responses_vec = self.bert(
            responses.view(-1, seq_len), responses_mask.view(-1, seq_len)
        )[0][:, 0, :]
        responses_vec = responses_vec.view(batch_size, num_cand, -1)  # [bs, num_cand, dim]
        dot_product = torch.matmul(
            context_vec.unsqueeze(1), responses_vec.permute(0, 2, 1)
        ).squeeze(1)  # [bs, num_cand]
        return dot_product
def compute_loss(self, context, context_mask, response, response_mask):
context_vec = self.bert(context, context_mask)[0][:,0,:] # [bs,dim]
batch_size, res_length = response.shape
responses_vec = self.bert(response, response_mask)[0][:,0,:] # [bs,dim]
responses_vec = responses_vec.view(batch_size, -1)
dot_product = torch.matmul(context_vec, responses_vec.t()) # [bs, bs]
mask = torch.eye(context.size(0)).to(context_mask.device)
loss = F.log_softmax(dot_product, dim=-1) * mask
loss = (-loss.sum(dim=1)).mean()
return loss
loss_rec = []
def train(model, data_loader, num_epochs, model_file, learning_rate=0.0001):
"""Train the model for given µnumber of epochs and save the trained model in
the final model_file.
"""
    decoder_learning_ratio = 5.0
    # NOTE: these name filters are left over from the seq2seq starter code; they do not
    # match the BERT retriever's parameter names, so in practice all parameters fall into
    # the second group and are trained at learning_rate * decoder_learning_ratio.
    #encoder_parameter_names = ['word_embedding', 'encoder']
    encoder_parameter_names = ['encode_emb', 'encode_gru', 'l1', 'l2']
encoder_named_params = list(filter(lambda kv: any(key in kv[0] for key in encoder_parameter_names), model.named_parameters()))
decoder_named_params = list(filter(lambda kv: not any(key in kv[0] for key in encoder_parameter_names), model.named_parameters()))
encoder_params = [e[1] for e in encoder_named_params]
decoder_params = [e[1] for e in decoder_named_params]
optimizer = torch.optim.AdamW([{'params': encoder_params},
{'params': decoder_params, 'lr': learning_rate * decoder_learning_ratio}], lr=learning_rate)
clip = 50.0
for epoch in tqdm.notebook.trange(num_epochs, desc="training", unit="epoch"):
# print(f"Total training instances = {len(train_dataset)}")
# print(f"train_data_loader = {len(train_data_loader)} {1180 > len(train_data_loader)/20}")
with tqdm.notebook.tqdm(
data_loader,
desc="epoch {}".format(epoch + 1),
unit="batch",
total=len(data_loader)) as batch_iterator:
model.train()
total_loss = 0.0
for i, batch_data in enumerate(batch_iterator, start=1):
source, source_mask, target, target_mask = batch_data
print(source.shape)
print(source_mask.shape)
print(target.shape)
print(target_mask.shape)
optimizer.zero_grad()
loss = model.compute_loss(source.cuda(), source_mask.cuda(), target.cuda(), target_mask.cuda())
total_loss += loss.item()
loss.backward()
# Gradient clipping before taking the step
_ = nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
batch_iterator.set_postfix(mean_loss=total_loss / i, current_loss=loss.item())
loss_rec.append(total_loss)
# Save the model after training
torch.save(model.state_dict(), model_file)
# You are welcome to adjust these parameters based on your model implementation.
num_epochs = 4
batch_size = 2
learning_rate = 0.001
# Reloading the data_loader to increase batch_size
train_dataloader = DataLoader(all_conversations,batch_size=batch_size,collate_fn=partial(transformer_collate_fn, tokenizer=tokenizer), shuffle = True)
baseline_model = RetrieverPolyencoder_single(bert).to(device)
train(baseline_model, train_dataloader, num_epochs, "/models/baseline_model.pt",learning_rate=learning_rate)
# Download the trained model to local for future use
#files.download('baseline_model.pt')
loss_rec
# To evaluate the double-BERT poly-encoder, instantiate two separate encoders first
bert1 = DistilBertModel.from_pretrained(bert_model_name)
bert2 = DistilBertModel.from_pretrained(bert_model_name)
baseline_model = RetrieverPolyencoder(bert1, bert2, vocab).to(device)
baseline_model.load_state_dict(torch.load("baseline_model3.pt", map_location=device))
vals = transformer_collate_fn(all_conversations[0:100],tokenizer)
i=3
scores = baseline_model.score(vals[0][i].unsqueeze(0).cuda(),vals[1][i].unsqueeze(0).cuda(),vals[2].unsqueeze(0).cuda(),vals[3].unsqueeze(0).cuda()).detach().cpu().numpy()
all_conversations[i][0]
all_conversations[np.argmax(scores)][1]
max_v = 100
vals = transformer_collate_fn(all_conversations[0:max_v],tokenizer)
correct = 0
for i in range(max_v):
scores = baseline_model.score(vals[0][i].unsqueeze(0).cuda(),vals[1][i].unsqueeze(0).cuda(),vals[2].unsqueeze(0).cuda(),vals[3].unsqueeze(0).cuda()).detach().cpu().numpy()
if np.argmax(scores)==i:
correct+=1
print(all_conversations[i][0])
print(all_conversations[np.argmax(scores)][1]+"\n")
print(correct/max_v)
```
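As a final illustration, here is a minimal sketch of how the trained retriever could answer a new message. It assumes `baseline_model`, `tokenizer`, `transformer_collate_fn`, `all_conversations`, and `device` from the cells above are in scope; the helper name `retrieve_reply` and the candidate pool are ours, not part of the original notebook.
```
import torch

def retrieve_reply(model, tokenizer, query, candidates, device=device):
    """Score `query` against every candidate response and return the highest-scoring one."""
    # Reuse the collate function by pairing the same query with each candidate.
    batch = [(query, cand) for cand in candidates]
    inputs, masks_in, outputs, masks_out = transformer_collate_fn(batch, tokenizer)
    model.eval()
    with torch.no_grad():
        scores = model.score(
            inputs[0].unsqueeze(0).to(device),    # a single context
            masks_in[0].unsqueeze(0).to(device),
            outputs.unsqueeze(0).to(device),      # (1, num_candidates, seq_len)
            masks_out.unsqueeze(0).to(device),
        )
    return candidates[int(scores.argmax())]

# Hypothetical usage: retrieve a reply from the first 100 training responses
pool = [tgt for _, tgt in all_conversations[:100]]
print(retrieve_reply(baseline_model, tokenizer, "How are you doing today?", pool))
```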
|
github_jupyter
|
# This notebook is based on :
# https://aritter.github.io/CS-7650/
# This Project was developed at the Georgia Institute of Technology by Ashutosh Baheti ([email protected]),
# borrowing from the Neural Machine Translation Project (Project 2)
# of the UC Berkeley NLP course https://cal-cs288.github.io/sp20/
# Imports
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import numpy as np
import csv
import random
import re
import os
import unicodedata
import codecs
from io import open
import itertools
import math
import pickle
import statistics
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence
import tqdm
import nltk
import gc
gc.collect()
import pandas as pd
import numpy as np
import sys
from functools import partial
import time
bert_model_name = 'distilbert-base-uncased'
# Bert Imports
from transformers import DistilBertTokenizer, DistilBertModel
#bert_model = DistilBertModel.from_pretrained(bert_model_name)
tokenizer = DistilBertTokenizer.from_pretrained(bert_model_name)
# Utils
def make_dir_if_not_exists(directory):
if not os.path.exists(directory):
logging.info("Creating new directory: {}".format(directory))
os.makedirs(directory)
def print_list(l, K=None):
# If K is given then only print first K
for i, e in enumerate(l):
if i == K:
break
print(e)
print()
def remove_multiple_spaces(string):
return re.sub(r'\s+', ' ', string).strip()
def save_in_pickle(save_object, save_file):
with open(save_file, "wb") as pickle_out:
pickle.dump(save_object, pickle_out)
def load_from_pickle(pickle_file):
with open(pickle_file, "rb") as pickle_in:
return pickle.load(pickle_in)
def save_in_txt(list_of_strings, save_file):
with open(save_file, "w") as writer:
for line in list_of_strings:
line = line.strip()
writer.write(f"{line}\n")
def load_from_txt(txt_file):
with open(txt_file, "r") as reader:
all_lines = list()
for line in reader:
line = line.strip()
all_lines.append(line)
return all_lines
# Check CUDA
print(torch.cuda.is_available())
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print("Using device:", device)
# Loading the pre-processed conversational exchanges (source-target pairs) from pickle data files
all_conversations = load_from_pickle("../data/cornell_movie/processed_CMDC.pkl")
# Extract 100 conversations from the end for evaluation and keep the rest for training
eval_conversations = all_conversations[-100:]
all_conversations = all_conversations[:-100]
# Logging data stats
print(f"Number of Training Conversation Pairs = {len(all_conversations)}")
print(f"Number of Evaluation Conversation Pairs = {len(eval_conversations)}")
pad_word = "<pad>"
bos_word = "<s>"
eos_word = "</s>"
unk_word = "<unk>"
pad_id = 0
bos_id = 1
eos_id = 2
unk_id = 3
def normalize_sentence(s):
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
s = re.sub(r"\s+", r" ", s).strip()
return s
class Vocabulary:
def __init__(self):
self.word_to_id = {pad_word: pad_id, bos_word: bos_id, eos_word:eos_id, unk_word: unk_id}
self.word_count = {}
self.id_to_word = {pad_id: pad_word, bos_id: bos_word, eos_id: eos_word, unk_id: unk_word}
self.num_words = 4
def get_ids_from_sentence(self, sentence):
sentence = normalize_sentence(sentence)
sent_ids = [bos_id] + [self.word_to_id[word] if word in self.word_to_id \
else unk_id for word in sentence.split()] + \
[eos_id]
return sent_ids
def tokenized_sentence(self, sentence):
sent_ids = self.get_ids_from_sentence(sentence)
return [self.id_to_word[word_id] for word_id in sent_ids]
def decode_sentence_from_ids(self, sent_ids):
words = list()
for i, word_id in enumerate(sent_ids):
if word_id in [bos_id, eos_id, pad_id]:
# Skip these words
continue
else:
words.append(self.id_to_word[word_id])
return ' '.join(words)
def add_words_from_sentence(self, sentence):
sentence = normalize_sentence(sentence)
for word in sentence.split():
if word not in self.word_to_id:
# add this word to the vocabulary
self.word_to_id[word] = self.num_words
self.id_to_word[self.num_words] = word
self.word_count[word] = 1
self.num_words += 1
else:
# update the word count
self.word_count[word] += 1
vocab = Vocabulary()
for src, tgt in all_conversations:
vocab.add_words_from_sentence(src)
vocab.add_words_from_sentence(tgt)
print(f"Total words in the vocabulary = {vocab.num_words}")
def transformer_collate_fn(batch, tokenizer):
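    # Collate a batch of (source, target) utterance pairs: tokenize each side with the
    # DistilBERT tokenizer, then pad the token ids and attention masks to the longest
    # sequence in the batch.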
bert_vocab = tokenizer.get_vocab()
bert_pad_token = bert_vocab['[PAD]']
bert_unk_token = bert_vocab['[UNK]']
bert_cls_token = bert_vocab['[CLS]']
inputs, masks_input, outputs, masks_output = [], [], [], []
for data in batch:
tokenizer_input = tokenizer([data[0]])
tokenized_sent = tokenizer_input['input_ids'][0]
mask_input = tokenizer_input['attention_mask'][0]
inputs.append(torch.tensor(tokenized_sent))
tokenizer_output = tokenizer([data[1]])
tokenized_sent = tokenizer_output['input_ids'][0]
mask_output = tokenizer_output['attention_mask'][0]
outputs.append(torch.tensor(tokenized_sent))
masks_input.append(torch.tensor(mask_input))
masks_output.append(torch.tensor(mask_output))
inputs = pad_sequence(inputs, batch_first=True, padding_value=bert_pad_token)
outputs = pad_sequence(outputs, batch_first=True, padding_value=bert_pad_token)
masks_input = pad_sequence(masks_input, batch_first=True, padding_value=0.0)
masks_output = pad_sequence(masks_output, batch_first=True, padding_value=0.0)
return inputs, masks_input, outputs, masks_output
#create pytorch dataloaders from train_dataset, val_dataset, and test_datset
batch_size=5
train_dataloader = DataLoader(all_conversations,batch_size=batch_size,collate_fn=partial(transformer_collate_fn, tokenizer=tokenizer), shuffle = True)
tokenizer.batch_decode(transformer_collate_fn(all_conversations[0:10],tokenizer)[0], skip_special_tokens=True)
tokenizer.batch_decode(transformer_collate_fn(all_conversations[0:10],tokenizer)[2], skip_special_tokens=True)
#torch.cuda.empty_cache()
#bert1 = DistilBertModel.from_pretrained(bert_model_name)
#bert2 = DistilBertModel.from_pretrained(bert_model_name)
bert = DistilBertModel.from_pretrained(bert_model_name)
#Double Bert
class RetrieverPolyencoder(nn.Module):
def __init__(self, contextBert, candidateBert, vocab, max_len = 300, hidden_dim = 768, out_dim = 64, num_layers = 2, dropout=0.1, device=device):
super().__init__()
self.device = device
self.hidden_dim = hidden_dim
self.max_len = max_len
self.out_dim = out_dim
# Context layers
self.contextBert = contextBert
self.contextDropout = nn.Dropout(dropout)
# Candidates layers
self.candidatesBert = candidateBert
self.pos_emb = nn.Embedding(self.max_len, self.hidden_dim)
self.candidatesDropout = nn.Dropout(dropout)
self.att_dropout = nn.Dropout(dropout)
def attention(self, q, k, v, vMask=None):
w = torch.matmul(q, k.transpose(-1, -2))
if vMask is not None:
w *= vMask.unsqueeze(1)
w = F.softmax(w, -1)
w = self.att_dropout(w)
score = torch.matmul(w, v)
return score
def score(self, context, context_mask, responses, responses_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size, nb_cand, seq_len = responses.shape
# Context
context_encoded = self.contextBert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
responses_encoded = self.candidatesBert(responses.view(-1,responses.shape[2]), responses_mask.view(-1,responses.shape[2]))[0][:,0,:]
responses_encoded = responses_encoded.view(batch_size,nb_cand,-1)
context_emb = self.attention(responses_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*responses_encoded).sum(-1)
return dot_product
def compute_loss(self, context, context_mask, response, response_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size = context.shape[0]
# Context
context_encoded = self.contextBert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
response_encoded = self.candidatesBert(response, response_mask)[0][:,0,:]
response_encoded = response_encoded.unsqueeze(0).expand(batch_size, batch_size, response_encoded.shape[1])
context_emb = self.attention(response_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*response_encoded).sum(-1)
mask = torch.eye(batch_size).to(self.device)
loss = F.log_softmax(dot_product, dim=-1) * mask
loss = (-loss.sum(dim=1)).mean()
return loss
#Simple Bert
class RetrieverPolyencoder_single(nn.Module):
def __init__(self, Bert, max_len = 300, hidden_dim = 768, out_dim = 64, num_layers = 2, dropout=0.1, device=device):
super().__init__()
self.device = device
self.hidden_dim = hidden_dim
self.max_len = max_len
self.out_dim = out_dim
self.bert = Bert
# Context layers
self.contextDropout = nn.Dropout(dropout)
# Candidates layers
self.pos_emb = nn.Embedding(self.max_len, self.hidden_dim)
self.candidatesDropout = nn.Dropout(dropout)
self.att_dropout = nn.Dropout(dropout)
def attention(self, q, k, v, vMask=None):
w = torch.matmul(q, k.transpose(-1, -2))
if vMask is not None:
w *= vMask.unsqueeze(1)
w = F.softmax(w, -1)
w = self.att_dropout(w)
score = torch.matmul(w, v)
return score
def score(self, context, context_mask, responses, responses_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size, nb_cand, seq_len = responses.shape
# Context
context_encoded = self.bert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
responses_encoded = self.bert(responses.view(-1,responses.shape[2]), responses_mask.view(-1,responses.shape[2]))[0][:,0,:]
responses_encoded = responses_encoded.view(batch_size,nb_cand,-1)
context_emb = self.attention(responses_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*responses_encoded).sum(-1)
return dot_product
def compute_loss(self, context, context_mask, response, response_mask):
"""Run the model on the source and compute the loss on the target.
Args:
source: An integer tensor with shape (max_source_sequence_length,
batch_size) containing subword indices for the source sentences.
target: An integer tensor with shape (max_target_sequence_length,
batch_size) containing subword indices for the target sentences.
Returns:
A scalar float tensor representing cross-entropy loss on the current batch
divided by the number of target tokens in the batch.
Many of the target tokens will be pad tokens. You should mask the loss
from these tokens using appropriate mask on the target tokens loss.
"""
batch_size = context.shape[0]
# Context
context_encoded = self.bert(context,context_mask)[0]
pos_emb = self.pos_emb(torch.arange(self.max_len).to(self.device))
context_att = self.attention(pos_emb, context_encoded, context_encoded, context_mask)
# Response
response_encoded = self.bert(response, response_mask)[0][:,0,:]
response_encoded = response_encoded.unsqueeze(0).expand(batch_size, batch_size, response_encoded.shape[1])
context_emb = self.attention(response_encoded, context_att, context_att).squeeze()
dot_product = (context_emb*response_encoded).sum(-1)
mask = torch.eye(batch_size).to(self.device)
loss = F.log_softmax(dot_product, dim=-1) * mask
loss = (-loss.sum(dim=1)).mean()
return loss
#Bi-encoder
class RetrieverBiencoder(nn.Module):
def __init__(self, bert):
super().__init__()
self.bert = bert
def score(self, context, context_mask, responses, responses_mask):
context_vec = self.bert(context, context_mask)[0][:,0,:] # [bs,dim]
        batch_size, res_length = responses.shape
        responses_vec = self.bert(responses, responses_mask)[0][:,0,:] # [bs,dim]
responses_vec = responses_vec.view(batch_size, 1, -1)
responses_vec = responses_vec.squeeze(1)
context_vec = context_vec.unsqueeze(1)
dot_product = torch.matmul(context_vec, responses_vec.permute(0, 2, 1)).squeeze()
return dot_product
def compute_loss(self, context, context_mask, response, response_mask):
context_vec = self.bert(context, context_mask)[0][:,0,:] # [bs,dim]
batch_size, res_length = response.shape
responses_vec = self.bert(response, response_mask)[0][:,0,:] # [bs,dim]
responses_vec = responses_vec.view(batch_size, -1)
dot_product = torch.matmul(context_vec, responses_vec.t()) # [bs, bs]
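        # In-batch negatives: the i-th response is the positive for the i-th context and every
        # other response in the batch acts as a negative, so only the diagonal of the
        # (bs, bs) score matrix contributes to the loss.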
mask = torch.eye(context.size(0)).to(context_mask.device)
loss = F.log_softmax(dot_product, dim=-1) * mask
loss = (-loss.sum(dim=1)).mean()
return loss
loss_rec = []
def train(model, data_loader, num_epochs, model_file, learning_rate=0.0001):
"""Train the model for given µnumber of epochs and save the trained model in
the final model_file.
"""
decoder_learning_ratio = 5.0
#encoder_parameter_names = ['word_embedding', 'encoder']
encoder_parameter_names = ['encode_emb', 'encode_gru', 'l1', 'l2']
encoder_named_params = list(filter(lambda kv: any(key in kv[0] for key in encoder_parameter_names), model.named_parameters()))
decoder_named_params = list(filter(lambda kv: not any(key in kv[0] for key in encoder_parameter_names), model.named_parameters()))
encoder_params = [e[1] for e in encoder_named_params]
decoder_params = [e[1] for e in decoder_named_params]
optimizer = torch.optim.AdamW([{'params': encoder_params},
{'params': decoder_params, 'lr': learning_rate * decoder_learning_ratio}], lr=learning_rate)
clip = 50.0
for epoch in tqdm.notebook.trange(num_epochs, desc="training", unit="epoch"):
# print(f"Total training instances = {len(train_dataset)}")
# print(f"train_data_loader = {len(train_data_loader)} {1180 > len(train_data_loader)/20}")
with tqdm.notebook.tqdm(
data_loader,
desc="epoch {}".format(epoch + 1),
unit="batch",
total=len(data_loader)) as batch_iterator:
model.train()
total_loss = 0.0
for i, batch_data in enumerate(batch_iterator, start=1):
source, source_mask, target, target_mask = batch_data
print(source.shape)
print(source_mask.shape)
print(target.shape)
print(target_mask.shape)
optimizer.zero_grad()
loss = model.compute_loss(source.cuda(), source_mask.cuda(), target.cuda(), target_mask.cuda())
total_loss += loss.item()
loss.backward()
# Gradient clipping before taking the step
_ = nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
batch_iterator.set_postfix(mean_loss=total_loss / i, current_loss=loss.item())
loss_rec.append(total_loss)
# Save the model after training
torch.save(model.state_dict(), model_file)
# You are welcome to adjust these parameters based on your model implementation.
num_epochs = 4
batch_size = 2
learning_rate = 0.001
# Reloading the data_loader to increase batch_size
train_dataloader = DataLoader(all_conversations,batch_size=batch_size,collate_fn=partial(transformer_collate_fn, tokenizer=tokenizer), shuffle = True)
baseline_model = RetrieverPolyencoder_single(bert).to(device)
train(baseline_model, train_dataloader, num_epochs, "/models/baseline_model.pt",learning_rate=learning_rate)
# Download the trained model to local for future use
#files.download('baseline_model.pt')
loss_rec
bert1 = DistilBertModel.from_pretrained(bert_model_name)  # bert1/bert2 are commented out above, so re-create them before rebuilding the double-BERT model
bert2 = DistilBertModel.from_pretrained(bert_model_name)
baseline_model = RetrieverPolyencoder(bert1,bert2,vocab).to(device)
baseline_model.load_state_dict(torch.load("baseline_model3.pt", map_location=device))
vals = transformer_collate_fn(all_conversations[0:100],tokenizer)
i=3
scores = baseline_model.score(vals[0][i].unsqueeze(0).cuda(),vals[1][i].unsqueeze(0).cuda(),vals[2].unsqueeze(0).cuda(),vals[3].unsqueeze(0).cuda()).detach().cpu().numpy()
all_conversations[i][0]
all_conversations[np.argmax(scores)][1]
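# Quick retrieval check (recall@1 over the first max_v pairs): a prediction counts as correct
# when the highest-scoring candidate is the ground-truth response for that context.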
max_v = 100
vals = transformer_collate_fn(all_conversations[0:max_v],tokenizer)
correct = 0
for i in range(max_v):
scores = baseline_model.score(vals[0][i].unsqueeze(0).cuda(),vals[1][i].unsqueeze(0).cuda(),vals[2].unsqueeze(0).cuda(),vals[3].unsqueeze(0).cuda()).detach().cpu().numpy()
if np.argmax(scores)==i:
correct+=1
print(all_conversations[i][0])
print(all_conversations[np.argmax(scores)][1]+"\n")
print(correct/max_v)
| 0.581778 | 0.596374 |
<a href="https://colab.research.google.com/github/codigoquant/python_para_investimentos/blob/master/16_CVM_Os_Melhores_e_os_Piores_Fundos_de_Investimento_do_mes_Python_para_Investimentos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
###Ricos pelo Acaso (Rich by Chance)
* Video link: https://youtu.be/NHCUUZOvk7k
---
* Database: http://dados.cvm.gov.br/
###Collecting the CVM data
```
import pandas as pd
pd.set_option("display.max_colwidth", 150)
#pd.options.display.float_format = '{:.2f}'.format
```
>Functions that fetch data from the CVM website and return a Pandas DataFrame:
```
def busca_informes_cvm(ano, mes):
url = 'http://dados.cvm.gov.br/dados/FI/DOC/INF_DIARIO/DADOS/inf_diario_fi_{:02d}{:02d}.csv'.format(ano,mes)
return pd.read_csv(url, sep=';')
def busca_cadastro_cvm():
url = "http://dados.cvm.gov.br/dados/FI/CAD/DADOS/cad_fi.csv"
return pd.read_csv(url, sep=';', encoding='ISO-8859-1')
```
>Fetching the data from the CVM website
```
informes_diarios = busca_informes_cvm(2021,9)
informes_diarios
cadastro_cvm = busca_cadastro_cvm()
cadastro_cvm
cadastro_cvm
```
###Manipulating the CVM data
>Defining filters for the investment funds
```
minimo_cotistas = 100
```
>Manipulating the data and applying the filters
```
fundos = informes_diarios[informes_diarios['NR_COTST'] >= minimo_cotistas].pivot(index='DT_COMPTC', columns='CNPJ_FUNDO', values=['VL_TOTAL', 'VL_QUOTA', 'VL_PATRIM_LIQ', 'CAPTC_DIA', 'RESG_DIA'])
fundos
```
>Normalizing the quota values for comparison
```
cotas_normalizadas = fundos['VL_QUOTA'] / fundos['VL_QUOTA'].iloc[0]
cotas_normalizadas
```
###Investment funds with the best performance in September 2021
```
melhores = pd.DataFrame()
melhores['retorno(%)'] = (cotas_normalizadas.iloc[-1].sort_values(ascending=False)[:5] - 1) * 100
melhores
```
>Looking up investment fund details by CNPJ
```
for cnpj in melhores.index:
fundo = cadastro_cvm[cadastro_cvm['CNPJ_FUNDO'] == cnpj]
melhores.at[cnpj, 'Fundo de Investimento'] = fundo['DENOM_SOCIAL'].values[0]
melhores.at[cnpj, 'Classe'] = fundo['CLASSE'].values[0]
melhores.at[cnpj, 'PL'] = fundo['VL_PATRIM_LIQ'].values[0]
melhores
```
###Investment funds with the worst performance in September 2021
```
piores = pd.DataFrame()
piores['retorno(%)'] = (cotas_normalizadas.iloc[-1].sort_values(ascending=True)[:5] - 1) * 100
piores
```
>Looking up investment fund details by CNPJ
```
for cnpj in piores.index:
fundo = cadastro_cvm[cadastro_cvm['CNPJ_FUNDO'] == cnpj]
piores.at[cnpj, 'Fundo de Investimento'] = fundo['DENOM_SOCIAL'].values[0]
piores.at[cnpj, 'Classe'] = fundo['CLASSE'].values[0]
piores.at[cnpj, 'PL'] = fundo['VL_PATRIM_LIQ'].values[0]
piores
```
|
github_jupyter
|
import pandas as pd
pd.set_option("display.max_colwidth", 150)
#pd.options.display.float_format = '{:.2f}'.format
def busca_informes_cvm(ano, mes):
url = 'http://dados.cvm.gov.br/dados/FI/DOC/INF_DIARIO/DADOS/inf_diario_fi_{:02d}{:02d}.csv'.format(ano,mes)
return pd.read_csv(url, sep=';')
def busca_cadastro_cvm():
url = "http://dados.cvm.gov.br/dados/FI/CAD/DADOS/cad_fi.csv"
return pd.read_csv(url, sep=';', encoding='ISO-8859-1')
informes_diarios = busca_informes_cvm(2021,9)
informes_diarios
cadastro_cvm = busca_cadastro_cvm()
cadastro_cvm
cadastro_cvm
minimo_cotistas = 100
fundos = informes_diarios[informes_diarios['NR_COTST'] >= minimo_cotistas].pivot(index='DT_COMPTC', columns='CNPJ_FUNDO', values=['VL_TOTAL', 'VL_QUOTA', 'VL_PATRIM_LIQ', 'CAPTC_DIA', 'RESG_DIA'])
fundos
cotas_normalizadas = fundos['VL_QUOTA'] / fundos['VL_QUOTA'].iloc[0]
cotas_normalizadas
melhores = pd.DataFrame()
melhores['retorno(%)'] = (cotas_normalizadas.iloc[-1].sort_values(ascending=False)[:5] - 1) * 100
melhores
for cnpj in melhores.index:
fundo = cadastro_cvm[cadastro_cvm['CNPJ_FUNDO'] == cnpj]
melhores.at[cnpj, 'Fundo de Investimento'] = fundo['DENOM_SOCIAL'].values[0]
melhores.at[cnpj, 'Classe'] = fundo['CLASSE'].values[0]
melhores.at[cnpj, 'PL'] = fundo['VL_PATRIM_LIQ'].values[0]
melhores
piores = pd.DataFrame()
piores['retorno(%)'] = (cotas_normalizadas.iloc[-1].sort_values(ascending=True)[:5] - 1) * 100
piores
for cnpj in piores.index:
fundo = cadastro_cvm[cadastro_cvm['CNPJ_FUNDO'] == cnpj]
piores.at[cnpj, 'Fundo de Investimento'] = fundo['DENOM_SOCIAL'].values[0]
piores.at[cnpj, 'Classe'] = fundo['CLASSE'].values[0]
piores.at[cnpj, 'PL'] = fundo['VL_PATRIM_LIQ'].values[0]
piores
| 0.250179 | 0.882479 |
```
'''
This script is part of a network Performance Profile Recommender system built on
top of Intel DAAL Kmeans and (implicit) ALS algorithms.
It converts the distributed *CSV* files output by kmeans_dense_distributed_mpi2.cpp
(DAAL KMean w/ MPI) to input *CSR* files of implicit_als_csr_distributed_mpi2.cpp.
See kmeans_dense_distributed_mpi2.cpp for C++ code of the DAAL Kmeans app.
See implicit_als_csr_distributed_mpi2.cpp for C++ code of the DAAL ALS app.
'''
import csv
import numpy as np
from scipy import sparse
'''
A generator class usable with 'with' to read a CSV file. Strictly speaking the outer 'with'
is unnecessary, because the inner 'with' already closes the file; it is kept for illustration only.
'''
class readCSV:
def __init__(self, fileName):
self.fileName = fileName
def __generator__(self):
with open(self.fileName) as f:
for cols in csv.reader(f):
yield cols
def __enter__(self):
self.generator = self.__generator__()
return self.generator
def __exit__(self, type, value, traceback):
# anything to do?
pass
# get a list of input CSV files
import glob
import threading
import sys
csv_files = glob.glob('../data/sta_phyr_x*.csv')
if len(csv_files) <= 0:
sys.exit()
'''
The order of file names (sta_phyr_xaa, sta_phyr_xab, ...)
is significant for merging and slicing the matrices !!!!!
'''
csv_files = sorted(csv_files, key=lambda f: f)
print '\n'.join(csv_files)
# prepare to merge the CSV files in parallel
max_stas = [1 for i in range(len(csv_files))]
max_prfs = [1 for i in range(len(csv_files))]
sta_prfs = sparse.lil_matrix((9,9))
'''
Thread function to read CSV files output by kmeans_dense_distributed_mpi2.cpp.
Depending on the value of 'load', it either gets the shapes or loads the data into sta_prfs[].
'''
def worker_csv(i, fname, load):
with readCSV(fname) as csv:
for cols in csv:
if len(cols) == 3:
if load == False:
                    # get shapes only (cast to int so the max comparison is numeric, not lexicographic)
                    if int(cols[0]) > max_stas[i]: max_stas[i] = int(cols[0])
                    if int(cols[2]) > max_prfs[i]: max_prfs[i] = int(cols[2])
                else:
                    # load data (cast indices to int and the value to float)
                    sta_prfs[int(cols[0]), int(cols[2])] = float(cols[1])
%%time
'''
To convert the CSV files output by kmeans_dense_distributed_mpi2.cpp,
it needs 2 passes:
1. learn maximum network endpoint id and network profile id (aka. learn matrix shape)
(these will be the maximum 'user' id and 'item' id for ALS)
2. load network endpoint id's, profile id's and their ratings
This cell is for pass 1; next cell is for pass 2.
'''
threads = []
for i, csv_file in enumerate(csv_files):
t = threading.Thread(target=worker_csv, args=(i, csv_file, False))
threads.append(t)
t.start()
[t.join() for t in threads]
max_sta = 1 + int(max(max_stas))
max_prf = 1 + int(max(max_prfs))
# To construct a matrix efficiently, use either dok_matrix or lil_matrix...
sta_prfs = sparse.lil_matrix((max_sta, max_prf), dtype=int, shape=None)
print sta_prfs.shape
%%time
# fork a list of threads to populate entries of user-item matrix
threads = []
for i, csv_file in enumerate(csv_files):
t = threading.Thread(target=worker_csv, args=(i, csv_file, True))
threads.append(t)
t.start()
[t.join() for t in threads]
# play dummy traversal of the matrix. (for debug only)
if False:
nc = 0
lil = sta_prfs
for i, (row, data) in enumerate(zip(lil.rows, lil.data)):
if nc > 1000: break
for j, val in zip(row, data):
print '[%d, %d] = %d' % (i, j, val)
nc += 1
if nc > 100: break
if False:
# @deprecated! use mazzage() below instead!
        # remove all-zero rows and columns, otherwise DAAL ALS aborts!!!
# !!! TODO: need to create a mapping table for the removed STAs/profiles !!!
sta_prfs = sta_prfs[sta_prfs.getnnz(1)>0][:,sta_prfs.getnnz(0)>0]
print sta_prfs.shape
'''
Check if matrix is a CSR
'''
def is_csr(matrix):
return str(type(matrix)).find('csr_matrix') > 0
'''
This function massages a sparse matrix so that none of its rows or columns
contains only zero values.
Deprecated!! Experiments confirmed that Intel DAAL ALS accepts an input
matrix (or its transpose) with all-zero values in any row/column.
'''
def mazzage(matrix, filler=1, debug=False):
if debug: print type(matrix)
if debug: print 'matrix: \n', matrix.toarray()
# toggle between CSR and CSC to get indices of any all-zero rows and columns
matriy= matrix.tocsc() if is_csr(matrix) else matrix.tocsr()
diff1 = np.where(np.diff(matrix.indptr) == 0)[0]
diff2 = np.where(np.diff(matriy.indptr) == 0)[0]
# swap indices if it's a CSC matrix
if not is_csr(matrix):
diff1, diff2 = diff2, diff1
if debug: print 'diff1: ', diff1
if debug: print 'diff2: ', diff2
if len(diff1)>0 or len(diff2)>0:
# need outer loop, because sucky np.broadcast can't handle index blocks larger than 32
for ri in diff1 if len(diff1) else [0]:
matrix[ri, diff2 if len(diff2) else [0]] = filler
if debug: print 'matrix: \n', matrix.toarray()
if debug: print '=' * 80
%%time
import random
'''
This function slices a matrix into the same number of CSR slices as that of input (CSV slices).
The CSR slices are then written to files with a '.csr' or a '.tsr' extension depending on
whether the input matrix is the original or transposed matrix, respectively.
Note! The matrix must be in LIL form, because slicing a SciPy CSR or CSC object
may not preserve the expected row order.
'''
# cut matrix to (4) slices
def slicer(matrix, filext):
nslice = len(csv_files)
zslice = int((matrix.shape[0] + nslice - 1) / nslice)
for i in range(nslice):
slice = matrix[zslice*i : min(zslice*(i+1), matrix.shape[0]), ]
#print "slice type = ", str(type(slice))
        # massage the slice so it has no all-zero row/col
# if False:
# mazzage(slice, filler=1, debug=True)
# elif True:
# # only for debug. make matrix as large as possible
# slice[-1, random.choice(range(slice.shape[1]))] += 1
# slice[random.choice(range(slice.shape[0])), -1] += 1
# output the slice ALWAYS in CSR format !!
fname = csv_files[i].replace('csv', filext)
slice = sparse.csr_matrix(slice)
with open(fname, 'w') as csvfile:
writer = csv.writer(csvfile, delimiter=',')
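            # Write the CSR slice as three rows (row offsets, column indices, values);
            # the +1 shift presumably matches the one-based CSR indexing read by the
            # companion DAAL ALS app.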
writer.writerow(slice.indptr + 1)
writer.writerow(slice.indices + 1)
writer.writerow(slice.data / 100.)
# DAAL ALS expects a preference of [0,1]
print 'slices[%s] = ' % fname, slice.shape
# @ only for debug - 1
if False:
for k in range(100):
sta_prfs[k//10, k%10] = k / 100.
'''
Don't convert sta_prfs from lil to csr BEFORE slicing it!
Otherwise, Intel DAAL ALS rejects the files and crashes!!
'''
slicer(sta_prfs, 'csr')
slicer(sta_prfs.transpose().tocsr(), 'tsr') # lil.transpose is a CSC !!!
'''
This and the following cells are for visually verifying the output
CSR files before feeding them into my C++ implicit ALS app.
See ./implicit_als_csr_distributed_mpi2.cpp.
Before running this verification, change the condition "@ only for debug - 1"
in last cell to True so as to verify the first slice is correctly transposed.
'''
def readCSR(fileName):
print fileName
# csr file should have 3 rows: indices, indptrs and data
rows = []
with open(fileName) as f:
reader = csv.reader(f)
for i, row in enumerate(reader):
rows.append(row)
print "%d = " % i, row[0:10]
rows[0] = np.array(rows[0]).astype(np.integer) - 1
rows[1] = np.array(rows[1]).astype(np.integer) - 1
rows[2] = np.array(rows[2]).astype(np.float)
csr = sparse.csr_matrix((rows[2], rows[1], rows[0]))
# dump csr and its dense
if True:
print rows[0][0:10]
print rows[1][0:10]
dense = csr.toarray()
print "dense = ", dense.shape
print dense[0:10, 0:10]
# print 'diff=',np.where(np.diff(csr.indptr) == 0)[0]
# for i in range(csr.shape[0]):
# print "row[%d]="%i, rows[1][rows[0][i]:rows[0][i+1]]
# if len(rows[1][rows[0][i]:rows[0][i+1]] ) == 0:
# print "row[%d]="%i, rows[1][rows[0][i]:rows[0][i+1]]
print '=' * 80
return csr
# verify the written CSR files
for csv_file in csv_files: readCSR(csv_file.replace('csv', 'csr'))
for csv_file in csv_files: readCSR(csv_file.replace('csv', 'tsr'))
```
|
github_jupyter
|
'''
This script is part of a network Performance Profile Recommender system built on
top of Intel DAAL Kmeans and (implicit) ALS algorithms.
It converts the distributed *CSV* files output by kmeans_dense_distributed_mpi2.cpp
(DAAL KMean w/ MPI) to input *CSR* files of implicit_als_csr_distributed_mpi2.cpp.
See kmeans_dense_distributed_mpi2.cpp for C++ code of the DAAL Kmeans app.
See implicit_als_csr_distributed_mpi2.cpp for C++ code of the DAAL ALS app.
'''
import csv
import numpy as np
from scipy import sparse
'''
A generator class usable with 'with' to read a CSV file. Strictly speaking the outer 'with'
is unnecessary, because the inner 'with' already closes the file; it is kept for illustration only.
'''
class readCSV:
def __init__(self, fileName):
self.fileName = fileName
def __generator__(self):
with open(self.fileName) as f:
for cols in csv.reader(f):
yield cols
def __enter__(self):
self.generator = self.__generator__()
return self.generator
def __exit__(self, type, value, traceback):
# anything to do?
pass
# get a list of input CSV files
import glob
import threading
import sys
csv_files = glob.glob('../data/sta_phyr_x*.csv')
if len(csv_files) <= 0:
sys.exit()
'''
The order of file names (sta_phyr_xaa, sta_phyr_xab, ...)
is significant for merging and slicing the matrices !!!!!
'''
csv_files = sorted(csv_files, key=lambda f: f)
print '\n'.join(csv_files)
# prepare to merge the CSV files in parallel
max_stas = [1 for i in range(len(csv_files))]
max_prfs = [1 for i in range(len(csv_files))]
sta_prfs = sparse.lil_matrix((9,9))
'''
Thread function to read CSV files output by kmeans_dense_distributed_mpi2.cpp.
Depending on the value of 'load', it either gets the shapes or loads the data into sta_prfs[].
'''
def worker_csv(i, fname, load):
with readCSV(fname) as csv:
for cols in csv:
if len(cols) == 3:
if load == False:
                    # get shapes only (cast to int so the max comparison is numeric, not lexicographic)
                    if int(cols[0]) > max_stas[i]: max_stas[i] = int(cols[0])
                    if int(cols[2]) > max_prfs[i]: max_prfs[i] = int(cols[2])
                else:
                    # load data (cast indices to int and the value to float)
                    sta_prfs[int(cols[0]), int(cols[2])] = float(cols[1])
%%time
'''
To convert the CSV files output by kmeans_dense_distributed_mpi2.cpp,
it needs 2 passes:
1. learn maximum network endpoint id and network profile id (aka. learn matrix shape)
(these will be the maximum 'user' id and 'item' id for ALS)
2. load network endpoint id's, profile id's and their ratings
This cell is for pass 1; next cell is for pass 2.
'''
threads = []
for i, csv_file in enumerate(csv_files):
t = threading.Thread(target=worker_csv, args=(i, csv_file, False))
threads.append(t)
t.start()
[t.join() for t in threads]
max_sta = 1 + int(max(max_stas))
max_prf = 1 + int(max(max_prfs))
# To construct a matrix efficiently, use either dok_matrix or lil_matrix...
sta_prfs = sparse.lil_matrix((max_sta, max_prf), dtype=int, shape=None)
print sta_prfs.shape
%%time
# fork a list of threads to populate entries of user-item matrix
threads = []
for i, csv_file in enumerate(csv_files):
t = threading.Thread(target=worker_csv, args=(i, csv_file, True))
threads.append(t)
t.start()
[t.join() for t in threads]
# play dummy traversal of the matrix. (for debug only)
if False:
nc = 0
lil = sta_prfs
for i, (row, data) in enumerate(zip(lil.rows, lil.data)):
if nc > 1000: break
for j, val in zip(row, data):
print '[%d, %d] = %d' % (i, j, val)
nc += 1
if nc > 100: break
if False:
# @deprecated! use mazzage() below instead!
        # remove all-zero rows and columns, otherwise DAAL ALS aborts!!!
# !!! TODO: need to create a mapping table for the removed STAs/profiles !!!
sta_prfs = sta_prfs[sta_prfs.getnnz(1)>0][:,sta_prfs.getnnz(0)>0]
print sta_prfs.shape
'''
Check if matrix is a CSR
'''
def is_csr(matrix):
return str(type(matrix)).find('csr_matrix') > 0
'''
This function massages a sparse matrix so that none of its rows or columns
contains only zero values.
Deprecated!! Experiments confirmed that Intel DAAL ALS accepts an input
matrix (or its transpose) with all-zero values in any row/column.
'''
def mazzage(matrix, filler=1, debug=False):
if debug: print type(matrix)
if debug: print 'matrix: \n', matrix.toarray()
# toggle between CSR and CSC to get indices of any all-zero rows and columns
matriy= matrix.tocsc() if is_csr(matrix) else matrix.tocsr()
diff1 = np.where(np.diff(matrix.indptr) == 0)[0]
diff2 = np.where(np.diff(matriy.indptr) == 0)[0]
# swap indices if it's a CSC matrix
if not is_csr(matrix):
diff1, diff2 = diff2, diff1
if debug: print 'diff1: ', diff1
if debug: print 'diff2: ', diff2
if len(diff1)>0 or len(diff2)>0:
# need outer loop, because sucky np.broadcast can't handle index blocks larger than 32
for ri in diff1 if len(diff1) else [0]:
matrix[ri, diff2 if len(diff2) else [0]] = filler
if debug: print 'matrix: \n', matrix.toarray()
if debug: print '=' * 80
%%time
import random
'''
This function slices a matrix into the same number of CSR slices as that of input (CSV slices).
The CSR slices are then written to files with a '.csr' or a '.tsr' extension depending on
whether the input matrix is the original or transposed matrix, respectively.
Note! The matrix must be in LIL form, because slicing a SciPy CSR or CSC object
may not preserve the expected row order.
'''
# cut matrix to (4) slices
def slicer(matrix, filext):
nslice = len(csv_files)
zslice = int((matrix.shape[0] + nslice - 1) / nslice)
for i in range(nslice):
slice = matrix[zslice*i : min(zslice*(i+1), matrix.shape[0]), ]
#print "slice type = ", str(type(slice))
        # massage the slice so it has no all-zero row/col
# if False:
# mazzage(slice, filler=1, debug=True)
# elif True:
# # only for debug. make matrix as large as possible
# slice[-1, random.choice(range(slice.shape[1]))] += 1
# slice[random.choice(range(slice.shape[0])), -1] += 1
# output the slice ALWAYS in CSR format !!
fname = csv_files[i].replace('csv', filext)
slice = sparse.csr_matrix(slice)
with open(fname, 'w') as csvfile:
writer = csv.writer(csvfile, delimiter=',')
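            # Write the CSR slice as three rows (row offsets, column indices, values);
            # the +1 shift presumably matches the one-based CSR indexing read by the
            # companion DAAL ALS app.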
writer.writerow(slice.indptr + 1)
writer.writerow(slice.indices + 1)
writer.writerow(slice.data / 100.)
# DAAL ALS expects a preference of [0,1]
print 'slices[%s] = ' % fname, slice.shape
# @ only for debug - 1
if False:
for k in range(100):
sta_prfs[k//10, k%10] = k / 100.
'''
Don't convert sta_prfs from lil to csr BEFORE slicing it!
Otherwise, Intel DAAL ALS rejects the files and crashes!!
'''
slicer(sta_prfs, 'csr')
slicer(sta_prfs.transpose().tocsr(), 'tsr') # lil.transpose is a CSC !!!
'''
This and the following cells are for visually verifying the output
CSR files before feeding them into my C++ implicit ALS app.
See ./implicit_als_csr_distributed_mpi2.cpp.
Before running this verification, change the condition "@ only for debug - 1"
in last cell to True so as to verify the first slice is correctly transposed.
'''
def readCSR(fileName):
print fileName
# csr file should have 3 rows: indices, indptrs and data
rows = []
with open(fileName) as f:
reader = csv.reader(f)
for i, row in enumerate(reader):
rows.append(row)
print "%d = " % i, row[0:10]
rows[0] = np.array(rows[0]).astype(np.integer) - 1
rows[1] = np.array(rows[1]).astype(np.integer) - 1
rows[2] = np.array(rows[2]).astype(np.float)
csr = sparse.csr_matrix((rows[2], rows[1], rows[0]))
# dump csr and its dense
if True:
print rows[0][0:10]
print rows[1][0:10]
dense = csr.toarray()
print "dense = ", dense.shape
print dense[0:10, 0:10]
# print 'diff=',np.where(np.diff(csr.indptr) == 0)[0]
# for i in range(csr.shape[0]):
# print "row[%d]="%i, rows[1][rows[0][i]:rows[0][i+1]]
# if len(rows[1][rows[0][i]:rows[0][i+1]] ) == 0:
# print "row[%d]="%i, rows[1][rows[0][i]:rows[0][i+1]]
print '=' * 80
return csr
# verify the written CSR files
for csv_file in csv_files: readCSR(csv_file.replace('csv', 'csr'))
for csv_file in csv_files: readCSR(csv_file.replace('csv', 'tsr'))
| 0.411229 | 0.484624 |
## Functional annotation
We will quickly get a "full" set of ChIP-Seq x RNA-Seq target genes
```
import pandas as pd
```
#### Gene annotations
Read gene annotation table and extract gene names
```
genes = pd.read_csv("~/shared/MCB280A_data/S288C_R64-3-1/saccharomyces_cerevisiae_R64-3-1_20210421.gff",
delimiter="\t",
header=None,
names=['seqid', 'source', 'type',
'start', 'end', 'score', 'strand',
'phase', 'attributes'])
genes = genes[genes['type'] == "gene"]
genes['name'] = genes['attributes'].str.split(';').str[1]
genes['name'] = genes['name'].str.replace("Name=", "")
```
Build promoter regions for each gene
```
import numpy as np
genes['prmstart'] = np.where(genes['strand'] == '+',
genes['start'] - 1000,
genes['end'] + 1)
genes['prmend'] = genes['prmstart'] + 1000
```
#### ChIP-Seq Peaks
Now, we'll read in the table of ChIP-Seq peaks.
```
peaks = pd.read_csv("~/Hsf1/ChIP-Seq/macs2/Hsf1_ChIP_heatshk_peaks.xls",
comment='#', delimiter='\t')
```
Compute intersection between promoter regions and ChIP-Seq peaks
```
gene_peaks = {}
top_peaks = peaks[peaks['-log10(qvalue)'] > 20]
for peak in top_peaks.itertuples():
for gene in genes.itertuples():
if (peak.chr == gene.seqid) and (peak.abs_summit >= gene.prmstart) and (peak.abs_summit <= gene.prmend):
gene_peaks[gene.name] = peak.name
gene_peaks = pd.Series(gene_peaks, name='peak')
```
#### ChIP-Seq Genes
Add peaks to the gene table
```
genes2 = pd.merge(genes, gene_peaks,
left_on='name', right_index=True, how='left')
```
Now, we will merge in the peaks table by matching up the `peak` column with the `name` column in the peaks table.
```
genes3 = pd.merge(genes2, peaks,
left_on='peak', right_on='name', how='left')
```
#### RNA-Seq data
Finally, we're ready to read in the table of RNA-Seq results.
```
results = pd.read_csv("full.results.csv",
index_col=0)
```
Merge RNA-Seq into the gene table by name
```
genes4 = pd.merge(genes3, results,
left_on='name_x', right_index=True)
```
### Hsf1 Target Genes
Here we collect the genes that have both a ChIP-Seq peak and a significant expression change into a table called `targets`.
We want a list of target genes for functional analysis. The gene names can be found in the `name_x` column.
We want a simple listing of gene names, which we can produce using the `to_string()` method on the column, setting the `index` parameter to `False` to suppress the row index.
This generates a string, which we need to `print(...)`.
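A minimal sketch of this step (the adjusted p-value column name `padj` and the 0.05 cutoff are assumptions about the layout of `full.results.csv`):
```
# Genes with both a ChIP-Seq peak (non-missing 'peak') and a significant expression change
targets = genes4[genes4['peak'].notna() & (genes4['padj'] < 0.05)]

# Simple listing of the target gene names, without the row index
print(targets['name_x'].to_string(index=False))
```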
### RNA-Seq enrichment analysis
We can also run an enrichment analysis based just on the RNA-Seq data.
To do this, we write a table of genes and expression changes.
We want to exclude genes that are not expressed at all under any condition. Create a table of `present` genes that are above a cutoff `baseMean` value.
See how many significantly changed genes show up in this analysis.
Extract the column of expression changes
Write a file of expression changes, using the `sep` parameter to make a tab-delimited text file rather than the default CSV.
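A sketch of these final steps (the `baseMean` cutoff of 10, the `log2FoldChange` and `padj` column names, and the output file name are assumptions about the DESeq2-style results table):
```
# Keep only genes that are expressed above a minimal level
present = results[results['baseMean'] > 10]

# How many of the present genes changed significantly
print((present['padj'] < 0.05).sum())

# Extract the column of expression changes
changes = present['log2FoldChange']

# Write a tab-delimited file of expression changes
changes.to_csv("expression_changes.txt", sep="\t")
```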
|
github_jupyter
|
import pandas as pd
genes = pd.read_csv("~/shared/MCB280A_data/S288C_R64-3-1/saccharomyces_cerevisiae_R64-3-1_20210421.gff",
delimiter="\t",
header=None,
names=['seqid', 'source', 'type',
'start', 'end', 'score', 'strand',
'phase', 'attributes'])
genes = genes[genes['type'] == "gene"]
genes['name'] = genes['attributes'].str.split(';').str[1]
genes['name'] = genes['name'].str.replace("Name=", "")
import numpy as np
genes['prmstart'] = np.where(genes['strand'] == '+',
genes['start'] - 1000,
genes['end'] + 1)
genes['prmend'] = genes['prmstart'] + 1000
peaks = pd.read_csv("~/Hsf1/ChIP-Seq/macs2/Hsf1_ChIP_heatshk_peaks.xls",
comment='#', delimiter='\t')
gene_peaks = {}
top_peaks = peaks[peaks['-log10(qvalue)'] > 20]
for peak in top_peaks.itertuples():
for gene in genes.itertuples():
if (peak.chr == gene.seqid) and (peak.abs_summit >= gene.prmstart) and (peak.abs_summit <= gene.prmend):
gene_peaks[gene.name] = peak.name
gene_peaks = pd.Series(gene_peaks, name='peak')
genes2 = pd.merge(genes, gene_peaks,
left_on='name', right_index=True, how='left')
genes3 = pd.merge(genes2, peaks,
left_on='peak', right_on='name', how='left')
results = pd.read_csv("full.results.csv",
index_col=0)
genes4 = pd.merge(genes3, results,
left_on='name_x', right_index=True)
| 0.315209 | 0.919823 |
##### **1) Finding zeros in dataframe**
##### **2) Replace Zeros with NaN**
- replace()
##### **3) Replace NaN Values with Zeros**
- fillna()
- replace()
##### **4) Replace NA values with the mode of a DataFrame column**
df['column'].fillna(df['column'].mode()[0], inplace=True)
##### **5) Replace NA values with the mean of a DataFrame column**
df['column'].fillna((df['column'].mean()), inplace=True)
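A small sketch of options 4) and 5) on a toy DataFrame (the column name `column` is just a placeholder):
```
import pandas as pd
import numpy as np

df = pd.DataFrame({'column': [1, 2, 2, np.nan, 4, np.nan]})

# Replace NA values with the mode of the column
df_mode = df.copy()
df_mode['column'].fillna(df_mode['column'].mode()[0], inplace=True)

# Replace NA values with the mean of the column
df_mean = df.copy()
df_mean['column'].fillna(df_mean['column'].mean(), inplace=True)

print(df_mode)
print(df_mean)
```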
----------------
### **1) Replace Zeros with NaN**
Download the dataset from https://github.com/arupbhunia/Data-Pre-processing/blob/master/datasets/diabetes.csv
Dataset used - **diabetes.csv**
Download instructions: open the link above, click the **Raw** button in the top-right corner, then press **Ctrl+S** and save the page as a **.csv** file.
```
from google.colab import files
uploaded = files.upload()
import numpy as np
import pandas as pd
# create a data frame named diabetes and load the csv file
diabetes = pd.read_csv("diabetes.csv")
#print the head
diabetes.head()
```
#### **a) Finding zeros in dataframe column wise**
```
# will summarize the number of zeroes in each column
(diabetes == 0).sum(axis=0)
# (diabetes == 0).sum() ---> Column wise
# (diabetes == 0).sum(axis=0) ---> Where axis 0 specifies that sum will operate on columns.
# (diabetes == 0).sum(axis=1) ---> Where axis 1 specifies that sum will operate on rows.
# will show the total number of zeroes in the whole dataframe
(diabetes == 0).sum().sum()
# using Numpy
print(np.count_nonzero(diabetes==0))
```
- We know that some features **cannot be zero (e.g. a person's blood pressure cannot be 0)**, hence we will impute **zeros** with **NaN** values in these features.
#### **b) Replace zero with nan**
- Since the **Glucose, BloodPressure, SkinThickness, Insulin, and BMI** features **cannot be zero**, we will impute **zeros** with **NaN** values in these features.
**1) Replace zero with nan for single column**
df['amount'] = df['amount'].replace(0, np.nan)
**2) Replace zero with nan for multiple columns**
cols = ["Weight","Height","BootSize","SuitSize","Type"]
df[cols] = df[cols].replace(['0', 0], np.nan)
**3) Replace zeros with blanks in a DataFrame column**
df['amount'].replace(['0', '0.0'], '', inplace=True)
```
cols = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
diabetes[cols] = diabetes[cols].replace(['0', 0], np.nan)
# Display the no of null values in each feature
diabetes.isnull().sum()
```
------------------
##### **Question 1**
#### **Count how many zero values in a column pandas**
```
# Create Pandas dataframe:
import pandas as pd
df = pd.DataFrame({'a':[1,0,0,1,3],
'b':[0,0,1,0,1],
'c':[0,0,0,0,0]})
df
```
**Both give the same result:**
**(df == 0).sum() ---> Column wise**
**(df == 0).sum(axis=0) ---> Where axis 0 specifies that sum will operate on columns**
```
(df == 0).sum()
```
**(df == 0).sum(axis=1) ---> Where axis 1 specifies that sum will operate on rows**
```
(df == 0).sum(axis=1)
```
#### **count number of zeros in a column**
```
(df['a'] == 0).sum()
```
#### **Count non-zero values in each column**
```
df.astype(bool).sum(axis=0)
```
-----------------------
#### **2) Replace NaN Values with Zeros**
##### **Methods to replace NaN values with zeros in Pandas DataFrame:**
**fillna()**
- The **fillna()** function is used to fill **NA/NaN** values using the specified method.
**replace()**
- The **dataframe.replace()** function in Pandas can be defined as a simple method used to replace a **string, regex, list, dictionary** etc. in a DataFrame.
##### **Steps to replace NaN values:**
- **For a single column using Pandas:**
df['DataFrame Column'] = df['DataFrame Column'].fillna(0)
- **For a single column using NumPy:**
df['DataFrame Column'] = df['DataFrame Column'].replace(np.nan, 0)
- **For an entire DataFrame using Pandas:**
df.fillna(0)
- **For an entire DataFrame using NumPy:**
df.replace(np.nan, 0)
- **Replace NA values with mode of a DataFrame column**
df['column'].fillna(df['column'].mode()[0], inplace=True)
- **Replace NA values with mean of a DataFrame column**
df['column'].fillna((df['column'].mean()), inplace=True)
##### **Method 1: Using fillna() function for a single column**
```
# importing libraries
import pandas as pd
import numpy as np
nums = {'Set_of_Numbers': [2, 3, 5, 7, 11, 13,
np.nan, 19, 23, np.nan]}
# Create the dataframe
df1 = pd.DataFrame(nums, columns =['Set_of_Numbers'])
# print the DataFrame
df1
# Apply the function
df1['Set_of_Numbers'] = df1['Set_of_Numbers'].fillna(0)
# print the DataFrame
df1
```
##### **Method 2: Using replace() function for a single column**
```
# importing libraries
import pandas as pd
import numpy as np
nums = {'Car Model Number': [223, np.nan, 237, 195, np.nan,
575, 110, 313, np.nan, 190, 143,
np.nan],
'Engine Number': [4511, np.nan, 7570, 1565, 1450, 3786,
2995, 5345, 7777, 2323, 2785, 1120]}
# Create the dataframe
df2 = pd.DataFrame(nums, columns =['Car Model Number', 'Engine Number'])
# print the DataFrame
df2
# Apply the function
df2['Car Model Number'] = df2['Car Model Number'].replace(np.nan, 0)
# print the DataFrame
df2
```
##### **Method 3: Using fillna() function for the whole dataframe**
```
# importing libraries
import pandas as pd
import numpy as np
nums = {'Number_set_1': [0, 1, 1, 2, 3, 5, np.nan,
13, 21, np.nan],
'Number_set_2': [3, 7, np.nan, 23, 31, 41,
np.nan, 59, 67, np.nan],
'Number_set_3': [2, 3, 5, np.nan, 11, 13, 17,
19, 23, np.nan]}
# Create the dataframe
df3 = pd.DataFrame(nums)
# print the DataFrame
df3
# Apply the function
df3 = df3.fillna(0)
# print the DataFrame
df3
```
##### **Method 4: Using replace() function for the whole dataframe**
```
# importing libraries
import pandas as pd
import numpy as np
nums = {'Student Name': [ 'Shrek', 'Shivansh', 'Ishdeep',
'Siddharth', 'Nakul', 'Prakhar',
'Yash', 'Srikar', 'Kaustubh',
'Aditya', 'Manav', 'Dubey'],
'Roll No.': [ 18229, 18232, np.nan, 18247, 18136,
np.nan, 18283, 18310, 18102, 18012,
18121, 18168],
'Subject ID': [204, np.nan, 201, 105, np.nan, 204,
101, 101, np.nan, 165, 715, np.nan],
'Grade Point': [9, np.nan, 7, np.nan, 8, 7, 9, 10,
np.nan, 9, 6, 8]}
# Create the dataframe
df4 = pd.DataFrame(nums)
# print the DataFrame
df4
# Apply the function
df4 = df4.replace(np.nan, 0)
# print the DataFrame
df4
```
|
github_jupyter
|
from google.colab import files
uploaded = files.upload()
import numpy as np
import pandas as pd
# create a data frame named diabetes and load the csv file
diabetes = pd.read_csv("diabetes.csv")
#print the head
diabetes.head()
# will summarize the number of zeroes in each column
(diabetes == 0).sum(axis=0)
# (diabetes == 0).sum() ---> Column wise
# (diabetes == 0).sum(axis=0) ---> Where axis 0 specifies that sum will operate on columns.
# (diabetes == 0).sum(axis=1) ---> Where axis 1 specifies that sum will operate on rows.
# will show the total number of zeroes in the whole dataframe
(diabetes == 0).sum().sum()
# using Numpy
print(np.count_nonzero(diabetes==0))
cols = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
diabetes[cols] = diabetes[cols].replace(['0', 0], np.nan)
# Display the no of null values in each feature
diabetes.isnull().sum()
# Create Pandas dataframe:
import pandas as pd
df = pd.DataFrame({'a':[1,0,0,1,3],
'b':[0,0,1,0,1],
'c':[0,0,0,0,0]})
df
(df == 0).sum()
(df == 0).sum(axis=1)
(df['a'] == 0).sum()
df.astype(bool).sum(axis=0)
# importing libraries
import pandas as pd
import numpy as np
nums = {'Set_of_Numbers': [2, 3, 5, 7, 11, 13,
np.nan, 19, 23, np.nan]}
# Create the dataframe
df1 = pd.DataFrame(nums, columns =['Set_of_Numbers'])
# print the DataFrame
df1
# Apply the function
df1['Set_of_Numbers'] = df1['Set_of_Numbers'].fillna(0)
# print the DataFrame
df1
# importing libraries
import pandas as pd
import numpy as np
nums = {'Car Model Number': [223, np.nan, 237, 195, np.nan,
575, 110, 313, np.nan, 190, 143,
np.nan],
'Engine Number': [4511, np.nan, 7570, 1565, 1450, 3786,
2995, 5345, 7777, 2323, 2785, 1120]}
# Create the dataframe
df2 = pd.DataFrame(nums, columns =['Car Model Number', 'Engine Number'])
# print the DataFrame
df2
# Apply the function
df2['Car Model Number'] = df2['Car Model Number'].replace(np.nan, 0)
# print the DataFrame
df2
# importing libraries
import pandas as pd
import numpy as np
nums = {'Number_set_1': [0, 1, 1, 2, 3, 5, np.nan,
13, 21, np.nan],
'Number_set_2': [3, 7, np.nan, 23, 31, 41,
np.nan, 59, 67, np.nan],
'Number_set_3': [2, 3, 5, np.nan, 11, 13, 17,
19, 23, np.nan]}
# Create the dataframe
df3 = pd.DataFrame(nums)
# print the DataFrame
df3
# Apply the function
df3 = df3.fillna(0)
# print the DataFrame
df3
# importing libraries
import pandas as pd
import numpy as np
nums = {'Student Name': [ 'Shrek', 'Shivansh', 'Ishdeep',
'Siddharth', 'Nakul', 'Prakhar',
'Yash', 'Srikar', 'Kaustubh',
'Aditya', 'Manav', 'Dubey'],
'Roll No.': [ 18229, 18232, np.nan, 18247, 18136,
np.nan, 18283, 18310, 18102, 18012,
18121, 18168],
'Subject ID': [204, np.nan, 201, 105, np.nan, 204,
101, 101, np.nan, 165, 715, np.nan],
'Grade Point': [9, np.nan, 7, np.nan, 8, 7, 9, 10,
np.nan, 9, 6, 8]}
# Create the dataframe
df4 = pd.DataFrame(nums)
# print the DataFrame
df4
# Apply the function
df4 = df4.replace(np.nan, 0)
# print the DataFrame
df4
| 0.213295 | 0.978611 |
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
from mpl_toolkits.mplot3d import Axes3D
import IPython.html.widgets as widg
from IPython.display import clear_output
import sys
%matplotlib inline
class Network:
def __init__(self, shape):
self.shape = np.array(shape) #shape is array-like, i.e. (2,3,4) is a 2 input, 3 hidden node, 4 output network
self.weights = [np.random.ranf((self.shape[i],self.shape[i-1]))*.1 for i in range(1,len(self.shape))]
self.biases = [np.random.ranf((self.shape[i],))*.1 for i in range(1,len(self.shape))]
self.errors = [np.random.ranf((self.shape[i],)) for i in range(1,len(self.shape))]
self.eta = .2
self.lam = .01
def sigmoid(self, inputs):
return 1/(1+np.exp(-inputs))
def feedforward(self, inputs):
        assert inputs.shape==(self.shape[0],) #inputs must feed directly into the first layer.
self.activation = [np.zeros((self.shape[i],)) for i in range(len(self.shape))]
self.activation[0] = inputs
for i in range(1,len(self.shape)):
self.activation[i]=self.sigmoid(np.dot(self.weights[i-1],self.activation[i-1])+self.biases[i-1])
return self.activation[-1]
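    # comp_error implements backpropagation: exp(z)/(exp(z)+1)**2 is the derivative of the
    # sigmoid, so each error term is the upstream gradient scaled by sigmoid'(z), layer by layer.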
def comp_error(self, answer):
assert answer.shape==self.activation[-1].shape
self.errors[-1] = (self.activation[-1]-answer)*np.exp(np.dot(self.weights[-1],self.activation[-2])+self.biases[-1])/(np.exp(np.dot(self.weights[-1],self.activation[-2])+self.biases[-1])+1)**2
for i in range(len(self.shape)-2, 0, -1):
self.errors[i-1] = self.weights[i].transpose().dot(self.errors[i])*np.exp(np.dot(self.weights[i-1],self.activation[i-1])+self.biases[i-1])/(np.exp(np.dot(self.weights[i-1],self.activation[i-1])+self.biases[i-1])+1)**2
def grad_descent(self):
for i in range(len(self.biases)):
self.biases[i]=self.biases[i]-self.eta*self.errors[i]
for i in range(len(self.weights)):
for j in range(self.weights[i].shape[0]):
for k in range(self.weights[i].shape[1]):
self.weights[i][j,k] = (1-self.eta*self.lam/1000)*self.weights[i][j,k] - self.eta*self.activation[i][k]*self.errors[i][j]
def train(self, inputs, answer):
self.feedforward(inputs)
self.comp_error(answer)
self.grad_descent()
n1 = Network([2,15,1])
print n1.feedforward(np.array([1,2]))
for i in range(1000):
n1.train(np.array([1,200]), np.array([.5]))
print n1.feedforward(np.array([1,2]))
from sklearn.datasets import load_digits
digits = load_digits()
print(digits.data[0]*.01)
iden = np.eye(10)
acc = np.zeros((50,))
num = Network([64, 5, 10])
print num.feedforward(digits.data[89]*.01)
for i in range(50):
for dig, ans in zip(digits.data[1:1000],digits.target[1:1000]):
num.train(dig*.01,iden[ans])
cor = 0
tot = 0
for dig, ans in zip(digits.data, digits.target):
if num.feedforward(dig*.01).argmax()==ans:
            cor += 1
        tot += 1
    acc[i] = cor/float(tot)
print num.feedforward(digits.data[90]*.01), digits.target[90]
plt.figure(figsize=(15,10))
plt.plot(np.linspace(0,50,50),acc)
iden = np.eye(10)
acc = np.zeros((1000,2000))
f = plt.figure(figsize = (15,50))
for h in range(301, 401):
num = Network([64, 14, 10])
print str(((2000*(h-1))/(2000.*(100))))
for i in range(2000):
for dig, ans in zip(digits.data[1:h],digits.target[1:h]):
num.train(dig*.01,iden[ans])
cor = 0
for dig, ans in zip(digits.data, digits.target):
if num.feedforward(dig*.01).argmax()==ans:
cor += 1
acc[h-1,i] = cor/float(len(digits.data))
np.savetxt("Accuracy_Data_run_7_d.dat", acc)
def plot_epochs(az_angle, eleva):
fig = plt.figure(figsize=(15, 10))
ax = fig.add_subplot(111, projection='3d')
X, Y = np.meshgrid(np.linspace(0,2000,2000), np.linspace(8,14, 7))
ax.plot_surface(X, Y, acc)
ax.view_init(elev=eleva, azim=az_angle)
widg.interact(plot_epochs, az_angle=(0, 360, 1), eleva=(0,20,1))
print acc
```
|
github_jupyter
|
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
from mpl_toolkits.mplot3d import Axes3D
import IPython.html.widgets as widg
from IPython.display import clear_output
import sys
%matplotlib inline
class Network:
def __init__(self, shape):
self.shape = np.array(shape) #shape is array-like, i.e. (2,3,4) is a 2 input, 3 hidden node, 4 output network
self.weights = [np.random.ranf((self.shape[i],self.shape[i-1]))*.1 for i in range(1,len(self.shape))]
self.biases = [np.random.ranf((self.shape[i],))*.1 for i in range(1,len(self.shape))]
self.errors = [np.random.ranf((self.shape[i],)) for i in range(1,len(self.shape))]
self.eta = .2
self.lam = .01
def sigmoid(self, inputs):
return 1/(1+np.exp(-inputs))
def feedforward(self, inputs):
        assert inputs.shape==(self.shape[0],) #inputs must feed directly into the first layer.
self.activation = [np.zeros((self.shape[i],)) for i in range(len(self.shape))]
self.activation[0] = inputs
for i in range(1,len(self.shape)):
self.activation[i]=self.sigmoid(np.dot(self.weights[i-1],self.activation[i-1])+self.biases[i-1])
return self.activation[-1]
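    # comp_error implements backpropagation: exp(z)/(exp(z)+1)**2 is the derivative of the
    # sigmoid, so each error term is the upstream gradient scaled by sigmoid'(z), layer by layer.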
def comp_error(self, answer):
assert answer.shape==self.activation[-1].shape
self.errors[-1] = (self.activation[-1]-answer)*np.exp(np.dot(self.weights[-1],self.activation[-2])+self.biases[-1])/(np.exp(np.dot(self.weights[-1],self.activation[-2])+self.biases[-1])+1)**2
for i in range(len(self.shape)-2, 0, -1):
self.errors[i-1] = self.weights[i].transpose().dot(self.errors[i])*np.exp(np.dot(self.weights[i-1],self.activation[i-1])+self.biases[i-1])/(np.exp(np.dot(self.weights[i-1],self.activation[i-1])+self.biases[i-1])+1)**2
def grad_descent(self):
for i in range(len(self.biases)):
self.biases[i]=self.biases[i]-self.eta*self.errors[i]
for i in range(len(self.weights)):
for j in range(self.weights[i].shape[0]):
for k in range(self.weights[i].shape[1]):
self.weights[i][j,k] = (1-self.eta*self.lam/1000)*self.weights[i][j,k] - self.eta*self.activation[i][k]*self.errors[i][j]
def train(self, inputs, answer):
self.feedforward(inputs)
self.comp_error(answer)
self.grad_descent()
n1 = Network([2,15,1])
print n1.feedforward(np.array([1,2]))
for i in range(1000):
n1.train(np.array([1,200]), np.array([.5]))
print n1.feedforward(np.array([1,2]))
from sklearn.datasets import load_digits
digits = load_digits()
print(digits.data[0]*.01)
iden = np.eye(10)
acc = np.zeros((50,))
num = Network([64, 5, 10])
print num.feedforward(digits.data[89]*.01)
for i in range(50):
for dig, ans in zip(digits.data[1:1000],digits.target[1:1000]):
num.train(dig*.01,iden[ans])
cor = 0
tot = 0
for dig, ans in zip(digits.data, digits.target):
if num.feedforward(dig*.01).argmax()==ans:
            cor += 1
        tot += 1
    acc[i] = cor/float(tot)
print num.feedforward(digits.data[90]*.01), digits.target[90]
plt.figure(figsize=(15,10))
plt.plot(np.linspace(0,50,50),acc)
iden = np.eye(10)
acc = np.zeros((1000,2000))
f = plt.figure(figsize = (15,50))
for h in range(301, 401):
num = Network([64, 14, 10])
print str(((2000*(h-1))/(2000.*(100))))
for i in range(2000):
for dig, ans in zip(digits.data[1:h],digits.target[1:h]):
num.train(dig*.01,iden[ans])
cor = 0
for dig, ans in zip(digits.data, digits.target):
if num.feedforward(dig*.01).argmax()==ans:
cor += 1
acc[h-1,i] = cor/float(len(digits.data))
np.savetxt("Accuracy_Data_run_7_d.dat", acc)
def plot_epochs(az_angle, eleva):
fig = plt.figure(figsize=(15, 10))
ax = fig.add_subplot(111, projection='3d')
X, Y = np.meshgrid(np.linspace(0,2000,2000), np.linspace(8,14, 7))
ax.plot_surface(X, Y, acc)
ax.view_init(elev=eleva, azim=az_angle)
widg.interact(plot_epochs, az_angle=(0, 360, 1), eleva=(0,20,1))
print acc
| 0.211417 | 0.745352 |
```
# Import required packages
import os
import shutil
import numpy as np
import sklearn.utils as sku
import Config as conf
import CSV as csv
# Set LOG_DIR & OUTPUT_DIR
LOG_DIR = conf.LOG_DIR.format('SLSTM')
OUTPUT_DIR = conf.OUTPUT_DIR.format('SLSTM')
# Import CSV data
csi, label, size = csv.getWindows()
# Import Keras
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.callbacks as kc
import tensorflow.keras.layers as kl
import tensorflow.keras.models as km
import tensorflow.keras.optimizers as ko
import tensorflow.keras.utils as ku
# Set CUDA_VISIBLE_DEVICES (select which GPUs to use) -- comment this out to use all GPUs
os.environ["CUDA_VISIBLE_DEVICES"]="0,1,2,3"
# Print tensorflow version
print("Tensorflow:", tf.__version__)
print("Keras:", keras.__version__)
# Setup Keras LSTM Model
model = None
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
adam = ko.Adam(learning_rate=conf.LEARNING_RATE, amsgrad=True)
lstm = kl.LSTM(
2048,
unit_forget_bias=True,
input_shape=(size[0], size[1]))
lstm.add_loss(lambda: 1e-8)
model = km.Sequential()
model.add(lstm)
model.add(kl.Dense(conf.ACTION_CNT, activation="softmax"))
model.compile(
loss="categorical_crossentropy",
optimizer=adam,
metrics=["accuracy"]
)
model.summary()
# Check output directory and prepare tensorboard
if os.path.exists(OUTPUT_DIR):
shutil.rmtree(OUTPUT_DIR)
os.makedirs(OUTPUT_DIR)
if os.path.exists(LOG_DIR):
shutil.rmtree(LOG_DIR)
os.makedirs(LOG_DIR)
tensorboard = kc.TensorBoard(
log_dir=LOG_DIR,
write_graph=True,
write_images=True,
update_freq=10)
print(
"Your tensorboard command is:"
)
print(" tensorboard --logdir=" + LOG_DIR)
print("Keras checkpoints and final result will be saved in here:")
print(" " + OUTPUT_DIR)
# Run KFold
xx, yy = sku.shuffle(csi, label, random_state=0)
for i in range(conf.KFOLD):
# Roll the data
xx = np.roll(xx, int(len(xx) / conf.KFOLD), axis=0)
yy = np.roll(yy, int(len(yy) / conf.KFOLD), axis=0)
# Data separation
xTrain = xx[int(len(xx) / conf.KFOLD):]
yTrain = yy[int(len(yy) / conf.KFOLD):]
xEval = xx[:int(len(xx) / conf.KFOLD)]
yEval = yy[:int(len(yy) / conf.KFOLD)]
# If there exists only one action, convert Y to binary form
if yEval.shape[1] == 1:
yTrain = ku.to_categorical(yTrain)
yEval = ku.to_categorical(yEval)
# Setup Keras Checkpoint
checkpoint = kc.ModelCheckpoint(OUTPUT_DIR + "K" + str(i + 1) + "_A{val_accuracy:.6f}_L{val_loss:.6f}.h5")
# Fit model (learn)
print(str(i + 1) + " th fitting started. Endpoint is " + str(conf.KFOLD) + " th.")
model.fit(
xTrain,
yTrain,
epochs=conf.EPOCH_CNT,
batch_size=conf.BATCH_SIZE,
shuffle=True,
verbose=1,
callbacks=[tensorboard, checkpoint],
validation_data=(xEval, yEval),
validation_freq=1,
use_multiprocessing=True)
print("Epoch completed!")
# Saving model
print("Saving model & model information...")
modelYML = model.to_yaml()
with open(OUTPUT_DIR + "model.yml", "w") as yml:
yml.write(modelYML)
modelJSON = model.to_json()
with open(OUTPUT_DIR + "model.json", "w") as json:
json.write(modelJSON)
model.save(OUTPUT_DIR + "model.h5")
print('Model saved!')
# Finished
```
|
github_jupyter
|
# Import required packages
import os
import shutil
import numpy as np
import sklearn.utils as sku
import Config as conf
import CSV as csv
# Set LOG_DIR & OUTPUT_DIR
LOG_DIR = conf.LOG_DIR.format('SLSTM')
OUTPUT_DIR = conf.OUTPUT_DIR.format('SLSTM')
# Import CSV data
csi, label, size = csv.getWindows()
# Import Keras
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.callbacks as kc
import tensorflow.keras.layers as kl
import tensorflow.keras.models as km
import tensorflow.keras.optimizers as ko
import tensorflow.keras.utils as ku
# Select which GPUs are visible -- comment this out to use all GPUs
os.environ["CUDA_VISIBLE_DEVICES"]="0,1,2,3"
# Print tensorflow version
print("Tensorflow:", tf.__version__)
print("Keras:", keras.__version__)
# Setup Keras LSTM Model
model = None
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
adam = ko.Adam(learning_rate=conf.LEARNING_RATE, amsgrad=True)
lstm = kl.LSTM(
2048,
unit_forget_bias=True,
input_shape=(size[0], size[1]))
lstm.add_loss(lambda: 1e-8)
model = km.Sequential()
model.add(lstm)
model.add(kl.Dense(conf.ACTION_CNT, activation="softmax"))
model.compile(
loss="categorical_crossentropy",
optimizer=adam,
metrics=["accuracy"]
)
model.summary()
# Check output directory and prepare tensorboard
if os.path.exists(OUTPUT_DIR):
shutil.rmtree(OUTPUT_DIR)
os.makedirs(OUTPUT_DIR)
if os.path.exists(LOG_DIR):
shutil.rmtree(LOG_DIR)
os.makedirs(LOG_DIR)
tensorboard = kc.TensorBoard(
log_dir=LOG_DIR,
write_graph=True,
write_images=True,
update_freq=10)
print(
"Your tensorboard command is:"
)
print(" tensorboard --logdir=" + LOG_DIR)
print("Keras checkpoints and final result will be saved in here:")
print(" " + OUTPUT_DIR)
# Run KFold
xx, yy = sku.shuffle(csi, label, random_state=0)
for i in range(conf.KFOLD):
# Roll the data
xx = np.roll(xx, int(len(xx) / conf.KFOLD), axis=0)
yy = np.roll(yy, int(len(yy) / conf.KFOLD), axis=0)
# Data separation
xTrain = xx[int(len(xx) / conf.KFOLD):]
yTrain = yy[int(len(yy) / conf.KFOLD):]
xEval = xx[:int(len(xx) / conf.KFOLD)]
yEval = yy[:int(len(yy) / conf.KFOLD)]
# If there exists only one action, convert Y to binary form
if yEval.shape[1] == 1:
yTrain = ku.to_categorical(yTrain)
yEval = ku.to_categorical(yEval)
# Setup Keras Checkpoint
checkpoint = kc.ModelCheckpoint(OUTPUT_DIR + "K" + str(i + 1) + "_A{val_accuracy:.6f}_L{val_loss:.6f}.h5")
# Fit model (learn)
print(str(i + 1) + " th fitting started. Endpoint is " + str(conf.KFOLD) + " th.")
model.fit(
xTrain,
yTrain,
epochs=conf.EPOCH_CNT,
batch_size=conf.BATCH_SIZE,
shuffle=True,
verbose=1,
callbacks=[tensorboard, checkpoint],
validation_data=(xEval, yEval),
validation_freq=1,
use_multiprocessing=True)
print("Epoch completed!")
# Saving model
print("Saving model & model information...")
modelYML = model.to_yaml()
with open(OUTPUT_DIR + "model.yml", "w") as yml:
yml.write(modelYML)
modelJSON = model.to_json()
with open(OUTPUT_DIR + "model.json", "w") as json:
json.write(modelJSON)
model.save(OUTPUT_DIR + "model.h5")
print('Model saved!')
# Finished
| 0.508788 | 0.182644 |
### PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY , STABILITY , AND VARIATION
By: Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
https://arxiv.org/abs/1710.10196
Implemented by: Deniz A. ACAR & Selcuk Sezer
This paper is essentially a training methodology for generative adversarial networks (GANs). The main idea is to train the adversarial networks progressively, from low resolutions up to high resolutions. In other words, the paper suggests constructing a GAN from two simple blocks as generator and discriminator, such that it initially tries to generate a 4x4 image, and then adding new layers progressively to generate higher-resolution images.
How this is achieved is illustrated in the following figure.
As can be seen, the generator initially creates 4x4 images; once training at that resolution is done, a block is added to both the generator and the discriminator to generate 8x8 images, and so on.

This incremental nature allows the training to first discover the large-scale structure of the image distribution and then shift attention to increasingly finer-scale detail, instead of having to learn all scales simultaneously. This also makes the training more stable, because there is less information to be learned at each stage and consequently fewer modes to cover.
In this project we have implemented the network that is used in the paper to generate high-resolution images trained on the CelebA_HQ dataset. The architecture and methodology that we have implemented can be seen below:

There are some interesting ideas which we will discuss here briefly.
### EQUALIZED LEARNING RATE
It is one of the more fascinating ideas in the paper.
Optimization methods like Adam and RMSProp normalize a gradient update by its estimated standard deviation; as a result, parameters get updated independently of their scale. To make the update dependent on the scale, the authors initialize the weight matrices from a normal distribution and then scale the weights at runtime by the per-layer normalization constant (c) from He's initializer.
In other words, they scale the weights in the forward pass by multiplying each weight with c.
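A minimal PyTorch sketch of this runtime scaling is shown below. This is our own simplified illustration of the idea, not the exact layer from our code base; the class name and layer structure are assumptions, and only the "initialize from N(0, 1), multiply by the He constant in the forward pass" part is taken from the paper.
```
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class EqualizedConv2d(nn.Module):
    """Conv2d whose weights are drawn from N(0, 1) and rescaled at runtime
    by the per-layer He constant c, as described above."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.padding = padding
        fan_in = in_ch * kernel_size * kernel_size
        self.c = np.sqrt(2.0 / fan_in)  # He initializer constant

    def forward(self, x):
        # equalized learning rate: multiply the stored weights by c in the forward pass
        return F.conv2d(x, self.weight * self.c, self.bias, padding=self.padding)
```
Because the stored weights all share the same unit-variance scale, the per-parameter normalization in Adam no longer hides differences in effective learning rate between layers.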
### PIXELWISE FEATURE VECTOR NORMALIZATION IN GENERATOR
In order to restrict the magnitude of the feature vectors in the generator, so that they do not spiral out of control due to the competition between the generator and the discriminator, they are normalized with a variant of local response normalization:

### INCREASING VARIATION USING MINIBATCH STANDARD DEVIATION
GANs have a tendency to capture only a subset of the variation found in training data, and Salimans
et al. (2016) suggest “minibatch discrimination” as a solution. They compute feature statistics not
only from individual images but also across the minibatch, thus encouraging the minibatches of
generated and training images to show similar statistics. This is implemented by adding a minibatch
layer towards the end of the discriminator, where the layer learns a large tensor that projects the
input activation to an array of statistics.
Here the authors first compute the standard deviation for each feature at each spatial location over the minibatch, then average these estimates over all features and spatial locations to obtain a single value for each subgroup. This value is then replicated for the images in that subgroup of the minibatch, and the result is concatenated to the input.
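A simplified sketch of this layer is given below. Our actual code follows the GAN ZOO implementation; here, for illustration only, the whole minibatch is treated as a single group.
```
import torch

def minibatch_stddev(x, eps=1e-8):
    """Single-group minibatch standard deviation layer (illustrative sketch).
    x: (N, C, H, W). Returns x with one extra constant feature map appended."""
    n, c, h, w = x.shape
    # std of each feature at each spatial location over the minibatch
    std = torch.sqrt(x.var(dim=0, unbiased=False) + eps)   # (C, H, W)
    # average over all features and spatial locations -> one scalar
    mean_std = std.mean()
    # replicate the scalar as a constant feature map for every image
    extra = mean_std.view(1, 1, 1, 1).expand(n, 1, h, w)
    return torch.cat([x, extra], dim=1)
```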
### TRANSITION
When a new block is added to the generator and the discriminator (which are exact mirrors of each other), it is faded in smoothly, as represented below:

The value $\alpha$ increases gradually at each epoch until it reaches 1.
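A sketch of this fade-in on the generator side is given below. The `new_block`, `to_rgb_prev` and `to_rgb_new` arguments are placeholders for the newly added block and the two 1x1 toRGB convolutions; this illustrates the blending only and is not the exact code we use.
```
import torch.nn.functional as F

def faded_generator_output(x_prev, new_block, to_rgb_prev, to_rgb_new, alpha):
    """Blend the new (higher-resolution) block into the generator output.
    x_prev: feature maps from the last fully trained block.
    alpha in [0, 1] grows gradually over the transition epochs."""
    # old path: convert to RGB at the lower resolution, then upsample the image
    old_rgb = F.interpolate(to_rgb_prev(x_prev), scale_factor=2, mode='nearest')
    # new path: upsample the features and pass them through the new block
    new_features = new_block(F.interpolate(x_prev, scale_factor=2, mode='nearest'))
    new_rgb = to_rgb_new(new_features)
    return (1.0 - alpha) * old_rgb + alpha * new_rgb
```
The discriminator uses the mirrored version of the same blending on its input.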
### GENERATOR and DISCRIMINATOR
The generator architecture can be seen below:


reference: https://towardsdatascience.com/progan-how-nvidia-generated-images-of-unprecedented-quality-51c98ec2cbd2
and these are the results that they have obtained in the reference above:

We have implemented the project as Python files rather than in Jupyter (which we initially used for the version 1 implementation).
We decided to do so due to our limited resources.
### Challenges:
[UPDATE]
We have found a mistake in our code: the discriminator sub-blocks did not have normalization layers. We have fixed the problem and started training the networks again (hopefully this will solve the training problems we faced before).
We aimed to generate images at 128x128 resolution, which was not achieved due to hardware limitations. We also struggled to get satisfying results for 64x64 images. During our first attempt we encountered mode collapse, which we believe was a result of selecting a large learning rate.
The reason we selected a larger learning rate than the one proposed in the paper was that, even after 400 epochs, we did not observe any significant changes in the 4x4 images.
We then tried a resolution-dependent learning rate of 0.128 / resolution, which performed quite well on the blocks up to the level that generates 64x64 images. At that level we did not see any significant changes even after about 15 hours of training, and different transition configurations did not help either. At this point we suspect that training the 64x64 stage simply takes far more time than we were able to spend on it.
Another problem we faced after completing the code was that the generator produced black 4x4 images no matter how many epochs had passed. We tried to debug the code, but it turned out that the learning rate given by the authors was very small; we spent quite a long time on this problem, which was easily solved by increasing the learning rate.
One other interesting problem was due to the implementation of the equalized learning rate, whose scaling constant was not sent to the device even after the layer was added to nn.Sequential. Our solution was to move it to the device of its input in the forward method of its class, as sketched below.
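A minimal sketch of that fix (illustrative names, not our actual class):
```
import torch
import torch.nn as nn

class ScaledLayer(nn.Module):
    """Wraps a layer with a scaling constant kept as a plain tensor attribute,
    which is therefore NOT moved by model.to(device); it is moved to the
    input's device inside forward() instead."""
    def __init__(self, layer, c):
        super().__init__()
        self.layer = layer
        self.c = torch.tensor(float(c))

    def forward(self, x):
        # move the constant to whatever device the input lives on
        return self.layer(x) * self.c.to(x.device)
```
Registering the constant with `register_buffer` would be the more idiomatic alternative, since buffers are moved automatically by `model.to(device)`.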
The minibatch standard deviation layer was explained only briefly in the paper, with no implementation details we could refer to, so we used the GAN ZOO implementation in our code.
We lost quite a lot of time in debugging (as explained above) and in training the 64x64 generator.
Our results can be seen below:
```
import os
import sys
sys.path.insert(0, os.path.abspath('{}/src/'.format(os.getcwd())))
config = {'channels':[128,128,128,128,128,128,128], # must be len(config['sr']) + 1
'latent_size':128,
'sr':[4, 8, 16, 32, 64, 128], # spatial resolution
'start_sr':4,
'level_batch_size':[16, 16, 16, 16, 16, 16],
'epochs_before_jump':[16, 15, 15, 15, 15, 15],
'learning_rate_generator':0.1,
'learning_rate_critic':0.1,
'generator_betas':(0.0, 0.99),
'critic_betas':(0.0, 0.99),
'ncrit':1,
'critic_lambda':10.,
'epsilon_drift':0.001,
'dataset_dir':'/home/deniz/Desktop/data_set/CelebAMask-HQ/',
'stat_format':'epoch {:4d} resolution {:4d} critic_loss {:6.4f} generator_loss {:6.4f} time {:6f}'}
import matplotlib.pyplot as plt
import numpy as np
import torch
from model.generator import Generator
from model.discriminator import Discriminator
from loss.WGANGP import PG_Gradient_Penalty
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
from torchvision.transforms import Compose, ToTensor
from os import getcwd
from numpy import array, log2, linspace
from time import time
def show_img(d):
plt.clf()
h = 10
s = d.shape[0]
fig = plt.figure(figsize=(config['sr'][level_index], config['sr'][level_index]))
m = int(s / h)
ax = [plt.subplot(m+1,h,i) for i in range(1, s+1)]
for i in range(1, s+1):
plt.axis('on')
ax[i-1].imshow(d[i-1,:,:], cmap='gray')
ax[i-1].set_xticklabels([])
ax[i-1].set_yticklabels([])
ax[i-1].set_aspect('equal')
fig.subplots_adjust(hspace=0, wspace=0.1)
fig.tight_layout()
plt.show(block=False)
plt.pause(10)
plt.close()
return
level_index = 4
device = torch.device('cuda:0')
generator = Generator(config['sr'][level_index], config, transition=True, save_checkpoint=False).to(device)
x = torch.randn(20, config['latent_size']).to(device)
a = generator(x) # .reshape(3, config['sr'][level_index], config['sr'][level_index])
image = array((a).tolist()).astype(int)
image = np.transpose(image, (0,2,3,1))
show_img(image)
level_index = 4
device = torch.device('cuda:0')
generator = Generator(config['sr'][level_index], config, transition=True, save_checkpoint=False).to(device)
x1 = np.random.randn(config['latent_size'])
x2 = np.random.randn(config['latent_size'])
alpha = linspace(0.,1.,20)
d = []
for i in alpha:
d.append(x1 * i + x2 * (1-i))
kk = torch.Tensor(array(d)).to(device)
a = generator(kk) # .reshape(3, config['sr'][level_index], config['sr'][level_index])
image = array((a).tolist()).astype(int)
image = np.transpose(image, (0,2,3,1))
show_img(image)
```
## FID Score
For quantitative evaluation, FID score from https://github.com/bioinf-jku/TTUR is used.
```
## Construct Fake Dataset
import cv2  # OpenCV is used below to write the generated images to disk
if not os.path.exists('./fake_dataset/'):
os.mkdir('./fake_dataset')
real_data_path = '/CelebA_32/'
fake_data_path = '/fake_dataset/'
batch_in = 100
fake_dataset_size = 1000
latent_vars = torch.randn(fake_dataset_size, config['latent_size']).to(device)
im_ind = 0
# Generate fake images
for i in range(fake_dataset_size//batch_in):
a = generator(latent_vars[i*batch_in:(i+1)*batch_in,:])
images = array((a).tolist()).astype(int)
images = np.transpose(images, (0,2,3,1))
for image in images:
im_name = str(im_ind)+'.jpg'
image = np.clip(image,0,255.0)
cv2.imwrite(fake_data_path+im_name, image)
im_ind += 1
## Evaluate FID score
!git clone https://github.com/bioinf-jku/TTUR.git ./FID
!python ./FID/fid.py ./CelebA_32 ./fake_dataset
```
|
github_jupyter
|
import os
import sys
sys.path.insert(0, os.path.abspath('{}/src/'.format(os.getcwd())))
config = {'channels':[128,128,128,128,128,128,128], # must be len(config['sr']) + 1
'latent_size':128,
'sr':[4, 8, 16, 32, 64, 128], # spatial resolution
'start_sr':4,
'level_batch_size':[16, 16, 16, 16, 16, 16],
'epochs_before_jump':[16, 15, 15, 15, 15, 15],
'learning_rate_generator':0.1,
'learning_rate_critic':0.1,
'generator_betas':(0.0, 0.99),
'critic_betas':(0.0, 0.99),
'ncrit':1,
'critic_lambda':10.,
'epsilon_drift':0.001,
'dataset_dir':'/home/deniz/Desktop/data_set/CelebAMask-HQ/',
'stat_format':'epoch {:4d} resolution {:4d} critic_loss {:6.4f} generator_loss {:6.4f} time {:6f}'}
import matplotlib.pyplot as plt
import numpy as np
import torch
from model.generator import Generator
from model.discriminator import Discriminator
from loss.WGANGP import PG_Gradient_Penalty
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
from torchvision.transforms import Compose, ToTensor
from os import getcwd
from numpy import array, log2, linspace
from time import time
def show_img(d):
plt.clf()
h = 10
s = d.shape[0]
fig = plt.figure(figsize=(config['sr'][level_index], config['sr'][level_index]))
m = int(s / h)
ax = [plt.subplot(m+1,h,i) for i in range(1, s+1)]
for i in range(1, s+1):
plt.axis('on')
ax[i-1].imshow(d[i-1,:,:], cmap='gray')
ax[i-1].set_xticklabels([])
ax[i-1].set_yticklabels([])
ax[i-1].set_aspect('equal')
fig.subplots_adjust(hspace=0, wspace=0.1)
fig.tight_layout()
plt.show(block=False)
plt.pause(10)
plt.close()
return
level_index = 4
device = torch.device('cuda:0')
generator = Generator(config['sr'][level_index], config, transition=True, save_checkpoint=False).to(device)
x = torch.randn(20, config['latent_size']).to(device)
a = generator(x) # .reshape(3, config['sr'][level_index], config['sr'][level_index])
image = array((a).tolist()).astype(int)
image = np.transpose(image, (0,2,3,1))
show_img(image)
level_index = 4
device = torch.device('cuda:0')
generator = Generator(config['sr'][level_index], config, transition=True, save_checkpoint=False).to(device)
x1 = np.random.randn(config['latent_size'])
x2 = np.random.randn(config['latent_size'])
alpha = linspace(0.,1.,20)
d = []
for i in alpha:
d.append(x1 * i + x2 * (1-i))
kk = torch.Tensor(array(d)).to(device)
a = generator(kk) # .reshape(3, config['sr'][level_index], config['sr'][level_index])
image = array((a).tolist()).astype(int)
image = np.transpose(image, (0,2,3,1))
show_img(image)
## Construct Fake Dataset
import cv2  # OpenCV is used below to write the generated images to disk
if not os.path.exists('./fake_dataset/'):
os.mkdir('./fake_dataset')
real_data_path = '/CelebA_32/'
fake_data_path = '/fake_dataset/'
batch_in = 100
fake_dataset_size = 1000
latent_vars = torch.randn(fake_dataset_size, config['latent_size']).to(device)
im_ind = 0
# Generate fake images
for i in range(fake_dataset_size//batch_in):
a = generator(latent_vars[i*batch_in:(i+1)*batch_in,:])
images = array((a).tolist()).astype(int)
images = np.transpose(images, (0,2,3,1))
for image in images:
im_name = str(im_ind)+'.jpg'
image = np.clip(image,0,255.0)
cv2.imwrite(fake_data_path+im_name, image)
im_ind += 1
## Evaluate FID score
!git clone https://github.com/bioinf-jku/TTUR.git ./FID
!python ./FID/fid.py ./CelebA_32 ./fake_dataset
| 0.34798 | 0.864139 |
## Setup File Structure
```
import netCDF4
import numpy as np
try:
ncfile.close()
except:
pass
ncfile = netCDF4.Dataset("new-dvmdostem-output.nc", mode="w", format='NETCDF4')
# Dimensions for the file.
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).
community_type = ncfile.createDimension('community_type', 10)
pft = ncfile.createDimension('pft', 10)
y = ncfile.createDimension('x', 10)
x = ncfile.createDimension('y', 10)
# Coordinate Variables
x = ncfile.createVariable('x', np.int, ('x',)) # x,y are pixel coords in 2D (spatial?) image
y = ncfile.createVariable('y', np.int, ('y',))
community_type = ncfile.createVariable('community_type', np.int, ('y','x'))
# Spatial Reference Variables?
lat = ncfile.createVariable('lat', np.float32, ('y', 'x',))
lon = ncfile.createVariable('lon', np.float32, ('y', 'x',))
# Add space/time variables...
grow_start = ncfile.createVariable('grow_start', np.int, ('time', 'y', 'x',)) # day of year
grow_end = ncfile.createVariable('grow_end', np.int, ('time', 'y', 'x'))
org_shlw_thickness = ncfile.createVariable('org_shlw_thickness', np.float32, ('time', 'y', 'x'))
# Need to add all these:
# 1 //OSHLWDZ - (23) shallow fibrous organic soil horizon thickness (m)
# 1 //ODEEPDZ - (24) deep amorphous organic soil horizon thickness (m)
# 1 //MINEADZ - (25) upper mineral soil horizon thickness (m)
# 1 //MINEBDZ - (26) middle mineral soil horizon thickness (m)
# 1 //MINECDZ - (27) lower mineral soil horizon thickness (m)
# 1 //OSHLWC - (28) SOM C in fibrous soil horizon (gC/m2)
# 1 //ODEEPC - (29) SOM C in amorphous soil horizon (gC/m2)
# 1 //MINEAC - (30) SOM C in upper mineral soil horizon (gC/m2)
# 1 //MINEBC - (31) SOM C in middle mineral soil horizon (gC/m2)
# 1 //MINECC - (32) SOM C in lower mineral soil horizon (gC/m2)
# 1 //ORGN - (33) total soil organic N (gN/m2)
# 2 //AVLN - (35) total soil mineral N (gN/m2)
# Add more complicated time/cmt type/PFT/Y/X variables...
veg_fraction = ncfile.createVariable('veg_fraction', np.float32, ('time','community_type','pft','y','x'))
vegc = ncfile.createVariable('vegc', np.float64, ('time','community_type','pft','y','x'))
# Need to add all these:
# 1 //VEGFRAC - (3) each pft's land coverage fraction (m2/m2)
# 1 //VEGAGE - (4) each pft's age (years)
# 2 //LAI - (5) each pft's LAI (m2/m2)
# 2 //VEGC - (6) each pft's total veg. biomass C (gC/m2)
# 2 //LEAFC - (7) each pft's leaf biomass C (gC/m2)
# 2 //STEMC - (8) each pft's stem biomass C (gC/m2)
# 2 //ROOTC - (9) each pft's root biomass C (gC/m2)
# 2 //VEGN - (10) each pft's total veg. biomass N (gC/m2)
# 2 //LABN - (11) each pft's labile N (gN/m2)
# 2 //LEAFN - (12) each pft's leaf structural N (gN/m2)
# 2 //STEMN - (13) each pft's stem structural N (gN/m2)
# Add some random data to the vegC variable so we can check it
# out with ncview and see if the dimensions "make sense"
vegc[:,:,:,:,:] = np.reshape(np.random.uniform(0, 1, 40000), (4,10,10,10,10))
# vegc[time, cmt, pft, y, x]
print "NetCDF File Dimensions:"
for dim in ncfile.dimensions.items():
print " -->", dim
ncfile.close()
print("")
!ncdump -h new-dvmdostem-output.nc
```
|
github_jupyter
|
import netCDF4
import numpy as np
try:
ncfile.close()
except:
pass
ncfile = netCDF4.Dataset("new-dvmdostem-output.nc", mode="w", format='NETCDF4')
# Dimensions for the file.
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).
community_type = ncfile.createDimension('community_type', 10)
pft = ncfile.createDimension('pft', 10)
y = ncfile.createDimension('x', 10)
x = ncfile.createDimension('y', 10)
# Coordinate Variables
x = ncfile.createVariable('x', np.int, ('x',)) # x,y are pixel coords in 2D (spatial?) image
y = ncfile.createVariable('y', np.int, ('y',))
community_type = ncfile.createVariable('community_type', np.int, ('y','x'))
# Spatial Reference Variables?
lat = ncfile.createVariable('lat', np.float32, ('y', 'x',))
lon = ncfile.createVariable('lon', np.float32, ('y', 'x',))
# Add space/time variables...
grow_start = ncfile.createVariable('grow_start', np.int, ('time', 'y', 'x',)) # day of year
grow_end = ncfile.createVariable('grow_end', np.int, ('time', 'y', 'x'))
org_shlw_thickness = ncfile.createVariable('org_shlw_thickness', np.float32, ('time', 'y', 'x'))
# Need to add all these:
# 1 //OSHLWDZ - (23) shallow fibrous organic soil horizon thickness (m)
# 1 //ODEEPDZ - (24) deep amorphous organic soil horizon thickness (m)
# 1 //MINEADZ - (25) upper mineral soil horizon thickness (m)
# 1 //MINEBDZ - (26) middle mineral soil horizon thickness (m)
# 1 //MINECDZ - (27) lower mineral soil horizon thickness (m)
# 1 //OSHLWC - (28) SOM C in fibrous soil horizon (gC/m2)
# 1 //ODEEPC - (29) SOM C in amorphous soil horizon (gC/m2)
# 1 //MINEAC - (30) SOM C in upper mineral soil horizon (gC/m2)
# 1 //MINEBC - (31) SOM C in middle mineral soil horizon (gC/m2)
# 1 //MINECC - (32) SOM C in lower mineral soil horizon (gC/m2)
# 1 //ORGN - (33) total soil organic N (gN/m2)
# 2 //AVLN - (35) total soil mineral N (gN/m2)
# Add more complicated time/cmt type/PFT/Y/X variables...
veg_fraction = ncfile.createVariable('veg_fraction', np.float32, ('time','community_type','pft','y','x'))
vegc = ncfile.createVariable('vegc', np.float64, ('time','community_type','pft','y','x'))
# Need to add all these:
# 1 //VEGFRAC - (3) each pft's land coverage fraction (m2/m2)
# 1 //VEGAGE - (4) each pft's age (years)
# 2 //LAI - (5) each pft's LAI (m2/m2)
# 2 //VEGC - (6) each pft's total veg. biomass C (gC/m2)
# 2 //LEAFC - (7) each pft's leaf biomass C (gC/m2)
# 2 //STEMC - (8) each pft's stem biomass C (gC/m2)
# 2 //ROOTC - (9) each pft's root biomass C (gC/m2)
# 2 //VEGN - (10) each pft's total veg. biomass N (gC/m2)
# 2 //LABN - (11) each pft's labile N (gN/m2)
# 2 //LEAFN - (12) each pft's leaf structural N (gN/m2)
# 2 //STEMN - (13) each pft's stem structural N (gN/m2)
# Add some random data to the vegC variable so we can check it
# out with ncview and see if the dimensions "make sense"
vegc[:,:,:,:,:] = np.reshape(np.random.uniform(0, 1, 40000), (4,10,10,10,10))
# vegc[time, cmt, pft, y, x]
print "NetCDF File Dimensions:"
for dim in ncfile.dimensions.items():
print " -->", dim
ncfile.close()
print("")
!ncdump -h new-dvmdostem-output.nc
| 0.378344 | 0.64791 |
# Dual CRISPR Screen Analysis
# Step 5: Count Plots
Amanda Birmingham, CCBB, UCSD ([email protected])
## Instructions
To run this notebook reproducibly, follow these steps:
1. Click **Kernel** > **Restart & Clear Output**
2. When prompted, click the red **Restart & clear all outputs** button
3. Fill in the values for your analysis for each of the variables in the [Input Parameters](#Input-Parameters) section
4. Click **Cell** > **Run All**
## Input Parameters
```
g_dataset_name = "Notebook5Test"
g_fastq_counts_run_prefix = "TestSet5"
g_fastq_counts_dir = '~/dual_crispr/test_data/test_set_5'
g_collapsed_counts_run_prefix = ""
g_collapsed_counts_dir = ""
g_combined_counts_run_prefix = ""
g_combined_counts_dir = ""
g_plots_run_prefix = ""
g_plots_dir = '~/dual_crispr/test_outputs/test_set_5'
```
## Automated Set-Up
```
import inspect
import ccbb_pyutils.analysis_run_prefixes as ns_runs
import ccbb_pyutils.files_and_paths as ns_files
import ccbb_pyutils.notebook_logging as ns_logs
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
ns_logs.set_stdout_info_logger()
g_fastq_counts_dir = ns_files.expand_path(g_fastq_counts_dir)
g_collapsed_counts_run_prefix = ns_runs.check_or_set(g_collapsed_counts_run_prefix, g_fastq_counts_run_prefix)
g_collapsed_counts_dir = ns_files.expand_path(ns_runs.check_or_set(g_collapsed_counts_dir, g_fastq_counts_dir))
g_combined_counts_run_prefix = ns_runs.check_or_set(g_combined_counts_run_prefix, g_collapsed_counts_run_prefix)
g_combined_counts_dir = ns_files.expand_path(ns_runs.check_or_set(g_combined_counts_dir, g_collapsed_counts_dir))
g_plots_run_prefix = ns_runs.check_or_set(g_plots_run_prefix, ns_runs.generate_run_prefix(g_dataset_name))
g_plots_dir = ns_files.expand_path(ns_runs.check_or_set(g_plots_dir, g_combined_counts_dir))
print(describe_var_list(['g_fastq_counts_dir', 'g_collapsed_counts_run_prefix','g_collapsed_counts_dir',
'g_combined_counts_run_prefix', 'g_combined_counts_dir',
'g_plots_run_prefix', 'g_plots_dir']))
ns_files.verify_or_make_dir(g_collapsed_counts_dir)
ns_files.verify_or_make_dir(g_combined_counts_dir)
ns_files.verify_or_make_dir(g_plots_dir)
%matplotlib inline
```
## Count File Suffixes
```
import dual_crispr.construct_counter as ns_counter
print(inspect.getsource(ns_counter.get_counts_file_suffix))
import dual_crispr.count_combination as ns_combine
print(inspect.getsource(ns_combine.get_collapsed_counts_file_suffix))
print(inspect.getsource(ns_combine.get_combined_counts_file_suffix))
```
## Count Plots Functions
```
import dual_crispr.count_plots as ns_plot
print(inspect.getsource(ns_plot))
```
## Individual fastq Plots
```
print(ns_files.check_file_presence(g_fastq_counts_dir, g_fastq_counts_run_prefix,
ns_counter.get_counts_file_suffix(),
check_failure_msg="Count plots could not detect any individual fastq count files."))
ns_plot.plot_raw_counts(g_fastq_counts_dir, g_fastq_counts_run_prefix, ns_counter.get_counts_file_suffix(),
g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix())
```
## Individual Sample Plots
```
print(ns_files.check_file_presence(g_collapsed_counts_dir, g_collapsed_counts_run_prefix,
ns_combine.get_collapsed_counts_file_suffix(),
check_failure_msg="Count plots could not detect any individual sample count files.")
)
ns_plot.plot_raw_counts(g_collapsed_counts_dir, g_collapsed_counts_run_prefix,
ns_combine.get_collapsed_counts_file_suffix(), g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix())
```
## Combined Samples Plot
```
print(ns_files.check_file_presence(g_combined_counts_dir, g_combined_counts_run_prefix,
ns_combine.get_combined_counts_file_suffix(),
check_failure_msg="Count plots could not detect a combined count file."))
ns_plot.plot_combined_raw_counts(g_combined_counts_dir, g_combined_counts_run_prefix,
ns_combine.get_combined_counts_file_suffix(), g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix())
```
|
github_jupyter
|
g_dataset_name = "Notebook5Test"
g_fastq_counts_run_prefix = "TestSet5"
g_fastq_counts_dir = '~/dual_crispr/test_data/test_set_5'
g_collapsed_counts_run_prefix = ""
g_collapsed_counts_dir = ""
g_combined_counts_run_prefix = ""
g_combined_counts_dir = ""
g_plots_run_prefix = ""
g_plots_dir = '~/dual_crispr/test_outputs/test_set_5'
import inspect
import ccbb_pyutils.analysis_run_prefixes as ns_runs
import ccbb_pyutils.files_and_paths as ns_files
import ccbb_pyutils.notebook_logging as ns_logs
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
ns_logs.set_stdout_info_logger()
g_fastq_counts_dir = ns_files.expand_path(g_fastq_counts_dir)
g_collapsed_counts_run_prefix = ns_runs.check_or_set(g_collapsed_counts_run_prefix, g_fastq_counts_run_prefix)
g_collapsed_counts_dir = ns_files.expand_path(ns_runs.check_or_set(g_collapsed_counts_dir, g_fastq_counts_dir))
g_combined_counts_run_prefix = ns_runs.check_or_set(g_combined_counts_run_prefix, g_collapsed_counts_run_prefix)
g_combined_counts_dir = ns_files.expand_path(ns_runs.check_or_set(g_combined_counts_dir, g_collapsed_counts_dir))
g_plots_run_prefix = ns_runs.check_or_set(g_plots_run_prefix, ns_runs.generate_run_prefix(g_dataset_name))
g_plots_dir = ns_files.expand_path(ns_runs.check_or_set(g_plots_dir, g_combined_counts_dir))
print(describe_var_list(['g_fastq_counts_dir', 'g_collapsed_counts_run_prefix','g_collapsed_counts_dir',
'g_combined_counts_run_prefix', 'g_combined_counts_dir',
'g_plots_run_prefix', 'g_plots_dir']))
ns_files.verify_or_make_dir(g_collapsed_counts_dir)
ns_files.verify_or_make_dir(g_combined_counts_dir)
ns_files.verify_or_make_dir(g_plots_dir)
%matplotlib inline
import dual_crispr.construct_counter as ns_counter
print(inspect.getsource(ns_counter.get_counts_file_suffix))
import dual_crispr.count_combination as ns_combine
print(inspect.getsource(ns_combine.get_collapsed_counts_file_suffix))
print(inspect.getsource(ns_combine.get_combined_counts_file_suffix))
import dual_crispr.count_plots as ns_plot
print(inspect.getsource(ns_plot))
print(ns_files.check_file_presence(g_fastq_counts_dir, g_fastq_counts_run_prefix,
ns_counter.get_counts_file_suffix(),
check_failure_msg="Count plots could not detect any individual fastq count files."))
ns_plot.plot_raw_counts(g_fastq_counts_dir, g_fastq_counts_run_prefix, ns_counter.get_counts_file_suffix(),
g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix())
print(ns_files.check_file_presence(g_collapsed_counts_dir, g_collapsed_counts_run_prefix,
ns_combine.get_collapsed_counts_file_suffix(),
check_failure_msg="Count plots could not detect any individual sample count files.")
)
ns_plot.plot_raw_counts(g_collapsed_counts_dir, g_collapsed_counts_run_prefix,
ns_combine.get_collapsed_counts_file_suffix(), g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix())
print(ns_files.check_file_presence(g_combined_counts_dir, g_combined_counts_run_prefix,
ns_combine.get_combined_counts_file_suffix(),
check_failure_msg="Count plots could not detect a combined count file."))
ns_plot.plot_combined_raw_counts(g_combined_counts_dir, g_combined_counts_run_prefix,
ns_combine.get_combined_counts_file_suffix(), g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix())
| 0.32338 | 0.824709 |
# Example: training a CAE on FashionMNIST with multiple GPUs
## Uses Pytorch's DataParallel module.
See https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html for more details.
Apparently some of the internal methods of the model may not be accessible after wrapping with DataParallel (https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html). This may be a problem when trying to create latent vectors and generate samples later. We may need to subclass nn.DataParallel in a parallel-specific autoencoder class, and would then need to make sure that any model created and saved stays portable to a single-GPU or CPU setup.
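One possible workaround, sketched below, is either to reach the wrapped model through the `.module` attribute (e.g. `mgpu_model.module.encoder(x)`, assuming the autoencoder exposes an `encoder` submodule), or to subclass `DataParallel` so that unknown attributes fall through to the wrapped model. This is a commonly used pattern rather than anything specific to the Autoencoders package used here.
```
import torch.nn as nn

class DataParallelPassthrough(nn.DataParallel):
    """DataParallel wrapper that falls back to the wrapped model's own
    attributes and methods when they are not defined on DataParallel itself."""
    def __getattr__(self, name):
        try:
            return super().__getattr__(name)
        except AttributeError:
            # e.g. an `encoder` submodule or a custom encode() method
            return getattr(self.module, name)
```
Saving `mgpu_model.module.state_dict()` (rather than the wrapper's state dict) also keeps the checkpoint loadable on a single-GPU or CPU setup.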
Note: Using DistributedDataParallel does not work on Windows or OSX since Pytorch doesn't support distributed training on these platforms. DistributedDataParallel uses multiprocessing and potentially could be faster than DataParallel. See https://pytorch.org/tutorials/intermediate/ddp_tutorial.html. Could try using a Docker container to do this.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from time import time
import os
import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
from Autoencoders.encoders import Encoder2DConv
from Autoencoders.decoders import Decoder2DConv
from Autoencoders.autoencoders import Autoencoder
from Autoencoders.losses import vae_loss
from sklearn.manifold import TSNE
torch.distributed.is_available()
```
## Load FashionMNIST data and create a dataloader
```
batch_size = 128
traindata = datasets.FashionMNIST('./sampledata/FashionMNIST', download=True, train=True, transform=transforms.ToTensor())
trainloader = DataLoader(traindata, batch_size=batch_size, num_workers=8)
testdata = datasets.FashionMNIST('./sampledata/FashionMNIST', download=True, train=False, transform=transforms.ToTensor())
testloader = DataLoader(testdata, batch_size=batch_size, num_workers=8)
for data, _ in trainloader:
print(data.size())
break
```
## Parameters
```
inputdims = (28,28)
latentdims = 32
nlayers = 2
use_cuda = True
epochs = 20
```
## Create the single-GPU Convolutional Autoencoder (CAE)
```
cae_encoder = Encoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
cae_decoder = Decoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
cae = Autoencoder(cae_encoder, cae_decoder)
if use_cuda == True:
cae = cae.cuda()
cae_loss = torch.nn.functional.mse_loss
cae_optimizer = torch.optim.Adam(cae.parameters())
```
## Train the single-GPU CAE
```
def train_cae(epoch):
cae.train()
train_loss = 0
for batch_idx, (x, _) in enumerate(trainloader):
if use_cuda:
x = x.cuda()
cae_optimizer.zero_grad()
recon_x = cae(x)
loss = cae_loss(recon_x, x, reduction='sum')
loss.backward()
train_loss += loss.item()
cae_optimizer.step()
if batch_idx % 20 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(x), len(trainloader.dataset),
100. * batch_idx / len(trainloader),
loss.item() / len(x)),
end="\r", flush=True)
print('\n====> Epoch: {} Average loss: {:.4f}'.format(
epoch, train_loss / len(trainloader.dataset)))
return train_loss / len(trainloader.dataset)
cae_epoch_loss = []
t0 = time()
for epoch in range(epochs):
loss = train_cae(epoch)
cae_epoch_loss.append(loss)
print('Total training time: {:.2f} seconds'.format(time()-t0))
plt.plot(cae_epoch_loss)
```
## Create the multi-GPU CAE
```
device = torch.device("cuda:0")
mgpu_encoder = Encoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
mgpu_decoder = Decoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
model = Autoencoder(mgpu_encoder, mgpu_decoder)
# output_device defaults to device_ids[0]
mgpu_model = torch.nn.DataParallel(model)
mgpu_model.to(device)
mgpu_loss = torch.nn.functional.mse_loss
mgpu_optimizer = torch.optim.Adam(mgpu_model.parameters())
def train_mgpu_model(epoch):
mgpu_model.train()
train_loss = 0
for batch_idx, (x, _) in enumerate(trainloader):
mgpu_optimizer.zero_grad()
x = x.to(device)
recon_x = mgpu_model(x)
loss = mgpu_loss(recon_x, x, reduction='sum')
loss.backward()
train_loss += loss.item()
mgpu_optimizer.step()
if batch_idx % 20 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(x), len(trainloader.dataset),
100. * batch_idx / len(trainloader),
loss.item() / len(x)),
end="\r", flush=True)
print(x.size())
print('\n====> Epoch: {} Average loss: {:.4f}'.format(
epoch, train_loss / len(trainloader.dataset)))
return train_loss / len(trainloader.dataset)
```
## Train the multi-gpu setup
```
mgpu_epoch_loss = []
t0 = time()
for epoch in range(epochs):
loss = train_mgpu_model(epoch)
mgpu_epoch_loss.append(loss)
print('Total training time: {:.2f} seconds'.format(time()-t0))
plt.plot(mgpu_epoch_loss)
```
## Uh-oh, DataParallel takes longer (193s vs 140s). Gotta figure that out. For now, I'll use single GPU training.
|
github_jupyter
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from time import time
import os
import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
from Autoencoders.encoders import Encoder2DConv
from Autoencoders.decoders import Decoder2DConv
from Autoencoders.autoencoders import Autoencoder
from Autoencoders.losses import vae_loss
from sklearn.manifold import TSNE
torch.distributed.is_available()
batch_size = 128
traindata = datasets.FashionMNIST('./sampledata/FashionMNIST', download=True, train=True, transform=transforms.ToTensor())
trainloader = DataLoader(traindata, batch_size=batch_size, num_workers=8)
testdata = datasets.FashionMNIST('./sampledata/FashionMNIST', download=True, train=False, transform=transforms.ToTensor())
testloader = DataLoader(testdata, batch_size=batch_size, num_workers=8)
for data, _ in trainloader:
print(data.size())
break
inputdims = (28,28)
latentdims = 32
nlayers = 2
use_cuda = True
epochs = 20
cae_encoder = Encoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
cae_decoder = Decoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
cae = Autoencoder(cae_encoder, cae_decoder)
if use_cuda == True:
cae = cae.cuda()
cae_loss = torch.nn.functional.mse_loss
cae_optimizer = torch.optim.Adam(cae.parameters())
def train_cae(epoch):
cae.train()
train_loss = 0
for batch_idx, (x, _) in enumerate(trainloader):
if use_cuda:
x = x.cuda()
cae_optimizer.zero_grad()
recon_x = cae(x)
loss = cae_loss(recon_x, x, reduction='sum')
loss.backward()
train_loss += loss.item()
cae_optimizer.step()
if batch_idx % 20 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(x), len(trainloader.dataset),
100. * batch_idx / len(trainloader),
loss.item() / len(x)),
end="\r", flush=True)
print('\n====> Epoch: {} Average loss: {:.4f}'.format(
epoch, train_loss / len(trainloader.dataset)))
return train_loss / len(trainloader.dataset)
cae_epoch_loss = []
t0 = time()
for epoch in range(epochs):
loss = train_cae(epoch)
cae_epoch_loss.append(loss)
print('Total training time: {:.2f} seconds'.format(time()-t0))
plt.plot(cae_epoch_loss)
device = torch.device("cuda:0")
mgpu_encoder = Encoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
mgpu_decoder = Decoder2DConv(inputdims, latentdims, nlayers=nlayers, use_batchnorm=True)
model = Autoencoder(mgpu_encoder, mgpu_decoder)
# output_device defaults to device_ids[0]
mgpu_model = torch.nn.DataParallel(model)
mgpu_model.to(device)
mgpu_loss = torch.nn.functional.mse_loss
mgpu_optimizer = torch.optim.Adam(mgpu_model.parameters())
def train_mgpu_model(epoch):
mgpu_model.train()
train_loss = 0
for batch_idx, (x, _) in enumerate(trainloader):
mgpu_optimizer.zero_grad()
x = x.to(device)
recon_x = mgpu_model(x)
loss = mgpu_loss(recon_x, x, reduction='sum')
loss.backward()
train_loss += loss.item()
mgpu_optimizer.step()
if batch_idx % 20 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(x), len(trainloader.dataset),
100. * batch_idx / len(trainloader),
loss.item() / len(x)),
end="\r", flush=True)
print(x.size())
print('\n====> Epoch: {} Average loss: {:.4f}'.format(
epoch, train_loss / len(trainloader.dataset)))
return train_loss / len(trainloader.dataset)
mgpu_epoch_loss = []
t0 = time()
for epoch in range(epochs):
loss = train_mgpu_model(epoch)
mgpu_epoch_loss.append(loss)
print('Total training time: {:.2f} seconds'.format(time()-t0))
plt.plot(mgpu_epoch_loss)
| 0.791781 | 0.942082 |
# cyBERT: a flexible log parser based on the BERT language model
## Author
- Rachel Allen, PhD (NVIDIA) [[email protected]]
## Development Notes
* Developed using: RAPIDS v0.10.0
* Last tested using: RAPIDS v0.10.0 on Nov 5, 2019
## Table of Contents
* Introduction
* Generating Labeled Logs
* Tokenization
* Data Loading
* Fine-tuning pretrained BERT
* Model Evaluation
## Introduction
One of the most arduous tasks of any security operation (and equally time consuming for a data scientist) is ETL and parsing. This notebook illustrates how to train a BERT language model using previously parsed Windows event logs as a labeled data set. We will fine-tune a pretrained BERT model with a classification layer for Named Entity Recognition.
```
import torch
from pytorch_transformers import BertTokenizer, BertModel, BertForTokenClassification, AdamW
from torch.optim import Adam
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence
from seqeval.metrics import classification_report,accuracy_score,f1_score
from sklearn.model_selection import train_test_split
from tqdm import tqdm,trange
import pandas as pd
import numpy as np
import cudf  # RAPIDS GPU DataFrame library; needed for cudf.read_csv below
gdf = cudf.read_csv('/datasets/cyber/fake_win_events.csv')
gdf.eventcode.value_counts()
mini4624 = gdf[gdf["eventcode"] == 4624].dropna(axis='columns')
mini4624.head()
len(mini4624.dropna(axis='rows'))
len(mini4624.dropna(axis='rows'))
mini4624.columns
small_pdf = mini4624
small_pdf['generated_raw'] = small_pdf['insert_time']+ " LogName= " + small_pdf['logname'].astype(str) \
+ " SourceName= " + small_pdf['sourcename'].astype(str) \
+ " EventCode= " + small_pdf['eventcode'].astype(str) \
+ " EventType= " + small_pdf['eventtype'].astype(str) \
+ " Type= " + small_pdf['type'] \
+ " ComputerName= " + small_pdf['computername'].astype(str) \
+ " TaskCategory= " + small_pdf['taskcategory'].astype(str) \
+ " OpCode= " + small_pdf['opcode'].astype(str) \
+ " RecordNumber= " + small_pdf['recordnumber'].astype(str) \
+ " Keywords= " + small_pdf['keywords'].astype(str) \
+ " Message= " + small_pdf['message'].astype(str) \
+ " Subject: Account Name: " + small_pdf['subject_account_name'].astype(str) \
+ " Account Domain: " + small_pdf['subject_account_domain'].astype(str) \
+ " New Logon: Account Name: " + small_pdf['new_logon_account_name'].astype(str) \
+ " Account Domain: " + small_pdf['new_logon_account_domain'].astype(str) \
                             + " " \
+ " Target Account: Security ID: " + small_pdf['target_account_security_id'].astype(str) \
+ " Account Name: " + small_pdf['target_account_security_id'].str.split('\\').str[2] \
+ " Account Domain: " + small_pdf['target_account_account_domain']
gdf.head()
```
## Data prep
Initially we separate the logs by whitespace and create a label for every element from the split logs.
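As a minimal illustration of that split (using a shortened, made-up log line rather than the real samples below):
```
raw = "09/04/2019 05:58:30 PM LogName= Security EventCode= 4724"
tokens = raw.split()  # split on whitespace
labels = ["time", "time", "time", "key", "logname", "key", "eventcode"]
assert len(tokens) == len(labels)  # one label per whitespace-separated token
```
The full sample logs and their token-level labels are shown in the next cell.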
```
##sample data
raw_logs = ["09/04/2019 05:58:30 PM LogName= Security SourceName= Microsoft Windows security auditing. EventCode= 4724 EventType= 0 Type= Information ComputerName= ZXJSDNFMAIL.acme.com \
TaskCategory= User Account Management OpCode= Info RecordNumber= 16822951635 Keywords= Audit Success Message= An attempt was made to reset an account's password. Subject: \
Security ID: NT AUTHORITY\\\\SYSTEM Account Name: DHDHMAIL107$ Account Domain: ACME.COM Logon ID: 0x3E7 Target Account: Security ID: DHDHMAIL107\\\\CLIUSER Account Name: \
CLIUSR Account Domain: DHDHMAIL107",
"09/04/2019 06:03:35 PM LogName= Security SourceName= Microsoft Windows security auditing. EventCode= 4725 EventType= 0 Type= Information ComputerName= SKFJMAIL.acme.com \
TaskCategory= User Account Management OpCode= Info RecordNumber= 12434814940 Keywords= Audit Success Message= A user account was disabled. Subject: Security ID: NT \
AUTHORITY\\\\SYSTEM Account Name: VGVMAIL104$ Account Domain: ACME.COM Logon ID: 0x3E7 Target Account: Security ID: VGVMAIL104\\\\CLIUSR Account Name: CLIUSR Account \
Domain: VGVMAIL104"]
logs = [['09/04/2019','05:58:30','PM','LogName=','Security','SourceName=','Microsoft','Windows','security','auditing.','EventCode=','4724','EventType=','0','Type=','Information',
'ComputerName=','ZXJSDNFMAIL.acme.com','TaskCategory=','User','Account','Management','OpCode=','Info','RecordNumber=','16822951635','Keywords=','Audit','Success',
'Message=','An','attempt','was','made','to','reset','an',"account's",'password.','Subject:','Security','ID:','NT','AUTHORITY\\\\SYSTEM','Account','Name:','DHDHMAIL107$',
'Account','Domain:','ACME.COM','Logon','ID:','0x3E7','Target','Account:','Security','ID:','DHDHMAIL107\\\\CLIUSER','Account','Name:','CLIUSR','Account','Domain:','DHDHMAIL107'],
['09/04/2019','06:03:35','PM','LogName=','Security','SourceName=','Microsoft','Windows','security','auditing.','EventCode=','4725','EventType=','0','Type=','Information',
'ComputerName=','SKFJMAIL.acme.com','TaskCategory=','User','Account','Management','OpCode=','Info','RecordNumber=','12434814940','Keywords=','Audit','Success','Message=',
'A','user','account','was', 'disabled.','Subject:','Security','ID:','NT','AUTHORITY\\\\SYSTEM','Account','Name:','VGVMAIL104$','Account','Domain:','ACME.COM','Logon',
'ID:','0x3E7','Target','Account:','Security','ID:','VGVMAIL104\\\\CLIUSR','Account','Name:','CLIUSR','Account','Domain:','VGVMAIL104']
]
labels = [['time', 'time', 'time', 'key', 'logname', 'key', 'sourcename', 'sourcename', 'sourcename', 'sourcename', 'key', 'eventcode', 'key', 'eventtype', 'key', 'type', 'key',
'computername', 'key', 'taskcategory', 'taskcategory', 'taskcategory', 'key', 'opcode', 'key', 'recordnumber', 'key', 'keywords', 'keywords', 'key', 'message', 'message',
'message', 'message', 'message', 'message', 'message', 'message', 'message', 'key', 'key', 'key', 'subject_security_id', 'subject_security_id', 'key', 'key', 'subject_account_name',
'key', 'key', 'subject_account_domain', 'key', 'key', 'subject_logon_id', 'key', 'key', 'key', 'key', 'target_account_security_id', 'key', 'key', 'subject_logon_id', 'key', 'key',
'target_account_account_domain'],
['time', 'time', 'time', 'key', 'logname', 'key', 'sourcename', 'sourcename', 'sourcename', 'sourcename', 'key', 'eventcode', 'key', 'eventtype', 'key', 'type', 'key', 'computername',
'key', 'taskcategory', 'taskcategory', 'taskcategory', 'key', 'opcode', 'key', 'recordnumber', 'key', 'keywords', 'keywords', 'key', 'message', 'message', 'message', 'message',
'message', 'key', 'key', 'key', 'subject_security_id', 'subject_security_id', 'key', 'key', 'subject_account_name', 'key', 'key', 'subject_account_domain', 'key', 'key', 'subject_logon_id',
'key', 'key', 'key', 'key', 'target_account_security_id', 'key', 'key', 'subject_logon_id', 'key', 'key', 'target_account_account_domain']]
```
We create a list of the unique labels (tags) in our dataset, add `X` for wordpiece tokens we will not have tags for, and `[PAD]` for logs shorter than the model's input length.
```
# set of tags
tag_values = list(set(x for l in labels for x in l))
# add 'X' tag for wordpiece
tag_values.append('X')
tag_values.append('[PAD]')
# Dicts mapping tag name to integer id, and id back to tag name (the reverse map is used at evaluation time)
tag2idx = {t: i for i, t in enumerate(tag_values)}
tag2name = {i: t for i, t in enumerate(tag_values)}
```
## Wordpiece tokenization
We are using the `bert-base-uncased` tokenizer from the pretrained BERT library from [HuggingFace](https://github.com/huggingface). This tokenizer splits our whitespace-separated words further into in-vocabulary sub-word pieces. The model eventually uses the label from the first piece of a word as its tag, so we do not care about the model's ability to predict labels for the remaining sub-word pieces. For training, the tag used for these pieces is `X`. To learn more, see the [BERT paper](https://arxiv.org/abs/1810.04805).
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenized_texts = []
new_labels = []
for sentence, tags in zip(logs,labels):
new_tags = []
new_text = []
for word, tag in zip(sentence,tags):
sub_words = tokenizer.wordpiece_tokenizer.tokenize(word.lower())
for count, sub_word in enumerate(sub_words):
if count > 0:
tag = 'X'
new_tags.append(tag)
new_text.append(sub_word)
tokenized_texts.append(new_text)
new_labels.append(new_tags)
```
## Model inputs
For training, our model needs (1) wordpiece tokens as integers, padded to the fixed input length of the model, (2) the corresponding tags as integers, and (3) a binary attention mask that ignores the padding. Here we have used 256 as the input length for each log or log piece.
```
# convert string tokens into ints
input_ids = [tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts]
# pad with input_ids with zeros and labels with [PAD]
def pad(l, content, width):
l.extend([content] * (width - len(l)))
return l
input_ids = [pad(x, 0, 256) for x in input_ids]
new_labels = [pad(x, '[PAD]', 256) for x in new_labels]
# attention mask for model to ignore padding
attention_masks = [[int(i>0) for i in ii] for ii in input_ids]
# convert labels/tags to ints
tags = [[tag2idx.get(l) for l in lab] for lab in new_labels]
```
## Training and testing datasets
We split the data into training and validation sets.
```
tr_inputs, val_inputs, tr_tags, val_tags,tr_masks, val_masks = train_test_split(input_ids, tags, attention_masks, random_state=1234, test_size=0.1)
```
Move the datasets to the GPU
```
device = torch.device("cuda")
tr_inputs = torch.tensor(tr_inputs)
val_inputs = torch.tensor(val_inputs)
tr_tags = torch.tensor(tr_tags)
val_tags = torch.tensor(val_tags)
tr_masks = torch.tensor(tr_masks)
val_masks = torch.tensor(val_masks)
```
We create dataloaders to make batches of data ready to feed into the model. Here we use a batch size of 32.
```
train_data = TensorDataset(tr_inputs, tr_masks, tr_tags)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=32)
valid_data = TensorDataset(val_inputs, val_masks, val_tags)
valid_sampler = SequentialSampler(valid_data)
valid_dataloader = DataLoader(valid_data, sampler=valid_sampler, batch_size=32)
```
## Fine-tuning pretrained BERT
```
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(tag2idx))
#model to gpu
model.cuda();
FULL_FINETUNING = True
if FULL_FINETUNING:
#fine tune all layer parameters
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
else:
# only fine tune classifier parameters
param_optimizer = list(model.classifier.named_parameters())
optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=2).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
epochs = 1
max_grad_norm = 1.0
for _ in trange(epochs, desc="Epoch"):
# TRAIN loop
model.train()
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(train_dataloader):
# add batch to gpu
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
# forward pass
loss, scores = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
# backward pass
loss.backward()
# track train loss
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
# gradient clipping
torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
# update parameters
optimizer.step()
model.zero_grad()
# print train loss per epoch
print("Train loss: {}".format(tr_loss/nb_tr_steps))
# VALIDATION on validation set
model.eval()
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
predictions , true_labels = [], []
for batch in valid_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
tmp_eval_loss, logits = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
true_labels.append(label_ids)
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_loss += tmp_eval_loss.mean().item()
eval_accuracy += tmp_eval_accuracy
nb_eval_examples += b_input_ids.size(0)
nb_eval_steps += 1
eval_loss = eval_loss/nb_eval_steps
print("Validation loss: {}".format(eval_loss))
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
pred_tags = [tag_values[p_i] for p in predictions for p_i in p]
valid_tags = [tag_values[l_ii] for l in true_labels for l_i in l for l_ii in l_i]
print("F1-Score: {}".format(f1_score(pred_tags, valid_tags)))
```
## Model evaluation
```
model.eval();
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
y_true = []
y_pred = []
for step, batch in enumerate(valid_dataloader):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, label_ids = batch
with torch.no_grad():
outputs = model(input_ids, token_type_ids=None,
attention_mask=input_mask,)
# For eval mode, the first result of outputs is logits
logits = outputs[0]
# Get NER predict result
logits = torch.argmax(F.log_softmax(logits,dim=2),dim=2)
logits = logits.detach().cpu().numpy()
# Get NER true result
label_ids = label_ids.to('cpu').numpy()
    # Only evaluate against the ground truth; mask=0 (padding) is not compared
input_mask = input_mask.to('cpu').numpy()
# Compare the valuable predict result
for i,mask in enumerate(input_mask):
# ground truth
temp_1 = []
# Prediction
temp_2 = []
for j, m in enumerate(mask):
# Mask=0 is PAD, do not compare
if m: # Exclude the X label
if tag2name[label_ids[i][j]] != "X" and tag2name[label_ids[i][j]] != "[CLS]" and tag2name[label_ids[i][j]] != "[SEP]" :
temp_1.append(tag2name[label_ids[i][j]])
temp_2.append(tag2name[logits[i][j]])
else:
break
y_true.append(temp_1)
y_pred.append(temp_2)
print("f1 score: %f"%(f1_score(y_true, y_pred)))
print("Accuracy score: %f"%(accuracy_score(y_true, y_pred)))
# Get acc , recall, F1 result report
print(classification_report(y_true, y_pred,digits=4))
```
|
github_jupyter
|
import torch
from pytorch_transformers import BertTokenizer, BertModel, BertForTokenClassification, AdamW
from torch.optim import Adam
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence
from seqeval.metrics import classification_report,accuracy_score,f1_score
from sklearn.model_selection import train_test_split
from tqdm import tqdm,trange
import pandas as pd
import numpy as np
import cudf  # RAPIDS GPU DataFrame library; needed for cudf.read_csv below
gdf = cudf.read_csv('/datasets/cyber/fake_win_events.csv')
gdf.eventcode.value_counts()
mini4624 = gdf[gdf["eventcode"] == 4624].dropna(axis='columns')
mini4624.head()
len(mini4624.dropna(axis='rows'))
len(mini4624.dropna(axis='rows'))
mini4624.columns
small_pdf = mini4624
small_pdf['generated_raw'] = small_pdf['insert_time']+ " LogName= " + small_pdf['logname'].astype(str) \
+ " SourceName= " + small_pdf['sourcename'].astype(str) \
+ " EventCode= " + small_pdf['eventcode'].astype(str) \
+ " EventType= " + small_pdf['eventtype'].astype(str) \
+ " Type= " + small_pdf['type'] \
+ " ComputerName= " + small_pdf['computername'].astype(str) \
+ " TaskCategory= " + small_pdf['taskcategory'].astype(str) \
+ " OpCode= " + small_pdf['opcode'].astype(str) \
+ " RecordNumber= " + small_pdf['recordnumber'].astype(str) \
+ " Keywords= " + small_pdf['keywords'].astype(str) \
+ " Message= " + small_pdf['message'].astype(str) \
+ " Subject: Account Name: " + small_pdf['subject_account_name'].astype(str) \
+ " Account Domain: " + small_pdf['subject_account_domain'].astype(str) \
+ " New Logon: Account Name: " + small_pdf['new_logon_account_name'].astype(str) \
+ " Account Domain: " + small_pdf['new_logon_account_domain'].astype(str) \
                             + " " \
+ " Target Account: Security ID: " + small_pdf['target_account_security_id'].astype(str) \
+ " Account Name: " + small_pdf['target_account_security_id'].str.split('\\').str[2] \
+ " Account Domain: " + small_pdf['target_account_account_domain']
gdf.head()
##sample data
raw_logs = ["09/04/2019 05:58:30 PM LogName= Security SourceName= Microsoft Windows security auditing. EventCode= 4724 EventType= 0 Type= Information ComputerName= ZXJSDNFMAIL.acme.com \
TaskCategory= User Account Management OpCode= Info RecordNumber= 16822951635 Keywords= Audit Success Message= An attempt was made to reset an account's password. Subject: \
Security ID: NT AUTHORITY\\\\SYSTEM Account Name: DHDHMAIL107$ Account Domain: ACME.COM Logon ID: 0x3E7 Target Account: Security ID: DHDHMAIL107\\\\CLIUSER Account Name: \
CLIUSR Account Domain: DHDHMAIL107",
"09/04/2019 06:03:35 PM LogName= Security SourceName= Microsoft Windows security auditing. EventCode= 4725 EventType= 0 Type= Information ComputerName= SKFJMAIL.acme.com \
TaskCategory= User Account Management OpCode= Info RecordNumber= 12434814940 Keywords= Audit Success Message= A user account was disabled. Subject: Security ID: NT \
AUTHORITY\\\\SYSTEM Account Name: VGVMAIL104$ Account Domain: ACME.COM Logon ID: 0x3E7 Target Account: Security ID: VGVMAIL104\\\\CLIUSR Account Name: CLIUSR Account \
Domain: VGVMAIL104"]
logs = [['09/04/2019','05:58:30','PM','LogName=','Security','SourceName=','Microsoft','Windows','security','auditing.','EventCode=','4724','EventType=','0','Type=','Information',
'ComputerName=','ZXJSDNFMAIL.acme.com','TaskCategory=','User','Account','Management','OpCode=','Info','RecordNumber=','16822951635','Keywords=','Audit','Success',
'Message=','An','attempt','was','made','to','reset','an',"account's",'password.','Subject:','Security','ID:','NT','AUTHORITY\\\\SYSTEM','Account','Name:','DHDHMAIL107$',
'Account','Domain:','ACME.COM','Logon','ID:','0x3E7','Target','Account:','Security','ID:','DHDHMAIL107\\\\CLIUSER','Account','Name:','CLIUSR','Account','Domain:','DHDHMAIL107'],
['09/04/2019','06:03:35','PM','LogName=','Security','SourceName=','Microsoft','Windows','security','auditing.','EventCode=','4725','EventType=','0','Type=','Information',
'ComputerName=','SKFJMAIL.acme.com','TaskCategory=','User','Account','Management','OpCode=','Info','RecordNumber=','12434814940','Keywords=','Audit','Success','Message=',
'A','user','account','was', 'disabled.','Subject:','Security','ID:','NT','AUTHORITY\\\\SYSTEM','Account','Name:','VGVMAIL104$','Account','Domain:','ACME.COM','Logon',
'ID:','0x3E7','Target','Account:','Security','ID:','VGVMAIL104\\\\CLIUSR','Account','Name:','CLIUSR','Account','Domain:','VGVMAIL104']
]
labels = [['time', 'time', 'time', 'key', 'logname', 'key', 'sourcename', 'sourcename', 'sourcename', 'sourcename', 'key', 'eventcode', 'key', 'eventtype', 'key', 'type', 'key',
'computername', 'key', 'taskcategory', 'taskcategory', 'taskcategory', 'key', 'opcode', 'key', 'recordnumber', 'key', 'keywords', 'keywords', 'key', 'message', 'message',
'message', 'message', 'message', 'message', 'message', 'message', 'message', 'key', 'key', 'key', 'subject_security_id', 'subject_security_id', 'key', 'key', 'subject_account_name',
'key', 'key', 'subject_account_domain', 'key', 'key', 'subject_logon_id', 'key', 'key', 'key', 'key', 'target_account_security_id', 'key', 'key', 'subject_logon_id', 'key', 'key',
'target_account_account_domain'],
['time', 'time', 'time', 'key', 'logname', 'key', 'sourcename', 'sourcename', 'sourcename', 'sourcename', 'key', 'eventcode', 'key', 'eventtype', 'key', 'type', 'key', 'computername',
'key', 'taskcategory', 'taskcategory', 'taskcategory', 'key', 'opcode', 'key', 'recordnumber', 'key', 'keywords', 'keywords', 'key', 'message', 'message', 'message', 'message',
'message', 'key', 'key', 'key', 'subject_security_id', 'subject_security_id', 'key', 'key', 'subject_account_name', 'key', 'key', 'subject_account_domain', 'key', 'key', 'subject_logon_id',
'key', 'key', 'key', 'key', 'target_account_security_id', 'key', 'key', 'subject_logon_id', 'key', 'key', 'target_account_account_domain']]
# set of tags
tag_values = list(set(x for l in labels for x in l))
# add 'X' tag for wordpiece
tag_values.append('X')
tag_values.append('[PAD]')
# Set a dict for mapping id to tag name
tag2idx = {t: i for i, t in enumerate(tag_values)}
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenized_texts = []
new_labels = []
for sentence, tags in zip(logs,labels):
new_tags = []
new_text = []
for word, tag in zip(sentence,tags):
sub_words = tokenizer.wordpiece_tokenizer.tokenize(word.lower())
for count, sub_word in enumerate(sub_words):
if count > 0:
tag = 'X'
new_tags.append(tag)
new_text.append(sub_word)
tokenized_texts.append(new_text)
new_labels.append(new_tags)
# convert string tokens into ints
input_ids = [tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts]
# pad with input_ids with zeros and labels with [PAD]
def pad(l, content, width):
l.extend([content] * (width - len(l)))
return l
input_ids = [pad(x, 0, 256) for x in input_ids]
new_labels = [pad(x, '[PAD]', 256) for x in new_labels]
# attention mask for model to ignore padding
attention_masks = [[int(i>0) for i in ii] for ii in input_ids]
# convert labels/tags to ints
tags = [[tag2idx.get(l) for l in lab] for lab in new_labels]
tr_inputs, val_inputs, tr_tags, val_tags,tr_masks, val_masks = train_test_split(input_ids, tags, attention_masks, random_state=1234, test_size=0.1)
device = torch.device("cuda")
tr_inputs = torch.tensor(tr_inputs)
val_inputs = torch.tensor(val_inputs)
tr_tags = torch.tensor(tr_tags)
val_tags = torch.tensor(val_tags)
tr_masks = torch.tensor(tr_masks)
val_masks = torch.tensor(val_masks)
train_data = TensorDataset(tr_inputs, tr_masks, tr_tags)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=32)
valid_data = TensorDataset(val_inputs, val_masks, val_tags)
valid_sampler = SequentialSampler(valid_data)
valid_dataloader = DataLoader(valid_data, sampler=valid_sampler, batch_size=32)
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(tag2idx))
#model to gpu
model.cuda();
FULL_FINETUNING = True
if FULL_FINETUNING:
#fine tune all layer parameters
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
else:
# only fine tune classifier parameters
param_optimizer = list(model.classifier.named_parameters())
optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=2).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
epochs = 1
max_grad_norm = 1.0
for _ in trange(epochs, desc="Epoch"):
# TRAIN loop
model.train()
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(train_dataloader):
# add batch to gpu
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
# forward pass
loss, scores = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
# backward pass
loss.backward()
# track train loss
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
# gradient clipping
torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
# update parameters
optimizer.step()
model.zero_grad()
# print train loss per epoch
print("Train loss: {}".format(tr_loss/nb_tr_steps))
# VALIDATION on validation set
model.eval()
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
predictions , true_labels = [], []
for batch in valid_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
tmp_eval_loss, logits = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
true_labels.append(label_ids)
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_loss += tmp_eval_loss.mean().item()
eval_accuracy += tmp_eval_accuracy
nb_eval_examples += b_input_ids.size(0)
nb_eval_steps += 1
eval_loss = eval_loss/nb_eval_steps
print("Validation loss: {}".format(eval_loss))
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
pred_tags = [tag_values[p_i] for p in predictions for p_i in p]
valid_tags = [tag_values[l_ii] for l in true_labels for l_i in l for l_ii in l_i]
print("F1-Score: {}".format(f1_score(pred_tags, valid_tags)))
model.eval();
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
y_true = []
y_pred = []
for step, batch in enumerate(valid_dataloader):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, label_ids = batch
with torch.no_grad():
outputs = model(input_ids, token_type_ids=None,
attention_mask=input_mask,)
# For eval mode, the first result of outputs is logits
logits = outputs[0]
# Get NER predict result
logits = torch.argmax(F.log_softmax(logits,dim=2),dim=2)
logits = logits.detach().cpu().numpy()
# Get NER true result
label_ids = label_ids.to('cpu').numpy()
# Only predict the groud truth, mask=0, will not calculate
input_mask = input_mask.to('cpu').numpy()
# Compare the valuable predict result
for i,mask in enumerate(input_mask):
# ground truth
temp_1 = []
# Prediction
temp_2 = []
for j, m in enumerate(mask):
# Mask=0 is PAD, do not compare
if m: # Exclude the X label
if tag2name[label_ids[i][j]] != "X" and tag2name[label_ids[i][j]] != "[CLS]" and tag2name[label_ids[i][j]] != "[SEP]" :
temp_1.append(tag2name[label_ids[i][j]])
temp_2.append(tag2name[logits[i][j]])
else:
break
y_true.append(temp_1)
y_pred.append(temp_2)
print("f1 socre: %f"%(f1_score(y_true, y_pred)))
print("Accuracy score: %f"%(accuracy_score(y_true, y_pred)))
# Get acc , recall, F1 result report
print(classification_report(y_true, y_pred,digits=4))
```
%matplotlib inline
```
# Comparing initial sampling methods
Holger Nahrstaedt 2020; Sigurd Carlsen, October 2019
.. currentmodule:: skopt
When doing Bayesian optimization we often want to reserve some of the
early part of the optimization for pure exploration. By default the
optimizer suggests purely random samples for the first n_initial_points
(10 by default). The downside to this is that there is no guarantee that
these samples are spread out evenly across all the dimensions.
Sampling methods such as Latin hypercube, Sobol, Halton and Hammersly
take advantage of the fact that we know beforehand how many random
points we want to sample. Then these points can be "spread out" in
such a way that each dimension is explored.
See also the example on an integer space
`sphx_glr_auto_examples_initial_sampling_method_integer.py`
```
print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
from skopt.space import Space
from skopt.sampler import Sobol
from skopt.sampler import Lhs
from skopt.sampler import Halton
from skopt.sampler import Hammersly
from skopt.sampler import Grid
from scipy.spatial.distance import pdist
def plot_searchspace(x, title):
fig, ax = plt.subplots()
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bo', label='samples')
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bo', markersize=80, alpha=0.5)
# ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([-5, 10])
ax.set_ylabel("X2")
ax.set_ylim([0, 15])
plt.title(title)
n_samples = 10
space = Space([(-5., 10.), (0., 15.)])
# space.set_transformer("normalize")
```
## Random sampling
```
x = space.rvs(n_samples)
plot_searchspace(x, "Random samples")
pdist_data = []
x_label = []
pdist_data.append(pdist(x).flatten())
x_label.append("random")
```
## Sobol
```
sobol = Sobol()
x = sobol.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Sobol')
pdist_data.append(pdist(x).flatten())
x_label.append("sobol")
```
## Classic Latin hypercube sampling
```
lhs = Lhs(lhs_type="classic", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'classic LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("lhs")
```
## Centered Latin hypercube sampling
```
lhs = Lhs(lhs_type="centered", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'centered LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("center")
```
## Maximin optimized hypercube sampling
```
lhs = Lhs(criterion="maximin", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'maximin LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("maximin")
```
## Correlation optimized hypercube sampling
```
lhs = Lhs(criterion="correlation", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'correlation LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("corr")
```
## Ratio optimized hypercube sampling
```
lhs = Lhs(criterion="ratio", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'ratio LHS')
pdist_data.append(pdist(x).flatten())
x_label.append("ratio")
```
## Halton sampling
```
halton = Halton()
x = halton.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Halton')
pdist_data.append(pdist(x).flatten())
x_label.append("halton")
```
## Hammersly sampling
```
hammersly = Hammersly()
x = hammersly.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Hammersly')
pdist_data.append(pdist(x).flatten())
x_label.append("hammersly")
```
## Grid sampling
```
grid = Grid(border="include", use_full_layout=False)
x = grid.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Grid')
pdist_data.append(pdist(x).flatten())
x_label.append("grid")
```
## Pdist boxplot of all methods
This boxplot shows the pairwise distances between all generated points, measured with the Euclidean distance. The higher the values, the better the sampling method spreads its points. It can be seen that random sampling has the worst performance.
```
fig, ax = plt.subplots()
ax.boxplot(pdist_data)
plt.grid(True)
plt.ylabel("pdist")
_ = ax.set_ylim(0, 12)
_ = ax.set_xticklabels(x_label, rotation=45, fontsize=8)
```
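Beyond comparing the samplers in isolation, they can also be plugged directly into an skopt optimizer as the initial point generator. A minimal sketch, assuming skopt 0.8+ where `gp_minimize` accepts an `initial_point_generator` argument (the objective function here is a toy example, not part of this comparison):

```
from skopt import gp_minimize

# Toy objective on the same 2D space used above (illustrative only)
def objective(params):
    x1, x2 = params
    return (x1 - 2.0) ** 2 + (x2 - 7.0) ** 2

# Use Latin hypercube sampling for the 10 exploratory points instead of pure random
res = gp_minimize(objective,
                  dimensions=[(-5., 10.), (0., 15.)],
                  n_calls=20,
                  n_initial_points=10,
                  initial_point_generator="lhs",
                  random_state=123)
print(res.x, res.fun)
```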
# Yeezy Taught Me
Alex Chavez
GA: Data Science (Summer 2016)
(Not) Famous.

# 🔷 Intro and Project Problem 🔷
Although Kanye West is confident that he is the greatest artist of all time, there are many (haters) in the media with various opinions regarding his claim. Can the success of today’s popular hip-hop artists be attributed to the previous work of Kanye?
**Determine if audio features and lyrics from the songs of recent hip-hop artists active between 2004-2010 were influenced by Kanye using data from The Million Song Dataset and musiXmatch.**
If Kanye West is not the biggest influencer, or even a major one, then who is and who are they grouped with?
## Hypothesis
Artists who were influenced by Kanye West’s work will have songs with similar audio features (e.g. acousticness, danceability, energy, loudness, mode, speechiness, etc.) to those of Kanye.
<img src="assets/images/jcole.png" style="height: 400px;" alt="J. Cole on Kanye West">
Artists like J. Cole and Drake have attributed their success to Kanye for allowing them to break into a hip hop industry that was largely dominated by gangsta rap from the 90's to the late 00's.
## Impact and Motivation 🔥🔥🔥
Many who know me well are intrigued to learn that I am *slightly* obsessed with Kanye West. Yeah, I can understand that his public persona can come off as being an asshole and all - but he makes good music tho and is arguably among the greatest hip hop artists and producers in the industry. At least that is what this project will try to show.
This is for all the Ye Stans out there. Even if the data shows that he is not as influential as I perceive him to be, then it will be a fun topic to at least explore. It's an itch that needs to be scratched.
This project will also show who has been influential in the rap game and which artists tend to cluster together in terms of musical tastes and similarity. This project will attempt to bring quantitative evidence to qualitative questions about the opinions and perceptions of the public.
# How Can Machine Learning Be Useful? 🤖
Finding out how influential Ye is can be accomplished by grouping similar artists together through a common set of features and ranking them in order of similarity. Assuming that is how we go about finding Kanye's influence score/rank, an **unsupervised learning** algorithm can be employed to accomplish this goal. More specifically - a clustering algorithm that takes features of a song into account like tempo, bass, instrument-to-speech ratio, duration, etc.
<img src="assets/images/awesome.gif" style="width: 400px;" alt="Be awesome.">
## Related Work 👍
In lieu of having experience in the music or entertainment industry I have decided to look elsewhere first and see what attempts others have made to extract trends in music through the use of data.
- [Who Needs Genres When There is Data?](http://www.decibelsanddecimals.com/dbdblog/2016/6/13/spotify-related-artists) Brady Fowler of *Decibels & Decimals* uses network theory to discover related artists through Spotify user listening data. Fowler generated a graph to indicate how artists fit together and argues that they can be grouped by similarity as opposed to a genre. What if Kanye started his own subgenre of hip hop?
- [Sound Predictions: How the audio properties of a track lead to sales?](https://www.nextbigsound.com/labs/echonest) The audio properties of a track alone can predict its sales with 63% accuracy. Can we predict how similar a newer song is to Kanye's older songs and use this as a measure of similarity?
- [Music Anatomy 201: Predicting Sales](https://github.com/eric-czech/portfolio/tree/master/demonstrative/R/music_anatomy) Eric Czech's R code and analysis that compares iTunes sales data to audio features. Can we use the same dataset and song features to cluster related hip hop artists?
- [The Most Successful Labels in Hip Hop](http://poly-graph.co/labels/) Matt Daniels' analysis surfaces notable labels in hip hop's history (starting from 1989) by measuring each label by its artists' chart performance on Billboard. I would like to incorporate Daniels' visualization into my project, given time, to compare clusters of artists by their similarity to (i.e. influence from) a major player. His analysis also has me wondering if the label of an artist has any relation to the influence of said artist; think Aftermath with Dr. Dre and Eminem or Top Dawg Entertainment with Kendrick Lamar.
<img src="assets/images/successful_labels.png" style="width: 600px;" alt="Successful artists chart">
## Potential Methods and Models 📏🔮🎯
- **k-means/DBSCAN clustering algorithms:** use a clustering algorithm on audio features of songs to group similar artists, and presumably genres, together
- **Logistic regression:** iterate over an artist’s catalog and calculate the probability of an artist sounding like Ye
- **Natural language processing (NLP):** use the "bag-of-words" method on song lyrics where the (frequency of) occurrence of each word is used as a feature for training a classifier
- **[Term frequency-inverse document frequency (tf-idf)](http://www.tfidf.com/):** use to retrieve similar documents (lyrics) together; a short tf-idf + k-means sketch follows this list. In addition to dope beats, the lyrics are what make a song lit (dolla, dolla bill y'all)!
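To make the NLP and clustering ideas above concrete, here is a minimal sketch that clusters songs by their lyrics with tf-idf features and k-means. The lyric snippets and cluster count are hypothetical placeholders, not project data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical lyric snippets standing in for full song lyrics
lyrics = [
    "wait til I get my money right",
    "we major, we major, we major",
    "started from the bottom now we here",
    "no church in the wild",
]

# Bag-of-words tf-idf features over the lyrics
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(lyrics)

# Group songs into k clusters of lyrically similar tracks
kmeans = KMeans(n_clusters=2, random_state=42, n_init=10)
labels = kmeans.fit_predict(X)
print(labels)
```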
# Datasets 📖
Potential datasets that will be used in this endeavor.
[**The Million Song Dataset:**](http://labrosa.ee.columbia.edu/millionsong/) collection of audio features and metadata for a million contemporary popular tracks based on the now deprecated Echo Nest API. The dataset was created in December 2010. The accompanying **musiXmatch** dataset provides lyrics for the Million Song Dataset.
[**Spotify API**:](https://developer.spotify.com/web-api/get-several-audio-features/) Spotify acquired Echo Nest in 2014. Spotify’s Tracks API endpoint provides many of the same audio features found on The Million Song Dataset for the entire Spotify music catalog. Additional observations might have to be collected from this API to include more hip-hop artists and more recent artists that have emerged since 2010.
[**Genius API:**](https://docs.genius.com/) provides community generated lyrics for songs. Lyrics might need to be scraped from the site and merged with metadata from the API to accompany new song observations fetched from Spotify.
## Million Song Dataset Data Dictionary
| Field name | Type | Description |
|-----------------------------|----------------|-----------------------------------------------|
| analysis sample rate | float | sample rate of the audio used |
| artist 7digitalid | int | ID from 7digital.com or -1 |
| artist familiarity | float | algorithmic estimation |
| artist hotttnesss | float | algorithmic estimation |
| artist id | string | Echo Nest ID |
| artist latitude | float | latitude |
| artist location | string | location name |
| artist longitude | float | longitude |
| artist mbid | string | ID from musicbrainz.org |
| artist mbtags | array string | tags from musicbrainz.org |
| artist mbtags count | array int | tag counts for musicbrainz tags |
| artist name | string | artist name |
| artist playmeid | int | ID from playme.com, or -1 |
| artist terms | array string | Echo Nest tags |
| artist terms freq | array float | Echo Nest tags freqs |
| artist terms weight | array float | Echo Nest tags weight |
| audio md5 | string | audio hash code |
| bars confidence | array float | confidence measure |
| bars start | array float | beginning of bars, usually on a beat |
| beats confidence | array float | confidence measure |
| beats start | array float | result of beat tracking |
| danceability | float | algorithmic estimation |
| duration | float | in seconds |
| end of fade in | float | seconds at the beginning of the song |
| energy | float | energy from listener point of view |
| key | int | key the song is in |
| key confidence | float | confidence measure |
| loudness | float | overall loudness in dB |
| mode | int | major or minor |
| mode confidence | float | confidence measure |
| release | string | album name |
| release 7digitalid | int | ID from 7digital.com or -1 |
| sections confidence | array float | confidence measure |
| sections start | array float | largest grouping in a song, e.g. verse |
| segments confidence | array float | confidence measure |
| segments loudness max | array float | max dB value |
| segments loudness max time | array float | time of max dB value, i.e. end of attack |
| segments loudness max start | array float | dB value at onset |
| segments pitches | 2D array float | chroma feature, one value per note |
| segments start | array float | musical events, ~ note onsets |
| segments timbre | 2D array float | texture features (MFCC+PCA-like) |
| similar artists | array string | Echo Nest artist IDs (sim. algo. unpublished) |
| song hotttnesss | float | algorithmic estimation |
| song id | string | Echo Nest song ID |
| start of fade out | float | time in sec |
| tatums confidence | array float | confidence measure |
| tatums start | array float | smallest rythmic element |
| tempo | float | estimated tempo in BPM |
| time signature | int | estimate of number of beats per bar, e.g. 4 |
| time signature confidence | float | confidence measure |
| title | string | song title |
| track id | string | Echo Nest track ID |
| track 7digitalid | int | ID from 7digital.com or -1 |
| year | int | song release year from MusicBrainz or 0 |
## MusiXmatch Data Dictionary
There is a MusiXmatch dataset that accompanies the Million Song Dataset that can be used to analyze lyrics for the dataset's songs. The lyrics come in bag-of-words format: each track is described as the word-counts for a dictionary of the top 5,000 words across the set.
| Data | Description |
|------------------------------|-----------------------------------------|
| \# | comment, ignore |
|%word1,word2,... | list of top words, in popularity order |
|TID,MXMID,idx:cnt,idx:cnt,... | track ID from MSD, track ID from musiXmatch, then word index : word count (word index starts at 1!) |
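Since each musiXmatch line packs a whole track into `TID,MXMID,idx:cnt,idx:cnt,...`, here is a minimal sketch of parsing one such line into a word-count dictionary; the line below is a made-up example in that format, and the word indices are 1-based into the top-5,000 word list:

```python
# Hypothetical musiXmatch-style line: two track ids followed by idx:count pairs
line = "TRAAAAV128F421A322,4623710,1:6,2:4,5:2,3000:1"

fields = line.strip().split(",")
msd_track_id, mxm_track_id = fields[0], fields[1]

# word index (1-based into the top-5,000 word list) -> count in this track
word_counts = {}
for pair in fields[2:]:
    idx, cnt = pair.split(":")
    word_counts[int(idx)] = int(cnt)

print(msd_track_id, mxm_track_id, word_counts)
```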
## Spotify API: Audio Features Endpoint
As Spotify has acquired Echo Nest, which provided the data for the Million Song Dataset, a lot of the same data is available.
Sample request with a given OAuth 2 Bearer Token (command line):
```bash
curl -X GET "https://api.spotify.com/v1/audio-features/?ids=4JpKVNYnVcJ8tuMKjAj50A,2NRANZE9UCmPAS5XVbXL40,24JygzOLM0EmRQeGtFcIcG" -H "Authorization: Bearer {your access token}"
```
Sample JSON response:
```javascript
{ audio_features:
[ { "danceability": 0.808,
"energy": 0.626,
"key": 7,
"loudness": -12.733,
"mode": 1,
"speechiness": 0.168,
"acousticness": 0.00187,
"instrumentalness": 0.159,
"liveness": 0.376,
"valence": 0.369,
"tempo": 123.99,
"type": "audio_features",
"id": "4JpKVNYnVcJ8tuMKjAj50A",
"uri": "spotify:track:4JpKVNYnVcJ8tuMKjAj50A",
"track_href": "https://api.spotify.com/v1/tracks/4JpKVNYnVcJ8tuMKjAj50A",
"analysis_url": "http://echonest-analysis.s3.amazonaws.com/TR/WhpYUARk1kNJ_qP0AdKGcDDFKOQTTgsOoINrqyPQjkUnbteuuBiyj_u94iFCSGzdxGiwqQ6d77f4QLL_8=/3/full.json?AWSAccessKeyId=AKIAJRDFEY23UEVW42BQ&Expires=1458063189&Signature=JRE8SDZStpNOdUsPN/PoS49FMtQ%3D",
"duration_ms": 535223,
"time_signature": 4
},
{ "danceability": 0.457,
"energy": 0.815,
"key": 1,
"loudness": -7.199,
"mode": 1,
"speechiness": 0.034,
"acousticness": 0.102,
"instrumentalness": 0.0319,
"liveness": 0.103,
"valence": 0.382,
"tempo": 96.083,
"type": "audio_features",
"id": "2NRANZE9UCmPAS5XVbXL40",
"uri": "spotify:track:2NRANZE9UCmPAS5XVbXL40",
"track_href": "https://api.spotify.com/v1/tracks/2NRANZE9UCmPAS5XVbXL40",
"analysis_url": "http://echonest-analysis.s3.amazonaws.com/TR/WhuQhwPDhmEg5TO4JjbJu0my-awIhk3eaXkRd1ofoJ7tXogPnMtbxkTyLOeHXu5Jke0FCIt52saKJyfPM=/3/full.json?AWSAccessKeyId=AKIAJRDFEY23UEVW42BQ&Expires=1458063189&Signature=qfclum7FwTaR/7aQbnBNO0daCsM%3D",
"duration_ms": 187800,
"time_signature": 4
},
{ "danceability": 0.281,
"energy": 0.402,
"key": 4,
"loudness": -17.921,
"mode": 1,
"speechiness": 0.0291,
"acousticness": 0.0734,
"instrumentalness": 0.83,
"liveness": 0.0593,
"valence": 0.0748,
"tempo": 115.7,
"type": "audio_features",
"id": "24JygzOLM0EmRQeGtFcIcG",
"uri": "spotify:track:24JygzOLM0EmRQeGtFcIcG",
"track_href": "https://api.spotify.com/v1/tracks/24JygzOLM0EmRQeGtFcIcG",
"analysis_url": "http://echonest-analysis.s3.amazonaws.com/TR/ehbkMg05Ck-FN7p3lV7vd8TUdBCvM6z5mgDiZRv6iSlw8P_b8GYBZ4PRAlOgTl3e5rS34_l3dZGDeYzH4=/3/full.json?AWSAccessKeyId=AKIAJRDFEY23UEVW42BQ&Expires=1458063189&Signature=bnTm0Hcb%2Bxo8ZCmuxm1mY0JY4Hs%3D",
"duration_ms": 497493,
"time_signature": 3
} ]
}
```
# Project Concerns 🙅
<img src="assets/images/serious.gif" alt="100 to 0 really quick!">
## Outstanding Questions Regarding the Project
1. Are clustering algorithms the best way to tackle this project? Should I try a mixture of linear regression and classification?
1. What are some good algorithms to rank/score similarity? Will PageRank be sufficient?
1. How many songs will I need to analyze to be statistically significant?
1. Can I get away with random sampling?
1. If lyrics are taken into account as an additional set of features, do the similarity of lyrics indicate influence?
1. What if rap (or your favorite genre) music really "sounds the same"?
## Assumptions
- Assuming that Kanye West has been an influential artist and producer
- Assuming that since the intro of Kanye West, a subgenre of hip hop has emerged that is distinct from previous hip hop trends
- Will count songs from other artists that Ye has been featured in or produced for as part of his sphere of influence
- Assuming that the Million Song Dataset has significant amount of hip hop observations
- Rappers have been known to quote lines or perform a play on words with a line as a shout-out or diss
- Assuming that an unsupervised learning clustering algorithm would be the best way to tackle this problem
- Presumably not accounting for underground artists, groups, and indie labels
- Taking time series into account with this analysis; e.g. to see if "The College Dropout" and "Late Registration" had artists introduce similar sounding music
## Risks and Caveats
- Audio and lyrical features might not actually show any significant differences
- The Million Song Dataset might not contain enough observations for hip hop or Kanye. In this case I will end up using the Spotify API to (politely) collect relevant data
- We might end up comparing producers and ghostwriters with each other, not necessarily the individual artists themselves. This inherent bias may skew our results
# Outcomes 🙌
<img src="assets/images/ok.gif" style="height: 250px;" alt="OK OK OK OK OK">
- I expect my output to be a table of artists and an influence/similarity score for related artists
- My target audience would instead prefer to see easily digestible visualizations - graphs with color coded clusters, a similarity/ranking table where they can enter their own artist, audio samples or comparisons to make the case that Ye has been influential
- Project will be considered a success if a cluster of artists is identified for each one of Ye's album releases. As long as it's statistically significant and there are a handful of well-known artists that have been influenced, like J. Cole or Drake
- Model might be relatively simple if I only take audio features or song lyrics and metadata into account. Model might become more complex if I include both sets of features, but might be necessary as music is as much about the lyrics as dope beats
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/NAIP/metadata.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/NAIP/metadata.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=NAIP/metadata.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/NAIP/metadata.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# fc = (ee.FeatureCollection('ft:1fRY18cjsHzDgGiJiS2nnpUU3v9JPDc2HNaR7Xk8')
# .filter(ee.Filter().eq('Name', 'Minnesota')))
def print_image_id(image):
    index = image.get('system:time_start')
    print(index.getInfo())
lat = 46.80514
lng = -99.22023
lng_lat = ee.Geometry.Point(lng, lat)
collection = ee.ImageCollection('USDA/NAIP/DOQQ')
naip = collection.filterBounds(lng_lat)
naip_2015 = naip.filterDate('2010-01-01', '2015-12-31')
# print(naip_2015.getInfo())
# print(naip_2015.map(print_image_id))
# Map.setCenter(lon, lat, 13)
# Map.addLayer(naip_2015)
image = ee.Image('USDA/NAIP/DOQQ/m_4609915_sw_14_1_20100629')
bandNames = image.bandNames()
print('Band names: ', bandNames.getInfo())
b_nir = image.select('N')
proj = b_nir.projection()
print('Projection: ', proj.getInfo())
props = b_nir.propertyNames()
print(props.getInfo())
img_date = ee.Date(image.get('system:time_start'))
print('Timestamp: ', img_date.getInfo())
id = image.get('system:index')
print(id.getInfo())
# print(image.getInfo())
vis = {'bands': ['N', 'R', 'G']}
# Map.setCenter(lng, lat, 12)
# Map.addLayer(image,vis)
size = naip_2015.toList(100).length()
print("Number of images: ", size.getInfo())
count = naip_2015.size()
print("Count: ", count.getInfo())
dates = ee.List(naip_2015.get('date_range'))
date_range = ee.DateRange(dates.get(0),dates.get(1))
print("Date range: ", date_range.getInfo())
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
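The layer-adding calls in the script above are left commented out; as a minimal sketch (the layer name is an arbitrary choice), the NAIP image could be displayed on the folium map with geehydro by reusing the `image` and `vis` objects defined earlier:

```
# Center the map on the point of interest and draw the false-color NAIP composite
Map.setCenter(lng, lat, 12)
Map.addLayer(image, vis, 'NAIP false color')
Map
```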
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# SAR Single Node on MovieLens (Python, CPU)
Simple Algorithm for Recommendation (SAR) is a fast and scalable algorithm for personalized recommendations based on user transaction history. It produces easily explainable and interpretable recommendations and handles "cold item" and "semi-cold user" scenarios. SAR is a kind of neighborhood based algorithm (as discussed in [Recommender Systems by Aggarwal](https://dl.acm.org/citation.cfm?id=2931100)) which is intended for ranking top items for each user. More details about SAR can be found in the [deep dive notebook](../02_model/sar_deep_dive.ipynb).
SAR recommends items that are most ***similar*** to the ones that the user already has an existing ***affinity*** for. Two items are ***similar*** if the users that interacted with one item are also likely to have interacted with the other. A user has an ***affinity*** to an item if they have interacted with it in the past.
### Advantages of SAR:
- High accuracy for an easy to train and deploy algorithm
- Fast training, only requiring simple counting to construct matrices used at prediction time.
- Fast scoring, only involving multiplication of the similarity matrix with an affinity vector
### Notes to use SAR properly:
- Since it does not use item or user features, it can be at a disadvantage against algorithms that do.
- It's memory-hungry, requiring the creation of an $m \times m$ sparse square matrix (where $m$ is the number of items). This can also be a problem for many matrix factorization algorithms.
- SAR favors an implicit rating scenario and it does not predict ratings.
This notebook provides an example of how to utilize and evaluate SAR in Python on a CPU.
# 0 Global Settings and Imports
```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import logging
import time
import numpy as np
import pandas as pd
import papermill as pm
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.recommender.sar import SAR
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
```
# 1 Load Data
SAR is intended to be used on interactions with the following schema:
`<User ID>, <Item ID>,<Time>,[<Event Type>], [<Event Weight>]`.
Each row represents a single interaction between a user and an item. These interactions might be different types of events on an e-commerce website, such as a user clicking to view an item, adding it to a shopping basket, following a recommendation link, and so on. Each event type can be assigned a different weight, for example, we might assign a “buy” event a weight of 10, while a “view” event might only have a weight of 1.
The MovieLens dataset contains well-formatted interactions of users providing ratings to movies (the movie rating is used as the event weight) - we will use it for the rest of the example.
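As a tiny illustration of that schema (the ids, timestamps and event weights below are hypothetical, not MovieLens data), a weighted interaction table could be built like this:

```
import pandas as pd

# Hypothetical weighted interactions: a "buy" event counts 10x more than a "view"
interactions = pd.DataFrame({
    "userID":    [1, 1, 2, 2],
    "itemID":    [10, 11, 10, 12],
    "timestamp": [1546300800, 1546387200, 1546473600, 1546560000],
    "eventType": ["view", "buy", "view", "view"],
    "rating":    [1, 10, 1, 1],   # event weight used as the affinity signal
})
print(interactions)
```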
```
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
```
### 1.1 Download and use the MovieLens Dataset
```
data = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE
)
# Convert the float precision to 32-bit in order to reduce memory consumption
data['rating'] = data['rating'].astype(np.float32)
data.head()
```
### 1.2 Split the data using the python random splitter provided in utilities:
We split the full dataset into a `train` and `test` dataset to evaluate performance of the algorithm against a held-out set not seen during training. Because SAR generates recommendations based on user preferences, all users that are in the test set must also exist in the training set. For this case, we can use the provided `python_stratified_split` function which holds out a percentage (in this case 25%) of items from each user, but ensures all users are in both `train` and `test` datasets. Other options are available in the `dataset.python_splitters` module which provide more control over how the split occurs.
```
train, test = python_stratified_split(data, ratio=0.75, col_user='userID', col_item='itemID', seed=42)
print("""
Train:
Total Ratings: {train_total}
Unique Users: {train_users}
Unique Items: {train_items}
Test:
Total Ratings: {test_total}
Unique Users: {test_users}
Unique Items: {test_items}
""".format(
train_total=len(train),
train_users=len(train['userID'].unique()),
train_items=len(train['itemID'].unique()),
test_total=len(test),
test_users=len(test['userID'].unique()),
test_items=len(test['itemID'].unique()),
))
```
# 2 Train the SAR Model
### 2.1 Instantiate the SAR algorithm and set the index
We will use the single node implementation of SAR and specify the column names to match our dataset (timestamp is an optional column that is used and can be removed if your dataset does not contain it).
Other options are specified to control the behavior of the algorithm as described in the [deep dive notebook](../02_model/sar_deep_dive.ipynb).
```
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(levelname)-8s %(message)s')
model = SAR(
col_user="userID",
col_item="itemID",
col_rating="rating",
col_timestamp="timestamp",
similarity_type="jaccard",
time_decay_coefficient=30,
timedecay_formula=True
)
```
### 2.2 Train the SAR model on our training data, and get the top-k recommendations for our testing data
SAR first computes an item-to-item ***co-occurrence matrix***. Co-occurrence represents the number of times two items appear together for any given user. Once we have the co-occurrence matrix, we compute an ***item similarity matrix*** by rescaling the co-occurrences by a given metric (Jaccard similarity in this example).
We also compute an ***affinity matrix*** to capture the strength of the relationship between each user and each item. Affinity is driven by different types (like *rating* or *viewing* a movie), and by the time of the event.
Recommendations are achieved by multiplying the affinity matrix $A$ and the similarity matrix $S$. The result is a ***recommendation score matrix*** $R$. We compute the ***top-k*** results for each user in the `recommend_k_items` function seen below.
A full walkthrough of the SAR algorithm can be found [here](../02_model/sar_deep_dive.ipynb).
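As a toy illustration of those steps (the 3-user, 4-item affinity matrix below is hypothetical, and this is not the library's implementation), the co-occurrence, Jaccard similarity, and score matrices can be computed directly with numpy:

```
import numpy as np

# Hypothetical binary affinity matrix A: rows are users, columns are items
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=float)

C = A.T @ A                                  # item-item co-occurrence counts
occ = np.diag(C)                             # how many users interacted with each item
S = C / (occ[:, None] + occ[None, :] - C)    # Jaccard similarity matrix
R = A @ S                                    # recommendation score matrix

# Top-2 scoring item indices per user (already-seen items are not removed in this sketch)
print(np.argsort(-R, axis=1)[:, :2])
```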
```
start_time = time.time()
model.fit(train)
train_time = time.time() - start_time
print("Took {} seconds for training.".format(train_time))
start_time = time.time()
top_k = model.recommend_k_items(test, remove_seen=True)
test_time = time.time() - start_time
print("Took {} seconds for prediction.".format(test_time))
display(top_k.head())
```
# 3 Evaluate how well SAR performs
We evaluate how well SAR performs for a few common ranking metrics provided in the `python_evaluation` module in reco_utils. We will consider the Mean Average Precision (MAP), Normalized Discounted Cumulative Gain (NDCG), Precision, and Recall for the top-k items per user we computed with SAR. User, item and rating column names are specified in each evaluation method.
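As a quick worked example of what the ranking metrics measure (hypothetical numbers): if 3 of the 10 items recommended to a user appear among that user's held-out test items, precision@10 is 3/10 = 0.3, and if that user has 5 relevant test items in total, recall@10 is 3/5 = 0.6; the precision and recall reported below are these per-user values averaged over all users.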
```
eval_map = map_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
eval_ndcg = ndcg_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
eval_precision = precision_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
eval_recall = recall_at_k(test, top_k, col_user='userID', col_item='itemID', col_rating='rating', k=TOP_K)
print("Model:\t",
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
# Now let's look at the results for a specific user
user_id = 876
ground_truth = test[test['userID']==user_id].sort_values(by='rating', ascending=False)[:TOP_K]
prediction = model.recommend_k_items(pd.DataFrame(dict(userID=[user_id])), remove_seen=True)
pd.merge(ground_truth, prediction, on=['userID', 'itemID'], how='left')
```
Above, we see that one of the highest rated items from the test set was recovered by the model's top-k recommendations, however the others were not. Offline evaluations are difficult as they can only use what was seen previously in the test set and may not represent the user's actual preferences across the entire set of items. Adjustments to how the data is split, algorithm is used and hyper-parameters can improve the results here.
```
# Record results with papermill for tests - ignore this cell
pm.record("map", eval_map)
pm.record("ndcg", eval_ndcg)
pm.record("precision", eval_precision)
pm.record("recall", eval_recall)
pm.record("train_time", train_time)
pm.record("test_time", test_time)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
import time
from datetime import datetime
```
## First introduction to COVID-19 Global Forecasting Competition on Kaggle
```
train = pd.read_csv('../data/raw/train.csv')
train.head()
train.describe()
len(train)
train.info()
train.isnull().any()
train.isnull().sum()
print('Number of Country/Region: ', train['Country/Region'].nunique())
print('Dates go from day', min(train['Date']), 'to day', max(train['Date']), ', a total of', train['Date'].nunique(), 'days')
print('Countries with Province/State informed: ', train[train['Province/State'].isna()==False]['Country/Region'].unique())
```
## General Insights
```
confirmed_total_date = train.groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date = train.groupby(['Date']).agg({'Fatalities':['sum']})
totals_date = confirmed_total_date.join(fatalities_total_date)
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(20,8))
totals_date.plot(ax=ax1)
ax1.set_title('Global Confirmed Cases',size =15)
ax1.set_ylabel('Number of Cases', size=13)
ax1.set_xlabel('Date',size =13)
fatalities_total_date.plot(ax=ax2, color = 'red')
ax2.set_title("Global Deceased Cases", size=15)
ax2.set_ylabel("Number of Cases", size=13)
ax2.set_xlabel("Date", size=13)
```
## Global Tendency Minus China
```
confirmed_total_date_noChina = train[train['Country/Region']!='China'].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_noChina = train[train['Country/Region']!='China'].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_noChina = confirmed_total_date_noChina.join(fatalities_total_date_noChina)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,8))
total_date_noChina.plot(ax=ax1)
ax1.set_title("Global confirmed cases excluding China", size=13)
ax1.set_ylabel("Number of cases", size=13)
ax1.set_xlabel("Date", size=13)
fatalities_total_date_noChina.plot(ax=ax2, color='red')
ax2.set_title("Global deceased cases excluding China", size=13)
ax2.set_ylabel("Number of cases", size=13)
ax2.set_xlabel("Date", size=13)
```
## China
```
confirmed_total_date_China = train[train['Country/Region']=='China'].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_China = train[train['Country/Region']=='China'].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_China = confirmed_total_date_China.join(fatalities_total_date_China)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,8))
total_date_China.plot(ax=ax1)
ax1.set_title("China confirmed cases", size=15)
ax1.set_ylabel("Number of cases", size=13)
ax1.set_xlabel("Date", size=13)
fatalities_total_date_China.plot(ax=ax2, color='red')
ax2.set_title("China deceased cases", size=15)
ax2.set_ylabel("Number of cases", size=13)
ax2.set_xlabel("Date", size=13)
```
## Italy, Spain, UK and Singapore
```
#confirmed_country_Italy = train[train['Country/Region']=='Italy'].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Italy = train[train['Country/Region']=='Italy'].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Italy = train[train['Country/Region']=='Italy'].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Italy = train[train['Country/Region']=='Italy'].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Italy = confirmed_total_date_Italy.join(fatalities_total_date_Italy)
#confirmed_country_Spain = train[train['Country/Region']=='Spain'].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Spain = train[train['Country/Region']=='Spain'].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Spain = train[train['Country/Region']=='Spain'].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Spain = train[train['Country/Region']=='Spain'].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Spain = confirmed_total_date_Spain.join(fatalities_total_date_Spain)
#confirmed_country_UK = train[train['Country/Region']=='United Kingdom'].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_UK = train[train['Country/Region']=='United Kingdom'].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_UK = train[train['Country/Region']=='United Kingdom'].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_UK = train[train['Country/Region']=='United Kingdom'].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_UK = confirmed_total_date_UK.join(fatalities_total_date_UK)
#confirmed_country_Australia = train[train['Country/Region']=='Australia'].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Australia = train[train['Country/Region']=='Australia'].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Australia = train[train['Country/Region']=='Australia'].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Australia = train[train['Country/Region']=='Australia'].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Australia = confirmed_total_date_Australia.join(fatalities_total_date_Australia)
#confirmed_country_Singapore = train[train['Country/Region']=='Singapore'].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Singapore = train[train['Country/Region']=='Singapore'].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Singapore = train[train['Country/Region']=='Singapore'].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Singapore = train[train['Country/Region']=='Singapore'].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Singapore = confirmed_total_date_Singapore.join(fatalities_total_date_Singapore)
plt.figure(figsize=(25,10))
plt.subplot(2, 2, 1)
total_date_Italy.plot(ax=plt.gca(), title='Italy')
plt.ylabel("Confirmed infection cases", size=13)
plt.subplot(2, 2, 2)
total_date_Spain.plot(ax=plt.gca(), title='Spain')
plt.subplot(2, 2, 3)
total_date_UK.plot(ax=plt.gca(), title='United Kingdom')
plt.ylabel("Confirmed infection cases", size=13)
plt.subplot(2, 2, 4)
total_date_Singapore.plot(ax=plt.gca(), title='Singapore')
pop_italy = 60486683.
pop_spain = 46749696.
pop_UK = 67784927.
pop_singapore = 5837230.
# Express cases and deaths as a percentage of each country's population
total_date_Italy.ConfirmedCases = total_date_Italy.ConfirmedCases/pop_italy*100.
total_date_Italy.Fatalities = total_date_Italy.Fatalities/pop_italy*100.
total_date_Spain.ConfirmedCases = total_date_Spain.ConfirmedCases/pop_spain*100.
total_date_Spain.Fatalities = total_date_Spain.Fatalities/pop_spain*100.
total_date_UK.ConfirmedCases = total_date_UK.ConfirmedCases/pop_UK*100.
total_date_UK.Fatalities = total_date_UK.Fatalities/pop_UK*100.
total_date_Singapore.ConfirmedCases = total_date_Singapore.ConfirmedCases/pop_singapore*100.
total_date_Singapore.Fatalities = total_date_Singapore.Fatalities/pop_singapore*100.
plt.figure(figsize=(15,10))
plt.subplot(2, 2, 1)
total_date_Italy.ConfirmedCases.plot(ax=plt.gca(), title='Italy')
plt.ylabel("Fraction of population infected")
plt.ylim(0, 0.06)
plt.subplot(2, 2, 2)
total_date_Spain.ConfirmedCases.plot(ax=plt.gca(), title='Spain')
plt.ylim(0, 0.06)
plt.subplot(2, 2, 3)
total_date_UK.ConfirmedCases.plot(ax=plt.gca(), title='United Kingdom')
plt.ylabel("Fraction of population infected")
plt.ylim(0, 0.005)
plt.subplot(2, 2, 4)
total_date_Singapore.ConfirmedCases.plot(ax=plt.gca(), title='Singapore')
plt.ylim(0, 0.005)
#confirmed_country_Italy = train[(train['Country/Region']=='Italy') & train['ConfirmedCases']!=0].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Italy = train[(train['Country/Region']=='Italy') & train['ConfirmedCases']!=0].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Italy = train[(train['Country/Region']=='Italy') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Italy = train[(train['Country/Region']=='Italy') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Italy = confirmed_total_date_Italy.join(fatalities_total_date_Italy)
#confirmed_country_Spain = train[(train['Country/Region']=='Spain') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Spain = train[(train['Country/Region']=='Spain') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Spain = train[(train['Country/Region']=='Spain') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Spain = train[(train['Country/Region']=='Spain') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Spain = confirmed_total_date_Spain.join(fatalities_total_date_Spain)
#confirmed_country_UK = train[(train['Country/Region']=='United Kingdom') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_UK = train[(train['Country/Region']=='United Kingdom') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_UK = train[(train['Country/Region']=='United Kingdom') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_UK = train[(train['Country/Region']=='United Kingdom') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_UK = confirmed_total_date_UK.join(fatalities_total_date_UK)
#confirmed_country_Australia = train[(train['Country/Region']=='Australia') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Australia = train[(train['Country/Region']=='Australia') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Australia = train[(train['Country/Region']=='Australia') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Australia = train[(train['Country/Region']=='Australia') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Australia = confirmed_total_date_Australia.join(fatalities_total_date_Australia)
#confirmed_country_Singapore = train[(train['Country/Region']=='Singapore') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'ConfirmedCases':['sum']})
#fatalities_country_Singapore = train[(train['Country/Region']=='Singapore') & (train['ConfirmedCases']!=0)].groupby(['Country/Region', 'Province/State']).agg({'Fatalities':['sum']})
confirmed_total_date_Singapore = train[(train['Country/Region']=='Singapore') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'ConfirmedCases':['sum']})
fatalities_total_date_Singapore = train[(train['Country/Region']=='Singapore') & (train['ConfirmedCases']!=0)].groupby(['Date']).agg({'Fatalities':['sum']})
total_date_Singapore = confirmed_total_date_Singapore.join(fatalities_total_date_Singapore)
# keep the first 50 days since the first confirmed case in each country
italy = [i for i in total_date_Italy.ConfirmedCases['sum'].values]
italy_50 = italy[0:50]
spain = [i for i in total_date_Spain.ConfirmedCases['sum'].values]
spain_50 = spain[0:50]
UK = [i for i in total_date_UK.ConfirmedCases['sum'].values]
UK_50 = UK[0:50]
singapore = [i for i in total_date_Singapore.ConfirmedCases['sum'].values]
singapore_50 = singapore[0:50]
# Plots
plt.figure(figsize=(20,8))
plt.plot(italy_50)
plt.plot(spain_50)
plt.plot(UK_50)
plt.plot(singapore_50)
plt.legend(["Italy", "Spain", "UK", "Singapore"], loc='upper left')
plt.title("COVID-19 infections from the first confirmed case", size=15)
plt.xlabel("Days", size=13)
plt.ylabel("Infected cases", size=13)
plt.ylim(0, 60000)
plt.show()
```
# Transformers, what can they do?
Install the Transformers and Datasets libraries to run this notebook.
```
!pip install datasets transformers[sentencepiece]
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
classifier("I've been waiting for a HuggingFace course my whole life.")
classifier([
"I've been waiting for a HuggingFace course my whole life.",
"I hate this so much!"
])
from transformers import pipeline
classifier = pipeline("zero-shot-classification")
classifier(
"This is a course about the Transformers library",
candidate_labels=["education", "politics", "business"],
)
from transformers import pipeline
generator = pipeline("text-generation")
generator("In this course, we will teach you how to")
from transformers import pipeline
generator = pipeline("text-generation", model="distilgpt2")
generator(
"In this course, we will teach you how to",
max_length=30,
num_return_sequences=2,
)
from transformers import pipeline
unmasker = pipeline("fill-mask")
unmasker("This course will teach you all about <mask> models.", top_k=2)
from transformers import pipeline
ner = pipeline("ner", grouped_entities=True)
ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
from transformers import pipeline
question_answerer = pipeline("question-answering")
question_answerer(
question="Where do I work?",
context="My name is Sylvain and I work at Hugging Face in Brooklyn"
)
from transformers import pipeline
summarizer = pipeline("summarization")
summarizer("""
America has changed dramatically during recent years. Not only has the number of
graduates in traditional engineering disciplines such as mechanical, civil,
electrical, chemical, and aeronautical engineering declined, but in most of
the premier American universities engineering curricula now concentrate on
and encourage largely the study of engineering science. As a result, there
are declining offerings in engineering subjects dealing with infrastructure,
the environment, and related issues, and greater concentration on high
technology subjects, largely supporting increasingly complex scientific
developments. While the latter is important, it should not be at the expense
of more traditional engineering.
Rapidly developing economies such as China and India, as well as other
industrial countries in Europe and Asia, continue to encourage and advance
the teaching of engineering. Both China and India, respectively, graduate
six and eight times as many traditional engineers as does the United States.
Other industrial countries at minimum maintain their output, while America
suffers an increasingly serious decline in the number of engineering graduates
and a lack of well-educated engineers.
""")
from transformers import pipeline
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
translator("Ce cours est produit par Hugging Face.")
```
```
from bayes_opt import BayesianOptimization
import matplotlib.pyplot as plt
import numpy as np
from math import cos
from sklearn.gaussian_process.kernels import RBF
from mpl_toolkits.mplot3d import Axes3D
import itertools
import sys
sys.path.insert(0,'../python_scripts/')
from bayesian_optimization import IBO
import time
import seaborn as sns
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
%pylab inline
%load_ext autoreload
```
# This notebook assumes that you have an understanding of how Bayesian Optimization works.
- For a review, please visit the notebook Overview_of_Intelligent_Bayesian_Optimization
## We will compare my implementation of Bayesian Optimization with the bayesian_optimization package
> Bayesian_optimization package: https://github.com/fmfn/BayesianOptimization
- First, compare performance on a one-dimensional function
- Second, compare performance on a two-dimensional function
- Third, compare performance using an objective function to find the best hyperparameters of gradient boosting
### Define the one-dimensional function
```
oneD_function =lambda x: cos(1000*x-500)+(abs(x*100_000))/(x**4+1000)
plt.plot(np.linspace(-10,10),[oneD_function(_) for _ in np.linspace(-10,10)])
plt.title('1D Demo Function');
```
### Setup the package Bayesian_Optimization
```
bayes_opt = BayesianOptimization(oneD_function,
{'x': (-10, 10)}) # the bounds to explore
bayes_opt .explore({'x': np.linspace(-10,10)}) # the points to explore
```
### Setup my implementation
- Import IBO (Intelligent Bayesian Optimization)
```
bo_implementation = IBO()
```
- Define the single training point, as well as the testing domain
```
test_x_oneD = np.array(np.linspace(-10,10,2_000)).reshape(-1,1)
train_x_oneD = np.array(np.random.choice(test_x_oneD.ravel())).reshape(-1,1)
train_y_numbers_oneD = np.array([oneD_function(_) for _ in train_x_oneD]).reshape(-1,1)
```
- Fit the Gaussian process from IBO
```
bo_implementation.fit(train_x_oneD,train_y_numbers_oneD, test_x_oneD, oneD_function, y_func_type='real' )
```
- There are two primary methods in my implementation
- 1) Predict: predict the next x-coordinates
- 2) Maximize: try to find the best x-coordinates given the number of steps
# Run one trial of bayes_opt vs my implementation
### Compare performance and time
- My implementation
```
# find the max
start_my_bo = time.time()
bo_implementation.maximize(n_steps=10)
end_my_bo = time.time()
```
- Bayes Opt implementation
 - Make sure to use the same number of initialization points, the same number of steps, and the same acquisition function (Expected Improvement; a short reference sketch of EI follows the next code block)
```
start_bayes_opt = time.time()
bayes_opt .maximize(init_points=1, n_iter=10, acq='ei') # ei = expected improvement
end_bayes_opt = time.time()
```
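Both runs use the Expected Improvement acquisition function (`acq='ei'` above). For reference, here is a minimal, self-contained sketch of how EI is typically computed from a Gaussian process posterior mean and standard deviation. This is an illustrative formula only, not code taken from either implementation:
```
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y, xi=0.01):
    """Expected Improvement of candidate points with GP posterior mean `mu` and
    standard deviation `sigma`, relative to the best observed value `best_y`.
    `xi` is a small exploration bonus."""
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    improvement = mu - best_y - xi
    z = np.divide(improvement, sigma, out=np.zeros_like(sigma), where=sigma > 0)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    ei[sigma == 0] = 0.0  # no posterior uncertainty -> no expected improvement
    return ei

# The next point to sample is the argmax of EI over the candidate domain
print(expected_improvement(mu=[0.5, 1.2, 0.9], sigma=[0.3, 0.1, 0.8], best_y=1.0))
```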
#### Compare the results
```
print(f"The best y-value from my bayesian optimization implementation was {bo_implementation.best_y}.\
The best x-coordiantes from my bayesian optimiztion implementation was {bo_implementation.best_x}.\
My implementation took {round(end_my_bo -start_my_bo,2)} seconds for 10 steps")
print(f" The best values from the Bayes_Opt package was {bayes_opt.res['max']}.\
The Bayes_Opt implementation took {round(end_bayes_opt -start_bayes_opt,2) } seconds for 10 steps")
```
- The Bayes_Opt package beat my implementation in both result quality and speed
## Run each implementation for 10 trials of 10 steps. See how many times each algo wins.
- One dimensional implementation
```
# keep track of the wins
my_implementation_wins = 0
bayes_opt_wins = 0
n_trials=10
for i in range(n_trials):
print('Step Number =',i+1)
# My implementation
bo_implementation.fit(train_x_oneD,train_y_numbers_oneD, test_x_oneD, oneD_function,
y_func_type='real', verbose = False )
bo_implementation.maximize()
# Bayes Opt Implementation
bayes_opt = BayesianOptimization(oneD_function,
{'x': (-10, 10)}) # the range to explore
bayes_opt .explore({'x': np.linspace(-10,10)}) # the points to explore
bayes_opt .maximize(init_points=1, n_iter=10, acq='ei')
if bayes_opt.res['max']['max_val'] > bo_implementation.best_y:
print("Bayes Opt Won")
bayes_opt_wins +=1
else:
print('My implementation won')
my_implementation_wins +=1
plt.figure(figsize=(10,5))
plt.title('Results of 1D hyperparameter search')
sns.barplot(x = ['bayes_opt_implementation','my_implementation'],y =[bayes_opt_wins, my_implementation_wins] );
```
- Not a huge surprise that the 'professional' implementation outperforms my implementation
# Next, compare performance over a two-dimensional function
### Eggholder function
${\displaystyle f(x,y)=-\left(y+47\right)\sin {\sqrt {\left|{\frac {x}{2}}+\left(y+47\right)\right|}}-x\sin {\sqrt {\left|x-\left(y+47\right)\right|}}} $
```
#twoD_function = lambda x,y: 100*np.sqrt(abs(y-.01*x**2))+.01*abs(x+10)
twoD_function = lambda x,y: -(y+47)*sin(np.sqrt(abs((x/2)+(y+47))))-x*sin(np.sqrt(abs(x-(y+47))))
twoD_domain = np.linspace(-100,100,60)
combo_domain = list(itertools.product(*[twoD_domain,twoD_domain]))
twoD_train_x = np.array([combo_domain[np.random.choice(len(combo_domain))] ] )
twoD_train_y = np.array([twoD_function(twoD_train_x[0][0],twoD_train_x[0][1])]).reshape(-1,1)
# Define the axes for the scatter plot
xs = [combo_domain[i][0] for i in range(len(combo_domain))]
ys = [combo_domain[i][1] for i in range(len(combo_domain))]
print(f"The domain has {len(combo_domain):,} parameters")
twoD_function_y = np.array([twoD_function(combo_domain[i][0],combo_domain[i][1]) for i in range(len(combo_domain))]).reshape(-1,1)
fig = plt.figure(figsize=(13,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs, ys, twoD_function_y,cmap='hot',c=twoD_function_y)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_title('Eggholder Function');
```
### Setup my implementation
```
bo_implementation_2d = IBO()
bo_implementation_2d.fit(twoD_train_x, twoD_train_y ,combo_domain, twoD_function , y_func_type='real',
kernel_params={'rbf_length':50}) # change the length parameter for the RBF kernel
```
### Setup the bayesian_optimization implementation
```
bayes_opt_2d = BayesianOptimization(twoD_function ,
{'x': (-100,100),'y':(-100,100)}) # the bounds to explore
bayes_opt_2d .explore({'x': np.linspace(-100,100,60),'y':np.linspace(-100,100,60)}) # the points to explore
```
## Compare two-dimensional results
```
my_implementation_2d_s = time.time()
bo_implementation_2d.maximize()
my_implementation_2d_e = time.time()
bayes_opt_2d_time_s = time.time()
bayes_opt_2d.maximize(init_points=1, n_iter=10, acq='ei')
bayes_opt_2d_time_e = time.time()
print(f"The best y-value from my bayesian optimization implementation was {bo_implementation_2d.best_y}.\
The best x-coordiantes from my bayesian optimiztion implementation was {bo_implementation_2d.best_x}.\
My implementation took {round(my_implementation_2d_e -my_implementation_2d_s,2)} seconds for 10 steps")
print(f" The best values from the Bayes_Opt package was {bayes_opt_2d.res['max']}.\
The Bayes_Opt implementation took {round(bayes_opt_2d_time_e -bayes_opt_2d_time_s,2) } seconds for 10 steps")
```
# Run this two-dimensional parameter search ten times to compare performance
```
# keep track of the wins
my_implementation_wins_2d = 0
bayes_opt_wins_2d = 0
n_trials=10
for i in range(n_trials):
print('Step Number =',i+1)
# My implementation
bo_implementation_2d.fit(twoD_train_x, twoD_train_y ,combo_domain, twoD_function , y_func_type='real',
kernel_params={'length':50}, verbose=False) # change the length parameter
bo_implementation_2d.maximize()
# Bayes Opt Implementation
bayes_opt_2d = BayesianOptimization(twoD_function ,
{'x': (-100,100),'y':(-100,100)}) # the bounds to explore
bayes_opt_2d .explore({'x': np.linspace(-100,100,60),'y':np.linspace(-100,100,60)}) # the points to explore
bayes_opt_2d .maximize(init_points=1, n_iter=10, acq='ei')
if bayes_opt_2d.res['max']['max_val'] > bo_implementation_2d.best_y:
print("Bayes Opt Won")
bayes_opt_wins_2d +=1
else:
print('My implementation won')
my_implementation_wins_2d +=1
plt.figure(figsize=(10,5))
plt.title('Results of 2D hyperparameter search')
sns.barplot(x = ['bayes_opt_implementation','my_implementation'],y =[bayes_opt_wins_2d, my_implementation_wins_2d] );
```
- I manually tuned the length parameter of the RBF kernel, which likely explains the better performance here
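For intuition on that tuning knob: the RBF kernel's length-scale controls how far apart two points can be while still being treated as correlated by the Gaussian process. A short illustration with sklearn's `RBF` class (imported at the top of this notebook); this is only a sketch of the kernel itself, independent of how IBO uses it internally:
```
import numpy as np
from sklearn.gaussian_process.kernels import RBF

X = np.array([[0.0], [10.0], [50.0]])
for length_scale in (1, 50):
    k = RBF(length_scale=length_scale)
    # Covariance matrix between the three points: a larger length-scale
    # keeps distant points strongly correlated, which smooths the GP fit.
    print(f"length_scale={length_scale}\n{k(X).round(3)}\n")
```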
# Finally, test IBO vs bayesian_optimization using an objective function
- Maximize the negative root mean squared error
- Use the Combined Cycle Power Plant data set https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant
>The dataset contains 9568 data points collected from a Combined Cycle Power Plant over 6 years (2006-2011), when the power plant was set to work with full load. Features consist of hourly average ambient variables Temperature (T), Ambient Pressure (AP), Relative Humidity (RH) and Exhaust Vacuum (V) to predict the net hourly electrical energy output (EP) of the plant.
A combined cycle power plant (CCPP) is composed of gas turbines (GT), steam turbines (ST) and heat recovery steam generators. In a CCPP, the electricity is generated by gas and steam turbines, which are combined in one cycle, and is transferred from one turbine to another. While the Vacuum is collected from and has an effect on the Steam Turbine, the other three ambient variables affect the GT performance.
For comparability with our baseline studies, and to allow 5x2 fold statistical tests be carried out, we provide the data shuffled five times. For each shuffling 2-fold CV is carried out and the resulting 10 measurements are used for statistical testing.
We provide the data both in .ods and in .xlsx formats.
- Predict the net hourly electrical energy output (EP)
- Search domain: n_estimators = range(1, 700, 5), max_depth = range(1, 50)
### Open up the data
```
power_cycle_df = pd.read_excel("../data/power_cycle.xlsx")
power_cycle_df.head()
power_cycle_df.describe()
X_train, X_test, y_train, y_test = train_test_split(power_cycle_df.iloc[:,:-1] , power_cycle_df.iloc[:,-1]
,test_size = .1, random_state = 20)
```
### Need to define the objective function to visualize the loss manifold
```
def hyperparam_choice_function(hyperparameter_value, X_train_in=X_train,
X_test_in = X_test, y_train_in = y_train, y_test_in = y_test,
model = GradientBoostingRegressor, dimensions = 'one', hyperparameter_value_two = None):
"""Returns the negative MSE of the input hyperparameter for the given hyperparameter.
Used with GGradient Boosting
Relies on a global name scope to bring in the data.
If dimensions = one, then search n_estimators. if dimension equal two then search over n_estimators and max_depth"""
if dimensions == 'one':
try:
m = model(n_estimators= int(hyperparameter_value))
except:
m = model(n_estimators= hyperparameter_value)
m.fit(X_train_in, y_train_in)
pred = m.predict(X_test_in)
n_mse = root_mean_squared_error(y_test_in, pred)
return n_mse
elif dimensions =='two':
try:
m = model(n_estimators = int(hyperparameter_value), max_depth = int(hyperparameter_value_two))
except:
m = model(n_estimators = hyperparameter_value, max_depth = hyperparameter_value_two)
m.fit(X_train_in, y_train_in)
pred = m.predict(X_test_in)
n_mse = root_mean_squared_error(y_test_in, pred)
return n_mse
else:
return ' We do not support this number of dimensions yet'
# Negative RMSE helper
def root_mean_squared_error(actual, predicted, negative = True):
    """RMSE of the actual and predicted values.
    negative=True returns the negative RMSE, so that maximizing it is equivalent to minimizing the error."""
    actual = np.asarray(actual).reshape(-1, 1)
    predicted = np.asarray(predicted).reshape(-1, 1)
    rmse = np.sqrt(np.sum((actual - predicted) ** 2) / len(actual))
    return -rmse if negative else rmse
def hyperparam_choice_function_two(hyperparameter_value,hyperparameter_value_two, X_train_in=X_train,
X_test_in = X_test, y_train_in = y_train, y_test_in = y_test,
model = GradientBoostingRegressor):
"""For two dimensions choice function"""
m = model(n_estimators = int(hyperparameter_value), max_depth = int(hyperparameter_value_two))
m.fit(X_train_in, y_train_in)
pred = m.predict(X_test_in)
n_mse = mean_squared_error(y_test_in, pred, negative= True)
return n_mse
def hyp_choice_sklearn(hyperparameter_value,hyperparameter_value_two, y_tra = y_train, X_tra = X_train,
X_tes = X_test, y_tes = y_test):
hyperparameter_value = int(hyperparameter_value)
hyperparameter_value_two = int(hyperparameter_value_two)
    # use .values instead of the deprecated .as_matrix()
    gbr = GradientBoostingRegressor(n_estimators=hyperparameter_value,
                                    max_depth=hyperparameter_value_two)
    gbr.fit(X_tra.values, y_tra.values)
    nmse = - mean_squared_error(y_tes, gbr.predict(X_tes.values))
    return nmse
hyp_choice_sklearn(500,3)
```
### Visualize the loss manifold
```
domain_n_estimators = range(1,700,5)
domain_max_depth = range(1,50)
combo_domain_hyp = np.array(list(itertools.product(*[domain_n_estimators, domain_max_depth ])))
twoD_train_x_hyp = np.array([combo_domain_hyp[np.random.choice(len(combo_domain_hyp))] ] )
twoD_train_y_hyp = np.array([hyperparam_choice_function(twoD_train_x_hyp[0][0], dimensions = 'two',
hyperparameter_value_two = twoD_train_x_hyp[0][1])]).reshape(-1,1)
# Define the axes for the scatter plot
xs = [combo_domain_hyp[i][0] for i in range(len(combo_domain_hyp))]
ys = [combo_domain_hyp[i][1] for i in range(len(combo_domain_hyp))]
zs = np.array([hyperparam_choice_function(combo_domain_hyp[i][0],dimensions='two',
                            hyperparameter_value_two = combo_domain_hyp[i][1]) for i in range(len(combo_domain_hyp))]).reshape(-1,1)
fig = plt.figure(figsize=(13,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs, ys, zs, cmap='hot', c=zs)  # plot the objective values (zs), not the Eggholder values
ax.set_xlabel('n_estimators')
ax.set_ylabel('max_depth')
ax.set_zlabel('negativeRMSE')
ax.set_title('Gradient Boosting');
```
### Fit my implementation
```
bo_hyperparams = IBO()
bo_hyperparams.fit(twoD_train_x_hyp , twoD_train_y_hyp, combo_domain_hyp ,None, y_func_type='objective' ,
test_points_x = X_test,
test_points_y = y_test, model_obj= GradientBoostingRegressor,
model_train_points_x =X_train, model_train_points_y = y_train)
```
### Fit the bayesian_optimization package
```
bayes_opt_hyp = BayesianOptimization(hyp_choice_sklearn,
{'hyperparameter_value': (1, 700),'hyperparameter_value_two':(1,71) }) # the bounds to explore
bayes_opt_hyp .explore({'hyperparameter_value': [int(i) for i in range(1,700,10)],
'hyperparameter_value_two':[int(i) for i in range(1,71)]}) # the points to explore, need the same size
```
## Maximize my function
```
bo_hyperparams.maximize()
```
## Maximize the BayesianOptimization package
```
bayes_opt_hyp.maximize(init_points=1, n_iter=10, acq='ei') # ten steps
```
- My implementation outperformed here, most likely due to manually tuning the length parameter for the RBF kernel.
<h1 style='color: green; font-size: 36px; font-weight: bold;'>Data Science - Linear Regression</h1>
# <font color='red' style='font-size: 30px;'>Getting to Know the Dataset</font>
<hr style='border: 2px solid red;'>
## Importing libraries
```
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import numpy as np
```
## The Dataset and the Project
<hr>
### Source: https://www.kaggle.com/greenwing1985/housepricing
### Description:
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>Our goal in this exercise is to create a machine learning model, using the Linear Regression technique, that predicts property prices from a set of known property characteristics.</p>
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>We will use a dataset available on Kaggle that was computer-generated for machine learning practice by beginners. This dataset was modified to make our goal easier, which is to consolidate the knowledge acquired in the Linear Regression training.</p>
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>Follow the steps proposed in the comments above each cell, and happy studying.</p>
### Data:
<ul style='font-size: 18px; line-height: 2; text-align: justify;'>
<li><b>precos</b> - Property prices</li>
<li><b>area</b> - Property area</li>
<li><b>garagem</b> - Number of garage spaces</li>
<li><b>banheiros</b> - Number of bathrooms</li>
<li><b>lareira</b> - Number of fireplaces</li>
<li><b>marmore</b> - Whether the property has white marble finishing (1) or not (0)</li>
<li><b>andares</b> - Whether the property has more than one floor (1) or not (0)</li>
</ul>
## Reading the data
The dataset is in the "Dados" folder, named "HousePrices_HalfMil.csv", and uses ";" as the separator.
```
dados = pd.read_csv('../Exercicio/dados/HousePrices_HalfMil.csv', sep = ';')
```
## Viewing the data
```
dados
```
## Checking the dataset size
```
dados.shape
```
# <font color='red' style='font-size: 30px;'>Preliminary Analysis</font>
<hr style='border: 2px solid red;'>
## Descriptive statistics
```
dados.describe().round(2)
```
## Correlation matrix
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>The <b>correlation coefficient</b> is a measure of linear association between two variables and lies between <b>-1</b> and <b>+1</b>, where <b>-1</b> indicates perfect negative association and <b>+1</b> indicates perfect positive association.</p>
### Observe the correlations between the variables:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Which ones are most correlated with the dependent variable (Price)?</li>
<li>What is the relationship between them (positive or negative)?</li>
<li>Is there strong correlation between the explanatory variables?</li>
</ul>
```
dados.corr().round(4)
```
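Optionally, a heatmap makes the correlation matrix easier to read at a glance (a small sketch using seaborn, which is imported a few cells below):
```
import seaborn as sns

# Heatmap of the correlation matrix: values close to +1 or -1 stand out immediately
ax = sns.heatmap(dados.corr().round(4), annot=True, cmap='coolwarm', vmin=-1, vmax=1)
ax.figure.set_size_inches(10, 8)
ax.set_title('Correlation Matrix', fontsize=18)
ax
```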
# <font color='red' style='font-size: 30px;'>Behavior of the Dependent Variable (Y)</font>
<hr style='border: 2px solid red;'>
# Graphical analysis
<img width='700px' src='../Dados/img/Box-Plot.png'>
## Importing the seaborn library
```
import seaborn as sns
```
## Set the style and color of the plots (optional)
```
sns.set_palette('Accent')
sns.set_style('darkgrid')
```
## Box plot of the *dependent* variable (y)
### Assess the behavior of the dependent variable's distribution:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Do there appear to be outliers?</li>
<li>Does the box plot show any trend?</li>
</ul>
https://seaborn.pydata.org/generated/seaborn.boxplot.html?highlight=boxplot#seaborn.boxplot
```
ax = sns.boxplot(data=dados['precos'], orient = 'v', width=0.2)
ax.figure.set_size_inches(12, 6)
ax.set_title('Preços dos Imóveis', fontsize=20)
ax.set_ylabel('$', fontsize=16)
ax
```
## Investigating the *dependent* variable (y) together with other characteristics
Create a box plot of the dependent variable together with each explanatory variable (categorical ones only).
### Assess the behavior of the dependent variable's distribution against each categorical explanatory variable:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Do the statistics change significantly between categories?</li>
<li>Does the box plot show any well-defined trend?</li>
</ul>
### Box-plot (Price X Garage)
```
ax = sns.boxplot(y ='precos', x = 'garagem', data=dados, orient = 'v', width=0.5)
ax.figure.set_size_inches(12, 6)
ax.set_title('Preços dos Imóveis', fontsize=20)
ax.set_ylabel('$', fontsize=16)
ax.set_xlabel('Número de Vagas de Garagem', fontsize=16)
ax
```
### Box-plot (Price X Bathrooms)
```
ax = sns.boxplot(y ='precos', x = 'banheiros', data=dados, orient = 'v', width=0.5)
ax.figure.set_size_inches(12, 6)
ax.set_title('Preços dos Imóveis', fontsize=20)
ax.set_ylabel('$', fontsize=16)
ax.set_xlabel('Número de Banheiros', fontsize=16)
ax
```
### Box-plot (Price X Fireplaces)
```
ax = sns.boxplot(y ='precos', x = 'lareira', data=dados, orient = 'v', width=0.5)
ax.figure.set_size_inches(12, 6)
ax.set_title('Preços dos Imóveis', fontsize=20)
ax.set_ylabel('$', fontsize=16)
ax.set_xlabel('Número de Lareiras', fontsize=16)
ax
```
### Box-plot (Price X Marble Finishing)
```
ax = sns.boxplot(y ='precos', x = 'marmore', data=dados, orient = 'v', width=0.5)
ax.figure.set_size_inches(12, 6)
ax.set_title('Preços dos Imóveis', fontsize=20)
ax.set_ylabel('$', fontsize=16)
ax.set_xlabel('Acabamento em Mármore', fontsize=16)
ax
```
### Box-plot (Price X Floors)
```
ax = sns.boxplot(y ='precos', x = 'andares', data=dados, orient = 'v', width=0.5)
ax.figure.set_size_inches(12, 6)
ax.set_title('Preços dos Imóveis', fontsize=20)
ax.set_ylabel('$', fontsize=16)
ax.set_xlabel('Mais de um Andar', fontsize=16)
ax
```
## Frequency distribution of the *dependent* variable (y)
Build a histogram of the dependent variable (Price).
### Assess:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Does the frequency distribution of the dependent variable appear to be skewed?</li>
<li>Is it reasonable to assume that the dependent variable follows a normal distribution?</li>
</ul>
https://seaborn.pydata.org/generated/seaborn.distplot.html?highlight=distplot#seaborn.distplot
```
ax = sns.distplot(dados['precos'])
ax.figure.set_size_inches(12, 6)
ax.set_title('Distribuição de Frequencias', fontsize=20)
ax.set_ylabel('Frequências', fontsize=16)
ax.set_xlabel('$', fontsize=16)
ax
```
## Scatter plots between the variables in the dataset
## Plotting the pairplot with a single variable fixed on the y axis
https://seaborn.pydata.org/generated/seaborn.pairplot.html?highlight=pairplot#seaborn.pairplot
Plot scatter plots of the dependent variable against each explanatory variable. Use seaborn's pairplot for this.
Plot the same chart using the parameter kind='reg'.
### Evaluate:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Can you identify any linear relationship between the variables?</li>
<li>Is the relationship positive or negative?</li>
<li>Compare with the results obtained from the correlation matrix.</li>
</ul>
```
ax = sns.pairplot(dados, y_vars = 'precos', x_vars = ['area','garagem', 'banheiros', 'lareira', 'marmore', 'andares'])
ax.fig.suptitle('Dispersão entre as Variáveis', fontsize=20, y=1.05)
ax
ax = sns.pairplot(dados, y_vars = 'precos', x_vars = ['area','garagem', 'banheiros', 'lareira', 'marmore', 'andares'], kind='reg')
ax.fig.suptitle('Dispersão entre as Variáveis', fontsize=20, y=1.05)
ax
```
# <font color='red' style='font-size: 30px;'>Estimating a Linear Regression Model</font>
<hr style='border: 2px solid red;'>
## Importing *train_test_split* from the *scikit-learn* library
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
```
from sklearn.model_selection import train_test_split
```
## Creating a pandas Series to store the dependent variable (y)
```
y = dados['precos']
```
## Creating a pandas DataFrame to store the explanatory variables (X)
```
x = dados[['area','garagem', 'banheiros', 'lareira', 'marmore', 'andares']]
```
## Creating the training and test datasets
```
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=2811)
```
## Importing *LinearRegression* and *metrics* from the *scikit-learn* library
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
https://scikit-learn.org/stable/modules/classes.html#regression-metrics
```
from sklearn.linear_model import LinearRegression
from sklearn import metrics
```
## Instantiating the *LinearRegression()* class
```
modelo = LinearRegression()
```
## Using the *fit()* method to estimate the linear model with the TRAINING data (y_train and X_train)
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.fit
```
modelo.fit(x_train, y_train)
```
## Obtaining the coefficient of determination (R²) of the model estimated on the TRAINING data
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score
### Evaluate:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Does the model show a good fit?</li>
<li>Do you remember what R² represents?</li>
<li>What could be done to improve this statistic?</li>
</ul>
```
print('R² = {}'.format(modelo.score(x_train, y_train).round(2)))
```
## Generating predictions for the TEST data (X_test) using the *predict()* method
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.predict
```
y_previsto = modelo.predict(x_test)
```
## Obtaining the coefficient of determination (R²) for our model's predictions
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score
```
print('R² = %s' % metrics.r2_score(y_test, y_previsto).round(2))
```
# <font color='red' style='font-size: 30px;'>Obtaining Point Predictions</font>
<hr style='border: 2px solid red;'>
## Creating a simple simulator
Create a simulator that generates price estimates from a set of property attributes.
```
area = 38
garagem = 2
banheiros = 4
lareira = 4
marmore = 0
andares = 1
entrada = [[area, garagem, banheiros, lareira, marmore, andares]]
print('$ {0:.2f}'.format(modelo.predict(entrada)[0]))
```
# <font color='red' style='font-size: 30px;'>Regression Metrics</font>
<hr style='border: 2px solid red;'>
## Regression metrics
<hr>
source: https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics
Some statistics obtained from the regression model are very useful as criteria for comparing estimated models and selecting the best one. The main regression metrics that scikit-learn provides for linear models are the following:
### Mean Squared Error (EQM)
The mean of the squared errors. Better fits have a lower $EQM$.
$$EQM(y, \hat{y}) = \frac 1n\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2$$
### Root Mean Squared Error (REQM)
The square root of the mean of the squared errors. Better fits have a lower $\sqrt{EQM}$.
$$\sqrt{EQM(y, \hat{y})} = \sqrt{\frac 1n\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2}$$
### Coefficient of Determination - R²
The coefficient of determination (R²) is a summary measure of how well the regression line fits the data. It is a value between 0 and 1.
$$R^2(y, \hat{y}) = 1 - \frac {\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2}{\sum_{i=0}^{n-1}(y_i-\bar{y})^2}$$
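To see exactly what these formulas compute, here is a minimal sketch (not part of the original exercise) that calculates the three metrics by hand with NumPy and checks them against scikit-learn. The `y_true` and `y_pred` arrays are made-up values used only for illustration.
```
import numpy as np
from sklearn import metrics

# Made-up observed and predicted values, for illustration only
y_true = np.array([105000.0, 98000.0, 120000.0, 87000.0])
y_pred = np.array([101000.0, 99500.0, 115000.0, 90000.0])

# EQM: mean of the squared errors
eqm = np.mean((y_true - y_pred) ** 2)

# REQM: square root of the EQM
reqm = np.sqrt(eqm)

# R²: 1 minus (sum of squared residuals / total sum of squares)
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

# The manual values should match scikit-learn up to floating-point precision
print(eqm, metrics.mean_squared_error(y_true, y_pred))
print(reqm, np.sqrt(metrics.mean_squared_error(y_true, y_pred)))
print(r2, metrics.r2_score(y_true, y_pred))
```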
## Obtaining metrics for the estimated model
```
EQM = metrics.mean_squared_error(y_test, y_previsto).round(2)
REQM = np.sqrt(metrics.mean_squared_error(y_test, y_previsto)).round(2)
R2 = metrics.r2_score(y_test, y_previsto).round(2)
pd.DataFrame([EQM, REQM, R2], ['EQM', 'REQM', 'R²'], columns=['Métricas'])
```
# <font color='red' style='font-size: 30px;'>Saving and Loading the Estimated Model</font>
<hr style='border: 2px solid red;'>
## Importing the pickle library
```
import pickle
```
## Saving the estimated model
```
output = open('modelo_preco', 'wb')
pickle.dump(modelo, output)
output.close()
```
### In a new notebook/Python project
<h4 style='color: blue; font-weight: normal'>In [1]:</h4>
```
import pickle
modelo = open('modelo_preco','rb')
lm_new = pickle.load(modelo)
modelo.close()
area = 38
garagem = 2
banheiros = 4
lareira = 4
marmore = 0
andares = 1
entrada = [[area, garagem, banheiros, lareira, marmore, andares]]
print('$ {0:.2f}'.format(lm_new.predict(entrada)[0]))
```
<h4 style='color: red; font-weight: normal'>Out [1]:</h4>
```
$ 46389.80
```
# Use AutoAI to predict credit risk with `ibm-watson-machine-learning`
This notebook demonstrates how to deploy to the Watson Machine Learning service an AutoAI model created in the `Generated Scikit-learn Notebook`, which is composed during AutoAI experiments (to learn more about AutoAI experiments, go to [experiments/autoai](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/python_sdk/experiments/autoai)).
Some familiarity with bash is helpful. This notebook uses Python 3.7.
## Learning goals
The learning goals of this notebook are:
- Working with the Watson Machine Learning instance
- Online deployment of AutoAI model
- Scoring data using deployed model
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Model upload](#upload)
3. [Web service creation](#deploy)
4. [Scoring](#score)
5. [Clean up](#cleanup)
6. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href=" https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>).
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide platform `api_key` and instance `location`.
You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve platform API Key and instance location.
API Key can be generated in the following way:
```
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
```
From the output, copy the value of `api_key`.
Location of your WML instance can be retrieved in the following way:
```
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
```
From the output, copy the value of `location`.
**Tip**: Your `Cloud API key` can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. You can also get a service specific url by going to the [**Endpoint URLs** section of the Watson Machine Learning docs](https://cloud.ibm.com/apidocs/machine-learning). You can check your instance location in your <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance details.
You can also get a service-specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
**Action**: Enter your `api_key` and `location` in the following cell.
```
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
    "apikey": api_key,
    "url": 'https://' + location + '.ml.cloud.ibm.com'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First, create a space that will be used for your work. If you do not have a space already created, you can use the [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create one.
- Click New Deployment Space
- Create an empty space
- Select Cloud Object Storage
- Select Watson Machine Learning instance and press Create
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** that you will be using.
```
client.set.default_space(space_id)
```
<a id="upload"></a>
## 2. Upload model
In this section you will learn how to upload the model to the Cloud.
#### Download the data as a pandas DataFrame and the AutoAI model saved as a scikit-learn pipeline using `wget`.
**Hint**: To install the required packages, execute the command `!pip install pandas wget numpy`.
The model can be extracted from an executed AutoAI experiment using `ibm-watson-machine-learning` with the following command: `experiment.optimizer(...).get_pipeline(astype='sklearn')`.
```
import os, wget
import pandas as pd
import numpy as np
filename = 'german_credit_data_biased_training.csv'
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/data/credit_risk/german_credit_data_biased_training.csv'
if not os.path.isfile(filename):
    wget.download(url)
model_name = "model.pickle"
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/models/autoai/credit-risk/model.pickle'
if not os.path.isfile(model_name):
    wget.download(url)
credit_risk_df = pd.read_csv(filename)
X = credit_risk_df.drop(['Risk'], axis=1)
y = credit_risk_df['Risk']
credit_risk_df.head()
```
#### Custom software_specification
Create a new software specification based on the default Python 3.7 environment, extended with the autoai-libs package.
```
base_sw_spec_uid = client.software_specifications.get_uid_by_name("default_py3.7")
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/configs/config.yaml'
if not os.path.isfile('config.yaml'):
    wget.download(url)
!cat config.yaml
```
The `config.yaml` file describes the details of the package extension. Now you need to store the new package extension with APIClient.
```
meta_prop_pkg_extn = {
    client.package_extensions.ConfigurationMetaNames.NAME: "scikt with autoai-libs",
    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Extension for autoai-libs",
    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}
pkg_extn_details = client.package_extensions.store(meta_props=meta_prop_pkg_extn, file_path="config.yaml")
pkg_extn_uid = client.package_extensions.get_uid(pkg_extn_details)
pkg_extn_url = client.package_extensions.get_href(pkg_extn_details)
```
#### Create a new software specification and add the created package extension to it.
```
meta_prop_sw_spec = {
    client.software_specifications.ConfigurationMetaNames.NAME: "Mitigated AutoAI bases on scikit spec",
    client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "Software specification for scikt with autoai-libs",
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_sw_spec_uid}
}
sw_spec_details = client.software_specifications.store(meta_props=meta_prop_sw_spec)
sw_spec_uid = client.software_specifications.get_uid(sw_spec_details)
client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_uid)
```
#### Get the details of the created software specification
```
client.software_specifications.get_details(sw_spec_uid)
```
#### Load the AutoAI model saved as a `scikit-learn` pipeline.
Depending on the estimator type, the AutoAI model pipeline may contain models from the following frameworks:
- `xgboost`
- `lightgbm`
- `scikit-learn`
```
from joblib import load
pipeline = load(model_name)
```
#### Store the model
```
model_props = {
    client.repository.ModelMetaNames.NAME: "AutoAI model",
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid
}
feature_vector = X.columns
published_model = client.repository.store_model(
    model=pipeline,
    meta_props=model_props,
    training_data=X.values,
    training_target=y.values,
    feature_names=feature_vector,
    label_column_names=['Risk']
)
published_model_uid = client.repository.get_model_id(published_model)
```
#### Get model details
```
client.repository.get_details(published_model_uid)
```
**Note:** You can see that the model is successfully stored in the Watson Machine Learning service.
```
client.repository.list_models()
```
<a id="deploy"></a>
## 3. Create online deployment
You can use the commands below to create an online deployment (web service) for the stored model.
```
metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of AutoAI model.",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}
created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
```
Get the deployment id.
```
deployment_id = client.deployments.get_uid(created_deployment)
print(deployment_id)
```
<a id="score"></a>
## 4. Scoring
You can send new scoring records to the web service deployment using the `score` method.
```
values = X.values
scoring_payload = {
    "input_data": [{
        'values': values[:5]
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
predictions
```
<a id="cleanup"></a>
## 5. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
see the steps in this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 6. Summary and next steps
You successfully completed this notebook! You learned how to use Watson Machine Learning for AutoAI model deployment and scoring. Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html?context=analytics) for more samples, tutorials, documentation, how-tos, and blog posts.
### Author
**Jan Sołtysik**, Intern in Watson Machine Learning.
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
```
import pandas as pd
#Loading data from the Github repository to colab notebook
filename = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter15/Dataset/crx.data'
# Loading the data using pandas
credData = pd.read_csv(filename,sep=",",header = None,na_values = "?")
credData.head()
# Changing the Classes to 1 & 0
credData.loc[credData[15] == '+' , 15] = 1
credData.loc[credData[15] == '-' , 15] = 0
credData.head()
# Dropping all the rows with na values
newcred = credData.dropna(axis = 0)
newcred.shape
# Separating the categorical variables to make dummy variables
credCat = pd.get_dummies(newcred[[0,3,4,5,6,8,9,11,12]])
# Separating the numerical variables
credNum = newcred[[1,2,7,10,13,14]]
# Making the X variable which is a concatenation of categorical and numerical data
X = pd.concat([credCat,credNum],axis = 1)
print(X.shape)
# Separating the label as the y variable
y = newcred[15]
print(y.shape)
# Normalising the data sets
# Import library function
from sklearn import preprocessing
# Creating the scaling function
minmaxScaler = preprocessing.MinMaxScaler()
# Transforming with the scaler function
X_tran = pd.DataFrame(minmaxScaler.fit_transform(X))
# Printing the output
X_tran.head()
# Splitting the data set to train and test sets
from sklearn.model_selection import train_test_split
# Splitting the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_tran, y, test_size=0.3, random_state=123)
```
**Weighted Averaging**
```
# Defining three base models
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
model1 = LogisticRegression(random_state=123)
model2 = KNeighborsClassifier(n_neighbors=5)
model3 = RandomForestClassifier(n_estimators=500)
# Fitting all three models on the training data
model1.fit(X_train,y_train)
model2.fit(X_train,y_train)
model3.fit(X_train,y_train)
# Predicting probabilities of each model on the test set
pred1=model1.predict_proba(X_test)
pred2=model2.predict_proba(X_test)
pred3=model3.predict_proba(X_test)
```
**Iteration 1: Weights**
```
# Calculating the ensemble prediction by applying weights for each prediction
ensemblepred=(pred1 *0.60+pred2 * 0.20+pred3 * 0.20)
# Displaying first 4 rows of the ensemble predictions
ensemblepred[0:4,:]
# Printing the order of classes for each model
print(model1.classes_)
print(model2.classes_)
print(model3.classes_)
# Generating predictions from probabilities
import numpy as np
pred = np.argmax(ensemblepred,axis = 1)
pred
# Generating confusion matrix
from sklearn.metrics import confusion_matrix
confusionMatrix = confusion_matrix(y_test, pred)
print(confusionMatrix)
# Generating classification report
from sklearn.metrics import classification_report
print(classification_report(y_test, pred))
```
**Iteration 2: Weights**
Let us now try a different set of weights and see their effect.
```
# Calculating the ensemble prediction by applying weights for each prediction
ensemblepred=(pred1 *0.70+pred2 * 0.15+pred3 * 0.15)
# Generating predictions from probabilities
import numpy as np
pred = np.argmax(ensemblepred,axis = 1)
# Generating confusion matrix
from sklearn.metrics import confusion_matrix
confusionMatrix = confusion_matrix(y_test, pred)
print(confusionMatrix)
# Generating classification report
from sklearn.metrics import classification_report
print(classification_report(y_test, pred))
```
Machine Learning for COVID-19
https://arxiv.org/pdf/2003.11336.pdf
A summary of approaches in the field, and the source for many of the entries below.
Drug Repurposing:
https://www.biorxiv.org/content/10.1101/2020.01.31.929547v1.full
Describes the identification and repurposing of a poly-ADP-ribose polymerase 1 (PARP1) inhibitor to treat COVID-19. Presents hindsight analysis on SARS and MERS, with tested favorable results. The drug targets viral replication. *Very short* methods section. Describes BERT-like pre-training on 1,000,000,000 SMILES representations, then presumably uses a preexisting molecular binding database (possibly with locality prediction, though they don't say) to fine-tune the model onto the target problem. BERT (https://arxiv.org/abs/1810.04805) is language modeling by blanking out ~15% of words and predicting the missing words from the remaining words. BERT also implies a self-attention model, and lots of training time. Recent advances like ELECTRA (https://github.com/google-research/electra) might cut this from weeks to 4 days (assuming English is as complex as "protienese" ... the language of proteins), but it would be a heavy compute load.
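As a rough illustration of that masking objective (this is not the authors' actual pipeline; the character-level tokenization and the `mask_tokens` helper are hypothetical), a BERT-style model hides a fraction of the input tokens and is trained to recover them:
```
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token='[MASK]'):
    # Randomly hide ~mask_rate of the tokens; the model is trained to predict the hidden ones
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            masked.append(mask_token)
            targets.append(tok)   # scored position
        else:
            masked.append(tok)
            targets.append(None)  # not scored
    return masked, targets

# Character-level tokenization of a SMILES string (aspirin), purely for illustration
smiles = list('CC(=O)OC1=CC=CC=C1C(=O)O')
print(mask_tokens(smiles))
```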
https://arxiv.org/pdf/1908.06760.pdf
Similar to the above, but with *MUCH* better documentation. Has a GitHub repository (https://github.com/deargen/mt-dti).
https://www.researchgate.net/publication/339998830_AI_for_the_repurposing_of_approved_or_investigational_drugs_against_COVID-19
Unsupervised learning of gene expression with respect to foreign substances. Looks for drugs whose embeddings are similar to the "knock out" mutation of COBP2, which is believed to be a protein co-opted by SARS for replication. The list of therapeutics presented matches other recommendations, but I'm not really sure *why* this works.
Structure Prediction:
https://www.journalofinfection.com/article/S0163-4453(20)30107-9/pdf
Not ML. Uses sequence and structural similarity to predict binding sites of the COVID-19 spike protein to GRP78. ESPript 3 was used to cross-reference with the active regions of known coronavirus spike proteins. Might relate to this week's lab? Uses PyMOL with "structural superposition" from the most similar spike protein to determine the folding of the protein. HADDOCK was used for docking prediction, with significant importing of prior literature. PRODIGY was used for binding affinity. At each step the researchers note the cluster used for the task. This is likely beyond our resources to replicate.
https://deepmind.com/research/open-source/computational-predictions-of-protein-structures-associated-with-COVID-19
Already mentioned this in Project 1. Still relevant. I believe it predicts binding affinity matrices and feeds them into some of the algorithms in the above paper.
Diagnosis:
https://www.medrxiv.org/content/10.1101/2020.03.30.20047456v1.full.pdf
Transfers a convolutional neural network pre-trained on ImageNet to predict the presence of bacterial, non-COVID, and COVID infection from chest X-rays.
Applicability is limited by access to X-rays, which are expensive, non-mobile, and medically risky with consistent use. A dataset of ~16k images is available, barely enough for good performance without pre-training.
Datasets:
http://www.pdbbind.org.cn/
A dataset of binding affinity for many proteins in the protein data bank. Used in many direct ligand prediction methods.
https://www.drugbank.ca/
A database of drugs currently in production. Used in drug repurposing research.
https://www.ebi.ac.uk/chembl/
Another biomolecule affinity database, hand-curated. Unlike PDBbind, it is not specifically for proteins.
https://zinc.docking.org/
A chemical database of commercially purchasable compounds.
http://www.lincsproject.org/
A gene expression database. Maps biological activity to perturbagens (chemical or genetic reagents that alter intracellular processes).
Other:
https://link.springer.com/article/10.1186/s13321-019-0404-1
Non-COVID-19 research. Description of a neural method for generating de novo (novel) drugs. The approach uses a graph structure (not SMILES) to ensure chemical validity of target molecules. Further training is performed using a cycle consistency loss (https://junyanz.github.io/CycleGAN/). The data is in the form of drugs matching a property (A) and drugs lacking the property (B). A generator learns to translate from A->B and from B->A well enough to fool a separate discriminator model. The researchers also impose a structural similarity loss to prevent drastic changes. The actual use seems to be adding properties to pre-existing molecules with minimal change. The researchers claim that this can be used along with "Tanimoto similarity on Morgan Fingerprints" to generate ligands for target receptors (they use dopamine receptors as an example). My gut reaction is that this is *too complicated* to scale to COVID-19 drugs, and more focused on small molecules.
https://www.ncbi.nlm.nih.gov/pubmed/29595767
I couldn't find the full text of this publication. It seems to focus on small molecules again, using neural nets to prune a Monte Carlo search of molecule synthesis. It attempts to find methods of producing target molecules. It is described as contributing to https://arxiv.org/abs/1907.01417 which performs a search over published research to create a knowledge graph of known molecules that act on structures similar to COVID.
### - Load PyTorch
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import warnings
warnings.filterwarnings(action='ignore')
```
### - Input & Weights
```
# Define input
input_tensor = torch.tensor([0.2, 0.1], dtype=torch.float64)
# Define weights: w1, w2, b1, b2
w1 = nn.Embedding(2, 2, dtype=torch.float64)
w2 = nn.Embedding(2, 2, dtype=torch.float64)
b1 = nn.Embedding(1, 2, dtype=torch.float64)
b2 = nn.Embedding(1, 2, dtype=torch.float64)
# Init weights: w1, w2, b1, b2
w1.weight.data = torch.tensor([[0.2, 0.1], [0.4, 0.15]], dtype=torch.float64, requires_grad=True).t()
w2.weight.data = torch.tensor([[0.65, 0.7], [0.45, 0.3]], dtype=torch.float64, requires_grad=True).t()
b1.weight.data = torch.tensor([[0.3]], dtype=torch.float64, requires_grad=True).t()
b2.weight.data = torch.tensor([[0.5]], dtype=torch.float64, requires_grad=True).t()
# Print weights
print('*'*30)
print('input_tensor:', input_tensor)
print('*'*30)
print('w1.weight:', w1.weight)
# w1.weight.grad = None
# Until loss.backward() is called (i.e., backpropagation is performed), the gradient of the weight remains None.
print('w1.weight.grad:', w1.weight.grad)
print('b1.weight:', b1.weight)
print('b1.weight.grad:', b1.weight.grad)
print('*'*30)
print('w2.weight:', w2.weight)
print('w2.weight.grad:', w2.weight.grad)
print('b2.weight:', b2.weight)
print('b2.weight.grad:', b2.weight.grad)
print('*'*30)
```
### - Forward Propagation
```
# Hidden layer (MLP)
net_h1_h2 = torch.matmul(input_tensor, w1.weight) + b1.weight
out_h1_h2 = F.sigmoid(net_h1_h2)
# [[net_h1, net_h2]]
print('net_h1_h2:', net_h1_h2)
# [[out_h1, out_h2]]
print('out_h1_h2:', out_h1_h2)
print('out_h1_h2.grad:', out_h1_h2.grad)
# Output layer (MLP)
net_o1_o2 = torch.matmul(out_h1_h2, w2.weight) + b2.weight
out_o1_o2 = F.sigmoid(net_o1_o2)
# [[net_o1, net_o2]]
print('net_o1_o2:', net_o1_o2)
# [[out_o1, out_o2]]
print('out_o1_o2:', out_o1_o2)
print('out_o1_o2.grad:', out_o1_o2.grad)
```
### - Loss
```
label = torch.tensor([0.99, 0.01], dtype=torch.float64, requires_grad=True)
# Loss function
loss = torch.sum(0.5*torch.square(label - out_o1_o2))
print('loss:', loss)
```
### - Backward Propagation
```
# Get gradient of each weight & bias
loss.backward()
# Gradients
# Save gradients in weight.grad attribute
print('w1.weight.grad:', w1.weight.grad)
print('b1.weight.grad:', b1.weight.grad)
print('w2.weight.grad:', w2.weight.grad)
print('b2.weight.grad:', b2.weight.grad)
```
### - Optimization (1 epoch)
```
# Before optimization
# Weights
print('w1.weight:', w1.weight)
print('b1.weight:', b1.weight)
print('w2.weight:', w2.weight)
print('b2.weight:', b2.weight)
# Loss
h1 = F.sigmoid(torch.matmul(input_tensor, w1.weight) + b1.weight)
output = F.sigmoid(torch.matmul(h1, w2.weight) + b2.weight)
print('loss:', torch.sum(0.5*torch.square(label - output)))
# Output
print('label:', label)
print('output:', output)
# Learning rate
lr = 0.5
# Optimizer
optimizer = optim.SGD((w1.weight, w2.weight, b1.weight, b2.weight), lr=0.5)
# Optimization
optimizer.step()
# After optimization (1 epoch)
# Updated weights
print('w1.weight:', w1.weight)
print('b1.weight:', b1.weight)
print('w2.weight:', w2.weight)
print('b2.weight:', b2.weight)
# Decreasing loss
h1 = F.sigmoid(torch.matmul(input_tensor, w1.weight) + b1.weight)
output = F.sigmoid(torch.matmul(h1, w2.weight) + b2.weight)
print('loss:', torch.sum(0.5*torch.square(label - output)))
# Output after one optimization step, closer to the label
print('label:', label)
print('output:', output)
```
### - Optimization (1000 epochs)
```
# 1000 epochs
for i in range(1, 1001):
    # Reset the gradients tracked by the optimizer
    # If this method is not called, gradients accumulate across iterations.
    optimizer.zero_grad()
    # Forward pass
    h1 = F.sigmoid(torch.matmul(input_tensor, w1.weight) + b1.weight)
    output = F.sigmoid(torch.matmul(h1, w2.weight) + b2.weight)
    # Loss
    loss = torch.sum(0.5*torch.square(label - output))
    if i == 1 or i % 100 == 0:
        # Decreasing loss
        print('loss:', loss)
    # Backward pass
    loss.backward()
    # Optimization
    optimizer.step()
# Validation of output (1000 epochs)
h1 = F.sigmoid(torch.matmul(input_tensor, w1.weight) + b1.weight)
output = F.sigmoid(torch.matmul(h1, w2.weight) + b2.weight)
# Output: close to the label
print('label:', label)
print('output:', output)
```
```
from datascience import *
path_data = '../../../data/'
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
cones = Table.read_table(path_data + 'cones.csv')
nba = Table.read_table(path_data + 'nba_salaries.csv').relabeled(3, 'SALARY')
movies = Table.read_table(path_data + 'movies_by_year.csv')
```
We can now apply Python to analyze data. We will work with data stored in Table structures.
Tables are a fundamental way of representing data sets. A table can be viewed in two ways:
* a sequence of named columns that each describe a single attribute of all entries in a data set, or
* a sequence of rows that each contain all information about a single individual in a data set.
We will study tables in great detail in the next several chapters. For now, we will just introduce a few methods without going into technical details.
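For a concrete illustration of the column-based view, a small table can be built directly from named columns. This is a minimal sketch with made-up values; it is not the `cones` table used in the rest of this section.
```
from datascience import Table, make_array

# A tiny table built from two named columns (made-up values)
mini = Table().with_columns(
    'Flavor', make_array('strawberry', 'chocolate'),
    'Price', make_array(3.55, 4.75)
)
mini
```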
The table `cones` has been imported for us; later we will see how, but here we will just work with it. First, let's take a look at it.
```
cones
```
The table has six rows. Each row corresponds to one ice cream cone. The ice cream cones are the *individuals*.
Each cone has three attributes: flavor, color, and price. Each column contains the data on one of these attributes, and so all the entries of any single column are of the same kind. Each column has a label. We will refer to columns by their labels.
A table method is just like a function, but it must operate on a table. So the call looks like
`name_of_table.method(arguments)`
For example, if you want to see just the first two rows of a table, you can use the table method `show`.
```
cones.show(2)
```
You can replace 2 by any number of rows. If you ask for more than six, you will only get six, because `cones` only has six rows.
### Choosing Sets of Columns ###
The method `select` creates a new table consisting of only the specified columns.
```
cones.select('Flavor')
```
This leaves the original table unchanged.
```
cones
```
You can select more than one column, by separating the column labels by commas.
```
cones.select('Flavor', 'Price')
```
You can also *drop* columns you don't want. The table above can be created by dropping the `Color` column.
```
cones.drop('Color')
```
You can name this new table and look at it again by just typing its name.
```
no_colors = cones.drop('Color')
no_colors
```
Like `select`, the `drop` method creates a smaller table and leaves the original table unchanged. In order to explore your data, you can create any number of smaller tables by choosing or dropping columns. It will do no harm to your original data table.
### Sorting Rows ###
The `sort` method creates a new table by arranging the rows of the original table in ascending order of the values in the specified column. Here the `cones` table has been sorted in ascending order of the price of the cones.
```
cones.sort('Price')
```
To sort in descending order, you can use an *optional* argument to `sort`. As the name implies, optional arguments don't have to be used, but they can be used if you want to change the default behavior of a method.
By default, `sort` sorts in increasing order of the values in the specified column. To sort in decreasing order, use the optional argument `descending=True`.
```
cones.sort('Price', descending=True)
```
Like `select` and `drop`, the `sort` method leaves the original table unchanged.
### Selecting Rows that Satisfy a Condition ###
The `where` method creates a new table consisting only of the rows that satisfy a given condition. In this section we will work with a very simple condition, which is that the value in a specified column must be equal to a value that we also specify. Thus the `where` method has two arguments.
The code in the cell below creates a table consisting only of the rows corresponding to chocolate cones.
```
cones.where('Flavor', 'chocolate')
```
The arguments, separated by a comma, are the label of the column and the value we are looking for in that column. The `where` method can also be used when the condition that the rows must satisfy is more complicated. In those situations the call will be a little more complicated as well.
It is important to provide the value exactly. For example, if we specify `Chocolate` instead of `chocolate`, then `where` correctly finds no rows where the flavor is `Chocolate`.
```
cones.where('Flavor', 'Chocolate')
```
Like all the other table methods in this section, `where` leaves the original table unchanged.
### Example: Salaries in the NBA ###
"The NBA is the highest paying professional sports league in the world," [reported CNN](http://edition.cnn.com/2015/12/04/sport/gallery/highest-paid-nba-players/) in March 2016. The table `nba` contains the [salaries of all National Basketball Association players](https://www.statcrunch.com/app/index.php?dataid=1843341) in 2015-2016.
Each row represents one player. The columns are:
| **Column Label** | Description |
|--------------------|-----------------------------------------------------|
| `PLAYER` | Player's name |
| `POSITION` | Player's position on team |
| `TEAM` | Team name |
|`SALARY` | Player's salary in 2015-2016, in millions of dollars|
The code for the positions is PG (Point Guard), SG (Shooting Guard), PF (Power Forward), SF (Small Forward), and C (Center). But what follows doesn't involve details about how basketball is played.
The first row shows that Paul Millsap, Power Forward for the Atlanta Hawks, had a salary of almost $\$18.7$ million in 2015-2016.
```
nba
```
Fans of Stephen Curry can find his row by using `where`.
```
nba.where('PLAYER', 'Stephen Curry')
```
We can also create a new table called `warriors` consisting of just the data for the Golden State Warriors.
```
warriors = nba.where('TEAM', 'Golden State Warriors')
warriors
```
By default, the first 10 lines of a table are displayed. You can use `show` to display more or fewer. To display the entire table, use `show` with no argument in the parentheses.
```
warriors.show()
```
The `nba` table is sorted in alphabetical order of the team names. To see how the players were paid in 2015-2016, it is useful to sort the data by salary. Remember that by default, the sorting is in increasing order.
```
nba.sort('SALARY')
```
These figures are somewhat difficult to compare as some of these players changed teams during the season and received salaries from more than one team; only the salary from the last team appears in the table.
The CNN report is about the other end of the salary scale – the players who are among the highest paid in the world. To identify these players we can sort in descending order of salary and look at the top few rows.
```
nba.sort('SALARY', descending=True)
```
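To look at just the top few rows directly, `sort` and `show` can be combined (an illustrative call, not part of the original text):
```
nba.sort('SALARY', descending=True).show(3)
```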
Kobe Bryant, since retired, was the highest earning NBA player in 2015-2016.
|
github_jupyter
|
from datascience import *
path_data = '../../../data/'
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
cones = Table.read_table(path_data + 'cones.csv')
nba = Table.read_table(path_data + 'nba_salaries.csv').relabeled(3, 'SALARY')
movies = Table.read_table(path_data + 'movies_by_year.csv')
cones
cones.show(2)
cones.select('Flavor')
cones
cones.select('Flavor', 'Price')
cones.drop('Color')
no_colors = cones.drop('Color')
no_colors
cones.sort('Price')
cones.sort('Price', descending=True)
cones.where('Flavor', 'chocolate')
cones.where('Flavor', 'Chocolate')
nba
nba.where('PLAYER', 'Stephen Curry')
warriors = nba.where('TEAM', 'Golden State Warriors')
warriors
warriors.show()
nba.sort('SALARY')
nba.sort('SALARY', descending=True)
| 0.4206 | 0.94428 |
# Tutorial
**Note**: This guide assumes you have installed QIIME 2 using one of the procedures in the [install documents](https://docs.qiime2.org/2019.1/install/) and have installed [DEICODE](https://library.qiime2.org/plugins/q2-deicode).
## Introduction
In this tutorial you will learn how to interpret and perform Robust Aitchison PCA through QIIME 2. The focus of this tutorial is compositional beta diversity. There are many beta diversity metrics that have been proposed, all with varying benefits on varying data structures. However, presence/absence metrics often prove to give better results than those that rely on abundances (i.e. unweighted vs. weighted UniFrac). One component of this phenomenon is that the interpretation of relative abundances can provide spurious results (see [the differential abundance analysis introduction](https://docs.qiime2.org/2019.1/tutorials/gneiss/)). One solution to this problem is to use a compositional distance metric such as Aitchison distance.
As a toy example, let’s build three taxa. These three taxa represent common distributions we see in microbiome datasets. The first taxon increases exponentially across samples, which is the kind of trend we would be interested in. However, taxa 2 and 3 have much higher counts, and taxon 3 fluctuates randomly across samples.

In the distances below we have Euclidean, Bray-Curtis, Jaccard, and Aitchison distances (from left to right). We can see that the abundance-based metrics, Euclidean and Bray-Curtis, are heavily influenced by the abundance of taxon 3 and seem to fluctuate randomly. In the presence/absence metric, Jaccard, we see that the distance saturates to one very quickly. However, in the Aitchison distance we see a linear curve representing taxon 1. The reason the distance is linear is that Aitchison distance relies on log transforms (the log of the exponential taxon 1 is linear).

From this toy example, it is clear that Aitchison distance better accounts for the proportions. However, we made the unrealistic assumption in our toy example that there were no zero counts. In real microbiome datasets there are a large number of zeros (i.e. sparsity). Sparsity complicates log ratio transformations because the log ratio of zero is undefined. To solve this, pseudo counts, which can skew results, are commonly used (see [Naught all zeros in sequence count data are the same](https://www.biorxiv.org/content/10.1101/477794v1)).
Robust Aitchison PCA solves this problem in two steps:
1. Compositional preprocessing using the centered log ratio transform on only the non-zero values of the data (no pseudo count)
2. Dimensionality reduction through PCA on only the non-zero values of the data (via [SVD]( https://en.wikipedia.org/wiki/Singular_value_decomposition) by [matrix completion]( https://arxiv.org/pdf/0906.2027.pdf)).
To demonstrate this in action we will run an example dataset below, where the output can be viewed as a compositional biplot through emperor.
## Example
In this example we will use Robust Aitchison PCA via DEICODE on the “Moving Pictures” tutorial, if you have not yet completed the tutorial it can be found [here](https://docs.qiime2.org/2019.1/tutorials/moving-pictures/). The dataset consists of human microbiome samples from two individuals at four body sites at five timepoints, the first of which immediately followed antibiotic usage ([Caporaso et al. 2011](https://www.ncbi.nlm.nih.gov/pubmed/21624126)). If you have completed this tutorial run the following command and skip the download section.
##### Table [view](https://view.qiime2.org/?src=https%3A%2F%2Fdocs.qiime2.org%2F2019.1%2Fdata%2Ftutorials%2Fmoving-pictures%2Ftable.qza) | [download](https://docs.qiime2.org/2019.1/data/tutorials/moving-pictures/table.qza)
**save as:** table.qza
##### Sample Metadata [download](https://data.qiime2.org/2019.1/tutorials/moving-pictures/sample_metadata.tsv)
**save as:** sample-metadata.tsv
##### Feature Metadata [view](https://view.qiime2.org/?src=https%3A%2F%2Fdocs.qiime2.org%2F2019.1%2Fdata%2Ftutorials%2Fmoving-pictures%2Ftaxonomy.qza) | [download](https://docs.qiime2.org/2019.1/data/tutorials/moving-pictures/taxonomy.qza)
**save as:** taxonomy.qza
```
!cd qiime2-moving-pictures-tutorial
```
Using table.qza, of the type raw count table (FeatureTable[Frequency]), we will generate our beta diversity ordination file. There are a few parameters to DEICODE that we may want to consider. The first is the filtering cutoffs, `p-min-feature-count` and `p-min-sample-count`. Both of these parameters accept integer values and remove features or samples, respectively, whose sums fall below the cutoff. The feature cutoff is useful in the case that features with very low total counts among all samples represent contamination or chimeric sequences. The sample cutoff is useful for the case that some samples received very few reads relative to the other samples.
**Note:** it is _not_ recommended to bin your features by taxonomic assignment (i.e. by genus level).
**Note:** it is _not_ recommended to rarefy your data before using DEICODE.
The other two parameters are --p-rank and --p-iterations. These parameters should rarely need to change from their defaults. However, the minimum value of --p-rank is 1 and the maximum recommended value is 10. Similarly, the minimum value of --p-iterations is 1 and it is recommended to stay below 500.
Now that we understand the acceptable parameters, we are ready to run DEICODE.
```
!qiime dev refresh-cache
!qiime deicode rpca \
--i-table qiime2-moving-pictures-tutorial/table.qza \
--p-min-feature-count 10 \
--p-min-sample-count 500 \
--o-biplot qiime2-moving-pictures-tutorial/ordination.qza \
--o-distance-matrix qiime2-moving-pictures-tutorial/distance.qza
```
Now that we have our ordination file, with type (PCoAResults % Properties(['biplot'])), we are ready to visualize the results. This can be done using the [Emperor](https://docs.qiime2.org/2019.1/plugins/available/emperor/) biplot functionality. In this case we will include metadata for our features (optional) and our samples (required).
```
!qiime emperor biplot \
--i-biplot qiime2-moving-pictures-tutorial/ordination.qza \
--m-sample-metadata-file qiime2-moving-pictures-tutorial/sample-metadata.tsv \
--m-feature-metadata-file qiime2-moving-pictures-tutorial/taxonomy.qza \
--o-visualization qiime2-moving-pictures-tutorial/biplot.qzv \
--p-number-of-features 8
```
Biplots are exploratory visualization tools that allow us to represent the features (i.e. taxonomy or OTUs) that strongly influence the principal component axes as arrows. The interpretation of the compositional biplot differs slightly from classical biplot interpretation (we can view the qzv file at [view.qiime2](https://view.qiime2.org)). The important features with regard to sample clusters are given not by a single arrow but by the log ratio between features represented by arrows pointing in different directions. A visualization tool for these log ratios is coming soon to QIIME.

From this visualization we noticed that BodySite seems to explain the clusters well. We can run [PERMANOVA](https://docs.qiime2.org/2019.1/plugins/available/diversity/beta-group-significance/) on the distances to get a statistical significance for this.
```
!qiime diversity beta-group-significance \
--i-distance-matrix qiime2-moving-pictures-tutorial/distance.qza \
--m-metadata-file qiime2-moving-pictures-tutorial/sample-metadata.tsv \
--m-metadata-column BodySite \
--p-method permanova \
--o-visualization qiime2-moving-pictures-tutorial/BodySite_significance.qzv
```
Indeed we can now see that the clusters we saw in the biplot were significant by viewing the BodySite_significance.qzv at [view.qiime2](https://view.qiime2.org).

|
github_jupyter
|
!cd qiime2-moving-pictures-tutorial
!qiime dev refresh-cache
!qiime deicode rpca \
--i-table qiime2-moving-pictures-tutorial/table.qza \
--p-min-feature-count 10 \
--p-min-sample-count 500 \
--o-biplot qiime2-moving-pictures-tutorial/ordination.qza \
--o-distance-matrix qiime2-moving-pictures-tutorial/distance.qza
!qiime emperor biplot \
--i-biplot qiime2-moving-pictures-tutorial/ordination.qza \
--m-sample-metadata-file qiime2-moving-pictures-tutorial/sample-metadata.tsv \
--m-feature-metadata-file qiime2-moving-pictures-tutorial/taxonomy.qza \
--o-visualization qiime2-moving-pictures-tutorial/biplot.qzv \
--p-number-of-features 8
!qiime diversity beta-group-significance \
--i-distance-matrix qiime2-moving-pictures-tutorial/distance.qza \
--m-metadata-file qiime2-moving-pictures-tutorial/sample-metadata.tsv \
--m-metadata-column BodySite \
--p-method permanova \
--o-visualization qiime2-moving-pictures-tutorial/BodySite_significance.qzv
| 0.341473 | 0.991859 |
# Multiscale Object Detection
:label:`sec_multiscale-object-detection`
In :numref:`sec_anchor`, we generated multiple anchor boxes centered on each pixel of the input image.
Essentially, these anchor boxes represent samples of different regions of the image.
However, if we generate anchor boxes for every pixel, we may end up with far too many anchor boxes to compute.
Consider a $561 \times 728$ input image: if five anchor boxes of different shapes are generated centered on each pixel, more than two million anchor boxes ($561 \times 728 \times 5$) would need to be labeled and predicted on the image.
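A quick sanity check of that count (plain Python arithmetic added for illustration):
```
h, w, boxes_per_pixel = 561, 728, 5
h * w * boxes_per_pixel  # 2042040, i.e. more than two million anchor boxes
```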
## Multiscale Anchor Boxes
:label:`subsec_multiscale-anchor-boxes`
You may realize that it is not difficult to reduce the number of anchor boxes on an image.
For example, we can uniformly sample a small portion of pixels from the input image and generate anchor boxes centered only on them.
In addition, at different scales we can generate different numbers of anchor boxes of different sizes.
Intuitively, smaller objects are more likely than larger ones to appear on an image in more varied positions.
For example, objects of size $1 \times 1$, $1 \times 2$, and $2 \times 2$ can appear on a $2 \times 2$ image in 4, 2, and 1 possible ways, respectively.
Therefore, when using smaller anchor boxes to detect smaller objects we can sample more regions, while for larger objects we can sample fewer regions.
To demonstrate how to generate anchor boxes at multiple scales, let's first read an image.
Its height and width are 561 and 728 pixels, respectively.
```
%matplotlib inline
from mxnet import image, np, npx
from d2l import mxnet as d2l
npx.set_np()
img = image.imread('../img/catdog.jpg')
h, w = img.shape[:2]
h, w
```
Recall that in :numref:`sec_conv_layer` we called the two-dimensional array output of a convolutional layer a feature map.
By defining the shape of the feature map, we can determine the centers of the uniformly sampled anchor boxes on any image.
The `display_anchors` function is defined below.
We [**generate anchor boxes (`anchors`) on the feature map (`fmap`), with each unit (pixel) as the center of an anchor box**].
Since the $(x, y)$-axis coordinate values in the anchor boxes (`anchors`) have been divided by the width and height of the feature map (`fmap`), these values lie between 0 and 1 and indicate the relative positions of the anchor boxes in the feature map.
Since the centers of the anchor boxes (`anchors`) are spread over all units of the feature map (`fmap`), these centers must be *uniformly* distributed on any input image in terms of their relative spatial positions.
More concretely, given the width and height of the feature map, `fmap_w` and `fmap_h`, the following function will *uniformly* sample pixels in `fmap_h` rows and `fmap_w` columns on any input image.
Centered on these uniformly sampled pixels, anchor boxes of scale `s` (assuming the length of the list `s` is 1) and different aspect ratios (`ratios`) will be generated.
```
def display_anchors(fmap_w, fmap_h, s):
d2l.set_figsize()
    # The values on the first two dimensions do not affect the output
fmap = np.zeros((1, 10, fmap_h, fmap_w))
anchors = npx.multibox_prior(fmap, sizes=s, ratios=[1, 2, 0.5])
bbox_scale = np.array((w, h, w, h))
d2l.show_bboxes(d2l.plt.imshow(img.asnumpy()).axes,
anchors[0] * bbox_scale)
```
First, let's consider [**detecting small objects**].
To make the display easier to read, the anchor boxes with different centers here do not overlap:
the anchor box scale is set to 0.15 and the height and width of the feature map are set to 4.
We can see that the centers of the anchor boxes in 4 rows and 4 columns on the image are uniformly distributed.
```
display_anchors(fmap_w=4, fmap_h=4, s=[0.15])
```
Then we [**reduce the height and width of the feature map by half and use larger anchor boxes to detect larger objects**].
When the scale is set to 0.4, some anchor boxes will overlap with each other.
```
display_anchors(fmap_w=2, fmap_h=2, s=[0.4])
```
Finally, we further [**reduce the height and width of the feature map by half and increase the anchor box scale to 0.8**].
Now the center of the anchor box is the center of the image.
```
display_anchors(fmap_w=1, fmap_h=1, s=[0.8])
```
## Multiscale Detection
Now that we have generated multiscale anchor boxes, we will use them to detect objects of various sizes at different scales.
Below we introduce a CNN-based multiscale object detection method that we will implement in :numref:`sec_ssd`.
At some scale, suppose we have $c$ feature maps of shape $h \times w$.
Using the method in :numref:`subsec_multiscale-anchor-boxes`, we generate $hw$ groups of anchor boxes, where each group has $a$ anchor boxes with the same center.
For example, at the first scale in the experiments of :numref:`subsec_multiscale-anchor-boxes`, given ten (the number of channels) $4 \times 4$ feature maps, we generated 16 groups of anchor boxes, each containing 3 anchor boxes with the same center.
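As a quick check of these counts (the per-pixel anchor count follows the $n + m - 1$ rule for $n$ scales and $m$ ratios from :numref:`sec_anchor`; plain Python arithmetic added for illustration):
```
fmap_h, fmap_w = 4, 4
num_sizes, num_ratios = 1, 3  # s=[0.15], ratios=[1, 2, 0.5]
groups = fmap_h * fmap_w  # 16 groups, one per spatial position
anchors_per_group = num_sizes + num_ratios - 1  # 3 anchors sharing each center
groups, anchors_per_group
```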
Next, each anchor box is labeled with a class and an offset based on the ground-truth bounding boxes.
At the current scale, the object detection model needs to predict the classes and offsets of the $hw$ groups of anchor boxes on the input image, where different groups have anchor boxes with different centers.
Suppose that the $c$ feature maps here are the intermediate outputs obtained by the CNN's forward propagation on the input image.
Since there are $hw$ different spatial positions on each feature map, the same spatial position can be thought of as holding $c$ units.
According to the definition of the receptive field in :numref:`sec_conv_layer`, the $c$ units of the feature maps at the same spatial position share the same receptive field on the input image:
they represent the information of the input image within that receptive field.
Therefore, we can transform the $c$ units of the feature maps at the same spatial position into the classes and offsets of the $a$ anchor boxes generated from that spatial position.
In essence, we use the information of the input image within a certain receptive field to predict the classes and offsets of the anchor boxes that are close to that region on the input image.
When the feature maps of different layers have receptive fields of different sizes on the input image, they can be used to detect objects of different sizes.
For example, we can design a neural network in which the feature map units closer to the output layer have wider receptive fields, so that they can detect larger objects from the input image.
In short, we can leverage the layer-wise representation of images at multiple levels provided by deep neural networks for multiscale object detection.
We will show how this works through a concrete example in :numref:`sec_ssd`.
## Summary
* At multiple scales, we can generate anchor boxes of different sizes to detect objects of different sizes.
* By defining the shape of feature maps, we can determine the centers of uniformly sampled anchor boxes on any image.
* We use the information of the input image within a certain receptive field to predict the classes and offsets of the anchor boxes that are close to that region on the input image.
* Through deep learning, we can leverage its layer-wise representation of images at multiple levels for multiscale object detection.
## Exercises
1. According to our discussion in :numref:`sec_alexnet`, deep neural networks learn hierarchical features of increasing levels of abstraction for images as the depth increases. In multiscale object detection, do feature maps at different scales correspond to different levels of abstraction? Why or why not?
1. At the first scale (`fmap_w=4, fmap_h=4`) in the experiments of :numref:`subsec_multiscale-anchor-boxes`, generate uniformly distributed anchor boxes that may overlap.
1. Given a feature map variable of shape $1 \times c \times h \times w$, where $c$, $h$, and $w$ are the number of channels, height, and width of the feature maps, respectively, how can you transform this variable into the classes and offsets of anchor boxes? What is the shape of the output?
[Discussions](https://discuss.d2l.ai/t/2947)
|
github_jupyter
|
%matplotlib inline
from mxnet import image, np, npx
from d2l import mxnet as d2l
npx.set_np()
img = image.imread('../img/catdog.jpg')
h, w = img.shape[:2]
h, w
def display_anchors(fmap_w, fmap_h, s):
d2l.set_figsize()
    # The values on the first two dimensions do not affect the output
fmap = np.zeros((1, 10, fmap_h, fmap_w))
anchors = npx.multibox_prior(fmap, sizes=s, ratios=[1, 2, 0.5])
bbox_scale = np.array((w, h, w, h))
d2l.show_bboxes(d2l.plt.imshow(img.asnumpy()).axes,
anchors[0] * bbox_scale)
display_anchors(fmap_w=4, fmap_h=4, s=[0.15])
display_anchors(fmap_w=2, fmap_h=2, s=[0.4])
display_anchors(fmap_w=1, fmap_h=1, s=[0.8])
| 0.19853 | 0.904059 |
```
%matplotlib inline
```
# <center> Doing Math with Python </center>
<center>
<p> <b>Amit Saha</b>
<p>May 29, PyCon US 2016 Education Summit
<p>Portland, Oregon
</center>
## About me
- Software Engineer at [Freelancer.com](https://www.freelancer.com) HQ in Sydney, Australia
- Author of "Doing Math with Python" (No Starch Press, 2015)
- Writes for Linux Voice, Linux Journal, etc.
- [Blog](http://echorand.me), [GitHub](http://github.com/amitsaha)
#### Contact
- [@echorand](http://twitter.com/echorand)
- [Email](mailto:[email protected])
### This talk - a proposal, a hypothesis, a statement
*Python can lead to a more enriching learning and teaching experience in the classroom*
```
As I will attempt to describe in the next slides, Python is an amazing way to lead to a more fun learning and teaching
experience.
It can be a basic calculator, a fancy calculator and
Math, Science, Geography..
Tools that will help us in that quest are:
```
### (Main) Tools
<img align="center" src="collage/logo_collage.png"></img>
### Python - a scientific calculator
- Python 3 is my favorite calculator (not Python 2 because 1/2 = 0)
- `fabs()`, `abs()`, `sin()`, `cos()`, `gcd()`, `log()` (See [math](https://docs.python.org/3/library/math.html))
- Descriptive statistics (See [statistics](https://docs.python.org/3/library/statistics.html#module-statistics)) - a small example below
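For instance, a couple of calls from the `statistics` and `math` modules (the values are just an illustration):
```
>>> import math, statistics
>>> statistics.mean([1, 2, 3, 4])
2.5
>>> statistics.median([1, 2, 3, 4])
2.5
>>> math.gcd(12, 18)
6
```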
### Python - a scientific calculator
- Develop your own functions: unit conversion, finding correlation, .., anything really
- Use PYTHONSTARTUP to extend the battery of readily available mathematical functions
```
$ PYTHONSTARTUP=~/work/dmwp/pycon-us-2016/startup_math.py idle3 -s
```
### Unit conversion functions
```
>>> unit_conversion()
1. Kilometers to Miles
2. Miles to Kilometers
3. Kilograms to Pounds
4. Pounds to Kilograms
5. Celsius to Fahrenheit
6. Fahrenheit to Celsius
Which conversion would you like to do? 6
Enter temperature in fahrenheit: 98
Temperature in celsius: 36.66666666666667
>>>
```
### Finding linear correlation
```
>>>
>>> x = [1, 2, 3, 4]
>>> y = [2, 4, 6.1, 7.9]
>>> find_corr_x_y(x, y)
0.9995411791453812
```
### Python - a really fancy calculator
SymPy - a pure Python symbolic math library
*from sympy import awesomeness* - don't try that :)
```
When you bring in SymPy to the picture, things really get awesome. You are suddenly writing computer
programs which are capable of speaking algebra. You are no more limited to numbers.
# Create graphs from algebraic expressions
from sympy import Symbol, plot
x = Symbol('x')
p = plot(2*x**2 + 2*x + 2)
# Solve equations
from sympy import solve, Symbol
x = Symbol('x')
solve(2*x + 1)
# Limits
from sympy import Symbol, Limit, sin
x = Symbol('x')
Limit(sin(x)/x, x, 0).doit()
# Derivative
from sympy import Symbol, Derivative, sin, init_printing
x = Symbol('x')
init_printing()
Derivative(sin(x)**(2*x+1), x).doit()
# Indefinite integral
from sympy import Symbol, Integral, sqrt, sin, init_printing
x = Symbol('x')
init_printing()
Integral(sqrt(x)).doit()
# Definite integral
from sympy import Symbol, Integral, sqrt
x = Symbol('x')
Integral(sqrt(x), (x, 0, 2)).doit()
```
### Python - Making other subjects more lively
<img align="center" src="collage/collage1.png"></img>
- matplotlib
- basemap
- Interactive Jupyter Notebooks
#### Bringing Science to life
*Animation of a Projectile motion*
#### Drawing fractals
*Interactively drawing a Barnsley Fern*
#### The world is your graph paper
*Showing places on a digital map*
### Great base for the future
*Statistics and Graphing data* -> *Data Science*
*Differential Calculus* -> *Machine learning*
### Application of differentiation
Use gradient descent to find a function's minimum value
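A minimal sketch of that idea (not from the talk; the function, learning rate, and step count are illustrative):
```
def gradient_descent(df, x0, lr=0.1, steps=100):
    """Follow the negative gradient of a single-variable function."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Minimum of f(x) = x**2 + 2*x is at x = -1; its derivative is 2*x + 2
print(gradient_descent(lambda x: 2 * x + 2, x0=0.0))  # ~ -1.0
```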
### Predict the college admission score based on high school math score
Use gradient descent as the optimizer for single variable linear regression model
```
### TODO: digit recognition using Neural networks
### Scikitlearn, pandas, scipy, statsmodel
```
### Book: Doing Math With Python
<img align="center" src="dmwp-cover.png" href="https://doingmathwithpython.github.io"></img>
Published by No Starch Press, out in 2015.
Early feedback very encouraging
#### Comments
> Saha does an excellent job providing a clear link between Python and upper-level math concepts, and demonstrates how Python can be transformed into a mathematical stage.
> This book is highly recommended for the high school or college student and anyone who is looking for a more natural way of programming math and scientific functions
> As a teacher I highly recommend this book as a way to work with someone in learning both math and programming
### Links
- [Doing Math with Python](http://nostarch.com/doingmathwithpython)
- [Doing Math with Python Blog](doingmathwithpython.github.io)
- [Upcoming O'Reilly Webcast](http://www.oreilly.com/pub/e/3712)
### PyCon Special!
*Use PYCONMATH code to get 30% off from No Starch Press*
(Valid from May 26th - June 8th)
Book Signing - May 31st - 2.00 PM - No Starch Press booth
### Dialogue
Questions, Thoughts, comments, discussions?
Online: @echorand, [email protected]
### Acknowledgements
PyCon US Education Summit team for inviting me
Thanks to PyCon US for reduced registration rates
Massive thanks to my employer, Freelancer.com for sponsoring my travel and stay
|
github_jupyter
|
%matplotlib inline
As I will attempt to describe in the next slides, Python is an amazing way to lead to a more fun learning and teaching
experience.
It can be a basic calculator, a fancy calculator and
Math, Science, Geography..
Tools that will help us in that quest are:
$ PYTHONSTARTUP=~/work/dmwp/pycon-us-2016/startup_math.py idle3 -s
>>> unit_conversion()
1. Kilometers to Miles
2. Miles to Kilometers
3. Kilograms to Pounds
4. Pounds to Kilograms
5. Celsius to Fahrenheit
6. Fahrenheit to Celsius
Which conversion would you like to do? 6
Enter temperature in fahrenheit: 98
Temperature in celsius: 36.66666666666667
>>>
>>>
>>> x = [1, 2, 3, 4]
>>> y = [2, 4, 6.1, 7.9]
>>> find_corr_x_y(x, y)
0.9995411791453812
When you bring in SymPy to the picture, things really get awesome. You are suddenly writing computer
programs which are capable of speaking algebra. You are no more limited to numbers.
# Create graphs from algebraic expressions
from sympy import Symbol, plot
x = Symbol('x')
p = plot(2*x**2 + 2*x + 2)
# Solve equations
from sympy import solve, Symbol
x = Symbol('x')
solve(2*x + 1)
# Limits
from sympy import Symbol, Limit, sin
x = Symbol('x')
Limit(sin(x)/x, x, 0).doit()
# Derivative
from sympy import Symbol, Derivative, sin, init_printing
x = Symbol('x')
init_printing()
Derivative(sin(x)**(2*x+1), x).doit()
# Indefinite integral
from sympy import Symbol, Integral, sqrt, sin, init_printing
x = Symbol('x')
init_printing()
Integral(sqrt(x)).doit()
# Definite integral
from sympy import Symbol, Integral, sqrt
x = Symbol('x')
Integral(sqrt(x), (x, 0, 2)).doit()
### TODO: digit recognition using Neural networks
### Scikitlearn, pandas, scipy, statsmodel
| 0.471467 | 0.973919 |
<a href="https://colab.research.google.com/github/Jobhert/Activity/blob/main/Assignment4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Linear Algebra for ECE
## Laboratory 4 : Matrices
Now that you have a fundamental knowledge about Python, we'll try to look into greater dimensions.
### Objectives
At the end of this activity you will be able to:
1. Be familiar with matrices and their relation to linear equations.
2. Perform basic matrix operations.
3. Program and translate matrix equations and operations using Python.
# Discussion
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
```
### **Matrices**
-A specialized two-dimensional rectangular array of data stored in rows and columns.
The notation and use of matrices is probably one of the fundamentals of modern computing. Matrices are also handy representations of complex equations or multiple inter-related equations from 2-dimensional equations to even hundreds and thousands of them.
Let's say for example you have $A$ and $B$ as system of equation.
$$
A = \left\{
\begin{array}\
x + y \\
4x - 10y
\end{array}
\right. \\
B = \left\{
\begin{array}\
x+y+z \\
3x -2y -z \\
-x + 4y +2z
\end{array}
\right. $$
We could see that $A$ is a system of 2 equations with 2 parameters. While $B$ is a system of 3 equations with 3 parameters. We can represent them as matrices as:
$$
A=\begin{bmatrix} 1 & 1 \\ 4 & {-10}\end{bmatrix} \\
B=\begin{bmatrix} 1 & 1 & 1 \\ 3 & -2 & -1 \\ -1 & 4 & 2\end{bmatrix}
$$
So assuming that you already discussed the fundamental representation of matrices, their types, and operations. We'll proceed in doing them in here in Python.
### Declaring Matrices
Just like our previous laboratory activity, we'll represent systems of linear equations as matrices. The entities or numbers in matrices are called the elements of a matrix. These elements are arranged and ordered in rows and columns, which form the list/array-like structure of matrices. And just like arrays, these elements are indexed according to their position with respect to their rows and columns. This can be represented just like the equation below, where $A$ is a matrix consisting of elements denoted by $a_{i,j}$; $i$ denotes the number of rows in the matrix while $j$ stands for the number of columns.<br>
Do note that the $size$ of a matrix is $i\times j$.
$$A=\begin{bmatrix}
a_{(0,0)}&a_{(0,1)}&\dots&a_{(0,j-1)}\\
a_{(1,0)}&a_{(1,1)}&\dots&a_{(1,j-1)}\\
\vdots&\vdots&\ddots&\vdots&\\
a_{(i-1,0)}&a_{(i-1,1)}&\dots&a_{(i-1,j-1)}
\end{bmatrix}
$$
We have already gone over some of the types of matrices as vectors, but we'll discuss them further in this laboratory activity. Since you already know how to describe vectors using the <b>shape</b>, <b>dimensions</b> and <b>size</b> attributes, we'll use them to analyze these matrices.
```
## Since we'll keep on describing matrices. Let's make a function.
def describe_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
## Declaring a 2 x 2 matrix
A = np.array([
[1, 2],
[3, 1]
])
describe_mat(A)
G = np.array([
[1,1],
[2,2]
])
describe_mat(G)
## Declaring a 3 x 2 matrix
B = np.array([
[8, 2],
[5, 4],
[1, 1]
])
describe_mat(B)
H = np.array([1,2,3,4,5])
describe_mat(H)
```
## Categorizing Matrices
There are several ways of classifying matrices. One could be according to their <b>shape</b> and another is according to their <b>element values</b>. We'll try to go through them.
### According to shape
-The function "shape" returns the shape of an array. The shape is a tuple of integers. These numbers denote the lengths of the corresponding array dimension.
#### Row and Column Matrices
Row and column matrices are common in vector and matrix computations. They can also represent the row and column spaces of a bigger vector space. Row and column matrices are represented by a single row or a single column. So with that being said, the shape of row matrices would be $1 \times j$ and column matrices would be $i \times 1$.
```
## Declaring a Row Matrix
row_mat_1D = np.array([
1, 3, 2
]) ## this is a 1-D Matrix with a shape of (3,), it's not really considered as a row matrix.
row_mat_2D = np.array([
[1,2,3]
]) ## this is a 2-D Matrix with a shape of (1,3)
describe_mat(row_mat_1D)
describe_mat(row_mat_2D)
## Declaring a Column Matrix
col_mat = np.array([
[1],
[2],
[5]
]) ## this is a 2-D Matrix with a shape of (3,1)
describe_mat(col_mat)
```
#### Square Matrices
Square matrices are matrices that have the same row and column sizes. We could say a matrix is square if $i = j$. We can tweak our matrix descriptor function to determine square matrices.
```
def describe_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
square_mat = np.array([
[1,2,5],
[3,3,8],
[6,1,2]
])
non_square_mat = np.array([
[1,2,5],
[3,3,8]
])
describe_mat(square_mat)
describe_mat(non_square_mat)
```
### According to element values
#### Null Matrix
-Has no rows and no columns.
A Null Matrix is a matrix that has no elements. It is always a subspace of any vector or matrix.
```
def describe_mat(matrix):
if matrix.size > 0:
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null')
null_mat = np.array([])
describe_mat(null_mat)
```
#### Zero Matrix
A zero matrix can be any rectangular matrix but with all elements having a value of 0.
```
zero_mat_row = np.zeros((1,2))
zero_mat_sqr = np.zeros((2,2))
zero_mat_rct = np.zeros((3,2))
print(f'Zero Row Matrix: \n{zero_mat_row}')
print(f'Zero Square Matrix: \n{zero_mat_sqr}')
print(f'Zero Rectangular Matrix: \n{zero_mat_rct}')
```
#### Ones Matrix
A ones matrix, just like the zero matrix, can be any rectangular matrix but all of its elements are 1s instead of 0s.
```
ones_mat_row = np.ones((1,2))
ones_mat_sqr = np.ones((2,2))
ones_mat_rct = np.ones((3,2))
print(f'Ones Row Matrix: \n{ones_mat_row}')
print(f'Ones Square Matrix: \n{ones_mat_sqr}')
print(f'Ones Rectangular Matrix: \n{ones_mat_rct}')
```
#### Diagonal Matrix
A diagonal matrix is a square matrix that has values only at the diagonal of the matrix.
```
np.array([
[2,0,0],
[0,3,0],
[0,0,5]
])
# a[1,1], a[2,2], a[3,3], ... a[n-1,n-1]
d = np.diag([2,3,5,7])
np.diag(d).shape == d.shape[0] == d.shape[1]
```
#### Identity Matrix
An identity matrix is a special diagonal matrix in which the values at the diagonal are ones.
```
np.eye(5)
np.identity(5)
```
#### Upper Triangular Matrix
An upper triangular matrix is a matrix that has no values below the diagonal.
```
np.array([
[1,2,3],
[0,3,1],
[0,0,5]
])
```
#### Lower Triangular Matrix
A lower triangular matrix is a matrix that has no values above the diagonal.
```
np.array([
[1,0,0],
[5,3,0],
[7,8,5]
])
```
## Practice
1. Given the linear combination below, try to create a corresponding matrix representing it.
$$\theta = 5x + 3y - z$$
```
theta = np.array([[5 , 3 , -1]])
describe_mat(theta)
```
2. Given the system of linear combinations below, try to encode it as a matrix. Also describe the matrix.
$$
A = \left\{\begin{array}
5x_1 + 2x_2 +x_3\\
4x_2 - x_3\\
10x_3
\end{array}\right.
$$
```
A=np.array([
[1,2,1],
[0,4,-1],
[0,0,10]
])
describe_mat(A)
```
3. Given the matrix below, express it as a linear combination in a markdown.
```
G = np.array([
[1,7,8],
[2,2,2],
[4,6,7]
])
```
$$
G=\begin{bmatrix} 1 & 7 & 8 \\ 2 & 2 & 2 \\ 4 & 6 & 7\end{bmatrix}
$$
$$
G = \left\{
\begin{array}\
x_1 + 7x_2 + 8x_3\\
2x_1 + 2x_2 + 2x_3\\
4x_1 + 6x_2 + 7x_3
\end{array}
\right. \\
$$
4. Given the matrix below, display the output as LaTeX markdown and also express it as a system of linear combinations.
```
H = np.tril(G)
H
```
# Matrix Algebra
A matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns.
### Addition
```
A = np.array([
[1,2],
[2,3],
[4,1]
])
B = np.array([
[2,2],
[0,0],
[1,1]
])
A+B
2+A ##Broadcasting
# 2*np.ones(A.shape)+A
```
### Subtraction
```
A-B
3-B == 3*np.ones(B.shape)-B
```
### Element-wise Multiplication
```
A*B
np.multiply(A,B)
2*A
A @ B.T  # matrix product; A @ B itself would raise a ValueError because the inner dimensions of (3, 2) and (3, 2) do not match
alpha=10**-10
A/(alpha+B)
np.add(A,B)
```
## Activity
### Task 1
Create a function named `mat_desc()` that thoroughly describes a matrix; it should: <br>
1. Displays the shape, size, and rank of the matrix. <br>
2. Displays whether the matrix is square or non-square. <br>
3. Displays whether the matrix is an empty matrix. <br>
4. Displays if the matrix is an identity, ones, or zeros matrix <br>
Use 5 sample matrices whose shapes are not smaller than $(3,3)$.
In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section showing the description of each matrix you have declared.
```
## Function area
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def desc_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
def desc_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
def desc_mat(matrix):
if matrix.size > 0:
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null')
## Matrix declarations
square_mat = np.array([
[3,4,4],
[5,6,7],
[8,1,2]
])
non_square_mat = np.array([
[2,2,7],
[3,4,7],
[1,2,3],
[6,6,6]
])
desc_mat(square_mat)
desc_mat(non_square_mat)
## Test Areas
null_mat = np.array([])
desc_mat(null_mat)
zero_mat = np.zeros((7,7))
print(f'Zero Matrix: \n{zero_mat}')
ones_mat = np.ones((6,6))
print(f'Ones Matrix: \n{ones_mat}')
np.identity(5)
```
### Task 2
Create a function named `mat_operations()` that takes in two matrices as input parameters; it should:<br>
1. Determines if the matrices are viable for operation and returns your own error message if they are not viable.
2. Returns the sum of the matrices.
3. Returns the difference of the matrices.
4. Returns the element-wise multiplication of the matrices.
5. Returns the element-wise division of the matrices.
Use 5 sample matrices whose shapes are not smaller than $(3,3)$.
In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section showing the description of each matrix you have declared.
```
##Function Area
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def mat_operation(matrix):
if matrix.size > 0:
is_viable = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nViable: {is_viable}')
else:
print('Not Viable')
## Matrix declarations
A = np.array([
[7,7,7],
[3,4,5],
[8,9,10]
])
B = np.array([
[9,9,9],
[14,13,11],
[10,9,3]
])
A+B
mat_operation(A+B)
## Test Areas
A*B
np.multiply(A,B)
mat_operation(A*B)
A-B
mat_operation(A-B)
A/B
np.divide(A,B)
mat_operation(A/B)
```
## Conclusion
To conclude, in this laboratory experiment I was able to understand more deeply the different types of matrices and how to operate on them. A matrix can represent data or mathematical equations, and it can help us by providing quick approximations of complicated calculations. Matrices can also help solve problems in technology through coding and decoding.
## References
[1] N. Klein, Coding the Matrix: Linear Algebra through Applications to Computer Science 1st Edition (2013)
[2] Golub and C Van Loan, “Matrix Computations”
[3] Introduction to Matrices [Online] https://courses.lumenlearning.com/boundless-algebra/chapter/introduction-to-matrices/
[4] R. Pierce, Matrix Rank [Online] http://www.mathsisfun.com/algebra/matrix-rank.html
|
github_jupyter
|
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
## Since we'll keep on describing matrices. Let's make a function.
def describe_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
## Declaring a 2 x 2 matrix
A = np.array([
[1, 2],
[3, 1]
])
describe_mat(A)
G = np.array([
[1,1],
[2,2]
])
describe_mat(G)
## Declaring a 3 x 2 matrix
B = np.array([
[8, 2],
[5, 4],
[1, 1]
])
describe_mat(B)
H = np.array([1,2,3,4,5])
describe_mat(H)
## Declaring a Row Matrix
row_mat_1D = np.array([
1, 3, 2
]) ## this is a 1-D Matrix with a shape of (3,), it's not really considered as a row matrix.
row_mat_2D = np.array([
[1,2,3]
]) ## this is a 2-D Matrix with a shape of (1,3)
describe_mat(row_mat_1D)
describe_mat(row_mat_2D)
## Declaring a Column Matrix
col_mat = np.array([
[1],
[2],
[5]
]) ## this is a 2-D Matrix with a shape of (3,1)
describe_mat(col_mat)
def describe_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
square_mat = np.array([
[1,2,5],
[3,3,8],
[6,1,2]
])
non_square_mat = np.array([
[1,2,5],
[3,3,8]
])
describe_mat(square_mat)
describe_mat(non_square_mat)
def describe_mat(matrix):
if matrix.size > 0:
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null')
null_mat = np.array([])
describe_mat(null_mat)
zero_mat_row = np.zeros((1,2))
zero_mat_sqr = np.zeros((2,2))
zero_mat_rct = np.zeros((3,2))
print(f'Zero Row Matrix: \n{zero_mat_row}')
print(f'Zero Square Matrix: \n{zero_mat_sqr}')
print(f'Zero Rectangular Matrix: \n{zero_mat_rct}')
ones_mat_row = np.ones((1,2))
ones_mat_sqr = np.ones((2,2))
ones_mat_rct = np.ones((3,2))
print(f'Ones Row Matrix: \n{ones_mat_row}')
print(f'Ones Square Matrix: \n{ones_mat_sqr}')
print(f'Ones Rectangular Matrix: \n{ones_mat_rct}')
np.array([
[2,0,0],
[0,3,0],
[0,0,5]
])
# a[1,1], a[2,2], a[3,3], ... a[n-1,n-1]
d = np.diag([2,3,5,7])
np.diag(d).shape == d.shape[0] == d.shape[1]
np.eye(5)
np.identity(5)
np.array([
[1,2,3],
[0,3,1],
[0,0,5]
])
np.array([
[1,0,0],
[5,3,0],
[7,8,5]
])
theta = np.array([[5 , 3 , -1]])
describe_mat(theta)
A=np.array([
[1,2,1],
[0,4,-1],
[0,0,10]
])
describe_mat(A)
G = np.array([
[1,7,8],
[2,2,2],
[4,6,7]
])
H = np.tril(G)
H
A = np.array([
[1,2],
[2,3],
[4,1]
])
B = np.array([
[2,2],
[0,0],
[1,1]
])
A+B
2+A ##Broadcasting
# 2*np.ones(A.shape)+A
A-B
3-B == 3*np.ones(B.shape)-B
A*B
np.multiply(A,B)
2*A
A @ B.T  # matrix product; A @ B itself would raise a ValueError because the inner dimensions of (3, 2) and (3, 2) do not match
alpha=10**-10
A/(alpha+B)
np.add(A,B)
## Function area
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def desc_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
def desc_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
def desc_mat(matrix):
if matrix.size > 0:
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null')
## Matrix declarations
square_mat = np.array([
[3,4,4],
[5,6,7],
[8,1,2]
])
non_square_mat = np.array([
[2,2,7],
[3,4,7],
[1,2,3],
[6,6,6]
])
desc_mat(square_mat)
desc_mat(non_square_mat)
## Test Areas
null_mat = np.array([])
desc_mat(null_mat)
zero_mat = np.zeros((7,7))
print(f'Zero Matrix: \n{zero_mat}')
ones_mat = np.ones((6,6))
print(f'Ones Matrix: \n{ones_mat}')
np.identity(5)
##Function Area
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def mat_operation(matrix):
if matrix.size > 0:
is_viable = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nViable: {is_viable}')
else:
print('Not Viable')
## Matrix declarations
A = np.array([
[7,7,7],
[3,4,5],
[8,9,10]
])
B = np.array([
[9,9,9],
[14,13,11],
[10,9,3]
])
A+B
mat_operation(A+B)
## Test Areas
A*B
np.multiply(A,B)
mat_operation(A*B)
A-B
mat_operation(A-B)
A/B
np.divide(A,B)
mat_operation(A/B)
| 0.52829 | 0.994253 |
# Classify structured data with feature columns
This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). We will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [feature columns](https://www.tensorflow.org/guide/feature_columns) as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:
* Load a CSV file using [Pandas](https://pandas.pydata.org/).
* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).
* Map from columns in the CSV to features used to train the model using feature columns.
* Build, train, and evaluate a model using Keras.
## The Dataset
We will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.
Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.
>Column| Description| Feature Type | Data Type
>------------|--------------------|----------------------|-----------------
>Age | Age in years | Numerical | integer
>Sex | (1 = male; 0 = female) | Categorical | integer
>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer
>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer
>Chol | Serum cholestoral in mg/dl | Numerical | integer
>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer
>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer
>Thalach | Maximum heart rate achieved | Numerical | integer
>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer
>Oldpeak | ST depression induced by exercise relative to rest | Numerical | integer
>Slope | The slope of the peak exercise ST segment | Numerical | float
>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer
>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string
>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer
## Import TensorFlow and other libraries
```
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
```
## Use Pandas to create a dataframe
[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
```
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
```
## Split the dataframe into train, validation, and test
The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
```
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
```
## Create an input pipeline using tf.data
Next, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this tutorial.
```
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```
## Understand the input pipeline
Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
```
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
```
We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
## Demonstrate several types of feature column
TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
```
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
```
### Numeric columns
The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
```
age = feature_column.numeric_column("age")
demo(age)
```
In the heart disease dataset, most columns from the dataframe are numeric.
### Bucketized columns
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
```
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
```
### Categorical columns
In this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
```
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
```
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets.
### Embedding columns
Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grow large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.
Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
```
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
```
### Hashed feature columns
Another way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.
Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
```
thal_hashed = feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
```
### Crossed feature columns
Combining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/#feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
```
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
```
## Choose which columns to use
We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. We have selected a few columns to train our model below arbitrarily.
Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
```
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
```
### Create a feature layer
Now that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to our Keras model.
```
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
```
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
```
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```
## Create, compile, and train the model
```
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
```
Key point: You will typically see best results with deep learning with much larger and more complex datasets. When working with a small dataset like this one, we recommend using a decision tree or random forest as a strong baseline. The goal of this tutorial is not to train an accurate model, but to demonstrate the mechanics of working with structured data, so you have code to use as a starting point when working with your own datasets in the future.
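As a rough sketch of such a baseline (not part of the original tutorial; it assumes the `train` and `test` dataframes from above are still in scope and keep their `target` column, and it one-hot encodes the categorical `thal` column with pandas):
```
# A rough sketch of the suggested tree-based baseline (illustrative only).
from sklearn.ensemble import RandomForestClassifier

# One-hot encode the categorical 'thal' column; everything else is numeric.
train_df = pd.get_dummies(train, columns=['thal'])
test_df = pd.get_dummies(test, columns=['thal'])
# Align the columns in case a category is missing from one of the splits.
test_df = test_df.reindex(columns=train_df.columns, fill_value=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(train_df.drop(columns=['target']), train_df['target'])
print('Random forest baseline accuracy:',
      rf.score(test_df.drop(columns=['target']), test_df['target']))
```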
## Next steps
The best way to learn more about classifying structured data is to try it yourself. We suggest finding another dataset to work with, and training a model to classify it using code similar to the above. To improve accuracy, think carefully about which features to include in your model, and how they should be represented.
|
github_jupyter
|
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
age = feature_column.numeric_column("age")
demo(age)
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
thal_hashed = feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
| 0.834946 | 0.993716 |
# Classification of Weather Data using scikit-learn
## Daily Weather Data Analysis
Creating a decision tree based classification of weather data using scikit-learn
```
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
import seaborn as sns
import matplotlib.pyplot as plt
import os
os.listdir('./weather')
data = pd.read_csv('./weather/daily_weather.csv')
```
## Daily Weather Data Description
The file **daily_weather.csv** is a comma-separated file that contains weather data. This data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data from different seasons and weather conditions is captured.
Each row in daily_weather.csv captures weather data for a separate day.
Sensor measurements from the weather station were captured at one-minute intervals. These measurements were then processed to generate values to describe daily weather. Since this dataset was created to classify low-humidity days vs. non-low-humidity days (that is, days with normal or high humidity), the variables included are weather measurements in the morning, with one measurement, namely relatively humidity, in the afternoon. The idea is to use the morning weather values to predict whether the day will be low-humidity or not based on the afternoon measurement of relative humidity.
Each row, or sample, consists of the following variables:
* **number**: unique number for each row
* **air_pressure_9am**: air pressure averaged over a period from 8:55am to 9:04am (Unit: hectopascals)
* **air_temp_9am**: air temperature averaged over a period from 8:55am to 9:04am (Unit: degrees Fahrenheit)
* **avg_wind_direction_9am**: wind direction averaged over a period from 8:55am to 9:04am (Unit: degrees, with 0 meaning the wind comes from the North, increasing clockwise)
* **avg_wind_speed_9am**: wind speed averaged over a period from 8:55am to 9:04am (Unit: miles per hour)
* **max_wind_direction_9am**: wind gust direction averaged over a period from 8:55am to 9:10am (Unit: degrees, with 0 being North and increasing clockwise)
* **max_wind_speed_9am**: wind gust speed averaged over a period from 8:55am to 9:04am (Unit: miles per hour)
* **rain_accumulation_9am**: amount of rain accumulated in the 24 hours prior to 9am (Unit: millimeters)
* **rain_duration_9am**: amount of time rain was recorded in the 24 hours prior to 9am (Unit: seconds)
* **relative_humidity_9am**: relative humidity averaged over a period from 8:55am to 9:04am (Unit: percent)
* **relative_humidity_3pm**: relative humidity averaged over a period from 2:55pm to 3:04pm (Unit: percent)
```
data.head()
# check missing data
total = data.isnull().sum().sort_values(ascending=False)
percent = (data.isnull().sum()/data.isnull().count()*100).sort_values(ascending=False)
dataMissing = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
dataMissing.head(15)
```
## Cleaning Data
```
del data['number']
data.shape
data = data.dropna()
data.shape
```
We lost almost 3% of the dataframe's rows by dropping the missing values.
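As a minimal sketch of where that figure comes from, the check below simply re-reads the CSV and compares row counts before and after `dropna` (it assumes the same `./weather/daily_weather.csv` path used above):
```
import pandas as pd

# Re-read the raw file so the before/after counts are explicit
raw = pd.read_csv('./weather/daily_weather.csv')
cleaned = raw.dropna()
lost = len(raw) - len(cleaned)
print(f"Dropped {lost} of {len(raw)} rows ({100 * lost / len(raw):.1f}%)")
```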
## Converting to a Classification Task
Binarize `relative_humidity_3pm` to 0 or 1 (1 when the 3pm relative humidity is above 24.99%).
```
cleanData = data.copy()
cleanData['high_humidity_label'] = (cleanData.relative_humidity_3pm > 24.99)*1
print(cleanData.high_humidity_label)
```
### Target is stored in 'y'
```
y = cleanData[['high_humidity_label']].copy()
cleanData.relative_humidity_3pm.head()
y.head()
```
## Using 9am sensor signals as features to predict humidity at 3pm
```
morningFeatures = ['air_pressure_9am','air_temp_9am',
'avg_wind_direction_9am','avg_wind_speed_9am',
'max_wind_direction_9am','max_wind_speed_9am',
'rain_accumulation_9am','rain_duration_9am'
]
X = cleanData[morningFeatures].copy()
X.columns
y.columns
```
## Test and train split
```
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.33, random_state = 0)
#print(type(X_train))
#print(type(X_test))
#print(type(y_train))
#print(type(y_test))
#X_train.head()
#y_train.head()
#y_train.describe()
```
## Fit on train set
```
classifier = DecisionTreeClassifier(max_leaf_nodes = 10, random_state = 42)
classifier.fit(X_train,y_train)
type(classifier)
```
## Predict on test set
```
predictions = classifier.predict(X_test)
predictions[:10]
y_test['high_humidity_label'][:10]
```
## Accuracy of the classifier
```
accuracy_score(y_true = y_test, y_pred = predictions)
```
|
github_jupyter
|
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
import seaborn as sns
import matplotlib.pyplot as plt
import os
os.listdir('./weather')
data = pd.read_csv('./weather/daily_weather.csv')
data.head()
# check missing data
total = data.isnull().sum().sort_values(ascending=False)
percent = (data.isnull().sum()/data.isnull().count()*100).sort_values(ascending=False)
dataMissing = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
dataMissing.head(15)
del data['number']
data.shape
data = data.dropna()
data.shape
cleanData = data.copy()
cleanData['high_humidity_label'] = (cleanData.relative_humidity_3pm > 24.99)*1
print(cleanData.high_humidity_label)
y = cleanData[['high_humidity_label']].copy()
cleanData.relative_humidity_3pm.head()
y.head()
morningFeatures = ['air_pressure_9am','air_temp_9am',
'avg_wind_direction_9am','avg_wind_speed_9am',
'max_wind_direction_9am','max_wind_speed_9am',
'rain_accumulation_9am','rain_duration_9am'
]
X = cleanData[morningFeatures].copy()
X.columns
y.columns
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.33, random_state = 0)
#print(type(X_train))
#print(type(X_test))
#print(type(y_train))
#print(type(y_test))
#X_train.head()
#y_train.head()
#y_train.describe()
classifier = DecisionTreeClassifier(max_leaf_nodes = 10, random_state = 42)
classifier.fit(X_train,y_train)
type(classifier)
predictions = classifier.predict(X_test)
predictions[:10]
y_test['high_humidity_label'][:10]
accuracy_score(y_true = y_test, y_pred = predictions)
| 0.278944 | 0.986205 |
```
from altair import *
import csv
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display # Allows the use of display() for DataFrames
%matplotlib inline
# Open csv file and read rows into a pandas dataframe
df = pd.read_csv('movies.csv')
print "Dataset has {} rows and {} columns.".format(*df.shape)
display(df.head())
```
# Data Visualization
```
def histogram(data, **bin_kwds):
"""
Create a Histogram of a 1-dimensional array or series of data
All parameters are passed to the altair's ``Bin`` class
"""
return Chart(data).mark_bar().encode(
x=X('Academy Awards, USA', bin=Bin(**bin_kwds)),
y='count(*):Q'
)
#histogram(df, maxbins=20)
fig, (ax1, ax2, ax3, ax4) = plt.subplots(ncols=4, figsize=(12, 6), sharey=True)
sns.countplot(x="Academy Awards, USA", data=df, ax=ax1)
sns.countplot(x="Screen Actors Guild Awards", data=df, ax=ax2)
sns.countplot(x="PGA Awards", data=df, ax=ax3)
sns.countplot(x="Directors Guild of America, USA", data=df, ax=ax4)
```
# Preprocessing
```
from sklearn.model_selection import StratifiedShuffleSplit
X = df.drop(['Title', 'Academy Awards, USA'], axis=1, inplace=False)
y = df['Academy Awards, USA']
sss = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=42)
for train_ind, test_ind in sss.split(X, y):
print "TRAIN:", train_ind, "TEST:", test_ind
X_train, X_test = X.iloc[train_ind], X.iloc[test_ind]
y_train, y_test = y.iloc[train_ind], y.iloc[test_ind]
```
# Modelling & Evaluation
```
# Train model
from time import time
from pandas_ml import ConfusionMatrix
from sklearn.metrics import roc_auc_score
from sklearn.metrics import classification_report
from sklearn.svm import SVC
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
print "AUC Score:", roc_auc_score(target.values, y_pred)
print classification_report(target.values, y_pred)
plot_confusion_matrix(target.values, y_pred)
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "Report for training set: ", predict_labels(clf, X_train, y_train)
print "Report for test set: ", predict_labels(clf, X_test, y_test)
def plot_confusion_matrix(y_true, y_pred):
cm = ConfusionMatrix(y_true, y_pred)
cm.plot(normalized=True)
plt.show()
#clf = SVC(C=100, kernel='sigmoid', class_weight={0: 1, 1: 9}, random_state=42)
clf = SVC(C=1, kernel='rbf', class_weight={0: 1, 1: 9}, random_state=42)
train_predict(clf, X_train, y_train, X_test, y_test)
# Search for optimal parameters
from sklearn.model_selection import GridSearchCV
# Parameters to do GridSearch on
cv_params = {
'C': [1, 10, 100, 1000],
'kernel': ['rbf', 'linear', 'poly', 'sigmoid'],
'degree': [3, 2, 1, 4]
}
# Static model parameters
ind_params = {
'class_weight': {0: 1, 1: 9},
'random_state': 42
}
# Initialize GridSearch with its parameters
optimized_SVC = GridSearchCV(estimator=SVC(**ind_params),
param_grid=cv_params,
scoring='f1',
cv=10,
n_jobs=-1)
optimized_SVC.fit(X_train, y_train)
#optimized_SVC.cv_results_
print "Best score for training:", optimized_SVC.best_score_
print "Best score parameters:", optimized_SVC.best_params_
print "Score for testing:", optimized_SVC.score(X_test, y_test)
# Train final model on full dataset
start = time()
#clf = SVC(C=1, kernel='rbf', class_weight={0: 1, 1: 9}, random_state=42)
clf = SVC(C=100, kernel='sigmoid', class_weight={0: 1, 1: 9}, random_state=42)
clf.fit(X, y)
end = time()
print "Trained model in {:.4f} seconds".format(end - start)
# Saves model for future predictions
from sklearn.externals import joblib
joblib.dump(clf, 'svc.pickle')
print "Model saved."
# Load model
#clf = joblib.load('filename.pickle')
# Predict new labels
df_pred = pd.read_csv('movies_pred.csv')
print "Dataset has {} rows and {} columns.".format(*df_pred.shape)
display(df_pred)
X_pred = df_pred.drop(['Title'], axis=1, inplace=False)
# Load model
clf_pred = joblib.load('svc.pickle')
start = time()
y_pred = clf_pred.predict(X_pred)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
print ""
print "Predictions for Best Picture:"
for title, pred in zip(df_pred['Title'], y_pred):
print title, pred
```
|
github_jupyter
|
from altair import *
import csv
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display # Allows the use of display() for DataFrames
%matplotlib inline
# Open csv file and read rows into a pandas dataframe
df = pd.read_csv('movies.csv')
print "Dataset has {} rows and {} columns.".format(*df.shape)
display(df.head())
def histogram(data, **bin_kwds):
"""
Create a Histogram of a 1-dimensional array or series of data
All parameters are passed to the altair's ``Bin`` class
"""
return Chart(data).mark_bar().encode(
x=X('Academy Awards, USA', bin=Bin(**bin_kwds)),
y='count(*):Q'
)
#histogram(df, maxbins=20)
fig, (ax1, ax2, ax3, ax4) = plt.subplots(ncols=4, figsize=(12, 6), sharey=True)
sns.countplot(x="Academy Awards, USA", data=df, ax=ax1)
sns.countplot(x="Screen Actors Guild Awards", data=df, ax=ax2)
sns.countplot(x="PGA Awards", data=df, ax=ax3)
sns.countplot(x="Directors Guild of America, USA", data=df, ax=ax4)
from sklearn.model_selection import StratifiedShuffleSplit
X = df.drop(['Title', 'Academy Awards, USA'], axis=1, inplace=False)
y = df['Academy Awards, USA']
sss = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=42)
for train_ind, test_ind in sss.split(X, y):
print "TRAIN:", train_ind, "TEST:", test_ind
X_train, X_test = X.iloc[train_ind], X.iloc[test_ind]
y_train, y_test = y.iloc[train_ind], y.iloc[test_ind]
# Train model
from time import time
from pandas_ml import ConfusionMatrix
from sklearn.metrics import roc_auc_score
from sklearn.metrics import classification_report
from sklearn.svm import SVC
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
print "AUC Score:", roc_auc_score(target.values, y_pred)
print classification_report(target.values, y_pred)
plot_confusion_matrix(target.values, y_pred)
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "Report for training set: ", predict_labels(clf, X_train, y_train)
print "Report for test set: ", predict_labels(clf, X_test, y_test)
def plot_confusion_matrix(y_true, y_pred):
cm = ConfusionMatrix(y_true, y_pred)
cm.plot(normalized=True)
plt.show()
#clf = SVC(C=100, kernel='sigmoid', class_weight={0: 1, 1: 9}, random_state=42)
clf = SVC(C=1, kernel='rbf', class_weight={0: 1, 1: 9}, random_state=42)
train_predict(clf, X_train, y_train, X_test, y_test)
# Search for optimal parameters
from sklearn.model_selection import GridSearchCV
# Parameters to do GridSearch on
cv_params = {
'C': [1, 10, 100, 1000],
'kernel': ['rbf', 'linear', 'poly', 'sigmoid'],
'degree': [3, 2, 1, 4]
}
# Static model parameters
ind_params = {
'class_weight': {0: 1, 1: 9},
'random_state': 42
}
# Initialize GridSearch with its parameters
optimized_SVC = GridSearchCV(estimator=SVC(**ind_params),
param_grid=cv_params,
scoring='f1',
cv=10,
n_jobs=-1)
optimized_SVC.fit(X_train, y_train)
#optimized_SVC.cv_results_
print "Best score for training:", optimized_SVC.best_score_
print "Best score parameters:", optimized_SVC.best_params_
print "Score for testing:", optimized_SVC.score(X_test, y_test)
# Train final model on full dataset
start = time()
#clf = SVC(C=1, kernel='rbf', class_weight={0: 1, 1: 9}, random_state=42)
clf = SVC(C=100, kernel='sigmoid', class_weight={0: 1, 1: 9}, random_state=42)
clf.fit(X, y)
end = time()
print "Trained model in {:.4f} seconds".format(end - start)
# Saves model for future predictions
from sklearn.externals import joblib
joblib.dump(clf, 'svc.pickle')
print "Model saved."
# Load model
#clf = joblib.load('filename.pickle')
# Predict new labels
df_pred = pd.read_csv('movies_pred.csv')
print "Dataset has {} rows and {} columns.".format(*df_pred.shape)
display(df_pred)
X_pred = df_pred.drop(['Title'], axis=1, inplace=False)
# Load model
clf_pred = joblib.load('svc.pickle')
start = time()
y_pred = clf_pred.predict(X_pred)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
print ""
print "Predictions for Best Picture:"
for title, pred in zip(df_pred['Title'], y_pred):
print title, pred
| 0.756268 | 0.778733 |
# Checkpoints
Sometimes it can be useful to store checkpoints while executing an algorithm, in particular when a run is very time-consuming.
**pymoo** lets you resume a run by serializing the algorithm object and loading it back in. Resuming runs from checkpoints is possible in three ways:
- the functional way by calling the `minimize` method,
- the object-oriented way by repeatedly calling the `next()` method or
- from a text file ([Biased Initialization](../customization/initialization.ipynb) from `Population`)
## Functional
```
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
problem = get_problem("zdt1", n_var=5)
algorithm = NSGA2(pop_size=100)
res = minimize(problem,
algorithm,
('n_gen', 5),
seed=1,
copy_algorithm=False,
verbose=True)
np.save("checkpoint", algorithm)
checkpoint, = np.load("checkpoint.npy", allow_pickle=True).flatten()
print("Loaded Checkpoint:", checkpoint)
# only necessary if for the checkpoint the termination criterion has been met
checkpoint.has_terminated = False
res = minimize(problem,
checkpoint,
('n_gen', 20),
seed=1,
copy_algorithm=False,
verbose=True)
```
## Object Oriented
```
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.factory import get_termination
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt1", n_var=5)
algorithm = NSGA2(pop_size=100)
algorithm.setup(problem, seed=1, termination=('n_gen', 20))
for k in range(5):
algorithm.next()
print(algorithm.n_gen)
np.save("checkpoint", algorithm)
checkpoint, = np.load("checkpoint.npy", allow_pickle=True).flatten()
print("Loaded Checkpoint:", checkpoint)
while checkpoint.has_next():
checkpoint.next()
print(checkpoint.n_gen)
```
## From a Text File
First, load the data from a file. Usually, this will include the variables `X`, the objective values `F` (and the constraints `G`). Here, they are created randomly. Always make sure the `Problem` you are solving would return the same values for the given `X`; otherwise the data might be misleading for the algorithm.
(This is not the case here; the data is really JUST for illustration purposes.)
```
import numpy as np
from pymoo.factory import G1
problem = G1()
N = 300
np.random.seed(1)
X = np.random.random((N, problem.n_var))
# here F and G is re-evaluated - in practice you want to load them from files too
F, G = problem.evaluate(X, return_values_of=["F", "G"])
```
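As a small sketch of how you might act on that warning when the data really does come from files, re-evaluate the loaded `X` and compare it against the stored values (here it passes trivially, since `F` and `G` were just produced by the same call):
```
# Re-evaluate X and check it matches the stored objectives and constraints
F_check, G_check = problem.evaluate(X, return_values_of=["F", "G"])
assert np.allclose(F, F_check) and np.allclose(G, G_check), \
    "stored F/G do not match what the Problem returns for this X"
```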
Then, create a population object using your data:
```
from pymoo.model.evaluator import Evaluator
from pymoo.model.population import Population
from pymoo.model.problem import StaticProblem
# now the population object with all its attributes is created (CV, feasible, ...)
pop = Population.new("X", X)
pop = Evaluator().eval(StaticProblem(problem, F=F, G=G), pop)
```
And finally run it with a non-random initial population `sampling=pop`:
```
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.optimize import minimize
# the algorithm is now called with the population - biased initialization
algorithm = GA(pop_size=100, sampling=pop)
res = minimize(problem,
algorithm,
('n_gen', 10),
seed=1,
verbose=True)
```
|
github_jupyter
|
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
problem = get_problem("zdt1", n_var=5)
algorithm = NSGA2(pop_size=100)
res = minimize(problem,
algorithm,
('n_gen', 5),
seed=1,
copy_algorithm=False,
verbose=True)
np.save("checkpoint", algorithm)
checkpoint, = np.load("checkpoint.npy", allow_pickle=True).flatten()
print("Loaded Checkpoint:", checkpoint)
# only necessary if for the checkpoint the termination criterion has been met
checkpoint.has_terminated = False
res = minimize(problem,
checkpoint,
('n_gen', 20),
seed=1,
copy_algorithm=False,
verbose=True)
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.factory import get_termination
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt1", n_var=5)
algorithm = NSGA2(pop_size=100)
algorithm.setup(problem, seed=1, termination=('n_gen', 20))
for k in range(5):
algorithm.next()
print(algorithm.n_gen)
np.save("checkpoint", algorithm)
checkpoint, = np.load("checkpoint.npy", allow_pickle=True).flatten()
print("Loaded Checkpoint:", checkpoint)
while checkpoint.has_next():
checkpoint.next()
print(checkpoint.n_gen)
import numpy as np
from pymoo.factory import G1
problem = G1()
N = 300
np.random.seed(1)
X = np.random.random((N, problem.n_var))
# here F and G is re-evaluated - in practice you want to load them from files too
F, G = problem.evaluate(X, return_values_of=["F", "G"])
from pymoo.model.evaluator import Evaluator
from pymoo.model.population import Population
from pymoo.model.problem import StaticProblem
# now the population object with all its attributes is created (CV, feasible, ...)
pop = Population.new("X", X)
pop = Evaluator().eval(StaticProblem(problem, F=F, G=G), pop)
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.optimize import minimize
# the algorithm is now called with the population - biased initialization
algorithm = GA(pop_size=100, sampling=pop)
res = minimize(problem,
algorithm,
('n_gen', 10),
seed=1,
verbose=True)
| 0.584627 | 0.914482 |
# Agent testing, using top N hyperparameters.
This is a shared/reusable notebook for checking that the best hyperparameters found by some search algorithm actually work well, instead of just relatively well, which is all an HP searcher can realistically hope for.
# Shared data path
```
data_path = "/Users/qualia/Code/infomercial/data/"
```
# Imports
```
import os
import numpy as np
import pandas as pd
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.exp import epsilon_bandit
from infomercial.exp import beta_bandit
from infomercial.exp import softbeta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
```
# Shared plots
```
def plot_critic(critic_name, env_name, result):
# -
env = gym.make(env_name)
best = env.best
# Data
critic = result[critic_name]
arms = list(critic.keys())
values = list(critic.values())
# Plotz
fig = plt.figure(figsize=(8, 3))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0])
plt.scatter(arms, values, color="black", alpha=.5, s=30)
plt.plot([best]*10, np.linspace(min(values), max(values), 10),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Arm")
def plot_meta(env_name, result, tie_threshold):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_E = result["scores_E"]
scores_R = result["scores_R"]
values_R = result["values_R"]
values_E = result["values_E"]
ties = result["ties"]
policies = result["policies"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best[0], np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# Policy
policies = np.asarray(policies)
episodes = np.asarray(episodes)
plt.subplot(grid[1, 0])
m = policies == 0
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_E$", color="purple")
m = policies == 1
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_R$", color="grey")
plt.ylim(-.1, 1+.1)
plt.ylabel("Controlling\npolicy")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# score
plt.subplot(grid[2, 0])
plt.scatter(episodes, scores_E, color="purple", alpha=0.4, s=2, label="E")
plt.plot(episodes, scores_E, color="purple", alpha=0.4)
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[3, 0])
plt.scatter(episodes, np.log(values_E), color="purple", alpha=0.4, s=2, label="$Q_E$")
plt.scatter(episodes, np.log(values_R), color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Ties
plt.subplot(grid[4, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Ties
plt.subplot(grid[5, 0])
plt.scatter(episodes, ties, color="black", alpha=.5, s=2, label="$\pi_{tie}$ : 1\n $\pi_\pi$ : 0")
plt.ylim(-.1, 1+.1)
plt.ylabel("Ties index")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_beta(env_name, result):
"""Plots!"""
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_E = result["scores_E"]
scores_R = result["scores_R"]
values_R = result["values_R"]
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best[0], np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_E, color="purple", alpha=0.4, s=2, label="E")
plt.plot(episodes, scores_E, color="purple", alpha=0.4)
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
def plot_epsilon(env_name, result):
"""Plots!"""
episodes = result["episodes"]
actions =result["actions"]
bests = result["p_bests"]
scores_R = result["scores_R"]
values_R = result["values_R"]
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best[0], np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
```
# Load parameter table
```
hp = pd.read_csv(os.path.join(data_path, "exp298_sorted.csv"))
hp.head(10)
```
### Quick histogram
Get a sense of the parameter distribution for the top 100 models
```
col = "tie_threshold"
# col = "epsilon"
# col = "total_R"
# col = "lr_R"
plt.figure(figsize=(4,2))
plt.hist(hp.loc[0:100, col], bins=10, color="black");
plt.xlabel(col)
plt.ylabel("count")
```
# Pick a bandit and an env
Comment out the rest
```
# env_name = 'BanditOneHigh10-v0'
env_name = 'BanditHardAndSparse10-v0'
# env_name = 'BanditUniform121-v0'
# env_name = 'DeceptiveBanditOneHigh10-v0'
```
## Meta
```
# Pick parameters
lr_R = hp["lr_R"][2]
tie_threshold = hp["tie_threshold"][2]
print(f"Running - lr_R: {lr_R}, tie_threshold: {tie_threshold}")
# Run the exp, and plot its results
result = meta_bandit(
env_name=env_name,
num_episodes=50000,
lr_R=lr_R,
tie_threshold=tie_threshold,
seed_value=None,
)
plot_meta(env_name, result, tie_threshold)
plot_critic('critic_R', env_name, result)
```
## Softbeta
```
# Pick parameters
lr_R = hp["lr_R"][1]
beta = hp["beta"][1]
temp = hp["temp"][1]
print(f"Env - {env_name}")
print(f"Running - lr_R: {lr_R}, beta: {beta}, temp: {temp}")
# Run the exp, and plot its results
result = softbeta_bandit(
env_name=env_name,
num_episodes=2*60500,
lr_R=lr_R,
beta=beta,
temp=temp,
seed_value=None,
)
plot_beta(env_name, result)
plot_critic('critic', env_name, result)
```
## Ep
```
# Pick parameters
lr_R = hp["lr_R"][2]
epsilon = hp["epsilon"][2]
print(f"Env - {env_name}")
print(f"Running - lr_R: {lr_R}, epsilon: {epsilon}")
# Run the exp, and plot its results
result = epsilon_bandit(
env_name=env_name,
num_episodes=500,
lr_R=lr_R,
epsilon=epsilon,
seed_value=None,
)
plot_epsilon(env_name, result)
plot_critic('critic_R', env_name, result)
```
## Annealed ep
```
# Pick parameters
lr_R = hp["lr_R"][1]
epsilon = hp["epsilon"][1]
epsilon_decay_tau = hp["epsilon_decay_tau"][1]
print(f"Env - {env_name}")
print(f"Running - lr_R: {lr_R}, epsilon: {epsilon}, temp: {epsilon_decay_tau}")
# Run the exp, and plot its results
result = epsilon_bandit(
env_name=env_name,
num_episodes=500,
lr_R=lr_R,
epsilon=epsilon,
epsilon_decay_tau=epsilon_decay_tau,
seed_value=None,
)
plot_epsilon(env_name, result)
plot_critic('critic_R', env_name, result)
```
|
github_jupyter
|
data_path = "/Users/qualia/Code/infomercial/data/"
import os
import numpy as np
import pandas as pd
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.exp import epsilon_bandit
from infomercial.exp import beta_bandit
from infomercial.exp import softbeta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
def plot_critic(critic_name, env_name, result):
# -
env = gym.make(env_name)
best = env.best
# Data
critic = result[critic_name]
arms = list(critic.keys())
values = list(critic.values())
# Plotz
fig = plt.figure(figsize=(8, 3))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0])
plt.scatter(arms, values, color="black", alpha=.5, s=30)
plt.plot([best]*10, np.linspace(min(values), max(values), 10),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Arm")
def plot_meta(env_name, result, tie_threshold):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_E = result["scores_E"]
scores_R = result["scores_R"]
values_R = result["values_R"]
values_E = result["values_E"]
ties = result["ties"]
policies = result["policies"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best[0], np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# Policy
policies = np.asarray(policies)
episodes = np.asarray(episodes)
plt.subplot(grid[1, 0])
m = policies == 0
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_E$", color="purple")
m = policies == 1
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_R$", color="grey")
plt.ylim(-.1, 1+.1)
plt.ylabel("Controlling\npolicy")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# score
plt.subplot(grid[2, 0])
plt.scatter(episodes, scores_E, color="purple", alpha=0.4, s=2, label="E")
plt.plot(episodes, scores_E, color="purple", alpha=0.4)
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[3, 0])
plt.scatter(episodes, np.log(values_E), color="purple", alpha=0.4, s=2, label="$Q_E$")
plt.scatter(episodes, np.log(values_R), color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Ties
plt.subplot(grid[4, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Ties
plt.subplot(grid[5, 0])
plt.scatter(episodes, ties, color="black", alpha=.5, s=2, label="$\pi_{tie}$ : 1\n $\pi_\pi$ : 0")
plt.ylim(-.1, 1+.1)
plt.ylabel("Ties index")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_beta(env_name, result):
"""Plots!"""
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_E = result["scores_E"]
scores_R = result["scores_R"]
values_R = result["values_R"]
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best[0], np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_E, color="purple", alpha=0.4, s=2, label="E")
plt.plot(episodes, scores_E, color="purple", alpha=0.4)
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
def plot_epsilon(env_name, result):
"""Plots!"""
episodes = result["episodes"]
actions =result["actions"]
bests = result["p_bests"]
scores_R = result["scores_R"]
values_R = result["values_R"]
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best[0], np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
hp = pd.read_csv(os.path.join(data_path, "exp298_sorted.csv"))
hp.head(10)
col = "tie_threshold"
# col = "epsilon"
# col = "total_R"
# col = "lr_R"
plt.figure(figsize=(4,2))
plt.hist(hp.loc[0:100, col], bins=10, color="black");
plt.xlabel(col)
plt.ylabel("count")
# env_name = 'BanditOneHigh10-v0'
env_name = 'BanditHardAndSparse10-v0'
# env_name = 'BanditUniform121-v0'
# env_name = 'DeceptiveBanditOneHigh10-v0'
# Pick parameters
lr_R = hp["lr_R"][2]
tie_threshold = hp["tie_threshold"][2]
print(f"Running - lr_R: {lr_R}, tie_threshold: {tie_threshold}")
# Run the exp, and plot its results
result = meta_bandit(
env_name=env_name,
num_episodes=50000,
lr_R=lr_R,
tie_threshold=tie_threshold,
seed_value=None,
)
plot_meta(env_name, result, tie_threshold)
plot_critic('critic_R', env_name, result)
# Pick parameters
lr_R = hp["lr_R"][1]
beta = hp["beta"][1]
temp = hp["temp"][1]
print(f"Env - {env_name}")
print(f"Running - lr_R: {lr_R}, beta: {beta}, temp: {temp}")
# Run the exp, and plot its results
result = softbeta_bandit(
env_name=env_name,
num_episodes=2*60500,
lr_R=lr_R,
beta=beta,
temp=temp,
seed_value=None,
)
plot_beta(env_name, result)
plot_critic('critic', env_name, result)
# Pick parameters
lr_R = hp["lr_R"][2]
epsilon = hp["epsilon"][2]
print(f"Env - {env_name}")
print(f"Running - lr_R: {lr_R}, epsilon: {epsilon}")
# Run the exp, and plot its results
result = epsilon_bandit(
env_name=env_name,
num_episodes=500,
lr_R=lr_R,
epsilon=epsilon,
seed_value=None,
)
plot_epsilon(env_name, result)
plot_critic('critic_R', env_name, result)
# Pick parameters
lr_R = hp["lr_R"][1]
epsilon = hp["epsilon"][1]
epsilon_decay_tau = hp["epsilon_decay_tau"][1]
print(f"Env - {env_name}")
print(f"Running - lr_R: {lr_R}, epsilon: {epsilon}, temp: {epsilon_decay_tau}")
# Run the exp, and plot its results
result = epsilon_bandit(
env_name=env_name,
num_episodes=500,
lr_R=lr_R,
epsilon=epsilon,
epsilon_decay_tau=epsilon_decay_tau,
seed_value=None,
)
plot_epsilon(env_name, result)
plot_critic('critic_R', env_name, result)
| 0.640973 | 0.910903 |
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_06 import *
```
## ConvNet
Let's get the data and training interface from where we left off in the last notebook.
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=5899)
```
x_train,y_train,x_valid,y_valid = get_data()
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
mnist_view = view_tfm(1,28,28)
cbfs = [Recorder,
partial(AvgStatsCallback,accuracy),
CudaCallback,
partial(BatchTransformXCallback, mnist_view)]
nfs = [8,16,32,64,64]
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
get_learn_run??
%time run.fit(2, learn)
```
## Batchnorm
### Custom
Let's start by building our own `BatchNorm` layer from scratch.
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=6018)
```
class BatchNorm(nn.Module):
def __init__(self, nf, mom=0.1, eps=1e-5):
super().__init__()
# NB: pytorch bn mom is opposite of what you'd expect
self.mom,self.eps = mom,eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
self.register_buffer('vars', torch.ones(1,nf,1,1))
self.register_buffer('means', torch.zeros(1,nf,1,1))
def update_stats(self, x):
m = x.mean((0,2,3), keepdim=True)
v = x.var ((0,2,3), keepdim=True)
self.means.lerp_(m, self.mom)
self.vars.lerp_ (v, self.mom)
return m,v
def forward(self, x):
if self.training:
with torch.no_grad(): m,v = self.update_stats(x)
else: m,v = self.means,self.vars
x = (x-m) / (v+self.eps).sqrt()
return x*self.mults + self.adds
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
# No bias needed if using bn
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(BatchNorm(nf))
return nn.Sequential(*layers)
#export
def init_cnn_(m, f):
if isinstance(m, nn.Conv2d):
f(m.weight, a=0.1)
if getattr(m, 'bias', None) is not None: m.bias.data.zero_()
for l in m.children(): init_cnn_(l, f)
def init_cnn(m, uniform=False):
f = init.kaiming_uniform_ if uniform else init.kaiming_normal_
init_cnn_(m, f)
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model, uniform=uniform)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
```
We can then use it in training and see how it helps keep the activations means to 0 and the std to 1.
```
learn,run = get_learn_run(nfs, data, 0.9, conv_layer, cbs=cbfs)
with Hooks(learn.model, append_stats) as hooks:
run.fit(1, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks[:-1]:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
plt.legend(range(6));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks[:-1]:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
learn,run = get_learn_run(nfs, data, 1.0, conv_layer, cbs=cbfs)
%time run.fit(3, learn)
```
### Builtin batchnorm
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=6679)
```
#export
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(nn.BatchNorm2d(nf, eps=1e-5, momentum=0.1))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs)
%time run.fit(3, learn)
```
### With scheduler
Now let's add the usual warm-up/annealing.
```
sched = combine_scheds([0.3, 0.7], [sched_lin(0.6, 2.), sched_lin(2., 0.1)])
learn,run = get_learn_run(nfs, data, 0.9, conv_layer, cbs=cbfs
+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
```
## More norms
### Layer norm
From [the paper](https://arxiv.org/abs/1607.06450): "*batch normalization cannot be applied to online learning tasks or to extremely large distributed models where the minibatches have to be small*".
General equation for a norm layer with learnable affine:
$$y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta$$
The differences with BatchNorm are:
1. we don't keep a moving average
2. we don't average over the batch dimension but over the hidden dimensions, so the normalization is independent of the batch size
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=6717)
```
class LayerNorm(nn.Module):
__constants__ = ['eps']
def __init__(self, eps=1e-5):
super().__init__()
self.eps = eps
self.mult = nn.Parameter(tensor(1.))
self.add = nn.Parameter(tensor(0.))
def forward(self, x):
m = x.mean((1,2,3), keepdim=True)
v = x.var ((1,2,3), keepdim=True)
x = (x-m) / ((v+self.eps).sqrt())
return x*self.mult + self.add
def conv_ln(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True),
GeneralRelu(**kwargs)]
if bn: layers.append(LayerNorm())
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.8, conv_ln, cbs=cbfs)
%time run.fit(3, learn)
```
*Thought experiment*: can this distinguish foggy days from sunny days (assuming you're using it before the first conv)?
### Instance norm
From [the paper](https://arxiv.org/abs/1607.08022):
The key difference between **contrast** and batch normalization is that the latter applies the normalization to a whole batch of images instead of to single ones:
\begin{equation}\label{eq:bnorm}
y_{tijk} = \frac{x_{tijk} - \mu_{i}}{\sqrt{\sigma_i^2 + \epsilon}},
\quad
\mu_i = \frac{1}{HWT}\sum_{t=1}^T\sum_{l=1}^W \sum_{m=1}^H x_{tilm},
\quad
\sigma_i^2 = \frac{1}{HWT}\sum_{t=1}^T\sum_{l=1}^W \sum_{m=1}^H (x_{tilm} - \mu_i)^2.
\end{equation}
In order to combine the effects of instance-specific normalization and batch normalization, we propose to replace the latter by the *instance normalization* (also known as *contrast normalization*) layer:
\begin{equation}\label{eq:inorm}
y_{tijk} = \frac{x_{tijk} - \mu_{ti}}{\sqrt{\sigma_{ti}^2 + \epsilon}},
\quad
\mu_{ti} = \frac{1}{HW}\sum_{l=1}^W \sum_{m=1}^H x_{tilm},
\quad
\sigma_{ti}^2 = \frac{1}{HW}\sum_{l=1}^W \sum_{m=1}^H (x_{tilm} - \mu_{ti})^2.
\end{equation}
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=7114)
```
class InstanceNorm(nn.Module):
__constants__ = ['eps']
def __init__(self, nf, eps=1e-0):
super().__init__()
self.eps = eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
def forward(self, x):
m = x.mean((2,3), keepdim=True)
v = x.var ((2,3), keepdim=True)
res = (x-m) / ((v+self.eps).sqrt())
return res*self.mults + self.adds
def conv_in(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True),
GeneralRelu(**kwargs)]
if bn: layers.append(InstanceNorm(nf))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.1, conv_in, cbs=cbfs)
%time run.fit(3, learn)
```
*Question*: why can't this classify anything?
Lost in all those norms? The authors from the [group norm paper](https://arxiv.org/pdf/1803.08494.pdf) have you covered:

### Group norm
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=7213)
*From the PyTorch docs:*
`GroupNorm(num_groups, num_channels, eps=1e-5, affine=True)`
The input channels are separated into `num_groups` groups, each containing
``num_channels / num_groups`` channels. The mean and standard-deviation are calculated
separately over each group. $\gamma$ and $\beta$ are learnable
per-channel affine transform parameter vectors of size `num_channels` if
`affine` is ``True``.
This layer uses statistics computed from input data in both training and
evaluation modes.
Args:
- num_groups (int): number of groups to separate the channels into
- num_channels (int): number of channels expected in input
- eps: a value added to the denominator for numerical stability. Default: 1e-5
- affine: a boolean value that when set to ``True``, this module
has learnable per-channel affine parameters initialized to ones (for weights)
and zeros (for biases). Default: ``True``.
Shape:
- Input: `(N, num_channels, *)`
- Output: `(N, num_channels, *)` (same shape as input)
Examples::
>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent with LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
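As a rough sketch (not part of the lesson code), here is what a hand-rolled `GroupNorm` could look like in the same style as the `LayerNorm` and `InstanceNorm` layers above. It assumes the notebook's existing imports (`torch`, `nn`, `GeneralRelu`, `get_learn_run`), and the group count of 8 is an arbitrary choice that happens to divide every value in `nfs`.
```
class GroupNorm(nn.Module):
    __constants__ = ['eps']
    def __init__(self, nf, n_groups=8, eps=1e-5):
        super().__init__()
        self.n_groups,self.eps = n_groups,eps
        self.mults = nn.Parameter(torch.ones (nf,1,1))
        self.adds  = nn.Parameter(torch.zeros(nf,1,1))

    def forward(self, x):
        bs,nc,h,w = x.shape
        # normalize each group of nc//n_groups channels separately, per sample
        xg = x.view(bs, self.n_groups, -1)
        m = xg.mean(2, keepdim=True)
        v = xg.var (2, keepdim=True)
        xg = (xg-m) / ((v+self.eps).sqrt())
        return xg.view(bs,nc,h,w)*self.mults + self.adds

def conv_gn(ni, nf, ks=3, stride=2, bn=True, **kwargs):
    layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True),
              GeneralRelu(**kwargs)]
    if bn: layers.append(GroupNorm(nf))
    return nn.Sequential(*layers)

# learn,run = get_learn_run(nfs, data, 0.4, conv_gn, cbs=cbfs)  # lr picked arbitrarily
# run.fit(3, learn)
```
With `n_groups=nf` this reduces to the `InstanceNorm` above, and with `n_groups=1` to `LayerNorm` (up to the shape of the affine parameters), mirroring the equivalences in the PyTorch docs.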
## Fix small batch sizes
### What's the problem?
When we compute the statistics (mean and std) for a BatchNorm layer on a small batch, it is possible that we get a standard deviation very close to 0, because there aren't many samples (the variance of a single value is 0, since it's equal to its mean).
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=7304)
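A tiny illustration of that, as a sketch rather than lesson code: estimating the std from batches of 2 values is extremely noisy and can land very close to 0, while a larger batch gives a stable estimate.
```
import torch

torch.manual_seed(0)
acts = torch.randn(4096)   # stand-in for one channel's activations
for bs in (2, 512):
    # per-batch std estimates: widely spread for bs=2, close to 1 for bs=512
    stds = torch.stack([acts[i:i+bs].std() for i in range(0, 4096, bs)])
    print(bs, round(stds.min().item(), 4), round(stds.max().item(), 4))
```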
```
data = DataBunch(*get_dls(train_ds, valid_ds, 2), c)
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(nn.BatchNorm2d(nf, eps=1e-5, momentum=0.1))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
%time run.fit(1, learn)
```
### Running Batch Norm
To solve this problem we introduce a Running BatchNorm that uses smoother, debiased running statistics to estimate the mean and std.
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=7516)
```
class RunningBatchNorm(nn.Module):
def __init__(self, nf, mom=0.1, eps=1e-5):
super().__init__()
self.mom,self.eps = mom,eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
self.register_buffer('sums', torch.zeros(1,nf,1,1))
self.register_buffer('sqrs', torch.zeros(1,nf,1,1))
self.register_buffer('batch', tensor(0.))
self.register_buffer('count', tensor(0.))
self.register_buffer('step', tensor(0.))
self.register_buffer('dbias', tensor(0.))
def update_stats(self, x):
bs,nc,*_ = x.shape
self.sums.detach_()
self.sqrs.detach_()
dims = (0,2,3)
s = x.sum(dims, keepdim=True)
ss = (x*x).sum(dims, keepdim=True)
c = self.count.new_tensor(x.numel()/nc)
mom1 = 1 - (1-self.mom)/math.sqrt(bs-1)
self.mom1 = self.dbias.new_tensor(mom1)
self.sums.lerp_(s, self.mom1)
self.sqrs.lerp_(ss, self.mom1)
self.count.lerp_(c, self.mom1)
self.dbias = self.dbias*(1-self.mom1) + self.mom1
self.batch += bs
self.step += 1
def forward(self, x):
if self.training: self.update_stats(x)
sums = self.sums
sqrs = self.sqrs
c = self.count
if self.step<100:
sums = sums / self.dbias
sqrs = sqrs / self.dbias
c = c / self.dbias
means = sums/c
vars = (sqrs/c).sub_(means*means)
if bool(self.batch < 20): vars.clamp_min_(0.01)
x = (x-means).div_((vars.add_(self.eps)).sqrt())
return x.mul_(self.mults).add_(self.adds)
def conv_rbn(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(RunningBatchNorm(nf))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.4, conv_rbn, cbs=cbfs)
%time run.fit(1, learn)
```
This solves the small batch size issue!
### What can we do in a single epoch?
Now let's see with a decent batch size what result we can get.
[Jump_to lesson 10 video](https://course.fast.ai/videos/?lesson=10&t=8068)
```
data = DataBunch(*get_dls(train_ds, valid_ds, 32), c)
learn,run = get_learn_run(nfs, data, 0.9, conv_rbn, cbs=cbfs
+[partial(ParamScheduler,'lr', sched_lin(1., 0.2))])
%time run.fit(1, learn)
```
## Export
```
nb_auto_export()
```
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_06 import *
x_train,y_train,x_valid,y_valid = get_data()
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
mnist_view = view_tfm(1,28,28)
cbfs = [Recorder,
partial(AvgStatsCallback,accuracy),
CudaCallback,
partial(BatchTransformXCallback, mnist_view)]
nfs = [8,16,32,64,64]
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
get_learn_run??
%time run.fit(2, learn)
class BatchNorm(nn.Module):
def __init__(self, nf, mom=0.1, eps=1e-5):
super().__init__()
# NB: pytorch bn mom is opposite of what you'd expect
self.mom,self.eps = mom,eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
self.register_buffer('vars', torch.ones(1,nf,1,1))
self.register_buffer('means', torch.zeros(1,nf,1,1))
def update_stats(self, x):
m = x.mean((0,2,3), keepdim=True)
v = x.var ((0,2,3), keepdim=True)
self.means.lerp_(m, self.mom)
self.vars.lerp_ (v, self.mom)
return m,v
def forward(self, x):
if self.training:
with torch.no_grad(): m,v = self.update_stats(x)
else: m,v = self.means,self.vars
x = (x-m) / (v+self.eps).sqrt()
return x*self.mults + self.adds
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
# No bias needed if using bn
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(BatchNorm(nf))
return nn.Sequential(*layers)
#export
def init_cnn_(m, f):
if isinstance(m, nn.Conv2d):
f(m.weight, a=0.1)
if getattr(m, 'bias', None) is not None: m.bias.data.zero_()
for l in m.children(): init_cnn_(l, f)
def init_cnn(m, uniform=False):
f = init.kaiming_uniform_ if uniform else init.kaiming_normal_
init_cnn_(m, f)
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model, uniform=uniform)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
learn,run = get_learn_run(nfs, data, 0.9, conv_layer, cbs=cbfs)
with Hooks(learn.model, append_stats) as hooks:
run.fit(1, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks[:-1]:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
plt.legend(range(6));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks[:-1]:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
learn,run = get_learn_run(nfs, data, 1.0, conv_layer, cbs=cbfs)
%time run.fit(3, learn)
#export
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(nn.BatchNorm2d(nf, eps=1e-5, momentum=0.1))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs)
%time run.fit(3, learn)
sched = combine_scheds([0.3, 0.7], [sched_lin(0.6, 2.), sched_lin(2., 0.1)])
learn,run = get_learn_run(nfs, data, 0.9, conv_layer, cbs=cbfs
+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
class LayerNorm(nn.Module):
__constants__ = ['eps']
def __init__(self, eps=1e-5):
super().__init__()
self.eps = eps
self.mult = nn.Parameter(tensor(1.))
self.add = nn.Parameter(tensor(0.))
def forward(self, x):
m = x.mean((1,2,3), keepdim=True)
v = x.var ((1,2,3), keepdim=True)
x = (x-m) / ((v+self.eps).sqrt())
return x*self.mult + self.add
def conv_ln(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True),
GeneralRelu(**kwargs)]
if bn: layers.append(LayerNorm())
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.8, conv_ln, cbs=cbfs)
%time run.fit(3, learn)
class InstanceNorm(nn.Module):
__constants__ = ['eps']
def __init__(self, nf, eps=1e-0):
super().__init__()
self.eps = eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
def forward(self, x):
m = x.mean((2,3), keepdim=True)
v = x.var ((2,3), keepdim=True)
res = (x-m) / ((v+self.eps).sqrt())
return res*self.mults + self.adds
def conv_in(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True),
GeneralRelu(**kwargs)]
if bn: layers.append(InstanceNorm(nf))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.1, conv_in, cbs=cbfs)
%time run.fit(3, learn)
data = DataBunch(*get_dls(train_ds, valid_ds, 2), c)
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(nn.BatchNorm2d(nf, eps=1e-5, momentum=0.1))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
%time run.fit(1, learn)
class RunningBatchNorm(nn.Module):
def __init__(self, nf, mom=0.1, eps=1e-5):
super().__init__()
self.mom,self.eps = mom,eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
self.register_buffer('sums', torch.zeros(1,nf,1,1))
self.register_buffer('sqrs', torch.zeros(1,nf,1,1))
self.register_buffer('batch', tensor(0.))
self.register_buffer('count', tensor(0.))
self.register_buffer('step', tensor(0.))
self.register_buffer('dbias', tensor(0.))
def update_stats(self, x):
bs,nc,*_ = x.shape
self.sums.detach_()
self.sqrs.detach_()
dims = (0,2,3)
s = x.sum(dims, keepdim=True)
ss = (x*x).sum(dims, keepdim=True)
c = self.count.new_tensor(x.numel()/nc)
mom1 = 1 - (1-self.mom)/math.sqrt(bs-1)
self.mom1 = self.dbias.new_tensor(mom1)
self.sums.lerp_(s, self.mom1)
self.sqrs.lerp_(ss, self.mom1)
self.count.lerp_(c, self.mom1)
self.dbias = self.dbias*(1-self.mom1) + self.mom1
self.batch += bs
self.step += 1
def forward(self, x):
if self.training: self.update_stats(x)
sums = self.sums
sqrs = self.sqrs
c = self.count
if self.step<100:
sums = sums / self.dbias
sqrs = sqrs / self.dbias
c = c / self.dbias
means = sums/c
vars = (sqrs/c).sub_(means*means)
if bool(self.batch < 20): vars.clamp_min_(0.01)
x = (x-means).div_((vars.add_(self.eps)).sqrt())
return x.mul_(self.mults).add_(self.adds)
def conv_rbn(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(RunningBatchNorm(nf))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.4, conv_rbn, cbs=cbfs)
%time run.fit(1, learn)
data = DataBunch(*get_dls(train_ds, valid_ds, 32), c)
learn,run = get_learn_run(nfs, data, 0.9, conv_rbn, cbs=cbfs
+[partial(ParamScheduler,'lr', sched_lin(1., 0.2))])
%time run.fit(1, learn)
nb_auto_export()
| 0.87821 | 0.878314 |

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/ER_ICD10_PCS.ipynb)
# **ICD10-PCS coding**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `spark_nlp_for_healthcare.json` (the license file read by the setup code below) to the folder that opens.
## 1. Colab Setup
Import license keys
```
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
```
Install dependencies
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
```
Import dependencies into Python
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
```
Start the Spark session
```
spark = sparknlp_jsl.start(secret)
```
## 2. Select the Entity Resolver model and construct the pipeline
Select the models:
**ICD10 Entity Resolver models:**
1. **chunkresolve_icd10cm_clinical**
2. **chunkresolve_icd10cm_diseases_clinical**
3. **chunkresolve_icd10cm_injuries_clinical**
4. **chunkresolve_icd10cm_musculoskeletal_clinical**
5. **chunkresolve_icd10cm_neoplasms_clinical**
6. **chunkresolve_icd10cm_puerile_clinical**
For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
```
# Change this to the model you want to use and re-run the cells below.
ER_MODEL_NAME = "chunkresolve_icd10cm_clinical"
NER_MODEL_NAME = "ner_clinical"
```
Create the pipeline
```
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = NerDLModel().pretrained(NER_MODEL_NAME, 'en', 'clinical/models').setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
chunk_embeddings = ChunkEmbeddings()\
.setInputCols("clinical_ner_chunks", "embeddings")\
.setOutputCol("chunk_embeddings")
entity_resolver = \
ChunkEntityResolverModel.pretrained(ER_MODEL_NAME,"en","clinical/models")\
.setInputCols("tokens","chunk_embeddings").setOutputCol("resolution")
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
chunk_embeddings,
entity_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
```
## 3. Create example inputs
```
# Enter examples as strings in this array
input_list = [
"""She is followed by Dr. X in our office and has a history of severe tricuspid regurgitation with mild elevation and PA pressure. On 05/12/08, preserved left and right ventricular systolic function, aortic sclerosis with apparent mild aortic stenosis, and bi-atrial enlargement. She has previously had a Persantine Myoview nuclear rest-stress test scan completed at ABCD Medical Center in 07/06 that was negative. She has had significant mitral valve regurgitation in the past being moderate, but on the most recent echocardiogram on 05/12/08, that was not felt to be significant. She has a history of hypertension and EKGs in our office show normal sinus rhythm with frequent APCs versus wandering atrial pacemaker. She does have a history of significant hypertension in the past. She has had dizzy spells and denies clearly any true syncope. She has had bradycardia in the past from beta-blocker therapy."""
]
```
## 4. Run the pipeline
```
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
```
## 5. Visualize
Full Pipeline
```
result.select(
F.explode(
F.arrays_zip('resolution.metadata', 'resolution.begin' , 'resolution.end', 'resolution.result')
).alias('cols')
).select(
F.expr("cols['0']['token']").alias('token/chunk'),
F.expr("cols['1']").alias('begin'),
F.expr("cols['2']").alias('end'),
F.expr("cols['0']['resolved_text']").alias('resolved_text'),
F.expr("cols['3']").alias('icd10_code'),
).toPandas()
```
Light Pipeline
```
light_result[0]['resolution']
```
|
github_jupyter
|
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
# Change this to the model you want to use and re-run the cells below.
ER_MODEL_NAME = "chunkresolve_icd10cm_clinical"
NER_MODEL_NAME = "ner_clinical"
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = NerDLModel().pretrained(NER_MODEL_NAME, 'en', 'clinical/models').setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
chunk_embeddings = ChunkEmbeddings()\
.setInputCols("clinical_ner_chunks", "embeddings")\
.setOutputCol("chunk_embeddings")
entity_resolver = \
ChunkEntityResolverModel.pretrained(ER_MODEL_NAME,"en","clinical/models")\
.setInputCols("tokens","chunk_embeddings").setOutputCol("resolution")
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
chunk_embeddings,
entity_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
# Enter examples as strings in this array
input_list = [
"""She is followed by Dr. X in our office and has a history of severe tricuspid regurgitation with mild elevation and PA pressure. On 05/12/08, preserved left and right ventricular systolic function, aortic sclerosis with apparent mild aortic stenosis, and bi-atrial enlargement. She has previously had a Persantine Myoview nuclear rest-stress test scan completed at ABCD Medical Center in 07/06 that was negative. She has had significant mitral valve regurgitation in the past being moderate, but on the most recent echocardiogram on 05/12/08, that was not felt to be significant. She has a history of hypertension and EKGs in our office show normal sinus rhythm with frequent APCs versus wandering atrial pacemaker. She does have a history of significant hypertension in the past. She has had dizzy spells and denies clearly any true syncope. She has had bradycardia in the past from beta-blocker therapy."""
]
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
result.select(
F.explode(
F.arrays_zip('resolution.metadata', 'resolution.begin' , 'resolution.end', 'resolution.result')
).alias('cols')
).select(
F.expr("cols['0']['token']").alias('token/chunk'),
F.expr("cols['1']").alias('begin'),
F.expr("cols['2']").alias('end'),
F.expr("cols['0']['resolved_text']").alias('resolved_text'),
F.expr("cols['3']").alias('icd10_code'),
).toPandas()
light_result[0]['resolution']
| 0.465387 | 0.841533 |
```
# HIDDEN
from datascience import *
from prob140 import *
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import numpy as np
# HIDDEN
from matplotlib import patches
import shapely.geometry as sg
import descartes
# HIDDEN
def show_disjoint_union():
plt.figure(figsize=(10, 20))
# create the circles with shapely
a = sg.Point(1.4,2.5).buffer(1.0)
b = sg.Point(3.3,2.5).buffer(0.75)
# use descartes to create the matplotlib patches
ax = plt.subplot(121)
ax.add_patch(descartes.PolygonPatch(a, fc='darkblue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(b, fc='gold', ec='k', alpha=0.6))
ax.annotate('A', [1.4, 2.5])
ax.annotate('B', [3.3, 2.5])
# control display
plt.title('Mutually Exclusive Events')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# use descartes to create the matplotlib patches
ax = plt.subplot(122)
ax.add_patch(descartes.PolygonPatch(a, fc='blue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(b, fc='blue', ec='k', alpha=0.8))
# control display
plt.title('Disjoint Union')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# HIDDEN
def show_difference():
plt.figure(figsize=(10, 20))
# create the circles with shapely
a = sg.Point(2,2.5).buffer(1.0)
b = sg.Point(2,2.5).buffer(0.75)
# compute the 2 parts
left = a.difference(b)
middle = a.intersection(b)
# use descartes to create the matplotlib patches
ax = plt.subplot(121)
ax.add_patch(descartes.PolygonPatch(left, fc='darkblue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='olive', ec='k', alpha=0.8))
# control display
plt.title('Nested Events')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# use descartes to create the matplotlib patches
ax = plt.subplot(122)
ax.add_patch(descartes.PolygonPatch(left, fc='blue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='None', ec='k', alpha=0.8))
# control display
plt.title('The Difference')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# HIDDEN
def show_complement():
plt.figure(figsize=(10, 20))
# create the square and circle with shapely
a = sg.box(0, 0, 4.5, 4.5)
b = sg.Point(2.25,2.5).buffer(1)
# compute the 2 parts
left = a.difference(b)
middle = a.intersection(b)
# use descartes to create the matplotlib patches
ax = plt.subplot(121)
ax.add_patch(descartes.PolygonPatch(left, fc='None', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='darkblue', ec='k', alpha=0.8))
# control display
plt.title('An Event (Square = Omega)')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# use descartes to create the matplotlib patches
ax = plt.subplot(122)
ax.add_patch(descartes.PolygonPatch(left, fc='blue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='None', ec='k', alpha=0.8))
# control display
plt.title('The Complement')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
```
## Addition ##
The third axiom is about events that are *mutually exclusive*. Informally, two events $A$ and $B$ are mutually exclusive if at most one of them can happen; in other words, they can't both happen.
For example, suppose you are selecting one student at random from a class in which 40% of the students are freshmen and 20% are sophomores. Each student is either a freshman or a sophomore or neither; but no student is both a freshman and a sophomore. So if $A$ is the event "the student selected is a freshman" and $B$ is the event "the student selected is a sophomore", then $A$ and $B$ are mutually exclusive.
What's the big deal about mutually exclusive events? To understand this, start by thinking about the event that the selected student is a freshman or a sophomore. In the language of set theory, that's the *union* of the two events "freshman" and "sophomore". It is a great idea to use Venn diagrams to visualize events. In the diagram below, imagine $A$ and $B$ to be two mutually exclusive events shown as blue and gold circles. Because the events are mutually exclusive, the corresponding circles don't overlap. The union is the set of all the points in the two circles.
```
# HIDDEN
show_disjoint_union()
```
What's the chance that the student is a freshman or a sophomore? In the population, 40% are freshmen and 20% are sophomores, so a natural answer is 60%. That's the percent of students who satisfy our criterion of "freshman or sophomore". The simple addition works because the two groups are disjoint.
Kolmogorov used this idea to formulate the third and most important axiom of probability. Formally, $A$ and $B$ are mutually exclusive events if their intersection is empty:
$$
A \cap B = \phi
$$
#### The Third Axiom: Addition Rule ####
In the context of finite outcome spaces, the axiom says:
- If $A$ and $B$ are mutually exclusive events, then $P(A \cup B) = P(A) + P(B)$.
You will show in an exercise that the axiom implies something more general:
- For any fixed $n$, if $A_1, A_2, \ldots, A_n$ are mutually exclusive (that is, if $A_i \cap A_j = \phi$ for all $i \ne j$), then
$$
P\big{(} \bigcup_{i=1}^n A_i \big{)} = \sum_{i=1}^n P(A_i)
$$
This is sometimes called the axiom of *finite additivity*.
This deceptively simple axiom has tremendous power, especially when it is extended to account for infinitely many mutually exclusive events. For a start, it can be used to create some handy computational tools.
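As a quick numerical illustration (this cell is an addition, not one of the text's original hidden cells), the freshman/sophomore example can be checked directly: the two events are mutually exclusive, so their chances simply add.
```
# Illustrative check of the addition rule for the class example above.
# "Freshman" and "sophomore" are mutually exclusive, so chances simply add.
p_freshman, p_sophomore = 0.4, 0.2
p_union = p_freshman + p_sophomore   # P(freshman or sophomore)
print(round(p_union, 2))             # 0.6, i.e. 60%
```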
### Nested Events ###
Suppose that 50% of the students in a class have Data Science as one of their majors, and 40% are majoring in Data Science as well as Computer Science (CS). If you pick a student at random, what is the chance that the student is majoring in Data Science but not in CS?
The Venn diagram below shows a dark blue circle corresponding to the event $A =$ "Data Science as one of the majors", and a gold circle (not drawn to scale) corresponding to $B =$ "majoring in both Data Science and CS". The two events are *nested* because $B$ is a subset of $A$: everyone in $B$ has Data Science as one of their majors.
So $B \subseteq A$, and those who are majoring in Data Science but not CS is the *difference* "$A$ and not $B$":
$$
A \backslash B = A \cap B^c
$$
where $B^c$ is the complement of $B$. The difference is the bright blue ring on the right.
```
# HIDDEN
show_difference()
```
What's the chance that the student is in the bright blue difference? If you answered, "50% - 40% = 10%", you are right, and it's great that your intuition is saying that probabilities behave just like areas. They do. In fact the calculation follows from the axiom of additivity, which we also motivated by looking at areas.
#### Difference Rule ####
Suppose $A$ and $B$ are events such that $B \subseteq A$. Then $P(A \backslash B) = P(A) - P(B)$.
**Proof.** Because $B \subseteq A$,
$$
A = B \cup (A \backslash B)
$$
which is a disjoint union. By the axiom of additivity,
$$
P(A) = P(B) + P(A \backslash B)
$$
and so
$$
P(A \backslash B) = P(A) - P(B)
$$
### The Complement ###
If an event has chance 40%, what's the chance that it doesn't happen? The "obvious" answer of 60% is a special case of the difference rule.
#### Complement Rule ####
For any event $B$, $P(B^c) = 1 - P(B)$.
**Proof.** The Venn diagram below shows what to do. Take $A = \Omega$ in the formula for the difference, and remember the second axiom $P(\Omega) = 1$. Alternatively, redo the argument for the difference rule in this special case.
```
# HIDDEN
show_complement()
```
When you see a minus sign in a calculation of probabilities, as in the Complement Rule above, you will often find that the minus sign is due to a rearrangement of terms in an application of the addition rule.
When you add or subtract probabilities, you are implicitly splitting an event into disjoint pieces. This is called *partitioning* the event, a fundamentally important technique to master. In the subsequent sections you will see numerous uses of partitioning.
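For example, any event $A$ is split by a second event $B$ into the two mutually exclusive pieces $A \cap B$ and $A \cap B^c$. By the addition rule,
$$
P(A) = P(A \cap B) + P(A \cap B^c)
$$
which is the simplest instance of partitioning an event.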
|
github_jupyter
|
# HIDDEN
from datascience import *
from prob140 import *
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import numpy as np
# HIDDEN
from matplotlib import patches
import shapely.geometry as sg
import descartes
# HIDDEN
def show_disjoint_union():
plt.figure(figsize=(10, 20))
# create the circles with shapely
a = sg.Point(1.4,2.5).buffer(1.0)
b = sg.Point(3.3,2.5).buffer(0.75)
# use descartes to create the matplotlib patches
ax = plt.subplot(121)
ax.add_patch(descartes.PolygonPatch(a, fc='darkblue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(b, fc='gold', ec='k', alpha=0.6))
ax.annotate('A', [1.4, 2.5])
ax.annotate('B', [3.3, 2.5])
# control display
plt.title('Mutually Exclusive Events')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# use descartes to create the matplotlib patches
ax = plt.subplot(122)
ax.add_patch(descartes.PolygonPatch(a, fc='blue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(b, fc='blue', ec='k', alpha=0.8))
# control display
plt.title('Disjoint Union')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# HIDDEN
def show_difference():
plt.figure(figsize=(10, 20))
# create the circles with shapely
a = sg.Point(2,2.5).buffer(1.0)
b = sg.Point(2,2.5).buffer(0.75)
# compute the 2 parts
left = a.difference(b)
middle = a.intersection(b)
# use descartes to create the matplotlib patches
ax = plt.subplot(121)
ax.add_patch(descartes.PolygonPatch(left, fc='darkblue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='olive', ec='k', alpha=0.8))
# control display
plt.title('Nested Events')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# use descartes to create the matplotlib patches
ax = plt.subplot(122)
ax.add_patch(descartes.PolygonPatch(left, fc='blue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='None', ec='k', alpha=0.8))
# control display
plt.title('The Difference')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# HIDDEN
def show_complement():
plt.figure(figsize=(10, 20))
# create the square and circle with shapely
a = sg.box(0, 0, 4.5, 4.5)
b = sg.Point(2.25,2.5).buffer(1)
# compute the 2 parts
left = a.difference(b)
middle = a.intersection(b)
# use descartes to create the matplotlib patches
ax = plt.subplot(121)
ax.add_patch(descartes.PolygonPatch(left, fc='None', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='darkblue', ec='k', alpha=0.8))
# control display
plt.title('An Event (Square = Omega)')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# use descartes to create the matplotlib patches
ax = plt.subplot(122)
ax.add_patch(descartes.PolygonPatch(left, fc='blue', ec='k', alpha=0.8))
ax.add_patch(descartes.PolygonPatch(middle, fc='None', ec='k', alpha=0.8))
# control display
plt.title('The Complement')
plt.axis('off')
ax.set_xlim(0, 5); ax.set_ylim(0, 5)
ax.set_aspect('equal')
# HIDDEN
show_disjoint_union()
# HIDDEN
show_difference()
# HIDDEN
show_complement()
| 0.482185 | 0.898944 |
```
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as la
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
import xgboost as xgb
import pandas as pd
import sys
sys.path.append("../../") # go to parent dir so we can import `speech2phone`
from speech2phone.preprocessing import get_TIMIT, get_phones
from speech2phone.preprocessing.filters import mel
import mag
```
# Load data
```
# to recreate cache (if you've modified the preprocessor), add `use_cache=False`
X_toy, y_toy = get_TIMIT(dataset='toy', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT')
X_train, y_train = get_TIMIT(dataset='train', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT')
X_test, y_test = get_TIMIT(dataset='val', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT')
```
## One-hot encoding
By default, outputs are categorical (ints).
```
thing = np.unique(y_toy)
print(thing, len(thing))
```
You can change it to one-hot encoding like this:
```
_, thing = get_TIMIT(dataset='toy', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT', y_type='one-hot')
thing = np.flip(np.unique(thing, axis=0), axis=0)
print(thing, len(thing))
# and this is how you get phoneme labels back
print(get_phones(np.argmax(thing, axis=0)))
```
# GDA
```
model = QuadraticDiscriminantAnalysis()
model.fit(X_train, y_train)
model.score(X_test, y_test)
```
# Random Forest
```
rf = RandomForestClassifier(n_estimators=20, max_depth=30, random_state=42)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
```
# XGBoost
```
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
params = {"max_depth": 20,
"eta": 0.3,
"num_class": 61,
"gamma": 1,
"lambda": 10,
"alpha": 10}
params["objective"] = "multi:softmax"
params["eval_metric"] = "merror"
params['nthread'] = 4
evallist = [(dtest, 'eval'), (dtrain, 'train')]
num_round = 1
bst = xgb.train(params, dtrain, num_round, evallist)
y_pred = bst.predict(dtest)
score = sum(y_pred == y_test) / len(y_test)
print(score)
```
# PCA
```
class PCA:
def __init__(self, X, y=None, s=None, sparse=False, center=True):
self.X = X - X.mean(axis=0) if center else X
self.y = y
self.sparse = sparse
if s == None:
n, d = X.shape
s = d
if not self.sparse:
U, sig, Vh = la.svd(self.X, full_matrices=False)
self.sig = sig[:s]**2
self.Vh = Vh[:s]
else:
_, sig, Vh = spla.svds(self.X, k=s, return_singular_vectors="vh")
self.sig = sig**2
self.Vh = Vh
self.a = self.transform(self.X)
self.proj_X = self.project(self.X)
        if self.y is not None:
self.y_dict = {}
A = np.vstack([self.a, self.y])
for yj in np.unique(self.y):
curr_a = A.T[A[s] == yj][:, :s]
if len(curr_a) != 0:
self.y_dict[yj] = curr_a.mean(axis=0)
def update_s(self, s):
if not self.sparse:
U, sig, Vh = la.svd(self.X, full_matrices=False)
self.sig = sig[:s]**2
self.Vh = Vh[:s]
else:
_, sig, Vh = spla.svds(self.X, k=s, return_singular_vectors="vh")
self.sig = sig**2
self.Vh = Vh
self.a = self.transform(self.X)
self.proj_X = self.project(self.X)
        if self.y is not None:
A = np.vstack([self.a, self.y])
for yj in np.unique(self.y):
curr_a = A.T[A[s] == yj][:, :s]
if len(curr_a) != 0:
self.y_dict[yj] = curr_a.mean(axis=0)
def transform(self, x):
return [email protected]
def project(self, x):
return [email protected]
def predict(self, x):
a_test = self.Vh @ x.T
predicted = []
for a in a_test.T:
similarities = []
for key, value in self.y_dict.items():
similarity = np.dot(a, value) / (la.norm(a)*la.norm(value))
similarities.append((similarity, key))
predicted.append(max(similarities, key=lambda x: x[0])[1])
return predicted
pca_timit = PCA(X_train, y=y_train)
n, d = X_train.shape
fig = plt.figure()
plt.plot(np.arange(d), pca_timit.sig)
plt.xlabel("number of components")
plt.ylabel("explained variance")
plt.title("TIMIT data scree plot")
plt.show()
# print(pca_timit.sig / pca_timit.sig.sum())
i = 1
p = 0
while p < .9:
p = np.sum(pca_timit.sig[:i]) / np.sum(pca_timit.sig)
# print(i, p)
i+=1
pca_timit.update_s(2)
A = np.vstack([pca_timit.a, pca_timit.y])
for i in np.unique(y_train):
curr_a = A.T[A[2] == i][:, :2]
plt.scatter(curr_a[:, 0], curr_a[:, 1], alpha=0.3)
plt.title("Plotting 2 Principal Components")
plt.show()
pca_train = PCA(X_train, y=y_train, s=20, center=False)
y_pred = pca_train.predict(X_test)
pca_score = sum(y_pred == y_test) / len(y_test)
print("PCA accuracy: {:.3f}".format(pca_score))
```
|
github_jupyter
|
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as la
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
import xgboost as xgb
import pandas as pd
import sys
sys.path.append("../../") # go to parent dir so we can import `speech2phone`
from speech2phone.preprocessing import get_TIMIT, get_phones
from speech2phone.preprocessing.filters import mel
import mag
# to recreate cache (if you've modified the preprocessor), add `use_cache=False`
X_toy, y_toy = get_TIMIT(dataset='toy', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT')
X_train, y_train = get_TIMIT(dataset='train', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT')
X_test, y_test = get_TIMIT(dataset='val', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT')
thing = np.unique(y_toy)
print(thing, len(thing))
_, thing = get_TIMIT(dataset='toy', preprocessor=mel, TIMIT_root='../TIMIT/TIMIT', y_type='one-hot')
thing = np.flip(np.unique(thing, axis=0), axis=0)
print(thing, len(thing))
# and this is how you get phoneme labels back
print(get_phones(np.argmax(thing, axis=0)))
model = QuadraticDiscriminantAnalysis()
model.fit(X_train, y_train)
model.score(X_test, y_test)
rf = RandomForestClassifier(n_estimators=20, max_depth=30, random_state=42)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
params = {"max_depth": 20,
"eta": 0.3,
"num_class": 61,
"gamma": 1,
"lambda": 10,
"alpha": 10}
params["objective"] = "multi:softmax"
params["eval_metric"] = "merror"
params['nthread'] = 4
evallist = [(dtest, 'eval'), (dtrain, 'train')]
num_round = 1
bst = xgb.train(params, dtrain, num_round, evallist)
y_pred = bst.predict(dtest)
score = sum(y_pred == y_test) / len(y_test)
print(score)
class PCA:
def __init__(self, X, y=None, s=None, sparse=False, center=True):
self.X = X - X.mean(axis=0) if center else X
self.y = y
self.sparse = sparse
if s == None:
n, d = X.shape
s = d
if not self.sparse:
U, sig, Vh = la.svd(self.X, full_matrices=False)
self.sig = sig[:s]**2
self.Vh = Vh[:s]
else:
_, sig, Vh = spla.svds(self.X, k=s, return_singular_vectors="vh")
self.sig = sig**2
self.Vh = Vh
self.a = self.transform(self.X)
self.proj_X = self.project(self.X)
        if self.y is not None:
self.y_dict = {}
A = np.vstack([self.a, self.y])
for yj in np.unique(self.y):
curr_a = A.T[A[s] == yj][:, :s]
if len(curr_a) != 0:
self.y_dict[yj] = curr_a.mean(axis=0)
def update_s(self, s):
if not self.sparse:
U, sig, Vh = la.svd(self.X, full_matrices=False)
self.sig = sig[:s]**2
self.Vh = Vh[:s]
else:
_, sig, Vh = spla.svds(self.X, k=s, return_singular_vectors="vh")
self.sig = sig**2
self.Vh = Vh
self.a = self.transform(self.X)
self.proj_X = self.project(self.X)
        if self.y is not None:
A = np.vstack([self.a, self.y])
for yj in np.unique(self.y):
curr_a = A.T[A[s] == yj][:, :s]
if len(curr_a) != 0:
self.y_dict[yj] = curr_a.mean(axis=0)
def transform(self, x):
return [email protected]
def project(self, x):
return [email protected]
def predict(self, x):
a_test = self.Vh @ x.T
predicted = []
for a in a_test.T:
similarities = []
for key, value in self.y_dict.items():
similarity = np.dot(a, value) / (la.norm(a)*la.norm(value))
similarities.append((similarity, key))
predicted.append(max(similarities, key=lambda x: x[0])[1])
return predicted
pca_timit = PCA(X_train, y=y_train)
n, d = X_train.shape
fig = plt.figure()
plt.plot(np.arange(d), pca_timit.sig)
plt.xlabel("number of components")
plt.ylabel("explained variance")
plt.title("TIMIT data scree plot")
plt.show()
# print(pca_timit.sig / pca_timit.sig.sum())
i = 1
p = 0
while p < .9:
p = np.sum(pca_timit.sig[:i]) / np.sum(pca_timit.sig)
# print(i, p)
i+=1
pca_timit.update_s(2)
A = np.vstack([pca_timit.a, pca_timit.y])
for i in np.unique(y_train):
curr_a = A.T[A[2] == i][:, :2]
plt.scatter(curr_a[:, 0], curr_a[:, 1], alpha=0.3)
plt.title("Plotting 2 Principal Components")
plt.show()
pca_train = PCA(X_train, y=y_train, s=20, center=False)
y_pred = pca_train.predict(X_test)
pca_score = sum(y_pred == y_test) / len(y_test)
print("PCA accuracy: {:.3f}".format(pca_score))
| 0.404978 | 0.748168 |
```
# Let's have numpy and visualizations
import datetime
import matplotlib.pyplot as plt
import netCDF4 as nc
import numpy as np
import peewee as pw
import time
def show_raw(nd_array):
"""Simply show the data"""
from matplotlib.pyplot import imshow, show
imshow(nd_array) # interpolation="nearest")
show()
# Let's have 4D data to save / load - in .npy format
data = np.load("slovakia.npy")
# data.shape == (10, 25, 229, 691)
# dimensions: day [0-9], slot [0-24],
# latitude linspace(49.66, 47.70, 229),
# longitude linspace(16.81, 22.61, 691)
show_raw(data[1, 12, :, :]) # first day's noon's slot
# Let's have various .nc files
def make_chunked_dataset(ds, chunksizes=None):
"""In .nc file - create dimensions + variable with given chunksizes"""
ds.createDimension("day", 10)
ds.createDimension("slot", 25)
ds.createDimension("latitude", 229)
ds.createDimension("longitude", 691)
ds.createVariable(varname="slovakia",
datatype=np.int16,
dimensions=("day", "slot", "latitude", "longitude"),
zlib=True,
chunksizes=chunksizes,
fill_value=np.iinfo(np.int16).min)
with nc.Dataset("slovakia_chunked_1_big.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(10,25,229,691))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_4_time.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(10,25,2,2))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_4_space.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(2,2,25,50))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_optimally.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(2,4,8,12))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_minimally.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(1,1,1,1))
# we must do it per partes - else RAM sadly exceeds! :-(((
for i in range(10):
for j in range(25):
ds["slovakia"][i][j] = data[i][j]
# Let's check runtime of the reading Tatry subslice on last day's noon
print("slovakia - numpy internal format")
%timeit np.load("slovakia.npy")
print("slovakia_chunked_1_big.nc")
%timeit nc.Dataset("slovakia_chunked_1_big.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_4_time.nc")
%timeit nc.Dataset("slovakia_chunked_4_time.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_4_space.nc")
%timeit nc.Dataset("slovakia_chunked_4_space.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_optimally.nc")
%timeit nc.Dataset("slovakia_chunked_optimally.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_minimally.nc")
%timeit nc.Dataset("slovakia_chunked_minimally.nc")["slovakia"][-1, 12, 161:190, 329:435]
```
|
github_jupyter
|
# Let's have numpy and visualizations
import datetime
import matplotlib.pyplot as plt
import netCDF4 as nc
import numpy as np
import peewee as pw
import time
def show_raw(nd_array):
"""Simply show the data"""
from matplotlib.pyplot import imshow, show
imshow(nd_array) # interpolation="nearest")
show()
# Let's have 4D data to save / load - in .npy format
data = np.load("slovakia.npy")
# data.shape == (10, 25, 229, 691)
# dimensions: day [0-9], slot [0-24],
# latitude linspace(49.66, 47.70, 229),
# longitude linspace(16.81, 22.61, 691)
show_raw(data[1, 12, :, :]) # first day's noon's slot
# Let's have various .nc files
def make_chunked_dataset(ds, chunksizes=None):
"""In .nc file - create dimensions + variable with given chunksizes"""
ds.createDimension("day", 10)
ds.createDimension("slot", 25)
ds.createDimension("latitude", 229)
ds.createDimension("longitude", 691)
ds.createVariable(varname="slovakia",
datatype=np.int16,
dimensions=("day", "slot", "latitude", "longitude"),
zlib=True,
chunksizes=chunksizes,
fill_value=np.iinfo(np.int16).min)
with nc.Dataset("slovakia_chunked_1_big.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(10,25,229,691))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_4_time.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(10,25,2,2))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_4_space.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(2,2,25,50))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_optimally.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(2,4,8,12))
ds["slovakia"][...] = data
with nc.Dataset("slovakia_chunked_minimally.nc", "w") as ds:
make_chunked_dataset(ds, chunksizes=(1,1,1,1))
# we must do it per partes - else RAM sadly exceeds! :-(((
for i in range(10):
for j in range(25):
ds["slovakia"][i][j] = data[i][j]
# Let's check runtime of the reading Tatry subslice on last day's noon
print("slovakia - numpy internal format")
%timeit np.load("slovakia.npy")
print("slovakia_chunked_1_big.nc")
%timeit nc.Dataset("slovakia_chunked_1_big.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_4_time.nc")
%timeit nc.Dataset("slovakia_chunked_4_time.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_4_space.nc")
%timeit nc.Dataset("slovakia_chunked_4_space.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_optimally.nc")
%timeit nc.Dataset("slovakia_chunked_optimally.nc")["slovakia"][-1, 12, 161:190, 329:435]
print("slovakia_chunked_minimally.nc")
%timeit nc.Dataset("slovakia_chunked_minimally.nc")["slovakia"][-1, 12, 161:190, 329:435]
| 0.324985 | 0.767211 |
# First Last - Fitting Data
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
```
# Power on the Moon
<img src="images/ApolloRTG.jpg" alt="Apollo_ALSEP_RTG" style="width: 500px;"/>
----
* The Apollo lunar mission deployed a series of experiments on the Moon.
* The experiment package was called the Apollo Lunar Surface Experiments Package [(ALSEP)](https://en.wikipedia.org/wiki/Apollo_Lunar_Surface_Experiments_Package)
* The ALSEP was powered by a radioisotope thermoelectric generator [(RTG)](https://en.wikipedia.org/wiki/Radioisotope_thermoelectric_generator)
----
* An RTG is basically a fist-sized slug of Pu-238 wrapped in a material that generates electric power when heated.
* Since the RTG is powered by a radioisotope, the output power decreases over time as the radioisotope decays.
## Read in the datafile
The data file `/Data/Apollo_RTG.csv` contains the power output of the Apollo 12 RTG as a function of time.
The data columns are
* [Day] - Days on the Moon
* [Power] - RTG power output in Watts
## Plot the Data
* Day vs. Power
* Use the OO interface to matplotlib
* Fit the function with a polynomial (degree >= 3)
* Plot the fit with the data
- Output size w:11in, h:8.5in
- Make the plot look nice (including clear labels)
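One possible (unofficial) sketch of the steps above is shown here; it assumes the columns are literally named `Day` and `Power`, that the file sits at `./Data/Apollo_RTG.csv`, and it reuses the imports from the setup cell at the top of the notebook.
```
# Unofficial sketch only -- column names and file path are assumptions.
rtg = pd.read_csv('./Data/Apollo_RTG.csv')

coeffs = np.polyfit(rtg['Day'], rtg['Power'], deg=3)   # degree-3 polynomial fit
poly = np.poly1d(coeffs)

fig, ax = plt.subplots(figsize=(11, 8.5))              # OO interface, 11in x 8.5in
ax.plot(rtg['Day'], rtg['Power'], '.', label='RTG data')
ax.plot(rtg['Day'], poly(rtg['Day']), label='Degree-3 fit')
ax.set_xlabel('Days on the Moon')
ax.set_ylabel('Power output (W)')
ax.set_title('Apollo 12 RTG power output')
ax.legend()
```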
## Power over time
* All of your answers should be formatted as sentences
* For example: `The power on day 0 is VALUE Watts`
* Do not pick the complex roots!
### 1 - What was the power output on Day 0?
### 2 - How many years after landing could you still power a 60 W lightbulb?
### 3 - How many years after landing could you still power a 5 W USB device?
### 4 - How many years after landing until the power output is 0 W?
---
# Fitting data to a function
* The datafile `./Data/linedata.csv` contains two columns of data
* Use the OO interface to matplotlib
* Plot the data (with labels!)
* Fit the function below to the data
* Find the values `(A,C,W)` that best fit the data
- Output size w:11in, h:8.5in
- Make the plot look nice (including clear labels)
----
#### Fit a gaussian of the form:
$$ \Large f(x) = A e^{-\frac{(x - C)^2}{W}} $$
* A = amplitude of the gaussian
* C = x-value of the central peak of the gaussian
* W = width of the gaussian
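A minimal `curve_fit` sketch for this model is shown below (unofficial; it assumes the first column of `linedata.csv` is x and the second is y, and reuses the imports from the setup cell).
```
# Unofficial sketch -- column order is an assumption.
line = pd.read_csv('./Data/linedata.csv')
x, y = line.iloc[:, 0].values, line.iloc[:, 1].values

def gaussian(x, A, C, W):
    # f(x) = A * exp(-(x - C)**2 / W)
    return A * np.exp(-(x - C)**2 / W)

# Rough initial guesses: peak height, peak location, and a width scale.
p0 = [y.max(), x[np.argmax(y)], (x.max() - x.min()) / 10]
(A_fit, C_fit, W_fit), _ = curve_fit(gaussian, x, y, p0=p0)
print(A_fit, C_fit, W_fit)
```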
---
# Stellar Spectra
#### The file `./Data/StarData.csv` is a spectra of a main sequence star
* Col 1 - Wavelength `[angstroms]`
* Col 2 - Flux `[normalized to 0->1]`
#### Read in the Data
#### Plot the Data
* Use the OO interface to matplotlib
* Output size w:11in, h:8.5in
* Make the plot look nice (including clear labels and a legend)
#### Use [Wien's law](https://en.wikipedia.org/wiki/Wien%27s_displacement_law) to determine the temperature of the Star
* You will need to find the wavelength where the Flux is at a maximum
* Use the Astropy units and constants - do not hardcode
```
from astropy import units as u
from astropy import constants as const
```
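A hedged sketch of the temperature estimate follows; it assumes the columns are in the order listed above and uses Astropy's Wien displacement constant `const.b_wien` rather than a hardcoded value.
```
# Unofficial sketch -- column order is an assumption.
star = pd.read_csv('./Data/StarData.csv')
wavelength = star.iloc[:, 0].values * u.AA   # Col 1 - Wavelength [angstroms]
flux = star.iloc[:, 1].values                # Col 2 - normalized Flux

peak_wavelength = wavelength[np.argmax(flux)]   # wavelength of maximum flux

# Wien's displacement law: lambda_max * T = b  =>  T = b / lambda_max
temperature = (const.b_wien / peak_wavelength).to(u.K)
print(temperature)
```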
### Due Mon Nov 4 - 1 pm
- `Make sure to change the filename to your name!`
- `Make sure to change the Title to your name!`
- `File -> Download as -> HTML (.html)`
- `upload your .html and .ipynb file to the class Canvas page`
----
# <font color=blue>Ravenclaw</font>
#### Planck's Law
* [Planck's Law](https://en.wikipedia.org/wiki/Planck%27s_law) describes the spectrum emitted by a blackbody at a temperature T
* Calculate the blackbody flux at the above temperature at all of your data_wavelength points
* Scale the blackbody flux to `[0->1]`
#### Plot the Data and the Blackbody fit on the same plot
* Use the OO interface to matplotlib
* Output size w:11in, h:8.5in
* Make the plot look nice (including clear labels and a legend)
|
github_jupyter
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from astropy import units as u
from astropy import constants as const
| 0.464902 | 0.980562 |
<a href="https://www.kaggle.com/code/ayodejiyekeen/house-price-prediction-pytorch?scriptVersionId=91072601" target="_blank"><img align="left" alt="Kaggle" title="Open in Kaggle" src="https://kaggle.com/static/images/open-in-kaggle.svg"></a>
# House Price Prediction - Advanced Regression
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
path = "../input/house-prices-advanced-regression-techniques/"
train = path + "train.csv"
test = path + "test.csv"
sample_sub = path + "sample_submission.csv"
train_df = pd.read_csv(train)
test_df = pd.read_csv(test)
sample = pd.read_csv(sample_sub)
test_df["SalePrice"] = sample["SalePrice"]
merged_df = test_df.merge(sample, how = "inner", on = ["Id","SalePrice"])
merged_df.head()
combined_df = pd.concat([train_df, merged_df])
combined_df.head()
plt.figure(figsize=(25,8))
sns.heatmap(combined_df.isna())
plt.show()
extreme_cols = [k for k,v in dict(zip(combined_df.columns, combined_df.isnull().sum())).items() if v > 1000 ]
extreme_cols
# drop columns with more than 1000 missing values
for col in extreme_cols:
combined_df.drop(col, inplace=True, axis=1)
combined_df.shape
missing_cols = [k for k,v in dict(zip(combined_df.columns, combined_df.isnull().sum())).items() if v > 0 ]
len(missing_cols)
# categorical columns with missing values
cat_col = [col for col in missing_cols if combined_df[col].dtype == object]
# numerical columns with missing values
num_col = [col for col in missing_cols if combined_df[col].dtype != object]
for col in num_col:
combined_df[col] = combined_df[col].fillna(combined_df[col].mean())
for col_ in cat_col:
combined_df[col_] = combined_df[col_].fillna(combined_df[col_].mode()[0])
combined_df.isna().any().sum()
plt.figure(figsize=(25,8))
sns.heatmap(combined_df.isna())
plt.show()
from pandas_profiling import ProfileReport
```
## Feature Engineering
```
for i in combined_df.columns:
print(f'{i} has {combined_df[i].nunique()} unique values.')
import datetime
year = datetime.datetime.now().year
year
combined_df['TotalYears'] = year - combined_df['YearBuilt']
combined_df.drop('YearBuilt', axis=1, inplace=True)
```
Treating all columns with fewer than 26 unique values as categorical features.
```
cat_features = [col for col in combined_df if (combined_df[col].nunique() < 26)]
len(cat_features)
from sklearn.preprocessing import LabelEncoder
lbl_encoders = {}
for feature in cat_features:
lbl_encoders[feature] = LabelEncoder()
combined_df[feature] = lbl_encoders[feature].fit_transform(combined_df[feature])
combined_df.head()
cat_values = np.stack([combined_df[i].values for i in cat_features],axis=1)
cat_values.shape
## Convert from numpy arrays to tensors
import torch
cat_values = torch.tensor(cat_values, dtype=torch.int64)
cat_values
## create a list of all continuous features
cont_features = [col for col in combined_df if (combined_df[col].nunique() > 26) & ( col != 'SalePrice') & (col != 'Id')]
len(cont_features)
for col in cont_features:
print(col , combined_df[col].dtype, combined_df[col].nunique())
## Stacking the continous features and the converting to a tensor
cont_values = np.stack([combined_df[i].values for i in cont_features], axis=1)
cont_values = torch.tensor(cont_values, dtype=torch.float)
cont_values
# Label
y = torch.tensor(combined_df['SalePrice'].values, dtype=torch.float).view(-1,1)
y
cat_values.shape, cont_values.shape, y.shape
```
### Embedding
#### Embedding size for categorical columns
```
cat_dims = [combined_df[col].nunique() for col in cat_features]
embedding_dim = [(x, min(50, (x+1) // 2)) for x in cat_dims]
import torch.nn as nn
import torch.nn.functional as F
embedding_repr = nn.ModuleList([nn.Embedding(inp, out) for inp, out in embedding_dim])
embedding_val = []
for i, e in enumerate(embedding_repr):
embedding_val.append(e(cat_values[:,i]))
z = torch.cat(embedding_val, 1)
z
## Implementing dropout
dropout = nn.Dropout(.4)
final_embedded = dropout(z)
final_embedded
## Create a feed forward neural network
class FeedForwardNN(nn.Module):
def __init__(self, embedding_dim, n_cont, out_sz, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(inp, out) for inp, out in embedding_dim])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
layerlist = []
n_emb = sum((out for inp, out in embedding_dim))
n_in = n_emb + n_cont
for i in layers:
layerlist.append(nn.Linear(n_in, i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1], out_sz))
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.emb_drop(x)
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
x = self.layers(x)
return x
torch.manual_seed(100)
model = FeedForwardNN(embedding_dim, len(cont_features), 1, [100,50], p=0.4)
model
```
### Define Loss and Optimizer
```
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
test_categorical = cat_values[1460 : , : ]
train_categorical = cat_values[ : 1460, :]
cat_values.shape, train_categorical.shape , test_categorical.shape
test_continuous = cont_values[1460 : , : ]
train_continuous = cont_values[ : 1460, :]
cont_values.shape, train_continuous.shape, test_continuous.shape
y_test = y[1460 : , : ]
y_train = y[ : 1460, :]
y.shape, y_train.shape, y_test.shape
EPOCHS = 5000
final_losses = []
for i in range(EPOCHS):
i = i+1
y_pred = model(train_categorical, train_continuous)
loss = torch.sqrt(loss_function(y_pred, y_train)) ## RMSE
final_losses.append(loss)
if i % 50 == 0:
print(f'Epoch number: {i} Loss: {loss.item()}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
plt.plot(range(EPOCHS), [loss.item() for loss in final_losses])
plt.ylabel('RMSE Loss')
plt.xlabel('EPOCH')
plt.show()
# Validate the test data
y_pred = ''
with torch.no_grad():
y_pred = model(test_categorical, test_continuous)
loss = torch.sqrt(loss_function(y_pred, y_test))
print(f'RMSE: {loss}')
y_pred = model(test_categorical, test_continuous)
y_pred.shape
sample.shape
sample['SalePrice'] = y_pred.detach().numpy()
sample.to_csv('Submission_2.csv', index=False)
```
|
github_jupyter
|
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
path = "../input/house-prices-advanced-regression-techniques/"
train = path + "train.csv"
test = path + "test.csv"
sample_sub = path + "sample_submission.csv"
train_df = pd.read_csv(train)
test_df = pd.read_csv(test)
sample = pd.read_csv(sample_sub)
test_df["SalePrice"] = sample["SalePrice"]
merged_df = test_df.merge(sample, how = "inner", on = ["Id","SalePrice"])
merged_df.head()
combined_df = pd.concat([train_df, merged_df])
combined_df.head()
plt.figure(figsize=(25,8))
sns.heatmap(combined_df.isna())
plt.show()
extreme_cols = [k for k,v in dict(zip(combined_df.columns, combined_df.isnull().sum())).items() if v > 1000 ]
extreme_cols
# drop columns with more than 1000 missing values
for col in extreme_cols:
combined_df.drop(col, inplace=True, axis=1)
combined_df.shape
missing_cols = [k for k,v in dict(zip(combined_df.columns, combined_df.isnull().sum())).items() if v > 0 ]
len(missing_cols)
# categorical columns with missing values
cat_col = [col for col in missing_cols if combined_df[col].dtype == object]
# numerical columns with missing values
num_col = [col for col in missing_cols if combined_df[col].dtype != object]
for col in num_col:
combined_df[col] = combined_df[col].fillna(combined_df[col].mean())
for col_ in cat_col:
combined_df[col_] = combined_df[col_].fillna(combined_df[col_].mode()[0])
combined_df.isna().any().sum()
plt.figure(figsize=(25,8))
sns.heatmap(combined_df.isna())
plt.show()
from pandas_profiling import ProfileReport
for i in combined_df.columns:
print(f'{i} has {combined_df[i].nunique()} unique values.')
import datetime
year = datetime.datetime.now().year
year
combined_df['TotalYears'] = year - combined_df['YearBuilt']
combined_df.drop('YearBuilt', axis=1, inplace=True)
cat_features = [col for col in combined_df if (combined_df[col].nunique() < 26)]
len(cat_features)
from sklearn.preprocessing import LabelEncoder
lbl_encoders = {}
for feature in cat_features:
lbl_encoders[feature] = LabelEncoder()
combined_df[feature] = lbl_encoders[feature].fit_transform(combined_df[feature])
combined_df.head()
cat_values = np.stack([combined_df[i].values for i in cat_features],axis=1)
cat_values.shape
## Convert from numpy arrays to tensors
import torch
cat_values = torch.tensor(cat_values, dtype=torch.int64)
cat_values
## create a list of all continuous features
cont_features = [col for col in combined_df if (combined_df[col].nunique() > 26) & ( col != 'SalePrice') & (col != 'Id')]
len(cont_features)
for col in cont_features:
print(col , combined_df[col].dtype, combined_df[col].nunique())
## Stacking the continous features and the converting to a tensor
cont_values = np.stack([combined_df[i].values for i in cont_features], axis=1)
cont_values = torch.tensor(cont_values, dtype=torch.float)
cont_values
# Label
y = torch.tensor(combined_df['SalePrice'].values, dtype=torch.float).view(-1,1)
y
cat_values.shape, cont_values.shape, y.shape
cat_dims = [combined_df[col].nunique() for col in cat_features]
embedding_dim = [(x, min(50, (x+1) // 2)) for x in cat_dims]
import torch.nn as nn
import torch.nn.functional as F
embedding_repr = nn.ModuleList([nn.Embedding(inp, out) for inp, out in embedding_dim])
embedding_val = []
for i, e in enumerate(embedding_repr):
embedding_val.append(e(cat_values[:,i]))
z = torch.cat(embedding_val, 1)
z
## Implementing dropout
dropout = nn.Dropout(.4)
final_embedded = dropout(z)
final_embedded
## Create a feed forward neural network
class FeedForwardNN(nn.Module):
def __init__(self, embedding_dim, n_cont, out_sz, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(inp, out) for inp, out in embedding_dim])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
layerlist = []
n_emb = sum((out for inp, out in embedding_dim))
n_in = n_emb + n_cont
for i in layers:
layerlist.append(nn.Linear(n_in, i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1], out_sz))
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.emb_drop(x)
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
x = self.layers(x)
return x
torch.manual_seed(100)
model = FeedForwardNN(embedding_dim, len(cont_features), 1, [100,50], p=0.4)
model
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
test_categorical = cat_values[1460 : , : ]
train_categorical = cat_values[ : 1460, :]
cat_values.shape, train_categorical.shape , test_categorical.shape
test_continuous = cont_values[1460 : , : ]
train_continuous = cont_values[ : 1460, :]
cont_values.shape, train_continuous.shape, test_continuous.shape
y_test = y[1460 : , : ]
y_train = y[ : 1460, :]
y.shape, y_train.shape, y_test.shape
EPOCHS = 5000
final_losses = []
for i in range(EPOCHS):
i = i+1
y_pred = model(train_categorical, train_continuous)
loss = torch.sqrt(loss_function(y_pred, y_train)) ## RMSE
final_losses.append(loss)
if i % 50 == 0:
print(f'Epoch number: {i} Loss: {loss.item()}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
plt.plot(range(EPOCHS), [loss.item() for loss in final_losses])
plt.ylabel('RMSE Loss')
plt.xlabel('EPOCH')
plt.show()
# Validate the test data
y_pred = ''
with torch.no_grad():
y_pred = model(test_categorical, test_continuous)
loss = torch.sqrt(loss_function(y_pred, y_test))
print(f'RMSE: {loss}')
y_pred = model(test_categorical, test_continuous)
y_pred.shape
sample.shape
sample['SalePrice'] = y_pred.detach().numpy()
sample.to_csv('Submission_2.csv', index=False)
| 0.653569 | 0.840946 |
# TITLE
### Objective
::**DESCRIPTION OF OBJECTIVE**::
### Environment Setup
Configure notebook style (see NBCONFIG.ipynb), add imports and paths. The **%run** magic used below <font color='red'>**requires IPython 2.0 or higher.**</font>
```
%run NBCONFIG.ipynb
```
#### Step 1: ::Description Step 1::
Explanation of step 1
<hr>
<br>
<div style="float:left; \">
<img src="https://avatars0.githubusercontent.com/u/1972276?s=460"
align=left; text-align:center; style="float:left; margin-left: 5px; margin-top: -25px; width:150px; height:150px" />
</div>
<div style="float:left; \"><a href="https://github.com/hugadams">
<img src="https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png"
align=left; text-align:center; style="float:left; margin-left: 25px; margin-top: -25px; width:75px; height:75px" />
</div>
<div style="float:left; \"><a href="https://twitter.com/hughesadam87">
<img src="http://paymentmagnates.com/wp-content/uploads/2014/04/twitter-icon.png"
align=left; text-align:center; style="float:left; margin-left: 25px; margin-top: -25px; width:75px; height:75px" />
</div>
<div style="float:left; \"><a href="https://www.researchgate.net/profile/Adam_Hughes2?ev=hdr_xprf">
<img src="http://www.txhughes.com/images/button-researchgate.png"
align=left; text-align:center; style="float:left; margin-left: -175px; margin-top: 55px; width:75px; height:75px" />
</div>
<div style="float:left; \"><a href="https://www.linkedin.com/profile/view?id=121484744&trk=nav_responsive_tab_profile_picg">
<img src="http://smallbusinessesdoitbetter.com/wordpress/wp-content/uploads/li.png"
align=left; text-align:center; style="float:left; margin-left: -75px; margin-top: 55px; width:75px; height:75px" />
</div>
<div style="float:center; \"><a href="http://www.gwu.edu/">
<img src="https://raw.githubusercontent.com/hugadams/pyparty/master/pyparty/data/gwu.png"
align=center style="float:center; margin-right: 30px; margin-top: -35px; width:335px; height:180px" />
</div>
<h4 style="margin-top:0px;"> This notebook is free for redistribution. If citing, please reference as: </h4>
- *Hughes, A. (2012). [A Computational Framework for Plasmonic Nanobiosensing](https://www.researchgate.net/publication/236672995_A_Computational_Framework_for_Plasmonic_Nanobiosensing). Python in Science Conference [SCIPY].*
<h3 style="margin-top:30px;"> Questions or Feedback? </h3>
* [email protected]
* [email protected]
* twitter: <a href="https://twitter.com/hughesadam87" target="_blank">@hughesadam87</a>
* <a href="http://www.gwu.edu/~condmat/CME/reeves.html" target="_blank">Mark Reeves Biophysics Group</a>
<h3 style="margin-top:30px;"> References: </h3>
* [1] : **REF 1**
* [2] : **REF 2**
<h3 style="margin-top:30px;"> Related: </h3>
* <a href="http://hugadams.github.io/scikit-spectra/" target="_blank">scikit-spectra: Exploratory Spectral Data Analysis</a>
* <a href="https://github.com/hugadams/pyparty" target="_blank">pyparty: Image Analysis of Particles</a>
* <a href="http://lorenabarba.com/" target="_blank">Lorena A. Barba (GWU Engineering)</a>
* <a href="http://www.youtube.com/watch?v=W7RgkHM-B60" target="_blank">xray: extended arrays for scientific datasets</a>
<h3 style="margin-top:30px;">Notebook styling ideas:</h3>
* <a href="http://blog.louic.nl/?p=683" target="_blank">Louic's web blog</a>
* <a href="https://plot.ly/feed" target="_blank">Plotly</a>
* <a href="http://damon-is-a-geek.com/publication-ready-the-first-time-beautiful-reproducible-plots-with-matplotlib.html" target="_blank">Publication-ready the first time: Beautiful, reproducible plots with Matplotlib</a>
<br>
<hr>
|
github_jupyter
|
%run NBCONFIG.ipynb
| 0.105971 | 0.893867 |
```
from kafka import KafkaProducer
import sys, random, datetime, time, json
import time
from IPython.display import clear_output, display
if __name__ == "__main__":
# Configure the connection settings for the Kafka cluster and create a Kafka Producer instance
producer1 = KafkaProducer(
# Specify the Kafka cluster servers
bootstrap_servers = ["kafka:9092"],
# Specify the serializer for msgKey; if the key is None it cannot be serialized, so pass the value directly via the producer
#key_serializer = str.encode,
# Specify the serializer for msgValue
#value_serializer = str.encode,
value_serializer = lambda m: json.dumps(m).encode('ascii'),
)
producer2 = KafkaProducer(
# Specify the Kafka cluster servers
bootstrap_servers = ["kafka:9092"],
# Specify the serializer for msgKey; if the key is None it cannot be serialized, so pass the value directly via the producer
#key_serializer = str.encode,
# Specify the serializer for msgValue
#value_serializer = str.encode,
value_serializer = lambda m: json.dumps(m).encode('ascii'),
)
print("Start sending messages ...")
while True:
clear_output(wait=True)
try:
device_id = "001"
t = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
temp = round(random.uniform(18, 30), 2)
humidity = random.randint(0, 100)
millis = int(round(time.time() * 1000))
tt = {"device_id": device_id, "timestamp": t, "Temperature": temp, "rd":millis}
hh = {"device_id": device_id, "timestamp": t, "Humidity": humidity, "rd":millis }
            # Build the messages to publish to Kafka
# producer1.send(topic="temperature", value=tt)
# producer2.send(topic="humidity", value=hh)
# producer.send(topic=topicName, key=msgKey, value=msgValue)
future1 = producer1.send(topic="temperature", value=tt)
record_metadata1 = future1.get(timeout=10)
#print(record_metadata1)
future2 = producer2.send(topic="humidity", value=hh)
record_metadata2 = future2.get(timeout=10)
#print(record_metadata2)
print("Message ts: %s, rd: %s sending completed!"%(t,millis))
time.sleep(1)
except Exception as e:
            # Error handling
e_type, e_value, e_traceback = sys.exc_info()
print("type ==> %s" % (e_type))
print("value ==> %s" % (e_value))
print("traceback ==> file name: %s" % (e_traceback.tb_frame.f_code.co_filename))
print("traceback ==> line no: %s" % (e_traceback.tb_lineno))
print("traceback ==> function name: %s" % (e_traceback.tb_frame.f_code.co_name))
#finally:
#time.sleep(1)
    # Close the producer connections
producer1.close()
producer2.close()
```
|
github_jupyter
|
from kafka import KafkaProducer
import sys, random, datetime, time, json
import time
from IPython.display import clear_output, display
if __name__ == "__main__":
    # Configure the connection to the Kafka cluster and create Kafka Producer instances
    producer1 = KafkaProducer(
        # Kafka cluster bootstrap servers
        bootstrap_servers = ["kafka:9092"],
        # Serializer for the message key; a key of None cannot be serialized, so the value is passed to the producer directly
        #key_serializer = str.encode,
        # Serializer for the message value
        #value_serializer = str.encode,
        value_serializer = lambda m: json.dumps(m).encode('ascii'),
    )
    producer2 = KafkaProducer(
        # Kafka cluster bootstrap servers
        bootstrap_servers = ["kafka:9092"],
        # Serializer for the message key; a key of None cannot be serialized, so the value is passed to the producer directly
        #key_serializer = str.encode,
        # Serializer for the message value
        #value_serializer = str.encode,
        value_serializer = lambda m: json.dumps(m).encode('ascii'),
    )
print("Start sending messages ...")
while True:
clear_output(wait=True)
try:
device_id = "001"
t = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
temp = round(random.uniform(18, 30), 2)
humidity = random.randint(0, 100)
millis = int(round(time.time() * 1000))
tt = {"device_id": device_id, "timestamp": t, "Temperature": temp, "rd":millis}
hh = {"device_id": device_id, "timestamp": t, "Humidity": humidity, "rd":millis }
            # Build the messages to publish to Kafka
# producer1.send(topic="temperature", value=tt)
# producer2.send(topic="humidity", value=hh)
# producer.send(topic=topicName, key=msgKey, value=msgValue)
future1 = producer1.send(topic="temperature", value=tt)
record_metadata1 = future1.get(timeout=10)
#print(record_metadata1)
future2 = producer2.send(topic="humidity", value=hh)
record_metadata2 = future2.get(timeout=10)
#print(record_metadata2)
print("Message ts: %s, rd: %s sending completed!"%(t,millis))
time.sleep(1)
except Exception as e:
            # Error handling
e_type, e_value, e_traceback = sys.exc_info()
print("type ==> %s" % (e_type))
print("value ==> %s" % (e_value))
print("traceback ==> file name: %s" % (e_traceback.tb_frame.f_code.co_filename))
print("traceback ==> line no: %s" % (e_traceback.tb_lineno))
print("traceback ==> function name: %s" % (e_traceback.tb_frame.f_code.co_name))
#finally:
#time.sleep(1)
    # Close the producer connections
producer1.close()
producer2.close()
| 0.105867 | 0.173463 |
<img src="./pictures/logo_sizinglab.png" style="float:right; max-width: 60px; display: inline" alt="SizingLab" /></a>
# Motor selection
*Written by Marc Budinger (INSA Toulouse) and Scott Delbecq (ISAE-SUPAERO), Toulouse, France.*
The **Sympy** package lets us work with symbolic calculations.
```
from math import pi
from sympy import Symbol
from sympy import *
```
## Design graph
The following diagram represents the design graph for the motor selection. The mean speed/thrust (Ωmoy & Tmoy), the max speed/thrust (Ωmax & Tmax) and the battery voltage are assumed to be known here.

> **Questions:**
* Give the 2 main sizing problems you are able to detect here.
* Propose one or multiple solutions (which can request equation manipulation, addition of design variables, addition of constraints)
* Orientate the arrows and write equations order, inputs/outputs at each step of this part of sizing procedure, additional constraints
### Sizing code and optimization
> Exercise: propose a sizing code for the selection of a motor.
```
# Specifications
# Reference parameters for scaling laws
# Motor reference
# Ref : AXI 5325/16 GOLD LINE
T_nom_mot_ref = 2.32 # [N.m] rated torque
T_max_mot_ref = 85./70.*T_nom_mot_ref # [N.m] max torque
R_mot_ref = 0.03 # [Ohm] resistance
M_mot_ref = 0.575 # [kg] mass
K_mot_ref = 0.03 # [N.m/A] torque coefficient
T_mot_fr_ref = 0.03 # [N.m] friction torque (zero load, nominal speed)
# Assumption
T_pro_to=1.0#[N.m] Propeller Torque during takeoff
Omega_pro_to=1.0#[rad/s] Propeller speed during takeoff
T_pro_hov=1.0#[N.m] Propeller Torque during hover
Omega_pro_hov=1.0#[rad/s] Propeller speed during hover
U_bat_est= 4.0#[V] Battery voltage value (estimation)
```
Define the design variables as a symbol under `variableExample= Symbol('variableExample')`
```
#Design variables
k_mot=Symbol('k_mot')#[-] over sizing coefficient on the motor torque (1,400)
k_speed_mot=Symbol('k_speed_mot')#[-] over sizing coefficient on the motor speed (1,10)
#Equations:
#-----
T_nom_mot = k_mot * T_pro_hov # [N.m] Motor nominal torque per propeller
M_mot = M_mot_ref * (T_nom_mot/T_nom_mot_ref)**(3./3.5) # [kg] Motor mass
# Selection with take-off speed
K_mot = U_bat_est / (k_speed_mot*Omega_pro_to) # [N.m/A] or [V/(rad/s)] Kt motor
R_mot = R_mot_ref * (T_nom_mot/T_nom_mot_ref)**(-5./3.5)*(K_mot/K_mot_ref)**2. # [Ohm] motor resistance
T_mot_fr = T_mot_fr_ref * (T_nom_mot/T_nom_mot_ref)**(3./3.5) # [N.m] Friction torque
T_max_mot = T_max_mot_ref * (T_nom_mot/T_nom_mot_ref)
# Hover current and voltage
I_mot_hov = (T_pro_hov+T_mot_fr) / K_mot # [I] Current of the motor per propeller
U_mot_hov = R_mot*I_mot_hov + Omega_pro_hov*K_mot # [V] Voltage of the motor per propeller
P_el_mot_hov = U_mot_hov*I_mot_hov # [W] Hover : electrical power
# Takeoff current and voltage
I_mot_to = (T_pro_to+T_mot_fr) / K_mot # [I] Current of the motor per propeller
U_mot_to = R_mot*I_mot_to + Omega_pro_to*K_mot # [V] Voltage of the motor per propeller
P_el_mot_to = U_mot_to*I_mot_to # [W] Takeoff : electrical power
```
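The expressions above remain symbolic in `k_mot` and `k_speed_mot`. As a quick sanity check they can be evaluated numerically with sympy's `subs`; the sketch below uses purely illustrative values for the two over-sizing coefficients, not an actual design point.

```python
# Illustrative trial values for the design variables (assumptions, not optimized results)
trial = {k_mot: 2.0, k_speed_mot: 1.5}

print("Motor mass M_mot [kg]:", float(M_mot.subs(trial)))
print("Hover electrical power P_el_mot_hov [W]:", float(P_el_mot_hov.subs(trial)))
print("Takeoff electrical power P_el_mot_to [W]:", float(P_el_mot_to.subs(trial)))
```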
|
github_jupyter
|
from math import pi
from sympy import Symbol
from sympy import *
# Specifications
# Reference parameters for scaling laws
# Motor reference
# Ref : AXI 5325/16 GOLD LINE
T_nom_mot_ref = 2.32 # [N.m] rated torque
T_max_mot_ref = 85./70.*T_nom_mot_ref # [N.m] max torque
R_mot_ref = 0.03 # [Ohm] resistance
M_mot_ref = 0.575 # [kg] mass
K_mot_ref = 0.03 # [N.m/A] torque coefficient
T_mot_fr_ref = 0.03 # [N.m] friction torque (zero load, nominal speed)
# Assumption
T_pro_to=1.0#[N.m] Propeller Torque during takeoff
Omega_pro_to=1.0#[rad/s] Propeller speed during takeoff
T_pro_hov=1.0#[N.m] Propeller Torque during hover
Omega_pro_hov=1.0#[rad/s] Propeller speed during hover
U_bat_est= 4.0#[V] Battery voltage value (estimation)
#Design variables
k_mot=Symbol('k_mot')#[-] over sizing coefficient on the motor torque (1,400)
k_speed_mot=Symbol('k_speed_mot')#[-] over sizing coefficient on the motor speed (1,10)
#Equations:
#-----
T_nom_mot = k_mot * T_pro_hov # [N.m] Motor nominal torque per propeller
M_mot = M_mot_ref * (T_nom_mot/T_nom_mot_ref)**(3./3.5) # [kg] Motor mass
# Selection with take-off speed
K_mot = U_bat_est / (k_speed_mot*Omega_pro_to) # [N.m/A] or [V/(rad/s)] Kt motor
R_mot = R_mot_ref * (T_nom_mot/T_nom_mot_ref)**(-5./3.5)*(K_mot/K_mot_ref)**2. # [Ohm] motor resistance
T_mot_fr = T_mot_fr_ref * (T_nom_mot/T_nom_mot_ref)**(3./3.5) # [N.m] Friction torque
T_max_mot = T_max_mot_ref * (T_nom_mot/T_nom_mot_ref)
# Hover current and voltage
I_mot_hov = (T_pro_hov+T_mot_fr) / K_mot # [I] Current of the motor per propeller
U_mot_hov = R_mot*I_mot_hov + Omega_pro_hov*K_mot # [V] Voltage of the motor per propeller
P_el_mot_hov = U_mot_hov*I_mot_hov # [W] Hover : electrical power
# Takeoff current and voltage
I_mot_to = (T_pro_to+T_mot_fr) / K_mot # [I] Current of the motor per propeller
U_mot_to = R_mot*I_mot_to + Omega_pro_to*K_mot # [V] Voltage of the motor per propeller
P_el_mot_to = U_mot_to*I_mot_to # [W] Takeoff : electrical power
| 0.610221 | 0.96682 |
# Classifying Objects w/ `Convolutional Neural Network (CNN)`
### Import dependencies
```
import os
import sys
from datetime import datetime as dt
import numpy as np
import tensorflow as tf
from dataset import ImageDataset
```
### Load dataset
```
# directories definition
data_dir = 'datasets/101_ObjectCategories/'
save_dir = 'saved/'
task_dir = os.path.join(save_dir, 'classify/convnet')
save_file = os.path.join(save_dir, 'data.pkl')
data = ImageDataset(data_dir=data_dir, size=64, grayscale=True, flatten=True)
# data.create()
# data.save(save_file=save_file, force=True)
data = data.load(save_file=save_file)
# visualize data
data.visualize(data.images[:9], name='Image data', smooth=True, cmap='gray')
```
### Hyperparameters
```
# inputs
img_size = data.size
img_channel = data.channel
img_size_flat = img_size * img_size * img_channel
num_classes = data.num_classes
print(f'Image data »»»\tsize: {img_size:,} channels: {img_channel} flattened: {img_size_flat:,}')
print(f'Label data »»»\tclasses: {num_classes:,}')
# Network
stride = 2
kernel_size = 5
conv1_size = 16
conv2_size = 32
fc1_size = 256
fc2_size = 124
keep_prob = 0.8
# Training
batch_size = 24
learning_rate = .01
save_interval = 100
log_interval = 1000
iterations = 10000
```
## Building the network
```
def network(image, is_training=False):
with tf.name_scope('network'):
net = tf.reshape(image, shape=[-1, img_size, img_size, img_channel])
net = tf.contrib.layers.conv2d(net, conv1_size, kernel_size=kernel_size, stride=stride)
net = tf.contrib.layers.batch_norm(net, is_training=is_training)
net = tf.contrib.layers.conv2d(net, conv2_size, kernel_size=kernel_size, stride=stride)
net = tf.contrib.layers.batch_norm(net, is_training=is_training)
net = tf.contrib.layers.flatten(net)
net = tf.contrib.layers.fully_connected(net, fc1_size)
if is_training:
net = tf.nn.dropout(net, keep_prob=keep_prob)
net = tf.contrib.layers.fully_connected(net, fc2_size)
net = tf.contrib.layers.fully_connected(net, num_classes, activation_fn=None)
return net
```
### Model's placeholder
```
tf.reset_default_graph()
with tf.name_scope('data'):
X = tf.placeholder(tf.float32, shape=[None, img_size_flat])
y = tf.placeholder(tf.float32, shape=[None, num_classes])
```
### Loss function
```
with tf.name_scope('loss_function'):
logits = network(X, is_training=True)
x_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(x_entropy)
```
### Optimizer
```
with tf.name_scope('optimizer'):
global_step = tf.Variable(0, trainable=False, name='global_step')
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss, global_step=global_step)
```
### Accuracy
```
with tf.name_scope('accuracy'):
y_true = tf.argmax(y, axis=1)
y_pred = tf.argmax(tf.nn.softmax(logits), axis=1)
correct = tf.equal(y_true, y_pred)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
```
## Running the computation Graph
```
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
```
### Tensorboard
```
# tensorboard
tensorboard_dir = os.path.join(task_dir, 'tensorboard/')
logdir = os.path.join(tensorboard_dir, 'log')
# pre-trained model
model_dir = os.path.join(task_dir, 'models/')
model_file = os.path.join(model_dir, 'model.ckpt')
# Summary
tf.summary.scalar('loss', loss)
tf.summary.scalar('accuracy', accuracy)
merged = tf.summary.merge_all()
# saver and writer
saver = tf.train.Saver()
writer = tf.summary.FileWriter(logdir=logdir, graph=sess.graph)
```
### Restoring latest checkpoint
```
if tf.gfile.Exists(model_dir):
try:
print('Attempting to restore last checkpoint...')
last_ckpt = tf.train.latest_checkpoint(model_dir)
saver.restore(sess, save_path=last_ckpt)
print(f'INFO: Restored checkpoint from {last_ckpt}\n')
except Exception as e:
sys.stderr.write(f'ERR: Could not restore checkpoint. {e}\n')
sys.stderr.flush()
else:
tf.gfile.MakeDirs(model_dir)
print(f'INFO: Created checkpoint dir: {model_dir}')
```
### Training
```
start_time = dt.now()
for i in range(iterations):
X_batch, y_batch = data.next_batch(batch_size=batch_size, shuffle=True)
feed_dict = {X: X_batch, y: y_batch}
_, _acc, _i_global = sess.run([train, accuracy, global_step], feed_dict=feed_dict)
if i%save_interval == 0:
saver.save(sess=sess, save_path=model_file, global_step=global_step)
summary = sess.run(merged, feed_dict=feed_dict)
writer.add_summary(summary=summary, global_step=_i_global)
sys.stdout.write(f'\rIter: {i+1:,}\tGlobal: {_i_global:,}\tAcc: {_acc:.2%}'
f'\tTime: {dt.now() - start_time}')
sys.stdout.flush()
```
|
github_jupyter
|
import os
import sys
from datetime import datetime as dt
import numpy as np
import tensorflow as tf
from dataset import ImageDataset
# directories definition
data_dir = 'datasets/101_ObjectCategories/'
save_dir = 'saved/'
task_dir = os.path.join(save_dir, 'classify/convnet')
save_file = os.path.join(save_dir, 'data.pkl')
data = ImageDataset(data_dir=data_dir, size=64, grayscale=True, flatten=True)
# data.create()
# data.save(save_file=save_file, force=True)
data = data.load(save_file=save_file)
# visualize data
data.visualize(data.images[:9], name='Image data', smooth=True, cmap='gray')
# inputs
img_size = data.size
img_channel = data.channel
img_size_flat = img_size * img_size * img_channel
num_classes = data.num_classes
print(f'Image data »»»\tsize: {img_size:,} channels: {img_channel} flattened: {img_size_flat:,}')
print(f'Label data »»»\tclasses: {num_classes:,}')
# Network
stride = 2
kernel_size = 5
conv1_size = 16
conv2_size = 32
fc1_size = 256
fc2_size = 124
keep_prob = 0.8
# Training
batch_size = 24
learning_rate = .01
save_interval = 100
log_interval = 1000
iterations = 10000
def network(image, is_training=False):
with tf.name_scope('network'):
net = tf.reshape(image, shape=[-1, img_size, img_size, img_channel])
net = tf.contrib.layers.conv2d(net, conv1_size, kernel_size=kernel_size, stride=stride)
net = tf.contrib.layers.batch_norm(net, is_training=is_training)
net = tf.contrib.layers.conv2d(net, conv2_size, kernel_size=kernel_size, stride=stride)
net = tf.contrib.layers.batch_norm(net, is_training=is_training)
net = tf.contrib.layers.flatten(net)
net = tf.contrib.layers.fully_connected(net, fc1_size)
if is_training:
net = tf.nn.dropout(net, keep_prob=keep_prob)
net = tf.contrib.layers.fully_connected(net, fc2_size)
net = tf.contrib.layers.fully_connected(net, num_classes, activation_fn=None)
return net
tf.reset_default_graph()
with tf.name_scope('data'):
X = tf.placeholder(tf.float32, shape=[None, img_size_flat])
y = tf.placeholder(tf.float32, shape=[None, num_classes])
with tf.name_scope('loss_function'):
logits = network(X, is_training=True)
x_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(x_entropy)
with tf.name_scope('optimizer'):
global_step = tf.Variable(0, trainable=False, name='global_step')
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss, global_step=global_step)
with tf.name_scope('accuracy'):
y_true = tf.argmax(y, axis=1)
y_pred = tf.argmax(tf.nn.softmax(logits), axis=1)
correct = tf.equal(y_true, y_pred)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
# tensorboard
tensorboard_dir = os.path.join(task_dir, 'tensorboard/')
logdir = os.path.join(tensorboard_dir, 'log')
# pre-trained model
model_dir = os.path.join(task_dir, 'models/')
model_file = os.path.join(model_dir, 'model.ckpt')
# Summary
tf.summary.scalar('loss', loss)
tf.summary.scalar('accuracy', accuracy)
merged = tf.summary.merge_all()
# saver and writer
saver = tf.train.Saver()
writer = tf.summary.FileWriter(logdir=logdir, graph=sess.graph)
if tf.gfile.Exists(model_dir):
try:
print('Attempting to restore last checkpoint...')
last_ckpt = tf.train.latest_checkpoint(model_dir)
saver.restore(sess, save_path=last_ckpt)
print(f'INFO: Restored checkpoint from {last_ckpt}\n')
except Exception as e:
sys.stderr.write(f'ERR: Could not restore checkpoint. {e}\n')
sys.stderr.flush()
else:
tf.gfile.MakeDirs(model_dir)
print(f'INFO: Created checkpoint dir: {model_dir}')
start_time = dt.now()
for i in range(iterations):
X_batch, y_batch = data.next_batch(batch_size=batch_size, shuffle=True)
feed_dict = {X: X_batch, y: y_batch}
_, _acc, _i_global = sess.run([train, accuracy, global_step], feed_dict=feed_dict)
if i%save_interval == 0:
saver.save(sess=sess, save_path=model_file, global_step=global_step)
summary = sess.run(merged, feed_dict=feed_dict)
writer.add_summary(summary=summary, global_step=_i_global)
sys.stdout.write(f'\rIter: {i+1:,}\tGlobal: {_i_global:,}\tAcc: {_acc:.2%}'
f'\tTime: {dt.now() - start_time}')
sys.stdout.flush()
| 0.481454 | 0.80479 |
# PART 1: The super basics
```
import ipycytoscape
import json
import ipywidgets
```
What is a graph?
Mathematical structures used to model pairwise relations between objects.
Examples:
- Twitter connections.
- Rail net of a country
- Post system of a country
- Facebook connections.
Nomenclature:
The basic nomenclature consists of (based on an example of the train rail system):
- nodes (rail stations) and
- edges (rail connections between train stations)
Let's use ipycytoscape to dive into graphs.
One way to create an ipycytoscape graph is using a JSON input as follows:
(We will be following the train-rail example)
Later on it might become clear that other ways to pass data to ipycytoscape are not only possible but probably desirable in many circumstances. For the moment we intend to create a really small graph to get up and running understanding graphs and ipycytoscape.
Moreover be aware that normally the data itself is in an external separate file, but if we would proceed reading the data from an external file we would not be able to see it in the notebook and it would not serve the teaching purpose.
```
# we create the graph that is an object of ipycytoscape
ipycytoscape_obj = ipycytoscape.CytoscapeWidget()
```
### The data
```
railnet= '''{
"nodes": [
{"data": { "id": "BER" }},
{"data": { "id": "MUN"}},
{"data": { "id": "FRA"}},
{"data": { "id": "HAM"}}
],
"edges": [
{"data": { "source": "BER", "target": "MUN" }},
{"data": { "source": "MUN", "target": "FRA" }},
{"data": { "source": "FRA", "target": "BER" }},
{"data": { "source": "BER", "target": "HAM" }}
]
}'''
print(type(railnet))
railnetJSON = json.loads(railnet)
```
Let's see our mini German rail system that joins four main German cities: BERlin, MUNich, FRAnkfurt and HAMburg
```
ipycytoscape_obj.graph.add_graph_from_json(railnetJSON)
ipycytoscape_obj
```
Some observations:
- The train stations have a color (but we did not specify that)
- Between the train stations there is a connection (edge) representing the rail. Think about this: it can be unidirectional (the train goes in only one direction) or bidirectional (the connection works in both directions). This is what "directionality" stands for.
- We don't know which station is which (no names)
Let's try to solve those problems.
IMPORTANT NOTE: Multiple graphs are being created so it's possible for you to compare the results.
## Directionality
What would the JSON file have looked like if we didn't want directionality?
Compare the two graphs below: the first example uses directionality and the second doesn't.
```
ipycytoscape_obj2 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj2.graph.add_graph_from_json(railnetJSON, directed=True) # here I do want directions
ipycytoscape_obj2
ipycytoscape_obj3 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj3.graph.add_graph_from_json(railnetJSON, directed=False) # here I don't want directions
ipycytoscape_obj3
```
## Adding names
Lets say we want to see the names of the stations on top of the nodes.
Those names are called labels.
It's necessary to add the corresponding labels to all the nodes.
```
railnet= '''{
"nodes": [
{"data": { "id": "BER", "label":"HBf BER"}},
{"data": { "id": "MUN", "label":"HBf MUN"}},
{"data": { "id": "FRA", "label":"HBf FRA"}},
{"data": { "id": "HAM", "label":"HBf HAM"}}
],
"edges": [
{"data": { "source": "BER", "target": "MUN" }},
{"data": { "source": "MUN", "target": "FRA" }},
{"data": { "source": "FRA", "target": "BER" }},
{"data": { "source": "BER", "target": "HAM" }}
]
}'''
railnetJSON = json.loads(railnet)
ipycytoscape_obj4 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj4.graph.add_graph_from_json(railnetJSON, directed=False) # I am telling I dont want directions
ipycytoscape_obj4
```
Hmmm, as you can see we did not achieve our objective of adding the names of the stations, the stations being the nodes of the graph. Be aware that in all examples of this notebook "station" and "node" might be used interchangeably, "node" being the graph-technical term and the rail station its real-life counterpart. (By the way, Hbf stands for the central main station in German.)
To change the appearance of the graph we not only have to change the graph's data but also its style.
```
my_style = [
{'selector': 'node','style': {
'font-family': 'helvetica',
'font-size': '20px',
'label': 'data(label)'}},
]
```
What are we doing here?
We're writing the style of each one of the labels. With `data(label)` we specify that the `label` property of each element's `data` attribute should be printed with font-size 20px and the Helvetica font family.
Let's create a new graph with the labels.
Here we're using CSS nomenclature.
You select one element and pass a style to that element.
```
ipycytoscape_obj5 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj5.graph.add_graph_from_json(railnetJSON, directed=False) # I am telling I dont want directions
ipycytoscape_obj5.set_style(my_style)
ipycytoscape_obj5
```
Let's just play around and change the size of the font and the type of font.
Now we want to change the style of an existing graph, namely graph number 5.
```
ipycytoscape_obj6 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj6.graph.add_graph_from_json(railnetJSON, directed=False) # We're specifying that the graph should be undirected
ipycytoscape_obj6.set_style(my_style)
ipycytoscape_obj6 # which is the same as graph 5, but in the next cell we will try to change it
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'red'}},
]
ipycytoscape_obj6 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj6.graph.add_graph_from_json(railnetJSON, directed=False) # We're specifying that the graph should be undirected
ipycytoscape_obj6.set_style(my_style)
ipycytoscape_obj6 # which is the same
```
As you can see when running the previous cell the appearance of the graph changed: the `font-family` is different, the `font-size` is smaller, and the circles are now red.
The first question that comes to mind is whether one can change the attributes of only one node.
Let's see.
```
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'red'}},
{'selector': 'node[id = "BER"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'green'}}
]
ipycytoscape_obj7 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj7.graph.add_graph_from_json(railnetJSON, directed=False) # I am telling I dont want directions
ipycytoscape_obj7.set_style(my_style)
ipycytoscape_obj7.set_style(my_style)
ipycytoscape_obj7
```
#### What did we do?
We gave a particular style to ALL the nodes (`'selector': 'node'`) and afterwards we gave the color green just to the Berlin central station node: `'node[id = "BER"]'`
As you can see the way to refer to the node is 'node[id = "BER"]'.
## More customization
What else can we change in the appearance?
There are quite a few other attributes of the graph that we can change.
Let's assume that the train connections between the cities are as follows:
- BER - HAM -> 300km/h
- BER - MUN -> 200km/h
- MUN - FRA -> 100km/h
- FRA - BER -> 250km/h
We can also add information to the edges.
It is necessary to add labels to the edges and also to identify every edge.
```
railnet= '''{
"nodes": [
{"data": { "id": "BER", "label":"HBf BER"}},
{"data": { "id": "MUN", "label":"HBf MUN"}},
{"data": { "id": "FRA", "label":"HBf FRA"}},
{"data": { "id": "HAM", "label":"HBf HAM"}}
],
"edges": [
{"data": { "id": "line1", "source": "BER", "target": "MUN","label":"200km/h"}},
{"data": { "id": "line2", "source": "MUN", "target": "FRA","label":"200km/h"}},
{"data": { "id": "line3", "source": "FRA", "target": "BER","label":"250km/h" }},
{"data": { "id": "line4", "source": "BER", "target": "HAM","label":"300km/h" }}
]
}'''
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'red'}},
{'selector': 'node[id = "BER"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'green'}},
{'selector': 'edge[id = "line1"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'edge[id = "line2"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'edge[id = "line3"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'edge[id = "line4"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}}
]
railnetJSON = json.loads(railnet)
ipycytoscape_obj8 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj8.graph.add_graph_from_json(railnetJSON, directed=True) # We're specifying that the graph should be directed
ipycytoscape_obj8.set_style(my_style)
ipycytoscape_obj8.set_style(my_style)
ipycytoscape_obj8
```
## Classes. What are they and what do they do?
Imagine we want to divide the rail net into two parts.
- cities belonging to the former east Germany and
- cities belonging to the former west Germany
And that we also want to paint these nodes in one go with a particular color. Meaning that we don't want to paint node by node but "paint all the west cities blue and east cities green" at once.
We can use classes for that.
We add a class to each node.
Let's revisit how to do this using an example from the very beginning.
```
railnet= '''{
"nodes": [
{"data": { "id": "BER", "label":"HBf BER"}, "classes":"east"},
{"data": { "id": "MUN", "label":"HBf MUN"}, "classes":"west"},
{"data": { "id": "FRA", "label":"HBf FRA"}, "classes":"west"},
{"data": { "id": "HAM", "label":"HBf HAM"}, "classes":"west"},
{"data": { "id": "LEP", "label":"HBf LEP"}, "classes":"east"}
],
"edges": [
{"data": { "id": "line1", "source": "BER", "target": "MUN","label":"200km/h"}},
{"data": { "id": "line2", "source": "MUN", "target": "FRA","label":"200km/h"}},
{"data": { "id": "line3", "source": "FRA", "target": "BER","label":"250km/h" }},
{"data": { "id": "line4", "source": "BER", "target": "HAM","label":"300km/h" }},
{"data": { "id": "line5", "source": "BER", "target": "LEP","label":"300km/h" }}
]
}'''
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'node.east','style': {
'background-color': 'yellow'}},
{'selector': 'node.west','style': {
'background-color': 'blue'}},
{'selector': 'node[id = "BER"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'green'}},
{'selector': 'edge[id = "line1"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line2"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line3"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line4"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line5"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}}
]
railnetJSON = json.loads(railnet)
ipycytoscape_obj9 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj9.graph.add_graph_from_json(railnetJSON, directed=True) # We're specifying that the graph should be directed
ipycytoscape_obj9.set_style(my_style)
ipycytoscape_obj9
```
What happened?
With
`{'selector': 'node.east', 'style': {'background-color': 'yellow'}}`
we painted all east German cities yellow. BER as well, but its yellow is overwritten by the green that is assigned specifically to the BER node.
### Next
There is still a lot to uncover from ipycytoscape's functionalities:
- How to change attributes programmatically? For instance, with an input field for the number of passengers, the user could enter a value and the rail station's node could then turn red if the number of passengers per day is greater than 200000
- How to add and delete elements of the graph: A new station is built in a city called Cologne. How to add that node and several edges to an existing rail net.
- How to add events: You are building an application for the rail company and they want to display information about the station when the user hovers over the node.
stay tuned.
|
github_jupyter
|
import ipycytoscape
import json
import ipywidgets
# we create the graph that is an object of ipycytoscape
ipycytoscape_obj = ipycytoscape.CytoscapeWidget()
railnet= '''{
"nodes": [
{"data": { "id": "BER" }},
{"data": { "id": "MUN"}},
{"data": { "id": "FRA"}},
{"data": { "id": "HAM"}}
],
"edges": [
{"data": { "source": "BER", "target": "MUN" }},
{"data": { "source": "MUN", "target": "FRA" }},
{"data": { "source": "FRA", "target": "BER" }},
{"data": { "source": "BER", "target": "HAM" }}
]
}'''
print(type(railnet))
railnetJSON = json.loads(railnet)
ipycytoscape_obj.graph.add_graph_from_json(railnetJSON)
ipycytoscape_obj
ipycytoscape_obj2 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj2.graph.add_graph_from_json(railnetJSON, directed=True) # here I do want directions
ipycytoscape_obj2
ipycytoscape_obj3 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj3.graph.add_graph_from_json(railnetJSON, directed=False) # I am telling I dont want directions
ipycytoscape_obj3
railnet= '''{
"nodes": [
{"data": { "id": "BER", "label":"HBf BER"}},
{"data": { "id": "MUN", "label":"HBf MUN"}},
{"data": { "id": "FRA", "label":"HBf FRA"}},
{"data": { "id": "HAM", "label":"HBf HAM"}}
],
"edges": [
{"data": { "source": "BER", "target": "MUN" }},
{"data": { "source": "MUN", "target": "FRA" }},
{"data": { "source": "FRA", "target": "BER" }},
{"data": { "source": "BER", "target": "HAM" }}
]
}'''
railnetJSON = json.loads(railnet)
ipycytoscape_obj4 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj4.graph.add_graph_from_json(railnetJSON, directed=False) # I am telling I dont want directions
ipycytoscape_obj4
my_style = [
{'selector': 'node','style': {
'font-family': 'helvetica',
'font-size': '20px',
'label': 'data(label)'}},
]
ipycytoscape_obj5 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj5.graph.add_graph_from_json(railnetJSON, directed=False) # I am telling I dont want directions
ipycytoscape_obj5.set_style(my_style)
ipycytoscape_obj5
ipycytoscape_obj6 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj6.graph.add_graph_from_json(railnetJSON, directed=False) # We're specifying that the graph should be undirected
ipycytoscape_obj6.set_style(my_style)
ipycytoscape_obj6 # which is the same as graph 5, but in the next cell we will try to change it
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'red'}},
]
ipycytoscape_obj6 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj6.graph.add_graph_from_json(railnetJSON, directed=False) # We're specifying that the graph should be undirected
ipycytoscape_obj6.set_style(my_style)
ipycytoscape_obj6 # which is the same
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'red'}},
{'selector': 'node[id = "BER"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'green'}}
]
ipycytoscape_obj7 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj7.graph.add_graph_from_json(railnetJSON, directed=False) # I am telling I dont want directions
ipycytoscape_obj7.set_style(my_style)
ipycytoscape_obj7.set_style(my_style)
ipycytoscape_obj7
railnet= '''{
"nodes": [
{"data": { "id": "BER", "label":"HBf BER"}},
{"data": { "id": "MUN", "label":"HBf MUN"}},
{"data": { "id": "FRA", "label":"HBf FRA"}},
{"data": { "id": "HAM", "label":"HBf HAM"}}
],
"edges": [
{"data": { "id": "line1", "source": "BER", "target": "MUN","label":"200km/h"}},
{"data": { "id": "line2", "source": "MUN", "target": "FRA","label":"200km/h"}},
{"data": { "id": "line3", "source": "FRA", "target": "BER","label":"250km/h" }},
{"data": { "id": "line4", "source": "BER", "target": "HAM","label":"300km/h" }}
]
}'''
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'red'}},
{'selector': 'node[id = "BER"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'green'}},
{'selector': 'edge[id = "line1"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'edge[id = "line2"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'edge[id = "line3"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'edge[id = "line4"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}}
]
railnetJSON = json.loads(railnet)
ipycytoscape_obj8 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj8.graph.add_graph_from_json(railnetJSON, directed=True) # We're specifying that the graph should be directed
ipycytoscape_obj8.set_style(my_style)
ipycytoscape_obj8.set_style(my_style)
ipycytoscape_obj8
railnet= '''{
"nodes": [
{"data": { "id": "BER", "label":"HBf BER"}, "classes":"east"},
{"data": { "id": "MUN", "label":"HBf MUN"}, "classes":"west"},
{"data": { "id": "FRA", "label":"HBf FRA"}, "classes":"west"},
{"data": { "id": "HAM", "label":"HBf HAM"}, "classes":"west"},
{"data": { "id": "LEP", "label":"HBf LEP"}, "classes":"east"}
],
"edges": [
{"data": { "id": "line1", "source": "BER", "target": "MUN","label":"200km/h"}},
{"data": { "id": "line2", "source": "MUN", "target": "FRA","label":"200km/h"}},
{"data": { "id": "line3", "source": "FRA", "target": "BER","label":"250km/h" }},
{"data": { "id": "line4", "source": "BER", "target": "HAM","label":"300km/h" }},
{"data": { "id": "line5", "source": "BER", "target": "LEP","label":"300km/h" }}
]
}'''
my_style = [
{'selector': 'node','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',}},
{'selector': 'node.east','style': {
'background-color': 'yellow'}},
{'selector': 'node.west','style': {
'background-color': 'blue'}},
{'selector': 'node[id = "BER"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)',
'background-color': 'green'}},
{'selector': 'edge[id = "line1"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line2"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line3"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line4"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}},
{'selector': 'edge[id = "line5"]','style': {
'font-family': 'arial',
'font-size': '10px',
'label': 'data(label)'}}
]
railnetJSON = json.loads(railnet)
ipycytoscape_obj9 = ipycytoscape.CytoscapeWidget()
ipycytoscape_obj9.graph.add_graph_from_json(railnetJSON, directed=True) # We're specifying that the graph should be directed
ipycytoscape_obj9.set_style(my_style)
ipycytoscape_obj9
| 0.259263 | 0.930142 |
# Day 10 Class Exercises: Supervised Machine Learning
## Background.
For these class exercises, we will be using the wine quality dataset which can be found at this URL:
https://archive.ics.uci.edu/ml/datasets/wine+quality. We will be using the supervised machine learning tools from the lessons to determine a model that can use physicochemical measurements of wine as a predictor of quality. The data for these exercises can be found in the `data` directory of this repository.
<span style="float:right; margin-left:10px; clear:both;"></span> Additionally, with these class exercises we learn a few new things. When new knowledge is introduced you'll see the icon shown on the right:
## Get Started
Import the Numpy, Pandas, Matplotlib (matplotlib magic), Seaborn and sklearn packages.
```
%matplotlib inline
# Data Management
import numpy as np
import pandas as pd
# Visualization
import seaborn as sns
import matplotlib.pyplot as plt
# Machine learning
from sklearn import model_selection
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
```
## Exercise 1. Review the data once more
Load the wine quality data used in the Seaborn class exercises from Day 9. As a reminder, you can read about this dataset from the file [../data/winequality.names](../data/winequality.names)
Next, read in the file named `winequality-red.csv`. This data, despite the `csv` suffix, is separated using a semicolon.
```
wine = pd.read_csv('../data/winequality-red.csv', sep=";")
wine.head()
```
How many samples (observations) do we have?
Are the data types for the columns in the dataframe appropriate for the type of data in each column?
Any missing values?
Any duplicated rows?
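One possible way to answer the four questions above, assuming the `wine` dataframe loaded earlier (a minimal sketch):

```python
# Number of samples (rows) and columns
print(wine.shape)

# Data type of each column
print(wine.dtypes)

# Missing values per column
print(wine.isnull().sum())

# Number of fully duplicated rows
print(wine.duplicated().sum())
```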
## Exercise 2: Explore the Dependent data
The quality column contains our expected outcome. Because we want to predict this score, it is our dependent variable. Wines scored as 0 are considered very bad and wines scored as 10 are very excellent. How many samples are there per each quality of wine?
As a reminder, view the quality distribution using the seaborn barplot. Code similar to the following was used in Day 9 exercises. Adapt it here to fit your variables.
```python
qcounts = wine['quality'].value_counts(sort=False)
sns.barplot(x=qcounts.index, y=qcounts);
```
## Exercise 3: Explore the Independent Data
The independent data includes our physicochemical measurements. As a reminder, let's use a Facet Grid to review the range of values for each of these. Code similar to the following was used in Day 9 exercises. Adapt it here to fit your variables.
```python
# First Melt the data
wine_t = wine.melt(id_vars='quality', var_name='measurement')
# Now create a FacetGrid and add a boxplot to it.
g = sns.FacetGrid(wine_t, col='measurement', col_wrap=6, sharex=False)
g.map(sns.boxplot, 'value', order=None);
```
To get a sense of the distribution shape of each independent data column, use a violin plot as well. Code similar to the following was used in Day 9 exercises. Adapt it here to fit your variables.
```python
g = sns.FacetGrid(wine_t, col='measurement', col_wrap=6, sharex=False)
g.map(sns.violinplot, 'value', order=None);
```
Next, let's look for columns that might show correlation with other columns. Remember, collinear data can bias some supervised machine learning models, so we should remove data columns that are highly correlated. Code similar to the following was used in Day 9 exercises. Adapt it here to fit your variables.
```python
# Limit the plot to only 500 points to help reduce overplotting
sns.pairplot(wine.sample(500), hue='quality', palette="tab10");
```
Perform correlation analysis on the data columns. Exclude the `quality` column from the correlation analysis.
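A minimal sketch of that step; it produces the `wine_cor` matrix that the heatmap and clustermap snippets below expect (the variable name is taken from those snippets):

```python
# Correlation matrix of the physicochemical columns only
wine_cor = wine.drop(columns='quality').corr()
wine_cor.head()
```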
In Day 9 exercises, we used the [seaborn.heatmap](https://seaborn.pydata.org/generated/seaborn.heatmap.html) function to draw a heatmap of correlation values to help us identify columns that are highly correlated. Code similar to the following was used in Day 9 exercises. Adapt it here to fit your variables.
```python
plt.figure(figsize=(10, 10))
sns.heatmap(wine_cor, vmin=-1, vmax=1, annot=True, square=True);
```
<span style="float:right; margin-left:10px; clear:both;"></span>You may be interested in grouping data columns by their similarity profiles. For this, use the Seaborn [seaborn.clustermap](https://seaborn.pydata.org/generated/seaborn.clustermap.html) function instead. It will order the data columns by similarity and provide a dendrogram on both the `x` and `y` axes to indicate relationships of similarity. The following code example will create this plot. Adapt it for your variables.
```python
sns.clustermap(wine_cor, vmin=-1, vmax=1);
```
## Exercise 4: Cleaning the data
In summary, what important observations can we make from the exploration of both the dependent and independent variables in the data?
What type of cleaning decisions should be made?
Is the data Tidy? Do we need to adjust it?
## Exercise 5: Use SML Classification Models
First, separate out the outcome (dependent) variable and our observed (independent) data variables. Save these into variables named `X` and `Y`.
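A minimal sketch, assuming the `wine` dataframe from above:

```python
# Independent variables (the measurements) and dependent variable (the quality score)
X = wine.drop(columns='quality')
Y = wine['quality']
```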
Normalize the observed data. Be sure to use the [normalization strategy](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing) best suited for the observations about the data.
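One possible sketch; `MinMaxScaler` is used here purely as an illustration, so swap in whichever scaler your observations about the distributions suggest (e.g. `StandardScaler` or `RobustScaler`):

```python
# Scale each measurement column; the choice of scaler is an assumption, not a prescription
scaler = preprocessing.MinMaxScaler()
X_scaled = scaler.fit_transform(X)
```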
Generate the training set such that 20% of the data is left for testing and 80% for training. Name the variables with the training data as `Xt` and `Yt` respectively. Name the data used for testing/validation as `Xv` and `Yv`
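A sketch of the split, continuing from the scaled data above (`random_state` is added only to make the example repeatable):

```python
# 80% for training, 20% held back for testing/validation
Xt, Xv, Yt, Yv = model_selection.train_test_split(
    X_scaled, Y, test_size=0.2, random_state=42)
```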
Create a k-fold cross-validation strategy object to be used by the model that will be used to split the training data into 10 equal parts.
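A minimal sketch (again, `random_state` is only there for repeatability):

```python
# 10-fold cross-validation strategy reused by every model below
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=42)
```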
Use the following dictionary to store results:
```python
results = {
'LogisticRegression' : np.zeros(10),
'LinearDiscriminantAnalysis' : np.zeros(10),
'KNeighborsClassifier' : np.zeros(10),
'DecisionTreeClassifier' : np.zeros(10),
'GaussianNB' : np.zeros(10),
'SVC' : np.zeros(10),
'RandomForestClassifier': np.zeros(10)
}
```
Execute a Logistic Regression classifier model
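One way to run this, assuming the `kfold` object and `results` dictionary defined above; the same pattern works for each of the classifiers that follow, just swap in a different estimator and dictionary key:

```python
# Cross-validate logistic regression on the training data and keep the 10 fold scores
alg = LogisticRegression(max_iter=1000)
results['LogisticRegression'] = model_selection.cross_val_score(
    alg, Xt, Yt, cv=kfold, scoring='accuracy')
```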
Execute a Linear Discriminant Analysis classifier model
Execute a K Neighbors classifier model
Execute a Decision Tree classifier model
Execute a GaussianNB classifier model
Execute a Support Vector Machine (SVC) classifier model
Execute a Random Forest classifier model. This is new!
<span style="float:right; margin-left:10px; clear:both;"></span> You've already been introduced to classification trees. A random forest is an extension in that it fits a number of decision tree classifiers on various sub-samples of the dataset and then averages those results. This improves predictive accuracy and controls over-fitting. Learn more at the [sklearn.ensemble.RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) page.
Here's an example for use of the `RandomForestClassifier`:
```python
alg = RandomForestClassifier(n_estimators=100)
```
Plot the results of each of the models. Which performed best?
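A sketch of one way to compare them, assuming the `results` dictionary has been filled in by the models above:

```python
# One box per model: the spread of its 10 cross-validation accuracies
plt.figure(figsize=(12, 5))
plt.boxplot(list(results.values()))
plt.xticks(range(1, len(results) + 1), results.keys(), rotation=45)
plt.ylabel('Cross-validation accuracy')
plt.show()
```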
## Exercise 6: Use the Model to Predict.
Create a new object of the classifier that performed best:
Create a new model by fitting it with the training data (the same data we just used to evaluate all those different models).
Using the testing data, predict the wine quality by providing our testing data. Now that the model has been trained, it will predict a quality score using the smaller validation testing dataset. Save the result in a new variable named `predictions`
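A sketch of the three steps above, assuming the random forest came out on top in your comparison (your best model may differ):

```python
# Train the chosen classifier on the full training set, then predict the held-back wines
best_model = RandomForestClassifier(n_estimators=100)
best_model.fit(Xt, Yt)
predictions = best_model.predict(Xv)
```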
Briefly, let's view the contents of the predictions array.
What is the overall accuracy of the predictions?
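A one-line sketch using the `accuracy_score` helper imported above:

```python
# Fraction of validation wines whose quality score was predicted exactly
print(accuracy_score(Yv, predictions))
```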
Create the confusion matrix and use the Seaborn [heatmap](https://seaborn.pydata.org/generated/seaborn.heatmap.html) function to explore how well the model worked; a sketch follows the list below. (Note, this may take a while to create.) For the heatmap, be sure to
+ Show the values of the confusion matrix in the cells of the heatmap
+ Set the x-axis and y-axis labels.
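A minimal sketch of the confusion matrix heatmap described above:

```python
# Rows are the true quality scores, columns the predicted ones
cm = confusion_matrix(Yv, predictions)

plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted quality')
plt.ylabel('True quality')
plt.show()
```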
Finally, generate and print the classification report
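A final sketch:

```python
# Per-class precision, recall and F1 for the held-back wines
print(classification_report(Yv, predictions))
```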
|
github_jupyter
|
%matplotlib inline
# Data Management
import numpy as np
import pandas as pd
# Visualization
import seaborn as sns
import matplotlib.pyplot as plt
# Machine learning
from sklearn import model_selection
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
wine = pd.read_csv('../data/winequality-red.csv', sep=";")
wine.head()
qcounts = wine['quality'].value_counts(sort=False)
sns.barplot(x=qcounts.index, y=qcounts);
# First Melt the data
wine_t = wine.melt(id_vars='quality', var_name='measurement')
# Now create a FacetGrid and add a boxplot to it.
g = sns.FacetGrid(wine_t, col='measurement', col_wrap=6, sharex=False)
g.map(sns.boxplot, 'value', order=None);
g = sns.FacetGrid(wine_t, col='measurement', col_wrap=6, sharex=False)
g.map(sns.violinplot, 'value', order=None);
# Limit the plot to only 500 points to help reduce overplotting
sns.pairplot(wine.sample(500), hue='quality', palette="tab10");
plt.figure(figsize=(10, 10))
sns.heatmap(wine_cor, vmin=-1, vmax=1, annot=True, square=True);
sns.clustermap(wine_cor, vmin=-1, vmax=1);
results = {
'LogisticRegression' : np.zeros(10),
'LinearDiscriminantAnalysis' : np.zeros(10),
'KNeighborsClassifier' : np.zeros(10),
'DecisionTreeClassifier' : np.zeros(10),
'GaussianNB' : np.zeros(10),
'SVC' : np.zeros(10),
'RandomForestClassifier': np.zeros(10)
}
alg = RandomForestClassifier(n_estimators=100)
| 0.676406 | 0.990131 |
```
import numpy as np
import pandas as pd
import xgboost as xgb
import datetime as dt
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
import time as tm
# loading original data
shopInfoFile = '../dataset/shop_info.txt'
shopInfo = pd.read_table(shopInfoFile, sep = ',', header = None)
shopInfo.columns = ['shopID', 'city', 'locationID', 'perPay', 'score', 'commentCnt', 'shopLevel', 'cate1', 'cate2', 'cate3']
#load training and testing data
payTH = pd.read_csv('../preprocess/payTH_parallel.txt', sep=" ", header = None)
trainFile = '../preprocess/trainValidFeatures_ensemble.csv'
testFile = '../preprocess/validFeatures_ensemble.csv'
trainData = pd.read_csv(trainFile, header = None)
testData = pd.read_csv(testFile, header = None)
# preparing training set and validation set
periods = [7, 14, 28, 56, 112]
stats = ['mean', 'std', 'skew', 'kurtosis']
recentDataColumns = []
for period in periods:
for stat in stats:
column = 'last' + str(period) + 'days_' + stat
recentDataColumns.append(column)
periods = [7, 14, 28]
stats = ['meanView', 'stdView', 'skewView', 'kurtosisView']
recentDataViewColumns = []
for period in periods:
for stat in stats:
column = 'last' + str(period) + 'days_' + stat
recentDataViewColumns.append(column)
periods = [7, 14, 28, 56, 112]
trends = ['copy', 'ridge']
currentTrendcolumns = []
for period in periods:
for trend in trends:
column = 'last' + str(period) + 'days_' + trend
currentTrendcolumns.append(column)
primaryKey = ['shopID', 'year', 'month', 'day']
columnDic = {
'basicInfo':['city', 'perPay', 'score', 'commentCnt', 'shopLevel', 'category'],
'recentData':recentDataColumns,
'recentDataView':recentDataViewColumns,
'currentTrend':currentTrendcolumns,
'temporalInfo':['dayOfWeek', 'holiday', 'numHolidayLast', 'numHolidayCur', 'numHolidayNext'],
'weather':['maxTemp', 'minTemp', 'weather', 'pm']
}
ensembleCol = ['shopID', 'year', 'month', 'day']
orderCol = ['basicInfo', 'recentData', 'temporalInfo', 'currentTrend', 'weather', 'recentDataView']
for col in orderCol:
ensembleCol = ensembleCol + columnDic[col]
trainData.columns = ensembleCol
testData.columns = ensembleCol
startDateTrain = dt.date(2016, 9, 20)
endDateTrain = dt.date(2016, 10, 17)
startDateTest = dt.date(2016, 10, 18)
endDateTest = dt.date(2016, 10, 31)
startDate = dt.date(2015, 7, 1)
endDate = dt.date(2016, 10, 31)
startTrain = (startDateTrain - startDate).days
endTrain = (endDateTrain - startDate).days
startValid = (startDateTest - startDate).days
endValid = (endDateTest - startDate).days
trainLabel = payTH[np.arange(startTrain, endTrain + 1)].values.reshape(1, -1)[0]
validLabel = payTH[np.arange(startValid, endValid + 1)].values.reshape(1, -1)[0]
```
# Data preprocessing
```
def detectNaN(a):
    # Print the index of every column that still contains a NaN.
    # Note: the inner loop runs to len(a) - 1, so the last row is not inspected.
    for i in range(len(a[0])):
        e = True
        for j in range(len(a) - 1):
            if np.isnan(a[j][i]):
                e = False
                break
        if (not e):
            print(i)

def replace(a):
    # Replace each NaN with the value from the previous row of the same column.
    # For the first row, index j - 1 wraps around to the last row.
    for i in range(len(a[0])):
        e = True
        for j in range(len(a)):
            if np.isnan(a[j][i]):
                a[j][i] = a[j - 1][i]
    return a
# preprocessing training set
trainDataArray = np.array(trainData)
trainDataArrayProcessed = np.delete(trainDataArray, [1, 2], 1)
trainDataProcessed = replace(trainDataArrayProcessed)
detectNaN(trainDataProcessed)
scaler = StandardScaler()
scaler.fit(trainDataProcessed)
trainDataNormalized = scaler.transform(trainDataProcessed)
detectNaN(trainDataNormalized)
# preprocessing validation set
testDataArray = np.array(testData)
testDataArrayProcessed = np.delete(testDataArray, [1, 2], 1)
testDataProcessed = replace(testDataArrayProcessed)
detectNaN(testDataProcessed)
scaler = StandardScaler()
scaler.fit(testDataProcessed)
testDataNormalized = scaler.transform(testDataProcessed)
detectNaN(testDataNormalized)
```
# Parameter selection
```
neighbors = [2, 5, 10, 20, 50]
preds_knn = []
eval_knn = []
recordNum = len(validLabel)
for num in neighbors:
rgs_knn = KNeighborsRegressor(n_neighbors=num)
rgs_knn.fit(trainDataNormalized, trainLabel)
pred_knn = rgs_knn.predict(testDataNormalized)
preds_knn.append(pred_knn)
evaluation = abs((validLabel - pred_knn)/(validLabel + pred_knn)).sum()/recordNum
eval_knn.append(evaluation)
print(num, evaluation)
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import xgboost as xgb
import datetime as dt
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
import time as tm
# loading original data
shopInfoFile = '../dataset/shop_info.txt'
shopInfo = pd.read_table(shopInfoFile, sep = ',', header = None)
shopInfo.columns = ['shopID', 'city', 'locationID', 'perPay', 'score', 'commentCnt', 'shopLevel', 'cate1', 'cate2', 'cate3']
#load training and testing data
payTH = pd.read_csv('../preprocess/payTH_parallel.txt', sep=" ", header = None)
trainFile = '../preprocess/trainValidFeatures_ensemble.csv'
testFile = '../preprocess/validFeatures_ensemble.csv'
trainData = pd.read_csv(trainFile, header = None)
testData = pd.read_csv(testFile, header = None)
# preparing training set and validation set
periods = [7, 14, 28, 56, 112]
stats = ['mean', 'std', 'skew', 'kurtosis']
recentDataColumns = []
for period in periods:
for stat in stats:
column = 'last' + str(period) + 'days_' + stat
recentDataColumns.append(column)
periods = [7, 14, 28]
stats = ['meanView', 'stdView', 'skewView', 'kurtosisView']
recentDataViewColumns = []
for period in periods:
for stat in stats:
column = 'last' + str(period) + 'days_' + stat
recentDataViewColumns.append(column)
periods = [7, 14, 28, 56, 112]
trends = ['copy', 'ridge']
currentTrendcolumns = []
for period in periods:
for trend in trends:
column = 'last' + str(period) + 'days_' + trend
currentTrendcolumns.append(column)
primaryKey = ['shopID', 'year', 'month', 'day']
columnDic = {
'basicInfo':['city', 'perPay', 'score', 'commentCnt', 'shopLevel', 'category'],
'recentData':recentDataColumns,
'recentDataView':recentDataViewColumns,
'currentTrend':currentTrendcolumns,
'temporalInfo':['dayOfWeek', 'holiday', 'numHolidayLast', 'numHolidayCur', 'numHolidayNext'],
'weather':['maxTemp', 'minTemp', 'weather', 'pm']
}
ensembleCol = ['shopID', 'year', 'month', 'day']
orderCol = ['basicInfo', 'recentData', 'temporalInfo', 'currentTrend', 'weather', 'recentDataView']
for col in orderCol:
ensembleCol = ensembleCol + columnDic[col]
trainData.columns = ensembleCol
testData.columns = ensembleCol
startDateTrain = dt.date(2016, 9, 20)
endDateTrain = dt.date(2016, 10, 17)
startDateTest = dt.date(2016, 10, 18)
endDateTest = dt.date(2016, 10, 31)
startDate = dt.date(2015, 7, 1)
endDate = dt.date(2016, 10, 31)
startTrain = (startDateTrain - startDate).days
endTrain = (endDateTrain - startDate).days
startValid = (startDateTest - startDate).days
endValid = (endDateTest - startDate).days
trainLabel = payTH[np.arange(startTrain, endTrain + 1)].values.reshape(1, -1)[0]
validLabel = payTH[np.arange(startValid, endValid + 1)].values.reshape(1, -1)[0]
def detectNaN(a):
for i in range(len(a[0])):
e = True
for j in range(len(a) - 1):
if np.isnan(a[j][i]):
e = False
break
if (not e):
print(i)
def replace(a):
for i in range(len(a[0])):
e = True
for j in range(len(a)):
if np.isnan(a[j][i]):
a[j][i] = a[j - 1][i]
return a
# preprocessing training set
trainDataArray = np.array(trainData)
trainDataArrayProcessed = np.delete(trainDataArray, [1, 2], 1)
trainDataProcessed = replace(trainDataArrayProcessed)
detectNaN(trainDataProcessed)
scaler = StandardScaler()
scaler.fit(trainDataProcessed)
trainDataNormalized = scaler.transform(trainDataProcessed)
detectNaN(trainDataNormalized)
# preprocessing validation set
testDataArray = np.array(testData)
testDataArrayProcessed = np.delete(testDataArray, [1, 2], 1)
testDataProcessed = replace(testDataArrayProcessed)
detectNaN(testDataProcessed)
scaler = StandardScaler()
scaler.fit(testDataProcessed)
testDataNormalized = scaler.transform(testDataProcessed)
detectNaN(testDataNormalized)
neighbors = [2, 5, 10, 20, 50]
preds_knn = []
eval_knn = []
recordNum = len(validLabel)
for num in neighbors:
rgs_knn = KNeighborsRegressor(n_neighbors=num)
rgs_knn.fit(trainDataNormalized, trainLabel)
pred_knn = rgs_knn.predict(testDataNormalized)
preds_knn.append(pred_knn)
evaluation = abs((validLabel - pred_knn)/(validLabel + pred_knn)).sum()/recordNum
eval_knn.append(evaluation)
print(num, evaluation)
| 0.307046 | 0.646635 |
# Labs 5
## 2) Combining (sequencing) I/O 'actions': the >> (`then`) and >>= (`bind`) operators, and do notation
**1. Write equivalents of `echo3` and `dialog` that use `do` notation.**
**echo3 without do notation**
```
echo3 :: IO()
echo3 = getLine >>= \l1 -> getLine >>= \l2 -> putStrLn $ l1 ++ l2
```
**echo3 using do notation**
```
echo3' :: IO()
echo3' = do
l1 <- getLine
l2 <- getLine
putStrLn $ l1 ++ l2
```
**dialog without do notation**
```
dialog :: IO()
dialog = putStr "What is your happy number? "
>> getLine
>>= \choice -> let num = read choice :: Int in
if num == 7
then putStrLn "Ah, lucky 7!"
else if odd num
then putStrLn "Odd number! That's most people's choice..."
else putStrLn "Hm, even number? Unusual!"
```
**dialog using do notation**
```
dialog' :: IO()
dialog' = do
putStr "What is your happy number? "
choice <- getLine
let num = read choice :: Int in
if num == 7
then putStrLn "Ah, lucky 7!"
else if odd num
then putStrLn "Odd number! That's most people's choice..."
else putStrLn "Hm, even number? Unusual!"
```
**2. Write an equivalent of `twoQuestions` without using `do` notation.**
**twoQuestions using do notation**
```
twoQuestions :: IO ()
twoQuestions = do
putStr "What is your name? "
name <- getLine
putStr "How old are you? "
age <- getLine
print (name,age)
```
**twoQuestions without do notation**
```
twoQuestions :: IO ()
twoQuestions = putStr "What is your name? "
>> getLine
>>= \name -> putStr "How old are you? "
>> getLine
>>= \age -> print(name, age)
```
**3. Write an 'action' `getLine'` equivalent to `getLine` from the `Prelude` library.**
```
getLine' :: IO String
getLine' = do
char <- getChar
if char == '\n'
then return ""
else do
            line <- getLine'
            return (char:line)
```
## 6) Functors 2: making user-defined types instances of the `Functor` class
**4. Check whether a `Functor` instance can be generated automatically for the `MyList` type (`deriving` clause)**
```
{-# LANGUAGE DeriveFunctor #-}
data MyList a = EmptyList
| Cons a (MyList a) deriving (Show, Functor)
list = Cons 5 (Cons 3 (Cons 29 EmptyList))
fmap (+5) list
```
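For comparison, if the `deriving Functor` clause were dropped, an equivalent hand-written instance would look like this (an illustrative sketch, not the compiler's literal output):

```
instance Functor MyList where
  fmap _ EmptyList   = EmptyList
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)
```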
**5. Write your own functor implementation (`instance Functor`), then check whether it can be generated automatically, for a binary tree defined as**
**data BinTree a = EmptyBT | NodeBT a (BinTree a) (BinTree a) deriving (Show)**
```
data BinTree a = EmptyBT
| NodeBT a (BinTree a) (BinTree a) deriving (Show)
instance Functor BinTree where
fmap _ EmptyBT = EmptyBT
fmap f (NodeBT a lt rt) = NodeBT (f a) (fmap f lt) (fmap f rt)
data BinTree' a = EmptyBT'
| NodeBT' a (BinTree' a) (BinTree' a) deriving (Show, Functor)
binTree' = NodeBT' 5 (NodeBT' 4 (NodeBT' 7 EmptyBT' EmptyBT') EmptyBT') EmptyBT'
fmap (*4) binTree'
```
**6. Write functor implementations (`instance Functor`) for the following types:**
**a) newtype Pair b a = Pair { getPair :: (a,b) } -- fmap should change the first element**
```
newtype Pair b a = Pair { getPair :: (a,b) } deriving Show
instance Functor (Pair b) where
fmap f Pair {getPair = (a, b)} = Pair {getPair = (f a, b)}
fmap (*4) (Pair (6, 3))
```
**b) data Tree2 a = EmptyT2 | Leaf a | Node (Tree2 a) a (Tree2 a) deriving Show**
```
data Tree2 a = EmptyT2
| Leaf a
| Node (Tree2 a) a (Tree2 a) deriving Show
instance Functor Tree2 where
fmap _ EmptyT2 = EmptyT2
fmap f (Leaf a) = Leaf (f a)
fmap f (Node lt a rt) = Node (fmap f lt) (f a) (fmap f rt)
tree2 = Node (Leaf 3) 5 (Leaf 9)
fmap (+5) tree2
```
**c) data GTree a = Leaf a | GNode [GTree a] deriving Show**
```
data GTree a = Leaf a
| GNode [GTree a] deriving Show
instance Functor GTree where
    fmap f (Leaf a) = Leaf (f a)
    fmap f (GNode ts) = GNode (map (fmap f) ts)  -- handle any number of subtrees, not just a singleton list
gTree = GNode [GNode [Leaf 7]]
fmap (+4) gTree
```
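A quick sanity check on a node with several subtrees (a case the original singleton-only pattern would not have matched):

```
fmap (+4) (GNode [Leaf 1, GNode [Leaf 2], Leaf 3])
-- GNode [Leaf 5,GNode [Leaf 6],Leaf 7]
```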
## 8) Applicative functors 2: making user-defined types instances of the `Applicative` class
**7. Write an applicative functor implementation (`instance Applicative`) for the type:**
**newtype MyTriple a = MyTriple (a,a,a) deriving Show**
```
newtype MyTriple a = MyTriple (a,a,a) deriving Show
instance Functor MyTriple where
fmap f (MyTriple (a, b, c)) = MyTriple (f a, f b, f c)
instance Applicative MyTriple where
pure a = MyTriple (a,a,a)
(MyTriple (f, g, h)) <*> (MyTriple (a, b, c)) = MyTriple (f a, g b, h c)
fmap (+7) (MyTriple (6, 1, 0))
```
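The `fmap` call above only exercises the `Functor` instance; here are a couple of expressions that actually use `<*>` from the `Applicative` instance:

```
pure (+7) <*> MyTriple (6, 1, 0)                          -- MyTriple (13,8,7)
MyTriple ((+1), (*2), subtract 3) <*> MyTriple (6, 1, 0)  -- MyTriple (7,2,-3)
```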
```
%pylab inline
from constantLowSkill3_cost100_LowReturn import *
plt.plot(detEarning)
Vgrid = np.load("LowSkillWorker3_fineGrid_cost100.npy")
gamma
num = 10000
'''
x = [w,n,m,s,e,o,z]
x = [5,0,0,0,0,0,0]
'''
from jax import random
def simulation(key):
initE = random.choice(a = nE, p=E_distribution, key = key)
initS = random.choice(a = nS, p=S_distribution, key = key)
x = [5, 0, 0, initS, initE, 0, 0]
path = []
move = []
for t in range(T_min, T_max):
_, key = random.split(key)
if t == T_max-1:
_,a = V(t,Vgrid[:,:,:,:,:,:,:,t],x)
else:
_,a = V(t,Vgrid[:,:,:,:,:,:,:,t+1],x)
xp = transition(t,a.reshape((1,-1)),x)
p = xp[:,-1]
x_next = xp[:,:-1]
path.append(x)
move.append(a)
x = x_next[random.choice(a = nS*nE, p=p, key = key)]
path.append(x)
return jnp.array(path), jnp.array(move)
%%time
# simulation part
keys = vmap(random.PRNGKey)(jnp.arange(num))
Paths, Moves = vmap(simulation)(keys)
# x = [w,n,m,s,e,o,z]
# x = [0,1,2,3,4,5,6]
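# (Inferred from the plot labels below: w = wealth, n = 401k balance, m = mortgage balance,
#  s = economic state, e = employment status, o = home ownership, z = work experience.)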
ws = Paths[:,:,0].T
ns = Paths[:,:,1].T
ms = Paths[:,:,2].T
ss = Paths[:,:,3].T
es = Paths[:,:,4].T
os = Paths[:,:,5].T
zs = Paths[:,:,6].T
cs = Moves[:,:,0].T
bs = Moves[:,:,1].T
ks = Moves[:,:,2].T
hs = Moves[:,:,3].T
actions = Moves[:,:,4].T
plt.plot(range(20, T_max + 21),jnp.mean(zs,axis = 1), label = "experience")
plt.figure(figsize = [16,8])
plt.title("The mean values of simulation")
plt.plot(range(20, T_max + 21),jnp.mean(ws + H*pt*os - ms,axis = 1), label = "wealth + home equity")
plt.plot(range(20, T_max + 21),jnp.mean(ws,axis = 1), label = "wealth")
plt.plot(range(20, T_max + 20),jnp.mean(cs,axis = 1), label = "consumption")
plt.plot(range(20, T_max + 20),jnp.mean(bs,axis = 1), label = "bond")
plt.plot(range(20, T_max + 20),jnp.mean(ks,axis = 1), label = "stock")
plt.legend()
plt.title("housing consumption")
plt.plot(range(20, T_max + 20),(hs).mean(axis = 1), label = "housing")
plt.title("housing consumption for renting peole")
plt.plot(hs[:, jnp.where(os.sum(axis = 0) == 0)[0]].mean(axis = 1), label = "housing")
plt.title("house owner percentage in the population")
plt.plot(range(20, T_max + 21),(os).mean(axis = 1), label = "owning")
jnp.where(os[T_max - 1, :] == 0)
# agent number, x = [w,n,m,s,e,o]
agentNum = 35
plt.figure(figsize = [16,8])
plt.plot(range(20, T_max + 21),(ws + os*(H*pt - ms))[:,agentNum], label = "wealth + home equity")
plt.plot(range(20, T_max + 21),ws[:,agentNum], label = "wealth")
plt.plot(range(20, T_max + 21),ns[:,agentNum], label = "401k")
plt.plot(range(20, T_max + 21),ms[:,agentNum], label = "mortgage")
plt.plot(range(20, T_max + 20),cs[:,agentNum], label = "consumption")
plt.plot(range(20, T_max + 20),bs[:,agentNum], label = "bond")
plt.plot(range(20, T_max + 20),ks[:,agentNum], label = "stock")
plt.plot(range(20, T_max + 21),os[:,agentNum]*100, label = "ownership", color = "k")
plt.legend()
plt.plot(range(20, T_max + 21),ss[:,agentNum], label = "economic state")
states = jnp.array(list(range(8)), dtype = int8)
bondReturn = r_b[states]
bondReturn
expectedStockReturn = jnp.dot(Ps, r_k)
plt.figure(figsize = [12, 6])
plt.title("Bond return and expected stock return at different states")
plt.plot(range(8),bondReturn, label = "Bond returns")
plt.plot(range(8),expectedStockReturn, label = "Expected stock returns")
plt.legend()
import pandas as pd
investmentRatio = np.zeros((nS, T_max))
for age in range(0,T_max):
stockRatio = ks[age,:] / (ks[age,:] + bs[age,:])
state = ss[age,:]
list_of_tuples = list(zip(stockRatio, state))
df = pd.DataFrame(list_of_tuples,columns = ['stockRatio', "econState"])
investmentRatio[:,age] = df.groupby("econState").mean().values.flatten()
plt.figure()
for age in range(1,T_max-1, 10):
plt.plot(investmentRatio[:,age],label = str(age + 20))
plt.legend()
age = 50
stockRatio = ks[age,:] / (ks[age,:] + bs[age,:])
state = ss[age,:]
own = os[age,:]
list_of_tuples = list(zip(stockRatio, state, own))
df = pd.DataFrame(list_of_tuples,columns = ['stockRatio', "econState", "own"])
owner = df[df["own"] == 1]
renter = df[df["own"] == 0]
plt.plot(owner.groupby("econState")["stockRatio"].mean().values.flatten(), label = "Owner")
plt.plot(renter.groupby("econState")["stockRatio"].mean().values.flatten(), label = "Renter")
plt.legend()
plt.figure(figsize = [12,6])
plt.title("Stock investment ratio")
plt.plot((es[:T_max,:]*(ks/(ks+bs))).mean(axis = 1), label = "employed")
plt.plot(((1-es[:T_max,:])*(ks/(ks+bs))).mean(axis = 1), label = "unemployed")
plt.legend()
# collect (age, agent) pairs at which an agent buys a house (ownership switches from 0 to 1)
agentTime = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 1)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 1))[0]:
agentTime.append([t, agentNum])
agentTime = jnp.array(agentTime)
# collect (age, agent) pairs at which an agent keeps renting (ownership stays at 0)
agentHold = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 0)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 0))[0]:
agentHold.append([t, agentNum])
agentHold = jnp.array(agentHold)
plt.title("weath level for buyer and renter")
www = (os*(ws+H*pt - ms)).sum(axis = 1)/(os).sum(axis = 1)
for age in range(30):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, ws[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, www[age], color = "green")
plt.scatter(age, ws[renter[:,0], renter[:,1]].mean(),color = "r")
plt.title("employement status for buyer and renter")
for age in range(31):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, es[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, es[renter[:,0], renter[:,1]].mean(),color = "r")
plt.plot((ks>0).mean(axis = 1))
# At every age
plt.plot((os[:T_max,:]*ks/(ks+bs)).sum(axis = 1)/os[:T_max,:].sum(axis = 1), label = "owner")
plt.plot(((1-os[:T_max,:])*ks/(ks+bs)).sum(axis = 1)/(1-os)[:T_max,:].sum(axis = 1), label = "renter")
plt.legend()
# At every age
plt.plot((os[:T_max,:]*ks).sum(axis = 1)/os[:T_max,:].sum(axis = 1), label = "owner")
plt.plot(((1-os[:T_max,:])*ks).sum(axis = 1)/(1-os)[:T_max,:].sum(axis = 1), label = "renter")
plt.legend()
```
# Assignment - Decision Trees and Random Forests

In this assignment, you'll continue building on the previous assignment to predict the price of a house using information like its location, area, no. of rooms etc. You'll use the dataset from the [House Prices - Advanced Regression Techniques](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) competition on [Kaggle](https://kaggle.com).
We'll follow a step-by-step process:
1. Download and prepare the dataset for training
2. Train, evaluate and interpret a decision tree
3. Train, evaluate and interpret a random forest
4. Tune hyperparameters to improve the model
5. Make predictions and save the model
As you go through this notebook, you will find a **???** in certain places. Your job is to replace the **???** with appropriate code or values, to ensure that the notebook runs properly end-to-end and your machine learning model is trained properly without errors.
**Guidelines**
1. Make sure to run all the code cells in order. Otherwise, you may get errors like `NameError` for undefined variables.
2. Do not change variable names, delete cells, or disturb other existing code. It may cause problems during evaluation.
3. In some cases, you may need to add some code cells or new statements before or after the line of code containing the **???**.
4. Since you'll be using a temporary online service for code execution, save your work by running `jovian.commit` at regular intervals.
5. Review the "Evaluation Criteria" for the assignment carefully and make sure your submission meets all the criteria.
6. Questions marked **(Optional)** will not be considered for evaluation and can be skipped. They are for your learning.
7. It's okay to ask for help & discuss ideas on the [community forum](https://jovian.ai/forum/c/zero-to-gbms/gbms-assignment-2/99), but please don't post full working code, to give everyone an opportunity to solve the assignment on their own.
**Important Links**:
- Make a submission here: https://jovian.ai/learn/machine-learning-with-python-zero-to-gbms/assignment/assignment-2-decision-trees-and-random-forests
- Ask questions, discuss ideas and get help here: https://jovian.ai/forum/c/zero-to-gbms/gbms-assignment-2/99
- Review this Jupyter notebook: https://jovian.ai/aakashns/sklearn-decision-trees-random-forests
## How to Run the Code and Save Your Work
**Option 1: Running using free online resources (1-click, recommended):** The easiest way to start executing the code is to click the **Run** button at the top of this page and select **Run on Binder**. This will set up a cloud-based Jupyter notebook server and allow you to modify/execute the code.
**Option 2: Running on your computer locally:** To run the code on your computer locally, you'll need to set up [Python](https://www.python.org), download the notebook and install the required libraries. Click the **Run** button at the top of this page, select the **Run Locally** option, and follow the instructions.
**Saving your work**: You can save a snapshot of the assignment to your [Jovian](https://jovian.ai) profile, so that you can access it later and continue your work. Keep saving your work by running `jovian.commit` from time to time.
```
!pip install jovian --upgrade --quiet
import jovian
jovian.commit(project='python-random-forests-assignment', privacy='secret')
```
Let's begin by installing the required libraries.
```
!pip install opendatasets scikit-learn plotly folium --upgrade --quiet
!pip install pandas numpy matplotlib seaborn --quiet
```
## Download and prepare the dataset for training
```
import os
from zipfile import ZipFile
from urllib.request import urlretrieve
dataset_url = 'https://github.com/JovianML/opendatasets/raw/master/data/house-prices-advanced-regression-techniques.zip'
urlretrieve(dataset_url, 'house-prices.zip')
with ZipFile('house-prices.zip') as f:
f.extractall(path='house-prices')
os.listdir('house-prices')
import pandas as pd
pd.options.display.max_columns = 200
pd.options.display.max_rows = 200
prices_df = pd.read_csv('house-prices/train.csv')
prices_df
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
# Identify input and target columns
input_cols, target_col = prices_df.columns[1:-1], prices_df.columns[-1]
inputs_df, targets = prices_df[input_cols].copy(), prices_df[target_col].copy()
# Identify numeric and categorical columns
numeric_cols = prices_df[input_cols].select_dtypes(include=np.number).columns.tolist()
categorical_cols = prices_df[input_cols].select_dtypes(include='object').columns.tolist()
# Impute and scale numeric columns
imputer = SimpleImputer().fit(inputs_df[numeric_cols])
inputs_df[numeric_cols] = imputer.transform(inputs_df[numeric_cols])
scaler = MinMaxScaler().fit(inputs_df[numeric_cols])
inputs_df[numeric_cols] = scaler.transform(inputs_df[numeric_cols])
# One-hot encode categorical columns
encoder = OneHotEncoder(sparse=False, handle_unknown='ignore').fit(inputs_df[categorical_cols])
encoded_cols = list(encoder.get_feature_names(categorical_cols))
inputs_df[encoded_cols] = encoder.transform(inputs_df[categorical_cols])
# Create training and validation sets
train_inputs, val_inputs, train_targets, val_targets = train_test_split(
inputs_df[numeric_cols + encoded_cols], targets, test_size=0.25, random_state=42)
```
Let's save our work before continuing.
```
jovian.commit()
```
## Decision Tree
> **QUESTION 1**: Train a decision tree regressor using the training set.
```
from sklearn.tree import DecisionTreeRegressor
# Create the model
tree = DecisionTreeRegressor(random_state=42)
# Fit the model to the training data
%time tree.fit(train_inputs, train_targets)
```
Let's save our work before continuing.
```
jovian.commit()
```
> **QUESTION 2**: Generate predictions on the training and validation sets using the trained decision tree, and compute the RMSE loss.
```
from sklearn.metrics import mean_squared_error
tree_train_preds = tree.predict(train_inputs)
tree_train_rmse = mean_squared_error(train_targets, tree_train_preds, squared=False)
tree_val_preds = tree.predict(val_inputs)
tree_val_rmse = mean_squared_error(val_targets, tree_val_preds, squared=False)
print('Train RMSE: {}, Validation RMSE: {}'.format(tree_train_rmse, tree_val_rmse))
```
Let's save our work before continuing.
```
jovian.commit()
```
> **QUESTION 3**: Visualize the decision tree (graphically and textually) and display feature importances as a graph. Limit the maximum depth of graphical visualization to 3 levels.
```
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree, export_text
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
plt.figure(figsize=(30,15))
# Visualize the tree graphically using plot_tree, limited to 3 levels as asked
plot_tree(tree, feature_names=list(train_inputs.columns), max_depth=3, filled=True);
# Visualize the tree textually using export_text
tree_text = export_text(tree, max_depth=10, feature_names=list(train_inputs.columns))
# Display the first few lines
print(tree_text[:2000])
# Check feature importance
tree_importances = tree.feature_importances_
tree_importance_df = pd.DataFrame({
'feature': train_inputs.columns,
'importance': tree_importances
}).sort_values('importance', ascending=False)
tree_importance_df
plt.title('Decision Tree Feature Importance')
sns.barplot(data=tree_importance_df.head(10), x='importance', y='feature');
```
Let's save our work before continuing.
```
jovian.commit()
```
## Random Forests
> **QUESTION 4**: Train a random forest regressor using the training set.
```
from sklearn.ensemble import RandomForestRegressor
# Create the model
rf1 = RandomForestRegressor(n_jobs=-1, random_state=42)
# Fit the model
%time rf1.fit(train_inputs, train_targets)
```
Let's save our work before continuing.
```
jovian.commit()
```
> **QUESTION 5**: Make predictions using the random forest regressor.
```
rf1_train_preds = rf1.predict(train_inputs)
rf1_train_rmse = mean_squared_error(train_targets, rf1_train_preds, squared=False)
rf1_val_preds = rf1.predict(val_inputs)
rf1_val_rmse = mean_squared_error(val_targets, rf1_val_preds, squared=False)
print('Train RMSE: {}, Validation RMSE: {}'.format(rf1_train_rmse, rf1_val_rmse))
```
Let's save our work before continuing.
```
jovian.commit()
```
## Hyperparameter Tuning
Let us now tune the hyperparameters of our model. You can find the hyperparameters for `RandomForestRegressor` here: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
<img src="https://i.imgur.com/EJCrSZw.png" width="480">
Hyperparameters are settings chosen before training (rather than learned from the data) that control the model's capacity and its tendency to overfit or underfit.
Let's define a helper function `test_params` which can test the given value of one or more hyperparameters.
```
def test_params(**params):
model = RandomForestRegressor(random_state=42, n_jobs=-1, **params).fit(train_inputs, train_targets)
train_rmse = mean_squared_error(model.predict(train_inputs), train_targets, squared=False)
val_rmse = mean_squared_error(model.predict(val_inputs), val_targets, squared=False)
return train_rmse, val_rmse
```
It can be used as follows:
```
test_params(n_estimators=20, max_depth=20)
test_params(n_estimators=50, max_depth=10, min_samples_leaf=4, max_features=0.4)
```
Let's also define a helper function to test and plot different values of a single parameter.
```
def test_param_and_plot(param_name, param_values):
train_errors, val_errors = [], []
for value in param_values:
params = {param_name: value}
train_rmse, val_rmse = test_params(**params)
train_errors.append(train_rmse)
val_errors.append(val_rmse)
plt.figure(figsize=(10,6))
plt.title('Overfitting curve: ' + param_name)
plt.plot(param_values, train_errors, 'b-o')
plt.plot(param_values, val_errors, 'r-o')
plt.xlabel(param_name)
plt.ylabel('RMSE')
plt.legend(['Training', 'Validation'])
test_param_and_plot('max_depth', [5, 10, 15, 20, 25, 30, 35, 40])
```
From the above graph, it appears that the best value for `max_depth` is around 20, beyond which the model starts to overfit.
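To narrow this down, you could sweep a finer range around that value with the same helper (an optional extra step, reusing the `test_param_and_plot` function defined above):

```
test_param_and_plot('max_depth', list(range(16, 26)))
```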
Let's save our work before continuing.
```
jovian.commit()
```
> **QUESTION 6**: Use the `test_params` and `test_param_and_plot` functions to experiment with different values of the hyperparameters like `n_estimators`, `max_depth`, `min_samples_split`, `min_samples_leaf`, `min_weight_fraction_leaf`, `max_features`, `max_leaf_nodes`, `min_impurity_decrease`, `min_impurity_split` etc. You can learn more about the hyperparameters here: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
```
test_params(max_depth=20)
test_params(max_depth=21)
test_param_and_plot('max_leaf_nodes', [2**5, 2**10, 2**15])
```
Let's save our work before continuing.
```
jovian.commit()
```
## Training the Best Model
> **QUESTION 7**: Train a random forest regressor model with your best hyperparameters to minimize the validation loss.
```
# Create the model with custom hyperparameters
rf2 = RandomForestRegressor(max_depth=21, n_jobs=-1, random_state=42)
# Train the model
rf2.fit(train_inputs, train_targets)
```
Let's save our work before continuing.
```
jovian.commit()
```
> **QUESTION 8**: Make predictions and evaluate your final model. If you're unhappy with the results, modify the hyperparameters above and try again.
```
rf2_train_preds = rf2.predict(train_inputs)
rf2_train_rmse = mean_squared_error(train_targets, rf2_train_preds, squared=False)
rf2_val_preds = rf2.predict(val_inputs)
rf2_val_rmse = mean_squared_error(val_targets, rf2_val_preds, squared=False)
print('Train RMSE: {}, Validation RMSE: {}'.format(rf2_train_rmse, rf2_val_rmse))
```
Let's also view and plot the feature importances.
```
rf2_importance_df = pd.DataFrame({
'feature': train_inputs.columns,
'importance': rf2.feature_importances_
}).sort_values('importance', ascending=False)
sns.barplot(data=rf2_importance_df.head(), x='importance', y='feature')
rf2_importance_df
```
Let's save our work before continuing.
```
jovian.commit()
```
## Make a Submission
To make a submission, just execute the following cell:
```
jovian.submit('zerotogbms-a2')
```
You can also submit your Jovian notebook link on the assignment page: https://jovian.ai/learn/machine-learning-with-python-zero-to-gbms/assignment/assignment-2-decision-trees-and-random-forests
Make sure to review the evaluation criteria carefully. You can make any number of submissions, and only your final submission will be evaluated.
Ask questions, discuss ideas and get help here: https://jovian.ai/forum/c/zero-to-gbms/gbms-assignment-2/99
NOTE: **The rest of this assignment is optional.**
## Making Predictions on the Test Set
Let's make predictions on the test set provided with the data.
```
test_df = pd.read_csv('house-prices/test.csv')
test_df
```
First, we need to reapply all the preprocessing steps.
```
test_df[numeric_cols] = imputer.transform(test_df[numeric_cols])
test_df[numeric_cols] = scaler.transform(test_df[numeric_cols])
test_df[encoded_cols] = encoder.transform(test_df[categorical_cols])
test_inputs = test_df[numeric_cols + encoded_cols]
```
We can now make predictions using our final model.
```
test_preds = rf2.predict(test_inputs)
submission_df = pd.read_csv('house-prices/sample_submission.csv')
submission_df
```
Let's replace the values of the `SalePrice` column with our predictions.
```
submission_df['SalePrice'] = test_preds
```
Let's save it as a CSV file and download it.
```
submission_df.to_csv('submission.csv', index=False)
from IPython.display import FileLink
FileLink('submission.csv') # Doesn't work on Colab, use the file browser instead to download the file.
```
We can now submit this file to the competition: https://www.kaggle.com/c/house-prices-advanced-regression-techniques/submissions

> **(OPTIONAL) QUESTION**: Submit your predictions to the competition. Experiment with different models, feature engineering strategies and hyperparameters and try to reach the top 10% on the leaderboard.
Let's save our work before continuing.
```
jovian.commit()
```
### Making Predictions on Single Inputs
```
def predict_input(model, single_input):
input_df = pd.DataFrame([single_input])
input_df[numeric_cols] = imputer.transform(input_df[numeric_cols])
input_df[numeric_cols] = scaler.transform(input_df[numeric_cols])
input_df[encoded_cols] = encoder.transform(input_df[categorical_cols].values)
return model.predict(input_df[numeric_cols + encoded_cols])[0]
sample_input = { 'MSSubClass': 20, 'MSZoning': 'RL', 'LotFrontage': 77.0, 'LotArea': 9320,
'Street': 'Pave', 'Alley': None, 'LotShape': 'IR1', 'LandContour': 'Lvl', 'Utilities': 'AllPub',
'LotConfig': 'Inside', 'LandSlope': 'Gtl', 'Neighborhood': 'NAmes', 'Condition1': 'Norm', 'Condition2': 'Norm',
'BldgType': '1Fam', 'HouseStyle': '1Story', 'OverallQual': 4, 'OverallCond': 5, 'YearBuilt': 1959,
'YearRemodAdd': 1959, 'RoofStyle': 'Gable', 'RoofMatl': 'CompShg', 'Exterior1st': 'Plywood',
'Exterior2nd': 'Plywood', 'MasVnrType': 'None','MasVnrArea': 0.0,'ExterQual': 'TA','ExterCond': 'TA',
'Foundation': 'CBlock','BsmtQual': 'TA','BsmtCond': 'TA','BsmtExposure': 'No','BsmtFinType1': 'ALQ',
'BsmtFinSF1': 569,'BsmtFinType2': 'Unf','BsmtFinSF2': 0,'BsmtUnfSF': 381,
'TotalBsmtSF': 950,'Heating': 'GasA','HeatingQC': 'Fa','CentralAir': 'Y','Electrical': 'SBrkr', '1stFlrSF': 1225,
'2ndFlrSF': 0, 'LowQualFinSF': 0, 'GrLivArea': 1225, 'BsmtFullBath': 1, 'BsmtHalfBath': 0, 'FullBath': 1,
'HalfBath': 1, 'BedroomAbvGr': 3, 'KitchenAbvGr': 1,'KitchenQual': 'TA','TotRmsAbvGrd': 6,'Functional': 'Typ',
'Fireplaces': 0,'FireplaceQu': np.nan,'GarageType': np.nan,'GarageYrBlt': np.nan,'GarageFinish': np.nan,'GarageCars': 0,
'GarageArea': 0,'GarageQual': np.nan,'GarageCond': np.nan,'PavedDrive': 'Y', 'WoodDeckSF': 352, 'OpenPorchSF': 0,
'EnclosedPorch': 0,'3SsnPorch': 0, 'ScreenPorch': 0, 'PoolArea': 0, 'PoolQC': np.nan, 'Fence': np.nan, 'MiscFeature': 'Shed',
'MiscVal': 400, 'MoSold': 1, 'YrSold': 2010, 'SaleType': 'WD', 'SaleCondition': 'Normal'}
predicted_price = predict_input(rf2, sample_input)
print('The predicted sale price of the house is ${}'.format(predicted_price))
```
> **EXERCISE**: Change the sample input above and make predictions. Try different examples and try to figure out which columns have a big impact on the sale price. Hint: Look at the feature importance to decide which columns to try.
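For example, one way to try this (a hypothetical tweak, reusing the `predict_input`, `sample_input` and `rf2` objects defined above) is to copy the input, bump a couple of high-importance features such as `OverallQual` and `GrLivArea`, and compare the two predictions:

```
# Hypothetical experiment: increase overall quality and living area, then re-predict
modified_input = dict(sample_input)
modified_input['OverallQual'] = 8
modified_input['GrLivArea'] = 2000
modified_input['1stFlrSF'] = 2000

original_price = predict_input(rf2, sample_input)
modified_price = predict_input(rf2, modified_input)
print('Original: ${:.0f}, Modified: ${:.0f}'.format(original_price, modified_price))
```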
### Saving the Model
```
import joblib
house_prices_rf = {
'model': rf2,
'imputer': imputer,
'scaler': scaler,
'encoder': encoder,
'input_cols': input_cols,
'target_col': target_col,
'numeric_cols': numeric_cols,
'categorical_cols': categorical_cols,
'encoded_cols': encoded_cols
}
joblib.dump(house_prices_rf, 'house_prices_rf.joblib')
```
Let's save our work before continuing.
```
jovian.commit(outputs=['house_prices_rf.joblib'])
```
### Predicting the Logarithm of Sale Price
> **(OPTIONAL) QUESTION**: In the [original Kaggle competition](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/overview/evaluation), the model is evaluated by computing the Root Mean Squared Error on the logarithm of the sale price. Try training a random forest to predict the logarithm of the sale price, instead of the actual sales price and see if the results you obtain are better than the models trained above.
```
!pip install --upgrade pip
```
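As a starting point for the optional question above, here is a minimal sketch (not a tuned solution) of training a random forest on the logarithm of the sale price and evaluating the RMSE on the log scale, reusing the training/validation split created earlier:

```
# Minimal sketch: predict log(SalePrice) instead of SalePrice
log_train_targets = np.log(train_targets)
log_val_targets = np.log(val_targets)

rf_log = RandomForestRegressor(n_jobs=-1, random_state=42)
rf_log.fit(train_inputs, log_train_targets)

log_val_rmse = mean_squared_error(log_val_targets, rf_log.predict(val_inputs), squared=False)
print('Validation RMSE on log(SalePrice): {:.4f}'.format(log_val_rmse))

# To compare with the earlier models, convert the predictions back with np.exp
val_rmse_dollars = mean_squared_error(val_targets, np.exp(rf_log.predict(val_inputs)), squared=False)
print('Validation RMSE in dollars: {:.0f}'.format(val_rmse_dollars))
```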
# TensorFlow Simple Sentiment Analysis
```
%load_ext autoreload
%autoreload 2
import tensorflow as tf
# tf.reset_default_graph()
session = tf.InteractiveSession()
import utils
import numpy as np
max_length = 50
X, y, index_to_word, sentences = utils.load_sentiment_data(max_length)
X_train, y_train, X_test, y_test = utils.split_data(X, y)
vocab_size = len(index_to_word)
n_classes = y.shape[1]
s_i = 50
print("Sentence:", sentences[s_i])
print("Label:", utils.label_to_desc(y[s_i]))
data_placeholder = tf.placeholder(tf.float32, shape=(None, max_length, vocab_size), name='data_placeholder')
labels_placeholder = tf.placeholder(tf.float32, shape=(None, n_classes), name='labels_placeholder')
keep_prob_placeholder = tf.placeholder(tf.float32, name='keep_prob_placeholder')
# Helper function for fully connected layers
def linear(input_, output_size, layer_scope, stddev=0.02, bias_start=0.0):
shape = input_.get_shape().as_list()
with tf.variable_scope(layer_scope):
matrix = tf.get_variable("Matrix", [shape[1], output_size], tf.float32,
tf.random_normal_initializer(stddev=stddev))
bias = tf.get_variable("bias", [output_size],
initializer=tf.constant_initializer(bias_start))
return tf.matmul(input_, matrix) + bias
# Define Computation Graph
n_rnn_layers = 3
n_fc_layers = 2
n_rnn_nodes = 256
n_fc_nodes = 128
with tf.name_scope("recurrent_layers") as scope:
# Create LSTM Cell
cell = tf.nn.rnn_cell.LSTMCell(n_rnn_nodes, state_is_tuple=False)
cell = tf.nn.rnn_cell.DropoutWrapper(
cell, output_keep_prob=keep_prob_placeholder)
stacked_cells = tf.nn.rnn_cell.MultiRNNCell([cell] * n_rnn_layers, state_is_tuple=False)
output, encoding = tf.nn.dynamic_rnn(stacked_cells, data_placeholder, dtype=tf.float32)
with tf.name_scope("fc_layers") as scope:
# Connect RNN Embedding output into fully connected layers
    prev_layer = encoding
    for fc_index in range(0, n_fc_layers-1):
        prev_layer = tf.nn.relu(linear(prev_layer, n_fc_nodes, 'fc{}'.format(fc_index)))
    fc_final = linear(prev_layer, n_classes, 'fc{}'.format(n_fc_layers-1))

# Keep the raw pre-softmax scores as logits; softmax_cross_entropy_with_logits applies the softmax itself
logits = fc_final
# Define Loss Function + Optimizer
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_placeholder))
optimizer = tf.train.GradientDescentOptimizer(0.0002).minimize(loss)
prediction = tf.nn.softmax(logits)
prediction_is_correct = tf.equal(
tf.argmax(logits, 1), tf.argmax(labels_placeholder, 1))
accuracy = tf.reduce_mean(tf.cast(prediction_is_correct, tf.float32))
# Train loop
num_steps = 1000
batch_size = 32
keep_prob_rate = 0.75
tf.initialize_all_variables().run()
for step in range(num_steps):
offset = (step * batch_size) % (X_train.shape[0] - batch_size)
# Generate a minibatch.
batch_data = X_train[offset:(offset + batch_size), :, :]
batch_labels = y_train[offset:(offset + batch_size), :]
# We built our networking using placeholders. It's like we've made reservations for a party of 6.
# So use feed_dict to fill what we reserved. And we can't show up with 9 people.
feed_dict_train = {data_placeholder: batch_data, labels_placeholder : batch_labels, keep_prob_placeholder: keep_prob_rate}
# Run the optimizer, get the loss, get the predictions.
# We can run multiple things at once and get their outputs
_, loss_value_train, predictions_value_train, accuracy_value_train = session.run(
[optimizer, loss, prediction, accuracy], feed_dict=feed_dict_train)
    if (step % 2 == 0):
        print("Minibatch train loss at step", step, ":", loss_value_train)
        print("Minibatch train accuracy: %.3f%%" % (accuracy_value_train * 100))
feed_dict_test = {data_placeholder: X_test, labels_placeholder: y_test, keep_prob_placeholder: 1.0}
loss_value_test, predictions_value_test, accuracy_value_test = session.run(
[loss, prediction, accuracy], feed_dict=feed_dict_test)
print "Test loss: %.3f" % loss_value_test
print "Test accuracy: %.3f%%" % accuracy_value_test
```
### Step 1 - Scraping
Complete your initial scraping using Jupyter Notebook, BeautifulSoup, Pandas, and Requests/Splinter.
```
from splinter import Browser
from bs4 import BeautifulSoup as bs
import pandas as pd
from webdriver_manager.chrome import ChromeDriverManager
# Set Executable Path & Initialize Chrome Browser
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser("chrome", **executable_path, headless=False)
```
### NASA Mars News
* Scrape the [NASA Mars News Site](https://mars.nasa.gov/news/) and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
```python
# Example:
news_title = "NASA's Next Mars Mission to Investigate Interior of Red Planet"
news_p = "Preparation of NASA's next spacecraft to Mars, InSight, has ramped up this summer, on course for launch next May from Vandenberg Air Force Base in central California -- the first interplanetary launch in history from America's West Coast."
```
```
# Open browser to NASA Mars News Site
browser.visit('https://mars.nasa.gov/news/')
html = browser.html
soup = bs(html, 'html.parser')
# Search for news titles
titles = soup.find_all('div', class_='content_title')
# Search for paragraph text under news titles
paragraphs = soup.find_all('div', class_='article_teaser_body')
# Extract first title and paragraph, and assign to variables
news_title = titles[0].text
news_paragraph = paragraphs[0].text
print(news_title)
print(news_paragraph)
```
### JPL Mars Space Images - Featured Image
* Visit the url for JPL Featured Space Image [here](https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html).
* Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called `featured_image_url`.
* Make sure to find the image url to the full size `.jpg` image.
* Make sure to save a complete url string for this image.
```python
# Example:
featured_image_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/image/featured/mars2.jpg'
```
```
# Open browser to JPL Featured Image
browser.visit('https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html')
html = browser.html
soup = bs(html, 'html.parser')
# Search for image source
img = soup.find_all('img', class_='headerimage fade-in')
source = soup.find('img', class_='headerimage fade-in').get('src')
print(img)
print(source)
url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/'
feat_url = url + source
feat_url
```
### Mars Facts
* Visit the Mars Facts webpage [here](https://space-facts.com/mars/) and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.
* Use Pandas to convert the data to a HTML table string.
```
# Use Pandas to scrape data
tables = pd.read_html('https://space-facts.com/mars/')
# Take second table for Mars facts
mars_df = tables[1]
mars_df
# Convert table to html
mars_facts = [mars_df.to_html(classes='data table table-borderless', index=False, header=False, border=0)]
mars_facts
```
## Mars Hemispheres
* Visit the USGS Astrogeology site [here](https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars) to obtain high-resolution images for each of Mars's hemispheres.
* You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.
* Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys `img_url` and `title`.
* Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
```python
# Example:
hemisphere_image_urls = [
{"title": "Valles Marineris Hemisphere", "img_url": "..."},
{"title": "Cerberus Hemisphere", "img_url": "..."},
{"title": "Schiaparelli Hemisphere", "img_url": "..."},
{"title": "Syrtis Major Hemisphere", "img_url": "..."},
]
```
```
# Open browser to USGS Astrogeology site
browser.visit('https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars')
# Search for Hemisphere titles
html = browser.html
soup = bs(html, 'html.parser')
hemispheres = []
# Search for the names of all four hemispheres
results = soup.find_all('div', class_="collapsible results")
hemi_names = results[0].find_all('h3')
# Get text and store in list
for name in hemi_names:
hemispheres.append(name.text)
hemispheres
# Search for thumbnail links
thumbnail_results = results[0].find_all('a')
thumbnail_links = []
for thumbnail in thumbnail_results:
# If the thumbnail element has an image...
if (thumbnail.img):
# then grab the attached link
thumbnail_url = 'https://astrogeology.usgs.gov/' + thumbnail['href']
# Append list with links
thumbnail_links.append(thumbnail_url)
thumbnail_links
# Extract Image URLs
full_imgs = []
for url in thumbnail_links:
    # Click through each thumbnail link
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
# Scrape each page for the relative image path
results = soup.find_all('img', class_='wide-image')
relative_img_path = results[0]['src']
    # Combine the base url with the relative image path to get the full url
img_link = 'https://astrogeology.usgs.gov/' + relative_img_path
# Add full image links to a list
full_imgs.append(img_link)
full_imgs
# Zip together the list of hemisphere names and hemisphere image links
mars_zip = zip(hemispheres, full_imgs)
hemisphere_image_urls = []
# Iterate through the zipped object
for title, img in mars_zip:
mars_hemi_dict = {}
# Add hemisphere title to dictionary
mars_hemi_dict['title'] = title
# Add image url to dictionary
mars_hemi_dict['img_url'] = img
# Append the list with dictionaries
hemisphere_image_urls.append(mars_hemi_dict)
hemisphere_image_urls
```
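Once the hemisphere dictionaries have been collected, it's good practice to close the automated browser so the chromedriver session doesn't stay open in the background. A one-line cleanup, using the same splinter `browser` instance initialized earlier in this notebook:
```python
# Close the automated browser session now that scraping is done
browser.quit()
```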
## sigMF IQ file SVD reconstruction: GD55 DMR
```
import os
import torch #, torchvision
import numpy as np
import matplotlib.pyplot as plt
from timeit import default_timer as timer
from torch import istft
global GPU, n_fft
GPU = 1
Fs = 1e6
n_fft = 1024
plt.style.use('default')
device = torch.device('cuda:1')
print('Torch version =', torch.__version__, 'CUDA version =', torch.version.cuda)
print('CUDA Device:', device)
print('Is cuda available? =',torch.cuda.is_available())
# %matplotlib notebook
# %matplotlib inline
```
#### Machine paths
```
path = "/home/david/sigMF_ML/RF_SVD/clean_speech/IQ_files/dmr_iq/"
os.chdir(path)
print(path)
db = np.fromfile("UHF_DMR_clean1.sigmf-data", dtype="float32")
```
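Before transforming anything, a quick sanity check on the recording is useful: the file holds interleaved I/Q `float32` samples, so the number of complex samples is half the array length. A small sketch using the `db` array and the 1 MHz sample rate `Fs` defined above:
```python
# db is interleaved I/Q float32, so complex sample count = len(db) // 2
n_samples = len(db) // 2
print('complex samples :', n_samples)
print('duration (s)    :', n_samples / Fs)
```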
#### torch GPU Cuda stft
```
def gpu(db):
I = db[0::2]
Q = db[1::2]
start = timer()
w = n_fft
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda(GPU)
I_stft = torch.stft(torch.tensor(I).cuda(GPU), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
Q_stft = torch.stft(torch.tensor(Q).cuda(GPU), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
X_stft = I_stft[...,0] + Q_stft[...,0] + I_stft[...,1] + -1*Q_stft[...,1]
X_stft = torch.cat((X_stft[n_fft//2:],X_stft[:n_fft//2]))
end = timer()
gpu_stft_time = end - start
print('GPU STFT time = ', gpu_stft_time)
torch.cuda.empty_cache()
return I_stft, Q_stft, gpu_stft_time
```
#### For plotting the spectrum: `onesided` must be `False` in `torch.stft`
```
def gpu_plot(db):
I = db[0::2]
Q = db[1::2]
w = n_fft
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda(GPU)
I_stft = torch.stft(torch.tensor(I).cuda(GPU), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
Q_stft = torch.stft(torch.tensor(Q).cuda(GPU), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
X_stft = I_stft[...,0] + Q_stft[...,0] + I_stft[...,1] + -1*Q_stft[...,1]
X_stft = torch.cat((X_stft[n_fft//2:],X_stft[:n_fft//2]))
torch.cuda.empty_cache()
return X_stft
```
#### scipy CPU stft function for reference
```
from scipy import signal

def cpu(db):
t = len(db)
db2 = db[0::]
start = timer()
db = db.astype(np.float32).view(np.complex64)
I_t, I_f, Z = signal.stft(db, fs=Fs, nperseg=n_fft, return_onesided=False)
Z = np.vstack([Z[n_fft//2:], Z[:n_fft//2]])
end = timer()
cpu_stft_time = end - start
print('CPU STFT time = ', cpu_stft_time)
return Z, cpu_stft_time
```
### GPU timing: the first call is the slowest (CUDA warm-up)
```
stft_gpu = gpu_plot(db)
fig3 = plt.figure(figsize=(9, 6))
plt.imshow(20*np.log10(np.abs(stft_gpu.cpu()+1e-8)), aspect='auto', origin='lower')
title = "GD55 Original spectrum"
plt.title(title)
plt.xlabel('Time in seconds')
plt.ylabel('Frequency in Hz')
plt.minorticks_on()
# plt.yticks(np.arange(0,60, 6))
fig3.savefig('GD55_full_spectrum.pdf', format="pdf")
plt.show()
```
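To see the warm-up effect the heading refers to, the same STFT helper can simply be timed twice in a row; the first call typically pays the CUDA initialization and kernel-launch cost, while the second is the steadier number to quote. A rough sketch using the `gpu()` function defined above:
```python
# First call includes CUDA warm-up; the second call is more representative
_, _, first_time = gpu(db)
_, _, second_time = gpu(db)
print('first call :', first_time)
print('second call:', second_time)
```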
#### SVD of the STFT components (torch SVD on the CPU)
```
def udv_stft(I_stft,Q_stft):
start = timer()
U_I0, D_I0, V_I0 = torch.svd(I_stft[...,0].detach().cpu())
U_I1, D_I1, V_I1 = torch.svd(I_stft[...,1].detach().cpu())
U_Q0, D_Q0, V_Q0 = torch.svd(Q_stft[...,0].detach().cpu())
U_Q1, D_Q1, V_Q1 = torch.svd(Q_stft[...,1].detach().cpu())
end = timer()
usv_stft_time = end - start
print('SVD time: ',usv_stft_time)
return U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1, usv_stft_time
```
#### Inverse stft
```
def ISTFT(db):
w = n_fft
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda(GPU)
start = timer()
Z = istft(db, n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
end = timer()
istft_time = end - start
print('ISTFT time = ',istft_time)
torch.cuda.empty_cache()
return Z, istft_time
```
#### Re-combine UDV to approximate original signal
```
def udv(u, d, v, k):
# print('u shape = ', u.shape)
# print('d shape = ', d.shape)
# print('v shape = ', v.shape)
start = timer()
UD = torch.mul(u[:, :k], d[:k])
print('UD shape = ', UD.shape)
v = torch.transpose(v,1,0)
UDV = torch.mm(UD, v[:k, :])
end = timer()
udv_time = end - start
# print('u new shape = ', u[:, :k].shape)
# print('d new shape = ', d[:k].shape)
# print('v new shape = ', v[:k, :].shape)
print('UDV time: ',udv_time)
return UDV, udv_time
def udv_from_file(u, d, v):
start = timer()
# print('u shape = ', u.shape)
# print('d shape = ', d.shape)
# print('v shape = ', v.shape)
UD = torch.mul(u[:, :], d[:])
# print('UD shape = ', UD.shape)
v = torch.transpose(v,1,0)
UDV = torch.mm(UD, v[:, :])
end = timer()
udv_time = end - start
print('UDV time: ',udv_time)
return UDV, udv_time
```
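The `udv()` helper above is just a truncated (rank-`k`) SVD reconstruction: it keeps the first `k` columns of `U` and `V` and the first `k` singular values. A tiny self-contained illustration on a random matrix (not part of the pipeline) shows the reconstruction error shrinking as `k` grows:
```python
# Rank-k reconstruction error on a small random matrix (illustration only)
A = torch.randn(64, 40)
U, D, V = torch.svd(A)
for k in (5, 10, 20, 40):
    A_k = torch.mm(torch.mul(U[:, :k], D[:k]), torch.transpose(V, 1, 0)[:k, :])
    print(k, torch.norm(A - A_k).item())
```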
### Main function that runs all of the sub-functions
```
def complete_gpu(num):
I_stft, Q_stft, gpu_stft_time = gpu(db)
U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1, udv_time = udv_stft(I_stft,Q_stft)
torch.cuda.empty_cache()
print('UDV I0 shapes = ',U_I0.shape, D_I0.shape, V_I0.shape)
print('UDV I1 shapes = ',U_I1.shape, D_I1.shape, V_I1.shape)
print('UDV Q0 shapes = ', U_Q0.shape, D_Q0.shape, V_Q0.shape)
print('UDV Q1 shapes = ', U_Q1.shape, D_Q1.shape, V_Q1.shape)
# ------------ I0 ------------------------------------------------------
np.save('U_I0', U_I0[:, :num].detach().cpu().numpy())
np.save('D_I0', D_I0[:num].detach().cpu().numpy())
np.save('V_I0', V_I0[:, :num].detach().cpu().numpy())
# print('saved V_IO size = ', V_I0[:, :num].shape)
# ------------ I1 ------------------------------------------------------
np.save('U_I1', U_I1[:, :num].detach().cpu().numpy())
np.save('D_I1', D_I1[:num].detach().cpu().numpy())
np.save('V_I1', V_I1[:, :num].detach().cpu().numpy())
# print('saved V_I1 size = ', V_I1[:, :num].shape)
# ------------ Q0 ------------------------------------------------------
np.save('U_Q0', U_Q0[:, :num].detach().cpu().numpy())
np.save('D_Q0', D_Q0[:num].detach().cpu().numpy())
np.save('V_Q0', V_Q0[:, :num].detach().cpu().numpy())
# print('saved V_QO size = ', V_Q0[:, :num].shape)
# ------------ Q1 ------------------------------------------------------
np.save('U_Q1', U_Q1[:, :num].detach().cpu().numpy())
np.save('D_Q1', D_Q1[:num].detach().cpu().numpy())
np.save('V_Q1', V_Q1[:, :num].detach().cpu().numpy())
# print('saved V_Q1 size = ', V_Q1[:, :num].shape)
# -----------------------------------------------------------------------
udv_I0, udv_time1 = udv(U_I0, D_I0, V_I0,num)
udv_I1, udv_time2 = udv(U_I1, D_I1, V_I1,num)
udv_Q0, udv_time3 = udv(U_Q0, D_Q0, V_Q0,num)
udv_Q1, udv_time4 = udv(U_Q1, D_Q1, V_Q1,num)
torch.cuda.empty_cache()
print('udv I shapes = ',udv_I0.shape,udv_I1.shape)
print('udv Q shapes = ',udv_Q0.shape,udv_Q1.shape)
# -------------stack and transpose----------------------------------------
start_misc = timer()
UDV_I = torch.stack([udv_I0,udv_I1])
UDV_I = torch.transpose(UDV_I,2,0)
UDV_I = torch.transpose(UDV_I,1,0)
UDV_Q = torch.stack([udv_Q0,udv_Q1])
UDV_Q = torch.transpose(UDV_Q,2,0)
UDV_Q = torch.transpose(UDV_Q,1,0)
stop_misc = timer()
misc_time = stop_misc - start_misc
torch.cuda.empty_cache()
#--------------------------------------------------------------------------
I, istft_time1 = ISTFT(UDV_I.cuda(GPU))
Q, istft_time2 = ISTFT(UDV_Q.cuda(GPU))
torch.cuda.empty_cache()
I = I.detach().cpu().numpy()
Q = Q.detach().cpu().numpy()
end = len(I)*2
IQ_SVD = np.zeros(len(I)*2) # I and Q must be same length
IQ_SVD[0:end:2] = I
IQ_SVD[1:end:2] = Q
time_sum = gpu_stft_time+udv_time+misc_time+udv_time1+udv_time2+udv_time3+udv_time4+istft_time1+istft_time2
IQ_SVD = IQ_SVD.astype(np.float32).view(np.complex64)
return IQ_SVD, time_sum
torch.cuda.empty_cache()
```
### Perform SVD on IQ stft data
```
num = 10 # number of singular values/vectors kept for the reconstruction
IQ_SVD, time_sum = complete_gpu(num)
time_sum # double sided = true, GPU stft/istft, CPU svd (torch)
```
### Write the reconstructed IQ data to file
```
from array import array
IQ_file = open("gd55_svd10_1024_fft", 'wb')
IQ_SVD.tofile(IQ_file)
IQ_file.close()
```
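To confirm the file round-trips, the reconstructed recording can be read back as `complex64`, the same way the original `.sigmf-data` file was loaded. A minimal check:
```python
# Read the reconstructed IQ file back and compare sample counts
IQ_check = np.fromfile("gd55_svd10_1024_fft", dtype=np.complex64)
print('samples written  :', len(IQ_SVD))
print('samples read back:', len(IQ_check))
```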
#### Load the saved SVD arrays for reconstruction
```
def udv_file_reconstruct():
# ****** D **************
D_I0 = np.load('D_I0.npy')
D_I1 = np.load('D_I1.npy')
D_Q0 = np.load('D_Q0.npy')
D_Q1 = np.load('D_Q1.npy')
# ****** U **************
U_I0 = np.load('U_I0.npy')
U_I1 = np.load('U_I1.npy')
U_Q0 = np.load('U_Q0.npy')
U_Q1 = np.load('U_Q1.npy')
# ****** V **************
V_I0 = np.load('V_I0.npy')
V_I1 = np.load('V_I1.npy')
V_Q0 = np.load('V_Q0.npy')
V_Q1 = np.load('V_Q1.npy')
# ****** d to torch **************
d_i0 = torch.tensor(D_I0).cuda(GPU)
d_i1 = torch.tensor(D_I1).cuda(GPU)
d_q0 = torch.tensor(D_Q0).cuda(GPU)
d_q1 = torch.tensor(D_Q1).cuda(GPU)
# ****** u to torch **************
u_i0 = torch.tensor(U_I0).cuda(GPU)
u_i1 = torch.tensor(U_I1).cuda(GPU)
u_q0 = torch.tensor(U_Q0).cuda(GPU)
u_q1 = torch.tensor(U_Q1).cuda(GPU)
# ****** v to torch **************
v_i0 = torch.tensor(V_I0).cuda(GPU)
v_i1 = torch.tensor(V_I1).cuda(GPU)
v_q0 = torch.tensor(V_Q0).cuda(GPU)
v_q1 = torch.tensor(V_Q1).cuda(GPU)
# ****** reconstruction *********************
udv_I0, udv_time1 = udv_from_file(u_i0, d_i0, v_i0)
udv_I1, udv_time2 = udv_from_file(u_i1, d_i1, v_i1)
udv_Q0, udv_time3 = udv_from_file(u_q0, d_q0, v_q0)
udv_Q1, udv_time4 = udv_from_file(u_q1, d_q1, v_q1)
torch.cuda.empty_cache()
print('udv I shapes = ',udv_I0.shape,udv_I1.shape)
print('udv Q shapes = ',udv_Q0.shape,udv_Q1.shape)
# -------------stack and transpose----------------------------------------
start_misc = timer()
UDV_I = torch.stack([udv_I0,udv_I1])
UDV_I = torch.transpose(UDV_I,2,0)
UDV_I = torch.transpose(UDV_I,1,0)
UDV_Q = torch.stack([udv_Q0,udv_Q1])
UDV_Q = torch.transpose(UDV_Q,2,0)
UDV_Q = torch.transpose(UDV_Q,1,0)
stop_misc = timer()
misc_time = stop_misc - start_misc
torch.cuda.empty_cache()
#--------------------------------------------------------------------------
I, istft_time1 = ISTFT(UDV_I)
Q, istft_time2 = ISTFT(UDV_Q)
torch.cuda.empty_cache()
I = I.detach().cpu().numpy()
Q = Q.detach().cpu().numpy()
end = len(I)*2
IQ_SVD = np.zeros(len(I)*2) # I and Q must be same length
IQ_SVD[0:end:2] = I
IQ_SVD[1:end:2] = Q
time_sum = misc_time+udv_time1+udv_time2+udv_time3+udv_time4+istft_time1+istft_time2
IQ_SVD = IQ_SVD.astype(np.float32).view(np.complex64)
torch.cuda.empty_cache()
return IQ_SVD, time_sum
IQ_SVD2, time_sum2 = udv_file_reconstruct()
time_sum2
from array import array
IQ_file = open("tyt_svd10_recon", 'wb')
IQ_SVD2.tofile(IQ_file)
IQ_file.close()
```
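Because only the first `num` singular values and vectors are kept for each of the twelve factor matrices, comparing the on-disk size of the saved `.npy` files with the original recording gives a rough sense of the compression achieved. A hedged sketch, assuming the factors written by `complete_gpu()` above are still in the working directory:
```python
# Rough compression-ratio estimate: saved SVD factors vs. original IQ file
import glob
svd_bytes = sum(os.path.getsize(f) for f in glob.glob('[UDV]_[IQ][01].npy'))
orig_bytes = os.path.getsize("UHF_DMR_clean1.sigmf-data")
print('SVD factors: %.1f MB' % (svd_bytes / 1e6))
print('original IQ: %.1f MB' % (orig_bytes / 1e6))
print('ratio      : %.3f' % (svd_bytes / orig_bytes))
```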
```
import pandas as pd
import sqlite3
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from pivottablejs import pivot_ui
pd.set_option('max_columns', None)
# Read sqlite query results into a pandas DataFrame
def qry(q, db_path="./db1.sqlite"):
    # Open a fresh connection for each query and close it once the
    # results have been read into a DataFrame
    connection = sqlite3.connect(db_path)
    df = pd.read_sql_query(q, connection)
    connection.close()
    return df
qry("SELECT name FROM sqlite_master WHERE type='table'")
# work w dataframes
df_user = qry("SELECT * FROM USER")
df_grade = qry("SELECT * FROM grade")
df_method = qry("SELECT * FROM method")
df_ascent = qry("SELECT * FROM ascent")
# get describe() summaries for all four tables
desc_user = df_user.describe().T
desc_user['table'] = 'user'
desc_grade = df_grade.describe().T
desc_grade['table'] = 'grade'
desc_method = df_method.describe().T
desc_method['table'] = 'method'
desc_ascent = df_ascent.describe().T
desc_ascent['table'] = 'ascent'
pd.set_option('display.float_format', lambda x: '%.4f' % x)
desc_user.append(desc_grade).append(desc_method).append(desc_ascent)
df_user = df_user.rename(columns = {'id':'user_id'})
df_ascent = df_ascent.rename(columns = {'id':'ascent_id'})
df_grade = df_grade.rename(columns = {'id':'grade_id'})
df_ascent[df_ascent.crag == 'Smith Rocks']
df_ascent['grade_id'].loc[df_ascent.name == 'Left slab crack'].value_counts()
fr_grade = {13:'3a',
21:'4a',
23:'4b',
25:'4c',
29:'5a',
31:'5b',
33:'5c',
36:'6a',
38:'6a+',
40:'6b',
42:'6b+',
44:'6c',
46:'6c+',
49:'7a',
51:'7a+',
53:'7b',
55:'7b+',
57:'7c',
59:'7c+',
62:'8a',
64:'8a+',
66:'8b',
68:'8b+',
70:'8c',
72:'8c+'}
yds_grade ={13:'4',
21:'5',
23:'6',
25:'7',
29:'8',
31:'9',
33:'10a',
36:'10b',
38:'10c',
40:'10d',
42:'11a',
44:'11b',
46:'11c',
49:'11d',
51:'12a',
53:'12b',
55:'12c',
57:'12d',
59:'13a',
62:'13b',
64:'13c',
66:'13d',
68:'14a',
70:'14b',
72:'14c',
74:'14d',
75:'15a'}
df_h_smith = df_ascent[df_ascent['grade_id']>50].loc[df_ascent['crag']=='Smith Rocks'].copy()
df_e_smith = df_ascent[df_ascent['grade_id']<50].loc[df_ascent['crag']=='Smith Rocks'].copy()
df_smith = df_ascent[df_ascent['crag']=='Smith Rocks'].copy()
pivot_ui(df_smith)
plt.figure(figsize=[17,5])
plt.hist(df_e_smith['grade_id'],bins=len(df_e_smith['grade_id'].unique())*3,label='\"Easy\"')
plt.hist(df_h_smith['grade_id'],bins=len(df_h_smith['grade_id'].unique())*2,label='Very Hard')
#locs = sorted(df_h_smith['grade_id'].unique())
locs = list(yds_grade.keys())
plt.xticks(ticks=locs,labels=[yds_grade[x] for x in locs])
plt.title('Counts of ascents by climbing grade at Smith Rock | 8a.nu')
plt.xlabel('Climb Rating, (5.__)')
plt.ylabel('Counts')
plt.legend();
plt.savefig('SmithAscentsByGrade.png',bbox_inches='tight')
plt.show()
smith_dates = pd.to_datetime(df_smith.date,unit='s')
all_dates = pd.to_datetime(df_ascent.date,unit='s')
fig, ax = plt.subplots()
fig.set_size_inches([8,5])
color = 'tab:blue'
ax.hist(all_dates[all_dates > '1990-01-01'],bins=2018-1990, alpha=.7,label='All Ascents')
ax.set_ylabel('Worldwide Logged Ascents', color=color)
ax.tick_params(axis='y', labelcolor=color)
ax2 = ax.twinx()
color = 'tab:red'
ax2.hist(smith_dates[smith_dates > '1990-01-01'],bins=2018-1990, alpha=.7, color = 'tab:red', label='Smith Rock Ascents')
ax2.set_ylabel('Smith Rock Ascents', color=color)
ax2.tick_params(axis='y', labelcolor=color)
ylim = ax2.get_ylim()
ax2.set_ylim([ylim[0],ylim[1]*3])
yticks = ax2.get_yticks()
ax2.set_yticks(yticks[:3])
ax2.set_title('Number of Ascents by Year | 8a.nu');
plt.savefig('AllAscentsByYear.png',bbox_inches='tight')
plt.show()
df_h_smith.date = pd.to_datetime(df_h_smith.date, unit='s')
df_smith.date = pd.to_datetime(df_smith.date, unit='s').dt.date
df_h_smith.date.iloc[0]
pd.to_datetime('1999-03-24 23:00:00')
daily_sends = df_smith.date.value_counts()
daily_sends.sort_index(inplace=True)
daily_sends
from fbprophet import Prophet
pdf = pd.DataFrame(daily_sends)
#pdf['DS'] = pdf.index
pdf.reset_index(inplace=True)
pdf.drop(index=pdf.index[-1],inplace=True)
pdf.rename(columns={'index':'ds','date':'y'},inplace=True)
m = Prophet(yearly_seasonality = True, mcmc_samples=300)
m.add_country_holidays(country_name='US')
m.fit(pdf[pdf.ds > pd.to_datetime('2000-01-01')])
future = m.make_future_dataframe(periods=365)
forecast = m.predict(future)
forecast.tail()
fig1 = m.plot(forecast)
plt.xlim(pd.to_datetime(['2000','2019']))
fig2 = m.plot_components(forecast)
type(fig2)
for idx, ascents in zip(daily_sends.index,daily_sends.values):
print(idx, ascents)
daily_sends = daily_sends[:-1]
daily_sends.value_counts(normalize=True) * 100
daily_sends.values.T
daily_sends.to_pickle('./smith_ascents.pkl')
```
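As a follow-up to the Prophet fit above, the library's built-in back-testing helper can give a feel for how well the seasonal model generalizes. A hedged sketch — the `initial`, `period` and `horizon` windows below are illustrative choices, not tuned, and with `mcmc_samples=300` this can be slow:
```python
# Rolling-origin back-test of the fitted Prophet model (illustrative windows)
from fbprophet.diagnostics import cross_validation, performance_metrics

df_cv = cross_validation(m, initial='3650 days', period='180 days', horizon='365 days')
df_p = performance_metrics(df_cv)
df_p.head()
```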
Course Human-Centered Data Science ([HCDS](https://www.mi.fu-berlin.de/en/inf/groups/hcc/teaching/winter_term_2020_21/course_human_centered_data_science.html)) - Winter Term 2020/21 - [HCC](https://www.mi.fu-berlin.de/en/inf/groups/hcc/index.html) | [Freie Universität Berlin](https://www.fu-berlin.de/)
***
# A2 - Wikipedia, ORES, and Bias in Data
Please follow the reproducibility workflow as practiced during the last exercise.
## Step 1⃣ | Data acquisition
You will use two data sources: (1) Wikipedia articles of politicians and (2) world population data.
**Wikipedia articles -**
The Wikipedia articles can be found on [Figshare](https://figshare.com/articles/Untitled_Item/5513449). It contains politicians by country from the English-language Wikipedia. Please read through the documentation for this repository, then download and unzip it to extract the data file, which is called `page_data.csv`.
**Population data -**
The population data is available in `CSV` format in the `_data` folder. The file is named `export_2019.csv`. This dataset is drawn from the [world population datasheet](https://www.prb.org/international/indicator/population/table/) published by the Population Reference Bureau (downloaded 2020-11-13 10:14 AM). I have edited the dataset to make it easier to use in this assignment. The population per country is given in millions!
```
import json
import requests
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
page_data = pd.read_csv("../data_raw/page_data.csv", sep=",")
page_data.head()
# Country prefixes are separated by commas, so the CSV separator has to be explicitly defined as a semicolon
population_data = pd.read_csv("../data_raw/export_2019.csv", sep=";")
population_data
```
## Step 2⃣ | Data processing and cleaning
The data in `page_data.csv` contains some rows that you will need to filter out. It contains some page names that start with the string `"Template:"`. These pages are not Wikipedia articles, and should not be included in your analysis. The data in `export_2019.csv` does not need any cleaning.
***
| | `page_data.csv` | | |
|-|------|---------|--------|
| | **page** | **country** | **rev_id** |
|0| Template:ZambiaProvincialMinisters | Zambia | 235107991 |
|1| Bir I of Kanem | Chad | 355319463 |
***
| | `export_2019.csv` | | |
|-|------|---------|--------|
| | **country** | **population** | **region** |
|0| Algeria | 44.357 | AFRICA |
|1| Egypt | 100.803 | AFRICA |
***
```
page_data = page_data[~page_data.page.str.contains("Template")]
page_data.head()
```
### Getting article quality predictions with ORES
Now you need to get the predicted quality scores for each article in the Wikipedia dataset. We're using a machine learning system called [**ORES**](https://www.mediawiki.org/wiki/ORES) ("Objective Revision Evaluation Service"). ORES estimates the quality of an article (at a particular point in time), and assigns a series of probabilities that the article is in one of the six quality categories. The options are, from best to worst:
| ID | Quality Category | Explanation |
|----|------------------|----------|
| 1 | FA | Featured article |
| 2 | GA | Good article |
| 3 | B | B-class article |
| 4 | C | C-class article |
| 5 | Start | Start-class article |
| 6 | Stub | Stub-class article |
For context, these quality classes are a sub-set of quality assessment categories developed by Wikipedia editors. If you're curious, you can [read more](https://en.wikipedia.org/wiki/Wikipedia:Content_assessment#Grades) about what these assessment classes mean on English Wikipedia. For this assignment, you only need to know that these categories exist, and that ORES will assign one of these six categories to any `rev_id`. You need to extract all `rev_id`s in the `page_data.csv` file and use the ORES API to get the predicted quality score for that specific article revision.
### ORES REST API endpoint
The [ORES REST API](https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model) is configured fairly similarly to the pageviews API we used for the last assignment. It expects the following parameters:
* **project** --> `enwiki`
* **revid** --> e.g. `235107991` or multiple ids e.g.: `235107991|355319463` (batch)
* **model** --> `wp10` - The name of a model to use when scoring.
**❗Note on batch processing:** Please read the documentation about [API usage](https://www.mediawiki.org/wiki/ORES#API_usage) if you want to query a large number of revisions (batches).
You will notice that ORES returns a prediction value that contains the name of one category (e.g. `Start`), as well as probability values for each of the six quality categories. For this assignment, you only need to capture and use the value for prediction.
**❗Note:** It's possible that you will be unable to get a score for a particular article. If that happens, make sure to maintain a log of articles for which you were not able to retrieve an ORES score. This log should be saved as a separate file named `ORES_no_scores.csv` and should include the `page`, `country`, and `rev_id` (just as in `page_data.csv`).
You can use the following **sample code for API calls**:
```
import requests
import json
# Customize these with your own information
headers = {
'User-Agent': 'https://github.com/YOUR-USER-NAME',
'From': '[email protected]'
}
def get_ores_data(rev_id, headers):
# Define the endpoint
# https://ores.wikimedia.org/scores/enwiki/?models=wp10&revids=807420979|807422778
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : rev_id
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
data = json.dumps(response)
return data
headers = {
'User-Agent': 'https://github.com/jonas-weber',
'From': '[email protected]'
}
def get_ores_data(rev_id, headers):
# Define the endpoint
# https://ores.wikimedia.org/scores/enwiki/?models=wp10&revids=807420979|807422778
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : rev_id
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
```
Sending one request for each `rev_id` might take some time. If you want to send batches, you can use `'|'.join(str(x) for x in revision_ids)` to put your ids together. Please make sure to handle the `KeyError` exception (see [exception handling](https://www.w3schools.com/python/python_try_except.asp)) when extracting the `prediction` from the `JSON` response.
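The loop below queries one `rev_id` at a time, which works but is slow. A hedged sketch of the batched variant described in the hint above, which also collects the articles that come back without a score so they can be written to `ORES_no_scores.csv` (the helper name `get_ores_batched` and the batch size of 50 are illustrative choices, not part of the assignment):
```python
# Sketch: query ORES in batches of 50 rev_ids and log the failures
def get_ores_batched(rev_ids, headers, batch_size=50):
    predictions, failures = {}, []
    for i in range(0, len(rev_ids), batch_size):
        batch = rev_ids[i:i + batch_size]
        response = get_ores_data('|'.join(str(x) for x in batch), headers)
        scores = response["enwiki"]["scores"]
        for rev_id in batch:
            try:
                predictions[rev_id] = scores[str(rev_id)]["wp10"]["score"]["prediction"]
            except KeyError:
                failures.append(rev_id)
    return predictions, failures

# The failure log can then be joined back to page_data and saved, e.g.:
# page_data[page_data.rev_id.isin(failures)].to_csv("ORES_no_scores.csv", index=False)
```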
```
rev_id_list = page_data.rev_id.tolist()
len(rev_id_list)
get_ores_data(rev_id_list[0], headers)
ores_predictions = {}
for x in rev_id_list:
try:
prediction = get_ores_data(x, headers)["enwiki"]["scores"][str(x)]["wp10"]["score"]["prediction"]
except KeyError:
ores_predictions[x] = np.nan
else:
ores_predictions[x] = prediction
ores_predictions
pd.DataFrame(ores_predictions.items(), columns=["rev_id", "prediction"])
```
### Combining the datasets
Now you need to combine both datasets: (1) the Wikipedia articles and their ORES quality scores and (2) the population data. Both have columns named `country`. After merging the data, you'll invariably run into entries which cannot be merged. Either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.
Please remove any rows that do not have matching data, and output them to a `CSV` file called `countries-no_match.csv`. Consolidate the remaining data into a single `CSV` file called `politicians_by_country.csv`.
The schema for that file should look like the following table:
| article_name | country | region | revision_id | article_quality | population |
|--------------|---------|--------|-------------|-----------------|------------|
| Bir I of Kanem | Chad | AFRICA | 807422778 | Stub | 16877000 |
## Step 3⃣ | Analysis
Your analysis will consist of calculating the proportion (as a percentage) of articles-per-population (we can also call it `coverage`) and high-quality articles (we can also call it `relative-quality`) for **each country** and for **each region**. By `"high quality"` article we mean an article that ORES predicted as `FA` (featured article) or `GA` (good article).
**Examples:**
* if a country has a population of `10,000` people, and you found `10` articles about politicians from that country, then the percentage of `articles-per-population` would be `0.1%`.
* if a country has `10` articles about politicians, and `2` of them are `FA` or `GA` class articles, then the percentage of `high-quality-articles` would be `20%`.
### Results format
The results from this analysis are six `data tables`. Embed these tables in the Jupyter notebook. You do not need to graph or otherwise visualize the data for this assignment. The tables will show:
1. **Top 10 countries by coverage**<br>10 highest-ranked countries in terms of number of politician articles as a proportion of country population
1. **Bottom 10 countries by coverage**<br>10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
1. **Top 10 countries by relative quality**<br>10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
1. **Bottom 10 countries by relative quality**<br>10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
1. **Regions by coverage**<br>Ranking of regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
1. **Regions by relative quality**<br>Ranking of regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
**❗Hint:** You will also find which region each country belongs to (e.g. `ASIA`) in `export_2019.csv`. You need to calculate the total population per region. For that you could use `groupby` and also check out `apply`.
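One hedged way to organize that calculation, assuming the merged data has been written to `politicians_by_country.csv` with the schema shown above (and that `population` is in millions, as in `export_2019.csv`):
```python
# Sketch: per-country coverage and relative quality from the merged table
merged = pd.read_csv("politicians_by_country.csv")

per_country = merged.groupby(["country", "region"]).agg(
    articles=("article_name", "count"),
    high_quality=("article_quality", lambda q: q.isin(["FA", "GA"]).sum()),
    population=("population", "first"),  # one population value per country
).reset_index()

# population assumed to be in millions here, as in export_2019.csv
per_country["coverage_pct"] = per_country.articles / (per_country.population * 1e6) * 100
per_country["relative_quality_pct"] = per_country.high_quality / per_country.articles * 100

# The top/bottom-10 tables come from sort_values(); the region-level tables
# follow the same pattern after grouping by "region" and summing articles,
# high_quality and population.
per_country.sort_values("coverage_pct", ascending=False).head(10)
```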
***
#### Credits
This exercise is slightly adapted from the course [Human Centered Data Science (Fall 2019)](https://wiki.communitydata.science/Human_Centered_Data_Science_(Fall_2019)) of the [University of Washington](https://www.washington.edu/datasciencemasters/) by [Jonathan T. Morgan](https://wiki.communitydata.science/User:Jtmorgan).
Same as the original inventors, we release the notebooks under the [Creative Commons Attribution license (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
```
import os
import numpy as np
from scipy.stats import kde, binned_statistic
from matplotlib.ticker import FormatStrFormatter, MultipleLocator
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.table import Table
import seaborn as sns
sns.set(context='talk', style='ticks', font_scale=1.3)#, palette='Set1')
import legacyhalos.io
%matplotlib inline
def lambda2mhalo(richness, redshift=0.3, Saro=False):
"""
Convert cluster richness, lambda, to halo mass, given various
calibrations.
* Saro et al. 2015: Equation (7) and Table 2 gives M(500).
* Melchior et al. 2017: Equation (51) and Table 4 gives M(200).
* Simet et al. 2017:
Other SDSS-based calibrations: Li et al. 2016; Miyatake et al. 2016;
Farahi et al. 2016; Baxter et al. 2016.
TODO: Return the variance!
"""
if Saro:
pass
# Melchior et al. 2017 (default)
logM0, Flam, Gz, lam0, z0 = 14.371, 1.12, 0.18, 30.0, 0.5
Mhalo = 10**logM0 * (richness / lam0)**Flam * ( (1 + redshift) / (1 + z0) )**Gz
return Mhalo
cat = legacyhalos.io.read_catalog(extname='LSPHOT-ISEDFIT', upenn=False,
isedfit=True, columns=('mstar_avg', 'sfr100_avg'))
cat1 = legacyhalos.io.read_catalog(extname='REDMAPPER', upenn=False,
isedfit=False, columns=('Z', 'LAMBDA_CHISQ'))
cat.add_columns_from(cat1)
cat
mhalo = np.log10(lambda2mhalo(cat.lambda_chisq, redshift=cat.z))
mhalo
bins = 35
mstar_med, bin_edges, _ = binned_statistic(mhalo, cat.mstar_avg, statistic='median', bins=bins)
bin_width = (bin_edges[1] - bin_edges[0])
mhalo_med = bin_edges[1:] - bin_width/2
print(bin_width)
def p75(x):
return np.percentile(x, 75)
def p25(x):
return np.percentile(x, 25)
mstar_p25, _, _ = binned_statistic(mhalo, cat.mstar_avg, statistic=p25, bins=bins)
mstar_p75, _, _ = binned_statistic(mhalo, cat.mstar_avg, statistic=p75, bins=bins)
krav = dict()
krav['m500'] = np.log10(np.array([15.6,10.3,7,5.34,2.35,1.86,1.34,0.46,0.47])*1e14)
krav['mbcg'] = np.array([3.12,4.14,3.06,1.47,0.79,1.26,1.09,0.91,1.38])*1e12
krav['mbcg_err'] = np.array([0.36,0.3,0.3,0.13,0.05,0.11,0.06,0.05,0.14])*1e12
krav['mbcg_err'] = krav['mbcg_err'] / krav['mbcg'] / np.log(10)
krav['mbcg'] = np.log10(krav['mbcg'])
gonz = dict()
gonz['mbcg'] = np.array([0.84,0.87,0.33,0.57,0.85,0.60,0.86,0.93,0.71,0.81,0.70,0.57])*1e12*2.65
gonz['mbcg_err'] = np.array([0.03,0.09,0.01,0.01,0.14,0.03,0.03,0.05,0.07,0.12,0.02,0.01])*1e12*2.65
gonz['m500'] = np.array([2.26,5.15,0.95,3.46,3.59,0.99,0.95,3.23,2.26,2.41,2.37,1.45])*1e14
gonz['m500_err'] = np.array([0.19,0.42,0.1,0.32,0.28,0.11,0.1,0.19,0.23,0.18,0.24,0.21])*1e14
gonz['mbcg_err'] = gonz['mbcg_err'] / gonz['mbcg'] / np.log(10)
gonz['mbcg'] = np.log10(gonz['mbcg'])
gonz['m500'] = np.log10(gonz['m500'])
fig, ax = plt.subplots(figsize=(8, 6))
colors = iter(sns.color_palette())
rich = cat.lambda_chisq > 100
ax.plot(mhalo_med, mstar_med, color='k', ls='-', lw=3, alpha=0.5)
ax.plot(mhalo_med, mstar_p75, color='k', ls='--', lw=3, alpha=0.5)
ax.plot(mhalo_med, mstar_p25, color='k', ls='--', lw=3, alpha=0.5)
g = ax.errorbar(gonz['m500'], gonz['mbcg'], yerr=gonz['mbcg_err'], color=next(colors),
fmt='o', label='Gonzalez+13', markersize=10)
k = ax.errorbar(krav['m500'], krav['mbcg'], yerr=krav['mbcg_err'], color=next(colors),
fmt='s', label='Kravtsov+14', markersize=10)
r = ax.scatter(mhalo[rich], cat.mstar_avg[rich], alpha=0.9, color=next(colors),
edgecolor='k', marker='D', s=50, label=r'redMaPPer ($\lambda>100$)')
ax.text(0.12, 0.16, 'redMaPPer\n$0.1<z<0.3$', multialignment='center',
transform=ax.transAxes, fontsize=14)
m500 = np.linspace(13.55, 15.25, 50)
ff = ax.plot(m500, np.polyval([0.33, 12.24], m500-14.5), ls='-',
color='k', label=r'$M_{*}\propto M_{500}^{0.33}$')
ax.text(0.12, 0.9, r'$M_{*}\propto M_{500}^{0.33}$', multialignment='center',
transform=ax.transAxes, fontsize=16)
ax.plot([13.55, 13.68], [12.8, 12.8], ls='-', color='k') # hack!!!
#ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.xaxis.set_major_locator(MultipleLocator(0.5))
hh = [g, k, r]
ax.legend(hh, [H.get_label() for H in hh], loc='lower right',
frameon=True, fontsize=16)
#ax.legend(ff, ff.get_label(), loc='upper left',
# frameon=True, fontsize=16)
#ax.legend(loc='upper left', frameon=True, fontsize=16)
ax.set_ylim(10.5, 13)
ax.set_xlim(13.5, 15.3)
ax.set_xlabel(r'$\log_{10}\, (M_{500}\ /\ M_{\odot})$')
ax.set_ylabel(r'$\log_{10}\, (M_{*}\ /\ M_{\odot})$')
ff
stop  # undefined name: raises a NameError here, halting execution of the cells below
cat
legacyhalos_dir = os.getenv('LEGACYHALOS_DIR')
parentfile = os.path.join(legacyhalos_dir, 'legacyhalos-parent-isedfit.fits')
ls = Table(fits.getdata(parentfile, extname='LSPHOT-ISEDFIT'))
ls
_ = plt.hist(ls['MSTAR_AVG'], bins=100)
_ = plt.hist(sdss['MSTAR_AVG'], bins=100)
_ = plt.hist(sdss['MSTAR_AVG'] - ls['MSTAR_AVG'], bins=200)
plt.xlim(-0.5, 0.5)
sdss = Table(fits.getdata(parentfile, extname='SDSSPHOT-ISEDFIT'))
sdss
data = np.vstack( (ls['MSTAR_AVG'], sdss['MSTAR_AVG'] - ls['MSTAR_AVG']))
data.shape
k = kde.gaussian_kde(data.T)
#xi, yi = np.mgrid[x.min():x.max():nbins*1j, y.min():y.max():nbins*1j]
#zi = k(np.vstack([xi.flatten(), yi.flatten()]))
fig, ax = plt.subplots()
ax.hexbin(ls['MSTAR_AVG'], sdss['MSTAR_AVG'] - ls['MSTAR_AVG'],
mincnt=1)
sns.jointplot(ls['MSTAR_AVG'], sdss['MSTAR_AVG'] - ls['MSTAR_AVG'],
kind="kde", color="#4CB391", xlim=(10, 13), ylim=(-0.5, 0.5))
sns.kdeplot(ls['MSTAR_AVG'], sdss['MSTAR_AVG'] - ls['MSTAR_AVG'],
cmap="Blues", shade=True, shade_lowest=True, cbar=True,
cut=0,
)
help(sns.kdeplot)
```
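As a quick sanity check on the `lambda2mhalo()` conversion used above, the implied halo mass can be printed for a few richness values at a fixed redshift (illustration only; the redshift choice is arbitrary):
```python
# log10 halo mass implied by the Melchior et al. (2017) calibration
for lam in (20, 50, 100):
    print(lam, '%.2f' % np.log10(lambda2mhalo(lam, redshift=0.25)))
```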
### Playing around with PSF convolution
```
from astropy.modeling import Fittable2DModel

class SersicWaveModel(Fittable2DModel):
"""
Define a surface brightness profile model which is the sum of three Sersic
models connected by a Sersic index and half-light radius which varies
as a power-law function of wavelength.
See http://docs.astropy.org/en/stable/modeling/new.html#a-step-by-step-definition-of-a-1-d-gaussian-model
for useful info.
ToDo: convolve the model with the PSF.
"""
from astropy.modeling import Parameter
nref = Parameter(default=4, min=0.1, max=8)
r50ref = Parameter(default=10, min=1e-3, max=30) # [arcsec]
alpha = Parameter(default=0.0, min=-1, max=1)
beta = Parameter(default=0.0, min=-1, max=1)
mu50_g = Parameter(default=0.5) # [flux units]
mu50_r = Parameter(default=1)
mu50_z = Parameter(default=1.5)
psfsigma_g = Parameter(default=0.5, fixed=True) # [sigma, arcsec]
psfsigma_r = Parameter(default=0.5, fixed=True)
psfsigma_z = Parameter(default=0.5, fixed=True)
linear = False
def __init__(self, nref=nref.default, r50ref=r50ref.default,
alpha=alpha.default, beta=beta.default, mu50_g=mu50_g.default,
mu50_r=mu50_r.default, mu50_z=mu50_z.default,
psfsigma_g=psfsigma_g.default, psfsigma_r=psfsigma_r.default,
psfsigma_z=psfsigma_z.default, lambda_ref=6470, lambda_g=4890,
lambda_r=6470, lambda_z=9196, **kwargs):
self.lambda_ref = lambda_ref
self.lambda_g = lambda_g
self.lambda_r = lambda_r
self.lambda_z = lambda_z
super(SersicWaveModel, self).__init__(nref=nref, r50ref=r50ref, alpha=alpha,
beta=beta, mu50_g=mu50_g, mu50_r=mu50_r,
mu50_z=mu50_z, psfsigma_g=psfsigma_g,
psfsigma_r=psfsigma_r, psfsigma_z=psfsigma_z,
**kwargs)
def evaluate(self, r, w, nref, r50ref, alpha, beta,
mu50_g, mu50_r, mu50_z,
psfsigma_g, psfsigma_r, psfsigma_z):
"""Evaluate the wavelength-dependent Sersic model.
Args:
r : radius [kpc]
w : wavelength [Angstrom]
nref : Sersic index at the reference wavelength lambda_ref
r50ref : half-light radius at lambda_ref
alpha : power-law slope for the Sersic index
beta : power-law slope for the half-light radius
mu50_g : g-band surface brightness at r=r50_g
mu50_r : r-band surface brightness at r=r50_r
mu50_z : z-band surface brightness at r=r50_z
"""
from scipy.special import gammaincinv
from astropy.convolution import Gaussian1DKernel, convolve
mu = np.zeros_like(r)
# Build the surface brightness profile at each wavelength.
for lam, mu50, psfsigma in zip( (self.lambda_g, self.lambda_r, self.lambda_z),
(mu50_g, mu50_r, mu50_z),
(psfsigma_g, psfsigma_r, psfsigma_z) ):
n = nref * (lam / self.lambda_ref)**alpha
r50 = r50ref * (lam / self.lambda_ref)**beta
indx = w == lam
mu_int = mu50 * np.exp(-gammaincinv(2 * n, 0.5) * ((r[indx] / r50) ** (1 / n) - 1))
# smooth with the PSF
if psfsigma > 0:
g = Gaussian1DKernel(stddev=psfsigma)#, mode='linear_interp')
mu_smooth = convolve(mu_int, g, normalize_kernel=True, boundary='extend')
fix = (r[indx] > 5 * psfsigma)
mu_smooth[fix] = mu_int[fix] # replace with original values
mu[indx] = mu_smooth
else:
mu[indx] = mu_int
return mu
model = SersicWaveModel(beta=0.1, alpha=-0.2, r50ref=8, nref=2.8,
psfsigma_g=0.3, psfsigma_r=0, psfsigma_z=0)
print(model)
seed = 1
rand = np.random.RandomState(seed)
minradius = 0.02
maxradius = 15.0
nrad = (25, 18, 33) # number of measurements per bandpass g, r, z
radius = []
wave = []
for lam, nn in zip( (model.lambda_g, model.lambda_r, model.lambda_z), nrad ):
#rad = rand.uniform(minradius, maxradius, nn)
rad = np.linspace(minradius, maxradius, nn)
radius.append(rad)
wave.append(np.repeat(lam, nn))
radius = np.hstack(radius)
wave = np.hstack(wave)
sb = model(radius, wave) # evaluate the model
sberr = rand.normal(loc=0, scale=sb*0.0)
sb += sberr
# plot it!
plot_sbwave(radius, wave, sb, model=model)
#plt.axvline(x=8)
#plt.axhline(y=1)
from astropy.convolution import Gaussian1DKernel, convolve, convolve_fft
from scipy.ndimage.filters import gaussian_filter
sigma = 1
g = Gaussian1DKernel(stddev=sigma)#, mode='linear_interp')
rr = radius[:26]
ff = sb1[:26]
cff = convolve(ff, g, normalize_kernel=True, boundary='fill', fill_value=ff.min())
cff[rr > 5*sigma] = ff[rr > 5*sigma] # replace with original values
rr2 = np.hstack( (radius[:26], -radius[:26]) )
ff2 = np.hstack( (sb1[:26], sb1[:26]) )
cff2 = convolve_fft(ff2, g, normalize_kernel=True, boundary='fill', fill_value=10)
#cff3 = gaussian_filter(ff, sigma, mode='constant', cval=0.2)
plt.plot(rr2, ff2, 'rs', ms=10)
plt.plot(rr2, cff2, 'bo', ms=10)
#plt.plot(rr, cff3, 'gs', ms=10)
plt.plot(rr, cff, 'gs', ms=10)
plt.xlim(-2, 2)
#plt.ylim(0.1, 1)
plt.yscale('log')
#print(cff-cff2[:26])
m1 = SersicWaveModel(beta=0.1, alpha=-0.2, r50ref=8, nref=2.8,
psfsigma_g=0, psfsigma_r=0, psfsigma_z=0)
m2 = SersicWaveModel(beta=0.1, alpha=-0.2, r50ref=8, nref=2.8,
psfsigma_g=1, psfsigma_r=1.5, psfsigma_z=0.5)
print(m2)
sb1 = m1(radius, wave)
sb2 = m2(radius, wave)
plt.plot(radius[:26], 22.5-2.5*np.log10(sb1[:26]), 'rs')
plt.plot(radius[:26], 22.5-2.5*np.log10(sb2[:26]), 'bo')
#plt.plot(radius[:26], sb1[:26], 'rs')
#plt.plot(radius[:26], sb2[:26], 'bo')
#plt.ylim(0, 20)
```
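The cell above experiments with how to treat the r→0 edge when smoothing a radial profile with a 1-D Gaussian kernel. A hedged sketch of the mirror-about-zero idea (reflect the profile to negative radii, convolve, keep the positive-radius half), which sidesteps the boundary fill value entirely; `smooth_mirrored` is just an illustrative helper, and it assumes a uniformly spaced, ascending radius grid with `psfsigma` given in array-sample units, matching the usage above:
```python
# Sketch: smooth a radial profile by mirroring it about r = 0 first
from astropy.convolution import Gaussian1DKernel, convolve

def smooth_mirrored(mu, psfsigma):
    mu_full = np.concatenate((mu[::-1], mu))  # reflect about r = 0
    kernel = Gaussian1DKernel(stddev=psfsigma)
    mu_smooth = convolve(mu_full, kernel, normalize_kernel=True, boundary='extend')
    return mu_smooth[len(mu):]  # keep the positive-radius half

# e.g. compare against the fill-value experiment above:
# plt.plot(rr, smooth_mirrored(ff, sigma), 'k^', ms=8)
```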
cut=0,
)
help(sns.kdeplot)
class SersicWaveModel(Fittable2DModel):
"""
Define a surface brightness profile model which is the sum of three Sersic
models connected by a Sersic index and half-light radius which vary
as power-law functions of wavelength.
See http://docs.astropy.org/en/stable/modeling/new.html#a-step-by-step-definition-of-a-1-d-gaussian-model
for useful info.
ToDo: convolve the model with the PSF.
"""
from astropy.modeling import Parameter
nref = Parameter(default=4, min=0.1, max=8)
r50ref = Parameter(default=10, min=1e-3, max=30) # [arcsec]
alpha = Parameter(default=0.0, min=-1, max=1)
beta = Parameter(default=0.0, min=-1, max=1)
mu50_g = Parameter(default=0.5) # [flux units]
mu50_r = Parameter(default=1)
mu50_z = Parameter(default=1.5)
psfsigma_g = Parameter(default=0.5, fixed=True) # [sigma, arcsec]
psfsigma_r = Parameter(default=0.5, fixed=True)
psfsigma_z = Parameter(default=0.5, fixed=True)
linear = False
def __init__(self, nref=nref.default, r50ref=r50ref.default,
alpha=alpha.default, beta=beta.default, mu50_g=mu50_g.default,
mu50_r=mu50_r.default, mu50_z=mu50_z.default,
psfsigma_g=psfsigma_g.default, psfsigma_r=psfsigma_r.default,
psfsigma_z=psfsigma_z.default, lambda_ref=6470, lambda_g=4890,
lambda_r=6470, lambda_z=9196, **kwargs):
self.lambda_ref = lambda_ref
self.lambda_g = lambda_g
self.lambda_r = lambda_r
self.lambda_z = lambda_z
super(SersicWaveModel, self).__init__(nref=nref, r50ref=r50ref, alpha=alpha,
beta=beta, mu50_g=mu50_g, mu50_r=mu50_r,
mu50_z=mu50_z, psfsigma_g=psfsigma_g,
psfsigma_r=psfsigma_r, psfsigma_z=psfsigma_z,
**kwargs)
def evaluate(self, r, w, nref, r50ref, alpha, beta,
mu50_g, mu50_r, mu50_z,
psfsigma_g, psfsigma_r, psfsigma_z):
"""Evaluate the wavelength-dependent Sersic model.
Args:
r : radius [kpc]
w : wavelength [Angstrom]
nref : Sersic index at the reference wavelength lambda_ref
r50ref : half-light radius at lambda_ref
alpha : power-law slope for the Sersic index
beta : power-law slope for the half-light radius
mu50_g : g-band surface brightness at r=r50_g
mu50_r : r-band surface brightness at r=r50_r
mu50_z : z-band surface brightness at r=r50_z
"""
from scipy.special import gammaincinv
from astropy.convolution import Gaussian1DKernel, convolve
mu = np.zeros_like(r)
# Build the surface brightness profile at each wavelength.
for lam, mu50, psfsigma in zip( (self.lambda_g, self.lambda_r, self.lambda_z),
(mu50_g, mu50_r, mu50_z),
(psfsigma_g, psfsigma_r, psfsigma_z) ):
n = nref * (lam / self.lambda_ref)**alpha
r50 = r50ref * (lam / self.lambda_ref)**beta
indx = w == lam
mu_int = mu50 * np.exp(-gammaincinv(2 * n, 0.5) * ((r[indx] / r50) ** (1 / n) - 1))
# smooth with the PSF
if psfsigma > 0:
g = Gaussian1DKernel(stddev=psfsigma)#, mode='linear_interp')
mu_smooth = convolve(mu_int, g, normalize_kernel=True, boundary='extend')
fix = (r[indx] > 5 * psfsigma)
mu_smooth[fix] = mu_int[fix] # replace with original values
mu[indx] = mu_smooth
else:
mu[indx] = mu_int
return mu
model = SersicWaveModel(beta=0.1, alpha=-0.2, r50ref=8, nref=2.8,
psfsigma_g=0.3, psfsigma_r=0, psfsigma_z=0)
print(model)
seed = 1
rand = np.random.RandomState(seed)
minradius = 0.02
maxradius = 15.0
nrad = (25, 18, 33) # number of measurements per bandpass g, r, z
radius = []
wave = []
for lam, nn in zip( (model.lambda_g, model.lambda_r, model.lambda_z), nrad ):
#rad = rand.uniform(minradius, maxradius, nn)
rad = np.linspace(minradius, maxradius, nn)
radius.append(rad)
wave.append(np.repeat(lam, nn))
radius = np.hstack(radius)
wave = np.hstack(wave)
sb = model(radius, wave) # evaluate the model
sberr = rand.normal(loc=0, scale=sb*0.0)
sb += sberr
# plot it!
plot_sbwave(radius, wave, sb, model=model)
#plt.axvline(x=8)
#plt.axhline(y=1)
from astropy.convolution import Gaussian1DKernel, convolve, convolve_fft
from scipy.ndimage.filters import gaussian_filter
sigma = 1
g = Gaussian1DKernel(stddev=sigma)#, mode='linear_interp')
rr = radius[:26]
ff = sb1[:26]
cff = convolve(ff, g, normalize_kernel=True, boundary='fill', fill_value=ff.min())
cff[rr > 5*sigma] = ff[rr > 5*sigma] # replace with original values
rr2 = np.hstack( (radius[:26], -radius[:26]) )
ff2 = np.hstack( (sb1[:26], sb1[:26]) )
cff2 = convolve_fft(ff2, g, normalize_kernel=True, boundary='fill', fill_value=10)
#cff3 = gaussian_filter(ff, sigma, mode='constant', cval=0.2)
plt.plot(rr2, ff2, 'rs', ms=10)
plt.plot(rr2, cff2, 'bo', ms=10)
#plt.plot(rr, cff3, 'gs', ms=10)
plt.plot(rr, cff, 'gs', ms=10)
plt.xlim(-2, 2)
#plt.ylim(0.1, 1)
plt.yscale('log')
#print(cff-cff2[:26])
m1 = SersicWaveModel(beta=0.1, alpha=-0.2, r50ref=8, nref=2.8,
psfsigma_g=0, psfsigma_r=0, psfsigma_z=0)
m2 = SersicWaveModel(beta=0.1, alpha=-0.2, r50ref=8, nref=2.8,
psfsigma_g=1, psfsigma_r=1.5, psfsigma_z=0.5)
print(m2)
sb1 = m1(radius, wave)
sb2 = m2(radius, wave)
plt.plot(radius[:26], 22.5-2.5*np.log10(sb1[:26]), 'rs')
plt.plot(radius[:26], 22.5-2.5*np.log10(sb2[:26]), 'bo')
#plt.plot(radius[:26], sb1[:26], 'rs')
#plt.plot(radius[:26], sb2[:26], 'bo')
#plt.ylim(0, 20)
| 0.460532 | 0.419291 |
# Working with code cells
## This is the second title
In this notebook you'll get some experience working with code cells.
First, run the cell below. As I mentioned before, you can run the cell by selecting it and then clicking the "run cell" button above. However, it's easier to run it by pressing **Shift + Enter** so you don't have to take your hands away from the keyboard.
```
# Select the cell, then press Shift + Enter
3**2
```
Shift + Enter runs the cell then selects the next cell or creates a new one if necessary. You can run a cell without changing the selected cell by pressing **Control + Enter**.
The output shows up below the cell. It's printing out the result just like in a normal Python shell. Only the very last result in a cell will be printed though. Otherwise, you'll need to use `print()` to print out any variables.
> **Exercise:** Run the next two cells to test this out. Think about what you expect to happen, then try it.
```
3**2
4**2
print(3**2)
4**2
```
Now try assigning a value to a variable.
```
mindset = 'growth'
```
There is no output, `'growth'` has been assigned to the variable `mindset`. All variables, functions, and classes created in a cell are available in every other cell in the notebook.
What do you think the output will be when you run the next cell? Feel free to play around with this a bit to get used to how it works.
```
mindset[:4]
```
## Code completion
When you're writing code, you'll often use the same variable or function repeatedly and can save time with code completion. That is, you only need to type part of the name, then press **tab**.
> **Exercise:** Place the cursor at the end of `mind` in the next cell and press **tab**
```
mind
```
Here, completing `mind` writes out the full variable name `mindset`. If there are multiple names that start with the same characters, you'll get a menu; see below.
```
# Run this cell
mindful = True
# Complete the name here again, choose one from the menu
mindful
```
Remember that variables assigned in one cell are available in **all** cells. This includes cells that you've previously run and cells that are above where the variable was assigned. Try doing the code completion on the cell third up from here.
Code completion also comes in handy if you're using a module but don't quite remember which function you're looking for or what the available functions are. I'll show you how this works with the [random](https://docs.python.org/3/library/random.html) module. This module provides functions for generating random numbers, often useful for making fake data or picking random items from lists.
```
# Run this
import random
```
> **Exercise:** In the cell below, place the cursor after `random.` then press **tab** to bring up the code completion menu for the module. Choose `random.randint` from the list; you can move through the menu with the up and down arrow keys.
```
random.
```
Above you should have seen all the functions available from the random module. Maybe you're looking to draw random numbers from a [Gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution), also known as the normal distribution or the "bell curve".
## Tooltips
You see there is the function `random.gauss` but how do you use it? You could check out the [documentation](https://docs.python.org/3/library/random.html), or just look up the documentation in the notebook itself.
> **Exercise:** In the cell below, place the cursor after `random.gauss`, then press **shift + tab** to bring up the tooltip.
```
random.gauss
```
You should have seen some simple documentation like this:
Signature: random.gauss(mu, sigma)
Docstring:
Gaussian distribution.
The function takes two arguments, `mu` and `sigma`. These are the standard symbols for the mean and the standard deviation, respectively, of the Gaussian distribution. Maybe you're not familiar with this though, and you need to know what the parameters actually mean. This will happen often: you'll find some function, but you need more information about how to use it. You can show more information by pressing **shift + tab** twice.
> **Exercise:** In the cell below, show the full help documentation by pressing **shift + tab** twice.
```
random.gauss
```
You should see more help text like this:
mu is the mean, and sigma is the standard deviation. This is
slightly faster than the normalvariate() function.
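For example, once you know what `mu` and `sigma` mean, a quick call might look like this (the numbers are made up purely for illustration):
```
# Five samples from a Gaussian with mean 70 and standard deviation 10
[random.gauss(70, 10) for _ in range(5)]
```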
|
github_jupyter
|
# Select the cell, then press Shift + Enter
3**2
3**2
4**2
print(3**2)
4**2
mindset = 'growth'
mindset[:4]
mind
# Run this cell
mindful = True
# Complete the name here again, choose one from the menu
mindful
# Run this
import random
random.
random.gauss
random.gauss
| 0.398641 | 0.986044 |
```
import numpy as np
import scipy
import sklearn
import pandas as pd
import matplotlib.pyplot as plt
from statistics import median, mean, quantiles, stdev
df = pd.read_csv("heart.csv")
# Prints the 5 number summary and the mean and standard deviation
def summary(data):
data.sort()
minimum = min(data)
maximum = max(data)
q = quantiles(data)
ave = mean(data)
std = stdev(data)
first = "Mean: {ave:.2f}, Standard Deviation: {std:.2f}"
second = "Min: {}, Q1: {}, Median: {}, Q3: {}, Max: {}"
print(first.format(ave=ave, std=std))
print(second.format(minimum, q[0], q[1], q[2], maximum))
```
# Exploring Data
Attributes:
- age
- sex
- chest pain type (4 values)
- resting blood pressure
- serum cholesterol in mg/dl
- fasting blood sugar > 120 mg/dl
- resting electrocardiographic results (values 0, 1, 2)
- maximum heart rate achieved
- exercise-induced angina
- oldpeak = ST depression induced by exercise relative to rest
- the slope of the peak exercise ST segment
- number of major vessels (0-3) colored by fluoroscopy
- thal: 3 = normal; 6 = fixed defect; 7 = reversible defect
The data is, first and foremost, balanced, and it is rather small for a complex problem. This means the big, data-hungry methods (e.g. Random Forests, CNNs) are difficult or out of the question.
Thoughts right now: perhaps some PCA for dimensionality reduction and some cluster analysis. I'm also thinking some basic logistic regression. My money is on an SVM being the most useful for this data.
We may also need to control for confounders. Obviously, there is a bit of confounding with age. Maybe control for sex as well? (A first modeling sketch follows the exploration cell below.)
```
ages = df['age'].tolist()
classes = df['target'].tolist()
sex = df['sex'].tolist()
chol = df['chol'].tolist()
print("ages:")
summary(ages)
print("\nchol:")
summary(chol)
fig, axs = plt.subplots(1, 2, sharey=True, figsize= (16, 5))
axs[0].set_xlabel("Age")
axs[1].set_xlabel("Cholesterol (mg/dl)")
axs[0].hist(ages, bins=10)
axs[1].hist(chol, bins=12)
sick = df[df['target'] == 1]
nonsick = df[df['target'] == 0]
sick['age'].tolist()
fig, axs = plt.subplots(2, 2, sharey='all', sharex='col', figsize=(16, 16))
axs[0, 0].set_ylabel("Has Cardiovascular Disease")
axs[0, 0].hist(sick['age'].tolist(), bins=[i for i in range(30, 90, 5)])
axs[1, 0].set_xlabel("Age")
axs[1, 0].set_ylabel("Doesn't have Cardiovascular Disease")
axs[1, 0].hist(nonsick['age'].tolist(), bins=[i for i in range(30, 90, 5)])
axs[0, 1].hist(sick['chol'].tolist(), bins=[i for i in range(50, 650, 50)])
axs[1, 1].set_xlabel("Cholesterol (mg/dl)")
axs[1, 1].hist(nonsick['chol'].tolist(), bins=[i for i in range(50, 650, 50)])
# Need to perform PCA. May do controls for age and sex. Remember to remove
```
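As a first pass at the modeling ideas above, here is a minimal sketch of PCA followed by logistic regression with scikit-learn. The scaling step, the choice of 5 components, and the train/test split are illustrative assumptions, not tuned choices.
```
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Features and labels from the dataframe loaded above
X = df.drop(columns='target')
y = df['target']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Standardize, reduce to a few components, then fit a basic logistic regression
clf = make_pipeline(StandardScaler(), PCA(n_components=5), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("Test accuracy: {:.3f}".format(clf.score(X_test, y_test)))
```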
|
github_jupyter
|
import numpy as np
import scipy
import sklearn
import pandas as pd
import matplotlib.pyplot as plt
from statistics import median, mean, quantiles, stdev
df = pd.read_csv("heart.csv")
# Prints the 5 number summary and the mean and standard deviation
def summary(data):
data.sort()
minimum = min(data)
maximum = max(data)
q = quantiles(data)
ave = mean(data)
std = stdev(data)
first = "Mean: {ave:.2f}, Standard Deviation: {std:.2f}"
second = "Min: {}, Q1: {}, Median: {}, Q3: {}, Max: {}"
print(first.format(ave=ave, std=std))
print(second.format(minimum, q[0], q[1], q[2], maximum))
ages = df['age'].tolist()
classes = df['target'].tolist()
sex = df['sex'].tolist()
chol = df['chol'].tolist()
print("ages:")
summary(ages)
print("\nchol:")
summary(chol)
fig, axs = plt.subplots(1, 2, sharey=True, figsize= (16, 5))
axs[0].set_xlabel("Age")
axs[1].set_xlabel("Cholesterol (mg/dl)")
axs[0].hist(ages, bins=10)
axs[1].hist(chol, bins=12)
sick = df[df['target'] == 1]
nonsick = df[df['target'] == 0]
sick['age'].tolist()
fig, axs = plt.subplots(2, 2, sharey='all', sharex='col', figsize=(16, 16))
axs[0, 0].set_ylabel("Has Cardiovascular Disease")
axs[0, 0].hist(sick['age'].tolist(), bins=[i for i in range(30, 90, 5)])
axs[1, 0].set_xlabel("Age")
axs[1, 0].set_ylabel("Doesn't have Cardiovascular Disease")
axs[1, 0].hist(nonsick['age'].tolist(), bins=[i for i in range(30, 90, 5)])
axs[0, 1].hist(sick['chol'].tolist(), bins=[i for i in range(50, 650, 50)])
axs[1, 1].set_xlabel("Cholesterol (mg/dl)")
axs[1, 1].hist(nonsick['chol'].tolist(), bins=[i for i in range(50, 650, 50)])
# Need to perform PCA. May do controls for age and sex. Remember to remove
| 0.511229 | 0.860134 |
## <small>
Copyright (c) 2017-21 Andrew Glassner
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
</small>
# Deep Learning: A Visual Approach
## by Andrew Glassner, https://glassner.com
### Order: https://nostarch.com/deep-learning-visual-approach
### GitHub: https://github.com/blueberrymusic
------
### What's in this notebook
This notebook is provided to help you work with Keras and TensorFlow. It accompanies the bonus chapters for my book. The code is in Python3, using the versions of libraries as of April 2021.
Note that I've included the output cells in this saved notebook, but Jupyter doesn't save the variables or data that were used to generate them. To recreate any cell's output, evaluate all the cells from the start up to that cell. A convenient way to experiment is to first choose "Restart & Run All" from the Kernel menu, so that everything's been defined and is up to date. Then you can experiment using the variables, data, functions, and other stuff defined in this notebook.
## Bonus Chapter 3 - Notebook 5: RNN curves
```
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import math
random_seed = 42
# Workaround for Keras issues on Mac computers (you can comment this
# out if you're not on a Mac, or not having problems)
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# Make a File_Helper for saving and loading files.
save_files = True
import os, sys, inspect
current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
sys.path.insert(0, os.path.dirname(current_dir)) # path to parent dir
from DLBasics_Utilities import File_Helper
file_helper = File_Helper(save_files)
def sum_of_sines(number_of_steps, d_theta, skip_steps, freqs, amps, phases):
'''Add together multiple sine waves and return a list of values that is
number_of_steps long. d_theta is the step (in radians) between samples.
skip_steps determines the start of the sequence. The lists freqs, amps,
and phases should all be the same length (but we don't check!)'''
values = []
for step_num in range(number_of_steps):
angle = d_theta * (step_num + skip_steps)
sum = 0
for wave in range(len(freqs)):
y = amps[wave] * math.sin(freqs[wave]*(phases[wave] + angle))
sum += y
values.append(sum)
return np.array(values)
def sum_of_upsloping_sines(number_of_steps, d_theta, skip_steps, freqs, amps, phases):
'''Like sum_of_sines(), but always sloping upwards'''
np.random.seed(42)
values = []
for step_num in range(number_of_steps):
angle = d_theta * (step_num + skip_steps)
sum = 0
for wave in range(len(freqs)):
y = amps[wave] * math.sin(freqs[wave]*(phases[wave] + angle))
sum += y
values.append(sum)
if step_num > 0:
sum_change = sum - prev_sum
if sum_change < 0:
values[-1] *= -1
if step_num == 1:
values[-2] *= -1
prev_sum = sum
return np.array(values)
def samples_and_targets_from_sequence(sequence, window_size):
'''Return lists of samples and targets built from overlapping
windows of the given size. Windows start at the beginning of
the input sequence and move right by 1 element.'''
samples = []
targets = []
for i in range(sequence.shape[0]-window_size):
sample = sequence[i:i+window_size]
target = sequence[i+window_size]
samples.append(sample)
targets.append(target[0])
return (np.array(samples), np.array(targets))
def make_data(data_sequence_number, training_length):
training_sequence = test_sequence = []
test_length = 200
theta_step = .057
if data_sequence_number == 0:
freqs_list = [1, 2]
amps_list = [1, 2]
phases_list = [0, 0]
data_maker = sum_of_sines
elif data_sequence_number == 1:
freqs_list = [1.1, 1.7, 3.1, 7]
amps_list = [1,2,2,3]
phases_list = [0,0,0,0]
data_maker = sum_of_sines
elif data_sequence_number == 2:
freqs_list = [1.1, 1.7, 3.1, 7]
amps_list = [1,2,2,3]
phases_list = [0,0,0,0]
data_maker = sum_of_upsloping_sines
else:
print("***** ERROR! Unknown data_sequence_number = ",data_sequence_number)
training_sequence = data_maker(training_length, theta_step, 0, freqs_list, amps_list, phases_list)
test_sequence = data_maker(test_length, theta_step, 2*training_length, freqs_list, amps_list, phases_list)
return (training_sequence, test_sequence)
def show_data_sets(training_length):
for i in range(0, 3):
(training_sequence, test_sequence) = make_data(i, training_length)
plt.figure(figsize=(8,3))
plt.subplot(1, 2, 1)
plt.plot(training_sequence)
plt.title('training sequence, set '+str(i))
plt.xlabel('index')
plt.ylabel('value')
plt.subplot(1, 2, 2)
plt.plot(test_sequence)
plt.title('test sequence, set '+str(i))
plt.xlabel('index')
plt.ylabel('value')
plt.tight_layout()
file_helper.save_figure('RNN-data-set-'+str(i))
plt.show()
show_data_sets(training_length=200)
def scale_sequences(training_sequence, test_sequence):
# reshape train and test sequences to form needed by MinMaxScaler
training_sequence = np.reshape(training_sequence, (training_sequence.shape[0], 1))
test_sequence = np.reshape(test_sequence, (test_sequence.shape[0], 1))
Min_max_scaler = MinMaxScaler(feature_range=(0, 1))
Min_max_scaler.fit(training_sequence)
scaled_training_sequence = Min_max_scaler.transform(training_sequence)
scaled_test_sequence = Min_max_scaler.transform(test_sequence)
return (Min_max_scaler, scaled_training_sequence, scaled_test_sequence)
# chop up train and test sequences into overlapping windows of the given size
def chop_up_sequences(training_sequence, test_sequence, window_size):
(X_train, y_train) = samples_and_targets_from_sequence(training_sequence, window_size)
(X_test, y_test) = samples_and_targets_from_sequence(test_sequence, window_size)
return (X_train, y_train, X_test, y_test)
def make_data_set(data_sequence_number, window_size, training_length):
(training_sequence, test_sequence) = make_data(data_sequence_number, training_length)
(Min_max_scaler, scaled_training_sequence, scaled_test_sequence) = \
scale_sequences(training_sequence, test_sequence)
(X_train, y_train, X_test, y_test)= chop_up_sequences(scaled_training_sequence, scaled_test_sequence, window_size)
return (Min_max_scaler, X_train, y_train, X_test, y_test, training_sequence, test_sequence)
# build and run the first model.
def make_model(model_number, window_size):
model = Sequential()
if model_number == 0:
model.add(LSTM(3, input_shape=[window_size, 1]))
model.add(Dense(1, activation=None))
elif model_number == 1:
model.add(LSTM(3, return_sequences=True, input_shape=[window_size, 1]))
model.add(LSTM(3))
model.add(Dense(1, activation=None))
elif model_number == 2:
model.add(LSTM(9, return_sequences=True, input_shape=[window_size, 1]))
model.add(LSTM(6, return_sequences=True))
model.add(LSTM(3))
model.add(Dense(1, activation=None))
else:
print("*** ERROR: make_model unknown model_number = ",model_number)
model.compile(loss='mean_squared_error', optimizer='adam')
return model
def build_and_compare(model_number, data_set_number, window_size, training_length, epochs):
np.random.seed(random_seed)
model = make_model(model_number, window_size)
(Min_max_scaler, X_train, y_train, X_test, y_test, training_sequence, test_sequence) = \
make_data_set(data_set_number, window_size, training_length)
history = model.fit(X_train, y_train, epochs=epochs, batch_size=1, verbose=0)
# Predict
y_train_predict = np.ravel(model.predict(X_train))
y_test_predict = np.ravel(model.predict(X_test))
# invert transformation
inverse_y_train_predict = Min_max_scaler.inverse_transform([y_train_predict])
inverse_y_test_predict = Min_max_scaler.inverse_transform([y_test_predict])
plot_string = '-dataset-'+str(data_set_number)+'-window-'+str(window_size)+\
'-model_number-'+str(model_number)+'-length-'+str(training_length)+'-epochs-'+str(epochs)
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.title('Loss for data set '+str(data_set_number)+', window '+str(window_size))
file_helper.save_figure('RNN-loss'+plot_string)
plt.show()
# plot training and predictions
plt.plot(training_sequence, label="train", color='black', linewidth=2, zorder=20)
skip_values = np.array(window_size*(np.nan,))
flat_predict = np.ravel(inverse_y_train_predict)
plot_predict = np.append(skip_values, flat_predict)
plt.plot(plot_predict, label="train predict", color='red', linewidth=2, zorder=10)
plt.legend(loc='best')
plt.xlabel('index')
plt.ylabel('train and prediction')
plt.title('training set '+str(data_set_number)+', window '+str(window_size))
file_helper.save_figure('RNN-train-predictions'+plot_string)
plt.show()
plt.plot(test_sequence, label="test", color='black', linewidth=2, zorder=20)
skip_values = np.array(window_size*(np.nan,))
flat_predict = np.ravel(inverse_y_test_predict)
plot_predict = np.append(skip_values, flat_predict)
plt.plot(plot_predict, label="test predict", color='red', linewidth=2, zorder=10)
plt.legend(loc='best')
plt.xlabel('index')
plt.ylabel('test and prediction')
plt.title('test set '+str(data_set_number)+', window '+str(window_size))
plt.tight_layout()
file_helper.save_figure('RNN-test-predictions'+plot_string)
plt.show()
```
### Slow Alert!
If you're running without a GPU (and maybe even if you are), the last two
of these runs will take a while. It might be hours, or even days. Plan ahead!
```
build_and_compare(model_number=0, data_set_number=0, window_size=1, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=0, window_size=3, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=0, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=1, window_size=1, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=1, window_size=3, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=1, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=2, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=1, data_set_number=2, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=2, data_set_number=2, window_size=5, training_length=200, epochs=100)
#build_and_compare(model_number=2, data_set_number=2, window_size=13, training_length=2000, epochs=100)
#build_and_compare(model_number=2, data_set_number=2, window_size=13, training_length=20000, epochs=100)
```
|
github_jupyter
|
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import math
random_seed = 42
# Workaround for Keras issues on Mac computers (you can comment this
# out if you're not on a Mac, or not having problems)
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# Make a File_Helper for saving and loading files.
save_files = True
import os, sys, inspect
current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
sys.path.insert(0, os.path.dirname(current_dir)) # path to parent dir
from DLBasics_Utilities import File_Helper
file_helper = File_Helper(save_files)
def sum_of_sines(number_of_steps, d_theta, skip_steps, freqs, amps, phases):
'''Add together multiple sine waves and return a list of values that is
number_of_steps long. d_theta is the step (in radians) between samples.
skip_steps determines the start of the sequence. The lists freqs, amps,
and phases should all be the same length (but we don't check!)'''
values = []
for step_num in range(number_of_steps):
angle = d_theta * (step_num + skip_steps)
sum = 0
for wave in range(len(freqs)):
y = amps[wave] * math.sin(freqs[wave]*(phases[wave] + angle))
sum += y
values.append(sum)
return np.array(values)
def sum_of_upsloping_sines(number_of_steps, d_theta, skip_steps, freqs, amps, phases):
'''Like sum_of_sines(), but always sloping upwards'''
np.random.seed(42)
values = []
for step_num in range(number_of_steps):
angle = d_theta * (step_num + skip_steps)
sum = 0
for wave in range(len(freqs)):
y = amps[wave] * math.sin(freqs[wave]*(phases[wave] + angle))
sum += y
values.append(sum)
if step_num > 0:
sum_change = sum - prev_sum
if sum_change < 0:
values[-1] *= -1
if step_num == 1:
values[-2] *= -1
prev_sum = sum
return np.array(values)
def samples_and_targets_from_sequence(sequence, window_size):
'''Return lists of samples and targets built from overlapping
windows of the given size. Windows start at the beginning of
the input sequence and move right by 1 element.'''
samples = []
targets = []
for i in range(sequence.shape[0]-window_size):
sample = sequence[i:i+window_size]
target = sequence[i+window_size]
samples.append(sample)
targets.append(target[0])
return (np.array(samples), np.array(targets))
def make_data(data_sequence_number, training_length):
training_sequence = test_sequence = []
test_length = 200
theta_step = .057
if data_sequence_number == 0:
freqs_list = [1, 2]
amps_list = [1, 2]
phases_list = [0, 0]
data_maker = sum_of_sines
elif data_sequence_number == 1:
freqs_list = [1.1, 1.7, 3.1, 7]
amps_list = [1,2,2,3]
phases_list = [0,0,0,0]
data_maker = sum_of_sines
elif data_sequence_number == 2:
freqs_list = [1.1, 1.7, 3.1, 7]
amps_list = [1,2,2,3]
phases_list = [0,0,0,0]
data_maker = sum_of_upsloping_sines
else:
print("***** ERROR! Unknown data_sequence_number = ",data_sequence_number)
training_sequence = data_maker(training_length, theta_step, 0, freqs_list, amps_list, phases_list)
test_sequence = data_maker(test_length, theta_step, 2*training_length, freqs_list, amps_list, phases_list)
return (training_sequence, test_sequence)
def show_data_sets(training_length):
for i in range(0, 3):
(training_sequence, test_sequence) = make_data(i, training_length)
plt.figure(figsize=(8,3))
plt.subplot(1, 2, 1)
plt.plot(training_sequence)
plt.title('training sequence, set '+str(i))
plt.xlabel('index')
plt.ylabel('value')
plt.subplot(1, 2, 2)
plt.plot(test_sequence)
plt.title('test sequence, set '+str(i))
plt.xlabel('index')
plt.ylabel('value')
plt.tight_layout()
file_helper.save_figure('RNN-data-set-'+str(i))
plt.show()
show_data_sets(training_length=200)
def scale_sequences(training_sequence, test_sequence):
# reshape train and test sequences to form needed by MinMaxScaler
training_sequence = np.reshape(training_sequence, (training_sequence.shape[0], 1))
test_sequence = np.reshape(test_sequence, (test_sequence.shape[0], 1))
Min_max_scaler = MinMaxScaler(feature_range=(0, 1))
Min_max_scaler.fit(training_sequence)
scaled_training_sequence = Min_max_scaler.transform(training_sequence)
scaled_test_sequence = Min_max_scaler.transform(test_sequence)
return (Min_max_scaler, scaled_training_sequence, scaled_test_sequence)
# chop up train and test sequences into overlapping windows of the given size
def chop_up_sequences(training_sequence, test_sequence, window_size):
(X_train, y_train) = samples_and_targets_from_sequence(training_sequence, window_size)
(X_test, y_test) = samples_and_targets_from_sequence(test_sequence, window_size)
return (X_train, y_train, X_test, y_test)
def make_data_set(data_sequence_number, window_size, training_length):
(training_sequence, test_sequence) = make_data(data_sequence_number, training_length)
(Min_max_scaler, scaled_training_sequence, scaled_test_sequence) = \
scale_sequences(training_sequence, test_sequence)
(X_train, y_train, X_test, y_test)= chop_up_sequences(scaled_training_sequence, scaled_test_sequence, window_size)
return (Min_max_scaler, X_train, y_train, X_test, y_test, training_sequence, test_sequence)
# build and run the first model.
def make_model(model_number, window_size):
model = Sequential()
if model_number == 0:
model.add(LSTM(3, input_shape=[window_size, 1]))
model.add(Dense(1, activation=None))
elif model_number == 1:
model.add(LSTM(3, return_sequences=True, input_shape=[window_size, 1]))
model.add(LSTM(3))
model.add(Dense(1, activation=None))
elif model_number == 2:
model.add(LSTM(9, return_sequences=True, input_shape=[window_size, 1]))
model.add(LSTM(6, return_sequences=True))
model.add(LSTM(3))
model.add(Dense(1, activation=None))
else:
print("*** ERROR: make_model unknown model_number = ",model_number)
model.compile(loss='mean_squared_error', optimizer='adam')
return model
def build_and_compare(model_number, data_set_number, window_size, training_length, epochs):
np.random.seed(random_seed)
model = make_model(model_number, window_size)
(Min_max_scaler, X_train, y_train, X_test, y_test, training_sequence, test_sequence) = \
make_data_set(data_set_number, window_size, training_length)
history = model.fit(X_train, y_train, epochs=epochs, batch_size=1, verbose=0)
# Predict
y_train_predict = np.ravel(model.predict(X_train))
y_test_predict = np.ravel(model.predict(X_test))
# invert transformation
inverse_y_train_predict = Min_max_scaler.inverse_transform([y_train_predict])
inverse_y_test_predict = Min_max_scaler.inverse_transform([y_test_predict])
plot_string = '-dataset-'+str(data_set_number)+'-window-'+str(window_size)+\
'-model_number-'+str(model_number)+'-length-'+str(training_length)+'-epochs-'+str(epochs)
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.title('Loss for data set '+str(data_set_number)+', window '+str(window_size))
file_helper.save_figure('RNN-loss'+plot_string)
plt.show()
# plot training and predictions
plt.plot(training_sequence, label="train", color='black', linewidth=2, zorder=20)
skip_values = np.array(window_size*(np.nan,))
flat_predict = np.ravel(inverse_y_train_predict)
plot_predict = np.append(skip_values, flat_predict)
plt.plot(plot_predict, label="train predict", color='red', linewidth=2, zorder=10)
plt.legend(loc='best')
plt.xlabel('index')
plt.ylabel('train and prediction')
plt.title('training set '+str(data_set_number)+', window '+str(window_size))
file_helper.save_figure('RNN-train-predictions'+plot_string)
plt.show()
plt.plot(test_sequence, label="test", color='black', linewidth=2, zorder=20)
skip_values = np.array(window_size*(np.nan,))
flat_predict = np.ravel(inverse_y_test_predict)
plot_predict = np.append(skip_values, flat_predict)
plt.plot(plot_predict, label="test predict", color='red', linewidth=2, zorder=10)
plt.legend(loc='best')
plt.xlabel('index')
plt.ylabel('test and prediction')
plt.title('test set '+str(data_set_number)+', window '+str(window_size))
plt.tight_layout()
file_helper.save_figure('RNN-test-predictions'+plot_string)
plt.show()
build_and_compare(model_number=0, data_set_number=0, window_size=1, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=0, window_size=3, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=0, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=1, window_size=1, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=1, window_size=3, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=1, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=0, data_set_number=2, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=1, data_set_number=2, window_size=5, training_length=200, epochs=100)
build_and_compare(model_number=2, data_set_number=2, window_size=5, training_length=200, epochs=100)
#build_and_compare(model_number=2, data_set_number=2, window_size=13, training_length=2000, epochs=100)
#build_and_compare(model_number=2, data_set_number=2, window_size=13, training_length=20000, epochs=100)
| 0.450601 | 0.704529 |
# File formats
There are many different file formats in widespread use within data science. In this lecture, we will review common file formats and their trade-offs, and how to choose an appropriate file format. We will also review the mechanics of reading/parsing different file formats, and how to write to them.
- [CSV](https://docs.python.org/3/library/csv.html)
- [Feather](https://arrow.apache.org/docs/python/feather.html)
- [JSON](https://www.w3schools.com/js/js_json_intro.asp)
- [XML](https://www.w3schools.com/xml/xml_whatis.asp)
- [HDF5](https://support.hdfgroup.org/HDF5/Tutor/HDF5Intro.pdf)
- [SQLite3](https://docs.python.org/3/library/sqlite3.html)
These other formats may be touched on here but will be revisited when we look at big data and distributed computing.
- [Parquet](https://parquet.apache.org/documentation/latest/)
- [Avro](https://avro.apache.org/docs/current/)
- [Arrow](https://arrow.apache.org)
## Imports
### Standard library packages
```
import os
import csv
import datetime
import decimal
import json
import sqlite3
import xml.etree.cElementTree as ET
```
### 3rd party packages
```
%%capture
! python3 -m pip install --quiet faker json2xml fastparquet fastavro rec_avro feather-format
import pendulum
import numpy as np
import pandas as pd
from faker import Faker
from json2xml import json2xml
from json2xml.utils import readfromjson
import fastavro
from rec_avro import (to_rec_avro_destructive,
from_rec_avro_destructive,
rec_avro_schema)
import fastparquet
import h5py
import tables
from sqlalchemy import create_engine
import feather
```
## How to create fake data
### Create fake profiles using `Faker`
```
fakes = [
Faker('zh_CN'),
Faker('ar_SA'),
Faker('en_US'),
]
n = 3
p = [0.3, 0.2, 0.5]
np.random.seed(1)
locales = np.random.choice(len(fakes), size=n, p=p)
profiles = [fakes[locale].profile() for locale in locales]
profiles
```
### Make `pandas` data frame
```
df = pd.DataFrame(profiles)
df.iloc[0]
```
### Make comma delimited files
```
df.to_csv('data/profiles.csv', index=False)
! head -c 200 data/profiles.csv
```
### Make tab-delimited files
```
df.to_csv('data/profiles.txt', index=False, sep='\t')
! head -c 200 data/profiles.txt
```
### Make JSON files
```
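# json.dump can't serialize datetime or Decimal objects on its own, so convert them to strings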
def converter(o):
if isinstance(o, datetime.datetime):
return o.__str__()
if isinstance(o, decimal.Decimal):
return o.__str__()
with open('data/profiles.json', 'w') as f:
json.dump(profiles , f, default=converter)
! head -c 200 data/profiles.json
```
### Make XML files
```
with open('data/profiles.xml', 'w') as f:
data = readfromjson('data/profiles.json')
f.write(json2xml.Json2xml({'employee': data}, wrapper="duke").to_xml())
! head -c 200 data/profiles.xml
```
### Make AVRO files
Avro is a row-oriented storage format designed for distributed computing.
```
ps = json.load(open('data/profiles.json'))
avro_objects = [to_rec_avro_destructive(rec) for rec in ps]
with open('data/profiles.avro', 'wb') as f_out:
fastavro.writer(f_out, fastavro.parse_schema(rec_avro_schema()), avro_objects)
! head -c 200 data/profiles.avro
```
### Munge pandas data to be compatible with storage
```
df.birthdate = pd.to_datetime(df.birthdate)
df = (
df.current_location.
apply(pd.Series).
merge(df, left_index=True, right_index=True).
drop('current_location', axis=1).
rename({0: 'location_x', 1: 'location_y'}, axis=1)
)
df['location_x'] = df['location_x'].astype('float')
df['location_y'] = df['location_y'].astype('float')
df.website = df.website.apply(lambda s: ','.join(s))
```
### Make HDF5 files
```
df.to_hdf('data/profiles.h5', key='duke')
! head -c 200 data/profiles.h5
```
### Make Parquet files
Parquet is a column-oriented format designed for distributed computing.
```
fastparquet.write('data/profiles.parq', df)
! head -c 200 data/profiles.parq
```
### Make SQLite3 database files
```
engine = create_engine('sqlite:///data/profiles.sqlite', echo=False)
df.to_sql('duke', con=engine, if_exists='replace', index_label='id')
! head -c 200 data/profiles.sqlite
```
## Reading data from different file formats
### CSV
#### When the CSV file can be read as is
```
df = pd.read_csv('data/profiles.csv')
df.head(1)
df.loc[0]
```
#### When scrubbing of rows may be needed
```
rows = []
with open('data/profiles.csv') as f:
reader = csv.reader(f)
for row in reader:
rows.append(row)
list(map(len, rows))
rows[:2]
df = pd.DataFrame(rows[1:], columns=rows[0])
df.head(1)
```
### Tab-delimited
Same as CSV, just change separator.
#### Direct reading into DataFrame
```
df = pd.read_csv('data/profiles.txt', sep='\t')
df.head()
```
#### Row by row processing
```
rows = []
with open('data/profiles.txt') as f:
reader = csv.reader(f, delimiter='\t')
for row in reader:
rows.append(row)
list(map(len, rows))
```
### JSON
JSON is the most popular format for sharing information over the web. Most data-retrieval APIs will return JSON.
```
with open('data/profiles.json') as f:
profiles = json.load(f)
len(profiles)
profiles[0]
```
#### Using a REST API to retrieve JSON data
```
if not os.path.exists('data/pokemon.json'):
! curl -o data/pokemon.json https://pokeapi.co/api/v2/pokemon/23
with open('data/pokemon.json') as f:
pokemon = json.load(f)
pokemon.keys()
pokemon['name']
pokemon['abilities']
```
### Flatten nested JSON and extract fields to `pandas`
The [`json_normalize`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.json_normalize.html) function is useful for extracting nested fields from a JSON document.
```
import requests
pd.json_normalize(pokemon['abilities'])
pokemons = [requests.get(f'https://pokeapi.co/api/v2/pokemon/{i}').json()
for i in range(20,25)]
pd.json_normalize(pokemons, ['moves', 'version_group_details'], ['name']).columns
cols = ['name', 'move_learn_method.name', 'level_learned_at', 'version_group.name' ]
pd.json_normalize(pokemons, ['moves', 'version_group_details'], ['name'])[cols].drop_duplicates()
df1 = pd.json_normalize(pokemons, 'abilities', ['name', 'order', 'weight', ['species', 'name']])
df1
```
The `explode` and `apply` methods are useful if you have nested structures within a DataFrame.
```
df_poke = pd.DataFrame(pokemons)
df_poke.head(2)
```
Use `explode` to convert items in a list to separate rows, and `apply(pd.Series)` to convert items in a dictionary into separate columns.
```
df1 = df_poke.abilities.explode().apply(pd.Series).reset_index(drop=True)
df1.head()
df1.join(df1.ability.apply(pd.Series)).drop(columns = ['ability'])
```
### XML
```
tree = ET.parse('data/profiles.xml')
root = tree.getroot()
root.tag
ET.dump(root)
for employee in root:
for elem in employee:
print(f'{elem.tag:>20}: {elem.text}')
break
root.findall('.')
root.findall('./')
root.findall('.//')[:5]
for item in root.findall('.//company'):
print(item.text)
```
### HDF5
Like XML and JSON, HDF5 files store hierarchical data that can be annotated. The strong point of HDF5 is its ability to store large numerical data sets so that parts of the data can be selectively loaded into memory for analysis. HDF5 files are also easy to use for people familiar with `numpy` and are widely used in the scientific community.
There are two popular libraries for working with HDF5. Pandas uses `pytables`, and the stored schema can be quite unintuitive, but that does not matter since we usually just use Pandas to read it back in.
#### Pandas and `tables`
```
f = tables.open_file('data/profiles.h5')
f
f.root.duke.axis0[:]
f.root.duke.axis1[:]
f.root.duke.block0_items[:]
f.root.duke.block0_values[:]
f.close()
```
#### Reading into `pandas`
```
df = pd.read_hdf('data/profiles.h5')
df
```
#### Using `h5py`
For actually working directly with HDF5, I find `h5py` more intuitive.
```
filename = 'data/si.h5'
if os.path.exists(filename):
os.remove(filename)
f = h5py.File(filename, 'w')
start = pendulum.datetime(2019, 8, 31)
stop = start.add(days=3)
for day in pendulum.period(start, stop):
g = f.create_group(day.format('ddd'))
g.attrs['date'] = day.format('LLL')
g.attrs['analyst'] = 'Mario'
for expt in range(3):
data = np.random.poisson(size=(100, 100))
ds = g.create_dataset(f'expt-{expt:02d}', data=data)
f = h5py.File(filename, 'r')
list(f.keys())
list(f['Sat'].attrs.keys())
f['Sat'].attrs['analyst']
f['Sat'].attrs['date']
list(f['Sat'].keys())
f['Sat']['expt-01'][5:10, 5:10]
f['Sat']['expt-01'][5:10, 5:10].sum(axis=0)
f.close()
```
## Avro
```
! python3 -m pip install --quiet fastavro rec_avro
%%bash --out s
fastavro --schema data/profiles.avro
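# the dumped schema string uses JSON-style 'true'; swapping in Python's 'True' lets eval() turn it into a dict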
schema = eval(s.replace('true', 'True'))
schema
with open('data/profiles.avro', 'rb') as f:
avro_reader = fastavro.reader(f, reader_schema=schema)
for record in avro_reader:
print(record)
```
#### Avro to JSON
```
with open('data/profiles.avro', 'rb') as f:
avro_reader = fastavro.reader(f, reader_schema=schema)
for record in avro_reader:
print(from_rec_avro_destructive(record))
```
### Parquet
```
%%capture
! python3 -m pip install --quiet fastparquet
parq = fastparquet.ParquetFile('data/profiles.parq')
parq.columns
df = parq.to_pandas()
df.head(1)
```
#### Reading directly in `pandas`
```
df = pd.read_parquet('data/profiles.parq')
df.head(1)
```
## Feather
Feather is designed for fast read and write of columnar data (e.g. DataFrames). It is compatible with most data science languages and should be considered one of the top choices for large tabular data sets.
```
x = np.random.normal(0, 1, (1_000_000, 10))
df = pd.DataFrame(x, columns=list('abcdefghij'))
df.head()
%%time
df.to_csv('data/big.csv', index=False)
%%time
df_csv = pd.read_csv('data/big.csv')
! ls -lh data/big.csv
%%time
df.to_feather('data/big.feather')
%%time
df_feather = pd.read_feather('data/big.feather')
! ls -lh data/big.feather
df_csv.head(3)
df_feather.head(3)
```
### Using the `feather` package directly
```
feather.write_dataframe(df, 'data/big2.feather')
df_feather2 = feather.read_dataframe('data/big2.feather')
```
## SQL
A relational database isn't really a file format, but SQLite3 stores its data as a single file.
```
conn = sqlite3.connect('data/profiles.sqlite')
c = conn.cursor()
c.execute("SELECT * FROM sqlite_master WHERE type='table'")
c.fetchall()
c.execute('SELECT * FROM duke')
c.fetchone()
conn.close()
```
|
github_jupyter
|
import os
import csv
import datetime
import decimal
import json
import sqlite3
import xml.etree.cElementTree as ET
%%capture
! python3 -m pip install --quiet faker json2xml fastparquet fastavro rec_avro feather-format
import pendulum
import numpy as np
import pandas as pd
from faker import Faker
from json2xml import json2xml
from json2xml.utils import readfromjson
import fastavro
from rec_avro import (to_rec_avro_destructive,
from_rec_avro_destructive,
rec_avro_schema)
import fastparquet
import h5py
import tables
from sqlalchemy import create_engine
import feather
fakes = [
Faker('zh_CN'),
Faker('ar_SA'),
Faker('en_US'),
]
n = 3
p = [0.3, 0.2, 0.5]
np.random.seed(1)
locales = np.random.choice(len(fakes), size=n, p=p)
profiles = [fakes[locale].profile() for locale in locales]
profiles
df = pd.DataFrame(profiles)
df.iloc[0]
df.to_csv('data/profiles.csv', index=False)
! head -c 200 data/profiles.csv
df.to_csv('data/profiles.txt', index=False, sep='\t')
! head -c 200 data/profiles.txt
def converter(o):
if isinstance(o, datetime.datetime):
return o.__str__()
if isinstance(o, decimal.Decimal):
return o.__str__()
with open('data/profiles.json', 'w') as f:
json.dump(profiles , f, default=converter)
! head -c 200 data/profiles.json
with open('data/profiles.xml', 'w') as f:
data = readfromjson('data/profiles.json')
f.write(json2xml.Json2xml({'employee': data}, wrapper="duke").to_xml())
! head -c 200 data/profiles.xml
ps = json.load(open('data/profiles.json'))
avro_objects = [to_rec_avro_destructive(rec) for rec in ps]
with open('data/profiles.avro', 'wb') as f_out:
fastavro.writer(f_out, fastavro.parse_schema(rec_avro_schema()), avro_objects)
! head -c 200 data/profiles.avro
df.birthdate = pd.to_datetime(df.birthdate)
df = (
df.current_location.
apply(pd.Series).
merge(df, left_index=True, right_index=True).
drop('current_location', axis=1).
rename({0: 'location_x', 1: 'location_y'}, axis=1)
)
df['location_x'] = df['location_x'].astype('float')
df['location_y'] = df['location_y'].astype('float')
df.website = df.website.apply(lambda s: ','.join(s))
df.to_hdf('data/profiles.h5', key='duke')
! head -c 200 data/profiles.h5
fastparquet.write('data/profiles.parq', df)
! head -c 200 data/profiles.parq
engine = create_engine('sqlite:///data/profiles.sqlite', echo=False)
df.to_sql('duke', con=engine, if_exists='replace', index_label='id')
! head -c 200 data/profiles.sqlite
df = pd.read_csv('data/profiles.csv')
df.head(1)
df.loc[0]
rows = []
with open('data/profiles.csv') as f:
reader = csv.reader(f)
for row in reader:
rows.append(row)
list(map(len, rows))
rows[:2]
df = pd.DataFrame(rows[1:], columns=rows[0])
df.head(1)
df = pd.read_csv('data/profiles.txt', sep='\t')
df.head()
rows = []
with open('data/profiles.txt') as f:
reader = csv.reader(f, delimiter='\t')
for row in reader:
rows.append(row)
list(map(len, rows))
with open('data/profiles.json') as f:
profiles = json.load(f)
len(profiles)
profiles[0]
if not os.path.exists('data/pokemon.json'):
! curl -o data/pokemon.json https://pokeapi.co/api/v2/pokemon/23
with open('data/pokemon.json') as f:
pokemon = json.load(f)
pokemon.keys()
pokemon['name']
pokemon['abilities']
import requests
pd.json_normalize(pokemon['abilities'])
pokemons = [requests.get(f'https://pokeapi.co/api/v2/pokemon/{i}').json()
for i in range(20,25)]
pd.json_normalize(pokemons, ['moves', 'version_group_details'], ['name']).columns
cols = ['name', 'move_learn_method.name', 'level_learned_at', 'version_group.name' ]
pd.json_normalize(pokemons, ['moves', 'version_group_details'], ['name'])[cols].drop_duplicates()
df1 = pd.json_normalize(pokemons, 'abilities', ['name', 'order', 'weight', ['species', 'name']])
df1
df_poke = pd.DataFrame(pokemons)
df_poke.head(2)
df1 = df_poke.abilities.explode().apply(pd.Series).reset_index(drop=True)
df1.head()
df1.join(df1.ability.apply(pd.Series)).drop(columns = ['ability'])
tree = ET.parse('data/profiles.xml')
root = tree.getroot()
root.tag
ET.dump(root)
for employee in root:
for elem in employee:
print(f'{elem.tag:>20}: {elem.text}')
break
root.findall('.')
root.findall('./')
root.findall('.//')[:5]
for item in root.findall('.//company'):
print(item.text)
f = tables.open_file('data/profiles.h5')
f
f.root.duke.axis0[:]
f.root.duke.axis1[:]
f.root.duke.block0_items[:]
f.root.duke.block0_values[:]
f.close()
df = pd.read_hdf('data/profiles.h5')
df
filename = 'data/si.h5'
if os.path.exists(filename):
os.remove(filename)
f = h5py.File(filename, 'w')
start = pendulum.datetime(2019, 8, 31)
stop = start.add(days=3)
for day in pendulum.period(start, stop):
g = f.create_group(day.format('ddd'))
g.attrs['date'] = day.format('LLL')
g.attrs['analyst'] = 'Mario'
for expt in range(3):
data = np.random.poisson(size=(100, 100))
ds = g.create_dataset(f'expt-{expt:02d}', data=data)
f = h5py.File(filename, 'r')
list(f.keys())
list(f['Sat'].attrs.keys())
f['Sat'].attrs['analyst']
f['Sat'].attrs['date']
list(f['Sat'].keys())
f['Sat']['expt-01'][5:10, 5:10]
f['Sat']['expt-01'][5:10, 5:10].sum(axis=0)
f.close()
! python3 -m pip install --quiet fastavro rec_avro
%%bash --out s
fastavro --schema data/profiles.avro
schema = eval(s.replace('true', 'True'))
schema
with open('data/profiles.avro', 'rb') as f:
avro_reader = fastavro.reader(f, reader_schema=schema)
for record in avro_reader:
print(record)
with open('data/profiles.avro', 'rb') as f:
avro_reader = fastavro.reader(f, reader_schema=schema)
for record in avro_reader:
print(from_rec_avro_destructive(record))
%%capture
! python3 -m pip install --quiet fastparquet
parq = fastparquet.ParquetFile('data/profiles.parq')
parq.columns
df = parq.to_pandas()
df.head(1)
df = pd.read_parquet('data/profiles.parq')
df.head(1)
x = np.random.normal(0, 1, (1_000_000, 10))
df = pd.DataFrame(x, columns=list('abcdefghij'))
df.head()
%%time
df.to_csv('data/big.csv', index=False)
%%time
df_csv = pd.read_csv('data/big.csv')
! ls -lh data/big.csv
%%time
df.to_feather('data/big.feather')
%%time
df_feather = pd.read_feather('data/big.feather')
! ls -lh data/big.feather
df_csv.head(3)
df_feather.head(3)
feather.write_dataframe(df, 'data/big2.feather')
df_feather2 = feather.read_dataframe('data/big2.feather')
conn = sqlite3.connect('data/profiles.sqlite')
c = conn.cursor()
c.execute("SELECT * FROM sqlite_master WHERE type='table'")
c.fetchall()
c.execute('SELECT * FROM duke')
c.fetchone()
conn.close()
| 0.167695 | 0.870322 |
# Maxpooling Layer
In this notebook, we add and visualize the output of a maxpooling layer in a CNN.
A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN.
<img src='https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/convolutional-neural-networks/conv-visualization/notebook_ims/CNN_all_layers.png' height=50% width=50% />
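As a rough sketch of that stack (the layer sizes and the assumed 28x28 grayscale input are arbitrary choices for illustration, not the model built later in this notebook), something like the following would do. If you want to run it, execute the install cell below first so that `torch` is available.
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCNN(nn.Module):
    def __init__(self):
        super(ToyCNN, self).__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)  # 1 input channel, 4 filters
        self.pool = nn.MaxPool2d(2, 2)                         # halves the x-y size
        self.fc = nn.Linear(4 * 14 * 14, 10)                   # maps to a desired output size

    def forward(self, x):                   # x: (batch, 1, 28, 28)
        x = self.pool(F.relu(self.conv(x)))
        x = x.view(x.size(0), -1)           # flatten before the linear layer
        return self.fc(x)

ToyCNN()(torch.zeros(1, 1, 28, 28)).shape  # torch.Size([1, 10])
```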
### Copy files and install pytorch
```
import sys
try:
import torch
except:
import os
os.environ['TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD']='2000000000'
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!{sys.executable} -m pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision >/dev/null
! curl -s https://codeload.github.com/udacity/deep-learning-v2-pytorch/tar.gz/master | tar -xz --strip=3 deep-learning-v2-pytorch-master/convolutional-neural-networks/conv-visualization/data/ >/dev/null 2>&1
```
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.axis('off')
plt.show();
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
```
### Define convolutional and pooling layers
You've seen how to define a convolutional layer; the next ingredient is a:
* Pooling layer
In the next cell, we initialize a convolutional layer so that it contains all the created filters, then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2), so you can see that the image resolution has been reduced after this step!
A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values, reducing the x-y size of the patch by a factor of 2. Only the maximum pixel value in each 2x2 area remains in the new, pooled output.
<img src='https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/convolutional-neural-networks/conv-visualization/notebook_ims/maxpooling_ex.png' height=50% width=50% />
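To make that concrete, here is a small sketch of the same operation in PyTorch; the 4x4 patch values below are made up purely for illustration:
```
import torch
import torch.nn as nn

# a made-up 4x4 patch, shaped (batch=1, channels=1, height=4, width=4)
patch = torch.tensor([[[[1., 9., 2., 4.],
                        [5., 6., 2., 8.],
                        [3., 1., 7., 0.],
                        [2., 4., 5., 6.]]]])

pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(patch))
# each 2x2 area collapses to its maximum:
# tensor([[[[9., 8.],
#           [4., 7.]]]])
```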
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer`, that takes in a specific layer and a number of filters (optional argument) and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer after a ReLU activation function is applied.
#### ReLU activation
A ReLU function turns all negative pixel values into 0 (black). See the equation pictured below for input pixel values, `x`.
<img src='https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/convolutional-neural-networks/conv-visualization/notebook_ims/relu_ex.png' height=50% width=50% />
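As a quick sanity check of that definition (`relu(x) = max(0, x)`), here is a tiny sketch on made-up values:
```
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])   # made-up inputs
print(F.relu(x))   # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
```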
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer);
```
### Visualize the output of the pooling layer
Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.
Take a look at the values on the x, y axes to see how the image has changed size.
```
# visualize the output of the pooling layer
viz_layer(pooled_layer)
```
|
github_jupyter
|
import sys
try:
import torch
except:
import os
os.environ['TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD']='2000000000'
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!{sys.executable} -m pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision >/dev/null
! curl -s https://codeload.github.com/udacity/deep-learning-v2-pytorch/tar.gz/master | tar -xz --strip=3 deep-learning-v2-pytorch-master/convolutional-neural-networks/conv-visualization/data/ >/dev/null 2>&1
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.axis('off')
plt.show();
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer);
# visualize the output of the pooling layer
viz_layer(pooled_layer)
| 0.600188 | 0.930015 |
# Testing Order of Growth
*Data Structures and Information Retrieval in Python*
Copyright 2021 Allen Downey
License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)
[Click here to run this chapter on Colab](https://colab.research.google.com/github/AllenDowney/DSIRP/blob/main/chapters/timing.ipynb)
Read the [documentation of os.times](https://docs.python.org/3/library/os.html#os.times)
```
import os
def etime():
"""Measures user and system time this process has used.
Returns the sum of user and system time."""
user, sys, chuser, chsys, real = os.times()
return user+sys
start = etime()
t = [x**2 for x in range(10000)]
end = etime()
end - start
```
Exercise: Use `etime` to measure the computation time used by `sleep`.
```
from time import sleep
sleep(1)
# Solution goes here
def time_func(func, n):
"""Run a function and return the elapsed time.
func: function
n: problem size, passed as an argument to func
returns: user+sys time in seconds
"""
start = etime()
func(n)
end = etime()
elapsed = end - start
return elapsed
```
One of the things that makes timing tricky is that many operations are too fast to measure accurately.
`%timeit` handles this by running the statement enough times to get a precise estimate, even for things that run very fast.
We'll handle it by running over a wide range of problem sizes, hoping to find sizes that run long enough to measure, but for not more than a few seconds.
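For comparison, the standard-library `timeit` module makes that repetition explicit; here is a minimal sketch (the statement and the repetition counts are arbitrary):
```
import timeit

# run the statement 1000 times, repeat the measurement 5 times,
# and keep the smallest (least noisy) total
best = min(timeit.repeat('[x**2 for x in range(1000)]', number=1000, repeat=5))
print(best / 1000)   # rough seconds per run
```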
The following function takes a size, `n`, creates an empty list, and calls `list.append` `n` times.
```
def list_append(n):
t = []
[t.append(x) for x in range(n)]
```
`timeit` can time this function accurately.
```
%timeit list_append(10000)
```
But our `time_func` is not that smart.
```
time_func(list_append, 10000)
```
Exercise: Increase the number of iterations until the run time is measurable.
```
# Solution goes here
```
## List append
The following function gradually increases `n` and records the total time.
```
def run_timing_test(func, max_time=1):
"""Tests the given function with a range of values for n.
func: function object
returns: list of ns and a list of run times.
"""
ns = []
ts = []
for i in range(10, 28):
n = 2**i
t = time_func(func, n)
print(n, t)
if t > 0:
ns.append(n)
ts.append(t)
if t > max_time:
break
return ns, ts
ns, ts = run_timing_test(list_append)
import matplotlib.pyplot as plt
plt.plot(ns, ts, 'o-')
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)');
```
This one looks pretty linear, but it won't always be so clear.
It will help to plot a straight line that goes through the last data point.
```
def fit(ns, ts, exp=1.0, index=-1):
"""Fits a curve with the given exponent.
ns: sequence of problem sizes
ts: sequence of times
exp: exponent of the fitted curve
index: index of the element the fitted line should go through
returns: sequence of fitted times
"""
# Use the element with the given index as a reference point,
# and scale all other points accordingly.
nref = ns[index]
tref = ts[index]
tfit = []
for n in ns:
ratio = n / nref
t = ratio**exp * tref
tfit.append(t)
return tfit
ts_fit = fit(ns, ts)
ts_fit
```
The following function plots the actual results and the fitted line.
```
def plot_timing_test(ns, ts, label='', color='C0', exp=1.0, scale='log'):
"""Plots data and a fitted curve.
ns: sequence of n (problem size)
ts: sequence of t (run time)
label: string label for the data curve
color: string color for the data curve
exp: exponent (slope) for the fitted curve
scale: string passed to xscale and yscale
"""
ts_fit = fit(ns, ts, exp)
fit_label = 'exp = %d' % exp
plt.plot(ns, ts_fit, label=fit_label, color='0.7', linestyle='dashed')
plt.plot(ns, ts, 'o-', label=label, color=color, alpha=0.7)
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)')
plt.xscale(scale)
plt.yscale(scale)
plt.legend()
plot_timing_test(ns, ts, scale='linear')
plt.title('list append');
```
From these results, what can we conclude about the order of growth of `list.append`?
Before we go on, let's also look at the results on a log-log scale.
```
plot_timing_test(ns, ts, scale='log')
plt.title('list append');
```
Why might we prefer this scale?
## List pop
Now let's do the same for `list.pop` (which pops from the end of the list by default).
Notice that we have to make the list before we pop things from it, so we will have to think about how to interpret the results.
```
def list_pop(n):
t = []
[t.append(x) for x in range(n)]
[t.pop() for _ in range(n)]
ns, ts = run_timing_test(list_pop)
plot_timing_test(ns, ts, scale='log')
plt.title('list pop');
```
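As noted above, `list_pop` also pays for building the list. Assuming the append and pop costs are roughly additive, one crude way to estimate the pop-only time is to subtract the append-only time:
```
# crude estimate: (append + pop) time minus append-only time
n = 2**20
t_total = time_func(list_pop, n)
t_append = time_func(list_append, n)
print('approximate pop-only time:', t_total - t_append)
```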
What can we conclude?
What about `pop(0)`, which pops from the beginning of the list?
Note: You might have to adjust `exp` to make the fitted line fit.
```
def list_pop0(n):
t = []
[t.append(x) for x in range(n)]
[t.pop(0) for _ in range(n)]
ns, ts = run_timing_test(list_pop0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list pop(0)');
```
## Searching a list
`list.index` searches a list and returns the index of the first element that matches the target.
What do we expect if we always search for the first element?
```
def list_index0(n):
t = []
[t.append(x) for x in range(n)]
[t.index(0) for _ in range(n)]
ns, ts = run_timing_test(list_index0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(0)');
```
What if we always search for the last element?
```
def list_index_n(n):
t = []
[t.append(x) for x in range(n)]
[t.index(n-1) for _ in range(n)]
ns, ts = run_timing_test(list_index_n)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(n-1)');
```
## Dictionary add
```
def dict_add(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
ns, ts = run_timing_test(dict_add)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict add');
```
## Dictionary lookup
```
def dict_lookup(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
[d[x] for x in range(n)]
ns, ts = run_timing_test(dict_lookup)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict lookup');
```
This characteristic of dictionaries is the foundation of a lot of efficient algorithms!
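For example, constant-time lookups are why membership tests against a `dict` stay cheap as `n` grows, while the same tests against a `list` have to scan. A quick sketch using the helpers defined above (the function names and sizes here are only illustrative):
```
def list_membership(n):
    t = list(range(n))
    [n in t for _ in range(1000)]    # 1000 scans of the whole list

def dict_membership(n):
    d = dict.fromkeys(range(n))
    [n in d for _ in range(1000)]    # 1000 hash lookups

print('list:', time_func(list_membership, 100_000))
print('dict:', time_func(dict_membership, 100_000))
```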
|
github_jupyter
|
import os
def etime():
"""Measures user and system time this process has used.
Returns the sum of user and system time."""
user, sys, chuser, chsys, real = os.times()
return user+sys
start = etime()
t = [x**2 for x in range(10000)]
end = etime()
end - start
from time import sleep
sleep(1)
# Solution goes here
def time_func(func, n):
"""Run a function and return the elapsed time.
func: function
n: problem size, passed as an argument to func
returns: user+sys time in seconds
"""
start = etime()
func(n)
end = etime()
elapsed = end - start
return elapsed
def list_append(n):
t = []
[t.append(x) for x in range(n)]
%timeit list_append(10000)
time_func(list_append, 10000)
# Solution goes here
def run_timing_test(func, max_time=1):
"""Tests the given function with a range of values for n.
func: function object
returns: list of ns and a list of run times.
"""
ns = []
ts = []
for i in range(10, 28):
n = 2**i
t = time_func(func, n)
print(n, t)
if t > 0:
ns.append(n)
ts.append(t)
if t > max_time:
break
return ns, ts
ns, ts = run_timing_test(list_append)
import matplotlib.pyplot as plt
plt.plot(ns, ts, 'o-')
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)');
def fit(ns, ts, exp=1.0, index=-1):
"""Fits a curve with the given exponent.
ns: sequence of problem sizes
ts: sequence of times
exp: exponent of the fitted curve
index: index of the element the fitted line should go through
returns: sequence of fitted times
"""
# Use the element with the given index as a reference point,
# and scale all other points accordingly.
nref = ns[index]
tref = ts[index]
tfit = []
for n in ns:
ratio = n / nref
t = ratio**exp * tref
tfit.append(t)
return tfit
ts_fit = fit(ns, ts)
ts_fit
def plot_timing_test(ns, ts, label='', color='C0', exp=1.0, scale='log'):
"""Plots data and a fitted curve.
ns: sequence of n (problem size)
ts: sequence of t (run time)
label: string label for the data curve
color: string color for the data curve
exp: exponent (slope) for the fitted curve
scale: string passed to xscale and yscale
"""
ts_fit = fit(ns, ts, exp)
fit_label = 'exp = %d' % exp
plt.plot(ns, ts_fit, label=fit_label, color='0.7', linestyle='dashed')
plt.plot(ns, ts, 'o-', label=label, color=color, alpha=0.7)
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)')
plt.xscale(scale)
plt.yscale(scale)
plt.legend()
plot_timing_test(ns, ts, scale='linear')
plt.title('list append');
plot_timing_test(ns, ts, scale='log')
plt.title('list append');
def list_pop(n):
t = []
[t.append(x) for x in range(n)]
[t.pop() for _ in range(n)]
ns, ts = run_timing_test(list_pop)
plot_timing_test(ns, ts, scale='log')
plt.title('list pop');
def list_pop0(n):
t = []
[t.append(x) for x in range(n)]
[t.pop(0) for _ in range(n)]
ns, ts = run_timing_test(list_pop0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list pop(0)');
def list_index0(n):
t = []
[t.append(x) for x in range(n)]
[t.index(0) for _ in range(n)]
ns, ts = run_timing_test(list_index0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(0)');
def list_index_n(n):
t = []
[t.append(x) for x in range(n)]
[t.index(n-1) for _ in range(n)]
ns, ts = run_timing_test(list_index_n)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(n-1)');
def dict_add(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
ns, ts = run_timing_test(dict_add)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict add');
def dict_lookup(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
[d[x] for x in range(n)]
ns, ts = run_timing_test(dict_lookup)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict lookup');
| 0.613121 | 0.988245 |
```
#Code source written with help from:
#http://antoinevastel.github.io/machine%20learning/python/2016/02/14/svd-recommender-system.html
import math as mt
import csv
from sparsesvd import sparsesvd #used for matrix factorization
import numpy as np
from scipy.sparse import csc_matrix #used for sparse matrix
from scipy.sparse.linalg import * #used for matrix multiplication
#Note: You may need to install the library sparsesvd. Documentation for
#sparsesvd method can be found here:
#https://pypi.python.org/pypi/sparsesvd/
#constants defining the dimensions of our User Rating Matrix (URM)
MAX_PID = 4
MAX_UID = 5
#Compute SVD of the user ratings matrix
def computeSVD(urm, K):
U, s, Vt = sparsesvd(urm, K)
dim = (len(s), len(s))
S = np.zeros(dim, dtype=np.float32)
for i in range(0, len(s)):
S[i,i] = mt.sqrt(s[i])
U = csc_matrix(np.transpose(U), dtype=np.float32)
S = csc_matrix(S, dtype=np.float32)
Vt = csc_matrix(Vt, dtype=np.float32)
return U, S, Vt
#Compute estimated rating for the test user
def computeEstimatedRatings(urm, U, S, Vt, uTest, K, test):
rightTerm = S*Vt
estimatedRatings = np.zeros(shape=(MAX_UID, MAX_PID), dtype=np.float16)
for userTest in uTest:
prod = U[userTest, :]*rightTerm
#we convert the vector to dense format in order to get the indices
#of the movies with the best estimated ratings
estimatedRatings[userTest, :] = prod.todense()
recom = (-estimatedRatings[userTest, :]).argsort()[:250]
return recom
#Used in SVD calculation (number of latent factors)
K=2
#Initialize a sample user rating matrix
urm = np.array([[3, 1, 2, 3],[4, 3, 4, 3],[3, 2, 1, 5], [1, 6, 5, 2], [0, 0, 5, 0]])
urm = csc_matrix(urm, dtype=np.float32)
#Compute SVD of the input user ratings matrix
U, S, Vt = computeSVD(urm, K)
#Test user set as user_id 4 with ratings [0, 0, 5, 0]
uTest = [4]
print("User id for whom recommendations are needed: %d" % uTest[0])
#Get estimated rating for test user
print("Predictied ratings:")
uTest_recommended_items = computeEstimatedRatings(urm, U, S, Vt, uTest, K, True)
print(uTest_recommended_items)
%matplotlib inline
from pylab import *
#Plot all the users
print("Matrix Dimensions for U")
print(U.shape)
for i in range(0, U.shape[0]):
plot(U[i,0], U[i,1], marker = "*", label="user"+str(i))
for j in range(0, Vt.T.shape[0]):
plot(Vt.T[j,0], Vt.T[j,1], marker = 'd', label="item"+str(j))
legend(loc="upper right")
title('User vectors in the Latent semantic space')
ylim([-0.7, 0.7])
xlim([-0.7, 0])
show()
```
|
github_jupyter
|
#Code source written with help from:
#http://antoinevastel.github.io/machine%20learning/python/2016/02/14/svd-recommender-system.html
import math as mt
import csv
from sparsesvd import sparsesvd #used for matrix factorization
import numpy as np
from scipy.sparse import csc_matrix #used for sparse matrix
from scipy.sparse.linalg import * #used for matrix multiplication
#Note: You may need to install the library sparsesvd. Documentation for
#sparsesvd method can be found here:
#https://pypi.python.org/pypi/sparsesvd/
#constants defining the dimensions of our User Rating Matrix (URM)
MAX_PID = 4
MAX_UID = 5
#Compute SVD of the user ratings matrix
def computeSVD(urm, K):
U, s, Vt = sparsesvd(urm, K)
dim = (len(s), len(s))
S = np.zeros(dim, dtype=np.float32)
for i in range(0, len(s)):
S[i,i] = mt.sqrt(s[i])
U = csc_matrix(np.transpose(U), dtype=np.float32)
S = csc_matrix(S, dtype=np.float32)
Vt = csc_matrix(Vt, dtype=np.float32)
return U, S, Vt
#Compute estimated rating for the test user
def computeEstimatedRatings(urm, U, S, Vt, uTest, K, test):
rightTerm = S*Vt
estimatedRatings = np.zeros(shape=(MAX_UID, MAX_PID), dtype=np.float16)
for userTest in uTest:
prod = U[userTest, :]*rightTerm
#we convert the vector to dense format in order to get the indices
#of the movies with the best estimated ratings
estimatedRatings[userTest, :] = prod.todense()
recom = (-estimatedRatings[userTest, :]).argsort()[:250]
return recom
#Used in SVD calculation (number of latent factors)
K=2
#Initialize a sample user rating matrix
urm = np.array([[3, 1, 2, 3],[4, 3, 4, 3],[3, 2, 1, 5], [1, 6, 5, 2], [0, 0, 5, 0]])
urm = csc_matrix(urm, dtype=np.float32)
#Compute SVD of the input user ratings matrix
U, S, Vt = computeSVD(urm, K)
#Test user set as user_id 4 with ratings [0, 0, 5, 0]
uTest = [4]
print("User id for whom recommendations are needed: %d" % uTest[0])
#Get estimated rating for test user
print("Predictied ratings:")
uTest_recommended_items = computeEstimatedRatings(urm, U, S, Vt, uTest, K, True)
print(uTest_recommended_items)
%matplotlib inline
from pylab import *
#Plot all the users
print("Matrix Dimensions for U")
print(U.shape)
for i in range(0, U.shape[0]):
plot(U[i,0], U[i,1], marker = "*", label="user"+str(i))
for j in range(0, Vt.T.shape[0]):
plot(Vt.T[j,0], Vt.T[j,1], marker = 'd', label="item"+str(j))
legend(loc="upper right")
title('User vectors in the Latent semantic space')
ylim([-0.7, 0.7])
xlim([-0.7, 0])
show()
| 0.550849 | 0.60871 |
# Univariate Linear Regression Demo
> ☝Before moving on with this demo you might want to take a look at:
> - 📗[Math behind the Linear Regression](https://github.com/trekhleb/homemade-machine-learning/tree/master/homemade/linear_regression)
> - ⚙️[Linear Regression Source Code](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/linear_regression/linear_regression.py)
**Linear regression** is a linear model, i.e. a model that assumes a linear relationship between the input variables `(x)` and the single output variable `(y)`. More specifically, the output variable `(y)` can be calculated as a linear combination of the input variables `(x)`.
**Univariate Linear Regression** is a linear regression that has only _one_ input parameter and one output label.
> **Demo Project:** In this demo we will build a model that will predict `Happiness.Score` for the countries based on `Economy.GDP.per.Capita` parameter.
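Before jumping into code, it helps to write the model down. For a single input feature, the hypothesis and the mean squared error cost that gradient descent minimizes are usually written as follows (the homemade implementation may also include a regularization term):

$$\hat{y} = h_\theta(x) = \theta_0 + \theta_1 x$$

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

where $m$ is the number of training examples; training finds the $\theta_0, \theta_1$ that minimize $J(\theta)$.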
```
# To make debugging of linear_regression module easier we enable imported modules autoreloading feature.
# By doing this you may change the code of linear_regression library and all these changes will be available here.
%load_ext autoreload
%autoreload 2
# Add project root folder to module loading paths.
import sys
sys.path.append('../..')
```
### Import Dependencies
- [pandas](https://pandas.pydata.org/) - library that we will use for loading and displaying the data in a table
- [numpy](http://www.numpy.org/) - library that we will use for linear algebra operations
- [matplotlib](https://matplotlib.org/) - library that we will use for plotting the data
- [linear_regression](https://github.com/trekhleb/homemade-machine-learning/blob/master/src/linear_regression/linear_regression.py) - custom implementation of linear regression
```
# Import 3rd party dependencies.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Import custom linear regression implementation.
from homemade.linear_regression import LinearRegression
```
### Load the Data
In this demo we will use the [World Happiness Dataset](https://www.kaggle.com/unsdsn/world-happiness#2017.csv) for 2017.
```
# Load the data.
data = pd.read_csv('../../data/world-happiness-report-2017.csv')
# Print the data table.
data.head(10)
# Print histograms for each feature to see how they vary.
histograms = data.hist(grid=False, figsize=(10, 10))
```
### Split the Data Into Training and Test Subsets
In this step we will split our dataset into _training_ and _testing_ subsets (in proportion 80/20%).
The training dataset will be used to train our linear model. The testing dataset will be used to validate the model: all of its data is new to the model, so we can check how accurate the model's predictions are.
```
# Split data set on training and test sets with proportions 80/20.
# Function sample() returns a random sample of items.
train_data = data.sample(frac=0.8)
test_data = data.drop(train_data.index)
# Decide what fields we want to process.
input_param_name = 'Economy..GDP.per.Capita.'
output_param_name = 'Happiness.Score'
# Split training set input and output.
x_train = train_data[[input_param_name]].values
y_train = train_data[[output_param_name]].values
# Split test set input and output.
x_test = test_data[[input_param_name]].values
y_test = test_data[[output_param_name]].values
# Plot training data.
plt.scatter(x_train, y_train, label='Training Dataset')
plt.scatter(x_test, y_test, label='Test Dataset')
plt.xlabel(input_param_name)
plt.ylabel(output_param_name)
plt.title('Countries Happiness')
plt.legend()
plt.show()
```
Now we may visualize the datasets to see their shape.
### Init and Train Linear Regression Model
> ☝🏻This is the place where you might want to play with model configuration.
- `num_iterations` - the number of iterations the gradient descent algorithm will use to find the minimum of the cost function. Too few iterations may prevent gradient descent from reaching the minimum; too many will make the algorithm work longer without improving its accuracy.
- `learning_rate` - the size of the gradient descent step (see the update rule sketched just below). A small step makes the algorithm work longer and will probably require more iterations to reach the minimum of the cost function. A step that is too big may cause the algorithm to miss the minimum, so the cost function value grows with new iterations.
- `regularization_param` - the parameter that fights overfitting. The higher the parameter, the simpler the model will be.
- `polynomial_degree` - the degree of additional polynomial features (`x1^2 * x2, x1^2 * x2^2, ...`). The higher the degree, the more curved the prediction line can be.
- `sinusoid_degree` - the degree of sinusoid parameter multipliers of additional features (`sin(x), sin(2*x), ...`). This allows you to curve the predictions by adding a sinusoidal component to the prediction curve.
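For reference, the regularized gradient descent update that `learning_rate` and `regularization_param` control typically looks like this (the exact form used by the homemade implementation may differ in details, e.g. whether $\theta_0$ is excluded from regularization):

$$\theta_j := \theta_j - \alpha \left( \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \right)$$

where $\alpha$ is `learning_rate`, $\lambda$ is `regularization_param`, and $m$ is the number of training examples.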
```
# Set up linear regression parameters.
num_iterations = 500 # Number of gradient descent iterations.
regularization_param = 0 # Helps to fight model overfitting.
learning_rate = 0.01 # The size of the gradient descent step.
polynomial_degree = 0 # The degree of additional polynomial features.
sinusoid_degree = 0 # The degree of sinusoid parameter multipliers of additional features.
# Init linear regression instance.
linear_regression = LinearRegression(x_train, y_train, polynomial_degree, sinusoid_degree)
# Train linear regression.
(theta, cost_history) = linear_regression.train(
learning_rate,
regularization_param,
num_iterations
)
# Print training results.
print('Initial cost: {:.2f}'.format(cost_history[0]))
print('Optimized cost: {:.2f}'.format(cost_history[-1]))
# Print model parameters
theta_table = pd.DataFrame({'Model Parameters': theta.flatten()})
theta_table.head()
```
### Analyze Gradient Descent Progress
The plot below illustrates how the cost function value changes over each iteration. You should see it decreasing.
If the cost function value increases, it may mean that gradient descent overshot the minimum and is moving further away from it with each step. In that case you might want to reduce the learning rate parameter (the size of the gradient step).
From this plot you can also get a sense of how many iterations you need to reach a near-optimal value of the cost function. In the current example you can see that there is not much point in increasing the number of gradient descent iterations beyond 500, since it would not reduce the cost function significantly.
```
# Plot gradient descent progress.
plt.plot(range(num_iterations), cost_history)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.title('Gradient Descent Progress')
plt.show()
```
### Plot the Model Predictions
Now that our model is trained, we can plot its predictions over the training and test datasets to see how well it fits the data.
```
# Get model predictions for the training set.
predictions_num = 100
x_predictions = np.linspace(x_train.min(), x_train.max(), predictions_num).reshape(predictions_num, 1);
y_predictions = linear_regression.predict(x_predictions)
# Plot training data with predictions.
plt.scatter(x_train, y_train, label='Training Dataset')
plt.scatter(x_test, y_test, label='Test Dataset')
plt.plot(x_predictions, y_predictions, 'r', label='Prediction')
plt.xlabel('Economy..GDP.per.Capita.')
plt.ylabel('Happiness.Score')
plt.title('Countries Happiness')
plt.legend()
plt.show()
```
Calculate the value of the cost function for the training and test datasets. The lower this value, the better.
```
train_cost = linear_regression.get_cost(x_train, y_train, regularization_param)
test_cost = linear_regression.get_cost(x_test, y_test, regularization_param)
print('Train cost: {:.2f}'.format(train_cost))
print('Test cost: {:.2f}'.format(test_cost))
```
Let's now render a table of the predictions our trained model makes for unseen data (the test dataset). You should see that the predicted happiness scores are quite similar to the known happiness scores from the test dataset.
```
test_predictions = linear_regression.predict(x_test)
test_predictions_table = pd.DataFrame({
'Economy GDP per Capita': x_test.flatten(),
'Test Happiness Score': y_test.flatten(),
'Predicted Happiness Score': test_predictions.flatten(),
'Prediction Diff': (y_test - test_predictions).flatten()
})
test_predictions_table.head(10)
```
|
github_jupyter
|
# To make debugging of linear_regression module easier we enable imported modules autoreloading feature.
# By doing this you may change the code of linear_regression library and all these changes will be available here.
%load_ext autoreload
%autoreload 2
# Add project root folder to module loading paths.
import sys
sys.path.append('../..')
# Import 3rd party dependencies.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Import custom linear regression implementation.
from homemade.linear_regression import LinearRegression
# Load the data.
data = pd.read_csv('../../data/world-happiness-report-2017.csv')
# Print the data table.
data.head(10)
# Print histograms for each feature to see how they vary.
histograms = data.hist(grid=False, figsize=(10, 10))
# Split data set on training and test sets with proportions 80/20.
# Function sample() returns a random sample of items.
train_data = data.sample(frac=0.8)
test_data = data.drop(train_data.index)
# Decide what fields we want to process.
input_param_name = 'Economy..GDP.per.Capita.'
output_param_name = 'Happiness.Score'
# Split training set input and output.
x_train = train_data[[input_param_name]].values
y_train = train_data[[output_param_name]].values
# Split test set input and output.
x_test = test_data[[input_param_name]].values
y_test = test_data[[output_param_name]].values
# Plot training data.
plt.scatter(x_train, y_train, label='Training Dataset')
plt.scatter(x_test, y_test, label='Test Dataset')
plt.xlabel(input_param_name)
plt.ylabel(output_param_name)
plt.title('Countries Happiness')
plt.legend()
plt.show()
# Set up linear regression parameters.
num_iterations = 500 # Number of gradient descent iterations.
regularization_param = 0 # Helps to fight model overfitting.
learning_rate = 0.01 # The size of the gradient descent step.
polynomial_degree = 0 # The degree of additional polynomial features.
sinusoid_degree = 0 # The degree of sinusoid parameter multipliers of additional features.
# Init linear regression instance.
linear_regression = LinearRegression(x_train, y_train, polynomial_degree, sinusoid_degree)
# Train linear regression.
(theta, cost_history) = linear_regression.train(
learning_rate,
regularization_param,
num_iterations
)
# Print training results.
print('Initial cost: {:.2f}'.format(cost_history[0]))
print('Optimized cost: {:.2f}'.format(cost_history[-1]))
# Print model parameters
theta_table = pd.DataFrame({'Model Parameters': theta.flatten()})
theta_table.head()
# Plot gradient descent progress.
plt.plot(range(num_iterations), cost_history)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.title('Gradient Descent Progress')
plt.show()
# Get model predictions for the training set.
predictions_num = 100
x_predictions = np.linspace(x_train.min(), x_train.max(), predictions_num).reshape(predictions_num, 1);
y_predictions = linear_regression.predict(x_predictions)
# Plot training data with predictions.
plt.scatter(x_train, y_train, label='Training Dataset')
plt.scatter(x_test, y_test, label='Test Dataset')
plt.plot(x_predictions, y_predictions, 'r', label='Prediction')
plt.xlabel('Economy..GDP.per.Capita.')
plt.ylabel('Happiness.Score')
plt.title('Countries Happiness')
plt.legend()
plt.show()
train_cost = linear_regression.get_cost(x_train, y_train, regularization_param)
test_cost = linear_regression.get_cost(x_test, y_test, regularization_param)
print('Train cost: {:.2f}'.format(train_cost))
print('Test cost: {:.2f}'.format(test_cost))
test_predictions = linear_regression.predict(x_test)
test_predictions_table = pd.DataFrame({
'Economy GDP per Capita': x_test.flatten(),
'Test Happiness Score': y_test.flatten(),
'Predicted Happiness Score': test_predictions.flatten(),
'Prediction Diff': (y_test - test_predictions).flatten()
})
test_predictions_table.head(10)
| 0.725843 | 0.993015 |
# What is PyTorch?
PyTorch (https://pytorch.org/) is a machine learning framework for training deep neural networks. Neural networks are represented as computational graphs, i.e. graphs of nodes and weighted edges, where each node represents some mathematical function. As we saw in the lectures, a central task during training is to compute the derivatives of the loss function with respect to all the weights of the edges. These derivatives are then used for gradient descent.
PyTorch enables the efficient computation of these derivatives from the structure of the graph using SIMD operations on GPUs (although a CPU alone can be used too).
The goal of this notebook is to introduce the basics of PyTorch. Another similar framework is TensorFlow (https://www.tensorflow.org/) which we won't cover here.
```
import networkx as nx
```
# Computational Graph - Automatic Differentiation
```
import torch
import numpy as np
```
### Example 1
```
x = torch.tensor([3.], requires_grad=True, dtype=torch.float32)
y = torch.tensor([4.], requires_grad=True, dtype=torch.float32)
z = x*y
```
### Questions:
What should z be? (12)
What is $\frac{\partial z}{\partial x}$? What is $\frac{\partial z}{\partial y}$?
$$\frac{\partial z}{\partial x} = y = 4$$
$$\frac{\partial z}{\partial y} = x = 3$$
```
print(f'Derivative of z with respect to x: {x.grad}')
print(f'Derivative of z with respect to y: {y.grad}')
print('Compute derivatives...')
z.backward()
print(f'Derivative of z with respect to x: {x.grad}')
print(f'Derivative of z with respect to y: {y.grad}')
```
### Example 2
```
a = torch.tensor([2], requires_grad=True, dtype=torch.float32)
b = torch.tensor([3], requires_grad=True, dtype=torch.float32)
c = torch.tensor([4], requires_grad=True, dtype=torch.float32)
d = (a+b)
d.requires_grad_()
e = d*c
e.requires_grad_()
```
Use the chain rule:
$$\frac{\partial e}{\partial a} = \frac{\partial e}{\partial d} \frac{\partial d}{\partial a} + \frac{\partial e}{\partial c} \frac{\partial c}{\partial a}$$
More succinctly, define $$\partial_{ea} \equiv \frac{\partial e}{\partial a}$$
Then:
$$\frac{\partial e}{\partial a} = \partial_{ea} = \partial_{ed} \partial_{da} + \partial_{ec} \partial_{ca}$$
$$\frac{\partial e}{\partial a} = \partial_{ea} = \underbrace{\partial_{ed}}_{c} \underbrace{\partial_{da}}_{1} + \underbrace{\partial_{ec}}_{d} \underbrace{\partial_{ca}}_{0} = c$$
and,
$$\partial_{eb} = c$$
and
$$\partial_{ed} = c$$
and
$$\partial_{ec} = d$$
```
e.backward()
print(a.grad)
print(b.grad)
print(c.grad)
print(d.grad)
```
# Building my own toy neural network
```
#nn diagram
G = nx.DiGraph()
G.add_node(1)
G.add_node(2)
G.add_node(3)
G.add_edge(1,2)
G.add_edge(2,3)
nx.draw_spring(G)
```
# Full neural network
```
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self, n_inputs, n_outputs):
        super().__init__()
        # the original cell stops at __init__; a single linear layer is assumed as a minimal completion
        self.linear = nn.Linear(n_inputs, n_outputs)
```
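Building on that stub, here is a sketch of how a small fully-connected network could be defined and run end to end. The hidden layer, the sizes, and the names below are illustrative assumptions rather than a prescribed architecture:
```
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self, n_inputs, n_outputs, n_hidden=8):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)
        self.out = nn.Linear(n_hidden, n_outputs)

    def forward(self, x):
        x = torch.relu(self.hidden(x))   # hidden layer + nonlinearity
        return self.out(x)

net = ToyNet(n_inputs=3, n_outputs=1)
x = torch.randn(5, 3)        # a batch of 5 made-up input vectors
print(net(x).shape)          # torch.Size([5, 1])
```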
|
github_jupyter
|
import networkx as nx
import torch
import numpy as np
x = torch.tensor([3.], requires_grad=True, dtype=torch.float32)
y = torch.tensor([4.], requires_grad=True, dtype=torch.float32)
z = x*y
print(f'Derivative of z with respect to x: {x.grad}')
print(f'Derivative of z with respect to y: {y.grad}')
print('Compute derivatives...')
z.backward()
print(f'Derivative of z with respect to x: {x.grad}')
print(f'Derivative of z with respect to y: {y.grad}')
a = torch.tensor([2], requires_grad=True, dtype=torch.float32)
b = torch.tensor([3], requires_grad=True, dtype=torch.float32)
c = torch.tensor([4], requires_grad=True, dtype=torch.float32)
d = (a+b)
d.requires_grad_()
e = d*c
e.requires_grad_()
e.backward()
print(a.grad)
print(b.grad)
print(c.grad)
print(d.grad)
#nn diagram
G = nx.DiGraph()
G.add_node(1)
G.add_node(2)
G.add_node(3)
G.add_edge(1,2)
G.add_edge(2,3)
nx.draw_spring(G)
import torch.nn as nn
class MyNet(nn.Module):
    def __init__(self, n_inputs, n_outputs):
        super().__init__()
        self.linear = nn.Linear(n_inputs, n_outputs)  # minimal completion: a single linear layer
| 0.713631 | 0.996427 |
```
import tensorflow as tf
import re
import numpy as np
import pandas as pd
from tqdm import tqdm
import collections
import itertools
from unidecode import unidecode
import malaya
import re
import json
def build_dataset(words, n_words, atleast=2):
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words - 10)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK = 3):
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen]):
X[i, no] = dic.get(k, UNK)
return X
tokenizer = malaya.preprocessing._SocialTokenizer().tokenize
def is_number_regex(s):
if re.match("^\d+?\.\d+?$", s) is None:
return s.isdigit()
return True
def detect_money(word):
if word[:2] == 'rm' and is_number_regex(word[2:]):
return True
else:
return False
def preprocessing(string):
tokenized = tokenizer(string)
tokenized = [w.lower() for w in tokenized if len(w) > 2]
tokenized = ['<NUM>' if is_number_regex(w) else w for w in tokenized]
tokenized = ['<MONEY>' if detect_money(w) else w for w in tokenized]
return tokenized
with open('train-similarity.json') as fopen:
train = json.load(fopen)
left, right, label = train['left'], train['right'], train['label']
with open('test-similarity.json') as fopen:
test = json.load(fopen)
test_left, test_right, test_label = test['left'], test['right'], test['label']
np.unique(label, return_counts = True)
with open('similarity-dictionary.json') as fopen:
x = json.load(fopen)
dictionary = x['dictionary']
rev_dictionary = x['reverse_dictionary']
def position_encoding(inputs):
T = tf.shape(inputs)[1]
repr_dim = inputs.get_shape()[-1].value
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])
def layer_norm(inputs, epsilon=1e-8):
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
normalized = (inputs - mean) / (tf.sqrt(variance + epsilon))
params_shape = inputs.get_shape()[-1:]
gamma = tf.get_variable('gamma', params_shape, tf.float32, tf.ones_initializer())
beta = tf.get_variable('beta', params_shape, tf.float32, tf.zeros_initializer())
return gamma * normalized + beta
def cnn_block(x, dilation_rate, pad_sz, hidden_dim, kernel_size):
x = layer_norm(x)
pad = tf.zeros([tf.shape(x)[0], pad_sz, hidden_dim])
x = tf.layers.conv1d(inputs = tf.concat([pad, x, pad], 1),
filters = hidden_dim,
kernel_size = kernel_size,
dilation_rate = dilation_rate)
x = x[:, :-pad_sz, :]
x = tf.nn.relu(x)
return x
class Model:
def __init__(self, size_layer, num_layers, embedded_size,
dict_size, learning_rate, dropout, kernel_size = 5):
def cnn(x, scope):
x += position_encoding(x)
with tf.variable_scope(scope, reuse = tf.AUTO_REUSE):
for n in range(num_layers):
dilation_rate = 2 ** n
pad_sz = (kernel_size - 1) * dilation_rate
with tf.variable_scope('block_%d'%n,reuse=tf.AUTO_REUSE):
x += cnn_block(x, dilation_rate, pad_sz, size_layer, kernel_size)
with tf.variable_scope('logits', reuse=tf.AUTO_REUSE):
return tf.layers.dense(x, size_layer)[:, -1]
self.X_left = tf.placeholder(tf.int32, [None, None])
self.X_right = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None])
self.batch_size = tf.shape(self.X_left)[0]
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
embedded_left = tf.nn.embedding_lookup(encoder_embeddings, self.X_left)
embedded_right = tf.nn.embedding_lookup(encoder_embeddings, self.X_right)
def contrastive_loss(y,d):
tmp= y * tf.square(d)
tmp2 = (1-y) * tf.square(tf.maximum((1 - d),0))
return tf.reduce_sum(tmp +tmp2)/tf.cast(self.batch_size,tf.float32)/2
self.output_left = cnn(embedded_left, 'left')
self.output_right = cnn(embedded_right, 'right')
print(self.output_left, self.output_right)
self.distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(self.output_left,self.output_right)),
1,keep_dims=True))
self.distance = tf.div(self.distance, tf.add(tf.sqrt(tf.reduce_sum(tf.square(self.output_left),
1,keep_dims=True)),
tf.sqrt(tf.reduce_sum(tf.square(self.output_right),
1,keep_dims=True))))
self.distance = tf.reshape(self.distance, [-1])
self.logits = tf.identity(self.distance, name = 'logits')
self.cost = contrastive_loss(self.Y,self.distance)
self.temp_sim = tf.subtract(tf.ones_like(self.distance),
tf.rint(self.distance))
correct_predictions = tf.equal(self.temp_sim, self.Y)
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"))
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
size_layer = 128
num_layers = 4
embedded_size = 128
learning_rate = 1e-3
maxlen = 50
batch_size = 128
dropout = 0.8
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,len(dictionary),learning_rate,dropout)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'dilated-cnn/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name
or 'alphas' in n.name)
and 'Adam' not in n.name
and '_power' not in n.name
and 'gradient' not in n.name
and 'Initializer' not in n.name
and 'Assign' not in n.name
]
)
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 2, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(range(0, len(left), batch_size), desc='train minibatch loop')
for i in pbar:
index = min(i+batch_size,len(left))
batch_x_left = str_idx(left[i: index], dictionary, maxlen)
batch_x_right = str_idx(right[i: index], dictionary, maxlen)
batch_y = label[i:index]
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y})
assert not np.isnan(loss)
train_loss += loss
train_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
pbar = tqdm(range(0, len(test_left), batch_size), desc='test minibatch loop')
for i in pbar:
index = min(i+batch_size,len(test_left))
batch_x_left = str_idx(test_left[i: index], dictionary, maxlen)
batch_x_right = str_idx(test_right[i: index], dictionary, maxlen)
batch_y = test_label[i: index]
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y})
test_loss += loss
test_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
train_loss /= (len(left) / batch_size)
train_acc /= (len(left) / batch_size)
test_loss /= (len(test_left) / batch_size)
test_acc /= (len(test_left) / batch_size)
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time()-lasttime)
print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss,
train_acc,test_loss,
test_acc))
saver.save(sess, 'dilated-cnn/model.ckpt')
left = str_idx(['a person is outdoors, on a horse.'], dictionary, maxlen)
right = str_idx(['a person on a horse jumps over a broken down airplane.'], dictionary, maxlen)
sess.run([model.temp_sim,1-model.distance], feed_dict = {model.X_left : left,
model.X_right: right})
real_Y, predict_Y = [], []
pbar = tqdm(
range(0, len(test_left), batch_size), desc = 'validation minibatch loop'
)
for i in pbar:
index = min(i+batch_size,len(test_left))
batch_x_left = str_idx(test_left[i: index], dictionary, maxlen)
batch_x_right = str_idx(test_right[i: index], dictionary, maxlen)
batch_y = test_label[i: index]
predict_Y += sess.run(model.temp_sim, feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y}).tolist()
real_Y += batch_y
from sklearn import metrics
print(
metrics.classification_report(
real_Y, predict_Y, target_names = ['not similar', 'similar']
)
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('dilated-cnn', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('dilated-cnn/frozen_model.pb')
x1 = g.get_tensor_by_name('import/Placeholder:0')
x2 = g.get_tensor_by_name('import/Placeholder_1:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
test_sess.run(1-logits, feed_dict = {x1 : left, x2: right})
test_sess.run(1-logits, feed_dict = {x1 : batch_x_left, x2: batch_x_right})
```
|
github_jupyter
|
import tensorflow as tf
import re
import numpy as np
import pandas as pd
from tqdm import tqdm
import collections
import itertools
from unidecode import unidecode
import malaya
import re
import json
def build_dataset(words, n_words, atleast=2):
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words - 10)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK = 3):
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen]):
X[i, no] = dic.get(k, UNK)
return X
tokenizer = malaya.preprocessing._SocialTokenizer().tokenize
def is_number_regex(s):
if re.match("^\d+?\.\d+?$", s) is None:
return s.isdigit()
return True
def detect_money(word):
if word[:2] == 'rm' and is_number_regex(word[2:]):
return True
else:
return False
def preprocessing(string):
tokenized = tokenizer(string)
tokenized = [w.lower() for w in tokenized if len(w) > 2]
tokenized = ['<NUM>' if is_number_regex(w) else w for w in tokenized]
tokenized = ['<MONEY>' if detect_money(w) else w for w in tokenized]
return tokenized
with open('train-similarity.json') as fopen:
train = json.load(fopen)
left, right, label = train['left'], train['right'], train['label']
with open('test-similarity.json') as fopen:
test = json.load(fopen)
test_left, test_right, test_label = test['left'], test['right'], test['label']
np.unique(label, return_counts = True)
with open('similarity-dictionary.json') as fopen:
x = json.load(fopen)
dictionary = x['dictionary']
rev_dictionary = x['reverse_dictionary']
def position_encoding(inputs):
T = tf.shape(inputs)[1]
repr_dim = inputs.get_shape()[-1].value
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])
def layer_norm(inputs, epsilon=1e-8):
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
normalized = (inputs - mean) / (tf.sqrt(variance + epsilon))
params_shape = inputs.get_shape()[-1:]
gamma = tf.get_variable('gamma', params_shape, tf.float32, tf.ones_initializer())
beta = tf.get_variable('beta', params_shape, tf.float32, tf.zeros_initializer())
return gamma * normalized + beta
def cnn_block(x, dilation_rate, pad_sz, hidden_dim, kernel_size):
x = layer_norm(x)
pad = tf.zeros([tf.shape(x)[0], pad_sz, hidden_dim])
x = tf.layers.conv1d(inputs = tf.concat([pad, x, pad], 1),
filters = hidden_dim,
kernel_size = kernel_size,
dilation_rate = dilation_rate)
x = x[:, :-pad_sz, :]
x = tf.nn.relu(x)
return x
class Model:
def __init__(self, size_layer, num_layers, embedded_size,
dict_size, learning_rate, dropout, kernel_size = 5):
def cnn(x, scope):
x += position_encoding(x)
with tf.variable_scope(scope, reuse = tf.AUTO_REUSE):
for n in range(num_layers):
dilation_rate = 2 ** n
pad_sz = (kernel_size - 1) * dilation_rate
with tf.variable_scope('block_%d'%n,reuse=tf.AUTO_REUSE):
x += cnn_block(x, dilation_rate, pad_sz, size_layer, kernel_size)
with tf.variable_scope('logits', reuse=tf.AUTO_REUSE):
return tf.layers.dense(x, size_layer)[:, -1]
self.X_left = tf.placeholder(tf.int32, [None, None])
self.X_right = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None])
self.batch_size = tf.shape(self.X_left)[0]
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
embedded_left = tf.nn.embedding_lookup(encoder_embeddings, self.X_left)
embedded_right = tf.nn.embedding_lookup(encoder_embeddings, self.X_right)
def contrastive_loss(y,d):
tmp= y * tf.square(d)
tmp2 = (1-y) * tf.square(tf.maximum((1 - d),0))
return tf.reduce_sum(tmp +tmp2)/tf.cast(self.batch_size,tf.float32)/2
self.output_left = cnn(embedded_left, 'left')
self.output_right = cnn(embedded_right, 'right')
print(self.output_left, self.output_right)
self.distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(self.output_left,self.output_right)),
1,keep_dims=True))
self.distance = tf.div(self.distance, tf.add(tf.sqrt(tf.reduce_sum(tf.square(self.output_left),
1,keep_dims=True)),
tf.sqrt(tf.reduce_sum(tf.square(self.output_right),
1,keep_dims=True))))
self.distance = tf.reshape(self.distance, [-1])
self.logits = tf.identity(self.distance, name = 'logits')
self.cost = contrastive_loss(self.Y,self.distance)
self.temp_sim = tf.subtract(tf.ones_like(self.distance),
tf.rint(self.distance))
correct_predictions = tf.equal(self.temp_sim, self.Y)
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"))
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
size_layer = 128
num_layers = 4
embedded_size = 128
learning_rate = 1e-3
maxlen = 50
batch_size = 128
dropout = 0.8
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,len(dictionary),learning_rate,dropout)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'dilated-cnn/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name
or 'alphas' in n.name)
and 'Adam' not in n.name
and '_power' not in n.name
and 'gradient' not in n.name
and 'Initializer' not in n.name
and 'Assign' not in n.name
]
)
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 2, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(range(0, len(left), batch_size), desc='train minibatch loop')
for i in pbar:
index = min(i+batch_size,len(left))
batch_x_left = str_idx(left[i: index], dictionary, maxlen)
batch_x_right = str_idx(right[i: index], dictionary, maxlen)
batch_y = label[i:index]
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y})
assert not np.isnan(loss)
train_loss += loss
train_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
pbar = tqdm(range(0, len(test_left), batch_size), desc='test minibatch loop')
for i in pbar:
index = min(i+batch_size,len(test_left))
batch_x_left = str_idx(test_left[i: index], dictionary, maxlen)
batch_x_right = str_idx(test_right[i: index], dictionary, maxlen)
batch_y = test_label[i: index]
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y})
test_loss += loss
test_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
train_loss /= (len(left) / batch_size)
train_acc /= (len(left) / batch_size)
test_loss /= (len(test_left) / batch_size)
test_acc /= (len(test_left) / batch_size)
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time()-lasttime)
print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss,
train_acc,test_loss,
test_acc))
saver.save(sess, 'dilated-cnn/model.ckpt')
left = str_idx(['a person is outdoors, on a horse.'], dictionary, maxlen)
right = str_idx(['a person on a horse jumps over a broken down airplane.'], dictionary, maxlen)
sess.run([model.temp_sim,1-model.distance], feed_dict = {model.X_left : left,
model.X_right: right})
real_Y, predict_Y = [], []
pbar = tqdm(
range(0, len(test_left), batch_size), desc = 'validation minibatch loop'
)
for i in pbar:
index = min(i+batch_size,len(test_left))
batch_x_left = str_idx(test_left[i: index], dictionary, maxlen)
batch_x_right = str_idx(test_right[i: index], dictionary, maxlen)
batch_y = test_label[i: index]
predict_Y += sess.run(model.temp_sim, feed_dict = {model.X_left : batch_x_left,
model.X_right: batch_x_right,
model.Y : batch_y}).tolist()
real_Y += batch_y
from sklearn import metrics
print(
metrics.classification_report(
real_Y, predict_Y, target_names = ['not similar', 'similar']
)
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('dilated-cnn', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('dilated-cnn/frozen_model.pb')
x1 = g.get_tensor_by_name('import/Placeholder:0')
x2 = g.get_tensor_by_name('import/Placeholder_1:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
test_sess.run(1-logits, feed_dict = {x1 : left, x2: right})
test_sess.run(1-logits, feed_dict = {x1 : batch_x_left, x2: batch_x_right})
| 0.538741 | 0.404566 |
# Top Games on Google Play Store - An EDA
# Problem Context
A mobile game developer is planning to develop an Android game and put it on [Google Play Store](https://play.google.com/store/apps). The developer wants to strategically analyze the top existing games on Play Store in order to have a better sense of what to develop. The main questions the developer wants answered are:
1. Which types of games are more successful in number of ratings?
2. Paid or free games? If paid, what is a good price to go for?
3. Which types of games are growing at the moment?
4. Which types of games have the highest overall ratings?
We'll answer these questions with an Exploratory Data Analysis (EDA) approach, using the "*android-games.csv*" dataset, which can be found at [Top Games of Google Play Store](https://www.kaggle.com/dhruvildave/top-play-store-games).
For now, there won't be a data cleaning step.
# Data Exploration
### Import the Libraries Used
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
sns.set_theme(style = 'white') # theme used for the seaborn graphs
```
### Load and Read Data
```
data_path = 'android-games.csv'
raw_data = pd.read_csv(data_path)
```
### Basic Data Information
```
raw_data.head()
```
We can see that we have the following features:
* __rank__: Rank in a particular category
* __title__: Game title
* __total ratings__: Total number of ratings
* __installs__: Approximate install milestone
* __average rating__: Average rating out of 5
* __growth (30 days)__: Percent growth in 30 days
* __growth (60 days)__: Percent growth in 60 days
* __price__: Price in dollars
* __category__: Game category
* __5 star ratings__: Number of 5 star ratings
* __4 star ratings__: Number of 4 star ratings
* __3 star ratings__: Number of 3 star ratings
* __2 star ratings__: Number of 2 star ratings
* __1 star ratings__: Number of 1 star ratings
* __paid__: Whether the game is paid or not
There are not many features in this dataset, but every one of them appears to be useful (maybe not "*title*" from a numerical standpoint, but we'll keep it in the dataset for now).
```
raw_data.info()
raw_data.isnull().sum()
```
There is no null data and all the columns are filled on every row. This is expected, since every game on Play Store must have all this information filled in.
```
raw_data.describe()
```
# Price Analysis
From a preliminary analysis of the `raw_data.describe()` above, we can see that more than 75% of the top games are free (by checking when the "_price_" is 0.00). In fact, let's check the total (as a percentage):
```
# total free and paid games as a percentage
raw_data['paid'].value_counts(normalize = True) * 100
```
So only 0.40% of the games in this list are paid games (that accounts for only 7 games).
To find the paid games on this list we use:
```
raw_data[raw_data['paid'] == True].sort_values(by = 'price', ascending = False)
```
Here we have __Minecraft__ as the most expensive game on this list, costing $7.49, with more than 10.0 M installs at the time this dataset was gathered. __Minecraft__ is also the most popular game among the paid ones, so a higher price is not necessarily an obstacle to a game's success or popularity.
```
paid_games = raw_data[raw_data['paid'] == True]
paid_games.describe()
paid_games.median()
f, ax = plt.subplots(figsize = (10, 5))
sns.countplot(x = 'price',
data = paid_games,
palette = 'cool_r')
ax.set_title('Median Game Price',
fontsize = 25,
y = 1.1)
ax.set(ylabel = '',
xlabel = 'Price')
ax.axhline(paid_games['price'].mean(),
color = 'r',
linewidth = 3,
label = 'Mean Price = $3.20')
plt.legend()
```
We only have 7 values to analyse, so it's easy to read them directly from a pandas DataFrame, but a graph is more visually pleasing for this.
By analysing the price, we see that the __average game price is $3.20__ and that most games on this list are priced at $1.99 (we can see that from the median).
# Category Analysis
```
raw_data['category'].unique()
```
We have 17 different game categories in this dataset.
```
raw_data['category'].value_counts()
```
Let's create a new pandas.DataFrame by grouping the original dataset by game category. Here we drop "*rank*", since it's not relevant for now (averaging it would just give the mean rank over roughly 1 to 100, and the exact range depends on the category, since some have more games; "*GAME CARD*" contains 122 games, for example). We also drop the "*paid*" column, since averaging True/False values is not meaningful at the moment.
```
categories_df = raw_data.groupby(['category'], as_index = False).mean().drop(labels = 'rank', axis = 1).drop(labels = 'paid', axis = 1)
categories_df
```
We can now see how the columns behave in this new dataset (since we grouped with `mean()`, each value is an average over the games in the category):
* __total ratings__: The average total number of ratings of the games in the category.
* __average rating__: The average rating of the whole category.
* __growth (30 days)__: The average 30-day growth of the games in the category.
* __growth (60 days)__: The average 60-day growth of the games in the category.
* __price__: The average price of the games in the category.
* __5 star ratings__: The average number of 5 star ratings of the games in the category.
* __4 star ratings__: The average number of 4 star ratings of the games in the category.
* __3 star ratings__: The average number of 3 star ratings of the games in the category.
* __2 star ratings__: The average number of 2 star ratings of the games in the category.
* __1 star ratings__: The average number of 1 star ratings of the games in the category.
Since the number of installs is not a numerical value but a range, we'll use the number of ratings as a metric of game popularity (here we are assuming that the higher the number of installs, the higher the number of ratings).
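As a side note, if a rough numeric value for installs were ever needed, the milestone strings could be mapped to approximate lower bounds. This is only a sketch: the exact strings in the "*installs*" column (e.g. '10.0 M', '500.0 k') and the helper name are assumptions here, so the mapping would need to be checked against the real data.
```
# Hypothetical helper: convert milestone strings such as '500.0 k' or '10.0 M'
# into approximate numeric lower bounds. The suffix set is an assumption.
def installs_to_number(milestone):
    suffixes = {'k': 1_000, 'M': 1_000_000}
    value, suffix = milestone.split()
    return float(value) * suffixes.get(suffix, 1)

# Example usage (only if the column format matches the assumption above):
# raw_data['installs_numeric'] = raw_data['installs'].apply(installs_to_number)
```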
```
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'average rating',
y = 'category',
data = categories_df,
palette = 'cool_r',
order = categories_df.sort_values('average rating', ascending = False).category)
ax.set_title('Average Rating per Game Category',
fontsize = 25,
x = 0.4,
y = 1.1)
ax.set(xlim = (4, 4.5),
ylabel = '',
xlabel = 'Average Rating')
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'total ratings',
y = 'category',
data = categories_df,
palette = 'cool_r',
order = categories_df.sort_values('total ratings', ascending = False).category)
ax.set_title('Total Number of Ratings per Game Category',
fontsize = 25,
y = 1.1)
ax.set_xlabel('Total Number of Ratings (x10⁶)',fontsize = 18)
ax.set_ylabel('')
```
It is easy to see that __Action__ games dominate the market by Total Number of Ratings.
# Growth Analysis
```
growth_30_days = raw_data.groupby('category', as_index=False)['growth (30 days)'].mean()
growth_30_days
growth_60_days = raw_data.groupby('category', as_index=False)['growth (60 days)'].mean()
growth_60_days
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'growth (30 days)',
y = 'category',
data = growth_30_days,
palette = 'cool_r',
order = growth_30_days.sort_values('growth (30 days)', ascending = False).category)
ax.set_title('Average 30 Day Growth per Game Category',
fontsize = 20,
x = 0.4,
y = 1.1)
ax.set(ylabel = '',
xlabel = 'Average 30 Day Growth')
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'growth (60 days)',
y = 'category',
data = growth_60_days,
palette = 'cool_r',
order = growth_60_days.sort_values('growth (60 days)', ascending = False).category)
ax.set_title('Average 60 Day Growth per Game Category',
fontsize = 20,
x = 0.4,
y = 1.1)
ax.set(ylabel = '',
xlabel = 'Average 60 Day Growth')
```
Considering the last 30 days (from the date this dataset was gathered), __Action__ and __Word__ games have the highest growth among the categories listed. Analysing the last 60 days (again, from the time this dataset was gathered), we see that __Educational__ games had the highest growth.
```
a = [] # empty list
# average number of ratings of paid games
a.append( raw_data[raw_data['paid'] == True]['total ratings'].mean() )
# average number of ratings of free games
a.append( raw_data[raw_data['paid'] == False]['total ratings'].mean() )
a
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = ['Paid', 'Free'],
y = a,
palette = 'cool_r')
ax.set_title('Average Number of Ratings for Paid and Free games',
fontsize = 20,
y = 1.1)
ax.set_xlabel('')
ax.set_ylabel('Average Number of Ratings (x10⁶)', fontsize = 17)
```
So, we see that __Free Games__ have a higher average number of ratings than __Paid Games__.
# Answering the Questions
__1. Which types of games are more successful in number of ratings?__
Since we didn't use the number of installs in our analysis (because we don't have an exact value for each game, just a range), we can infer the success of each type of game from the total number of ratings for each category. With this in mind, the "__ACTION__" category has the highest number of ratings, at about $4.13 \times 10^6$ ratings, which shows the popularity of action mobile games.
__2. Paid or free games? If paid, what is a good price to go for?__
We have seen that free games have a higher average number of ratings than paid games, so it could be a better option for a game developer to go for a free game. If the game is paid, a generally good price can be set at $1.99.
__3. Which types of games are growing at the moment?__
Considering the last 30 days (from the time this dataset was gathered), the categories that are growing most rapidly at the moment are __Action__ and __Word__ games.
__4. Which types of games have the highest overall ratings?__
Seeing as this is a list of the most popular games, the overall ratings are not too different from each other, but __Word__ games and __Casino__ games have the highest ratings amongst all categories.
In conclusion, __Action__ mobile Android games perform very well at Google Play Store in almost every metric analysed.
|
github_jupyter
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
sns.set_theme(style = 'white') # theme used for the seaborn graphs
data_path = 'android-games.csv'
raw_data = pd.read_csv(data_path)
raw_data.head()
raw_data.info()
raw_data.isnull().sum()
raw_data.describe()
# total free and paid games as a percentage
raw_data['paid'].value_counts(normalize = True) * 100
raw_data[raw_data['paid'] == True].sort_values(by = 'price', ascending = False)
paid_games = raw_data[raw_data['paid'] == True]
paid_games.describe()
paid_games.median()
f, ax = plt.subplots(figsize = (10, 5))
sns.countplot(x = 'price',
data = paid_games,
palette = 'cool_r')
ax.set_title('Median Game Price',
fontsize = 25,
y = 1.1)
ax.set(ylabel = '',
xlabel = 'Price')
ax.axhline(paid_games['price'].mean(),
color = 'r',
linewidth = 3,
label = 'Mean Price = $3.20')
plt.legend()
raw_data['category'].unique()
raw_data['category'].value_counts()
categories_df = raw_data.groupby(['category'], as_index = False).mean().drop(labels = 'rank', axis = 1).drop(labels = 'paid', axis = 1)
categories_df
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'average rating',
y = 'category',
data = categories_df,
palette = 'cool_r',
order = categories_df.sort_values('average rating', ascending = False).category)
ax.set_title('Average Rating per Game Category',
fontsize = 25,
x = 0.4,
y = 1.1)
ax.set(xlim = (4, 4.5),
ylabel = '',
xlabel = 'Average Rating')
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'total ratings',
y = 'category',
data = categories_df,
palette = 'cool_r',
order = categories_df.sort_values('total ratings', ascending = False).category)
ax.set_title('Total Number of Ratings per Game Category',
fontsize = 25,
y = 1.1)
ax.set_xlabel('Total Number of Ratings (x10⁶)',fontsize = 18)
ax.set_ylabel('')
growth_30_days = raw_data.groupby('category', as_index=False)['growth (30 days)'].mean()
growth_30_days
growth_60_days = raw_data.groupby('category', as_index=False)['growth (60 days)'].mean()
growth_60_days
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'growth (30 days)',
y = 'category',
data = growth_30_days,
palette = 'cool_r',
order = growth_30_days.sort_values('growth (30 days)', ascending = False).category)
ax.set_title('Average 30 Day Growth per Game Category',
fontsize = 20,
x = 0.4,
y = 1.1)
ax.set(ylabel = '',
xlabel = 'Average 30 Day Growth')
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = 'growth (60 days)',
y = 'category',
data = growth_60_days,
palette = 'cool_r',
order = growth_60_days.sort_values('growth (60 days)', ascending = False).category)
ax.set_title('Average 60 Day Growth per Game Category',
fontsize = 20,
x = 0.4,
y = 1.1)
ax.set(ylabel = '',
xlabel = 'Average 60 Day Growth')
a = [] # empty list
# average number of ratings of paid games
a.append( raw_data[raw_data['paid'] == True]['total ratings'].mean() )
# average number of ratings of free games
a.append( raw_data[raw_data['paid'] == False]['total ratings'].mean() )
a
f, ax = plt.subplots(figsize = (10, 5))
sns.barplot(x = ['Paid', 'Free'],
y = a,
palette = 'cool_r')
ax.set_title('Average Number of Ratings for Paid and Free games',
fontsize = 20,
y = 1.1)
ax.set_xlabel('')
ax.set_ylabel('Average Number of Ratings (x10⁶)', fontsize = 17)
| 0.410756 | 0.961025 |
# Neural Network
We'll now use a Neural Network to predict the player's identity.
```
%matplotlib notebook
import pylab as plt
import numpy as np
import seaborn as sns; sns.set()
import keras
from keras.models import Sequential, Model
from keras.layers import Dense
from keras.optimizers import Adam
from sklearn.decomposition import PCA
```
## We'll start with the features and the topology proposed by last year's group and train the NN with them. Afterwards, we'll try with our own features (generated per wave instead of per balloon).
```
data = np.genfromtxt('../features/kate_data_julien_sarah.csv', delimiter=',')
np.random.shuffle(data)
training_ratio = 0.85
l = len(data)
X = data[:,:-1]
y = data[:,-1]
X_train = X[:int(l*training_ratio)]
X_test = X[int(l*training_ratio):]
y_train = y[:int(l*training_ratio)]/2
y_test = y[int(l*training_ratio):]/2
y_train = keras.utils.np_utils.to_categorical(y_train.astype(int))
y_test = keras.utils.np_utils.to_categorical(y_test.astype(int))
```
# Dimensionality reduction with PCA
```
mu = X_train.mean(axis=0)
U,s,V = np.linalg.svd(X_train - mu, full_matrices=False)
Zpca = np.dot(X_train - mu, V.transpose())
Rpca = np.dot(Zpca[:,:2], V[:2,:]) + mu # reconstruction
err = np.sum((X_train-Rpca)**2)/Rpca.shape[0]/Rpca.shape[1]
print('PCA reconstruction error with 2 PCs: ' + str(round(err,3)));
print(max(Zpca[:,0]))
print(min(Zpca[:,0]))
print(max(Zpca[:,1]))
print(min(Zpca[:,1]))
print(np.argmax(Zpca[:,0]))
print(np.argmax(Zpca[:,1]))
```
# Building and training of a dnn
```
m = Sequential()
m.add(Dense(150, activation='relu', input_shape=(105,)))
#m.add(Dense(150, activation='relu'))
m.add(Dense(150, activation='relu'))
m.add(Dense(150, activation='relu'))
m.add(Dense(50, activation='relu'))
m.add(Dense(2, activation='sigmoid'))
m.compile(loss='categorical_crossentropy', optimizer = Adam(), metrics=['accuracy'])
history = m.fit(X_train, y_train, batch_size=10, epochs=20, verbose=1, validation_data = (X_test, y_test))
y_pred = m.predict(X_test)
accuracy = m.evaluate(X_test, y_test)[1]
print("Précision old features: %.2f" % accuracy)
```
## Now let's try with our features and a different topology. Since we compute a 12-dimensional feature vector, it would make no sense to use layers with more than 100 neurons as last year's group did.
```
X = np.genfromtxt('../features/features_wave_julian_sarah.csv', delimiter=',')
y = np.genfromtxt('../features/output_wave_julian_sarah.csv', delimiter=',')
p = np.random.permutation(len(X))
X, y = X[p], y[p]
training_ratio = 0.85
l = len(y)
X_train = X[:int(l*training_ratio)]
X_test = X[int(l*training_ratio):]
y_train = y[:int(l*training_ratio)]/2
y_test = y[int(l*training_ratio):]/2
y_train = keras.utils.np_utils.to_categorical(y_train.astype(int))
y_test = keras.utils.np_utils.to_categorical(y_test.astype(int))
```
In our case the feature vectors have 12 dimensions instead. Let's apply the NN directly, without PCA, so we can compare later.
```
m = Sequential()
m.add(Dense(15, activation='relu', input_shape=(12,)))
m.add(Dense(15, activation='relu'))
m.add(Dense(15, activation='relu'))
m.add(Dense(4, activation='relu'))
m.add(Dense(2, activation='sigmoid'))
m.compile(loss='categorical_crossentropy', optimizer = Adam(), metrics=['accuracy'])
history = m.fit(X_train, y_train, batch_size=10, epochs=20, verbose=1, validation_data = (X_test, y_test))
accuracy = m.evaluate(X_test, y_test)[1]
print("Précision New features: %.2f" % accuracy)
```
# We got a precision of 88%, which is better than last year's 81%. However, we would need to average over several runs to compare fairly
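As a rough illustration of that averaging, here is a minimal sketch. It reuses `X_train`, `y_train`, `X_test`, `y_test` and the layer sizes defined above, and the number of runs is an arbitrary choice.
```
import numpy as np

n_runs = 5  # arbitrary number of repetitions
accuracies = []
for run in range(n_runs):
    # Rebuild the same small network so each run starts from fresh weights
    m = Sequential()
    m.add(Dense(15, activation='relu', input_shape=(12,)))
    m.add(Dense(15, activation='relu'))
    m.add(Dense(15, activation='relu'))
    m.add(Dense(4, activation='relu'))
    m.add(Dense(2, activation='sigmoid'))
    m.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
    m.fit(X_train, y_train, batch_size=10, epochs=20, verbose=0)
    accuracies.append(m.evaluate(X_test, y_test, verbose=0)[1])

print('Mean accuracy over %d runs: %.2f (+/- %.2f)' % (n_runs, np.mean(accuracies), np.std(accuracies)))
```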
Let's apply PCA now to reduce the dimension to 3
```
model_pca3 = PCA(n_components=3)
# On entraîne notre modèle (fit) sur les données
model_pca3.fit(X)
# On applique le résultat sur nos données :
X_reduced3 = model_pca3.transform(X)
training_ratio = 0.85
l = len(y)
X_train = X_reduced3[:int(l*training_ratio)]
X_test = X_reduced3[int(l*training_ratio):]
y_train = y[:int(l*training_ratio)]/2
y_test = y[int(l*training_ratio):]/2
y_train = keras.utils.np_utils.to_categorical(y_train.astype(int))
y_test = keras.utils.np_utils.to_categorical(y_test.astype(int))
m = Sequential()
m.add(Dense(20, activation='relu', input_shape=(3,)))
#m.add(Dense(20, activation='relu'))
m.add(Dense(20, activation='relu'))
m.add(Dense(20, activation='relu'))
m.add(Dense(5, activation='relu'))
m.add(Dense(2, activation='sigmoid'))
m.compile(loss='categorical_crossentropy', optimizer = Adam(), metrics=['accuracy'])
history = m.fit(X_train, y_train, batch_size=10, epochs=20, verbose=1, validation_data = (X_test, y_test))
```
Using PCA doesn't seem to improve the accuracy, but the accuracy seems to be more stable across epochs.
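One way to check that stability claim is to plot the training history returned by `fit`. A minimal sketch follows; note that the exact metric key may be `'acc'`/`'val_acc'` or `'accuracy'`/`'val_accuracy'` depending on the Keras version.
```
import matplotlib.pyplot as plt

# 'history' is the object returned by m.fit(...) above
acc_key = 'acc' if 'acc' in history.history else 'accuracy'
plt.plot(history.history[acc_key], label='train accuracy')
plt.plot(history.history['val_' + acc_key], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```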
## We conclude that the Neural Network gives a better precision than our previous algorithms. Computing the features by wave instead of by balloon also seems to improve the Neural Network precision. The only disadvantage is that we get less data by doing so. If little data is available, it might be better to use the features computed by balloon instead
|
github_jupyter
|
%matplotlib notebook
import pylab as plt
import numpy as np
import seaborn as sns; sns.set()
import keras
from keras.models import Sequential, Model
from keras.layers import Dense
from keras.optimizers import Adam
from sklearn.decomposition import PCA
data = np.genfromtxt('../features/kate_data_julien_sarah.csv', delimiter=',')
np.random.shuffle(data)
training_ratio = 0.85
l = len(data)
X = data[:,:-1]
y = data[:,-1]
X_train = X[:int(l*training_ratio)]
X_test = X[int(l*training_ratio):]
y_train = y[:int(l*training_ratio)]/2
y_test = y[int(l*training_ratio):]/2
y_train = keras.utils.np_utils.to_categorical(y_train.astype(int))
y_test = keras.utils.np_utils.to_categorical(y_test.astype(int))
mu = X_train.mean(axis=0)
U,s,V = np.linalg.svd(X_train - mu, full_matrices=False)
Zpca = np.dot(X_train - mu, V.transpose())
Rpca = np.dot(Zpca[:,:2], V[:2,:]) + mu # reconstruction
err = np.sum((X_train-Rpca)**2)/Rpca.shape[0]/Rpca.shape[1]
print('PCA reconstruction error with 2 PCs: ' + str(round(err,3)));
print(max(Zpca[:,0]))
print(min(Zpca[:,0]))
print(max(Zpca[:,1]))
print(min(Zpca[:,1]))
print(np.argmax(Zpca[:,0]))
print(np.argmax(Zpca[:,1]))
m = Sequential()
m.add(Dense(150, activation='relu', input_shape=(105,)))
#m.add(Dense(150, activation='relu'))
m.add(Dense(150, activation='relu'))
m.add(Dense(150, activation='relu'))
m.add(Dense(50, activation='relu'))
m.add(Dense(2, activation='sigmoid'))
m.compile(loss='categorical_crossentropy', optimizer = Adam(), metrics=['accuracy'])
history = m.fit(X_train, y_train, batch_size=10, epochs=20, verbose=1, validation_data = (X_test, y_test))
y_pred = m.predict(X_test)
accuracy = m.evaluate(X_test, y_test)[1]
print("Précision old features: %.2f" % accuracy)
X = np.genfromtxt('../features/features_wave_julian_sarah.csv', delimiter=',')
y = np.genfromtxt('../features/output_wave_julian_sarah.csv', delimiter=',')
p = np.random.permutation(len(X))
X, y = X[p], y[p]
training_ratio = 0.85
l = len(y)
X_train = X[:int(l*training_ratio)]
X_test = X[int(l*training_ratio):]
y_train = y[:int(l*training_ratio)]/2
y_test = y[int(l*training_ratio):]/2
y_train = keras.utils.np_utils.to_categorical(y_train.astype(int))
y_test = keras.utils.np_utils.to_categorical(y_test.astype(int))
m = Sequential()
m.add(Dense(15, activation='relu', input_shape=(12,)))
m.add(Dense(15, activation='relu'))
m.add(Dense(15, activation='relu'))
m.add(Dense(4, activation='relu'))
m.add(Dense(2, activation='sigmoid'))
m.compile(loss='categorical_crossentropy', optimizer = Adam(), metrics=['accuracy'])
history = m.fit(X_train, y_train, batch_size=10, epochs=20, verbose=1, validation_data = (X_test, y_test))
accuracy = m.evaluate(X_test, y_test)[1]
print("Précision New features: %.2f" % accuracy)
model_pca3 = PCA(n_components=3)
# On entraîne notre modèle (fit) sur les données
model_pca3.fit(X)
# On applique le résultat sur nos données :
X_reduced3 = model_pca3.transform(X)
training_ratio = 0.85
l = len(y)
X_train = X_reduced3[:int(l*training_ratio)]
X_test = X_reduced3[int(l*training_ratio):]
y_train = y[:int(l*training_ratio)]/2
y_test = y[int(l*training_ratio):]/2
y_train = keras.utils.np_utils.to_categorical(y_train.astype(int))
y_test = keras.utils.np_utils.to_categorical(y_test.astype(int))
m = Sequential()
m.add(Dense(20, activation='relu', input_shape=(3,)))
#m.add(Dense(20, activation='relu'))
m.add(Dense(20, activation='relu'))
m.add(Dense(20, activation='relu'))
m.add(Dense(5, activation='relu'))
m.add(Dense(2, activation='sigmoid'))
m.compile(loss='categorical_crossentropy', optimizer = Adam(), metrics=['accuracy'])
history = m.fit(X_train, y_train, batch_size=10, epochs=20, verbose=1, validation_data = (X_test, y_test))
| 0.517571 | 0.927232 |
One of the more annoying issues when juggling several projects is managing version dependencies. This is a huge issue because the more distinct packages you work with, the more tangled the dependencies become. This grows and coalesces into a Gordian knot-esque situation, until you finally wreck your Python installation and potentially your OS along with it.
Which, obviously, is not a great situation.
This is where environment management tools like `pyenv` come in, and this post will serve as a cheat sheet for using `pyenv` to manage multiple environments.
### Using `pyenv`
First, we see what python version we're currently using, and where it is located in our system
```
!which python
!python -V
```
Having determined what version of python we are working with, we next want to know what other installations are available as options.
```
!pyenv install --list | grep "3.9"
```
Let's try installing the latest 2 versions, Python 3.9.4 and 3.9.5
```
!pyenv install 3.9.4 -f
!pyenv install 3.9.5 -f
# !ls ~/.pyenv/versions/
!pyenv versions
```
As with installation, removing versions is also trivial. There are 2 ways of doing this, though using the `pyenv uninstall` approach will almost certainly save you the grief of removing something accidentally.
```
# !rm -rf ~/.pyenv/versions/3.9.4
!pyenv uninstall 3.9.4
```
You can set the global default for Python using pyenv. In most cases, you will want to use the latest version anyway, so you can set this as the default.
Beyond this default, `pyenv` also allows you to specify if you wish to use a specific version of python locally, or simply for a specific shell session
```
!pyenv global 3.9.5
# !pyenv local 3.9.5
# !pyenv shell 3.9.5
```
### Set up virtual environments with `pyenv` and `virtualenv`
Of course, the strength of pyenv goes beyond being able to download multiple versions of Python. It is the ability to create an isolated instance of Python that is insulated from all other instances, or a **virtual environment**. This is going to be very helpful when working with multiple repos, because you can simply create an environment within the repo and activate/deactivate it as needed, without interference from any other installation you may have!
Notice how, although the default global python version is 3.9.5, we are able to create a 3.8.5 specific environment in just 1 line!
```
!pyenv versions
!pyenv virtualenv 3.8.5 some_older_project
!pyenv versions
```
Activating the new environment is trivial, and you can then do the usual `pip install` commands without fear of messing up external dependencies!
```
!pyenv activate some_older_project
!pyenv deactivate
```
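If the `pyenv-virtualenv` shell integration is enabled (i.e. `eval "$(pyenv virtualenv-init -)"` is in your shell profile), you can also pin the environment to a project directory so it activates automatically whenever you `cd` into it. A small sketch, assuming the environment created above:
```
# Pin the environment to the current project directory (writes a .python-version file).
# With the pyenv-virtualenv shell hook enabled, entering the directory auto-activates it.
!pyenv local some_older_project
!cat .python-version
```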
### Conclusion
This was a relatively short post after a long hiatus, but I hope this will be helpful to anyone who has run into the same issues as I have juggling multiple environments and dependencies!
*Reference: [realpython article](https://realpython.com/intro-to-pyenv/#virtual-environments-and-pyenv)*
|
github_jupyter
|
!which python
!python -V
!pyenv install --list | grep "3.9"
!pyenv install 3.9.4 -f
!pyenv install 3.9.5 -f
# !ls ~/.pyenv/versions/
!pyenv versions
# !rm -rf ~/.pyenv/versions/3.9.4
!pyenv uninstall 3.9.4
!pyenv global 3.9.5
# !pyenv local 3.9.5
# !pyenv shell 3.9.5
!pyenv versions
!pyenv virtualenv 3.8.5 some_older_project
!pyenv versions
!pyenv activate some_older_project
!pyenv deactivate
| 0.132964 | 0.771198 |
## Day agenda
- pandas Introduction
- Creating Series data
- Creating DataFrames
- Accessing data from Series/DataFrame
- Reading data from the environment
- Exploring data
- Converting data from JSON to CSV, etc.
## Pandas Introduction
- Developed in 2008 by Wes McKinney
- Used for analyzing, cleaning, exploring, and manipulating data
## Pandas can do
- finding correlations
- handling missing values
- computing aggregates such as average, max, and min
- plotting (a quick sketch of these follows below)
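A minimal sketch of those capabilities on a toy DataFrame (the column names and values here are made up for illustration):
```
import pandas as pd
import numpy as np

toy = pd.DataFrame({'m1': [80, 90, np.nan, 60], 'm2': [70, 85, 75, 65]})

print(toy.corr())             # correlation between columns
print(toy.isnull().sum())     # count missing values per column
toy = toy.fillna(toy.mean())  # handle missing values by imputing the column mean
print(toy.mean(), toy.max(), toy.min())  # basic aggregates
toy.plot(kind='bar')          # quick plot
```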
```
pip install pandas
import pandas
pandas.__version__
import pandas as pd
```
## Series
- list
- tuple
- numpy array
```
s1=pd.Series([1,2,3,"apssdc"])
print(s1)
print(type(s1))
s1=pd.Series([1,2,3,"apssdc"],index=['x1','x2','x3','x4'])
print(s1)
print(type(s1))
## Series object with tuple format
s2=pd.Series((1,2,3,4))
print(s2)
s2.index=[11,12,13,14]
s2
## Series object with numpy array
import numpy as np
s3=pd.Series(np.arange(1,10))
print(s3)
print(type(s3))
s1.index
s2.index
s3.index
s1
s1['x2']
s1['x1'],s1['x3']
s3.min()
s3.cumsum()
```
## pandas date_range
- To work with date and time formats we can use date_range
- pandas.date_range()
```
import pandas as p
d1=p.date_range('01-01-2021',periods=10)
d1
import pandas as p
d2=p.date_range('01-01-2021',periods=20,freq='1h')
d2
import pandas as p
d2=p.date_range('01-01-2021',periods=20,freq='24h')
d2
import pandas as p
d2=p.date_range(start='01-01-2021',end='01-02-2021',freq='5s')
d2
import pandas as pd
date=pd.Series(pd.date_range(start='01-01-2021',end='01-02-2021',freq='5s'))
print(date)
print(type(date))
```
## Data Frame
- pandas.DataFrame
- 2D (two-dimensional) data structure
- list
- tuple
- dict
- numpy array
- string,float,complex-string/object
```
import pandas as p1
import numpy as np
df1=p1.DataFrame([[1,2,3],[4,5,"apssdc"]])
df3=p1.DataFrame({"k1":[11,12],"m1":[23,34]})
df4=p1.DataFrame(np.array([[1,3,5],[8,7,6]]))
df4
print(type(df4))
df2=p1.DataFrame(((1,3),(1,2),(3,4)),index=['x','y','z'])
df2
df3.columns=["column1","column2"]
df3.index=["row1","row2"]
df3
df3.columns
df3.index
import numpy as np
import pandas as pd
d1={"rollno":['19BTH' +str(i) for i in range(1,101)],
'm1':np.random.randint(1,100,100),
'm2':np.random.randint(1,100,100),
'm3':np.random.randint(1,100,100)}
df=pd.DataFrame(d1)
df
df.tail()
df.shape # rows,columns
df.head(10)
```
## Accessing data from DataFrame
- index
- slicing
```
df.head()
print(df['rollno'])
print(type(df['rollno']))
df[["rollno",'m2']]
```
## iloc
- integer index location (0, 1, ...)
## loc
- label names
```
df.head()
df.iloc[0]
df.iloc[0:10]
df.iloc[0:10][['rollno','m1']]
student_df=df.set_index("rollno")
student_df.head()
df.head()
student_df.loc['19BTH1']
student_df.loc['19BTH1']['m1']
student_df.loc['19BTH1':'19BTH10'][['m1','m2']]
student_df["sum_columns"]=student_df.sum(axis='columns')
student_df["sum_m1"]=student_df['m1'].sum()
student_df.head()
```
|
github_jupyter
|
pip install pandas
import pandas
pandas.__version__
import pandas as pd
s1=pd.Series([1,2,3,"apssdc"])
print(s1)
print(type(s1))
s1=pd.Series([1,2,3,"apssdc"],index=['x1','x2','x3','x4'])
print(s1)
print(type(s1))
## Series object with tuple format
s2=pd.Series((1,2,3,4))
print(s2)
s2.index=[11,12,13,14]
s2
## Series object with numpy array
import numpy as np
s3=pd.Series(np.arange(1,10))
print(s3)
print(type(s3))
s1.index
s2.index
s3.index
s1
s1['x2']
s1['x1'],s1['x3']
s3.min()
s3.cumsum()
import pandas as p
d1=p.date_range('01-01-2021',periods=10)
d1
import pandas as p
d2=p.date_range('01-01-2021',periods=20,freq='1h')
d2
import pandas as p
d2=p.date_range('01-01-2021',periods=20,freq='24h')
d2
import pandas as p
d2=p.date_range(start='01-01-2021',end='01-02-2021',freq='5s')
d2
import pandas as pd
date=pd.Series(pd.date_range(start='01-01-2021',end='01-02-2021',freq='5s'))
print(date)
print(type(date))
import pandas as p1
import numpy as np
df1=p1.DataFrame([[1,2,3],[4,5,"apssdc"]])
df3=p1.DataFrame({"k1":[11,12],"m1":[23,34]})
df4=p1.DataFrame(np.array([[1,3,5],[8,7,6]]))
df4
print(type(df4))
df2=p1.DataFrame(((1,3),(1,2),(3,4)),index=['x','y','z'])
df2
df3.columns=["column1","column2"]
df3.index=["row1","row2"]
df3
df3.columns
df3.index
import numpy as np
import pandas as pd
d1={"rollno":['19BTH' +str(i) for i in range(1,101)],
'm1':np.random.randint(1,100,100),
'm2':np.random.randint(1,100,100),
'm3':np.random.randint(1,100,100)}
df=pd.DataFrame(d1)
df
df.tail()
df.shape # rows,columns
df.head(10)
df.head()
print(df['rollno'])
print(type(df['rollno']))
df[["rollno",'m2']]
df.head()
df.iloc[0]
df.iloc[0:10]
df.iloc[0:10][['rollno','m1']]
student_df=df.set_index("rollno")
student_df.head()
df.head()
student_df.loc['19BTH1']
student_df.loc['19BTH1']['m1']
student_df.loc['19BTH1':'19BTH10'][['m1','m2']]
student_df["sum_columns"]=student_df.sum(axis='columns')
student_df["sum_m1"]=student_df['m1'].sum()
student_df.head()
| 0.170439 | 0.900048 |
<p align="center">
<img width="100%" src="../../multimedia/mindstorms_51515_logo.png">
</p>
# `hub_image_animation`
Small demo of how to display an image and an animation using the hub LEDs.
# Required robot
* Hub
<img src="../multimedia/hub.jpg" width="50%" align="center">
# Source code
You can find the code in the accompanying [`.py` file](https://github.com/arturomoncadatorres/lego-mindstorms/blob/main/examples/programs/hub_image_animation.py). To get it running, simply copy and paste it in a new Mindstorms project.
# Imports
```
from mindstorms import MSHub, Motor, MotorPair, ColorSensor, DistanceSensor, App
from mindstorms.control import wait_for_seconds, wait_until, Timer
from mindstorms.operator import greater_than, greater_than_or_equal_to, less_than, less_than_or_equal_to, equal_to, not_equal_to
import math
import hub
print("-"*15 + " Execution started " + "-"*15 + "\n")
```
# Using `hub`
Notice we won't be using the standard `MSHub`, but rather the "raw" `hub`.
It is a little lower level, but it allows us to do more things - like turning on the hub's pixels.
For more information, see [Maarten Pennings's brilliant explanation and unofficial documentation about it](https://github.com/maarten-pennings/Lego-Mindstorms/blob/main/ms4/faq.md#why-are-there-so-many-ways-to-do--in-python).
```
# Turn the central light off
hub.led(0, 0, 0)
# Alternatively, use
# hub.status_light.on('black')
```
# How to display an image
Displaying an image is quite simple. We just need to define which pixels we will turn on and at what intensity.
The pixel definition is done in a string in the shape
`00000:00000:00000:00000:00000`
where each number corresponds to a pixel. Each pixel can have a value from `0` (off) to `9` (on at full intensity).
Each group of numbers (from left to right) correspond to a row of the hub (from top to bottom).
Notice the groups (i.e., rows) are separated by a colon `:`.
Therefore, if we want to turn on the central pixel of the hub at full intensity, we can do the following:
```
print("Displaying example image...")
img_example = hub.Image('00000:00000:00900:00000:00000')
hub.display.show(img_example)
wait_for_seconds(5)
print("DONE!")
```
# How to display an animation
After displaying an image, displaying an animation is quite straightforward, since an animation is
basically a succession of images shown one after another.
In this example, we will display a very simple animation: a dot moving from top to bottom (with a tail).
However, the basic principle can be translated to more complicated animations.
I am sure there are plenty of ways to display an animation, but a simple way I found to do this is the following.
First, we will define the frame of the animation in a list.
```
print("Defining animation frames...")
frames = ['00000:00000:00000:00000:00000',
'00900:00000:00000:00000:00000',
'00700:00900:00000:00000:00000',
'00500:00700:00900:00000:00000',
'00000:00500:00700:00900:00000',
'00000:00000:00500:00700:00900',
'00000:00000:00000:00500:00700',
'00000:00000:00000:00000:00500',
'00000:00000:00000:00000:00000']
n_frames = len(frames)
print("DONE!")
```
Then, we need to define the length of a pause between frames.
The larger the pause, the slower the animation will be.
```
print("Defining delay between frames...")
t_pause = 0.05 # In seconds
print("DONE!")
```
Lastly, we display the frames (images) consecutively.
This can be done very easily in a `for` loop.
```
print("Displaying animation...")
for ii in range(0, n_frames):
img = hub.Image(frames[ii])
hub.display.show(img)
wait_for_seconds(t_pause)
print("DONE!")
```
That's it!
```
print("-"*15 + " Execution ended " + "-"*15 + "\n")
```
|
github_jupyter
|
from mindstorms import MSHub, Motor, MotorPair, ColorSensor, DistanceSensor, App
from mindstorms.control import wait_for_seconds, wait_until, Timer
from mindstorms.operator import greater_than, greater_than_or_equal_to, less_than, less_than_or_equal_to, equal_to, not_equal_to
import math
import hub
print("-"*15 + " Execution started " + "-"*15 + "\n")
# Turn the central light off
hub.led(0, 0, 0)
# Alternatively, use
# hub.status_light.on('black')
print("Displaying example image...")
img_example = hub.Image('00000:00000:00900:00000:00000')
hub.display.show(img_example)
wait_for_seconds(5)
print("DONE!")
print("Defining animation frames...")
frames = ['00000:00000:00000:00000:00000',
'00900:00000:00000:00000:00000',
'00700:00900:00000:00000:00000',
'00500:00700:00900:00000:00000',
'00000:00500:00700:00900:00000',
'00000:00000:00500:00700:00900',
'00000:00000:00000:00500:00700',
'00000:00000:00000:00000:00500',
'00000:00000:00000:00000:00000']
n_frames = len(frames)
print("DONE!")
print("Defining delay between frames...")
t_pause = 0.05 # In seconds
print("DONE!")
print("Displaying animation...")
for ii in range(0, n_frames):
img = hub.Image(frames[ii])
hub.display.show(img)
wait_for_seconds(t_pause)
print("DONE!")
print("-"*15 + " Execution ended " + "-"*15 + "\n")
| 0.491944 | 0.968351 |
```
%matplotlib inline
from csrl.mdp import GridMDP
from csrl.oa import OmegaAutomaton
from csrl import ControlSynthesis
import numpy as np
# Specification
ltl = 'G F b & G F c & (F G d | F G e)'
oa = OmegaAutomaton(ltl,oa_type='dra')
print('Number of Omega-automaton states (including the trap state):',oa.shape[1])
print('Number of accepting pairs:',oa.shape[0])
display(oa)
# MDP Description
shape = (5,5)
# E: Empty, T: Trap, B: Obstacle
structure = np.array([
['E', 'E', 'B', 'E', 'E'],
['E', 'E', 'E', 'E', 'E'],
['E', 'E', 'E', 'E', 'E'],
['E', 'E', 'E', 'E', 'E'],
['E', 'E', 'E', 'E', 'E']
])
label = np.array([
[('b','d'), ('c','d'), (), ('b','d'), ('c','d')],
[('e',), ('e',), ('e',), ('e',), ('e',)],
[('e',), ('e',), ('e',), ('e',), ('e',)],
[('e',), ('e',), (), ('e',), ('e',)],
[('e',), ('b','e'), ('e',), ('c','e'), ('e',)]
],dtype=np.object)
reward = np.zeros(shape)
lcmap={
'b':'peachpuff',
'c':'plum',
'd':'greenyellow',
'e':'palegreen'
}
grid_mdp = GridMDP(shape=shape,structure=structure,reward=reward,label=label,figsize=5,robust=True,lcmap=lcmap) # Use figsize=4 for smaller figures
grid_mdp.plot()
# Construct the product MDP
csrl = ControlSynthesis(grid_mdp,oa)
Q=csrl.minimax_q(T=2**10,K=2**20)
# Calculate the value and the policy
# NOTE: 'value' was not defined in the original cell; assuming the minimax value,
# i.e. the max over the agent's actions of the min over the adversary's actions.
value = np.max(np.min(Q,axis=-1),axis=-1)
policy = np.argmax(np.min(Q,axis=-1),axis=-1)
policy_ = np.take_along_axis(np.argmin(Q,axis=-1),np.expand_dims(policy,axis=-1),axis=-1).reshape(policy.shape)
_value = np.copy(value)
_policy = np.copy(policy)
_value[:] = np.max(value,axis=0)
_policy[:] = np.argmax(value,axis=0)
ind = (csrl.discountC*_value) > value
policy[ind] = _policy[ind] + len(csrl.mdp.A)
csrl.plot(value=value,policy=policy,policy_=policy_)
path = {
(4,3) : 'r',
(4,4) : 'lu',
(3,4) : 'du',
(2,4) : 'dl',
(2,3) : 'rl',
(2,2) : 'ru',
(1,2) : 'dl',
(1,1) : 'rd',
(2,1) : 'ul',
(2,0) : 'rd',
(3,0) : 'ud',
(4,0) : 'ur',
(4,1) : 'l'
}
hidden=[(4,1)]
csrl.plot(value=value,policy=policy,policy_=policy_,iq=(1,3),path=path,hidden=hidden,save='robust_controller_c_to_b.pdf')
path = {
(4,1) : 'l',
(4,0) : 'ru',
(3,0) : 'du',
(2,0) : 'dr',
(2,1) : 'lr',
(2,2) : 'lu',
(1,2) : 'dr',
(1,3) : 'ld',
(2,3) : 'ur',
(2,4) : 'ld',
(3,4) : 'ud',
(4,4) : 'ul',
(4,3) : 'r'
}
hidden=[(4,3)]
csrl.plot(value=value,policy=policy,policy_=policy_,iq=(1,2),path=path,hidden=hidden,save='robust_controller_b_to_c.pdf')
```
|
github_jupyter
|
%matplotlib inline
from csrl.mdp import GridMDP
from csrl.oa import OmegaAutomaton
from csrl import ControlSynthesis
import numpy as np
# Specification
ltl = 'G F b & G F c & (F G d | F G e)'
oa = OmegaAutomaton(ltl,oa_type='dra')
print('Number of Omega-automaton states (including the trap state):',oa.shape[1])
print('Number of accepting pairs:',oa.shape[0])
display(oa)
# MDP Description
shape = (5,5)
# E: Empty, T: Trap, B: Obstacle
structure = np.array([
['E', 'E', 'B', 'E', 'E'],
['E', 'E', 'E', 'E', 'E'],
['E', 'E', 'E', 'E', 'E'],
['E', 'E', 'E', 'E', 'E'],
['E', 'E', 'E', 'E', 'E']
])
label = np.array([
[('b','d'), ('c','d'), (), ('b','d'), ('c','d')],
[('e',), ('e',), ('e',), ('e',), ('e',)],
[('e',), ('e',), ('e',), ('e',), ('e',)],
[('e',), ('e',), (), ('e',), ('e',)],
[('e',), ('b','e'), ('e',), ('c','e'), ('e',)]
],dtype=np.object)
reward = np.zeros(shape)
lcmap={
'b':'peachpuff',
'c':'plum',
'd':'greenyellow',
'e':'palegreen'
}
grid_mdp = GridMDP(shape=shape,structure=structure,reward=reward,label=label,figsize=5,robust=True,lcmap=lcmap) # Use figsize=4 for smaller figures
grid_mdp.plot()
# Construct the product MDP
csrl = ControlSynthesis(grid_mdp,oa)
Q=csrl.minimax_q(T=2**10,K=2**20)
# Calculate the value and the policy
# NOTE: 'value' was not defined in the original cell; assuming the minimax value,
# i.e. the max over the agent's actions of the min over the adversary's actions.
value = np.max(np.min(Q,axis=-1),axis=-1)
policy = np.argmax(np.min(Q,axis=-1),axis=-1)
policy_ = np.take_along_axis(np.argmin(Q,axis=-1),np.expand_dims(policy,axis=-1),axis=-1).reshape(policy.shape)
_value = np.copy(value)
_policy = np.copy(policy)
_value[:] = np.max(value,axis=0)
_policy[:] = np.argmax(value,axis=0)
ind = (csrl.discountC*_value) > value
policy[ind] = _policy[ind] + len(csrl.mdp.A)
csrl.plot(value=value,policy=policy,policy_=policy_)
path = {
(4,3) : 'r',
(4,4) : 'lu',
(3,4) : 'du',
(2,4) : 'dl',
(2,3) : 'rl',
(2,2) : 'ru',
(1,2) : 'dl',
(1,1) : 'rd',
(2,1) : 'ul',
(2,0) : 'rd',
(3,0) : 'ud',
(4,0) : 'ur',
(4,1) : 'l'
}
hidden=[(4,1)]
csrl.plot(value=value,policy=policy,policy_=policy_,iq=(1,3),path=path,hidden=hidden,save='robust_controller_c_to_b.pdf')
path = {
(4,1) : 'l',
(4,0) : 'ru',
(3,0) : 'du',
(2,0) : 'dr',
(2,1) : 'lr',
(2,2) : 'lu',
(1,2) : 'dr',
(1,3) : 'ld',
(2,3) : 'ur',
(2,4) : 'ld',
(3,4) : 'ud',
(4,4) : 'ul',
(4,3) : 'r'
}
hidden=[(4,3)]
csrl.plot(value=value,policy=policy,policy_=policy_,iq=(1,2),path=path,hidden=hidden,save='robust_controller_b_to_c.pdf')
| 0.638835 | 0.509581 |
# IBM HR Analytics Employee Attrition & Performance
## Table of Contents
* [Introduction](#chapter1)
* [Background](#Section_1_2)
* [Data Dictionary](#section_1_1)
* [Problem Statement](#Section_1_2)
* [Data Processing](#chapter2)
* [Libraries](#Section_1_2)
* [Experiment Tracking](#Section_1_2)
* [Data Preperation](#Section_1_2)
* [Exploratory Data Analysis](#chapter2)
* [EDA Insights](#chapter2)
* [Data Cleaning](#chapter2)
* [Handling Missing Values](#chapter2)
* [Outlier Detection](#chapter2)
* [Feature Reduction](#chapter2)
* [Model Building](#chapter2)
* [Validation](#chapter2)
* [Conclusions](#chapter2)
* [Recommendations](#chapter2)
# Introduction
## Background
## Data Dictionary
* AGE Numerical Value
* ATTRITION Employee leaving the company (0=no, 1=yes)
* BUSINESS TRAVEL (1=No Travel, 2=Travel Frequently, 3=Travel Rarely)
* DAILY RATE Numerical Value - Salary Level
* DEPARTMENT (1=HR, 2=R&D, 3=Sales)
* DISTANCE FROM HOME Numerical Value - THE DISTANCE FROM WORK TO HOME
* EDUCATION Numerical Value
* EDUCATION FIELD (1=HR, 2=LIFE SCIENCES, 3=MARKETING, 4=MEDICAL SCIENCES, 5=OTHERS, 6=TECHNICAL)
* EMPLOYEE COUNT Numerical Value
* EMPLOYEE NUMBER Numerical Value - EMPLOYEE ID
* ENVIROMENT SATISFACTION Numerical Value - SATISFACTION WITH THE ENVIROMENT
* GENDER (1=FEMALE, 2=MALE)
* HOURLY RATE Numerical Value - HOURLY SALARY
* JOB INVOLVEMENT Numerical Value - JOB INVOLVEMENT
* JOB LEVEL Numerical Value - LEVEL OF JOB
* JOB ROLE (1=HC REP, 2=HR, 3=LAB TECHNICIAN, 4=MANAGER, 5=MANAGING DIRECTOR, 6=RESEARCH DIRECTOR, 7=RESEARCH SCIENTIST, 8=SALES EXECUTIVE, 9=SALES REPRESENTATIVE)
* JOB SATISFACTION Numerical Value - SATISFACTION WITH THE JOB
* MARITAL STATUS (1=DIVORCED, 2=MARRIED, 3=SINGLE)
* MONTHLY INCOME Numerical Value - MONTHLY SALARY
* MONTHY RATE Numerical Value - MONTHY RATE
* NUMCOMPANIES WORKED Numerical Value - NO. OF COMPANIES WORKED AT
* OVER 18 (1=YES, 2=NO)
* OVERTIME (1=NO, 2=YES)
* PERCENT SALARY HIKE Numerical Value - PERCENTAGE INCREASE IN SALARY
* PERFORMANCE RATING Numerical Value - PERFORMANCE RATING
* RELATIONS SATISFACTION Numerical Value - RELATIONS SATISFACTION
* STANDARD HOURS Numerical Value - STANDARD HOURS
* STOCK OPTIONS LEVEL Numerical Value - STOCK OPTIONS
* TOTAL WORKING YEARS Numerical Value - TOTAL YEARS WORKED
* TRAINING TIMES LAST YEAR Numerical Value - HOURS SPENT TRAINING
* WORK LIFE BALANCE Numerical Value - TIME SPENT BETWEEN WORK AND OUTSIDE
* YEARS AT COMPANY Numerical Value - TOTAL NUMBER OF YEARS AT THE COMPANY
* YEARS IN CURRENT ROLE Numerical Value -YEARS IN CURRENT ROLE
* YEARS SINCE LAST PROMOTION Numerical Value - LAST PROMOTION
* YEARS WITH CURRENT MANAGER Numerical Value - YEARS SPENT WITH CURRENT MANAGER
# Use Cases
## Questions to Answer
# Data Processing
## Libraries
```
# Import necessary libraries
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import comet_ml
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
from sklearn.metrics import classification_report
import warnings
warnings.filterwarnings('ignore') # Ignore warning messages for readability
import pyspark
from pyspark.sql import SparkSession, Window, DataFrame
import pyspark.sql.functions as F
import pyspark.sql.types as T
```
## Experiment Logging - Comet.ml
```
'''
experiment = Experiment(
api_key="xCeYnrykwJF1pzF0Rfj8UEzR2",
project_name="hr_dataset",
workspace="mattblasa",
)
'''
```
## Data Preparation
```
df = pd.read_csv('employee_attrition.csv')
df.head()
df.describe()
df.columns
df.info()
categ = ['Attrition', 'BusinessTravel', 'Department', 'Gender', 'JobRole', 'MaritalStatus', 'Over18', 'OverTime', 'EducationField']
df_categ = df[categ]
df_categ
#print unique cateogries in column
for col in df_categ:
print(df[col].unique())
for col in df_categ:
print({col : df[col].unique()})
```
### Convert Binary and Categorical Variables
```
# Convert binary variables into yes = 1, no = 0 (ref 1)
cols = ['Attrition', 'OverTime', 'Over18']
df[cols] = df[cols].replace(to_replace = ['No', 'Yes'], value = [0, 1])
#dummy variable based on category
df_test = pd.get_dummies(data=df, columns=['BusinessTravel', 'Department', 'Gender', 'JobRole', 'MaritalStatus', 'EducationField'])
df_test = df_test.drop(columns = 'Over18')
df_test
```
# Exploratory Data Analysis
```
df_test.info()
import missingno as ms
ms.matrix(df);
import seaborn as sns
sns.set_palette(sns.color_palette("Set2", 8))
plt.figure(figsize=(35,20))
sns.heatmap(df.corr(),annot=True)
plt.show()
df.hist( figsize=(20, 15))
df_test.columns
# Plot split bar charts for dummy categorical variables by Churn
df_cat = df_test[['BusinessTravel_Non-Travel', 'BusinessTravel_Travel_Frequently',
'BusinessTravel_Travel_Rarely', 'Department_Human Resources',
'Department_Research & Development', 'Department_Sales',
'Gender_Female', 'Gender_Male', 'JobRole_Healthcare Representative',
'JobRole_Human Resources', 'JobRole_Laboratory Technician',
'JobRole_Manager', 'JobRole_Manufacturing Director',
'JobRole_Research Director', 'JobRole_Research Scientist',
'JobRole_Sales Executive', 'JobRole_Sales Representative',
'MaritalStatus_Divorced', 'MaritalStatus_Married',
'MaritalStatus_Single','EducationField_Human Resources', 'EducationField_Life Sciences',
'EducationField_Marketing', 'EducationField_Medical',
'EducationField_Other', 'EducationField_Technical Degree', 'Attrition']]
count=1
plt.subplots(figsize=(20, 80))
for i in df_cat.columns:
plt.subplot(20,3,count)
ax = sns.countplot(x=i, hue='Attrition', data = df_cat)
legend_labels, _= ax.get_legend_handles_labels()
ax.legend(legend_labels, ['Attrition No','Attrition Yes'])
ax.set_xticklabels(('0', '1'))
count+=1
plt.show();
```
# Intial Model
```
Xinit = df_test.drop('Attrition', axis = 1)
y = df_test['Attrition'].values
df_test.info()
# Importing the libraries
import missingno as msno
# Visualize missing values as a matrix
msno.matrix(df_test);
df2 = df_test.select_dtypes(include=['uint8'])
df2.columns
print ("There are", Xinit.shape[1], "independent variables in the initial model.")
msno.matrix(df_test)
from sklearn.preprocessing import StandardScaler
Xcinit = sm.add_constant(Xinit)
logistic_regression = sm.Logit(y,Xcinit)
fitted_model1 = logistic_regression.fit()
fitted_model1.summary()
clf = LogisticRegression()
clf.fit(Xinit, y.astype(int))
y_clf = clf.predict(Xinit)
print(classification_report(y, y_clf))
# Use recursive feature elimination to choose most important features (ref 6)
model = LogisticRegression()
rfe = RFE(model, 10)
rfe = rfe.fit(Xcinit, y)
print(rfe.support_)
print('\n')
print(rfe.ranking_)
f = rfe.get_support(1) # the most important features
Xfin = Xinit[Xinit.columns[f]] # final features`
# Look for evidence of Variance Inflation Factors (ref 7) causing multicollinearity
# VIF dataframe
vif_data = pd.DataFrame()
vif_data["feature"] = Xfin.columns
# calculating VIF for each feature
vif_data["VIF"] = [variance_inflation_factor(Xfin.values, i)
for i in range(len(Xfin.columns))]
print(vif_data)
# Re-run the model
Xcfin = sm.add_constant(Xfin)
logistic_regression = sm.Logit(y,Xcfin)
fitted_model2 = logistic_regression.fit()
fitted_model2.summary()
X_train, X_test, y_train, y_test = train_test_split(Xfin, y.astype(float), test_size=0.33, random_state=101)
Xcfin = sm.add_constant(X_train)
logistic_regression = sm.Logit(y_train,Xcfin)
fitted_model2 = logistic_regression.fit()
fitted_model2.summary()
# verification
clf = LogisticRegression()
clf.fit(X_train, y_train.astype(int))
y_clf = clf.predict(X_test)
print(classification_report(y_test, y_clf))
# View prediction (initial)
clf = LogisticRegression()
clf.fit(Xinit, y.astype(int))
y_clf = clf.predict(Xinit)
print(classification_report(y, y_clf))
import numpy as np
from sklearn.linear_model import LogisticRegression
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from sklearn.metrics import accuracy_score
import mlflow
import mlflow.sklearn
client = mlflow.tracking.MlflowClient()
try:
experiment = client.create_experiment(name = "HR_Logistic Regression")
except:
print('Experiment Already Exists. Please check folder.')
with mlflow.start_run(experiment_id='5', run_name='HR_Logistic Regression') as run:
# Get the run and experiment id
run_id = run.info.run_uuid
experiment_id = run.info.experiment_id
#train, test = train_test_split(data)
X_train, X_test, y_train, y_test = train_test_split(Xfin, y.astype(float), test_size=0.33, random_state=101)
#Logistic Regression
lr = LogisticRegression()
lr.fit(X_test, y_test)
#Metrics
#Precision
precision = 22
#Recall
recall = 22
#Accuracy
acc = 22
#acc = accuracy_score(y_true, y_pred) #accuracy
score = lr.score(X_test, y_test) #score
#Confusion Matrix, save confusion matrix to ML runs
clf = LogisticRegression()
clf.fit(X_train, y_train.astype(int)) # logistic Regression Fit
y_clf = clf.predict(X_test) # adad
print(classification_report(y_test, y_clf)) # sdsd
#log Metrics
#mlflow.log_metric('Name', output)
mlflow.log_metric("Precision", precision)
mlflow.log_metric("Recall", recall)
mlflow.log_metric("Accuracy", acc)
mlflow.log_metric("score", score)
#Log Model
mlflow.sklearn.log_model(lr, "model")
#Print Metrics
print()
print("Precision: %s" % precision)
print("Recall: %s" % recall)
print("Accuracy: %s" % acc)
print("Score: %s" % score)
print("Model saved in run %s" % mlflow.active_run().info.run_uuid)
classification_report
```
|
github_jupyter
|
# Import necessary libraries
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import comet_ml
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
from sklearn.metrics import classification_report
import warnings
warnings.filterwarnings('ignore') # Ignore warning messages for readability
import pyspark
from pyspark.sql import SparkSession, Window, DataFrame
import pyspark.sql.functions as F
import pyspark.sql.types as T
'''
experiment = Experiment(
api_key="xCeYnrykwJF1pzF0Rfj8UEzR2",
project_name="hr_dataset",
workspace="mattblasa",
)
'''
df = pd.read_csv('employee_attrition.csv')
df.head()
df.describe()
df.columns
df.info()
categ = ['Attrition', 'BusinessTravel', 'Department', 'Gender', 'JobRole', 'MaritalStatus', 'Over18', 'OverTime', 'EducationField']
df_categ = df[categ]
df_categ
#print unique cateogries in column
for col in df_categ:
print(df[col].unique())
for col in df_categ:
print({col : df[col].unique()})
# Convert binary variables into yes = 1, no = 0 (ref 1)
cols = ['Attrition', 'OverTime', 'Over18']
df[cols] = df[cols].replace(to_replace = ['No', 'Yes'], value = [0, 1])
#dummy variable based on category
df_test = pd.get_dummies(data=df, columns=['BusinessTravel', 'Department', 'Gender', 'JobRole', 'MaritalStatus', 'EducationField'])
df_test = df_test.drop(columns = 'Over18')
df_test
df_test.info()
import missingno as ms
ms.matrix(df);
import seaborn as sns
sns.set_palette(sns.color_palette("Set2", 8))
plt.figure(figsize=(35,20))
sns.heatmap(df.corr(),annot=True)
plt.show()
df.hist( figsize=(20, 15))
df_test.columns
# Plot split bar charts for dummy categorical variables by Churn
df_cat = df_test[['BusinessTravel_Non-Travel', 'BusinessTravel_Travel_Frequently',
'BusinessTravel_Travel_Rarely', 'Department_Human Resources',
'Department_Research & Development', 'Department_Sales',
'Gender_Female', 'Gender_Male', 'JobRole_Healthcare Representative',
'JobRole_Human Resources', 'JobRole_Laboratory Technician',
'JobRole_Manager', 'JobRole_Manufacturing Director',
'JobRole_Research Director', 'JobRole_Research Scientist',
'JobRole_Sales Executive', 'JobRole_Sales Representative',
'MaritalStatus_Divorced', 'MaritalStatus_Married',
'MaritalStatus_Single','EducationField_Human Resources', 'EducationField_Life Sciences',
'EducationField_Marketing', 'EducationField_Medical',
'EducationField_Other', 'EducationField_Technical Degree', 'Attrition']]
count=1
plt.subplots(figsize=(20, 80))
for i in df_cat.columns:
plt.subplot(20,3,count)
ax = sns.countplot(x=i, hue='Attrition', data = df_cat)
legend_labels, _= ax.get_legend_handles_labels()
ax.legend(legend_labels, ['Attrition No','Attrition Yes'])
ax.set_xticklabels(('0', '1'))
count+=1
plt.show();
Xinit = df_test.drop('Attrition', axis = 1)
y = df_test['Attrition'].values
df_test.info()
# Importing the libraries
import missingno as msno
# Visualize missing values as a matrix
msno.matrix(df_test);
df2 = df_test.select_dtypes(include=['uint8'])
df2.columns
print ("There are", Xinit.shape[1], "independent variables in the initial model.")
msno.matrix(df_test)
from sklearn.preprocessing import StandardScaler
Xcinit = sm.add_constant(Xinit)
logistic_regression = sm.Logit(y,Xcinit)
fitted_model1 = logistic_regression.fit()
fitted_model1.summary()
clf = LogisticRegression()
clf.fit(Xinit, y.astype(int))
y_clf = clf.predict(Xinit)
print(classification_report(y, y_clf))
# Use recursive feature elimination to choose most important features (ref 6)
model = LogisticRegression()
rfe = RFE(model, 10)
rfe = rfe.fit(Xcinit, y)
print(rfe.support_)
print('\n')
print(rfe.ranking_)
f = rfe.get_support(1) # the most important features
Xfin = Xinit[Xinit.columns[f]] # final features`
# Look for evidence of Variance Inflation Factors (ref 7) causing multicollinearity
# VIF dataframe
vif_data = pd.DataFrame()
vif_data["feature"] = Xfin.columns
# calculating VIF for each feature
vif_data["VIF"] = [variance_inflation_factor(Xfin.values, i)
for i in range(len(Xfin.columns))]
print(vif_data)
# Re-run the model
Xcfin = sm.add_constant(Xfin)
logistic_regression = sm.Logit(y,Xcfin)
fitted_model2 = logistic_regression.fit()
fitted_model2.summary()
X_train, X_test, y_train, y_test = train_test_split(Xfin, y.astype(float), test_size=0.33, random_state=101)
Xcfin = sm.add_constant(X_train)
logistic_regression = sm.Logit(y_train,Xcfin)
fitted_model2 = logistic_regression.fit()
fitted_model2.summary()
# verification
clf = LogisticRegression()
clf.fit(X_train, y_train.astype(int))
y_clf = clf.predict(X_test)
print(classification_report(y_test, y_clf))
# View prediction (initial)
clf = LogisticRegression()
clf.fit(Xinit, y.astype(int))
y_clf = clf.predict(Xinit)
print(classification_report(y, y_clf))
import numpy as np
from sklearn.linear_model import LogisticRegression
# from pyspark.ml.tuning import ParamGridBuilder, CrossValidator  # unused in this sklearn workflow
from sklearn.metrics import accuracy_score, precision_score, recall_score
import mlflow
import mlflow.sklearn
client = mlflow.tracking.MlflowClient()
try:
experiment = client.create_experiment(name = "HR_Logistic Regression")
except:
print('Experiment Already Exists. Please check folder.')
with mlflow.start_run(experiment_id='5', run_name='HR_Logistic Regression') as run:
# Get the run and experiment id
run_id = run.info.run_uuid
experiment_id = run.info.experiment_id
#train, test = train_test_split(data)
X_train, X_test, y_train, y_test = train_test_split(Xfin, y.astype(float), test_size=0.33, random_state=101)
#Logistic Regression
lr = LogisticRegression()
lr.fit(X_train, y_train) # fit on the training split, not the test split
y_pred = lr.predict(X_test)
#Metrics
#Precision
precision = precision_score(y_test, y_pred)
#Recall
recall = recall_score(y_test, y_pred)
#Accuracy
acc = accuracy_score(y_test, y_pred) #accuracy
score = lr.score(X_test, y_test) #score
#Confusion Matrix, save confusion matrix to ML runs
clf = LogisticRegression()
clf.fit(X_train, y_train.astype(int)) # logistic regression fit
y_clf = clf.predict(X_test) # predictions on the held-out test split
print(classification_report(y_test, y_clf)) # per-class precision, recall and F1
#log Metrics
#mlflow.log_metric('Name', output)
mlflow.log_metric("Precision", precision)
mlflow.log_metric("Recall", recall)
mlflow.log_metric("Accuracy", acc)
mlflow.log_metric("score", score)
#Log Model
mlflow.sklearn.log_model(lr, "model")
#Print Metrics
print()
print("Precision: %s" % precision)
print("Recall: %s" % recall)
print("Accuracy: %s" % acc)
print("Score: %s" % score)
print("Model saved in run %s" % mlflow.active_run().info.run_uuid)
classification_report
| 0.466846 | 0.866246 |
# TensorFlow 2.0 Tutorial - Regression
TensorFlow 2 tutorial Zhihu column: https://zhuanlan.zhihu.com/c_1091021863043624960
In a regression problem, our goal is to predict a continuous-valued output, such as a price or a probability.
We use the classic Auto MPG dataset and build a model to predict the fuel efficiency of cars from the late 1970s and early 1980s. To do this, we give the model descriptions of many cars from that period. These descriptions include the following attributes: cylinders, displacement, horsepower, and weight.
```
from __future__ import absolute_import, division, print_function
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## 1. The Auto MPG dataset
Get the data
```
dataset_path = keras.utils.get_file('auto-mpg.data',
'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data')
print(dataset_path)
```
Read the data with pandas
```
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
## 2. Data preprocessing
### Clean the data
```
print(dataset.isna().sum())
dataset = dataset.dropna()
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
### Split the data into training and test sets
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Look at the joint distributions of a few pairs of columns from the training set.
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Separate the labels
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
It is good practice to normalize features that use different scales and ranges. Although the model might converge without feature normalization, normalization makes training easier and keeps the resulting model from depending on the units chosen for the inputs.
```
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
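A quick sanity check (illustrative, not part of the original tutorial): after normalization the training features should have roughly zero mean and unit standard deviation.
```
normed_train_data.describe().loc[['mean', 'std']].round(2)
```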
## 3. Build the model
```
def build_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = build_model()
model.summary()
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
## 4. Train the model
```
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
```
View the training history
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mae'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mae'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mse'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mse'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
```
Use early stopping
```
model = build_model()
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
```
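If you also want the model to roll back to the best weights seen during training, `EarlyStopping` accepts a `restore_best_weights` flag (assuming a recent enough tf.keras version); a possible variant:
```
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
```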
Test
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
```
# 5. Predictions
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
|
github_jupyter
|
from __future__ import absolute_import, division, print_function
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
dataset_path = keras.utils.get_file('auto-mpg.data',
'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data')
print(dataset_path)
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
print(dataset.isna().sum())
dataset = dataset.dropna()
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
def build_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = build_model()
model.summary()
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mae'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mae'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mse'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mse'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
model = build_model()
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
| 0.860325 | 0.949809 |
## Dependencies
```
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
# Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# Pixel-level transforms
if p_pixel_1 >= .4:
image = tf.image.random_saturation(image, lower=.7, upper=1.3)
if p_pixel_2 >= .4:
image = tf.image.random_contrast(image, lower=.8, upper=1.2)
if p_pixel_3 >= .4:
image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
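As a quick illustration (not part of the original pipeline), the augmentation can be exercised on a dummy tensor to confirm it runs end to end and to see how the random crops change the shape; this assumes TensorFlow is importable in this environment:
```
import tensorflow as tf

dummy_img = tf.random.uniform([HEIGHT, WIDTH, CHANNELS], 0, 1, dtype=tf.float32)
aug_img, _ = data_augment(dummy_img, label=None)
print(dummy_img.shape, '->', aug_img.shape)
```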
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/65-cassava-leaf-effnetb5-aux-task-healt-cmd-01-512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
def model_fn(input_shape, N_CLASSES):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB5(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
base_model.trainable = False
x = L.Dropout(.25)(base_model.output)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(x)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy')(x)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd')(x)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
model = model_fn((None, None, CHANNELS), N_CLASSES)
model.summary()
```
# Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
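For intuition only (this mirrors the idea of the TTA branch above, not its exact `order='F'` memory layout): test-time augmentation averages several augmented passes per image into a single prediction vector.
```
import numpy as np

n_images, n_passes, n_classes = 4, 3, 5
rng = np.random.RandomState(0)
passes = rng.rand(n_images, n_passes, n_classes)  # one score vector per image per augmented pass
tta_pred = passes.mean(axis=1)                    # average over the augmented passes
print(tta_pred.shape)                             # (4, 5): one averaged vector per image
```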
|
github_jupyter
|
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# Pixel-level transforms
if p_pixel_1 >= .4:
image = tf.image.random_saturation(image, lower=.7, upper=1.3)
if p_pixel_2 >= .4:
image = tf.image.random_contrast(image, lower=.8, upper=1.2)
if p_pixel_3 >= .4:
image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/65-cassava-leaf-effnetb5-aux-task-healt-cmd-01-512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
def model_fn(input_shape, N_CLASSES):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB5(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
base_model.trainable = False
x = L.Dropout(.25)(base_model.output)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(x)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy')(x)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd')(x)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
model = model_fn((None, None, CHANNELS), N_CLASSES)
model.summary()
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
| 0.533397 | 0.478955 |
# Week 4
### Dopants
# Background#
A **dopant** is a foreign atom that enters the lattice. This can be accidental, e.g. nuclear fuel contains large concentrations of Fe cations because the fuel pellets are heated on steel equipment, or it can be by design, e.g. gadolinium-doped ceria is a fuel cell material in which Gd cations are added to improve the material's conductivity.
<center>
<br>
<img src="./figures/Dopant.png" width="400px">
<i>Figure 1. A pictorial example of doped NaCl.</i>
<br>
</center>
# Aim and Objectives #
The **Aim** of the next **week** is to **design** your own simulations to
**investigate** how the transport properties of CaF$_2$ are affected by dopants.
The **first objective** is to **decide** on the specific research questions you would like to answer.
The **second objective** is to **design** the simulations to answer these questions.
The **third objective** is to **run** these simulations.
For example, you could design simulations to answer the following research questions:
- How does the charge of the dopant affect the diffusion of F?
- How does the mass / size of the dopant affect the diffusion of F?
- How does the concentration of the dopant affect the diffusion of F?
# Simulating Dopants #
The <code>defect</code> module you used in week 3 will help with adding the dopants.
**NOTE:** due to the rapid diffusion of anions compared to cations it is pointless to consider anion impurities. Instead it is best to consider cation impurities on Ca sites.
# Exercise 1: Introducing Dopants #
**Run** the cell below to create an input file for <code>METADISE</code>.
```
import numpy as np
import subprocess
import os
import defect
# Read the METADISE input file
data = defect.read("Input/input.txt")
# Add a 10 % concentration of 3+ charged La
new_data = defect.dopant(data, "La", 3.0, 10)
# Write a METADISE file to folder La_10
defect.write_output(new_data, "La_10", "La")
```
This has created a directory called <code>La_10</code>, which contains an input file with a 10% concentration of lanthanum dopants with a 3+ charge.
**Run** the cell below to run <code>METADISE</code> on your previously generated input file.
```
subprocess.call('../Codes/metadise.exe', cwd='La_10/')
os.rename('La_10/control_o0001.dlp', 'La_10/CONTROL')
os.rename('La_10/config__o0001.dlp', 'La_10/CONFIG')
os.rename('La_10/field___o0001.dlp', 'La_10/FIELD')
```
<code>METADISE</code> has created the three input files (<code>CONTROL</code>, <code>CONFIG</code> and <code>FIELD</code>) for <code>DL_POLY</code> which correspond to a CaF$_2$ which contains a 10% concentration of lanthanum dopants.
Now your simulation is ready, **check** the structure before you run the simulation.
You can view the <code>CONFIG</code> file in three dimensions using the <code>VESTA</code> program.
**Run** the cells below to create input files for <code>METADISE</code> and run <code>METADISE</code> on your previously generated input files.
This will generate two directories containing the three input files (<code>CONTROL</code>, <code>CONFIG</code> and <code>FIELD</code>) for <code>DL_POLY</code> which correspond to a CaF$_2$ which contains a 10% concentration of potassium and strontium dopants, respectively.
```
# Read the METADISE input file
data = defect.read("Input/input.txt")
# Add a 10 % concentration of 1+ charged K
new_data = defect.dopant(data, "K", 1.0, 10)
# Write a METADISE file to folder K_10
defect.write_output(new_data, "K_10", "K")
subprocess.call('../Codes/metadise.exe', cwd='K_10/')
os.rename('K_10/control_o0001.dlp', 'K_10/CONTROL')
os.rename('K_10/config__o0001.dlp', 'K_10/CONFIG')
os.rename('K_10/field___o0001.dlp', 'K_10/FIELD')
# Read the METADISE input file
data = defect.read("Input/input.txt")
# Add a 10 % concentration of 2+ charged Sr
new_data = defect.dopant(data, "Sr", 2.0, 10)
# Write a METADISE file to folder Sr_10
defect.write_output(new_data, "Sr_10", "Sr")
subprocess.call('../Codes/metadise.exe', cwd='Sr_10/')
os.rename('Sr_10/control_o0001.dlp', 'Sr_10/CONTROL')
os.rename('Sr_10/config__o0001.dlp', 'Sr_10/CONFIG')
os.rename('Sr_10/field___o0001.dlp', 'Sr_10/FIELD')
```
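If you want to scan several concentrations of the same dopant, one possible pattern is sketched below. It simply reuses the helper calls from the cells above; the dopant, charge and concentration list are placeholders you should adapt to your own research question (the 10% Sr case already exists in <code>Sr_10</code>, so it is skipped here).
```
for conc in [2, 5, 20]:
    # Read the METADISE input file and add a conc % concentration of 2+ charged Sr
    data = defect.read("Input/input.txt")
    new_data = defect.dopant(data, "Sr", 2.0, conc)
    folder = "Sr_%d" % conc
    defect.write_output(new_data, folder, "Sr")
    # Run METADISE and rename its outputs into the three DL_POLY input files
    subprocess.call('../Codes/metadise.exe', cwd=folder + '/')
    os.rename(folder + '/control_o0001.dlp', folder + '/CONTROL')
    os.rename(folder + '/config__o0001.dlp', folder + '/CONFIG')
    os.rename(folder + '/field___o0001.dlp', folder + '/FIELD')
```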
# Putting It All Together #
You should now hopefully have the tools needed to begin to investigate the role of various dopants on the transport properties of CaF$_2$.
As with week 3, it is up to you how you want to proceed from here; this should be treated as a research project. In your groups, decide what questions you want to answer and then design a series of simulations to answer them. As always, there will be a demonstrator who will be happy to assist. Good luck.
|
github_jupyter
|
import numpy as np
import subprocess
import os
import defect
# Read the METADISE input file
data = defect.read("Input/input.txt")
# Add a 10 % concentration of 3+ charged La
new_data = defect.dopant(data, "La", 3.0, 10)
# Write a METADISE file to folder La_10
defect.write_output(new_data, "La_10", "La")
subprocess.call('../Codes/metadise.exe', cwd='La_10/')
os.rename('La_10/control_o0001.dlp', 'La_10/CONTROL')
os.rename('La_10/config__o0001.dlp', 'La_10/CONFIG')
os.rename('La_10/field___o0001.dlp', 'La_10/FIELD')
# Read the METADISE input file
data = defect.read("Input/input.txt")
# Add a 10 % concentration of 1+ charged K
new_data = defect.dopant(data, "K", 1.0, 10)
# Write a METADISE file to folder K_10
defect.write_output(new_data, "K_10", "K")
subprocess.call('../Codes/metadise.exe', cwd='K_10/')
os.rename('K_10/control_o0001.dlp', 'K_10/CONTROL')
os.rename('K_10/config__o0001.dlp', 'K_10/CONFIG')
os.rename('K_10/field___o0001.dlp', 'K_10/FIELD')
# Read the METADISE input file
data = defect.read("Input/input.txt")
# Add a 10 % concentration of 2+ charged Sr
new_data = defect.dopant(data, "Sr", 2.0, 10)
# Write a METADISE file to folder Sr_10
defect.write_output(new_data, "Sr_10", "Sr")
subprocess.call('../Codes/metadise.exe', cwd='Sr_10/')
os.rename('Sr_10/control_o0001.dlp', 'Sr_10/CONTROL')
os.rename('Sr_10/config__o0001.dlp', 'Sr_10/CONFIG')
os.rename('Sr_10/field___o0001.dlp', 'Sr_10/FIELD')
| 0.266166 | 0.955858 |
# Acknowledgements
This is a summary notebook borrowing in its entirety from William Koehrsen's https://williamkoehrsen.medium.com/ article in Towards Data Science and its corresponding github repo
1. https://towardsdatascience.com/the-poisson-distribution-and-poisson-process-explained-4e2cb17d459
2. https://github.com/WillKoehrsen/Data-Analysis/blob/master/poisson/poisson.ipynb
<b>A Poisson process is </b>
1. Model for discrete events
2. Average time between events is known (and constant)
3. Exact timing of events is random
4. Events are independent
5. Two events cannot occur at same time
Poisson Distribution gives the probability of a number of events in an interval generated by a Poisson process.
We use meteor observation to explain this. Some technical terms:
1. Asteroids are large chunks of rock orbiting the sun in the asteroid belt.
2. Pieces of asteroids that break off become meteoroids. A meteoroid can come from an asteroid, a comet, or a piece of a planet and is usually millimeters in diameter but can be up to a kilometer.
3. If the meteoroid survives its trip through the atmosphere and impacts Earth, it’s called a meteorite.
4. Meteors are the streaks of light you see in the sky that are caused by pieces of debris called meteoroids burning up in the atmosphere
<b>We can use Poisson to describe two things:</b>
1. Use Poisson Process for probability of events in an interval
2. Find waiting time between two events using Poisson distribution
Some processes that are close approximation of Poisson Processes:
1. Meteors
2. Failures in a website
3. Customers calling for support
4. Stock price movements (we know average stock price movements per day)
<b>Two observations from the definition of Poisson process:</b>
1. Two events cannot occur at same time implies that we can think of each sub-interval of a Poisson process as a Bernoulli Trial, that is, either a success or a failure. With our website, the entire interval may be 600 days, but each sub-interval — one day — our website either goes down or it doesn’t.
2. Average time between events but they are randomly spaced (stochastic). We might have back-to-back failures, but we could also go years between failures due to the randomness of the process
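Observation 1 can be checked numerically: splitting the interval into many Bernoulli sub-intervals gives a binomial distribution that approaches the Poisson distribution. A quick illustration (assuming SciPy is available; the 600-day website example is reused with 5 expected failures):
```
from scipy import stats

lam, n = 5, 600  # 5 expected failures over 600 daily Bernoulli trials
for k in range(4):
    print(k, round(stats.binom.pmf(k, n, lam / n), 4), round(stats.poisson.pmf(k, lam), 4))
```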
\begin{equation}
\begin{aligned}
P(X=k) = \dfrac{\lambda^{k} }{k!} \, e^{-\lambda}
\end{aligned}
\tag{Equation 1}
\end{equation}
where parameter $ \lambda $ > 0.
Let r be the number of events per unit time, $ \dfrac{events}{time} $; r is called the rate parameter, and we have $ \lambda \, = \, rt $. Substituting $ \lambda \, = \, rt $ into Equation 1 gives, intuitively:
\begin{equation}
\begin{aligned}
P(k \, events \, in \, interval \, t) = \dfrac{(\dfrac{events}{time}\, * \, interval \, t)^{k} }{k!} \
e^{-(\dfrac{events}{time}\, * \, interval \, t)}
\end{aligned}
\tag{Equation 2}
\end{equation}
where $ \dfrac{events}{time}\, * \, interval \, t $ is simply called the parameter of the Poisson process. We say interval rather than time period because the process can be applied to other domains, such as an area or a volume, depending on the application. The parameter $ \lambda $ can be thought of as the expected number of events in an interval.
Equation 2 can be formally written now as:
\begin{equation}
\begin{aligned}
P(k \, events \, in \, interval \, t) = \dfrac{(rt)^{k} }{k!} \
e^{-rt}
\end{aligned}
\tag{Equation 3}
\end{equation}
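For example, with r = 1/12 meteors per minute (5 per hour) and t = 60 minutes, we have $ \lambda = rt = 5 $, so the probability of exactly 3 meteors in the hour is (this matches the sample calculation in the code below):
\begin{equation}
\begin{aligned}
P(3 \, events \, in \, 60 \, minutes) = \dfrac{5^{3}}{3!} \, e^{-5} \approx 0.14
\end{aligned}
\end{equation}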
We want to plot the probability mass function (PMF) against the number of events for different rate parameters.
First we prepare the required Python code.
```
import pandas as pd
import numpy as np
from scipy.special import factorial # Poisson distribution formula has factorial term
# Display all cell outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
# Visualizations
import chart_studio
chart_studio.tools.set_credentials_file(username='datavictor', api_key='NZg3CXiXgzQM1e7ytw8U')
chart_studio.tools.set_config_file(world_readable=True,sharing='public')
from chart_studio import plotly # replaces import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import iplot
# Cufflinks for dataframes
import cufflinks as cf
cf.go_offline()
cf.set_config_file(world_readable=True, theme='pearl')
# np.random.seed(42) legacy method. do not use
rnd = np.random.RandomState(42)
rnd # rnd will now replace all access to np.random
```
# 1. Calculating Possion Probability
```
# Variables defined in this cell are used throughout the notebook
events_per_minute = 1/12
minutes = 60
events_per_hour = events_per_minute * minutes
lambda_param = events_per_hour
print(f'Rate parameter is {lambda_param}')
# Calculate probability of k events in specified number of minutes
def poisson_probability(events_per_minute, minutes, k):
lambda_param = events_per_minute * minutes
return np.exp(-lambda_param) * np.power(lambda_param, k) / factorial(k)
#Sample
k = 3
k_meteor_probability = poisson_probability(events_per_minute, minutes, k)
print(f'The probability of {k} meteors in {minutes} minutes is {100*k_meteor_probability:.2f}%.')
# We can pass a list or numpy array as the 3rd param and we will get back a list or numpy array of probabilities
events_12 = np.arange(12)
print(type(events_12))
prob_12 = poisson_probability(events_per_minute, minutes, events_12)
type(prob_12)
print(f'The most likely value is {np.argmax(prob_12)} with probability {np.max(prob_12):.4f}')
```
We can use the built in poisson function
```
x = rnd.poisson(lambda_param, 10000) #rnd replaces the previous np.random
print(type(x))
(x == 3).mean()
```
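As a quick check (not in the original notebook), the simulated frequency above should sit close to the exact value from `poisson_probability`:
```
mc_estimate = (x == 3).mean()
exact = poisson_probability(events_per_minute, minutes, 3)
print(f'Monte Carlo: {mc_estimate:.4f}, exact: {exact:.4f}')
```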
# 2. Plotting Poisson Mass Distribution
This is PMF because events are discrete
```
# Plot PMF of Poisson distribution
def plot_pmf(x, p_x, title=''):
df = pd.DataFrame({'x': x, 'y': p_x})
# print(f'The most likely value is {np.argmax(p_x)} with probability {np.max(p_x):.4f}')
annotations = [dict(x=x, y=y+0.01, text=f'{y:.2f}',
showarrow=False, textangle=0) for x, y in zip(df['x'], df['y'])]
df.iplot(kind='scatter', mode='markers+lines',
x='x', y='y', xTitle='Number of Events',
yTitle='Probability', annotations=annotations,
title=title)
plot_pmf(events_12, prob_12, title='Probability of Number of Meteors in One Hour')
```
# 3. Plotting Probability for different rates and different time period
NOTE: The most likely number of events in the interval for each curve is the rate parameter
```
def plot_different_rates(events_per_minute, minutes, ns, title=''):
df = pd.DataFrame()
annotations=[]
colors = ['orange', 'green', 'red', 'blue', 'purple', 'brown']
for i, events in enumerate(events_per_minute):
probs = poisson_probability(events, minutes, ns)
annotations.append(dict(x=np.argmax(probs)+1, y=np.max(probs)+0.025,
text=f'{int(events * minutes)} MPH<br>Meteors = {np.argmax(probs) + 1}<br>P = {np.max(probs):.2f}',
color=colors[i],
showarrow=False, textangle=0))
df[f'Meteors per Hour = {int(events * minutes)}'] = probs
df.index = ns
df.iplot(kind='scatter', mode='markers+lines', colors=colors, size=8, annotations=annotations,
xTitle='Events', yTitle='Probability', title=title)
return df
df = plot_different_rates(events_per_minute=np.array([1/5, 1/12, 1/10, 1/15, 1/20, 1/30]),
minutes=60,
ns=list(range(15)),
title='Probability of Meteors in 1 Hour at Different Rates')
def plot_different_times(events_per_minute, minutes, ns, title=''):
df = pd.DataFrame()
annotations = []
colors = ['orange', 'green', 'red', 'blue', 'purple', 'brown']
for i, minute in enumerate(minutes):
probs = poisson_probability(events_per_minute, minute, ns)
annotations.append(dict(x=np.argmax(probs), y=np.max(probs)+0.025,
color=colors[i],
text=f'{minute} Minutes<br>Meteors = {np.argmax(probs)}<br>P = {np.max(probs):.2f}',
showarrow=False, textangle=0))
df[f'Minutes = {minute}'] = probs
df.index = ns
df.iplot(kind='scatter', mode='markers+lines', colors=colors,
size=8, annotations=annotations,
xTitle='Events', yTitle='Probability', title=title)
return df
df = plot_different_times(events_per_minute=1/12, minutes=np.array([30, 60, 90, 120]),
ns=list(range(15)), title='Probability of Meteors in Time Intervals for fixed rate of 5 meteors per hour')
```
# 4. Simulation of Observations
We can use np.random.poisson to simulate 10,000 hours of observation and then make a histogram of observations. We expect to see a peak at 4 or 5 meteors since that is the most likely value.
```
def plot_hist(x, title='',summary=True):
df = pd.DataFrame(x)
df.iplot(kind='hist', xTitle='Events',
yTitle='Count', title=title)
if summary:
print(df.describe())
N = 10000
counts = np.random.poisson(lambda_param, size=N)
plot_hist(counts, title=f'Distribution of Number of Meteors in 1 Hour Simulated {N} Times')
counts = np.random.poisson(lambda_param * 3, size=N)
plot_hist(counts, title=f'Distribution of Number of Meteors in 3 Hours Simulated {N} Times')
```
# 5. Probability of Different Numbers of Events
Now let's take a look at the probability of seeing different numbers of meteors. We can find the probability by summing up the probabilities of more than a given number of events or less than or equal to a given number of events.
```
def pr_less_than_or_equal(events_per_minute, minutes, n_query, quiet=False):
p_n = poisson_probability(events_per_minute, minutes, np.arange(100))
p = p_n[:n_query+1].sum() / p_n.sum()
if not quiet:
print(f'{int(events_per_minute*60)} Meteors Per Hour. Probability of {n_query} or fewer meteors in {int(minutes/60)} hour: {100*p:.2f}%.')
return p
def pr_greater_than(events_per_minute, minutes, n_query, quiet=False):
p = 1 - pr_less_than_or_equal(events_per_minute, minutes, n_query)
if not quiet:
print(f'{int(events_per_minute*60)} Meteors Per Hour. Probability of more than {n_query} meteors in {int(minutes/60)} hour: {100*p:.2f}%.')
return p
assert pr_less_than_or_equal(events_per_minute, minutes, 6, True) + pr_greater_than(events_per_minute, minutes, 6, True) == 1
assert pr_less_than_or_equal(events_per_minute, minutes, 8, True) + pr_greater_than(events_per_minute, minutes, 8, True) == 1
_ = pr_greater_than(events_per_minute=1/12, minutes=60, n_query=10)
```
# 6. Waiting time between Events
We would like to calculate how long we have to wait until an event occurs (This is very different from average time between two events). Derivation as follows
Reproducing the equation 3 below for reference
\begin{equation}
\begin{aligned}
P(X=k) = \dfrac{(rt)^{k} }{k!} \, e^{-rt}
\end{aligned}
\tag{Equation 3}
\end{equation}
where k is the number of events that occurred between time 0 and time t at rate r
Let T be the waiting time until the first meteor sighting. The event T > t is the same as observing no meteors in the interval [0, t], so P(T > t) equals the probability of zero events in that interval.
That probability is obtained from Equation 3 by substituting k = 0:
\begin{equation}
\begin{aligned}
P(k=0) = \dfrac{(rt)^{0} }{0!} \, e^{-rt} \, = \, e^{-rt}
\end{aligned}
\tag{Equation 4}
\end{equation}
As an example, if the average time between meteor sightings is 12 minutes, then there is about a 61% chance of waiting more than 6 minutes for the next one.
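Concretely, with r = 1/12 per minute and t = 6 minutes:
\begin{equation}
\begin{aligned}
P(T > 6) = e^{-\frac{6}{12}} = e^{-0.5} \approx 0.61
\end{aligned}
\end{equation}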
```
def waiting_time_more_than(events_per_minute, t, quiet=False):
p = np.exp(-events_per_minute * t)
if not quiet:
print(f'{int(events_per_minute*60)} Meteors per hour. Probability of waiting more than {t} minutes: {100*p:.2f}%.')
return p
def waiting_time_less_than_or_equal(events_per_minute, t, quiet=False):
p = 1 - waiting_time_more_than(events_per_minute, t, quiet=quiet)
if not quiet:
print(f'{int(events_per_minute*60)} Meteors per hour. Probability of waiting at most {t} minutes: {100*p:.2f}%.')
return p
def waiting_time_between(events_per_minute, t1, t2):
p1 = waiting_time_less_than_or_equal(events_per_minute, t1, True)
p2 = waiting_time_less_than_or_equal(events_per_minute, t2, True)
p = p2-p1
print(f'Probability of waiting between {t1} and {t2} minutes: {100*p:.2f}%.')
return p
assert waiting_time_more_than(events_per_minute, 15, True) + waiting_time_less_than_or_equal(events_per_minute, 15, True) == 1
_ = waiting_time_less_than_or_equal(events_per_minute, 12)
def plot_waiting_time(events_per_minute, ts, title=''):
p_t = waiting_time_more_than(events_per_minute, ts, quiet=True)
df = pd.DataFrame({'x': ts, 'y': p_t})
df.iplot(kind='scatter', mode='markers+lines', size=8,
x='x', y='y', xTitle='Waiting Time',
yTitle='Probability',
title=title)
return p_t
p_t = plot_waiting_time(events_per_minute, np.arange(100), title='Probability (T > t)')
```
|
github_jupyter
|
import pandas as pd
import numpy as np
from scipy.special import factorial # Poisson distribution formula has factorial term
# Display all cell outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
# Visualizations
import chart_studio
chart_studio.tools.set_credentials_file(username='datavictor', api_key='NZg3CXiXgzQM1e7ytw8U')
chart_studio.tools.set_config_file(world_readable=True,sharing='public')
from chart_studio import plotly # replaces import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import iplot
# Cufflinks for dataframes
import cufflinks as cf
cf.go_offline()
cf.set_config_file(world_readable=True, theme='pearl')
# np.random.seed(42) legacy method. do not use
rnd = np.random.RandomState(42)
rnd # rnd will now replace all access to np.random
# Variables defined in this cell are used throughout the notebook
events_per_minute = 1/12
minutes = 60
events_per_hour = events_per_minute * minutes
lambda_param = events_per_hour
print(f'Rate parameter is {lambda_param}')
# Calculate probability of k events in specified number of minutes
def poisson_probability(events_per_minute, minutes, k):
lambda_param = events_per_minute * minutes
return np.exp(-lambda_param) * np.power(lambda_param, k) / factorial(k)
#Sample
k = 3
k_meteor_probability = poisson_probability(events_per_minute, minutes, k)
print(f'The probability of {k} meteors in {minutes} minutes is {100*k_meteor_probability:.2f}%.')
# We can pass a list or numpy array as the 3rd param and we will get back a list or numpy array of probabilities
events_12 = np.arange(12)
print(type(events_12))
prob_12 = poisson_probability(events_per_minute, minutes, events_12)
type(prob_12)
print(f'The most likely value is {np.argmax(prob_12)} with probability {np.max(prob_12):.4f}')
x = rnd.poisson(lambda_param, 10000) #rnd replaces the previous np.random
print(type(x))
(x == 3).mean()
# Plot PMF of Poisson distribution
def plot_pmf(x, p_x, title=''):
df = pd.DataFrame({'x': x, 'y': p_x})
# print(f'The most likely value is {np.argmax(p_x)} with probability {np.max(p_x):.4f}')
annotations = [dict(x=x, y=y+0.01, text=f'{y:.2f}',
showarrow=False, textangle=0) for x, y in zip(df['x'], df['y'])]
df.iplot(kind='scatter', mode='markers+lines',
x='x', y='y', xTitle='Number of Events',
yTitle='Probability', annotations=annotations,
title=title)
plot_pmf(events_12, prob_12, title='Probability of Number of Meteors in One Hour')
def plot_different_rates(events_per_minute, minutes, ns, title=''):
df = pd.DataFrame()
annotations=[]
colors = ['orange', 'green', 'red', 'blue', 'purple', 'brown']
for i, events in enumerate(events_per_minute):
probs = poisson_probability(events, minutes, ns)
annotations.append(dict(x=np.argmax(probs)+1, y=np.max(probs)+0.025,
text=f'{int(events * minutes)} MPH<br>Meteors = {np.argmax(probs) + 1}<br>P = {np.max(probs):.2f}',
color=colors[i],
showarrow=False, textangle=0))
df[f'Meteors per Hour = {int(events * minutes)}'] = probs
df.index = ns
df.iplot(kind='scatter', mode='markers+lines', colors=colors, size=8, annotations=annotations,
xTitle='Events', yTitle='Probability', title=title)
return df
df = plot_different_rates(events_per_minute=np.array([1/5, 1/12, 1/10, 1/15, 1/20, 1/30]),
minutes=60,
ns=list(range(15)),
title='Probability of Meteors in 1 Hour at Different Rates')
def plot_different_times(events_per_minute, minutes, ns, title=''):
df = pd.DataFrame()
annotations = []
colors = ['orange', 'green', 'red', 'blue', 'purple', 'brown']
for i, minute in enumerate(minutes):
probs = poisson_probability(events_per_minute, minute, ns)
annotations.append(dict(x=np.argmax(probs), y=np.max(probs)+0.025,
color=colors[i],
text=f'{minute} Minutes<br>Meteors = {np.argmax(probs)}<br>P = {np.max(probs):.2f}',
showarrow=False, textangle=0))
df[f'Minutes = {minute}'] = probs
df.index = ns
df.iplot(kind='scatter', mode='markers+lines', colors=colors,
size=8, annotations=annotations,
xTitle='Events', yTitle='Probability', title=title)
return df
df = plot_different_times(events_per_minute=1/12, minutes=np.array([30, 60, 90, 120]),
ns=list(range(15)), title='Probability of Meteors in Time Intervals for fixed rate of 5 meteors per hour')
def plot_hist(x, title='',summary=True):
df = pd.DataFrame(x)
df.iplot(kind='hist', xTitle='Events',
yTitle='Count', title=title)
if summary:
print(df.describe())
N = 10000
counts = np.random.poisson(lambda_param, size=N)
plot_hist(counts, title=f'Distribution of Number of Meteors in 1 Hour Simulated {N} Times')
counts = np.random.poisson(lambda_param * 3, size=N)
plot_hist(counts, title=f'Distribution of Number of Meteors in 3 Hours Simulated {N} Times')
def pr_less_than_or_equal(events_per_minute, minutes, n_query, quiet=False):
p_n = poisson_probability(events_per_minute, minutes, np.arange(100))
p = p_n[:n_query+1].sum() / p_n.sum()
if not quiet:
print(f'{int(events_per_minute*60)} Meteors Per Hour. Probability of {n_query} or fewer meteors in {int(minutes/60)} hour: {100*p:.2f}%.')
return p
def pr_greater_than(events_per_minute, minutes, n_query, quiet=False):
p = 1 - pr_less_than_or_equal(events_per_minute, minutes, n_query)
if not quiet:
print(f'{int(events_per_minute*60)} Meteors Per Hour. Probability of more than {n_query} meteors in {int(minutes/60)} hour: {100*p:.2f}%.')
return p
assert pr_less_than_or_equal(events_per_minute, minutes, 6, True) + pr_greater_than(events_per_minute, minutes, 6, True) == 1
assert pr_less_than_or_equal(events_per_minute, minutes, 8, True) + pr_greater_than(events_per_minute, minutes, 8, True) == 1
_ = pr_greater_than(events_per_minute=1/12, minutes=60, n_query=10)
def waiting_time_more_than(events_per_minute, t, quiet=False):
p = np.exp(-events_per_minute * t)
if not quiet:
print(f'{int(events_per_minute*60)} Meteors per hour. Probability of waiting more than {t} minutes: {100*p:.2f}%.')
return p
def waiting_time_less_than_or_equal(events_per_minute, t, quiet=False):
p = 1 - waiting_time_more_than(events_per_minute, t, quiet=quiet)
if not quiet:
print(f'{int(events_per_minute*60)} Meteors per hour. Probability of waiting at most {t} minutes: {100*p:.2f}%.')
return p
def waiting_time_between(events_per_minute, t1, t2):
p1 = waiting_time_less_than_or_equal(events_per_minute, t1, True)
p2 = waiting_time_less_than_or_equal(events_per_minute, t2, True)
p = p2-p1
print(f'Probability of waiting between {t1} and {t2} minutes: {100*p:.2f}%.')
return p
assert waiting_time_more_than(events_per_minute, 15, True) + waiting_time_less_than_or_equal(events_per_minute, 15, True) == 1
_ = waiting_time_less_than_or_equal(events_per_minute, 12)
def plot_waiting_time(events_per_minute, ts, title=''):
p_t = waiting_time_more_than(events_per_minute, ts, quiet=True)
df = pd.DataFrame({'x': ts, 'y': p_t})
df.iplot(kind='scatter', mode='markers+lines', size=8,
x='x', y='y', xTitle='Waiting Time',
yTitle='Probability',
title=title)
return p_t
p_t = plot_waiting_time(events_per_minute, np.arange(100), title='Probability (T > t)')
| 0.739705 | 0.977841 |
# Exp F3 for the Classical SRNN (sgd)
This is a notebook for testing the classical SRNN.
F3 aims to test the classical counterpart of QRNN.
## Import everything
Modify setting for pytorch
```
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
currentPath=os.getcwd()
```
Import matplotlib and others
```
%matplotlib inline
import matplotlib.pyplot as plt
import torch
```
Import the classical SRNN and others
```
#Modify path for the notebooks
currentPath=os.path.join(currentPath,'..')
currentPath=os.path.join(currentPath,'src')
os.chdir(currentPath)
from DataGenerator.HenonMapDataGen import HenonMapDataGen
from ClassicalModels.ClassicalSRNNs import ClassicalSRNN
from ClassicalModels.ClassicalSRNNs import SuportFunction
from GradientFreeOptimizers.CostFunc import GradFreeMSELoss
import GradientFreeOptimizers.Helpers as hp
```
### Get the data
#### Set save path
```
savepath=os.path.join(currentPath,'..','data','HenonMap','Exp')
filename='QExp1.csv'
'''
hmap=HenonMapDataGen(savepath=savepath)
hmap(10000)
hmap.save_to_CSV(filename)
''';
```
#### Read the data
```
hmap=HenonMapDataGen(savepath=savepath)
hmap.read_from_CSV(filename)
print(hmap)
```
#### Generate the data iter
```
testSetRatio=0.2
numStep=10
batchSize=16
trainIter,testIter=hmap.get_data_iter(testSetRatio,numStep,batchSize,mask=0,shuffle=False)
X,Y=next(iter(trainIter))
print('Train Data Size:',len(trainIter))
X,Y=next(iter(testIter))
print('Test Data Size:',len(testIter))
```
### Define the SRNN
#### Get necessary functions
```
srnnExpSup=SuportFunction()
transform=lambda Xs:[torch.squeeze(x) for x in Xs]
init_rnn_state=srnnExpSup.get_init_state_fun(initStateValue=1.0)
get_params=srnnExpSup.get_get_params_fun(rescale=1.0)
rnn=srnnExpSup.get_forward_fn_fun(isTypical=False)
predict_fun=srnnExpSup.get_predict_fun(outputTransoform=transform)
```
#### Create the SRNN
```
inputSize=outputSize=1
hiddenSize=2
net=ClassicalSRNN(inputSize,hiddenSize,outputSize,get_params,init_rnn_state,rnn)
```
#### Test prediction
```
state=net.begin_state(batchSize)
Y,newState=net(X,state)
Y.shape, len(newState), newState[0].shape
preX,preY=hmap.data_as_tensor
preX,preY=torch.unsqueeze(preX[:2],-1),torch.unsqueeze(preY[:10],-1)
print('preX=',preX)
preY=[y for y in torch.cat((preX[:2],preY[1:]),dim=0)]
print('preY=',preY)
preX=torch.unsqueeze(preX,-1)
YHat=predict_fun(preX,net,numPreds=5)
print('YHat=',YHat)
```
### Train the network
#### Parameters
```
num_epochs, lr = 300, 0.2
step_epochs=10
```
#### Loss function
```
lossFunc=GradFreeMSELoss(net)
```
#### Trainer
```
trainer = torch.optim.SGD(net.params, lr=lr)
scheduler=torch.optim.lr_scheduler.StepLR(trainer,step_size=100,gamma=0.1)
```
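`StepLR` multiplies the learning rate by `gamma` every `step_size` epochs, so with these settings the rate decays from 0.2 to 0.02 at epoch 100 and to 0.002 at epoch 200. A quick illustration of that formula (uses the `lr` defined above; not part of the training loop):
```
for epoch in [0, 99, 100, 199, 200, 299]:
    print(epoch, lr * 0.1 ** (epoch // 100))
```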
#### Initial loss
```
l_epochs=[]
train_l=SuportFunction.evaluate_accuracy(net,trainIter,lossFunc,False)
test_l=SuportFunction.evaluate_accuracy(net,testIter,lossFunc,False)
l_epochs.append([train_l,test_l])
print('Initial Train Loss:',train_l)
print('Initial Test Loss:',test_l)
```
#### Training
```
animator = hp.Animator(xlabel='epoch', ylabel='Loss',
legend=['train','test'], xlim=[1, num_epochs])
# prediction
predict = lambda prefix: predict_fun(prefix,net, numPreds=9)
# train and predict
for epoch in range(num_epochs):
trainLoss, speed = SuportFunction.train_epoch(
net, trainIter, lossFunc, trainer, False)
testLoss=SuportFunction.evaluate_accuracy(net, testIter, lossFunc, False)
if (epoch + 1) % step_epochs == 0:
print(predict(preX))
animator.add(epoch + 1, [trainLoss,testLoss])
l_epochs.append([trainLoss,testLoss])
scheduler.step()
testLoss=SuportFunction.evaluate_accuracy(net, testIter, lossFunc, False)
print(f'TestLoss {testLoss:f}, {speed:f} point/s')
print('Prediction:\n',predict(preX))
print('Answer:\n',preY)
```
### Visualize the performance
#### One Step Prediction
```
X,Y=next(iter(testIter))
state=net.begin_state(batchSize)
Y_hat,newState=net(X,state)
#print('X=',torch.squeeze(X))
#print('Prediction=',torch.squeeze(Y_hat).detach())
Y=Y.transpose(0,1).reshape([-1,Y.shape[-1]])
#print('Y=',torch.squeeze(Y))
#Visualize the data
axes,fig=plt.subplots(1,1,figsize=(4,3))
plt.title('One-Step Prediction')
plt.plot(torch.linspace(0,Y.numel(),Y.numel()),torch.squeeze(Y),label='Y')
plt.plot(torch.linspace(0,Y.numel(),Y.numel()),torch.squeeze(Y_hat).detach(),label=r'$\hat{Y}$')
plt.legend();
```
#### Multi Step Prediction
```
prefixSize=10
totalSize=30
testShift=int(len(hmap)*(1-testSetRatio))
preX,preY=hmap.data_as_tensor
preX,preY=torch.unsqueeze(preX[testShift:testShift+prefixSize],-1),torch.unsqueeze(preY[testShift:testShift+totalSize-1],-1)
#print('preX=',preX)
preY=[y for y in torch.cat((preX[:2],preY[1:]),dim=0)]
#print('preY=',preY)
len(preY)
preX=torch.unsqueeze(preX,-1)
YHat=predict_fun(preX,net,numPreds=totalSize-prefixSize)
#print('YHat=',YHat)
len(YHat)
#Visualize the data
fig,ax=plt.subplots(1,1,figsize=(4,3))
plt.title('Multi-Step Prediction')
ax.set_ylim(-2,2)
plt.plot(torch.linspace(0,len(preY),len(preY)),preY,label='Y')
plt.plot(torch.linspace(0,len(preY),len(preY)),YHat,label=r'$\hat{Y}$')
plt.vlines([prefixSize-1],ymin=-2,ymax=2,linestyles='dashed',label='Prediction')
plt.legend();
```
# End of the test
|
github_jupyter
|
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
currentPath=os.getcwd()
%matplotlib inline
import matplotlib.pyplot as plt
import torch
#Modify path for the notebooks
currentPath=os.path.join(currentPath,'..')
currentPath=os.path.join(currentPath,'src')
os.chdir(currentPath)
from DataGenerator.HenonMapDataGen import HenonMapDataGen
from ClassicalModels.ClassicalSRNNs import ClassicalSRNN
from ClassicalModels.ClassicalSRNNs import SuportFunction
from GradientFreeOptimizers.CostFunc import GradFreeMSELoss
import GradientFreeOptimizers.Helpers as hp
savepath=os.path.join(currentPath,'..','data','HenonMap','Exp')
filename='QExp1.csv'
'''
hmap=HenonMapDataGen(savepath=savepath)
hmap(10000)
hmap.save_to_CSV(filename)
''';
hmap=HenonMapDataGen(savepath=savepath)
hmap.read_from_CSV(filename)
print(hmap)
testSetRatio=0.2
numStep=10
batchSize=16
trainIter,testIter=hmap.get_data_iter(testSetRatio,numStep,batchSize,mask=0,shuffle=False)
X,Y=next(iter(trainIter))
print('Train Data Size:',len(trainIter))
X,Y=next(iter(testIter))
print('Test Data Size:',len(testIter))
srnnExpSup=SuportFunction()
transform=lambda Xs:[torch.squeeze(x) for x in Xs]
init_rnn_state=srnnExpSup.get_init_state_fun(initStateValue=1.0)
get_params=srnnExpSup.get_get_params_fun(rescale=1.0)
rnn=srnnExpSup.get_forward_fn_fun(isTypical=False)
predict_fun=srnnExpSup.get_predict_fun(outputTransoform=transform)
inputSize=outputSize=1
hiddenSize=2
net=ClassicalSRNN(inputSize,hiddenSize,outputSize,get_params,init_rnn_state,rnn)
state=net.begin_state(batchSize)
Y,newState=net(X,state)
Y.shape, len(newState), newState[0].shape
preX,preY=hmap.data_as_tensor
preX,preY=torch.unsqueeze(preX[:2],-1),torch.unsqueeze(preY[:10],-1)
print('preX=',preX)
preY=[y for y in torch.cat((preX[:2],preY[1:]),dim=0)]
print('preY=',preY)
preX=torch.unsqueeze(preX,-1)
YHat=predict_fun(preX,net,numPreds=5)
print('YHat=',YHat)
num_epochs, lr = 300, 0.2
step_epochs=10
lossFunc=GradFreeMSELoss(net)
trainer = torch.optim.SGD(net.params, lr=lr)
scheduler=torch.optim.lr_scheduler.StepLR(trainer,step_size=100,gamma=0.1)
l_epochs=[]
train_l=SuportFunction.evaluate_accuracy(net,trainIter,lossFunc,False)
test_l=SuportFunction.evaluate_accuracy(net,testIter,lossFunc,False)
l_epochs.append([train_l,test_l])
print('Initial Train Loss:',train_l)
print('Initial Test Loss:',test_l)
animator = hp.Animator(xlabel='epoch', ylabel='Loss',
legend=['train','test'], xlim=[1, num_epochs])
# prediction
predict = lambda prefix: predict_fun(prefix,net, numPreds=9)
# train and predict
for epoch in range(num_epochs):
trainLoss, speed = SuportFunction.train_epoch(
net, trainIter, lossFunc, trainer, False)
testLoss=SuportFunction.evaluate_accuracy(net, testIter, lossFunc, False)
if (epoch + 1) % step_epochs == 0:
print(predict(preX))
animator.add(epoch + 1, [trainLoss,testLoss])
l_epochs.append([trainLoss,testLoss])
scheduler.step()
testLoss=SuportFunction.evaluate_accuracy(net, testIter, lossFunc, False)
print(f'TestLoss {testLoss:f}, {speed:f} point/s')
print('Prediction:\n',predict(preX))
print('Answer:\n',preY)
X,Y=next(iter(testIter))
state=net.begin_state(batchSize)
Y_hat,newState=net(X,state)
#print('X=',torch.squeeze(X))
#print('Prediction=',torch.squeeze(Y_hat).detach())
Y=Y.transpose(0,1).reshape([-1,Y.shape[-1]])
#print('Y=',torch.squeeze(Y))
#Visualize the data
axes,fig=plt.subplots(1,1,figsize=(4,3))
plt.title('One-Step Prediction')
plt.plot(torch.linspace(0,Y.numel(),Y.numel()),torch.squeeze(Y),label='Y')
plt.plot(torch.linspace(0,Y.numel(),Y.numel()),torch.squeeze(Y_hat).detach(),label=r'$\hat{Y}$')
plt.legend();
prefixSize=10
totalSize=30
testShift=int(len(hmap)*(1-testSetRatio))
preX,preY=hmap.data_as_tensor
preX,preY=torch.unsqueeze(preX[testShift:testShift+prefixSize],-1),torch.unsqueeze(preY[testShift:testShift+totalSize-1],-1)
#print('preX=',preX)
preY=[y for y in torch.cat((preX[:2],preY[1:]),dim=0)]
#print('preY=',preY)
len(preY)
preX=torch.unsqueeze(preX,-1)
YHat=predict_fun(preX,net,numPreds=totalSize-prefixSize)
#print('YHat=',YHat)
len(YHat)
#Visualize the data
fig,ax=plt.subplots(1,1,figsize=(4,3))
plt.title('Multi-Step Prediction')
ax.set_ylim(-2,2)
plt.plot(torch.linspace(0,len(preY),len(preY)),preY,label='Y')
plt.plot(torch.linspace(0,len(preY),len(preY)),YHat,label=r'$\hat{Y}$')
plt.vlines([prefixSize-1],ymin=-2,ymax=2,linestyles='dashed',label='Prediction')
plt.legend();
| 0.362292 | 0.843831 |
```
import os
import pandas
import nibabel as ni
import numpy as np
import scipy.stats as stats
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
import seaborn as sns
from glob import glob
from scipy.spatial import KDTree
import warnings
warnings.filterwarnings('ignore')
# Your git directory here:
git_dir = '/home/users/jvogel/git/Hippocampus_AP_Axis/'
import sys
sys.path.insert(0,git_dir)
import HAP_Utils as hap
```
# DOWNLOAD INSTRUCTIONS
* Go to http://human.brain-map.org/static/download
* Under the subheading "Complete normalized microarray datasets", click each donor link to download microarray gene expression data for that subject (e.g. H0351.2001)
* Unzip and move these folders to a single location
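Assuming the donor folders were all unzipped into a single directory, a quick sanity check along these lines can confirm the expected layout before moving on (the path below is just the example used in this walkthrough; adjust as needed):
```
import os
from glob import glob

# Wherever you moved the unzipped donor folders (example path from this walkthrough)
aba_dir = '/data1/users/jvogel/Allen_Human_Brain_Atlas/'
donor_dirs = sorted(glob(os.path.join(aba_dir, 'normalized_microarray_donor*')))
print(len(donor_dirs), 'donor folders found')  # expecting 6
for d in donor_dirs:
    for f in ['MicroarrayExpression.csv', 'Probes.csv', 'SampleAnnot.csv']:
        if not os.path.exists(os.path.join(d, f)):
            print('missing', f, 'in', d)
```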
To get the renormalized sample coordinates:
-----------------------------
* Go to https://github.com/gdevenyi/AllenHumanGeneMNI
* Clone or download the repository
# Initialize data
```
# Enter the path for the directory where you stored your file downloads
aba_dir = '/data1/users/jvogel/Allen_Human_Brain_Atlas/'
probes = pandas.read_csv(os.path.join(aba_dir,'normalized_microarray_donor9861/Probes.csv'))
```
#### GET EVERYTHING INTO A GIANT DATAFRAME AND SAVE DONOR INFORMATION FOR REGRESSION
```
bigsheet = []
xpsheets = sorted(glob(os.path.join(aba_dir,
'normalized_microarray_donor*/MicroarrayExpression.csv'
))) # collect gene expression data
dids = [x.split('/')[-2].split('donor')[-1] for x in xpsheets] # donor IDs
# Turn each csv into a dataframe, add donor ID, and concatenate into one big DataFrame
for sheet in xpsheets:
did = sheet.split('/')[-2].split('donor')[-1] # get donor id
gxp = pandas.read_csv(sheet,header=None)
gxp.drop(gxp.columns[0],axis=1,inplace=True)
# create dummy variables for donor
for tid in dids:
if tid == did:
gxp.loc['is_%s'%tid] = 1
else:
gxp.loc['is_%s'%tid] = 0
bigsheet.append(gxp)
print('finished',did)
bigdf = pandas.concat(bigsheet,axis=1).transpose()
```
# Regress out donor effects
Here, we want to remove donor-specific effects separately for each probe: each probe's expression is regressed on donor dummy variables, and the residuals are kept.
```
# PREP FOR REGRESSION
bigdf.columns = ['col_%s'%x for x in bigdf.columns[:-6]] + bigdf.columns[-6:].tolist()
resid_df = pandas.DataFrame(np.empty_like(bigdf.values),
index = bigdf.index, columns=bigdf.columns)
resid_df.iloc[:,-6:] = bigdf.iloc[:,-6:].values
# RUN STATS (took about 5-10 minutes on the work computer)
for i,col in enumerate(bigdf.columns[:-6]):
resid_df.loc[:,col] = smf.ols('%s ~ is_10021 + is_12876 + is_14380 + is_15496 + is_15697'%(col),
data=bigdf).fit().resid
if i % 1000 == 0:
print(i)
# SAVE BACK INTO SPREADSHEETS
dfz = {}
for col in resid_df.columns[-6:]:
did = col.split('_')[-1]
ddir = os.path.join(aba_dir,'normalized_microarray_donor%s'%did)
refsht = os.path.join(ddir,'MicroarrayExpression.csv')
print('loading reference csv')
gxp = pandas.read_csv(refsht,header=None)
gxp.drop(gxp.columns[0],axis=1,inplace=True)
ndf = resid_df[resid_df[col]==1][resid_df.columns[:-6]].transpose()
ndf.index = gxp.index
ndf.columns = gxp.columns
print('saving new csv')
ndf.to_csv(os.path.join(ddir,'MExp_genes_ctr_for_donor'))
dfz.update({did: ndf})
print('finished',did)
```
# Create DataFrames for subsequent use
### Sample reference information with corrected coordinates
```
donor_key = {'H0351.2002': '10021', 'H0351.2001': '9861',
'H0351.1009': '12876', 'H0351.1012': '14380',
'H0351.1015': '15496', 'H0351.1016': '15697'}
# ENTER PATH TO GIT REPOSITORY CONTAINING COORDINATES
coord_pth = '/home/users/jvogel/git/AllenHumanGeneMNI/'
sasheets = sorted(glob(os.path.join(coord_pth,'transformed-points/recombine/*SampleAnnot.csv')))
ref = []
for sheet in sasheets:
did = donor_key[sheet.split('/')[-1].split('_')[0]]
sa = pandas.read_csv(sheet)
sa.loc[:,'donor'] = [did for x in range(len(sa))]
sa.loc[:,'sample'] = [x for x in range(1,len(sa)+1)]
ref.append(sa)
SA = pandas.concat(ref).sort_values(['donor','sample'])
SA.index = range(len(SA))
SA.head()
data_dir = os.path.join(git_dir,'Data')
SA.to_csv(os.path.join(data_dir,'MAIN_gcx_wholebrain_info.csv'))
```
### Find hippocampus coordinates
```
hipp_structures = ['CA1','CA2','CA3','CA4','DG','S']
hipp_df = pandas.DataFrame(SA[SA.structure_acronym.isin(hipp_structures)], copy=True)
hipp_df.head()
```
### Make sure they are inside of, or within three mm of, the hippocampus
```
# Isolate hippocampus
HO = ni.load(os.path.join(data_dir,'HarvardOxford-sub-maxprob-thr25-1mm.nii.gz')).get_data()
hipps = np.zeros_like(HO)
hipps[HO==9] = 1
hipps[HO==19] = 1
hipp_coords = np.where(hipps==1)
# Get XYZ coordinates of each hippocampus sample
sample_coords = []
for i,row in hipp_df.iterrows():
coords = hap.convert_coords([row['mni_nlin_x'],
row['mni_nlin_y'],
row['mni_nlin_z']],
'xyz')
sample_coords.append([round(x) for x in coords])
# compute the shortest distance of each sample to the hippocampus mask
# convert to format that KDTree likes
hipp_cs = [(hipp_coords[0][x],
hipp_coords[1][x],
hipp_coords[2][x]) for x in range(len(hipp_coords[0]))]
# make KDTree
tree = KDTree(hipp_cs)
# compute distances
dists = [tree.query(x)[0] for x in sample_coords]
# Make sure most distances are small
plt.close()
sns.distplot(dists, kde=False)
plt.xlabel('distance (mm) from hippocampus mask')
plt.ylabel('N samples')
plt.show()
```
### Dataframe with Gene expression for (good) hippocampus samples only
```
# get index of coords that are within 3mm (rounded) to the hippocampus mask
good_ind = [x for x in range(len(dists)) if dists[x] < 3.49] # list indices
good_hipp_df = hipp_df.iloc[good_ind]
# For some reason, I apparently got rid of another coordinate,
# so the following will match what was used for analysis in the paper
todrop = good_hipp_df[(good_hipp_df.donor=='14380') & (good_hipp_df['sample']==220)].index
good_hipp_df.drop(todrop,inplace=True)
# save it
good_hipp_df.to_csv(os.path.join(data_dir,'MAIN_hippocampus_sample_info.csv'))
# Make dataframe
hxp = resid_df.iloc[good_hipp_df.index][bigdf.columns[:-6]].T
hxp.index = range(len(hxp))
hxp.columns = ['%s_%s'%(good_hipp_df.loc[x,'donor'],
good_hipp_df.loc[x,'sample']
) for x in good_hipp_df.index]
# Save it to our Data directory
hxp.to_csv(os.path.join(data_dir,'MAIN_hippocampus_gxp.csv'))
```
### Model Selection
```
%load_ext autoreload
%autoreload 2
import math
import numpy as np
import torch
from torch import nn
import d2l
```
Our true function will be $ y = 5 + 1.2x - 3.4 \frac{x^2}{2!} + 5.6 \frac{x^3}{3!} + \epsilon$
```
max_degree = 20
n_train, n_test = 100, 100
true_w = np.zeros(max_degree)
true_w[0:4] = np.array([5, 1.2, -3.4, 5.6])
features = np.random.normal(size=(n_train + n_test,1))
np.random.shuffle(features)
# Create array of shape (n_train+n_test, max_degree)
poly_features = np.power(features, np.arange(max_degree).reshape(1,-1))
for i in range(max_degree):
#`gamma(n)` = (n-1)!
# Rescale to avoid very large values of gradients or losses
poly_features[:, i] /= math.gamma(i + 1)
# Shape of `labels`: (`n_train` + `n_test`,)
labels = np.dot(poly_features, true_w)
labels += np.random.normal(scale=0.1, size = labels.shape)
# Convert from NumPy ndarrays to tensors
true_w, features, poly_features, labels = [
torch.tensor(x, dtype = torch.float32)
for x in [true_w, features, poly_features, labels]]
features[:2], poly_features[:2, :], labels[:2]
# @save
def evaluate_loss(net, data_iter, loss):
"""Evaluate the loss of a model on the given dataset."""
metric = d2l.Accumulator(2) # store loss and number of examples
for X, y in data_iter:
output = net(X)
y = y.reshape(output.shape)
l = loss(output, y)
metric.add(l.sum(), l.numel())
return metric[0] / metric[1]
def train(train_features, test_features,
train_labels, test_labels, num_epochs = 400):
loss = nn.MSELoss(reduction='none') #don't take average or sum
input_shape = train_features.shape[-1]
    # bias is included in polynomial features
net = nn.Sequential(nn.Linear(input_shape, 1, bias=False))
batch_size = min(10, train_labels.shape[0])
train_iter = d2l.load_array((train_features, train_labels.reshape(-1,1)),
batch_size, is_train=True)
test_iter = d2l.load_array((test_features, test_labels.reshape(-1,1)),
batch_size, is_train=False)
trainer = torch.optim.SGD(net.parameters(), lr=0.01)
animator = d2l.Animator(xlabel='epoch', ylabel='loss', yscale='log',
xlim=[1, num_epochs], ylim=[1e-3, 1e2],
legend=['train', 'test'])
for epoch in range(num_epochs):
d2l.train_epoch_ch3(net, train_iter, loss, trainer)
if epoch == 0 or (epoch + 1) % 20 == 0:
animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
evaluate_loss(net, test_iter, loss)))
print('weight:', net[0].weight.data.numpy())
```
#### Third order polynomial
```
# Pick the first four dimensions, i.e., 1, x, x^2/2!, x^3/3! from the
# polynomial features
train(poly_features[:n_train, :4], poly_features[n_train:, :4],
labels[:n_train], labels[n_train:])
```
#### Linear
```
train(poly_features[:n_train,:2], poly_features[n_train:, :2],
labels[:n_train], labels[n_train:])
```
#### Higher order polynomials
```
train(poly_features[:n_train,:], poly_features[n_train:, :],
labels[:n_train], labels[n_train:], num_epochs=1500)
```
# Tutorial 3 - CNN
This tutorial aims to work with the convolutional neural network (CNN). A good introduction to this network is the material from [deeplearningbook.org](http://www.deeplearningbook.org/contents/convnets.html).
The implementations will be based on the following tutorials:
* [Lab 11 from the ML/DL for everyone material](https://drive.google.com/drive/u/0/folders/0B41Zbb4c8HVyMHlSQlVFWWphNXc)
* [CNN implementation from the Udacity course](https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py)
We will work in a similar way to the MLP tutorial. First I will apply it to the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, and then to a harder task: classifying the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset.
```
# Necessary imports
from IPython.display import YouTubeVideo, Image
```
Before starting the implementation, let's get a sense of how a CNN works.
To begin, I recommend the following video, which gives a quick explanation of the topic:
```
# Brief explanation of CNNs
# Video by: Deeplearning.TV
YouTubeVideo('JiN9p5vWHDY')
```
Some notes and images were taken from the lecture notes of the course [CS231n Convolutional Neural Networks for Visual Recognition](http://cs231n.stanford.edu/), available [at this link](http://cs231n.github.io/convolutional-networks/).
The first step in understanding the CNN architecture is to see how it differs from a traditional neural network. The lecture notes linked above discuss this well and are worth reading.
Briefly, let's look at the images below. The first corresponds to an MLP and the second to the architecture of a CNN.
*Image source: http://cs231n.github.io/convolutional-networks/*
```
print("Figura 1: Arquitetuta da MLP")
Image(url="http://cs231n.github.io/assets/nn1/neural_net2.jpeg",width=400)
```
If we imagine that the input of this network is a 32x32x3 image (32 wide, 32 high, and 3 deep: RGB channels, for example), we would have a vector with 3072 positions. In other words, each neuron of the *hidden layer* is connected to all of these inputs, so it carries 3072 weights multiplied by the input. It is easy to see that if we scale the image up to 200x200x3, for example, the number of parameters throughout the network grows considerably. This has several practical implications, for instance for scaling a network of this kind to work with images.
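A quick back-of-the-envelope calculation makes these numbers concrete (plain Python, just illustrating the claim above):
```
# Weights feeding a single fully connected neuron = one weight per input value
print(32 * 32 * 3)      # 3072 weights per hidden neuron for a 32x32x3 image
print(200 * 200 * 3)    # 120000 weights per hidden neuron for a 200x200x3 image
# With, say, 100 neurons in the first hidden layer:
print(100 * 32 * 32 * 3, 'vs', 100 * 200 * 200 * 3, 'weights in that layer alone')
```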
```
print("Figura 2: Arquitetuta da CNN")
Image(url="http://cs231n.github.io/assets/cnn/cnn.jpeg", width=400)
```
Now, if we look at the CNN architecture, we see that this process is somewhat simplified, because the neurons are arranged in a cube-shaped structure (*width* x *height* x *depth*). Organized this way, each neuron is connected to only a region of the previous layer, instead of to all of its neurons as in the MLP architecture (this will be covered in more detail when we describe the operations applied in each layer).
This excerpt from the [DeepLearningBook](http://www.deeplearningbook.org) explains well the effect of this change on computational complexity:
> This means that we need to store fewer parameters, which both reduces the memory requirements of the model and improves its statistical efficiency. It also means that computing the output requires fewer operations. These improvements in efficiency are usually quite large. If there are $m$ inputs and $n$ outputs, then matrix multiplication requires $m×n$ parameters and the algorithms used in practice have $O(m × n)$ runtime (per example). If we limit the number of connections each output may have to $k$, then the sparsely connected approach requires only $k×n$ parameters and $O(k × n)$ runtime. For many practical applications, it is possible to obtain good performance on the machine learning task while keeping $k$ several orders of magnitude smaller than $m$.
```
# TODO: cover the other advantages: parameter sharing and equivariant representations
```
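As a small illustration of parameter sharing (one of the advantages noted above), the parameter count of a convolutional layer depends only on the kernel size and the number of channels/filters, never on the image size. A rough sketch, with arbitrary example numbers:
```
def dense_params(h, w, c, n_out):
    # every output neuron sees every input value (plus a bias)
    return (h * w * c + 1) * n_out

def conv_params(k, c_in, n_filters):
    # each filter reuses the same k x k x c_in weights at every image position (plus a bias)
    return (k * k * c_in + 1) * n_filters

print(dense_params(32, 32, 3, 64))     # grows with the image size
print(dense_params(200, 200, 3, 64))   # much larger for a bigger image
print(conv_params(5, 3, 64))           # identical for 32x32 and 200x200 inputs
```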
## CNN architecture
A CNN is a sequence of layers, and each layer transforms one cube-shaped volume into another cube-shaped volume through a series of transformations. We can separate these layers into three types: *convolutional layer*, *pooling layer*, and *fully-connected layer*. In some architectures a *ReLU* layer also appears.
To illustrate, let's look at a classification problem represented by the following image:
*Image source: http://cs231n.github.io/convolutional-networks/*
```
Image(url="http://cs231n.github.io/assets/cnn/convnet.jpeg", width=700)
```
Note that the layers are applied several times; how many times depends on the architecture we choose for our problem. Also note that the input and output of each layer is the cube-shaped volume (width x height x depth) mentioned earlier. The output, for this problem, is a classification vector.
An animated version running live in the browser can be seen at: http://cs231n.stanford.edu/ (PS: very good :P)
For a better understanding, it is worth taking a look at the explanation of each layer at http://cs231n.github.io/convolutional-networks/#layers. I will cover some aspects below.
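To make the layer types concrete before the full implementation, here is a minimal sketch of the classic INPUT -> CONV -> ReLU -> POOL -> FC pipeline, written with the Keras API for brevity (filter counts and kernel sizes are arbitrary illustrative choices, not a recommended architecture):
```
from tensorflow.keras import layers, models

# A 32x32 RGB input, as in the CIFAR-10 example from the CS231n notes
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, kernel_size=3, padding='same', activation='relu'),  # CONV + RELU
    layers.MaxPooling2D(pool_size=2),                                     # POOL
    layers.Conv2D(32, kernel_size=3, padding='same', activation='relu'),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),                               # FC -> class scores
])
model.summary()
```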
### ### This tutorial is under development ### ###
### Necessary Imports
```
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Input, Flatten, Reshape
from tensorflow.keras.layers import Dense, Conv2DTranspose, BatchNormalization, Activation
from tensorflow.keras.models import Model
from tensorflow.keras.layers import concatenate
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.utils import plot_model
from tensorflow.keras import backend as K
from tensorflow.keras.datasets import mnist
import os
import math
```
### Loading Training Data
```
# load MNIST dataset
(x_train, _), (_, _) = mnist.load_data()
# reshape data for CNN as (28, 28, 1) and normalize
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_train.shape
```
### Model Parameters
```
latent_size = 100
batch_size = 64
train_steps = 40000
lr = 2e-4
decay = 6e-8
input_shape = (image_size, image_size, 1)
```
### Generator Function
```
def build_generator(inputs, image_size):
"""Builds a generator model"""
image_resize = image_size // 4
kernel_size = 5
layer_filters = [128, 64, 32, 1]
x = Dense(image_resize * image_resize * layer_filters[0])(inputs)
x = Reshape((image_resize, image_resize, layer_filters[0]))(x)
for filters in layer_filters:
if filters > layer_filters[-2]:
strides = 2
else:
strides = 1
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2DTranspose(filters = filters,
kernel_size=kernel_size,
strides=strides,
padding='same')(x)
x = Activation('tanh')(x)
generator = Model(inputs, x , name='generator')
return generator
```
### Discriminator Function
```
def build_discriminator(inputs):
"""Build a Discriminator Model
Stack of LeakyReLU-Conv2D to discriminate real from fake.
The network does not converge with BN so it is not used here
unlike in [1] or original paper.
# Arguments
inputs (Layer): Input layer of the discriminator (the image)
# Returns
Model: Discriminator Model
"""
kernel_size = 5
layer_filters = [32, 64, 128, 256]
x = inputs
for filters in layer_filters:
if filters == layer_filters[-1]:
strides = 1
else:
strides = 2
x = LeakyReLU(alpha=0.2)(x)
x = Conv2D(filters=filters,
kernel_size=kernel_size,
strides=strides,
padding='same')(x)
x = Flatten()(x)
x = Dense(1)(x)
x = Activation('sigmoid')(x)
discriminator = Model(inputs, x, name='discriminator')
return discriminator
```
### Building Discriminator
```
inputs = Input(shape=input_shape, name='discriminator_input')
discriminator = build_discriminator(inputs)
optimizer = RMSprop(lr=lr, decay=decay)
discriminator.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
discriminator.summary()
```
### Building Generator
```
input_shape = (latent_size,)
inputs = Input(shape=input_shape, name='z_input')
generator = build_generator(inputs, image_size)
generator.summary()
```
### Building Adversarial Network
```
optimizer = RMSprop(lr=lr * 0.5, decay=decay * 0.5)
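# Freeze the discriminator's weights so only the generator is updated when training the combined adversarial model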
discriminator.trainable = False
adversarial = Model(inputs,
discriminator(generator(inputs)),
name='model')
adversarial.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
adversarial.summary()
```
### Training Function
```
def train(models, x_train, params):
generator, discriminator, adversarial = models
batch_size, latent_size, train_steps, model_name = params
save_interval = 500
noise_input = np.random.uniform(-1.0, 1.0, size=[16, latent_size])
train_size = x_train.shape[0]
for i in range(train_steps):
rand_indexes = np.random.randint(0, train_size, size=batch_size)
```
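The `train` function above samples a batch of indices at each step; a full DCGAN step then typically (1) trains the discriminator on a labeled mix of real and generated images and (2) trains the generator through the frozen `adversarial` model using "real" labels. A minimal sketch along those lines, assuming the models and parameter tuple defined earlier (the function name, logging interval, and file name are illustrative, not from the original):
```
import numpy as np

def train_sketch(models, x_train, params):
    """Minimal sketch of a DCGAN training loop (illustrative)."""
    generator, discriminator, adversarial = models
    batch_size, latent_size, train_steps, model_name = params
    train_size = x_train.shape[0]
    for i in range(train_steps):
        # 1) Train the discriminator on half real, half generated images
        rand_indexes = np.random.randint(0, train_size, size=batch_size)
        real_images = x_train[rand_indexes]
        noise = np.random.uniform(-1.0, 1.0, size=[batch_size, latent_size])
        fake_images = generator.predict(noise)
        x = np.concatenate((real_images, fake_images))
        y = np.ones([2 * batch_size, 1])
        y[batch_size:, :] = 0.0                      # generated images are labeled fake
        d_loss, d_acc = discriminator.train_on_batch(x, y)
        # 2) Train the generator through the adversarial model (discriminator frozen),
        #    asking it to make the discriminator call the fakes "real"
        noise = np.random.uniform(-1.0, 1.0, size=[batch_size, latent_size])
        y = np.ones([batch_size, 1])
        a_loss, a_acc = adversarial.train_on_batch(noise, y)
        if (i + 1) % 500 == 0:                       # illustrative logging interval
            print(f'{i + 1}: [D loss: {d_loss:.4f}, acc: {d_acc:.4f}] '
                  f'[A loss: {a_loss:.4f}, acc: {a_acc:.4f}]')
    generator.save(model_name + '.h5')               # assumes model_name is a file prefix
```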
# Classes and Objects II
- [Download the lecture notes](https://philchodrow.github.io/PIC16A/content/object_oriented_programming/classes_and_objects_II.ipynb).
In these notes, we'll further develop our skills with object-oriented programming. Our primary focus will be on the use of **magic methods** to perform custom operations on our objects.
## Example: Vectors
Let's implement a `Vector` class. `Vector`s should admit operations like addition, subtraction, and scalar multiplication. Perhaps surprisingly, Python doesn't really support this natively. For example:
```
# (1, 2) + (3, 4) = (4, 6)
(1, 2) + (3, 4)
```
So, let's do it ourselves! We'll focus on vectors with just two dimensions. We'll soon introduce the `numpy` module for working with vectors of arbitrary dimensions.
```
class Vector:
"""
Class for 2-dimensional vectors
Supports standard vector operations, including scalar multiplication and vector addition.
"""
def __init__(self, x, y):
self.x = x
self.y = y
def scalar_multiply(self, c):
"""
Return a Vector with components multiplied by c
"""
return(Vector(c*self.x, c*self.y))
v = Vector(1, 2)
u = v.scalar_multiply(2)
u.x, u.y
```
So far so good, but it's not all that easy to actually view the result of the scalar multiplication: printing `u` just shows a generic object reference. The `__str__` *magic method* lets us control how a `Vector` is represented as a string, so let's add one:
```
class Vector:
"""
Class for 2-dimensional vectors
Supports standard vector operations, including scalar multiplication and vector addition.
"""
def __init__(self, x, y):
self.x = x
self.y = y
def scalar_multiply(self, c):
"""
Return a Vector with components multiplied by c
"""
return(Vector(c*self.x, c*self.y))
def __str__(self):
return("Vector(" + str(self.x) + "," + str(self.y) + ")")
```
With `__str__` defined, printing a `Vector` (including the result of `scalar_multiply`) now shows its components nicely:
```
v = Vector(1,2)
print(v)
# ---
print(v.scalar_multiply(2))
# ---
class Vector:
"""
Class for 2-dimensional vectors
Supports standard vector operations, including scalar multiplication and vector addition.
"""
def __init__(self, x, y):
self.x = x
self.y = y
def scalar_multiply(self, c):
"""
Return a Vector with components multiplied by c
"""
return(Vector(c*self.x, c*self.y))
def __str__(self):
return("Vector(" + str(self.x) + "," + str(self.y) + ")")
def __add__(self, other):
return(Vector(self.x + other.x, self.y + other.y))
def __sub__(self, other):
return(Vector(self.x - other.x, self.y - other.y))
```
We can also add useful *binary operations*, like addition. The "magic" in "magic method" refers to the fact that Python will automatically use these methods when interpreting symbols like `+` and `*`. Often times, magic methods are extremely obvious to implement, and in this case it's ok not to document them.
There are MANY magic methods -- check out [this blog post](https://rszalski.github.io/magicmethods/) for a complete list. For now, let's just implement addition and subtraction.
```
u = Vector(1, 2)
v = Vector(1, 1)
print(u+v)
print(u-v)
# ---
```
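Since Python routes the `*` symbol through magic methods as well, we could go one step further and make `2*v` and `v*2` perform scalar multiplication by adding `__mul__` and `__rmul__`. This is an optional extension of the class above, sketched here for illustration:
```
class Vector:
    """
    Class for 2-dimensional vectors, extended so that * performs scalar multiplication.
    """
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __str__(self):
        return("Vector(" + str(self.x) + "," + str(self.y) + ")")
    def __add__(self, other):
        return(Vector(self.x + other.x, self.y + other.y))
    def __sub__(self, other):
        return(Vector(self.x - other.x, self.y - other.y))
    def __mul__(self, c):
        # v*c : multiply both components by the scalar c
        return(Vector(c*self.x, c*self.y))
    def __rmul__(self, c):
        # c*v : Python falls back to this when c.__mul__(v) isn't defined
        return(self.__mul__(c))

v = Vector(1, 2)
print(2*v)
print(v*2)
```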