path | concatenated_notebook
---|---|
Iris Dataset/Iris dataset.ipynb | ###Markdown
Load data
###Code
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
###Output
_____no_output_____
###Markdown
Peek at data
###Code
X[:5]
y[:5]
print(iris['DESCR'])
###Output
Iris Plants Database
====================
Notes
-----
Data Set Characteristics:
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:
============== ==== ==== ======= ===== ====================
Min Max Mean SD Class Correlation
============== ==== ==== ======= ===== ====================
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
============== ==== ==== ======= ===== ====================
:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris
The famous Iris database, first used by Sir R.A Fisher
This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
References
----------
- Fisher,R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...
###Markdown
Split into train and test sets
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=42)
import xgboost as xgb
dtrain = xgb.DMatrix(X_train, label = y_train)
dtest = xgb.DMatrix(X_test, label = y_test)
###Output
_____no_output_____
###Markdown
Set parameters
###Code
param = {
'max_depth': 4, # max depth of a tree
'eta': 0.4, # training step for an iteration
'silent': 1, # logging mode
'objective': 'multi:softprob', # error evaluation for multiclass
'num_class':3 # number of classes
}
num_round = 30 # number of iterations
###Output
_____no_output_____
###Markdown
Train
###Code
mdl = xgb.train(param, dtrain, num_round)
preds = mdl.predict(dtest)
preds[:5]
###Output
_____no_output_____
###Markdown
This is a matrix of probabilities. Select the class with the highest probability.
###Code
import numpy as np
best_preds = np.asarray([np.argmax(val) for val in preds])
###Output
_____no_output_____
###Markdown
Evaluate model
###Code
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, best_preds))
###Output
1.0
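###Markdown
Accuracy alone can mask per-class errors. As an optional check, a confusion matrix and per-class report over the same held-out predictions give more detail; a minimal sketch using the `y_test` and `best_preds` arrays defined above:
###Code
from sklearn.metrics import classification_report, confusion_matrix

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, best_preds))
print(classification_report(y_test, best_preds, target_names=iris['target_names']))
###Output
_____no_output_____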
###Markdown
Save model
###Code
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn; use joblib directly
joblib.dump(mdl, 'iris_model.pkl', compress = True)
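# Note: XGBoost's native serialization (mdl.save_model(path); xgb.Booster().load_model(path))
# is an alternative that avoids pickle/joblib version-compatibility issues across upgrades.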
###Output
_____no_output_____
###Markdown
Load model
###Code
mdl_loaded = joblib.load('iris_model.pkl')
(mdl_loaded.predict(dtest) == preds).all()
###Output
_____no_output_____ |
notebook/SWExpertAcademy/Course/ProgramingIntermediate/05_Queue1.ipynb | ###Markdown
Source: https://swexpertacademy.com/ (Session 7, Day 6: Baking Pizza)
###Code
# Make input file
f = open("input.txt", "w")
f.write("3\n")
f.write("3 5\n")
f.write("7 2 6 5 3\n")
f.write("5 10\n")
f.write("5 9 3 9 9 2 5 8 7 1\n")
f.write("5 10\n")
f.write("20 4 5 7 3 15 2 1 2 2\n")
f.close()
# for Jupyter Notebook
###
f = open("input.txt", "r")
input = f.readline
###
T = int(input())
for test_case in range(1, T+1):
nm = list(map(int, input().split()))
n, m = [nm[i] for i in range(2)]
cheese = list(map(int, input().split()))
pizzas = [[cheese[i], i+1] for i in range(len(cheese))]
oven = pizzas[:n]
ready_to_oven = pizzas[n:]
while(len(oven)!=1):
c, i = oven.pop(0)
        # the cheese on this pizza has fully melted
if (c//2 == 0):
            # if pizzas are still waiting for the oven, add the next one
if len(ready_to_oven):
oven.append(ready_to_oven.pop(0))
else:
oven.append([c//2, i])
last = oven.pop()
print("#{} {}".format(test_case, last[-1]))
###Output
|
notebooks/community/ml_ops/stage2/mlops_experimentation.ipynb | ###Markdown
E2E ML on GCP: MLOps stage 2: experimentation

Overview
This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2: experimentation.

Dataset
The dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare.

Objective
In this tutorial, you create an MLOps stage 2: experimentation process.

This tutorial uses the following Vertex AI services:
- `Vertex AI Datasets`
- `Vertex AI Models`
- `Vertex AI AutoML`
- `Vertex AI Training`
- `Vertex AI TensorBoard`
- `Vertex AI Vizier`
- `Vertex AI Batch Prediction`

The steps performed include:
- Review the `Dataset` resource created during stage 1.
- Train an AutoML tabular binary classifier model in the background.
- Build the experimental model architecture.
- Construct a custom training package for the `Dataset` resource.
- Test the custom training package locally.
- Test the custom training package in the cloud with Vertex AI Training.
- Hyperparameter tune the model training with Vertex AI Vizier.
- Train the custom model with Vertex AI Training.
- Add a serving function for online/batch prediction to the custom model.
- Test the custom model with the serving function.
- Evaluate the custom model using Vertex AI Batch Prediction.
- Wait for the AutoML training job to complete.
- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.
- Set the evaluation results of the AutoML model as the baseline.
- If the evaluation of the custom model is below the baseline, continue to experiment with the custom model.
- If the evaluation of the custom model is above the baseline, save the model as the first best model.

Recommendations
When doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended:
- Determine a baseline evaluation using AutoML.
- Design and build a model architecture.
- Upload the untrained model architecture as a Vertex AI Model resource.
- Construct a training package that can be run locally and as a Vertex AI Training job.
- Decompose the training package into: data, model, train and task Python modules.
- Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource.
- Obtain the location of the model artifacts from the Vertex AI Model resource.
- Include in the training package initializing a Vertex AI Experiment and corresponding run.
- Log hyperparameters and training parameters for the experiment.
- Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option.
- Test the training package locally with a small number of epochs.
- Test the training package with Vertex AI Training.
- Do hyperparameter tuning with Vertex AI Hyperparameter Tuning.
- Do full training of the custom model with Vertex AI Training.
- Log the hyperparameter values for the experiment/run.
- Evaluate the custom model.
  - Single evaluation slice, same metrics as AutoML.
    - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training.
  - Custom evaluation slices, custom metrics.
    - Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both AutoML and custom model.
    - Perform custom metrics on the results from the batch job.
- Compare custom model metrics against the AutoML baseline.
  - If less than the baseline, then continue to experiment.
  - If greater than the baseline, then upload the model as the new baseline and save the evaluation results with the model.

Installations
Install *one time* the packages for executing the MLOps notebooks.
###Code
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
! pip3 install --upgrade torchvision $USER_FLAG
! pip3 install --upgrade rpy2 $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com). 1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1"
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Vertex AI Workbench Notebooks**, your environment is alreadyauthenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your service account from gcloud
if not IS_COLAB:
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
if IS_COLAB:
shell_output = ! gcloud projects describe $PROJECT_ID
project_number = shell_output[-1].split(":")[1].strip().replace("'", "")
SERVICE_ACCOUNT = f"{project_number}[email protected]"
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware accelerators
You can set hardware accelerators for training and prediction.

Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4). Otherwise specify `(None, None)` to use a container image to run on a CPU.

Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locationsaccelerators).

*Note*: TF releases before 2.3 with GPU support will fail to load the custom model in this tutorial. It is a known issue, fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
import os
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine type
Next, set the machine type to use for training and prediction.
- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
  - `machine type`
    - `n1-standard`: 3.75GB of memory per vCPU.
    - `n1-highmem`: 6.5GB of memory per vCPU.
    - `n1-highcpu`: 0.9GB of memory per vCPU.
  - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96\]

*Note: The following is not supported for training:*
- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs

*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
try:
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
except:
print("no metadata")
###Output
_____no_output_____
###Markdown
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.

Create training pipeline
An AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the `TrainingJob` resource.
- `optimization_prediction_type`: The type of task to train the model for.
  - `classification`: A tabular classification model.
  - `regression`: A tabular regression model.
- `column_transformations`: (Optional) Transformations to apply to the input columns.
- `optimization_objective`: The optimization objective to minimize or maximize.
  - binary classification:
    - `minimize-log-loss`
    - `maximize-au-roc`
    - `maximize-au-prc`
    - `maximize-precision-at-recall`
    - `maximize-recall-at-precision`
  - multi-class classification:
    - `minimize-log-loss`
  - regression:
    - `minimize-rmse`
    - `minimize-mae`
    - `minimize-rmsle`

The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method `run`, with the following parameters:
- `dataset`: The `Dataset` resource to train the model.
- `model_display_name`: The human readable name for the trained model.
- `training_fraction_split`: The percentage of the dataset to use for training.
- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).
- `validation_fraction_split`: The percentage of the dataset to use for validation.
- `target_column`: The name of the column to train as the label.
- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).
- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.

The `run` method, when completed, returns the `Model` resource.

The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadata
Set up tracking of the parameters (configuration) and metrics (results) for each experiment:
- `aip.init()` - Create an experiment instance
- `aip.start_run()` - Track a specific run within the experiment.

Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for your custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Vertex AI Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"base_model": "1"},
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n 'tensorflow==2.5',\n\n 'tensorflow_data_validation==1.2',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.Dataset
Next, you load the gzip TFRecords from Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using `Vertex Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed input
Next, test the model architecture with a sample of the transformed training input.

*Note:* Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted values to be close to 0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
''' Compile the model '''
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def warmup(
model,
hyperparams,
train_data_dir,
label_column,
transformed_feature_spec
):
''' Warmup the initialized model weights '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
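    # Linearly ramp the learning rate from start_learning_rate up to
    # end_learning_rate over num_epochs epochs.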
lr_inc = (hyperparams['end_learning_rate'] - hyperparams['start_learning_rate']) / hyperparams['num_epochs']
def scheduler(epoch, lr):
if epoch == 0:
return hyperparams['start_learning_rate']
return lr + lr_inc
callbacks = [tf.keras.callbacks.LearningRateScheduler(scheduler)]
logging.info("Model warmup started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
steps_per_epoch=hyperparams["steps"],
callbacks=callbacks
)
logging.info("Model warmup completed.")
return history
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
''' Train the model '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
callbacks = [early_stop]
if log_dir:
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
        callbacks.append(tensorboard)
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
if not callbacks:
callbacks = []
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `warmup()`: Warmup the initialized model weights.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.01
aip.log_params(hyperparams)
train.compile(model, hyperparams)
warmupparams = {}
warmupparams["start_learning_rate"] = 0.0001
warmupparams["end_learning_rate"] = 0.01
warmupparams["num_epochs"] = 4
warmupparams["batch_size"] = 64
warmupparams["steps"] = 50
aip.log_params(warmupparams)
train.warmup(
model, warmupparams, train_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training package
Next, you create the `task.py` script for driving the training package. Some notable steps include:
- Command-line arguments:
  - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture.
  - `dataset-id`: The resource ID of the `Dataset` resource to use for training.
  - `experiment`: The name of the experiment.
  - `run`: The name of the run within this experiment.
  - `tensorboard-logdir`: The logging directory for Vertex AI Tensorboard.
- `get_data()`:
  - Loads the Dataset resource into memory.
  - Obtains the user metadata from the Dataset resource.
  - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.
- `get_model()`:
  - Loads the Model resource into memory.
  - Obtains the location of the model artifacts of the model architecture.
  - Loads the model architecture.
  - Compiles the model.
- `warmup_model()`:
  - Warms up the initialized model weights.
- `train_model()`:
  - Trains the model.
- `evaluate_model()`:
  - Evaluates the model.
  - Saves evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
try:
from trainer import serving
except:
pass
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--start_lr', dest='start_lr',
default=0.0001, type=float,
help='Starting learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default=os.getenv('AIP_TENSORBOARD_LOG_DIR'), type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--serving', dest='serving',
default=False, type=bool,
help='Whether to attach the serving function')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
parser.add_argument('--warmup', dest='warmup',
default=False, type=bool,
help='Whether to perform warmup weight initialization')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
logging.info("Initialize experiment: {}".format(args.experiment))
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
metadata = {}
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec, metadata
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
global model_artifacts
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
metadata.update(hyperparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.compile(model, hyperparams)
return model
def warmup_model(model):
''' Warmup the initialized model weights '''
warmupparams = {}
warmupparams["num_epochs"] = args.epochs
warmupparams["batch_size"] = args.batch_size
warmupparams["steps"] = args.steps
warmupparams["start_learning_rate"] = args.start_lr
warmupparams["end_learning_rate"] = args.lr
train.warmup(model, warmupparams, train_data_file_pattern, label_column, transform_feature_spec)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
metadata.update(trainparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
metadata.update({'metrics': metrics})
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
get_data()
with strategy.scope():
model = get_model()
if args.warmup:
model = warmup_model(model)
else:
model = train_model(model)
if args.evaluate:
evaluate_model(model)
if args.serving:
logging.info('Save serving model to: ' + args.model_dir)
serving.construct_serving_model(
model=model,
serving_model_dir=args.model_dir,
metadata=metadata
)
elif args.warmup:
logging.info('Save warmed up model to: ' + model_artifacts)
model.save(model_artifacts)
else:
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Warmup trainingNow that you have tested the training scripts, you perform warmup training on the base model. Warmup training is used to stabilize the weight initialization. By doing so, each subsequent training and tuning of the model architecture will start with the same stabilized weight initialization.
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --project={PROJECT_ID} --epochs=5 --steps=300 --batch_size=16 --lr=0.01 --start_lr=0.0001 --model-dir={MODEL_DIR} --warmup=True
###Output
_____no_output_____
###Markdown
Mirrored Strategy
When training on a single VM, you can train with a single compute device or with multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU, GPU.

Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you take the following additional steps in your Python training script (a short illustrative sketch of these steps is shown below):
1. Create the `tf.distribute.MirroredStrategy`.
2. Compile the model within the scope of the strategy. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.
3. Increase the batch size for each compute device to num_devices * batch size.

During training, the distribution of batches is synchronized, as are the updates to the model parameters.

Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.

Create custom training job
A custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the custom training job.
- `container_uri`: The training container image.
- `python_package_gcs_uri`: The location of the Python training package as a tarball.
- `python_module_name`: The relative path to the training script in the Python package.
- `model_serving_container_image_uri`: The container image for deploying the model.

*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
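The next cell is a minimal, illustrative sketch of the MirroredStrategy steps listed above, using a toy one-layer model rather than the Chicago Taxi model; the packaged `trainer/task.py` applies the same pattern.
###Code
# Illustrative sketch only (toy model); trainer/task.py follows the same pattern.
import tensorflow as tf

sketch_strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", sketch_strategy.num_replicas_in_sync)

# Step 3: scale the per-device batch size to a global batch size.
PER_DEVICE_BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = PER_DEVICE_BATCH_SIZE * sketch_strategy.num_replicas_in_sync

with sketch_strategy.scope():
    # Steps 1-2: variables created in this scope (weights, optimizer slots)
    # are mirrored across all compute devices on the VM.
    sketch_model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
    sketch_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
With that pattern in mind, create the custom training job for the packaged trainer.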
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training job
Next, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.

*Note:* The parameter service_account is set so that the initializing experiment step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Delete the modelThe method 'delete()' will delete the model.
###Code
model.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for both hyperparameter tuning, as well as local testing and full cloud training:- Command-Line: - `tuning`: indicates to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments.
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning job
Use the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:
- `display_name`: A human readable name for the custom job.
- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.
- `metric_spec`: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric ('minimize' or 'maximize').
- `parameter_spec`: The parameters to optimize. The dictionary key is the parameter name, which is passed into your training job as a command line keyword argument, and the dictionary value is the parameter specification of the metric.
- `search_algorithm`: The search algorithm to use: `grid`, `random`, or `None`. If `None` is specified, the `Vizier` service (Bayesian) is used.
- `max_trial_count`: The maximum number of trials to perform.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
# The tuning goal is to minimize val_loss, so the best trial is the one with the lowest metric value.
best = (None, None, None, float("inf"))
for trial in hpt_job.trials:
    # Keep track of the best outcome
    if float(trial.final_measurement.metrics[0].value) < best[3]:
        try:
            best = (
                trial.id,
                float(trial.parameters[0].value),
                float(trial.parameters[1].value),
                float(trial.final_measurement.metrics[0].value),
            )
        except:
            best = (
                trial.id,
                float(trial.parameters[0].value),
                None,
                float(trial.final_measurement.metrics[0].value),
            )
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method 'delete()' will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
# The assignments below assume the trial reports batch_size as its first parameter and lr as its second,
# i.e. best = (trial_id, batch_size, lr, val_loss).
LR = best[2]
BATCH_SIZE = int(best[1])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter service_account is set so that the initializing experiment step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you use the experiment name as a parameter to the method `get_experiment_df()` to get the results of the experiment as a pandas dataframe.
###Code
EXPERIMENT_NAME = "chicago"
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
###Output
_____no_output_____
###Markdown
Delete the TensorBoard instanceNext, delete the TensorBoard instance.
###Code
tensorboard.delete()
# Keep a reference to the Vertex AI Model resource, then reload the trained Keras model artifacts
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model.
###Code
%%writefile custom/trainer/serving.py
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import logging
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(
model, serving_model_dir, metadata
):
global features
schema_location = metadata['schema']
features = metadata['numeric_features'] + metadata['categorical_features'] + metadata['embedding_features']
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model saving started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model saving completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
os.chdir("custom")
from trainer import serving
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
serving.construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Vertex AI Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"user_metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For an apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend:- Send each evaluation slice as a Vertex AI Batch Prediction Job.- Use a custom evaluation script to evaluate the results from the batch prediction job (a minimal sketch of such a script follows the batch prediction results below).
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, we just display the results of the batch prediction.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
# Switch to the AutoML model that has been training in the background
model = async_model
###Output
_____no_output_____
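###Markdown
As a minimal sketch of such a custom evaluation script: the cell below assumes each line of the batch prediction results file is a JSON object of the form `{"instance": {...}, "prediction": ...}` and that the exported JSONL instances still carry the label column (assumed here to be `tip_bin`); adjust the field names to match your actual export and serving output.
###Code
import json
import tensorflow as tf
def custom_accuracy(results_uri, label_key="tip_bin", threshold=0.5):
    """Sketch: compute accuracy from a batch prediction JSONL results file."""
    correct = total = 0
    with tf.io.gfile.GFile(results_uri, "r") as f:
        for line in f:
            if not line.strip():
                continue
            record = json.loads(line)
            pred = record["prediction"]
            # The serving signature returns {"scores": [...]}; fall back to a bare list if needed.
            score = float(pred["scores"][0]) if isinstance(pred, dict) else float(pred[0])
            label = int(record["instance"][label_key])
            correct += int((score >= threshold) == bool(label))
            total += 1
    return correct / max(total, 1)
# Example usage with the results file listed in the previous cell:
# print("custom accuracy:", custom_accuracy(results))
###Output
_____no_output_____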
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model training has finished, you can review the evaluation scores for it using the `list_model_evaluations()` method. This method will return an iterator for each evaluation slice.
###Code
model_evaluations = model.list_model_evaluations()
for model_evaluation in model_evaluations:
print(model_evaluation.to_dict())
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you decide whether the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the weighted results and determine whether the custom model is better (a simple weighted-sum sketch follows the next cell). Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
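###Markdown
As a minimal, hypothetical sketch of the comparison step above: the cell below assumes you have already collected per-slice metrics for the AutoML baseline and the custom model into dictionaries. The slice names, metric values, and weights shown are placeholders, not outputs of this notebook.
###Code
# Hypothetical per-slice metrics and business weights -- replace with your own evaluation results.
automl_metrics = {"overall": 0.91, "weekend_trips": 0.88}
custom_metrics = {"overall": 0.92, "weekend_trips": 0.86}
weights = {"overall": 0.7, "weekend_trips": 0.3}
def weighted_score(metrics, weights):
    """Weighted sum of per-slice metrics (higher is better)."""
    return sum(weights[name] * value for name, value in metrics.items())
baseline = weighted_score(automl_metrics, weights)
candidate = weighted_score(custom_metrics, weights)
print("AutoML baseline:", baseline, "Custom model:", candidate)
if candidate > baseline:
    print("Custom model beats the baseline: save it as the new best model.")
else:
    print("Below the baseline: continue experimenting.")
###Output
_____no_output_____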
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2 : experimentation View on GitHub Open in Vertex AI Workbench OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create a MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI:- `Vertex AI Datasets`- `Vertex AI Models`- `Vertex AI AutoML`- `Vertex AI Training`- `Vertex AI TensorBoard`- `Vertex AI Vizier`- `Vertex AI Batch Prediction`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Build the experimental model architecture.- Construct a custom training package for the `Dataset` resource.- Test the custom training package locally.- Test the custom training package in the cloud with Vertex AI Training.- Hyperparameter tune the model training with Vertex AI Vizier.- Train the custom model with Vertex AI Training.- Add a serving function for online/batch prediction to the custom model.- Test the custom model with the serving function.- Evaluate the custom model using Vertex AI Batch Prediction- Wait for the AutoML training job to complete.- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.- Set the evaluation results of the AutoML model as the baseline.- If the evaluation of the custom model is below baseline, continue to experiment with the custom model.- If the evaluation of the custom model is above baseline, save the model as the first best model. RecommendationsWhen doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: - Determine a baseline evaluation using AutoML. - Design and build a model architecture. - Upload the untrained model architecture as a Vertex AI Model resource. - Construct a training package that can be ran locally and as a Vertex AI Training job. - Decompose the training package into: data, model, train and task Python modules. - Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. - Obtain the location of the model artifacts from the Vertex AI Model resource. - Include in the training package initializing a Vertex AI Experiment and corresponding run. - Log hyperparameters and training parameters for the experiment. - Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. - Test the training package locally with a small number of epochs. - Test the training package with Vertex AI Training. - Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. - Do full training of the custom model with Vertex AI Training. - Log the hyperparameter values for the experiment/run. - Evaluate the custom model. 
- Single evaluation slice, same metrics as AutoML - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training - Custom evaluation slices, custom metrics - Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both AutoML and custom model - Perform custom metrics on the results from the batch job - Compare custom model metrics against the AutoML baseline - If less than baseline, then continue to experiment - If greater than baseline, then upload model as the new baseline and save evaluation results with the model. InstallationsInstall the packages required for executing the MLOps notebooks *one time only*.
###Code
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
! pip3 install --upgrade torchvision $USER_FLAG
! pip3 install --upgrade rpy2 $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware acceleratorsYou can set hardware accelerators for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)Otherwise specify `(None, None)` to use a container image to run on a CPU.Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine typeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
try:
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
except:
print("no metadata")
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional): Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in units of milli node hours (1,000 = one node hour).- `disable_early_stopping`: If `False`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method when completed returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadataSet up tracking of the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Create an experiment instance- `aip.start_run()` - Track a specific run within the experiment.Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for your custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Vertex AI Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"base_model": "1"},
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n 'tensorflow_data_validation==1.2',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords on Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using `Vertex Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed inputNext, test the model architecture with a sample of the transformed training input.*Note:* Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted results ~0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
''' Compile the model '''
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def warmup(
model,
hyperparams,
train_data_dir,
label_column,
transformed_feature_spec
):
''' Warmup the initialized model weights '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
lr_inc = (hyperparams['end_learning_rate'] - hyperparams['start_learning_rate']) / hyperparams['num_epochs']
def scheduler(epoch, lr):
if epoch == 0:
return hyperparams['start_learning_rate']
return lr + lr_inc
callbacks = [tf.keras.callbacks.LearningRateScheduler(scheduler)]
logging.info("Model warmup started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
steps_per_epoch=hyperparams["steps"],
callbacks=callbacks
)
logging.info("Model warmup completed.")
return history
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
''' Train the model '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
callbacks = [early_stop]
if log_dir:
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
        # Note: list.append() returns None, so append in place rather than reassigning
        callbacks.append(tensorboard)
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
if not callbacks:
callbacks = []
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `warmup()`: Warmup the initialized model weights.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.01
aip.log_params(hyperparams)
train.compile(model, hyperparams)
warmupparams = {}
warmupparams["start_learning_rate"] = 0.0001
warmupparams["end_learning_rate"] = 0.01
warmupparams["num_epochs"] = 4
warmupparams["batch_size"] = 64
warmupparams["steps"] = 50
aip.log_params(warmupparams)
train.warmup(
model, warmupparams, train_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training packageNext, you create the `task.py` script for driving the training package. Some notable steps include:- Command-line arguments: - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture. - `dataset-id`: The resource ID of the `Dataset` resource to use for training. - `experiment`: The name of the experiment. - `run`: The name of the run within this experiment. - `tensorboard-log-dir`: The logging directory for Vertex AI TensorBoard.- `get_data()`: - Loads the Dataset resource into memory. - Obtains the user metadata from the Dataset resource. - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.- `get_model()`: - Loads the Model resource into memory. - Obtains location of model artifacts of the model architecture. - Loads the model architecture. - Compiles the model.- `warmup_model()`: - Warms up the initialized model weights.- `train_model()`: - Trains the model.- `evaluate_model()`: - Evaluates the model. - Saves evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
try:
from trainer import serving
except:
pass
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--start_lr', dest='start_lr',
default=0.0001, type=float,
help='Starting learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default=os.getenv('AIP_TENSORBOARD_LOG_DIR'), type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--serving', dest='serving',
default=False, type=bool,
help='Whether to attach the serving function')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
parser.add_argument('--warmup', dest='warmup',
default=False, type=bool,
help='Whether to perform warmup weight initialization')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
logging.info("Initialize experiment: {}".format(args.experiment))
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
metadata = {}
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec, metadata
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
global model_artifacts
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
metadata.update(hyperparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.compile(model, hyperparams)
return model
def warmup_model(model):
''' Warmup the initialized model weights '''
warmupparams = {}
warmupparams["num_epochs"] = args.epochs
warmupparams["batch_size"] = args.batch_size
warmupparams["steps"] = args.steps
warmupparams["start_learning_rate"] = args.start_lr
warmupparams["end_learning_rate"] = args.lr
train.warmup(model, warmupparams, train_data_file_pattern, label_column, transform_feature_spec)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
metadata.update(trainparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
metadata.update({'metrics': metrics})
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
get_data()
with strategy.scope():
model = get_model()
if args.warmup:
model = warmup_model(model)
else:
model = train_model(model)
if args.evaluate:
evaluate_model(model)
if args.serving:
logging.info('Save serving model to: ' + args.model_dir)
serving.construct_serving_model(
model=model,
serving_model_dir=args.model_dir,
metadata=metadata
)
elif args.warmup:
logging.info('Save warmed up model to: ' + model_artifacts)
model.save(model_artifacts)
else:
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Warmup trainingNow that you have tested the training scripts, you perform warmup training on the base model. Warmup training is used to stabilize the weight initialization. By doing so, each subsequent training and tuning of the model architecture will start with the same stabilized weight initialization.
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --project={PROJECT_ID} --epochs=5 --steps=300 --batch_size=16 --lr=0.01 --start_lr=0.0001 --model-dir={MODEL_DIR} --warmup=True
###Output
_____no_output_____
###Markdown
Mirrored StrategyWhen training on a single VM, one can either train with a single compute device or with multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU, GPU.Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script (a minimal sketch appears after the next code cell):1. Set the tf.distribute.MirroredStrategy.2. Compile the model within the scope of tf.distribute.MirroredStrategy. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.3. Increase the global batch size to num_devices * per-device batch size.During training, the distribution of batches will be synchronized as well as the updates to the model parameters. Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
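###Markdown
Returning to the Mirrored Strategy steps listed above, here is a minimal, self-contained sketch of how the three steps fit together. The toy model and per-replica batch size are placeholders, not part of the training package; the actual strategy setup used in this tutorial lives in `trainer/task.py`.
###Code
import tensorflow as tf
# 1. Create the MirroredStrategy (one replica per visible compute device).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)
# 2. Build and compile the model inside the strategy scope so its variables are mirrored.
with strategy.scope():
    toy_model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
    toy_model.compile(optimizer="adam", loss="binary_crossentropy")
# 3. Scale the global batch size by the number of replicas.
PER_REPLICA_BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync
print("Global batch size:", GLOBAL_BATCH_SIZE)
###Output
_____no_output_____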
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter service_account is set so that the initializing experiment step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Delete the modelThe method 'delete()' will delete the model.
###Code
model.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for both hyperparameter tuning, as well as local testing and full cloud training:- Command-Line: - `tuning`: indicates to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments.
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
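###Markdown
As noted above, each tuning trial receives the values chosen by the service as extra command-line flags (for example, `--lr=0.03 --batch_size=64`) and reports the optimization metric back through the `cloudml-hypertune` package, which is what the training package's `--tuning=True` path does. The cell below is a minimal sketch of that round trip, with a hypothetical validation loss standing in for a real training run.
###Code
import argparse

from hypertune import HyperTune

# Parse the flags the tuning service appends for each trial (example values shown).
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)
parser.add_argument("--batch_size", type=int, default=16)
trial_args, _ = parser.parse_known_args(["--lr=0.03", "--batch_size=64"])

# ... train with trial_args.lr and trial_args.batch_size, then measure val_loss ...
val_loss = 0.42  # hypothetical result, for illustration only

# Report the metric that the HyperparameterTuningJob optimizes ("val_loss").
hpt = HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag="val_loss",
    metric_value=val_loss,
    global_step=1,
)
###Output
_____no_output_____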
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning jobUse the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:- `display_name`: A human readable name for the custom job.- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.- `metric_spec`: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric ('minimize' or 'maximize').- `parameter_spec`: The parameters to optimize. The dictionary key is the parameter name, which is passed into your training job as a command-line keyword argument, and the dictionary value is the parameter specification.- `search_algorithm`: The search algorithm to use: `grid`, `random`, or `None`. If `None` is specified, the `Vizier` service (Bayesian) is used.- `max_trial_count`: The maximum number of trials to perform.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
best = (None, None, None, float("inf"))
for trial in hpt_job.trials:
    # Keep track of the best outcome: the objective is to minimize val_loss, so lower is better
    if float(trial.final_measurement.metrics[0].value) < best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method 'delete()' will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
LR = best[2]
BATCH_SIZE = int(best[1])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter service_account is set so that the initializing experiment step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you retrieve the results of the experiment as a pandas dataframe with the method `get_experiment_df()`, and then filter the dataframe down to the rows for your experiment name.
###Code
EXPERIMENT_NAME = "chicago"
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
###Output
_____no_output_____
###Markdown
Delete the TensorBoard instanceNext, delete the TensorBoard instance.
###Code
tensorboard.delete()
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model.
###Code
%%writefile custom/trainer/serving.py
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import logging
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(
model, serving_model_dir, metadata
):
global features
schema_location = metadata['schema']
features = metadata['numeric_features'] + metadata['categorical_features'] + metadata['embedding_features']
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model saving started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model saving completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
os.chdir("custom")
from trainer import serving
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
serving.construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Vertex AI Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"user_metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For an apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend:- Send each evaluation slice as a Vertex AI Batch Prediction Job.- Use a custom evaluation script to evaluate the results from the batch prediction job.
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, we just display the results of the batch prediction.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
model = async_model
###Output
_____no_output_____
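###Markdown
A custom evaluation script would then parse the prediction files shown above and score them against the target labels. The cell below is a sketch that assumes each results line is a JSON object with an `instance` field containing the label column and a `prediction` field with the model's output; adjust the field names and label handling to match your actual batch prediction output.
###Code
import json

import tensorflow as tf

LABEL_COLUMN = "tip_bin"  # label column used in this tutorial

correct = total = 0
with tf.io.gfile.GFile(results, "r") as f:
    for line in f:
        record = json.loads(line)
        # Assumed schema: {"instance": {...}, "prediction": {"scores": [score]}} or a bare list of scores.
        label = int(record["instance"][LABEL_COLUMN])
        prediction = record["prediction"]
        score = float(prediction["scores"][0]) if isinstance(prediction, dict) else float(prediction[0])
        predicted = 1 if score >= 0.5 else 0
        correct += int(predicted == label)
        total += 1

print("custom accuracy:", correct / total if total else None)
###Output
_____no_output_____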
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you ran the training job, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you decide whether the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the weighted results and determine whether the custom model is better. Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
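###Markdown
The comparison logic itself is left to you. The cell below is a hypothetical sketch of the weighted comparison described above; the slice names, metric values, and weights are made up purely for illustration.
###Code
# Hypothetical per-slice metrics (higher is better) and business weights.
custom_metrics = {"all": 0.91, "short_trips": 0.88, "long_trips": 0.86}
automl_metrics = {"all": 0.90, "short_trips": 0.89, "long_trips": 0.84}
weights = {"all": 0.5, "short_trips": 0.3, "long_trips": 0.2}

# Weighted difference: positive means the custom model beats the AutoML baseline.
weighted_delta = sum(
    weights[slice_name] * (custom_metrics[slice_name] - automl_metrics[slice_name])
    for slice_name in weights
)

if weighted_delta > 0:
    print("Custom model beats the AutoML baseline; save it as the new best model.")
else:
    print("Custom model is below the baseline; continue experimenting.")
###Output
_____no_output_____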
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2 : experimentation View on GitHub Open in Google Cloud Notebooks OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create a MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI:- `Vertex AI Datasets`- `Vertex AI Models`- `Vertex AI AutoML`- `Vertex AI Training`- `Vertex AI TensorBoard`- `Vertex AI Vizier`- `Vertex AI Batch Prediction`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Build the experimental model architecture.- Construct a custom training package for the `Dataset` resource.- Test the custom training package locally.- Test the custom training package in the cloud with Vertex AI Training.- Hyperparameter tune the model training with Vertex AI Vizier.- Train the custom model with Vertex AI Training.- Add a serving function for online/batch prediction to the custom model.- Test the custom model with the serving function.- Evaluate the custom model using Vertex AI Batch Prediction- Wait for the AutoML training job to complete.- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.- Set the evaluation results of the AutoML model as the baseline.- If the evaluation of the custom model is below baseline, continue to experiment with the custom model.- If the evaluation of the custom model is above baseline, save the model as the first best model. RecommendationsWhen doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: - Determine a baseline evaluation using AutoML. - Design and build a model architecture. - Upload the untrained model architecture as a Vertex AI Model resource. - Construct a training package that can be ran locally and as a Vertex AI Training job. - Decompose the training package into: data, model, train and task Python modules. - Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. - Obtain the location of the model artifacts from the Vertex AI Model resource. - Include in the training package initializing a Vertex AI Experiment and corresponding run. - Log hyperparameters and training parameters for the experiment. - Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. - Test the training package locally with a small number of epochs. - Test the training package with Vertex AI Training. - Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. - Do full training of the custom model with Vertex AI Training. - Log the hyperparameter values for the experiment/run. - Evaluate the custom model. 
- Single evaluation slice, same metrics as AutoML - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training - Custom evaluation slices, custom metrics - Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both AutoML and custom model - Perform custom metrics on the results from the batch job - Compare custom model metrics against the AutoML baseline - If less than baseline, then continue to experiment - If greater than baseline, then upload the model as the new baseline and save the evaluation results with the model. InstallationsInstall the packages for executing the MLOps notebooks *one time*.
###Code
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
! pip3 install --upgrade torchvision $USER_FLAG
! pip3 install --upgrade rpy2 $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using the `gcloud` command by executing the second cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware acceleratorsYou can set hardware accelerators for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)Otherwise specify `(None, None)` to use a container image to run on a CPU.Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine typeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
try:
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
except:
print("no metadata")
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional) Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `False`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method, when completed, returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadataSet up tracking of the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Create an experiment instance.- `aip.start_run()` - Track a specific run within the experiment.Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for your custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Vertex AI Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"base_model": "1"},
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n 'tensorflow_data_validation==1.2',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords on Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using `Vertex Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed inputNext, test the model architecture with a sample of the transformed training input.*Note:* Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted results to be ~0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
''' Compile the model '''
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def warmup(
model,
hyperparams,
train_data_dir,
label_column,
transformed_feature_spec
):
''' Warmup the initialized model weights '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
lr_inc = (hyperparams['end_learning_rate'] - hyperparams['start_learning_rate']) / hyperparams['num_epochs']
def scheduler(epoch, lr):
if epoch == 0:
return hyperparams['start_learning_rate']
return lr + lr_inc
callbacks = [tf.keras.callbacks.LearningRateScheduler(scheduler)]
logging.info("Model warmup started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
steps_per_epoch=hyperparams["steps"],
callbacks=callbacks
)
logging.info("Model warmup completed.")
return history
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
''' Train the model '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
callbacks = [early_stop]
    if log_dir:
        tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
        # list.append() returns None, so append in place rather than reassigning callbacks
        callbacks.append(tensorboard)
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
if not callbacks:
callbacks = []
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `warmup()`: Warmup the initialized model weights.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.01
aip.log_params(hyperparams)
train.compile(model, hyperparams)
warmupparams = {}
warmupparams["start_learning_rate"] = 0.0001
warmupparams["end_learning_rate"] = 0.01
warmupparams["num_epochs"] = 4
warmupparams["batch_size"] = 64
warmupparams["steps"] = 50
aip.log_params(warmupparams)
train.warmup(
model, warmupparams, train_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training packageNext, you create the `task.py` script for driving the training package. Some notable steps include:- Command-line arguments: - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture. - `dataset-id`: The resource ID of the `Dataset` resource to use for training. - `experiment`: The name of the experiment. - `run`: The name of the run within this experiment. - `tensorboard-logdir`: The logging directory for Vertex AI Tensorboard.- `get_data()`: - Loads the Dataset resource into memory. - Obtains the user metadata from the Dataset resource. - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.- `get_model()`: - Loads the Model resource into memory. - Obtains the location of the model artifacts of the model architecture. - Loads the model architecture. - Compiles the model.- `warmup_model()`: - Warms up the initialized model weights.- `train_model()`: - Trains the model.- `evaluate_model()`: - Evaluates the model. - Saves the evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
try:
from trainer import serving
except:
pass
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--start_lr', dest='start_lr',
default=0.0001, type=float,
help='Starting learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default=os.getenv('AIP_TENSORBOARD_LOG_DIR'), type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--serving', dest='serving',
default=False, type=bool,
help='Whether to attach the serving function')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
parser.add_argument('--warmup', dest='warmup',
default=False, type=bool,
help='Whether to perform warmup weight initialization')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
logging.info("Initialize experiment: {}".format(args.experiment))
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
metadata = {}
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec, metadata
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
global model_artifacts
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
metadata.update(hyperparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.compile(model, hyperparams)
return model
def warmup_model(model):
''' Warmup the initialized model weights '''
warmupparams = {}
warmupparams["num_epochs"] = args.epochs
warmupparams["batch_size"] = args.batch_size
warmupparams["steps"] = args.steps
warmupparams["start_learning_rate"] = args.start_lr
warmupparams["end_learning_rate"] = args.lr
train.warmup(model, warmupparams, train_data_file_pattern, label_column, transform_feature_spec)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
metadata.update(trainparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
metadata.update({'metrics': metrics})
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
get_data()
with strategy.scope():
model = get_model()
if args.warmup:
model = warmup_model(model)
else:
model = train_model(model)
if args.evaluate:
evaluate_model(model)
if args.serving:
logging.info('Save serving model to: ' + args.model_dir)
serving.construct_serving_model(
model=model,
serving_model_dir=args.model_dir,
metadata=metadata
)
elif args.warmup:
logging.info('Save warmed up model to: ' + model_artifacts)
model.save(model_artifacts)
else:
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Warmup trainingNow that you have tested the training scripts, you perform warmup training on the base model. Warmup training is used to stabilize the weight initialization. By doing so, each subsequent training and tuning of the model architecture will start with the same stabilized weight initialization.
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --project={PROJECT_ID} --epochs=5 --steps=300 --batch_size=16 --lr=0.01 --start_lr=0.0001 --model-dir={MODEL_DIR} --warmup=True
###Output
_____no_output_____
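###Markdown
The `train.warmup()` function called above is part of the training package and is not shown in this notebook. As a rough illustration only, the cell below sketches what such a warmup routine could look like, assuming a linear learning rate ramp from `start_learning_rate` to `end_learning_rate` over `steps` steps; the names `warmup_sketch` and the schedule choice are illustrative and the actual implementation in your package may differ.
###Code
# Hypothetical sketch of a warmup routine (not the package's actual implementation).
import tensorflow as tf
def warmup_sketch(model, warmupparams, train_dataset):
    """Ramp the learning rate linearly while training for a limited number of steps."""
    start_lr = warmupparams["start_learning_rate"]
    end_lr = warmupparams["end_learning_rate"]
    steps = warmupparams["steps"]
    # Linear schedule from start_lr to end_lr over `steps` steps (power=1.0 makes it linear).
    schedule = tf.keras.optimizers.schedules.PolynomialDecay(
        initial_learning_rate=start_lr,
        decay_steps=steps,
        end_learning_rate=end_lr,
        power=1.0,
    )
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
        loss=tf.keras.losses.BinaryCrossentropy(),
        metrics=[tf.keras.metrics.BinaryAccuracy(name="accuracy")],
    )
    # A short run over a limited number of steps is enough to settle the initial weights.
    model.fit(train_dataset, epochs=warmupparams["num_epochs"], steps_per_epoch=steps)
    return model
###Output
_____no_output_____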
###Markdown
Mirrored StrategyWhen training on a single VM, you can train with either a single compute device or multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU or GPU.Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script:1. Set the `tf.distribute.MirroredStrategy`.2. Compile the model within the scope of `tf.distribute.MirroredStrategy`. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.3. Increase the batch size to num_devices * batch size, so each compute device keeps its original per-device batch size.During training, the distribution of batches and the updates to the model parameters are synchronized across the compute devices. A minimal sketch of these steps appears after the next code cell. Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_image_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
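###Markdown
As a reference for the Mirrored Strategy steps described above, the cell below is a minimal, self-contained sketch (not part of the training package) showing how a training script typically creates the strategy, compiles a model inside its scope, and scales the batch size by the number of replicas; the toy `demo_model` is purely illustrative.
###Code
# Minimal MirroredStrategy sketch, for illustration only.
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
# Scale the global batch size so each device sees the original per-device batch size.
PER_DEVICE_BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = PER_DEVICE_BATCH_SIZE * strategy.num_replicas_in_sync
with strategy.scope():
    # Build and compile the model inside the scope so its variables are mirrored.
    demo_model = tf.keras.Sequential(
        [
            tf.keras.layers.Dense(8, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ]
    )
    demo_model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
# Any tf.data pipeline should then be batched with the scaled size, e.g.:
# dataset = dataset.batch(GLOBAL_BATCH_SIZE)
###Output
_____no_output_____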
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a `CustomTrainingJob`.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Delete the modelThe method `delete()` will delete the model.
###Code
model.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for both hyperparameter tuning, as well as local testing and full cloud training:- Command-Line: - `tuning`: indicates to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments.
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning jobUse the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:- `display_name`: A human readable name for the custom job.- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.- `metric_spec`: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric ('minimize' or 'maximize').- `parameter_spec`: The parameters to optimize. The dictionary key is the parameter_id, which is passed into your training job as a command-line keyword argument, and the dictionary value is the parameter specification for that parameter.- `search_algorithm`: The search algorithm to use: `grid`, `random`, or `None`. If `None` is specified, the `Vizier` service (Bayesian optimization) is used.- `max_trial_count`: The maximum number of trials to perform.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
best = (None, None, None, float("inf"))
for trial in hpt_job.trials:
    # Keep track of the best outcome: the lowest val_loss, since the metric is minimized
    if float(trial.final_measurement.metrics[0].value) < best[3]:
        try:
            best = (
                trial.id,
                float(trial.parameters[0].value),
                float(trial.parameters[1].value),
                float(trial.final_measurement.metrics[0].value),
            )
        except Exception:
            best = (
                trial.id,
                float(trial.parameters[0].value),
                None,
                float(trial.final_measurement.metrics[0].value),
            )
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method `delete()` will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
# best[1] and best[2] hold the best trial's parameter values. This assumes the trial
# parameters are ordered as (batch_size, lr); adjust the indices if your ordering differs.
LR = best[2]
BATCH_SIZE = int(best[1])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_image_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a `CustomTrainingJob`.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you use the experiment name as a parameter to the method `get_experiment_df()` to get the results of the experiment as a pandas dataframe.
###Code
EXPERIMENT_NAME = "chicago"
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
###Output
_____no_output_____
###Markdown
Delete the TensorBoard instanceNext, delete the TensorBoard instance.
###Code
tensorboard.delete()
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model.
###Code
%%writefile custom/trainer/serving.py
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import logging
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(
model, serving_model_dir, metadata
):
global features
schema_location = metadata['schema']
features = metadata['numeric_features'] + metadata['categorical_features'] + metadata['embedding_features']
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model saving started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model saving completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
os.chdir("custom")
from trainer import serving
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
serving.construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Vertex AI Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"user_metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For an apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend the following:- Send each evaluation slice as a Vertex AI Batch Prediction job.- Use a custom evaluation script to evaluate the results from the batch prediction job.
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, we just display the results of the batch prediction.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
model = async_model
###Output
_____no_output_____
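###Markdown
In practice, your custom evaluation script would parse the prediction results displayed above and compute your own metrics. The cell below is only a rough sketch: it assumes each JSONL line has the layout `{"instance": {...}, "prediction": {"scores": [p]}}` and that the exported instances still contain the `tip_bin` label column. Both assumptions depend on your export and serving signature, so adapt the field names to your actual output format.
###Code
# Hypothetical evaluation sketch: compute a simple accuracy from the batch prediction results.
# The field names ("instance", "prediction", "scores") and the label column are assumptions.
import json
correct = total = 0
lines = ! gsutil cat $results
for line in lines:
    record = json.loads(line)
    # Assumed layout: {"instance": {..., "tip_bin": 0/1}, "prediction": {"scores": [p]}}
    label = record["instance"].get("tip_bin")
    if label is None:
        continue  # label not present in the exported instances
    score = float(record["prediction"]["scores"][0])
    predicted = 1 if score >= 0.5 else 0
    correct += int(predicted == int(label))
    total += 1
print("sketch accuracy:", correct / total if total else "n/a")
###Output
_____no_output_____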
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you decide whether the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the results and determine whether the custom model is better. A rough sketch of such a weighted comparison appears after the next code cell. Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
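###Markdown
As a rough illustration of the weighting idea described above, the cell below sketches how per-slice metrics from the custom model and the AutoML baseline could be combined into a single weighted score. The slice names, metric values, and weights are placeholders only; substitute the metrics you actually computed for each evaluation slice.
###Code
# Hypothetical weighted comparison against the AutoML baseline.
# All values below are placeholders -- substitute your real per-slice metrics and weights.
custom_metrics = {"overall": 0.91, "rush_hour": 0.88, "airport_trips": 0.85}
automl_metrics = {"overall": 0.90, "rush_hour": 0.89, "airport_trips": 0.84}
slice_weights = {"overall": 0.5, "rush_hour": 0.3, "airport_trips": 0.2}
def weighted_score(metrics, weights):
    """Combine per-slice metrics into a single business-weighted score."""
    return sum(metrics[slice_name] * weight for slice_name, weight in weights.items())
custom_score = weighted_score(custom_metrics, slice_weights)
automl_score = weighted_score(automl_metrics, slice_weights)
print("custom:", custom_score, "automl baseline:", automl_score)
if custom_score > automl_score:
    print("Custom model beats the baseline: save it as the new best model.")
else:
    print("Custom model is below the baseline: continue experimenting.")
###Output
_____no_output_____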
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2: experimentation OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2: experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create an MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI services:- `Vertex AI Datasets`- `Vertex AI Models`- `Vertex AI AutoML`- `Vertex AI Training`- `Vertex AI TensorBoard`- `Vertex AI Vizier`- `Vertex AI Batch Prediction`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Build the experimental model architecture.- Construct a custom training package for the `Dataset` resource.- Test the custom training package locally.- Test the custom training package in the cloud with Vertex AI Training.- Hyperparameter tune the model training with Vertex AI Vizier.- Train the custom model with Vertex AI Training.- Add a serving function for online/batch prediction to the custom model.- Test the custom model with the serving function.- Evaluate the custom model using Vertex AI Batch Prediction.- Wait for the AutoML training job to complete.- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.- Set the evaluation results of the AutoML model as the baseline.- If the evaluation of the custom model is below the baseline, continue to experiment with the custom model.- If the evaluation of the custom model is above the baseline, save the model as the first best model. RecommendationsWhen doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: - Determine a baseline evaluation using AutoML. - Design and build a model architecture. - Upload the untrained model architecture as a Vertex AI Model resource. - Construct a training package that can be run locally and as a Vertex AI Training job. - Decompose the training package into: data, model, train and task Python modules. - Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. - Obtain the location of the model artifacts from the Vertex AI Model resource. - Include in the training package initializing a Vertex AI Experiment and corresponding run. - Log hyperparameters and training parameters for the experiment. - Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. - Test the training package locally with a small number of epochs. - Test the training package with Vertex AI Training. - Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. - Do full training of the custom model with Vertex AI Training. - Log the hyperparameter values for the experiment/run. - Evaluate the custom model.
- Single evaluation slice, same metrics as AutoML - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training - Custom evaluation slices, custom metrics - Evaluate custom evaluation slices as a Vertex AI Batch Prediction job for both the AutoML and custom models - Perform custom metrics on the results from the batch job - Compare custom model metrics against the AutoML baseline - If less than baseline, then continue to experiment - If greater than baseline, then upload the model as the new baseline and save the evaluation results with the model. InstallationsInstall *one time* the packages for executing the MLOps notebooks.
###Code
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using the `gcloud` command by executing the second cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware acceleratorsYou can set hardware accelerators for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)Otherwise specify `(None, None)` to use a container image that runs on a CPU.Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue, fixed in TF 2.3, caused by static graph ops that are generated in the serving function. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine typeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional) Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (Optional) Maximum training time specified in units of milli node hours (1000 = 1 hour).- `disable_early_stopping`: If `False`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.When completed, the `run` method returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadataSet up tracking of the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Creates an experiment instance.- `aip.start_run()` - Tracks a specific run within the experiment.Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for your custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Vertex AI Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"base_model": 1},
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip'ed TFRecords from Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using Vertex AI Training, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed inputNext, test the model architecture with a sample of the transformed training input.*Note:* Since the model is untrained, the predictions should be essentially random. Since this is a binary classifier, expect the predicted values to be approximately 0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
callbacks=[tensorboard, early_stop]
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.001
aip.log_params(hyperparams)
train.compile(model, hyperparams)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training packageNext, you create the `task.py` script for driving the training package. Some notable steps include:- Command-line arguments: - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture. - `dataset-id`: The resource ID of the `Dataset` resource to use for training. - `experiment`: The name of the experiment. - `run`: The name of the run within this experiment. - `tensorboard-log-dir`: The logging directory for Vertex AI TensorBoard.- `get_data()`: - Loads the Dataset resource into memory. - Obtains the user metadata from the Dataset resource. - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.- `get_model()`: - Loads the Model resource into memory. - Obtains the location of the model artifacts for the model architecture. - Loads the model architecture. - Compiles the model.- `train_model()`: - Trains the model.- `evaluate_model()`: - Evaluates the model. - Saves the evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default='/tmp/logs', type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
logging.info("Initialize experiment: {}".format(args.experiment))
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
train.compile(model, hyperparams)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
    with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(str(metrics))
get_data()
with strategy.scope():
model = get_model()
model = train_model(model)
if args.evaluate:
evaluate_model(model)
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Mirrored StrategyWhen training on a single VM, you can train with either a single compute device or multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU, GPU.Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script:1. Set the tf.distribute.MirroredStrategy.2. Compile the model within the scope of tf.distribute.MirroredStrategy. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.3. Increase the batch size for each compute device to num_devices * batch size (a minimal sketch of this scaling follows the next code cell).During training, the distribution of batches is synchronized, as are the updates to the model parameters. Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
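###Markdown
The Mirrored Strategy note above mentions scaling the batch size by the number of compute devices. The next cell is a minimal, hypothetical sketch of that scaling only; it is not part of the training package, and `PER_DEVICE_BATCH_SIZE` is an assumed name used purely for illustration.
###Code
# Minimal sketch: derive a global batch size from the distribution strategy.
# Assumes a per-device batch size of 16; not used by the training package itself.
import tensorflow as tf
sketch_strategy = tf.distribute.MirroredStrategy()
PER_DEVICE_BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = PER_DEVICE_BATCH_SIZE * sketch_strategy.num_replicas_in_sync
print("replicas:", sketch_strategy.num_replicas_in_sync, "global batch size:", GLOBAL_BATCH_SIZE)
###Output
_____no_output_____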
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Delete the modelThe method `delete()` will delete the model.
###Code
model.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for hyperparameter tuning as well as for local testing and full cloud training:- Command-line arguments: - `tuning`: indicates whether to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to the HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial, if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments.
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning jobUse the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:- `display_name`: A human readable name for the custom job.- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.- `metric_spec`: The metrics to optimize. The dictionary key is the metric ID, which is reported by your training job, and the dictionary value is the optimization goal of the metric (`minimize` or `maximize`).- `parameter_spec`: The parameters to optimize. The dictionary key is the parameter ID, which is passed into your training job as a command-line keyword argument, and the dictionary value is the parameter specification.- `search_algorithm`: The search algorithm to use: `grid`, `random`, or `None`. If `None` is specified, the Vizier service (Bayesian optimization) is used.- `max_trial_count`: The maximum number of trials to perform.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
best = (None, None, None, float("inf"))
for trial in hpt_job.trials:
    # Keep track of the best outcome; val_loss is minimized, so lower is better
    if float(trial.final_measurement.metrics[0].value) < best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method `delete()` will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
# From the best tuple above: best[1] holds the batch_size value and best[2] the learning rate
LR = best[2]
BATCH_SIZE = int(best[1])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you use the experiment name as a parameter to the method `get_experiment_df()` to get the results of the experiment as a pandas dataframe.
###Code
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == "chicago"]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
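###Markdown
Since the training package wrote the metrics with `str(metrics)`, the file contains a Python list literal with the loss and accuracy in the order returned by `model.evaluate()`. The next cell is a small sketch of reading those values back into numbers, assuming that format.
###Code
# Sketch: parse the metrics file written by evaluate_model() back into floats.
import ast
import tensorflow as tf
with tf.io.gfile.GFile(METRICS, "r") as f:
    eval_loss, eval_acc = ast.literal_eval(f.read())
print("loss:", eval_loss, "accuracy:", eval_acc)
###Output
_____no_output_____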
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model.
###Code
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(model, serving_model_dir, metadata):
global features
schema_location = metadata["schema"]
features = (
metadata["numeric_features"]
+ metadata["categorical_features"]
+ metadata["embedding_features"]
)
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(
schema
).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model export started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model export completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Vertex AI Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"user_metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend:- Send each evaluation slice as a Vertex AI Batch Prediction Job.- Use a custom evaluation script to evaluate the results from the batch prediction job.
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you feed the results and target labels into your custom evaluation script. For demonstration purposes, the next cell just displays the results of the batch prediction; a hypothetical sketch of such an evaluation script follows it.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
model = async_model
###Output
_____no_output_____
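###Markdown
As a minimal, hypothetical sketch of a custom evaluation script: the helper below reads the batch prediction results (JSONL) and computes an accuracy by thresholding the scores. The field names `prediction`, `scores`, and `instance`, and the `tip_bin` label key, are assumptions about the exported JSONL layout and would need to match your own export.
###Code
# Hypothetical sketch of a custom evaluation over batch prediction results.
import json
import tensorflow as tf
def accuracy_from_batch_results(results_uri, label_key="tip_bin", threshold=0.5):
    """Compare thresholded scores against labels carried in each result record (assumed layout)."""
    correct = total = 0
    with tf.io.gfile.GFile(results_uri, "r") as f:
        for line in f:
            record = json.loads(line)
            score = record["prediction"]["scores"][0]  # assumed output field
            label = record["instance"].get(label_key)  # assumed label field
            if label is None:
                continue
            correct += int((score >= threshold) == bool(label))
            total += 1
    return correct / max(total, 1)
# Example usage with the `results` URI listed in the previous cell:
# print("custom accuracy:", accuracy_from_batch_results(results))
###Output
_____no_output_____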
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review its evaluation scores.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you decide whether the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the results and determine whether the custom model is better (a minimal sketch of this comparison follows the next cell). Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
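###Markdown
As a minimal sketch of the comparison itself: the slice names, metric values, and weights below are placeholders you would replace with your own evaluation results for the custom model and the AutoML baseline.
###Code
# Hypothetical weighted comparison between the custom model and the AutoML baseline.
custom_metrics = {"overall_auc": 0.0, "rush_hour_auc": 0.0}  # fill in from the custom model evaluation
baseline_metrics = {"overall_auc": 0.0, "rush_hour_auc": 0.0}  # fill in from the AutoML evaluation
weights = {"overall_auc": 0.7, "rush_hour_auc": 0.3}  # business-driven weights
score = sum(weights[name] * (custom_metrics[name] - baseline_metrics[name]) for name in weights)
if score > 0:
    print("Custom model beats the AutoML baseline: save it as the new best model.")
else:
    print("Custom model is at or below baseline: continue experimenting.")
###Output
_____no_output_____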
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2: experimentation OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2: experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create an MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI services:- `Vertex AI Datasets`- `Vertex AI Models`- `Vertex AI AutoML`- `Vertex AI Training`- `Vertex AI TensorBoard`- `Vertex AI Vizier`- `Vertex AI Batch Prediction`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Build the experimental model architecture.- Construct a custom training package for the `Dataset` resource.- Test the custom training package locally.- Test the custom training package in the cloud with Vertex AI Training.- Hyperparameter tune the model training with Vertex AI Vizier.- Train the custom model with Vertex AI Training.- Add a serving function for online/batch prediction to the custom model.- Test the custom model with the serving function.- Evaluate the custom model using Vertex AI Batch Prediction.- Wait for the AutoML training job to complete.- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.- Set the evaluation results of the AutoML model as the baseline.- If the evaluation of the custom model is below the baseline, continue to experiment with the custom model.- If the evaluation of the custom model is above the baseline, save the model as the first best model. RecommendationsWhen doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: - Determine a baseline evaluation using AutoML. - Design and build a model architecture. - Upload the untrained model architecture as a Vertex AI Model resource. - Construct a training package that can be run locally and as a Vertex AI Training job. - Decompose the training package into data, model, train, and task Python modules. - Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. - Obtain the location of the model artifacts from the Vertex AI Model resource. - Include in the training package initializing a Vertex AI Experiment and corresponding run. - Log hyperparameters and training parameters for the experiment. - Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. - Test the training package locally with a small number of epochs. - Test the training package with Vertex AI Training. - Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. - Do full training of the custom model with Vertex AI Training. - Log the hyperparameter values for the experiment/run. - Evaluate the custom model.
- Single evaluation slice, same metrics as AutoML - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training - Custom evaluation slices, custom metrics - Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both the AutoML and custom models - Compute custom metrics on the results from the batch job - Compare custom model metrics against the AutoML baseline - If less than baseline, then continue to experiment - If greater than baseline, then upload the model as the new baseline and save the evaluation results with the model. InstallationsInstall *one time* the packages for executing the MLOps notebooks.
###Code
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware acceleratorsYou can set hardware accelerators for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 NVIDIA Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)Otherwise, specify `(None, None)` to use a container image that runs on a CPU.Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine typeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75 GB of memory per vCPU. - `n1-highmem`: 6.5 GB of memory per vCPU. - `n1-highcpu`: 0.9 GB of memory per vCPU. - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
try:
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
except:
print("no metadata")
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional) Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in milli node hours (1000 = one node hour).- `disable_early_stopping`: If `False`, training may complete before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method, when completed, returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadataSet up tracking of the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Create an experiment instance- `aip.start_run()` - Track a specific run within the experiment.Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for your custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Vertex AI Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"base_model": "1"},
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n 'tensorflow_data_validation==1.2',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords from Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model with `Vertex AI Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed inputNext, test the model architecture with a sample of the transformed training input.*Note:* Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted results to be around 0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
''' Compile the model '''
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def warmup(
model,
hyperparams,
train_data_dir,
label_column,
transformed_feature_spec
):
''' Warmup the initialized model weights '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
lr_inc = (hyperparams['end_learning_rate'] - hyperparams['start_learning_rate']) / hyperparams['num_epochs']
def scheduler(epoch, lr):
if epoch == 0:
return hyperparams['start_learning_rate']
return lr + lr_inc
callbacks = [tf.keras.callbacks.LearningRateScheduler(scheduler)]
logging.info("Model warmup started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
steps_per_epoch=hyperparams["steps"],
callbacks=callbacks
)
logging.info("Model warmup completed.")
return history
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
''' Train the model '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
callbacks = [early_stop]
if log_dir:
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
        callbacks.append(tensorboard)
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
if not callbacks:
callbacks = []
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `warmup()`: Warmup the initialized model weights.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.01
aip.log_params(hyperparams)
train.compile(model, hyperparams)
warmupparams = {}
warmupparams["start_learning_rate"] = 0.0001
warmupparams["end_learning_rate"] = 0.01
warmupparams["num_epochs"] = 4
warmupparams["batch_size"] = 64
warmupparams["steps"] = 50
aip.log_params(warmupparams)
train.warmup(
model, warmupparams, train_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training packageNext, you create the `task.py` script for driving the training package. Some notable steps include:- Command-line arguments: - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture. - `dataset-id`: The resource ID of the `Dataset` resource to use for training. - `experiment`: The name of the experiment. - `run`: The name of the run within this experiment. - `tensorboard-logdir`: The logging directory for Vertex AI Tensorboard.- `get_data()`: - Loads the Dataset resource into memory. - Obtains the user metadata from the Dataset resource. - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.- `get_model()`: - Loads the Model resource into memory. - Obtains the location of the model artifacts of the model architecture. - Loads the model architecture. - Compiles the model.- `warmup_model()`: - Warms up the initialized model weights.- `train_model()`: - Trains the model.- `evaluate_model()`: - Evaluates the model. - Saves the evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
try:
from trainer import serving
except:
pass
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--start_lr', dest='start_lr',
default=0.0001, type=float,
help='Starting learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default=os.getenv('AIP_TENSORBOARD_LOG_DIR'), type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--serving', dest='serving',
default=False, type=bool,
help='Whether to attach the serving function')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
parser.add_argument('--warmup', dest='warmup',
default=False, type=bool,
help='Whether to perform warmup weight initialization')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
logging.info("Initialize experiment: {}".format(args.experiment))
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
metadata = {}
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec, metadata
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
global model_artifacts
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
metadata.update(hyperparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.compile(model, hyperparams)
return model
def warmup_model(model):
''' Warmup the initialized model weights '''
warmupparams = {}
warmupparams["num_epochs"] = args.epochs
warmupparams["batch_size"] = args.batch_size
warmupparams["steps"] = args.steps
warmupparams["start_learning_rate"] = args.start_lr
warmupparams["end_learning_rate"] = args.lr
train.warmup(model, warmupparams, train_data_file_pattern, label_column, transform_feature_spec)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
metadata.update(trainparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
metadata.update({'metrics': metrics})
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
get_data()
with strategy.scope():
model = get_model()
if args.warmup:
model = warmup_model(model)
else:
model = train_model(model)
if args.evaluate:
evaluate_model(model)
if args.serving:
logging.info('Save serving model to: ' + args.model_dir)
serving.construct_serving_model(
model=model,
serving_model_dir=args.model_dir,
metadata=metadata
)
elif args.warmup:
logging.info('Save warmed up model to: ' + model_artifacts)
model.save(model_artifacts)
else:
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Warmup trainingNow that you have tested the training scripts, you perform warmup training on the base model. Warmup training is used to stabilize the weight initialization. By doing so, each subsequent training and tuning of the model architecture will start with the same stabilized weight initialization.
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --project={PROJECT_ID} --epochs=5 --steps=300 --batch_size=16 --lr=0.01 --start_lr=0.0001 --model-dir={MODEL_DIR} --warmup=True
###Output
_____no_output_____
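###Markdown
The `warmup()` implementation lives in `trainer/train.py` inside the training package; conceptually it ramps the learning rate from `start_lr` up to `lr` over a fixed number of steps before full training begins. The cell below is only an illustrative sketch of that idea as a Keras callback: the linear schedule and the step counting are assumptions for illustration, not the package's actual implementation.
###Code
# Illustrative sketch only: a linear learning-rate ramp such as a warmup phase might use.
import tensorflow as tf

class LinearWarmup(tf.keras.callbacks.Callback):
    """Linearly ramps the optimizer learning rate over the first `warmup_steps` batches."""
    def __init__(self, start_lr, end_lr, warmup_steps):
        super().__init__()
        self.start_lr, self.end_lr, self.warmup_steps = start_lr, end_lr, warmup_steps
        self.step = 0

    def on_train_batch_begin(self, batch, logs=None):
        if self.step <= self.warmup_steps:
            lr = self.start_lr + (self.end_lr - self.start_lr) * self.step / self.warmup_steps
            tf.keras.backend.set_value(self.model.optimizer.learning_rate, lr)
        self.step += 1

# Example: mirrors the --start_lr, --lr and --steps arguments used in the command above.
warmup_callback = LinearWarmup(start_lr=0.0001, end_lr=0.01, warmup_steps=300)
print(warmup_callback)
###Output
_____no_output_____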
###Markdown
Mirrored StrategyWhen training on a single VM, you can train with either a single compute device or multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU or GPU.Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script:1. Set the `tf.distribute.MirroredStrategy`.2. Compile the model within the scope of the `tf.distribute.MirroredStrategy`. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.3. Scale the batch size to num_devices * per-device batch size.During training, the distribution of batches across the compute devices is synchronized, as are the updates to the model parameters. A minimal sketch of this pattern is shown after the next code cell. Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
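###Markdown
As a quick reference, the three mirrored-strategy steps described above map to code roughly as in the following standalone sketch. The toy model and the per-replica batch size of 16 are placeholders for illustration; the real logic for this tutorial lives in `trainer/task.py`.
###Code
# Standalone sketch of the MirroredStrategy pattern (illustrative only).
import tensorflow as tf

# 1. Set the strategy.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# 2. Build and compile the model within the strategy scope so variables are mirrored.
with strategy.scope():
    demo_model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    demo_model.compile(optimizer="adam", loss="mse")

# 3. Scale the global batch size by the number of replicas.
PER_REPLICA_BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync
print("Global batch size:", GLOBAL_BATCH_SIZE)
###Output
_____no_output_____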
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Delete the modelThe method 'delete()' will delete the model.
###Code
model.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for both hyperparameter tuning, as well as local testing and full cloud training:- Command-Line: - `tuning`: indicates to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments.
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning jobUse the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:- `display_name`: A human readable name for the custom job.- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.- `metrics_spec`: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric('minimize' or 'maximize').- `parameter_spec`: The parameters to optimize. The dictionary key is the metric_id, which is passed into your training job as a command line key word argument, and the dictionary value is the parameter specification of the metric.- `search_algorithm`: The search algorithm to use: `grid`, `random` and `None`. If `None` is specified, the `Vizier` service (Bayesian) is used.- `max_trial_count`: The maximum number of trials to perform.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
best = (None, None, None, float("inf"))
for trial in hpt_job.trials:
    # Keep track of the trial with the lowest validation loss (the tuning objective)
    if float(trial.final_measurement.metrics[0].value) < best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method 'delete()' will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
LR = best[2]
BATCH_SIZE = int(best[1])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you use the experiment name as a parameter to the method `get_experiment_df()` to get the results of the experiment as a pandas dataframe.
###Code
EXPERIMENT_NAME = "chicago"
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
###Output
_____no_output_____
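###Markdown
If you prefer to work with these metrics programmatically rather than just printing the file, the following small sketch loads them into a Python dictionary. It assumes the JSON layout written by `trainer/task.py` above (a single JSON object with a `metrics` entry).
###Code
# Load the metrics file written by trainer/task.py into a Python dict (sketch).
import json
import tensorflow as tf

with tf.io.gfile.GFile(METRICS, "r") as f:
    train_metadata = json.load(f)
print("Evaluation metrics:", train_metadata.get("metrics"))
###Output
_____no_output_____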
###Markdown
Delete the TensorBoard instanceNext, delete the TensorBoard instance.
###Code
tensorboard.delete()
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model.
###Code
%%writefile custom/trainer/serving.py
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import logging
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(
model, serving_model_dir, metadata
):
global features
schema_location = metadata['schema']
features = metadata['numeric_features'] + metadata['categorical_features'] + metadata['embedding_features']
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model saving started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model saving completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
os.chdir("custom")
from trainer import serving
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
serving.construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Vertex AI Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"user_metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For an apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend:- Send each evaluation slice as a Vertex AI Batch Prediction Job.- Use a custom evaluation script to evaluate the results from the batch prediction job.
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, we just display the results of the batch prediction.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
model = async_model
###Output
_____no_output_____
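###Markdown
The custom evaluation script itself is up to you. As a hypothetical sketch, the cell below walks the batch prediction results and computes a simple slice accuracy. The key names (`instance`, `prediction`, `scores`, `tip_bin`) and the presence of the label in each exported instance are assumptions; inspect one line of your own results file first and adjust the parsing accordingly.
###Code
# Hypothetical sketch of a custom evaluation over the batch prediction output.
import json
import tensorflow as tf

correct = total = 0
with tf.io.gfile.GFile(results, "r") as f:
    for line in f:
        record = json.loads(line)
        label = record.get("instance", {}).get("tip_bin")    # assumed label key
        prediction = record.get("prediction", {})
        score = prediction.get("scores", [None])[0]           # assumed output key
        if label is None or score is None:
            continue
        correct += int((score >= 0.5) == bool(label))
        total += 1
if total:
    print("Custom slice accuracy:", correct / total)
else:
    print("No labeled instances found; adjust the assumed key names above.")
###Output
_____no_output_____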
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you make a decision if the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the result and make a determination if the custom model is better. Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
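###Markdown
Putting the comparison above into code, a hypothetical sketch of a weighted decision might look like the following. The slice names, metric values, and weights are placeholders; substitute the per-slice metrics produced by your own evaluation scripts for the custom and AutoML models.
###Code
# Hypothetical sketch of the weighted baseline comparison (placeholder values).
custom_metrics = {"all": 0.92, "short_trips": 0.89}   # e.g., accuracy per evaluation slice
automl_metrics = {"all": 0.91, "short_trips": 0.90}   # AutoML baseline on the same slices
weights = {"all": 0.7, "short_trips": 0.3}            # business-driven slice weights

weighted_delta = sum(
    weights[s] * (custom_metrics[s] - automl_metrics[s]) for s in weights
)
if weighted_delta > 0:
    print("Custom model beats the AutoML baseline: save it as the new best model.")
else:
    print("Custom model is below the baseline: continue experimenting.")
###Output
_____no_output_____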
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2 : experimentation OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2: experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create an MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI services:- `Vertex AI Datasets`- `Vertex AI Models`- `Vertex AI AutoML`- `Vertex AI Training`- `Vertex AI TensorBoard`- `Vertex AI Vizier`- `Vertex AI Batch Prediction`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Build the experimental model architecture.- Construct a custom training package for the `Dataset` resource.- Test the custom training package locally.- Test the custom training package in the cloud with Vertex AI Training.- Hyperparameter tune the model training with Vertex AI Vizier.- Train the custom model with Vertex AI Training.- Add a serving function for online/batch prediction to the custom model.- Test the custom model with the serving function.- Evaluate the custom model using Vertex AI Batch Prediction.- Wait for AutoML training job to complete.- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.- Set the evaluation results of the AutoML model as the baseline.- If the evaluation of the custom model is below baseline, continue to experiment with the custom model.- If the evaluation of the custom model is above baseline, save the model as the first best model. RecommendationsWhen doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: - Determine a baseline evaluation using AutoML. - Design and build a model architecture. - Upload the untrained model architecture as a Vertex AI Model resource. - Construct a training package that can be run locally and as a Vertex AI Training job. - Decompose the training package into: data, model, train and task Python modules. - Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. - Obtain the location of the model artifacts from the Vertex AI Model resource. - Include in the training package initializing a Vertex AI Experiment and corresponding run. - Log hyperparameters and training parameters for the experiment. - Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. - Test the training package locally with a small number of epochs. - Test the training package with Vertex AI Training. - Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. - Do full training of the custom model with Vertex AI Training. - Log the hyperparameter values for the experiment/run. - Evaluate the custom model. (Figure: stage2v3.png)
- Single evaluation slice, same metrics as AutoML - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training - Custom evaluation slices, custom metrics - Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both the AutoML and custom model - Perform custom metrics on the results from the batch job - Compare custom model metrics against the AutoML baseline - If less than baseline, then continue to experiment - If greater than baseline, then upload the model as the new baseline and save the evaluation results with the model. InstallationsInstall the packages for executing the MLOps notebooks (*one time* only).
###Code
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware acceleratorsYou can set hardware accelerators for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4). Otherwise, specify `(None, None)` to use a container image to run on a CPU.Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine typeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
try:
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
except:
print("no metadata")
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional) Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `True`, the entire training budget is used and training does not stop early, even if the service believes it cannot further improve the model objective measurements.When completed, the `run` method returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training related metadataSetup tracking the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Create an experiment instance- `aip.start_run()` - Track a specific run within the experiment.Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for your custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Vertex AI Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"base_model": "1"},
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n 'tensorflow_data_validation==1.2',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords from Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using `Vertex Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed inputNext, test the model architecture with a sample of the transformed training input.*Note:* Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted results ~0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
    callbacks=[early_stop]
    if log_dir:
        tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
        # list.append mutates the list in place and returns None,
        # so do not reassign callbacks to its result.
        callbacks.append(tensorboard)
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.001
aip.log_params(hyperparams)
train.compile(model, hyperparams)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training packageNext, you create the `task.py` script for driving the training package. Some notable steps include:- Command-line arguments: - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture. - `dataset-id`: The resource ID of the `Dataset` resource to use for training. - `experiment`: The name of the experiment. - `run`: The name of the run within this experiment. - `tensorboard-log-dir`: The logging directory for Vertex AI TensorBoard.- `get_data()`: - Loads the Dataset resource into memory. - Obtains the user metadata from the Dataset resource. - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.- `get_model()`: - Loads the Model resource into memory. - Obtains the location of the model artifacts for the model architecture. - Loads the model architecture. - Compiles the model.- `train_model()`: - Trains the model.- `evaluate_model()`: - Evaluates the model. - Saves the evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
from trainer import serving
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default=os.getenv('AIP_TENSORBOARD_LOG_DIR'), type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--serving', dest='serving',
default=False, type=bool,
help='Whether to attach the serving function')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
logging.info("Initialize experiment: {}".format(args.experiment))
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
metadata = {}
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec, metadata
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
metadata.update(hyperparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.compile(model, hyperparams)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
metadata.update(trainparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
metadata.update({'metrics': metrics})
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
get_data()
with strategy.scope():
model = get_model()
model = train_model(model)
if args.evaluate:
evaluate_model(model)
if args.serving:
logging.info('Save serving model to: ' + args.model_dir)
serving.construct_serving_model(
model=model,
serving_model_dir=args.model_dir,
metadata=metadata
)
else:
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Mirrored StrategyWhen training on a single VM, one can either train with a single compute device or with multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU or GPU.Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script:1. Set the `tf.distribute.MirroredStrategy`.2. Compile the model within the scope of the `tf.distribute.MirroredStrategy`. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.3. Increase the global batch size to num_devices * per-device batch size.During training, the distribution of batches and the updates to the model parameters are synchronized across the compute devices. A minimal standalone sketch of this pattern is shown in the next cell. Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
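The next cell is a small, standalone sketch (not part of the tutorial's training package) of the MirroredStrategy pattern just described; `build_model()` is a hypothetical stand-in for your own model constructor, and the per-device batch size of 16 is arbitrary. The actual training job is created in the cell after it.
###Code
# Illustrative sketch only: the real logic lives in trainer/task.py above.
import tensorflow as tf
def mirrored_training_sketch(build_model, per_device_batch_size=16):
    # Step 1: set the MirroredStrategy.
    strategy = tf.distribute.MirroredStrategy()
    # Step 3: scale the global batch size by the number of replicas (devices).
    global_batch_size = per_device_batch_size * strategy.num_replicas_in_sync
    # Step 2: build and compile the model inside the strategy scope so that
    # its variables are mirrored across the compute devices.
    with strategy.scope():
        model = build_model()  # hypothetical model constructor (assumption)
        model.compile(
            optimizer=tf.keras.optimizers.Adam(),
            loss=tf.keras.losses.BinaryCrossentropy(),
            metrics=[tf.keras.metrics.BinaryAccuracy()],
        )
    return model, global_batch_size
###Output
_____no_output_____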
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Delete the modelThe method 'delete()' will delete the model.
###Code
model.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for hyperparameter tuning, as well as for local testing and full cloud training:- Command-Line: - `tuning`: indicates to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to the HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial, if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Note that we plan to tune the learning rate and batch size, so you do not pass them as command-line arguments (omitted). The Vertex AI Hyperparameter Tuning service will pick values for both learning rate and batch size during trials, which it will pass along as command-line arguments.
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning jobUse the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:- `display_name`: A human readable name for the custom job.- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.- `metric_spec`: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric ('minimize' or 'maximize').- `parameter_spec`: The parameters to optimize. The dictionary key is the parameter_id, which is passed into your training job as a command-line keyword argument, and the dictionary value is the parameter specification for that parameter.- `search_algorithm`: The search algorithm to use: `grid`, `random`, or `None`. If `None` is specified, the `Vizier` service (Bayesian) is used.- `max_trial_count`: The maximum number of trials to perform.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
best = None
for trial in hpt_job.trials:
    # Skip trials that did not report a final measurement
    if not trial.final_measurement.metrics:
        continue
    # val_loss is minimized, so a lower final measurement is better
    metric_value = float(trial.final_measurement.metrics[0].value)
    if best is None or metric_value < best["val_loss"]:
        best = {"trial_id": trial.id, "val_loss": metric_value}
        # Record the tuned parameter values by name rather than by position
        for param in trial.parameters:
            best[param.parameter_id] = float(param.value)
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method 'delete()' will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
LR = best["lr"]
BATCH_SIZE = int(best["batch_size"])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you use the experiment name as a parameter to the method `get_experiment_df()` to get the results of the experiment as a pandas dataframe.
###Code
EXPERIMENT_NAME = "chicago"
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
###Output
_____no_output_____
###Markdown
Delete the TensorBoard instanceNext, delete the TensorBoard instance.
###Code
tensorboard.delete()
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw (unpreprocessed) format, either as a serialized tf.Example or as a JSONL object. The serving function then preprocesses the prediction request into the transformed format expected by the model.
###Code
%%writefile custom/trainer/serving.py
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import logging
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(
model, serving_model_dir, metadata
):
global features
schema_location = metadata['schema']
features = metadata['numeric_features'] + metadata['categorical_features'] + metadata['embedding_features']
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model export started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model export completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
os.chdir("custom")
from trainer import serving
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
serving.construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Vertex AI Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"user_metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For an apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend:- Sending each evaluation slice as a Vertex AI Batch Prediction job.- Using a custom evaluation script to evaluate the results from the batch prediction job.
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you feed the results and the corresponding target labels into your custom evaluation script. For demonstration purposes, this tutorial just displays the results of the batch prediction; a hedged sketch of such an evaluation script follows the results below.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
model = async_model
###Output
_____no_output_____
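###Markdown
As a hedged sketch of the custom evaluation script mentioned above: the code below assumes the batch prediction output uses the common JSONL layout of one `{"instance": ..., "prediction": ...}` object per line, and that you supply the ground-truth labels yourself (the `labels` argument and the commented usage are hypothetical). Adapt the field names to what you actually see in the `results` file.
###Code
# Illustrative sketch of a custom evaluation over batch prediction results.
import json
def evaluate_batch_results(jsonl_lines, labels, threshold=0.5):
    """Compute a simple accuracy from batch prediction JSONL lines and labels."""
    correct = 0
    total = 0
    for line, label in zip(jsonl_lines, labels):
        record = json.loads(line)
        pred = record["prediction"]  # assumed output field name
        # The serving signature returns {"scores": [...]}, so the prediction
        # may be a dict; handle both a dict and a plain list of scores.
        score = float(pred["scores"][0]) if isinstance(pred, dict) else float(pred[0])
        predicted = 1 if score >= threshold else 0
        correct += int(predicted == int(label))
        total += 1
    return correct / max(total, 1)
# Hypothetical usage (adapt to the actual file layout you observe):
# lines = tf.io.gfile.GFile(results, "r").read().splitlines()
# labels = [...]  # ground-truth tip_bin values for the same instances
# print("custom accuracy:", evaluate_batch_results(lines, labels))
###Output
_____no_output_____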
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you decide whether the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the weighted results and determine whether the custom model is better. An illustrative sketch of such a weighted comparison appears after the next cell. Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
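###Markdown
Purely as an illustration of the weighted comparison described above (the slice names, weights, and metric values are hypothetical and not produced by this tutorial), a tiny helper like the following can aggregate per-slice metrics into a single score for each model and compare them.
###Code
# Illustrative sketch: weight per-slice metrics and compare the two models.
def weighted_score(slice_metrics, weights):
    """slice_metrics and weights are dicts keyed by evaluation slice name."""
    return sum(weights[name] * value for name, value in slice_metrics.items())
# Hypothetical numbers for demonstration only.
weights = {"overall": 0.5, "rush_hour": 0.3, "airport_trips": 0.2}
automl_auc = {"overall": 0.91, "rush_hour": 0.88, "airport_trips": 0.90}
custom_auc = {"overall": 0.93, "rush_hour": 0.87, "airport_trips": 0.92}
baseline = weighted_score(automl_auc, weights)
candidate = weighted_score(custom_auc, weights)
print("AutoML baseline:", baseline, "Custom model:", candidate)
if candidate > baseline:
    print("Custom model beats the baseline -- save it as the new best model.")
else:
    print("Below baseline -- continue experimenting.")
###Output
_____no_output_____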
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2: experimentation OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2: experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq) dataset. The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create an MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI services:- `Vertex AI Datasets`- `Vertex AI Models`- `Vertex AI AutoML`- `Vertex AI Training`- `Vertex AI TensorBoard`- `Vertex AI Vizier`- `Vertex AI Batch Prediction`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Build the experimental model architecture.- Construct a custom training package for the `Dataset` resource.- Test the custom training package locally.- Test the custom training package in the cloud with Vertex AI Training.- Hyperparameter tune the model training with Vertex AI Vizier.- Train the custom model with Vertex AI Training.- Add a serving function for online/batch prediction to the custom model.- Test the custom model with the serving function.- Evaluate the custom model using Vertex AI Batch Prediction.- Wait for the AutoML training job to complete.- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.- Set the evaluation results of the AutoML model as the baseline.- If the evaluation of the custom model is below the baseline, continue to experiment with the custom model.- If the evaluation of the custom model is above the baseline, save the model as the first best model. RecommendationsWhen doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: - Determine a baseline evaluation using AutoML. - Design and build a model architecture. - Upload the untrained model architecture as a Vertex AI Model resource. - Construct a training package that can be run locally and as a Vertex AI Training job. - Decompose the training package into: data, model, train and task Python modules. - Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. - Obtain the location of the model artifacts from the Vertex AI Model resource. - Include in the training package the initialization of a Vertex AI Experiment and corresponding run. - Log hyperparameters and training parameters for the experiment. - Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. - Test the training package locally with a small number of epochs. - Test the training package with Vertex AI Training. - Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. - Do full training of the custom model with Vertex AI Training. - Log the hyperparameter values for the experiment/run. - Evaluate the custom model.
- Single evaluation slice, same metrics as AutoML - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training - Custom evaluation slices, custom metrics - Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both the AutoML and custom models - Perform custom metrics on the results from the batch job - Compare the custom model metrics against the AutoML baseline - If less than the baseline, then continue to experiment - If greater than the baseline, then upload the model as the new baseline and save the evaluation results with the model. InstallationsInstall, *one time only*, the packages for executing the MLOps notebooks.
###Code
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware acceleratorsYou can set hardware accelerators for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)Otherwise, specify `(None, None)` to use a container image that runs on a CPU.Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).*Note*: GPU builds of TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, fixed in TF 2.3, caused by static graph ops that are generated in the serving function. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine typeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU. - `n1-highcpu`: 0.9 GB of memory per vCPU. - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional): Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in units of milli node hours (1,000 = 1 node hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method, when completed, returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training related metadataSet up tracking of the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Create an experiment instance- `aip.start_run()` - Track a specific run within the experiment.Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket.
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords on Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using `Vertex Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed inputNext, test the model architecture with a sample of the transformed training input.*Note:* Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted values to be approximately 0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
callbacks=[tensorboard, early_stop]
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.001
aip.log_params(hyperparams)
train.compile(model, hyperparams)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training packageNext, you create the `task.py` script for driving the training package. Some notable steps include:- Command-line arguments: - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture. - `dataset-id`: The resource ID of the `Dataset` resource to use for training. - `experiment`: The name of the experiment. - `run`: The name of the run within this experiment. - `tensorboard-log-dir`: The logging directory for Vertex AI TensorBoard.- `get_data()`: - Loads the Dataset resource into memory. - Obtains the user metadata from the Dataset resource. - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.- `get_model()`: - Loads the Model resource into memory. - Obtains the location of the model artifacts of the model architecture. - Loads the model architecture. - Compiles the model.- `train_model()`: - Trains the model.- `evaluate_model()`: - Evaluates the model. - Saves the evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default='/tmp/logs', type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
train.compile(model, hyperparams)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
with tf.io.gfile.GFile(args.model_dir + "/metrics.txt", "w") as f:
f.write(str(metrics))
get_data()
with strategy.scope():
model = get_model()
model = train_model(model)
if args.evaluate:
evaluate_model(model)
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Mirrored StrategyWhen training on a single VM, one can train with either a single compute device or multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU, GPU.Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script:1. Set the tf.distribute.MirroredStrategy.2. Compile the model within the scope of tf.distribute.MirroredStrategy. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.3. Increase the batch size for each compute device to num_devices * batch size.During training, the distribution of batches is synchronized, as well as the updates to the model parameters. A minimal sketch of this pattern is shown after the next code cell. Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
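###Markdown
As an aside, the cell below is a minimal, self-contained sketch of the Mirrored Strategy pattern described above: variables are created and the model is compiled inside the strategy scope, and the global batch size is scaled by the number of replicas. It uses a throwaway toy model purely for illustration; it is not part of the tutorial's training package.
###Code
# Illustrative sketch only: MirroredStrategy scope and batch-size scaling.
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
# Scale the global batch size by the number of compute devices.
PER_DEVICE_BATCH_SIZE = 16
global_batch_size = PER_DEVICE_BATCH_SIZE * strategy.num_replicas_in_sync
print("Global batch size:", global_batch_size)
with strategy.scope():
    # Variables created inside the scope (weights, optimizer slots) are
    # mirrored across the devices and kept in sync during training.
    toy_model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    toy_model.compile(optimizer="adam", loss="binary_crossentropy")
###Output
_____no_output_____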
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter service_account is set so that the initializing experiment step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for hyperparameter tuning, as well as for local testing and full cloud training:- Command-line arguments: - `tuning`: indicates whether to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to the HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial, if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Note that because the learning rate and batch size are being tuned, you do not pass them as command-line arguments (they are omitted below). The Vertex AI Hyperparameter Tuning service picks values for both the learning rate and batch size during each trial and passes them along as command-line arguments (an illustrative trial invocation is shown after the hyperparameter tuning job is created below).
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning jobUse the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:- `display_name`: A human readable name for the custom job.- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.- `metric_spec`: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric ('minimize' or 'maximize').- `parameter_spec`: The parameters to optimize. The dictionary key is the parameter name, which is passed into your training job as a command-line keyword argument, and the dictionary value is the parameter specification for that parameter.- `search_algorithm`: The search algorithm to use: `grid`, `random` and `None`. If `None` is specified, the `Vizier` service (Bayesian) is used.- `max_trial_count`: The maximum number of trials to perform.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
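###Markdown
For reference, during each trial the tuning service appends the sampled values as command-line flags named after the `parameter_spec` keys, which map onto the `argparse` arguments defined in `trainer/task.py`. The invocation sketched below is only an illustration with made-up values; the actual flag values differ per trial.
###Code
# Illustrative only -- roughly what one tuning trial executes (values are made up):
#
#   python3 -m trainer.task --epochs=5 --distribute=mirrored \
#       --model-id=<MODEL_ID> --dataset-id=<DATASET_ID> --tuning=True \
#       --lr=0.01 --batch_size=64
###Output
_____no_output_____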
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
# The tuning objective is to minimize val_loss, so the best trial has the smallest metric value
best = (None, None, None, float("inf"))
for trial in hpt_job.trials:
    # Keep track of the best (lowest) outcome
    if float(trial.final_measurement.metrics[0].value) < best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method `delete()` will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
LR = best[2]
BATCH_SIZE = int(best[1])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a CustomTrainingJob.*Note:* The parameter service_account is set so that the initializing experiment step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you use the experiment name as a parameter to the method `get_experiment_df()` to get the results of the experiment as a pandas dataframe.
###Code
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == "chicago"]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model.
###Code
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(model, serving_model_dir, metadata):
global features
schema_location = metadata["schema"]
features = (
metadata["numeric_features"]
+ metadata["categorical_features"]
+ metadata["embedding_features"]
)
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(
schema
).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model export started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model export completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For an apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend:- Send each evaluation slice as a Vertex AI Batch Prediction Job.- Use a custom evaluation script to evaluate the results from the batch prediction job.
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, this tutorial just displays the results of the batch prediction; a hedged sketch of such a script follows the next cell.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
model = async_model
###Output
_____no_output_____
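###Markdown
As a hedged sketch of such a custom evaluation script: the code below reuses the `results` path and `LABEL_COLUMN` from earlier cells, assumes each line of the batch prediction output is a JSON object with an `instance` field (containing the label column) and a `prediction` field with a `scores` list, and computes a simple accuracy at a 0.5 threshold. The field names and threshold are assumptions; adapt them to your actual output schema and metrics.
###Code
# Sketch of a custom evaluation over the batch prediction results (see assumptions above).
import json
correct = total = 0
with tf.io.gfile.GFile(results, "r") as f:
    for line in f.read().splitlines():
        if not line:
            continue
        record = json.loads(line)
        label = record["instance"][LABEL_COLUMN]   # assumed result layout
        score = record["prediction"]["scores"][0]  # assumed result layout
        predicted = 1 if score >= 0.5 else 0
        correct += int(predicted == label)
        total += 1
print("custom accuracy:", correct / total if total else None)
###Output
_____no_output_____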
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you decide whether the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the results and determine whether the custom model is better (a hedged sketch of this comparison follows the next cell). Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
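###Markdown
To make the weighted comparison described above concrete, here is a hedged sketch with placeholder metric names, values, and weights (not real results); substitute the actual AutoML and custom evaluation outputs and your own business weighting.
###Code
# Illustrative comparison only -- every number below is a placeholder, not a real result.
automl_baseline = {"auPrc": 0.95, "logLoss": 0.20}  # e.g., from the AutoML evaluation
custom_metrics = {"auPrc": 0.96, "logLoss": 0.18}   # e.g., from the custom evaluation
weights = {"auPrc": 0.7, "logLoss": 0.3}            # business-driven weighting
# Higher is better for auPrc; lower is better for logLoss, so its difference is reversed.
score = 0.0
score += weights["auPrc"] * (custom_metrics["auPrc"] - automl_baseline["auPrc"])
score += weights["logLoss"] * (automl_baseline["logLoss"] - custom_metrics["logLoss"])
if score > 0:
    print("Custom model beats the AutoML baseline; save it as the new best model.")
else:
    print("Custom model is below the baseline; continue experimenting.")
###Output
_____no_output_____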
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2 : experimentation View on GitHub Open in Google Cloud Notebooks OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create a MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI:- `Vertex Datasets`- `Vertex AutoML`- `Vertex Training`- `Vertex TensorBoard`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Construct a custom training job for the `Dataset` resource.- ?? Hyperparameter Tuning- Train the custom model.- Evaluate the custom model.- ?? Tensorboard- Wait for AutoML training job to complete.- Evaluate the AutoML model. InstallationInstall the latest version of Vertex SDK for Python.
###Code
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
if os.environ["IS_TESTING"]:
! pip3 install --upgrade tensorflow $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of the *TensorFlow Transform* library as well.
###Code
! pip3 install -U tensorflow-transform $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Initialize Vertex SDK for PythonInitialize the Vertex SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time == None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional): Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method when completed returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time == None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "csv")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadataSet up tracking of the parameters (inputs) and metrics (results) for each experiment:- `aip.init()` - Create an experiment instance- `aip.start_run()` - Track a specific run within the experiment.Learn more about [Vertex ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create the input layer for custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(numeric_features=None, categorical_features=None):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Concatenate, Dense, experimental
def create_binary_classifier(
input_layers, tft_output, hyperparams, numeric_features, categorical_features
):
layers = []
for feature_name in input_layers:
if feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
max_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in hyperparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
model = Model(inputs=input_layers, outputs=[logits])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
hyperparams = {"hidden_units": [128, 64]}
aip.log_params(hyperparams)
model = create_binary_classifier(
input_layers,
tft_output,
hyperparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords on Cloud Storage into a `tf.data.Dataset` generator. These functions are reused when training the custom model using `Vertex Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Train the modelNext, you construct and test the training script for the custom model. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from custom.trainer import data
import tensorflow as tf
import logging
def compile(model, hyperparams):
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir
):
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset
)
logging.info("Model training completed.")
return history
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `train()`: Train the model.
###Code
import logging
from custom.trainer import train
logging.getLogger().setLevel(logging.INFO)
hyperparams["learning_rate"] = 0.001
hyperparams["num_epochs"] = 5
hyperparams["batch_size"] = 512
aip.log_params(hyperparams)
train.compile(model, hyperparams)
train.train(
model,
hyperparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
None,
)
model = async_model
###Output
_____no_output_____
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2 : experimentation View on GitHub Run in Colab Open in Vertex AI Workbench OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create a MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI:- `Vertex AI Datasets`- `Vertex AI Models`- `Vertex AI AutoML`- `Vertex AI Training`- `Vertex AI TensorBoard`- `Vertex AI Vizier`- `Vertex AI Batch Prediction`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Build the experimental model architecture.- Construct a custom training package for the `Dataset` resource.- Test the custom training package locally.- Test the custom training package in the cloud with Vertex AI Training.- Hyperparameter tune the model training with Vertex AI Vizier.- Train the custom model with Vertex AI Training.- Add a serving function for online/batch prediction to the custom model.- Test the custom model with the serving function.- Evaluate the custom model using Vertex AI Batch Prediction- Wait for the AutoML training job to complete.- Evaluate the AutoML model using Vertex AI Batch Prediction with the same evaluation slices as the custom model.- Set the evaluation results of the AutoML model as the baseline.- If the evaluation of the custom model is below baseline, continue to experiment with the custom model.- If the evaluation of the custom model is above baseline, save the model as the first best model. RecommendationsWhen doing E2E MLOps on Google Cloud for experimentation, the following best practices with structured (tabular) data are recommended: - Determine a baseline evaluation using AutoML. - Design and build a model architecture. - Upload the untrained model architecture as a Vertex AI Model resource. - Construct a training package that can be ran locally and as a Vertex AI Training job. - Decompose the training package into: data, model, train and task Python modules. - Obtain the location of the transformed training data from the user metadata of the Vertex AI Dataset resource. - Obtain the location of the model artifacts from the Vertex AI Model resource. - Include in the training package initializing a Vertex AI Experiment and corresponding run. - Log hyperparameters and training parameters for the experiment. - Add callbacks for early stop, TensorBoard, and hyperparameter tuning, where hyperparameter tuning is a command-line option. - Test the training package locally with a small number of epochs. - Test the training package with Vertex AI Training. - Do hyperparameter tuning with Vertex AI Hyperparameter Tuning. - Do full training of the custom model with Vertex AI Training. - Log the hyperparameter values for the experiment/run. - Evaluate the custom model. 
- Single evaluation slice, same metrics as AutoML - Add evaluation to the training package and return the results in a file in the Cloud Storage bucket used for training - Custom evaluation slices, custom metrics - Evaluate custom evaluation slices as a Vertex AI Batch Prediction for both AutoML and custom model - Perform custom metrics on the results from the batch job - Compare custom model metrics against the AutoML baseline - If less than baseline, then continue to experiment - If greater than baseline, then upload model as the new baseline and save evaluation results with the model. InstallationsInstall *one time* the packages for executing the MLOps notebooks.
###Code
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") and not os.getenv("VIRTUAL_ENV")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
! pip3 install --upgrade cloudml-hypertune $USER_FLAG
! pip3 install --upgrade kfp $USER_FLAG
! pip3 install --upgrade torchvision $USER_FLAG
! pip3 install --upgrade rpy2 $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com). 1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1"
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Vertex AI Workbench Notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI" into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to your local environment.6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Service Account**If you don't know your service account**, try to get your service account using the `gcloud` command by executing the cell below.
###Code
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your service account from gcloud
if not IS_COLAB:
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
if IS_COLAB:
shell_output = ! gcloud projects describe $PROJECT_ID
project_number = shell_output[-1].split(":")[1].strip().replace("'", "")
SERVICE_ACCOUNT = f"{project_number}[email protected]"
print("Service Account:", SERVICE_ACCOUNT)
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Import TensorFlow Data ValidationImport the TensorFlow Data Validation (TFDV) package into your Python environment.
###Code
import tensorflow_data_validation as tfdv
###Output
_____no_output_____
###Markdown
Initialize Vertex AI SDK for PythonInitialize the Vertex AI SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Set hardware acceleratorsYou can set hardware accelerators for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)Otherwise specify `(None, None)` to use a container image to run on a CPU.Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).*Note*: GPU-enabled TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, fixed in TF 2.3, caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
import os
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Set pre-built containersSet the pre-built Docker container image for training and prediction.For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Set machine typeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally, it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
try:
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
except:
print("no metadata")
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional) Transformations to apply to the input columns; a hedged example of specifying these explicitly follows the next cell.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
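###Markdown
The pipeline above lets AutoML infer a transformation for each input column. If you want to pin the transformations explicitly, the `column_transformations` parameter can be supplied as in the hedged sketch below; the transformation spec format and the choice of columns are illustrative assumptions, and the rest of this tutorial relies on the automatically inferred transformations instead.
###Code
# Hedged sketch: the same training pipeline, but with explicit per-column
# transformations. The spec format and column choices are illustrative; the
# rest of this tutorial uses the automatically inferred transformations above.
explicit_dag = aip.AutoMLTabularTrainingJob(
    display_name="chicago_explicit_" + TIMESTAMP,
    optimization_prediction_type="classification",
    optimization_objective="minimize-log-loss",
    column_transformations=[
        {"numeric": {"column_name": "trip_miles"}},
        {"numeric": {"column_name": "trip_seconds"}},
        {"categorical": {"column_name": "payment_type"}},
    ],
)
print(explicit_dag)
###Output
_____no_output_____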
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in milli node hours (1000 = one node hour).- `disable_early_stopping`: If `False` (the default), training may stop before using the entire budget if the service determines it can no longer improve the model objective measurements; if `True`, early stopping is disabled and the entire budget is used.When completed, the `run` method returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadataSet up tracking of the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Creates an experiment instance.- `aip.start_run()` - Tracks a specific run within the experiment.An illustrative sketch of logging to the run follows the next cell.Learn more about [Introduction to Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
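###Markdown
Once a run is active, parameters and metrics can be attached to it with `aip.log_params()` and `aip.log_metrics()`, and all runs can be pulled back as a dataframe with `aip.get_experiment_df()`. The sketch below is illustrative only: the parameter and metric names are placeholders, and running it writes those placeholder values into the `run-1` run created above.
###Code
# Hedged sketch: attaching parameters and metrics to the active run.
# The names and values below are placeholders for illustration; the real
# hyperparameters and metrics are logged later in this tutorial.
aip.log_params({"example_param": 0.1})
aip.log_metrics({"example_metric": 0.9})

# All runs of the experiment can be retrieved as a pandas DataFrame.
runs_df = aip.get_experiment_df()
print(runs_df.head())
###Output
_____no_output_____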
###Markdown
Create a Vertex AI TensorBoard instanceCreate a Vertex AI TensorBoard instance to use TensorBoard in conjunction with Vertex AI Training for custom model training.Learn more about [Get started with Vertex AI TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for your custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(
numeric_features=None, categorical_features=None, embedding_features=None
):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
for feature_name in embedding_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from math import sqrt
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (Activation, Concatenate, Dense, Embedding,
experimental)
def create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features,
categorical_features,
embedding_features,
):
layers = []
for feature_name in input_layers:
if feature_name in embedding_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
embedding_size = int(sqrt(vocab_size))
embedding_output = Embedding(
input_dim=vocab_size + 1,
output_dim=embedding_size,
name=f"{feature_name}_embedding",
)(input_layers[feature_name])
layers.append(embedding_output)
elif feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
pred = Activation("sigmoid")(logits)
model = Model(inputs=input_layers, outputs=[pred])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
embedding_features=metadata["embedding_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Save model artifactsNext, save the model artifacts to your Cloud Storage bucket.
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
model.save(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Upload the local model to a Vertex AI Model resourceNext, you upload your local custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_custom_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"base_model": "1"},
sync=True,
)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'google-cloud-aiplatform',\n\n 'cloudml-hypertune',\n\n 'tensorflow_datasets==1.3.0',\n\n 'tensorflow==2.5',\n\n 'tensorflow_data_validation==1.2',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords on Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using `Vertex AI Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Test the model architecture with transformed inputNext, test the model architecture with a sample of the transformed training input.*Note:* Since the model is untrained, the predictions should be random. Since this is a binary classifier, expect the predicted results to be around 0.5.
###Code
model(input_features)
###Output
_____no_output_____
###Markdown
Develop and test the training scriptsWhen experimenting, one typically develops and tests the training package locally, before moving to training in the cloud. Create training scriptNext, you write the Python script for compiling and training the model.
###Code
%%writefile custom/trainer/train.py
from trainer import data
import tensorflow as tf
import logging
from hypertune import HyperTune
def compile(model, hyperparams):
''' Compile the model '''
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def warmup(
model,
hyperparams,
train_data_dir,
label_column,
transformed_feature_spec
):
''' Warmup the initialized model weights '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
lr_inc = (hyperparams['end_learning_rate'] - hyperparams['start_learning_rate']) / hyperparams['num_epochs']
def scheduler(epoch, lr):
if epoch == 0:
return hyperparams['start_learning_rate']
return lr + lr_inc
callbacks = [tf.keras.callbacks.LearningRateScheduler(scheduler)]
logging.info("Model warmup started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
steps_per_epoch=hyperparams["steps"],
callbacks=callbacks
)
logging.info("Model warmup completed.")
return history
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir,
tuning=False
):
''' Train the model '''
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
callbacks = [early_stop]
if log_dir:
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
        callbacks.append(tensorboard)  # list.append returns None, so don't reassign
if tuning:
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch
)
if not callbacks:
callbacks = []
callbacks.append(HPTCallback())
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=callbacks
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `warmup()`: Warmup the initialized model weights.- `train()`: Train the model.
###Code
os.chdir("custom")
import logging
from trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.01
aip.log_params(hyperparams)
train.compile(model, hyperparams)
warmupparams = {}
warmupparams["start_learning_rate"] = 0.0001
warmupparams["end_learning_rate"] = 0.01
warmupparams["num_epochs"] = 4
warmupparams["batch_size"] = 64
warmupparams["steps"] = 50
aip.log_params(warmupparams)
train.warmup(
model, warmupparams, train_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
os.chdir("custom")
from trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Retrieve model from Vertex AINext, create the Python script to retrieve your experimental model from Vertex AI.
###Code
%%writefile custom/trainer/model.py
import google.cloud.aiplatform as aip
def get(model_id):
model = aip.Model(model_id)
return model
###Output
_____no_output_____
###Markdown
Create the task script for the Python training packageNext, you create the `task.py` script for driving the training package. Some notable steps include:- Command-line arguments: - `model-id`: The resource ID of the `Model` resource you built during experimenting. This is the untrained model architecture. - `dataset-id`: The resource ID of the `Dataset` resource to use for training. - `experiment`: The name of the experiment. - `run`: The name of the run within this experiment. - `tensorboard-logdir`: The logging directory for Vertex AI TensorBoard.- `get_data()`: - Loads the Dataset resource into memory. - Obtains the user metadata from the Dataset resource. - From the metadata, obtains the location of the transformed data, the transformation function, and the name of the label column.- `get_model()`: - Loads the Model resource into memory. - Obtains the location of the model artifacts for the model architecture. - Loads the model architecture. - Compiles the model.- `warmup_model()`: - Warms up the initialized model weights.- `train_model()`: - Trains the model.- `evaluate_model()`: - Evaluates the model. - Saves the evaluation metrics to the Cloud Storage bucket.
###Code
%%writefile custom/trainer/task.py
import os
import argparse
import logging
import json
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.python.client import device_lib
import google.cloud.aiplatform as aip
from trainer import data
from trainer import model as model_
from trainer import train
try:
from trainer import serving
except:
pass
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--model-id', dest='model_id',
default=None, type=str, help='Vertex Model ID.')
parser.add_argument('--dataset-id', dest='dataset_id',
default=None, type=str, help='Vertex Dataset ID.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--start_lr', dest='start_lr',
default=0.0001, type=float,
help='Starting learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--batch_size', dest='batch_size',
default=16, type=int,
help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--tensorboard-log-dir', dest='tensorboard_log_dir',
default=os.getenv('AIP_TENSORBOARD_LOG_DIR'), type=str,
help='Output file for tensorboard logs')
parser.add_argument('--experiment', dest='experiment',
default=None, type=str,
help='Name of experiment')
parser.add_argument('--project', dest='project',
default=None, type=str,
help='Name of project')
parser.add_argument('--run', dest='run',
default=None, type=str,
help='Name of run in experiment')
parser.add_argument('--evaluate', dest='evaluate',
default=False, type=bool,
help='Whether to perform evaluation')
parser.add_argument('--serving', dest='serving',
default=False, type=bool,
help='Whether to attach the serving function')
parser.add_argument('--tuning', dest='tuning',
default=False, type=bool,
help='Whether to perform hyperparameter tuning')
parser.add_argument('--warmup', dest='warmup',
default=False, type=bool,
help='Whether to perform warmup weight initialization')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
logging.info('DEVICES' + str(device_lib.list_local_devices()))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
logging.info("Single device training")
# Single Machine, multiple compute device
elif args.distribute == 'mirrored':
strategy = tf.distribute.MirroredStrategy()
logging.info("Mirrored Strategy distributed training")
# Multi Machine, multiple compute device
elif args.distribute == 'multiworker':
strategy = tf.distribute.MultiWorkerMirroredStrategy()
logging.info("Multi-worker Strategy distributed training")
logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
logging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Initialize the run for this experiment
if args.experiment:
logging.info("Initialize experiment: {}".format(args.experiment))
aip.init(experiment=args.experiment, project=args.project)
aip.start_run(args.run)
metadata = {}
def get_data():
''' Get the preprocessed training data '''
global train_data_file_pattern, val_data_file_pattern, test_data_file_pattern
global label_column, transform_feature_spec, metadata
dataset = aip.TabularDataset(args.dataset_id)
METADATA = 'gs://' + dataset.labels['user_metadata'] + "/metadata.jsonl"
with tf.io.gfile.GFile(METADATA, "r") as f:
metadata = json.load(f)
TRANSFORMED_DATA_PREFIX = metadata['transformed_data_prefix']
label_column = metadata['label_column']
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/train/data-*.gz'
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/val/data-*.gz'
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + '/test/data-*.gz'
TRANSFORM_ARTIFACTS_DIR = metadata['transform_artifacts_dir']
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
def get_model():
''' Get the untrained model architecture '''
global model_artifacts
vertex_model = model_.get(args.model_id)
model_artifacts = vertex_model.gca_resource.artifact_uri
model = tf.keras.models.load_model(model_artifacts)
# Compile the model
hyperparams = {}
hyperparams["learning_rate"] = args.lr
if args.experiment:
aip.log_params(hyperparams)
metadata.update(hyperparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.compile(model, hyperparams)
return model
def warmup_model(model):
''' Warmup the initialized model weights '''
warmupparams = {}
warmupparams["num_epochs"] = args.epochs
warmupparams["batch_size"] = args.batch_size
warmupparams["steps"] = args.steps
warmupparams["start_learning_rate"] = args.start_lr
warmupparams["end_learning_rate"] = args.lr
train.warmup(model, warmupparams, train_data_file_pattern, label_column, transform_feature_spec)
return model
def train_model(model):
''' Train the model '''
trainparams = {}
trainparams["num_epochs"] = args.epochs
trainparams["batch_size"] = args.batch_size
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
if args.experiment:
aip.log_params(trainparams)
metadata.update(trainparams)
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
train.train(model, trainparams, train_data_file_pattern, val_data_file_pattern, label_column, transform_feature_spec, args.tensorboard_log_dir, args.tuning)
return model
def evaluate_model(model):
''' Evaluate the model '''
evalparams = {}
evalparams["batch_size"] = args.batch_size
metrics = train.evaluate(model, evalparams, test_data_file_pattern, label_column, transform_feature_spec)
metadata.update({'metrics': metrics})
with tf.io.gfile.GFile(os.path.join(args.model_dir, "metrics.txt"), "w") as f:
f.write(json.dumps(metadata))
get_data()
with strategy.scope():
model = get_model()
if args.warmup:
model = warmup_model(model)
else:
model = train_model(model)
if args.evaluate:
evaluate_model(model)
if args.serving:
logging.info('Save serving model to: ' + args.model_dir)
serving.construct_serving_model(
model=model,
serving_model_dir=args.model_dir,
metadata=metadata
)
elif args.warmup:
logging.info('Save warmed up model to: ' + model_artifacts)
model.save(model_artifacts)
else:
logging.info('Save trained model to: ' + args.model_dir)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Test training package locallyNext, test your completed training package locally with just a few epochs.
###Code
DATASET_ID = dataset.resource_name
MODEL_ID = vertex_custom_model.resource_name
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --experiment='chicago' --run='test' --project={PROJECT_ID} --epochs=5 --model-dir=/tmp --evaluate=True
###Output
_____no_output_____
###Markdown
Warmup trainingNow that you have tested the training scripts, you perform warmup training on the base model. Warmup training is used to stabilize the weight initialization. By doing so, each subsequent training and tuning of the model architecture will start with the same stabilized weight initialization.
###Code
MODEL_DIR = f"{BUCKET_NAME}/base_model"
!cd custom; python3 -m trainer.task --model-id={MODEL_ID} --dataset-id={DATASET_ID} --project={PROJECT_ID} --epochs=5 --steps=300 --batch_size=16 --lr=0.01 --start_lr=0.0001 --model-dir={MODEL_DIR} --warmup=True
###Output
_____no_output_____
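###Markdown
To see what this warmup does to the learning rate, the sketch below replays the `LearningRateScheduler` logic from `trainer/train.py` with the values passed on the command line above; it is plain arithmetic and does not touch the model.
###Code
# Replay of the warmup schedule from trainer/train.py: epoch 0 uses the starting
# learning rate, and each subsequent epoch adds a fixed increment toward --lr.
start_lr, end_lr, num_epochs = 0.0001, 0.01, 5  # values from the command above

lr_inc = (end_lr - start_lr) / num_epochs
lr = start_lr
for epoch in range(num_epochs):
    lr = start_lr if epoch == 0 else lr + lr_inc
    print(f"epoch {epoch}: learning rate {lr:.5f}")
###Output
_____no_output_____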
###Markdown
Mirrored StrategyWhen training on a single VM, one can train with either a single compute device or multiple compute devices on the same VM. With Vertex AI Distributed Training you can specify both the number of compute devices for the VM instance and the type of compute devices: CPU, GPU.Vertex AI Distributed Training supports `tf.distribute.MirroredStrategy` for TensorFlow models. To enable training across multiple compute devices on the same VM, you do the following additional steps in your Python training script:1. Set the tf.distribute.MirroredStrategy.2. Compile the model within the scope of tf.distribute.MirroredStrategy. *Note:* This tells MirroredStrategy which variables to mirror across your compute devices.3. Increase the global batch size to num_devices * per-device batch size.During training, the distribution of batches and the updates to the model parameters are synchronized across the compute devices. A minimal sketch of this pattern appears after the next code cell. Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_image_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
! rm -rf custom/logs
! rm -rf custom/trainer/__pycache__
###Output
_____no_output_____
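###Markdown
As referenced above, here is a minimal, self-contained sketch of the three MirroredStrategy steps. The `model_fn()` builder is a hypothetical stand-in for the model construction done in `trainer/task.py`; treat this as an illustration, not part of the training package.
###Code
# Minimal sketch of the MirroredStrategy pattern described above. model_fn() is
# a hypothetical stand-in for the tutorial's model, used only to keep the sketch runnable.
import tensorflow as tf


def model_fn():
    # Trivial binary classifier standing in for the real model architecture.
    inputs = tf.keras.Input(shape=(4,))
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(inputs)
    return tf.keras.Model(inputs, outputs)


strategy = tf.distribute.MirroredStrategy()  # 1. set the strategy
print("Replicas in sync:", strategy.num_replicas_in_sync)

PER_DEVICE_BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = PER_DEVICE_BATCH_SIZE * strategy.num_replicas_in_sync  # 3. scale the batch size

with strategy.scope():  # 2. build and compile inside the strategy scope
    sketch_model = model_fn()
    sketch_model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
        loss=tf.keras.losses.BinaryCrossentropy(),
        metrics=[tf.keras.metrics.BinaryAccuracy(name="accuracy")],
    )
# sketch_model.fit(...) would then be called with a dataset batched at GLOBAL_BATCH_SIZE.
###Output
_____no_output_____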
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_chicago.tar.gz
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a `CustomTrainingJob`.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/testing"
CMDARGS = [
"--epochs=5",
"--batch_size=16",
"--distribute=mirrored",
"--experiment=chicago",
"--run=test",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Delete the modelThe method 'delete()' will delete the model.
###Code
model.delete()
###Output
_____no_output_____
###Markdown
Hyperparameter tuningNext, you perform hyperparameter tuning with the training package. The training package has some additions that make the same package usable for both hyperparameter tuning, as well as local testing and full cloud training:- Command-Line: - `tuning`: indicates to use the HyperTune service as a callback during training.- `train()`: If tuning is set, creates and adds a callback to HyperTune service. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define worker pool specification for hyperparameter tuning jobNext, define the worker pool specification. Since the learning rate and batch size are being tuned, you do not pass them as command-line arguments here. The Vertex AI Hyperparameter Tuning service will pick values for both the learning rate and batch size during trials, which it will pass along as command-line arguments.
###Code
CMDARGS = [
"--epochs=5",
"--distribute=mirrored",
# "--experiment=chicago",
# "--run=tune",
# "--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--tuning=True",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_chicago.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Create a custom jobUse the class `CustomJob` to create a custom job, such as for hyperparameter tuning, with the following parameters:- `display_name`: A human readable name for the custom job.- `worker_pool_specs`: The specification for the corresponding VM instances.
###Code
job = aip.CustomJob(
display_name="chicago_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
###Output
_____no_output_____
###Markdown
Create a hyperparameter tuning jobUse the class `HyperparameterTuningJob` to create a hyperparameter tuning job, with the following parameters:- `display_name`: A human readable name for the custom job.- `custom_job`: The worker pool spec from this custom job applies to the CustomJobs created in all the trials.- `metric_spec`: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric ('minimize' or 'maximize').- `parameter_spec`: The parameters to optimize. The dictionary key is the parameter name, which is passed to your training job as a command-line keyword argument, and the dictionary value is the parameter specification.- `search_algorithm`: The search algorithm to use: `grid`, `random` and `None`. If `None` is specified, the `Vizier` service (Bayesian) is used.- `max_trial_count`: The maximum number of trials to perform.- `parallel_trial_count`: The number of trials to run in parallel.
###Code
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aip.HyperparameterTuningJob(
display_name="chicago_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_loss": "minimize",
},
parameter_spec={
"lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"batch_size": hpt.DiscreteParameterSpec([16, 32, 64, 128, 256], scale="linear"),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
###Output
_____no_output_____
###Markdown
Run the hyperparameter tuning jobUse the `run()` method to execute the hyperparameter tuning job.
###Code
hpt_job.run()
###Output
_____no_output_____
###Markdown
Best trialNow look at which trial was the best:
###Code
best = (None, None, None, float("inf"))
for trial in hpt_job.trials:
    # Keep track of the best outcome (lowest val_loss, since the metric is minimized)
    if float(trial.final_measurement.metrics[0].value) < best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
###Output
_____no_output_____
###Markdown
Delete the hyperparameter tuning jobThe method 'delete()' will delete the hyperparameter tuning job.
###Code
hpt_job.delete()
###Output
_____no_output_____
###Markdown
Save the best hyperparameter values
###Code
LR = best[2]
BATCH_SIZE = int(best[1])
###Output
_____no_output_____
###Markdown
Create and run custom training jobTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training jobA custom training job is created with the `CustomPythonPackageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the custom training job.- `container_uri`: The training container image.- `python_package_gcs_uri`: The location of the Python training package as a tarball.- `python_module_name`: The relative path to the training script in the Python package.- `model_serving_container_image_uri`: The container image for deploying the model.*Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
###Code
DISPLAY_NAME = "chicago_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_NAME}/trainer_chicago.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
###Output
_____no_output_____
###Markdown
Run the custom Python package training jobNext, you run the custom job to start the training job by invoking the method `run()`. The parameters are the same as when running a `CustomTrainingJob`.*Note:* The parameter `service_account` is set so that the experiment initialization step `aip.init(experiment="...")` has the necessary permissions to access the Vertex AI Metadata Store.
###Code
MODEL_DIR = BUCKET_NAME + "/trained"
FULL_EPOCHS = 100
CMDARGS = [
f"--epochs={FULL_EPOCHS}",
f"--lr={LR}",
f"--batch_size={BATCH_SIZE}",
"--distribute=mirrored",
"--experiment=chicago",
"--run=full",
"--project=" + PROJECT_ID,
"--model-id=" + MODEL_ID,
"--dataset-id=" + DATASET_ID,
"--evaluate=True",
]
model = job.run(
model_display_name="chicago_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True,
)
###Output
_____no_output_____
###Markdown
Delete a custom training jobAfter a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
###Code
job.delete()
###Output
_____no_output_____
###Markdown
Get the experiment resultsNext, you use the experiment name as a parameter to the method `get_experiment_df()` to get the results of the experiment as a pandas dataframe.
###Code
EXPERIMENT_NAME = "chicago"
experiment_df = aip.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME]
experiment_df.T
###Output
_____no_output_____
###Markdown
Review the custom model evaluation resultsNext, you review the evaluation metrics built into the training package.
###Code
METRICS = MODEL_DIR + "/model/metrics.txt"
! gsutil cat $METRICS
###Output
_____no_output_____
###Markdown
Delete the TensorBoard instanceNext, delete the TensorBoard instance.
###Code
tensorboard.delete()
vertex_custom_model = model
model = tf.keras.models.load_model(MODEL_DIR + "/model")
###Output
_____no_output_____
###Markdown
Add a serving functionNext, you add a serving function to your model for online and batch prediction. This allows prediction requests to be sent in raw format (unpreprocessed), either as a serialized TF.Example or JSONL object. The serving function will then preprocess the prediction request into the transformed format expected by the model.
###Code
%%writefile custom/trainer/serving.py
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import logging
def _get_serve_features_fn(model, tft_output):
"""Returns a function that accept a dictionary of features and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_features_fn(raw_features):
"""Returns the output to be used in the serving signature."""
transformed_features = model.tft_layer(raw_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_features_fn
def _get_serve_tf_examples_fn(model, tft_output, feature_spec):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tft_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
for key in list(feature_spec.keys()):
if key not in features:
feature_spec.pop(key)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
probabilities = model(transformed_features)
return {"scores": probabilities}
return serve_tf_examples_fn
def construct_serving_model(
model, serving_model_dir, metadata
):
global features
schema_location = metadata['schema']
features = metadata['numeric_features'] + metadata['categorical_features'] + metadata['embedding_features']
print("FEATURES", features)
tft_output_dir = metadata["transform_artifacts_dir"]
schema = tfdv.load_schema_text(schema_location)
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
tft_output = tft.TFTransformOutput(tft_output_dir)
# Drop features that were not used in training
features_input_signature = {
feature_name: tf.TensorSpec(
shape=(None, 1), dtype=spec.dtype, name=feature_name
)
for feature_name, spec in feature_spec.items()
if feature_name in features
}
signatures = {
"serving_default": _get_serve_features_fn(
model, tft_output
).get_concrete_function(features_input_signature),
"serving_tf_example": _get_serve_tf_examples_fn(
model, tft_output, feature_spec
).get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")
),
}
logging.info("Model saving started...")
model.save(serving_model_dir, signatures=signatures)
logging.info("Model saving completed.")
###Output
_____no_output_____
###Markdown
Construct the serving modelNow construct the serving model and store the serving model to your Cloud Storage bucket.
###Code
os.chdir("custom")
from trainer import serving
SERVING_MODEL_DIR = BUCKET_NAME + "/serving_model"
serving.construct_serving_model(
model=model, serving_model_dir=SERVING_MODEL_DIR, metadata=metadata
)
serving_model = tf.keras.models.load_model(SERVING_MODEL_DIR)
os.chdir("..")
###Output
_____no_output_____
###Markdown
Test the serving model locally with tf.Example dataNext, test the layer interface in the serving model for tf.Example data.
###Code
EXPORTED_TFREC_PREFIX = metadata["exported_tfrec_prefix"]
file_names = tf.data.TFRecordDataset.list_files(
EXPORTED_TFREC_PREFIX + "/data-*.tfrecord"
)
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures["serving_tf_example"](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
###Output
_____no_output_____
###Markdown
Test the serving model locally with JSONL dataNext, test the layer interface in the serving model for JSONL data.
###Code
schema = tfdv.load_schema_text(metadata["schema"])
feature_spec = tft.tf_metadata.schema_utils.schema_as_feature_spec(schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures["serving_default"](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
###Output
_____no_output_____
###Markdown
Upload the serving model to a Vertex AI Model resourceNext, you upload your serving custom model artifacts to Vertex AI to convert into a managed Vertex AI Model resource.
###Code
vertex_serving_model = aip.Model.upload(
display_name="chicago_" + TIMESTAMP,
artifact_uri=SERVING_MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
labels={"user_metadata": BUCKET_NAME[5:]},
sync=True,
)
###Output
_____no_output_____
###Markdown
Evaluate the serving modelNext, evaluate the serving model with the evaluation (test) slices. For an apples-to-apples comparison, you use the same evaluation slices for both the custom model and the AutoML model. Since your evaluation slices and metrics may be custom, we recommend:- Send each evaluation slice as a Vertex AI Batch Prediction Job.- Use a custom evaluation script to evaluate the results from the batch prediction job.
###Code
SERVING_OUTPUT_DATA_DIR = BUCKET_NAME + "/batch_eval"
EXPORTED_JSONL_PREFIX = metadata["exported_jsonl_prefix"]
MIN_NODES = 1
MAX_NODES = 1
job = vertex_serving_model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name="chicago_" + TIMESTAMP,
gcs_source=EXPORTED_JSONL_PREFIX + "*.jsonl",
gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
###Output
_____no_output_____
###Markdown
Perform custom evaluation metricsAfter the batch job has completed, you input the results and target labels to your custom evaluation script. For demonstration purposes, we just display the results of the batch prediction.
###Code
batch_dir = ! gsutil ls $SERVING_OUTPUT_DATA_DIR
batch_dir = batch_dir[0]
outputs = ! gsutil ls $batch_dir
errors = outputs[0]
results = outputs[1]
print("errors")
! gsutil cat $errors
print("results")
! gsutil cat $results | head -n10
model = async_model
###Output
_____no_output_____
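###Markdown
As a minimal illustration of the custom evaluation step described above (this helper is not part of the original tutorial), the sketch below scores a batch prediction results file. It assumes each JSONL line has the form `{"instance": {..., "tip_bin": 0 or 1}, "prediction": {"scores": [logit]}}`, matching the serving signature defined earlier that returns a raw logit under the key `scores`; adjust the field names and threshold if your output format differs.
###Code
import json
def score_batch_predictions(results_uri, label_key="tip_bin"):
    """Compute a simple accuracy over a batch prediction results file (illustrative sketch)."""
    correct, total = 0, 0
    with tf.io.gfile.GFile(results_uri, "r") as f:
        for line in f:
            record = json.loads(line)
            label = int(record["instance"][label_key])
            # Assumption: the serving signature returns a single raw logit under "scores",
            # so threshold at 0; use an argmax instead if your model emits per-class scores.
            predicted = int(record["prediction"]["scores"][0] > 0.0)
            correct += int(predicted == label)
            total += 1
    return correct / total if total else float("nan")
# Example (uncomment to score the results file listed above):
# print("accuracy:", score_batch_predictions(results, label_key="tip_bin"))
###Output
_____no_output_____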
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model training has finished, you can review the evaluation scores for it using the `list_model_evaluations()` method. This method will return an iterator for each evaluation slice.
###Code
model_evaluations = model.list_model_evaluations()
for model_evaluation in model_evaluations:
print(model_evaluation.to_dict())
###Output
_____no_output_____
###Markdown
Compare metric results with AutoML baselineFinally, you make a decision if the current experiment produces a custom model that is better than the AutoML baseline, as follows: - Compare the evaluation results for each evaluation slice between the custom model and the AutoML model. - Weight the results according to your business purposes. - Add up the result and make a determination if the custom model is better. Store evaluation results for custom modelNext, you use the labels field to store user metadata containing the custom metrics information.
###Code
import json
metadata = {}
metadata["train_eval_metrics"] = METRICS
metadata["custom_eval_metrics"] = "[you-fill-this-in]"
with tf.io.gfile.GFile("gs://" + BUCKET_NAME[5:] + "/metadata.jsonl", "w") as f:
json.dump(metadata, f)
!gsutil cat $BUCKET_NAME/metadata.jsonl
###Output
_____no_output_____
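###Markdown
The comparison logic described above can be sketched as a small helper. Everything below is illustrative only: the slice names, weights, and metric values are placeholders you would replace with the actual per-slice evaluation results of the custom and AutoML models.
###Code
def weighted_score(metrics_per_slice, weights):
    """Weighted sum of per-slice metrics (higher is assumed to be better)."""
    return sum(weights[s] * metrics_per_slice[s] for s in weights)
# Placeholder slice names, weights and metric values (for illustration only).
slice_weights = {"overall": 0.5, "slice_a": 0.3, "slice_b": 0.2}
custom_results = {"overall": 0.91, "slice_a": 0.88, "slice_b": 0.86}
automl_results = {"overall": 0.90, "slice_a": 0.89, "slice_b": 0.84}
custom_is_better = weighted_score(custom_results, slice_weights) > weighted_score(
    automl_results, slice_weights
)
print("Custom model beats AutoML baseline:", custom_is_better)
###Output
_____no_output_____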
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = False
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
E2E ML on GCP: MLOps stage 2 : experimentation OverviewThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation. DatasetThe dataset used for this tutorial is the [Chicago Taxi](https://www.kaggle.com/chicago/chicago-taxi-trips-bq). The version of the dataset you will use in this tutorial is stored in a public BigQuery table. The trained model predicts whether someone would leave a tip for a taxi fare. ObjectiveIn this tutorial, you create an MLOps stage 2: experimentation process.This tutorial uses the following Vertex AI services:- `Vertex Datasets`- `Vertex AutoML`- `Vertex Training`- `Vertex TensorBoard`The steps performed include:- Review the `Dataset` resource created during stage 1.- Train an AutoML tabular binary classifier model in the background.- Construct a custom training job for the `Dataset` resource.- ?? Hyperparameter Tuning- Train the custom model.- Evaluate the custom model.- ?? Tensorboard- Wait for AutoML training job to complete.- Evaluate the AutoML model. InstallationsInstall the packages required to execute the MLOps notebooks (*one time* only).
###Code
ONCE_ONLY = False
if ONCE_ONLY:
! pip3 install -U tensorflow==2.5 $USER_FLAG
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
! pip3 install -U tensorflow-io==0.18 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
! pip3 install --upgrade google-cloud-logging $USER_FLAG
! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
! pip3 install --upgrade pyarrow $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Import TensorFlowImport the TensorFlow package into your Python environment.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Import TensorFlow TransformImport the TensorFlow Transform (TFT) package into your Python environment.
###Code
import tensorflow_transform as tft
###Output
_____no_output_____
###Markdown
Initialize Vertex SDK for PythonInitialize the Vertex SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, location=REGION)
###Output
_____no_output_____
###Markdown
Retrieve the dataset from stage 1Next, retrieve the dataset you created during stage 1 with the helper function `find_dataset()`. This helper function finds all the datasets whose display name matches the specified prefix and import format (e.g., bq). Finally it sorts the matches by create time and returns the latest version.
###Code
def find_dataset(display_name_prefix, import_format):
matches = []
datasets = aip.TabularDataset.list()
for dataset in datasets:
if dataset.display_name.startswith(display_name_prefix):
try:
if (
"bq" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["bigquerySource"]
):
matches.append(dataset)
if (
"csv" == import_format
and dataset.to_dict()["metadata"]["inputConfig"]["gcsSource"]
):
matches.append(dataset)
except:
pass
create_time = None
for match in matches:
if create_time is None or match.create_time > create_time:
create_time = match.create_time
dataset = match
return dataset
dataset = find_dataset("Chicago Taxi", "bq")
print(dataset)
###Output
_____no_output_____
###Markdown
Load dataset's user metadataLoad the user metadata for the dataset.
###Code
import json
with tf.io.gfile.GFile(
"gs://" + dataset.labels["user_metadata"] + "/metadata.jsonl", "r"
) as f:
metadata = json.load(f)
print(metadata)
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional): Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="chicago_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method, when completed, returns the `Model` resource.The execution of the training pipeline will take up to 180 minutes.
###Code
async_model = dag.run(
dataset=dataset,
model_display_name="chicago_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column="tip_bin",
sync=False,
)
###Output
_____no_output_____
###Markdown
Create experiment for tracking training-related metadataSet up tracking of the parameters (configuration) and metrics (results) for each experiment:- `aip.init()` - Create an experiment instance- `aip.start_run()` - Track a specific run within the experiment.Learn more about [Introduction to Vertex ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction).
###Code
EXPERIMENT_NAME = "chicago-" + TIMESTAMP
aip.init(experiment=EXPERIMENT_NAME)
aip.start_run("run-1")
###Output
_____no_output_____
###Markdown
Create a Vertex TensorBoard instanceCreate a Vertex TensorBoard instance to use TensorBoard in conjunction with Vertex Training for custom model training.Learn more about [Get started with Vertex TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview).
###Code
TENSORBOARD_DISPLAY_NAME = "chicago_" + TIMESTAMP
tensorboard = aip.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
###Output
_____no_output_____
###Markdown
Create the input layer for custom modelNext, you create the input layer for your custom tabular model, based on the data types of each feature.
###Code
from tensorflow.keras.layers import Input
def create_model_inputs(numeric_features=None, categorical_features=None):
inputs = {}
for feature_name in numeric_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.float32)
for feature_name in categorical_features:
inputs[feature_name] = Input(name=feature_name, shape=[], dtype=tf.int64)
return inputs
input_layers = create_model_inputs(
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
)
print(input_layers)
###Output
_____no_output_____
###Markdown
Create the binary classifier custom modelNext, you create your binary classifier custom tabular model.
###Code
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Concatenate, Dense, experimental
def create_binary_classifier(
input_layers, tft_output, metaparams, numeric_features, categorical_features
):
layers = []
for feature_name in input_layers:
if feature_name in categorical_features:
vocab_size = tft_output.vocabulary_size_by_name(feature_name)
onehot_layer = experimental.preprocessing.CategoryEncoding(
num_tokens=vocab_size,
output_mode="binary",
name=f"{feature_name}_onehot",
)(input_layers[feature_name])
layers.append(onehot_layer)
elif feature_name in numeric_features:
numeric_layer = tf.expand_dims(input_layers[feature_name], -1)
layers.append(numeric_layer)
else:
pass
joined = Concatenate(name="combines_inputs")(layers)
feedforward_output = Sequential(
[Dense(units, activation="relu") for units in metaparams["hidden_units"]],
name="feedforward_network",
)(joined)
logits = Dense(units=1, name="logits")(feedforward_output)
model = Model(inputs=input_layers, outputs=[logits])
return model
TRANSFORM_ARTIFACTS_DIR = metadata["transform_artifacts_dir"]
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
metaparams = {"hidden_units": [128, 64]}
aip.log_params(metaparams)
model = create_binary_classifier(
input_layers,
tft_output,
metaparams,
numeric_features=metadata["numeric_features"],
categorical_features=metadata["categorical_features"],
)
model.summary()
###Output
_____no_output_____
###Markdown
Visualize the model architectureNext, visualize the architecture of the custom model.
###Code
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
###Output
_____no_output_____
###Markdown
Construct the training package Package layoutBefore you start training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.py - other Python scriptsThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Chicago Taxi tabular binary classifier\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
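###Markdown
The package above does not yet include `trainer/task.py`. As a minimal sketch of what such an entry point could look like (the flag names and wiring below are illustrative assumptions, not the exact script used by this tutorial), it typically parses command-line arguments and then calls the data loading and training helpers from the package.
###Code
import argparse
def build_arg_parser():
    """Sketch of an argument parser for a trainer/task.py entry point (illustrative)."""
    parser = argparse.ArgumentParser(description="Chicago Taxi custom training task (sketch)")
    parser.add_argument("--train-data-pattern", type=str, required=True)
    parser.add_argument("--val-data-pattern", type=str, required=True)
    parser.add_argument("--label-column", type=str, default="tip_bin")
    parser.add_argument("--num-epochs", type=int, default=5)
    parser.add_argument("--batch-size", type=int, default=64)
    parser.add_argument("--learning-rate", type=float, default=0.001)
    parser.add_argument("--log-dir", type=str, default="/tmp/logs")
    return parser
def run_task(argv=None):
    """Parse arguments; a real task.py would then build, compile and train the model."""
    args = build_arg_parser().parse_args(argv)
    print("Would train with:", vars(args))
    # In a real task.py: create the model, then call the compile()/train() helpers
    # defined later in trainer/train.py with these arguments.
# Dry run with placeholder arguments (no training is performed here).
run_task(["--train-data-pattern", "gs://your-bucket/train/data-*.gz",
          "--val-data-pattern", "gs://your-bucket/val/data-*.gz"])
###Output
_____no_output_____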
###Markdown
Get feature specification for the preprocessed dataNext, create the feature specification for the preprocessed data.
###Code
transform_feature_spec = tft_output.transformed_feature_spec()
print(transform_feature_spec)
###Output
_____no_output_____
###Markdown
Load the transformed data into a tf.data.DatasetNext, you load the gzip TFRecords from Cloud Storage into a `tf.data.Dataset` generator. These functions are re-used when training the custom model using `Vertex Training`, so you save them to the Python training package.
###Code
%%writefile custom/trainer/data.py
import tensorflow as tf
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
def get_dataset(file_pattern, feature_spec, label_column, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
feature_spec: a dictionary of feature specifications.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=feature_spec,
label_key=label_column,
reader=_gzip_reader_fn,
num_epochs=1,
drop_final_batch=True,
)
return dataset
from custom.trainer import data
TRANSFORMED_DATA_PREFIX = metadata["transformed_data_prefix"]
LABEL_COLUMN = metadata["label_column"]
train_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/train/data-*.gz"
val_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/val/data-*.gz"
test_data_file_pattern = TRANSFORMED_DATA_PREFIX + "/test/data-*.gz"
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, LABEL_COLUMN, batch_size=3
).take(1):
for key in input_features:
print(
f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}"
)
print(f"target: {target.numpy().tolist()}")
###Output
_____no_output_____
###Markdown
Train the modelNext, you write the training script and test it locally before packaging it for Vertex Training. Create training scriptThe script below, saved into the training package, defines helper functions to compile, train, and evaluate the custom model.
###Code
%%writefile custom/trainer/train.py
from custom.trainer import data
import tensorflow as tf
import logging
def compile(model, hyperparams):
optimizer = tf.keras.optimizers.Adam(learning_rate=hyperparams["learning_rate"])
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
metrics = [tf.keras.metrics.BinaryAccuracy(name="accuracy")]
model.compile(optimizer=optimizer,loss=loss, metrics=metrics)
return model
def train(
model,
hyperparams,
train_data_dir,
val_data_dir,
label_column,
transformed_feature_spec,
log_dir
):
train_dataset = data.get_dataset(
train_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
val_dataset = data.get_dataset(
val_data_dir,
transformed_feature_spec,
label_column,
batch_size=hyperparams["batch_size"],
)
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
early_stop = tf.keras.callbacks.EarlyStopping(
monitor=hyperparams["early_stop"]["monitor"], patience=hyperparams["early_stop"]["patience"], restore_best_weights=True
)
logging.info("Model training started...")
history = model.fit(
train_dataset,
epochs=hyperparams["num_epochs"],
validation_data=val_dataset,
callbacks=[tensorboard, early_stop]
)
logging.info("Model training completed.")
return history
def evaluate(
model,
hyperparams,
test_data_dir,
label_column,
transformed_feature_spec
):
logging.info("Model evaluation started...")
test_dataset = data.get_dataset(
test_data_dir,
transformed_feature_spec,
label_column,
hyperparams["batch_size"],
)
evaluation_metrics = model.evaluate(test_dataset)
logging.info("Model evaluation completed.")
return evaluation_metrics
###Output
_____no_output_____
###Markdown
Train the model locallyNext, test the training package locally, by training with just a few epochs:- `num_epochs`: The number of epochs to pass to the training package.- `compile()`: Compile the model for training.- `train()`: Train the model.
###Code
import logging
from custom.trainer import train
TENSORBOARD_LOG_DIR = "./logs"
logging.getLogger().setLevel(logging.INFO)
hyperparams = {}
hyperparams["learning_rate"] = 0.001
aip.log_params(hyperparams)
train.compile(model, hyperparams)
trainparams = {}
trainparams["num_epochs"] = 5
trainparams["batch_size"] = 64
trainparams["early_stop"] = {"monitor": "val_loss", "patience": 5}
aip.log_params(trainparams)
train.train(
model,
trainparams,
train_data_file_pattern,
val_data_file_pattern,
LABEL_COLUMN,
transform_feature_spec,
TENSORBOARD_LOG_DIR,
)
###Output
_____no_output_____
###Markdown
Evaluate the model locallyNext, test the evaluation portion of the training package:- `evaluate()`: Evaluate the model.
###Code
from custom.trainer import train
evalparams = {}
evalparams["batch_size"] = 64
metrics = {}
metrics["loss"], metrics["acc"] = train.evaluate(
model, evalparams, test_data_file_pattern, LABEL_COLUMN, transform_feature_spec
)
print("ACC", metrics["acc"], "LOSS", metrics["loss"])
aip.log_metrics(metrics)
model = async_model
###Output
_____no_output_____
###Markdown
Wait for completion of AutoML training jobNext, wait for the AutoML training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the AutoML training job is completed.
###Code
model.wait()
###Output
_____no_output_____
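###Markdown
As a sketch of the blocking alternative mentioned above (not executed here, since it would launch a second AutoML training job), the same `run()` call can simply be made with `sync=True`, after which no separate `wait()` is needed.
###Code
RUN_BLOCKING_EXAMPLE = False  # guard so this sketch does not start another training job
if RUN_BLOCKING_EXAMPLE:
    model = dag.run(
        dataset=dataset,
        model_display_name="chicago_" + TIMESTAMP,
        training_fraction_split=0.8,
        validation_fraction_split=0.1,
        test_fraction_split=0.1,
        budget_milli_node_hours=8000,
        disable_early_stopping=False,
        target_column="tip_bin",
        sync=True,  # block until the AutoML training job completes
    )
###Output
_____no_output_____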
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you ran the training job, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=chicago_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____ |
src/preprocess/LA2-closed.ipynb | ###Markdown
ODs
###Code
zip_file = ZipFile('../../data/LA/mobile-phone/travel_demand_LA.zip')
zip_file.infolist()
types = {str(x): np.float32 for x in range(0,24)}
types['O_Tract'] = str
types['D_Tract'] = str
types['HBW'] = np.float32
types['HBO'] = np.float32
types['NHB'] = np.float32
types['lon1'] = np.float32
types['lat1'] = np.float32
types['lon2'] = np.float32
types['lat2'] = np.float32
travel_df = pd.read_csv(zip_file.open('travel_demand_LA.csv'), dtype=types)
travel_df = travel_df.drop(['lon1', 'lat1', 'lon2', 'lat2'], axis=1)
travel_df['tot'] = travel_df[[str(x) for x in range(0,24)]].sum(axis=1)
travel_df = travel_df[['O_Tract', 'D_Tract', 'HBW', 'HBO', 'NHB', 'tot']]
travel_df.head()
travel_df = travel_df[['O_Tract', 'D_Tract', 'HBW', 'HBO', 'NHB', 'tot']]
travel_df = travel_df[(travel_df['HBW']!=0) | (travel_df['HBO']!=0) | (travel_df['NHB']!=0) | (travel_df['tot']!=0)]
travel_df.head()
# Map tract-level ODs to spatial groups
od_sp_groups_df = pd.merge(travel_df[['O_Tract', 'D_Tract', 'HBO', 'NHB', 'tot']], blocks2spid_unique_df.rename(columns={'sp_id': 'o_sp_id'}), left_on='O_Tract', right_on='GEOID').drop(['GEOID'], axis=1)
od_sp_groups_df.loc[:, 'tot'] = od_sp_groups_df['tot'] / od_sp_groups_df['count']
od_sp_groups_df = od_sp_groups_df.drop(['count', 'O_Tract'], axis=1)
od_sp_groups_df = od_sp_groups_df.groupby(['o_sp_id', 'D_Tract'], as_index=False).sum()
od_sp_groups_df.head()
od_sp_groups_df = pd.merge(od_sp_groups_df, blocks2spid_unique_df.rename(columns={'sp_id': 'd_sp_id'}), left_on='D_Tract', right_on='GEOID').drop(['GEOID'], axis=1)
od_sp_groups_df.loc[:, 'tot'] = od_sp_groups_df['tot'] / od_sp_groups_df['count']
od_sp_groups_df = od_sp_groups_df.drop(['count', 'D_Tract'], axis=1)
od_sp_groups_df = od_sp_groups_df.groupby(['o_sp_id', 'd_sp_id'], as_index=False).sum()
od_sp_groups_df.head()
all_sp_ids = sorted([str(x) for x in list(set(blocks2spid_df.sp_id.values))])
###Output
_____no_output_____
###Markdown
Fix missing links
###Code
import itertools
tuples = list(itertools.product(all_sp_ids, all_sp_ids))
od_sp_groups_df['o_sp_id'] = od_sp_groups_df['o_sp_id'].astype(str)
od_sp_groups_df['d_sp_id'] = od_sp_groups_df['d_sp_id'].astype(str)
od_sp_groups_df = od_sp_groups_df.set_index(['o_sp_id', 'd_sp_id']).reindex(tuples).fillna(0).reset_index()
od_sp_groups_df.head()
# Inspect OD pairs where tot == 0 (links filled in by the reindex above)
od_sp_groups_df[od_sp_groups_df.tot == 0].head()
###Output
_____no_output_____
###Markdown
Extras
###Code
od_extra_df = pd.merge(travel_df[['O_Tract', 'D_Tract', 'HBO', 'NHB', 'tot']], blocks2spid_unique_df.rename(columns={'sp_id': 'o_sp_id'}), left_on='O_Tract', right_on='GEOID', how='left').drop(['GEOID'], axis=1)
od_extra_df = pd.merge(od_extra_df, blocks2spid_unique_df.rename(columns={'sp_id': 'd_sp_id'}), left_on='D_Tract', right_on='GEOID', how='left').drop(['GEOID'], axis=1)
od_extra_df.head()
od_extra_df = od_extra_df[(od_extra_df.o_sp_id.isnull() & ~od_extra_df.d_sp_id.isnull()) | ((~od_extra_df.o_sp_id.isnull()) & od_extra_df.d_sp_id.isnull())]
# from out to LA
in_extra_df = od_extra_df[od_extra_df.o_sp_id.isnull()].groupby('d_sp_id', as_index=False).sum()
in_extra_df['d_sp_id'] = in_extra_df['d_sp_id'].astype(int).astype(str)
in_extra_df = in_extra_df[['d_sp_id', 'HBO', 'NHB', 'tot']]
in_extra_df['ntrips'] = in_extra_df['tot']
in_extra_df.head()
# from LA to out
out_extra_df = od_extra_df[od_extra_df.d_sp_id.isnull()].groupby('o_sp_id', as_index=False).sum()
out_extra_df['o_sp_id'] = out_extra_df['o_sp_id'].astype(int).astype(str)
out_extra_df = out_extra_df[['o_sp_id', 'HBO', 'NHB', 'tot']]
out_extra_df['ntrips'] = out_extra_df['tot']
out_extra_df.head()
###Output
_____no_output_____
###Markdown
Blocks_attract
###Code
blocks2bid_unique_df = blocks2spid_df.drop_duplicates(subset=['bid'])[['bid', 'GEOID', 'count']]
blocks2bid_unique_df.head()
od_bid_groups_df = pd.merge(travel_df[['O_Tract', 'D_Tract', 'HBO', 'NHB', 'tot']], blocks2bid_unique_df.rename(columns={'bid': 'o_bid'}), left_on='O_Tract', right_on='GEOID').drop(['GEOID'], axis=1)
od_bid_groups_df.loc[:, 'tot'] = od_bid_groups_df['tot'] / od_bid_groups_df['count']
od_bid_groups_df = od_bid_groups_df.drop(['count', 'O_Tract'], axis=1)
od_bid_groups_df = od_bid_groups_df.groupby(['o_bid', 'D_Tract'], as_index=False).sum()
od_bid_groups_df = pd.merge(od_bid_groups_df, blocks2bid_unique_df.rename(columns={'bid': 'd_bid'}), left_on='D_Tract', right_on='GEOID').drop(['GEOID'], axis=1)
od_bid_groups_df.loc[:, 'tot'] = od_bid_groups_df['tot'] / od_bid_groups_df['count']
od_bid_groups_df = od_bid_groups_df.drop(['count', 'D_Tract'], axis=1)
od_bid_groups_df = od_bid_groups_df.groupby(['o_bid', 'd_bid'], as_index=False).sum()
#od_bid_groups_df = od_bid_groups_df.set_index('d_bid')
od_bid_groups_df.head()
sql = """
SELECT sp_id::text, unnest(lower_ids)::text as bid FROM spatial_groups where city='{city}'
""".format(city=CITY)
blocks_spatial_df = pd.read_sql(sql, engine)
blocks_spatial_df.head()
attract_df = od_sp_groups_df[['o_sp_id']].drop_duplicates().set_index('o_sp_id')
attract_df['attract'] = 0.
for i, spid in enumerate(attract_df.index.values):
bids = blocks_spatial_df[blocks_spatial_df.sp_id == spid]['bid'].values
s = od_bid_groups_df[(od_bid_groups_df.d_bid.isin(bids)) & (~(od_bid_groups_df.o_bid.isin(bids)))]['NHB'].sum()
attract_df.loc[spid, 'attract'] = s
attract_df = attract_df.reset_index()
attract_df.head()
###Output
_____no_output_____
###Markdown
Save "other" trips to out and to in
###Code
trips_other = od_sp_groups_df[['o_sp_id', 'd_sp_id', 'tot', 'NHB']].copy() #[od_sp_groups_df.o_sp_id == od_sp_groups_df.d_sp_id]
trips_other['ntrips'] = trips_other['tot']
trips_other = trips_other.drop(['tot'], axis=1)
trips_other.head()
trips_attract = trips_other[trips_other.o_sp_id != trips_other.d_sp_id].copy()
trips_attract = trips_attract.rename(columns={'NHB': 'attract'}).drop('o_sp_id', axis=1)
trips_attract = trips_attract.groupby('d_sp_id', as_index=False).sum()
trips_attract = trips_attract.rename(columns={'d_sp_id': 'o_sp_id'}).drop('ntrips', axis=1)
trips_attract.head()
trips_attract = pd.concat((trips_attract, in_extra_df.rename(columns={'d_sp_id': 'o_sp_id', 'NHB': 'attract'})[['o_sp_id', 'attract']]))
trips_attract = trips_attract.groupby('o_sp_id', as_index=False).sum()
trips_attract.head()
trips_out = trips_other[trips_other.o_sp_id != trips_other.d_sp_id][['o_sp_id', 'NHB', 'ntrips']]
trips_out = pd.concat((trips_out, out_extra_df[['o_sp_id', 'NHB', 'ntrips']]))
trips_out = trips_out.groupby('o_sp_id', as_index=False).sum()
trips_out = trips_out.rename(columns={'ntrips': 'nout'})
trips_out = trips_out.drop(['NHB'], axis=1)
trips_out.head()
trips_in = trips_other[trips_other.o_sp_id == trips_other.d_sp_id].groupby('o_sp_id', as_index=False).sum()
trips_in = trips_in.rename(columns={'ntrips': 'nin'})
trips_in = trips_in.drop(['NHB'], axis=1)
trips_in.head()
df_all = pd.merge(trips_in, trips_out, on='o_sp_id')
df_all = pd.merge(trips_attract, df_all, on='o_sp_id')
df_all.head()
df_all.to_sql('temptable3', engine, if_exists='replace', index=False)
sql = """
INSERT INTO spatial_groups_trips (sp_id, city, spatial_name, num_Otrips_in, num_Otrips_out, attract)
SELECT c.o_sp_id::int, '{city}', 'ego', c.nin, c.nout, c.attract
FROM temptable3 c
""".format(city=CITY)
result = engine.execute(text(sql))
###Output
_____no_output_____
###Markdown
Save OD
###Code
ODs_matrix_df = od_sp_groups_df.copy()
ODs_matrix_df = ODs_matrix_df.pivot(index='o_sp_id', columns='d_sp_id', values='tot')
ODs_matrix_df.head()
ODs_matrix_df['city'] = CITY
ODs_matrix_df.to_csv('../../data/generated_files/{city}_ODs.csv'.format(city=CITY))
###Output
_____no_output_____
###Markdown
Ambient population
###Code
sql = """
SELECT b.original_id, bid, sp_id
FROM blocks_group b
INNER JOIN spatial_groups as sp on b.bid = sp.core_id
WHERE b.city='{city}'
""".format(city=CITY)
blocks2coreid_df = pd.read_sql(sql, engine)
blocks2coreid_df['GEOID'] = blocks2coreid_df['original_id'].str[0:11]
blocks2coreid_df.head()
njoins_coreid_df = blocks2coreid_df[['bid', 'GEOID']].drop_duplicates().groupby('GEOID').size().to_frame('count').reset_index()
njoins_coreid_df.head()
blocks2coreid_df = pd.merge(blocks2coreid_df, njoins_coreid_df, on='GEOID')
blocks2coreid_df.head()
ambient_df = pd.read_csv('../../data/LA/mobile-phone/hourly_stay_LA.csv', dtype={'tract': str})
ambient_df[ambient_df.tract == '06037101110'].head()
blocks2coreid_unique_df = blocks2coreid_df.drop_duplicates(subset=['GEOID', 'sp_id'])[['GEOID', 'sp_id', 'count']]
blocks2coreid_unique_df.head()
ambient_sp_id_df = pd.merge(ambient_df, blocks2coreid_df[['GEOID', 'count', 'bid']].drop_duplicates(subset=['GEOID', 'bid', 'count']).rename(columns={'GEOID': 'tract'}), on='tract')
columns = [str(x) for x in range(0,24)]
for c in columns:
ambient_sp_id_df.loc[:, c] = ambient_sp_id_df.loc[:, c]/ambient_sp_id_df['count']
ambient_sp_id_df.head()
ambient_sp_id_df = ambient_sp_id_df.groupby('bid', as_index=False).sum()
ambient_sp_id_df['ambient_avg'] = ambient_sp_id_df[[str(x) for x in range(0,24)]].mean(axis=1)
ambient_sp_id_df.head()
ambient_sp_id_df[['bid', 'ambient_avg']].to_sql('temptable3', engine, if_exists='replace', index=False)
sql = """
INSERT INTO ambient_population (bid, city, num_people)
SELECT c.bid, '{city}', c.ambient_avg
FROM temptable3 c
""".format(city=CITY)
result = engine.execute(text(sql))
###Output
_____no_output_____ |
notebooks/regression_tree.ipynb | ###Markdown
Decision Tree Getting StartedStart by loading two binary classification datasets - the spiral dataset and the ION dataset.
###Code
import pandas as pd
import numpy as np
from numpy.matlib import repmat
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The code below generates spiral data using the trigonometric functions sine and cosine, then splits the data into train and test segments.
###Code
def spiraldata(N=300):
# generate a vector of "radius" values
r = np.linspace(1,2*np.pi,N)
# generate a curve that draws circles with increasing radius
X_train1 = np.array([np.sin(2 * r) * r, np.cos(2 * r) * r]).T
X_train2 = np.array([np.sin(2 * r + np.pi) * r, np.cos(2 * r + np.pi) * r]).T
X_train = np.concatenate([X_train1, X_train2], axis=0)
y_train = np.concatenate([np.ones(N), -1 * np.ones(N)])
X_train = X_train + np.random.randn(X_train.shape[0], X_train.shape[1]) * 0.2
# Now sample alternating values to generate the test and train sets
X_test = X_train[::2,:]
y_test = y_train[::2]
X_train = X_train[1::2,:]
y_train = y_train[1::2]
return X_train, y_train, X_test, y_test
###Output
_____no_output_____
###Markdown
We can plot xTrSpiral to see the curve generated by the function above:
###Code
xTrSpiral, yTrSpiral, xTeSpiral, yTeSpiral = spiraldata(150)
plt.scatter(xTrSpiral[:,0], xTrSpiral[:,1], 30, yTrSpiral)
plt.show()
###Output
_____no_output_____
###Markdown
The following code loads the ION dataset.
###Code
# load in some binary test data (labels are -1, +1)
data = pd.read_csv('../data/ion.csv', header=None)
print(data.head())
# Load the features and labels
X = data.drop(34, axis=1).values
y = data.loc[:,34].values
# Create train and test data
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
print(set(y))
le.fit(list(set(y)))
y = le.transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, stratify=y)
###Output
_____no_output_____
###Markdown
Implement Regression TreesPart One: Implement sqimpurityFirst, we implement the function sqimpurity which takes as input a vector of $n$ labels and outputs the corresponding squared loss impurity$$\sum_{i=1}^{n} (y_i-\bar{y})^2 \textrm{ where } \bar{y}=\frac{1}{n}\sum_{i=1}^{n} y_i$$
###Code
def sqimpurity(y_train):
"""This function computes the weighted variance of the labels."""
N, = y_train.shape
assert N > 0 # must have at least one sample
# compute the mean
ybar = np.mean(y_train)
impurity = np.sum(np.power(y_train - ybar, 2))
return impurity
###Output
_____no_output_____
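###Markdown
As a quick sanity check (not part of the original exercise), a pure label vector should have zero impurity, while a balanced vector of two +1s and two -1s has mean 0 and impurity 4.
###Code
print(sqimpurity(np.array([1.0, 1.0, 1.0])))         # pure labels -> 0.0
print(sqimpurity(np.array([1.0, 1.0, -1.0, -1.0])))  # mean 0 -> 1+1+1+1 = 4.0
###Output
_____no_output_____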
###Markdown
Part Two: Implement sqsplitNow we implement sqsplit, which takes as input a data set with labels and computes the best feature and cut-value of an optimal split based on the squared error impurity. The sqsplit function takes as input a data set of row vectors and a label vector and outputs a feature dimension, a cut threshold, and the impurity loss of this best split. The cut value should be the average of the values in the dimension where two datapoints are split. To find the best split, evaluate all possible splits and then search for the split that yields the minimum loss.Remember that we evaluate the quality of a split of a parent set $S_P$ into two sets $S_L$ and $S_R$ by the weighted impurity of the two branches, i.e.$\frac{\left|S_L\right|}{\left|S_P\right|}I\left(S_L\right)+\frac{\left|S_R\right|}{\left|S_P\right|}I\left(S_R\right)$In the case of the squared loss, this becomes:$\frac{1}{|S_P|}\sum_{(x,y)\in S_L}(y-\bar{y}_{S_L})^2 +\frac{1}{|S_P|}\sum_{(x,y)\in S_R}(y-\bar{y}_{S_R})^2$Note: Avoid splitting on datapoints with same value in a dimension.
###Code
def sqsplit(X_train, y_train):
"""This function finds the best feature, cut value, and loss value."""
N, D = X_train.shape
assert D > 0 # must have at least one dimension
assert N > 1 # must have at least two samples
# initialize return values
bestloss = np.inf
feature = np.inf
cut = np.inf
# iterate over values
for d in range(D):
# sort the arrays
x = X_train[:,d].flatten()
idx = np.argsort(x)
x = x[idx]
y = y_train[idx]
for k in range(0,len(y)-1):
if x[k] == x[k+1]:
continue
left = y[:k+1]
right = y[k+1:]
loss = sqimpurity(left) + sqimpurity(right)
if loss < bestloss:
bestloss = loss
feature = d
cut = np.mean([x[k], x[k+1]])
return feature, cut, bestloss
###Output
_____no_output_____
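###Markdown
To see the split search in action (an illustrative check, not part of the original exercise), consider a tiny 1-D dataset whose labels flip sign between x=2 and x=3: the best cut should be the midpoint 2.5, with zero impurity on both sides.
###Code
toy_X = np.array([[1.0], [2.0], [3.0], [4.0]])
toy_y = np.array([-1.0, -1.0, 1.0, 1.0])
toy_feature, toy_cut, toy_loss = sqsplit(toy_X, toy_y)
print(f"best feature: {toy_feature}, cut: {toy_cut}, loss: {toy_loss}")  # expect 0, 2.5, 0.0
###Output
_____no_output_____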
###Markdown
Part Three: Implement cartIn this section, we implement the function cart, which returns a regression tree based on the minimum squared loss splitting rule. We use the function sqsplit to make splits. The TreeNode class below represents our tree. Note that the nature of CART trees implies that every node has exactly 0 or 2 children. Tree StructureThe tree structure comes with distinct leaves and nodes. Leaves have two fields, parent (another node) and prediction (a numerical value).Nodes have six fields: left: node describing left subtree right: node describing right subtree parent: the parent node feature: index of feature to cut cut: cutoff value c (points with x[feature] <= c go to the left branch, points with x[feature] > c go to the right branch) prediction: prediction at this node (This should be the average of the labels at this node)
###Code
class TreeNode(object):
def __init__(self, left, right, feature, cut, prediction):
self.left = left
self.right = right
self.feature = feature
self.cut = cut
self.prediction = prediction
def cart(X_train, y_train):
"""This function builds a CART tree."""
n,d = X_train.shape
# initialize
prediction = np.mean(y_train)
x_u = len(set(X_train.reshape(-1)))
y_u = len(np.unique(y_train))
if x_u == 1 or y_u == 1:
tree = TreeNode(None, None, None, None, prediction)
else:
feature, cut, bestloss = sqsplit(X_train, y_train)
if feature == np.inf or cut == np.inf or bestloss == np.inf:
tree = TreeNode(None, None, None, None, prediction)
return tree
# generate left and right branch
X_train_l = X_train[X_train[:,feature] <= cut,:]
X_train_r = X_train[X_train[:,feature] > cut,:]
y_train_l = y_train[X_train[:,feature] <= cut]
y_train_r = y_train[X_train[:,feature] > cut]
tree = TreeNode(None, None, feature, cut, prediction)
tree.left = cart(X_train_l, y_train_l)
tree.right = cart(X_train_r, y_train_r)
tree.right.parent = tree
tree.left.parent = tree
return tree
###Output
_____no_output_____
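###Markdown
The node fields described above can be inspected directly on a small tree (an illustrative check, not part of the original exercise): on the toy data below, the root splits on feature 0 at 2.5 and its two children are leaves predicting the class means -1 and +1.
###Code
toy_X = np.array([[1.0], [2.0], [3.0], [4.0]])
toy_y = np.array([-1.0, -1.0, 1.0, 1.0])
toy_tree = cart(toy_X, toy_y)
print("root feature:", toy_tree.feature, "cut:", toy_tree.cut)  # 0, 2.5
print("left leaf prediction:", toy_tree.left.prediction)        # -1.0
print("right leaf prediction:", toy_tree.right.prediction)      # 1.0
###Output
_____no_output_____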
###Markdown
Part Four: Implement evaltreeImplement the function evaltree, which evaluates a decision tree on a given test data set.
###Code
def evaltree(root, X_test):
"""This function evaluates X_test using a decision tree root."""
# initialize and iterate
n,d = X_test.shape
pred = np.zeros(n)
for i in range(n):
r = root
x = X_test[i,:].flatten()
while r.left is not None and r.right is not None:
feature = r.feature
cut = r.cut
if x[feature] <= cut:
r = r.left
else:
r = r.right
else:
pred[i] = r.prediction
return pred
###Output
_____no_output_____
###Markdown
Visualize TreeThe following code defines a function visclassifier(), which plots the decision boundary of a classifier in 2 dimensions. Execute the following code to see what the decision boundary of your tree looks like on the ion data set.
###Code
def visclassifier(fun, X_train, y_train, w=None, b=0):
y_train = np.array(y_train).flatten()
symbols = ['ko', 'kx']
marker_symbols = ['o', 'x']
mycolors = [[0.5, 0.5, 1], [1, 0.5, 0.5]]
# get the unique values from labels array
classvals = np.unique(y_train)
plt.figure()
# return 300 evenly spaced numbers over this interval
res = 300
xrange = np.linspace(min(X_train[:, 0]), max(X_train[:, 0]), res)
yrange = np.linspace(min(X_train[:, 1]), max(X_train[:, 1]), res)
# repeat this matrix 300 times for both axes
pixelX = repmat(xrange, res, 1)
pixelY = repmat(yrange, res, 1).T
X_test = np.array([pixelX.flatten(), pixelY.flatten()]).T
# test all of these points on the grid
testpreds = fun(X_test)
# reshape it back together to make our grid
Z = testpreds.reshape(res, res)
# Z[0,0] = 1 # optional: scale the colors correctly
# fill in the contours for these predictions
plt.contourf(pixelX, pixelY, np.sign(Z), colors=mycolors)
# creates x's and o's for training set
for idx, c in enumerate(classvals):
plt.scatter(X_train[y_train == c,0], X_train[y_train == c,1], marker=marker_symbols[idx], color='k')
if w is not None:
w = np.array(w).flatten()
alpha = -1 * b / (w ** 2).sum()
plt.quiver(w[0] * alpha, w[1] * alpha, w[0], w[1], linewidth=2, color=[0,1,0])
plt.axis('tight')
plt.show()
tree = cart(xTrSpiral, yTrSpiral) # compute tree on training data
visclassifier(lambda X: evaltree(tree,X), xTrSpiral, yTrSpiral)
print('Training error: %.4f' % np.mean(np.sign(evaltree(tree,xTrSpiral)) != yTrSpiral))
print('Testing error: %.4f' % np.mean(np.sign(evaltree(tree,xTeSpiral)) != yTeSpiral))
###Output
_____no_output_____ |
Notebooks/.ipynb_checkpoints/CC_Classifier-checkpoint.ipynb | ###Markdown
Cortical control in SCCwm-DBS Classifying Target Engagement OverviewHow can we be sure we're stimulating the SCCwm?What signal can we use to optimize our therapeutic parameters, including location, voltage, and frequency?In this notebook we'll address this question by developing a classifier capable of specifically identifying SCCwm-DBS. Methods
###Code
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import DBSpace as dbo
from DBSpace.visualizations import EEG_Viz
from DBSpace.control import proc_dEEG
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import PCA, FastICA
sns.set_context('paper')
sns.set(font_scale=4)
sns.set_style('white')
###Output
Using DBSpace LATEST
Importing from DBSpace.control...
###Markdown
Binary Classification
###Code
all_pts = ['906','907','908']
EEG_analysis = proc_dEEG.proc_dEEG(pts=all_pts,procsteps='conservative',condits=['OnT','OffT'])
#%%
EEG_analysis.train_binSVM(mask=False)
EEG_analysis.oneshot_binSVM()
EEG_analysis.bootstrap_binSVM()
EEG_analysis.OnT_dr(data_source=EEG_analysis.SVM_coeffs)
EEG_analysis.learning_binSVM()
###Output
DOING BINARY - Learning Curve
###Markdown
Salient Channels
###Code
EEG_analysis.analyse_binSVM()
###Output
/home/virati/py_37_env/lib/python3.7/site-packages/mne/utils/docs.py:830: DeprecationWarning: Function read_montage is deprecated; ``read_montage`` is deprecated and will be removed in v0.20. Please use ``read_dig_fif``, ``read_dig_egi``, ``read_custom_montage``, or ``read_dig_captrack`` to read a digitization based on your needs instead; or ``make_standard_montage`` to create ``DigMontage`` based on template; or ``make_dig_montage`` to create a ``DigMontage`` out of np.arrays
warnings.warn(msg, category=DeprecationWarning)
/home/virati/py_37_env/lib/python3.7/site-packages/mne/utils/docs.py:813: DeprecationWarning: Class Montage is deprecated; Montage class is deprecated and will be removed in v0.20. Please use DigMontage instead.
warnings.warn(msg, category=DeprecationWarning)
|
Understanding_Deep_learning_using_CNN.ipynb | ###Markdown
UNDERSTANDING DEEP LEARNING USING CNN ABSTRACT**Context:**Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."Zalando seeks to replace the original MNIST dataset.**Content:**Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255. The training and test data sets have 785 columns. The first column consists of the class labels (see above), and represents the article of clothing. The rest of the columns contain the pixel-values of the associated image.To locate a pixel on the image, suppose that we have decomposed x as x = i * 28 + j, where i and j are integers between 0 and 27. The pixel is located on row i and column j of a 28 x 28 matrix.For example, pixel31 indicates the pixel that is in the fourth column from the left, and the second row from the top. **Labels:**Each training and test example is assigned to one of the following labels:0 T-shirt/top1 Trouser2 Pullover3 Dress4 Coat5 Sandal6 Shirt7 Sneaker8 Bag9 Ankle boot **Overview:**Each row is a separate imageColumn 1 is the class label.Remaining columns are pixel numbers (784 total).Each value is the darkness of the pixel (1 to 255)Kaggle link to dataset - https://www.kaggle.com/zalando-research/fashionmnist
###Code
#Importing necessary libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, Dropout, MaxPooling2D
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly import tools
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
#Metadata
IMG_ROWS = 28
IMG_COLS = 28
NUM_CLASSES = 10
TEST_SIZE = 0.2
RANDOM_STATE = 0
NO_EPOCHS = 2
BATCH_SIZE = 128
#Reading data from file
train_data = pd.read_csv('fashion-mnist_train.csv')
test_data = pd.read_csv('fashion-mnist_test.csv')
print("Fashion MNIST train - rows:",train_data.shape[0]," columns:", train_data.shape[1])
print("Fashion MNIST test - rows:",test_data.shape[0]," columns:", test_data.shape[1])
labels = {0 : "T-shirt/top", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat",
5: "Sandal", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle Boot"}
def get_classes_distribution(data):
# Get the count for each label
label_counts = data["label"].value_counts()
# Get total number of samples
total_samples = len(data)
# Count the number of items in each class
for i in range(len(label_counts)):
label = labels[label_counts.index[i]]
count = label_counts.values[i]
percent = (count / total_samples) * 100
print("{:<20s}: {} or {}%".format(label, count, percent))
get_classes_distribution(train_data)
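# Illustrative extra (not part of the original notebook): the abstract above explains that each
# row stores a 28x28 image flattened into 784 pixel columns (x = i * 28 + j). The snippet below
# reshapes one training row back into an image and displays it with its label.
sample_row = train_data.iloc[0]
sample_image = sample_row.values[1:].reshape(IMG_ROWS, IMG_COLS)  # drop the label column
plt.imshow(sample_image, cmap="gray")
plt.title(labels[sample_row["label"]])
plt.show()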
def data_preprocessing(raw):
out_y = keras.utils.to_categorical(raw.label, NUM_CLASSES)
num_images = raw.shape[0]
x_as_array = raw.values[:,1:]
x_shaped_array = x_as_array.reshape(num_images, IMG_ROWS, IMG_COLS, 1)
out_x = x_shaped_array / 255
return out_x, out_y
X, y = data_preprocessing(train_data)
X_test, y_test = data_preprocessing(test_data)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE)
print("Fashion MNIST train - rows:",X_train.shape[0]," columns:", X_train.shape[1:4])
print("Fashion MNIST valid - rows:",X_val.shape[0]," columns:", X_val.shape[1:4])
print("Fashion MNIST test - rows:",X_test.shape[0]," columns:", X_test.shape[1:4])
#Creating an object of Sequential class
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
model.summary()
#Fitting the model
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.337430632686615
Test accuracy: 0.8795
###Markdown
Changing activation function to sigmoid
###Code
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='sigmoid',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='sigmoid'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='sigmoid'))
model.add(Flatten())
model.add(Dense(128, activation='sigmoid'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.5890627470970153
Test accuracy: 0.7659
###Markdown
Here, we see that changing the hidden and convolutional layer activation functions to sigmoid decreases the test accuracy to about 76.6%, compared to about 88% with relu (the output layer keeps softmax in both cases). Hence, we keep relu and softmax as the activation functions and change the loss function to cosine proximity to see its effect. Changing loss function to cosine proximity
###Code
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: -0.901581658744812
Test accuracy: 0.8841
###Markdown
Here, changing the loss function to cosine proximity leads to a test accuracy of about 88.4%, slightly higher than the roughly 88.0% obtained with the cross entropy loss function. Now, let us keep both cosine proximity and cross entropy as candidate loss functions and change the number of epochs to see the effect on accuracy. Changing number of Epochs to 5
###Code
NO_EPOCHS = 5
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: -0.9201883563041687
Test accuracy: 0.9064
###Markdown
For the cosine proximity loss function, we see that the test accuracy increased to about 90.6% with a loss of about -0.92. Now, let us see the effect of 5 epochs with the cross entropy loss function. Changing number of Epochs to 5
###Code
NO_EPOCHS = 5
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.25092555755376816
Test accuracy: 0.9076
###Markdown
Here, we see that the accuracy does not increase much; the test loss of 0.25 is on a different scale, since cross entropy and cosine proximity losses are not directly comparable. Hence, we continue with the cosine proximity loss function. Now, let us increase the number of epochs and see if it has any effect. Changing number of Epochs to 7
###Code
NO_EPOCHS = 7
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
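# Aside (sketch, left as comments): instead of hand-picking the epoch count, Keras can
# stop training automatically once the validation loss stops improving:
# from keras.callbacks import EarlyStopping
# early_stop = EarlyStopping(monitor='val_loss', patience=2)
# model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=20, verbose=1,
#           validation_data=(X_val, y_val), callbacks=[early_stop])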
###Output
Test loss: -0.9222356794357299
Test accuracy: 0.9074
###Markdown
Increasing the epochs to 7 raises the test accuracy to 0.9074, compared with 0.9064 for 5 epochs. The gain is small and the accuracy appears to plateau, so it is better to stay with 5 epochs rather than 7, which only adds training time. Let us now change the optimizer (gradient estimator) to stochastic gradient descent and see its impact. Changing Gradient estimation to Stochastic Gradient Descent
###Code
NO_EPOCHS = 5
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='SGD',
metrics=['accuracy'])
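# Note: passing the string 'SGD' uses Keras' default hyperparameters (no momentum).
# A tuned instance (sketch, left as comments) would likely be more competitive:
# from keras.optimizers import SGD
# model.compile(loss=keras.losses.cosine_proximity,
#               optimizer=SGD(lr=0.01, momentum=0.9),
#               metrics=['accuracy'])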
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.48522751784324647
Test accuracy: 0.8225
###Markdown
Here, the test accuracy drops to about 82% and the loss rises to 0.48. Hence, the Adam optimizer clearly outperforms plain stochastic gradient descent in this setup. Let us change the optimizer to Adamax and see its impact. Changing Gradient estimation to Adamax
###Code
NO_EPOCHS = 5
model = Sequential()
# Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='he_normal',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='Adamax',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: -0.9144185781478882
Test accuracy: 0.8985
###Markdown
Here, the test accuracy recovers to about 89% and the loss drops back to -0.91. However, the Adam optimizer gave 90% accuracy with a loss of -0.92, so Adam remains the better choice for configuring the learning process. Let us now change the number of convolutional layers in the network architecture and see its impact. Changing Network architecture - Number of layers to 2
###Code
NO_EPOCHS = 5
model = Sequential()
# Add convolution 2D
# model.add(Conv2D(32, kernel_size=(3, 3),
# activation='relu',
# kernel_initializer='he_normal',
# input_shape=(IMG_ROWS, IMG_COLS, 1)))
# model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
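# Note: with the first conv/pooling block commented out, the feature map reaching
# Flatten() is larger, so the Dense(128) layer holds many more parameters than in the
# 3-layer variant; model.summary() can be used to compare the parameter counts.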
model.compile(loss=keras.losses.cosine_proximity,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: -0.9315527265548706
Test accuracy: 0.9176
###Markdown
Here, reducing the network to two convolutional layers gives an accuracy of about 91%, an improvement over the 90% of the three-layer architecture, so we will consider the 2-layer architecture. Let us now reduce it further to a single convolutional layer and see its impact. Changing Network architecture - Number of layers to 1
###Code
NO_EPOCHS = 5
model = Sequential()
#Add convolution 2D
# model.add(Conv2D(32, kernel_size=(3, 3),
# activation='relu',
# kernel_initializer='he_normal',
# input_shape=(IMG_ROWS, IMG_COLS, 1)))
# model.add(MaxPooling2D((2, 2)))
# model.add(Conv2D(64,
# kernel_size=(3, 3),
# activation='relu'))
# model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: -0.9220789421081543
Test accuracy: 0.9078
###Markdown
We see that the accuracy falls back to about 90% with a test loss of -0.92. Hence, we keep the 2-layer architecture, which gives the best accuracy for this problem. Changing Network initializer to uniform
###Code
NO_EPOCHS = 5
model = Sequential()
#Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='uniform',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: -0.9103037516593933
Test accuracy: 0.8936
###Markdown
With the kernel initializer changed to uniform, the accuracy is lower, at about 89%. Let us change it to random uniform and see whether that has a positive impact on the accuracy. Changing Network initializer to random uniform
###Code
NO_EPOCHS = 5
model = Sequential()
#Add convolution 2D
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
kernel_initializer='random_uniform',
input_shape=(IMG_ROWS, IMG_COLS, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(loss=keras.losses.cosine_proximity,
optimizer='adam',
metrics=['accuracy'])
train_model = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=NO_EPOCHS,
verbose=1,
validation_data=(X_val, y_val))
#Model evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: -0.9117093379020691
Test accuracy: 0.8959
|
ASSOCIATION_RULE_LEARNING/ASSOCIATION_RULE_LEARNING/DIC_GreenD.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
from sklearn.externals import joblib
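# Note: sklearn.externals.joblib was removed in scikit-learn 0.23+; on newer versions
# replace this with a direct `import joblib`.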
path = "C:\\Windows\\System32\\"
equipment_list = ["ONN_OFF_equip1.pkl","ONN_OFF_equip2.pkl","ONN_OFF_equip3.pkl","ONN_OFF_equip4.pkl","ONN_OFF_equip5.pkl","ONN_OFF_equip6.pkl","ONN_OFF_equip7.pkl","ONN_OFF_equip8.pkl","ONN_OFF_equip9.pkl"]
def equipment_loader(file_path,equipments):
equip_dict = {}
for equipment in equipments:
paths = file_path+equipment
equip_data = joblib.load(file_path+equipment)
equipment = equipment[:-4]
equip_data = list(equip_data)
equip_data = [ x for x in equip_data if x!= '0' and x!= 0]
equip_dict[equipment] = list(equip_data)
return equip_dict
equipments_data = equipment_loader(path,equipment_list)
def DateString():
date = input("Enter Date : ")
month = input("Enter Month : ")
year = input("Enter Year : ")
Date= [date,month,year]
Date ='-'.join(Date)
return Date
sample = equipments_data["ONN_OFF_equip1"][:2000]
print(sample)
date ='2014-02-15'
day_index = [x for x in list(sample) if date in x ]
print(day_index)
def DayDataExtraction(Data,equipment_list,Date):
day_data = {}
for equipment in equipment_list:
equipment = equipment[:-4]
e_data = list(Data[equipment])
e_data = [ x for x in e_data if Date in x]
day_data[equipment] = e_data
cleaned_day_data = {}
for equipment in day_data.keys():
if len(day_data[equipment])==0:
continue
else:
cleaned_day_data[equipment] = day_data[equipment]
no_of_equipment_w = len(cleaned_day_data.keys())
print("No of equipment working on {%s} are :"%(Date),end = " ")
print(no_of_equipment_w)
return cleaned_day_data
day_data = DayDataExtraction(equipments_data,equipment_list,'2014-02-20')
def HourDataExtraction(Data,Date,HourTime,equipment_list):
day_ = DayDataExtraction(Data,equipment_list,Date)
#print(day_)
temp_list = [Date,HourTime[:2]]
temp_time = ' '.join(temp_list)
hourwise_data = {}
for equip in day_.keys():
temp_data = list(Data[equip])
temp_data = [ x for x in temp_data if temp_time in x]
if len(temp_data)==0:
continue
else:
hourwise_data[equip] = temp_data
print("No of equipments working at %s %s are :"%(Date,HourTime),end =" ")
print(len(hourwise_data.keys()))
return hourwise_data
hde = HourDataExtraction(equipments_data,'2014-02-20','12:00:00',equipment_list)
def TimeDataExtraction(day_,Data,Date,Time,equipment_list):
day_ = day_
#day_ = DayDataExtraction(Data,equipment_list,Date)
#print(day_)
temp_list = [Date,Time]
temp_time = ' '.join(temp_list)
_data = {}
for equip in day_.keys():
temp_data = list(Data[equip])
temp_data = [ x for x in temp_data if temp_time in x]
if len(temp_data)==0:
continue
else:
_data[equip] = temp_data
print("No of equipments working at %s %s are :"%(Date,Time),end =" ")
print(len(_data.keys()))
return list(_data.keys())
def DayTimeGenerator():
hour = ['00','01','02','03','04','05','06','07','08','09','10','11','12','13','14','15','16','17','18','19','20','21','22','23']
minute = ['00','05']
for x in range(10,60):
if x%5==0:
minute.append(str(x))
second = '00'
Time = []
for hr in hour:
for min in minute:
temp = [hr,min,second]
temp = ':'.join(temp)
Time.append(temp)
return Time
Time = DayTimeGenerator()
def MonthDateGenerator(year ,month,days_in_the_month):
year = year
month = month
days = days_in_the_month
day = ['01','02','03','04','05','06','07','08','09']
for i in range(10,days+1):
day.append(str(i))
Dates = []
for d in day:
temp = [year,month,d]
temp = '-'.join(temp)
Dates.append(temp)
return Dates
Dates = MonthDateGenerator('2015','02',30)
def DataExtractor(equipment_list,equipments_data,Dates,Time):
transactions = []
no_of_eq = len(equipment_list)
for date in Dates:
day_ = DayDataExtraction(equipments_data,equipment_list,date)
for time in Time:
temp_list = TimeDataExtraction(day_,equipments_data,date,time,equipment_list)
if len(temp_list)==0:
continue
#if len(temp_list)<no_of_eq:
# diff = no_of_eq-len(temp_list)
# for i in range(1,diff+1):
# temp_list.append('nan')
else:
transactions.append(temp_list)
return transactions
apriori_data = DataExtractor(equipment_list,equipments_data,Dates,Time)
print(apriori_data)
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
te = TransactionEncoder()
data = te.fit(apriori_data).transform(apriori_data)
data = pd.DataFrame(data, columns=te.columns_)
apriori(data, min_support=0.15,use_colnames=True)
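# A possible next step (sketch, left as comments): turn the frequent itemsets into
# association rules with mlxtend's association_rules helper.
# from mlxtend.frequent_patterns import association_rules
# frequent_itemsets = apriori(data, min_support=0.15, use_colnames=True)
# rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.6)
# rules.sort_values("lift", ascending=False).head()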
###Output
_____no_output_____ |
_notebooks/2020-10-07-Keras_CNN_Malaria_custom_data.ipynb | ###Markdown
"Keras CNN - Malaria with image augmentation"- title: "Keras CNN: Malaria with image augmentation"- toc: true- badges: False- comments: true- author: Sam Treacy- categories: [keras, cnn, tensorflow, image_augmentation, classification, python]
###Code
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.image import imread
pwd
my_data_dir = '/Users/samtreacy/OneDrive - TietoEVRY/00_Analysis/Jupyter/Tensorflow_Cert/Summaries_to_Study' + '/DATA/Malaria_cells'
my_data_dir
os.listdir(my_data_dir)
train_path = my_data_dir + '/train'
test_path = my_data_dir + '/test'
test_path
os.listdir(train_path)
os.listdir(train_path + '/parasitized')[0]
para_cell = train_path + '/parasitized/' + 'C189P150ThinF_IMG_20151203_142224_cell_84.png'
para_cell = imread(para_cell)
plt.imshow(para_cell)
para_cell.shape
###Output
_____no_output_____
###Markdown
Image count
###Code
len( os.listdir(train_path + '/parasitized/'))
len( os.listdir(train_path + '/uninfected/'))
###Output
_____no_output_____
###Markdown
Average image dimension
###Code
dim1 = []
dim2 = []
for image_filename in os.listdir(train_path + '/uninfected/'):
img = imread(train_path + '/uninfected/' + image_filename)
d1, d2, colours = img.shape
dim1.append(d1)
dim2.append(d2)
sns.jointplot(x=dim1, y=dim2);
np.mean(dim1)
np.mean(dim2)
###Output
_____no_output_____
###Markdown
Set default image shape
###Code
image_shape = (130, 130, 3)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
image_gen = ImageDataGenerator(rotation_range=20,
width_shift_range=0.1,
height_shift_range=0.1,
rescale =1/255,
shear_range=0.1,
zoom_range=0.1,
horizontal_flip=True,
fill_mode='nearest'
)
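# The generator rescales pixels to [0, 1] and applies small random rotations, shifts,
# shears, zooms and horizontal flips each time an image is drawn, so the network rarely
# sees exactly the same picture twice.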
plt.imshow(para_cell);
plt.imshow(image_gen.random_transform(para_cell));
###Output
_____no_output_____
###Markdown
Flow images from Directory
###Code
image_gen.flow_from_directory(train_path)
image_gen.flow_from_directory(test_path)
###Output
Found 2600 images belonging to 2 classes.
###Markdown
Create Model
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Flatten, Conv2D, Dropout, MaxPool2D
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3),
activation='relu', input_shape=image_shape))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(filters=32, kernel_size=(3,3),
activation='relu', input_shape=image_shape))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(filters=32, kernel_size=(3,3),
activation='relu', input_shape=image_shape))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss = 'binary_crossentropy',
optimizer = 'adam',
metrics = ['accuracy'])
model.summary()
image_shape
###Output
_____no_output_____
###Markdown
Early Stopping
###Code
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2)
###Output
_____no_output_____
###Markdown
Train Model
###Code
batch_size = 256
train_image_gen = image_gen.flow_from_directory(train_path,
target_size=image_shape[:2],
color_mode='rgb',
batch_size=batch_size,
shuffle = False,
class_mode='binary')
test_image_gen = image_gen.flow_from_directory(test_path,
target_size=image_shape[:2],
color_mode='rgb',
batch_size=batch_size,
shuffle = False,
class_mode='binary')
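# Note: shuffle=False keeps the test generator's ordering fixed, so the predictions made
# later with predict_generator line up with test_image_gen.classes in the confusion matrix.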
train_image_gen.class_indices
model.fit_generator(train_image_gen, epochs=3,
validation_data=test_image_gen,
callbacks = [early_stop])
###Output
Epoch 1/3
98/98 [==============================] - 1195s 12s/step - loss: 0.3403 - accuracy: 0.8714 - val_loss: 0.1936 - val_accuracy: 0.9385
Epoch 2/3
98/98 [==============================] - 1857s 19s/step - loss: 0.2553 - accuracy: 0.9203 - val_loss: 0.2006 - val_accuracy: 0.9350
Epoch 3/3
98/98 [==============================] - 1208s 12s/step - loss: 0.1927 - accuracy: 0.9381 - val_loss: 0.2146 - val_accuracy: 0.9277
###Markdown
Save Model
###Code
from tensorflow.keras.models import load_model
model.save('malaria_detector.h5')
###Output
_____no_output_____
###Markdown
Evaluate Model
###Code
losses = pd.DataFrame(model.history.history)
losses[['accuracy', 'val_accuracy']].plot();
losses[['loss', 'val_loss']].plot();
model.metrics_names
model.evaluate_generator(test_image_gen)
# https://datascience.stackexchange.com/questions/13894/how-to-get-predictions-with-predict-generator-on-streaming-test-data-in-keras
pred_probabilities = model.predict_generator(test_image_gen, workers = 0)
pred_probabilities[1:10]
trueClass = test_image_gen.classes
predictions = pred_probabilities > 0.5
predictions
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(trueClass, predictions ))
confusion_matrix( test_image_gen.classes, predictions )
###Output
_____no_output_____
###Markdown
Predict Image
###Code
from tensorflow.keras.preprocessing import image
para_cell = train_path + '/parasitized/' + 'C189P150ThinF_IMG_20151203_142224_cell_84.png'
para_cell
my_image = image.load_img(para_cell, target_size=image_shape)
my_image
my_image = image.img_to_array(my_image)
my_image.shape
my_image.reshape(1, 130, 130, 3).shape
model.predict(my_image.reshape(1, 130, 130, 3))
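# Note: the generators above rescale pixel values by 1/255, but this manually loaded
# image is still in the 0-255 range; for a prediction consistent with training it
# should arguably be rescaled as well, e.g. model.predict(my_image.reshape(1, 130, 130, 3) / 255).
# The sigmoid output is the probability of the class mapped to index 1 in class_indices.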
train_image_gen.class_indices
test_image_gen.class_indices
###Output
_____no_output_____ |
vqe h2.ipynb | ###Markdown
References: https://arxiv.org/abs/1304.3061 and DOI 10.1103/PhysRevX.6.031007. Second Quantized Hamiltonian\begin{eqnarray*} \mathcal{H}(r)=h_0 + \sum_{pq} h_{pq}(r) a^{\dagger}_p a_q +\frac{1}{2} \sum_{pqrs} h_{pqrs}(r) a^{\dagger}_p a^{\dagger}_q a_r a_s\end{eqnarray*}\begin{eqnarray*} h_{pq}(r)=\int{d\mathbf{r}}\,\phi^*_p(\mathbf{r})\left(-\frac{1}{2}\nabla^2-\sum_{a}{\frac{Z_a}{\mathbf{r}_{a,\mathbf{r}}}}\right)\phi_q(\mathbf{r})\end{eqnarray*}\begin{eqnarray*} h_{pqrs}(r)=\int{d\mathbf{r_1}\,d\mathbf{r_2}}\,\phi^*_p(\mathbf{r_1})\phi^*_q(\mathbf{r_2})r_{1,2}^{-1}\phi_r(\mathbf{r_1})\phi_s(\mathbf{r_2})\end{eqnarray*}Jordan-Wigner transformation\begin{eqnarray*} a^{\dagger} = I^{\otimes j-1}\otimes \sigma_{-} \otimes \sigma_{z}^{\otimes N-j}\\ a = I^{\otimes j-1}\otimes \sigma_{+} \otimes \sigma_{z}^{\otimes N-j}\end{eqnarray*}
###Code
gates = Gates(1)
ID = gates.ID()
X = gates.X()
Y = gates.Y()
Z = gates.Z()
II = ID.kron(ID)
XX = X.kron(X)
YY = Y.kron(Y)
ZZ = Z.kron(Z)
ZI = Z.kron(ID)
IZ = ID.kron(Z)
sig_is = np.kron([1, 1], [1, -1])
sig_si = np.kron([1, -1], [1, 1])
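# Quick check of the Jordan-Wigner construction quoted above, using plain numpy
# (np is already in use here); variable names carry an 'm' suffix so they do not
# clash with the Gates objects defined earlier. Convention: sigma_- = |1><0|.
Xm = np.array([[0, 1], [1, 0]])
Ym = np.array([[0, -1j], [1j, 0]])
Zm = np.array([[1, 0], [0, -1]])
sigma_minus = (Xm - 1j * Ym) / 2        # |1><0|
sigma_plus = (Xm + 1j * Ym) / 2         # |0><1|
a1_dag = np.kron(sigma_minus, Zm)       # a†_1 = sigma_- ⊗ Z for N = 2 modes
a1 = np.kron(sigma_plus, Zm)            # a_1  = sigma_+ ⊗ Z
anticomm = a1 @ a1_dag + a1_dag @ a1    # equals the 4x4 identity, as required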
def repulsion_energy(Z1=1, Z2=1, r=75e-12):
Eh = 4.3597447222071e-18 # hartree energy
ep0 = 8.854187e-12
e = -1.602176634e-19
return (1/(4*pi*ep0)*(Z1*Z2*e**2)/r)/Eh
rep_energy = repulsion_energy()
print("Repulsion energy (Eh): %s"%rep_energy)
g0 = -0.4804
g1 = 0.3435
g2 = -0.4347
g3 = 0.5716
g4 = 0.0910
g5 = 0.0910
H = II*g0 + IZ*g1 + ZI*g2 + ZZ*g3 + XX*g4 + YY*g5
min(scipy.linalg.eig(H.get())[0])+rep_energy
def ansatz(reg, params):
n_qubits = len(reg)
depth = n_qubits
for i in range(depth):
for j in range(n_qubits):
if(j < n_qubits-1):
reg[j+1].CNOT(reg[j])
reg[i].RY(params[j])
def ansatz_2q(q1, q2, params):
q2.CNOT(q1)
q1.RY(params[0])
q2.RY(params[1])
q1.CNOT(q2)
q1.RY(params[2])
q2.RY(params[3])
q2.CNOT(q1)
q1.RY(params[4])
q2.RY(params[5])
def expectation_2q(params):
logicQuBit = LogicQuBit(2)
q1 = Qubit()
q2 = Qubit()
ansatz_2q(q1,q2,params)
psi = logicQuBit.getPsi()
return (psi.adjoint()*H*psi).get()[0][0]
minimum = minimize(expectation_2q, [0,0,0,0,0,0], method='Nelder-Mead', options={'xtol': 1e-10, 'ftol': 1e-10})
print(minimum.fun+rep_energy)
def expectation_value(measurements, base = np.array([1,-1,-1,1])):
probabilities = np.array(measurements)
expectation = np.sum(base * probabilities)
return expectation
def sigma_xx(params):
logicQuBit = LogicQuBit(2, first_left = False)
q1 = Qubit()
q2 = Qubit()
ansatz_2q(q1,q2,params)
# medidas em XX
q1.RY(-pi/2)
q2.RY(-pi/2)
result = logicQuBit.Measure([q1,q2])
result = expectation_value(result)
return result
def sigma_yy(params):
logicQuBit = LogicQuBit(2, first_left = False)
q1 = Qubit()
q2 = Qubit()
ansatz_2q(q1,q2,params)
# medidas em YY
q1.RX(pi/2)
q2.RX(pi/2)
result = logicQuBit.Measure([q1,q2])
result = expectation_value(result)
return result
def sigma_zz(params):
logicQuBit = LogicQuBit(2, first_left = False)
q1 = Qubit()
q2 = Qubit()
ansatz_2q(q1,q2,params)
result = logicQuBit.Measure([q1,q2])
zz = expectation_value(result)
iz = expectation_value(result, sig_is) # [zz, iz] = 0
zi = expectation_value(result, sig_si) # [zz, zi] = 0
return zz, iz, zi
def expectation_energy(params):
xx = sigma_xx(params)
yy = sigma_yy(params)
zz, iz, zi = sigma_zz(params)
result = g0 + g1*iz + g2*zi + g3*zz + g4*xx + g5*yy
return result
minimum = minimize(expectation_energy, [0,0,0,0,0,0], method='Nelder-Mead', options={'xtol': 1e-10, 'ftol': 1e-10})
print(minimum.fun+rep_energy)
def gradient(params, evaluate):
n_params = params.shape[0]
shift = pi/2
gradients = np.zeros(n_params)
for i in range(n_params):
#parameter shift rule
shift_vect = np.array([shift if j==i else 0 for j in range(n_params)])
shift_right = params + shift_vect
shift_left = params - shift_vect
expectation_right = evaluate(shift_right)
expectation_left = evaluate(shift_left)
gradients[i] = expectation_right - expectation_left
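        # Note: the standard parameter-shift rule includes a factor of 1/2,
        # i.e. grad = (E(θ+π/2) - E(θ-π/2)) / 2; omitting it here simply
        # rescales the effective learning rate used in the loop below.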
return gradients
params = np.random.uniform(-np.pi, np.pi, 6)
last_params = np.zeros(6)
lr = 0.1
err = 1
while err > 1e-5:
grad = gradient(params, expectation_energy)
params = params - lr*grad
err = abs(sum(params - last_params))
last_params = np.array(params)
print(err)
energy = expectation_energy(params)
energy = energy + rep_energy
print(energy)
###Output
(-1.1456294446036503+0j)
|
tutorials/LinearAlgebra/LinearAlgebra-Copy1.ipynb | ###Markdown
Introduction to Linear AlgebraThis is a tutorial designed to introduce you to the basics of linear algebra.Linear algebra is a branch of mathematics dedicated to studying the properties of matrices and vectors,which are used extensively in quantum computing to represent quantum states and operations on them.This tutorial doesn't come close to covering the full breadth of the topic, but it should be enough to get you comfortable with the main concepts of linear algebra used in quantum computing.This tutorial assumes familiarity with complex numbers; if you need a review of this topic, we recommend that you complete the [Complex Arithmetic](../ComplexArithmetic/ComplexArithmetic.ipynb) tutorial before tackling this one.This tutorial covers the following topics:* Matrices and vectors* Basic matrix operations* Operations and properties of complex matrices* Inner and outer vector products* Tensor product* Eigenvalues and eigenvectorsIf you need to look up some formulas quickly, you can find them in [this cheatsheet](https://github.com/microsoft/QuantumKatas/blob/main/quickref/qsharp-quick-reference.pdf). This notebook has several tasks that require you to write Python code to test your understanding of the concepts. If you are not familiar with Python, [here](https://docs.python.org/3/tutorial/index.html) is a good introductory tutorial for it.> The exercises use Python's built-in representation of complex numbers. Most of the operations (addition, multiplication, etc.) work as you expect them to. Here are a few notes on Python-specific syntax:>> * If `z` is a complex number, `z.real` is the real component, and `z.imag` is the coefficient of the imaginary component.> * To represent an imaginary number, put `j` after a real number: $3.14i$ would be `3.14j`.> * To represent a complex number, simply add a real number and an imaginary number.> * The built-in function `abs` computes the modulus of a complex number.>> You can find more information in the [official documentation](https://docs.python.org/3/library/cmath.html).Let's start by importing some useful mathematical functions and constants, and setting up a few things necessary for testing the exercises. **Do not skip this step.**Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac).
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
###Output
Success!
###Markdown
Part I. Matrices and Basic Operations Matrices and VectorsA **matrix** is set of numbers arranged in a rectangular grid. Here is a $2$ by $2$ matrix:$$A =\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$$A_{i,j}$ refers to the element in row $i$ and column $j$ of matrix $A$ (all indices are 0-based). In the above example, $A_{0,1} = 2$.An $n \times m$ matrix will have $n$ rows and $m$ columns, like so:$$\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}\end{bmatrix}$$A $1 \times 1$ matrix is equivalent to a scalar:$$\begin{bmatrix} 3 \end{bmatrix} = 3$$Quantum computing uses complex-valued matrices: the elements of a matrix can be complex numbers. This, for example, is a valid complex-valued matrix:$$\begin{bmatrix} 1 & i \\ -2i & 3 + 4i\end{bmatrix}$$Finally, a **vector** is an $n \times 1$ matrix. Here, for example, is a $3 \times 1$ vector:$$V = \begin{bmatrix} 1 \\ 2i \\ 3 + 4i \end{bmatrix}$$Since vectors always have a width of $1$, vector elements are sometimes written using only one index. In the above example, $V_0 = 1$ and $V_1 = 2i$. Matrix AdditionThe easiest matrix operation is **matrix addition**. Matrix addition works between two matrices of the same size, and adds each number from the first matrix to the number in the same position in the second matrix:$$\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}\end{bmatrix}+\begin{bmatrix} y_{0,0} & y_{0,1} & \dotsb & y_{0,m-1} \\ y_{1,0} & y_{1,1} & \dotsb & y_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n-1,0} & y_{n-1,1} & \dotsb & y_{n-1,m-1}\end{bmatrix}=\begin{bmatrix} x_{0,0} + y_{0,0} & x_{0,1} + y_{0,1} & \dotsb & x_{0,m-1} + y_{0,m-1} \\ x_{1,0} + y_{1,0} & x_{1,1} + y_{1,1} & \dotsb & x_{1,m-1} + y_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} + y_{n-1,0} & x_{n-1,1} + y_{n-1,1} & \dotsb & x_{n-1,m-1} + y_{n-1,m-1}\end{bmatrix}$$Similarly, we can compute $A - B$ by subtracting elements of $B$ from corresponding elements of $A$.Matrix addition has the following properties:* Commutativity: $A + B = B + A$* Associativity: $(A + B) + C = A + (B + C)$ Exercise 1: Matrix addition.**Inputs:**1. An $n \times m$ matrix $A$, represented as a two-dimensional list.2. An $n \times m$ matrix $B$, represented as a two-dimensional list.**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list.> When representing matrices as lists, each sub-list represents a row.>> For example, list `[[1, 2], [3, 4]]` represents the following matrix:>> $$\begin{bmatrix} 1 & 2 \\ 3 & 4\end{bmatrix}$$Fill in the missing code and run the cell below to test your work. Need a hint? Click here A video explanation can be found here.
###Code
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
# You can get the size of a matrix like this:
rows = len(a)
columns = len(a[0])
# You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
c = create_empty_matrix(rows, columns)
# You can use a for loop to execute its body several times;
# in this loop variable i will take on each value from 0 to n-1, inclusive
for i in range(rows):
# Loops can be nested
for j in range(columns):
# You can access elements of a matrix like this:
x = a[i][j]
y = b[i][j]
# You can modify the elements of a matrix like this:
c[i][j] = x + y
return c
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-1:-Matrix-addition.).* Scalar MultiplicationThe next matrix operation is **scalar multiplication** - multiplying the entire matrix by a scalar (real or complex number):$$a \cdot\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}\end{bmatrix}=\begin{bmatrix} a \cdot x_{0,0} & a \cdot x_{0,1} & \dotsb & a \cdot x_{0,m-1} \\ a \cdot x_{1,0} & a \cdot x_{1,1} & \dotsb & a \cdot x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ a \cdot x_{n-1,0} & a \cdot x_{n-1,1} & \dotsb & a \cdot x_{n-1,m-1}\end{bmatrix}$$Scalar multiplication has the following properties:* Associativity: $x \cdot (yA) = (x \cdot y)A$* Distributivity over matrix addition: $x(A + B) = xA + xB$* Distributivity over scalar addition: $(x + y)A = xA + yA$ Exercise 2: Scalar multiplication.**Inputs:**1. A scalar $x$.2. An $n \times m$ matrix $A$.**Output:** Return the $n \times m$ matrix $x \cdot A$. Need a hint? Click here A video explanation can be found here.
###Code
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
# Fill in the missing code and run the cell to check your work.
rows = len(a)
cols = len(a[0])
c = create_empty_matrix(rows, cols)
for i in range (rows):
for j in range(cols):
c[i][j] = a[i][j] * x
return c
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-2:-Scalar-multiplication.).* Matrix Multiplication**Matrix multiplication** is a very important and somewhat unusual operation. The unusual thing about it is that neither its operands nor its output are the same size: an $n \times m$ matrix multiplied by an $m \times k$ matrix results in an $n \times k$ matrix. That is, for matrix multiplication to be applicable, the number of columns in the first matrix must equal the number of rows in the second matrix.Here is how matrix product is calculated: if we are calculating $AB = C$, then$$C_{i,j} = A_{i,0} \cdot B_{0,j} + A_{i,1} \cdot B_{1,j} + \dotsb + A_{i,m-1} \cdot B_{m-1,j} = \sum_{t = 0}^{m-1} A_{i,t} \cdot B_{t,j}$$Here is a small example:$$\begin{bmatrix} \color{blue} 1 & \color{blue} 2 & \color{blue} 3 \\ \color{red} 4 & \color{red} 5 & \color{red} 6\end{bmatrix}\begin{bmatrix} 1 \\ 2 \\ 3\end{bmatrix}=\begin{bmatrix} (\color{blue} 1 \cdot 1) + (\color{blue} 2 \cdot 2) + (\color{blue} 3 \cdot 3) \\ (\color{red} 4 \cdot 1) + (\color{red} 5 \cdot 2) + (\color{red} 6 \cdot 3)\end{bmatrix}=\begin{bmatrix} 14 \\ 32\end{bmatrix}$$ Matrix multiplication has the following properties:* Associativity: $A(BC) = (AB)C$* Distributivity over matrix addition: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$* Associativity with scalar multiplication: $xAB = x(AB) = A(xB)$> Note that matrix multiplication is **not commutative:** $AB$ rarely equals $BA$.Another very important property of matrix multiplication is that a matrix multiplied by a vector produces another vector.An **identity matrix** $I_n$ is a special $n \times n$ matrix which has $1$s on the main diagonal, and $0$s everywhere else:$$I_n =\begin{bmatrix} 1 & 0 & \dotsb & 0 \\ 0 & 1 & \dotsb & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dotsb & 1\end{bmatrix}$$What makes it special is that multiplying any matrix (of compatible size) by $I_n$ returns the original matrix. To put it another way, if $A$ is an $n \times m$ matrix:$$AI_m = I_nA = A$$This is why $I_n$ is called an identity matrix - it acts as a **multiplicative identity**. In other words, it is the matrix equivalent of the number $1$. Exercise 3: Matrix multiplication.**Inputs:**1. An $n \times m$ matrix $A$.2. An $m \times k$ matrix $B$.**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$. Need a hint? Click here To solve this exercise, you will need 3 for loops: one to go over $n$ rows of the output matrix, one to go over $k$ columns, and one to add up $m$ products that form each element of the output: for i in range(n): for j in range(k): sum = 0 for t in range(m): sum = sum + ... c[i][j] = sum A video explanation can be found here.
###Code
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
rowsA = len(a)
colsA = len(a[0])
colsB = len(b[0])
c = create_empty_matrix(rowsA, colsB)
for i in range(rowsA):
for j in range(colsB):
for k in range(colsA):
                # accumulate the sum of products over the shared dimension
                c[i][j] += a[i][k] * b[k][j]
return c
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-3:-Matrix-multiplication.).* Inverse MatricesA square $n \times n$ matrix $A$ is **invertible** if it has an inverse $n \times n$ matrix $A^{-1}$ with the following property:$$AA^{-1} = A^{-1}A = I_n$$In other words, $A^{-1}$ acts as the **multiplicative inverse** of $A$.Another, equivalent definition highlights what makes this an interesting property. For any matrices $B$ and $C$ of compatible sizes:$$A^{-1}(AB) = A(A^{-1}B) = B \\(CA)A^{-1} = (CA^{-1})A = C$$A square matrix has a property called the **determinant**, with the determinant of matrix $A$ being written as $|A|$. A matrix is invertible if and only if its determinant isn't equal to $0$.For a $2 \times 2$ matrix $A$, the determinant is defined as $|A| = (A_{0,0} \cdot A_{1,1}) - (A_{0,1} \cdot A_{1,0})$.For larger matrices, the determinant is defined through determinants of sub-matrices. You can learn more from [Wikipedia](https://en.wikipedia.org/wiki/Determinant) or from [Wolfram MathWorld](http://mathworld.wolfram.com/Determinant.html). Exercise 4: Matrix Inversion.**Input:** An invertible $2 \times 2$ matrix $A$.**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$. Need a hint? Click here Try to come up with a general method of doing it by hand first. If you get stuck, you may find this Wikipedia article useful. For this exercise, $|A|$ is guaranteed to be non-zero. A video explanation can be found here.
###Code
@exercise
def matrix_inverse(a : Matrix) -> Matrix:
return ...
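    # One possible approach for the 2x2 case (sketch, left as a comment so the
    # exercise stays open): compute the determinant and apply the adjugate formula.
    # det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    # return [[ a[1][1] / det, -a[0][1] / det],
    #         [-a[1][0] / det,  a[0][0] / det]]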
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-4:-Matrix-Inversion.).* TransposeThe **transpose** operation, denoted as $A^T$, is essentially a reflection of the matrix across the diagonal: $(A^T)_{i,j} = A_{j,i}$.Given an $n \times m$ matrix $A$, its transpose is the $m \times n$ matrix $A^T$, such that if:$$A =\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}\end{bmatrix}$$then:$$A^T =\begin{bmatrix} x_{0,0} & x_{1,0} & \dotsb & x_{n-1,0} \\ x_{0,1} & x_{1,1} & \dotsb & x_{n-1,1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{0,m-1} & x_{1,m-1} & \dotsb & x_{n-1,m-1}\end{bmatrix}$$For example:$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6\end{bmatrix}^T=\begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6\end{bmatrix}$$A **symmetric** matrix is a square matrix which equals its own transpose: $A = A^T$. To put it another way, it has reflection symmetry (hence the name) across the main diagonal. For example, the following matrix is symmetric:$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6\end{bmatrix}$$The transpose of a matrix product is equal to the product of transposed matrices, taken in reverse order:$$(AB)^T = B^TA^T$$ Exercise 5: Transpose.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$. Need a hint? Click here A video explanation can be found here.
###Code
@exercise
def transpose(a : Matrix) -> Matrix:
return ...
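    # Sketch (left as a comment): swap row and column indices.
    # rows, cols = len(a), len(a[0])
    # t = create_empty_matrix(cols, rows)
    # for i in range(rows):
    #     for j in range(cols):
    #         t[j][i] = a[i][j]
    # return t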
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-5:-Transpose.).* ConjugateThe next important single-matrix operation is the **matrix conjugate**, denoted as $\overline{A}$. This, as the name might suggest, involves taking the [complex conjugate](../ComplexArithmetic/ComplexArithmetic.ipynbComplex-Conjugate) of every element of the matrix: if$$A =\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}\end{bmatrix}$$Then:$$\overline{A} =\begin{bmatrix} \overline{x}_{0,0} & \overline{x}_{0,1} & \dotsb & \overline{x}_{0,m-1} \\ \overline{x}_{1,0} & \overline{x}_{1,1} & \dotsb & \overline{x}_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ \overline{x}_{n-1,0} & \overline{x}_{n-1,1} & \dotsb & \overline{x}_{n-1,m-1}\end{bmatrix}$$The conjugate of a matrix product equals to the product of conjugates of the matrices:$$\overline{AB} = (\overline{A})(\overline{B})$$ Exercise 6: Conjugate.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$.> As a reminder, you can get the real and imaginary components of complex number `z` using `z.real` and `z.imag`, respectively. Need a hint? Click here To calculate the conjugate of a matrix take the conjugate of each element, check the complex arithmetic tutorial to see how to calculate the conjugate of a complex number.
###Code
@exercise
def conjugate(a : Matrix) -> Matrix:
return ...
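    # Sketch (left as a comment): conjugate every element.
    # return [[x.conjugate() for x in row] for row in a]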
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-6:-Conjugate.).* AdjointThe final important single-matrix operation is a combination of the above two. The **conjugate transpose**, also called the **adjoint** of matrix $A$, is defined as $A^\dagger = \overline{(A^T)} = (\overline{A})^T$.A matrix is known as **Hermitian** or **self-adjoint** if it equals its own adjoint: $A = A^\dagger$. For example, the following matrix is Hermitian:$$\begin{bmatrix} 1 & i \\ -i & 2\end{bmatrix}$$The adjoint of a matrix product can be calculated as follows:$$(AB)^\dagger = B^\dagger A^\dagger$$ Exercise 7: Adjoint.**Input:** An $n \times m$ matrix $A$.**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$.> Don't forget, you can re-use functions you've written previously.
###Code
@exercise
def adjoint(a : Matrix) -> Matrix:
return ...
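    # Sketch (left as a comment): the adjoint is the conjugate of the transpose,
    # so the two previous exercises can simply be reused.
    # return conjugate(transpose(a))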
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-7:-Adjoint.).* Unitary Matrices**Unitary matrices** are very important for quantum computing. A matrix is unitary when it is invertible, and its inverse is equal to its adjoint: $U^{-1} = U^\dagger$. That is, an $n \times n$ square matrix $U$ is unitary if and only if $UU^\dagger = U^\dagger U = I_n$.For example, the following matrix is unitary:$$\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{i}{\sqrt{2}} & \frac{-i}{\sqrt{2}} \\\end{bmatrix}$$ Exercise 8: Unitary Verification.**Input:** An $n \times n$ matrix $A$.**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't.> Because of inaccuracy when dealing with floating point numbers on a computer (rounding errors), you won't always get the exact result you are expecting from a long series of calculations. To get around this, Python has a function `approx` which can be used to check if two numbers are "close enough:" `a == approx(b)`. Need a hint? Click here Keep in mind, you have only implemented matrix inverses for $2 \times 2$ matrices, and this exercise may give you larger inputs. There is a way to solve this without taking the inverse.
###Code
from pytest import approx
@exercise
def is_matrix_unitary(a : Matrix) -> bool:
return ...
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-8:-Unitary-Verification.).* Next StepsCongratulations! At this point, you should understand enough linear algebra to be able to get started with the tutorials on [the concept of qubit](../Qubit/Qubit.ipynb) and on [single-qubit quantum gates](../SingleQubitGates/SingleQubitGates.ipynb). The next section covers more advanced matrix operations that help explain the properties of qubits and quantum gates. Part II. Advanced Operations Inner ProductThe **inner product** is yet another important matrix operation that is only applied to vectors. Given two vectors $V$ and $W$ of the same size, their inner product $\langle V , W \rangle$ is defined as a product of matrices $V^\dagger$ and $W$:$$\langle V , W \rangle = V^\dagger W$$Let's break this down so it's a bit easier to understand. A $1 \times n$ matrix (the adjoint of an $n \times 1$ vector) multiplied by an $n \times 1$ vector results in a $1 \times 1$ matrix (which is equivalent to a scalar). The result of an inner product is that scalar. To put it another way, to calculate the inner product of two vectors, take the corresponding elements $V_k$ and $W_k$, multiply the complex conjugate of $V_k$ by $W_k$, and add up those products:$$\langle V , W \rangle = \sum_{k=0}^{n-1}\overline{V_k}W_k$$Here is a simple example:$$\langle\begin{bmatrix} -6 \\ 9i\end{bmatrix},\begin{bmatrix} 3 \\ -8\end{bmatrix}\rangle =\begin{bmatrix} -6 \\ 9i\end{bmatrix}^\dagger\begin{bmatrix} 3 \\ -8\end{bmatrix}=\begin{bmatrix} -6 & -9i \end{bmatrix}\begin{bmatrix} 3 \\ -8\end{bmatrix}= (-6) \cdot (3) + (-9i) \cdot (-8) = -18 + 72i$$ If you are familiar with the **dot product**, you will notice that it is equivalent to inner product for real-numbered vectors.> We use our definition for these tutorials because it matches the notation used in quantum computing. You might encounter other sources which define the inner product a little differently: $\langle V , W \rangle = W^\dagger V = V^T\overline{W}$, in contrast to the $V^\dagger W$ that we use. These definitions are almost equivalent, with some differences in the scalar multiplication by a complex number.An immediate application for the inner product is computing the **vector norm**. The norm of vector $V$ is defined as $||V|| = \sqrt{\langle V , V \rangle}$. This condenses the vector down to a single non-negative real value. If the vector represents coordinates in space, the norm happens to be the length of the vector. A vector is called **normalized** if its norm is equal to $1$.The inner product has the following properties:* Distributivity over addition: $\langle V + W , X \rangle = \langle V , X \rangle + \langle W , X \rangle$ and $\langle V , W + X \rangle = \langle V , W \rangle + \langle V , X \rangle$* Partial associativity with scalar multiplication: $x \cdot \langle V , W \rangle = \langle \overline{x}V , W \rangle = \langle V , xW \rangle$* Skew symmetry: $\langle V , W \rangle = \overline{\langle W , V \rangle}$* Multiplying a vector by a unitary matrix **preserves the vector's inner product with itself** (and therefore the vector's norm): $\langle UV , UV \rangle = \langle V , V \rangle$> Note that just like matrix multiplication, the inner product is **not commutative**: $\langle V , W \rangle$ won't always equal $\langle W , V \rangle$. Exercise 9: Inner product.**Inputs:**1. An $n \times 1$ vector $V$.2. 
An $n \times 1$ vector $W$.**Output:** Return a complex number - the inner product $\langle V , W \rangle$. Need a hint? Click here A video explanation can be found here.
###Code
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
return ...
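    # Sketch (left as a comment): sum conjugate(V_k) * W_k over all components.
    # return sum(v[k][0].conjugate() * w[k][0] for k in range(len(v)))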
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-9:-Inner-product.).* Exercise 10: Normalized vectors.**Input:** A non-zero $n \times 1$ vector $V$.**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$. Need a hint? Click here You might need the square root function to solve this exercise. As a reminder, Python's square root function is available in the math library. A video explanation can be found here. Note that when this method is used with complex vectors, you should take the modulus of the complex number for the division.
###Code
@exercise
def normalize(v : Matrix) -> Matrix:
return ...
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-10:-Normalized-vectors.).* Outer ProductThe **outer product** of two vectors $V$ and $W$ is defined as $VW^\dagger$. That is, the outer product of an $n \times 1$ vector and an $m \times 1$ vector is an $n \times m$ matrix. If we denote the outer product of $V$ and $W$ as $X$, then $X_{i,j} = V_i \cdot \overline{W_j}$. Here is a simple example:outer product of $\begin{bmatrix} -3i \\ 9 \end{bmatrix}$ and $\begin{bmatrix} 9i \\ 2 \\ 7 \end{bmatrix}$ is:$$\begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix}\begin{bmatrix} \color{red} {9i} \\ \color{red} 2 \\ \color{red} 7 \end{bmatrix}^\dagger=\begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix}\begin{bmatrix} \color{red} {-9i} & \color{red} 2 & \color{red} 7 \end{bmatrix}=\begin{bmatrix} \color{blue} {-3i} \cdot \color{red} {(-9i)} & \color{blue} {-3i} \cdot \color{red} 2 & \color{blue} {-3i} \cdot \color{red} 7 \\ \color{blue} 9 \cdot \color{red} {(-9i)} & \color{blue} 9 \cdot \color{red} 2 & \color{blue} 9 \cdot \color{red} 7\end{bmatrix}=\begin{bmatrix} -27 & -6i & -21i \\ -81i & 18 & 63\end{bmatrix}$$ Exercise 11: Outer product.**Inputs:**1. An $n \times 1$ vector $V$.2. An $m \times 1$ vector $W$.**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$.
###Code
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
return ...
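    # Sketch (left as a comment): X[i][j] = V_i * conjugate(W_j).
    # return [[v[i][0] * w[j][0].conjugate() for j in range(len(w))] for i in range(len(v))]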
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-11:-Outer-product.).* Tensor ProductThe **tensor product** is a different way of multiplying matrices. Rather than multiplying rows by columns, the tensor product multiplies the second matrix by every element of the first matrix.Given $n \times m$ matrix $A$ and $k \times l$ matrix $B$, their tensor product $A \otimes B$ is an $(n \cdot k) \times (m \cdot l)$ matrix defined as follows:$$A \otimes B =\begin{bmatrix} A_{0,0} \cdot B & A_{0,1} \cdot B & \dotsb & A_{0,m-1} \cdot B \\ A_{1,0} \cdot B & A_{1,1} \cdot B & \dotsb & A_{1,m-1} \cdot B \\ \vdots & \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot B & A_{n-1,1} \cdot B & \dotsb & A_{n-1,m-1} \cdot B\end{bmatrix}=\begin{bmatrix} A_{0,0} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & b_{k-1,l-1} \end{bmatrix}} & \dotsb & A_{0,m-1} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} \\ \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}}\end{bmatrix}= \\=\begin{bmatrix} A_{0,0} \cdot \color{red} {B_{0,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{0,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,l-1}} \\ \vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\ A_{0,0} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{k-1,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,l-1}} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ A_{n-1,0} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{0,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,l-1}} \\ \vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{k-1,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,l-1}}\end{bmatrix}$$Here is a simple example:$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \otimes \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} =\begin{bmatrix} 1 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 2 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \\ 3 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 4 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}\end{bmatrix}=\begin{bmatrix} 1 \cdot 5 & 1 \cdot 6 & 2 \cdot 5 & 2 \cdot 6 \\ 1 \cdot 7 & 1 \cdot 8 & 2 \cdot 7 & 2 \cdot 8 \\ 3 \cdot 5 & 3 \cdot 6 & 4 \cdot 5 & 4 \cdot 6 \\ 3 \cdot 7 & 3 \cdot 8 & 4 \cdot 7 & 4 \cdot 8\end{bmatrix}=\begin{bmatrix} 5 & 6 & 10 & 12 \\ 7 & 8 & 14 & 16 \\ 15 & 18 & 20 & 24 \\ 21 & 24 & 28 & 32\end{bmatrix}$$Notice that the tensor product of two vectors is another vector: if $V$ is an $n \times 1$ vector, and $W$ is an $m \times 1$ vector, $V \otimes W$ is an $(n \cdot m) \times 1$ vector. 
The tensor product has the following properties:* Distributivity over addition: $(A + B) \otimes C = A \otimes C + B \otimes C$, $A \otimes (B + C) = A \otimes B + A \otimes C$* Associativity with scalar multiplication: $x(A \otimes B) = (xA) \otimes B = A \otimes (xB)$* Mixed-product property (relation with matrix multiplication): $(A \otimes B) (C \otimes D) = (AC) \otimes (BD)$ Exercise 12*: Tensor Product.**Inputs:**1. An $n \times m$ matrix $A$.2. A $k \times l$ matrix $B$.**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$.
###Code
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
return ...
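    # Sketch (left as a comment): block (i, j) of the result is a[i][j] * B, so each
    # output entry is a[i // k][j // l] * b[i % k][j % l] with k, l the dimensions of B.
    # n, m, k, l = len(a), len(a[0]), len(b), len(b[0])
    # return [[a[i // k][j // l] * b[i % k][j % l] for j in range(m * l)] for i in range(n * k)]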
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the* Linear Algebra Workbook. Next StepsAt this point, you know enough to complete the tutorials on [the concept of qubit](../Qubit/Qubit.ipynb), [single-qubit gates](../SingleQubitGates/SingleQubitGates.ipynb), [multi-qubit systems](../MultiQubitSystems/MultiQubitSystems.ipynb), and [multi-qubit gates](../MultiQubitGates/MultiQubitGates.ipynb). The last part of this tutorial is a brief introduction to eigenvalues and eigenvectors, which are used for more advanced topics in quantum computing. Feel free to move on to the next tutorials, and come back here once you encounter eigenvalues and eigenvectors elsewhere. Part III: Eigenvalues and EigenvectorsConsider the following example of multiplying a matrix by a vector:$$\begin{bmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4\end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 2\end{bmatrix}=\begin{bmatrix} 4 \\ 4 \\ 8\end{bmatrix}$$Notice that the resulting vector is just the initial vector multiplied by a scalar (in this case 4). This behavior is so noteworthy that it is described using a special set of terms.Given a nonzero $n \times n$ matrix $A$, a nonzero vector $V$, and a scalar $x$, if $AV = xV$, then $x$ is an **eigenvalue** of $A$, and $V$ is an **eigenvector** of $A$ corresponding to that eigenvalue.The properties of eigenvalues and eigenvectors are used extensively in quantum computing. You can learn more about eigenvalues, eigenvectors, and their properties at [Wolfram MathWorld](http://mathworld.wolfram.com/Eigenvector.html) or on [Wikipedia](https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors). Exercise 13: Finding an eigenvalue.**Inputs:**1. An $n \times n$ matrix $A$.2. An eigenvector $V$ of matrix $A$.**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector. Need a hint? Click here Multiply the matrix by the vector, then divide the elements of the result by the elements of the original vector. Don't forget though, some elements of the vector may be $0$.
###Code
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
return ...
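    # Sketch (left as a comment): compute A·V (e.g. with matrix_mult from Exercise 3)
    # and divide any nonzero component of the result by the matching component of V.
    # av = matrix_mult(a, v)
    # for i in range(len(v)):
    #     if v[i][0] != 0:
    #         return av[i][0] / v[i][0]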
###Output
_____no_output_____
###Markdown
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynbExercise-13:-Finding-an-eigenvalue.).* Exercise 14**: Finding an eigenvector.**Inputs:**1. A $2 \times 2$ matrix $A$.2. An eigenvalue $x$ of matrix $A$.**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$. Need a hint? Click here A matrix and an eigenvalue will have multiple eigenvectors (infinitely many, in fact), but you only need to find one. Try treating the elements of the vector as variables in a system of two equations. Watch out for division by $0$!
###Code
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
return ...
###Output
_____no_output_____ |
05_Transforming Basic Objects into the Powerful DataFrame/05_transform-to-dataframe_syllabus.ipynb | ###Markdown
05 | Transforming Basic Objects into the Powerful DataFrame Python + Data Science Tutorials in ↓ <a href="https://www.youtube.com/c/PythonResolver?sub_confirmation=1" >YouTube</a > Blog GitHub Author: @jsulopz The ChallengeCreate a DataFrame from scratch using the following information... https://github.com/jsulopz/data/blob/main/best_tennis_players_stats.json The Covered Solution ...and compare which tennist player earned more money ↓
###Code
?? #! read the full story to find out the solution
###Output
_____no_output_____
###Markdown
What will we learn? - Why the `function()` is so important in programming?- Why have you got **different types of `functions()`**?- How to find **solutions by filtering tutorials on Google**?- How to **get help from python** and use it wisely? Which concepts will we use?- Module- Dot notation- Objects- Variables- The Autocompletion Tool- The Docstring- Function - `object.function()` - `module.function()` - `built_in_function()`- [Google Method] Requirements?- None The starting *thing*
###Code
internet_usage_spain.xlsx
###Output
_____no_output_____
###Markdown
Syllabus for the [Notebook](01script_functions.ipynb)1. Default *things* in Python2. Object-Oriented Programming 1. `string` 2. `integer` 3. `float` 4. `list`1. The Python Resolver Discipline2. Type of `functions()` 1. `Built-in` Functions 2. Functions inside `instances` 3. External Functions: from the `module`1. Use of `functions()` 1. Change Default Parameters of a Function 2. The Elements of Programming1. Code Syntax 1. The `module` 2. The `.` **DOT NOTATION** 3. The `function()` 4. The `(parameter=object)` 1. `(io='internet_usage_spain.xlsx')` 2. `(sheet_name=1)` 1. When you `execute`... 2. The `function()` returns an `object` 3. Recap1. Source Code Execution | What happens inside the computer ?2. The Importance of the `function()` 1. Python doesn't know about the Excel File 2. Other `functions()`1. What have we learnt? 1. Why the `function()` is so important in programming? 2. Why have you got **different types of `functions()`**? 3. How to find **solutions by filtering tutorials on Google**? 4. How to **get help from python** and use it wisely?1. Define the concepts ↓ The Uncovered Solution
###Code
roger = {'income': 130, 'titles': 103, 'grand slams': 20, 'turned professional': 1998, 'wins': 1251, 'losses': 275}
rafa = {'income': 127, #!
'titles': 90,
'grand slams': 21,
'turned professional': 2001,
'wins': 1038,
'losses': 209}
nole = {'income': 154,
'titles': 86,
'grand slams': 20,
'turned professional': 2003,
'wins': 989,
'losses': 199}
list_best_players = [roger, rafa, nole] #!
list_best_players
import pandas as pd
df_best_players = pd.DataFrame(list_best_players, index=['Roger Federer', 'Rafa Nadal', 'Novak Djokovic'])
df_best_players
import plotly.express as px
px.bar(x=df_best_players.index, color=df_best_players.index, y='income', data_frame=df_best_players)
# The traces below use plotly.graph_objects and a DataFrame `df` indexed by short
# player names; neither is defined above, so a likely-intended setup (assumption) is:
import plotly.graph_objects as go
df = pd.DataFrame(list_best_players, index=['roger', 'rafa', 'nole'])
fig = go.Figure()
fig.add_trace(go.Bar(
x=['Player 1'],
y=[df.income.rafa],
name="Player 1?"
))
fig.add_trace(go.Bar(
x=['Player 2'],
y=[df.income.roger],
name="Player 2?"
))
fig.add_trace(go.Bar(
x=['Player 3'],
    y=[df_best_players.loc['Novak Djokovic', 'income']],
name="Player 3?"
))
fig.update_layout(
title="Tennist Player 'X' earned more money",
xaxis_title="Player",
yaxis_title="Prizes (in millions $USD)",
legend_title="Legend Title",
)
fig.show()
###Output
_____no_output_____ |
archive/2019-04-Technical-University-of-Denmark/4-Sum-of-citations-and-citation-histogram.ipynb | ###Markdown
Example: Retrieve the sum of citations by year, and create a citation histogram
###Code
from dimcli.shortcuts import dslquery_json as dslquery
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Return publicationsReturn all articles between 2000 and 2015 for the Journal Science and Technology of Advanced MaterialsFirst, define a function that will retrieve publications for the Journal Science and Technology of Advanced Materials.Hint: you can get the id of the journal that you are interested in by looking at the dimensions url when filtering on source title in the application: https://app.dimensions.ai/discover/publication?and_facet_journal=jour.1048844
###Code
def searchPubs(limit=1000, skip=0):
data = """search publications
where year in [2000:2015]
and journal.id = "jour.1048844"
and type="article"
return publications[id+times_cited+year]
limit {} skip {}
""".format(limit,skip)
return data
###Output
_____no_output_____
###Markdown
Loop through the results as there are more than 1000Second, define a function that get the search results in batches of 1000, by using the skip function. You can get up to 50,000 publications using this method.
###Code
def dslsearchpublications():
skip = 0
pubs = []
total_pubs = []
result = {}
while (skip == 0) or (len(pubs) == 1000):
pubs = dslquery(searchPubs(skip=skip)).get('publications',[])
total_pubs += pubs
skip += 1000
return total_pubs
###Output
_____no_output_____
###Markdown
Put the results into a dataframerun your Dimensions API loop, and put the results into a dataframe
###Code
pubs = dslsearchpublications()
print(len(pubs))
rf = pd.DataFrame(pubs)
rf.head()
###Output
Execution time: 0.7992026805877686
Execution time: 0.5499269962310791
1542
###Markdown
Sum the citations by yearUse the dataframe to sum the results, and create a bar chart
###Code
rf.groupby(['year']).sum().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Journal Citation HistogramTaking the same dataframe, you can now easily create a citation histogram
###Code
fig = plt.figure(figsize = (20,10))
ax = fig.gca()
rf['times_cited'].hist(bins=100, ax=ax)  # draw on the large axes created above
###Output
_____no_output_____ |
Project/SageMaker Project- WIP.ipynb | ###Markdown
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker_Deep Learning Nanodegree Program | Deployment_---Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. General OutlineRecall the general outline for SageMaker projects using a notebook instance.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.For this project, you will be following the steps in the general outline with some modifications. First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app. Step 1: Downloading the dataAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
###Code
#%mkdir ../data
#!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
#!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
###Output
IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
###Markdown
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
###Code
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
###Output
IMDb reviews (combined): train = 25000, test = 25000
###Markdown
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
###Code
print(len(train_X[2]))
print(train_X[2])
print(train_y[2])
###Output
927
I found this to be a charming adaptation, very lively and full of fun. With the exception of a couple of major errors, the cast is wonderful. I have to echo some of the earlier comments -- Chynna Phillips is horribly miscast as a teenager. At 27, she's just too old (and, yes, it DOES show), and lacks the singing "chops" for Broadway-style music. Vanessa Williams is a decent-enough singer and, for a non-dancer, she's adequate. However, she is NOT Latina, and her character definitely is. She's also very STRIDENT throughout, which gets tiresome.<br /><br />The girls of Sweet Apple's Conrad Birdie fan club really sparkle -- with special kudos to Brigitta Dau and Chiara Zanni. I also enjoyed Tyne Daly's performance, though I'm not generally a fan of her work. Finally, the dancing Shriners are a riot, especially the dorky three in the bar.<br /><br />The movie is suitable for the whole family, and I highly recommend it.
1
###Markdown
The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
###Code
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # stem each word, reusing the stemmer created above
return words
###Output
_____no_output_____
###Markdown
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
###Code
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
print(len(review_to_words(train_X[2])))
print(review_to_words(train_X[2]))
###Output
85
['found', 'charm', 'adapt', 'live', 'full', 'fun', 'except', 'coupl', 'major', 'error', 'cast', 'wonder', 'echo', 'earlier', 'comment', 'chynna', 'phillip', 'horribl', 'miscast', 'teenag', '27', 'old', 'ye', 'show', 'lack', 'sing', 'chop', 'broadway', 'style', 'music', 'vanessa', 'william', 'decent', 'enough', 'singer', 'non', 'dancer', 'adequ', 'howev', 'latina', 'charact', 'definit', 'also', 'strident', 'throughout', 'get', 'tiresom', 'girl', 'sweet', 'appl', 'conrad', 'birdi', 'fan', 'club', 'realli', 'sparkl', 'special', 'kudo', 'brigitta', 'dau', 'chiara', 'zanni', 'also', 'enjoy', 'tyne', 'dali', 'perform', 'though', 'gener', 'fan', 'work', 'final', 'danc', 'shriner', 'riot', 'especi', 'dorki', 'three', 'bar', 'movi', 'suitabl', 'whole', 'famili', 'highli', 'recommend']
###Markdown
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input? **Answer:** The function review_to_words also removes common English stopwords, puts all to lower case, removes punctuation, and splits/tokenizes the words. The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
###Code
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) # reads it first if pickled
# #test pickling
# x = [1,2,3]
# cache_test_file= "test.pkl"
# with open(os.path.join(cache_dir, cache_test_file), "wb") as f:
# pickle.dump(x, f)
# print("Wrote preprocessed data to cache file:", cache_test_file)
# # bringing it back!
# with open(os.path.join(cache_dir, cache_test_file), "rb") as f:
# x_revived = pickle.load(f)
# print("Read preprocessed data from cache file:", x_revived)
# x_revived
len(train_X)
train_X_backup = train_X.copy()
train_X[2][:8]
###Output
_____no_output_____
###Markdown
Transform the dataIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. (TODO) Create a word dictionaryTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
###Code
# word_count = {}
# def tokenize(corpus):
# for review in corpus:
# for word in review:
# if word in word_count:
# word_count[word] += 1
# else:
# word_count[word] = 1
# return(word_count)
# tokenize(train_X)
# sorted_words = sorted(word_count.items(), key=lambda x:-x[1]) #[:int(vocab_size)] # top 5000
# sorted_words = [i[0] for i in sorted_words]
# print(len(sorted_words))
# sorted_words
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {}
corpus = data
for review in corpus:
for word in review:
if word in word_count:
word_count[word] += 1
else:
word_count[word] = 1
#word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words = sorted(word_count.items(), key=lambda x:-x[1]) #[:int(vocab_size)] # top 5000
sorted_words = [i[0] for i in sorted_words]
print(len(sorted_words))
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
print(len(word_dict))
word_dict
###Output
4998
###Markdown
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set? **Answer:** The five most common processed words are "movi", "film", "one", "like", and "time". These make sense considering these are movies reviews, but I was surprised to see "one".
###Code
# TODO: Use this space to determine the five most frequently appearing words in the training set.
{key: value for key, value in word_dict.items() if value in (2, 3, 4, 5, 6)}  # indices 0 and 1 are reserved for 'no word' / 'infrequent'
###Output
_____no_output_____
###Markdown
Save `word_dict`Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
###Code
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
#ls this is saving locally in the directory above data/
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
word_dict
###Output
_____no_output_____
###Markdown
Transform the reviewsNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
###Code
import numpy as np
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
train_X_backup[2]
search = 575
for name, num in word_dict.items(): # for name, age in dictionary.iteritems(): (for Python 2.x)
if num == search:
print(name)
train_X[2]
###Output
_____no_output_____
###Markdown
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processeed. Does this look reasonable? What is the length of a review in the training set?
###Code
train_X_backup[0]
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X[0].shape)
train_X[0]
search = 712
for name, num in word_dict.items(): # for name, age in dictionary.iteritems(): (for Python 2.x)
if num == search:
print(name)
###Output
tom
###Markdown
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem? **Answer:** The tokens that are infrequent may be important to the meaning in the review, and hold specific information that is lost. For instance an Term Frequency, Inverse Document Frequency (TF-IDF) assumes that these words contain more key information. From this viewpoint, we are losing information. It is possible the test data has slightly different vocabulary. Also, length of 500 in padding could slow model training, or conversely, leave out information from longer reviews. There might be differences in the train and test corpuses, so slight differences may exist. Step 3: Upload the data to S3As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on. Save the processed training dataset locallyIt is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
###Code
import pandas as pd #this is saving locally in notebook instance not s3
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
print(data_dir)
print(sagemaker_session)
print(bucket)
print(role)
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory. Step 4: Build and Train the PyTorch ModelIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects - Model Artifacts, - Training Code, and - Inference Code, each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
###Code
!pygmentize train/model.py
###Output
[34mimport[39;49;00m [04m[36mtorch.nn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mclass[39;49;00m [04m[32mLSTMClassifier[39;49;00m(nn.Module):
[33m"""[39;49;00m
[33m This is the simple RNN model we will be using to perform Sentiment Analysis.[39;49;00m
[33m """[39;49;00m
[34mdef[39;49;00m [32m__init__[39;49;00m([36mself[39;49;00m, embedding_dim, hidden_dim, vocab_size):
[33m"""[39;49;00m
[33m Initialize the model by settingg up the various layers.[39;49;00m
[33m """[39;49;00m
[36msuper[39;49;00m(LSTMClassifier, [36mself[39;49;00m).[32m__init__[39;49;00m()
[36mself[39;49;00m.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=[34m0[39;49;00m)
[36mself[39;49;00m.lstm = nn.LSTM(embedding_dim, hidden_dim)
[36mself[39;49;00m.dense = nn.Linear(in_features=hidden_dim, out_features=[34m1[39;49;00m)
[36mself[39;49;00m.sig = nn.Sigmoid()
[36mself[39;49;00m.word_dict = [36mNone[39;49;00m
[34mdef[39;49;00m [32mforward[39;49;00m([36mself[39;49;00m, x):
[33m"""[39;49;00m
[33m Perform a forward pass of our model on some input.[39;49;00m
[33m """[39;49;00m
x = x.t()
lengths = x[[34m0[39;49;00m,:]
reviews = x[[34m1[39;49;00m:,:]
embeds = [36mself[39;49;00m.embedding(reviews)
lstm_out, _ = [36mself[39;49;00m.lstm(embeds)
out = [36mself[39;49;00m.dense(lstm_out)
out = out[lengths - [34m1[39;49;00m, [36mrange[39;49;00m([36mlen[39;49;00m(lengths))]
[34mreturn[39;49;00m [36mself[39;49;00m.sig(out.squeeze())
###Markdown
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
###Code
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
###Output
_____no_output_____
###Markdown
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
###Code
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
optimizer.zero_grad() # zero the parameter gradients
# forward + backward + optimize #https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
outputs = model(batch_X) #model here is net in example
loss = loss_fn(outputs, batch_y)
loss.backward()
optimizer.step()
# end
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
###Output
_____no_output_____
###Markdown
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
###Code
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters()) # optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
###Output
Epoch: 1, BCELoss: 0.6945493459701538
Epoch: 2, BCELoss: 0.6869992852210999
Epoch: 3, BCELoss: 0.680661427974701
Epoch: 4, BCELoss: 0.6737269997596741
Epoch: 5, BCELoss: 0.6652729988098145
###Markdown
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run. (TODO) Training the modelWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
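As a rough sketch (an assumption about its shape, not the actual file contents), the argument-parsing portion of such an entry point usually looks something like the cell below.
###Code
# Hypothetical excerpt of a SageMaker entry-point script; the real train/train.py may differ.
# Hyperparameter names mirror those passed to the estimator below. SM_MODEL_DIR and
# SM_CHANNEL_TRAINING are environment variables SageMaker sets inside the training container.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--hidden_dim', type=int, default=100)
parser.add_argument('--embedding_dim', type=int, default=32)
parser.add_argument('--vocab_size', type=int, default=5000)
parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', 'model'))
parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING', 'data'))
args, _ = parser.parse_known_args()   # known_args so this sketch also runs inside a notebook
print(args.epochs, args.hidden_dim, args.data_dir)
###Output
_____no_output_____
###Markdown
With the training script in place, we construct the estimator and pass our chosen hyperparameters.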
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
#train_instance_type= 'ml.m5.large', #'ml.m5.large' takes 10736 seconds
train_instance_type= 'ml.p2.xlarge', #needs to be available (created) in AWS console, and in list if error
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
#ml.p2.xlarge
#errorstop
import time
t0 = time.time()
estimator.fit({'training': input_data}) #start at 12:20PM
t1 = time.time()
total_seconds_fit = t1-t0
total_seconds_fit/60
estimator # sagemaker.pytorch.estimator.PyTorch at 0x7f72612655c0
# This code is important to load the correct training job
#This is how we access it again!
training_job_name = estimator.latest_training_job.name
print(training_job_name) # sagemaker-pytorch-2019-10-17-01-24-11-924
# attached_estimator = estimator.attach(training_job_name)
###Output
sagemaker-pytorch-2019-11-12-02-36-04-850
###Markdown
Step 5: Testing the modelAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testingNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.**NOTE:** When deploying a model you are asking SageMaker to launch an compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.In other words **If you are no longer using a deployed endpoint, shut it down!****TODO:** Deploy the trained model.
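For reference, here is a minimal sketch of what such a `model_fn()` typically looks like for this model. The file names and constructor arguments below are assumptions; the `model_fn()` provided in `train.py` is the authoritative version.
###Code
# Hypothetical model_fn sketch; assumes the training job saved 'model.pth' (state dict) and
# 'word_dict.pkl' into model_dir, and that the dimensions match those used for training.
import os
import pickle
import torch
from train.model import LSTMClassifier

def model_fn_sketch(model_dir, embedding_dim=32, hidden_dim=200, vocab_size=5000):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    loaded_model = LSTMClassifier(embedding_dim, hidden_dim, vocab_size)
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        loaded_model.load_state_dict(torch.load(f, map_location=device))
    with open(os.path.join(model_dir, 'word_dict.pkl'), 'rb') as f:
        loaded_model.word_dict = pickle.load(f)
    return loaded_model.to(device).eval()
###Output
_____no_output_____
###Markdown
Now we deploy the trained model as-is.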
###Code
# TODO: Deploy the trained model
#existing = 'sagemaker-pytorch-2019-10-08-05-11-22-626'
#predictor = attached_estimator.deploy(initial_instance_count=1, instance_type='ml.m5.large') # for using attached_estimator
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m5.large')
###Output
-------------------------------------------------------------------------------------!
###Markdown
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
###Code
# test_X.shape
# test_X[:4]
# print(test_X_len.shape)
# test_X_len[:2]
test_X2 = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1) # prepend the review length so each row has the form length, review[500]
test_X2[:4]
test_X2.shape
# We split the data into chunks and send each chunk seperately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X2.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** Our XGBoost model had an accuracy of 0.862 through hyperparameter tuning. This is a decent model accuracy. XGBoost is less likely to overfit since it is ensemble, and predicts well.For the initial model output, not shown currently, the predictive accuracy was 0.5 which is no better than chance. Something was not right with the word_dict object, and caused the model to be no better than flipping a coin. So, after fixing the word_dict, the accuracy has improved to 0.85832. This is not as good as the xgboost model currently, but may not have converged. This could be further improved by adding more depth and LSTM layers; add more epochs as well. An RNN potentially can learn more abstractions that a decision tree, and perform better overall (though may overfit). However, tuning and iteration is needed still. (TODO) More testingWe now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
###Code
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
print(len(review_to_words(test_review)))
np.array([len(review_to_words(test_review))]).shape
###Output
_____no_output_____
###Markdown
The question we now need to answer is, how do we send this review to our model?Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any html tags and stemmed the input - Encoded the review as a sequence of integers using `word_dict` In order process the review we will need to repeat these two steps.**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
###Code
pd.DataFrame(np.array([len(review_to_words(test_review))]))
test_data = review_to_words(test_review) #review to words
test_data = convert_and_pad(word_dict, test_data, pad=500) #convert_and_pad
type(test_data)
# TODO: Convert test_review into a form usable by the model and save the results in test_data
# test_data = review_to_words(test_review) #review to words
# test_data = convert_and_pad(word_dict, test_data, pad=500) #convert_and_pad
# test_data = np.array([test_data[0]]) # array object creation
# print(test_data.shape) #checking the shape
test_data = np.array([convert_and_pad(word_dict, review_to_words(test_review), pad=500)[0]])
print(type(test_data))
#sample data length
sample_length = pd.DataFrame(np.array([len(review_to_words(test_review))]))
#creating object model requires
test_sample = pd.concat([sample_length, pd.DataFrame(test_data)], axis=1)
test_sample
###Output
<class 'numpy.ndarray'>
###Markdown
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
###Code
predictor.predict(test_sample)
###Output
_____no_output_____
###Markdown
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
###Code
estimator.delete_endpoint()
###Output
_____no_output_____
###Markdown
Step 6 (again) - Deploy the model for the web appNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use. - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model. - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code. - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint. - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize. (TODO) Writing inference codeBefore writing our custom inference code, we will begin by taking a look at the code which has been provided.
###Code
errorme  # intentional NameError to stop "Run All" here before re-deploying
!pygmentize serve/predict.py
x = np.array(0.52)
print(x)
y = np.array(int(np.round(x,0)))
y = np.array(str(int(np.round(x,0))))
results = []
results.append(int(y))
results
# data_X = pd.DataFrame(np.array([convert_and_pad(model.word_dict, review_to_words(input_data), pad=500)[0]])) #ben edit
# data_len = pd.DataFrame(np.array([len(review_to_words(input_data))])) #ben edit
###Output
_____no_output_____
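###Markdown
Before editing the file, here is a possible completion of `predict_fn()` (a sketch under assumptions, not necessarily the exact code saved in `serve/predict.py`). It repeats the notebook's preprocessing with `review_to_words` and `convert_and_pad` from `utils.py`, builds a `length, review[500]` row just like the training data, and runs the model.
###Code
# Hypothetical predict_fn sketch; `model` is the object returned by model_fn and is assumed
# to carry the word_dict loaded from the model artifacts.
import numpy as np
import torch
from utils import review_to_words, convert_and_pad   # utils.py sits next to predict.py in serve/

def predict_fn_sketch(input_data, model):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    if model.word_dict is None:
        raise Exception('Model has not been loaded properly, no word_dict.')
    data_X, data_len = convert_and_pad(model.word_dict, review_to_words(input_data))
    data_pack = np.hstack((data_len, data_X)).reshape(1, -1)   # row format: length, review[500]
    data = torch.from_numpy(data_pack).long().to(device)
    model.eval()
    with torch.no_grad():
        output = model(data)
    return int(round(output.item()))                           # 1 = positive, 0 = negative
###Output
_____no_output_____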
###Markdown
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file. Deploying the modelNow that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accomodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to sent image data.
###Code
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
--------------------------------------------------------------------------------------------------!
###Markdown
Testing the modelNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive.
###Code
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
#results.append(int(float(predictor.predict(review_input)))) # this works with the float!
results.append(int(predictor.predict(review_input)))
#results.append(float(predictor.predict(review_input))) udacity help
#results.append(predictor.predict(review_input)) #ben test
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
###Output
_____no_output_____
###Markdown
As an additional test, we can try sending the `test_review` that we looked at earlier.
###Code
predictor.predict(test_review)
###Output
_____no_output_____
###Markdown
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for the web app> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and recieve data from a SageMaker endpoint.Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function. Setting up a Lambda functionThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result. Part A: Create an IAM Role for the Lambda functionSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**. Part B: Create a Lambda functionNow it is time to actually create the Lambda function.Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. 
Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below. ```python We need to use the low-level library to interact with SageMaker since the SageMaker API is not available natively through Lambda.import boto3def lambda_handler(event, context): The SageMaker runtime is what allows us to invoke the endpoint that we've created. runtime = boto3.Session().client('sagemaker-runtime') Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', The name of the endpoint we created ContentType = 'text/plain', The data format that is expected Body = event['body']) The actual review The response is an HTTP response whose body contains the result of our inference result = response['Body'].read().decode('utf-8') return { 'statusCode' : 200, 'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' }, 'body' : result }```Once you have copy and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
###Code
predictor.endpoint
###Output
_____no_output_____
###Markdown
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API GatewayNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
###Code
#Invoke URL: https://c1gh4eok3g.execute-api.us-east-2.amazonaws.com/prod
###Output
_____no_output_____
###Markdown
Step 4: Deploying our web appNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.**TODO:** Make sure that you include the edited `index.html` file in your project submission. Now that your web app is working, trying playing around with it and see how well it works.**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review? **Answer:**I tested the review: "In the beginning I hated that movie. But, then I started to love it after the second scene! It is now one of my favorites."It came back as a positive result with "Your review was POSITIVE". Delete the endpointRemember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
ch2_py_prog_spec/list_comprehension.ipynb | ###Markdown
List Comprehension
###Code
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
even = []
for number in numbers:
if number % 2 == 0:
even.append(number)
even
## using the list comprehension
even = [number for number in numbers if number % 2 == 0]
even
# generate a list of random integers
from random import randint, seed
seed(10) # set random seed to make examples reproducible
random_elements = [randint(1, 10) for i in range(5)]
print(random_elements)
# using a set comprehension to generate a set of random integers
# (duplicate values produced by randint collapse into a single element)
from random import randint, seed
seed(10) # set random seed to make examples reproducible
random_elements = {randint(1, 10) for i in range(5)}
print(random_elements)
# using a dict comprehension to generate a dict of random integers
# keyed by the loop index
from random import randint, seed
seed(10) # set random seed to make examples reproducible
random_elements = {i: randint(1, 10) for i in range(5)}
print(random_elements)
###Output
{0: 10, 1: 1, 2: 7, 3: 8, 4: 10}
###Markdown
Generators You might think that if you replace the square brackets with parentheses, you could obtaina tuple. Actually, you get a generator object.The main difference between generators andlist comprehensions is that elements are generated on demand and not computed andstored all at once in memory.
###Code
numbers = numbers
numbers
even_generator = (number for number in numbers if number % 2 == 0)
even = list(even_generator)
even
even_generator
even_bis = list(even_generator)
even_bis
# This simple example is here to show you that a generator can be used only once. Once all
# the values have been produced, it's over.
def even_num(max_num):
for i in range(2, max_num + 1):
print(i)
if i % 2 == 0:
yield i
even = list(even_num(10))
even
even = list(even_num(10))
def even_num(max_num):
for i in range(2, max_num + 1):
print(i)
if i % 2 == 0:
yield i
print("generator exhausted")
even = list(even_num(10))
###Output
2
3
4
5
6
7
8
9
10
generator exhausted
###Markdown
Writing Object Oriented programs
###Code
class Greetings:
def greet(self, name):
return f"Hello, {name}"
c = Greetings()
print(c.greet('John'))
class Greetings:
default_name: str
def __init__(self, default_name):
self.default_name = default_name
def greet(self, name=None):
return f"Hello, {name if name else self.default_name}"
# Note: this call now raises a TypeError, because __init__ requires default_name
c = Greetings()
print(c.greet())
c = Greetings("Alan")
print(c.greet())
print(c.greet('Mark'))
###Output
Hello, Mark
###Markdown
Implementing Magic Methods
###Code
# Object representations – __repr__ and __str__
class Temperature:
def __init__(self, value, scale):
self.value = value
self.scale = scale
def __repr__(self):
return f"Temperature ({self.value}, {self.scale})"
def __str__(self):
return f"Temperature is {self.value} °{self.scale}"
t = Temperature(25, "C")
t
repr(t)
str(t)
###Output
_____no_output_____
###Markdown
Comparison methods – __eq__, __gt__, __lt__, and so on
###Code
class Temperature:
def __init__(self, value, scale):
self.value = value
self.scale = scale
if scale == 'C':
self.value_kelvin = value + 273.15
elif scale == 'F':
self.value_kelvin = (value - 32) * 5 / 9 + 273.15
def __eq__(self, other):
return self.value_kelvin == other.value_kelvin
def __lt__(self, other):
return self.value_kelvin < other.value_kelvin
def __repr__(self):
return f"Temperature ({self.value}, {self.scale})"
def __str__(self):
return f"Temperature is {self.value} °{self.scale}"
tc = Temperature(25, "C")
tf = Temperature(77, "F")
tf2 = Temperature(100, "F")
print(tc == tf)
print(tc < tf2)
###Output
True
###Markdown
Operators – __add__, __sub__, __mul__, and so on Callable object – __call__
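The arithmetic operator methods named in this heading work just like the comparison methods above: defining `__add__`, `__sub__`, and `__mul__` lets instances take part in the `+`, `-`, and `*` operators. The small `Vector` class below is an illustrative sketch rather than an example from the text; the `Counter` class in the next cell then demonstrates `__call__`.

```python
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        # invoked for v1 + v2
        return Vector(self.x + other.x, self.y + other.y)

    def __sub__(self, other):
        # invoked for v1 - v2
        return Vector(self.x - other.x, self.y - other.y)

    def __mul__(self, scalar):
        # invoked for v * scalar
        return Vector(self.x * scalar, self.y * scalar)

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"

print(Vector(1, 2) + Vector(3, 4))  # Vector(4, 6)
print(Vector(5, 6) - Vector(1, 1))  # Vector(4, 5)
print(Vector(1, 2) * 3)             # Vector(3, 6)
```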
###Code
class Counter:
def __init__(self):
self.counter = 0
def __call__(self, inc=1, *args, **kwargs):
self.counter += inc
c = Counter()
c.counter
c.counter
c()
c.counter
c(10)
c.counter
###Output
_____no_output_____
###Markdown
Reusing logic and avoiding repetition with inheritance
###Code
class A:
def f(self):
return 'A'
class Child(A):
def f(self):
parent_result = super().f()
return f"Child {parent_result}"
class B:
def f(self):
return 'B'
###Output
_____no_output_____
###Markdown
As its name suggests, multiple inheritance allows you to derive a child class from multiple classes.
###Code
class Child(A, B):
pass
###Output
_____no_output_____
###Markdown
If you call method f of Child, you'll get the value "A". In this simple case, Python will consider the first matching method following the order of the parent classes.
###Code
c = Child()
print(c.f())
Child.mro()
# Method Resolution Order (MRO)
###Output
_____no_output_____
###Markdown
Type hinting and type checking with mypy
###Code
def greeting(name: str) -> str:
return f"Hello, {name}"
###Output
_____no_output_____
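The annotations above are not enforced when the code runs; a separate tool such as mypy reads them. As a sketch (assuming mypy has been installed, for example with `pip install mypy`, and the code is saved to a hypothetical file such as `greeting_example.py`), running `mypy greeting_example.py` would flag the call below, even though plain Python executes it without complaint.

```python
def greeting(name: str) -> str:
    return f"Hello, {name}"

greeting(3)  # mypy reports an incompatible argument type (int where str is expected);
             # at runtime Python still happily returns "Hello, 3"
```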
###Markdown
The typing module
###Code
from typing import Dict, List, Set, Tuple
l: List = [1, 2, 3, 4, 5]
t: Tuple[int, str, float] = (1, "hello", 3.14)
s: Set[int] = {1, 2, 3, 4, 5, 6}
d: Dict[str, int] = {"a": 1, 'b': 2, 'c': 3}
from typing import List, Union
l2: List[Union[int, float]] = [1, 2.5, 3.14, 5]
l2
from typing import Union
def greeting(name: Union[str, None] = None) -> str:
return f"Hello, {name if name else 'Anonymous'}"
greeting()
greeting('Someone')
from typing import Optional
def greeting(name: Optional[str] = None) -> str:
return f"Hello, {name if name else 'Anonymous'}"
greeting()
## Custom Type
IntStringFloatType = Tuple[int, str, float]
t: IntStringFloatType = (1, 'hello', 3.14)
from typing import List
class Post:
def __init__(self, title: str) -> None:
self.title = title
def __str__(self) -> str:
return self.title
posts: List[Post] = [Post('postA'), Post('postB')]
posts
###Output
_____no_output_____
###Markdown
Type function signatures with Callable
###Code
from typing import Callable, List
ConditionFunction = Callable[[int], bool]
def filter_list(l: List[int], condition: ConditionFunction) -> List[int]:
return [i for i in l if condition(i)]
from typing import Any
def f(x:Any)->Any:
return x
###Output
_____no_output_____
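A quick usage sketch for `filter_list` defined above: any function (or lambda) whose signature matches `ConditionFunction` can be passed as the condition.

```python
def is_even(i: int) -> bool:
    return i % 2 == 0

print(filter_list([1, 2, 3, 4, 5, 6], is_even))           # [2, 4, 6]
print(filter_list([1, 2, 3, 4, 5, 6], lambda i: i > 3))   # [4, 5, 6]
```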
###Markdown
The second one, cast, is a function that lets you override the type inferred by the type checker. It'll force the type checker to consider the type you specify:
###Code
from typing import Any, cast
def f(x:Any)->Any:
return x
a = f("a") # inferred type is Any
a = cast(str,f("a")) # forced type to be str
a
###Output
_____no_output_____
###Markdown
Asynchronous I/O The main motivation behind this is that I/O operations are slow: reading from disk and network requests are a million times slower than reading from RAM or processing instructions.
###Code
with open(__file__) as f:
data = f.read()
# the program will block here until the data has been read
print(data)
###Output
_____no_output_____
###Markdown
We see that the script will block until we have retrieved the data from the disk and, as we said, this can be long. 99% of the execution time of the program is spent on waiting for the disk. Usually, it's not an issue for simple scripts like this because you probably don't have to perform other operations in the meantime. However, in other situations, there could have been the opportunity to perform other tasks. The typical case that is of great interest in this book is web servers. Imagine we have a first user that makes a request performing a 10-second-long database query before sending the response. If a second user makes another request in the meantime, they'll have to wait for the first response to finish before getting their answer. To solve this, traditional Python web servers based on the Web Server Gateway Interface (WSGI), such as Flask or Django, spawn several workers. Those are sub-processes of the web server that are all able to answer requests. If one is busy processing a long request, others can answer newly incoming requests. With asynchronous I/O, a single process won't block when processing a request with a long I/O operation. While it waits for this operation to finish, it can answer other requests. When the I/O operation is done, it resumes the request logic and can finally answer the request. Technically, this is achieved through the concept of an event loop. Think of it as a conductor that will manage all the asynchronous tasks you'll send to it. When data is available or when the write operation is done for one of those tasks, it'll ping the main program so that it can perform the next operations. Underneath, it relies upon the operating system select and poll calls, which are precisely there to ask for events about I/O operations at an operating system level. You can read very interesting details about this in the article Async IO on Linux: select, poll, and epoll by Julia Evans: https://jvns.ca/blog/2017/06/03/async-io-on-linux--select--poll--andepoll/.
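To make the "answer other requests while waiting" idea concrete, here is a minimal sketch (not from the text) in which two simulated requests, each blocked on slow I/O, are handled concurrently by a single event loop: the total wall time is roughly the longest individual wait, not the sum of the two.

```python
import asyncio
import time

async def handle_request(name: str, io_seconds: float) -> str:
    # asyncio.sleep stands in for a slow database query or network call
    await asyncio.sleep(io_seconds)
    return f"{name} finished after {io_seconds}s of simulated I/O"

async def server() -> None:
    start = time.perf_counter()
    # While one request is waiting on its I/O, the event loop is free
    # to make progress on the other one.
    results = await asyncio.gather(
        handle_request("request-1", 2.0),
        handle_request("request-2", 1.0),
    )
    print(results)
    print(f"total elapsed: {time.perf_counter() - start:.1f}s")  # ~2s, not ~3s

asyncio.run(server())  # in a notebook with a running event loop, use 'await server()' instead
```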
###Code
import asyncio
async def main():
print('Hello ...')
await asyncio.sleep(2)
print('....world!')
asyncio.run(main())
###Output
_____no_output_____ |
Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb | ###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Head-On Black Hole Collision Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical-like coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial notebooks ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Minimal sampling in the $\phi$ direction greatly speeds up the simulation.**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). Finally, excellent agreement is seen in the gravitational wave signal from the ringing remnant black hole for multiple decades in amplitude when compared to black hole perturbation theory predictions. NRPy+ Source Code for this module: * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: * [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.* [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates* [BSSN/Psi4.py](../edit/BSSN/Psi4.py); [\[**tutorial**\]](Tutorial-Psi4.ipynb): Generates expressions for $\psi_4$, the outgoing Weyl scalar + [BSSN/Psi4_tetrads.py](../edit/BSSN/Psi4_tetrads.py); [\[**tutorial**\]](Tutorial-Psi4_tetrads.ipynb): Generates quasi-Kinnersley tetrad needed for $\psi_4$-based gravitational wave extraction Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). 
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration * [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on Brill-Lindquist initial data**](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [**NRPy+ tutorial on validating Brill-Lindquist initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb).1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: 1. At the start of each iteration in time, output the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). 1. At each RK time substep, do the following: 1. Evaluate BSSN RHS expressions * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](bssn): Output C code for BSSN spacetime solve 1. [Step 3.a](bssnrhs): Output C code for BSSN RHS expressions 1. [Step 3.b](hamconstraint): Output C code for Hamiltonian constraint 1. [Step 3.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint 1. [Step 3.d](spinweight): Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 1. [Step 3.e](psi4): $\psi_4$ 1. [Step 3.f](cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`1. [Step 4](bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system1. [Step 5](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 6](visualize): Data Visualization Animations 1. [Step 6.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 6.b](genimages): Generate images for visualization animation 1. [Step 6.c](genvideo): Generate visualization animation1. 
[Step 7](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P0: Set the option for evolving the initial data forward in time
# Options include "low resolution" and "high resolution"
EvolOption = "low resolution"
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Two_BHs_Collide_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
if EvolOption == "low resolution":
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 150 # Length scale of computational domain
final_time = 200 # Final time
FD_order = 8 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
elif EvolOption == "high resolution":
# See above for description of the domain_size parameter
domain_size = 300 # Length scale of computational domain
final_time = 275 # Final time
FD_order = 10 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.2 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the timestepping order,
# the core data type, and the CFL factor.
# Step 2.b: Set the order of spatial and temporal derivatives;
# the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, ¶ms, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, ¶ms, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
###Output
_____no_output_____
###Markdown
Step 1.c: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numerical solution to the scalar wave equation to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:$$\Delta t \le \frac{\min(ds_i)}{c},$$where $c$ is the wavespeed, and$$ds_i = h_i \Delta x^i$$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
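As a plain-Python illustration of the rule above (separate from the generated `find_timestep()` C function), the sketch below estimates a CFL-limited timestep on an ordinary uniform spherical grid, where the scale factors are $h_r=1$, $h_\theta=r$, and $h_\phi=r\sin\theta$; the grid extents, spacings, and CFL factor are made-up values chosen only for illustration.

```python
import math

def toy_cfl_timestep(r_points, th_points, dr, dth, dph, cfl_factor=0.5, wavespeed=1.0):
    """Crude CFL estimate dt <= CFL*min(ds_i)/c on a uniform spherical grid (illustration only)."""
    ds_min = float("inf")
    for r in r_points:
        for th in th_points:
            ds_r = 1.0 * dr                  # h_r     = 1
            ds_th = r * dth                  # h_theta = r
            ds_ph = r * math.sin(th) * dph   # h_phi   = r sin(theta)
            ds_min = min(ds_min, ds_r, ds_th, ds_ph)
    return cfl_factor * ds_min / wavespeed

r_points = [0.05 + 0.1 * i for i in range(100)]                     # hypothetical radial samples
th_points = [0.01 + (math.pi - 0.02) * j / 15 for j in range(16)]   # hypothetical angular samples
print(toy_cfl_timestep(r_points, th_points, dr=0.1, dth=math.pi / 16, dph=math.pi))
```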
###Code
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
###Output
_____no_output_____
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
print("Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.")
start = time.time()
bl.BrillLindquist() # Registers ID C function in dictionary, used below to output to file.
with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file:
file.write(outC_function_dict["initial_data"])
end = time.time()
print("Finished BL initial data codegen in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 3: Output C code for BSSN spacetime solve \[Back to [top](toc)\]$$\label{bssn}$$ Step 3.a: Output C code for BSSN RHS expressions \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,SIMD_enable=True",
upwindcontrolvec=betaU).replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,SIMD_enable=True").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished Ricci C codegen in " + str(end - start) + " seconds.")
###Output
Generating symbolic expressions for BSSN RHSs...
Finished BSSN symbolic expressions in 3.5383458137512207 seconds.
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However, it does not, due to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation and, ultimately, to determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.
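As a back-of-the-envelope sketch of what "dominated by truncation error" implies (plain arithmetic, not part of the generated code): with the 8th-order finite-difference stencils chosen above, the constraint violation should scale like $(\Delta x)^8$, so doubling the resolution should shrink $|H|$ by roughly $2^8$, i.e. about 2.4 decades on a $\log_{10}$ plot.

```python
import math

fd_order = 8             # matches FD_order chosen above for the "low resolution" run
resolution_ratio = 2.0   # e.g., doubling the number of radial points
expected_drop = resolution_ratio ** fd_order
print(expected_drop)               # 256.0
print(math.log10(expected_drop))   # ~2.41 orders of magnitude
```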
###Code
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
end = time.time()
print("Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.d: Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 \[Back to [top](toc)\]$$\label{spinweight}$$ Here we output all spin-weight $-2$ spherical harmonics [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb) for $\ell=0$ up to and including $\ell=2$.
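For reference, the single harmonic the $\psi_4$ diagnostic below actually projects onto is ${}_{-2}Y_{2,0}(\theta,\phi) = \frac{1}{4}\sqrt{15/(2\pi)}\,\sin^2\theta$ (the same expression appears as a comment in the generated main C code). The short sketch below, which is an illustration and not part of the generated header, evaluates it in plain Python and numerically checks the unit normalization $\int |{}_{-2}Y_{2,0}|^2 \sin\theta\, d\theta\, d\phi = 1$.

```python
import math

def Y_sm2_l2_m0(theta: float) -> float:
    # {}_{-2}Y_{2,0} is real and independent of phi
    return 0.25 * math.sqrt(15.0 / (2.0 * math.pi)) * math.sin(theta) ** 2

# crude midpoint-rule check of the normalization over the sphere
N = 10_000
dtheta = math.pi / N
integral = 2.0 * math.pi * sum(
    Y_sm2_l2_m0((i + 0.5) * dtheta) ** 2 * math.sin((i + 0.5) * dtheta) * dtheta
    for i in range(N)
)
print(integral)  # ~1.0
```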
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
cmd.mkdir(os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics"))
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=2,
filename=os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"))
###Output
_____no_output_____
###Markdown
Step 3.e: Output $\psi_4$ \[Back to [top](toc)\]$$\label{psi4}$$We output $\psi_4$, assuming the quasi-Kinnersley tetrad of [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf).
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
BP4.Psi4()
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
desc="""Since it's so expensive to compute, instead of evaluating
psi_4 at all interior points, this functions evaluates it on a
point-by-point basis."""
name="psi4"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """const paramstruct *restrict params,
const int i0,const int i1,const int i2,
REAL *restrict xx[3], const REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = """
const int idx = IDX3S(i0,i1,i2);
const REAL xx0 = xx[0][i0];const REAL xx1 = xx[1][i1];const REAL xx2 = xx[2][i2];
// Real part of psi_4, divided into 3 terms
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// Imaginary part of psi_4, divided into 3 terms
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}""")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4re_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False")
with open(os.path.join(Ccodesdir,"Psi4re_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4re_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4im_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False")
with open(os.path.join(Ccodesdir,"Psi4im_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4im_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
# Step 0: Import the multiprocessing module.
import multiprocessing
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [Psi4re,Psi4re,Psi4re,Psi4im,Psi4im,Psi4im, BrillLindquistID,BSSN_RHSs,Ricci,Hamiltonian,gammadet]
# Step 1.a: Define master function for calling all above functions.
# Note that lambdifying this doesn't work in Python 3
def master_func(idx):
if idx < 3: # Call Psi4re(arg)
funcs[idx](idx)
elif idx < 6: # Call Psi4im(arg-3)
funcs[idx](idx-3)
else: # All non-Psi4 functions:
funcs[idx]()
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.b: Import the multiprocessing module.
import multiprocessing
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
# This will happen on Android and Windows systems
for idx in range(len(funcs)):
master_func(idx)
###Output
Generating C code for psi4_im_pt0 in SinhSpherical coordinates.
Generating C code for psi4_re_pt1 in SinhSpherical coordinates.
Generating C code for psi4_re_pt2 in SinhSpherical coordinates.
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.
Generating C code for psi4_im_pt1 in SinhSpherical coordinates.
Generating C code for BSSN RHSs in SinhSpherical coordinates.
Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.
Generating C code for psi4_im_pt2 in SinhSpherical coordinates.
Generating C code for Ricci tensor in SinhSpherical coordinates.
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Output C function enforce_detgammabar_constraint() to file BSSN_Two_BHs_Collide_Ccodes/enforce_detgammabar_constraint.h
Finished gamma constraint C codegen in 0.10176610946655273 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Two_BHs_Collide_Ccodes/Hamiltonian_constraint.h
Finished Hamiltonian C codegen in 9.232191324234009 seconds.
Finished BL initial data codegen in 15.582969903945923 seconds.
Finished generating psi4_im_pt2 in 26.243969678878784 seconds.
Finished generating psi4_im_pt1 in 27.658324718475342 seconds.
Finished generating psi4_re_pt2 in 50.5712308883667 seconds.
Finished generating psi4_re_pt1 in 57.67112588882446 seconds.
Output C function rhs_eval() to file BSSN_Two_BHs_Collide_Ccodes/rhs_eval.h
Finished BSSN_RHS C codegen in 76.27907538414001 seconds.
Finished generating psi4_im_pt0 in 86.91213059425354 seconds.
Output C function Ricci_eval() to file BSSN_Two_BHs_Collide_Ccodes/Ricci_eval.h
Finished Ricci C codegen in 121.73628520965576 seconds.
Finished generating psi4_re_pt0 in 180.75423169136047 seconds.
###Markdown
Step 3.f: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.f.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.f.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
// Set free-parameter values for BSSN evolution:
params.eta = 2.0;
// Set free parameters for the (Brill-Lindquist) initial data
params.BH1_posn_x = 0.0; params.BH1_posn_y = 0.0; params.BH1_posn_z =+0.25;
params.BH2_posn_x = 0.0; params.BH2_posn_y = 0.0; params.BH2_posn_z =-0.25;
params.BH1_mass = 0.5; params.BH2_mass = 0.5;
"""+"""
const REAL final_time = """+str(final_time)+";\n")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.f.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.f.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.f.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 4: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
Auxiliary gridfunction "psi4i_0pt" has parity type 0.
Auxiliary gridfunction "psi4i_1pt" has parity type 0.
Auxiliary gridfunction "psi4i_2pt" has parity type 0.
Auxiliary gridfunction "psi4r_0pt" has parity type 0.
Auxiliary gridfunction "psi4r_1pt" has parity type 0.
Auxiliary gridfunction "psi4r_2pt" has parity type 0.
AuxEvol gridfunction "RbarDD00" has parity type 4.
AuxEvol gridfunction "RbarDD01" has parity type 5.
AuxEvol gridfunction "RbarDD02" has parity type 6.
AuxEvol gridfunction "RbarDD11" has parity type 7.
AuxEvol gridfunction "RbarDD12" has parity type 8.
AuxEvol gridfunction "RbarDD22" has parity type 9.
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 5: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor. Can be overwritten at the command line.
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.""")
%%writefile $Ccodesdir/BrillLindquist_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
#include "initial_data.h"
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// Step P14: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
#include "psi4.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = final_time;
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(¶ms, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
REAL out_approx_every_t = 0.2;
int output_every_N = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output diagnostics
if(n%output_every_N == 0) {
// Step 3.a.i: Output psi4 spin-weight -2 decomposed data, every N_output_every */
#include "SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
char filename[100];
for(int r_ext_idx = (Nxx_plus_2NGHOSTS0-NGHOSTS)/4;
r_ext_idx<(Nxx_plus_2NGHOSTS0-NGHOSTS)*0.9;
r_ext_idx+=5) {
REAL r_ext;
{
REAL xx0 = xx[0][r_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
xxCart(¶ms,xx,r_ext_idx,1,1,xCart);
r_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
sprintf(filename,"outPsi4_l2m0-%d-r%.2f.txt",Nxx[0],(double)r_ext);
FILE *outPsi4_l2m0;
if(n==0) outPsi4_l2m0 = fopen(filename, "w");
else outPsi4_l2m0 = fopen(filename, "a");
REAL Psi4r_0pt_l2m0 = 0.0,Psi4r_1pt_l2m0 = 0.0,Psi4r_2pt_l2m0 = 0.0;
REAL Psi4i_0pt_l2m0 = 0.0,Psi4i_1pt_l2m0 = 0.0,Psi4i_2pt_l2m0 = 0.0;
LOOP_REGION(r_ext_idx,r_ext_idx+1,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,NGHOSTS+1) {
psi4(¶ms, i0,i1,i2, xx, y_n_gfs, diagnostic_output_gfs);
const int idx = IDX3S(i0,i1,i2);
const REAL th = xx[1][i1];
const REAL ph = xx[2][i2];
// Construct integrand for Psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
// Based on http://www.demonstrations.wolfram.com/SpinWeightedSphericalHarmonics/
// we have {}_{s}_Y_{lm} = {}_{-2}_Y_{20} = 1/4 * sqrt(15 / (2*pi)) * sin(th)^2
// Confirm integrand is correct:
// Integrate[(1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2) (1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2)*2*Pi*Sin[th], {th, 0, Pi}]
// ^^^ equals 1.
REAL ReY_sm2_l2_m0,ImY_sm2_l2_m0;
SpinWeight_minus2_SphHarmonics(2,0, th,ph, &ReY_sm2_l2_m0,&ImY_sm2_l2_m0);
const REAL sinth = sin(xx[1][i1]);
/* psi4 *{}_{-2}_Y_{20}* (int dphi)* sinth*dtheta */
Psi4r_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_0PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_1PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_2PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_0PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_1PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_2PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
}
fprintf(outPsi4_l2m0,"%e %.15e %.15e %.15e %.15e %.15e %.15e\n", (double)((n)*dt),
(double)Psi4r_0pt_l2m0,(double)Psi4r_1pt_l2m0,(double)Psi4r_2pt_l2m0,
(double)Psi4i_0pt_l2m0,(double)Psi4i_1pt_l2m0,(double)Psi4i_2pt_l2m0);
fclose(outPsi4_l2m0);
}
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 10 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 10 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
if EvolOption == "low resolution":
Nr = 270
Ntheta = 8
elif EvolOption == "high resolution":
Nr = 800
Ntheta = 16
else:
print("Error: unknown EvolOption!")
sys.exit(1)
CFL_FACTOR = 1.0
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"BrillLindquist_Playground.c"), "BrillLindquist_Playground",
compile_mode="optimized")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
start = time.time()
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2 "+str(CFL_FACTOR))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm`...
Finished executing in 11.231794118881226 seconds.
Finished compilation.
Finished in 11.24258828163147 seconds.
Now running. Should take ~30 minutes...
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./BrillLindquist_Playground 270 8 2 1.0`...
It: 27200 t=199.93 dt=7.35e-03 | 100.0%; ETA 0 s | t/h 2701.46 | gp/s 1.76e+06
Finished executing in 266.6908230781555 seconds.
Finished in 266.7072021961212 seconds.
###Markdown
Step 6: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to$${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{−0.0890 t/M} \cos(0.3737 t/M+ \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory.Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that1. Finite-differencing order is set to 101. Nr = 8001. Ntheta = 161. Outer boundary (`AMPL`) set to 3001. Final time (`t_final`) set to 2751. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
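To get a feel for the timescales in this model before comparing against the numerical waveform, the following minimal sketch (not part of the generated C code; `A` and `phi` simply default to the values fit by eye for the low-resolution run in the next cell) evaluates the quoted quasinormal-mode parameters: the mode damps with an e-folding time of $1/0.0890 \approx 11.2\,M$ and oscillates with a period of $2\pi/0.3737 \approx 16.8\,M$.
```python
import numpy as np

def psi4_l2m0_qnm_model(t_over_M, A=1.8e-2, phi=2.8):
    """Ringdown model quoted above: A exp(-0.0890 t/M) cos(0.3737 t/M + phi)."""
    return A*np.exp(-0.0890*t_over_M)*np.cos(0.3737*t_over_M + phi)

print("e-folding damping time ~ %.1f M" % (1.0/0.0890))       # ~11.2 M
print("oscillation period     ~ %.1f M" % (2.0*np.pi/0.3737)) # ~16.8 M
```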
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 1.8e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r1,psi4r2,psi4r3,psi4i1,psi4i2,psi4i3 = np.loadtxt("outPsi4_l2m0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
for i in range(len(psi4r1)):
retarded_time = t[i]-float(extraction_radius)
t_retarded.append(retarded_time)
log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r1[i] + psi4r2[i] + psi4r3[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel(r'$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,final_time - float(extraction_radius)+10])
ax.set_ylim([-13.5,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Head-On Black Hole Collision with Gravitational Wave Analysis Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial modules ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Not sampling in the $\phi$ direction greatly speeds up the simulation.**Module Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plot](convergence) at bottom), and results have been validated to agree to roundoff error with the [original SENR code](https://bitbucket.org/zach_etienne/nrpy).Further, agreement of $\psi_4$ with result expected from black hole perturbation theory (*a la* Fig 6 of [Ruchlin, Etienne, and Baumgarte](https://arxiv.org/pdf/1712.07658.pdf)) has been successfully demonstrated in [Step 7](compare). NRPy+ Source Code for this module: 1. [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: 1. [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.1. [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion1. [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates1. [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates1. [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). 
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.1. ([Step 2 below](adm_id)) Set gridfunction values to initial data (**[documented in previous start-to-finish module](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_two_BH_initial_data.ipynb)**).1. Evolve the initial data forward in time using RK4 time integration. At each RK4 substep, do the following: 1. ([Step 3 below](bssn_rhs)) Evaluate BSSN RHS expressions. 1. ([Step 4 below](apply_bcs)) Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) 1. ([Step 5 below](enforce3metric)) Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint1. At the end of each iteration in time, output the Hamiltonian constraint violation. 1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This module is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [BSSN.BrillLindquist](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](nrpyccodes) Define Functions for Generating C Codes of Needed Quantities 1. [Step 3.a](bssnrhs): BSSN RHSs 1. [Step 3.b](hamconstraint): Hamiltonian constraint 1. [Step 3.c](spinweight): Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 1. [Step 3.d](psi4): $\psi_4$1. [Step 4](ccodegen): Generate C codes in parallel1. [Step 5](apply_bcs): Apply singular, curvilinear coordinate boundary conditions1. [Step 6](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint1. [Step 7](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 8](compare): Comparison with black hole perturbation theory1. [Step 9](visual): Data Visualization Animations 1. [Step 9.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 9.b](genimages): Generate images for visualization animation 1. [Step 9.c](genvideo): Generate visualization animation1. [Step 10](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 11](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# First we import needed core NRPy+ modules
from outputC import *
import NRPy_param_funcs as par
import grid as gri
import loop as lp
import indexedexp as ixp
import finite_difference as fin
import reference_metric as rfm
#par.set_parval_from_str("outputC::PRECISION","long double")
# Set spatial dimension (must be 3 for BSSN)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Set some core parameter choices, including order of MoL timestepping, FD order,
# floating point precision, and CFL factor:
# Choices are: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
FD_order = 10 # Even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Generate timestepping code. As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = "rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, RK_INPUT_GFS, RK_OUTPUT_GFS);",
post_RHS_string = """
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, RK_OUTPUT_GFS);\n""")
# Set finite differencing order:
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# REAL and CFL_FACTOR parameters used below in C code directly
# Then we set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","SinhSpherical")
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Set the finite-differencing order to 6, matching B-L test from REB paper (Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order)
# Then we set the phi axis to be the symmetry axis; i.e., axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
#################
# Next output C headers related to the numerical grids we just set up:
#################
# First output the coordinate bounds xxmin[] and xxmax[]:
with open("BSSN/xxminmax.h", "w") as file:
file.write("const REAL xxmin[3] = {"+str(rfm.xxmin[0])+","+str(rfm.xxmin[1])+","+str(rfm.xxmin[2])+"};\n")
file.write("const REAL xxmax[3] = {"+str(rfm.xxmax[0])+","+str(rfm.xxmax[1])+","+str(rfm.xxmax[2])+"};\n")
# Next output the proper distance between gridpoints in given coordinate system.
# This is used to find the minimum timestep.
dxx = ixp.declarerank1("dxx",DIM=3)
ds_dirn = rfm.ds_dirn(dxx)
outputC([ds_dirn[0],ds_dirn[1],ds_dirn[2]],["ds_dirn0","ds_dirn1","ds_dirn2"],"BSSN/ds_dirn.h")
# Generic coordinate NRPy+ file output, Part 2: output the conversion from (x0,x1,x2) to Cartesian (x,y,z)
outputC([rfm.xxCart[0],rfm.xxCart[1],rfm.xxCart[2]],["xCart[0]","xCart[1]","xCart[2]"],
"BSSN/xxCart.h")
###Output
Wrote to file "BSSN/ds_dirn.h"
Wrote to file "BSSN/xxCart.h"
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [BSSN.BrillLindquist](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [BSSN.BrillLindquist](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
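For quick orientation (standard Brill-Lindquist expressions, quoted here as a reference rather than derived; see the linked tutorial module for the full construction), the initial data are time-symmetric, $K_{ij}=0$, and conformally flat in the Cartesian basis: $$\gamma_{ij} = \psi^4 \delta_{ij}, \qquad \psi = 1 + \sum_{a=1}^{2}\frac{m_a}{2\,\left|\vec{x}-\vec{x}_a\right|},$$ with the two punctures placed on the $z$-axis at $z=\pm 0.25$ and $m_1=m_2=0.5$ in the C code below.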
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
returnfunction = bl.BrillLindquist()
# Now output the Brill-Lindquist initial data to file:
with open("BSSN/BrillLindquist.h","w") as file:
file.write(bl.returnfunction)
###Output
_____no_output_____
###Markdown
Step 3: Define Functions for Generating C Codes of Needed Quantities \[Back to [top](toc)\]$$\label{nrpyccodes}$$ Step 3.a: BSSN RHSs \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
import time
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
thismodule = __name__
diss_strength = par.Cparameters("REAL", thismodule, "diss_strength", 1e300) # diss_strength must be set in C, and
# we set it crazy high to ensure this.
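# (Descriptive note: the *_dKOD symbols declared below are the Kreiss-Oliger dissipation derivatives
#  of each evolved gridfunction; the nested loops that follow add diss_strength times this dissipation,
#  summed over all three directions, to every BSSN right-hand side. The actual, radially varying
#  diss_strength is substituted at the C-code level via the ERF profile written into BSSN/BSSN_RHSs.h below.)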
alpha_dKOD = ixp.declarerank1("alpha_dKOD")
cf_dKOD = ixp.declarerank1("cf_dKOD")
trK_dKOD = ixp.declarerank1("trK_dKOD")
betU_dKOD = ixp.declarerank2("betU_dKOD","nosym")
vetU_dKOD = ixp.declarerank2("vetU_dKOD","nosym")
lambdaU_dKOD = ixp.declarerank2("lambdaU_dKOD","nosym")
aDD_dKOD = ixp.declarerank3("aDD_dKOD","sym01")
hDD_dKOD = ixp.declarerank3("hDD_dKOD","sym01")
for k in range(DIM):
gaugerhs.alpha_rhs += diss_strength*alpha_dKOD[k]
rhs.cf_rhs += diss_strength* cf_dKOD[k]
rhs.trK_rhs += diss_strength* trK_dKOD[k]
for i in range(DIM):
gaugerhs.bet_rhsU[i] += diss_strength* betU_dKOD[i][k]
gaugerhs.vet_rhsU[i] += diss_strength* vetU_dKOD[i][k]
rhs.lambda_rhsU[i] += diss_strength*lambdaU_dKOD[i][k]
for j in range(DIM):
rhs.a_rhsDD[i][j] += diss_strength*aDD_dKOD[i][j][k]
rhs.h_rhsDD[i][j] += diss_strength*hDD_dKOD[i][j][k]
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
BSSN_evol_rhss = [ \
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD00"),rhs=rhs.a_rhsDD[0][0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD01"),rhs=rhs.a_rhsDD[0][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD02"),rhs=rhs.a_rhsDD[0][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD11"),rhs=rhs.a_rhsDD[1][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD12"),rhs=rhs.a_rhsDD[1][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD22"),rhs=rhs.a_rhsDD[2][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","alpha"),rhs=gaugerhs.alpha_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","betU0"),rhs=gaugerhs.bet_rhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","betU1"),rhs=gaugerhs.bet_rhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","betU2"),rhs=gaugerhs.bet_rhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","cf"), rhs=rhs.cf_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD00"),rhs=rhs.h_rhsDD[0][0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD01"),rhs=rhs.h_rhsDD[0][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD02"),rhs=rhs.h_rhsDD[0][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD11"),rhs=rhs.h_rhsDD[1][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD12"),rhs=rhs.h_rhsDD[1][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD22"),rhs=rhs.h_rhsDD[2][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","lambdaU0"),rhs=rhs.lambda_rhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","lambdaU1"),rhs=rhs.lambda_rhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","lambdaU2"),rhs=rhs.lambda_rhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","trK"), rhs=rhs.trK_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vetU0"),rhs=gaugerhs.vet_rhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","vetU1"),rhs=gaugerhs.vet_rhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","vetU2"),rhs=gaugerhs.vet_rhsU[2]) ]
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
BSSN_RHSs_string = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False",upwindcontrolvec=betaU)
end = time.time()
print("Finished generating BSSN RHSs in "+str(end-start)+" seconds.")
with open("BSSN/BSSN_RHSs.h", "w") as file:
file.write(lp.loop(["i2","i1","i0"],["NGHOSTS","NGHOSTS","NGHOSTS"],
["NGHOSTS+Nxx[2]","NGHOSTS+Nxx[1]","NGHOSTS+Nxx[0]"],
["1","1","1"],["const REAL invdx0 = 1.0/dxx[0];\n"+
"const REAL invdx1 = 1.0/dxx[1];\n"+
"const REAL invdx2 = 1.0/dxx[2];\n"+
"#pragma omp parallel for",
" const REAL xx2 = xx[2][i2];",
" const REAL xx1 = xx[1][i1];"],"",
"""
const REAL xx0 = xx[0][i0];
#define ERF(X, X0, W) (0.5 * (erf( ( (X) - (X0) ) / (W) ) + 1.0))
REAL xCart[3];
#include "../CurviBoundaryConditions/xxCart.h"
const REAL diss_strength = ERF(sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]),2.0L,0.17L)*0.99L;\n"""+BSSN_RHSs_string))
###Output
_____no_output_____
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next we output the C code for evaluating the Hamiltonian constraint. In the absence of numerical error, this constraint should evaluate to zero; in practice it does not, due to numerical (typically truncation and roundoff) error. We therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation and, ultimately, to determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.
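As a sketch of how this diagnostic is typically used (the filenames, single-snapshot file layout, and convergence order below are illustrative assumptions, not outputs of this notebook; the commented-out convergence cell near the end of the notebook performs the actual comparison), one checks that the low-resolution $\log_{10}|H|$, shifted by $\log_{10}\left[(N_{\rm lo}/N_{\rm hi})^p\right]$ for the expected order $p$, lies on top of the high-resolution result:
```python
# Minimal convergence-check sketch. Assumes two single-snapshot text files with columns
# (r, conformal factor, log10|H|) at radial resolutions N_lo and N_hi; filenames are hypothetical.
import numpy as np

def convergence_residual(file_lo, file_hi, N_lo, N_hi, order):
    r_lo, _, logH_lo = np.loadtxt(file_lo).T
    r_hi, _, logH_hi = np.loadtxt(file_hi).T
    # Shift the low-resolution constraint violation by the expected truncation-error scaling...
    logH_lo_shifted = logH_lo + np.log10((float(N_lo)/float(N_hi))**order)
    # ...and compare on the high-resolution radii: the residual should hover near zero.
    return np.interp(r_hi, r_lo, logH_lo_shifted) - logH_hi

# e.g. convergence_residual("out1D-72.txt", "out1D-96.txt", 72, 96, order=4)
```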
###Code
# First register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
def H():
print("Generating C code for BSSN Hamiltonian in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
bssncon.output_C__Hamiltonian_h(add_T4UUmunu_source_terms=False)
###Output
_____no_output_____
###Markdown
Step 3.c: Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 \[Back to [top](toc)\]$$\label{spinweight}$$ [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb)
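As a quick, standalone sanity check (a sketch using `scipy.integrate`, separate from the generated C header), the single harmonic used later for the $\psi_4$ decomposition, ${}_{-2}Y_{2,0}(\theta,\phi)=\frac{1}{4}\sqrt{\frac{15}{2\pi}}\sin^2\theta$, integrates to unity over the sphere, which is the normalization assumed by the extraction loop in the main C code:
```python
# Verify that |{}_{-2}Y_{2,0}|^2 integrates to 1 over the sphere
# (the 2*pi factor is the trivial phi integral, since this mode is phi-independent).
import numpy as np
from scipy.integrate import quad

Y_sm2_l2_m0 = lambda th: 0.25*np.sqrt(15.0/(2.0*np.pi))*np.sin(th)**2
norm, _ = quad(lambda th: Y_sm2_l2_m0(th)**2 * 2.0*np.pi*np.sin(th), 0.0, np.pi)
print(norm)  # ~1.0
```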
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=2,filename="SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h")
###Output
_____no_output_____
###Markdown
Step 3.d: Output $\psi_4$ \[Back to [top](toc)\]$$\label{psi4}$$We output $\psi_4$, assuming the Quasi-Kinnersley tetrad of [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf). The real and imaginary parts of $\psi_4$ are each generated in three pieces (`pt0`, `pt1`, `pt2`), which are summed when the waveform is extracted in the main C code.
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
BP4.Psi4()
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
fin.FD_outputC("BSSN/Psi4re_pt"+str(part)+"_lowlevel.h",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False")
end = time.time()
print("Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
fin.FD_outputC("BSSN/Psi4im_pt"+str(part)+"_lowlevel.h",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False")
end = time.time()
print("Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 4: Perform Parallelized C Code Generation \[Back to [top](toc)\]$$\label{ccodegen}$$Here we call all functions defined in [the above section](nrpyccodes) in parallel, to greatly expedite C code generation on multicore CPUs.
###Code
import multiprocessing
if __name__ == '__main__':
ID = multiprocessing.Process(target=BrillLindquistID)
RHS = multiprocessing.Process(target=BSSN_RHSs)
H = multiprocessing.Process(target=H)
Psi4re0 = multiprocessing.Process(target=Psi4re, args=(0,))
Psi4re1 = multiprocessing.Process(target=Psi4re, args=(1,))
Psi4re2 = multiprocessing.Process(target=Psi4re, args=(2,))
Psi4im0 = multiprocessing.Process(target=Psi4im, args=(0,))
Psi4im1 = multiprocessing.Process(target=Psi4im, args=(1,))
Psi4im2 = multiprocessing.Process(target=Psi4im, args=(2,))
ID.start()
RHS.start()
H.start()
Psi4re0.start()
Psi4re1.start()
Psi4re2.start()
Psi4im0.start()
Psi4im1.start()
Psi4im2.start()
ID.join()
RHS.join()
H.join()
Psi4re0.join()
Psi4re1.join()
Psi4re2.join()
Psi4im0.join()
Psi4im1.join()
Psi4im2.join()
###Output
Generating C code for BSSN RHSs in SinhSpherical coordinates.
Generating C code for BSSN Hamiltonian in SinhSpherical coordinates.
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.
Generating C code for psi4_re_pt1 in SinhSpherical coordinates.
Generating C code for psi4_re_pt2 in SinhSpherical coordinates.
Generating C code for psi4_im_pt0 in SinhSpherical coordinates.
Generating C code for psi4_im_pt1 in SinhSpherical coordinates.
Generating C code for psi4_im_pt2 in SinhSpherical coordinates.
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Wrote to file "BSSN/Psi4im_pt1_lowlevel.h"
Finished generating psi4_im_pt1 in 11.6268601418 seconds.
Wrote to file "BSSN/Psi4re_pt1_lowlevel.h"
Finished generating psi4_re_pt1 in 14.1261129379 seconds.
Wrote to file "BSSN/Psi4im_pt0_lowlevel.h"
Finished generating psi4_im_pt0 in 25.1990799904 seconds.
Wrote to file "BSSN/Psi4im_pt2_lowlevel.h"
Finished generating psi4_im_pt2 in 35.0764701366 seconds.
Wrote to file "BSSN/Psi4re_pt0_lowlevel.h"
Finished generating psi4_re_pt0 in 37.4233150482 seconds.
Wrote to file "BSSN/Psi4re_pt2_lowlevel.h"
Finished generating psi4_re_pt2 in 44.3447999954 seconds.
Finished in 53.8594939709 seconds.
Output C implementation of Hamiltonian constraint to BSSN/Hamiltonian.h
Finished generating BSSN RHSs in 243.633222103 seconds.
###Markdown
Step 5: Apply singular, curvilinear coordinate boundary conditions \[Back to [top](toc)\]$$\label{apply_bcs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial module](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions()
###Output
Wrote to file "CurviBoundaryConditions/gridfunction_defines.h"
Wrote to file "CurviBoundaryConditions/set_parity_conditions.h"
Wrote to file "CurviBoundaryConditions/xxCart.h"
Wrote to file "CurviBoundaryConditions/xxminmax.h"
Wrote to file "CurviBoundaryConditions/Cart_to_xx.h"
###Markdown
Step 6: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial module](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb).Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
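For orientation, a sketch of the pointwise rescaling this routine applies (cf. Eq. 53 of the cited paper; the linked tutorial module gives the precise implementation): $$\bar{\gamma}_{ij}\ \to\ \left(\frac{\det\hat{\gamma}_{ij}}{\det\bar{\gamma}_{ij}}\right)^{1/3}\bar{\gamma}_{ij},$$ so that after the ghost zones are filled by `apply_bcs()`, the determinant condition again holds at every gridpoint.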
###Code
import BSSN.Enforce_Detgammabar_Constraint as EGC
EGC.output_Enforce_Detgammabar_Constraint_Ccode()
###Output
Output C implementation of det(gammabar) constraint to file BSSN/enforce_detgammabar_constraint.h
###Markdown
Step 7: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open("BSSN/BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h", "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.""")
%%writefile BSSN/BrillLindquist_Playground.c
// Step P0: define NGHOSTS and declare CFL_FACTOR.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Set free parameters
// Step P2a: Free parameters for the numerical grid
// ONLY SinhSpherical used in this module.
// SinhSpherical coordinates parameters
const REAL AMPL = 300; // Parameter has been updated, compared to B-L test from REB paper (Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
const REAL SINHW = 0.2L; // Parameter has been updated, compared to B-L test from REB paper (Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
//const REAL SINHW = 0.125; // Matches B-L test from REB paper (Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
// Time coordinate parameters
const REAL t_final = 275; /* Final time is set so that at t=t_final,
* data at the plotted wave extraction radius have not been corrupted
* by the approximate outer boundary condition */
// Step P2b: Free parameters for the spacetime evolution
const REAL eta = 2.0; // Gamma-driving shift condition parameter. Matches B-L test from REB paper (Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
// Step P3: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P4: Set free parameters for the (Brill-Lindquist) initial data
const REAL BH1_posn_x = 0.0,BH1_posn_y = 0.0,BH1_posn_z = +0.25;
const REAL BH2_posn_x = 0.0,BH2_posn_y = 0.0,BH2_posn_z = -0.25;
//const REAL BH1_posn_x = 0.0,BH1_posn_y = 0.0,BH1_posn_z = +0.05; // SUPER CLOSE
//const REAL BH2_posn_x = 0.0,BH2_posn_y = 0.0,BH2_posn_z = -0.05; // SUPER CLOSE
const REAL BH1_mass = 0.5,BH2_mass = 0.5;
// Step P5: Declare the IDX4(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS[0] elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1] in memory, etc.
#define IDX4(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * ( (k) + Nxx_plus_2NGHOSTS[2] * (g) ) ) )
#define IDX3(i,j,k) ( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * (k) ) )
// Assuming idx = IDX3(i,j,k). Much faster if idx can be reused over and over:
#define IDX4pt(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2]) * (g) )
// Step P6: Set #define's for BSSN gridfunctions. C code generated above
#include "../CurviBoundaryConditions/gridfunction_defines.h"
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
void xxCart(REAL *xx[3],const int i0,const int i1,const int i2, REAL xCart[3]) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
#include "../CurviBoundaryConditions/xxCart.h"
}
// Step P7: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "../CurviBoundaryConditions/curvilinear_parity_and_outer_boundary_conditions.h"
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
REAL find_timestep(const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3],REAL *xx[3], const REAL CFL_FACTOR) {
const REAL dxx0 = dxx[0], dxx1 = dxx[1], dxx2 = dxx[2];
REAL dsmin = 1e38; // Start with a crazy high value... close to the largest number in single precision.
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
const REAL xx0 = xx[0][i0], xx1 = xx[1][i1], xx2 = xx[2][i2];
REAL ds_dirn0, ds_dirn1, ds_dirn2;
#include "ds_dirn.h"
#define MIN(A, B) ( ((A) < (B)) ? (A) : (B) )
// Set dsmin = MIN(dsmin, ds_dirn0, ds_dirn1, ds_dirn2);
dsmin = MIN(dsmin,MIN(ds_dirn0,MIN(ds_dirn1,ds_dirn2)));
}
return dsmin*CFL_FACTOR;
}
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
#include "BrillLindquist.h"
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const int Nxx_plus_2NGHOSTS[3],REAL *xx[3], REAL *in_gfs) {
#pragma omp parallel for
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0], 0,Nxx_plus_2NGHOSTS[1], 0,Nxx_plus_2NGHOSTS[2]) {
const int idx = IDX3(i0,i1,i2);
BSSN_ID(xx[0][i0],xx[1][i1],xx[2][i2],
&in_gfs[IDX4pt(HDD00GF,idx)],&in_gfs[IDX4pt(HDD01GF,idx)],&in_gfs[IDX4pt(HDD02GF,idx)],
&in_gfs[IDX4pt(HDD11GF,idx)],&in_gfs[IDX4pt(HDD12GF,idx)],&in_gfs[IDX4pt(HDD22GF,idx)],
&in_gfs[IDX4pt(ADD00GF,idx)],&in_gfs[IDX4pt(ADD01GF,idx)],&in_gfs[IDX4pt(ADD02GF,idx)],
&in_gfs[IDX4pt(ADD11GF,idx)],&in_gfs[IDX4pt(ADD12GF,idx)],&in_gfs[IDX4pt(ADD22GF,idx)],
&in_gfs[IDX4pt(TRKGF,idx)],
&in_gfs[IDX4pt(LAMBDAU0GF,idx)],&in_gfs[IDX4pt(LAMBDAU1GF,idx)],&in_gfs[IDX4pt(LAMBDAU2GF,idx)],
&in_gfs[IDX4pt(VETU0GF,idx)],&in_gfs[IDX4pt(VETU1GF,idx)],&in_gfs[IDX4pt(VETU2GF,idx)],
&in_gfs[IDX4pt(BETU0GF,idx)],&in_gfs[IDX4pt(BETU1GF,idx)],&in_gfs[IDX4pt(BETU2GF,idx)],
&in_gfs[IDX4pt(ALPHAGF,idx)],&in_gfs[IDX4pt(CFGF,idx)]);
}
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
void Hamiltonian_constraint(const int Nxx[3],const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3], REAL *xx[3],
REAL *in_gfs, REAL *aux_gfs) {
#include "Hamiltonian.h"
}
// Step P12: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
void psi4(const int Nxx_plus_2NGHOSTS[3],const int i0,const int i1,const int i2,
const REAL dxx[3], REAL *xx[3],
REAL *in_gfs, REAL *aux_gfs) {
const int idx = IDX3(i0,i1,i2);
const REAL xx0 = xx[0][i0];
const REAL xx1 = xx[1][i1];
const REAL xx2 = xx[2][i2];
const REAL invdx0 = 1.0/dxx[0];
const REAL invdx1 = 1.0/dxx[1];
const REAL invdx2 = 1.0/dxx[2];
// REAL psi4_re_pt0,psi4_re_pt1,psi4_re_pt2;
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// REAL psi4_im_pt0,psi4_im_pt1,psi4_im_pt2;
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}
// aux_gfs[IDX4pt(PSI4RGF,idx)] = psi4_re_pt0 + psi4_re_pt1 + psi4_re_pt2;
// aux_gfs[IDX4pt(PSI4IGF,idx)] = psi4_im_pt0 + psi4_im_pt1 + psi4_im_pt2;
}
// Step P13: Declare function to evaluate the BSSN RHSs
void rhs_eval(const int Nxx[3],const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3], REAL *xx[3], const REAL *in_gfs,REAL *rhs_gfs) {
#include "BSSN_RHSs.h"
}
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
const int Nxx_plus_2NGHOSTS[3] = { Nxx[0]+2*NGHOSTS, Nxx[1]+2*NGHOSTS, Nxx[2]+2*NGHOSTS };
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2];
#include "xxminmax.h"
// Step 0c: Allocate memory for gridfunctions
#include "../MoLtimestepping/RK_Allocate_Memory.h"
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
printf("Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
printf(" or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0d: Set up space and time coordinates
// Step 0d.i: Set \Delta x^i on uniform grids.
REAL dxx[3];
for(int i=0;i<3;i++) dxx[i] = (xxmax[i] - xxmin[i]) / ((REAL)Nxx[i]);
// Step 0d.ii: Set up uniform coordinate grids
REAL *xx[3];
for(int i=0;i<3;i++) {
xx[i] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS[i]);
for(int j=0;j<Nxx_plus_2NGHOSTS[i];j++) {
xx[i][j] = xxmin[i] + ((REAL)(j-NGHOSTS) + (1.0/2.0))*dxx[i]; // Cell-centered grid.
}
}
// Step 0d.iii: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(Nxx_plus_2NGHOSTS, dxx,xx, CFL_FACTOR);
//printf("# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of iterations in time.
//Add 0.5 to account for C rounding down integers.
REAL out_approx_every_t = 0.2;
int N_output_every = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0e: Find ghostzone mappings and parities:
gz_map *bc_gz_map = (gz_map *)malloc(sizeof(gz_map)*Nxx_plus_2NGHOSTS_tot);
parity_condition *bc_parity_conditions = (parity_condition *)malloc(sizeof(parity_condition)*Nxx_plus_2NGHOSTS_tot);
set_up_bc_gz_map_and_parity_conditions(Nxx_plus_2NGHOSTS,xx,dxx,xxmin,xxmax, bc_gz_map, bc_parity_conditions);
// Step 1: Set up initial data to an exact solution
initial_data(Nxx_plus_2NGHOSTS, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
/* Step 3.a: Output psi4 spin-weight -2 decomposed data, every N_output_every */
if(n%N_output_every == 0) {
#include "../SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
char filename[100];
//int r_ext_idx = (Nxx_plus_2NGHOSTS[0]-NGHOSTS)/4;
for(int r_ext_idx = (Nxx_plus_2NGHOSTS[0]-NGHOSTS)/4; r_ext_idx<(Nxx_plus_2NGHOSTS[0]-NGHOSTS)*0.9;r_ext_idx+=5) {
REAL r_ext;
{
REAL xx0 = xx[0][r_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
#include "xxCart.h"
r_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
sprintf(filename,"outPsi4_l2m0-%d-r%.2f.txt",Nxx[0],(double)r_ext);
FILE *outPsi4_l2m0;
if(n==0) outPsi4_l2m0 = fopen(filename, "w");
else outPsi4_l2m0 = fopen(filename, "a");
REAL Psi4r_0pt_l2m0 = 0.0,Psi4r_1pt_l2m0 = 0.0,Psi4r_2pt_l2m0 = 0.0;
REAL Psi4i_0pt_l2m0 = 0.0,Psi4i_1pt_l2m0 = 0.0,Psi4i_2pt_l2m0 = 0.0;
LOOP_REGION(r_ext_idx,r_ext_idx+1,
NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS,
NGHOSTS,NGHOSTS+1) {
psi4(Nxx_plus_2NGHOSTS, i0,i1,i2, dxx,xx, y_n_gfs, diagnostic_output_gfs);
const int idx = IDX3(i0,i1,i2);
const REAL th = xx[1][i1];
const REAL ph = xx[2][i2];
// Construct integrand for Psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
// Based on http://www.demonstrations.wolfram.com/SpinWeightedSphericalHarmonics/
// we have {}_{s}_Y_{lm} = {}_{-2}_Y_{20} = 1/4 * sqrt(15 / (2*pi)) * sin(th)^2
// Confirm integrand is correct:
// Integrate[(1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2) (1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2)*2*Pi*Sin[th], {th, 0, Pi}]
// ^^^ equals 1.
REAL ReY_sm2_l2_m0,ImY_sm2_l2_m0;
SpinWeight_minus2_SphHarmonics(2,0, th,ph, &ReY_sm2_l2_m0,&ImY_sm2_l2_m0);
const REAL sinth = sin(xx[1][i1]);
/* psi4 *{}_{-2}_Y_{20}* (int dphi)* sinth*dtheta */
Psi4r_0pt_l2m0 += diagnostic_output_gfs[IDX4pt(PSI4R_0PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4r_1pt_l2m0 += diagnostic_output_gfs[IDX4pt(PSI4R_1PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4r_2pt_l2m0 += diagnostic_output_gfs[IDX4pt(PSI4R_2PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4i_0pt_l2m0 += diagnostic_output_gfs[IDX4pt(PSI4I_0PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4i_1pt_l2m0 += diagnostic_output_gfs[IDX4pt(PSI4I_1PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4i_2pt_l2m0 += diagnostic_output_gfs[IDX4pt(PSI4I_2PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
}
fprintf(outPsi4_l2m0,"%e %.15e %.15e %.15e %.15e %.15e %.15e\n", (double)((n)*dt),
(double)Psi4r_0pt_l2m0,(double)Psi4r_1pt_l2m0,(double)Psi4r_2pt_l2m0,
(double)Psi4i_0pt_l2m0,(double)Psi4i_1pt_l2m0,(double)Psi4i_2pt_l2m0);
fclose(outPsi4_l2m0);
}
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, y_n_gfs, diagnostic_output_gfs);
sprintf(filename,"out1D-%d.txt",Nxx[0]);
FILE *out2D;
if(n==0) out2D = fopen(filename, "w");
else out2D = fopen(filename, "a");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS,
Nxx_plus_2NGHOSTS[1]/2,Nxx_plus_2NGHOSTS[1]/2+1,
Nxx_plus_2NGHOSTS[2]/2,Nxx_plus_2NGHOSTS[2]/2+1) {
const int idx = IDX3(i0,i1,i2);
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xxCart.h"
fprintf(out2D,"%e %e %e\n",
(double)sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]),
(double)y_n_gfs[IDX4pt(CFGF,idx)],(double)log10(fabs(diagnostic_output_gfs[IDX4pt(HGF,idx)])));
}
fprintf(out2D,"\n\n");
fclose(out2D);
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "../MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS[1]/2;
const int i2mid=Nxx_plus_2NGHOSTS[2]/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xxCart.h"
int idx = IDX3(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4pt(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4pt(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 10 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 10 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
free(bc_parity_conditions);
free(bc_gz_map);
#include "../MoLtimestepping/RK_Free_Memory.h"
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
# Nr = 270
# Ntheta = 8
Nr = 800
Ntheta = 16
CFL_FACTOR = 1.0
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile("BSSN/BrillLindquist_Playground.c", "BrillLindquist_Playground")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
start = time.time()
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2 "+str(CFL_FACTOR))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm`...
Finished executing in 24.6863269806 seconds.
Finished compilation.
Finished in 24.7045240402 seconds.
Now running. Should take ~30 minutes...
Executing `taskset -c 0,1,2,3 ./BrillLindquist_Playground 800 16 2 1.0`...
It: 4070 t=10.10 dt=2.48e-03 | 3.7%; ETA 31285 s | t/h 30.48 | gp/s 3.50e+05
###Markdown
Step 8: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to$${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{−0.0890 t/M} \cos(0.3737 t/M+ \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory.Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that1. Finite-differencing order is set to 101. Nr = 8001. Ntheta = 161. Outer boundary (`AMPL`) set to 3001. Final time (`t_final`) set to 2751. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
###Code
%matplotlib inline
import numpy as np
# from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
# from IPython.display import HTML
# import matplotlib.image as mgimg
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 7e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r1,psi4r2,psi4r3,psi4i1,psi4i2,psi4i3 = np.loadtxt("outPsi4_l2m0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
for i in range(len(psi4r1)):
retarded_time = t[i]-float(extraction_radius)
t_retarded.append(retarded_time)
log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r1[i] + psi4r2[i] + psi4r3[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel(r'$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,240])
ax.set_ylim([-13,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 9: Data Visualization Animations \[Back to [top](toc)\]$$\label{visual}$$ Step 9.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \[Back to [top](toc)\]$$\label{installdownload}$$ Note that if you are not running this within `mybinder`, but on a Windows system, `ffmpeg` must be installed using a separate package (on [this site](http://ffmpeg.org/)), or (if running Jupyter within Anaconda, use the command: `conda install -c conda-forge ffmpeg`).
###Code
# print("Ignore any warnings or errors from the following command:")
# !pip install scipy > /dev/null
# check_for_ffmpeg = !which ffmpeg >/dev/null && echo $?
# if check_for_ffmpeg != ['0']:
# print("Couldn't find ffmpeg, so I'll download it.")
# # Courtesy https://johnvansickle.com/ffmpeg/
# !wget http://astro.phys.wvu.edu/zetienne/ffmpeg-static-amd64-johnvansickle.tar.xz
# !tar Jxf ffmpeg-static-amd64-johnvansickle.tar.xz
# print("Copying ffmpeg to ~/.local/bin/. Assumes ~/.local/bin is in the PATH.")
# !mkdir ~/.local/bin/
# !cp ffmpeg-static-amd64-johnvansickle/ffmpeg ~/.local/bin/
# print("If this doesn't work, then install ffmpeg yourself. It should work fine on mybinder.")
###Output
_____no_output_____
###Markdown
Step 9.b: Generate images for visualization animation \[Back to [top](toc)\]$$\label{genimages}$$ Here we loop through the data files output by the executable compiled and run in [the previous step](mainc), generating a [png](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image for each data file.**Special thanks to Terrence Pierre Jacques. His work with the first versions of these scripts greatly contributed to the scripts as they exist below.**
###Code
# ## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##
# import numpy as np
# from scipy.interpolate import griddata
# import matplotlib.pyplot as plt
# from matplotlib.pyplot import savefig
# from IPython.display import HTML
# import matplotlib.image as mgimg
# import glob
# import sys
# from matplotlib import animation
# globby = glob.glob('out96-00*.txt')
# file_list = []
# for x in sorted(globby):
# file_list.append(x)
# bound=1.4
# pl_xmin = -bound
# pl_xmax = +bound
# pl_ymin = -bound
# pl_ymax = +bound
# for filename in file_list:
# fig = plt.figure()
# x,y,cf,Ham = np.loadtxt(filename).T #Transposed for easier unpacking
# plotquantity = cf
# plotdescription = "Numerical Soln."
# plt.title("Black Hole Head-on Collision (conf factor)")
# plt.xlabel("y/M")
# plt.ylabel("z/M")
# grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:300j, pl_ymin:pl_ymax:300j]
# points = np.zeros((len(x), 2))
# for i in range(len(x)):
# # Zach says: No idea why x and y get flipped...
# points[i][0] = y[i]
# points[i][1] = x[i]
# grid = griddata(points, plotquantity, (grid_x, grid_y), method='nearest')
# gridcub = griddata(points, plotquantity, (grid_x, grid_y), method='cubic')
# im = plt.imshow(gridcub, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# ax = plt.colorbar()
# ax.set_label(plotdescription)
# savefig(filename+".png",dpi=150)
# plt.close(fig)
# sys.stdout.write("%c[2K" % 27)
# sys.stdout.write("Processing file "+filename+"\r")
# sys.stdout.flush()
###Output
_____no_output_____
###Markdown
Step 9.c: Generate visualization animation \[Back to [top](toc)\]$$\label{genvideo}$$ In the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.
###Code
# ## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# # https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# # https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
# fig = plt.figure(frameon=False)
# ax = fig.add_axes([0, 0, 1, 1])
# ax.axis('off')
# myimages = []
# for i in range(len(file_list)):
# img = mgimg.imread(file_list[i]+".png")
# imgplot = plt.imshow(img)
# myimages.append([imgplot])
# ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
# plt.close()
# ani.save('BH_Head-on_Collision.mp4', fps=5,dpi=150)
## VISUALIZATION ANIMATION, PART 3: Display movie as embedded HTML5 (see next cell) ##
# https://stackoverflow.com/questions/18019477/how-can-i-play-a-local-video-in-my-ipython-notebook
# %%HTML
# <video width="480" height="360" controls>
# <source src="BH_Head-on_Collision.mp4" type="video/mp4">
# </video>
###Output
_____no_output_____
###Markdown
Step 10: Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling) \[Back to [top](toc)\]$$\label{convergence}$$
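The plot below relies on a simple rescaling trick: if the constraint violation is dominated by truncation error of a 4th-order scheme, the error at resolution $N_r=72$, multiplied by $(72/96)^4$, should lie on top of the error at $N_r=96$. A minimal sketch of that arithmetic (with a toy error model, not the simulation data):

```python
# Minimal sketch of the 4th-order rescaling used in the convergence plot below.
import numpy as np

C   = 2.3e-2                          # arbitrary error constant for the toy model
err = lambda N: C * (1.0 / N)**4      # truncation error ~ dx^4 for a 4th-order scheme

shifted_72 = np.log10(err(72)) + np.log10((72.0 / 96.0)**4)
print(np.isclose(shifted_72, np.log10(err(96))))   # True: the shifted curve overlays Nr=96
```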
###Code
# x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
# pl_xmin = -2.5
# pl_xmax = +2.5
# pl_ymin = -2.5
# pl_ymax = +2.5
# grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
# points96 = np.zeros((len(x96), 2))
# for i in range(len(x96)):
# points96[i][0] = x96[i]
# points96[i][1] = y96[i]
# grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
# grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
# grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
# grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# # fig, ax = plt.subplots()
# plt.clf()
# plt.title("96x16 Num. Err.: log_{10}|Ham|")
# plt.xlabel("x/M")
# plt.ylabel("z/M")
# fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# cb = plt.colorbar(fig96cub)
# x72,y72,valuesCF72,valuesHam72 = np.loadtxt('out72.txt').T #Transposed for easier unpacking
# points72 = np.zeros((len(x72), 2))
# for i in range(len(x72)):
# points72[i][0] = x72[i]
# points72[i][1] = y72[i]
# grid72 = griddata(points72, valuesHam72, (grid_x, grid_y), method='nearest')
# griddiff_72_minus_96 = np.zeros((100,100))
# griddiff_72_minus_96_1darray = np.zeros(100*100)
# gridx_1darray_yeq0 = np.zeros(100)
# grid72_1darray_yeq0 = np.zeros(100)
# grid96_1darray_yeq0 = np.zeros(100)
# count = 0
# for i in range(100):
# for j in range(100):
# griddiff_72_minus_96[i][j] = grid72[i][j] - grid96[i][j]
# griddiff_72_minus_96_1darray[count] = griddiff_72_minus_96[i][j]
# if j==49:
# gridx_1darray_yeq0[i] = grid_x[i][j]
# grid72_1darray_yeq0[i] = grid72[i][j] + np.log10((72./96.)**4)
# grid96_1darray_yeq0[i] = grid96[i][j]
# count = count + 1
# plt.clf()
# fig, ax = plt.subplots()
# plt.title("4th-order Convergence, at t/M=7.5 (post-merger; horiz at x/M=+/-1)")
# plt.xlabel("x/M")
# plt.ylabel("log10(Relative error)")
# ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
# ax.plot(gridx_1darray_yeq0, grid72_1darray_yeq0, 'k--', label='Nr=72, mult by (72/96)^4')
# ax.set_ylim([-8.5,0.5])
# legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
# plt.show()
###Output
_____no_output_____
###Markdown
Step 11: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Head-On Black Hole Collision Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical-like coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial notebooks ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Minimal sampling in the $\phi$ direction greatly speeds up the simulation.**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). Finally, excellent agreement is seen in the gravitational wave signal from the ringing remnant black hole for multiple decades in amplitude when compared to black hole perturbation theory predictions. NRPy+ Source Code for this module: * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: * [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.* [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates* [BSSN/Psi4.py](../edit/BSSN/Psi4.py); [\[**tutorial**\]](Tutorial-Psi4.ipynb): Generates expressions for $\psi_4$, the outgoing Weyl scalar + [BSSN/Psi4_tetrads.py](../edit/BSSN/Psi4_tetrads.py); [\[**tutorial**\]](Tutorial-Psi4_tetrads.ipynb): Generates quasi-Kinnersley tetrad needed for $\psi_4$-based gravitational wave extraction Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). 
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration * [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on Brill-Lindquist initial data**](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [**NRPy+ tutorial on validating Brill-Lindquist initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb).1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: 1. At the start of each iteration in time, output the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). 1. At each RK time substep, do the following: 1. Evaluate BSSN RHS expressions * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](bssn): Output C code for BSSN spacetime solve 1. [Step 3.a](bssnrhs): Output C code for BSSN RHS expressions 1. [Step 3.b](hamconstraint): Output C code for Hamiltonian constraint 1. [Step 3.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint 1. [Step 3.d](spinweight): Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 1. [Step 3.e](psi4): $\psi_4$ 1. [Step 3.f](cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`1. [Step 4](bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system1. [Step 5](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 6](visualize): Data Visualization Animations 1. [Step 6.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 6.b](genimages): Generate images for visualization animation 1. [Step 6.c](genvideo): Generate visualization animation1. 
[Step 7](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P0: Set the option for evolving the initial data forward in time
# Options include "low resolution" and "high resolution"
EvolOption = "low resolution"
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Two_BHs_Collide_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
if EvolOption == "low resolution":
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 150 # Length scale of computational domain
final_time = 200 # Final time
FD_order = 8 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
elif EvolOption == "high resolution":
# See above for description of the domain_size parameter
domain_size = 300 # Length scale of computational domain
final_time = 275 # Final time
FD_order = 10 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
else:
print("Error: unknown EvolOption = \""+EvolOption+"\". Choose \"low resolution\" or \"high resolution\".")
sys.exit(1)
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.2 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the Method of Lines timestepping algorithm (RK method),
#           the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, &params, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, &params, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(&rfmstruct, &params, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
###Output
_____no_output_____
###Markdown
Step 1.c: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numerical solution of the BSSN equations to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:$$\Delta t \le \frac{\min(ds_i)}{c},$$where $c$ is the wavespeed, and$$ds_i = h_i \Delta x^i$$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
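A minimal sketch of the bound above (with made-up grid spacings, and scale factors $h_i$ set to one, i.e. a uniform Cartesian grid rather than the curvilinear grids used here):

```python
# Minimal sketch of the CFL-limited timestep: dt <= CFL_FACTOR * min_i(h_i dx^i) / c.
import numpy as np

wavespeed  = 1.0                       # c, as set in the generated C code
CFL_FACTOR = 0.5
dxx = np.array([0.05, 0.05, 0.05])     # uniform grid spacings Delta x^i (toy values)
h   = np.array([1.0,  1.0,  1.0 ])     # reference-metric scale factors h_i (Cartesian)

dt = CFL_FACTOR * np.min(h * dxx) / wavespeed
print(dt)                              # 0.025
```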
###Code
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
###Output
_____no_output_____
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
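For orientation, here is a minimal sketch of the Brill-Lindquist conformal factor itself, assuming the standard form $\psi = 1 + \sum_i \frac{m_i}{2 |\mathbf{x}-\mathbf{x}_i|}$, evaluated for the two punctures used later in `free_parameters.h` ($z=\pm 0.25$, $m=0.5$ each). This is an illustration only, not the NRPy+-generated initial-data code:

```python
# Minimal sketch: Brill-Lindquist conformal factor for two punctures on the z-axis.
import numpy as np

posns  = np.array([[0.0, 0.0, +0.25],
                   [0.0, 0.0, -0.25]])      # puncture positions, as in free_parameters.h
masses = np.array([0.5, 0.5])               # bare masses, as in free_parameters.h

def psi_BL(x):
    r = np.linalg.norm(x - posns, axis=1)   # distance to each puncture
    return 1.0 + np.sum(masses / (2.0 * r))

print(psi_BL(np.array([1.0, 0.0, 0.0])))    # ~1.49 at x=(1,0,0)
```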
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
print("Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.")
start = time.time()
bl.BrillLindquist() # Registers ID C function in dictionary, used below to output to file.
with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file:
file.write(outC_function_dict["initial_data"])
end = time.time()
print("Finished BL initial data codegen in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 3: Output C code for BSSN spacetime solve \[Back to [top](toc)\]$$\label{bssn}$$ Step 3.a: Output C code for BSSN RHS expressions \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,SIMD_enable=True",
upwindcontrolvec=betaU).replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,SIMD_enable=True").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished Ricci C codegen in " + str(end - start) + " seconds.")
###Output
Generating symbolic expressions for BSSN RHSs...
Finished BSSN symbolic expressions in 3.393543243408203 seconds.
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However, it does not, due to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation and, ultimately, determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.
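A minimal sketch (toy numbers, not simulation output) of what "dominated by truncation error" implies for the measured violation: increasing the radial resolution from 72 to 96 points at finite-difference order 8 should reduce $|H|$ by roughly $(72/96)^8$:

```python
# Minimal sketch: expected drop in Hamiltonian-constraint violation if truncation error dominates.
import numpy as np

FD_order = 8
H_lowres = 3.0e-6           # toy |H| at Nr = 72
N_low, N_high = 72, 96

H_highres_expected = H_lowres * (N_low / N_high)**FD_order
print(H_highres_expected)   # ~3.0e-7, i.e. about one order of magnitude smaller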
###Code
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
end = time.time()
print("Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
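Below is a minimal sketch of the determinant rescaling being enforced, assumed here to be the simple $\bar{\gamma}_{ij} \to (\det\hat{\gamma}/\det\bar{\gamma})^{1/3}\,\bar{\gamma}_{ij}$ rescaling of Eq. 53, applied to a single made-up symmetric matrix standing in for $\bar{\gamma}_{ij}$ (with $\det{\hat{\gamma}_{ij}}$ taken to be 1 for illustration):

```python
# Minimal sketch: rescale gammabar_{ij} so that det(gammabar) = det(gammahat).
import numpy as np

gammabar = np.array([[1.2, 0.1, 0.0],
                     [0.1, 0.9, 0.2],
                     [0.0, 0.2, 1.1]])       # toy conformal 3-metric
detgammahat = 1.0                            # reference-metric determinant (illustration)

rescaled = (detgammahat / np.linalg.det(gammabar))**(1.0/3.0) * gammabar
print(np.isclose(np.linalg.det(rescaled), detgammahat))   # True
```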
###Code
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.d: Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 \[Back to [top](toc)\]$$\label{spinweight}$$ Here we output all spin-weight $-2$ spherical harmonics [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb) for $\ell=0$ up to and including $\ell=2$.
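Only the $(s,\ell,m)=(-2,2,0)$ harmonic is actually used by the diagnostic in the main C code below, where it is quoted as ${}_{-2}Y_{2,0} = \tfrac{1}{4}\sqrt{15/(2\pi)}\,\sin^2\theta$. A minimal sketch verifying the unit normalization stated there:

```python
# Minimal sketch: the s=-2, l=2, m=0 harmonic and a check that int |Y|^2 dOmega = 1.
import numpy as np
from scipy.integrate import quad

Y_sm2_l2_m0 = lambda th: 0.25 * np.sqrt(15.0 / (2.0 * np.pi)) * np.sin(th)**2

norm, _ = quad(lambda th: Y_sm2_l2_m0(th)**2 * 2.0*np.pi*np.sin(th), 0.0, np.pi)
print(np.isclose(norm, 1.0))   # True
```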
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
cmd.mkdir(os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics"))
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=2,
filename=os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"))
###Output
_____no_output_____
###Markdown
Step 3.e: Output $\psi_4$ \[Back to [top](toc)\]$$\label{psi4}$$We output $\psi_4$, adopting the quasi-Kinnersley tetrad of [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf).
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
BP4.Psi4()
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
desc="""Since it's so expensive to compute, instead of evaluating
psi_4 at all interior points, this functions evaluates it on a
point-by-point basis."""
name="psi4"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """const paramstruct *restrict params,
const int i0,const int i1,const int i2,
REAL *restrict xx[3], const REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = """
const int idx = IDX3S(i0,i1,i2);
const REAL xx0 = xx[0][i0];const REAL xx1 = xx[1][i1];const REAL xx2 = xx[2][i2];
// Real part of psi_4, divided into 3 terms
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// Imaginary part of psi_4, divided into 3 terms
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}""")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4re_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False")
with open(os.path.join(Ccodesdir,"Psi4re_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4re_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4im_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False")
with open(os.path.join(Ccodesdir,"Psi4im_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4im_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
# Step 0: Import the multiprocessing module.
import multiprocessing
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [Psi4re,Psi4re,Psi4re,Psi4im,Psi4im,Psi4im, BrillLindquistID,BSSN_RHSs,Ricci,Hamiltonian,gammadet]
# Step 1.a: Define master function for calling all above functions.
# Note that lambdifying this doesn't work in Python 3
def master_func(idx):
if idx < 3: # Call Psi4re(arg)
funcs[idx](idx)
elif idx < 6: # Call Psi4im(arg-3)
funcs[idx](idx-3)
else: # All non-Psi4 functions:
funcs[idx]()
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.b: Import the multiprocessing module.
import multiprocessing
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
# This will happen on Android and Windows systems
for idx in range(len(funcs)):
master_func(idx)
###Output
Generating C code for psi4_re_pt1 in SinhSpherical coordinates.
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.
Generating C code for psi4_im_pt0 in SinhSpherical coordinates.
Generating C code for psi4_im_pt1 in SinhSpherical coordinates.
Generating C code for psi4_im_pt2 in SinhSpherical coordinates.
Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.
Generating C code for psi4_re_pt2 in SinhSpherical coordinates.
Generating C code for BSSN RHSs in SinhSpherical coordinates.
Finished generating psi4_im_pt1 in 2.5125863552093506 seconds.
Generating C code for Ricci tensor in SinhSpherical coordinates.
Finished generating psi4_re_pt1 in 4.591909885406494 seconds.
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Finished generating psi4_im_pt0 in 12.626752614974976 seconds.
Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Output C function enforce_detgammabar_constraint() to file BSSN_Two_BHs_Collide_Ccodes/enforce_detgammabar_constraint.h
Finished gamma constraint C codegen in 0.14875078201293945 seconds.
Finished generating psi4_re_pt0 in 20.1132595539093 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Two_BHs_Collide_Ccodes/Hamiltonian_constraint.h
Finished Hamiltonian C codegen in 20.150496006011963 seconds.
Finished generating psi4_im_pt2 in 28.52921152114868 seconds.
Finished BL initial data codegen in 28.986626625061035 seconds.
Finished generating psi4_re_pt2 in 36.06533360481262 seconds.
Output C function rhs_eval() to file BSSN_Two_BHs_Collide_Ccodes/rhs_eval.h
Finished BSSN_RHS C codegen in 61.53437352180481 seconds.
Output C function Ricci_eval() to file BSSN_Two_BHs_Collide_Ccodes/Ricci_eval.h
Finished Ricci C codegen in 80.57325959205627 seconds.
###Markdown
Step 3.f: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.f.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.f.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
// Set free-parameter values for BSSN evolution:
params.eta = 2.0;
// Set free parameters for the (Brill-Lindquist) initial data
params.BH1_posn_x = 0.0; params.BH1_posn_y = 0.0; params.BH1_posn_z =+0.25;
params.BH2_posn_x = 0.0; params.BH2_posn_y = 0.0; params.BH2_posn_z =-0.25;
params.BH1_mass = 0.5; params.BH2_mass = 0.5;
"""+"""
const REAL final_time = """+str(final_time)+";\n")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.f.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.f.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.f.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 4: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
Auxiliary gridfunction "psi4i_0pt" has parity type 0.
Auxiliary gridfunction "psi4i_1pt" has parity type 0.
Auxiliary gridfunction "psi4i_2pt" has parity type 0.
Auxiliary gridfunction "psi4r_0pt" has parity type 0.
Auxiliary gridfunction "psi4r_1pt" has parity type 0.
Auxiliary gridfunction "psi4r_2pt" has parity type 0.
AuxEvol gridfunction "RbarDD00" has parity type 4.
AuxEvol gridfunction "RbarDD01" has parity type 5.
AuxEvol gridfunction "RbarDD02" has parity type 6.
AuxEvol gridfunction "RbarDD11" has parity type 7.
AuxEvol gridfunction "RbarDD12" has parity type 8.
AuxEvol gridfunction "RbarDD22" has parity type 9.
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 5: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor (can be overwritten at the command line)
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.""")
%%writefile $Ccodesdir/BrillLindquist_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
#include "initial_data.h"
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// Step P14: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
#include "psi4.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = final_time;
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(&params, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
REAL out_approx_every_t = 0.2;
int output_every_N = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output diagnostics
if(n%output_every_N == 0) {
// Step 3.a.i: Output psi4 spin-weight -2 decomposed data, every N_output_every */
#include "SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
char filename[100];
for(int r_ext_idx = (Nxx_plus_2NGHOSTS0-NGHOSTS)/4;
r_ext_idx<(Nxx_plus_2NGHOSTS0-NGHOSTS)*0.9;
r_ext_idx+=5) {
REAL r_ext;
{
REAL xx0 = xx[0][r_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
xxCart(&params,xx,r_ext_idx,1,1,xCart);
r_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
sprintf(filename,"outPsi4_l2m0-%d-r%.2f.txt",Nxx[0],(double)r_ext);
FILE *outPsi4_l2m0;
if(n==0) outPsi4_l2m0 = fopen(filename, "w");
else outPsi4_l2m0 = fopen(filename, "a");
REAL Psi4r_0pt_l2m0 = 0.0,Psi4r_1pt_l2m0 = 0.0,Psi4r_2pt_l2m0 = 0.0;
REAL Psi4i_0pt_l2m0 = 0.0,Psi4i_1pt_l2m0 = 0.0,Psi4i_2pt_l2m0 = 0.0;
LOOP_REGION(r_ext_idx,r_ext_idx+1,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,NGHOSTS+1) {
psi4(&params, i0,i1,i2, xx, y_n_gfs, diagnostic_output_gfs);
const int idx = IDX3S(i0,i1,i2);
const REAL th = xx[1][i1];
const REAL ph = xx[2][i2];
// Construct integrand for Psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
// Based on http://www.demonstrations.wolfram.com/SpinWeightedSphericalHarmonics/
// we have {}_{s}_Y_{lm} = {}_{-2}_Y_{20} = 1/4 * sqrt(15 / (2*pi)) * sin(th)^2
// Confirm integrand is correct:
// Integrate[(1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2) (1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2)*2*Pi*Sin[th], {th, 0, Pi}]
// ^^^ equals 1.
REAL ReY_sm2_l2_m0,ImY_sm2_l2_m0;
SpinWeight_minus2_SphHarmonics(2,0, th,ph, &ReY_sm2_l2_m0,&ImY_sm2_l2_m0);
const REAL sinth = sin(xx[1][i1]);
/* psi4 *{}_{-2}_Y_{20}* (int dphi)* sinth*dtheta */
Psi4r_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_0PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_1PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_2PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_0PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_1PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_2PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
}
fprintf(outPsi4_l2m0,"%e %.15e %.15e %.15e %.15e %.15e %.15e\n", (double)((n)*dt),
(double)Psi4r_0pt_l2m0,(double)Psi4r_1pt_l2m0,(double)Psi4r_2pt_l2m0,
(double)Psi4i_0pt_l2m0,(double)Psi4i_1pt_l2m0,(double)Psi4i_2pt_l2m0);
fclose(outPsi4_l2m0);
}
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 10 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 10 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
if EvolOption == "low resolution":
Nr = 270
Ntheta = 8
elif EvolOption == "high resolution":
Nr = 800
Ntheta = 16
else:
print("Error: unknown EvolOption!")
sys.exit(1)
CFL_FACTOR = 1.0
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"BrillLindquist_Playground.c"), "BrillLindquist_Playground",
compile_mode="optimized")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
start = time.time()
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2 "+str(CFL_FACTOR))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm`...
Finished executing in 15.035797357559204 seconds.
Finished compilation.
Finished in 15.04578948020935 seconds.
Now running. Should take ~30 minutes...
Executing `taskset -c 0,1,2,3 ./BrillLindquist_Playground 270 8 2 1.0`...
[2KIt: 27200 t=199.93 dt=7.35e-03 | 100.0%; ETA 0 s | t/h 999.82 | gp/s 6.53e+05
Finished executing in 720.2193357944489 seconds.
Finished in 720.234804391861 seconds.
###Markdown
Step 6: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to$${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{-0.0890 t/M} \cos(0.3737 t/M + \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory. Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that:
1. Finite-differencing order is set to 10
1. Nr = 800
1. Ntheta = 16
1. Outer boundary (`AMPL`) set to 300
1. Final time (`t_final`) set to 275
1. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
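A minimal sketch of the ringdown model by itself, using the amplitude and phase values adopted for the $N_r=270$ run in the cell below ($A=1.8\times10^{-2}$, $\phi=2.8$):

```python
# Minimal sketch: evaluate the l=2, m=0 ringdown model A*exp(-0.0890 t)*cos(0.3737 t + phi).
import numpy as np

A, phi = 1.8e-2, 2.8
t = np.linspace(0.0, 150.0, 7)                       # retarded time (t - R_ext)/M
psi4_model = A * np.exp(-0.0890 * t) * np.cos(0.3737 * t + phi)
print(np.log10(np.abs(psi4_model)))                  # what the dashed comparison curve plots
```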
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 1.8e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r1,psi4r2,psi4r3,psi4i1,psi4i2,psi4i3 = np.loadtxt("outPsi4_l2m0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
for i in range(len(psi4r1)):
    retarded_time = t[i]-float(extraction_radius)
t_retarded.append(retarded_time)
    log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r1[i] + psi4r2[i] + psi4r3[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel(r'$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,final_time - float(extraction_radius)+10])
ax.set_ylim([-13.5,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb to latex
[NbConvertApp] Support files will be in Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4_files/
[NbConvertApp] Making directory Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4_files
[NbConvertApp] Writing 172359 bytes to Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Head-On Black Hole Collision Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical-like coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial notebooks ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Minimal sampling in the $\phi$ direction greatly speeds up the simulation.**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). Finally, excellent agreement is seen in the gravitational wave signal from the ringing remnant black hole for multiple decades in amplitude when compared to black hole perturbation theory predictions. NRPy+ Source Code for this module: * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: * [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.* [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates* [BSSN/Psi4.py](../edit/BSSN/Psi4.py); [\[**tutorial**\]](Tutorial-Psi4.ipynb): Generates expressions for $\psi_4$, the outgoing Weyl scalar + [BSSN/Psi4_tetrads.py](../edit/BSSN/Psi4_tetrads.py); [\[**tutorial**\]](Tutorial-Psi4_tetrads.ipynb): Generates quasi-Kinnersley tetrad needed for $\psi_4$-based gravitational wave extraction Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). 
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration * [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on Brill-Lindquist initial data**](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [**NRPy+ tutorial on validating Brill-Lindquist initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb).1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: 1. At the start of each iteration in time, output the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). 1. At each RK time substep, do the following: 1. Evaluate BSSN RHS expressions * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](bssn): Output C code for BSSN spacetime solve 1. [Step 3.a](bssnrhs): Output C code for BSSN RHS expressions 1. [Step 3.b](hamconstraint): Output C code for Hamiltonian constraint 1. [Step 3.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint 1. [Step 3.d](spinweight): Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 1. [Step 3.e](psi4): $\psi_4$ 1. [Step 3.f](cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`1. [Step 4](bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system1. [Step 5](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 6](visualize): Data Visualization Animations 1. [Step 6.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 6.b](genimages): Generate images for visualization animation 1. [Step 6.c](genvideo): Generate visualization animation1. 
[Step 7](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P0: Set the option for evolving the initial data forward in time
# Options include "low resolution" and "high resolution"
EvolOption = "low resolution"
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Two_BHs_Collide_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
if EvolOption == "low resolution":
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 150 # Length scale of computational domain
final_time = 200 # Final time
FD_order = 8 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
elif EvolOption == "high resolution":
# See above for description of the domain_size parameter
domain_size = 300 # Length scale of computational domain
final_time = 275 # Final time
FD_order = 10 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.2 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the timestepping order,
# the core data type, and the CFL factor.
# Step 2.b: Set the order of spatial and temporal derivatives;
# the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, ¶ms, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, ¶ms, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
###Output
_____no_output_____
###Markdown
Step 1.c: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numerical solution to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:$$\Delta t \le \frac{\min(ds_i)}{c},$$where $c$ is the wavespeed, and$$ds_i = h_i \Delta x^i$$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
###Code
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
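# A minimal sketch of the CFL logic that the generated find_timestep() function
# implements, written here for an ordinary Spherical grid whose scale factors are
# h_r = 1, h_theta = r, h_phi = r*sin(theta). (The generated C code instead uses the
# scale factors of the chosen SinhSpherical coordinates, computed from the reference
# metric.) All argument names below are illustrative.
import numpy as np
def cfl_timestep_sketch(r, theta, dr, dtheta, dphi, cfl_factor, wavespeed=1.0):
    rr, th = np.meshgrid(r, theta, indexing='ij')
    ds_min = min(dr,                                   # proper distance in r (h_r = 1)
                 np.min(rr)*dtheta,                    # proper distance in theta: r*dtheta
                 np.min(rr*np.abs(np.sin(th)))*dphi)   # proper distance in phi: r*sin(theta)*dphi
    return cfl_factor*ds_min/wavespeed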
###Output
_____no_output_____
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
print("Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.")
start = time.time()
bl.BrillLindquist() # Registers ID C function in dictionary, used below to output to file.
with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file:
file.write(outC_function_dict["initial_data"])
end = time.time()
print("Finished BL initial data codegen in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 3: Output C code for BSSN spacetime solve \[Back to [top](toc)\]$$\label{bssn}$$ Step 3.a: Output C code for BSSN RHS expressions \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,SIMD_enable=True",
upwindcontrolvec=betaU).replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,SIMD_enable=True").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished Ricci C codegen in " + str(end - start) + " seconds.")
###Output
Generating symbolic expressions for BSSN RHSs...
Finished BSSN symbolic expressions in 3.364407777786255 seconds.
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero; however, it does not, owing to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation, and ultimately to determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.
###Code
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
end = time.time()
print("Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.d: Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 \[Back to [top](toc)\]$$\label{spinweight}$$ Here we output all spin-weight $-2$ spherical harmonics [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb) for $\ell=0$ up to and including $\ell=2$.
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
cmd.mkdir(os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics"))
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=2,
filename=os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"))
###Output
_____no_output_____
###Markdown
Step 3.e: Output $\psi_4$ \[Back to [top](toc)\]$$\label{psi4}$$ We output $\psi_4$, assuming the quasi-Kinnersley tetrad of [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf).
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
BP4.Psi4()
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
desc="""Since it's so expensive to compute, instead of evaluating
psi_4 at all interior points, this functions evaluates it on a
point-by-point basis."""
name="psi4"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """const paramstruct *restrict params,
const int i0,const int i1,const int i2,
REAL *restrict xx[3], const REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = """
const int idx = IDX3S(i0,i1,i2);
const REAL xx0 = xx[0][i0];const REAL xx1 = xx[1][i1];const REAL xx2 = xx[2][i2];
// Real part of psi_4, divided into 3 terms
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// Imaginary part of psi_4, divided into 3 terms
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}""")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4re_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False")
with open(os.path.join(Ccodesdir,"Psi4re_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4re_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4im_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False")
with open(os.path.join(Ccodesdir,"Psi4im_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4im_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
# Step 0: Import the multiprocessing module.
import multiprocessing
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [Psi4re,Psi4re,Psi4re,Psi4im,Psi4im,Psi4im, BrillLindquistID,BSSN_RHSs,Ricci,Hamiltonian,gammadet]
# Step 1.a: Define master function for calling all above functions.
# Note: wrapping these calls in a lambda does not work in Python 3, since multiprocessing cannot pickle lambdas.
def master_func(idx):
if idx < 3: # Call Psi4re(arg)
funcs[idx](idx)
elif idx < 6: # Call Psi4im(arg-3)
funcs[idx](idx-3)
else: # All non-Psi4 functions:
funcs[idx]()
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.b: Import the multiprocessing module.
import multiprocessing
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
# This will happen on Android and Windows systems
for idx in range(len(funcs)):
master_func(idx)
###Output
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.
Generating C code for psi4_re_pt1 in SinhSpherical coordinates.
Generating C code for psi4_im_pt2 in SinhSpherical coordinates.
Generating C code for psi4_re_pt2 in SinhSpherical coordinates.
Generating C code for psi4_im_pt1 in SinhSpherical coordinates.
Generating C code for psi4_im_pt0 in SinhSpherical coordinates.
Generating C code for BSSN RHSs in SinhSpherical coordinates.
Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.
Finished generating psi4_im_pt1 in 2.4940905570983887 seconds.
Generating C code for Ricci tensor in SinhSpherical coordinates.
Finished generating psi4_re_pt1 in 4.4882872104644775 seconds.
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Finished generating psi4_im_pt0 in 12.445091724395752 seconds.
Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Output C function enforce_detgammabar_constraint() to file BSSN_Two_BHs_Collide_Ccodes/enforce_detgammabar_constraint.h
Finished gamma constraint C codegen in 0.15833306312561035 seconds.
Finished generating psi4_re_pt0 in 22.015540599822998 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Two_BHs_Collide_Ccodes/Hamiltonian_constraint.h
Finished Hamiltonian C codegen in 19.337371110916138 seconds.
Finished BL initial data codegen in 24.676788568496704 seconds.
Finished generating psi4_im_pt2 in 26.10047674179077 seconds.
Finished generating psi4_re_pt2 in 37.120468616485596 seconds.
Output C function rhs_eval() to file BSSN_Two_BHs_Collide_Ccodes/rhs_eval.h
Finished BSSN_RHS C codegen in 59.93295478820801 seconds.
Output C function Ricci_eval() to file BSSN_Two_BHs_Collide_Ccodes/Ricci_eval.h
Finished Ricci C codegen in 80.50617933273315 seconds.
###Markdown
Step 3.f: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.f.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.f.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
// Set free-parameter values for BSSN evolution:
params.eta = 2.0;
// Set free parameters for the (Brill-Lindquist) initial data
params.BH1_posn_x = 0.0; params.BH1_posn_y = 0.0; params.BH1_posn_z =+0.25;
params.BH2_posn_x = 0.0; params.BH2_posn_y = 0.0; params.BH2_posn_z =-0.25;
params.BH1_mass = 0.5; params.BH2_mass = 0.5;
"""+"""
const REAL final_time = """+str(final_time)+";\n")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.f.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.f.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.f.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
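# A small consistency note (illustrative): for Brill-Lindquist data the total ADM mass is the
# sum of the bare masses, so the parameters written above (BH1_mass = BH2_mass = 0.5) give
M_ADM_sketch = 0.5 + 0.5  # = 1.0, the mass scale M used in the ringdown comparison below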
###Output
_____no_output_____
###Markdown
Step 4: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
Auxiliary gridfunction "psi4i_0pt" has parity type 0.
Auxiliary gridfunction "psi4i_1pt" has parity type 0.
Auxiliary gridfunction "psi4i_2pt" has parity type 0.
Auxiliary gridfunction "psi4r_0pt" has parity type 0.
Auxiliary gridfunction "psi4r_1pt" has parity type 0.
Auxiliary gridfunction "psi4r_2pt" has parity type 0.
AuxEvol gridfunction "RbarDD00" has parity type 4.
AuxEvol gridfunction "RbarDD01" has parity type 5.
AuxEvol gridfunction "RbarDD02" has parity type 6.
AuxEvol gridfunction "RbarDD11" has parity type 7.
AuxEvol gridfunction "RbarDD12" has parity type 8.
AuxEvol gridfunction "RbarDD22" has parity type 9.
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 5: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor. Can be overwritten at the command line.
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.""")
%%writefile $Ccodesdir/BrillLindquist_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
#include "initial_data.h"
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// Step P14: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
#include "psi4.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = final_time;
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(¶ms, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
REAL out_approx_every_t = 0.2;
int output_every_N = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output diagnostics
if(n%output_every_N == 0) {
// Step 3.a.i: Output psi4 spin-weight -2 decomposed data, every N_output_every */
#include "SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
char filename[100];
for(int r_ext_idx = (Nxx_plus_2NGHOSTS0-NGHOSTS)/4;
r_ext_idx<(Nxx_plus_2NGHOSTS0-NGHOSTS)*0.9;
r_ext_idx+=5) {
REAL r_ext;
{
REAL xx0 = xx[0][r_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
xxCart(¶ms,xx,r_ext_idx,1,1,xCart);
r_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
sprintf(filename,"outPsi4_l2m0-%d-r%.2f.txt",Nxx[0],(double)r_ext);
FILE *outPsi4_l2m0;
if(n==0) outPsi4_l2m0 = fopen(filename, "w");
else outPsi4_l2m0 = fopen(filename, "a");
REAL Psi4r_0pt_l2m0 = 0.0,Psi4r_1pt_l2m0 = 0.0,Psi4r_2pt_l2m0 = 0.0;
REAL Psi4i_0pt_l2m0 = 0.0,Psi4i_1pt_l2m0 = 0.0,Psi4i_2pt_l2m0 = 0.0;
LOOP_REGION(r_ext_idx,r_ext_idx+1,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,NGHOSTS+1) {
psi4(¶ms, i0,i1,i2, xx, y_n_gfs, diagnostic_output_gfs);
const int idx = IDX3S(i0,i1,i2);
const REAL th = xx[1][i1];
const REAL ph = xx[2][i2];
// Construct integrand for Psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
// Based on http://www.demonstrations.wolfram.com/SpinWeightedSphericalHarmonics/
// we have {}_{s}_Y_{lm} = {}_{-2}_Y_{20} = 1/4 * sqrt(15 / (2*pi)) * sin(th)^2
// Confirm integrand is correct:
// Integrate[(1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2) (1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2)*2*Pi*Sin[th], {th, 0, Pi}]
// ^^^ equals 1.
REAL ReY_sm2_l2_m0,ImY_sm2_l2_m0;
SpinWeight_minus2_SphHarmonics(2,0, th,ph, &ReY_sm2_l2_m0,&ImY_sm2_l2_m0);
const REAL sinth = sin(xx[1][i1]);
/* psi4 *{}_{-2}_Y_{20}* (int dphi)* sinth*dtheta */
Psi4r_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_0PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_1PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_2PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_0PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_1PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_2PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
}
fprintf(outPsi4_l2m0,"%e %.15e %.15e %.15e %.15e %.15e %.15e\n", (double)((n)*dt),
(double)Psi4r_0pt_l2m0,(double)Psi4r_1pt_l2m0,(double)Psi4r_2pt_l2m0,
(double)Psi4i_0pt_l2m0,(double)Psi4i_1pt_l2m0,(double)Psi4i_2pt_l2m0);
fclose(outPsi4_l2m0);
}
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 10 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 10 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
if EvolOption == "low resolution":
Nr = 270
Ntheta = 8
elif EvolOption == "high resolution":
Nr = 800
Ntheta = 16
else:
print("Error: unknown EvolOption!")
sys.exit(1)
CFL_FACTOR = 1.0
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"BrillLindquist_Playground.c"), "BrillLindquist_Playground",
compile_mode="optimized")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
start = time.time()
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2 "+str(CFL_FACTOR))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm`...
Finished executing in 14.632002353668213 seconds.
Finished compilation.
Finished in 14.640899181365967 seconds.
Now running. Should take ~30 minutes...
Executing `taskset -c 0,1,2,3 ./BrillLindquist_Playground 270 8 2 1.0`...
[2KIt: 27200 t=199.93 dt=7.35e-03 | 100.0%; ETA 0 s | t/h 963.03 | gp/s 6.29e+05
Finished executing in 747.6896994113922 seconds.
Finished in 747.6993782520294 seconds.
###Markdown
Step 6: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to$${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{-0.0890 t/M} \cos(0.3737 t/M + \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory. Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that
1. Finite-differencing order is set to 10
1. Nr = 800
1. Ntheta = 16
1. Outer boundary (`AMPL`) set to 300
1. Final time (`t_final`) set to 275
1. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 1.8e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r1,psi4r2,psi4r3,psi4i1,psi4i2,psi4i3 = np.loadtxt("outPsi4_l2m0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
for i in range(len(psi4r1)):
    retarded_time = t[i]-float(extraction_radius)
    t_retarded.append(retarded_time)
    log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r1[i] + psi4r2[i] + psi4r3[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel(r'$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,final_time - float(extraction_radius)+10])
ax.set_ylim([-13.5,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
Start-to-Finish Example: Head-On Black Hole Collision Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical-like coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial notebooks ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Minimal sampling in the $\phi$ direction greatly speeds up the simulation.**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). Finally, excellent agreement is seen in the gravitational wave signal from the ringing remnant black hole for multiple decades in amplitude when compared to black hole perturbation theory predictions. NRPy+ Source Code for this module: * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: * [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.* [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates* [BSSN/Psi4.py](../edit/BSSN/Psi4.py); [\[**tutorial**\]](Tutorial-Psi4.ipynb): Generates expressions for $\psi_4$, the outgoing Weyl scalar + [BSSN/Psi4_tetrads.py](../edit/BSSN/Psi4_tetrads.py); [\[**tutorial**\]](Tutorial-Psi4_tetrads.ipynb): Generates quasi-Kinnersley tetrad needed for $\psi_4$-based gravitational wave extraction Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)).
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration * [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on Brill-Lindquist initial data**](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [**NRPy+ tutorial on validating Brill-Lindquist initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb).1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: 1. At the start of each iteration in time, output the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). 1. At each RK time substep, do the following: 1. Evaluate BSSN RHS expressions * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](bssn): Output C code for BSSN spacetime solve 1. [Step 3.a](bssnrhs): Output C code for BSSN RHS expressions 1. [Step 3.b](hamconstraint): Output C code for Hamiltonian constraint 1. [Step 3.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint 1. [Step 3.d](psi4): Compute $\psi_4$, which encodes gravitational wave information in our numerical relativity calculations 1. [Step 3.e](decomposepsi4): Decompose $\psi_4$ into spin-weight -2 spherical harmonics 1. [Step 3.e.i](spinweight): Output ${}^{-2}Y_{\ell,m}$, up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here) 1. [Step 3.e.ii](full_diag): Decomposition of $\psi_4$ into spin-weight -2 spherical harmonics: Full C-code diagnostic implementation 1. [Step 3.f](coutput): Output all NRPy+ C-code kernels, in parallel if possible 1. [Step 3.g](cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`1. [Step 4](bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system1. 
[Step 5](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 6](visualize): Data Visualization Animations 1. [Step 6.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 6.b](genimages): Generate images for visualization animation 1. [Step 6.c](genvideo): Generate visualization animation1. [Step 7](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P0: Set the option for evolving the initial data forward in time
# Options include "low resolution" and "high resolution"
EvolOption = "low resolution"
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outC_function_dict # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Two_BHs_Collide_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.a: Enable SIMD-optimized code?
# I.e., generate BSSN and Ricci C code kernels using SIMD-vectorized
# compiler intrinsics, which *greatly improve the code's performance*,
# though at the expense of making the C-code kernels less
# human-readable.
# * Important note in case you wish to modify the BSSN/Ricci kernels
# here by adding expressions containing transcendental functions
# (e.g., certain scalar fields):
# Note that SIMD-based transcendental function intrinsics are not
# supported by the default installation of gcc or clang (you will
# need to use e.g., the SLEEF library from sleef.org, for this
# purpose). The Intel compiler suite does support these intrinsics
# however without the need for external libraries.
SIMD_enable = True
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
# Decompose psi_4 (second time derivative of gravitational
# wave strain) into all spin-weight=-2
# l,m spherical harmonics, starting at l=2
# going up to and including l_max, set here:
l_max = 2
if EvolOption == "low resolution":
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 150 # Length scale of computational domain
final_time = 200 # Final time
FD_order = 8 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
elif EvolOption == "high resolution":
# See above for description of the domain_size parameter
domain_size = 300 # Length scale of computational domain
final_time = 275 # Final time
FD_order = 10 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
else:
    print("Error: unknown EvolOption = "+EvolOption)
    sys.exit(1)
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.2 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial and temporal derivatives;
#           the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, &params, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, &params, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(&rfmstruct, &params, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
###Output
_____no_output_____
###Markdown
Step 1.c: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numerical solution to the BSSN equations to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:$$\Delta t \le \frac{\min(ds_i)}{c},$$where $c$ is the wavespeed, and$$ds_i = h_i \Delta x^i$$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
###Code
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
###Output
_____no_output_____
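###Markdown
To make the CFL bound above concrete, here is a standalone sketch (not generated by NRPy+; it assumes an ordinary uniform, cell-centered Spherical grid, where the reference-metric scale factors are $h_r=1$, $h_\theta=r$, $h_\phi=r\sin\theta$). It estimates the CFL-limited timestep by minimizing $ds_i = h_i \Delta x^i$ over the grid, which is the same minimization the generated `find_timestep()` performs on the actual SinhSpherical grid.
###Code
import numpy as np

def cfl_timestep_spherical(Nr=270, Nth=8, Nph=2, RMAX=150.0, CFL=0.5, c=1.0):
    """Estimate dt <= CFL*min(ds_i)/c on a uniform, cell-centered Spherical grid,
    with scale factors h_r = 1, h_theta = r, h_phi = r*sin(theta)."""
    dr, dth, dph = RMAX/Nr, np.pi/Nth, 2.0*np.pi/Nph
    r  = (np.arange(Nr)  + 0.5)*dr   # cell-centered radii
    th = (np.arange(Nth) + 0.5)*dth  # cell-centered polar angles
    R, TH = np.meshgrid(r, th, indexing="ij")
    ds_r  = np.full_like(R, dr)          # ds_r     = h_r     * dr
    ds_th = R*dth                        # ds_theta = h_theta * dtheta
    ds_ph = R*np.abs(np.sin(TH))*dph     # ds_phi   = h_phi   * dphi
    return CFL*min(ds_r.min(), ds_th.min(), ds_ph.min())/c

print("Estimated CFL-limited dt on a uniform Spherical grid:", cfl_timestep_spherical())
###Output
_____no_output_____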
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
print("Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.")
start = time.time()
bl.BrillLindquist() # Registers ID C function in dictionary, used below to output to file.
with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file:
file.write(outC_function_dict["initial_data"])
end = time.time()
print("(BENCH) Finished BL initial data codegen in "+str(end-start)+" seconds.")
###Output
_____no_output_____
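###Markdown
For intuition about what `bl.BrillLindquist()` encodes: Brill-Lindquist data are time-symmetric ($K_{ij}=0$) with a conformally flat 3-metric $\gamma_{ij}=\psi^4\delta_{ij}$, where the conformal factor is $\psi = 1 + \sum_i \frac{m_i}{2|\mathbf{x}-\mathbf{x}_i|}$. The toy sketch below simply evaluates $\psi$ in Python (illustrative only; it is not the NRPy+ code path), using the two equal-mass punctures on the $z$-axis that `free_parameters.h` sets later in this notebook.
###Code
import numpy as np

def brill_lindquist_psi(x, y, z, masses=(0.5, 0.5),
                        positions=((0.0, 0.0, +0.25), (0.0, 0.0, -0.25))):
    """Brill-Lindquist conformal factor psi = 1 + sum_i m_i / (2 |x - x_i|).
    The time-symmetric ADM data are then gamma_ij = psi^4 delta_ij, K_ij = 0."""
    psi = 1.0
    for m, (xi, yi, zi) in zip(masses, positions):
        psi += m/(2.0*np.sqrt((x - xi)**2 + (y - yi)**2 + (z - zi)**2))
    return psi

# Conformal factor along the x-axis, away from the punctures:
for xx in (1.0, 5.0, 25.0):
    print("psi(x=%5.1f, y=0, z=0) = %.6f" % (xx, brill_lindquist_psi(xx, 0.0, 0.0)))
###Output
_____no_output_____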
###Markdown
Step 3: Output C code for BSSN spacetime solve \[Back to [top](toc)\]$$\label{bssn}$$ Step 3.a: Output C code for BSSN RHS expressions \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("(BENCH) Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,SIMD_enable=True",
upwindcontrolvec=betaU).replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,SIMD_enable=True").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished Ricci C codegen in " + str(end - start) + " seconds.")
###Output
Generating symbolic expressions for BSSN RHSs...
(BENCH) Finished BSSN symbolic expressions in 4.267388105392456 seconds.
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However, it does not, due to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation and, ultimately, determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.
###Code
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
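###Markdown
As a back-of-the-envelope guide for that convergence check (a sketch, assuming the violation is truncation-error dominated so that $|\mathcal{H}|\propto(\Delta x)^{N}$ with $N$ the finite-differencing order): increasing the resolution by a factor $f$ should reduce $\log_{10}|\mathcal{H}|$ by roughly $N\log_{10}f$.
###Code
import numpy as np

def expected_log10_drop(dx_coarse, dx_fine, fd_order):
    """If |H| ~ C*(dx)**fd_order (truncation-error dominated), increasing the
    resolution should reduce log10|H| by fd_order*log10(dx_coarse/dx_fine)."""
    return fd_order*np.log10(dx_coarse/dx_fine)

# Example: doubling the resolution with the 8th-order stencils chosen above
print("Expected drop in log10|H| when doubling resolution at FD order 8:",
      expected_log10_drop(1.0, 0.5, fd_order=8), "orders of magnitude")
###Output
_____no_output_____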
###Markdown
Step 3.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("(BENCH) Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
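###Markdown
The enforcement is essentially a pointwise rescaling $\bar{\gamma}_{ij}\to\left(\hat{\gamma}/\bar{\gamma}\right)^{1/3}\bar{\gamma}_{ij}$, where $\bar{\gamma}=\det\bar{\gamma}_{ij}$ and $\hat{\gamma}=\det\hat{\gamma}_{ij}$. Below is a minimal NumPy sketch of that rescaling applied to a single $3\times3$ matrix (illustrative only; the generated `enforce_detgammabar_constraint()` applies the constraint pointwise to the rescaled BSSN variables in the chosen curvilinear basis).
###Code
import numpy as np

def enforce_det_constraint(gammabar, det_gammahat):
    """Rescale gammabar_ij -> (det(gammahat)/det(gammabar))**(1/3) * gammabar_ij,
    after which det(gammabar_ij) = det(gammahat_ij) holds exactly."""
    return (det_gammahat/np.linalg.det(gammabar))**(1.0/3.0)*gammabar

# Toy example: a slightly perturbed 3-metric, with det(gammahat) = 1
gammabar = np.array([[1.01, 0.02, 0.00],
                     [0.02, 0.98, 0.01],
                     [0.00, 0.01, 1.02]])
fixed = enforce_det_constraint(gammabar, det_gammahat=1.0)
print("det before:", np.linalg.det(gammabar), " det after:", np.linalg.det(fixed))
###Output
_____no_output_____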
###Markdown
Step 3.d: Compute $\psi_4$, which encodes gravitational wave information in our numerical relativity calculations \[Back to [top](toc)\]$$\label{psi4}$$The [Weyl scalar](https://en.wikipedia.org/wiki/Weyl_scalar) $\psi_4$ encodes gravitational wave information in our numerical relativity calculations. For more details on how it is computed, see [this NRPy+ tutorial notebook for information on $\psi_4$](Tutorial-Psi4.ipynb) and [this one on the Quasi-Kinnersley tetrad](Tutorial-Psi4_tetrads.ipynb) (as implemented in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf)).$\psi_4$ is related to the gravitational wave strain via$$\psi_4 = \ddot{h}_+ - i \ddot{h}_\times,$$where $\ddot{h}_+$ is the second time derivative of the $+$ polarization of the gravitational wave strain $h$, and $\ddot{h}_\times$ is the second time derivative of the $\times$ polarization of the gravitational wave strain $h$.
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
print("Generating symbolic expressions for psi4...")
start = time.time()
BP4.Psi4()
end = time.time()
print("(BENCH) Finished psi4 symbolic expressions in "+str(end-start)+" seconds.")
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
desc="""Since it's so expensive to compute, instead of evaluating
         psi_4 at all interior points, this function evaluates it on a
point-by-point basis."""
name="psi4"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """const paramstruct *restrict params,
const int i0,const int i1,const int i2,
REAL *restrict xx[3], const REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = """
const int idx = IDX3S(i0,i1,i2);
const REAL xx0 = xx[0][i0];const REAL xx1 = xx[1][i1];const REAL xx2 = xx[2][i2];
// Real part of psi_4, divided into 3 terms
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// Imaginary part of psi_4, divided into 3 terms
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}""")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4re_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4re_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4re_pt.replace("IDX4","IDX4S"))
end = time.time()
print("(BENCH) Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4im_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4im_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4im_pt.replace("IDX4","IDX4S"))
end = time.time()
print("(BENCH) Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
###Output
Generating symbolic expressions for psi4...
(BENCH) Finished psi4 symbolic expressions in 17.19464373588562 seconds.
Output C function psi4() to file BSSN_Two_BHs_Collide_Ccodes/psi4.h
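###Markdown
The strain relation above can be sanity-checked numerically. The toy sketch below is standalone and unrelated to the generated `psi4()` kernel: it assumes a damped-sinusoid $h_+(t)$ (using the $(l,m)=(2,0)$ quasinormal-mode frequency $0.3737/M$ and damping rate $0.0890/M$ from black hole perturbation theory, the same values used in the waveform comparison earlier in this document) with $h_\times=0$, and forms $\psi_4=\ddot h_+ - i\ddot h_\times$ via finite-difference second time derivatives.
###Code
import numpy as np

# Toy strain: an assumed damped sinusoid for h_plus; h_cross = 0 for this toy
t = np.linspace(0.0, 100.0, 4001)
dt = t[1] - t[0]
h_plus  = 1.0e-2*np.exp(-0.0890*t)*np.cos(0.3737*t)
h_cross = np.zeros_like(t)

# psi_4 = d^2 h_+ / dt^2  -  i d^2 h_x / dt^2, via centered finite differences
psi4 = (np.gradient(np.gradient(h_plus,  dt), dt)
        - 1j*np.gradient(np.gradient(h_cross, dt), dt))
print("max |Re(psi_4)| of the toy ringdown =", np.max(np.abs(psi4.real)))
###Output
_____no_output_____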
###Markdown
Step 3.e: Decompose $\psi_4$ into spin-weight -2 spherical harmonics \[Back to [top](toc)\]$$\label{decomposepsi4}$$ Instead of measuring $\psi_4$ for all possible (gravitational wave) observers in our simulation domain, we instead decompose it into a natural basis set, which by convention is the spin-weight -2 spherical harmonics.Here we implement the algorithm for decomposing $\psi_4$ into spin-weight -2 spherical harmonic modes. The decomposition is defined as follows:$${}^{-2}\left[\psi_4\right]_{\ell,m}(t,R) = \int \int \psi_4(t,R,\theta,\phi)\ \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] \sin \theta d\theta d\phi,$$where* ${}^{-2}Y^*_{\ell,m}(\theta,\phi)$ is the complex conjugate of the spin-weight $-2$ spherical harmonic $\ell,m$ mode* $R$ is the (fixed) radius at which we extract $\psi_4$ information* $t$ is the time coordinate* $\theta,\phi$ are the polar and azimuthal angles, respectively (we use [the physics notation for spherical coordinates](https://en.wikipedia.org/wiki/Spherical_coordinate_system) here) Step 3.e.i Output ${}^{-2}Y_{\ell,m}$, up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here) \[Back to [top](toc)\]$$\label{spinweight}$$ Here we output all spin-weight $-2$ spherical harmonics [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb) for $\ell=0$ up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here).
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
cmd.mkdir(os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics"))
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=l_max,
filename=os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"))
###Output
_____no_output_____
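###Markdown
As a concrete example of what `SpinWeight_minus2_SphHarmonics()` generates, the dominant mode for this head-on collision is ${}_{-2}Y_{2,0}(\theta,\phi)=\sqrt{\frac{15}{32\pi}}\sin^2\theta$. The standalone sketch below (illustrative only) checks its unit normalization, $\int_0^{2\pi}\int_0^\pi \left|{}_{-2}Y_{2,0}\right|^2\sin\theta\,d\theta\,d\phi=1$, using the same kind of cell-centered midpoint sum over $(\theta,\phi)$ that the C diagnostic in the next step uses for the mode integral.
###Code
import numpy as np

def sY_m2_l2_m0(theta):
    """Spin-weight s=-2, (l,m)=(2,0) spherical harmonic: sqrt(15/(32 pi)) * sin^2(theta)."""
    return np.sqrt(15.0/(32.0*np.pi))*np.sin(theta)**2

# Midpoint-rule check of the normalization integral over the sphere,
# analogous to the cell-centered (theta,phi) sum in the C diagnostic below.
Nth, Nph = 128, 16
dth, dph = np.pi/Nth, 2.0*np.pi/Nph
th = (np.arange(Nth) + 0.5)*dth
ph = (np.arange(Nph) + 0.5)*dph
TH, PH = np.meshgrid(th, ph, indexing="ij")
integral = np.sum(np.abs(sY_m2_l2_m0(TH))**2 * np.sin(TH))*dth*dph
print("Integral of |sY_{-2,2,0}|^2 over the sphere =", integral, "(should be ~1)")
###Output
_____no_output_____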
###Markdown
Step 3.e.ii Decomposition of $\psi_4$ into spin-weight -2 spherical harmonics: Full C-code diagnostic implementation \[Back to [top](toc)\]$$\label{full_diag}$$ Note that this diagnostic implementation assumes that `Spherical`-like coordinates are used (e.g., `SinhSpherical` or `Spherical`), which are the most natural coordinate system for decomposing $\psi_4$ into spin-weight -2 modes.First we process the inputs needed to compute $\psi_4$ at all needed $\theta,\phi$ points
###Code
%%writefile $Ccodesdir/driver_psi4_spinweightm2_decomposition.h
void driver_psi4_spinweightm2_decomposition(const paramstruct *restrict params,
const REAL curr_time,const int R_ext_idx,
REAL *restrict xx[3],
const REAL *restrict y_n_gfs,
REAL *restrict diagnostic_output_gfs) {
#include "set_Cparameters.h"
// Step 1: Set the extraction radius R_ext based on the radial index R_ext_idx
REAL R_ext;
{
REAL xx0 = xx[0][R_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
xx_to_Cart(params,xx,R_ext_idx,1,1,xCart);
R_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
// Step 2: Compute psi_4 at this extraction radius and store to a local 2D array.
const int sizeof_2Darray = sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS);
REAL *restrict psi4r_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray);
REAL *restrict psi4i_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray);
// ... also store theta, sin(theta), and phi to corresponding 1D arrays.
REAL *restrict sinth_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS));
REAL *restrict th_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS));
REAL *restrict ph_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS));
const int i0=R_ext_idx;
#pragma omp parallel for
for(int i1=NGHOSTS;i1<Nxx_plus_2NGHOSTS1-NGHOSTS;i1++) {
th_array[i1-NGHOSTS] = xx[1][i1];
sinth_array[i1-NGHOSTS] = sin(xx[1][i1]);
for(int i2=NGHOSTS;i2<Nxx_plus_2NGHOSTS2-NGHOSTS;i2++) {
ph_array[i2-NGHOSTS] = xx[2][i2];
// Compute real & imaginary parts of psi_4, output to diagnostic_output_gfs
psi4(params, i0,i1,i2, xx, y_n_gfs, diagnostic_output_gfs);
const int idx3d = IDX3S(i0,i1,i2);
const REAL psi4r = (+diagnostic_output_gfs[IDX4ptS(PSI4R_0PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4R_1PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4R_2PTGF,idx3d)]);
const REAL psi4i = (+diagnostic_output_gfs[IDX4ptS(PSI4I_0PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4I_1PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4I_2PTGF,idx3d)]);
// Store result to "2D" array (actually 1D array with 2D storage):
const int idx2d = (i1-NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+(i2-NGHOSTS);
psi4r_at_R_ext[idx2d] = psi4r;
psi4i_at_R_ext[idx2d] = psi4i;
}
}
###Output
Writing BSSN_Two_BHs_Collide_Ccodes//driver_psi4_spinweightm2_decomposition.h
###Markdown
Next we implement the integral:$${}^{-2}\left[\psi_4\right]_{\ell,m}(t,R) = \int \int \psi_4(t,R,\theta,\phi)\ \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] \sin \theta d\theta d\phi.$$Since $\psi_4(t,R,\theta,\phi)$ and $\left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right]$ are generally complex, for simplicity let's define\begin{align}\psi_4(t,R,\theta,\phi)&=a+i b \\\left[{}^{-2}Y_{\ell,m}(\theta,\phi)\right] &= c + id\\\implies \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] = \left[{}^{-2}Y_{\ell,m}(\theta,\phi)\right]^* &=c-i d\end{align}Then the product (appearing within the integral) will be given by\begin{align}(a + i b) (c-i d) &= (ac + bd) + i(bc - ad),\end{align}which cleanly splits the real and complex parts. For better modularity, we output this algorithm to a function `decompose_psi4_into_swm2_modes()` in file `decompose_psi4_into_swm2_modes.h`. Here, we will call this function from within `output_psi4_spinweight_m2_decomposition()`, but in general it could be called from codes that do not use spherical coordinates, and the `psi4r_at_R_ext[]` and `psi4i_at_R_ext[]` arrays are filled using interpolations.
###Code
%%writefile $Ccodesdir/lowlevel_decompose_psi4_into_swm2_modes.h
void lowlevel_decompose_psi4_into_swm2_modes(const paramstruct *restrict params,
const REAL curr_time, const REAL R_ext,
const REAL *restrict th_array,const REAL *restrict sinth_array,const REAL *restrict ph_array,
const REAL *restrict psi4r_at_R_ext,const REAL *restrict psi4i_at_R_ext) {
#include "set_Cparameters.h"
for(int l=2;l<=L_MAX;l++) { // L_MAX is a global variable, since it must be set in Python (so that SpinWeight_minus2_SphHarmonics() computes enough modes)
for(int m=-l;m<=l;m++) {
// Parallelize the integration loop:
REAL psi4r_l_m = 0.0;
REAL psi4i_l_m = 0.0;
#pragma omp parallel for reduction(+:psi4r_l_m,psi4i_l_m)
for(int i1=0;i1<Nxx_plus_2NGHOSTS1-2*NGHOSTS;i1++) {
const REAL th = th_array[i1];
const REAL sinth = sinth_array[i1];
for(int i2=0;i2<Nxx_plus_2NGHOSTS2-2*NGHOSTS;i2++) {
const REAL ph = ph_array[i2];
// Construct integrand for psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
REAL ReY_sm2_l_m,ImY_sm2_l_m;
SpinWeight_minus2_SphHarmonics(l,m, th,ph, &ReY_sm2_l_m,&ImY_sm2_l_m);
const int idx2d = i1*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+i2;
const REAL a = psi4r_at_R_ext[idx2d];
const REAL b = psi4i_at_R_ext[idx2d];
const REAL c = ReY_sm2_l_m;
const REAL d = ImY_sm2_l_m;
psi4r_l_m += (a*c + b*d) * dxx2 * sinth*dxx1;
psi4i_l_m += (b*c - a*d) * dxx2 * sinth*dxx1;
}
}
// Step 4: Output the result of the integration to file.
char filename[100];
sprintf(filename,"outpsi4_l%d_m%d-%d-r%.2f.txt",l,m, Nxx0,(double)R_ext);
if(m>=0) sprintf(filename,"outpsi4_l%d_m+%d-%d-r%.2f.txt",l,m, Nxx0,(double)R_ext);
FILE *outpsi4_l_m;
// 0 = n*dt when n=0 is exactly represented in double/long double precision,
// so no worries about the result being ~1e-16 in double/ld precision
if(curr_time==0) outpsi4_l_m = fopen(filename, "w");
else outpsi4_l_m = fopen(filename, "a");
fprintf(outpsi4_l_m,"%e %.15e %.15e\n", (double)(curr_time),
(double)psi4r_l_m,(double)psi4i_l_m);
fclose(outpsi4_l_m);
}
}
}
###Output
Writing BSSN_Two_BHs_Collide_Ccodes//lowlevel_decompose_psi4_into_swm2_modes.h
###Markdown
Finally, we complete the function `output_psi4_spinweight_m2_decomposition()`, now calling the above routine and freeing all allocated memory.
###Code
%%writefile -a $Ccodesdir/driver_psi4_spinweightm2_decomposition.h
// Step 4: Perform integrations across all l,m modes from l=2 up to and including L_MAX (global variable):
lowlevel_decompose_psi4_into_swm2_modes(params, curr_time,R_ext, th_array,sinth_array, ph_array,
psi4r_at_R_ext,psi4i_at_R_ext);
// Step 5: Free all allocated memory:
free(psi4r_at_R_ext); free(psi4i_at_R_ext);
free(sinth_array); free(th_array); free(ph_array);
}
###Output
Appending to BSSN_Two_BHs_Collide_Ccodes//driver_psi4_spinweightm2_decomposition.h
###Markdown
Step 3.f: Output all NRPy+ C-code kernels, in parallel if possible \[Back to [top](toc)\]$$\label{coutput}$$
###Code
# Step 0: Import the multiprocessing module.
import multiprocessing
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [Psi4re,Psi4re,Psi4re,Psi4im,Psi4im,Psi4im, BrillLindquistID,BSSN_RHSs,Ricci,Hamiltonian,gammadet]
# Step 1.a: Define master function for calling all above functions.
# Note that lambdifying this doesn't work in Python 3
def master_func(idx):
if idx < 3: # Call Psi4re(arg)
funcs[idx](idx)
elif idx < 6: # Call Psi4im(arg-3)
funcs[idx](idx-3)
else: # All non-Psi4 functions:
funcs[idx]()
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.b: Import the multiprocessing module.
import multiprocessing
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
# This will happen on Android and Windows systems
for idx in range(len(funcs)):
master_func(idx)
###Output
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.Generating C code for psi4_re_pt1 in SinhSpherical coordinates.Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.Generating C code for psi4_re_pt2 in SinhSpherical coordinates.Generating C code for psi4_im_pt1 in SinhSpherical coordinates.Generating C code for Ricci tensor in SinhSpherical coordinates.Generating C code for psi4_im_pt0 in SinhSpherical coordinates.Generating C code for psi4_im_pt2 in SinhSpherical coordinates.Generating C code for BSSN RHSs in SinhSpherical coordinates.Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Output C function enforce_detgammabar_constraint() to file BSSN_Two_BHs_Collide_Ccodes/enforce_detgammabar_constraint.h
(BENCH) Finished gamma constraint C codegen in 0.11540484428405762 seconds.
(BENCH) Finished generating psi4_im_pt1 in 11.360079765319824 seconds.
(BENCH) Finished generating psi4_im_pt2 in 14.557591915130615 seconds.
(BENCH) Finished BL initial data codegen in 15.187357664108276 seconds.
Output C function rhs_eval() to file BSSN_Two_BHs_Collide_Ccodes/rhs_eval.h
(BENCH) Finished BSSN_RHS C codegen in 18.33892321586609 seconds.
(BENCH) Finished generating psi4_re_pt2 in 19.347757577896118 seconds.
Output C function Ricci_eval() to file BSSN_Two_BHs_Collide_Ccodes/Ricci_eval.h
(BENCH) Finished Ricci C codegen in 20.357526063919067 seconds.
(BENCH) Finished generating psi4_re_pt1 in 20.787642240524292 seconds.
(BENCH) Finished generating psi4_im_pt0 in 35.46419930458069 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Two_BHs_Collide_Ccodes/Hamiltonian_constraint.h
(BENCH) Finished Hamiltonian C codegen in 36.30071473121643 seconds.
(BENCH) Finished generating psi4_re_pt0 in 61.16836905479431 seconds.
###Markdown
Step 3.g: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.f.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.f.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
// Set free-parameter values for BSSN evolution:
params.eta = 2.0;
// Set free parameters for the (Brill-Lindquist) initial data
params.BH1_posn_x = 0.0; params.BH1_posn_y = 0.0; params.BH1_posn_z =+0.25;
params.BH2_posn_x = 0.0; params.BH2_posn_y = 0.0; params.BH2_posn_z =-0.25;
params.BH1_mass = 0.5; params.BH2_mass = 0.5;
"""+"""
const REAL final_time = """+str(final_time)+";\n")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.f.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.f.iv: Generate xx_to_Cart.h, which contains xx_to_Cart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h"))
# Step 3.f.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 4: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0, psi4i_0pt:0, psi4i_1pt:0, psi4i_2pt:0,
psi4r_0pt:0, psi4r_1pt:0, psi4r_2pt:0 )
AuxEvol parity: ( RbarDD00:4, RbarDD01:5, RbarDD02:6, RbarDD11:7,
RbarDD12:8, RbarDD22:9 )
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 5: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor; it can be overwritten at the command line
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.
// Part P0.d: We decompose psi_4 into all spin-weight=-2
// l,m spherical harmonics, starting at l=2,
// going up to and including l_max, set here:
#define L_MAX """+str(l_max)+"""
""")
%%writefile $Ccodesdir/BrillLindquist_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xx_to_Cart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xx_to_Cart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
#include "initial_data.h"
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// Step P14: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
#include "psi4.h"
#include "SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
#include "lowlevel_decompose_psi4_into_swm2_modes.h"
#include "driver_psi4_spinweightm2_decomposition.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
    set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
  set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = final_time;
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
  REAL dt = find_timestep(&params, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
REAL out_approx_every_t = 0.2;
int output_every_N = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output diagnostics
// Step 3.a.i: Output psi4 spin-weight -2 decomposed data, every N_output_every
if(n%output_every_N == 0) {
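// Decompose psi4 into its spin-weight -2 (l,m) modes at several extraction radii, spanning
// roughly the inner quarter to 90% of the radial grid in steps of 5 radial gridpoints.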
for(int r_ext_idx = (Nxx_plus_2NGHOSTS0-NGHOSTS)/4;
r_ext_idx<(Nxx_plus_2NGHOSTS0-NGHOSTS)*0.9;
r_ext_idx+=5) {
// psi_4 mode-by-mode spin-weight -2 spherical harmonic decomposition routine
driver_psi4_spinweightm2_decomposition(&params, ((REAL)n)*dt,r_ext_idx,
xx, y_n_gfs, diagnostic_output_gfs);
}
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xx_to_Cart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 40 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 40 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
if EvolOption == "low resolution":
Nr = 270
Ntheta = 8
elif EvolOption == "high resolution":
Nr = 800
Ntheta = 16
else:
print("Error: unknown EvolOption!")
sys.exit(1)
CFL_FACTOR = 1.0
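# Note: CFL_FACTOR = 1.0 exceeds the usual 0.5 bound; the C code accepts this without complaint
# only because Nx2 = 2 below, i.e., the run is effectively axisymmetric (no phi sampling),
# where larger CFL factors remain stable.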
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"BrillLindquist_Playground.c"), "BrillLindquist_Playground",
compile_mode="optimized")
end = time.time()
print("(BENCH) Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
start = time.time()
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2 "+str(CFL_FACTOR))
end = time.time()
print("(BENCH) Finished in "+str(end-start)+" seconds.\n\n")
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
(EXEC): Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm`...
(BENCH): Finished executing in 9.02372121810913 seconds.
Finished compilation.
(BENCH) Finished in 9.034607410430908 seconds.
Now running. Should take ~30 minutes...
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./BrillLindquist_Playground 270 8 2 1.0`...
It: 27200 t=199.93 dt=7.35e-03 | 100.0%; ETA 0 s | t/h 2979.16 | gp/s 1.95e+06
(BENCH): Finished executing in 241.8013095855713 seconds.
(BENCH) Finished in 241.81788897514343 seconds.
###Markdown
Step 6: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to $${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{-0.0890 t/M} \cos(0.3737 t/M + \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory. Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that
1. Finite-differencing order is set to 10
1. Nr = 800
1. Ntheta = 16
1. Outer boundary (`AMPL`) set to 300
1. Final time (`t_final`) set to 275
1. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 1.8e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r,psi4i = np.loadtxt("outpsi4_l2_m+0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
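# The perturbation-theory fit below uses the fundamental l=2 quasinormal mode of a
# Schwarzschild black hole, M*omega ~ 0.3737 - 0.0890i, i.e., an amplitude e-folding
# time of 1/0.0890 ~ 11 M.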
for i in range(len(psi4r)):
retarded_time = t[i]-float(extraction_radius)
t_retarded.append(retarded_time)
log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel(r'$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,final_time - float(extraction_radius)+10])
ax.set_ylim([-13.5,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex,
and compiled LaTeX file to PDF file Tutorial-Start_to_Finish-
BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf
###Markdown
Start-to-Finish Example: Head-On Black Hole Collision with Gravitational Wave Analysis Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial modules ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Not sampling in the $\phi$ direction greatly speeds up the simulation.**This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plot](convergence) at bottom), and results have been validated to agree to roundoff error with the [original SENR code](https://bitbucket.org/zach_etienne/nrpy).****Further, agreement of $\psi_4$ with result expected from black hole perturbation theory (*a la* Fig 6 of [Ruchlin, Etienne, and Baumgarte](https://arxiv.org/pdf/1712.07658.pdf)) has been successfully demonstrated in [Step 7](compare).** NRPy+ Source Code for this module: 1. [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: 1. [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.1. [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion1. [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates1. [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates1. [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.1. 
([Step 2 below](adm_id)) Set gridfunction values to initial data (**[documented in previous start-to-finish module](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_two_BH_initial_data.ipynb)**).1. Evolve the initial data forward in time using RK4 time integration. At each RK4 substep, do the following: 1. ([Step 3 below](bssn_rhs)) Evaluate BSSN RHS expressions. 1. ([Step 4 below](apply_bcs)) Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) 1. ([Step 5 below](enforce3metric)) Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint1. At the end of each iteration in time, output the Hamiltonian constraint violation. 1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This module is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [BSSN.BrillLindquist](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](nrpyccodes) Define Functions for Generating C Codes of Needed Quantities 1. [Step 3.a](bssnrhs): BSSN RHSs 1. [Step 3.b](hamconstraint): Hamiltonian constraint 1. [Step 3.c](spinweight): Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 1. [Step 3.d](psi4): $\psi_4$1. [Step 4](ccodegen): Generate C codes in parallel1. [Step 5](apply_bcs): Apply singular, curvilinear coordinate boundary conditions1. [Step 6](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint1. [Step 7](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 8](compare): Comparison with black hole perturbation theory1. [Step 9](visual): Data Visualization Animations 1. [Step 9.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 9.b](genimages): Generate images for visualization animation 1. [Step 9.c](genvideo): Generate visualization animation1. [Step 10](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 11](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# First we import needed core NRPy+ modules
from outputC import *
import NRPy_param_funcs as par
import grid as gri
import loop as lp
import indexedexp as ixp
import finite_difference as fin
import reference_metric as rfm
#par.set_parval_from_str("outputC::PRECISION","long double")
# Set spatial dimension (must be 3 for BSSN)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Then we set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","SinhSpherical")
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Set the finite-differencing order to 10 (cf. the B-L test from the REB paper, Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",10)
# Then we set the phi axis to be the symmetry axis; i.e., axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
#################
# Next output C headers related to the numerical grids we just set up:
#################
# First output the coordinate bounds xxmin[] and xxmax[]:
with open("BSSN/xxminmax.h", "w") as file:
file.write("const REAL xxmin[3] = {"+str(rfm.xxmin[0])+","+str(rfm.xxmin[1])+","+str(rfm.xxmin[2])+"};\n")
file.write("const REAL xxmax[3] = {"+str(rfm.xxmax[0])+","+str(rfm.xxmax[1])+","+str(rfm.xxmax[2])+"};\n")
# Next output the proper distance between gridpoints in given coordinate system.
# This is used to find the minimum timestep.
dxx = ixp.declarerank1("dxx",DIM=3)
ds_dirn = rfm.ds_dirn(dxx)
outputC([ds_dirn[0],ds_dirn[1],ds_dirn[2]],["ds_dirn0","ds_dirn1","ds_dirn2"],"BSSN/ds_dirn.h")
# Generic coordinate NRPy+ file output, Part 2: output the conversion from (x0,x1,x2) to Cartesian (x,y,z)
outputC([rfm.xxCart[0],rfm.xxCart[1],rfm.xxCart[2]],["xCart[0]","xCart[1]","xCart[2]"],
"BSSN/xxCart.h")
###Output
KeyboardInterrupt
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [BSSN.BrillLindquist](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [BSSN.BrillLindquist](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
returnfunction = bl.BrillLindquist()
# Now output the Brill-Lindquist initial data to file:
with open("BSSN/BrillLindquist.h","w") as file:
file.write(bl.returnfunction)
###Output
_____no_output_____
###Markdown
Step 3: Define Functions for Generating C Codes of Needed Quantities \[Back to [top](toc)\]$$\label{nrpyccodes}$$ Step 3.a: BSSN RHSs \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
import time
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
thismodule = __name__
diss_strength = par.Cparameters("REAL", thismodule, "diss_strength")
alpha_dKOD = ixp.declarerank1("alpha_dKOD")
cf_dKOD = ixp.declarerank1("cf_dKOD")
trK_dKOD = ixp.declarerank1("trK_dKOD")
betU_dKOD = ixp.declarerank2("betU_dKOD","nosym")
vetU_dKOD = ixp.declarerank2("vetU_dKOD","nosym")
lambdaU_dKOD = ixp.declarerank2("lambdaU_dKOD","nosym")
aDD_dKOD = ixp.declarerank3("aDD_dKOD","sym01")
hDD_dKOD = ixp.declarerank3("hDD_dKOD","sym01")
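# Add Kreiss-Oliger dissipation: each evolved field's RHS gets a diss_strength-weighted sum of its
# _dKOD (KO dissipation derivative) operators, damping high-frequency grid noise.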
for k in range(DIM):
gaugerhs.alpha_rhs += diss_strength*alpha_dKOD[k]
rhs.cf_rhs += diss_strength* cf_dKOD[k]
rhs.trK_rhs += diss_strength* trK_dKOD[k]
for i in range(DIM):
gaugerhs.bet_rhsU[i] += diss_strength* betU_dKOD[i][k]
gaugerhs.vet_rhsU[i] += diss_strength* vetU_dKOD[i][k]
rhs.lambda_rhsU[i] += diss_strength*lambdaU_dKOD[i][k]
for j in range(DIM):
rhs.a_rhsDD[i][j] += diss_strength*aDD_dKOD[i][j][k]
rhs.h_rhsDD[i][j] += diss_strength*hDD_dKOD[i][j][k]
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
BSSN_evol_rhss = [ \
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD00"),rhs=rhs.a_rhsDD[0][0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD01"),rhs=rhs.a_rhsDD[0][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD02"),rhs=rhs.a_rhsDD[0][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD11"),rhs=rhs.a_rhsDD[1][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD12"),rhs=rhs.a_rhsDD[1][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","aDD22"),rhs=rhs.a_rhsDD[2][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","alpha"),rhs=gaugerhs.alpha_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","betU0"),rhs=gaugerhs.bet_rhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","betU1"),rhs=gaugerhs.bet_rhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","betU2"),rhs=gaugerhs.bet_rhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","cf"), rhs=rhs.cf_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD00"),rhs=rhs.h_rhsDD[0][0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD01"),rhs=rhs.h_rhsDD[0][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD02"),rhs=rhs.h_rhsDD[0][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD11"),rhs=rhs.h_rhsDD[1][1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD12"),rhs=rhs.h_rhsDD[1][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","hDD22"),rhs=rhs.h_rhsDD[2][2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","lambdaU0"),rhs=rhs.lambda_rhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","lambdaU1"),rhs=rhs.lambda_rhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","lambdaU2"),rhs=rhs.lambda_rhsU[2]),
lhrh(lhs=gri.gfaccess("rhs_gfs","trK"), rhs=rhs.trK_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vetU0"),rhs=gaugerhs.vet_rhsU[0]),
lhrh(lhs=gri.gfaccess("rhs_gfs","vetU1"),rhs=gaugerhs.vet_rhsU[1]),
lhrh(lhs=gri.gfaccess("rhs_gfs","vetU2"),rhs=gaugerhs.vet_rhsU[2]) ]
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
BSSN_RHSs_string = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False",upwindcontrolvec=betaU)
end = time.time()
print("Finished generating BSSN RHSs in "+str(end-start)+" seconds.")
with open("BSSN/BSSN_RHSs.h", "w") as file:
file.write(lp.loop(["i2","i1","i0"],["NGHOSTS","NGHOSTS","NGHOSTS"],
["NGHOSTS+Nxx[2]","NGHOSTS+Nxx[1]","NGHOSTS+Nxx[0]"],
["1","1","1"],["const REAL invdx0 = 1.0/dxx[0];\n"+
"const REAL invdx1 = 1.0/dxx[1];\n"+
"const REAL invdx2 = 1.0/dxx[2];\n"+
"#pragma omp parallel for",
" const REAL xx2 = xx[2][i2];",
" const REAL xx1 = xx[1][i1];"],"",
"""
const REAL xx0 = xx[0][i0];
#define ERF(X, X0, W) (0.5 * (erf( ( (X) - (X0) ) / (W) ) + 1.0))
REAL xCart[3];
#include "../CurviBoundaryConditions/xxCart.h"
const REAL diss_strength = ERF(sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]),2.0L,0.17L)*0.99L;\n"""+BSSN_RHSs_string))
###Output
_____no_output_____
###Markdown
Step 3.b: Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$
###Code
# First register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
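# Note: "def H()" below rebinds the name H from the registered gridfunction to a C-code-generation
# helper; the gridfunction remains registered with NRPy+, and the helper is what is later passed to
# multiprocessing.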
def H():
print("Generating C code for BSSN Hamiltonian in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
bssncon.output_C__Hamiltonian_h(add_T4UUmunu_source_terms=False)
###Output
_____no_output_____
###Markdown
Step 3.c: Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 \[Back to [top](toc)\]$$\label{spinweight}$$ [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb)
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=2,filename="SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h")
###Output
_____no_output_____
###Markdown
Step 3.d: Output $\psi_4$ \[Back to [top](toc)\]$$\label{psi4}$$We output $\psi_4$, assuming Quasi-Kinnersley tetrad of [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf).
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
BP4.Psi4()
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
fin.FD_outputC("BSSN/Psi4re_pt"+str(part)+"_lowlevel.h",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False")
end = time.time()
print("Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
fin.FD_outputC("BSSN/Psi4im_pt"+str(part)+"_lowlevel.h",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False")
end = time.time()
print("Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 4: Perform Parallelized C Code Generation \[Back to [top](toc)\]$$\label{ccodegen}$$Here we call all functions defined in [the above section](nrpyccodes) in parallel, to greatly expedite C code generation on multicore CPUs.
###Code
import multiprocessing
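# Each of the code-generation functions defined above runs in its own process below; the join()
# calls block until every generated C file has been written to disk.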
if __name__ == '__main__':
ID = multiprocessing.Process(target=BrillLindquistID)
RHS = multiprocessing.Process(target=BSSN_RHSs)
H = multiprocessing.Process(target=H)
Psi4re0 = multiprocessing.Process(target=Psi4re, args=(0,))
Psi4re1 = multiprocessing.Process(target=Psi4re, args=(1,))
Psi4re2 = multiprocessing.Process(target=Psi4re, args=(2,))
Psi4im0 = multiprocessing.Process(target=Psi4im, args=(0,))
Psi4im1 = multiprocessing.Process(target=Psi4im, args=(1,))
Psi4im2 = multiprocessing.Process(target=Psi4im, args=(2,))
ID.start()
RHS.start()
H.start()
Psi4re0.start()
Psi4re1.start()
Psi4re2.start()
Psi4im0.start()
Psi4im1.start()
Psi4im2.start()
ID.join()
RHS.join()
H.join()
Psi4re0.join()
Psi4re1.join()
Psi4re2.join()
Psi4im0.join()
Psi4im1.join()
Psi4im2.join()
###Output
_____no_output_____
###Markdown
Step 5: Apply singular, curvilinear coordinate boundary conditions \[Back to [top](toc)\]$$\label{apply_bcs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial module](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions()
###Output
_____no_output_____
###Markdown
Step 6: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial module](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb).Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
import BSSN.Enforce_Detgammabar_Constraint as EGC
EGC.output_Enforce_Detgammabar_Constraint_Ccode()
###Output
_____no_output_____
###Markdown
Step 7: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
with open("BSSN/NGHOSTS.h", "w") as file:
file.write("// Part P0: Set the number of ghost zones, from NRPy+'s FD_CENTDERIVS_ORDER\n")
# Upwinding in BSSN requires that NGHOSTS = FD_CENTDERIVS_ORDER/2 + 1 <- Notice the +1.
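# For example, with FD_CENTDERIVS_ORDER = 10 (set above), this writes "#define NGHOSTS 6".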
file.write("#define NGHOSTS "+str(int(par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER")/2)+1)+"\n")
%%writefile BSSN/BrillLindquist_Playground.c
// Step P1: Import needed header files
#include "NGHOSTS.h" // A NRPy+-generated file, which is set based on FD_CENTDERIVS_ORDER.
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
// Step P2: Add needed #define's to set data type, the IDX4() macro, and the gridfunctions
// Step P2a: set REAL=double, so that all floating point numbers are stored to at least ~16 significant digits.
#define REAL double
/*
#define REAL long double
#define erf erfl
*/
// Step P3: Set free parameters
// Step P3a: Free parameters for the numerical grid
// Cartesian coordinates parameters
const REAL xmin = -10.,xmax=10.;
const REAL ymin = -10.,ymax=10.;
const REAL zmin = -10.,zmax=10.;
// Spherical coordinates parameter
const REAL RMAX = 150.;
// SinhSpherical coordinates parameters
const REAL AMPL = 300; // Updated parameter
//const REAL SINHW = 0.125; // matching B-L test from REB paper (Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
const REAL SINHW = 0.2L; // Updated parameter
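// (In SinhSpherical coordinates, r(xx0) = AMPL*sinh(xx0/SINHW)/sinh(1/SINHW) with xx0 in [0,1],
//  so smaller SINHW concentrates radial gridpoints near the origin.)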
// Cylindrical coordinates parameters
const REAL ZMIN = -7.5;
const REAL ZMAX = 7.5;
const REAL RHOMAX = 7.5;
// Time coordinate parameters
const REAL t_final = 275; /* Final time is set so that at t=t_final,
* data at the origin have not been corrupted
* by the approximate outer boundary condition */
REAL CFL_FACTOR = 0.5; // Set the CFL Factor
// Step P3b: Free parameters for the spacetime evolution
const REAL eta = 2.0; // Gamma-driving shift condition parameter. Matches B-L test from REB paper (Pg 20 of https://arxiv.org/pdf/1712.07658.pdf)
// Step P4: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P5: Set free parameters for the (Brill-Lindquist) initial data
const REAL BH1_posn_x = 0.0,BH1_posn_y = 0.0,BH1_posn_z = +0.25;
const REAL BH2_posn_x = 0.0,BH2_posn_y = 0.0,BH2_posn_z = -0.25;
//const REAL BH1_posn_x = 0.0,BH1_posn_y = 0.0,BH1_posn_z = +0.05; // SUPER CLOSE
//const REAL BH2_posn_x = 0.0,BH2_posn_y = 0.0,BH2_posn_z = -0.05; // SUPER CLOSE
const REAL BH1_mass = 0.5,BH2_mass = 0.5;
// Step P6: Declare the IDX4(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS[0] elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1] in memory, etc.
#define IDX4(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * ( (k) + Nxx_plus_2NGHOSTS[2] * (g) ) ) )
#define IDX3(i,j,k) ( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * (k) ) )
// Assuming idx = IDX3(i,j,k). Much faster if idx can be reused over and over:
#define IDX4pt(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2]) * (g) )
// Step P7: Set #define's for BSSN gridfunctions. C code generated above
#include "../CurviBoundaryConditions/gridfunction_defines.h"
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
void xxCart(REAL *xx[3],const int i0,const int i1,const int i2, REAL xCart[3]) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
#include "../CurviBoundaryConditions/xxCart.h"
}
// Step P8: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "../CurviBoundaryConditions/curvilinear_parity_and_outer_boundary_conditions.h"
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
REAL find_timestep(const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3],REAL *xx[3], const REAL CFL_FACTOR) {
const REAL dxx0 = dxx[0], dxx1 = dxx[1], dxx2 = dxx[2];
REAL dsmin = 1e38; // Start with a crazy high value... close to the largest number in single precision.
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
const REAL xx0 = xx[0][i0], xx1 = xx[1][i1], xx2 = xx[2][i2];
REAL ds_dirn0, ds_dirn1, ds_dirn2;
#include "ds_dirn.h"
#define MIN(A, B) ( ((A) < (B)) ? (A) : (B) )
// Set dsmin = MIN(dsmin, ds_dirn0, ds_dirn1, ds_dirn2);
dsmin = MIN(dsmin,MIN(ds_dirn0,MIN(ds_dirn1,ds_dirn2)));
}
return dsmin*CFL_FACTOR;
}
// Contains BSSN_ID() for BrillLindquist initial data
#include "BrillLindquist.h"
// Step P10.a: Declare the function for the exact solution. time==0 corresponds to the initial data.
void initial_data(const int Nxx_plus_2NGHOSTS[3],REAL *xx[3], REAL *in_gfs) {
#pragma omp parallel for
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0], 0,Nxx_plus_2NGHOSTS[1], 0,Nxx_plus_2NGHOSTS[2]) {
const int idx = IDX3(i0,i1,i2);
BSSN_ID(xx[0][i0],xx[1][i1],xx[2][i2],
&in_gfs[IDX4pt(HDD00GF,idx)],&in_gfs[IDX4pt(HDD01GF,idx)],&in_gfs[IDX4pt(HDD02GF,idx)],
&in_gfs[IDX4pt(HDD11GF,idx)],&in_gfs[IDX4pt(HDD12GF,idx)],&in_gfs[IDX4pt(HDD22GF,idx)],
&in_gfs[IDX4pt(ADD00GF,idx)],&in_gfs[IDX4pt(ADD01GF,idx)],&in_gfs[IDX4pt(ADD02GF,idx)],
&in_gfs[IDX4pt(ADD11GF,idx)],&in_gfs[IDX4pt(ADD12GF,idx)],&in_gfs[IDX4pt(ADD22GF,idx)],
&in_gfs[IDX4pt(TRKGF,idx)],
&in_gfs[IDX4pt(LAMBDAU0GF,idx)],&in_gfs[IDX4pt(LAMBDAU1GF,idx)],&in_gfs[IDX4pt(LAMBDAU2GF,idx)],
&in_gfs[IDX4pt(VETU0GF,idx)],&in_gfs[IDX4pt(VETU1GF,idx)],&in_gfs[IDX4pt(VETU2GF,idx)],
&in_gfs[IDX4pt(BETU0GF,idx)],&in_gfs[IDX4pt(BETU1GF,idx)],&in_gfs[IDX4pt(BETU2GF,idx)],
&in_gfs[IDX4pt(ALPHAGF,idx)],&in_gfs[IDX4pt(CFGF,idx)]);
}
}
// Step P10.b: Implement Hamiltonian constraint diagnostic
void Hamiltonian_constraint(const int Nxx[3],const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3], REAL *xx[3],
REAL *in_gfs, REAL *aux_gfs) {
#include "Hamiltonian.h"
}
// Step P10.c: Psi4 output
void psi4(const int Nxx_plus_2NGHOSTS[3],const int i0,const int i1,const int i2,
const REAL dxx[3], REAL *xx[3],
REAL *in_gfs, REAL *aux_gfs) {
const int idx = IDX3(i0,i1,i2);
const REAL xx0 = xx[0][i0];
const REAL xx1 = xx[1][i1];
const REAL xx2 = xx[2][i2];
const REAL invdx0 = 1.0/dxx[0];
const REAL invdx1 = 1.0/dxx[1];
const REAL invdx2 = 1.0/dxx[2];
// REAL psi4_re_pt0,psi4_re_pt1,psi4_re_pt2;
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// REAL psi4_im_pt0,psi4_im_pt1,psi4_im_pt2;
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}
// aux_gfs[IDX4pt(PSI4RGF,idx)] = psi4_re_pt0 + psi4_re_pt1 + psi4_re_pt2;
// aux_gfs[IDX4pt(PSI4IGF,idx)] = psi4_im_pt0 + psi4_im_pt1 + psi4_im_pt2;
}
// Step P11: Declare the function to evaluate the BSSN RHSs
void rhs_eval(const int Nxx[3],const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3], REAL *xx[3], const REAL *in_gfs,REAL *rhs_gfs) {
#include "BSSN_RHSs.h"
}
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up Brill-Lindquist (two-black-hole) initial data
// Step 2: Evolve the initial data forward in time using the Method of Lines with the RK4 algorithm,
//         applying singular, curvilinear coordinate boundary conditions at each substep.
// Step 3: Periodically output psi4 diagnostics and the Hamiltonian constraint violation.
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",(double)CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
const int Nxx_plus_2NGHOSTS[3] = { Nxx[0]+2*NGHOSTS, Nxx[1]+2*NGHOSTS, Nxx[2]+2*NGHOSTS };
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2];
#include "xxminmax.h"
// Step 0c: Allocate memory for gridfunctions
REAL *evol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *next_in_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *aux_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *k1_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *k2_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *k3_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *k4_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0d: Set up space and time coordinates
// Step 0d.i: Set \Delta x^i on uniform grids.
REAL dxx[3];
for(int i=0;i<3;i++) dxx[i] = (xxmax[i] - xxmin[i]) / ((REAL)Nxx[i]);
// Step 0d.ii: Set up uniform coordinate grids
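// (Cell-centered: gridpoint j sits at xxmin + (j-NGHOSTS+1/2)*dxx, so the first interior point lies half a cell inside xxmin.)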
REAL *xx[3];
for(int i=0;i<3;i++) {
xx[i] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS[i]);
for(int j=0;j<Nxx_plus_2NGHOSTS[i];j++) {
xx[i][j] = xxmin[i] + ((REAL)(j-NGHOSTS) + (1.0/2.0))*dxx[i]; // Cell-centered grid.
}
}
// Step 0d.iii: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(Nxx_plus_2NGHOSTS, dxx,xx, CFL_FACTOR);
//printf("# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of iterations in time.
//Add 0.5 to account for C rounding down integers.
REAL out_approx_every_t = 0.2;
int N_output_every = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0e: Find ghostzone mappings and parities:
gz_map *bc_gz_map = (gz_map *)malloc(sizeof(gz_map)*Nxx_plus_2NGHOSTS_tot);
parity_condition *bc_parity_conditions = (parity_condition *)malloc(sizeof(parity_condition)*Nxx_plus_2NGHOSTS_tot);
set_up_bc_gz_map_and_parity_conditions(Nxx_plus_2NGHOSTS,xx,dxx,xxmin,xxmax, bc_gz_map, bc_parity_conditions);
// Step 1: Set up initial data to an exact solution at time=0:
initial_data(Nxx_plus_2NGHOSTS, xx, evol_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
// Step 2: Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, evol_gfs, aux_gfs);
// Step 3: Start the timer, for keeping track of how fast the simulation is progressing.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
// Step 4: Integrate the initial data forward in time using the Method of Lines and RK4
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
/* Step 3: Output 2D data file, for visualization */
if(n%N_output_every == 0) {
#include "../SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
char filename[100];
//int r_ext_idx = (Nxx_plus_2NGHOSTS[0]-NGHOSTS)/4;
for(int r_ext_idx = (Nxx_plus_2NGHOSTS[0]-NGHOSTS)/4; r_ext_idx<(Nxx_plus_2NGHOSTS[0]-NGHOSTS)*0.9;r_ext_idx+=5) {
REAL r_ext;
{
REAL xx0 = xx[0][r_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
#include "xxCart.h"
r_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
/* TOO VERBOSE:
sprintf(filename,"outPsi4-%d-r%.2f-%08d.txt",Nxx[0],(double)r_ext,n);
FILE *out2DPsi4 = fopen(filename, "w");
LOOP_REGION(r_ext_idx,r_ext_idx+1,
NGHOSTS-1,Nxx_plus_2NGHOSTS[1]-NGHOSTS+1,
NGHOSTS-1,Nxx_plus_2NGHOSTS[2]-NGHOSTS+1) {
psi4(Nxx_plus_2NGHOSTS, i0,i1,i2, dxx,xx, evol_gfs, aux_gfs);
const int idx = IDX3(i0,i1,i2);
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xxCart.h"
REAL r = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
fprintf(out2DPsi4,"%e %e %e %e %.15e %.15e\n",
(double)((n)*dt),
(double)r,
(double)xx[1][i1],
(double)xx[2][i2],
(double)aux_gfs[IDX4pt(PSI4RGF,idx)],
(double)aux_gfs[IDX4pt(PSI4IGF,idx)]);
}
fclose(out2DPsi4);
*/
sprintf(filename,"outPsi4_l2m0-%d-r%.2f.txt",Nxx[0],(double)r_ext);
FILE *outPsi4_l2m0;
if(n==0) outPsi4_l2m0 = fopen(filename, "w");
else outPsi4_l2m0 = fopen(filename, "a");
REAL Psi4r_0pt_l2m0 = 0.0,Psi4r_1pt_l2m0 = 0.0,Psi4r_2pt_l2m0 = 0.0;
REAL Psi4i_0pt_l2m0 = 0.0,Psi4i_1pt_l2m0 = 0.0,Psi4i_2pt_l2m0 = 0.0;
LOOP_REGION(r_ext_idx,r_ext_idx+1,
NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS,
NGHOSTS,NGHOSTS+1) {
psi4(Nxx_plus_2NGHOSTS, i0,i1,i2, dxx,xx, evol_gfs, aux_gfs);
const int idx = IDX3(i0,i1,i2);
const REAL th = xx[1][i1];
const REAL ph = xx[2][i2];
// Construct integrand for Psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
// Based on http://www.demonstrations.wolfram.com/SpinWeightedSphericalHarmonics/
// we have {}_{s}_Y_{lm} = {}_{-2}_Y_{20} = 1/4 * sqrt(15 / (2*pi)) * sin(th)^2
// Confirm integrand is correct:
// Integrate[(1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2) (1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2)*2*Pi*Sin[th], {th, 0, Pi}]
// ^^^ equals 1.
REAL ReY_sm2_l2_m0,ImY_sm2_l2_m0;
SpinWeight_minus2_SphHarmonics(2,0, th,ph, &ReY_sm2_l2_m0,&ImY_sm2_l2_m0);
const REAL sinth = sin(xx[1][i1]);
/* psi4 *{}_{-2}_Y_{20}* (int dphi)* sinth*dtheta */
Psi4r_0pt_l2m0 += aux_gfs[IDX4pt(PSI4R_0PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4r_1pt_l2m0 += aux_gfs[IDX4pt(PSI4R_1PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4r_2pt_l2m0 += aux_gfs[IDX4pt(PSI4R_2PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4i_0pt_l2m0 += aux_gfs[IDX4pt(PSI4I_0PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4i_1pt_l2m0 += aux_gfs[IDX4pt(PSI4I_1PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
Psi4i_2pt_l2m0 += aux_gfs[IDX4pt(PSI4I_2PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx[1];
}
fprintf(outPsi4_l2m0,"%e %.15e %.15e %.15e %.15e %.15e %.15e\n", (double)((n)*dt),
(double)Psi4r_0pt_l2m0,(double)Psi4r_1pt_l2m0,(double)Psi4r_2pt_l2m0,
(double)Psi4i_0pt_l2m0,(double)Psi4i_1pt_l2m0,(double)Psi4i_2pt_l2m0);
fclose(outPsi4_l2m0);
}
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, evol_gfs, aux_gfs);
// const int i1mid = Nxx_plus_2NGHOSTS[1]/2;
sprintf(filename,"out1D-%d.txt",Nxx[0]);
FILE *out2D;
if(n==0) out2D = fopen(filename, "w");
else out2D = fopen(filename, "a");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS,
Nxx_plus_2NGHOSTS[1]/2,Nxx_plus_2NGHOSTS[1]/2+1,
Nxx_plus_2NGHOSTS[2]/2,Nxx_plus_2NGHOSTS[2]/2+1) {
const int idx = IDX3(i0,i1,i2);
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xxCart.h"
fprintf(out2D,"%e %e %e\n",
(double)sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]),
(double)evol_gfs[IDX4pt(CFGF,idx)],(double)log10(fabs(aux_gfs[IDX4pt(HGF,idx)])));
}
fprintf(out2D,"\n\n");
fclose(out2D);
}
/***************************************************/
/* Implement RK4 for Method of Lines timestepping: */
/***************************************************/
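/* Note: each k_i below is pre-multiplied by dt, so the final update is simply
 *       y_{n+1} = y_n + (k1 + 2*k2 + 2*k3 + k4)/6. */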
/* -= RK4: Step 1 of 4 =- */
/* First evaluate k1 = RHSs expression */
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, xx,evol_gfs, k1_gfs);
/* Next k1 -> k1*dt, and then set the input for */
/* the next RHS eval call to y_n+k1/2 */
#pragma omp parallel for
for(int i=0;i<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;i++) {
k1_gfs[i] *= dt;
next_in_gfs[i] = evol_gfs[i] + k1_gfs[i]*0.5;
}
/* Finally, apply boundary conditions to */
/* next_in_gfs, so its data are set everywhere. */
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, next_in_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, next_in_gfs);
/* -= RK4: Step 2 of 4 =- */
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, xx,next_in_gfs, k2_gfs);
#pragma omp parallel for
for(int i=0;i<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;i++) {
k2_gfs[i] *= dt;
next_in_gfs[i] = evol_gfs[i] + k2_gfs[i]*0.5;
}
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, next_in_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, next_in_gfs);
/* -= RK4: Step 3 of 4 =- */
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, xx,next_in_gfs, k3_gfs);
#pragma omp parallel for
for(int i=0;i<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;i++) {
k3_gfs[i] *= dt;
next_in_gfs[i] = evol_gfs[i] + k3_gfs[i];
}
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, next_in_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, next_in_gfs);
/* -= RK4: Step 4 of 4 =- */
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, xx,next_in_gfs, k4_gfs);
#pragma omp parallel for
for(int i=0;i<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;i++) {
k4_gfs[i] *= dt;
evol_gfs[i] += (1.0/6.0)*(k1_gfs[i] + 2.0*k2_gfs[i] + 2.0*k3_gfs[i] + k4_gfs[i]);
}
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, evol_gfs);
enforce_detgammabar_constraint(Nxx_plus_2NGHOSTS, xx, evol_gfs);
/* Validation: Output Hamiltonian constraint violation */
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, evol_gfs, aux_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS[1]/2;
const int i2mid=Nxx_plus_2NGHOSTS[2]/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xxCart.h"
int idx = IDX3(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",(double)xCart[1],(double)xCart[2], (double)evol_gfs[IDX4pt(CFGF,idx)],(double)log10(fabs(aux_gfs[IDX4pt(HGF,idx)])));
}
fclose(out2D);
}
// Progress indicator printing to stdout
// Measure average time per iteration
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Progress indicator printing to stderr
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the line.
/* Step 4: Free all allocated memory */
free(bc_parity_conditions);
free(bc_gz_map);
free(k4_gfs);
free(k3_gfs);
free(k2_gfs);
free(k1_gfs);
free(aux_gfs);
free(next_in_gfs);
free(evol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
!rm -f BrillLindquist_Playground out*.txt
# Nr = 270
# Ntheta = 8
Nr = 800
Ntheta = 16
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
!gcc -Ofast -march=native -ftree-parallelize-loops=2 -fopenmp BSSN/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
print("Now running. Should take ~30 minutes...\n")
start = time.time()
#!taskset -c 0,1,2,3 ./BrillLindquist_Playground {Nr} {Ntheta} 2 1.0
!taskset -c 0,1,2,3 ./BrillLindquist_Playground {Nr} {Ntheta} 2 1.0
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
# print("Now compiling, should take ~10 seconds...\n")
# start = time.time()
# !gcc -Ofast -march=native -ftree-parallelize-loops=2 -fopenmp BSSN/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm
# end = time.time()
# print("Finished in "+str(end-start)+" seconds.\n\n")
# print("Now running at low resolution. Should take ~30 minutes...\n")
# start = time.time()
# !taskset -c 0,1,2,3,4,5 ./BrillLindquist_Playground 270 8 2 1.0
# end = time.time()
# print("Finished in "+str(end-start)+" seconds.\n\n")
# # print("Now running at higher-resolution. Should take ~75 seconds...\n")
# # start = time.time()
# # !taskset -c 0,1 ./BrillLindquist_Playground 320 8 2 1.0
# # end = time.time()
# # print("Finished in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 8: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to $${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{-0.0890 t/M} \cos(0.3737 t/M + \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory. Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that
1. Finite-differencing order is set to 10
1. Nr = 800
1. Ntheta = 16
1. Outer boundary (`AMPL`) set to 300
1. Final time (`t_final`) set to 275
1. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
###Code
%matplotlib inline
import numpy as np
# from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
# from IPython.display import HTML
# import matplotlib.image as mgimg
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 7e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r1,psi4r2,psi4r3,psi4i1,psi4i2,psi4i3 = np.loadtxt("outPsi4_l2m0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
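# psi4r1..psi4r3 are the three pieces of Re(psi4) written by the C code (the _0pt,_1pt,_2pt
# gridfunctions); their sum is the full l=2,m=0 Re(psi4) at this extraction radius.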
for i in range(len(psi4r1)):
retarded_time = t[i]-float(extraction_radius)
t_retarded.append(retarded_time)
log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r1[i] + psi4r2[i] + psi4r3[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel(r'$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,240])
ax.set_ylim([-13,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 9: Data Visualization Animations \[Back to [top](toc)\]$$\label{visual}$$ Step 9.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \[Back to [top](toc)\]$$\label{installdownload}$$ Note that if you are not running this within `mybinder`, but on a Windows system, `ffmpeg` must be installed using a separate package (on [this site](http://ffmpeg.org/)), or (if running Jupyter within Anaconda, use the command: `conda install -c conda-forge ffmpeg`).
###Code
# print("Ignore any warnings or errors from the following command:")
# !pip install scipy > /dev/null
# check_for_ffmpeg = !which ffmpeg >/dev/null && echo $?
# if check_for_ffmpeg != ['0']:
# print("Couldn't find ffmpeg, so I'll download it.")
# # Courtesy https://johnvansickle.com/ffmpeg/
# !wget https://math.wvu.edu/~zetienne/ffmpeg-static-amd64-johnvansickle.tar.xz
# !tar Jxf ffmpeg-static-amd64-johnvansickle.tar.xz
# print("Copying ffmpeg to ~/.local/bin/. Assumes ~/.local/bin is in the PATH.")
# !mkdir ~/.local/bin/
# !cp ffmpeg-static-amd64-johnvansickle/ffmpeg ~/.local/bin/
# print("If this doesn't work, then install ffmpeg yourself. It should work fine on mybinder.")
###Output
_____no_output_____
###Markdown
Step 9.b: Generate images for visualization animation \[Back to [top](toc)\]$$\label{genimages}$$ Here we loop through the data files output by the executable compiled and run in [the previous step](mainc), generating a [png](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image for each data file.**Special thanks to Terrence Pierre Jacques. His work with the first versions of these scripts greatly contributed to the scripts as they exist below.**
###Code
# ## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##
# import numpy as np
# from scipy.interpolate import griddata
# import matplotlib.pyplot as plt
# from matplotlib.pyplot import savefig
# from IPython.display import HTML
# import matplotlib.image as mgimg
# import glob
# import sys
# from matplotlib import animation
# globby = glob.glob('out96-00*.txt')
# file_list = []
# for x in sorted(globby):
# file_list.append(x)
# bound=1.4
# pl_xmin = -bound
# pl_xmax = +bound
# pl_ymin = -bound
# pl_ymax = +bound
# for filename in file_list:
# fig = plt.figure()
# x,y,cf,Ham = np.loadtxt(filename).T #Transposed for easier unpacking
# plotquantity = cf
# plotdescription = "Numerical Soln."
# plt.title("Black Hole Head-on Collision (conf factor)")
# plt.xlabel("y/M")
# plt.ylabel("z/M")
# grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:300j, pl_ymin:pl_ymax:300j]
# points = np.zeros((len(x), 2))
# for i in range(len(x)):
# # Zach says: No idea why x and y get flipped...
# points[i][0] = y[i]
# points[i][1] = x[i]
# grid = griddata(points, plotquantity, (grid_x, grid_y), method='nearest')
# gridcub = griddata(points, plotquantity, (grid_x, grid_y), method='cubic')
# im = plt.imshow(gridcub, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# ax = plt.colorbar()
# ax.set_label(plotdescription)
# savefig(filename+".png",dpi=150)
# plt.close(fig)
# sys.stdout.write("%c[2K" % 27)
# sys.stdout.write("Processing file "+filename+"\r")
# sys.stdout.flush()
###Output
_____no_output_____
###Markdown
Step 9.c: Generate visualization animation \[Back to [top](toc)\]$$\label{genvideo}$$ In the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.
###Code
# ## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# # https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# # https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
# fig = plt.figure(frameon=False)
# ax = fig.add_axes([0, 0, 1, 1])
# ax.axis('off')
# myimages = []
# for i in range(len(file_list)):
# img = mgimg.imread(file_list[i]+".png")
# imgplot = plt.imshow(img)
# myimages.append([imgplot])
# ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
# plt.close()
# ani.save('BH_Head-on_Collision.mp4', fps=5,dpi=150)
## VISUALIZATION ANIMATION, PART 3: Display movie as embedded HTML5 (see next cell) ##
# https://stackoverflow.com/questions/18019477/how-can-i-play-a-local-video-in-my-ipython-notebook
# %%HTML
# <video width="480" height="360" controls>
# <source src="BH_Head-on_Collision.mp4" type="video/mp4">
# </video>
###Output
_____no_output_____
###Markdown
Step 10: Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling) \[Back to [top](toc)\]$$\label{convergence}$$
###Code
# x96,y96,valuesCF96,valuesHam96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking
# pl_xmin = -2.5
# pl_xmax = +2.5
# pl_ymin = -2.5
# pl_ymax = +2.5
# grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
# points96 = np.zeros((len(x96), 2))
# for i in range(len(x96)):
# points96[i][0] = x96[i]
# points96[i][1] = y96[i]
# grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest')
# grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic')
# grid96 = griddata(points96, valuesHam96, (grid_x, grid_y), method='nearest')
# grid96cub = griddata(points96, valuesHam96, (grid_x, grid_y), method='cubic')
# # fig, ax = plt.subplots()
# plt.clf()
# plt.title("96x16 Num. Err.: log_{10}|Ham|")
# plt.xlabel("x/M")
# plt.ylabel("z/M")
# fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
# cb = plt.colorbar(fig96cub)
# x72,y72,valuesCF72,valuesHam72 = np.loadtxt('out72.txt').T #Transposed for easier unpacking
# points72 = np.zeros((len(x72), 2))
# for i in range(len(x72)):
# points72[i][0] = x72[i]
# points72[i][1] = y72[i]
# grid72 = griddata(points72, valuesHam72, (grid_x, grid_y), method='nearest')
# griddiff_72_minus_96 = np.zeros((100,100))
# griddiff_72_minus_96_1darray = np.zeros(100*100)
# gridx_1darray_yeq0 = np.zeros(100)
# grid72_1darray_yeq0 = np.zeros(100)
# grid96_1darray_yeq0 = np.zeros(100)
# count = 0
# for i in range(100):
# for j in range(100):
# griddiff_72_minus_96[i][j] = grid72[i][j] - grid96[i][j]
# griddiff_72_minus_96_1darray[count] = griddiff_72_minus_96[i][j]
# if j==49:
# gridx_1darray_yeq0[i] = grid_x[i][j]
# grid72_1darray_yeq0[i] = grid72[i][j] + np.log10((72./96.)**4)
# grid96_1darray_yeq0[i] = grid96[i][j]
# count = count + 1
# plt.clf()
# fig, ax = plt.subplots()
# plt.title("4th-order Convergence, at t/M=7.5 (post-merger; horiz at x/M=+/-1)")
# plt.xlabel("x/M")
# plt.ylabel("log10(Relative error)")
# ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
# ax.plot(gridx_1darray_yeq0, grid72_1darray_yeq0, 'k--', label='Nr=72, mult by (72/96)^4')
# ax.set_ylim([-8.5,0.5])
# legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
# plt.show()
###Output
_____no_output_____
###Markdown
Step 11: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Head-On Black Hole Collision Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical-like coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial notebooks ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Minimal sampling in the $\phi$ direction greatly speeds up the simulation.**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). Finally, excellent agreement is seen in the gravitational wave signal from the ringing remnant black hole for multiple decades in amplitude when compared to black hole perturbation theory predictions. NRPy+ Source Code for this module: * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: * [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.* [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates* [BSSN/Psi4.py](../edit/BSSN/Psi4.py); [\[**tutorial**\]](Tutorial-Psi4.ipynb): Generates expressions for $\psi_4$, the outgoing Weyl scalar + [BSSN/Psi4_tetrads.py](../edit/BSSN/Psi4_tetrads.py); [\[**tutorial**\]](Tutorial-Psi4_tetrads.ipynb): Generates quasi-Kinnersley tetrad needed for $\psi_4$-based gravitational wave extraction Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). 
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration * [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on Brill-Lindquist initial data**](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [**NRPy+ tutorial on validating Brill-Lindquist initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb).1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: 1. At the start of each iteration in time, output the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). 1. At each RK time substep, do the following: 1. Evaluate BSSN RHS expressions * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](bssn): Output C code for BSSN spacetime solve 1. [Step 3.a](bssnrhs): Output C code for BSSN RHS expressions 1. [Step 3.b](hamconstraint): Output C code for Hamiltonian constraint 1. [Step 3.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint 1. [Step 3.d](spinweight): Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 1. [Step 3.e](psi4): $\psi_4$ 1. [Step 3.f](cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`1. [Step 4](bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system1. [Step 5](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 6](visualize): Data Visualization Animations 1. [Step 6.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 6.b](genimages): Generate images for visualization animation 1. [Step 6.c](genvideo): Generate visualization animation1. 
[Step 7](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P0: Set the option for evolving the initial data forward in time
# Options include "low resolution" and "high resolution"
EvolOption = "low resolution"
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Two_BHs_Collide_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.a: Enable SIMD-optimized code?
# I.e., generate BSSN and Ricci C code kernels using SIMD-vectorized
# compiler intrinsics, which *greatly improve the code's performance*,
# though at the expense of making the C-code kernels less
# human-readable.
# * Important note in case you wish to modify the BSSN/Ricci kernels
# here by adding expressions containing transcendental functions
# (e.g., certain scalar fields):
# Note that SIMD-based transcendental function intrinsics are not
# supported by the default installation of gcc or clang (you will
# need to use e.g., the SLEEF library from sleef.org, for this
# purpose). The Intel compiler suite does support these intrinsics
# however without the need for external libraries.
SIMD_enable = True
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
if EvolOption == "low resolution":
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 150 # Length scale of computational domain
final_time = 200 # Final time
FD_order = 8 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
elif EvolOption == "high resolution":
# See above for description of the domain_size parameter
domain_size = 300 # Length scale of computational domain
final_time = 275 # Final time
FD_order = 10 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.2 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the timestepping (RK) method, the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, ¶ms, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, ¶ms, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
###Output
_____no_output_____
###Markdown
Step 1.c: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numerical solution to the BSSN equations to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:$$\Delta t \le \frac{\min(ds_i)}{c},$$where $c$ is the wavespeed, and$$ds_i = h_i \Delta x^i$$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
###Code
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
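# For intuition, find_timestep() implements the CFL condition above: it evaluates the
# proper distances ds_i = h_i*dxx_i at every gridpoint and returns
# dt = CFL_FACTOR*min(ds_i)/wavespeed. A minimal Python sketch at a single sample point,
# using ordinary Spherical scale factors h=(1, r, r*sin(theta)) and made-up grid spacings
# (the actual SinhSpherical scale factors are generated symbolically by NRPy+):
import numpy as np
_dxx = np.array([0.05, np.pi/16, np.pi/2])   # hypothetical (xx0,xx1,xx2) grid spacings
_r, _th = 0.1, 0.5*np.pi                     # hypothetical sample point
_h = np.array([1.0, _r, _r*np.sin(_th)])     # Spherical reference-metric scale factors
_wavespeed, _CFL_FACTOR = 1.0, 0.5
_dt_sketch = _CFL_FACTOR*np.min(_h*_dxx)/_wavespeed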
###Output
_____no_output_____
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
print("Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.")
start = time.time()
bl.BrillLindquist() # Registers ID C function in dictionary, used below to output to file.
with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file:
file.write(outC_function_dict["initial_data"])
end = time.time()
print("Finished BL initial data codegen in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 3: Output C code for BSSN spacetime solve \[Back to [top](toc)\]$$\label{bssn}$$ Step 3.a: Output C code for BSSN RHS expressions \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,SIMD_enable=True",
upwindcontrolvec=betaU).replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,SIMD_enable=True").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished Ricci C codegen in " + str(end - start) + " seconds.")
###Output
Generating symbolic expressions for BSSN RHSs...
Finished BSSN symbolic expressions in 3.7352423667907715 seconds.
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next we output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However, it does not, owing to numerical error (typically truncation and roundoff). We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation and, ultimately, determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.
###Code
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
end = time.time()
print("Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then we enforce the conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb). Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus, after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.d: Computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to 2 \[Back to [top](toc)\]$$\label{spinweight}$$ Here we output all spin-weight $-2$ spherical harmonics [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb) for $\ell=0$ up to and including $\ell=2$.
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
cmd.mkdir(os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics"))
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=2,
filename=os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"))
###Output
_____no_output_____
###Markdown
Step 3.e: Output $\psi_4$ \[Back to [top](toc)\]$$\label{psi4}$$We output $\psi_4$, assuming the quasi-Kinnersley tetrad of [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf).
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
print("Generating symbolic expressions for psi4...")
start = time.time()
BP4.Psi4()
end = time.time()
print("Finished psi4 symbolic expressions in "+str(end-start)+" seconds.")
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
desc="""Since it's so expensive to compute, instead of evaluating
psi_4 at all interior points, this functions evaluates it on a
point-by-point basis."""
name="psi4"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """const paramstruct *restrict params,
const int i0,const int i1,const int i2,
REAL *restrict xx[3], const REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = """
const int idx = IDX3S(i0,i1,i2);
const REAL xx0 = xx[0][i0];const REAL xx1 = xx[1][i1];const REAL xx2 = xx[2][i2];
// Real part of psi_4, divided into 3 terms
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// Imaginary part of psi_4, divided into 3 terms
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}""")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4re_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4re_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4re_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4im_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4im_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4im_pt.replace("IDX4","IDX4S"))
end = time.time()
print("Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
# Step 0: Import the multiprocessing module.
import multiprocessing
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [Psi4re,Psi4re,Psi4re,Psi4im,Psi4im,Psi4im, BrillLindquistID,BSSN_RHSs,Ricci,Hamiltonian,gammadet]
# Step 1.a: Define master function for calling all above functions.
# Note that lambdifying this doesn't work in Python 3
def master_func(idx):
if idx < 3: # Call Psi4re(arg)
funcs[idx](idx)
elif idx < 6: # Call Psi4im(arg-3)
funcs[idx](idx-3)
else: # All non-Psi4 functions:
funcs[idx]()
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.b: Import the multiprocessing module.
import multiprocessing
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
# This will happen on Android and Windows systems
for idx in range(len(funcs)):
master_func(idx)
###Output
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.
Generating C code for psi4_re_pt1 in SinhSpherical coordinates.
Generating C code for psi4_im_pt0 in SinhSpherical coordinates.
Generating C code for psi4_re_pt2 in SinhSpherical coordinates.
Generating C code for psi4_im_pt2 in SinhSpherical coordinates.
Generating C code for psi4_im_pt1 in SinhSpherical coordinates.
Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.
Generating C code for BSSN RHSs in SinhSpherical coordinates.
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Generating C code for Ricci tensor in SinhSpherical coordinates.
Output C function enforce_detgammabar_constraint() to file BSSN_Two_BHs_Collide_Ccodes/enforce_detgammabar_constraint.h
Finished gamma constraint C codegen in 0.09074592590332031 seconds.
Finished generating psi4_im_pt2 in 8.661821126937866 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Two_BHs_Collide_Ccodes/Hamiltonian_constraint.h
Finished Hamiltonian C codegen in 9.526127815246582 seconds.
Finished generating psi4_im_pt1 in 10.364755630493164 seconds.
Finished generating psi4_re_pt2 in 15.475677013397217 seconds.
Finished BL initial data codegen in 16.641889572143555 seconds.
Finished generating psi4_re_pt1 in 19.271633625030518 seconds.
Output C function rhs_eval() to file BSSN_Two_BHs_Collide_Ccodes/rhs_eval.h
Finished BSSN_RHS C codegen in 28.120524644851685 seconds.
Finished generating psi4_im_pt0 in 30.16318106651306 seconds.
Output C function Ricci_eval() to file BSSN_Two_BHs_Collide_Ccodes/Ricci_eval.h
Finished Ricci C codegen in 30.861683130264282 seconds.
Finished generating psi4_re_pt0 in 55.99243879318237 seconds.
###Markdown
Step 3.f: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.f.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.f.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
// Set free-parameter values for BSSN evolution:
params.eta = 2.0;
// Set free parameters for the (Brill-Lindquist) initial data
params.BH1_posn_x = 0.0; params.BH1_posn_y = 0.0; params.BH1_posn_z =+0.25;
params.BH2_posn_x = 0.0; params.BH2_posn_y = 0.0; params.BH2_posn_z =-0.25;
params.BH1_mass = 0.5; params.BH2_mass = 0.5;
"""+"""
const REAL final_time = """+str(final_time)+";\n")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.f.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.f.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.f.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
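# Why domain_size and sinh_width matter: assuming NRPy+'s standard SinhSpherical mapping,
# the physical radius is r(xx0) = AMPL*sinh(xx0/SINHW)/sinh(1/SINHW) with xx0 in [0,1],
# so gridpoints cluster exponentially near the origin (where the two punctures sit),
# while the outer boundary lies at r = AMPL = domain_size. Illustrative evaluation:
import numpy as np
_xx0 = np.linspace(0.0, 1.0, 6)
_r_of_xx0 = domain_size*np.sinh(_xx0/sinh_width)/np.sinh(1.0/sinh_width)
# e.g., with domain_size=150, sinh_width=0.2: xx0 = 0.2, 0.4, 0.6, 0.8 map to
# r ~ 2.4, 7.3, 20, 55, while xx0 = 1 maps to r = 150.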
###Output
_____no_output_____
###Markdown
Step 4: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
Auxiliary gridfunction "psi4i_0pt" has parity type 0.
Auxiliary gridfunction "psi4i_1pt" has parity type 0.
Auxiliary gridfunction "psi4i_2pt" has parity type 0.
Auxiliary gridfunction "psi4r_0pt" has parity type 0.
Auxiliary gridfunction "psi4r_1pt" has parity type 0.
Auxiliary gridfunction "psi4r_2pt" has parity type 0.
AuxEvol gridfunction "RbarDD00" has parity type 4.
AuxEvol gridfunction "RbarDD01" has parity type 5.
AuxEvol gridfunction "RbarDD02" has parity type 6.
AuxEvol gridfunction "RbarDD11" has parity type 7.
AuxEvol gridfunction "RbarDD12" has parity type 8.
AuxEvol gridfunction "RbarDD22" has parity type 9.
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 5: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor. Can be overwritten at the command line.
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.""")
%%writefile $Ccodesdir/BrillLindquist_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
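// Worked example (illustrative): with Nxx_plus_2NGHOSTS0=Nxx_plus_2NGHOSTS1=Nxx_plus_2NGHOSTS2=4,
// gridfunction g=1 at point (i,j,k)=(2,3,0) lives at IDX4S(1,2,3,0) = 2 + 4*(3 + 4*(0 + 4*1)) = 78.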
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
#include "initial_data.h"
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// Step P14: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
#include "psi4.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = final_time;
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(¶ms, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
REAL out_approx_every_t = 0.2;
int output_every_N = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output diagnostics
if(n%output_every_N == 0) {
// Step 3.a.i: Output psi4 spin-weight -2 decomposed data, every N_output_every */
#include "SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
char filename[100];
for(int r_ext_idx = (Nxx_plus_2NGHOSTS0-NGHOSTS)/4;
r_ext_idx<(Nxx_plus_2NGHOSTS0-NGHOSTS)*0.9;
r_ext_idx+=5) {
REAL r_ext;
{
REAL xx0 = xx[0][r_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
xxCart(¶ms,xx,r_ext_idx,1,1,xCart);
r_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
sprintf(filename,"outPsi4_l2m0-%d-r%.2f.txt",Nxx[0],(double)r_ext);
FILE *outPsi4_l2m0;
if(n==0) outPsi4_l2m0 = fopen(filename, "w");
else outPsi4_l2m0 = fopen(filename, "a");
REAL Psi4r_0pt_l2m0 = 0.0,Psi4r_1pt_l2m0 = 0.0,Psi4r_2pt_l2m0 = 0.0;
REAL Psi4i_0pt_l2m0 = 0.0,Psi4i_1pt_l2m0 = 0.0,Psi4i_2pt_l2m0 = 0.0;
LOOP_REGION(r_ext_idx,r_ext_idx+1,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,NGHOSTS+1) {
psi4(¶ms, i0,i1,i2, xx, y_n_gfs, diagnostic_output_gfs);
const int idx = IDX3S(i0,i1,i2);
const REAL th = xx[1][i1];
const REAL ph = xx[2][i2];
// Construct integrand for Psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
// Based on http://www.demonstrations.wolfram.com/SpinWeightedSphericalHarmonics/
// we have {}_{s}_Y_{lm} = {}_{-2}_Y_{20} = 1/4 * sqrt(15 / (2*pi)) * sin(th)^2
// Confirm integrand is correct:
// Integrate[(1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2) (1/4 Sqrt[15/(2 \[Pi])] Sin[th]^2)*2*Pi*Sin[th], {th, 0, Pi}]
// ^^^ equals 1.
REAL ReY_sm2_l2_m0,ImY_sm2_l2_m0;
SpinWeight_minus2_SphHarmonics(2,0, th,ph, &ReY_sm2_l2_m0,&ImY_sm2_l2_m0);
const REAL sinth = sin(xx[1][i1]);
/* psi4 *{}_{-2}_Y_{20}* (int dphi)* sinth*dtheta */
Psi4r_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_0PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_1PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4r_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4R_2PTGF,idx)]*ReY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_0pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_0PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_1pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_1PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
Psi4i_2pt_l2m0 += diagnostic_output_gfs[IDX4ptS(PSI4I_2PTGF,idx)]*ImY_sm2_l2_m0 * (2*M_PI) * sinth*dxx1;
}
fprintf(outPsi4_l2m0,"%e %.15e %.15e %.15e %.15e %.15e %.15e\n", (double)((n)*dt),
(double)Psi4r_0pt_l2m0,(double)Psi4r_1pt_l2m0,(double)Psi4r_2pt_l2m0,
(double)Psi4i_0pt_l2m0,(double)Psi4i_1pt_l2m0,(double)Psi4i_2pt_l2m0);
fclose(outPsi4_l2m0);
}
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
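      // (Thus the "gp/s" figure printed below is RHS gridpoint-evaluations per second:
      //  4 RK4 substeps x interior gridpoints x iterations completed, divided by elapsed wall-clock time.)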
// Step 3.d.ii: Output simulation progress to stderr
if(n % 40 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
    } // End progress indicator if(n % 40 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
if EvolOption == "low resolution":
Nr = 270
Ntheta = 8
elif EvolOption == "high resolution":
Nr = 800
Ntheta = 16
else:
print("Error: unknown EvolOption!")
sys.exit(1)
CFL_FACTOR = 1.0
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"BrillLindquist_Playground.c"), "BrillLindquist_Playground",
compile_mode="optimized")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
start = time.time()
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
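# The command-line arguments passed below are, in order: Nx0 (radial), Nx1 (theta),
# Nx2 (phi; set to 2 for an axisymmetric run), and the CFL factor, matching the
# argv[] parsing in BrillLindquist_Playground.c.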
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2 "+str(CFL_FACTOR))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm`...
Finished executing in 10.638438701629639 seconds.
Finished compilation.
Finished in 10.649491786956787 seconds.
Now running. Should take ~30 minutes...
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./BrillLindquist_Playground 270 8 2 1.0`...
[2KIt: 27200 t=199.93 dt=7.35e-03 | 100.0%; ETA 0 s | t/h 2869.11 | gp/s 1.87e+066
Finished executing in 251.19173169136047 seconds.
Finished in 251.20933508872986 seconds.
###Markdown
Step 6: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to$${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{-0.0890 t/M} \cos(0.3737 t/M + \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory. Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that
1. Finite-differencing order is set to 10
1. Nr = 800
1. Ntheta = 16
1. Outer boundary (`AMPL`) set to 300
1. Final time (`t_final`) set to 275
1. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
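As a quick consistency check (an aside, not part of the analysis below): the fit parameters above are simply the fundamental $\ell=2$, $n=0$ quasinormal mode of a nonspinning black hole, $M\omega \approx 0.3737 - 0.0890\,i$ (see Berti et al above), so the expected ringdown damping time is$$\tau = \frac{M}{0.0890} \approx 11.2\,M,$$i.e., the waveform amplitude should drop by one order of magnitude roughly every $\ln(10)/0.0890 \approx 26\,M$ of retarded time.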
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 1.8e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r1,psi4r2,psi4r3,psi4i1,psi4i2,psi4i3 = np.loadtxt("outPsi4_l2m0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
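# Note: psi4 falls off roughly as 1/r far from the source, so below we rescale by the
# extraction radius before taking log10; psi4r1..psi4r3 are the three pieces into which
# the C code splits Re(psi4), summed here. The model curve uses the same damped cosine
# quoted in the markdown cell above.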
for i in range(len(psi4r1)):
    retarded_time = t[i]-float(extraction_radius)
    t_retarded.append(retarded_time)
    log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r1[i] + psi4r2[i] + psi4r3[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel(r'$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,final_time - float(extraction_radius)+10])
ax.set_ylim([-13.5,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
Start-to-Finish Example: Head-On Black Hole Collision Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical-like coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial notebooks ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Minimal sampling in the $\phi$ direction greatly speeds up the simulation.**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). Finally, excellent agreement is seen in the gravitational wave signal from the ringing remnant black hole for multiple decades in amplitude when compared to black hole perturbation theory predictions. NRPy+ Source Code for this module: * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: * [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.* [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates* [BSSN/Psi4.py](../edit/BSSN/Psi4.py); [\[**tutorial**\]](Tutorial-Psi4.ipynb): Generates expressions for $\psi_4$, the outgoing Weyl scalar + [BSSN/Psi4_tetrads.py](../edit/BSSN/Psi4_tetrads.py); [\[**tutorial**\]](Tutorial-Psi4_tetrads.ipynb): Generates quasi-Kinnersley tetrad needed for $\psi_4$-based gravitational wave extraction Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). 
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration * [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on Brill-Lindquist initial data**](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [**NRPy+ tutorial on validating Brill-Lindquist initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb).1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: 1. At the start of each iteration in time, output the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). 1. At each RK time substep, do the following: 1. Evaluate BSSN RHS expressions * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](bssn): Output C code for BSSN spacetime solve 1. [Step 3.a](bssnrhs): Output C code for BSSN RHS expressions 1. [Step 3.b](hamconstraint): Output C code for Hamiltonian constraint 1. [Step 3.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint 1. [Step 3.d](psi4): Compute $\psi_4$, which encodes gravitational wave information in our numerical relativity calculations 1. [Step 3.e](decomposepsi4): Decompose $\psi_4$ into spin-weight -2 spherical harmonics 1. [Step 3.e.i](spinweight): Output ${}^{-2}Y_{\ell,m}$, up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here) 1. [Step 3.e.ii](full_diag): Decomposition of $\psi_4$ into spin-weight -2 spherical harmonics: Full C-code diagnostic implementation 1. [Step 3.f](coutput): Output all NRPy+ C-code kernels, in parallel if possible 1. [Step 3.g](cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`1. [Step 4](bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system1. 
[Step 5](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 6](visualize): Data Visualization Animations 1. [Step 6.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 6.b](genimages): Generate images for visualization animation 1. [Step 6.c](genvideo): Generate visualization animation1. [Step 7](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P0: Set the option for evolving the initial data forward in time
# Options include "low resolution" and "high resolution"
EvolOption = "low resolution"
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outC_function_dict # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Two_BHs_Collide_Ccodes_psi4")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir, "output")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.a: Enable SIMD-optimized code?
# I.e., generate BSSN and Ricci C code kernels using SIMD-vectorized
# compiler intrinsics, which *greatly improve the code's performance*,
# though at the expense of making the C-code kernels less
# human-readable.
# * Important note in case you wish to modify the BSSN/Ricci kernels
# here by adding expressions containing transcendental functions
# (e.g., certain scalar fields):
# Note that SIMD-based transcendental function intrinsics are not
# supported by the default installation of gcc or clang (you will
# need to use e.g., the SLEEF library from sleef.org, for this
# purpose). The Intel compiler suite does support these intrinsics
# however without the need for external libraries.
enable_SIMD = True
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
# Decompose psi_4 (second time derivative of gravitational
# wave strain) into all spin-weight=-2
# l,m spherical harmonics, starting at l=2
# going up to and including l_max, set here:
l_max = 2
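# With l_max = 2, the psi4 decomposition diagnostic below loops over l=2 and m=-2,...,+2,
# writing one output file per (l,m) mode at each extraction radius.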
if EvolOption == "low resolution":
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 150 # Length scale of computational domain
FD_order = 8 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
elif EvolOption == "high resolution":
# See above for description of the domain_size parameter
domain_size = 300 # Length scale of computational domain
FD_order = 10 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
else:
print("Error: EvolOption = "+str(EvolOption)+" unrecognized.")
exit(1)
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.2 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the Method of Lines timestepping method (RK_method),
#           the core data type (REAL), and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
REAL = "double" # Best to use double here.
CFL_FACTOR= 1.0 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, ¶ms, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, ¶ms, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammahat_constraint(&rfmstruct, ¶ms, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
###Output
_____no_output_____
###Markdown
Step 1.c: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numerical solution to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:$$\Delta t \le \frac{\min(ds_i)}{c},$$where $c$ is the wavespeed, and$$ds_i = h_i \Delta x^i$$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
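As a purely illustrative sketch of this formula (not part of the generated C code, which handles the chosen `SinhSpherical` coordinates automatically via the `find_timestep()` function output below), here is how one might estimate the CFL-limited timestep by hand for ordinary Spherical coordinates, whose reference-metric scale factors are $h_0=1$, $h_1=r$, $h_2=r\sin\theta$:

```python
import numpy as np

def illustrative_cfl_dt(r, th, ph, CFL_FACTOR=0.5, wavespeed=1.0):
    """Sketch only: dt = CFL_FACTOR * min(ds_i) / c, with ds_i = h_i * dx^i."""
    dr, dth, dph = r[1] - r[0], th[1] - th[0], ph[1] - ph[0]
    R, TH = np.meshgrid(r, th, indexing='ij')
    ds_min = min(float(dr),                                     # h_0 dx^0 = dr
                 float((R * dth).min()),                        # h_1 dx^1 = r dtheta
                 float((R * np.abs(np.sin(TH)) * dph).min()))   # h_2 dx^2 = r sin(theta) dphi
    return CFL_FACTOR * ds_min / wavespeed

# Example on a small cell-centered-style grid (gridpoints avoid r=0 and the poles):
r  = np.linspace(0.5, 10.0, 20)
th = np.linspace(0.1, np.pi - 0.1, 8)
ph = np.linspace(0.1, 2.0*np.pi - 0.1, 4)
print(illustrative_cfl_dt(r, th, ph))
```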
###Code
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
###Output
_____no_output_____
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
print("Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.")
start = time.time()
bl.BrillLindquist() # Registers ID C function in dictionary, used below to output to file.
with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file:
file.write(outC_function_dict["initial_data"])
end = time.time()
print("(BENCH) Finished BL initial data codegen in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 3: Output C code for BSSN spacetime solve \[Back to [top](toc)\]$$\label{bssn}$$ Step 3.a: Output C code for BSSN RHS expressions \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammahat_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammahat_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("(BENCH) Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,enable_SIMD=True",
upwindcontrolvec=betaU),
loopopts = "InteriorPoints,enable_SIMD,enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,enable_SIMD=True"),
loopopts = "InteriorPoints,enable_SIMD,enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished Ricci C codegen in " + str(end - start) + " seconds.")
###Output
Generating symbolic expressions for BSSN RHSs...
(BENCH) Finished BSSN symbolic expressions in 3.0587854385375977 seconds.
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next we output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero; in practice it does not, due to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation and, ultimately, to determine whether the errors are dominated by finite-differencing (truncation) error, as expected.
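As a rough rule of thumb (a hedged estimate, not something computed in this notebook): with the centered finite-differencing order chosen above (`FD_order = 8` for the low-resolution run), the truncation error, and hence the Hamiltonian-constraint violation away from the punctures, should scale as $(\Delta x)^8$, so doubling the resolution in each direction should reduce it by roughly $2^8 = 256$, i.e., about 2.4 orders of magnitude.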
###Code
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False"),
loopopts = "InteriorPoints,enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
###Code
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammahat_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("(BENCH) Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.d: Compute $\psi_4$, which encodes gravitational wave information in our numerical relativity calculations \[Back to [top](toc)\]$$\label{psi4}$$The [Weyl scalar](https://en.wikipedia.org/wiki/Weyl_scalar) $\psi_4$ encodes gravitational wave information in our numerical relativity calculations. For more details on how it is computed, see [this NRPy+ tutorial notebook for information on $\psi_4$](Tutorial-Psi4.ipynb) and [this one on the Quasi-Kinnersley tetrad](Tutorial-Psi4_tetrads.ipynb) (as implemented in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf)).$\psi_4$ is related to the gravitational wave strain via$$\psi_4 = \ddot{h}_+ - i \ddot{h}_\times,$$where $\ddot{h}_+$ is the second time derivative of the $+$ polarization of the gravitational wave strain $h$, and $\ddot{h}_\times$ is the second time derivative of the $\times$ polarization of the gravitational wave strain $h$.
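Note that, since $\psi_4 = \ddot{h}_+ - i \ddot{h}_\times$, the strain polarizations could in principle be recovered by integrating $\psi_4$ twice in time (up to terms linear in $t$); in this notebook we work directly with $\psi_4$, which is what the diagnostics below output.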
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
print("Generating symbolic expressions for psi4...")
start = time.time()
BP4.Psi4()
end = time.time()
print("(BENCH) Finished psi4 symbolic expressions in "+str(end-start)+" seconds.")
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
desc="""Since it's so expensive to compute, instead of evaluating
psi_4 at all interior points, this function evaluates it on a
point-by-point basis."""
name="psi4"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """const paramstruct *restrict params,
const int i0,const int i1,const int i2,
REAL *restrict xx[3], const REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = """
const int idx = IDX3S(i0,i1,i2);
const REAL xx0 = xx[0][i0];const REAL xx1 = xx[1][i1];const REAL xx2 = xx[2][i2];
// Real part of psi_4, divided into 3 terms
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// Imaginary part of psi_4, divided into 3 terms
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}""")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4re_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4re_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4re_pt)
end = time.time()
print("(BENCH) Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4im_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4im_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4im_pt)
end = time.time()
print("(BENCH) Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
###Output
Generating symbolic expressions for psi4...
(BENCH) Finished psi4 symbolic expressions in 13.001359939575195 seconds.
Output C function psi4() to file BSSN_Two_BHs_Collide_Ccodes_psi4/psi4.h
###Markdown
Step 3.e: Decompose $\psi_4$ into spin-weight -2 spherical harmonics \[Back to [top](toc)\]$$\label{decomposepsi4}$$ Instead of measuring $\psi_4$ for all possible (gravitational wave) observers in our simulation domain, we instead decompose it into a natural basis set, which by convention is the spin-weight -2 spherical harmonics.Here we implement the algorithm for decomposing $\psi_4$ into spin-weight -2 spherical harmonic modes. The decomposition is defined as follows:$${}^{-2}\left[\psi_4\right]_{\ell,m}(t,R) = \int \int \psi_4(t,R,\theta,\phi)\ \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] \sin \theta d\theta d\phi,$$where* ${}^{-2}Y^*_{\ell,m}(\theta,\phi)$ is the complex conjugate of the spin-weight $-2$ spherical harmonic $\ell,m$ mode* $R$ is the (fixed) radius at which we extract $\psi_4$ information* $t$ is the time coordinate* $\theta,\phi$ are the polar and azimuthal angles, respectively (we use [the physics notation for spherical coordinates](https://en.wikipedia.org/wiki/Spherical_coordinate_system) here) Step 3.e.i Output ${}^{-2}Y_{\ell,m}$, up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here) \[Back to [top](toc)\]$$\label{spinweight}$$ Here we output all spin-weight $-2$ spherical harmonics [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb) for $\ell=0$ up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here).
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
cmd.mkdir(os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics"))
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=l_max,
filename=os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"))
###Output
_____no_output_____
###Markdown
Step 3.e.ii Decomposition of $\psi_4$ into spin-weight -2 spherical harmonics: Full C-code diagnostic implementation \[Back to [top](toc)\]$$\label{full_diag}$$ Note that this diagnostic implementation assumes that `Spherical`-like coordinates are used (e.g., `SinhSpherical` or `Spherical`), which are the most natural coordinate system for decomposing $\psi_4$ into spin-weight -2 modes.First we process the inputs needed to compute $\psi_4$ at all needed $\theta,\phi$ points
###Code
%%writefile $Ccodesdir/driver_psi4_spinweightm2_decomposition.h
void driver_psi4_spinweightm2_decomposition(const paramstruct *restrict params,
const REAL curr_time,const int R_ext_idx,
REAL *restrict xx[3],
const REAL *restrict y_n_gfs,
REAL *restrict diagnostic_output_gfs) {
#include "set_Cparameters.h"
// Step 1: Set the extraction radius R_ext based on the radial index R_ext_idx
REAL R_ext;
{
REAL xx0 = xx[0][R_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
xx_to_Cart(params,xx,R_ext_idx,1,1,xCart);
R_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
// Step 2: Compute psi_4 at this extraction radius and store to a local 2D array.
const int sizeof_2Darray = sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS);
REAL *restrict psi4r_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray);
REAL *restrict psi4i_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray);
// ... also store theta, sin(theta), and phi to corresponding 1D arrays.
REAL *restrict sinth_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS));
REAL *restrict th_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS));
REAL *restrict ph_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS));
const int i0=R_ext_idx;
#pragma omp parallel for
for(int i1=NGHOSTS;i1<Nxx_plus_2NGHOSTS1-NGHOSTS;i1++) {
th_array[i1-NGHOSTS] = xx[1][i1];
sinth_array[i1-NGHOSTS] = sin(xx[1][i1]);
for(int i2=NGHOSTS;i2<Nxx_plus_2NGHOSTS2-NGHOSTS;i2++) {
ph_array[i2-NGHOSTS] = xx[2][i2];
// Compute real & imaginary parts of psi_4, output to diagnostic_output_gfs
psi4(params, i0,i1,i2, xx, y_n_gfs, diagnostic_output_gfs);
const int idx3d = IDX3S(i0,i1,i2);
const REAL psi4r = (+diagnostic_output_gfs[IDX4ptS(PSI4R_0PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4R_1PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4R_2PTGF,idx3d)]);
const REAL psi4i = (+diagnostic_output_gfs[IDX4ptS(PSI4I_0PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4I_1PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4I_2PTGF,idx3d)]);
// Store result to "2D" array (actually 1D array with 2D storage):
const int idx2d = (i1-NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+(i2-NGHOSTS);
psi4r_at_R_ext[idx2d] = psi4r;
psi4i_at_R_ext[idx2d] = psi4i;
}
}
###Output
Writing BSSN_Two_BHs_Collide_Ccodes_psi4/driver_psi4_spinweightm2_decomposition.h
###Markdown
Next we implement the integral:$${}^{-2}\left[\psi_4\right]_{\ell,m}(t,R) = \int \int \psi_4(t,R,\theta,\phi)\ \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] \sin \theta d\theta d\phi.$$Since $\psi_4(t,R,\theta,\phi)$ and $\left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right]$ are generally complex, for simplicity let's define\begin{align}\psi_4(t,R,\theta,\phi)&=a+i b \\\left[{}^{-2}Y_{\ell,m}(\theta,\phi)\right] &= c + id\\\implies \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] = \left[{}^{-2}Y_{\ell,m}(\theta,\phi)\right]^* &=c-i d\end{align}Then the product (appearing within the integral) will be given by\begin{align}(a + i b) (c-i d) &= (ac + bd) + i(bc - ad),\end{align}which cleanly splits the real and complex parts. For better modularity, we output this algorithm to a function `decompose_psi4_into_swm2_modes()` in file `decompose_psi4_into_swm2_modes.h`. Here, we will call this function from within `output_psi4_spinweight_m2_decomposition()`, but in general it could be called from codes that do not use spherical coordinates, and the `psi4r_at_R_ext[]` and `psi4i_at_R_ext[]` arrays are filled using interpolations.
###Code
%%writefile $Ccodesdir/lowlevel_decompose_psi4_into_swm2_modes.h
void lowlevel_decompose_psi4_into_swm2_modes(const paramstruct *restrict params,
const REAL curr_time, const REAL R_ext,
const REAL *restrict th_array,const REAL *restrict sinth_array,const REAL *restrict ph_array,
const REAL *restrict psi4r_at_R_ext,const REAL *restrict psi4i_at_R_ext) {
#include "set_Cparameters.h"
for(int l=2;l<=L_MAX;l++) { // L_MAX is a global variable, since it must be set in Python (so that SpinWeight_minus2_SphHarmonics() computes enough modes)
for(int m=-l;m<=l;m++) {
// Parallelize the integration loop:
REAL psi4r_l_m = 0.0;
REAL psi4i_l_m = 0.0;
#pragma omp parallel for reduction(+:psi4r_l_m,psi4i_l_m)
for(int i1=0;i1<Nxx_plus_2NGHOSTS1-2*NGHOSTS;i1++) {
const REAL th = th_array[i1];
const REAL sinth = sinth_array[i1];
for(int i2=0;i2<Nxx_plus_2NGHOSTS2-2*NGHOSTS;i2++) {
const REAL ph = ph_array[i2];
// Construct integrand for psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
REAL ReY_sm2_l_m,ImY_sm2_l_m;
SpinWeight_minus2_SphHarmonics(l,m, th,ph, &ReY_sm2_l_m,&ImY_sm2_l_m);
const int idx2d = i1*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+i2;
const REAL a = psi4r_at_R_ext[idx2d];
const REAL b = psi4i_at_R_ext[idx2d];
const REAL c = ReY_sm2_l_m;
const REAL d = ImY_sm2_l_m;
psi4r_l_m += (a*c + b*d) * dxx2 * sinth*dxx1;
psi4i_l_m += (b*c - a*d) * dxx2 * sinth*dxx1;
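          // Note: (a*c + b*d) and (b*c - a*d) are the real and imaginary parts of psi4 * conj({}_{-2}Y_{lm}),
          //       and the weight sinth*dxx1 * dxx2 approximates the spherical area element sin(th) dtheta dphi.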
}
}
// Step 4: Output the result of the integration to file.
char filename[100];
sprintf(filename,"outpsi4_l%d_m%d-%d-r%.2f.txt",l,m, Nxx0,(double)R_ext);
if(m>=0) sprintf(filename,"outpsi4_l%d_m+%d-%d-r%.2f.txt",l,m, Nxx0,(double)R_ext);
FILE *outpsi4_l_m;
// 0 = n*dt when n=0 is exactly represented in double/long double precision,
// so no worries about the result being ~1e-16 in double/ld precision
if(curr_time==0) outpsi4_l_m = fopen(filename, "w");
else outpsi4_l_m = fopen(filename, "a");
fprintf(outpsi4_l_m,"%e %.15e %.15e\n", (double)(curr_time),
(double)psi4r_l_m,(double)psi4i_l_m);
fclose(outpsi4_l_m);
}
}
}
###Output
Writing BSSN_Two_BHs_Collide_Ccodes_psi4/lowlevel_decompose_psi4_into_swm2_modes.h
###Markdown
Finally, we complete the function `output_psi4_spinweight_m2_decomposition()`, now calling the above routine and freeing all allocated memory.
###Code
%%writefile -a $Ccodesdir/driver_psi4_spinweightm2_decomposition.h
// Step 4: Perform integrations across all l,m modes from l=2 up to and including L_MAX (global variable):
lowlevel_decompose_psi4_into_swm2_modes(params, curr_time,R_ext, th_array,sinth_array, ph_array,
psi4r_at_R_ext,psi4i_at_R_ext);
// Step 5: Free all allocated memory:
free(psi4r_at_R_ext); free(psi4i_at_R_ext);
free(sinth_array); free(th_array); free(ph_array);
}
###Output
Appending to BSSN_Two_BHs_Collide_Ccodes_psi4/driver_psi4_spinweightm2_decomposition.h
###Markdown
Step 3.f: Output all NRPy+ C-code kernels, in parallel if possible \[Back to [top](toc)\]$$\label{coutput}$$
###Code
# Step 0: Import the multiprocessing module.
import multiprocessing
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [Psi4re,Psi4re,Psi4re,Psi4im,Psi4im,Psi4im, BrillLindquistID,BSSN_RHSs,Ricci,Hamiltonian,gammadet]
# Step 1.a: Define master function for calling all above functions.
# Note that lambdifying this doesn't work in Python 3
def master_func(idx):
if idx < 3: # Call Psi4re(arg)
funcs[idx](idx)
elif idx < 6: # Call Psi4im(arg-3)
funcs[idx](idx-3)
else: # All non-Psi4 functions:
funcs[idx]()
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.b: Import the multiprocessing module.
import multiprocessing
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
# This will happen on Android and Windows systems
for idx in range(len(funcs)):
master_func(idx)
###Output
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.Generating C code for psi4_re_pt1 in SinhSpherical coordinates.Generating C code for psi4_re_pt2 in SinhSpherical coordinates.Generating C code for psi4_im_pt0 in SinhSpherical coordinates.Generating C code for psi4_im_pt1 in SinhSpherical coordinates.Generating C code for psi4_im_pt2 in SinhSpherical coordinates.Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.
Generating C code for BSSN RHSs in SinhSpherical coordinates.
Generating C code for Ricci tensor in SinhSpherical coordinates.
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Output C function enforce_detgammahat_constraint() to file BSSN_Two_BHs_Collide_Ccodes_psi4/enforce_detgammahat_constraint.h
(BENCH) Finished gamma constraint C codegen in 0.10424590110778809 seconds.
(BENCH) Finished generating psi4_im_pt1 in 12.016667366027832 seconds.
(BENCH) Finished BL initial data codegen in 14.484731197357178 seconds.
(BENCH) Finished generating psi4_im_pt2 in 15.47280764579773 seconds.
Output C function rhs_eval() to file BSSN_Two_BHs_Collide_Ccodes_psi4/rhs_eval.h
(BENCH) Finished BSSN_RHS C codegen in 20.126734733581543 seconds.
(BENCH) Finished generating psi4_re_pt2 in 21.123689651489258 seconds.
Output C function Ricci_eval() to file BSSN_Two_BHs_Collide_Ccodes_psi4/Ricci_eval.h
(BENCH) Finished Ricci C codegen in 21.265940189361572 seconds.
(BENCH) Finished generating psi4_re_pt1 in 22.23289155960083 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Two_BHs_Collide_Ccodes_psi4/Hamiltonian_constraint.h
(BENCH) Finished Hamiltonian C codegen in 37.140023946762085 seconds.
(BENCH) Finished generating psi4_im_pt0 in 38.12496042251587 seconds.
(BENCH) Finished generating psi4_re_pt0 in 65.21467590332031 seconds.
###Markdown
Step 3.g: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.f.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.f.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
// Set free-parameter values for BSSN evolution:
params.eta = 2.0;
// Set free parameters for the (Brill-Lindquist) initial data
params.BH1_posn_x = 0.0; params.BH1_posn_y = 0.0; params.BH1_posn_z =+0.25;
params.BH2_posn_x = 0.0; params.BH2_posn_y = 0.0; params.BH2_posn_z =-0.25;
params.BH1_mass = 0.5; params.BH2_mass = 0.5;\n""")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.f.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.f.iv: Generate xx_to_Cart.h, which contains xx_to_Cart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h"))
# Step 3.f.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 4: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "BSSN_Two_BHs_Collide_Ccodes_psi4/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0, psi4i_0pt:0, psi4i_1pt:0, psi4i_2pt:0,
psi4r_0pt:0, psi4r_1pt:0, psi4r_2pt:0 )
AuxEvol parity: ( RbarDD00:4, RbarDD01:5, RbarDD02:6, RbarDD11:7,
RbarDD12:8, RbarDD22:9 )
Wrote to file "BSSN_Two_BHs_Collide_Ccodes_psi4/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 5: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the default CFL factor; it may be overwritten at the command line
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.
// Part P0.d: We decompose psi_4 into all spin-weight=-2
// l,m spherical harmonics, starting at l=2,
// going up to and including l_max, set here:
#define L_MAX """+str(l_max)+"""
""")
%%writefile $Ccodesdir/BrillLindquist_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xx_to_Cart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xx_to_Cart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammahat_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
#include "initial_data.h"
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// Step P14: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
#include "psi4.h"
#include "SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
#include "lowlevel_decompose_psi4_into_swm2_modes.h"
#include "driver_psi4_spinweightm2_decomposition.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
}
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = 1.5*domain_size; /* Final time is set so that at t=t_final,
* data at the origin have not been corrupted
* by the approximate outer boundary condition */
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(&params, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
const REAL out_approx_every_t = 0.5;
int output_every_N = (int)(out_approx_every_t*((REAL)N_final)/t_final);
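// (Illustrative sanity check, added for clarity:) for the low-resolution run whose
// output appears below (t_final=225, dt~7.35e-03, hence N_final~30600), this gives
// output_every_N = (int)(0.5*30600/225) = 68, i.e. psi_4 diagnostics roughly every 0.5M.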
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammahat_constraint(&rfmstruct, &params, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output diagnostics
if(n%output_every_N == 0) {
// Step 3.a.i: Output psi4 spin-weight -2 decomposed data, every N_output_every
for(int r_ext_idx = (Nxx_plus_2NGHOSTS0-NGHOSTS)/4;
r_ext_idx<(Nxx_plus_2NGHOSTS0-NGHOSTS)*0.9;
r_ext_idx+=5) {
// psi_4 mode-by-mode spin-weight -2 spherical harmonic decomposition routine
driver_psi4_spinweightm2_decomposition(&params, ((REAL)n)*dt,r_ext_idx,
xx, y_n_gfs, diagnostic_output_gfs);
}
}
if(n%100 == 0) {
// Step 3.a.ii: Regularly output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d-%08d.txt",Nxx[0],n);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xCart[3]; xx_to_Cart(&params,xx,i0,i1,i2, xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2],
y_n_gfs[IDX4ptS(CFGF,idx)], log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xCart[3]; xx_to_Cart(&params,xx,i0,i1,i2, xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
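// (Illustrative sanity check, added for clarity:) for the 270x8x2 run shown below,
// each iteration performs 270*8*2*4 = 17280 RHS point evaluations. The ~300 s
// benchmark over ~30600 iterations then corresponds to ~1.8e6 evaluations per second,
// consistent with the "gp/s 1.77e+06" reported by this progress indicator.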
// Step 3.d.ii: Output simulation progress to stderr
if(n % 40 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 40 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
if EvolOption == "low resolution":
Nr = 270
Ntheta = 8
elif EvolOption == "high resolution":
Nr = 800
Ntheta = 16
else:
print("Error: unknown EvolOption!")
sys.exit(1)
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir, "BrillLindquist_Playground.c"),
os.path.join(outdir, "BrillLindquist_Playground"),
compile_mode="optimized")
end = time.time()
print("(BENCH) Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
# Change to output directory
os.chdir(os.path.join(outdir))
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2")
# Change back to root directory
os.chdir(os.path.join("..", ".."))
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
(EXEC): Executing `gcc -std=gnu99 -Ofast -fopenmp -march=native -funroll-loops BSSN_Two_BHs_Collide_Ccodes_psi4/BrillLindquist_Playground.c -o BSSN_Two_BHs_Collide_Ccodes_psi4/output/BrillLindquist_Playground -lm`...
(BENCH): Finished executing in 9.224141120910645 seconds.
Finished compilation.
(BENCH) Finished in 9.232537031173706 seconds.
Now running. Should take ~30 minutes...
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./BrillLindquist_Playground 270 8 2`...
[2KIt: 30600 t=224.92 dt=7.35e-03 | 100.0%; ETA 0 s | t/h 2704.49 | gp/s 1.77e+066
(BENCH): Finished executing in 299.734411239624 seconds.
(BENCH) Finished in 299.74692583084106 seconds.
###Markdown
Step 6: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to$${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{-0.0890 t/M} \cos(0.3737 t/M + \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory.

Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that

1. Finite-differencing order is set to 10
1. Nr = 800
1. Ntheta = 16
1. Outer boundary (`AMPL`) set to 300
1. Final time (`t_final`) set to 275
1. Set the initial positions of the BHs to `BH1_posn_z = -BH2_posn_z = 0.25`
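A quick added note (a direct consequence of the formula above, not part of the original discussion): the envelope of the ringdown decays at a rate

$$\frac{d}{d(t/M)}\log_{10}\left|\psi_4\right| \approx -\frac{0.0890}{\ln 10} \approx -0.039,$$

i.e. the signal drops by roughly 3.9 orders of magnitude per $100M$ of retarded time, which sets the scale for the several decades of agreement visible in the plot below.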
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "30.20"
Amplitude = 1.8e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
# Transposed for easier unpacking:
t,psi4r,psi4i = np.loadtxt(os.path.join(outdir, "outpsi4_l2_m+0-"+str(Nr)+"-r"+extraction_radius+".txt")).T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
for i in range(len(psi4r)):
retarded_time = t[i]-float(extraction_radius)
t_retarded.append(retarded_time)
log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel('$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
# ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0, 1.5*domain_size - float(extraction_radius)])
ax.set_ylim([-13.5, -1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig(os.path.join(outdir, "BHperttheorycompare.png"), dpi=150)
###Output
_____no_output_____
###Markdown
Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex,
and compiled LaTeX file to PDF file Tutorial-Start_to_Finish-
BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Start-to-Finish Example: Head-On Black Hole Collision Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module implements a basic numerical relativity code to merge two black holes in *spherical-like coordinates*, as well as the gravitational wave analysis provided by the $\psi_4$ NRPy+ tutorial notebooks ([$\psi_4$](Tutorial-Psi4.ipynb) & [$\psi_4$ tetrad](Tutorial-Psi4_tetrads.ipynb)). Here we place the black holes initially on the $z$-axis, so the entire simulation is axisymmetric about the $\phi$-axis. Minimal sampling in the $\phi$ direction greatly speeds up the simulation.**Notebook Status:** Validated **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). Finally, excellent agreement is seen in the gravitational wave signal from the ringing remnant black hole for multiple decades in amplitude when compared to black hole perturbation theory predictions. NRPy+ Source Code for this module: * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb): Brill-Lindquist initial data; sets all ADM variables in Cartesian basis: * [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\to$Curvilinear BSSN converter function, for which exact expressions are given for ADM quantities.* [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py): Sets up the C code string enabling initial data be set up in a point-by-point fashion* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates* [BSSN/Psi4.py](../edit/BSSN/Psi4.py); [\[**tutorial**\]](Tutorial-Psi4.ipynb): Generates expressions for $\psi_4$, the outgoing Weyl scalar + [BSSN/Psi4_tetrads.py](../edit/BSSN/Psi4_tetrads.py); [\[**tutorial**\]](Tutorial-Psi4_tetrads.ipynb): Generates quasi-Kinnersley tetrad needed for $\psi_4$-based gravitational wave extraction Introduction:Here we use NRPy+ to generate the C source code necessary to set up initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). 
Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration * [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).1. Set gridfunction values to initial data * [**NRPy+ tutorial on Brill-Lindquist initial data**](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [**NRPy+ tutorial on validating Brill-Lindquist initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb).1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: 1. At the start of each iteration in time, output the Hamiltonian constraint violation * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). 1. At each RK time substep, do the following: 1. Evaluate BSSN RHS expressions * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)1. Repeat above steps at two numerical resolutions to confirm convergence to zero. Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](adm_id): Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module1. [Step 3](bssn): Output C code for BSSN spacetime solve 1. [Step 3.a](bssnrhs): Output C code for BSSN RHS expressions 1. [Step 3.b](hamconstraint): Output C code for Hamiltonian constraint 1. [Step 3.c](enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint 1. [Step 3.d](psi4): Compute $\psi_4$, which encodes gravitational wave information in our numerical relativity calculations 1. [Step 3.e](decomposepsi4): Decompose $\psi_4$ into spin-weight -2 spherical harmonics 1. [Step 3.e.i](spinweight): Output ${}^{-2}Y_{\ell,m}$, up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here) 1. [Step 3.e.ii](full_diag): Decomposition of $\psi_4$ into spin-weight -2 spherical harmonics: Full C-code diagnostic implementation 1. [Step 3.f](coutput): Output all NRPy+ C-code kernels, in parallel if possible 1. [Step 3.g](cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`1. [Step 4](bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system1. 
[Step 5](mainc): `BrillLindquist_Playground.c`: The Main C Code1. [Step 6](visualize): Data Visualization Animations 1. [Step 6.a](installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded 1. [Step 6.b](genimages): Generate images for visualization animation 1. [Step 6.c](genvideo): Generate visualization animation1. [Step 7](convergence): Visualize the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step P0: Set the option for evolving the initial data forward in time
# Options include "low resolution" and "high resolution"
EvolOption = "low resolution"
# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh,outCfunction,outC_function_dict # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Two_BHs_Collide_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.a: Enable SIMD-optimized code?
# I.e., generate BSSN and Ricci C code kernels using SIMD-vectorized
# compiler intrinsics, which *greatly improve the code's performance*,
# though at the expense of making the C-code kernels less
# human-readable.
# * Important note in case you wish to modify the BSSN/Ricci kernels
# here by adding expressions containing transcendental functions
# (e.g., certain scalar fields):
# Note that SIMD-based transcendental function intrinsics are not
# supported by the default installation of gcc or clang (you will
# need to use e.g., the SLEEF library from sleef.org, for this
# purpose). The Intel compiler suite does support these intrinsics
# however without the need for external libraries.
SIMD_enable = True
# Step 2: Set some core parameters, including CoordSystem, MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhSpherical"
# Decompose psi_4 (second time derivative of gravitational
# wave strain) into all spin-weight=-2
# l,m spherical harmonics, starting at l=2
# going up to and including l_max, set here:
l_max = 2
if EvolOption == "low resolution":
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 150 # Length scale of computational domain
final_time = 200 # Final time
FD_order = 8 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
elif EvolOption == "high resolution":
# See above for description of the domain_size parameter
domain_size = 300 # Length scale of computational domain
final_time = 275 # Final time
FD_order = 10 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.2 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the timestepping (RK) order,
#           the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, &params, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, &params, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(&rfmstruct, &params, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
###Output
_____no_output_____
###Markdown
Step 1.c: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numerical solution to the BSSN equations to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:$$\Delta t \le \frac{\min(ds_i)}{c},$$where $c$ is the wavespeed, and$$ds_i = h_i \Delta x^i$$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
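As a concrete illustration (an added sketch, not the NRPy+-generated `find_timestep()` used below), the same condition can be evaluated by hand on an ordinary Spherical grid, whose scale factors are $h_0=1$, $h_1=r$, $h_2=r\sin\theta$:

```python
# Minimal sketch (assumes plain Spherical coordinates; NOT the NRPy+-generated find_timestep()):
# dt is limited by the smallest proper distance ds_i = h_i * dx^i anywhere on the grid.
import numpy as np

def cfl_timestep_spherical(r, th, dr, dth, dph, CFL_FACTOR=0.5, wavespeed=1.0):
    ds_r  = dr                                      # h_0 = 1
    ds_th = np.min(r) * dth                         # h_1 = r, smallest at min(r)
    ds_ph = np.min(r) * np.min(np.sin(th)) * dph    # h_2 = r*sin(theta)
    return CFL_FACTOR * min(ds_r, ds_th, ds_ph) / wavespeed

# Example on a small cell-centered grid that avoids r=0 and the coordinate poles:
r  = np.linspace(0.05, 9.95, 100)                  # dr = 0.1
th = np.linspace(0.05, np.pi - 0.05, 32)
print(cfl_timestep_spherical(r, th, dr=0.1, dth=np.pi/32, dph=2.0*np.pi/16))
```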
###Code
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
###Output
_____no_output_____
###Markdown
Step 2: Import Brill-Lindquist ADM initial data C function from the [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module \[Back to [top](toc)\]$$\label{adm_id}$$The [`BSSN.BrillLindquist`](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following:1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by reference_metric::CoordSystem), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb).1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.
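For reference (a standard result quoted here for context, not computed in this cell), Brill-Lindquist data are time-symmetric ($K_{ij}=0$) with a conformally flat spatial metric $\gamma_{ij}=\psi^4\delta_{ij}$, where

$$\psi = 1 + \sum_{i=1}^{2}\frac{m_i}{2\,\left|\vec{x}-\vec{x}_i\right|},$$

and the bare masses $m_i$ and puncture positions $\vec{x}_i$ correspond to the `BH{1,2}_mass` and `BH{1,2}_posn_{x,y,z}` parameters set in `free_parameters.h` below.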
###Code
import BSSN.BrillLindquist as bl
def BrillLindquistID():
print("Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.")
start = time.time()
bl.BrillLindquist() # Registers ID C function in dictionary, used below to output to file.
with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file:
file.write(outC_function_dict["initial_data"])
end = time.time()
print("(BENCH) Finished BL initial data codegen in "+str(end-start)+" seconds.")
###Output
_____no_output_____
###Markdown
Step 3: Output C code for BSSN spacetime solve \[Back to [top](toc)\]$$\label{bssn}$$ Step 3.a: Output C code for BSSN RHS expressions \[Back to [top](toc)\]$$\label{bssnrhs}$$
###Code
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
# Set the *covariant*, second-order Gamma-driving shift condition
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant")
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("(BENCH) Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,SIMD_enable=True",
upwindcontrolvec=betaU).replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,SIMD_enable=True").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished Ricci C codegen in " + str(end - start) + " seconds.")
###Output
Generating symbolic expressions for BSSN RHSs...
(BENCH) Finished BSSN symbolic expressions in 4.267388105392456 seconds.
###Markdown
Step 3.b: Output C code for Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However it does not due to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation, and, ultimately determine whether errors are dominated by numerical finite differencing (truncation) error as expected.
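For reference (the standard vacuum BSSN expression, quoted schematically here rather than derived), the quantity computed by this diagnostic is

$$\mathcal{H} = \frac{2}{3}K^2 - \bar{A}_{ij}\bar{A}^{ij} + e^{-4\phi}\left(\bar{R} - 8\bar{D}^i\phi\,\bar{D}_i\phi - 8\bar{D}^2\phi\right),$$

which vanishes identically for exact solutions; its numerical value therefore provides a direct measure of the truncation (and roundoff) error discussed above.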
###Code
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
end = time.time()
print("(BENCH) Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
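Schematically, the enforcement is the pointwise algebraic rescaling (quoting Eq. 53 of the Ruchlin, Etienne, and Baumgarte paper cited above)

$$\bar{\gamma}_{ij} \to \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3}\bar{\gamma}_{ij},$$

which resets the determinant of the conformal 3-metric to $\hat{\gamma}$ while leaving the relative values of its components unchanged.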
###Code
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("(BENCH) Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
###Output
_____no_output_____
###Markdown
Step 3.d: Compute $\psi_4$, which encodes gravitational wave information in our numerical relativity calculations \[Back to [top](toc)\]$$\label{psi4}$$The [Weyl scalar](https://en.wikipedia.org/wiki/Weyl_scalar) $\psi_4$ encodes gravitational wave information in our numerical relativity calculations. For more details on how it is computed, see [this NRPy+ tutorial notebook for information on $\psi_4$](Tutorial-Psi4.ipynb) and [this one on the Quasi-Kinnersley tetrad](Tutorial-Psi4_tetrads.ipynb) (as implemented in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf)).$\psi_4$ is related to the gravitational wave strain via$$\psi_4 = \ddot{h}_+ - i \ddot{h}_\times,$$where $\ddot{h}_+$ is the second time derivative of the $+$ polarization of the gravitational wave strain $h$, and $\ddot{h}_\times$ is the second time derivative of the $\times$ polarization of the gravitational wave strain $h$.
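An added remark (an immediate consequence of the relation above, not part of the original text): writing the complex strain as $h = h_+ - i h_\times$, the relation reads $\psi_4 = \ddot{h}$, so the strain seen by a distant observer can in principle be recovered by integrating the extracted $\psi_4$ twice in time,

$$h(t) = \int^{t} dt' \int^{t'} dt''\, \psi_4(t''),$$

up to the usual freedom in the two integration constants.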
###Code
import BSSN.Psi4_tetrads as BP4t
par.set_parval_from_str("BSSN.Psi4_tetrads::TetradChoice","QuasiKinnersley")
#par.set_parval_from_str("BSSN.Psi4_tetrads::UseCorrectUnitNormal","True")
import BSSN.Psi4 as BP4
print("Generating symbolic expressions for psi4...")
start = time.time()
BP4.Psi4()
end = time.time()
print("(BENCH) Finished psi4 symbolic expressions in "+str(end-start)+" seconds.")
psi4r_0pt = gri.register_gridfunctions("AUX","psi4r_0pt")
psi4r_1pt = gri.register_gridfunctions("AUX","psi4r_1pt")
psi4r_2pt = gri.register_gridfunctions("AUX","psi4r_2pt")
psi4i_0pt = gri.register_gridfunctions("AUX","psi4i_0pt")
psi4i_1pt = gri.register_gridfunctions("AUX","psi4i_1pt")
psi4i_2pt = gri.register_gridfunctions("AUX","psi4i_2pt")
desc="""Since it's so expensive to compute, instead of evaluating
psi_4 at all interior points, this functions evaluates it on a
point-by-point basis."""
name="psi4"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """const paramstruct *restrict params,
const int i0,const int i1,const int i2,
REAL *restrict xx[3], const REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = """
const int idx = IDX3S(i0,i1,i2);
const REAL xx0 = xx[0][i0];const REAL xx1 = xx[1][i1];const REAL xx2 = xx[2][i2];
// Real part of psi_4, divided into 3 terms
{
#include "Psi4re_pt0_lowlevel.h"
}
{
#include "Psi4re_pt1_lowlevel.h"
}
{
#include "Psi4re_pt2_lowlevel.h"
}
// Imaginary part of psi_4, divided into 3 terms
{
#include "Psi4im_pt0_lowlevel.h"
}
{
#include "Psi4im_pt1_lowlevel.h"
}
{
#include "Psi4im_pt2_lowlevel.h"
}""")
def Psi4re(part):
print("Generating C code for psi4_re_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4re_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4r_"+str(part)+"pt"),rhs=BP4.psi4_re_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4re_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4re_pt.replace("IDX4","IDX4S"))
end = time.time()
print("(BENCH) Finished generating psi4_re_pt"+str(part)+" in "+str(end-start)+" seconds.")
def Psi4im(part):
print("Generating C code for psi4_im_pt"+str(part)+" in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
Psi4im_pt = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs","psi4i_"+str(part)+"pt"),rhs=BP4.psi4_im_pt[part])],
params="outCverbose=False,CSE_sorting=none") # Generating the CSE for psi4 is the slowest
# operation in this notebook, and much of the CSE
# time is spent sorting CSE expressions. Disabling
# this sorting makes the C codegen 3-4x faster,
# but the tradeoff is that every time this is
# run, the CSE patterns will be different
# (though they should result in mathematically
# *identical* expressions). You can expect
# roundoff-level differences as a result.
with open(os.path.join(Ccodesdir,"Psi4im_pt"+str(part)+"_lowlevel.h"), "w") as file:
file.write(Psi4im_pt.replace("IDX4","IDX4S"))
end = time.time()
print("(BENCH) Finished generating psi4_im_pt"+str(part)+" in "+str(end-start)+" seconds.")
###Output
Generating symbolic expressions for psi4...
(BENCH) Finished psi4 symbolic expressions in 17.19464373588562 seconds.
Output C function psi4() to file BSSN_Two_BHs_Collide_Ccodes/psi4.h
###Markdown
Step 3.e: Decompose $\psi_4$ into spin-weight -2 spherical harmonics \[Back to [top](toc)\]$$\label{decomposepsi4}$$ Instead of measuring $\psi_4$ for all possible (gravitational wave) observers in our simulation domain, we instead decompose it into a natural basis set, which by convention is the spin-weight -2 spherical harmonics.Here we implement the algorithm for decomposing $\psi_4$ into spin-weight -2 spherical harmonic modes. The decomposition is defined as follows:$${}^{-2}\left[\psi_4\right]_{\ell,m}(t,R) = \int \int \psi_4(t,R,\theta,\phi)\ \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] \sin \theta d\theta d\phi,$$where* ${}^{-2}Y^*_{\ell,m}(\theta,\phi)$ is the complex conjugate of the spin-weight $-2$ spherical harmonic $\ell,m$ mode* $R$ is the (fixed) radius at which we extract $\psi_4$ information* $t$ is the time coordinate* $\theta,\phi$ are the polar and azimuthal angles, respectively (we use [the physics notation for spherical coordinates](https://en.wikipedia.org/wiki/Spherical_coordinate_system) here) Step 3.e.i Output ${}^{-2}Y_{\ell,m}$, up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here) \[Back to [top](toc)\]$$\label{spinweight}$$ Here we output all spin-weight $-2$ spherical harmonics [**Tutorial Module**](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb) for $\ell=0$ up to and including $\ell=\ell_{\rm max}$=`l_max` (set to 2 here).
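In the C diagnostic constructed below, this integral is approximated (a clarifying note; $\Delta\theta$ and $\Delta\phi$ correspond to the code's `dxx1` and `dxx2`) by the midpoint-rule sum

$${}^{-2}\left[\psi_4\right]_{\ell,m}(t,R) \approx \sum_{j}\sum_{k} \psi_4(t,R,\theta_j,\phi_k)\,\left[{}^{-2}Y^*_{\ell,m}(\theta_j,\phi_k)\right]\sin\theta_j\,\Delta\theta\,\Delta\phi,$$

taken over all non-ghostzone angular gridpoints at the chosen extraction radius.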
###Code
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
cmd.mkdir(os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics"))
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=l_max,
filename=os.path.join(Ccodesdir,"SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"))
###Output
_____no_output_____
###Markdown
Step 3.e.ii Decomposition of $\psi_4$ into spin-weight -2 spherical harmonics: Full C-code diagnostic implementation \[Back to [top](toc)\]$$\label{full_diag}$$ Note that this diagnostic implementation assumes that `Spherical`-like coordinates are used (e.g., `SinhSpherical` or `Spherical`), which are the most natural coordinate system for decomposing $\psi_4$ into spin-weight -2 modes.First we process the inputs needed to compute $\psi_4$ at all needed $\theta,\phi$ points
###Code
%%writefile $Ccodesdir/driver_psi4_spinweightm2_decomposition.h
void driver_psi4_spinweightm2_decomposition(const paramstruct *restrict params,
const REAL curr_time,const int R_ext_idx,
REAL *restrict xx[3],
const REAL *restrict y_n_gfs,
REAL *restrict diagnostic_output_gfs) {
#include "set_Cparameters.h"
// Step 1: Set the extraction radius R_ext based on the radial index R_ext_idx
REAL R_ext;
{
REAL xx0 = xx[0][R_ext_idx];
REAL xx1 = xx[1][1];
REAL xx2 = xx[2][1];
REAL xCart[3];
xxCart(params,xx,R_ext_idx,1,1,xCart);
R_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]);
}
// Step 2: Compute psi_4 at this extraction radius and store to a local 2D array.
const int sizeof_2Darray = sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS);
REAL *restrict psi4r_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray);
REAL *restrict psi4i_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray);
// ... also store theta, sin(theta), and phi to corresponding 1D arrays.
REAL *restrict sinth_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS));
REAL *restrict th_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS));
REAL *restrict ph_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS));
const int i0=R_ext_idx;
#pragma omp parallel for
for(int i1=NGHOSTS;i1<Nxx_plus_2NGHOSTS1-NGHOSTS;i1++) {
th_array[i1-NGHOSTS] = xx[1][i1];
sinth_array[i1-NGHOSTS] = sin(xx[1][i1]);
for(int i2=NGHOSTS;i2<Nxx_plus_2NGHOSTS2-NGHOSTS;i2++) {
ph_array[i2-NGHOSTS] = xx[2][i2];
// Compute real & imaginary parts of psi_4, output to diagnostic_output_gfs
psi4(params, i0,i1,i2, xx, y_n_gfs, diagnostic_output_gfs);
const int idx3d = IDX3S(i0,i1,i2);
const REAL psi4r = (+diagnostic_output_gfs[IDX4ptS(PSI4R_0PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4R_1PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4R_2PTGF,idx3d)]);
const REAL psi4i = (+diagnostic_output_gfs[IDX4ptS(PSI4I_0PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4I_1PTGF,idx3d)]
+diagnostic_output_gfs[IDX4ptS(PSI4I_2PTGF,idx3d)]);
// Store result to "2D" array (actually 1D array with 2D storage):
const int idx2d = (i1-NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+(i2-NGHOSTS);
psi4r_at_R_ext[idx2d] = psi4r;
psi4i_at_R_ext[idx2d] = psi4i;
}
}
###Output
Writing BSSN_Two_BHs_Collide_Ccodes//driver_psi4_spinweightm2_decomposition.h
###Markdown
Next we implement the integral:$${}^{-2}\left[\psi_4\right]_{\ell,m}(t,R) = \int \int \psi_4(t,R,\theta,\phi)\ \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] \sin \theta d\theta d\phi.$$Since $\psi_4(t,R,\theta,\phi)$ and $\left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right]$ are generally complex, for simplicity let's define\begin{align}\psi_4(t,R,\theta,\phi)&=a+i b \\\left[{}^{-2}Y_{\ell,m}(\theta,\phi)\right] &= c + id\\\implies \left[{}^{-2}Y^*_{\ell,m}(\theta,\phi)\right] = \left[{}^{-2}Y_{\ell,m}(\theta,\phi)\right]^* &=c-i d\end{align}Then the product (appearing within the integral) will be given by\begin{align}(a + i b) (c-i d) &= (ac + bd) + i(bc - ad),\end{align}which cleanly splits the real and complex parts. For better modularity, we output this algorithm to a function `decompose_psi4_into_swm2_modes()` in file `decompose_psi4_into_swm2_modes.h`. Here, we will call this function from within `output_psi4_spinweight_m2_decomposition()`, but in general it could be called from codes that do not use spherical coordinates, and the `psi4r_at_R_ext[]` and `psi4i_at_R_ext[]` arrays are filled using interpolations.
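For clarity, here is a minimal numpy sketch of the same real/imaginary split and midpoint-rule sum that the C function below performs (illustrative only; the array names are hypothetical, and in the C code the integration weight appears as `sinth*dxx1*dxx2`):

```python
# Minimal numpy sketch (illustrative; not generated by NRPy+) of the mode integral
# performed by lowlevel_decompose_psi4_into_swm2_modes().
import numpy as np

def decompose_psi4_mode(psi4r, psi4i, ReY, ImY, th, dth, dph):
    """psi4r/psi4i and ReY/ImY are 2D (theta,phi) arrays at fixed extraction radius."""
    a, b = psi4r, psi4i                  # psi_4        = a + i*b
    c, d = ReY, ImY                      # {}^{-2}Y_lm  = c + i*d (conjugated below)
    w = np.sin(th)[:, None] * dth * dph  # sin(theta) dtheta dphi integration weight
    re = np.sum((a*c + b*d) * w)         # Re[ psi_4 * conj(Y) ]
    im = np.sum((b*c - a*d) * w)         # Im[ psi_4 * conj(Y) ]
    return re, im
```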
###Code
%%writefile $Ccodesdir/lowlevel_decompose_psi4_into_swm2_modes.h
void lowlevel_decompose_psi4_into_swm2_modes(const paramstruct *restrict params,
const REAL curr_time, const REAL R_ext,
const REAL *restrict th_array,const REAL *restrict sinth_array,const REAL *restrict ph_array,
const REAL *restrict psi4r_at_R_ext,const REAL *restrict psi4i_at_R_ext) {
#include "set_Cparameters.h"
for(int l=2;l<=L_MAX;l++) { // L_MAX is a global variable, since it must be set in Python (so that SpinWeight_minus2_SphHarmonics() computes enough modes)
for(int m=-l;m<=l;m++) {
// Parallelize the integration loop:
REAL psi4r_l_m = 0.0;
REAL psi4i_l_m = 0.0;
#pragma omp parallel for reduction(+:psi4r_l_m,psi4i_l_m)
for(int i1=0;i1<Nxx_plus_2NGHOSTS1-2*NGHOSTS;i1++) {
const REAL th = th_array[i1];
const REAL sinth = sinth_array[i1];
for(int i2=0;i2<Nxx_plus_2NGHOSTS2-2*NGHOSTS;i2++) {
const REAL ph = ph_array[i2];
// Construct integrand for psi4 spin-weight s=-2,l=2,m=0 spherical harmonic
REAL ReY_sm2_l_m,ImY_sm2_l_m;
SpinWeight_minus2_SphHarmonics(l,m, th,ph, &ReY_sm2_l_m,&ImY_sm2_l_m);
const int idx2d = i1*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+i2;
const REAL a = psi4r_at_R_ext[idx2d];
const REAL b = psi4i_at_R_ext[idx2d];
const REAL c = ReY_sm2_l_m;
const REAL d = ImY_sm2_l_m;
psi4r_l_m += (a*c + b*d) * dxx2 * sinth*dxx1;
psi4i_l_m += (b*c - a*d) * dxx2 * sinth*dxx1;
}
}
// Step 4: Output the result of the integration to file.
char filename[100];
sprintf(filename,"outpsi4_l%d_m%d-%d-r%.2f.txt",l,m, Nxx0,(double)R_ext);
if(m>=0) sprintf(filename,"outpsi4_l%d_m+%d-%d-r%.2f.txt",l,m, Nxx0,(double)R_ext);
FILE *outpsi4_l_m;
// 0 = n*dt when n=0 is exactly represented in double/long double precision,
// so no worries about the result being ~1e-16 in double/ld precision
if(curr_time==0) outpsi4_l_m = fopen(filename, "w");
else outpsi4_l_m = fopen(filename, "a");
fprintf(outpsi4_l_m,"%e %.15e %.15e\n", (double)(curr_time),
(double)psi4r_l_m,(double)psi4i_l_m);
fclose(outpsi4_l_m);
}
}
}
###Output
Writing BSSN_Two_BHs_Collide_Ccodes//lowlevel_decompose_psi4_into_swm2_modes.h
###Markdown
Finally, we complete the function `output_psi4_spinweight_m2_decomposition()`, now calling the above routine and freeing all allocated memory.
###Code
%%writefile -a $Ccodesdir/driver_psi4_spinweightm2_decomposition.h
// Step 4: Perform integrations across all l,m modes from l=2 up to and including L_MAX (global variable):
lowlevel_decompose_psi4_into_swm2_modes(params, curr_time,R_ext, th_array,sinth_array, ph_array,
psi4r_at_R_ext,psi4i_at_R_ext);
// Step 5: Free all allocated memory:
free(psi4r_at_R_ext); free(psi4i_at_R_ext);
free(sinth_array); free(th_array); free(ph_array);
}
###Output
Appending to BSSN_Two_BHs_Collide_Ccodes//driver_psi4_spinweightm2_decomposition.h
###Markdown
Step 3.f: Output all NRPy+ C-code kernels, in parallel if possible \[Back to [top](toc)\]$$\label{coutput}$$
###Code
# Step 0: Import the multiprocessing module.
import multiprocessing
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [Psi4re,Psi4re,Psi4re,Psi4im,Psi4im,Psi4im, BrillLindquistID,BSSN_RHSs,Ricci,Hamiltonian,gammadet]
# Step 1.a: Define master function for calling all above functions.
# Note that lambdifying this doesn't work in Python 3
def master_func(idx):
if idx < 3: # Call Psi4re(arg)
funcs[idx](idx)
elif idx < 6: # Call Psi4im(arg-3)
funcs[idx](idx-3)
else: # All non-Psi4 functions:
funcs[idx]()
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.b: Import the multiprocessing module.
import multiprocessing
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
# This will happen on Android and Windows systems
for idx in range(len(funcs)):
master_func(idx)
###Output
Generating C code for psi4_re_pt0 in SinhSpherical coordinates.Generating C code for psi4_re_pt1 in SinhSpherical coordinates.Generating optimized C code for Brill-Lindquist initial data. May take a while, depending on CoordSystem.Generating C code for psi4_re_pt2 in SinhSpherical coordinates.Generating C code for psi4_im_pt1 in SinhSpherical coordinates.Generating C code for Ricci tensor in SinhSpherical coordinates.Generating C code for psi4_im_pt0 in SinhSpherical coordinates.Generating C code for psi4_im_pt2 in SinhSpherical coordinates.Generating C code for BSSN RHSs in SinhSpherical coordinates.Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Output C function enforce_detgammabar_constraint() to file BSSN_Two_BHs_Collide_Ccodes/enforce_detgammabar_constraint.h
(BENCH) Finished gamma constraint C codegen in 0.11540484428405762 seconds.
(BENCH) Finished generating psi4_im_pt1 in 11.360079765319824 seconds.
(BENCH) Finished generating psi4_im_pt2 in 14.557591915130615 seconds.
(BENCH) Finished BL initial data codegen in 15.187357664108276 seconds.
Output C function rhs_eval() to file BSSN_Two_BHs_Collide_Ccodes/rhs_eval.h
(BENCH) Finished BSSN_RHS C codegen in 18.33892321586609 seconds.
(BENCH) Finished generating psi4_re_pt2 in 19.347757577896118 seconds.
Output C function Ricci_eval() to file BSSN_Two_BHs_Collide_Ccodes/Ricci_eval.h
(BENCH) Finished Ricci C codegen in 20.357526063919067 seconds.
(BENCH) Finished generating psi4_re_pt1 in 20.787642240524292 seconds.
(BENCH) Finished generating psi4_im_pt0 in 35.46419930458069 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Two_BHs_Collide_Ccodes/Hamiltonian_constraint.h
(BENCH) Finished Hamiltonian C codegen in 36.30071473121643 seconds.
(BENCH) Finished generating psi4_re_pt0 in 61.16836905479431 seconds.
###Markdown
Step 3.g: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above
###Code
# Step 3.f.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.f.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
// Set free-parameter values for BSSN evolution:
params.eta = 2.0;
// Set free parameters for the (Brill-Lindquist) initial data
params.BH1_posn_x = 0.0; params.BH1_posn_y = 0.0; params.BH1_posn_z =+0.25;
params.BH2_posn_x = 0.0; params.BH2_posn_y = 0.0; params.BH2_posn_z =-0.25;
params.BH1_mass = 0.5; params.BH2_mass = 0.5;
"""+"""
const REAL final_time = """+str(final_time)+";\n")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 3.f.iii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 3.f.iv: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 3.f.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
###Output
_____no_output_____
###Markdown
Step 4: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
###Code
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
###Output
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9,
alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6,
hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3, trK:0,
vetU0:1, vetU1:2, vetU2:3 )
Auxiliary parity: ( H:0, psi4i_0pt:0, psi4i_1pt:0, psi4i_2pt:0,
psi4r_0pt:0, psi4r_1pt:0, psi4r_2pt:0 )
AuxEvol parity: ( RbarDD00:4, RbarDD01:5, RbarDD02:6, RbarDD11:7,
RbarDD12:8, RbarDD22:9 )
Wrote to file "BSSN_Two_BHs_Collide_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
###Markdown
Step 5: `BrillLindquist_Playground.c`: The Main C Code \[Back to [top](toc)\]$$\label{mainc}$$
###Code
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor, which can be overwritten at the command line
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.
// Part P0.d: We decompose psi_4 into all spin-weight=-2
// l,m spherical harmonics, starting at l=2,
// going up to and including l_max, set here:
#define L_MAX """+str(l_max)+"""
""")
%%writefile $Ccodesdir/BrillLindquist_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "BSSN_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
#include "initial_data.h"
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// Step P14: Declare function for evaluating real and imaginary parts of psi4 (diagnostic)
#include "psi4.h"
#include "SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h"
#include "lowlevel_decompose_psi4_into_swm2_modes.h"
#include "driver_psi4_spinweightm2_decomposition.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, ¶ms, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = final_time;
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(¶ms, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
REAL out_approx_every_t = 0.2;
int output_every_N = (int)(out_approx_every_t*((REAL)N_final)/t_final);
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(¶ms, xx, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output diagnostics
// Step 3.a.i: Output psi4 spin-weight -2 decomposed data, every N_output_every
if(n%output_every_N == 0) {
for(int r_ext_idx = (Nxx_plus_2NGHOSTS0-NGHOSTS)/4;
r_ext_idx<(Nxx_plus_2NGHOSTS0-NGHOSTS)*0.9;
r_ext_idx+=5) {
// psi_4 mode-by-mode spin-weight -2 spherical harmonic decomposition routine
driver_psi4_spinweightm2_decomposition(¶ms, ((REAL)n)*dt,r_ext_idx,
xx, y_n_gfs, diagnostic_output_gfs);
}
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, ¶ms, y_n_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(¶ms,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1],xCart[2], y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 40 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
    } // End progress indicator if(n % 40 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
if EvolOption == "low resolution":
Nr = 270
Ntheta = 8
elif EvolOption == "high resolution":
Nr = 800
Ntheta = 16
else:
print("Error: unknown EvolOption!")
sys.exit(1)
CFL_FACTOR = 1.0
import cmdline_helper as cmd
print("Now compiling, should take ~10 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"BrillLindquist_Playground.c"), "BrillLindquist_Playground",
compile_mode="optimized")
end = time.time()
print("(BENCH) Finished in "+str(end-start)+" seconds.\n\n")
if Nr == 800:
print("Now running. Should take ~8 hours...\n")
if Nr == 270:
print("Now running. Should take ~30 minutes...\n")
start = time.time()
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.Execute("BrillLindquist_Playground", str(Nr)+" "+str(Ntheta)+" 2 "+str(CFL_FACTOR))
end = time.time()
print("(BENCH) Finished in "+str(end-start)+" seconds.\n\n")
###Output
Now compiling, should take ~10 seconds...
Compiling executable...
(EXEC): Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c -o BrillLindquist_Playground -lm`...
(BENCH): Finished executing in 9.02372121810913 seconds.
Finished compilation.
(BENCH) Finished in 9.034607410430908 seconds.
Now running. Should take ~30 minutes...
(EXEC): Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./BrillLindquist_Playground 270 8 2 1.0`...
[2KIt: 27200 t=199.93 dt=7.35e-03 | 100.0%; ETA 0 s | t/h 2979.16 | gp/s 1.95e+06
(BENCH): Finished executing in 241.8013095855713 seconds.
(BENCH) Finished in 241.81788897514343 seconds.
###Markdown
Step 6: Comparison with black hole perturbation theory \[Back to [top](toc)\]$$\label{compare}$$According to black hole perturbation theory ([Berti et al](https://arxiv.org/abs/0905.2975)), the resultant black hole should ring down with dominant, spin-weight $s=-2$ spherical harmonic mode $(l=2,m=0)$ according to$${}_{s=-2}\text{Re}(\psi_4)_{l=2,m=0} = A e^{-0.0890 t/M} \cos(0.3737 t/M + \phi),$$where $M=1$ for these data, and $A$ and $\phi$ are an arbitrary amplitude and phase, respectively. Here we will plot the resulting waveform at $r/M=33.13$, comparing to the expected frequency and amplitude falloff predicted by black hole perturbation theory. Notice that we find about 4.2 orders of magnitude agreement! If you are willing to invest more resources and wait much longer, you will find approximately 8.5 orders of magnitude agreement (*better* than Fig 6 of [Ruchlin et al](https://arxiv.org/pdf/1712.07658.pdf)) if you adjust the above code parameters such that (1) the finite-differencing order is set to 10, (2) Nr = 800, (3) Ntheta = 16, (4) the outer boundary (`AMPL`) is set to 300, (5) the final time (`t_final`) is set to 275, and (6) the initial positions of the BHs are set to `BH1_posn_z = -BH2_posn_z = 0.25`.
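As an aside (this simply repackages the formula quoted above; the plotting cell below does not rely on it in any other form), the damped cosinusoid is the real part of a single damped complex exponential, so the quoted decay rate and oscillation frequency combine into one complex quasinormal-mode frequency:$$A e^{-0.0890\,t/M}\cos\!\left(0.3737\,t/M + \phi\right) = \text{Re}\!\left[A\,e^{-i\phi}\,e^{-i\omega t}\right], \qquad M\omega \approx 0.3737 - 0.0890\,i.$$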
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from matplotlib import rc
rc('text', usetex=True)
if Nr == 270:
extraction_radius = "33.13"
Amplitude = 1.8e-2
Phase = 2.8
elif Nr == 800:
extraction_radius = "33.64"
Amplitude = 1.8e-2
Phase = 2.8
else:
print("Error: output is not tuned for Nr = "+str(Nr)+" . Plotting disabled.")
exit(1)
#Transposed for easier unpacking:
t,psi4r,psi4i = np.loadtxt("outpsi4_l2_m+0-"+str(Nr)+"-r"+extraction_radius+".txt").T
t_retarded = []
log10abspsi4r = []
bh_pert_thry = []
for i in range(len(psi4r)):
    retarded_time = t[i]-float(extraction_radius)
    t_retarded.append(retarded_time)
    log10abspsi4r.append(np.log(float(extraction_radius)*np.abs(psi4r[i]))/np.log(10))
bh_pert_thry.append(np.log(Amplitude*np.exp(-0.0890*retarded_time)*np.abs(np.cos(0.3737*retarded_time+Phase)))/np.log(10))
# print(bh_pert_thry)
fig, ax = plt.subplots()
plt.title("Grav. Wave Agreement with BH perturbation theory",fontsize=18)
plt.xlabel("$(t - R_{ext})/M$",fontsize=16)
plt.ylabel('$\log_{10}|\psi_4|$',fontsize=16)
ax.plot(t_retarded, log10abspsi4r, 'k-', label='SENR/NRPy+ simulation')
ax.plot(t_retarded, bh_pert_thry, 'k--', label='BH perturbation theory')
#ax.set_xlim([0,t_retarded[len(psi4r1)-1]])
ax.set_xlim([0,final_time - float(extraction_radius)+10])
ax.set_ylim([-13.5,-1.5])
plt.xticks(size = 14)
plt.yticks(size = 14)
legend = ax.legend(loc='upper right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
# Note that you'll need `dvipng` installed to generate the following file:
savefig("BHperttheorycompare.png",dpi=150)
###Output
_____no_output_____
###Markdown
Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4")
###Output
Created Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide-Psi4.tex,
and compiled LaTeX file to PDF file Tutorial-Start_to_Finish-
BSSNCurvilinear-Two_BHs_Collide-Psi4.pdf
|
Chemical-Reactor-Design/Tutorials/Tut-1/Tut-1.1.ipynb | ###Markdown
A liquid phase reaction (stoichiometry given below) needs to be processed in a continuous reactor system (operated isothermally).$$ A + P \rightarrow 2P $$Details on the reaction rate (based on component A) are given below:$$ -r_A = kC_AC_P $$$$ k = 1 \frac{l}{mol.min} $$A feed with $C_{Ao}$ = 1 $ \frac{mol}{l}$ and $C_{Po}$ = 0.02 $ \frac{mol}{l}$ is fed to the reactor at a flow rate of Q = 2.5 $ \frac{l}{min}$. A conversion of $A$ of 85% is required.
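Before writing any reactor balances, it helps to express the concentrations in terms of the conversion $x$ of A (these are the relations the code cells below rely on; the liquid-phase volumetric flow rate $Q$ is taken as constant). Since $A + P \rightarrow 2P$ produces one net mole of P per mole of A consumed,$$C_A = C_{Ao}(1-x), \qquad C_P = C_{Po} + C_{Ao}\,x, \qquad -r_A = k\,C_{Ao}(1-x)\left(C_{Po} + C_{Ao}x\right).$$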
###Code
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt

def ri(CA, CP):
rA = -k*CA*CP
rP = -rA
return [rA, rP]
###Output
_____no_output_____
###Markdown
Parameters
###Code
CAo = 1 # mol/l
CPo = 0.02
Q = 2.5 # l/min
FAo = CAo*Q
FPo = CPo*Q
x = 0.85
k = 1
###Output
_____no_output_____
###Markdown
a) Determine the volume of a single CSTR that will be required to achieve the desired conversion
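The cell below evaluates the steady-state CSTR mole balance on A, with the rate evaluated at the exit conditions:$$V_{CSTR} = \frac{F_{Ao} - F_A}{-r_A\big|_{exit}} = \frac{F_{Ao}\,x}{k\,C_{Ao}(1-x)\left(C_{Po}+C_{Ao}x\right)}.$$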
###Code
FA = FAo*(1-x)
FP = FPo + x*FAo
rA, rP = ri(FA/Q, FP/Q)
vCSTR = (FAo-FA)/(-rA)
print(np.round(vCSTR, 2), 'l CSTR')
###Output
16.28 l CSTR
###Markdown
b) Determine the volume of a single PFR that will be required to achieve the desired conversion
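The cell below integrates the steady-state plug-flow mole balances along the reactor volume until the outlet flow of A corresponds to 85% conversion; equivalently, the design equation in integral form is$$\frac{dF_A}{dV} = r_A, \qquad \frac{dF_P}{dV} = r_P, \qquad V_{PFR} = F_{Ao}\int_0^{0.85}\frac{dx'}{-r_A(x')}.$$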
###Code
def DE(V, var):
FA, FP = var
CA = FA/Q
CP = FP/Q
rA, rP = ri(CA, CP)
dFAdV = rA
dFPdV = rP
return [dFAdV, dFPdV]
def event85(V, var):
FA, FP = var
return FA - FAo*(1-x)
xb = 0.99
def eventterm(V, var):
FA, FP = var
return FA - FAo*(1-xb)
eventterm.terminal = True
Vbound = [0, 200]
init = np.array([FAo, FPo])
PFR = scipy.integrate.solve_ivp(DE, Vbound, init, dense_output= True, events=[event85, eventterm])
FA, FP = PFR.y
VPFR = PFR.t_events[0][0]
print(np.round(VPFR, 2), 'l PFR')
###Output
13.89 l PFR
###Markdown
c) Determine the minimum total reactor volume required that will result in the desired conversion if you are allowed to use more than one reactor in series. Also specify the type of reactors used, the volume of each individual reactor and the sequence of the reactors that you suggest.
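Because the kinetics are autocatalytic, $-r_A$ passes through a maximum at an intermediate conversion, so $1/(-r_A)$ on the Levenspiel plot below has a minimum. The total volume is minimised by placing a CSTR first, operated at the conversion $x_1$ of maximum rate (a rectangle on the plot), followed by a PFR from $x_1$ up to $x = 0.85$ (the area under the curve):$$V_{CSTR} = \frac{F_{Ao}\,x_1}{-r_A(x_1)}, \qquad V_{PFR} = F_{Ao}\int_{x_1}^{0.85}\frac{dx}{-r_A}, \qquad F_{Ao} = C_{Ao}Q.$$Note that the code below multiplies the plot areas by $Q$ rather than $F_{Ao}$; the two coincide here only because $C_{Ao} = 1\ \mathrm{mol/l}$.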
###Code
xvals = np.arange(0.01, 1, 0.01)
CAs = CAo*(1-xvals)
CPs = CPo + CAo*xvals
rAs = -k*CAs*CPs
negra = 1/(-rAs)
plt.figure(1)
plt.plot(xvals, negra)
plt.xlabel('x')
plt.ylabel('1/-rA')
plt.title('Levenspiel Plot')
plt.show()
inv_maxrate = min(negra)
xmaxrate = xvals[(negra == inv_maxrate)][0]
VolCSTR = xmaxrate*inv_maxrate*Q
print(np.round(VolCSTR, 2), 'l CSTR first')
y = []
for i in range(len(xvals)):
if xvals[i] >= xmaxrate and xvals[i] <= x:
y += [negra[i]]
VolPFR = scipy.integrate.trapz(y, dx=0.01)*Q
print(np.round(VolPFR, 2), 'l PFR after CSTR')
VolT = VolCSTR + VolPFR
print(np.round(VolT, 2), 'l in Total')
###Output
9.02 l in Total
|
semana_5/dia_4/RESU_Ejercicios Numpy III.ipynb | ###Markdown
![imagen](../../imagenes/ejercicios.png) Exercise 1: 1. Create an array going from 0 to 9, called `my_array`. 2. Print the elements [9 7 5 3 1] on screen, in that order.
###Code
import numpy as np
# 1.
my_array = np.arange(10)
print(my_array)
# 2.
print(my_array[::-2])
###Output
[0 1 2 3 4 5 6 7 8 9]
[9 7 5 3 1]
###Markdown
Exercise 2: Print the sequence [8 7] from the array `my_array`.
###Code
print(my_array)
print(my_array[-2:-4:-1])
print(my_array[8:6:-1])
###Output
[8 7]
###Markdown
Exercise 3: Print the sequence [2 1 0] from `my_array`.
###Code
print(my_array)
print(my_array[-8::-1])
# Another solution, in two steps
my_array[0:3][::-1]
array1 = my_array[0:3]
array2 = array1[::-1]
print(array1)
print(array2)
print(my_array[2::-1])
###Output
[2 1 0]
###Markdown
Exercise 4: 1. Create a 4x5 matrix with a sequence from 1 to 20. 2. Completely reverse the matrix, both the rows and the columns.
###Code
my_matrix = np.arange(1, 21).reshape((4,5))
print(my_matrix)
print(my_matrix[::-1, ::-1])
###Output
[[20 19 18 17 16]
[15 14 13 12 11]
[10 9 8 7 6]
[ 5 4 3 2 1]]
###Markdown
Exercise 5: Starting from the matrix of exercise 4, obtain the following array:```Pythonarray([[1, 2], [6, 7]])```
###Code
my_matrix = np.arange(1, 21).reshape((4,5))
print(my_matrix)
my_matrix[:2:, :2:]
my_matrix[:2, :2]
###Output
_____no_output_____
###Markdown
Exercise 6: Starting from the matrix of exercise 4, obtain the following array:```Pythonarray([[ 1, 3, 5], [11, 13, 15]])```
###Code
my_matrix = np.arange(1, 21).reshape((4,5))
print(my_matrix)
my_matrix[::2, ::2]
###Output
_____no_output_____
###Markdown
Exercise 7: Starting from the matrix of exercise 4, obtain the following array:```Pythonarray([[ 5, 4, 3, 2, 1], [10, 9, 8, 7, 6], [15, 14, 13, 12, 11]])```
###Code
my_matrix = np.arange(1, 21).reshape((4,5))
print(my_matrix)
print(my_matrix[::, ::-1])
print(my_matrix[:-1:, ::-1])
###Output
[[ 5 4 3 2 1]
[10 9 8 7 6]
[15 14 13 12 11]
[20 19 18 17 16]]
[[ 5 4 3 2 1]
[10 9 8 7 6]
[15 14 13 12 11]]
###Markdown
Exercise 8: Given the following array:```Pythonx = np.array(["Loro", "Perro", "Gato", "Loro", "Perro"])```Filter the array to keep only the parrots ("Loro").
###Code
x = np.array(["Loro", "Perro", "Gato", "Loro", "Perro"])
# Method 1
mask = np.array([True, False, False, True, False])
x[mask]
# Method 2
mask = x == "Loro"
print(mask)
x[mask]
# Method 3
mask = np.where(x == "Loro")
print(mask)
x[mask]
###Output
(array([0, 3], dtype=int64),)
###Markdown
Exercise 9: Create a sequence of 20 elements and transform it into an array composed of 2 matrices of 5x2.
###Code
x = np.arange(20).reshape((2,5,2))
print(x)
###Output
[[[ 0 1]
[ 2 3]
[ 4 5]
[ 6 7]
[ 8 9]]
[[10 11]
[12 13]
[14 15]
[16 17]
[18 19]]]
###Markdown
Exercise 10: Starting from the matrix of exercise 4, obtain the following array:```Pythonarray([[20, 19, 18, 17, 16], [15, 14, 13, 12, 11], [10, 9, 8, 7, 6], [ 5, 4, 3, 2, 1]])```
###Code
my_matrix = np.arange(1, 21).reshape((4,5))
print(my_matrix)
my_matrix[::-1, ::-1]
###Output
_____no_output_____ |
lessons/ETLPipelines/11_duplicatedata_exercise/11_duplicatedata_exercise.ipynb | ###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
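As a minimal sketch of the two pandas ideas this exercise needs (the data frame, column names, and values below are made up purely for illustration, not taken from the World Bank data):

```python
import pandas as pd

# Toy frame with one exact duplicate row (hypothetical values)
toy = pd.DataFrame({'countryname': ['India', 'India', 'Brazil'],
                    'totalamt': [2_000_000_000, 2_000_000_000, 1_500_000_000]})
over_1b = toy[toy['totalamt'] > 1_000_000_000]  # filter on the dollar threshold
print(over_1b.drop_duplicates().shape[0])       # exact duplicate row collapsed -> 2
print(over_1b['countryname'].nunique())         # unique countries -> 2
```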
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = None
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = None
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project. There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (i.e. append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data. You should find that there are three suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987
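A minimal sketch of the suggested bookkeeping, on made-up dates rather than the project data:

```python
import numpy as np

# Two made-up arrays of unique dates; one date appears in both
a = np.array(['1983-07-26', '1999-01-01'], dtype='datetime64[D]')
b = np.array(['1983-07-26', '2001-05-05'], dtype='datetime64[D]')
dates = np.append(a, b)  # concatenate the two lists
unique_dates, counts = np.unique(dates, return_counts=True)
print(unique_dates[counts == 2])  # dates occurring twice -> ['1983-07-26']
```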
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = None
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = None
# TODO: make a list of the results appending one list to the other
dates = None
# TODO: print out the dates that appeared twice in the results
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
%load_ext lab_black
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv("../data/projects_data.csv", dtype=str)
projects.drop("Unnamed: 56", axis=1, inplace=True)
projects["totalamt"] = pd.to_numeric(projects["totalamt"].str.replace(",", ""))
projects["countryname"] = projects["countryname"].str.split(";", expand=True)[0]
projects["boardapprovaldate"] = pd.to_datetime(projects["boardapprovaldate"])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects.query("totalamt > 1000000000")["countryname"].nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
projects.query("countryname.str.contains('ugoslavia')")
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = (
projects.query(
"(boardapprovaldate < @pd.to_datetime('1992-04-27T00:00:00Z')) and (countryname.isin(['Bosnia and Herzegovina', 'Republic of Croatia', 'Kosovo', 'Macedonia', 'Serbia', 'Republic of Slovenia']))"
)
.sort_values("boardapprovaldate")
.loc[
:,
[
"regionname",
"countryname",
"lendinginstr",
"totalamt",
"boardapprovaldate",
"location",
"GeoLocID",
"GeoLocName",
"Latitude",
"Longitude",
"Country",
"project_name",
],
]
)
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = (
projects.query(
"countryname.str.contains('ugoslavia') and @pd.to_datetime('1980-02-01T00:00:00Z') <= boardapprovaldate <= @pd.to_datetime('1980-05-23T00:00:00Z')"
)
.loc[
:,
[
"regionname",
"countryname",
"lendinginstr",
"totalamt",
"boardapprovaldate",
"location",
"GeoLocID",
"GeoLocName",
"Latitude",
"Longitude",
"Country",
"project_name",
],
]
.sort_values("boardapprovaldate")
)
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project. There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (i.e. append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data. You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = republics["boardapprovaldate"].unique()
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = yugoslavia["boardapprovaldate"].unique()
# TODO: make a list of the results appending one list to the other
dates = np.append(republic_unique_dates, yugoslavia_unique_dates)
# TODO: print out the dates that appeared twice in the results
unique_dates, count = np.unique(dates, return_counts=True)
for i in range(len(unique_dates)):
if count[i] == 2:
print(unique_dates[i])
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat(
[
yugoslavia[yugoslavia["boardapprovaldate"] == datetime.date(1983, 7, 26)],
republics[republics["boardapprovaldate"] == datetime.date(1983, 7, 26)],
]
)
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
projects_filter = projects[projects.totalamt>10**9]
# TODO: count the number of unique countries in the results
len(projects_filter.countryname.unique())
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
projects[projects.countryname.str.contains('Yugoslavia')]
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
projects.boardapprovaldate.isnull().sum()
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
retained_columns = ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate', 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
country_list= ['Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Montenegro', 'Serbia', 'Slovenia']
end_date = pd.to_datetime("April 27th, 1992")
republics = projects[retained_columns]
republics = republics[republics.countryname.isin(country_list)]
republics = republics[republics.boardapprovaldate<=end_date].sort_values('boardapprovaldate')
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = projects[retained_columns]
yugoslavia = yugoslavia[yugoslavia.countryname.str.contains('Yugoslavia')]
yugoslavia = yugoslavia[yugoslavia.boardapprovaldate>=pd.to_datetime('February 1st, 1980')]
yugoslavia = yugoslavia[yugoslavia.boardapprovaldate<=pd.to_datetime('May 23rd, 1989')].sort_values('boardapprovaldate')
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You'll should find that there are three suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = republics.boardapprovaldate.unique()
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = yugoslavia.boardapprovaldate.unique()
# TODO: make a list of the results appending one list to the other
dates = [x for x in republic_unique_dates if x in yugoslavia_unique_dates]
# TODO: print out the dates that appeared twice in the results
print(dates)
###Output
[numpy.datetime64('1983-07-26T00:00:00.000000000'), numpy.datetime64('1987-10-13T00:00:00.000000000')]
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == pd.to_datetime(datetime.date(1983, 7, 26))], republics[republics['boardapprovaldate'] == pd.to_datetime(datetime.date(1983, 7, 26))]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects[projects['totalamt'] > 1000000000]['countryname'].nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
projects[projects['countryname'].str.contains('Yugoslavia')]
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = projects[(projects['boardapprovaldate'] < datetime.date(1992, 4, 27)) &
((projects['countryname'].str.contains('Bosnia')) |
(projects['countryname'].str.contains('Croatia')) |
(projects['countryname'].str.contains('Kosovo')) |
(projects['countryname'].str.contains('Macedonia')) |
(projects['countryname'].str.contains('Montenegro')) |
(projects['countryname'].str.contains('Serbia')) |
(projects['countryname'].str.contains('Slovenia')))]
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = None
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project. There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (i.e. append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data. You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = republics['boardapprovaldate'].unique()
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = yugoslavia['boardapprovaldate'].unique()
# TODO: make a list of the results appending one list to the other
dates = np.append(republic_unique_dates, yugoslavia_unique_dates)
# TODO: print out the dates that appeared twice in the results
unique_dates, count = np.unique(dates, return_counts=True)
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = None
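# A sketch of one possible answer (it mirrors the worked solutions later in this
# collection; whether to match country names exactly or with str.contains is a
# judgment call, and the column list comes from the TODO above)
cols = ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
        'location', 'GeoLocID', 'GeoLocName', 'Latitude', 'Longitude', 'Country', 'project_name']
before_breakup = projects['boardapprovaldate'].dt.date < datetime.date(1992, 4, 27)
is_republic = projects['countryname'].str.contains('Bosnia|Croatia|Kosovo|Macedonia|Serbia|Slovenia')
republics = projects[before_breakup & is_republic][cols].sort_values('boardapprovaldate')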
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = None
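# A sketch of one possible answer (reuses the cols list from the sketch in the
# previous code cell, per the TODO's "keep the same columns" instruction)
import datetime
in_range = ((projects['boardapprovaldate'].dt.date >= datetime.date(1980, 2, 1)) &
            (projects['boardapprovaldate'].dt.date <= datetime.date(1989, 5, 23)))
yugoslavia = projects[projects['countryname'].str.contains('Yugoslavia') & in_range][cols].sort_values('boardapprovaldate')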
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = None
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = None
# TODO: make a list of the results appending one list to the other
dates = None
# TODO: print out the dates that appeared twice in the results
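# A sketch of one approach (it mirrors the worked solutions later in this collection)
republic_unique_dates = republics['boardapprovaldate'].unique()
yugoslavia_unique_dates = yugoslavia['boardapprovaldate'].unique()
dates = np.append(republic_unique_dates, yugoslavia_unique_dates)
unique_dates, counts = np.unique(dates, return_counts=True)
for d, c in zip(unique_dates, counts):
    if c == 2:
        print(d)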
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects[projects.totalamt > 1000000000].countryname.nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
projects[projects.countryname.str.contains('Yugoslavia')]
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = projects[(projects.boardapprovaldate < '04/27/1992') & ((projects.countryname == 'Bosnia and Herzegovina') | (projects.countryname == 'Croatia') | (projects.countryname == 'Kosovo') | (projects.countryname == 'Macedonia') | (projects.countryname == 'Serbia') | (projects.countryname == 'Slovenia'))]
republics = republics[['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate', 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']]
republics = republics.sort_values('boardapprovaldate')
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = projects[(projects.countryname.str.contains('Yugoslavia')) & (projects.boardapprovaldate >= '02/01/1980') & (projects.boardapprovaldate <= '05/23/1989') ]
yugoslavia = yugoslavia[['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate', 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']].sort_values('boardapprovaldate')
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = republics.boardapprovaldate.unique()
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = yugoslavia.boardapprovaldate.unique()
# TODO: make a list of the results appending one list to the other
dates = np.append(republic_unique_dates, yugoslavia_unique_dates)
# TODO: print out the dates that appeared twice in the results
date, count = np.unique(dates, return_counts=True)
for i in range(0,len(date)):
if count[i]==2:
print(date[i],count[i])
###Output
1983-07-26T00:00:00.000000000 2
1987-10-13T00:00:00.000000000 2
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:4: FutureWarning: Comparing Series of datetimes with 'datetime.date'. Currently, the
'datetime.date' is coerced to a datetime. In the future pandas will
not coerce, and 'the values will not compare equal to the
'datetime.date'. To retain the current behavior, convert the
'datetime.date' to a datetime with 'pd.Timestamp'.
after removing the cwd from sys.path.
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = None
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = None
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = None
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = None
# TODO: make a list of the results appending one list to the other
dates = None
# TODO: print out the dates that appeared twice in the results
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
import datetime
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects[projects['totalamt'] > 1000000000]['countryname'].nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
projects[projects['countryname'].str.contains('Yugoslavia')]
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = projects[(projects['boardapprovaldate'] < pd.Timestamp(datetime.date(1992, 4, 27))) &
((projects['countryname'].str.contains('Bosnia')) |
(projects['countryname'].str.contains('Croatia')) |
(projects['countryname'].str.contains('Kosovo')) |
(projects['countryname'].str.contains('Macedonia')) |
(projects['countryname'].str.contains('Montenegro')) |
(projects['countryname'].str.contains('Serbia')) |
(projects['countryname'].str.contains('Slovenia')))][['regionname',
'countryname',
'lendinginstr',
'totalamt',
'boardapprovaldate',
'location',
'GeoLocID',
'GeoLocName',
'Latitude',
'Longitude',
'Country',
'project_name']].sort_values('boardapprovaldate')
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = projects[(projects['countryname'].str.contains('Yugoslavia')) &
(projects['boardapprovaldate'] >= datetime.date(1980, 2, 1)) &
(projects['boardapprovaldate'] <= datetime.date(1989, 5, 23))][['regionname',
'countryname',
'lendinginstr',
'totalamt',
'boardapprovaldate',
'location',
'GeoLocID',
'GeoLocName',
'Latitude',
'Longitude',
'Country',
'project_name']].sort_values('boardapprovaldate')
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = republics['boardapprovaldate'].unique()
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = yugoslavia['boardapprovaldate'].unique()
# TODO: make a list of the results appending one list to the other
dates = np.append(republic_unique_dates, yugoslavia_unique_dates)
# TODO: print out the dates that appeared twice in the results
unique_dates, count = np.unique(dates, return_counts=True)
for i in range(len(unique_dates)):
if count[i] == 2:
print(unique_dates[i])
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects[projects.totalamt > 10**9].countryname.nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
projects[projects.countryname.str.contains('Yugoslavia')]
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
%%time
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
filter1 = projects.boardapprovaldate.dt.date < datetime.date(1992, 4, 27)
filter2 = projects.countryname.str.contains('Bosnia|Herzegovina|Croatia|Kosovo|Macedonia|Serbia|Slovenia', regex=True)
cols = ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
republics = projects[filter1 & filter2].sort_values(by='boardapprovaldate', ascending=True).loc[:, cols]
# show the results
republics
###Output
CPU times: user 17.5 ms, sys: 1.48 ms, total: 19 ms
Wall time: 17.9 ms
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
%%time
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
filter1 = datetime.date(1980, 2, 1) <= projects.boardapprovaldate.dt.date
filter2 = projects.boardapprovaldate.dt.date <= datetime.date(1989, 5, 23)
filter3 = projects.countryname.str.contains('Yugoslavia')
yugoslavia = projects[filter1 & filter2 & filter3].sort_values(by='boardapprovaldate', ascending=True).loc[:, cols]
# show the results
yugoslavia
###Output
CPU times: user 17.6 ms, sys: 1.18 ms, total: 18.8 ms
Wall time: 17.8 ms
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
%%time
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = republics.boardapprovaldate.unique()
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = yugoslavia.boardapprovaldate.unique()
# TODO: make a list of the results appending one list to the other
dates = np.append(republic_unique_dates, yugoslavia_unique_dates)
# TODO: print out the dates that appeared twice in the results
for i in np.intersect1d(republic_unique_dates, yugoslavia_unique_dates):
print(i)
###Output
1983-07-26 00:00:00+00:00
1987-03-31 00:00:00+00:00
1987-10-13 00:00:00+00:00
1989-05-23 00:00:00+00:00
CPU times: user 1.04 ms, sys: 543 µs, total: 1.58 ms
Wall time: 1.05 ms
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'].dt.date == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'].dt.date == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects[projects['totalamt']>1000000000]['countryname'].nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = None
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = None
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = None
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = None
# TODO: make a list of the results appending one list to the other
dates = None
# TODO: print out the dates that appeared twice in the results
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects[projects['totalamt'] > 1000000000]['countryname'].nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = None
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = None
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = None
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = None
# TODO: make a list of the results appending one list to the other
dates = None
# TODO: print out the dates that appeared twice in the results
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects.drop_duplicates(inplace=True)
projects[projects['totalamt']>1000000000]['countryname'].nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = None
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = None
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = None
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = None
# TODO: make a list of the results appending one list to the other
dates = None
# TODO: print out the dates that appeared twice in the results
###Output
_____no_output_____
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == datetime.date(1983, 7, 26)], republics[republics['boardapprovaldate'] == datetime.date(1983, 7, 26)]])
###Output
_____no_output_____
###Markdown
Duplicate DataA data set might have duplicate data: in other words, the same record is represented multiple times. Sometimes, it's easy to find and eliminate duplicate data like when two records are exactly the same. At other times, like what was discussed in the video, duplicate data is hard to spot. Exercise 1From the World Bank GDP data, count the number of countries that have had a project totalamt greater than 1 billion dollars (1,000,000,000). To get the count, you'll have to remove duplicate data rows.
###Code
import pandas as pd
# read in the projects data set and do some basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# TODO: filter the data frame for projects over 1 billion dollars
# TODO: count the number of unique countries in the results
projects[projects['totalamt'] > 1e+9]['countryname'].nunique()
###Output
_____no_output_____
###Markdown
Exercise 2 (challenge)This exercise is more challenging. The projects data set contains data about Yugoslavia, which was an Eastern European country until 1992. Yugoslavia eventually broke up into 7 countries: Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Montenegro, Serbia, and Slovenia.But the projects dataset has some ambiguity in how it treats Yugoslavia and the 7 countries that came from Yugoslavia. Your task is to find Yugoslavia projects that are probably represented multiple times in the data set.
###Code
# TODO: output all projects for the 'Socialist Federal Republic of Yugoslavia'
# HINT: You can use the exact country name or use the pandas str.contains() method to search for Yugoslavia
projects[projects['countryname'].str.contains('Yugoslavia')]
###Output
_____no_output_____
###Markdown
Yugoslavia officially ended on [April 27th, 1992](https://en.wikipedia.org/wiki/Yugoslavia). In the code cell below, filter for projects with a 'boardapprovaldate' prior to April 27th, 1992 **and** with 'countryname' Bosnia and Herzegovina, Croatia, Kosovo, Macedonia, Serbia **or** Slovenia. You'll see there are a total of 12 projects in the data set that match this criteria. Save the results in the republics variable
###Code
import datetime
# TODO: filter the projects data set for project boardapprovaldate prior to April 27th, 1992 AND with countryname
# of either 'Bosnia and Herzegovina', 'Croatia', 'Kosovo', 'Macedonia', 'Serbia', or 'Slovenia'. Store the
# results in the republics variable
#
# TODO: so that it's easier to see all the data, keep only these columns:
# ['regionname', 'countryname', 'lendinginstr', 'totalamt', 'boardapprovaldate',
# 'location','GeoLocID', 'GeoLocName', 'Latitude','Longitude','Country', 'project_name']
# TODO: sort the results by boardapprovaldate
republics = projects[(projects['boardapprovaldate'] <= pd.Timestamp(datetime.date(1992, 4, 27))) & (
(projects['countryname'].str.contains('Bosnia')) |
(projects['countryname'].str.contains('Croatia')) |
(projects['countryname'].str.contains('Kosovo')) |
(projects['countryname'].str.contains('Macedonia')) |
(projects['countryname'].str.contains('Serbia')) |
(projects['countryname'].str.contains('Slovenia')))]
# show the results
republics
###Output
_____no_output_____
###Markdown
Are these projects also represented in the data labeled Yugoslavia? In the code cell below, filter for Yugoslavia projects approved between February 1st, 1980 and May 23rd, 1989 which are the minimum and maximum dates in the results above. Store the results in the yugoslavia variable.The goal is to see if there are any projects represented more than once in the data set.
###Code
# TODO: Filter the projects data for Yugoslavia projects between
# February 1st, 1980 and May 23rd, 1989. Store the results in the
# Yugoslavia variable. Keep the same columns as the previous code cell.
# Sort the values by boardapprovaldate
yugoslavia = projects[(projects['countryname'].str.contains('Yugoslavia')) & ((projects['boardapprovaldate'] >= pd.Timestamp(datetime.date(1980, 2, 1))) &
(projects['boardapprovaldate'] <= pd.Timestamp(datetime.date(1989, 5, 23))))]
# show the results
yugoslavia
###Output
_____no_output_____
###Markdown
And as a final step, try to see if there are any projects in the republics variable and yugoslavia variable that could be the same project.There are multiple ways to do that. As a suggestion, find unique dates in the republics variable. Then separately find unique dates in the yugoslavia variable. Concatenate (ie append) the results together. And then count the number of times each date occurs in this list. If a date occurs twice, that means the same boardapprovaldate appeared in both the Yugoslavia data as well as in the republics data.You should find that there are four suspicious cases:* July 26th, 1983* March 31st, 1987* October 13th, 1987* May 23rd, 1989
###Code
import numpy as np
# TODO: find the unique dates in the republics variable
republic_unique_dates = republics.groupby(['boardapprovaldate']).agg({'boardapprovaldate': 'unique'})
# TODO: find the unique dates in the yugoslavia variable
yugoslavia_unique_dates = yugoslavia.groupby(['boardapprovaldate']).agg({'boardapprovaldate': 'unique'})
# TODO: make a list of the results appending one list to the other
dates = np.append(republic_unique_dates, yugoslavia_unique_dates)
# TODO: print out the dates that appeared twice in the results
unique, count = np.unique(dates, return_counts=True)
for date in range(len(unique)):
if count[date] == 2:
print(unique[date])
###Output
['1983-07-26T00:00:00.000000000']
['1987-03-31T00:00:00.000000000']
['1987-10-13T00:00:00.000000000']
['1989-05-23T00:00:00.000000000']
###Markdown
ConclusionOn July 26th, 1983, for example, projects were approved for Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, and Yugoslavia. The code below shows the projects for that date. You'll notice that Yugoslavia had two projects, one of which was called "Power Transmission Project (03) Energy Managem...". The projects in the other countries were all called "POWER TRANS.III". This looks like a case of duplicate data. What you end up doing with this knowledge would depend on the context. For example, if you wanted to get a true count for the total number of projects in the data set, should all of these projects be counted as one project? Run the code cell below to see the projects in question.
###Code
import datetime
# run this code cell to see the duplicate data
pd.concat([yugoslavia[yugoslavia['boardapprovaldate'] == pd.Timestamp(datetime.date(1983, 7, 26))],
republics[republics['boardapprovaldate'] == pd.Timestamp(datetime.date(1983, 7, 26))]])
###Output
_____no_output_____ |
PyTips_1_enumerate/PyTips 1 - Get loop counter with enumerate.ipynb | ###Markdown
PyTips 1 - Get loop counter with enumerate() Setup
###Code
my_ip = '10.16.32.113'
my_ip_octets = my_ip.split('.')
###Output
_____no_output_____
###Markdown
Looping over collection with helper variable.
###Code
ip_to_dec = 0
for i in range(len(my_ip_octets)):
ip_to_dec += int(my_ip_octets[i]) * 256**i
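# note: the loop index starts at 0 on the FIRST octet, so the first octet is treated as the
# least significant byte; iterate over reversed(my_ip_octets) for the conventional big-endian value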
print('{:<20}{:<20}'.format('Dotted-decimal', 'Decimal'))
print('{:<20}{:<20}'.format(my_ip, ip_to_dec))
###Output
Dotted-decimal Decimal
10.16.32.113 1897926666
###Markdown
Looping over collection with enumerate.
###Code
ip_to_dec = 0
for i, octet in enumerate(my_ip_octets):
ip_to_dec += int(octet) * 256**i
print('{:<20}{:<20}'.format('Dotted-decimal', 'Decimal'))
print('{:<20}{:<20}'.format(my_ip, ip_to_dec))
###Output
Dotted-decimal Decimal
10.16.32.113 1897926666
###Markdown
Enumerate with counter starting from 1.
###Code
todo_list = [
'Snooze alarm',
'Reluctantly get up',
'Check email',
'Check Twitter',
'Check Facebook',
'Have coffee',
]
print('My TODO for today:')
for i, elem in enumerate(todo_list, start=1):
print('{:>3}: {}'.format(i, elem))
###Output
My TODO for today:
1: Snooze alarm
2: Reluctantly get up
3: Check email
4: Check Twitter
5: Check Facebook
6: Have coffee
|
Code/Section 2/Selecting best features for training the model.ipynb | ###Markdown
Import modules
###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import RandomizedLasso  # requires an older scikit-learn release (removed in newer versions)
from sklearn.datasets import load_boston
from sklearn.feature_selection import RFE
###Output
_____no_output_____
###Markdown
Read in the dataset
###Code
data = pd.read_csv('data-titanic.csv')
data = data.drop(['name', 'ticket', 'cabin', 'body', 'boat', 'home.dest'], axis=1)
data = data.dropna()
from sklearn import preprocessing
encoded_data = data.copy()
le = preprocessing.LabelEncoder()
encoded_data.sex = le.fit_transform(encoded_data.sex)
encoded_data.embarked = le.fit_transform(encoded_data.embarked)
features = encoded_data.drop(['survived'], axis=1).values
labels = encoded_data['survived'].values
###Output
_____no_output_____
###Markdown
Using all features
###Code
lin_reg = LinearRegression()
cross_val_score(lin_reg, features, labels, cv=10, scoring='neg_mean_squared_error')
-cross_val_score(lin_reg, features, labels, cv=10, scoring='neg_mean_squared_error')
np.sqrt(-cross_val_score(lin_reg, features, labels, cv=10, scoring='neg_mean_squared_error'))
np.sqrt(-cross_val_score(lin_reg, features, labels, cv=10, scoring='neg_mean_squared_error')).mean()
###Output
_____no_output_____
###Markdown
Choosing features manually
###Code
encoded_data.columns
features = encoded_data[['sex', 'pclass']].values
np.sqrt(-cross_val_score(lin_reg, features, labels, cv=10, scoring='neg_mean_squared_error')).mean()
###Output
_____no_output_____
###Markdown
Feature Selection using Recursive Feature Elimination
###Code
# rank every feature, so rebuild the full feature matrix and keep the column names
names = encoded_data.drop(['survived'], axis=1).columns
features = encoded_data.drop(['survived'], axis=1).values
model = LinearRegression()
rfe = RFE(model, n_features_to_select=1)
rfe.fit(features, labels)
sorted(zip(map(lambda x: round(x, 4), rfe.ranking_), names))
###Output
_____no_output_____
###Markdown
Feature selection using Randomized Lasso
###Code
boston = load_boston()
features = boston["data"]
labels = boston["target"]
model = RandomizedLasso(alpha=0.025)
model.fit(features, labels)
sorted(zip(map(lambda x: round(x, 4), model.scores_),
boston["feature_names"]), reverse=True)
###Output
_____no_output_____ |
MonkeyNet.ipynb | ###Markdown
10 Monkey classification with custom DNN Architecture Setting Operating System Variable 'KAGGLE_USERNAME' as theroyakash. You can download your 'KAGGLE_KEY' from Kaggle's account settings.
###Code
import os
os.environ['KAGGLE_USERNAME'] = "theroyakash"
os.environ['KAGGLE_KEY'] = "################CONFIDENTIAL################"
!kaggle datasets download -d slothkong/10-monkey-species
!ls
from zipfile import ZipFile
with ZipFile('10-monkey-species.zip', 'r') as zipObj:
# Extract all the contents of zip file in current directory
zipObj.extractall()
!ls
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, GlobalAveragePooling2D, BatchNormalization
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, Concatenate, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers, activations
from matplotlib import pyplot as plt
# print("Tensorflow version " + tf.__version__)
# try:
# tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
# print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
# except ValueError:
# raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
training_datadir = '/content/training/training/'
validation_datadir = '/content/validation/validation/'
labels_path = '/content/monkey_labels.txt'
f = open("monkey_labels.txt", "r")
print(f.read())
labels_latin = ['alouatta_palliata',
'erythrocebus_patas',
'cacajao_calvus',
'macaca_fuscata',
'cebuella_pygmea',
'cebus_capucinus',
'mico_argentatus',
'saimiri_sciureus',
'aotus_nigriceps',
'trachypithecus_johnii']
labels_common = ['mantled_howler',
'patas_monkey',
'bald_uakari',
'japanese_macaque',
'pygmy_marmoset',
'white_headed_capuchin',
'silvery_marmoset',
'common_squirrel_monkey',
'black_headed_night_monkey',
'nilgiri_langur']
len(labels_common)
len(labels_latin)
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator
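# Training-time data augmentation: random rotations, shifts, shears, zooms and horizontal flips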
training_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = training_datagen.flow_from_directory(
training_datadir,
target_size=(200,200),
class_mode='categorical',
batch_size = 32
)
validation_datagen = ImageDataGenerator(
rescale = 1./255
)
validation_generator = validation_datagen.flow_from_directory(
validation_datadir,
target_size = (200,200),
class_mode='categorical',
batch_size=32
)
import math
math.ceil(34.3125)
# model = tf.keras.models.Sequential([
# # Note the input shape is the desired size of the image 150x150 with 3 bytes color
# # This is the first convolution
# tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(200, 200, 3)),
# tf.keras.layers.MaxPooling2D(2, 2),
# # The second convolution
# tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
# tf.keras.layers.MaxPooling2D(2,2),
# # The third convolution
# tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
# tf.keras.layers.MaxPooling2D(2,2),
# # The fourth convolution
# tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
# tf.keras.layers.MaxPooling2D(2,2),
# # Flatten the results to feed into a DNN
# tf.keras.layers.Flatten(),
# tf.keras.layers.Dropout(0.5),
# # 512 neuron hidden layer
# tf.keras.layers.Dense(512, activation='relu'),
# tf.keras.layers.Dense(10, activation='softmax')
# ])
# model.summary()
# # Create a resolver
# # Distribution strategies
# resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
# tf.config.experimental_connect_to_cluster(resolver)
# tf.tpu.experimental.initialize_tpu_system(resolver)
# strategy = tf.distribute.experimental.TPUStrategy(resolver)
!ls
import os
os.getcwd()
!ls
img_input = Input(shape=(200,200,3))
conv2d_1 = Conv2D(64, (3,3), activation='relu', padding='valid', name='conv2d_1')(img_input)
maxpool1 = MaxPooling2D(pool_size=(2,2))(conv2d_1)
conv2d_2 = Conv2D(128, (3,3), activation='relu', padding='valid', name='conv2d_2')(maxpool1)
conv2d_3 = Conv2D(128, (3,3), activation='relu', padding='valid', name='conv2d_3')(conv2d_2)
maxpool1 = MaxPooling2D(pool_size=(2,2))(conv2d_3)
conv2d_5 = Conv2D(128, (3,3), activation='relu', padding='valid', name='conv2d_5')(maxpool1)
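# Inception-style block A: two parallel branches (a 1x1 conv and a 1x1 -> 3x3 -> 3x3 stack) merged by concatenation below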
branch0 = Conv2D(64, (1,1), padding='same', name='Branch_Zero_1_by_1_Conv2D')(conv2d_5)
branch1 = Conv2D(64, (1,1), activation='relu', padding='same', name='BranchOne3By3Conv2D1')(conv2d_5)
branch1 = Conv2D(64, (3,3), activation='relu', padding='same', name='BranchOne3By3Conv2D2')(branch1)
branch1 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', name='BranchOne3By3Conv2D3')(branch1)
concatenated_branchA = Concatenate()([branch0, branch1])
concatination_activation = Activation('relu')(concatenated_branchA)
pool0 = MaxPooling2D(pool_size=(2, 2))(concatination_activation)
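# Inception-style block B: the same two-branch pattern repeated after pooling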
branch00 = Conv2D(64, (1,1), padding='same', name='BranchZeroZero1By1Conv2D')(pool0)
branch11 = Conv2D(64, (1,1), activation='relu', padding='same', name='BranchOneOne3By3Conv2D1')(pool0)
branch11 = Conv2D(64, (3,3), activation='relu', padding='same', name='BranchOneOne3By3Conv2D2')(branch11)
branch11 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', name='BranchOneOne3By3Conv2D3')(branch11)
concatenated_branchB = Concatenate()([branch00, branch11])
concatenation_activation_branchB = Activation('relu')(concatenated_branchB)
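# Classification head: flatten, three dense layers, then a 10-way softmax over the monkey species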
flattened_before_dense = Flatten()(concatenation_activation_branchB)
dense1 = Dense(1024, activation='relu', name='firstDenseLayer')(flattened_before_dense)
dense2 = Dense(512, activation='relu', name='SecondDenseLayer')(dense1)
dense3 = Dense(128, activation='relu', name='ThirdDenseLayer')(dense2)
prediction_branch = Dense(10,activation='softmax', name='FinalSoftmaxLayer')(dense3)
model = Model(inputs=img_input, outputs=prediction_branch)
model.summary()
learning_rate, epochs = 0.001, 30
# compile our model
print("compiling model...")
model.compile(loss="categorical_crossentropy",
optimizer=Adam(lr=learning_rate, decay=learning_rate / epochs),
metrics=["accuracy"])
print("Model Compiled Successfully")
print("[SUMMARY]:")
print(model.summary())
history = model.fit(train_generator,
epochs=epochs,
steps_per_epoch=35,
validation_data = validation_generator,
verbose = 1,
validation_steps=35)
from tensorflow.keras.utils import plot_model
plot_model(model , 'MonkeyNet.png' , show_shapes=True)
from google.colab import files
files.download('MonkeyNet.png')
from google.colab import drive
drive.mount('/content/drive')
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
model.save('MonkeyNet.h5')
model_file = drive.CreateFile({'title' : 'MonkeyNet.h5'})
model_file.SetContentFile('MonkeyNet.h5')
model_file.Upload()
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.title('Training and validation accuracy')
plt.plot(epochs, acc, 'red', label='Training acc')
plt.plot(epochs, val_acc, 'blue', label='Validation acc')
plt.legend()
plt.figure()
plt.title('Training and validation loss')
plt.plot(epochs, loss, 'red', label='Training loss')
plt.plot(epochs, val_loss, 'blue', label='Validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____ |
src/gluonts/nursery/QRX-Wrapped-M5-Accuracy-Solution/3. code/2. train/1-2. recursive_store_cat_TRAIN.ipynb | ###Markdown
Please input your directory for the top level folderfolder name : SUBMISSION MODEL
###Code
dir_ = 'INPUT-PROJECT-DIRECTORY/submission_model/' # input only here
###Output
_____no_output_____
###Markdown
setting other directory
###Code
raw_data_dir = dir_+'2. data/'
processed_data_dir = dir_+'2. data/processed/'
log_dir = dir_+'4. logs/'
model_dir = dir_+'5. models/'
####################################################################################
####################### 1-2. recursive model by store & cat ########################
####################################################################################
ver, KKK = 'priv', 0
STORES = ['CA_1', 'CA_2', 'CA_3', 'CA_4', 'TX_1', 'TX_2', 'TX_3', 'WI_1', 'WI_2', 'WI_3']
CATS = ['HOBBIES','HOUSEHOLD', 'FOODS']
# General imports
import numpy as np
import pandas as pd
import os, sys, gc, time, warnings, pickle, psutil, random
# custom imports
from multiprocessing import Pool
warnings.filterwarnings('ignore')
########################### Helpers
#################################################################################
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
## Multiprocess Runs
def df_parallelize_run(func, t_split):
num_cores = np.min([N_CORES,len(t_split)])
pool = Pool(num_cores)
df = pd.concat(pool.map(func, t_split), axis=1)
pool.close()
pool.join()
return df
########################### Helper to load data by store ID
#################################################################################
# Read data
def get_data_by_store(store, dept):
df = pd.concat([pd.read_pickle(BASE),
pd.read_pickle(PRICE).iloc[:,2:],
pd.read_pickle(CALENDAR).iloc[:,2:]],
axis=1)
df = df[df['d']>=START_TRAIN]
df = df[(df['store_id']==store) & (df['cat_id']==dept)]
df2 = pd.read_pickle(MEAN_ENC)[mean_features]
df2 = df2[df2.index.isin(df.index)]
df3 = pd.read_pickle(LAGS).iloc[:,3:]
df3 = df3[df3.index.isin(df.index)]
df = pd.concat([df, df2], axis=1)
del df2
df = pd.concat([df, df3], axis=1)
del df3
features = [col for col in list(df) if col not in remove_features]
df = df[['id','d',TARGET]+features]
df = df.reset_index(drop=True)
return df, features
# Recombine Test set after training
def get_base_test():
base_test = pd.DataFrame()
for store_id in STORES:
for state_id in CATS:
temp_df = pd.read_pickle(processed_data_dir+'test_'+store_id+'_'+state_id+'.pkl')
temp_df['store_id'] = store_id
temp_df['cat_id'] = state_id
base_test = pd.concat([base_test, temp_df]).reset_index(drop=True)
return base_test
########################### Helper to make dynamic rolling lags
#################################################################################
def make_lag(LAG_DAY):
lag_df = base_test[['id','d',TARGET]]
col_name = 'sales_lag_'+str(LAG_DAY)
lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(lambda x: x.shift(LAG_DAY)).astype(np.float16)
return lag_df[[col_name]]
def make_lag_roll(LAG_DAY):
shift_day = LAG_DAY[0]
roll_wind = LAG_DAY[1]
lag_df = base_test[['id','d',TARGET]]
col_name = 'rolling_mean_tmp_'+str(shift_day)+'_'+str(roll_wind)
lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(lambda x: x.shift(shift_day).rolling(roll_wind).mean())
return lag_df[[col_name]]
########################### Model params
#################################################################################
import lightgbm as lgb
lgb_params = {
'boosting_type': 'gbdt',
'objective': 'tweedie',
'tweedie_variance_power': 1.1,
'metric': 'rmse',
'subsample': 0.5,
'subsample_freq': 1,
'learning_rate': 0.015,
'num_leaves': 2**8-1,
'min_data_in_leaf': 2**8-1,
'feature_fraction': 0.5,
'max_bin': 100,
'n_estimators': 3000,
'boost_from_average': False,
'verbose': -1
}
########################### Vars
#################################################################################
VER = 1
SEED = 42
seed_everything(SEED)
lgb_params['seed'] = SEED
N_CORES = psutil.cpu_count()
#LIMITS and const
TARGET = 'sales'
START_TRAIN = 700
END_TRAIN = 1941 - 28*KKK
P_HORIZON = 28
USE_AUX = False
remove_features = ['id','cat_id', 'state_id','store_id',
'date','wm_yr_wk','d',TARGET]
mean_features = ['enc_store_id_dept_id_mean','enc_store_id_dept_id_std',
'enc_item_id_store_id_mean','enc_item_id_store_id_std']
ORIGINAL = raw_data_dir
BASE = processed_data_dir+'grid_part_1.pkl'
PRICE = processed_data_dir+'grid_part_2.pkl'
CALENDAR = processed_data_dir+'grid_part_3.pkl'
LAGS = processed_data_dir+'lags_df_28.pkl'
MEAN_ENC = processed_data_dir+'mean_encoding_df.pkl'
SHIFT_DAY = 28
N_LAGS = 15
LAGS_SPLIT = [col for col in range(SHIFT_DAY,SHIFT_DAY+N_LAGS)]
ROLS_SPLIT = []
for i in [1,7,14]:
for j in [7,14,30,60]:
ROLS_SPLIT.append([i,j])
########################### Train Models
#################################################################################
from lightgbm import LGBMRegressor
from gluonts.model.rotbaum._model import QRX
for store_id in STORES:
for state_id in CATS:
print('Train', store_id, state_id)
grid_df, features_columns = get_data_by_store(store_id, state_id)
train_mask = grid_df['d']<=END_TRAIN
valid_mask = train_mask&(grid_df['d']>(END_TRAIN-P_HORIZON))
preds_mask = (grid_df['d']>(END_TRAIN-100)) & (grid_df['d'] <= END_TRAIN+P_HORIZON)
# train_data = lgb.Dataset(grid_df[train_mask][features_columns],
# label=grid_df[train_mask][TARGET])
# valid_data = lgb.Dataset(grid_df[valid_mask][features_columns],
# label=grid_df[valid_mask][TARGET])
seed_everything(SEED)
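        # Wrap the LightGBM regressor in gluonts' QRX instead of calling lgb.train directly
        # (compare with the commented-out lgb.train call below)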
estimator = QRX(model=LGBMRegressor(**lgb_params),#lgb_wrapper(**lgb_params),
clump_size=200)
estimator.fit(
grid_df[train_mask][features_columns],
grid_df[train_mask][TARGET],
max_sample_size=1000000,
seed=SEED,
eval_set=(
grid_df[valid_mask][features_columns],
grid_df[valid_mask][TARGET]
),
verbose=100,
x_train_is_dataframe=True
)
# estimator = lgb.train(lgb_params,
# train_data,
# valid_sets = [valid_data],
# verbose_eval = 100
#
# )
# display(pd.DataFrame({'name':estimator.feature_name(),
# 'imp':estimator.feature_importance()}).sort_values('imp',ascending=False).head(25))
grid_df = grid_df[preds_mask].reset_index(drop=True)
keep_cols = [col for col in list(grid_df) if '_tmp_' not in col]
grid_df = grid_df[keep_cols]
d_sales = grid_df[['d','sales']]
substitute = d_sales['sales'].values
substitute[(d_sales['d'] > END_TRAIN)] = np.nan
grid_df['sales'] = substitute
grid_df.to_pickle(processed_data_dir+'test_'+store_id+'_'+state_id+'.pkl')
model_name = model_dir+'lgb_model_'+store_id+'_'+state_id+'_v'+str(VER)+'.bin'
pickle.dump(estimator, open(model_name, 'wb'))
del grid_df, d_sales, substitute, estimator#, train_data, valid_data
gc.collect()
MODEL_FEATURES = features_columns
###Output
Train CA_1 HOBBIES
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.16414
[200] valid_0's rmse: 2.08886
[300] valid_0's rmse: 2.04793
[400] valid_0's rmse: 2.0046
[500] valid_0's rmse: 1.96335
[600] valid_0's rmse: 1.92071
[700] valid_0's rmse: 1.88306
[800] valid_0's rmse: 1.84507
[900] valid_0's rmse: 1.80932
[1000] valid_0's rmse: 1.77228
[1100] valid_0's rmse: 1.73566
[1200] valid_0's rmse: 1.69836
[1300] valid_0's rmse: 1.66419
[1400] valid_0's rmse: 1.62874
[1500] valid_0's rmse: 1.5964
[1600] valid_0's rmse: 1.56647
[1700] valid_0's rmse: 1.53748
[1800] valid_0's rmse: 1.50902
[1900] valid_0's rmse: 1.48282
[2000] valid_0's rmse: 1.45599
[2100] valid_0's rmse: 1.42992
[2200] valid_0's rmse: 1.40797
[2300] valid_0's rmse: 1.38649
[2400] valid_0's rmse: 1.36769
[2500] valid_0's rmse: 1.34734
[2600] valid_0's rmse: 1.32939
[2700] valid_0's rmse: 1.31148
[2800] valid_0's rmse: 1.29578
[2900] valid_0's rmse: 1.28107
[3000] valid_0's rmse: 1.26571
Train CA_1 HOUSEHOLD
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 1.25212
[200] valid_0's rmse: 1.19975
[300] valid_0's rmse: 1.1893
[400] valid_0's rmse: 1.18055
[500] valid_0's rmse: 1.1734
[600] valid_0's rmse: 1.16612
[700] valid_0's rmse: 1.16012
[800] valid_0's rmse: 1.15384
[900] valid_0's rmse: 1.14841
[1000] valid_0's rmse: 1.14296
[1100] valid_0's rmse: 1.13743
[1200] valid_0's rmse: 1.13215
[1300] valid_0's rmse: 1.12748
[1400] valid_0's rmse: 1.12309
[1500] valid_0's rmse: 1.1188
[1600] valid_0's rmse: 1.11438
[1700] valid_0's rmse: 1.11013
[1800] valid_0's rmse: 1.10538
[1900] valid_0's rmse: 1.1014
[2000] valid_0's rmse: 1.09758
[2100] valid_0's rmse: 1.09447
[2200] valid_0's rmse: 1.09046
[2300] valid_0's rmse: 1.08659
[2400] valid_0's rmse: 1.08347
[2500] valid_0's rmse: 1.08002
[2600] valid_0's rmse: 1.07676
[2700] valid_0's rmse: 1.07313
[2800] valid_0's rmse: 1.06947
[2900] valid_0's rmse: 1.06548
[3000] valid_0's rmse: 1.06254
Train CA_1 FOODS
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.65329
[200] valid_0's rmse: 2.33343
[300] valid_0's rmse: 2.29563
[400] valid_0's rmse: 2.27611
[500] valid_0's rmse: 2.26152
[600] valid_0's rmse: 2.24957
[700] valid_0's rmse: 2.23952
[800] valid_0's rmse: 2.23127
[900] valid_0's rmse: 2.22144
[1000] valid_0's rmse: 2.21289
[1100] valid_0's rmse: 2.20414
[1200] valid_0's rmse: 2.19767
[1300] valid_0's rmse: 2.1892
[1400] valid_0's rmse: 2.18311
[1500] valid_0's rmse: 2.17698
[1600] valid_0's rmse: 2.17134
[1700] valid_0's rmse: 2.16566
[1800] valid_0's rmse: 2.15952
[1900] valid_0's rmse: 2.15505
[2000] valid_0's rmse: 2.15058
[2100] valid_0's rmse: 2.14359
[2200] valid_0's rmse: 2.13794
[2300] valid_0's rmse: 2.13262
[2400] valid_0's rmse: 2.12769
[2500] valid_0's rmse: 2.1229
[2600] valid_0's rmse: 2.1172
[2700] valid_0's rmse: 2.11114
[2800] valid_0's rmse: 2.10638
[2900] valid_0's rmse: 2.10206
[3000] valid_0's rmse: 2.09767
Train CA_2 HOBBIES
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 1.7105
[200] valid_0's rmse: 1.64561
[300] valid_0's rmse: 1.60863
[400] valid_0's rmse: 1.57242
[500] valid_0's rmse: 1.53577
[600] valid_0's rmse: 1.50163
[700] valid_0's rmse: 1.46956
[800] valid_0's rmse: 1.43805
[900] valid_0's rmse: 1.4065
[1000] valid_0's rmse: 1.37693
[1100] valid_0's rmse: 1.34912
[1200] valid_0's rmse: 1.31803
[1300] valid_0's rmse: 1.29236
[1400] valid_0's rmse: 1.26743
[1500] valid_0's rmse: 1.24402
[1600] valid_0's rmse: 1.2211
[1700] valid_0's rmse: 1.19869
[1800] valid_0's rmse: 1.18022
[1900] valid_0's rmse: 1.16087
[2000] valid_0's rmse: 1.1418
[2100] valid_0's rmse: 1.12439
[2200] valid_0's rmse: 1.10701
[2300] valid_0's rmse: 1.09325
[2400] valid_0's rmse: 1.08055
[2500] valid_0's rmse: 1.06703
[2600] valid_0's rmse: 1.05348
[2700] valid_0's rmse: 1.04145
[2800] valid_0's rmse: 1.03075
[2900] valid_0's rmse: 1.01829
[3000] valid_0's rmse: 1.00826
Train CA_2 HOUSEHOLD
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 1.40939
[200] valid_0's rmse: 1.34879
[300] valid_0's rmse: 1.33299
[400] valid_0's rmse: 1.32152
[500] valid_0's rmse: 1.31132
[600] valid_0's rmse: 1.30184
[700] valid_0's rmse: 1.29437
[800] valid_0's rmse: 1.28696
[900] valid_0's rmse: 1.28016
[1000] valid_0's rmse: 1.27403
[1100] valid_0's rmse: 1.26746
[1200] valid_0's rmse: 1.26078
[1300] valid_0's rmse: 1.25508
[1400] valid_0's rmse: 1.24882
[1500] valid_0's rmse: 1.2431
[1600] valid_0's rmse: 1.2378
[1700] valid_0's rmse: 1.23276
[1800] valid_0's rmse: 1.22776
[1900] valid_0's rmse: 1.22258
[2000] valid_0's rmse: 1.2182
[2100] valid_0's rmse: 1.21374
[2200] valid_0's rmse: 1.20922
[2300] valid_0's rmse: 1.20489
[2400] valid_0's rmse: 1.2009
[2500] valid_0's rmse: 1.197
[2600] valid_0's rmse: 1.19269
[2700] valid_0's rmse: 1.18841
[2800] valid_0's rmse: 1.18413
[2900] valid_0's rmse: 1.18026
[3000] valid_0's rmse: 1.17642
Train CA_2 FOODS
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.51865
[200] valid_0's rmse: 2.2429
[300] valid_0's rmse: 2.18822
[400] valid_0's rmse: 2.16313
[500] valid_0's rmse: 2.14247
[600] valid_0's rmse: 2.12321
[700] valid_0's rmse: 2.10913
Train CA_3 HOBBIES
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.14383
[200] valid_0's rmse: 2.07174
[300] valid_0's rmse: 2.02436
[400] valid_0's rmse: 1.97643
[500] valid_0's rmse: 1.92835
[900] valid_0's rmse: 1.76322
[1000] valid_0's rmse: 1.72243
[1100] valid_0's rmse: 1.68652
[1200] valid_0's rmse: 1.653
[1300] valid_0's rmse: 1.62103
[1400] valid_0's rmse: 1.5967
[1500] valid_0's rmse: 1.56518
[1600] valid_0's rmse: 1.54042
[1700] valid_0's rmse: 1.51367
[1800] valid_0's rmse: 1.49212
[1900] valid_0's rmse: 1.47125
[2000] valid_0's rmse: 1.45122
[2100] valid_0's rmse: 1.43232
[2200] valid_0's rmse: 1.41607
[2300] valid_0's rmse: 1.40143
[2400] valid_0's rmse: 1.38171
[2500] valid_0's rmse: 1.36624
[2600] valid_0's rmse: 1.35268
[2700] valid_0's rmse: 1.33424
[2800] valid_0's rmse: 1.3227
[2900] valid_0's rmse: 1.308
[3000] valid_0's rmse: 1.29579
###Markdown
Please input your directory for the top level folderfolder name : SUBMISSION MODEL
###Code
dir_ = 'INPUT-PROJECT-DIRECTORY/submission_model/' # input only here
###Output
_____no_output_____
###Markdown
setting other directory
###Code
raw_data_dir = dir_+'2. data/'
processed_data_dir = dir_+'2. data/processed/'
log_dir = dir_+'4. logs/'
model_dir = dir_+'5. models/'
####################################################################################
####################### 1-2. recursive model by store & cat ########################
####################################################################################
ver, KKK = 'priv', 0
STORES = ['CA_1', 'CA_2', 'CA_3', 'CA_4', 'TX_1', 'TX_2', 'TX_3', 'WI_1', 'WI_2', 'WI_3']
CATS = ['HOBBIES','HOUSEHOLD', 'FOODS']
# General imports
import numpy as np
import pandas as pd
import os, sys, gc, time, warnings, pickle, psutil, random
# custom imports
from multiprocessing import Pool
warnings.filterwarnings('ignore')
########################### Helpers
#################################################################################
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
## Multiprocess Runs
def df_parallelize_run(func, t_split):
num_cores = np.min([N_CORES,len(t_split)])
pool = Pool(num_cores)
df = pd.concat(pool.map(func, t_split), axis=1)
pool.close()
pool.join()
return df
########################### Helper to load data by store ID
#################################################################################
# Read data
def get_data_by_store(store, dept):
df = pd.concat([pd.read_pickle(BASE),
pd.read_pickle(PRICE).iloc[:,2:],
pd.read_pickle(CALENDAR).iloc[:,2:]],
axis=1)
df = df[df['d']>=START_TRAIN]
df = df[(df['store_id']==store) & (df['cat_id']==dept)]
df2 = pd.read_pickle(MEAN_ENC)[mean_features]
df2 = df2[df2.index.isin(df.index)]
df3 = pd.read_pickle(LAGS).iloc[:,3:]
df3 = df3[df3.index.isin(df.index)]
df = pd.concat([df, df2], axis=1)
del df2
df = pd.concat([df, df3], axis=1)
del df3
features = [col for col in list(df) if col not in remove_features]
df = df[['id','d',TARGET]+features]
df = df.reset_index(drop=True)
return df, features
# Recombine Test set after training
def get_base_test():
base_test = pd.DataFrame()
for store_id in STORES:
for state_id in CATS:
temp_df = pd.read_pickle(processed_data_dir+'test_'+store_id+'_'+state_id+'.pkl')
temp_df['store_id'] = store_id
temp_df['cat_id'] = state_id
base_test = pd.concat([base_test, temp_df]).reset_index(drop=True)
return base_test
########################### Helper to make dynamic rolling lags
#################################################################################
def make_lag(LAG_DAY):
lag_df = base_test[['id','d',TARGET]]
col_name = 'sales_lag_'+str(LAG_DAY)
lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(lambda x: x.shift(LAG_DAY)).astype(np.float16)
return lag_df[[col_name]]
def make_lag_roll(LAG_DAY):
shift_day = LAG_DAY[0]
roll_wind = LAG_DAY[1]
lag_df = base_test[['id','d',TARGET]]
col_name = 'rolling_mean_tmp_'+str(shift_day)+'_'+str(roll_wind)
lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(lambda x: x.shift(shift_day).rolling(roll_wind).mean())
return lag_df[[col_name]]
########################### Model params
#################################################################################
import lightgbm as lgb
lgb_params = {
'boosting_type': 'gbdt',
'objective': 'tweedie',
'tweedie_variance_power': 1.1,
'metric': 'rmse',
'subsample': 0.5,
'subsample_freq': 1,
'learning_rate': 0.015,
'num_leaves': 2**8-1,
'min_data_in_leaf': 2**8-1,
'feature_fraction': 0.5,
'max_bin': 100,
'n_estimators': 3000,
'boost_from_average': False,
'verbose': -1
}
########################### Vars
#################################################################################
VER = 1
SEED = 42
seed_everything(SEED)
lgb_params['seed'] = SEED
N_CORES = psutil.cpu_count()
#LIMITS and const
TARGET = 'sales'
START_TRAIN = 700
END_TRAIN = 1941 - 28*KKK
P_HORIZON = 28
USE_AUX = False
remove_features = ['id','cat_id', 'state_id','store_id',
'date','wm_yr_wk','d',TARGET]
mean_features = ['enc_store_id_dept_id_mean','enc_store_id_dept_id_std',
'enc_item_id_store_id_mean','enc_item_id_store_id_std']
ORIGINAL = raw_data_dir
BASE = processed_data_dir+'grid_part_1.pkl'
PRICE = processed_data_dir+'grid_part_2.pkl'
CALENDAR = processed_data_dir+'grid_part_3.pkl'
LAGS = processed_data_dir+'lags_df_28.pkl'
MEAN_ENC = processed_data_dir+'mean_encoding_df.pkl'
SHIFT_DAY = 28
N_LAGS = 15
LAGS_SPLIT = [col for col in range(SHIFT_DAY,SHIFT_DAY+N_LAGS)]
ROLS_SPLIT = []
for i in [1,7,14]:
for j in [7,14,30,60]:
ROLS_SPLIT.append([i,j])
########################### Train Models
#################################################################################
from lightgbm import LGBMRegressor
from gluonts.model.rotbaum._model import QRX
for store_id in STORES:
for state_id in CATS:
print('Train', store_id, state_id)
grid_df, features_columns = get_data_by_store(store_id, state_id)
train_mask = grid_df['d']<=END_TRAIN
valid_mask = train_mask&(grid_df['d']>(END_TRAIN-P_HORIZON))
preds_mask = (grid_df['d']>(END_TRAIN-100)) & (grid_df['d'] <= END_TRAIN+P_HORIZON)
# train_data = lgb.Dataset(grid_df[train_mask][features_columns],
# label=grid_df[train_mask][TARGET])
# valid_data = lgb.Dataset(grid_df[valid_mask][features_columns],
# label=grid_df[valid_mask][TARGET])
seed_everything(SEED)
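        # Wrap the LightGBM regressor in gluonts' QRX instead of calling lgb.train directly
        # (compare with the commented-out lgb.train call below)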
estimator = QRX(model=LGBMRegressor(**lgb_params),#lgb_wrapper(**lgb_params),
min_bin_size=200)
estimator.fit(
grid_df[train_mask][features_columns],
grid_df[train_mask][TARGET],
max_sample_size=1000000,
seed=SEED,
eval_set=(
grid_df[valid_mask][features_columns],
grid_df[valid_mask][TARGET]
),
verbose=100,
x_train_is_dataframe=True
)
# estimator = lgb.train(lgb_params,
# train_data,
# valid_sets = [valid_data],
# verbose_eval = 100
#
# )
# display(pd.DataFrame({'name':estimator.feature_name(),
# 'imp':estimator.feature_importance()}).sort_values('imp',ascending=False).head(25))
grid_df = grid_df[preds_mask].reset_index(drop=True)
keep_cols = [col for col in list(grid_df) if '_tmp_' not in col]
grid_df = grid_df[keep_cols]
d_sales = grid_df[['d','sales']]
substitute = d_sales['sales'].values
substitute[(d_sales['d'] > END_TRAIN)] = np.nan
grid_df['sales'] = substitute
grid_df.to_pickle(processed_data_dir+'test_'+store_id+'_'+state_id+'.pkl')
model_name = model_dir+'lgb_model_'+store_id+'_'+state_id+'_v'+str(VER)+'.bin'
pickle.dump(estimator, open(model_name, 'wb'))
del grid_df, d_sales, substitute, estimator#, train_data, valid_data
gc.collect()
MODEL_FEATURES = features_columns
###Output
Train CA_1 HOBBIES
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.16414
[200] valid_0's rmse: 2.08886
[300] valid_0's rmse: 2.04793
[400] valid_0's rmse: 2.0046
[500] valid_0's rmse: 1.96335
[600] valid_0's rmse: 1.92071
[700] valid_0's rmse: 1.88306
[800] valid_0's rmse: 1.84507
[900] valid_0's rmse: 1.80932
[1000] valid_0's rmse: 1.77228
[1100] valid_0's rmse: 1.73566
[1200] valid_0's rmse: 1.69836
[1300] valid_0's rmse: 1.66419
[1400] valid_0's rmse: 1.62874
[1500] valid_0's rmse: 1.5964
[1600] valid_0's rmse: 1.56647
[1700] valid_0's rmse: 1.53748
[1800] valid_0's rmse: 1.50902
[1900] valid_0's rmse: 1.48282
[2000] valid_0's rmse: 1.45599
[2100] valid_0's rmse: 1.42992
[2200] valid_0's rmse: 1.40797
[2300] valid_0's rmse: 1.38649
[2400] valid_0's rmse: 1.36769
[2500] valid_0's rmse: 1.34734
[2600] valid_0's rmse: 1.32939
[2700] valid_0's rmse: 1.31148
[2800] valid_0's rmse: 1.29578
[2900] valid_0's rmse: 1.28107
[3000] valid_0's rmse: 1.26571
Train CA_1 HOUSEHOLD
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 1.25212
[200] valid_0's rmse: 1.19975
[300] valid_0's rmse: 1.1893
[400] valid_0's rmse: 1.18055
[500] valid_0's rmse: 1.1734
[600] valid_0's rmse: 1.16612
[700] valid_0's rmse: 1.16012
[800] valid_0's rmse: 1.15384
[900] valid_0's rmse: 1.14841
[1000] valid_0's rmse: 1.14296
[1100] valid_0's rmse: 1.13743
[1200] valid_0's rmse: 1.13215
[1300] valid_0's rmse: 1.12748
[1400] valid_0's rmse: 1.12309
[1500] valid_0's rmse: 1.1188
[1600] valid_0's rmse: 1.11438
[1700] valid_0's rmse: 1.11013
[1800] valid_0's rmse: 1.10538
[1900] valid_0's rmse: 1.1014
[2000] valid_0's rmse: 1.09758
[2100] valid_0's rmse: 1.09447
[2200] valid_0's rmse: 1.09046
[2300] valid_0's rmse: 1.08659
[2400] valid_0's rmse: 1.08347
[2500] valid_0's rmse: 1.08002
[2600] valid_0's rmse: 1.07676
[2700] valid_0's rmse: 1.07313
[2800] valid_0's rmse: 1.06947
[2900] valid_0's rmse: 1.06548
[3000] valid_0's rmse: 1.06254
Train CA_1 FOODS
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.65329
[200] valid_0's rmse: 2.33343
[300] valid_0's rmse: 2.29563
[400] valid_0's rmse: 2.27611
[500] valid_0's rmse: 2.26152
[600] valid_0's rmse: 2.24957
[700] valid_0's rmse: 2.23952
[800] valid_0's rmse: 2.23127
[900] valid_0's rmse: 2.22144
[1000] valid_0's rmse: 2.21289
[1100] valid_0's rmse: 2.20414
[1200] valid_0's rmse: 2.19767
[1300] valid_0's rmse: 2.1892
[1400] valid_0's rmse: 2.18311
[1500] valid_0's rmse: 2.17698
[1600] valid_0's rmse: 2.17134
[1700] valid_0's rmse: 2.16566
[1800] valid_0's rmse: 2.15952
[1900] valid_0's rmse: 2.15505
[2000] valid_0's rmse: 2.15058
[2100] valid_0's rmse: 2.14359
[2200] valid_0's rmse: 2.13794
[2300] valid_0's rmse: 2.13262
[2400] valid_0's rmse: 2.12769
[2500] valid_0's rmse: 2.1229
[2600] valid_0's rmse: 2.1172
[2700] valid_0's rmse: 2.11114
[2800] valid_0's rmse: 2.10638
[2900] valid_0's rmse: 2.10206
[3000] valid_0's rmse: 2.09767
Train CA_2 HOBBIES
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 1.7105
[200] valid_0's rmse: 1.64561
[300] valid_0's rmse: 1.60863
[400] valid_0's rmse: 1.57242
[500] valid_0's rmse: 1.53577
[600] valid_0's rmse: 1.50163
[700] valid_0's rmse: 1.46956
[800] valid_0's rmse: 1.43805
[900] valid_0's rmse: 1.4065
[1000] valid_0's rmse: 1.37693
[1100] valid_0's rmse: 1.34912
[1200] valid_0's rmse: 1.31803
[1300] valid_0's rmse: 1.29236
[1400] valid_0's rmse: 1.26743
[1500] valid_0's rmse: 1.24402
[1600] valid_0's rmse: 1.2211
[1700] valid_0's rmse: 1.19869
[1800] valid_0's rmse: 1.18022
[1900] valid_0's rmse: 1.16087
[2000] valid_0's rmse: 1.1418
[2100] valid_0's rmse: 1.12439
[2200] valid_0's rmse: 1.10701
[2300] valid_0's rmse: 1.09325
[2400] valid_0's rmse: 1.08055
[2500] valid_0's rmse: 1.06703
[2600] valid_0's rmse: 1.05348
[2700] valid_0's rmse: 1.04145
[2800] valid_0's rmse: 1.03075
[2900] valid_0's rmse: 1.01829
[3000] valid_0's rmse: 1.00826
Train CA_2 HOUSEHOLD
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 1.40939
[200] valid_0's rmse: 1.34879
[300] valid_0's rmse: 1.33299
[400] valid_0's rmse: 1.32152
[500] valid_0's rmse: 1.31132
[600] valid_0's rmse: 1.30184
[700] valid_0's rmse: 1.29437
[800] valid_0's rmse: 1.28696
[900] valid_0's rmse: 1.28016
[1000] valid_0's rmse: 1.27403
[1100] valid_0's rmse: 1.26746
[1200] valid_0's rmse: 1.26078
[1300] valid_0's rmse: 1.25508
[1400] valid_0's rmse: 1.24882
[1500] valid_0's rmse: 1.2431
[1600] valid_0's rmse: 1.2378
[1700] valid_0's rmse: 1.23276
[1800] valid_0's rmse: 1.22776
[1900] valid_0's rmse: 1.22258
[2000] valid_0's rmse: 1.2182
[2100] valid_0's rmse: 1.21374
[2200] valid_0's rmse: 1.20922
[2300] valid_0's rmse: 1.20489
[2400] valid_0's rmse: 1.2009
[2500] valid_0's rmse: 1.197
[2600] valid_0's rmse: 1.19269
[2700] valid_0's rmse: 1.18841
[2800] valid_0's rmse: 1.18413
[2900] valid_0's rmse: 1.18026
[3000] valid_0's rmse: 1.17642
Train CA_2 FOODS
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.51865
[200] valid_0's rmse: 2.2429
[300] valid_0's rmse: 2.18822
[400] valid_0's rmse: 2.16313
[500] valid_0's rmse: 2.14247
[600] valid_0's rmse: 2.12321
[700] valid_0's rmse: 2.10913
Train CA_3 HOBBIES
[LightGBM] [Warning] feature_fraction is set=0.5, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.5
[LightGBM] [Warning] min_data_in_leaf is set=255, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=255
[100] valid_0's rmse: 2.14383
[200] valid_0's rmse: 2.07174
[300] valid_0's rmse: 2.02436
[400] valid_0's rmse: 1.97643
[500] valid_0's rmse: 1.92835
[900] valid_0's rmse: 1.76322
[1000] valid_0's rmse: 1.72243
[1100] valid_0's rmse: 1.68652
[1200] valid_0's rmse: 1.653
[1300] valid_0's rmse: 1.62103
[1400] valid_0's rmse: 1.5967
[1500] valid_0's rmse: 1.56518
[1600] valid_0's rmse: 1.54042
[1700] valid_0's rmse: 1.51367
[1800] valid_0's rmse: 1.49212
[1900] valid_0's rmse: 1.47125
[2000] valid_0's rmse: 1.45122
[2100] valid_0's rmse: 1.43232
[2200] valid_0's rmse: 1.41607
[2300] valid_0's rmse: 1.40143
[2400] valid_0's rmse: 1.38171
[2500] valid_0's rmse: 1.36624
[2600] valid_0's rmse: 1.35268
[2700] valid_0's rmse: 1.33424
[2800] valid_0's rmse: 1.3227
[2900] valid_0's rmse: 1.308
[3000] valid_0's rmse: 1.29579
|
1_introducao.ipynb | ###Markdown
AceleraDev Codenation - Week 2 *Daniel Santos Pereira | Data & B.I Analyst | Machine Learning in Training | MCP* **Manipulating Data**
###Code
# Package imports
import pandas as pd
import numpy as np
# Accessing the packages' help
pd?
###Output
_____no_output_____
###Markdown
**Dictionaries**
###Code
# Creating a dictionary with the data
dados = {'canal_vendas' : ['Facebook', 'twitter', 'intagram', 'linkedin', 'facebook'],
'acessos': [100, 200, 300, 400, 500],
'site': ['site1', 'site1', 'site2', 'site2', 'site3'],
'vendas': [1000.52, 1052.34, 2009, 5000, 300]}
# Printing the dictionary
dados
# Checking the dictionary's type
type(dados)
# Accessing my dictionary's keys
dados.keys()
# Accessing a specific key * remember it is case-sensitive
dados['canal_vendas']
# Accessing a specific position/value of a dictionary
dados['canal_vendas'][2]
# Accessing a specific position of a dictionary - slice
print('Returns all elements of the array: ' +
      str(dados['canal_vendas'][:]))
# Accessing a specific position of a dictionary - slice
print('Returns all elements up to the defined position: ' +
      str(dados['canal_vendas'][:4]))
# Accessing a specific position of a dictionary - slice
#dados['canal_vendas'][:4] # Returns all elements up to the defined position
print('Returns the value according to the index in the array: ' +
      str(dados['canal_vendas'][2:4]))
###Output
Returns the value according to the index in the array: ['intagram', 'linkedin']
###Markdown
**List**
###Code
# Creating a list
lista = [200, 200, 300, 800, 300]
# Printing the list
lista
# Viewing specific values - the array starts at 0
lista[3]
# Slice of the list -- starting from index 1, return up to the fourth value
lista[1:4]
# Adding the list to the dictionary
dados['lista'] = lista
dados
###Output
_____no_output_____
###Markdown
**DataFrames**
###Code
# Create a data frame from a dict
dataframe = pd.DataFrame(dados)
# Accessing the dataframe
dataframe
# Printing the first rows of the dataframe
dataframe.head(2)
# Checking the dataframe shape (total rows and columns)
dataframe.shape
# Checking the dataframe index
dataframe.index
# Checking the dataframe's data types
dataframe.dtypes
# Checking whether there are missing values
dataframe.isna()
# Printing the column names
dataframe.columns
# Accessing a specific column
dataframe['canal_vendas']
# Creating a new column
dataframe['nova_coluna'] = [1, 2, 3, 4, 5]
dataframe
dataframe.columns
# Removing columns -- without the inplace=True argument,
# the columns are not actually removed, they simply stop being shown
dataframe.drop(columns = ['nova_coluna'], inplace=True)
# Showing the columns
dataframe.columns
# Accessing specific values
dataframe['acessos'][1]
# Accessing a slice of a specific column
dataframe['canal_vendas'][:2]
dataframe
# Slicing the data using iloc (rows / columns)
dataframe.iloc[3:,4:]
# Slicing the data using loc (index)
dataframe.loc[:3]
# Selecting specific columns
dataframe[['canal_vendas','acessos']]
# Passing the values through lists
filtro = ['canal_vendas', 'acessos']
dataframe[filtro]
# Using the info() method
dataframe.info()
# Filling the missing values using fillna
# Pivoting the data (column)
aux = dataframe.pivot(index= 'canal_vendas', columns='site', values='acessos').fillna(0)
dataframe.pivot(index= 'canal_vendas', columns='site', values='acessos').fillna(0)
# Reshaping the columns using the melt command
dataframe.melt(id_vars='site', value_vars=['canal_vendas'])
# Resetting the dataframe index
print(aux.columns)
aux = aux.reset_index()
print(aux.columns)
# Example of the melt command
aux.melt(id_vars='canal_vendas', value_vars=['site1','site2','site3'])
###Output
_____no_output_____ |
10.Applied Data Science Capstone/Solution Notebooks/Interactive Visual Analytics with Folium lab Solution.ipynb | ###Markdown
**Launch Sites Locations Analysis with Folium** Estimated time needed: **40** minutes The launch success rate may depend on many factors such as payload mass, orbit type, and so on. It may also depend on the location and proximities of a launch site, i.e., the initial position of rocket trajectories. Finding an optimal location for building a launch site certainly involves many factors and hopefully we could discover some of the factors by analyzing the existing launch site locations. In the previous exploratory data analysis labs, you have visualized the SpaceX launch dataset using `matplotlib` and `seaborn` and discovered some preliminary correlations between the launch site and success rates. In this lab, you will be performing more interactive visual analytics using `Folium`. Objectives This lab contains the following tasks:* **TASK 1:** Mark all launch sites on a map* **TASK 2:** Mark the success/failed launches for each site on the map* **TASK 3:** Calculate the distances between a launch site to its proximitiesAfter completed the above tasks, you should be able to find some geographical patterns about launch sites. Let's first import required Python packages for this lab:
###Code
!pip3 install folium
!pip3 install wget
import folium
import wget
import pandas as pd
# Import folium MarkerCluster plugin
from folium.plugins import MarkerCluster
# Import folium MousePosition plugin
from folium.plugins import MousePosition
# Import folium DivIcon plugin
from folium.features import DivIcon
###Output
_____no_output_____
###Markdown
If you need to refresh your memory about folium, you may download and refer to this previous folium lab: [Generating Maps with Python](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module\_3/DV0101EN-3-5-1-Generating-Maps-in-Python-py-v2.0.ipynb) Task 1: Mark all launch sites on a map First, let's try to add each site's location on a map using site's latitude and longitude coordinates The following dataset with the name `spacex_launch_geo.csv` is an augmented dataset with latitude and longitude added for each site.
###Code
# Download and read the `spacex_launch_geo.csv`
spacex_csv_file = wget.download('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/spacex_launch_geo.csv')
spacex_df=pd.read_csv(spacex_csv_file)
uniquelaunchsite = spacex_df['Launch Site'].unique().tolist()
launchsites=[]
launchsites.append({'label':'All Sites','value':'All Sites'})
for site in uniquelaunchsite:
launchsites.append({'label':site,'value':site})
launchsites
spacex_df[spacex_df['class']==1]
# spacex_df['class'].unique()
###Output
_____no_output_____
###Markdown
Now, you can take a look at the coordinates for each site.
###Code
# Select relevant sub-columns: `Launch Site`, `Lat(Latitude)`, `Long(Longitude)`, `class`
spacex_df = spacex_df[['Launch Site', 'Lat', 'Long', 'class']]
launch_sites_df = spacex_df.groupby(['Launch Site'], as_index=False).first()
launch_sites_df = launch_sites_df[['Launch Site', 'Lat', 'Long']]
launch_sites_df
###Output
_____no_output_____
###Markdown
The coordinates above are just plain numbers that cannot give you any intuitive sense of where those launch sites are. If you are very good at geography, you can interpret those numbers directly in your mind. If not, that's fine too. Let's visualize those locations by pinning them on a map. We first need to create a folium `Map` object, with an initial center location at the NASA Johnson Space Center in Houston, Texas.
###Code
# Start location is NASA Johnson Space Center
nasa_coordinate = [29.559684888503615, -95.0830971930759]
site_map = folium.Map(location=nasa_coordinate, zoom_start=10)
site_map
###Output
_____no_output_____
###Markdown
We could use `folium.Circle` to add a highlighted circle area with a text label on a specific coordinate. For example,
###Code
# Create a blue circle at NASA Johnson Space Center's coordinate with a popup label showing its name
circle = folium.Circle(nasa_coordinate, radius=1000, color='#d35400', fill=True).add_child(folium.Popup('NASA Johnson Space Center'))
# Create a blue circle at NASA Johnson Space Center's coordinate with a icon showing its name
marker = folium.map.Marker(
nasa_coordinate,
# Create an icon as a text label
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % 'NASA JSC',
)
)
site_map.add_child(circle)
site_map.add_child(marker)
###Output
_____no_output_____
###Markdown
and you should find a small yellow circle near the city of Houston and you can zoom-in to see a larger circle. Now, let's add a circle for each launch site in data frame `launch_sites` *TODO:* Create and add `folium.Circle` and `folium.Marker` for each launch site on the site map An example of folium.Circle: `folium.Circle(coordinate, radius=1000, color='000000', fill=True).add_child(folium.Popup(...))` An example of folium.Marker: `folium.map.Marker(coordinate, icon=DivIcon(icon_size=(20,20),icon_anchor=(0,0), html='%s' % 'label', ))`
###Code
# Initial the map
site_map = folium.Map(location=nasa_coordinate, zoom_start=5)
# For each launch site, add a Circle object based on its coordinate (Lat, Long) values. In addition, add Launch site name as a popup label
for index,site in launch_sites_df.iterrows():
circle = folium.Circle([site['Lat'],site['Long']], color='#d35400',fill=True).add_child(folium.Popup(site['Launch Site']))
marker = folium.map.Marker(
[site['Lat'],
site['Long']],
# Create an icon as text Label
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color: #d35400;"><b>%s</b></div>' % site['Launch Site'],
))
site_map.add_child(circle)
site_map.add_child(marker)
site_map
###Output
_____no_output_____
###Markdown
The generated map with marked launch sites should look similar to the following: Now, you can explore the map by zoom-in/out the marked areas, and try to answer the following questions:* Are all launch sites in proximity to the Equator line?* Are all launch sites in very close proximity to the coast?Also please try to explain your findings. Task 2: Mark the success/failed launches for each site on the map Next, let's try to enhance the map by adding the launch outcomes for each site, and see which sites have high success rates.Recall that data frame spacex_df has detailed launch records, and the `class` column indicates if this launch was successful or not
###Code
spacex_df.tail(10)
###Output
_____no_output_____
###Markdown
Next, let's create markers for all launch records.If a launch was successful `(class=1)`, then we use a green marker and if a launch was failed, we use a red marker `(class=0)` Note that a launch only happens in one of the four launch sites, which means many launch records will have the exact same coordinate. Marker clusters can be a good way to simplify a map containing many markers having the same coordinate. Let's first create a `MarkerCluster` object
###Code
marker_cluster = MarkerCluster()
###Output
_____no_output_____
###Markdown
*TODO:* Create a new column in `launch_sites` dataframe called `marker_color` to store the marker colors based on the `class` value
###Code
spacex_df['Marker_color']=None
# Apply a function to check the value of `class` column
# If class=1, marker_color value will be green
# If class=0, marker_color value will be red
# Function to assign color to launch outcome
def assign_marker_color(launch_outcome):
if launch_outcome == 1:
return 'green'
else:
return 'red'
spacex_df['Marker_color'] = spacex_df['class'].apply(assign_marker_color)
spacex_df.tail(10)
###Output
_____no_output_____
###Markdown
*TODO:* For each launch result in `spacex_df` data frame, add a `folium.Marker` to `marker_cluster`
###Code
# Add marker_cluster to current site_map
site_map.add_child(marker_cluster)
# for each row in spacex_df data frame
# create a Marker object with its coordinate
# and customize the Marker's icon property to indicate if this launch was successed or failed,
# e.g., icon=folium.Icon(color='white', icon_color=row['marker_color']
for index, record in spacex_df.iterrows():
# TODO: Create and add a Marker cluster to the site map
marker = folium.Marker([record['Lat'], record['Long']],
icon = folium.Icon(color='white', icon_color=record['Marker_color']))
marker_cluster.add_child(marker)
site_map
###Output
_____no_output_____
###Markdown
Your updated map may look like the following screenshots: From the color-labeled markers in marker clusters, you should be able to easily identify which launch sites have relatively high success rates. TASK 3: Calculate the distances between a launch site to its proximities Next, we need to explore and analyze the proximities of launch sites. Let's first add a `MousePosition` on the map to get coordinate for a mouse over a point on the map. As such, while you are exploring the map, you can easily find the coordinates of any points of interests (such as railway)
###Code
# Add Mouse Position to get the coordinate (Lat, Long) for a mouse over on the map
formatter = "function(num) {return L.Util.formatNum(num, 5);};"
mouse_position = MousePosition(
position='topleft',
separator=' Long: ',
empty_string='NaN',
lng_first=False,
num_digits=20,
prefix='Lat:',
lat_formatter=formatter,
lng_formatter=formatter,
)
site_map.add_child(mouse_position)
site_map
###Output
_____no_output_____
###Markdown
Now zoom in to a launch site and explore its proximity to see if you can easily find any railway, highway, coastline, etc. Move your mouse to these points and mark down their coordinates (shown on the top-left) in order to calculate the distance to the launch site. You can calculate the distance between two points on the map based on their `Lat` and `Long` values using the following method:
###Code
from math import sin, cos, sqrt, atan2, radians
def calculate_distance(lat1, lon1, lat2, lon2):
# approximate radius of earth in km
R = 6373.0
lat1 = radians(lat1)
lon1 = radians(lon1)
lat2 = radians(lat2)
lon2 = radians(lon2)
dlon = lon2 - lon1
dlat = lat2 - lat1
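    # haversine formula: great-circle distance between the two (lat, lon) points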
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
###Output
_____no_output_____
###Markdown
*TODO:* Mark down a point on the closest coastline using MousePosition and calculate the distance between the coastline point and the launch site.
###Code
# find coordinate of the closest coastline, e.g.: Lat: 28.56367 Lon: -80.57163
# (the same launch-site / coastline pair is reused in the combined plotting cell below)
launch_site_lat, launch_site_lon = 28.56342, -80.57674
coastline_lat, coastline_lon = 28.56342, -80.56756
distance_coastline = calculate_distance(launch_site_lat, launch_site_lon, coastline_lat, coastline_lon)
###Output
_____no_output_____
###Markdown
*TODO:* After obtained its coordinate, create a `folium.Marker` to show the distance
###Code
# Create and add a folium.Marker on your selected closest coastline point on the map
# Display the distance between coastline point and launch site using the icon property
# for example
# distance_marker = folium.Marker(
# coordinate,
# icon=DivIcon(
# icon_size=(20,20),
# icon_anchor=(0,0),
# html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
# )
# )
###Output
_____no_output_____
###Markdown
*TODO:* Draw a `PolyLine` between a launch site to the selected coastline point
###Code
# Create a `folium.PolyLine` object using the coastline coordinates and launch site coordinate
# lines=folium.PolyLine(locations=coordinates, weight=1)
# site_map.add_child(lines)
#Work out distance to coastline
coordinates = [
[28.56342, -80.57674],
[28.56342, -80.56756]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.56342, -80.56794],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
###Output
_____no_output_____
###Markdown
Your updated map with distance line should look like the following screenshot: *TODO:* Similarly, you can draw a line betwee a launch site to its closest city, railway, highway, etc. You need to use `MousePosition` to find the their coordinates on the map first A railway map symbol may look like this: A highway map symbol may look like this: A city map symbol may look like this:
###Code
# Create a marker with distance to a closest city, railway, highway, etc.
# Draw a line between the marker to the launch site
#Distance to Highway
coordinates = [
[28.56342, -80.57674],
[28.411780, -80.820630]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.411780, -80.820630],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#252526;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
#Distance to Florida City
coordinates = [
[28.56342, -80.57674],
[28.5383, -81.3792]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.5383, -81.3792],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#252526;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
###Output
_____no_output_____ |
Notebooks/successful-models/xgb-gridsearch.ipynb | ###Markdown
Data Mining Challenge: *Reddit Gender Text-Classification* Modules
###Code
%%time
#Numpy
import numpy as np
#Sklearn
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV # Exhaustive search over specified parameter values for a given estimator
from sklearn.model_selection import cross_val_score # Evaluate a score by cross-validation
from sklearn.model_selection import KFold # K-Folds cross-validator providing train/test indices to split data in train/test sets.
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score # Compute Area Under the Receiver Operating Characteristic Curve from prediction scores
from sklearn.feature_extraction.text import CountVectorizer # Convert a collection of text documents to a matrix of token counts
#XGBoost
from xgboost import XGBRegressor
# Matplotlib
import matplotlib # Data visualization
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
#Pickle
import pickle # To load files
# Joblib
import joblib # To save models
###Output
_____no_output_____
###Markdown
Data Collection
###Code
# load preprocessed data to save tine
with open("../input/challengedadata/comments.txt", "rb") as f:
clean_train_comments = pickle.load(f)
f.close()
with open("../input/challengedadata/targets.txt", "rb") as ft:
y = pickle.load(ft)
ft.close()
###Output
_____no_output_____
###Markdown
Data Manipulation
###Code
vectorizer = CountVectorizer(analyzer = "word",
max_features = 2000, ngram_range=(1, 2))
# converts in np array
train_data_features = vectorizer.fit_transform(clean_train_comments).toarray()
# create vocabulary
vocab = vectorizer.get_feature_names()
# counts how many times a word appears
dist = np.sum(train_data_features, axis=0)
# removes the 40 most utilized words
for _ in range(40):
    index = np.argmax(dist)
    train_data_features = np.delete(train_data_features, index, axis = 1)
    dist = np.delete(dist, index)  # keep the counts aligned with the remaining columns so each argmax picks a new word
X_len = [[len(x)] for x in train_data_features]
s = np.concatenate((train_data_features,np.array(X_len)),axis = 1)
# 5000 rows (one per author), and 2000-40+1 (X_len) features
s.shape
y = np.array(y)
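# Hedged aside (illustrative helper, not used by the pipeline above): the same "drop the k most
# frequent words" step can be done in a single np.delete call instead of a 40-iteration loop.
def drop_top_k_columns(features, counts, k=40):
    top_k = np.argsort(counts)[-k:]        # column indices with the k largest total counts
    return np.delete(features, top_k, axis=1)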
###Output
_____no_output_____
###Markdown
Model Exploration
###Code
parameters = {"learning_rate":[0.03,0.05,0.07,0.01,0.15,0.2,0.25,0.3],'min_child_weight': [1,4,5,8],'gamma': [0.0, 0.1,0.2, 0.3,0.4,0.5,0.6,0.8],
'subsample': [0.6,0.7,0.8,0.9,1], 'colsample_bytree': [0.3,0.4,0.5, 0.6,0.7,0.8,0.9,1],
'max_depth': [2,3,4,5,6,7,8,10,12,15], 'scale_pos_weight': [1,2.70, 10, 25, 50, 75, 100, 1000] }
parameters0 = {'min_child_weight': [1,8],'gamma': [0.6,0.8],
'subsample': [0.9], 'colsample_bytree': [0.6],
'max_depth': [4], 'scale_pos_weight': [1,2.70, 10, 25, 50, 75, 100, 1000] }
xgb = XGBRegressor(objective = "reg:logistic", n_estimators=10000,
tree_method = "gpu_hist", gpu_id = 0)
# Model exploration
xgbClf = GridSearchCV(xgb, param_grid = parameters0, cv = StratifiedKFold(n_splits=10, shuffle = True, random_state = 1001), scoring = "roc_auc" ,verbose=True, n_jobs=-1)
# Model fit
xgbClf.fit(s, y, verbose=False)
# Save model
joblib.dump(xgbClf, '../working/xgbClf.pkl')
print("xgbCLf.best_score = ", xgbClf.best_score_)
print("xgbCLf.best_estimator_ = ", xgbClf.best_estimator_)
###Output
_____no_output_____ |
2 Matrices and Statistics for Data Science/Stephen_Eades_Anomaly.ipynb | ###Markdown
Module 5 Submission Machine Learning and Data Mining Stephen Eades 7/14/2020 Anomaly Detection Read in data from anomaly_detection.txt and assign the data to an array (x) Create a function -- anomaly_detection() to take an array as an input and output the result in the format of the "sample_output" file. Anomaly: Assume D is a dataset. X is a member of D. Mu is the mean of D without X, and STD is the standard deviation of D without X. If the difference between X and Mu is larger than 3xSTD, we say X is an anomaly. We then remove X from D. We will iteratively search D for anomalies until no more outliers are found. Submission: You will export your notebook to both .html and .py formats. You will submit the following 2 files to Blackboard. In your html file, you should include all the outputs of your python script without error messages. Firstname_Lastname_Anomaly.html Firstname_Lastname_Anomaly.py Attachments: Anomaly_detection.txt: The data file that you will read in A sample output of the function you will create
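For example (illustrative numbers only, not taken from the provided file): if D = [10, 11, 12, 100] and X = 100, then Mu = mean([10, 11, 12]) = 11 and STD ≈ 0.82, so |X - Mu| = 89 > 3 x STD ≈ 2.45, and 100 would be flagged as an anomaly, removed, and the search would repeat on the remaining values.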
###Code
import math
import numpy as np
import copy
# Read in data from anomaly_detection.txt and assign the data to an array (x)
with open("anomaly_detection.txt") as anomalies:
# Store each line into array
x = []
for line in anomalies:
x.append(float(line.rstrip()))
# Create a function to take an array as input and output the result in the "sample_output" format
def anomaly_detection(array):
item_with_max_difference = 0.00
max_difference = 0.00
mean_of_array_excluding_most_distant_member = 0.00
array_excluding_most_distant_member = []
# Loop through the input array
for item in array:
# Duplicate array and remove current item from dataset
array_without_item = copy.deepcopy(array)
array_without_item.remove(item)
# Calculate mean of the remaining data points without that member
sum = 0
for element in array_without_item:
# Calculate the sum
sum = sum + element
# Calculate mean of the dataset using the sum
mean = sum/len(array_without_item)
# Calculate difference between the member you removed and this mean of the remaining data points
difference = item - mean
        if difference < 0:
            # Convert negatives to positive (take the absolute value of the difference)
            difference = difference * -1
# Store the difference, member, mean, and array if the difference is the greatest yet
if difference > max_difference:
max_difference = difference
item_with_max_difference = item
mean_of_array_excluding_most_distant_member = mean
array_excluding_most_distant_member = array_without_item
# Find the standard deviation of the array excluding the most distant member
std_dev = np.std(array_excluding_most_distant_member)
# Check if the difference is greater than 3 times the standard deviation
if max_difference > (std_dev * 3):
# Remove it from array and repeat step 1 by calling the function recursively
        print(f'Remove {item_with_max_difference} from the list because it is {round(max_difference/std_dev, 2)} times the standard deviation of the list without it.')
array.remove(item_with_max_difference)
print(f'{item_with_max_difference} is removed from the list! \n')
anomaly_detection(array)
# Stop and display that there are no more anomalies
return (f'no more anomalies are detected')
anomaly_detection(x)
###Output
Remove 160.0 from the list because it is 4.14 times the standard deviation of the list without it.
160.0 is removed from the list! 
Remove 55.0 from the list because it is 3.57 times the standard deviation of the list without it.
55.0 is removed from the list! 
Remove 131.85777845 from the list because it is 3.08 times the standard deviation of the list without it.
131.85777845 is removed from the list! 
|
Gender & Age Classifier.ipynb | ###Markdown
Age & Gender Classifier using Deep CNNs - **Dataset : UTKFace**UTKFace dataset is a large-scale face dataset with long age span (range from 0 to 116 years old). The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. The images cover large variation in pose, facial expression, illumination, occlusion, resolution, etc.In this notebook, I have used the `aligned and cropped` faces available to train my models. Needless to say, any face input for testing must be cropped and aligned vertically to large extent
###Code
# Mounting the drive so that dataset can be loaded
from google.colab import drive
drive.mount('/content/drive')
# Essential libraries
import numpy as np
import pandas as pd
import os
import glob
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.io
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras import applications,activations
from keras.preprocessing.image import ImageDataGenerator,load_img,img_to_array
from keras import optimizers,utils
from keras.models import Sequential, Model
from keras.layers import Dropout, Flatten, Dense, GlobalAveragePooling2D,BatchNormalization,ZeroPadding2D, Input
from keras.layers import Conv2D, Activation,MaxPooling2D
from keras import backend as k
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TensorBoard, EarlyStopping
dataset_folder_name = '/content/drive/My Drive/Images'
TRAIN_TEST_SPLIT = 0.8
IM_WIDTH = IM_HEIGHT = 198
dataset_dict = {
'race_id': {
0: 'white',
1: 'black',
2: 'asian',
3: 'indian',
4: 'others'
},
'gender_id': {
0: 'male',
1: 'female'
}
}
dataset_dict['gender_alias'] = dict((g, i) for i, g in dataset_dict['gender_id'].items()) # (Gender: id)
dataset_dict['race_alias'] = dict((r, i) for i, r in dataset_dict['race_id'].items()) # (Race: id)
# Let's also define a function to help us on extracting the data from our dataset. This function will be
# used to iterate over each file of the UTK dataset and return a Pandas Dataframe containing all the
# fields (age, gender and sex) of our records.
def parse_dataset(dataset_path, ext='jpg'):
"""
Used to extract information about our dataset. It does iterate over all images and return a DataFrame
with the data (age, gender and sex) of all files.
"""
def parse_info_from_file(path):
"""
Parse information from a single file
"""
try:
filename = os.path.split(path)[1]
filename = os.path.splitext(filename)[0]
age, gender, race, _ = filename.split('_')
return int(age), dataset_dict['gender_id'][int(gender)], dataset_dict['race_id'][int(race)]
except Exception as ex:
return None, None, None
files = glob.glob(os.path.join(dataset_path, "*.%s" % ext))
records = []
for file in files:
info = parse_info_from_file(file)
records.append(info)
df = pd.DataFrame(records)
df['file'] = files
df.columns = ['age', 'gender', 'race', 'file']
df = df.dropna()
return df
df = parse_dataset(dataset_folder_name)
df.head()
# Now we have a pandas dataframe with us. This can be dealt with quite easily. Like, we simply now need
# to OHE gender, race and feed it into model. Using Pandas dataframe also allows me to manipulate and
# visualize data by plotting graphs.
###Output
_____no_output_____
###Markdown
Data analysis & visualization (EDA)After some data preprocessing, let's analyze the data using graphs to get a better understanding about its distribution
###Code
df.info() # No NAN values. Clean dataset
df.describe()
# Lower percentile - 25, median - 50 & upper percentile - 75 (for numerical data)
ages = df['age']
nbins = 10
plt.hist(ages,nbins,color='green',histtype='bar')
plt.show()
# The majority of the population lies in the 20-30 age group. Clearly, the dataset is not very well balanced, so training
# an accurate, unbiased model will be harder. Try using class weights?
x = (df.gender=='male').sum()
y = (df.gender=='female').sum()
gender = [x,y]
labels = ['male','female']
colors = [ 'y', 'g']
plt.pie(gender,labels = labels,colors = colors,radius=1.2,autopct='%.1f%%')
plt.show()
# Uniform distribution to a large extent, although males slightly exceed females in numbers. No need to rebalance gender in the data.
# Pretty well balanced! Let's also visualize this on a bar graph (to get a better understanding of the numbers)
sns.countplot(x='gender', data=df);
# Males ~ Just over 12k
# Females ~ Just over 11k
df.groupby(['gender']).mean() # Mean age by gender
x = (df.race=='white').sum()
y = (df.race=='black').sum()
z = (df.race=='asian').sum()
a = (df.race=='indian').sum()
b = (df.race=='others').sum()
gender = [x,y,z,a,b]
labels = ['white','black','asian','indian','others']
colors = [ 'y', 'g','b','r','m']
plt.pie(gender,labels = labels,colors = colors,radius=1.2,autopct='%.1f%%')
plt.show()
df.groupby(['race']).mean() # Mean age by race
sns.set(style ="whitegrid")
_ = sns.stripplot(x='race',y='age',data=df)
# Not very useful :( , we only see that few elderly people (above 60) are present in the 'others' category
sns.factorplot('race', 'age', 'gender', data=df,kind='bar');
# Gives the mean age of both genders of all races
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
_ = sns.boxplot(data=df, x='gender', y='age', ax=ax1)
_ = sns.boxplot(data=df, x='race', y='age', ax=ax2)
# We see that most of males are between 25 and 55, whereas most of the females are between 20 and 35
# Even while grouping by race, we find good amount of variations in different races
df['age'] = df['age']//25
''' This basically makes 5 divisions in age-groups -
1. 0-24
2. 25-49
3. 50-74
4. 75-99
5. 100-124 '''
x = (df.age==0).sum()
y = (df.age==1).sum()
z = (df.age==2).sum()
a = (df.age==3).sum()
b = (df.age==4).sum()
c = (df.age==5).sum()
print(x,' ',y,' ',z,' ',a,' ',b, ' ',c)
df.head()
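# Hedged aside (not used by the pipeline above): pandas.cut expresses the same 25-year binning
# more explicitly than floor division, with readable interval labels. Illustrative call on a few
# example ages, since df['age'] above now already holds the integer bin codes:
example_ages = pd.Series([3, 26, 51, 80, 101])
pd.cut(example_ages, bins=[0, 25, 50, 75, 100, 125], right=False,
       labels=['0-24', '25-49', '50-74', '75-99', '100-124'])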
###Output
_____no_output_____
###Markdown
Data GeneratorIn order to input our data to our Keras multi-output model, we have a helper object to work as a data generator for our dataset. This will be done by generating batches of data, which will be used to feed our multi-output model with both the images and their labels (instead of just loading all the dataset into the memory at once, which might lead to an out of memory error).
###Code
from keras.utils import to_categorical
from PIL import Image
p = np.random.permutation(len(df))
train_up_to = int(len(df) * TRAIN_TEST_SPLIT)
train_idx = p[:train_up_to]
val_idx = p[train_up_to:]
# converts alias to id
df['gender_id'] = df['gender'].map(lambda gender: dataset_dict['gender_alias'][gender])
df['race_id'] = df['race'].map(lambda race: dataset_dict['race_alias'][race])
# Now we got train_idx, valid_idx, test_idx
def preprocess_image(img_path): # Used to perform some minor preprocessing on the image before inputting into the network.
im = Image.open(img_path)
im = im.resize((IM_WIDTH, IM_HEIGHT))
im = np.array(im) / 255.0
return im
def generate_images(image_idx, is_training, batch_size=16): # Used to generate a batch with images when training/validating our model.
# arrays to store our batched data
images, ages, races, genders = [], [], [], []
while True:
for idx in image_idx:
person = df.iloc[idx]
age = person['age']
race = person['race_id']
gender = person['gender_id']
file = person['file']
im = preprocess_image(file)
races.append(to_categorical(race, len(dataset_dict['race_id'])))
genders.append(to_categorical(gender, len(dataset_dict['gender_id'])))
ages.append(to_categorical(age,5))
images.append(im)
# yielding condition
if len(images) >= batch_size:
yield np.array(images), [np.array(ages), np.array(genders)]
                images, ages, races, genders = [], [], [], []  # also reset races so the unused list does not keep growing
if not is_training:
break
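# Hedged usage sketch: pull one batch from the generator to sanity-check shapes before training.
# With batch_size=16 and 198x198 RGB crops this should yield images of shape (16, 198, 198, 3),
# age targets of shape (16, 5) and gender targets of shape (16, 2).
sample_gen = generate_images(train_idx, is_training=True, batch_size=16)
sample_images, (sample_ages, sample_genders) = next(sample_gen)
print(sample_images.shape, sample_ages.shape, sample_genders.shape)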
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import SeparableConv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Dropout
from keras.layers import SpatialDropout2D
from keras.layers.core import Lambda
from keras.layers.core import Dense
from keras.layers import Flatten
from keras.layers import Input
from keras.regularizers import l2
import tensorflow as tf
def make_default_hidden_layers(inputs):
x = SeparableConv2D(32, (3, 3), padding="same")(inputs)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(3, 3))(x)
x = BatchNormalization(axis=-1)(x)
x = SeparableConv2D(64, (3, 3), padding="same")(x)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = BatchNormalization(axis=-1)(x)
x = SeparableConv2D(128, (3, 3), padding="same")(x)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = BatchNormalization(axis=-1)(x)
x = SeparableConv2D(128, (3, 3), padding="same")(x)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = SpatialDropout2D(0.1)(x)
x = BatchNormalization(axis=-1)(x)
x = SeparableConv2D(256, (3, 3), padding="same")(x)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = SpatialDropout2D(0.1)(x)
x = BatchNormalization(axis=-1)(x)
x = SeparableConv2D(256, (3, 3), padding="same")(x)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = SpatialDropout2D(0.15)(x)
x = BatchNormalization(axis=-1)(x)
x = SeparableConv2D(256, (3, 3), padding="same")(x)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = SpatialDropout2D(0.15)(x)
x = BatchNormalization(axis=-1)(x)
return x
def build_gender_branch(inputs):
x = make_default_hidden_layers(inputs)
x = Flatten()(x)
x = Dense(64)(x)
x = Activation("relu")(x)
x = Dropout(0.3)(x)
x = BatchNormalization()(x)
x = Dense(32)(x)
x = Activation("relu")(x)
x = Dropout(0.25)(x)
x = BatchNormalization()(x)
x = Dense(2)(x)
x = Activation("softmax", name="gender_output")(x)
return x
def build_age_branch(inputs):
x = make_default_hidden_layers(inputs)
x = Flatten()(x)
x = Dense(128, kernel_regularizer=l2(0.03))(x)
x = Activation("relu")(x)
x = Dropout(0.3)(x)
x = BatchNormalization()(x)
x = Dense(64)(x)
x = Activation("relu")(x)
x = Dropout(0.3)(x)
x = BatchNormalization()(x)
x = Dense(32)(x)
x = Activation("relu")(x)
x = Dropout(0.2)(x)
x = BatchNormalization()(x)
x = Dense(5)(x)
x = Activation("softmax", name="age_output")(x)
return x
def assemble_model(width, height):
input_shape = (height, width, 3)
inputs = Input(shape=input_shape)
age_branch = build_age_branch(inputs)
gender_branch = build_gender_branch(inputs)
model = Model(inputs=inputs, outputs = [age_branch, gender_branch], name="face_net")
return model
model = assemble_model(198, 198)
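# Hedged aside on the SeparableConv2D choice above (illustrative arithmetic, biases ignored):
# a depthwise-separable 3x3 convolution needs roughly k*k*C_in + C_in*C_out weights versus
# k*k*C_in*C_out for a standard Conv2D, which is why this multi-branch model stays fairly small.
c_in, c_out, k = 128, 256, 3
separable_weights = k * k * c_in + c_in * c_out   # depthwise pass + 1x1 pointwise pass
standard_weights = k * k * c_in * c_out           # ordinary Conv2D
print(separable_weights, standard_weights)        # 33920 vs 294912 for the 128 -> 256 layer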
###Output
_____no_output_____
###Markdown
Convolutional neural networks work on 2 assumptions -1. Low level features are local2. What's useful in one place will also be useful in other places.Kernel size should be determined by how strongly we believe in those assumptions for the problem at hand. In general, smaller filters are considered better than larger filter sizes. Also, usually -* Number of filters tend to increase with depth of model (more representational capacity is required in the model)* Size of filters is almost always odd. Like 3x3, 5x5* Filter size tends to decrease with depth of the model (initial layers have larger receptive fields).
###Code
picture = "/content/drive/My Drive/classifiers.PNG"
from IPython.display import Image
Image(picture, width=350)
# Model design flowchart
model.summary()
# A callback is a set of functions to be applied at given stages of the training procedure.
import math
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
from keras.callbacks import LearningRateScheduler
# When training a neural network, the learning rate is often the most important
# hyperparameter to tune. When training deep neural networks, it is often useful
# to reduce learning rate as the training progresses.
# LRS in Keras reduces the learning rate by a certain factor after certain no of epochs
def step_decay(epoch):
initial_lrate = 0.008
drop = 0.5
epochs_drop = 5.0
lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lrate
opt = Adam(lr=0.0) # 0.0 here signifies this is not to be used
lrate = LearningRateScheduler(step_decay)
model.compile(optimizer=opt,
loss={
'age_output': 'categorical_crossentropy',
'gender_output': 'categorical_crossentropy'},
metrics={
'age_output': 'accuracy',
'gender_output': 'accuracy'})
callbacks_list = [lrate]
# It is this callback that allows a function to invoke during program execution.
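# Hedged sketch (not used in the original runs): a ModelCheckpoint callback could be appended to
# callbacks_list so that the best weights seen so far are kept on disk; the filepath below is an
# illustrative assumption, not a path used elsewhere in this notebook.
checkpoint = ModelCheckpoint('/content/drive/My Drive/age_gender_best_weights.hdf5',
                             monitor='val_loss', save_best_only=True, verbose=0)
# callbacks_list.append(checkpoint)   # uncomment to actually save checkpoints during model.fit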
###Output
_____no_output_____
###Markdown
Keras does not touch class imbalance issues on its own. If you aren't going to handle imbalance from the data directly, you should introduce an additional parameter in your loss function that understands the class distribution. In Keras, the param is called `class_weight`Class weights, ensure that the unevenness in data distribution is sorted out.Basically, **classes with less numbers are given more weight**, so that model doesn't get biased towards the more prevalent classes.
###Code
from sklearn.utils import class_weight
class_weight = class_weight.compute_class_weight('balanced' ,np.array([0,1,2,3,4]) ,np.array(df['age']))
class_weight1 = {'age_output': class_weight}
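# Hedged illustration: 'balanced' weights follow n_samples / (n_classes * count_of_class_c), so
# the under-represented age bins get proportionally larger weights. Printing the mapping makes
# that explicit (class_weight is the array returned by compute_class_weight above).
print(dict(zip([0, 1, 2, 3, 4], np.round(class_weight, 3))))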
###Output
_____no_output_____
###Markdown
Some key points to remember* **ModelCheckpoint** - The ModelCheckpoint callback is used in conjunction with training using model.fit() to save a model or weights (in a checkpoint file) at some interval, so the model or weights can be loaded later to continue the training from the state saved. This is essential, as DL models can take up to days to train. By default - it's NULL. By keeping the filename constant throughout training, we ensure that only the best model weights remain in the file up to the point we have trained. To load weights - `model.load_weights(filepath)`---* **Fit function** - In the latest version of TF - 2.2.0v (released May 2020), the *fit* function has replaced the *fit_generator* function present before.---* **Validation while training** - All the VALIDATION arguments in the *fit* function relate to the data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. ---* **Loss functions** ( `binary_crossentropy` vs `categorical_crossentropy`) - -- For `binary_crossentropy`, sigmoid activation, scalar target -- For `categorical_crossentropy`, softmax activation, one-hot encoded target---* **LearningRateScheduler Callback** - The LearningRateScheduler callback allows us to define a function to call that takes the epoch number as an argument and returns the learning rate to use in the optimizer. When used, the learning rate specified in the optimizer is ignored.
###Code
train_gen = generate_images(train_idx, is_training=True, batch_size=32)
# Since, train_idx is too large to fit inside RAM at once, we generate batches of size 32/64 from it (called train_gen))
valid_gen = generate_images(val_idx, is_training=True, batch_size=32)
history = model.fit(train_gen, steps_per_epoch = len(train_idx)//32, epochs=22 , callbacks=callbacks_list,
validation_data=valid_gen, validation_steps=len(val_idx)//32)
###Output
Epoch 1/22
592/592 [==============================] - 200s 337ms/step - loss: 2.0457 - age_output_loss: 1.1166 - gender_output_loss: 0.5320 - age_output_accuracy: 0.5296 - gender_output_accuracy: 0.7298 - val_loss: 2.1632 - val_age_output_loss: 1.3199 - val_gender_output_loss: 0.4401 - val_age_output_accuracy: 0.4390 - val_gender_output_accuracy: 0.7865
Epoch 2/22
592/592 [==============================] - 192s 324ms/step - loss: 1.4360 - age_output_loss: 0.8870 - gender_output_loss: 0.3670 - age_output_accuracy: 0.6281 - gender_output_accuracy: 0.8369 - val_loss: 1.4478 - val_age_output_loss: 0.9380 - val_gender_output_loss: 0.3106 - val_age_output_accuracy: 0.6132 - val_gender_output_accuracy: 0.8585
Epoch 3/22
592/592 [==============================] - 194s 327ms/step - loss: 1.2849 - age_output_loss: 0.7804 - gender_output_loss: 0.3219 - age_output_accuracy: 0.6670 - gender_output_accuracy: 0.8596 - val_loss: 1.2353 - val_age_output_loss: 0.8277 - val_gender_output_loss: 0.3026 - val_age_output_accuracy: 0.6419 - val_gender_output_accuracy: 0.8495
Epoch 4/22
592/592 [==============================] - 193s 327ms/step - loss: 1.2300 - age_output_loss: 0.7363 - gender_output_loss: 0.3038 - age_output_accuracy: 0.6867 - gender_output_accuracy: 0.8650 - val_loss: 1.1190 - val_age_output_loss: 0.6985 - val_gender_output_loss: 0.3027 - val_age_output_accuracy: 0.7044 - val_gender_output_accuracy: 0.8501
Epoch 5/22
592/592 [==============================] - 191s 322ms/step - loss: 1.0230 - age_output_loss: 0.6679 - gender_output_loss: 0.2573 - age_output_accuracy: 0.7122 - gender_output_accuracy: 0.8868 - val_loss: 1.2206 - val_age_output_loss: 0.6861 - val_gender_output_loss: 0.2747 - val_age_output_accuracy: 0.7166 - val_gender_output_accuracy: 0.8788
Epoch 6/22
592/592 [==============================] - 188s 318ms/step - loss: 0.9897 - age_output_loss: 0.6511 - gender_output_loss: 0.2385 - age_output_accuracy: 0.7209 - gender_output_accuracy: 0.8981 - val_loss: 1.1326 - val_age_output_loss: 0.6195 - val_gender_output_loss: 0.3005 - val_age_output_accuracy: 0.7299 - val_gender_output_accuracy: 0.8545
Epoch 7/22
592/592 [==============================] - 188s 318ms/step - loss: 0.9693 - age_output_loss: 0.6331 - gender_output_loss: 0.2338 - age_output_accuracy: 0.7298 - gender_output_accuracy: 0.8973 - val_loss: 1.0137 - val_age_output_loss: 0.6072 - val_gender_output_loss: 0.2372 - val_age_output_accuracy: 0.7321 - val_gender_output_accuracy: 0.8948
Epoch 8/22
592/592 [==============================] - 187s 315ms/step - loss: 0.9226 - age_output_loss: 0.6069 - gender_output_loss: 0.2224 - age_output_accuracy: 0.7435 - gender_output_accuracy: 0.9046 - val_loss: 1.0404 - val_age_output_loss: 0.6176 - val_gender_output_loss: 0.2585 - val_age_output_accuracy: 0.7272 - val_gender_output_accuracy: 0.8853
Epoch 9/22
592/592 [==============================] - 188s 317ms/step - loss: 0.9021 - age_output_loss: 0.6023 - gender_output_loss: 0.2052 - age_output_accuracy: 0.7429 - gender_output_accuracy: 0.9125 - val_loss: 0.9814 - val_age_output_loss: 0.5965 - val_gender_output_loss: 0.2491 - val_age_output_accuracy: 0.7441 - val_gender_output_accuracy: 0.8984
Epoch 10/22
592/592 [==============================] - 189s 320ms/step - loss: 0.7721 - age_output_loss: 0.5399 - gender_output_loss: 0.1751 - age_output_accuracy: 0.7722 - gender_output_accuracy: 0.9286 - val_loss: 0.7754 - val_age_output_loss: 0.5767 - val_gender_output_loss: 0.2373 - val_age_output_accuracy: 0.7468 - val_gender_output_accuracy: 0.9029
Epoch 11/22
592/592 [==============================] - 188s 317ms/step - loss: 0.7236 - age_output_loss: 0.5126 - gender_output_loss: 0.1596 - age_output_accuracy: 0.7866 - gender_output_accuracy: 0.9359 - val_loss: 0.8049 - val_age_output_loss: 0.5783 - val_gender_output_loss: 0.2339 - val_age_output_accuracy: 0.7525 - val_gender_output_accuracy: 0.9056
Epoch 12/22
592/592 [==============================] - 186s 315ms/step - loss: 0.6907 - age_output_loss: 0.4935 - gender_output_loss: 0.1425 - age_output_accuracy: 0.7940 - gender_output_accuracy: 0.9434 - val_loss: 0.9491 - val_age_output_loss: 0.5951 - val_gender_output_loss: 0.2457 - val_age_output_accuracy: 0.7555 - val_gender_output_accuracy: 0.9088
Epoch 13/22
592/592 [==============================] - 187s 316ms/step - loss: 0.6548 - age_output_loss: 0.4669 - gender_output_loss: 0.1346 - age_output_accuracy: 0.8053 - gender_output_accuracy: 0.9467 - val_loss: 0.9543 - val_age_output_loss: 0.6014 - val_gender_output_loss: 0.2447 - val_age_output_accuracy: 0.7551 - val_gender_output_accuracy: 0.9082
Epoch 14/22
592/592 [==============================] - 185s 312ms/step - loss: 0.6376 - age_output_loss: 0.4527 - gender_output_loss: 0.1273 - age_output_accuracy: 0.8135 - gender_output_accuracy: 0.9517 - val_loss: 0.9251 - val_age_output_loss: 0.6113 - val_gender_output_loss: 0.2543 - val_age_output_accuracy: 0.7551 - val_gender_output_accuracy: 0.9084
Epoch 15/22
592/592 [==============================] - 186s 315ms/step - loss: 0.5350 - age_output_loss: 0.3996 - gender_output_loss: 0.1030 - age_output_accuracy: 0.8414 - gender_output_accuracy: 0.9611 - val_loss: 0.7436 - val_age_output_loss: 0.6051 - val_gender_output_loss: 0.2878 - val_age_output_accuracy: 0.7686 - val_gender_output_accuracy: 0.9062
Epoch 16/22
592/592 [==============================] - 190s 320ms/step - loss: 0.4934 - age_output_loss: 0.3703 - gender_output_loss: 0.0904 - age_output_accuracy: 0.8539 - gender_output_accuracy: 0.9663 - val_loss: 0.7627 - val_age_output_loss: 0.6136 - val_gender_output_loss: 0.2645 - val_age_output_accuracy: 0.7595 - val_gender_output_accuracy: 0.9136
Epoch 17/22
592/592 [==============================] - 195s 330ms/step - loss: 0.4685 - age_output_loss: 0.3505 - gender_output_loss: 0.0839 - age_output_accuracy: 0.8612 - gender_output_accuracy: 0.9702 - val_loss: 0.7044 - val_age_output_loss: 0.6437 - val_gender_output_loss: 0.2894 - val_age_output_accuracy: 0.7656 - val_gender_output_accuracy: 0.9115
Epoch 18/22
592/592 [==============================] - 199s 335ms/step - loss: 0.4576 - age_output_loss: 0.3405 - gender_output_loss: 0.0799 - age_output_accuracy: 0.8643 - gender_output_accuracy: 0.9717 - val_loss: 0.7869 - val_age_output_loss: 0.6630 - val_gender_output_loss: 0.2833 - val_age_output_accuracy: 0.7532 - val_gender_output_accuracy: 0.9117
Epoch 19/22
592/592 [==============================] - 201s 340ms/step - loss: 0.4350 - age_output_loss: 0.3245 - gender_output_loss: 0.0753 - age_output_accuracy: 0.8739 - gender_output_accuracy: 0.9751 - val_loss: 0.5612 - val_age_output_loss: 0.6520 - val_gender_output_loss: 0.2968 - val_age_output_accuracy: 0.7595 - val_gender_output_accuracy: 0.9136
Epoch 20/22
592/592 [==============================] - 193s 326ms/step - loss: 0.3802 - age_output_loss: 0.2901 - gender_output_loss: 0.0650 - age_output_accuracy: 0.8895 - gender_output_accuracy: 0.9787 - val_loss: 1.0296 - val_age_output_loss: 0.6577 - val_gender_output_loss: 0.3153 - val_age_output_accuracy: 0.7574 - val_gender_output_accuracy: 0.9160
Epoch 21/22
592/592 [==============================] - 189s 320ms/step - loss: 0.3519 - age_output_loss: 0.2720 - gender_output_loss: 0.0576 - age_output_accuracy: 0.8957 - gender_output_accuracy: 0.9808 - val_loss: 1.2229 - val_age_output_loss: 0.6853 - val_gender_output_loss: 0.3360 - val_age_output_accuracy: 0.7517 - val_gender_output_accuracy: 0.9113
Epoch 22/22
592/592 [==============================] - 187s 317ms/step - loss: 0.3406 - age_output_loss: 0.2657 - gender_output_loss: 0.0524 - age_output_accuracy: 0.8970 - gender_output_accuracy: 0.9831 - val_loss: 1.2183 - val_age_output_loss: 0.6851 - val_gender_output_loss: 0.3367 - val_age_output_accuracy: 0.7544 - val_gender_output_accuracy: 0.9149
###Markdown
---Clearly, the performance of the model is saturating. I found the reason to be the diminutive learning rate after 45 epochs which becomes 0.0005, that is too low for the model to learn anything. *When the learning rate is too small, training is not only slower, but may become permanently stuck with a high training error.* This is precisely what seems to have happened above.Hence, we train the model, by **re-establishing the learning rate** to a lower value than initialized before (as model would be now at the trough , looking for the minimum using Adam optimizer). I decided it to be 0.002 and again keep a drop every 12 epochs. Initializing the learning rate higher than 0.002, results in very volatile and fluctuating results, as the model keeps on oscillating about that minimum.
###Code
picture = "/content/drive/My Drive/lr_finder.png"
from IPython.display import Image
Image(picture, width=800)
import math
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
from keras.callbacks import LearningRateScheduler
def step_decay(epoch):
initial_lrate = 0.002
drop = 0.5
epochs_drop = 7.0
lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lrate
opt = Adam(lr=0.0)
lrate = LearningRateScheduler(step_decay)
train_gen = generate_images(train_idx, is_training=True, batch_size=32)
valid_gen = generate_images(val_idx, is_training=True, batch_size=32)
callbacks_list = [lrate]
history = model.fit(train_gen, steps_per_epoch = len(train_idx)//32, epochs=22 , callbacks=callbacks_list,
validation_data=valid_gen, validation_steps=len(val_idx)//32)
###Output
Epoch 1/22
592/592 [==============================] - 10910s 18s/step - loss: 0.6538 - age_output_loss: 0.4499 - gender_output_loss: 0.1399 - age_output_accuracy: 0.8294 - gender_output_accuracy: 0.9517 - val_loss: 0.3447 - val_age_output_loss: 0.3667 - val_gender_output_loss: 0.1035 - val_age_output_accuracy: 0.8575 - val_gender_output_accuracy: 0.9628
Epoch 2/22
592/592 [==============================] - 158s 268ms/step - loss: 0.5831 - age_output_loss: 0.4121 - gender_output_loss: 0.1203 - age_output_accuracy: 0.8385 - gender_output_accuracy: 0.9578 - val_loss: 0.4025 - val_age_output_loss: 0.3830 - val_gender_output_loss: 0.1111 - val_age_output_accuracy: 0.8499 - val_gender_output_accuracy: 0.9599
Epoch 3/22
592/592 [==============================] - 158s 267ms/step - loss: 0.5579 - age_output_loss: 0.3978 - gender_output_loss: 0.1109 - age_output_accuracy: 0.8471 - gender_output_accuracy: 0.9618 - val_loss: 0.5020 - val_age_output_loss: 0.3963 - val_gender_output_loss: 0.1199 - val_age_output_accuracy: 0.8370 - val_gender_output_accuracy: 0.9559
Epoch 4/22
592/592 [==============================] - 158s 267ms/step - loss: 0.5348 - age_output_loss: 0.3788 - gender_output_loss: 0.1030 - age_output_accuracy: 0.8541 - gender_output_accuracy: 0.9641 - val_loss: 0.3861 - val_age_output_loss: 0.4186 - val_gender_output_loss: 0.1354 - val_age_output_accuracy: 0.8342 - val_gender_output_accuracy: 0.9523
Epoch 5/22
592/592 [==============================] - 161s 271ms/step - loss: 0.5200 - age_output_loss: 0.3690 - gender_output_loss: 0.0968 - age_output_accuracy: 0.8597 - gender_output_accuracy: 0.9658 - val_loss: 0.5428 - val_age_output_loss: 0.4403 - val_gender_output_loss: 0.1323 - val_age_output_accuracy: 0.8237 - val_gender_output_accuracy: 0.9540
Epoch 6/22
592/592 [==============================] - 160s 270ms/step - loss: 0.5104 - age_output_loss: 0.3595 - gender_output_loss: 0.0868 - age_output_accuracy: 0.8629 - gender_output_accuracy: 0.9686 - val_loss: 0.3927 - val_age_output_loss: 0.4612 - val_gender_output_loss: 0.1584 - val_age_output_accuracy: 0.8159 - val_gender_output_accuracy: 0.9474
Epoch 7/22
592/592 [==============================] - 161s 273ms/step - loss: 0.4052 - age_output_loss: 0.3019 - gender_output_loss: 0.0687 - age_output_accuracy: 0.8875 - gender_output_accuracy: 0.9758 - val_loss: 0.3832 - val_age_output_loss: 0.4377 - val_gender_output_loss: 0.1572 - val_age_output_accuracy: 0.8340 - val_gender_output_accuracy: 0.9497
Epoch 8/22
592/592 [==============================] - 160s 271ms/step - loss: 0.3614 - age_output_loss: 0.2722 - gender_output_loss: 0.0561 - age_output_accuracy: 0.8993 - gender_output_accuracy: 0.9816 - val_loss: 0.5037 - val_age_output_loss: 0.4249 - val_gender_output_loss: 0.1679 - val_age_output_accuracy: 0.8357 - val_gender_output_accuracy: 0.9529
Epoch 9/22
592/592 [==============================] - 160s 271ms/step - loss: 0.3303 - age_output_loss: 0.2470 - gender_output_loss: 0.0506 - age_output_accuracy: 0.9099 - gender_output_accuracy: 0.9818 - val_loss: 0.4126 - val_age_output_loss: 0.4375 - val_gender_output_loss: 0.1684 - val_age_output_accuracy: 0.8309 - val_gender_output_accuracy: 0.9508
Epoch 10/22
592/592 [==============================] - 161s 272ms/step - loss: 0.3251 - age_output_loss: 0.2434 - gender_output_loss: 0.0473 - age_output_accuracy: 0.9142 - gender_output_accuracy: 0.9840 - val_loss: 0.7140 - val_age_output_loss: 0.4718 - val_gender_output_loss: 0.1651 - val_age_output_accuracy: 0.8275 - val_gender_output_accuracy: 0.9516
Epoch 11/22
592/592 [==============================] - 162s 273ms/step - loss: 0.2988 - age_output_loss: 0.2235 - gender_output_loss: 0.0437 - age_output_accuracy: 0.9200 - gender_output_accuracy: 0.9852 - val_loss: 0.8084 - val_age_output_loss: 0.4462 - val_gender_output_loss: 0.1679 - val_age_output_accuracy: 0.8330 - val_gender_output_accuracy: 0.9540
Epoch 12/22
592/592 [==============================] - 161s 272ms/step - loss: 0.2865 - age_output_loss: 0.2118 - gender_output_loss: 0.0419 - age_output_accuracy: 0.9232 - gender_output_accuracy: 0.9852 - val_loss: 0.9976 - val_age_output_loss: 0.4730 - val_gender_output_loss: 0.1993 - val_age_output_accuracy: 0.8340 - val_gender_output_accuracy: 0.9455
Epoch 13/22
592/592 [==============================] - 162s 273ms/step - loss: 0.2878 - age_output_loss: 0.2111 - gender_output_loss: 0.0396 - age_output_accuracy: 0.9255 - gender_output_accuracy: 0.9860 - val_loss: 1.0647 - val_age_output_loss: 0.5154 - val_gender_output_loss: 0.1910 - val_age_output_accuracy: 0.8188 - val_gender_output_accuracy: 0.9483
Epoch 14/22
592/592 [==============================] - 161s 273ms/step - loss: 0.2213 - age_output_loss: 0.1723 - gender_output_loss: 0.0290 - age_output_accuracy: 0.9391 - gender_output_accuracy: 0.9900 - val_loss: 0.9897 - val_age_output_loss: 0.4970 - val_gender_output_loss: 0.2010 - val_age_output_accuracy: 0.8345 - val_gender_output_accuracy: 0.9500
Epoch 15/22
592/592 [==============================] - 160s 269ms/step - loss: 0.2028 - age_output_loss: 0.1586 - gender_output_loss: 0.0260 - age_output_accuracy: 0.9453 - gender_output_accuracy: 0.9909 - val_loss: 1.0216 - val_age_output_loss: 0.4707 - val_gender_output_loss: 0.1932 - val_age_output_accuracy: 0.8364 - val_gender_output_accuracy: 0.9523
Epoch 16/22
592/592 [==============================] - 158s 268ms/step - loss: 0.1878 - age_output_loss: 0.1450 - gender_output_loss: 0.0242 - age_output_accuracy: 0.9499 - gender_output_accuracy: 0.9921 - val_loss: 0.6582 - val_age_output_loss: 0.4862 - val_gender_output_loss: 0.2134 - val_age_output_accuracy: 0.8317 - val_gender_output_accuracy: 0.9508
Epoch 17/22
592/592 [==============================] - 159s 269ms/step - loss: 0.1799 - age_output_loss: 0.1380 - gender_output_loss: 0.0249 - age_output_accuracy: 0.9519 - gender_output_accuracy: 0.9920 - val_loss: 0.6531 - val_age_output_loss: 0.5266 - val_gender_output_loss: 0.1913 - val_age_output_accuracy: 0.8285 - val_gender_output_accuracy: 0.9525
Epoch 18/22
592/592 [==============================] - 159s 269ms/step - loss: 0.1786 - age_output_loss: 0.1355 - gender_output_loss: 0.0258 - age_output_accuracy: 0.9530 - gender_output_accuracy: 0.9909 - val_loss: 0.7356 - val_age_output_loss: 0.5203 - val_gender_output_loss: 0.2106 - val_age_output_accuracy: 0.8307 - val_gender_output_accuracy: 0.9514
Epoch 19/22
592/592 [==============================] - 159s 268ms/step - loss: 0.1728 - age_output_loss: 0.1342 - gender_output_loss: 0.0196 - age_output_accuracy: 0.9549 - gender_output_accuracy: 0.9932 - val_loss: 0.6228 - val_age_output_loss: 0.5268 - val_gender_output_loss: 0.2090 - val_age_output_accuracy: 0.8290 - val_gender_output_accuracy: 0.9491
Epoch 20/22
592/592 [==============================] - 159s 268ms/step - loss: 0.1665 - age_output_loss: 0.1286 - gender_output_loss: 0.0202 - age_output_accuracy: 0.9561 - gender_output_accuracy: 0.9925 - val_loss: 0.6983 - val_age_output_loss: 0.5502 - val_gender_output_loss: 0.2334 - val_age_output_accuracy: 0.8266 - val_gender_output_accuracy: 0.9451
Epoch 21/22
592/592 [==============================] - 159s 269ms/step - loss: 0.1551 - age_output_loss: 0.1232 - gender_output_loss: 0.0170 - age_output_accuracy: 0.9592 - gender_output_accuracy: 0.9940 - val_loss: 0.7495 - val_age_output_loss: 0.5346 - val_gender_output_loss: 0.2187 - val_age_output_accuracy: 0.8264 - val_gender_output_accuracy: 0.9483
Epoch 22/22
592/592 [==============================] - 159s 268ms/step - loss: 0.1389 - age_output_loss: 0.1111 - gender_output_loss: 0.0168 - age_output_accuracy: 0.9646 - gender_output_accuracy: 0.9944 - val_loss: 0.9818 - val_age_output_loss: 0.5370 - val_gender_output_loss: 0.2206 - val_age_output_accuracy: 0.8279 - val_gender_output_accuracy: 0.9502
###Markdown
We see that the model has started overfitting. Validation loss for both age and gender is increasing, which is a clear sign of overfitting. Hence, stop the training, else the model might lose its generalization ability. I tried training further, and the accuracy was not able to increase. Hence, I conclude that the model has achieved saturation.
###Code
model.save('/content/drive/My Drive/Colab Notebooks/a_g_best') # Saving the above run model. It has performed best till date.
model.save("/content/drive/My Drive/Colab Notebooks/a_g_best.h5") # Converting to .h5 file for deployment (new_model is only loaded in the next cell)
from keras.models import load_model
new_model = load_model('/content/drive/My Drive/Colab Notebooks/a_g_best')
###Output
Using TensorFlow backend.
###Markdown
The short answer to saving and loading - **Saving a Keras Model** -`model = ... Get model (Sequential, Functional Model, or Model subclass)``model.save('path/to/location')`This `save` function saves -1. The architecture of the model, allowing to re-create the model.2. The weights of the model.3. The training configuration (loss, optimizer).4. The state of the optimizer, allowing to resume training exactly where you left off.Calling `save('my_model')` creates a SavedModel folder `my_model`, containing the following - `assets` , `saved_model.pb` & `variables`. The model architecture, and training configuration (including the optimizer, losses, and metrics) are stored in `saved_model.pb`. The weights are saved in the `variables/` directory. **Loading the model back** - `model = keras.models.load_model('path/to/location')`
###Code
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
def loadImage(filepath):
test_img = image.load_img(filepath, target_size=(198, 198))
test_img = image.img_to_array(test_img)
test_img = np.expand_dims(test_img, axis = 0)
test_img /= 255
return test_img
picture = "/content/drive/My Drive/old_man.jpg"
age_pred, gender_pred = new_model.predict(loadImage(picture))
img = image.load_img(picture)
plt.imshow(img)
plt.show()
max=-1
count=0
for i in age_pred[0]:
if i>max:
max = i
temp = count
count+=1
if temp==0:
print('0-24 yrs old')
if temp==1:
print('25-49 yrs old')
if temp==2:
print('50-74 yrs old')
if temp==3:
print('75-99 yrs old')
if temp==4:
    print('100-124 yrs old')
if gender_pred[0][0]>gender_pred[0][1]:
print('Male')
else:
print('Female')
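# Hedged equivalent (illustrative): np.argmax collapses the manual max-tracking loop above into
# one call, using a label list that matches the five 25-year bins defined earlier.
age_labels = ['0-24 yrs old', '25-49 yrs old', '50-74 yrs old', '75-99 yrs old', '100-124 yrs old']
print(age_labels[np.argmax(age_pred[0])])
print('Male' if np.argmax(gender_pred[0]) == 0 else 'Female')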
###Output
_____no_output_____ |
Mobile Games AB Testing with Cookie Cats Analyze an AB test from the popular mobilzle game- Cookie Cats./notebook.ipynb | ###Markdown
1. Of cats and cookiesCookie Cats is a hugely popular mobile puzzle game developed by Tactile Entertainment. It's a classic "connect three"-style puzzle game where the player must connect tiles of the same color to clear the board and win the level. It also features singing cats. We're not kidding! Check out this short demo:As players progress through the levels of the game, they will occasionally encounter gates that force them to wait a non-trivial amount of time or make an in-app purchase to progress. In addition to driving in-app purchases, these gates serve the important purpose of giving players an enforced break from playing the game, hopefully resulting in that the player's enjoyment of the game being increased and prolonged.But where should the gates be placed? Initially the first gate was placed at level 30, but in this notebook we're going to analyze an AB-test where we moved the first gate in Cookie Cats from level 30 to level 40. In particular, we will look at the impact on player retention. But before we get to that, a key step before undertaking any analysis is understanding the data. So let's load it in and take a look!
###Code
# Importing pandas
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Reading in the data
df = pd.read_csv('datasets/cookie_cats.csv')
# Showing the first few rows
# ... YOUR CODE FOR TASK 1 ...
df.head()
###Output
_____no_output_____
###Markdown
2. The AB-test dataThe data we have is from 90,189 players that installed the game while the AB-test was running. The variables are:userid - a unique number that identifies each player.version - whether the player was put in the control group (gate_30 - a gate at level 30) or the group with the moved gate (gate_40 - a gate at level 40).sum_gamerounds - the number of game rounds played by the player during the first 14 days after install.retention_1 - did the player come back and play 1 day after installing?retention_7 - did the player come back and play 7 days after installing?When a player installed the game, he or she was randomly assigned to either gate_30 or gate_40. As a sanity check, let's see if there are roughly the same number of players in each AB group.
###Code
# Counting the number of players in each AB group.
# ... YOUR CODE FOR TASK 2 ...
df.groupby('version')['version'].count()
###Output
_____no_output_____
###Markdown
3. The distribution of game rounds It looks like there is roughly the same number of players in each group, nice!The focus of this analysis will be on how the gate placement affects player retention, but just for fun: Let's plot the distribution of the number of game rounds players played during their first week playing the game.
###Code
# This command makes plots appear in the notebook
%matplotlib inline
# Counting the number of players for each number of gamerounds
plot_df = df.groupby('sum_gamerounds')['userid'].count()
# Plotting the distribution of players that played 0 to 100 game rounds
ax = plot_df.head(100).plot(x='sum_gamerounds', y='userid')
ax.set_xlabel("sum_gamerounds")
ax.set_ylabel("userid")
###Output
_____no_output_____
###Markdown
4. Overall 1-day retentionIn the plot above we can see that some players install the game but then never play it (0 game rounds), some players just play a couple of game rounds in their first week, and some get really hooked!What we want is for players to like the game and to get hooked. A common metric in the video gaming industry for how fun and engaging a game is 1-day retention: The percentage of players that comes back and plays the game one day after they have installed it. The higher 1-day retention is, the easier it is to retain players and build a large player base. As a first step, let's look at what 1-day retention is overall.
###Code
# The % of users that came back the day after they installed
# ... YOUR CODE FOR TASK 4 ...
df['retention_1'].sum()/df['retention_1'].count()
###Output
_____no_output_____
###Markdown
5. 1-day retention by AB-group So, a little less than half of the players come back one day after installing the game. Now that we have a benchmark, let's look at how 1-day retention differs between the two AB-groups.
###Code
# Calculating 1-day retention for each AB-group
# ... YOUR CODE FOR TASK 5 ...
df.groupby('version')['retention_1'].sum()/df.groupby('version')['retention_1'].count()
###Output
_____no_output_____
###Markdown
6. Should we be confident in the difference?It appears that there was a slight decrease in 1-day retention when the gate was moved to level 40 (44.2%) compared to the control when it was at level 30 (44.8%). It's a small change, but even small changes in retention can have a large impact. But while we are certain of the difference in the data, how certain should we be that a gate at level 40 will be worse in the future?There are a couple of ways we can get at the certainty of these retention numbers. Here we will use bootstrapping: We will repeatedly re-sample our dataset (with replacement) and calculate 1-day retention for those samples. The variation in 1-day retention will give us an indication of how uncertain the retention numbers are.
###Code
# Creating an list with bootstrapped means for each AB-group
boot_1d = []
for i in range(500):
boot_mean = df.sample(frac=1, replace=True).groupby('version')['retention_1'].mean()
boot_1d.append(boot_mean)
# Transforming the list to a DataFrame
boot_1d = pd.DataFrame(boot_1d)
# A Kernel Density Estimate plot of the bootstrap distributions
boot_1d.plot()
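# Hedged addition (beyond the original tasks): summarise the bootstrap distributions with a 95%
# percentile interval for each AB-group, which quantifies the spread shown in the plot above.
print(boot_1d.quantile([0.025, 0.975]))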
###Output
_____no_output_____
###Markdown
7. Zooming in on the differenceThe two distributions above represent the bootstrap uncertainty over what the underlying 1-day retention could be for the two AB-groups. Just eyeballing this plot, we can see that there seems to be some evidence of a difference, albeit small. Let's zoom in on the difference in 1-day retention. (Note that in this notebook we have limited the number of bootstrap replications to 500 to keep the calculations quick. In "production" we would likely increase this to a much larger number, say, 10 000.)
###Code
# Adding a column with the % difference between the two AB-groups
boot_1d['diff'] = (boot_1d['gate_30'] - boot_1d['gate_40']) / boot_1d['gate_40'] * 100
# Ploting the bootstrap % difference
ax = boot_1d['diff'].plot()
ax.set_xlabel("Not bad not bad at all!")
###Output
_____no_output_____
###Markdown
8. The probability of a difference From this chart, we can see that the most likely % difference is around 1% - 2%, and that most of the distribution is above 0%, in favor of a gate at level 30. But what is the probability that the difference is above 0%? Let's calculate that as well.
###Code
# Calculating the probability that 1-day retention is greater when the gate is at level 30
prob = (boot_1d['diff'] > 0).sum() / len(boot_1d['diff'])
# Pretty printing the probability
print(prob)
###Output
0.946
###Markdown
9. 7-day retention by AB-groupThe bootstrap analysis tells us that there is a high probability that 1-day retention is better when the gate is at level 30. However, since players have only been playing the game for one day, it is likely that most players haven't reached level 30 yet. That is, many players won't have been affected by the gate, even if it's as early as level 30. But after having played for a week, more players should have reached level 40, and therefore it makes sense to also look at 7-day retention. That is: What percentage of the people that installed the game also showed up a week later to play the game again.Let's start by calculating 7-day retention for the two AB-groups.
###Code
# Calculating 7-day retention for both AB-groups
df.groupby('version')['retention_7'].sum()/df.groupby('version')['retention_7'].count()
###Output
_____no_output_____
###Markdown
10. Bootstrapping the difference againLike with 1-day retention, we see that 7-day retention is slightly lower (18.2%) when the gate is at level 40 than when the gate is at level 30 (19.0%). This difference is also larger than for 1-day retention, presumably because more players have had time to hit the first gate. We also see that the overall 7-day retention is lower than the overall 1-day retention; fewer people play a game a week after installing than a day after installing.But as before, let's use bootstrap analysis to figure out how certain we should be of the difference between the AB-groups.
###Code
# Creating a list with bootstrapped means for each AB-group
boot_7d = []
for i in range(500):
boot_mean = df.sample(frac=1, replace=True).groupby('version')['retention_7'].mean()
boot_7d.append(boot_mean)
# Transforming the list to a DataFrame
boot_7d = pd.DataFrame(boot_7d)
# Adding a column with the % difference between the two AB-groups
boot_7d['diff'] = (boot_7d['gate_30'] - boot_7d['gate_40']) / boot_7d['gate_40'] * 100
# Ploting the bootstrap % difference
ax = boot_7d['diff'].plot()
ax.set_xlabel("% difference in means")
# Calculating the probability that 7-day retention is greater when the gate is at level 30
prob = (boot_7d['diff'] > 0).sum() / len(boot_7d['diff'])
# Pretty printing the probability
print(prob)
###Output
1.0
###Markdown
11. The conclusionThe bootstrap result tells us that there is strong evidence that 7-day retention is higher when the gate is at level 30 than when it is at level 40. The conclusion is: If we want to keep retention high — both 1-day and 7-day retention — we should not move the gate from level 30 to level 40. There are, of course, other metrics we could look at, like the number of game rounds played or how much in-game purchases are made by the two AB-groups. But retention is one of the most important metrics. If we don't retain our player base, it doesn't matter how much money they spend in-game. So, why is retention higher when the gate is positioned earlier? One could expect the opposite: The later the obstacle, the longer people are going to engage with the game. But this is not what the data tells us. The theory of hedonic adaptation can give one explanation for this. In short, hedonic adaptation is the tendency for people to get less and less enjoyment out of a fun activity over time if that activity is undertaken continuously. By forcing players to take a break when they reach a gate, their enjoyment of the game is prolonged. But when the gate is moved to level 40, fewer players make it far enough, and they are more likely to quit the game because they simply got bored of it.
###Code
# So, given the data and the bootstrap analysis
# Should we move the gate from level 30 to level 40 ?
move_to_level_40 = False # True or False ?
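# Hedged cross-check (beyond the original project scope): a classical two-proportion z-test on
# 7-day retention points the same way as the bootstrap. This assumes statsmodels is available.
from statsmodels.stats.proportion import proportions_ztest
successes = df.groupby('version')['retention_7'].sum()
trials = df.groupby('version')['retention_7'].count()
z_stat, p_value = proportions_ztest(count=successes.values, nobs=trials.values)
print(z_stat, p_value)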
###Output
_____no_output_____ |
5 Multiple Conditions.ipynb | ###Markdown
Multiple Conditions---By combining ```if``` and ```else``` we were able to create binary pathways. With another built-in condiitonal keyword, we can create _multiple_ pathways in our code. ```elif``` statement```elif``` is a built-in keyword related to if statements.- ```elif``` can only exists if there is a related ```if``` statement above it- ```elif``` can have its own condition, and it will execute its code block if the Boolean condition is True- After the first if statement, you are allowed to have as many elif statements as you'd like- It is recommended that your elif's boolean condition is related to the condition that comes before itExamine the following format:```python if boolean_condition1: code here elif boolean_condition2: code here elif boolean_condition3: code here else: code here```__NOTE:__- if ```boolean_condition1``` is __True__, it will ignore the conditional statements below it- if ```boolean_condition1``` is __False__, it will check the 2nd condition- if both ```boolean_condition1``` and ```boolean_condition2``` is __False__, it will check the 3rd condition- if all the conditions evaluate to __False__, then the else's code block will execute
###Code
# Example
age = 14
if age > 17:
print('You are allowed to watch any movies.')
elif age >= 13:
print('You can watch any movies with any rating with exception of:')
print('-- You cannot watch NC-17.')
    print('-- You require parent/adult guardian supervision to watch R rated movies.')
else:
print('You can watch G rated movies and you also need parental guidance for PG and PG-13 rated movies.')
###Output
You can watch any movies with any rating with exception of:
-- You cannot watch NC-17.
-- You require parent/adult guardian supervision to watch R rated movies.
|
Split-a-String-in-Balanced-Strings.ipynb | ###Markdown
Split a String in Balanced StringsBalanced strings are those that have an equal number of 'L' and 'R' characters. Given a balanced string s, split it into the maximum number of balanced strings. Return the maximum number of split balanced strings. AnalysisProblem source: [LeetCode - Split a String in Balanced Strings - 1221](https://leetcode.com/problems/split-a-string-in-balanced-strings/)Greedy algorithm: the greedy insight is that as soon as the running prefix becomes balanced, we immediately count it as one split.
###Code
def balancedStringSplit(s):
select = []
count = 0
for c in s:
select.append(c)
if (select.count("L") == select.count("R")):
count += 1
select.clear()
return count
print(balancedStringSplit("RLRRLLRLRL"))
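# Hedged alternative: the list bookkeeping above costs O(n) per character because of count();
# a single running balance gives the same greedy answer in one pass over the string.
def balancedStringSplit_counter(s):
    balance, count = 0, 0
    for c in s:
        balance += 1 if c == "R" else -1   # +1 for 'R', -1 for 'L'
        if balance == 0:                   # the prefix is balanced, so cut here greedily
            count += 1
    return count

print(balancedStringSplit_counter("RLRRLLRLRL"))   # 4, same result as above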
###Output
_____no_output_____ |
02_Filtering_&_Sorting/Euro12/Exercises[Solved].ipynb | ###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
url = "https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv"
euro12 = pd.read_csv(url)
euro12.head()
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many teams participated in Euro2012?
###Code
euro12.Team.nunique()
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.shape[1]
###Output
_____no_output_____
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline.head()
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(by=['Red Cards', 'Yellow Cards'], ascending=True)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
discipline['Yellow Cards'].mean()
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12['Goals'] > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12['Team'].str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
euro12.iloc[:,0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
euro12.iloc[:, :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
euro12.set_index('Team', inplace=True)
euro12
euro12.loc[['England', 'Italy', 'Russia'], ['Shooting Accuracy']]
###Output
_____no_output_____ |
experiments/tl_3/A_killme/cores_wisig-oracle.run1.framed/trials/1/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed Parameters

These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present).

Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
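As a minimal sketch (not part of the original trial notebook) of how Papermill might inject these values from a driver script — the file names and parameter values below are illustrative only:

```
import papermill as pm

# Papermill rewrites the cell tagged "parameters" with the values passed here,
# so every key must also appear in required_parameters below.
pm.execute_notebook(
    "trial.ipynb",           # hypothetical input notebook
    "trial_output.ipynb",    # hypothetical executed copy
    parameters=dict(lr=0.001, seed=1337, dataset_seed=1337, n_way=16),
)
```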
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3A:cores+wisig -> oracle.run1.framed",
"device": "cuda",
"lr": 0.001,
"x_shape": [2, 200],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_loss",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 200]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 16000, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": 100,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_A_",
},
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": 100,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "take_200"],
"episode_transforms": [],
"domain_prefix": "W_A_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
],
"seed": 1337,
"dataset_seed": 1337,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
v1/Db2 11 Time and Date Functions.ipynb | ###Markdown
Db2 11 Time and Date Functions

There are plenty of new date and time functions found in Db2 11. These functions allow you to extract portions from a date and format the date in a variety of different ways. While Db2 already has a number of date and time functions, these new functions allow for greater compatibility with other database implementations, making it easier to port applications to Db2.
###Code
%run db2.ipynb
###Output
_____no_output_____
###Markdown
Table of Contents

* [Extract Function](extract)
* [DATE_PART Function](part)
* [DATE_TRUNC Function](trunc)
* [Extracting Specific Days from a Month](month)
* [Date Addition](add)
* [Extracting Weeks, Months, Quarters, and Years](extract)
* [Next Day Function](nextday)
* [Between Date/Time Functions](between)
* [Months Between](mbetween)
* [Date Duration](duration)
* [Overlaps Predicate](overlaps)
* [UTC Time Conversions](utc)

[Back to Top](top)

Extract Function

The EXTRACT function extracts an element from a date/time value. The syntax of the EXTRACT command is:

EXTRACT( element FROM expression )

This is a slightly different format from most functions that you see in DB2. Element must be one of the following values:

|Element Name | Description
|:---------------- | :-----------------------------------------------------------------------------------------
|EPOCH | Number of seconds since 1970-01-01 00:00:00.00. The value can be positive or negative.
|MILLENNIUM(S) | The millennium is to be returned.
|CENTURY(CENTURIES)| The number of full 100-year periods represented by the year.
|DECADE(S) | The number of full 10-year periods represented by the year.
|YEAR(S) | The year portion is to be returned.
|QUARTER | The quarter of the year (1 - 4) is to be returned.
|MONTH | The month portion is to be returned.
|WEEK | The number of the week of the year (1 - 53) that the specified day is to be returned.
|DAY(S) | The day portion is to be returned.
|DOW | The day of the week that is to be returned. Note that "1" represents Sunday.
|DOY | The day (1 - 366) of the year that is to be returned.
|HOUR(S) | The hour portion is to be returned.
|MINUTE(S) | The minute portion is to be returned.
|SECOND(S) | The second portion is to be returned.
|MILLISECOND(S) | The second of the minute, including fractional parts to one thousandth of a second
|MICROSECOND(S) | The second of the minute, including fractional parts to one millionth of a second

The synonym NOW is going to be used in the next example. NOW is a synonym for CURRENT TIMESTAMP.
###Code
%sql VALUES NOW
###Output
_____no_output_____
###Markdown
This SQL will return every possible extract value from the current date.
###Code
%%sql -a
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('EPOCH', EXTRACT( EPOCH FROM NOW )),
('MILLENNIUM(S)', EXTRACT( MILLENNIUM FROM NOW )),
('CENTURY(CENTURIES)', EXTRACT( CENTURY FROM NOW )),
('DECADE(S)', EXTRACT( DECADE FROM NOW )),
('YEAR(S)', EXTRACT( YEAR FROM NOW )),
('QUARTER', EXTRACT( QUARTER FROM NOW )),
('MONTH', EXTRACT( MONTH FROM NOW )),
('WEEK', EXTRACT( WEEK FROM NOW )),
('DAY(S)', EXTRACT( DAY FROM NOW )),
('DOW', EXTRACT( DOW FROM NOW )),
('DOY', EXTRACT( DOY FROM NOW )),
('HOUR(S)', EXTRACT( HOURS FROM NOW )),
('MINUTE(S)', EXTRACT( MINUTES FROM NOW )),
('SECOND(S)', EXTRACT( SECONDS FROM NOW )),
('MILLISECOND(S)', EXTRACT( MILLISECONDS FROM NOW )),
('MICROSECOND(S)', EXTRACT( MICROSECONDS FROM NOW ))
)
SELECT FUNCTION, CAST(RESULT AS BIGINT) FROM DATES
###Output
_____no_output_____
###Markdown
[Back to Top](top)

DATE_PART Function

DATE_PART is similar to the EXTRACT function but it uses the more familiar syntax:

DATE_PART(element, expression)

In the case of this function, the element must be placed in quotes, rather than used as a keyword as in the EXTRACT function. In addition, DATE_PART always returns a BIGINT, while the EXTRACT function will return a different data type depending on the element being returned. For instance, compare the SECONDS option for both functions. In the case of EXTRACT you get a DECIMAL result, while for DATE_PART you get a truncated BIGINT.
###Code
%%sql -a
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('EPOCH', DATE_PART('EPOCH' ,NOW )),
('MILLENNIUM(S)', DATE_PART('MILLENNIUM' ,NOW )),
('CENTURY(CENTURIES)', DATE_PART('CENTURY' ,NOW )),
('DECADE(S)', DATE_PART('DECADE' ,NOW )),
('YEAR(S)', DATE_PART('YEAR' ,NOW )),
('QUARTER', DATE_PART('QUARTER' ,NOW )),
('MONTH', DATE_PART('MONTH' ,NOW )),
('WEEK', DATE_PART('WEEK' ,NOW )),
('DAY(S)', DATE_PART('DAY' ,NOW )),
('DOW', DATE_PART('DOW' ,NOW )),
('DOY', DATE_PART('DOY' ,NOW )),
('HOUR(S)', DATE_PART('HOURS' ,NOW )),
('MINUTE(S)', DATE_PART('MINUTES' ,NOW )),
('SECOND(S)', DATE_PART('SECONDS' ,NOW )),
('MILLISECOND(S)', DATE_PART('MILLISECONDS' ,NOW )),
('MICROSECOND(S)', DATE_PART('MICROSECONDS' ,NOW ))
)
SELECT FUNCTION, CAST(RESULT AS BIGINT) FROM DATES;
###Output
_____no_output_____
###Markdown
[Back to Top](top)

DATE_TRUNC Function

DATE_TRUNC computes the same results as the DATE_PART function but then truncates the value down. Note that not all values can be truncated. The function syntax is:

DATE_TRUNC(element, expression)

The element must be placed in quotes, rather than as a keyword in the EXTRACT function. Note that DATE_TRUNC always returns a BIGINT.

The elements that can be truncated are:

|Element Name |Description
|:---------------- |:------------------------------------------------------------------------------
|MILLENNIUM(S) |The millennium is to be returned.
|CENTURY(CENTURIES) |The number of full 100-year periods represented by the year.
|DECADE(S) |The number of full 10-year periods represented by the year.
|YEAR(S) |The year portion is to be returned.
|QUARTER |The quarter of the year (1 - 4) is to be returned.
|MONTH |The month portion is to be returned.
|WEEK |The number of the week of the year (1 - 53) that the specified day is to be returned.
|DAY(S) |The day portion is to be returned.
|HOUR(S) |The hour portion is to be returned.
|MINUTE(S) |The minute portion is to be returned.
|SECOND(S) |The second portion is to be returned.
|MILLISECOND(S) |The second of the minute, including fractional parts to one thousandth of a second
|MICROSECOND(S) |The second of the minute, including fractional parts to one millionth of a second
###Code
%%sql -a
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('MILLENNIUM(S)', DATE_TRUNC('MILLENNIUM' ,NOW )),
('CENTURY(CENTURIES)', DATE_TRUNC('CENTURY' ,NOW )),
('DECADE(S)', DATE_TRUNC('DECADE' ,NOW )),
('YEAR(S)', DATE_TRUNC('YEAR' ,NOW )),
('QUARTER', DATE_TRUNC('QUARTER' ,NOW )),
('MONTH', DATE_TRUNC('MONTH' ,NOW )),
('WEEK', DATE_TRUNC('WEEK' ,NOW )),
('DAY(S)', DATE_TRUNC('DAY' ,NOW )),
('HOUR(S)', DATE_TRUNC('HOURS' ,NOW )),
('MINUTE(S)', DATE_TRUNC('MINUTES' ,NOW )),
('SECOND(S)', DATE_TRUNC('SECONDS' ,NOW )),
('MILLISECOND(S)', DATE_TRUNC('MILLISECONDS' ,NOW )),
('MICROSECOND(S)', DATE_TRUNC('MICROSECONDS' ,NOW ))
)
SELECT FUNCTION, RESULT FROM DATES
###Output
_____no_output_____
###Markdown
[Back to Top](top)

Extracting Specific Days from a Month

There are three functions that retrieve day information from a date. These functions include:

- DAYOFMONTH - returns an integer between 1 and 31 that represents the day of the argument
- FIRST_DAY - returns a date or timestamp that represents the first day of the month of the argument
- DAYS_TO_END_OF_MONTH - returns the number of days to the end of the month

This is the current date so that you know what all of the calculations are based on.
###Code
%sql VALUES NOW
###Output
_____no_output_____
###Markdown
This expression (DAYOFMONTH) returns the day of the month.
###Code
%sql VALUES DAYOFMONTH(NOW)
###Output
_____no_output_____
###Markdown
FIRST_DAY will return the first day of the month. You could probably compute this with standard SQL date functions, but it is a lot easier just to use this builtin function.
###Code
%sql VALUES FIRST_DAY(NOW)
###Output
_____no_output_____
###Markdown
Finally, DAYS_TO_END_OF_MONTH will return the number of days to the end of the month. A zero would be returned if you are on the last day of the month.
###Code
%sql VALUES DAYS_TO_END_OF_MONTH(NOW)
###Output
_____no_output_____
###Markdown
[Back to Top](top)

Date Addition Functions

The date addition functions will add or subtract days from a current timestamp. The functions that are available are:

- ADD_YEARS - Add years to a date
- ADD_MONTHS - Add months to a date
- ADD_DAYS - Add days to a date
- ADD_HOURS - Add hours to a date
- ADD_MINUTES - Add minutes to a date
- ADD_SECONDS - Add seconds to a date

The format of the function is: ADD_DAYS( expression, numeric expression )

The following SQL will add one "unit" to the current date.
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('ADD_YEARS ',ADD_YEARS(NOW,1)),
('ADD_MONTHS ',ADD_MONTHS(NOW,1)),
('ADD_DAYS ',ADD_DAYS(NOW,1)),
('ADD_HOURS ',ADD_HOURS(NOW,1)),
('ADD_MINUTES ',ADD_MINUTES(NOW,1)),
('ADD_SECONDS ',ADD_SECONDS(NOW,1))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
A negative number can be used to subtract values from the current date.
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('ADD_YEARS ',ADD_YEARS(NOW,-1)),
('ADD_MONTHS ',ADD_MONTHS(NOW,-1)),
('ADD_DAYS ',ADD_DAYS(NOW,-1)),
('ADD_HOURS ',ADD_HOURS(NOW,-1)),
('ADD_MINUTES ',ADD_MINUTES(NOW,-1)),
('ADD_SECONDS ',ADD_SECONDS(NOW,-1))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
[Back to Top](top)

Extracting Weeks, Months, Quarters, and Years from a Date

There are four functions that extract different values from a date. These functions include:

- THIS_QUARTER - returns the first day of the quarter
- THIS_WEEK - returns the first day of the week (Sunday is considered the first day of that week)
- THIS_MONTH - returns the first day of the month
- THIS_YEAR - returns the first day of the year
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('THIS_WEEK ',THIS_WEEK(NOW)),
('THIS_MONTH ',THIS_MONTH(NOW)),
('THIS_QUARTER ',THIS_QUARTER(NOW)),
('THIS_YEAR ',THIS_YEAR(NOW))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
There is also a NEXT function for each of these. The NEXT function will return the next week, month, quarter, or year given a current date.
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('NEXT_WEEK ',NEXT_WEEK(NOW)),
('NEXT_MONTH ',NEXT_MONTH(NOW)),
('NEXT_QUARTER ',NEXT_QUARTER(NOW)),
('NEXT_YEAR ',NEXT_YEAR(NOW))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
[Back to Top](top)

Next Day Function

The previous set of functions returned a date value for the current week, month, quarter, or year (or the next one if you used the NEXT function). The NEXT_DAY function returns the next day (after the date you supply) based on the string representation of the day. The date string will be dependent on the codepage that you are using for the database.

The date (from an English perspective) can be:

|Day |Short form
|:-------- |:---------
|Monday |MON
|Tuesday |TUE
|Wednesday |WED
|Thursday |THU
|Friday |FRI
|Saturday |SAT
|Sunday |SUN

The following SQL will show you the "day" after the current date that is Monday through Sunday.
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('Monday ',NEXT_DAY(NOW,'Monday')),
('Tuesday ',NEXT_DAY(NOW,'TUE')),
('Wednesday ',NEXT_DAY(NOW,'Wednesday')),
('Thursday ',NEXT_DAY(NOW,'Thursday')),
('Friday ',NEXT_DAY(NOW,'FRI')),
('Saturday ',NEXT_DAY(NOW,'Saturday')),
('Sunday ',NEXT_DAY(NOW,'Sunday'))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
[Back to Top](top)

Between Date/Time Functions

These date functions compute the number of full seconds, minutes, hours, days, weeks, and years between two dates. If there isn't a full value between the two objects (like less than a day), a zero will be returned. These new functions are:

- HOURS_BETWEEN - returns the number of full hours between two arguments
- MINUTES_BETWEEN - returns the number of full minutes between two arguments
- SECONDS_BETWEEN - returns the number of full seconds between two arguments
- DAYS_BETWEEN - returns the number of full days between two arguments
- WEEKS_BETWEEN - returns the number of full weeks between two arguments
- YEARS_BETWEEN - returns the number of full years between two arguments

The format of the function is:

DAYS_BETWEEN( expression1, expression2 )

The following SQL will use a date that is in the future with exactly one extra second, minute, hour, day, week, and year added to it.
###Code
%%sql -q
DROP VARIABLE FUTURE_DATE;
CREATE VARIABLE FUTURE_DATE TIMESTAMP DEFAULT(NOW + 1 SECOND + 1 MINUTE + 1 HOUR + 8 DAYS + 1 YEAR);
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('SECONDS_BETWEEN',SECONDS_BETWEEN(FUTURE_DATE,NOW)),
('MINUTES_BETWEEN',MINUTES_BETWEEN(FUTURE_DATE,NOW)),
('HOURS_BETWEEN ',HOURS_BETWEEN(FUTURE_DATE,NOW)),
('DAYS BETWEEN ',DAYS_BETWEEN(FUTURE_DATE,NOW)),
('WEEKS_BETWEEN ',WEEKS_BETWEEN(FUTURE_DATE,NOW)),
('YEARS_BETWEEN ',YEARS_BETWEEN(FUTURE_DATE,NOW))
)
SELECT * FROM DATES;
###Output
_____no_output_____
###Markdown
[Back to Top](top)

MONTHS_BETWEEN Function

You may have noticed that the MONTHS_BETWEEN function was not in the previous list of functions. The reason for this is that the value returned for MONTHS_BETWEEN is different from the other functions. The MONTHS_BETWEEN function returns a DECIMAL value rather than an integer value, because the duration of a month is not as precise as a day, week, or year. The following example will show how the duration is a decimal value rather than an integer. You could always truncate the value if you want an integer.
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('0 MONTH ',MONTHS_BETWEEN(NOW, NOW)),
('1 MONTH ',MONTHS_BETWEEN(NOW + 1 MONTH, NOW)),
('1 MONTH + 1 DAY',MONTHS_BETWEEN(NOW + 1 MONTH + 1 DAY, NOW)),
('LEAP YEAR ',MONTHS_BETWEEN('2016-02-01','2016-03-01')),
('NON-LEAP YEAR ',MONTHS_BETWEEN('2015-02-01','2015-03-01'))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
[Back to Top](top)

Date Duration Functions

An alternate way of representing date durations is through the use of an integer with the format YYYYMMDD, where the YYYY represents the year, MM the month, and DD the day. For example, a duration of 10203 represents 1 year, 2 months, and 3 days. Date durations are easier to manipulate than timestamp values and take up substantially less storage.

There are two new functions:

- YMD_BETWEEN returns a numeric value that specifies the number of full years, full months, and full days between two datetime values
- AGE returns a numeric value that represents the number of full years, full months, and full days between the current timestamp and the argument

This SQL statement will return various AGE calculations based on the current timestamp.
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('AGE + 1 DAY ',AGE(NOW - 1 DAY)),
('AGE + 1 MONTH ',AGE(NOW - 1 MONTH)),
('AGE + 1 YEAR ',AGE(NOW - 1 YEAR)),
('AGE + 1 DAY + 1 MONTH ',AGE(NOW - 1 DAY - 1 MONTH)),
('AGE + 1 DAY + 1 YEAR ',AGE(NOW - 1 DAY - 1 YEAR)),
('AGE + 1 DAY + 1 MONTH + 1 YEAR',AGE(NOW - 1 DAY - 1 MONTH - 1 YEAR))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
The YMD_BETWEEN function is similar to the AGE function except that it takes two date arguments. We can simulate the AGE function by supplying the NOW function to the YMD_BETWEEN function.
###Code
%%sql
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('1 DAY ',YMD_BETWEEN(NOW,NOW - 1 DAY)),
('1 MONTH ',YMD_BETWEEN(NOW,NOW - 1 MONTH)),
('1 YEAR ',YMD_BETWEEN(NOW,NOW - 1 YEAR)),
('1 DAY + 1 MONTH ',YMD_BETWEEN(NOW,NOW - 1 DAY - 1 MONTH)),
('1 DAY + 1 YEAR ',YMD_BETWEEN(NOW,NOW - 1 DAY - 1 YEAR)),
('1 DAY + 1 MONTH + 1 YEAR',YMD_BETWEEN(NOW,NOW - 1 DAY - 1 MONTH - 1 YEAR))
)
SELECT * FROM DATES
###Output
_____no_output_____
###Markdown
[Back to Top](top)

OVERLAPS Predicate

The OVERLAPS predicate is used to determine whether two chronological periods overlap. This is not a function within DB2, but rather a special SQL syntax extension. A chronological period is specified by a pair of date-time expressions. The first expression specifies the start of a period; the second specifies its end.

(start1, end1) OVERLAPS (start2, end2)

The beginning and end values are not included in the periods. In general, two periods overlap when each period starts before the other period ends. For example, the periods 2016-10-19 to 2016-10-20 and 2016-10-20 to 2016-10-21 do not overlap. For instance, the following intervals do not overlap.
###Code
%%sql
VALUES
CASE
WHEN
(NOW, NOW + 1 DAY) OVERLAPS (NOW + 1 DAY, NOW + 2 DAYS) THEN 'Overlaps'
ELSE
'No Overlap'
END
###Output
_____no_output_____
###Markdown
If the first date range is extended by one day then the range will overlap.
###Code
%%sql
VALUES
CASE
WHEN
(NOW, NOW + 2 DAYS) OVERLAPS (NOW + 1 DAY, NOW + 2 DAYS) THEN 'Overlaps'
ELSE
'No Overlap'
END
###Output
_____no_output_____
###Markdown
Identical date ranges will overlap.
###Code
%%sql
VALUES
CASE
WHEN
(NOW, NOW + 1 DAY) OVERLAPS (NOW, NOW + 1 DAY) THEN 'Overlaps'
ELSE
'No Overlap'
END
###Output
_____no_output_____
###Markdown
[Back to Top](top)

UTC Time Conversions

Db2 has two functions that allow you to translate timestamps to and from UTC (Coordinated Universal Time). The FROM_UTC_TIMESTAMP scalar function returns a TIMESTAMP that is converted from Coordinated Universal Time to the time zone specified by the time zone string. The TO_UTC_TIMESTAMP scalar function returns a TIMESTAMP that is converted to Coordinated Universal Time from the time zone that is specified by the time zone string. The format of the two functions is:

FROM_UTC_TIMESTAMP( expression, timezone )
TO_UTC_TIMESTAMP( expression, timezone )

The return value from each of these functions is a timestamp. The "expression" is a timestamp that you want to convert to the local time zone (or convert to UTC). The timezone is an expression that specifies the time zone that the expression is to be adjusted to. The value of the timezone-expression must be a time zone name from the Internet Assigned Numbers Authority (IANA) time zone database. The standard format for a time zone name in the IANA database is Area/Location, where:

- Area is the English name of a continent, ocean, or the special area 'Etc'
- Location is the English name of a location within the area; usually a city, or small island

Examples:

- "America/Toronto"
- "Asia/Sakhalin"
- "Etc/UTC" (which represents Coordinated Universal Time)

For complete details on the valid set of time zone names and the rules that are associated with those time zones, refer to the IANA time zone database. The database server uses version 2010c of the IANA time zone database.

The result is a timestamp, adjusted from/to the Coordinated Universal Time time zone to the time zone specified by the timezone-expression. If the timezone-expression returns a value that is not a time zone in the IANA time zone database, then the value of expression is returned without being adjusted.

The timestamp adjustment is done by first applying the raw offset from Coordinated Universal Time of the timezone-expression. If Daylight Saving Time is in effect at the adjusted timestamp for the time zone that is specified by the timezone-expression, then the Daylight Saving Time offset is also applied to the timestamp.

Time zones that use Daylight Saving Time have ambiguities at the transition dates. When a time zone changes from standard time to Daylight Saving Time, a range of time does not occur as it is skipped during the transition. When a time zone changes from Daylight Saving Time to standard time, a range of time occurs twice. Ambiguous timestamps are treated as if they occurred when standard time was in effect for the time zone.

Convert the Coordinated Universal Time timestamp '2011-12-25 09:00:00.123456' to the 'Asia/Tokyo' time zone. The following returns a TIMESTAMP with the value '2011-12-25 18:00:00.123456'.
###Code
%%sql
VALUES FROM_UTC_TIMESTAMP(TIMESTAMP '2011-12-25 09:00:00.123456', 'Asia/Tokyo');
###Output
_____no_output_____
###Markdown
Convert the Coordinated Universal Time timestamp '2014-11-02 06:55:00' to the 'America/Toronto' time zone. The following returns a TIMESTAMP with the value '2014-11-02 01:55:00'.
###Code
%%sql
VALUES FROM_UTC_TIMESTAMP(TIMESTAMP'2014-11-02 06:55:00', 'America/Toronto');
###Output
_____no_output_____
###Markdown
Convert the Coordinated Universal Time timestamp '2015-03-02 06:05:00' to the 'America/Toronto' time zone. The following returns a TIMESTAMP with the value '2015-03-02 01:05:00'.
###Code
%%sql
VALUES FROM_UTC_TIMESTAMP(TIMESTAMP'2015-03-02 06:05:00', 'America/Toronto');
###Output
_____no_output_____
###Markdown
Convert the timestamp '1970-01-01 00:00:00' to the Coordinated Universal Time timezone from the 'America/Denver' timezone. The following returns a TIMESTAMP with the value '1970-01-01 07:00:00'.
###Code
%%sql
VALUES TO_UTC_TIMESTAMP(TIMESTAMP'1970-01-01 00:00:00', 'America/Denver');
###Output
_____no_output_____
###Markdown
Using UTC Functions

One of the applications for using UTC is to take the transaction timestamp and normalize it across all systems that access the data. You can convert the timestamp to UTC on insert, and then when it is retrieved, it can be converted to the local timezone. This example will use a number of techniques to hide the complexity of changing timestamps to local timezones.

The following SQL will create our base transaction table (TXS_BASE) that will be used throughout the example.
###Code
%%sql -q
DROP TABLE TXS_BASE;
CREATE TABLE TXS_BASE
(
ID INTEGER NOT NULL,
CUSTID INTEGER NOT NULL,
TXTIME_UTC TIMESTAMP NOT NULL
);
###Output
_____no_output_____
###Markdown
The UTC functions will be written to take advantage of a local timezone variable called TIME_ZONE. This variable will contain the timezone of the server (or user) that is running the transaction. In this case we are using the timezone in Toronto, Canada.
###Code
%%sql
CREATE OR REPLACE VARIABLE TIME_ZONE VARCHAR(255) DEFAULT('America/Toronto');
###Output
_____no_output_____
###Markdown
The SET Command can be used to update the TIME_ZONE to the current location we are in.
###Code
%sql SET TIME_ZONE = 'America/Toronto'
###Output
_____no_output_____
###Markdown
In order to retrieve the value of the current timezone, we take advantage of a simple user-defined function called GET_TIMEZONE. It just retrieves the contents of the current TIME_ZONE variable that we set up.
###Code
%%sql
CREATE OR REPLACE FUNCTION GET_TIMEZONE()
RETURNS VARCHAR(255)
LANGUAGE SQL CONTAINS SQL
RETURN (TIME_ZONE)
###Output
_____no_output_____
###Markdown
The TXS view is used by all SQL statements rather than the TXS_BASE table. The reason for this is to take advantage of INSTEAD OF triggers that can manipulate the UTC value without modifying the original SQL. Note that when the data is returned from the view, the TXTIME field is converted from UTC to the current TIMEZONE that we are in.
###Code
%%sql
CREATE OR REPLACE VIEW TXS AS
(
SELECT
ID,
CUSTID,
FROM_UTC_TIMESTAMP(TXTIME_UTC,GET_TIMEZONE()) AS TXTIME
FROM
TXS_BASE
)
###Output
_____no_output_____
###Markdown
An INSTEAD OF trigger (INSERT, UPDATE, and DELETE) is created against the TXS view so that any insert or update on a TXTIME column will be converted back to the UTC value. From an application perspective, we are using the local time, not the UTC time.
###Code
%%sql -d
CREATE OR REPLACE TRIGGER I_TXS
INSTEAD OF INSERT ON TXS
REFERENCING NEW AS NEW_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
INSERT INTO TXS_BASE VALUES (
NEW_TXS.ID,
NEW_TXS.CUSTID,
TO_UTC_TIMESTAMP(NEW_TXS.TXTIME,GET_TIMEZONE())
);
END
@
CREATE OR REPLACE TRIGGER U_TXS
INSTEAD OF UPDATE ON TXS
REFERENCING NEW AS NEW_TXS OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE TXS_BASE
SET (ID, CUSTID, TXTIME_UTC) =
(NEW_TXS.ID,
NEW_TXS.CUSTID,
TO_UTC_TIMESTAMP(NEW_TXS.TXTIME,TIME_ZONE)
)
WHERE
TXS_BASE.ID = OLD_TXS.ID
;
END
@
CREATE OR REPLACE TRIGGER D_TXS
INSTEAD OF DELETE ON TXS
REFERENCING OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
DELETE FROM TXS_BASE
WHERE
TXS_BASE.ID = OLD_TXS.ID
;
END
@
###Output
_____no_output_____
###Markdown
At this point in time(!) we can start inserting records into our table. We have already set the timezone to be Toronto, so the next insert statement will take the current time (NOW) and insert it into the table. For reference, here is the current time.
###Code
%sql VALUES NOW
###Output
_____no_output_____
###Markdown
We will insert one record into the table and immediately retrieve the result.
###Code
%%sql
INSERT INTO TXS VALUES(1,1,NOW);
SELECT * FROM TXS;
###Output
_____no_output_____
###Markdown
Note that the timestamp appears to be the same as what we inserted (plus or minus a few seconds). What actually sits in the base table is the UTC time.
###Code
%sql SELECT * FROM TXS_BASE
###Output
_____no_output_____
###Markdown
We can modify the time that is returned to us by changing our local timezone. The following statement will make the system think we are in Vancouver.
###Code
%sql SET TIME_ZONE = 'America/Vancouver'
###Output
_____no_output_____
###Markdown
Retrieving the results will show that the timestamp has shifted by 3 hours (Vancouver is 3 hours behind Toronto).
###Code
%sql SELECT * FROM TXS
###Output
_____no_output_____
###Markdown
So what happens if we insert a record into the table now that we are in Vancouver?
###Code
%%sql
INSERT INTO TXS VALUES(2,2,NOW);
SELECT * FROM TXS;
###Output
_____no_output_____
###Markdown
The data retrieved reflects the fact that we are now in Vancouver from an application perspective. Looking at the base table, you will see that everything has been converted to UTC time.
###Code
%sql SELECT * FROM TXS_BASE
###Output
_____no_output_____
###Markdown
Finally, we can switch back to Toronto time and see when the transactions were done. You will see that, from a Toronto perspective, the transactions were done three hours later because of the timezone differences.
###Code
%%sql
SET TIME_ZONE = 'America/Toronto';
SELECT * FROM TXS;
###Output
_____no_output_____ |
Elmo.ipynb | ###Markdown
ELMo (Embeddings from Language Models)

ELMo builds character-level word representations, unlike GloVe/Word2vec/BOW, which use fixed word embeddings. It computes contextualized word representations using character-based word representations and bidirectional LSTMs, as described in the paper "Deep contextualized word representations".

1. Captures the contextual meaning of a word, producing different embeddings for the same word in different contexts
2. Handles out-of-vocabulary words
3. Captures morphological word embeddings

Instead of using a fixed embedding for each word, ELMo looks at the entire sentence before assigning each word in it an embedding. It uses a bi-directional LSTM trained on a specific task to be able to create those embeddings. ELMo provided a significant step towards pre-training in the context of NLP.

1. For in-depth knowledge of the ELMo architecture refer to: https://www.mihaileric.com/posts/deep-contextualized-word-representations-elmo/
2. ELMo architecture paper: https://arxiv.org/pdf/1508.06615.pdf
3. For Highway Networks refer to: https://towardsdatascience.com/review-highway-networks-gating-function-to-highway-image-classification-5a33833797b5
###Code
import tensorflow as tf
import tensorflow_hub as hub
###Output
_____no_output_____
###Markdown
Elmo Embedding
###Code
elmo =hub.Module("https://tfhub.dev/google/elmo/2", trainable=False) ### Load Elmo Model
text1="She sat on the river bank across from a series of wide, large steps leading up a hill to the park where the Arch stood, framed against a black sky."
text2="How could a man with four million in the bank be in financial danger?"
embeddings = elmo(
    [text1, text2],
signature="default",
as_dict=True)["elmo"]
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
embeddings = session.run(embeddings)
###Output
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
###Markdown
Word Embedding
###Code
word_embeddings = elmo(
[text1, text2],
signature="default",
as_dict=True)["word_emb"]
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
word_embeddings = session.run(word_embeddings)
word_embeddings.shape
###Output
_____no_output_____
###Markdown
LSTM Layer1 Embeding
###Code
lstm1_embeddings = elmo(
[text1, text2],
signature="default",
as_dict=True)["lstm_outputs1"]
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
lstm1_embeddings = session.run(lstm1_embeddings)
lstm1_embeddings.shape
###Output
_____no_output_____
###Markdown
Inputs

The module defines two signatures: default, and tokens.

With the default signature, the module takes untokenized sentences as input. The input tensor is a string tensor with shape [batch_size]. The module tokenizes each string by splitting on spaces.

With the tokens signature, the module takes tokenized sentences as input. The input tensor is a string tensor with shape [batch_size, max_length] and an int32 tensor with shape [batch_size] corresponding to the sentence length. The length input is necessary to exclude padding in the case of sentences with varying length.

The output dictionary contains:

1. word_emb: the character-based word representations with shape [batch_size, max_length, 512].
2. lstm_outputs1: the first LSTM hidden state with shape [batch_size, max_length, 1024].
3. lstm_outputs2: the second LSTM hidden state with shape [batch_size, max_length, 1024].
4. elmo: the weighted sum of the 3 layers, where the weights are trainable. This tensor has shape [batch_size, max_length, 1024].
5. default: a fixed mean-pooling of all contextualized word representations with shape [batch_size, 1024].
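The cells in this notebook only exercise the default signature. As a rough sketch (not from the original notebook, with made-up sentences and assuming the module's documented "tokens"/"sequence_len" input keys), the tokens signature could be called like this, padding shorter sentences with empty strings:

```
tokens_input = [["the", "river", "bank", "", ""],
                ["money", "in", "the", "bank", "today"]]
tokens_length = [3, 5]
token_embeddings = elmo(
    inputs={"tokens": tokens_input, "sequence_len": tokens_length},
    signature="tokens",
    as_dict=True)["elmo"]   # shape [2, 5, 1024]; padding is excluded via sequence_len
```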
###Code
embeddings.shape
embeddings[0][5] ##Bank in text1
embeddings[1][9] ##Bank in text2
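# Hedged add-on (not in the original notebook): quantify how contextual the two
# "bank" vectors above are. The character-based word_emb layer is context
# independent, so these contextual ELMo vectors should not be identical.
import numpy as np
cosine = np.dot(embeddings[0][5], embeddings[1][9]) / (
    np.linalg.norm(embeddings[0][5]) * np.linalg.norm(embeddings[1][9]))
print("cosine similarity between the two 'bank' vectors:", cosine)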
###Output
_____no_output_____
###Markdown
###Code
import numpy as np
from allennlp.commands.elmo import ElmoEmbedder
from sklearn.decomposition import PCA
from collections import OrderedDict  # required by the __main__ example below (fruit = OrderedDict())
class Elmo:
def __init__(self):
self.elmo = ElmoEmbedder()
def get_elmo_vector(self, tokens, layer):
vectors = self.elmo.embed_sentence(tokens)
X = []
for vector in vectors[layer]:
X.append(vector)
X = np.array(X)
return X
def dim_reduction(X, n):
pca = PCA(n_components=n)
print("size of X: {}".format(X.shape))
results = pca.fit_transform(X)
print("size of reduced X: {}".format(results.shape))
for i, ratio in enumerate(pca.explained_variance_ratio_):
print("Variance retained ratio of PCA-{}: {}".format(i+1, ratio))
return results
def plot(word, token_list, reduced_X, file_name, title):
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# plot ELMo vectors
i = 0
for j, token in enumerate(token_list):
color = pick_color(j)
for _, w in enumerate(token):
# only plot the word of interest
if w.lower() in [word, word + 's', word + 'ful']:
ax.plot(reduced_X[i, 0], reduced_X[i, 1], color)
i += 1
tokens = []
for token in token_list:
tokens += token
# annotate point
k = 0
for i, token in enumerate(tokens):
if token.lower() in [word, word + 's', word + 'ful']:
text = ' '.join(token_list[k])
# bold the word of interest in the sentence
text = text.replace(token, r"$\bf{" + token + "}$")
plt.annotate(text, xy=(reduced_X[i, 0], reduced_X[i, 1]))
k += 1
ax.set_title(title)
ax.set_xlabel("PCA 1")
ax.set_ylabel("PCA 2")
fig.savefig(file_name, bbox_inches="tight")
print("{} saved\n".format(file_name))
def pick_color(i):
if i == 0:
color = 'ro'
elif i == 1:
color = 'bo'
elif i == 2:
color = 'yo'
elif i == 3:
color = 'go'
else:
color = 'co'
return color
if __name__ == "__main__":
model = Elmo()
fruit = OrderedDict()
fruit[0] = "Should I pack some fruits for you"
fruit[1] = "This is a fruitful venture"
fruit[2] = "Is berry a fruit"
fruit[3] = "Doctors think eating fruits are healthy"
fruit[4] = "I have fruits for breakfast"
fruit[5] = "Fruits are perishable"
fruit[6] = "Tomato is a fruit"
words = {
"fruit": fruit
}
# contextual vectors for ELMo layer 1 and 2
for layer in [1,2]:
for word, sentences in words.items():
X = np.concatenate([model.get_elmo_vector(tokens=sentences[idx].split(),
layer=layer)
for idx, _ in enumerate(sentences)], axis=0)
# The first 2 principal components
X_reduce = dim_reduction(X=X, n=2)
token_list = []
for _, sentence in sentences.items():
token_list.append(sentence.split())
file_name = "{}_elmo_layer_{}.png".format(word, layer)
title = "Layer {} ELMo vectors of the word {}".format(layer, word)
plot(word, token_list, X_reduce, file_name, title)
###Output
_____no_output_____ |
src/create_dataset.ipynb | ###Markdown
dataset:

**Lens**

```
/data/inspur_disk03/userdir/wangcx/lens_cx
change to /data/inspur_disk03/userdir/wangcx/lens_cx_add_bass @0822
```

**non-lens**

```
/data/inspur_disk03/userdir/wangcx/decals_non_lens_BASS_resulation/PSF/101t101_good
/data/inspur_disk03/userdir/wangcx/decals_non_lens_BASS_resulation/COMP/101t101_good
/data/inspur_disk03/userdir/wangcx/decals_non_lens_BASS_resulation/DEV/101t101_good
/data/inspur_disk03/userdir/wangcx/decals_non_lens_BASS_resulation/REX/101t101_good
```
###Code
import astropy.io.fits as fits
import numpy as np
import os
import glob
import tqdm
import matplotlib.pylab as plt
def check_dir(path):
if not os.path.isdir(path):
print('mkdir: ', path)
os.makedirs(path)
DirBase = "/data/inspur_disk03/userdir/wangcx"
OutBase = "/data/dell5/userdir/maotx/Lens/0822"
check_dir(OutBase)
data_shape = [101,101,3]
fp_lens = glob.glob(os.path.join(DirBase,'lens_cx_add_bass/cutout*.fits'))
fp_nlens = glob.glob(os.path.join(DirBase,'decals_non_lens_BASS_resulation/*/101t101_good/cutout*.fits'))
fp_lens=[i.replace(DirBase+'/','') for i in fp_lens]
fp_nlens=[i.replace(DirBase+'/','') for i in fp_nlens]
def show(data):
fig, axes = plt.subplots(1,4,figsize=(4.5*4,4.5))
ax = axes[0]
print(data.shape)
ax.imshow(data[:,:,0])
ax.invert_yaxis()
ax = axes[1]
ax.imshow(data[:,:,1])
ax.invert_yaxis()
ax = axes[2]
ax.imshow(data[:,:,2])
ax.invert_yaxis()
ax = axes[3]
ax.imshow(data)
ax.invert_yaxis()
def preprocess(func):
def wrapper(*args, **kwargs):
fp = args[0]
data = func(fp).astype(np.float32)
cut = 100
#---------------------------
m = data.mean(axis=(1,2))[:,None,None]
s = data.std(axis=(1,2))[:,None,None]
data -= m
data /= s
data = data.transpose(1,2,0)
#data = np.clip(data,-cut,cut)
#---------------------------
return data
return wrapper
@preprocess
def readdata(fp):
with fits.open(fp) as hdu:
data = hdu[0].data
return data
data = readdata(os.path.join(DirBase,fp_lens[0]))
show(data)
data = readdata(os.path.join(DirBase,fp_lens[2]))
show(data)
data = readdata(os.path.join(DirBase,fp_lens[4]))
show(data)
data = readdata(os.path.join(DirBase,fp_nlens[0]))
show(data)
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
select training sample
###Code
frac = np.array([0.8,0.9,1.0])
frac = np.array([0.9,1.0,1.0])
dt = np.dtype([('label', np.int32, 1), ('image', np.float32, data_shape)])
def sel(size, frac):
random_ind = np.random.permutation(np.arange(size))
ind_dataset = np.ceil(frac*size).astype(np.int32)
ind_train = random_ind[:ind_dataset[0]]
ind_valid = random_ind[ind_dataset[0]:ind_dataset[1]]
ind_test = random_ind[ind_dataset[1]:ind_dataset[2]]
return ind_train, ind_valid, ind_test
def get_filename(fps, frac):
index = sel(len(fps), frac)
fp_tr = [fps[i] for i in index[0]]
fp_va = [fps[i] for i in index[1]]
fp_te = [fps[i] for i in index[2]]
return fp_tr, fp_va, fp_te
path = os.path.join(OutBase, 'data')
check_dir(path)
FNs = {}
for fps, ns in zip([fp_lens, fp_nlens], ['lens', 'nlens']):
fp_tr, fp_va, fp_te = get_filename(fps, frac)
FNs[ns] = [fp_tr, fp_va, fp_te]
with open(os.path.join(path, 'filename_{}_tr'.format(ns)), 'w') as f:
for i in fp_tr:
f.writelines(i+'\n')
with open(os.path.join(path, 'filename_{}_va'.format(ns)), 'w') as f:
for i in fp_va:
f.writelines(i+'\n')
with open(os.path.join(path, 'filename_{}_te'.format(ns)), 'w') as f:
for i in fp_te:
f.writelines(i+'\n')
fn_train = np.hstack([FNs[i][0] for i in ['lens', 'nlens']])
nl1 = np.hstack([FNs[i][0] for i in ['lens']]).shape[0]
nl0 = np.hstack([FNs[i][0] for i in ['nlens']]).shape[0]
print(nl1, nl0)
data_tr = np.empty([len(fn_train)], dtype=dt)
data_tr['label'][:nl1] = 1
data_tr['label'][nl1:nl1+nl0] = 0
for i in tqdm.tqdm(range(len(fn_train))):
data_tr['image'][i] = readdata(os.path.join(DirBase, fn_train[i]))
fn_valid = np.hstack([FNs[i][1] for i in ['lens', 'nlens']])
nl1 = np.hstack([FNs[i][1] for i in ['lens']]).shape[0]
nl0 = np.hstack([FNs[i][1] for i in ['nlens']]).shape[0]
print(nl1, nl0)
data_va = np.empty([len(fn_valid)], dtype=dt)
data_va['label'][:nl1] = 1
data_va['label'][nl1:nl1+nl0] = 0
for i in tqdm.tqdm(range(len(fn_valid))):
data_va['image'][i] = readdata(os.path.join(DirBase, fn_valid[i]))
fn_test = np.hstack([FNs[i][2] for i in ['lens', 'nlens']])
nl1 = np.hstack([FNs[i][2] for i in ['lens']]).shape[0]
nl0 = np.hstack([FNs[i][2] for i in ['nlens']]).shape[0]
print(nl1, nl0)
data_te = np.empty([len(fn_test)], dtype=dt)
data_te['label'][:nl1]=1
data_te['label'][nl1:nl1+nl0]=0
remove = []
for i in tqdm.tqdm(range(len(fn_test))):
data_te['image'][i] = readdata(os.path.join(DirBase,fn_test[i]))
###Output
1%|▏ | 29/2107 [00:00<00:08, 246.58it/s]
###Markdown
oversampling for the training and validation sets (to balance the lens and non-lens classes):
###Code
def oversampling(sample):
index = np.arange(sample.shape[0])
dstar_ind = index[sample['label']==1]
sstar_ind = index[sample['label']==0]
resample_ind = dstar_ind[np.random.randint(0, dstar_ind.shape[0]-1, sstar_ind.shape[0])]
ind_new = np.hstack([sstar_ind,resample_ind])
ind_new = ind_new[np.random.permutation(np.arange(ind_new.shape[0]))]
return ind_new
reind = oversampling(data_tr)
train_dataset = data_tr[reind]
reind = oversampling(data_va)
valid_dataset = data_va[reind]
###Output
_____no_output_____
###Markdown
for a test case:
###Code
test_dataset = data_va
np.save(os.path.join(OutBase,'data/test.npy'), test_dataset)
test_dataset = data_te
print(train_dataset.shape, valid_dataset.shape, test_dataset.shape)
np.save(os.path.join(OutBase,'data/training.npy'), train_dataset)
#np.save(os.path.join(OutBase,'data/test.npy'), test_dataset)
np.save(os.path.join(OutBase,'data/valid.npy'), valid_dataset)
###Output
_____no_output_____
###Markdown
--- Create TFRecord file:
###Code
import os
import numpy as np
import tensorflow as tf
dirbase='/nfs/P100/SDSSV_Classifiers/processed_dataset/TF_dataset'
###Output
_____no_output_____
###Markdown
Define corresponding data type.
###Code
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
###Output
_____no_output_____
###Markdown
Rewrite the following functions for a different dataset.
###Code
def convert_to(data_set, Dir, name):
"""Converts a dataset to tfrecords."""
features = data_set.features
labels = data_set.labels
num_examples = data_set.num_examples
if features.shape[0] != num_examples:
raise ValueError('Features size %d does not match label size %d.' %
(features.shape[0], num_examples))
filename = os.path.join(Dir, name + '.tfrecords')
print('Writing', filename)
writer = tf.python_io.TFRecordWriter(filename)
for index in xrange(num_examples):
data_set.index = index
feature_raw = features[index].reshape(-1).tolist()
label_raw = labels[index].reshape(-1).tolist()
example = tf.train.Example(features=tf.train.Features(feature={
'index': _int64_feature(data_set.index),
'label_raw': _float_feature(label_raw),
'feature_raw': _float_feature(feature_raw)}))
writer.write(example.SerializeToString())
writer.close()
def main(data, filename):
# Get the data.
class data_set():
pass
Len = data.shape[0]
features = data['flux_norm']
labels = data['label'].reshape(-1,1)
data_set.features = features
data_set.labels = labels
data_set.num_examples = Len
convert_to(data_set, Dir=dirbase, name=filename)
###Output
_____no_output_____
###Markdown
Run the convert code.
###Code
star_type = ['wdsb2', 'wd', 'fgkm', 'hotstars', 'yso', 'cv']
Dir = '/nfs/P100/SDSSV_Classifiers/processed_dataset/dataset'
mean = np.load('/nfs/P100/SDSSV_Classifiers/processed_dataset/Norm_mu.npy')
std = np.load('/nfs/P100/SDSSV_Classifiers/processed_dataset/Norm_std.npy')
def loaddata(mode='train'):
DATA = []
for i in star_type:
filename=os.path.join(Dir, mode+'_'+i+'.npy')
data = np.load(filename)
data['flux_norm'] = (data['flux_norm']-mean)/std #!!!
DATA.append(data)
DATA = np.hstack(DATA)
random_ind = np.random.permutation(np.arange(DATA.shape[0]))
DATA = DATA[random_ind]
print mode, DATA['flux_norm'].mean(), DATA['flux_norm'].std()
return DATA
train_dataset = loaddata('train')
valid_dataset = loaddata('valid')
test_dataset = loaddata('test')
Dir2 = '/nfs/P100/SDSSV_Classifiers/processed_dataset/TF_dataset'
np.save(os.path.join(Dir2,'train.npy'),train_dataset)
np.save(os.path.join(Dir2,'valid.npy'),valid_dataset)
np.save(os.path.join(Dir2,'test.npy'),test_dataset)
main(train_dataset,'training')
main(valid_dataset,'valid')
main(test_dataset,'test')
###Output
('Writing', '/nfs/P100/SDSSV_Classifiers/processed_dataset/TF_dataset/training.tfrecords')
('Writing', '/nfs/P100/SDSSV_Classifiers/processed_dataset/TF_dataset/valid.tfrecords')
('Writing', '/nfs/P100/SDSSV_Classifiers/processed_dataset/TF_dataset/test.tfrecords')
###Markdown
**Check for repeated samples between the training and validation sets**
###Code
print test_dataset.shape
print valid_dataset.shape
print train_dataset.shape
a = valid_dataset
b = train_dataset
s = np.unique(b)
print s.dtype
print s.shape
for i in xrange(len(a)):
for j in xrange(len(s)):
if a[i]['index'] != s[j]['index']:
continue
if a[i]['label'] != s[j]['label']:
continue
if (a[i]['flux_norm'][:20] == s[j]['flux_norm'][:20]).sum() == 0:
continue
print i,j
print 'finished'
###Output
finished
|
samples/notebooks/A-LakeCreator/Example-2-Extract-Files.ipynb | ###Markdown
<!-- Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.--> Extracting files in parallel using notebooks***Extracting Data from Zipped Files and Migrating Output Files to S3 Buckets***___--- Contents1. [Introduction](Introduction)2. [Setup and Define Parameters](Setup-and-Define-Parameters) 1. [Define Parameters](Define-Parameters) 2. [Copy Zip Files Locally to Handle Extraction](Copy-Zip-Files-Locally-to-Handle-Extraction)3. [Extract Zip Files](Step-2:-Extract-Zip-Files)4. [Check For Errors and Clean Up](Step-3:-Checking-for-Errors-and-Clean-Up) --- IntroductionThis notebook goes through the process of extracting zip files and migrating their unzipped content to s3. We will go through the following steps to extract our files:1. Migrate the files to our local environment2. Use Shell Commands in our notebook with IPython to unzip the files3. Use AWS CLI Commands to move these unzipped files to a remote target s3 bucket. When calling on this notebook to run in **Example 1: "Orchestration Notebook for Building the Lake"**, we will concurrently run notebooks for each of the different zipped files on AWS Fargate which provides a serverless container execution environment. This way we can reduce time and extract our zip files in parallel. *** Setup Define Parameters First, let's define the source folder, s3 bucket path, and zip file name for our zip files we wish to extract. This will allow us to format an extract path where the Zip file sits in our remote environment. We also will specify an s3 bucket path for our target folders which will be where the extracted content will be placed.
###Code
sourceFolder = "landing/"
bucketName = "orbit-test-base-accoun-testlakebucketfa111111-1111111111"
zipFileName = "landing/cms/DE1_0_2008_Beneficiary_Summary_File_Sample_1.zip"
targetFolder = "s3://orbit-test-base-accoun-testlakebucketfa111111-1111111111/extracted/"
use_subdirs = True
toExtractPath = "s3://{}/{}".format(bucketName,zipFileName)
toExtractPath
###Output
_____no_output_____
###Markdown
Copy Zip Files Locally to Handle ExtractionOnce we have defined our parameters we can copy the zip files over from our s3 bucket "ExtractPath" to a zip file located on our local environment. This will allow us to call on the shell commands to unzip our file and move it back to cloud storage in s3:
###Code
!aws s3 ls --recursive $toExtractPath
!aws s3 cp $toExtractPath ./$zipFileName
###Output
_____no_output_____
###Markdown
**Note:** Here we are just removing the filename extension so we can store the unzipped content in a directory with the same base name:
###Code
baseName = zipFileName.split(".")[0]
baseName
###Output
_____no_output_____
###Markdown
*** Extract Zip FilesNow, let's call on the **unzip Shell command** to unzip our file in our local source location and transfer the unzipped file to the target directory "baseName".We will then check that we have a valid target Folder name in s3 to move the unzipped content back to cloud storage:
###Code
!rm -fR ./$baseName
!unzip ./$zipFileName -d ./$baseName
if use_subdirs:
filename = baseName.split("/")[-1]
targetFolder += filename
targetFolder
###Output
_____no_output_____
###Markdown
Move Output and Error Files to Target s3 Bucket(s)Lastly, let's use "**%%bash script magics**" to run cells with bash in a subprocess. We can copy all of the output and errors (if any) to our target folder in s3 to complete the extraction process for our zip files:
###Code
%%bash --out output --err error -s "$baseName" "$targetFolder"
echo "aws s3 cp --recursive ./$1 $2"
aws s3 cp --recursive ./$1 $2
###Output
_____no_output_____
###Markdown
*** Checking for Errors and Clean UpLet's double check that we did not run into any errors during the process of unzipping our zip files. We can check to see if any errors were logged when unzipping and assert that no errors were found if successful. Next, we can remove our two local directories holding our zipped file and our unzipped file(s) and continue building out our Data Lake with our unzipped data securely stored in s3:
###Code
print(output)
print(error)
assert "upload" in output
assert len(error) == 0
!rm -fR ./$baseName
!rm -f ./$zipFileName
###Output
_____no_output_____
###Markdown
<!-- Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.--> Extracting files in parallel using notebooks***Extracting Data from Zipped Files and Migrating Output Files to S3 Buckets***___--- Contents1. [Introduction](Introduction)2. [Setup and Define Parameters](Setup-and-Define-Parameters) 1. [Define Parameters](Define-Parameters) 2. [Copy Zip Files Locally to Handle Extraction](Copy-Zip-Files-Locally-to-Handle-Extraction)3. [Extract Zip Files](Step-2:-Extract-Zip-Files)4. [Check For Errors and Clean Up](Step-3:-Checking-for-Errors-and-Clean-Up) --- IntroductionThis notebook goes through the process of extracting zip files and migrating their unzipped content to s3. We will go through the following steps to extract our files:1. Migrate the files to our local environment2. Use Shell Commands in our notebook with IPython to unzip the files3. Use AWS CLI Commands to move these unzipped files to a remote target s3 bucket. When calling on this notebook to run in **Example 1: "Orchestration Notebook for Building the Lake"**, we will concurrently run notebooks for each of the different zipped files on AWS Fargate which provides a serverless container execution environment. This way we can reduce time and extract our zip files in parallel. *** Setup Define Parameters First, let's define the source folder, s3 bucket path, and zip file name for our zip files we wish to extract. This will allow us to format an extract path where the Zip file sits in our remote environment. We also will specify an s3 bucket path for our target folders which will be where the extracted content will be placed.
###Code
sourceFolder = "landing/"
bucketName = "orbit-test-base-accoun-testlakebucketfa111111-1111111111"
zipFileName = "landing/cms/DE1_0_2008_Beneficiary_Summary_File_Sample_1.zip"
targetFolder = "s3://orbit-test-base-accoun-testlakebucketfa111111-1111111111/extracted/"
use_subdirs = "True"
# Till we allow Boolean from CRD
use_subdirs_condition = True if use_subdirs == "True" else False
toExtractPath = "s3://{}/{}".format(bucketName,zipFileName)
toExtractPath
###Output
_____no_output_____
###Markdown
Copy Zip Files Locally to Handle ExtractionOnce we have defined our parameters we can copy the zip files over from our s3 bucket "ExtractPath" to a zip file located on our local environment. This will allow us to call on the shell commands to unzip our file and move it back to cloud storage in s3:
###Code
!aws s3 ls --recursive $toExtractPath
!aws s3 cp $toExtractPath ./$zipFileName
###Output
_____no_output_____
###Markdown
**Note:** Here we are just removing the filename extension so we can store the unzipped content in a directory with the same base name:
###Code
baseName = zipFileName.split(".")[0]
baseName
###Output
_____no_output_____
###Markdown
*** Extract Zip FilesNow, let's call on the **unzip Shell command** to unzip our file in our local source location and transfer the unzipped file to the target directory "baseName".We will then check that we have a valid target Folder name in s3 to move the unzipped content back to cloud storage:
###Code
!rm -fR ./$baseName
!unzip ./$zipFileName -d ./$baseName
if use_subdirs_condition:
filename = baseName.split("/")[-1]
targetFolder += filename
targetFolder
###Output
_____no_output_____
###Markdown
Move Output and Error Files to Target s3 Bucket(s)Lastly, let's use "**%%bash script magics**" to run cells with bash in a subprocess. We can copy all of the output and errors (if any) to our target folder in s3 to complete the extraction process for our zip files:
###Code
%%bash --out output --err error -s "$baseName" "$targetFolder"
echo "aws s3 cp --recursive ./$1 $2"
aws s3 cp --recursive ./$1 $2
###Output
_____no_output_____
###Markdown
*** Checking for Errors and Clean UpLet's double check that we did not run into any errors during the process of unzipping our zip files. We can check to see if any errors were logged when unzipping and assert that no errors were found if successful. Next, we can remove our two local directories holding our zipped file and our unzipped file(s) and continue building out our Data Lake with our unzipped data securely stored in s3:
###Code
print(output)
print(error)
assert "upload" in output
assert len(error) == 0
!rm -fR ./$baseName
!rm -f ./$zipFileName
###Output
_____no_output_____ |
workflow/Transform and Cache Work Plan Info.ipynb | ###Markdown
The FWS work plan species originated from a published PDF file, but then a number of things have gone on over time to assemble information and assistance that USGS can provide from across Mission Areas and Science Centers. Much of this has been put together into one core spreadsheet that we are treating here as our master source (sources/Prelisting Science USGS Master_19Mar2018.xlsx). The worksheets in the spreadsheet all contain various kinds of information that we work with elsewhere in these notebooks. The main listing we refer to is in the "FWS 7 Year Workplan Species" worksheet. It has been enhanced a bit over time with an additional field with species guilds used for organizational purposes.This notebook digests the spreadsheet a little bit to produce a data structure that is more conducive to working with in Python throughout this system.
###Code
import pandas as pd
import numpy as np
import bispy
from IPython.display import display
import json
bis_utils = bispy.bis.Utils()
import pickle
# Open up the cache of ECOS info for use
with open("../cache/ecos.json", "r") as f:
cached_ecos_data = json.loads(f.read())
f.close()
# Quick function to retrieve the ECOS Link (Search URL recorded in processing metadata) for cached ECOS scraped records
def ecos_bits(name, return_var="ECOS Link"):
ecos_scraped_record = next((r for r in cached_ecos_data if r["data"][0]["Scientific Name"] == name), None)
if ecos_scraped_record is None:
return_data = None
else:
if return_var == "ECOS Link":
return_data = ecos_scraped_record["processing_metadata"]["api"]
elif return_var == "ITIS TSN":
return_data = ecos_scraped_record["data"][0]["itis_tsn"]
return return_data
spp_ecos_links = pd.read_excel(
"../sources/AdditionalSourceData.xlsx",
sheet_name="Extracted Species ECOS Links"
)
def lookup_name(name):
return spp_ecos_links.loc[spp_ecos_links['Scientific Name'] == name, 'Lookup Name'].iloc[0]
spp_list = pd.read_excel("../sources/Prelisting Science USGS Master_19Mar2018.xlsx", sheet_name="FWS 7 Year Workplan Species", usecols="A:G")
spp_list_clean = pd.DataFrame(spp_list).replace({np.nan:None}).apply(lambda x: x.str.strip() if x.dtype == "object" else x)
spp_list_clean["Lookup Name"] = spp_list_clean.apply(lambda x: lookup_name(x["Scientific Name"]), axis=1)
spp_list_clean["ECOS Link"] = spp_list_clean.apply(lambda x: ecos_bits(x["Scientific Name"]), axis=1)
spp_list_clean["ITIS TSN"] = spp_list_clean.apply(lambda x: ecos_bits(x["Scientific Name"], "ITIS TSN"), axis=1)
spp_list = pd.DataFrame(spp_list_clean).replace({np.nan:None}).apply(lambda x: x.str.strip() if x.dtype == "object" else x)
# Cache the array of retrieved documents and return/display a random sample for verification
display(bis_utils.doc_cache("../cache/workplan_species.json", spp_list.to_dict(orient='records')))
###Output
_____no_output_____ |
cartpole/ddqn.ipynb | ###Markdown
Double Deep Q-NetworkA criticism of the deep Q-network (DQN) introduced previously is that it overestimates the values of actions. In DQN, the target is computed as:$$Y_t^{DQN} \equiv R_{t+1} + \gamma \max_a Q(S_{t+1}, a; \theta_t^-)$$van Hasselt et al. argue that the overestimation is due to the max operator which is used to both select and evaluate an action. When using tabular methods with Q-learning, the same overestimation is noticed as well. To solve the problem, Double Q-learning (DQL) was introduced. In [1], the DQN was adapted to use DQL and the resulting technique was termed Double Deep Q-Network (DDQN). In DDQN, the targets are computed as follows:$$Y_t^{DDQN} \equiv R_{t+1} + \gamma Q(S_{t+1}, \operatorname{argmax}_a Q(S_{t+1}, a; \theta_t); \theta_t^-)$$Several things have happened here. Firstly, the **online network is used to select an action** in the next state $S_{t+1}$. Secondly, the **target network is used to compute the value** of taking the selected action in that state. Take a moment to study both equations to convince yourself of the changes.A final point is that overoptimism is not always a bad thing as the DQN was still able to achieve state-of-the-art results. The use of a DDQN also does not guarantee better results. However, reducing overoptimism can benefit the stability of learning. Below I compare the performance of DQN vs DDQN. The changes are slight and are concentrated in the train() method. Everything else is left unchanged.
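To make the two targets concrete, here is a minimal sketch (not part of the original notebook) of how they could be computed for a batch of transitions with NumPy, assuming `q_next_online` and `q_next_target` hold the action values of the next states under the online and target networks:

```python
import numpy as np

def dqn_targets(rewards, dones, q_next_target, gamma):
    # the target network both selects (via max) and evaluates the action
    return rewards + gamma * (1.0 - dones) * q_next_target.max(axis=1)

def ddqn_targets(rewards, dones, q_next_online, q_next_target, gamma):
    # the online network selects the action, the target network evaluates it
    best_actions = q_next_online.argmax(axis=1)
    evaluated = q_next_target[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated
```

Here `rewards` and `dones` are 1-D float arrays and the Q-value arrays have shape `(batch, n_actions)`; the only difference between the two functions is which network picks the action.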
###Code
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import gym
import cntk
from cntk import *
from cntk.layers import *
%matplotlib inline
class ReplayBuffer:
"""
Fixed capacity buffer implemented as circular queue
Transitions are stored as (s, a, r, s', done) tuples
"""
def __init__(self, capacity):
self.samples = np.ndarray(capacity, dtype=object)
self.capacity = capacity
self.counter = 0
self.flag = False
def size(self):
if self.flag:
return self.capacity
else:
return self.counter
def add(self, sample):
self.samples[self.counter] = sample
self.counter += 1
if self.counter >= self.capacity:
self.counter = 0
self.flag = True
def sample(self, n):
n = min(n, self.size())
size = self.size()
if size < self.capacity:
return np.random.choice(self.samples[:size], n, replace=False)
else:
return np.random.choice(self.samples, n, replace=False)
class Agent:
def __init__(self, state_dim, action_dim, learning_rate):
self.state_dim = state_dim
self.action_dim = action_dim
self.learning_rate = learning_rate
self.epsilon = 1
# Create the model and set up trainer
self.state_var = input(self.state_dim, np.float32)
self.action_var = input(self.action_dim, np.float32)
self.online_model = Sequential([
Dense(64, activation=relu),
Dense(self.action_dim)
])(self.state_var)
loss = reduce_mean(square(self.online_model - self.action_var), axis=0)
lr_schedule = learning_rate_schedule(self.learning_rate, UnitType.sample)
learner = sgd(self.online_model.parameters, lr_schedule)
self.trainer = Trainer(self.online_model, loss, learner)
# Create target network and initialize with same weights
self.target_model = None
self.update_target()
def update_target(self):
"""
Updates the target network using the online network weights
"""
self.target_model = self.online_model.clone(CloneMethod.clone)
def update_epsilon(self, episode):
"""
Updates epsilon using exponential decay with the decay rate chosen such
that epsilon is 0.05 by episode 8000
"""
self.epsilon = max(math.exp(-3.74e-4 * episode), 0.05)
def predict(self, s, target=False):
"""
Feeds a state through the model (our network) and obtains the values of each action
"""
if target:
return self.target_model.eval(s)
else:
return self.online_model.eval(s)
def act(self, state):
"""
        Selects an action using the epsilon-greedy approach
"""
        prob = np.random.rand()  # draw uniformly from [0, 1) for the epsilon-greedy threshold
if prob > self.epsilon:
# exploit (greedy)
return np.argmax(self.predict(state))
else:
# explore (random action)
return np.random.randint(0, self.action_dim)
def train(self, x, y):
"""
Performs a single gradient descent step using the provided states and targets
"""
self.trainer.train_minibatch({self.state_var: x, self.action_var: y})
def evaluate(self, env, n):
"""
Computes the average performance of the trained model over n episodes
"""
episode = 0
rewards = 0
while episode < n:
s = env.reset()
done = False
while not done:
a = np.argmax(self.predict(s.astype(np.float32)))
s, r, done, info = env.step(a)
rewards += r
episode += 1
return rewards / float(n)
def initialize_buffer(env, buffer):
"""
Initializes the replay buffer using experiences generated by taking random actions
"""
actions = env.action_space.n
s = env.reset()
while buffer.size() < buffer.capacity:
a = np.random.randint(0, actions)
s_, r, done, info = env.step(a)
buffer.add((s, a, r, s_, done))
if done:
s = env.reset()
else:
s = s_
###Output
_____no_output_____
###Markdown
Notice the change in how we compute the targets for DDQN here. We use the online network to get the values of actions in the next state and then select the one with the highest value. We then use the target network to compute the value of selecting that action. Slight change, but large difference.
###Code
def train(env, agent, buffer, episodes, gamma, minibatch_size, update_freq, ddqn=False):
"""
param env: The gym environment to train with
param agent: The agent to train
param buffer: The replay buffer to sample experiences from
param episodes: The number of episodes to train for
param gamma: The discount factor
param minibatch_size: The number of transitions to sample for
param update_freq: The frequency at which to update the target network
param ddqn: If true, uses DDQN expression to compute targets
"""
episode = 0
rewards = 0
log_freq = 200
episode_rewards = []
s = env.reset().astype(np.float32)
while episode < episodes:
# Select an action using policy derived from Q (e-greedy)
a = agent.act(s)
# Take action and observe the next state and reward
s_, r, done, info = env.step(a)
s_ = s_.astype(np.float32)
# Store transition in replay buffer
buffer.add((s, a, r, s_, done))
s = s_
rewards += r
# Sample random transitions from replay buffer
batch = buffer.sample(minibatch_size)
# Compute targets, y_i
states = np.array([obs[0] for obs in batch], dtype=np.float32)
states_ = np.array([obs[3] for obs in batch], dtype=np.float32)
y = agent.predict(states)
if not ddqn:
q_next = agent.predict(states_, target=True)
for i in range(minibatch_size):
p, a, r, p_, d = batch[i]
if d:
y[i, a] = r
else:
y[i, a] = r + gamma * np.amax(q_next[i])
else:
q_next = agent.predict(states_)
q_next_target = agent.predict(states_, target=True)
for i in range(minibatch_size):
p, a, r, p_, d = batch[i]
if d:
y[i, a] = r
else:
y[i, a] = r + gamma * q_next_target[i][np.argmax(q_next[i])]
# Train using state and computed target
agent.train(states, y)
if done:
# Episode over, reset environment
episode_rewards.append(rewards)
rewards = 0
episode += 1
agent.update_epsilon(episode)
s = env.reset().astype(np.float32)
if episode % log_freq == 0:
ave = sum(episode_rewards[(episode - log_freq):]) / float(log_freq)
print('Episode = {}, Average rewards = {}'.format(episode, ave))
if episode % update_freq == 0:
agent.update_target()
return episode_rewards
gamma = 0.60
learning_rate = 0.00025
episodes = 10000
buffer_capacity = 32
minibatch_size = 8
update_freq = 500
env = gym.make('CartPole-v0')
state_dim = env.observation_space.shape
action_dim = env.action_space.n
buffer1 = ReplayBuffer(buffer_capacity)
agent1 = Agent(state_dim, action_dim, learning_rate)
buffer2 = ReplayBuffer(buffer_capacity)
agent2 = Agent(state_dim, action_dim, learning_rate)
initialize_buffer(env, buffer1)
rewards1 = train(env, agent1, buffer1, episodes, gamma, minibatch_size, update_freq)
initialize_buffer(env, buffer2)
rewards2 = train(env, agent2, buffer2, episodes, gamma, minibatch_size, update_freq, ddqn=True)
pd.Series(rewards1).rolling(window=100).mean().plot(label='dqn')
pd.Series(rewards2).rolling(window=100).mean().plot(label='ddqn')
plt.legend()
plt.show()
eval_episodes = 200
ave1 = agent1.evaluate(env, eval_episodes)
ave2 = agent2.evaluate(env, eval_episodes)
print('DQN Average performance = {}'.format(ave1))
print('DDQN Average performance = {}'.format(ave2))
###Output
DQN Average performance = 196.9
DDQN Average performance = 180.35
###Markdown
The chart above and the average-performance results confirm that the use of a DDQN doesn't always improve performance. You may notice, though, that for a majority of the chart the average reward for DDQN was higher than that of DQN. CartPole is a relatively simple problem, and the benefits of a DDQN will most likely be more apparent on more complex problems. To draw a firmer conclusion, we could run the experiment several times and average the results of both.Once again, you can try out your own experiments by modifying the different parameters and noting the performance of both algorithms.
###Code
agent2.online_model.save_model('cart_pole.ddqn')
# Load saved model and evaluate
model = load_model('cart_pole.ddqn')
s = env.reset()
done = False
while not done:
env.render()
a = np.argmax(model.eval(s.astype(np.float32)))
s, r, done, info = env.step(a)
env.close()
###Output
_____no_output_____ |
Alphabet_Soup_Predictor.ipynb | ###Markdown
Preprocessing
###Code
# Import our dependencies.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, LabelEncoder
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("../Resources/charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
# YOUR CODE GOES HERE
application_df = application_df.drop(columns=['EIN', 'NAME'])
application_df.head()
# Determine the number of unique values in each column.
# YOUR CODE GOES HERE
application_df.nunique()
# Look at APPLICATION_TYPE value counts for binning
# YOUR CODE GOES HERE
app_counts = application_df['APPLICATION_TYPE'].value_counts()
app_counts
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
# YOUR CODE GOES HERE
application_types_to_replace = list(app_counts[app_counts < 500].index)
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
# Look at CLASSIFICATION value counts for binning
# YOUR CODE GOES HERE
class_counts= application_df['CLASSIFICATION'].value_counts()
class_counts
# You may find it helpful to look at CLASSIFICATION value counts >1
# YOUR CODE GOES HERE
class_counts[class_counts>1]
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
# YOUR CODE GOES HERE
classifications_to_replace = list(class_counts[class_counts < 1883].index)
# Replace in dataframe
for cls in classifications_to_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
# Convert categorical data to numeric with `pd.get_dummies`
# YOUR CODE GOES HERE
application_df['SPECIAL_CONSIDERATIONS'] = LabelEncoder().fit_transform(application_df['SPECIAL_CONSIDERATIONS'])
application_df = pd.get_dummies(application_df)
application_df.head()
# Split our preprocessed data into our features and target arrays
# YOUR CODE GOES HERE
y = application_df['IS_SUCCESSFUL'].values
X = application_df.drop(columns=['IS_SUCCESSFUL']).values
# Split training/test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
nn_model = tf.keras.models.Sequential()
# First hidden layer.
nn_model.add(tf.keras.layers.Dense(units=8, activation="relu", input_dim=42))
# Second hidden layer.
nn_model.add(tf.keras.layers.Dense(units=3, activation="relu"))
# Output layer.
nn_model.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Check the structure of the model.
nn_model.summary()
# Compile the model.
nn_model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model.
fit_model = nn_model.fit(X_train_scaled, y_train, epochs=100)
# Evaluate the model using the test data.
model_loss, model_accuracy = nn_model.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file.
nn_model.save('AlphabetSoupCharity.h5')
# Create a DataFrame containing training history.
history_df = pd.DataFrame(fit_model.history)
# Increase the index by 1 to match the number of epochs.
history_df.index += 1
# Plot the loss.
history_df.plot(y="loss");
# Plot the accuracy.
history_df.plot(y="accuracy");
###Output
_____no_output_____ |
homeworks/homework01_word_vectors/homework01_texts.ipynb | ###Markdown
Homework 01. Simple text processing.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from IPython import display
%load_ext autoreload
%autoreload 2
# ignore EpochDepricationWarning on scheduler.step()
import warnings
warnings.filterwarnings('ignore')
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Toxic or notYour main goal in this assignment is to classify whether the comments are toxic or not, and to practice with both classical approaches and PyTorch in the process.*Credits: This homework is inspired by YSDA NLP_course.**Disclaimer: The used dataset may contain obscene language and is used only as an example of real unfiltered data.*
###Code
# In colab uncomment this cell
# ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/homeworks/homework01/utils.py -nc
try:
data = pd.read_csv('../../datasets/comments_small_dataset/comments.tsv', sep='\t')
except FileNotFoundError:
! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/comments_small_dataset/comments.tsv -nc
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
###Output
_____no_output_____
###Markdown
__Note:__ it is generally a good idea to split data into train/test before anything is done to them.It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat evaluation. Preprocessing and tokenizationComments contain raw text with punctuation, upper/lowercase letters and even newline symbols.To simplify all further steps, we'll split text into space-separated tokens using one of the nltk tokenizers.Generally, the library `nltk` [link](https://www.nltk.org) is widely used in NLP. It is not necessary here, but mentioned to introduce it to you.
###Code
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "I don\'t want to do that" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = list(map(preprocess, texts_train))
texts_test = list(map(preprocess, texts_test))
# Small check that everything is done properly
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
###Output
_____no_output_____
###Markdown
Step 1: bag of wordsOne traditional approach to such a problem is to use bag of words features:1. build a vocabulary of frequent words (use train data only)2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).3. consider this count a feature for some classifier__Note:__ in practice, you can compute such features using sklearn. __Please don't do that in the current assignment, though.__* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
###Code
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurences (highest first)
from collections import Counter
k = min(10000, len(set(' '.join(texts_train).split())))
counter = Counter()
for sentence in texts_train:
counter.update(sentence.split())
bow_vocabulary = [word for word, _ in counter.most_common(k)]
print('example features:', sorted(bow_vocabulary)[::100])
print('vocabulary size: ', k)
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
vec = np.zeros(k)
for word in text.split():
if word in bow_vocabulary:
vec[bow_vocabulary.index(word)] += 1
return np.array(vec, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
# Small check that everything is done properly
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
###Output
_____no_output_____
###Markdown
Now let's do the trick with `sklearn` logistic regression implementation:
###Code
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
###Output
_____no_output_____
###Markdown
Seems alright. Now let's create the simple logistic regression using PyTorch. Just like in the classwork.
###Code
import torch
from torch import nn
from torch.nn import functional as F
from torch.optim.lr_scheduler import StepLR, ReduceLROnPlateau
from sklearn.metrics import accuracy_score
from utils import plot_train_process
model = nn.Sequential()
model.add_module(
'l1',
nn.Linear(k, 2)
)
###Output
_____no_output_____
###Markdown
Remember what we discussed about loss functions! `nn.CrossEntropyLoss` combines both log-softmax and `NLLLoss`.__Be careful with it! Criterion `nn.CrossEntropyLoss` will still work with log-softmax output, but it won't allow you to converge to the optimum.__ Next comes a small demonstration:
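As a quick sanity check (a small sketch added here, not the original demonstration), `nn.CrossEntropyLoss` applied to raw logits matches `nn.NLLLoss` applied to log-softmax output, while feeding log-softmax output into `nn.CrossEntropyLoss` effectively applies softmax twice and produces a different loss:

```python
import torch
from torch import nn
from torch.nn import functional as F

logits = torch.randn(4, 2)          # raw scores from a linear layer
labels = torch.tensor([0, 1, 1, 0])

ce, nll = nn.CrossEntropyLoss(), nn.NLLLoss()

loss_ce = ce(logits, labels)                          # expects logits
loss_nll = nll(F.log_softmax(logits, dim=1), labels)  # expects log-probabilities
print(torch.isclose(loss_ce, loss_nll))               # tensor(True)

# double softmax: still runs, but it is not the loss you want to minimize
print(ce(F.log_softmax(logits, dim=1), labels).item(), loss_ce.item())
```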
###Code
# loss_function = nn.NLLLoss()
loss_function = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())
lr_scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.1)
X_train_bow_torch = torch.Tensor(X_train_bow)
X_test_bow_torch = torch.Tensor(X_test_bow)
y_train_torch = torch.Tensor(y_train).type(torch.LongTensor)
y_test_torch = torch.Tensor(y_test).type(torch.LongTensor)
y_train_torch[:3]
###Output
_____no_output_____
###Markdown
Let's test that everything is fine
###Code
# example loss
loss = loss_function(model(X_train_bow_torch[:3]), y_train_torch[:3])
loss.item()
assert type(loss.item()) == float
###Output
_____no_output_____
###Markdown
Here comes small function to train the model. In future we will take in into separate file, but for this homework it's ok to implement it here.
###Code
def train_model(
model,
opt,
lr_scheduler,
X_train_torch,
y_train_torch,
X_val_torch,
y_val_torch,
n_iterations=500,
batch_size=32,
warm_start=False,
show_plots=True,
eval_every=10
):
if not warm_start:
for name, module in model.named_children():
print('resetting ', name)
try:
module.reset_parameters()
except AttributeError as e:
print('Cannot reset {} module parameters: {}'.format(name, e))
train_loss_history = []
train_acc_history = []
val_loss_history = []
val_acc_history = []
local_train_loss_history = []
local_train_acc_history = []
for i in range(n_iterations):
# sample 256 random observations
ix = np.random.randint(0, len(X_train_torch), batch_size)
x_batch = X_train_torch[ix]
y_batch = y_train_torch[ix]
# predict log-probabilities or logits
y_predicted = model(x_batch)
# compute loss, just like before
loss = loss_function(y_predicted, y_batch)
# compute gradients
loss.backward()
# Adam step
opt.step()
# clear gradients
opt.zero_grad()
local_train_loss_history.append(loss.data.numpy())
local_train_acc_history.append(
accuracy_score(
y_batch.to('cpu').detach().numpy(),
y_predicted.to('cpu').detach().numpy().argmax(axis=1)
)
)
if i % eval_every == 0:
train_loss_history.append(np.mean(local_train_loss_history))
train_acc_history.append(np.mean(local_train_acc_history))
local_train_loss_history, local_train_acc_history = [], []
predictions_val = model(X_val_torch)
val_loss_history.append(loss_function(predictions_val, y_val_torch).to('cpu').detach().item())
acc_score_val = accuracy_score(y_val_torch.cpu().numpy(), predictions_val.to('cpu').detach().numpy().argmax(axis=1))
val_acc_history.append(acc_score_val)
lr_scheduler.step(train_loss_history[-1])
if show_plots:
display.clear_output(wait=True)
plot_train_process(train_loss_history, val_loss_history, train_acc_history, val_acc_history)
return model
###Output
_____no_output_____
###Markdown
Let's run it on the data. Note, that here we use the `test` part of the data for validation. It's not so good idea in general, but in this task our main goal is practice.
###Code
train_model(model, opt, lr_scheduler, X_train_bow_torch, y_train_torch, X_test_bow_torch, y_test_torch)
from utils import plot_roc
plot_roc(model, (X_train_bow_torch, y_train), (X_test_bow_torch, y_test), nn=True, title='BoW')
###Output
_____no_output_____
###Markdown
Try to vary the number of tokens `k` and check how the model performance changes. Show it on a plot.
###Code
for k in range(500, 5500, 1000):
counter = Counter()
for sentence in texts_train:
counter.update(sentence.split())
bow_vocabulary = [word for word, _ in counter.most_common(k)]
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
X_train_bow_torch = torch.Tensor(X_train_bow)
X_test_bow_torch = torch.Tensor(X_test_bow)
y_train_torch = torch.Tensor(y_train).type(torch.LongTensor)
y_test_torch = torch.Tensor(y_test).type(torch.LongTensor)
model = nn.Sequential()
model.add_module(
'l1',
nn.Linear(k, 2)
)
opt = torch.optim.Adam(model.parameters())
lr_scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.1)
loss_function = nn.CrossEntropyLoss()
train_model(
model,
opt,
lr_scheduler,
X_train_bow_torch,
y_train_torch,
X_test_bow_torch,
y_test_torch,
show_plots=False
)
plot_roc(
model,
(X_train_bow_torch, y_train),
(X_test_bow_torch, y_test),
nn=True,
title=f'k={k}'
)
plt.show()
###Output
resetting l1
###Markdown
From this experiment we can see that:1. Choosing the number of words that we keep in the BoW vocabulary matters: it affects the quality of training.2. The dependence is not linear, and the differences are sometimes insignificant. So, if we are not chasing the very best quality, the number of words can be tuned with a GridSearch over a coarse grid. Step 2: implement TF-IDF featuresNot all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __term frequency/inverse document frequency__ and means exactly that:$$ feature_i = { Count(word_i \in x) \times { \log {N \over Count(word_i \in D) + \alpha} }}, $$where x is a single text, D is your dataset (a collection of texts), N is the total number of documents and $\alpha$ is a smoothing hyperparameter (typically 1). And $Count(word_i \in D)$ is the number of documents where $word_i$ appears.It may also be a good idea to normalize each data sample after computing tf-idf features.__Your task:__ implement tf-idf features, train a model and evaluate ROC curve. Compare it with basic BagOfWords model from above.__Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :)__ You can still use 'em for debugging though. Blog post about implementing the TF-IDF features from scratch: https://triton.ml/blog/tf-idf-from-scratch
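Before implementing this over the whole vocabulary, here is a tiny worked example of the formula above on a toy corpus (a sketch for illustration only; the toy names below are not used anywhere else):

```python
import numpy as np

docs = ["the cat sat on the mat",
        "the dog sat",
        "cats and dogs"]
alpha = 1
N = len(docs)                                      # total number of documents

word, x = "sat", docs[0].split()
count_in_x = x.count(word)                         # Count(word_i in x)  -> 1
count_in_D = sum(word in d.split() for d in docs)  # Count(word_i in D)  -> 2
feature = count_in_x * np.log(N / (count_in_D + alpha))
print(feature)                                     # 1 * log(3 / 3) = 0.0
```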
###Code
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurences (highest first)
from collections import Counter
k = min(10000, len(set(' '.join(texts_train).split())))
counter = Counter()
for sentence in texts_train:
counter.update(sentence.split())
vocabulary = [word for word, _ in counter.most_common(k)]
print('example features:', sorted(vocabulary)[::100])
def computeTFDict(text):
count = len(text.split())
TFDict = Counter(text.split())
for word in TFDict:
TFDict[word] /= count
return TFDict
def computeCountDict(tf):
counter = Counter()
for _dict in tf:
counter.update(_dict.keys())
return counter
def computeIDFDict(counter, doc_number):
idfDict = {}
for word in counter:
idfDict[word] = np.log(doc_number / counter[word])
return idfDict
def computeTFIDFDict(tf_dict, idf_dict):
reviewTFIDFDict = {}
for word in tf_dict:
reviewTFIDFDict[word] = tf_dict[word] * idf_dict[word]
return reviewTFIDFDict
def computeTFIDFVector(text, k, words):
tfidfVector = [0.0] * k
for i, word in enumerate(words):
if word in text:
tfidfVector[i] = text[word]
return tfidfVector
def tfidf_vectorize(texts):
tf = [computeTFDict(text) for text in texts]
counter = computeCountDict(tf)
idf = computeIDFDict(counter, len(texts))
tfidf = [computeTFIDFDict(tf, idf) for tf in tf]
return [computeTFIDFVector(text, k, vocabulary) for text in tfidf]
train_tfidfVector = tfidf_vectorize(texts_train)
test_tfidfVector = tfidf_vectorize(texts_test)
###Output
_____no_output_____
###Markdown
Same stuff about model and optimizers here (or just omit it, if you are using the same model as before).
###Code
model = nn.Sequential()
model.add_module(
'l1',
nn.Linear(k, 2)
)
opt = torch.optim.Adam(model.parameters())
lr_scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.1)
loss_function = nn.CrossEntropyLoss()
X_train_tfidf_torch = torch.Tensor(train_tfidfVector)
X_test_tfidf_torch = torch.Tensor(test_tfidfVector)
y_train_torch = torch.Tensor(y_train).type(torch.LongTensor)
y_test_torch = torch.Tensor(y_test).type(torch.LongTensor)
train_model(
model,
opt,
lr_scheduler,
X_train_tfidf_torch,
y_train_torch,
X_test_tfidf_torch,
y_test_torch,
show_plots=False
)
plot_roc(
model,
(X_train_tfidf_torch, y_train),
(X_test_tfidf_torch, y_test),
nn=True,
title=f'TFIDF'
)
###Output
resetting l1
###Markdown
We see an improvement in quality compared to BoW. Fit your model to the data. Do not hesitate to vary the number of iterations, learning rate and so on._Note: due to the very small dataset, increasing the complexity of the network might not be the best idea._ Step 3: Comparing it with Naive BayesNaive Bayes classifier is a good choice for such small problems. Try to tune it for both BOW and TF-iDF features. Compare the results with Logistic Regression.
###Code
from sklearn.naive_bayes import MultinomialNB
naive = MultinomialNB().fit(X_train_bow, y_train)
plot_roc(
naive,
(X_train_bow, y_train),
(X_test_bow, y_test),
nn=False,
title=f'NB BoW'
)
naive = MultinomialNB().fit(train_tfidfVector, y_train)
plot_roc(
naive,
(train_tfidfVector, y_train),
(test_tfidfVector, y_test),
nn=False,
title=f'NB TFIDF'
)
###Output
_____no_output_____
###Markdown
Shape some thoughts on the results you acquired. Which model has shown the best performance? Did changing the learning rate/lr scheduler help? The best result was shown by Naive Bayes on the TF-IDF vectorization; given the number of texts, this was to be expected. I did not play with the learning rate/scheduler, as I do not see much point in it. Step 4: Using the external knowledge.Use the `gensim` word2vec pretrained model to translate words into vectors. Use several models with this new encoding technique. Compare the results, share your thoughts.
###Code
import gensim.downloader as api
info = api.info()
gensim_model = api.load("glove-twitter-50")
train_emb = np.zeros((500, 50))
for index, text in enumerate(texts_train):
tmp = []
for word in text.split():
if word in gensim_model.vocab.keys():
tmp.append(gensim_model.wv[word])
train_emb[index] = np.array(tmp).mean(axis=0)
test_emb = np.zeros((500, 50))
for index, text in enumerate(texts_test):
tmp = []
for word in text.split():
if word in gensim_model.vocab.keys():
tmp.append(gensim_model.wv[word])
test_emb[index] = np.array(tmp).mean(axis=0)
model = nn.Sequential()
model.add_module(
'l1',
nn.Linear(50, 2)
)
opt = torch.optim.Adam(model.parameters())
lr_scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.1)
loss_function = nn.CrossEntropyLoss()
X_train_emb_torch = torch.Tensor(train_emb)
X_test_emb_torch = torch.Tensor(test_emb)
y_train_torch = torch.Tensor(y_train).type(torch.LongTensor)
y_test_torch = torch.Tensor(y_test).type(torch.LongTensor)
train_model(
model,
opt,
lr_scheduler,
X_train_emb_torch,
y_train_torch,
X_test_emb_torch,
y_test_torch,
show_plots=True,
n_iterations=1000
)
plot_roc(
model,
(X_train_emb_torch, y_train),
(X_test_emb_torch, y_test),
nn=True,
title=f'glove-twitter-50'
)
###Output
_____no_output_____
###Markdown
Homework 01. Simple text processing.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from IPython import display
###Output
_____no_output_____
###Markdown
Toxic or notYour main goal in this assignment is to classify whether the comments are toxic or not, and to practice with both classical approaches and PyTorch in the process.*Credits: This homework is inspired by YSDA NLP_course.**Disclaimer: The used dataset may contain obscene language and is used only as an example of real unfiltered data.*
###Code
# In colab uncomment this cell
# ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/homeworks/homework01/utils.py -nc
try:
data = pd.read_csv('../../datasets/comments_small_dataset/comments.tsv', sep='\t')
except FileNotFoundError:
! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/comments_small_dataset/comments.tsv -nc
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
###Output
_____no_output_____
###Markdown
__Note:__ it is generally a good idea to split data into train/test before anything is done to them.It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat evaluation. Preprocessing and tokenizationComments contain raw text with punctuation, upper/lowercase letters and even newline symbols.To simplify all further steps, we'll split text into space-separated tokens using one of the nltk tokenizers.Generally, the library `nltk` [link](https://www.nltk.org) is widely used in NLP. It is not necessary here, but mentioned to introduce it to you.
###Code
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "I don\'t want to do that" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = np.asarray([preprocess(x) for x in texts_train])
texts_test = np.asarray([preprocess(x) for x in texts_test])
# Small check that everything is done properly
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
###Output
_____no_output_____
###Markdown
Step 1: bag of wordsOne traditional approach to such a problem is to use bag of words features:1. build a vocabulary of frequent words (use train data only)2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).3. consider this count a feature for some classifier__Note:__ in practice, you can compute such features using sklearn. __Please don't do that in the current assignment, though.__* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
###Code
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurences (highest first)
k = min(10000, len(set(' '.join(texts_train).split())))
from collections import Counter
counter = Counter()
for sample in texts_train:
for word in sample.split():
counter[word] += 1
bow_vocabulary = [word for word, _ in counter.most_common(k)]
print('example features:', sorted(bow_vocabulary)[::100])
len(bow_vocabulary)
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
vec = np.zeros(len(bow_vocabulary))
for word in text.split():
if word in bow_vocabulary:
vec[bow_vocabulary.index(word)] += 1
return np.array(vec, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
# Small check that everything is done properly
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
###Output
_____no_output_____
###Markdown
Now let's do the trick with `sklearn` logistic regression implementation:
###Code
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
###Output
_____no_output_____
###Markdown
Seems alright. Now let's create the simple logistic regression using PyTorch. Just like in the classwork.
###Code
import torch
from torch import nn
from torch.nn import functional as F
from torch.optim.lr_scheduler import StepLR, ReduceLROnPlateau
from sklearn.metrics import accuracy_score
from utils import plot_train_process
model = nn.Sequential()
model.add_module('l1',
nn.Linear(k, 2))
###Output
_____no_output_____
###Markdown
Remember what we discussed about loss functions! `nn.CrossEntropyLoss` combines both log-softmax and `NLLLoss`.__Be careful with it! Criterion `nn.CrossEntropyLoss` will still work with log-softmax output, but it won't allow you to converge to the optimum.__ Next comes a small demonstration:
###Code
# loss_function = nn.NLLLoss()
loss_function = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())
lr_scheduler = StepLR(opt, step_size=5)
X_train_bow_torch = torch.Tensor(X_train_bow)
X_test_bow_torch = torch.Tensor(X_test_bow)
y_train_torch = torch.Tensor(y_train).type(torch.LongTensor)
y_test_torch = torch.Tensor(y_test).type(torch.LongTensor)
###Output
_____no_output_____
###Markdown
Let's test that everything is fine
###Code
model(X_train_bow_torch[:3])
# example loss
loss = loss_function(model(X_train_bow_torch[:3]), y_train_torch[:3])
assert type(loss.item()) == float
###Output
_____no_output_____
###Markdown
Here comes small function to train the model. In future we will take in into separate file, but for this homework it's ok to implement it here.
###Code
def train_model(
model,
opt,
lr_scheduler,
X_train_torch,
y_train_torch,
X_val_torch,
y_val_torch,
n_iterations=500,
batch_size=32,
warm_start=False,
show_plots=True,
eval_every=10
):
if not warm_start:
for name, module in model.named_children():
print('resetting ', name)
try:
module.reset_parameters()
except AttributeError as e:
print('Cannot reset {} module parameters: {}'.format(name, e))
train_loss_history = []
train_acc_history = []
val_loss_history = []
val_acc_history = []
local_train_loss_history = []
local_train_acc_history = []
for i in range(n_iterations):
# sample 256 random observations
ix = np.random.randint(0, len(X_train_torch), batch_size)
x_batch = X_train_torch[ix]
y_batch = y_train_torch[ix]
# predict log-probabilities or logits
y_predicted = model(x_batch)
# compute loss, just like before
loss = loss_function(y_predicted, y_batch)
# compute gradients
loss.backward()
# Adam step
opt.step()
# clear gradients
opt.zero_grad()
local_train_loss_history.append(loss.data.numpy())
local_train_acc_history.append(
accuracy_score(
y_batch.to('cpu').detach().numpy(),
y_predicted.to('cpu').detach().numpy().argmax(axis=1)
)
)
if i % eval_every == 0:
train_loss_history.append(np.mean(local_train_loss_history))
train_acc_history.append(np.mean(local_train_acc_history))
local_train_loss_history, local_train_acc_history = [], []
predictions_val = model(X_val_torch)
val_loss_history.append(loss_function(predictions_val, y_val_torch).to('cpu').detach().item())
acc_score_val = accuracy_score(y_val_torch.cpu().numpy(), predictions_val.to('cpu').detach().numpy().argmax(axis=1))
val_acc_history.append(acc_score_val)
lr_scheduler.step(train_loss_history[-1])
if show_plots:
display.clear_output(wait=True)
plot_train_process(train_loss_history, val_loss_history, train_acc_history, val_acc_history)
return model
###Output
_____no_output_____
###Markdown
Let's run it on the data. Note, that here we use the `test` part of the data for validation. It's not so good idea in general, but in this task our main goal is practice.
###Code
train_model(model, opt, lr_scheduler, X_train_bow_torch, y_train_torch, X_test_bow_torch, y_test_torch)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow_torch, y_train, model),
('test ', X_test_bow_torch, y_test, model)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
###Output
_____no_output_____
###Markdown
Try to vary the number of tokens `k` and check how the model performance changes. Show it on a plot.
###Code
for k in range(2000, 7001, 1000):
bow_vocabulary = [word for word, _ in counter.most_common(k)]
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
bow_model = LogisticRegression().fit(X_train_bow, y_train)
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.title(k)
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: implement TF-IDF featuresNot all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __term frequency / inverse document frequency__ and means exactly that:$$ feature_i = { Count(word_i \in x) \times { log {N \over Count(word_i \in D) + \alpha} }}, $$where x is a single text, D is your dataset (a collection of texts), N is the total number of documents and $\alpha$ is a smoothing hyperparameter (typically 1). And $Count(word_i \in D)$ is the number of documents where $word_i$ appears. It may also be a good idea to normalize each data sample after computing tf-idf features.__Your task:__ implement tf-idf features, train a model and evaluate the ROC curve. Compare it with the basic BagOfWords model from above.__Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :)__ You can still use 'em for debugging though. Blog post about implementing the TF-IDF features from scratch: https://triton.ml/blog/tf-idf-from-scratch
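A compact reference sketch of this exact formula (for illustration only; `docs` and `vocab` are placeholder names, and the from-scratch implementation actually used below follows in the next cells):
###Code
# Reference sketch of the tf-idf formula above: `docs` is a list of token lists, `vocab` a list of words.
from collections import Counter
import numpy as np

def tfidf_features(docs, vocab, alpha=1):
    n_docs = len(docs)
    doc_sets = [set(d) for d in docs]
    doc_freq = np.array([sum(1 for s in doc_sets if w in s) for w in vocab], dtype='float32')
    idf = np.log(n_docs / (doc_freq + alpha))
    rows = []
    for d in docs:
        counts = Counter(d)
        tf = np.array([counts[w] for w in vocab], dtype='float32')
        rows.append(tf * idf)
    return np.stack(rows)
###Output
_____no_output_____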
###Code
k = 5000
counter = Counter()
for sample in texts_train:
for word in sample.split():
counter[word] += 1
bow_vocabulary = [word for word, _ in counter.most_common(k)]
def get_doc_count(bow_vocabulary, texts):
"""Returns a dictionary that stores uniquie words of dataset and document it occured"""
splitted_texts = [x.split() for x in texts]
doc_count = Counter()
for word in bow_vocabulary:
for splitted_text in splitted_texts:
if word in splitted_text:
doc_count[word] += 1
return doc_count
doc_count = get_doc_count(bow_vocabulary, texts)
def compute_review_tf_dict(review):
""" Returns a tf dictionary for each review whose keys are all
the unique words in the review and whose values are their
corresponding tf.
"""
# Counts the number of times the word appears in review
counter = Counter()
counter.update(review.split())
    # Computes tf for each word: its count divided by the number of tokens in the review
    n_tokens = len(review.split())
    for word in counter:
        counter[word] = counter[word] / n_tokens
return counter
def tf_idf_preproccess(sample, alpha=1):
    """Return the tf-idf feature vector for a single input sample"""
    vector = [0.0] * len(doc_count)
    tfDict = compute_review_tf_dict(sample)
    wordDict = sorted(doc_count.keys())
    word_index = {word: i for i, word in enumerate(wordDict)}
    n_documents = len(texts)  # N in the formula above: doc_count was computed over `texts`
    for word in tfDict:
        if word in word_index:
            vector[word_index[word]] = tfDict[word] * np.log(n_documents / (alpha + doc_count[word]))
    return vector
model = nn.Sequential()
model.add_module('l1',
nn.Linear(len(doc_count), 2))
loss_function = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())
lr_scheduler = StepLR(opt, step_size=5)
X_train_tfidf = [tf_idf_preproccess(x) for x in texts_train]
X_test_tfidf = [tf_idf_preproccess(x) for x in texts_test]
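# Optional, as suggested in the task description: normalizing each tf-idf sample may help, e.g. (sketch)
# X_train_tfidf = [np.array(v) / (np.linalg.norm(v) + 1e-12) for v in X_train_tfidf]
# X_test_tfidf = [np.array(v) / (np.linalg.norm(v) + 1e-12) for v in X_test_tfidf]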
X_train_tfidf_torch = torch.Tensor(X_train_tfidf)
X_test_tfidf_torch = torch.Tensor(X_test_tfidf)
y_train_torch = torch.Tensor(y_train).type(torch.LongTensor)
y_test_torch = torch.Tensor(y_test).type(torch.LongTensor)
train_model(
model,
opt,
lr_scheduler,
X_train_tfidf_torch,
y_train_torch,
X_test_tfidf_torch,
y_test_torch,
show_plots=False
)
for name, X, y, model in [
('train', X_train_tfidf_torch, y_train, model),
('test ', X_test_tfidf_torch, y_test, model)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.title(f'k={len(doc_count)}')
plt.legend(fontsize='large')
plt.grid()
plt.show()
###Output
resetting l1
###Markdown
Fit your model to the data. Do not hesitate to vary the number of iterations, learning rate and so on._Note: due to the very small dataset, increasing the complexity of the network might not be the best idea._ Conclusion: as expected, tf-idf performed better. Step 3: Comparing it with Naive Bayes. The Naive Bayes classifier is a good choice for such small problems. Try to tune it for both BOW and TF-IDF features. Compare the results with Logistic Regression.
###Code
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_bow, y_train)
for name, X, y, model in [
('train', X_train_bow, y_train, clf),
('test ', X_test_bow, y_test, clf)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.title("NB on bow")
plt.grid()
clf = MultinomialNB().fit(X_train_tfidf, y_train)
for name, X, y, model in [
('train', X_train_tfidf, y_train, clf),
('test ', X_test_tfidf, y_test, clf)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.title("NB on tfidf")
plt.grid()
###Output
_____no_output_____
###Markdown
Share some thoughts on the results you acquired. Which model has shown the best performance? Did changing the learning rate/lr scheduler help? Naive Bayes with tf-idf performed better on the test set than logistic regression; an lr scheduler is unlikely to save the situation. Step 4: Using external knowledge. Use the `gensim` word2vec pretrained model to translate words into vectors. Use several models with this new encoding technique. Compare the results, share your thoughts.
###Code
import gensim.downloader as api
twitter = api.load("glove-twitter-50")
texts_train_gensim = []
for sample in texts_train:
vec = []
for word in sample.split():
if word in twitter.vocab:
vec.append(twitter.get_vector(word))
texts_train_gensim.append(np.array(vec).mean(axis=0))
texts_test_gensim = []
for sample in texts_test:
vec = []
for word in sample.split():
if word in twitter.vocab:
vec.append(twitter.get_vector(word))
texts_test_gensim.append(np.array(vec).mean(axis=0))
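# Note (sketch): if a comment has no tokens present in the embedding vocabulary,
# np.array(vec).mean(axis=0) yields NaN; a zero-vector fallback would look like
# texts_train_gensim = [v if np.ndim(v) == 1 else np.zeros(50, dtype='float32') for v in texts_train_gensim]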
gensim_log_model = LogisticRegression().fit(texts_train_gensim, y_train)
for name, X, y, model in [
('train', texts_train_gensim, y_train, gensim_log_model),
('test ', texts_test_gensim, y_test, gensim_log_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.title("logreg with gensim")
plt.grid()
X_train_gensim_torch = torch.Tensor(texts_train_gensim)
X_test_gensim_torch = torch.Tensor(texts_test_gensim)
y_train_torch = torch.Tensor(y_train).type(torch.LongTensor)
y_test_torch = torch.Tensor(y_test).type(torch.LongTensor)
model = nn.Sequential()
model.add_module('l1',
nn.Linear(50, 2))
loss_function = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())
lr_scheduler = StepLR(opt, step_size=5)
train_model(
model,
opt,
lr_scheduler,
X_train_gensim_torch,
y_train_torch,
X_test_gensim_torch,
y_test_torch,
show_plots=True,
n_iterations=1000
)
for name, X, y, model in [
('train', X_train_gensim_torch, y_train, model),
('test ', X_test_gensim_torch, y_test, model)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.title("torch logreg with gensim")
plt.legend(fontsize='large')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Homework 01. Simple text processing.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from IPython import display
###Output
_____no_output_____
###Markdown
Toxic or notYour main goal in this assignment is to classify whether the comments are toxic or not, and to practice both classical approaches and PyTorch in the process.*Credits: This homework is inspired by YSDA NLP_course.**Disclaimer: The used dataset may contain obscene language and is used only as an example of real unfiltered data.*
###Code
# In colab uncomment this cell
# ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/homeworks/homework01/utils.py -nc
try:
data = pd.read_csv('../../datasets/comments_small_dataset/comments.tsv', sep='\t')
except FileNotFoundError:
! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/comments_small_dataset/comments.tsv -nc
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
###Output
_____no_output_____
###Markdown
__Note:__ it is generally a good idea to split data into train/test before anything is done to them. It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat evaluation. Preprocessing and tokenizationComments contain raw text with punctuation, upper/lowercase letters and even newline symbols. To simplify all further steps, we'll split text into space-separated tokens using one of the nltk tokenizers.The `nltk` library [link](https://www.nltk.org) is widely used in NLP. It is not strictly necessary here, but is mentioned to introduce it to you.
###Code
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "I don\'t want to do that" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = #<YOUR CODE>
texts_test = #<YOUR CODE>
# Small check that everything is done properly
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
###Output
_____no_output_____
###Markdown
Step 1: bag of wordsOne traditional approach to such problem is to use bag of words features:1. build a vocabulary of frequent words (use train data only)2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).3. consider this count a feature for some classifier__Note:__ in practice, you can compute such features using sklearn. __Please don't do that in the current assignment, though.__* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
###Code
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurrences (highest first)
k = min(10000, len(set(' '.join(texts_train).split())))
#<YOUR CODE>
bow_vocabulary = #<YOUR CODE>
print('example features:', sorted(bow_vocabulary)[::100])
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
#<YOUR CODE>
return np.array(<...>, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
# Small check that everything is done properly
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
###Output
_____no_output_____
###Markdown
Now let's do the trick with `sklearn` logistic regression implementation:
###Code
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
###Output
_____no_output_____
###Markdown
Seems alright. Now let's create the simple logistic regression using PyTorch. Just like in the classwork.
###Code
import torch
from torch import nn
from torch.nn import functional as F
from torch.optim.lr_scheduler import StepLR, ReduceLROnPlateau
from sklearn.metrics import accuracy_score
from utils import plot_train_process
model = nn.Sequential()
model.add_module('l1', ### YOUR CODE HERE
### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Remember what we discussed about loss functions! `nn.CrossEntropyLoss` combines both log-softmax and `NLLLoss`.__Be careful with it! The criterion `nn.CrossEntropyLoss` will still work with log-softmax output, but it won't allow you to converge to the optimum.__ Next comes a small demonstration:
###Code
# loss_function = nn.NLLLoss()
loss_function = nn.CrossEntropyLoss()
opt = ### YOUR CODE HERE
X_train_bow_torch = ### YOUR CODE HERE
X_test_bow_torch = ### YOUR CODE HERE
y_train_torch = ### YOUR CODE HERE
y_test_torch = ### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Let's test that everything is fine
###Code
# example loss
loss = loss_function(model(X_train_bow_torch[:3]), y_train_torch[:3])
assert type(loss.item()) == float
###Output
_____no_output_____
###Markdown
Here comes a small function to train the model. In the future we will move it into a separate file, but for this homework it's fine to implement it here.
###Code
def train_model(
model,
opt,
lr_scheduler,
X_train_torch,
y_train_torch,
X_val_torch,
y_val_torch,
n_iterations=500,
batch_size=32,
warm_start=False,
show_plots=True,
eval_every=10
):
if not warm_start:
for name, module in model.named_children():
print('resetting ', name)
try:
module.reset_parameters()
except AttributeError as e:
print('Cannot reset {} module parameters: {}'.format(name, e))
train_loss_history = []
train_acc_history = []
val_loss_history = []
val_acc_history = []
local_train_loss_history = []
local_train_acc_history = []
for i in range(n_iterations):
        # sample batch_size random observations
ix = np.random.randint(0, len(X_train_torch), batch_size)
x_batch = X_train_torch[ix]
y_batch = y_train_torch[ix]
# predict log-probabilities or logits
y_predicted = ### YOUR CODE
# compute loss, just like before
### YOUR CODE
# compute gradients
### YOUR CODE
# Adam step
### YOUR CODE
# clear gradients
### YOUR CODE
local_train_loss_history.append(loss.data.numpy())
local_train_acc_history.append(
accuracy_score(
y_batch.to('cpu').detach().numpy(),
y_predicted.to('cpu').detach().numpy().argmax(axis=1)
)
)
if i % eval_every == 0:
train_loss_history.append(np.mean(local_train_loss_history))
train_acc_history.append(np.mean(local_train_acc_history))
local_train_loss_history, local_train_acc_history = [], []
predictions_val = model(X_val_torch)
val_loss_history.append(loss_function(predictions_val, y_val_torch).to('cpu').detach().item())
acc_score_val = accuracy_score(y_val_torch.cpu().numpy(), predictions_val.to('cpu').detach().numpy().argmax(axis=1))
val_acc_history.append(acc_score_val)
lr_scheduler.step(train_loss_history[-1])
if show_plots:
display.clear_output(wait=True)
plot_train_process(train_loss_history, val_loss_history, train_acc_history, val_acc_history)
return model
###Output
_____no_output_____
###Markdown
Let's run it on the data. Note that here we use the `test` part of the data for validation. That's not a good idea in general, but in this task our main goal is practice.
###Code
train_model(model, opt, lr_scheduler, X_train_bow_torch, y_train_torch, X_test_bow_torch, y_test_torch)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow_torch, y_train, model),
('test ', X_test_bow_torch, y_test, model)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
###Output
_____no_output_____
###Markdown
Try to vary the number of tokens `k` and check how the model performance changes. Show it on a plot.
###Code
# Your beautiful code here
###Output
_____no_output_____
###Markdown
Step 2: implement TF-IDF featuresNot all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __term frequency / inverse document frequency__ and means exactly that:$$ feature_i = { Count(word_i \in x) \times { log {N \over Count(word_i \in D) + \alpha} }}, $$where x is a single text, D is your dataset (a collection of texts), N is the total number of documents and $\alpha$ is a smoothing hyperparameter (typically 1). And $Count(word_i \in D)$ is the number of documents where $word_i$ appears. It may also be a good idea to normalize each data sample after computing tf-idf features.__Your task:__ implement tf-idf features, train a model and evaluate the ROC curve. Compare it with the basic BagOfWords model from above.__Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :)__ You can still use 'em for debugging though. Blog post about implementing the TF-IDF features from scratch: https://triton.ml/blog/tf-idf-from-scratch
###Code
# Your beautiful code here
###Output
_____no_output_____
###Markdown
Same stuff about the model and optimizers here (or just omit it, if you are using the same model as before).
###Code
### YOUR CODE HERE
X_train_tfidf_torch = ### YOUR CODE HERE
X_test_tfidf_torch = ### YOUR CODE HERE
y_train_torch = ### YOUR CODE HERE
y_test_torch = ### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Fit your model to the data. Do not hesitate to vary the number of iterations, learning rate and so on._Note: due to the very small dataset, increasing the complexity of the network might not be the best idea._ Step 3: Comparing it with Naive Bayes. The Naive Bayes classifier is a good choice for such small problems. Try to tune it for both BOW and TF-IDF features. Compare the results with Logistic Regression.
###Code
# Your beautiful code here
###Output
_____no_output_____
###Markdown
Share some thoughts on the results you acquired. Which model has shown the best performance? Did changing the learning rate/lr scheduler help? _Your beautiful thoughts here_ Step 4: Using external knowledge. Use the `gensim` word2vec pretrained model to translate words into vectors. Use several models with this new encoding technique. Compare the results, share your thoughts.
###Code
# Your beautiful code here
###Output
_____no_output_____ |
assignments/machine_learning/assignment_3_classification/.ipynb_checkpoints/assignment_3-checkpoint.ipynb | ###Markdown
ASSIGNMENT 3: MULTICLASSIFICATION. Welcome to your last assignment in the classification section. Today you will work with the famous NIST handwritten digit recognition dataset that is available via the sklearn.datasets module. You will experience the effect of scaling in terms of statistical measures and visualize multiclassification decision boundaries. As in the previous assignment you will find the best classifier with the best hyperparameters using predefined functions. With all this said, let's get started. PART 1: Loading, scaling and visualization of data. First let's import the necessary packages and load our data. We'll also set the random seed to 5.
###Code
from sklearn.datasets import load_digits
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import random
from collections import Counter
seed = 5
digits = load_digits()
digits
###Output
_____no_output_____
###Markdown
It's useful to read the description of the dataset to build some intuition and to understand what we are working with. Actually, the sklearn digit recognition dataset includes only a part of the bigger dataset described below (1797 samples).
###Code
digits.DESCR.split('\n')
###Output
_____no_output_____
###Markdown
As our data is loaded in dict form, we can easily extract the features and targets from it as shown below.
###Code
data_X = digits.data
data_y = digits.target
data_X
data_y
print(data_y.shape)
print(data_X.shape)
###Output
(1797, 64)
###Markdown
Now let's visualize some random samples from our data using matplotlib.
###Code
idx = random.randint(0,data_X.shape[0])
arr = digits.images[idx]
plt.gray();
plt.matshow(arr);
print(' Number {}'.format(data_y[idx]));
###Output
Number 3
###Markdown
We are dealing with 10 classes of grayscale images that contain handwritten digits from 0 to 9. To understand the performance of the classifier we need to choose a proper metric. Thus, we need to check whether our data is balanced. For this purpose we plotted the distribution of the number of samples per class below.
###Code
dict_counter = dict(Counter(data_y))
values = list(dict_counter.values())
keys = list(dict_counter.keys())
sns.set()
plt.figure(figsize=(15,8));
plt.title('Distribution of number of samples ')
plt.bar(range(len(values)),values);
plt.plot(range(len(values)),values,'--r')
plt.xticks(range(len(values)));
###Output
_____no_output_____
###Markdown
From the plot above it's obvious that our data is balanced. Thus, we will use accuracy as the main metric. Our data is already processed and all the pixels are in the range from 0 to 16. Let's see some statistical measures of our data.
###Code
print('Mean : {}, standard deviation : {}, variance : {}'.format(np.mean(data_X),np.std(data_X),np.var(data_X)))
###Output
Mean : 4.884164579855314, standard deviation : 6.016787548672236, variance : 36.20173240585726
###Markdown
Hm, despite the fact that the data is already processed, we still have a high variance. We can change the situation by performing max scaling.
###Code
data_X = data_X/16
data_X
print(np.mean(data_X),np.std(data_X),np.var(data_X))
###Output
0.30526028624095713 0.3760492217920148 0.1414130172103799
###Markdown
Now we have all the data squashed into the interval from 0 to 1. As you can see, the variance decreased. We are ready to train the classifier. PART 2: Training classifiers. We will first train a LogisticRegression with intuitive hyperparameters and validate it with cross_val_score.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.decomposition import PCA
clf = LogisticRegression(C=100,random_state=seed,penalty='l2')
score = cross_val_score(clf,data_X, data_y, scoring='accuracy',cv=5)
print('Mean accuracy score for Logistic Regression : {}'.format(np.mean(score)))
###Output
Mean accuracy score for Logistic Regression : 0.9210482168718908
###Markdown
Using simple logistic regression we got a mean accuracy score of about 0.92. We can plot the linear boundary of our classifier using PCA, about which you will get many more details in the unsupervised section. In a nutshell, what PCA (principal component analysis) does is dimensionality reduction with respect to the features. Thus we can easily visualize a high-dimensional matrix by reducing it to just 2 dimensions. The function plot_boundary uses PCA to reduce the dimensions of our features and then plots the decision boundary of the classifier with respect to the transformed features.
###Code
def plot_boundary(clf,dataset):
h = 0.25
pca = PCA(n_components=2)
X = pca.fit_transform(dataset.data)
clf.fit(X, dataset.target)
plt.figure(figsize=(15,8))
plt.title('Visualizing decision boundaries')
plt.scatter(X[:,0], X[:,1], c=dataset.target)
plt.figure(figsize=(15,8))
x_min, x_max = X[:,0].min() - 10*h, X[:,0].max() + 10*h
y_min, y_max = X[:,1].min() - 10*h, X[:,1].max() + 10*h
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z)
plt.contour(xx, yy, Z, colors='k', linewidths=0.7)
plt.scatter(X[:,0], X[:,1], c=dataset.target)
plot_boundary(clf,digits)
###Output
/home/volodymyr/envs/courses_env/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/home/volodymyr/envs/courses_env/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
###Markdown
The plot above should have helped you understand how the decision boundary looks for multiclassification tasks. The next step is to use the functions with which you are already familiar to choose the best classifier. Here we are searching for the best classifier only among SVC and LogisticRegression, because our data is much more complex than the ones we used in the previous assignments and it might take a long time to find the best classifier among many. Anyway, we encourage you to add other classifiers to classifiers_dict and experiment with the tunable hyperparameters, as sketched in the commented example inside the next cell.
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn import tree
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
classifiers_dict = {'LogisticRegression':(LogisticRegression(
random_state=seed),{'C' : [0.001,0.01,0.1,10,100],
'penalty' : ['l2', 'l1']
}),
'SVC': (SVC(random_state=seed), {'C' : [0.001,0.01,0.1,10,100],
'kernel' : ['linear', 'poly', 'rbf'],
'degree' : (1,2,3),
'gamma' : [0.001,0.01,0.1,10,100]
})
}
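# As suggested above, other models can be appended to the same dictionary, e.g. (a sketch, not tuned):
# classifiers_dict['RandomForest'] = (RandomForestClassifier(random_state=seed),
#                                     {'n_estimators': [100, 300], 'max_depth': [None, 10, 20]})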
def call_grid_search(clf,data_X,data_y,scoring,cv):
classifier, params = clf
kfolds = StratifiedKFold(cv)
gscv = GridSearchCV(classifier,param_grid=params, cv=kfolds.split(data_X,data_y), scoring=scoring,n_jobs=-1)
gscv.fit(data_X,data_y)
return gscv
def choose_best_classifier(classifiers_dict,data_X,data_y,scoring='precision',cv=5):
best_score = 0
best_clf = None
for name, clf in classifiers_dict.items():
gscv = call_grid_search(clf,data_X,data_y,scoring,cv)
score = gscv.best_score_
print('Classifier : {0}, mean {1} : {2}'.format(name,scoring, score))
if score>best_score:
best_score = score
best_clf = (name,gscv.best_estimator_)
print('Best classifier : {0}, mean {1} : {2} '.format(best_clf[0],scoring,best_score))
return best_clf
name, clf = choose_best_classifier(classifiers_dict,data_X,data_y,'accuracy')
###Output
/home/volodymyr/envs/courses_env/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/home/volodymyr/envs/courses_env/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
|
test/test.ipynb | ###Markdown
Trial with nglview and ase: https://github.com/arose/nglview , https://wiki.fysik.dtu.dk/ase/about.html . Borrowing from this example: https://github.com/arose/nglview/blob/master/examples/users/ase.md . Installation: pip install nglview==1.1.5; pip install ase; pip install ipywidgets==7.0.0; pip install widgetsnbextension==3.0.0; maybe also ipykernel=4.6? Useful pages: https://wiki.fysik.dtu.dk/ase/ase/build/build.html?highlight=molecule (ase.build.molecule)
###Code
from ase.build import molecule
import nglview
import numpy as np
import ipywidgets
def makeview(model):
    # helper: build an nglview widget for a single ase Atoms object
    view = nglview.show_ase(model)
    return view
names = ['OCHCHO', 'C3H9C', 'CH3COF', 'CH3CH2NH2']
mols = [molecule(name) for name in names]
view = nglview.show_ase(mols[3])
nglview.write_html('index.html', [view])
view
!open index.html
from ipywidgets.embed import embed_minimal_html
embed_minimal_html('index.html', views=[view], title='test export')
import moldesign as mdt  # assumption: `mdt` refers to the Molecular Design Toolkit, which is not imported above
mol = mdt.from_name('ethylene')
mol.draw()
###Output
_____no_output_____
###Markdown
Trying with rdkitconda install -c rdkit rdkit also see here: http://patrickfuller.github.io/imolecule/examples/ipython.html
###Code
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_3d = True
taxol = ("CC(=O)OC1C2=C(C)C(CC(O)(C(OC(=O)c3ccccc3)C4C5(COC5CC(O)C4(C)C1=O)"
"OC(=O)C)C2(C)C)OC(=O)C(O)C(NC(=O)c6ccccc6)c7ccccc7")
mol = Chem.AddHs(Chem.MolFromSmiles(taxol))
AllChem.EmbedMolecule(mol)
AllChem.MMFFOptimizeMolecule(mol)
mol
###Output
_____no_output_____
###Markdown
Trying with py3Dmolpip install py3Dmolhttp://nbviewer.jupyter.org/github/3dmol/3Dmol.js/blob/9050b97144e81f065df7eecc87ba9a16723ab14b/py3Dmol/examples.ipynb
###Code
import py3Dmol
view = py3Dmol.view(query='pdb:1hvr')
view.setStyle({'cartoon':{'color':'spectrum'}})
view
###Output
_____no_output_____
###Markdown
cntopic: a simple and easy-to-use LDA topic model library that supports both Chinese and English. It is built on gensim and pyLDAvis and implements LDA topic modelling plus visualization. Installation
###Code
pip install cntopic
###Output
_____no_output_____
###Markdown
Usage. Let's set up a scenario: suppose you collected news data but forgot to collect the category of each news text, and manual labeling would take a lot of effort. In that case we can use an LDA topic model to help us discover patterns in the data and find that the news forms n topic groups. The LDA model then automatically labels the data topic_1, topic_2, topic_3, ..., topic_n, and our work as researchers is limited to interpreting what each of topic_1, topic_2, topic_3, ..., topic_n is about. LDA training roughly consists of 1. reading the file 2. preparing the data 3. training the LDA model 4. using the LDA model 5. saving and loading the LDA model. 1. Reading the file. Here we use a news dataset with 10 classes and 1000 records per class, covering '时尚' (fashion), '财经' (finance), '科技' (technology), '教育' (education), '家居' (home), '体育' (sports), '时政' (politics), '游戏' (games), '房产' (real estate), '娱乐' (entertainment)
###Code
import pandas as pd
df = pd.read_csv('chinese_news.csv')
df.head()
###Output
_____no_output_____
###Markdown
Distribution of the label column
###Code
df['label'].value_counts()
###Output
_____no_output_____
###Markdown
2. Preparing the data. Data preparation generally includes: 1. tokenization and data cleaning 2. arranging the data into the format the module expects. Note that in scikit-learn: - English text does not need tokenization and can be passed in as-is. - Chinese text must be tokenized first and then joined into a space-separated string, like English, e.g. "我 爱 中国"
###Code
import jieba
def text2tokens(raw_text):
    # tokenize raw_text with jieba and return the list of tokens
    tokens = jieba.lcut(raw_text)
    # tokens = raw_text.lower().split(' ')  # for English, splitting on spaces is enough
    tokens = [t for t in tokens if len(t)>1]  # drop single-character tokens
    return tokens
# tokenize every text in the content column
documents = [text2tokens(txt)
             for txt in df['content']]
# show the first 5 documents
print(documents[:5])
###Output
Building prefix dict from the default dictionary ...
Loading model from cache /var/folders/sc/3mnt5tgs419_hk7s16gq61p80000gn/T/jieba.cache
Loading model cost 0.633 seconds.
Prefix dict has been built successfully.
###Markdown
3. Training the LDA model. Now we start using the cntopic module properly and run the LDA topic model analysis. The steps are |Step|Function|Code||---|:---|:---||0|Prepare documents, already done above|-||1|Initialize the Topic class|topic = Topic(cwd=os.getcwd())||2|Build the dictionary space from documents|topic.create_dictionary(documents=documents)||3|Build the corpus (convert texts to a document-term matrix)|topic.create_corpus(documents=documents)||4|Specify n_topics and train the LDA topic model|topic.train_lda_model(n_topics)| Here we build the LDA topic model with n_topics=10; in general n_topics may need several experiments to find the best value. While the code runs, an output folder is generated next to the code, containing - dictionary.dict, the dictionary file - lda.model.xxx, several LDA model files, where xxx is a placeholder. The code above takes quite a while, please wait patiently for it to finish~
###Code
import os
from cntopic import Topic
topic = Topic(cwd=os.getcwd())  # initialize the Topic class
topic.create_dictionary(documents=documents)  # build the dictionary space from documents
topic.create_corpus(documents=documents)  # build the corpus (convert texts to a document-term matrix)
topic.train_lda_model(n_topics=10)  # specify n_topics and train the LDA topic model
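# Sketch: n_topics usually needs several trials; one simple approach is to retrain with a few
# candidate values and compare the resulting topics / pyLDAvis output for each run
# (assumption: every run rewrites the files in output/), e.g.
# for k in (5, 10, 15, 20):
#     topic.train_lda_model(n_topics=k)
#     print(k, topic.show_topics())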
###Output
_____no_output_____
###Markdown
4. Using the LDA model. The code above ran for about 5 minutes and the LDA model is now trained. Now we can use the LDA model for several things, including |Step|Function|Code|Notes||---|:---|:---|:---||1|A tokenized document|document = ['游戏', '体育']|||2|Predict the topic of a document|topic.get_document_topics(document)|||3|Show the relation between each topic and its feature words|topic.show_topics()|||4|Topic distribution across the data|topic.topic_distribution(raw_documents)|raw_documents is a list or Series, e.g. df['content'] in this tutorial||5|Visualize the LDA topic model|topic.visualize_lda()|the visualization is the vis.html file inside output; open it in a browser|| 4.1 Preparing a document. Suppose we have a document ``'游戏体育真有意思'``; tokenizing it gives document
###Code
document = jieba.lcut('游戏体育真有意思')
document
###Output
_____no_output_____
###Markdown
4.2 Predicting the topic of a document. We use the topic model to see which topic the document belongs to.
###Code
topic.get_document_topics(document)
###Output
_____no_output_____
###Markdown
Our LDA topic model was trained with n_topics=10, so when we ask topic to predict a document, the result is a list of tuples over these 10 topics with their probabilities. We can see that ``topic 6`` has the highest probability, 0.51443774, so we can roughly consider the document to belong to topic 6. 4.3 Showing the relation between each topic and its feature words. However, merely being told that each document is ``topic n``, we still don't know what ``topic n`` stands for, so we need to look at the ``feature words`` associated with each ``topic n``.
###Code
topic.show_topics()
###Output
_____no_output_____
###Markdown
Based on the ``topic n`` and ``feature words`` above, we can roughly interpret what each ``topic n`` is about. 4.4 Topic distribution. Now we want to know how the different ``topic n`` are distributed in the dataset
###Code
topic.topic_distribution(raw_documents=df['content'])
###Output
_____no_output_____
###Markdown
Our data has 10 classes with 1000 records each, and the LDA topic model, using nothing but textual clues and n_topics=10, has split them reasonably well. In the perfect case every ``topic n`` would be close to 1000; right now ``topic 9`` is too large and ``topic 6`` / ``topic 7`` are too small. We should also note that some topics may overlap and are easy to confuse, for example - finance, real estate, politics - sports and entertainment - finance and technology, etc. Overall, the current model is acceptable. 4.5 Visualization. With only 10 topics we can still judge them by eye, but when the number of topics is large it is better to rely on a visualization tool to evaluate the training result. That is what topic.visualize_lda() is for; after it finishes, ``find the vis.html file in the output folder next to the code and open it in a browser``
###Code
topic.visualize_lda()
###Output
_____no_output_____
###Markdown
The chart has two main areas: - Left: the topic distribution; the larger a circle, the more documents that topic has, and the circles are scattered over four quadrants. - Right: the feature words of the selected topic, with weights decreasing from top to bottom. Notes for the left side: - it is better when the circles are spread fairly evenly across the four quadrants; if they all crowd into a small region, the model is poorly trained - it is better when circles overlap little; too much overlap means n_topics is set too large and should be reduced. 5. Saving and loading the LDA model. Training an LDA topic model is very slow; not saving the trained model really wastes our time and computing power. The good news is that cntopic saves the model by default into the output folder, so we only need to know how to load it. Two models need to be loaded; the steps are |Step|Model|Code|Purpose||---|---|---|---||0|-|-|prepare documents||1|-|topic = Topic(cwd=os.getcwd())|initialize||2|dictionary|topic.load_dictionary(dictpath='output/dictionary.dict')|load the dictionary directly, skipping topic.create_dictionary()||3|-|topic.create_corpus(documents=documents)|build the corpus (convert texts to a document-term matrix)||4|LDA topic model|topic.load_lda_model(modelpath='output/model/lda.model')|load the LDA topic model, equivalent to skipping topic.train_lda_model(n_topics)| Now let's try it; to distinguish it from the previous one we name it topic2
###Code
topic2 = Topic(cwd=os.getcwd())
topic2.load_dictionary(dictpath='output/dictionary.dict')
topic2.create_corpus(documents=documents)
topic2.load_lda_model(modelpath='output/model/lda.model')
###Output
_____no_output_____
###Markdown
In this notebook, we are going to show how pycombat works. This package implements Combat, a technique for data harmonisation based on a linear mixed model in which location and scale random effects across batches are adjusted using a Bayesian approach (Johnson, 2007):$$ Y_{ijk} = \alpha_k + X\beta_k + \gamma_{ik} + \delta_{ik}\epsilon_{ijk}$$The original Combat technique also allowed keeping the baseline effects $\alpha_k$ and the effects of interest $\beta_k$ by reintroducing these after harmonisation:$$Y \longrightarrow Y^{combat}_{ijk} = \frac{Y_{ijk} - \hat{\alpha}_k - X\hat{\beta}_k - \hat{\gamma}_{ik}}{\hat{\delta}_{ik}} + \hat{\alpha}_k + X\hat{\beta}_k $$One extension of this python package is the possibility of removing unwanted variables' effects by not reintroducing them. Using the same linear mixed model as at the beginning, we now separate the sources of covariation $C$ from the sources of effects of interest $X$:$$ Y_{ijk} = \alpha_k + X\beta_k^{x} + C\beta_c^{c} + \gamma_{ik} + \delta_{ik}\epsilon_{ijk}$$And then in this case, the combat adjustment will be given by:$$Y \longrightarrow Y^{combat}_{ijk} = \frac{Y_{ijk} - \hat{\alpha}_k - X\hat{\beta}_k^{x} - C\hat{\beta}_c^{c} - \hat{\gamma}_{ik}}{\hat{\delta}_{ik}} + \hat{\alpha}_k + X\hat{\beta}_k^{x} $$Such a modification to Combat has recently been proposed and applied by some authors (Wachinger, 2020). In this notebook, we are going to show how this package works by making use of data on gene expression measurements from a bladder cancer study (Dyrskjot, 2004). We will then compare our results with those from neuroCombat (Fortin, 2017), a known python implementation of Combat (though this does not include the modification we have implemented).*References*:- W. Evan Johnson, Cheng Li, Ariel Rabinovic, Adjusting batch effects in microarray expression data using empirical Bayes methods, Biostatistics, Volume 8, Issue 1, January 2007, Pages 118–127, https://doi.org/10.1093/biostatistics/kxj037- L. Dyrskjot, M. Kruhoffer, T. Thykjaer, N. Marcussen, J. L. Jensen, K. Moller, and T. F. Orntoft. Gene expression in the urinary bladder: a common carcinoma in situ gene expression signature exists disregarding histopathological classification. Cancer Res., 64:4040-4048, Jun 2004.- Christian Wachinger, Anna Rieckmann, Sebastian Pölsterl. Detect and Correct Bias in Multi-Site Neuroimaging Datasets. arXiv:2002.05049 - Fortin, J. P., N. Cullen, Y. I. Sheline, W. D. Taylor, I. Aselcioglu, P. A. Cook, P. Adams, C. Cooper, M. Fava, P. J. McGrath, M. McInnis, M. L. Phillips, M. H. Trivedi, M. M. Weissman and R. T. Shinohara (2017). "Harmonization of cortical thickness measurements across scanners and sites." Neuroimage 167: 104-120.
###Code
import numpy as np
import pandas as pd
import sys
from pycombat import Combat
###Output
_____no_output_____
###Markdown
Following the spirit of scikit-learn, Combat is a class that includes a method called **fit**, which finds the fitted values of the linear mixed model, and **transform**, a method that uses the previously learned parameters to adjust the data. There is also a method called **fit_transform**, which chains both methods. So the first thing that you need to do is to define an instance of this class
###Code
combat = Combat()
###Output
_____no_output_____
###Markdown
When you define the instance, you can pass it the following parameters: - method: which is either "p" for parametric or "np" for non-parametric (not implemented yet!!) - conv: the criterion to decide when to stop the EB optimization step (default value = 0.0001) Now, you have to call the method **fit**, passing it the data. These data will consist of the following ingredients: - Y: The matrix of response variables, with dimensions [observations x features] - b: The array of batch labels for each observation. In principle these could be labelled as numbers or strings. - X: The matrix of effects of interest to keep, with dimensions [observations x features_interest] - C: The matrix of covariates to remove, with dimensions [observations x features_covariates] ***Important:*** If you have effects of interest or covariates that involve categorical features, make sure that you drop the first level of these categories when building the independent matrices, otherwise they would be singular. You can easily accomplish this using pandas and **pd.get_dummies** with the option *drop_first* checked. Let's then see how this package works
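For instance, a quick toy check of the drop_first point (a sketch using a made-up 3-level column, unrelated to the bladder data):
###Code
# a 3-level categorical variable encoded with drop_first=True yields only 2 dummy columns,
# which keeps the design matrix full rank
pd.get_dummies(pd.Series(['a', 'b', 'c', 'a']), drop_first=True)
###Output
_____no_output_____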
###Code
# Y is the matrix of response variables
Y = np.load('bladder-expr.npy')
print("the matrix of response variables has %d observations and %d outcome variables" % (Y.shape[0], Y.shape[1]))
# This loads the set of independent variables, including the batch labels
pheno = pd.read_csv('bladder-pheno.txt', delimiter='\t')
pheno.head()
b = pheno.batch.values
print("We have %d different batches" % len(np.unique(b)))
###Output
We have 5 different batches
###Markdown
We also have information about the type of cancer and age in this data
###Code
pheno.cancer.value_counts()
pheno.describe()['age']
###Output
_____no_output_____
###Markdown
Say we want to keep these two effects after combat harmonisation. We can then build our matrix X from these two variables
###Code
X = np.column_stack((pd.get_dummies(pheno.cancer.values, drop_first=True).values,
pheno.age.values))
print(X[:10,:])
###Output
[[0 1 1]
[0 1 2]
[0 1 3]
[0 1 4]
[0 1 5]
[0 1 6]
[0 1 7]
[0 1 1]
[1 0 2]
[1 0 3]]
###Markdown
And now we **fit** the combat model with these pieces
###Code
combat.fit(Y, b, X=X, C=None) # X and C are None by default, so no need here to write C=None
###Output
_____no_output_____
###Markdown
And then, we can adjust this dataset by calling the **transform** method
###Code
Y_combat = combat.transform(Y=Y, b=b, X=X)
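# Equivalently (a sketch; assumption: fit_transform takes the same arguments as fit):
# Y_combat = Combat().fit_transform(Y, b, X=X)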
###Output
_____no_output_____
###Markdown
We can check that this gives the same result as applying neuroCombat
###Code
from neuroCombat import neuroCombat
discrete_cols = ['cancer']
continuous_cols = ['age']
batch_col = 'batch'
Y_neurocombat = neuroCombat(data=Y,
covars=pheno,
batch_col=batch_col,
discrete_cols=discrete_cols,
continuous_cols=continuous_cols)
np.corrcoef(Y_combat.flat, Y_neurocombat.flat)
###Output
_____no_output_____
###Markdown
User loan default prediction: https://www.heywhale.com/home/competition/615ff7bdc270e400182b249e/content/1 Importing modules
###Code
%%time
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
from toad import detect
pd.set_option('display.width', 180)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', 100)
###Output
Wall time: 0 ns
###Markdown
Loading the data
###Code
%%time
train = pd.read_csv("../data/train.csv")
test = pd.read_csv("../data/test.csv")
###Output
Wall time: 488 ms
###Markdown
Data exploration: view the first five rows. Training set
###Code
%%time
train.head()
###Output
_____no_output_____
###Markdown
Test set
###Code
%%time
test.head()
###Output
Wall time: 1e+03 µs
###Markdown
Check data types, missing values and categorical values
###Code
%%time
detect(train).iloc[:, :4]
###Output
Wall time: 756 ms
###Markdown
Descriptive statistics. Training set
###Code
%%time
train.describe()
###Output
Wall time: 76 ms
###Markdown
Test set
###Code
%%time
test.head()
###Output
Wall time: 0 ns
###Markdown
Feature exploration
###Code
%%time
train[train["car_ownership"]=="no"].groupby(["label"])["car_ownership"].count()[1] / len(train) * 100
%%time
train[train["car_ownership"]=="yes"].groupby(["label"])["car_ownership"].count()[1] / len(train) * 100
def p_decorate(func):
def func_wrapper(name):
return "<p>{0}</p>".format(func(name))
return func_wrapper
@p_decorate
def get_text(name):
return "lorem ipsum, {0} dolor sit amet".format(name)
get_text("John")
###Output
_____no_output_____
###Markdown
Analysing outputs
###Code
%matplotlib inline
import numpy as np
import cPickle
from keras.utils import np_utils
import matplotlib.pyplot as plt
import numpy as np
import cPickle
import seaborn as sns
import tabulate
import pyprind
from sklearn.decomposition import PCA
import sys
sys.path.insert(0, "../src")
from model.cnn import CNNProj
from model.zsn import ZSN
from space.space import Space
###Output
d:\tools\miniconda\lib\site-packages\matplotlib\__init__.py:872: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
Using gpu device 0: GeForce GT 630M (CNMeM is disabled)
###Markdown
Load and subset data
###Code
"""
Testing simple CNN
"""
# Load data
data = cPickle.load(open("../data/cifar100_with_vec.pkl", "r"))
(X_train, Y_train, V_train), (X_test, Y_test, V_test) = data["data"]
labels = data["labels"]
# Initialize model
X_shape = X_train.shape[1:]
Xim_shape = 200
V_shape = 50
# Load embedding space
emb = Space("../data/glove.6B." + str(V_shape) + "d.txt")
# Prepare classes
# 10 known classes for large CNN
known = [
"mountain",
"lion",
"chimpanzee",
"house",
"bicycle",
"crocodile",
"whale",
"rocket",
"tractor",
"train"
]
def subset_data(network):
"""
Subset data for training network
"""
known_indices = map(labels.index, network.classes)
known_classes_train_indices = np.in1d(Y_train, known_indices)
known_classes_test_indices = np.in1d(Y_test, known_indices)
Y_train_sub = Y_train[known_classes_train_indices, :]
Y_test_sub = Y_test[known_classes_test_indices, :]
X_train_sub = np.copy(X_train[known_classes_train_indices, :])
X_test_sub = np.copy(X_test[known_classes_test_indices, :])
V_train_sub = V_train[known_classes_train_indices, :]
V_test_sub = V_test[known_classes_test_indices, :]
X_train_sub = X_train_sub.astype("float32")
X_test_sub = X_test_sub.astype("float32")
X_train_sub /= 255
X_test_sub /= 255
Y_train_sub_new = [known_indices.index(i) for i in Y_train_sub]
Y_test_sub_new = [known_indices.index(i) for i in Y_test_sub]
Y_train_sub_new_c = np_utils.to_categorical(Y_train_sub_new, len(known_indices))
Y_test_sub_new_c = np_utils.to_categorical(Y_test_sub_new, len(known_indices))
return {
"X_train": X_train_sub,
"X_test": X_test_sub,
"Y_train": Y_train_sub_new_c,
"Y_test": Y_test_sub_new_c,
"V_train": V_train_sub,
"V_test": V_test_sub
}
# Create an ensemble
cnns = [
CNNProj("mountain_lion", known[:2], X_shape, 200, 2, V_shape),
CNNProj("chimp_house", known[2:4], X_shape, 200, 2, V_shape),
CNNProj("bicycle_croc", known[4:6], X_shape, 200, 2, V_shape),
CNNProj("whale_rocket", known[6:8], X_shape, 200, 2, V_shape),
CNNProj("tractor_train", known[8:], X_shape, 200, 2, V_shape)
]
###Output
_____no_output_____
###Markdown
Train networks
###Code
for net in cnns:
sub = subset_data(net)
print("Training '" + net.name + "'")
# net.train(sub["X_train"], sub["Y_train"], sub["V_train"], [200, 200], [40, 400])
# net.model.save_weights("../data/" + net.name)
# net.proj.model.save_weights("../data/" + net.name + "_proj")
net.model.load_weights("../data/" + net.name)
net.proj.model.load_weights("../data/" + net.name + "_proj")
# Print accuracies
# print("Accuracies")
# print(net.accuracies(sub["X_train"],
# sub["Y_train"],
# sub["V_train"],
# sub["X_test"],
# sub["Y_test"],
# sub["V_test"]))
# Create inference system
zsn = ZSN(cnns, emb)
###Output
Training 'mountain_lion'
Accuracies
1000/1000 [==============================] - 1s
200/200 [==============================] - 0s
1000/1000 [==============================] - 0s
200/200 [==============================] - 0s
{'Training accuracy (embedding)': 0.80900000000000005, 'Testing accuracy': 0.95999999999999996, 'Training accuracy': 0.97999999999999998, 'Testing accuracy (embedding)': 0.84999999999999998}
Training 'chimp_house'
Accuracies
1000/1000 [==============================] - 1s
200/200 [==============================] - 0s
1000/1000 [==============================] - 0s
200/200 [==============================] - 0s
{'Training accuracy (embedding)': 0.59299999999999997, 'Testing accuracy': 0.92500000000000004, 'Training accuracy': 0.92900000000000005, 'Testing accuracy (embedding)': 0.56999999999999995}
Training 'bicycle_croc'
Accuracies
1000/1000 [==============================] - 1s
200/200 [==============================] - 0s
1000/1000 [==============================] - 0s
200/200 [==============================] - 0s
{'Training accuracy (embedding)': 0.35199999999999998, 'Testing accuracy': 0.77000000000000002, 'Training accuracy': 0.77200000000000002, 'Testing accuracy (embedding)': 0.38}
Training 'whale_rocket'
Accuracies
1000/1000 [==============================] - 1s
200/200 [==============================] - 0s
1000/1000 [==============================] - 0s
200/200 [==============================] - 0s
{'Training accuracy (embedding)': 0.92100000000000004, 'Testing accuracy': 0.86499999999999999, 'Training accuracy': 0.92700000000000005, 'Testing accuracy (embedding)': 0.88500000000000001}
Training 'tractor_train'
Accuracies
1000/1000 [==============================] - 1s
200/200 [==============================] - 0s
1000/1000 [==============================] - 0s
200/200 [==============================] - 0s
{'Training accuracy (embedding)': 0.72699999999999998, 'Testing accuracy': 0.69999999999999996, 'Training accuracy': 0.72599999999999998, 'Testing accuracy (embedding)': 0.69999999999999996}
###Markdown
Take vector outputs
###Code
# 5 unknown classes for zero shot prediction
unknown = [
"lamp",
"clock",
"rose",
"baby",
"bridge"
]
# Generate zero shot testing data
unknown_indices = map(labels.index, unknown)
unknown_classes_train_indices = np.in1d(Y_train, unknown_indices)
unknown_classes_test_indices = np.in1d(Y_test, unknown_indices)
Y_unknown = np.concatenate([
Y_train[unknown_classes_train_indices, :],
Y_test[unknown_classes_test_indices, :]
], axis=0)
X_unknown = np.concatenate([
X_train[unknown_classes_train_indices, :],
X_test[unknown_classes_test_indices, :]
], axis=0)
X_unknown = X_unknown.astype("float32")
X_unknown /= 255
Y_unknown_labels = [labels[i] for i in Y_unknown]
V_unknown = np.concatenate([
V_train[unknown_classes_train_indices, :],
V_test[unknown_classes_test_indices, :]
], axis=0)
output = zsn.evaluate_zero_shot_vector(X_unknown, V_unknown)
cPickle.dump(output, open("../data/vec_out", "w"))
# output = cPickle.load(open("../data/vec_out"))
glob = np.array(output[0])
indi = np.array(output[1])
stack = np.array(output[2])
# Plot histograms
f, axarr = plt.subplots(5, figsize=(10, 25))
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.5)
sns.set(style="whitegrid", palette="muted")
for i in xrange(5):
indices = np.array(Y_unknown_labels) == unknown[i]
sns.boxplot(data=np.concatenate([glob[indices][:, None],
# stack[indices][:, None],
indi[:, indices].T], axis=1), ax=axarr[i], orient="h")
axarr[i].set_xlim(-0.4, 0.8)
axarr[i].set_xlabel("Similarity to '" + unknown[i] + "'")
axarr[i].set_yticklabels(["Ensemble", "Stack"] + [c.name.replace("_", " ").title() for c in zsn.cnns]);
plt.savefig("./plots/box.pdf")
# Plot histograms
f, axarr = plt.subplots(5, figsize=(10, 20))
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.5)
sns.set(style="whitegrid", palette="husl")
sns.despine(left=True)
for i in xrange(5):
indices = np.array(Y_unknown_labels) == unknown[i]
sns.distplot(glob[indices], hist=False, kde_kws={"shade": True}, ax=axarr[i], label="Ensemble")
# sns.distplot(stack[indices], hist=False, kde_kws={"shade": True}, ax=axarr[i], label="Stack")
sns.distplot(indi[0, indices], hist=False, kde_kws={"shade": False}, ax=axarr[i], label=zsn.cnns[0].name.replace("_", " ").title());
sns.distplot(indi[1, indices], hist=False, kde_kws={"shade": False}, ax=axarr[i], label=zsn.cnns[1].name.replace("_", " ").title());
sns.distplot(indi[2, indices], hist=False, kde_kws={"shade": False}, ax=axarr[i], label=zsn.cnns[2].name.replace("_", " ").title());
sns.distplot(indi[3, indices], hist=False, kde_kws={"shade": False}, ax=axarr[i], label=zsn.cnns[3].name.replace("_", " ").title());
sns.distplot(indi[4, indices], hist=False, kde_kws={"shade": False}, ax=axarr[i], label=zsn.cnns[4].name.replace("_", " ").title());
axarr[i].set_xlim(-0.25,0.8)
axarr[i].set_yticks([])
axarr[i].set_xlabel("Similarity to '" + unknown[i] + "'")
axarr[i].legend(loc="upper left")
plt.savefig("./plots/dist.pdf")
###Output
_____no_output_____
###Markdown
Hit@k things
###Code
def show_hit_k(idx, k):
"""
Show hit at k list and the image
"""
plt.figure(figsize=(1, 1))
plt.grid("off")
plt.xticks([])
plt.yticks([])
plt.imshow(np.swapaxes(np.swapaxes(X_unknown[idx], 0, 2), 0, 1))
plt.show()
res = zsn.predict(X_unknown[idx])
def _name(i):
return zsn.cnns[i].name.replace("_", " ").title()
table = []
table.append(["Ensemble"] + zsn.space.get_nearest_words(res["vec"], k).tolist())
# table.append(["Stack"] + zsn.space.get_nearest_words(res["stack_vec"], k).tolist())
for i in xrange(len(zsn.cnns)):
table.append([_name(i)] + zsn.space.get_nearest_words(res["indi_vec"][i], k).tolist())
print(tabulate.tabulate(table, tablefmt="fancy_grid", headers=["Network"]+["" for i in xrange(k)]))
show_hit_k(7, 9)
###Output
_____no_output_____
###Markdown
Scatter plot thing
###Code
def scatter_pca(lbl):
"""
Scatter for given class with PCA
"""
ids = np.array(Y_unknown_labels) == lbl
X = X_unknown[ids]
orig = V_unknown[Y_unknown_labels.index(lbl)][None, :]
ens = np.zeros((X.shape[0], 50))
stk = np.zeros(ens.shape)
tmp1 = np.zeros(ens.shape)
tmp2 = np.zeros(ens.shape)
tmp3 = np.zeros(ens.shape)
tmp4 = np.zeros(ens.shape)
tmp5 = np.zeros(ens.shape)
bar = pyprind.ProgBar(X.shape[0])
for idx, item in enumerate(X):
res = zsn.predict(item)
ens[idx, :] = res["vec"]
stk[idx, :] = res["stack_vec"]
tmp1[idx, :] = res["indi_vec"][0]
tmp2[idx, :] = res["indi_vec"][1]
tmp3[idx, :] = res["indi_vec"][2]
tmp4[idx, :] = res["indi_vec"][3]
tmp5[idx, :] = res["indi_vec"][4]
bar.update()
sc = PCA(n_components=2).fit_transform(np.concatenate([orig, ens, stk, tmp1, tmp2, tmp3, tmp4, tmp5]))
orig = sc[0, :]
return [orig, np.split(sc[1:, :], 7)]
baby = scatter_pca("baby")
bridge = scatter_pca("bridge")
rose = scatter_pca("rose")
clock = scatter_pca("clock")
lamp = scatter_pca("lamp")
# Plot histograms
f, axarr = plt.subplots(3, 2, figsize=(10, 15))
cp = sns.color_palette("deep")
sns.set(style="white")
sns.despine(left=True, bottom=True)
items = [[baby, bridge], [rose, clock], [lamp]]
labs = [["Baby", "Bridge"], ["Rose", "Clock"], ["Lamp"]]
for i in xrange(2):
for j in xrange(2):
axarr[i][j].set_xticks([])
axarr[i][j].set_yticks([])
axarr[i][j].scatter(items[i][j][1][0][:, 0], items[i][j][1][0][:, 1], c=cp[0])
axarr[i][j].scatter(items[i][j][1][2][:, 0], items[i][j][1][2][:, 1], c=cp[1])
axarr[i][j].scatter(items[i][j][1][3][:, 0], items[i][j][1][3][:, 1], c=cp[2])
axarr[i][j].scatter(items[i][j][1][4][:, 0], items[i][j][1][4][:, 1], c=cp[3])
axarr[i][j].scatter(items[i][j][1][5][:, 0], items[i][j][1][5][:, 1], c=cp[4])
axarr[i][j].scatter(items[i][j][1][6][:, 0], items[i][j][1][6][:, 1], c=cp[5])
axarr[i][j].scatter(items[i][j][0][0], items[i][j][0][1], c="r", s=100, marker="s")
axarr[i][j].set_xlabel(labs[i][j])
i = 2
j = 0
axarr[i][j].set_xticks([])
axarr[i][j].set_yticks([])
axarr[i][j].scatter(items[i][j][1][0][:, 0], items[i][j][1][0][:, 1], c=cp[0])
axarr[i][j].scatter(items[i][j][1][2][:, 0], items[i][j][1][2][:, 1], c=cp[1])
axarr[i][j].scatter(items[i][j][1][3][:, 0], items[i][j][1][3][:, 1], c=cp[2])
axarr[i][j].scatter(items[i][j][1][4][:, 0], items[i][j][1][4][:, 1], c=cp[3])
axarr[i][j].scatter(items[i][j][1][5][:, 0], items[i][j][1][5][:, 1], c=cp[4])
axarr[i][j].scatter(items[i][j][1][6][:, 0], items[i][j][1][6][:, 1], c=cp[5])
axarr[i][j].scatter(items[i][j][0][0], items[i][j][0][1], c="r", s=100, marker="s")
axarr[i][j].set_xlabel(labs[i][j])
i = 2
j = 1
axarr[i][j].set_xticks([])
axarr[i][j].set_yticks([])
axarr[i][j].scatter([], [], c=cp[0], label="Ensemble")
axarr[i][j].scatter([], [], c=cp[1], label="Mountain Lion")
axarr[i][j].scatter([], [], c=cp[2], label="Chimp House")
axarr[i][j].scatter([], [], c=cp[3], label="Bicycle Croc")
axarr[i][j].scatter([], [], c=cp[4], label="Whale Rocket")
axarr[i][j].scatter([], [], c=cp[5], label="Tractor Train")
axarr[i][j].scatter([], [], c="r", s=100, marker="s", label="Original Word Vector")
axarr[i][j].legend(loc="center")
plt.savefig("./plots/scatter.pdf")
###Output
_____no_output_____
###Markdown
start
###Code
encodings={}
def prepare_encodings(path):
for i in os.listdir(path):
if i.endswith(".jpg"):
name=os.path.splitext(i)[0]
image_file=path+'/'+i
scores,boxe=predict(image_file,[1])
boxe=boxe.numpy()[0]*800
boxe=boxe.astype('int32')
image = Image.open(image_file)
image = image.resize((800,800))
image = np.asarray(image)
crop = image[boxe[1]:boxe[3], boxe[0]:boxe[2]]
plt.imshow(crop)
crop=im.fromarray(crop)
crop=crop.resize((160,160))
encodings[name] = img_to_encoding(crop,model)
prepare_encodings('../input/facess')
import cv2
from matplotlib.patches import Rectangle
legend_properties = {'weight':'bold'}
vidcap = cv2.VideoCapture('../input/facess/virushka.mp4')
image_file = '../input/facess/virat.jpg'
success,image = vidcap.read()
count = 0
while success:
image = cv2.resize(image, (800, 800))
fig, ax = plt.subplots()
ax.imshow(image)
scores,boxe=predict(image_file,image)
boxe=boxe.numpy()*800
scores=scores.numpy()
boxe=boxe.astype('int32')
if(len(scores)>0):
for j in range(len(scores)):
boe=boxe[j]
crop = image[boe[1]:boe[3], boe[0]:boe[2]]
crop=im.fromarray(crop)
crop=crop.resize((160,160))
encoding = img_to_encoding(crop,model)
disti,namei = verify(encoding)
if(namei!='Ankit'):
w=boe[2]-boe[0]
h=boe[3]-boe[1]
rect = Rectangle((boe[0], boe[1]), w, h, linewidth=1, edgecolor='g', facecolor='none',lw=1.3)
ax.text(boe[0], boe[1], namei+'__'+str(scores[j]), fontsize=9, color='y')
ax.legend(prop=legend_properties)
ax.add_patch(rect)
fig.savefig('./gola/virushka_%d.png' %count, dpi = 100)
plt.close(fig)
success,image = vidcap.read()
count += 1
###Output
_____no_output_____
###Markdown
image_file='../input/facess/IMG_20211026_111517.jpg'
scores,boxe=predict(image_file)
boxe=boxe.numpy()[0]*800
boxe=boxe.astype('int32')
image = Image.open(image_file)
image = image.resize((800,800))
image = np.asarray(image)
crop = image[boxe[1]:boxe[3], boxe[0]:boxe[2]]
plt.imshow(crop)
crop=im.fromarray(crop)
crop=crop.resize((160,160))
a,b=verify(encodings['ankit'],img_to_encoding(crop,model))
print(a,b)
###Code
os.mkdir('tola')
def generate_video():
    image_folder = './gola' # make sure to use your folder (frames saved by the loop above)
    video_name = './tola/mygeneratedvideo.avi'
    # Array images should only consider
    # the image files ignoring others if any
    images = [img for img in os.listdir(image_folder)
              if img.endswith(".jpg") or
                 img.endswith(".jpeg") or
                 img.endswith(".png")]
    # sort frames by the numeric suffix produced above (e.g. virushka_<count>.png)
    images.sort(key=lambda name: int(os.path.splitext(name)[0].split('_')[-1]))
    # the frame width and height are taken from the first image
    frame = cv2.imread(os.path.join(image_folder, images[0]))
    height, width, layers = frame.shape
    video = cv2.VideoWriter(video_name, 0, 1, (width, height))
    # Appending the images to the video one by one
    for image in images:
        video.write(cv2.imread(os.path.join(image_folder, image)))
    # Deallocating memories taken for window creation
    cv2.destroyAllWindows()
    video.release() # releasing the video generated
# Calling the generate_video function
generate_video()
###Output
_____no_output_____
###Markdown
robots.txt
https://www.lagou.com/robots.txt
>User-agent: Jobuispider
>Disallow: /
>
>User-agent: *
>Disallow: /resume/
>Disallow: /nearBy/
>Disallow: /ologin/
>Disallow: /jobs/list_*
>Disallow: /one.lagou.com
>Disallow: /ns3.lagou.com
>Disallow: /hr.lagou.com
>Disallow: /two.lagou.com
>Disallow: /t/temp1/
>Disallow: /center/preview.html
>Disallow: /center/previewApp.html
>Disallow: /*?utm_source=*
>Allow: /gongsi/interviewExperiences.html?companyId=*
>Disallow: /*?*
Search
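A specific URL can be checked against the rules above programmatically with the standard-library robots.txt parser. This is an illustrative sketch only: it fetches the live file, and urllib's parser does not interpret wildcard extensions such as `/*?*`, so treat the answer as indicative.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.lagou.com/robots.txt")
rp.read()

# True/False: may a generic crawler fetch the Ajax search endpoint used below?
print(rp.can_fetch("*", "https://www.lagou.com/jobs/positionAjax.json?px=default"))
```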
###Code
#url = 'https://www.lagou.com/jobs/positionAjax.json?px=default&city=%E5%8C%97%E4%BA%AC&needAddtionalResult=false&isSchoolJob=0'
url = 'https://www.lagou.com/jobs/positionAjax.json?px=default&city=全国&needAddtionalResult=false&isSchoolJob=0'
header = {}
with open('/home/gk07/Instances/lagou/lagouSpider/head.txt') as f:
for line in f:
l = [x.strip() for x in line.split(':',1)]
header[l[0]] = l[1]
datas = {'first':'true',
'pn':'1',
'kd':'python'
}
session = requests.session()
res = session.post(url,headers=header,data=datas)
res.json().keys()
res.json()['content'].keys()
res.json()['content']['pageSize']
res.json()['content']['pageNo']
##
res.json()['content']['positionResult']['totalCount']
res.json()['content']['positionResult'].keys()
len(res.json()['content']['positionResult']['result'])
###Output
_____no_output_____
###Markdown
Search results
- When browsing Lagou in a browser, each page returns 15 job postings and at most 30 pages are shown, so at most 450 search results can be obtained that way.
- Searching the keyword python with the city set to 全国 (nationwide), the returned JSON shows res.json()['content']['positionResult']['totalCount'] = 1388 results. At 15 postings per page, that is 93 pages in total.
- Testing shows that the spider can extract data beyond page 30.
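A quick sanity check on the page arithmetic above (a minimal sketch; the counts are just the numbers quoted in this section):

```python
import math

total_count = 1388       # res.json()['content']['positionResult']['totalCount']
page_size = 15           # postings returned per page
browser_page_limit = 30  # pages the web UI will display

print(math.ceil(total_count / page_size))  # 93 pages in total
print(browser_page_limit * page_size)      # 450 postings reachable through the browser
```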
###Code
url = 'https://www.lagou.com/jobs/positionAjax.json?px=default&city=全国&needAddtionalResult=false&isSchoolJob=0'
header = {}
with open('/home/gk07/Instances/lagou/lagouSpider/head.txt') as f:
for line in f:
l = [x.strip() for x in line.split(':',1)]
header[l[0]] = l[1]
datas = {'first':'false',
'pn':'7',
'kd':'python'}
session = requests.session()
res = session.post(url,headers=header,data=datas)
#res.json()
res.json()['content']['positionResult']['totalCount']
len(res.json()['content']['positionResult']['result'])
res.json()['content']['positionResult']['result'][0]
###Output
_____no_output_____
###Markdown
Laplace Transforms
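For reference, these are the closed-form Laplace-space expressions encoded by the helper functions below and inverted numerically by the GWR algorithm; the last one is the wellbore-storage-and-skin solution used later ($K_0$ is the modified Bessel function and $\gamma \approx 0.577216$ the Euler–Mascheroni constant):
$$\begin{align}\mathcal{L}\{u(t-a)\}(s) &= \frac{e^{-as}}{s}\\\mathcal{L}\{\ln t\}(s) &= -\frac{\ln s + \gamma}{s}\\\bar{p}_{wD}(s) &= \left[\frac{s}{K_0(\sqrt{s}) + \mathrm{skin}} + s^2 c_D\right]^{-1}\end{align}$$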
###Code
@lru_cache(maxsize=1024)
def lap_unit(s: float, a: float):
# unit step transform
return 1 / s * mp.exp(-a * s)
@lru_cache(maxsize=1024)
def lap_ln(s: float):
# log transform
return -mp.log(s) / s - 0.577216 / s
@lru_cache(maxsize=1024)
def lap_rad(s: float, cD: float = 0.0, skin: float = 0.0):
# infinite-acting radial flow transform
b = mp.besselk(0, mp.sqrt(s))
return 1.0 / (s / (b + skin) + (s ** 2) * cD)
###Output
_____no_output_____
###Markdown
Unit Step Function M = 8
###Code
time = 10 ** np.linspace(-1, 1, 101)
a = 1.0
M = 8
inv1 = gwr(lambda s: lap_unit(s, a), time, M)
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
ax.plot(time, inv1, 'o', c='C3', ms=10, mfc='w', label=r'Unit Step GWR')
ax.set(xscale='log', xlim=(1e-1, 1e1), ylim=(-1, 2))
ax.grid()
ax.grid(which='minor', axis='x')
ax.legend()
plt.title(r'GWR Inversion of the Unit Step Function, M=8')
plt.savefig('Unit_Step_M_8.png', dpi=100, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
M = 128
###Code
M = 128
inv2 = gwr(lambda s: lap_unit(s, a), time, M)
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
ax.plot(time, inv2, 'o', c='C3', ms=10, mfc='w', label=r'Unit Step GWR')
ax.set(xscale='log', xlim=(1e-1, 1e1), ylim=(-1, 2))
ax.grid()
ax.grid(which='minor', axis='x')
ax.legend()
plt.title(r'GWR Inversion of the Unit Step Function, M=128')
plt.savefig('Unit_Step_M_128.png', dpi=100, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Unit Step Comparison
###Code
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
ax.plot(time, inv1, '^', c='C3', ms=7, mfc='w', mew=1.5, label=r'$M=8$')
ax.plot(time, inv2, 'o', c='C0', ms=7, mfc='w', mew=1.5, label=r'$M=128$')
ax.set(xscale='log', xlim=(1e-1, 1e1), ylim=(-1, 2))
ax.grid()
ax.grid(which='minor', axis='x')
ax.legend()
plt.title(r'GWR Inversion of the Unit Step Function')
plt.savefig('Unit_Step_comparison.png', dpi=100, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Ln[t]
###Code
time = 10 ** np.linspace(-2, 5, 31)
inv = gwr(lap_ln, time, 4)
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
ax.plot(time, np.log(time), c='k', label=r'$Ln \, t$')
ax.plot(time, inv, 'o', c='C3', ms=10, mfc='w', label=r'$Ln \, t$ GWR')
ax.set(xscale='log', xlim=(1e-2, 1e4), ylim=(-5, 10))
ax.set_yticks(range(-5, 11))
ax.grid()
ax.grid(which='minor', axis='x')
ax.legend()
plt.title(r'GWR Inversion of $\frac{ln \, s}{s} - \frac{0.577216}{s}$ $[ln \, t]$')
plt.savefig('Ln_t.png', dpi=100, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Infinite-Acting Radial Flow System
###Code
time = 10 ** np.linspace(1, 7, 31)
cD = 1e3
M = 8
inv = gwr(lambda s: lap_rad(s, cD=cD), time, M)
d1_inv = time * gwr(lambda s: s * lap_rad(s, cD=cD), time, M)
d2_inv = time * time * np.abs(gwr(lambda s: s * s * lap_rad(s, cD=cD), time, M))
int_inv = 1 / time * gwr(lambda s: 1 / s * lap_rad(s, cD=cD), time, M)
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
ax.plot(time, inv, 'o', c='C3', ms=10, mfc='w', label='$p_{wD}$ GWR')
ax.plot(time, d1_inv, 'o', c='C0', ms=10, mfc='w', label='$p^\prime_{wD}$ GWR')
ax.plot(time, d2_inv, '^', c='xkcd:golden', ms=10, mfc='w', label='$p^{\prime\prime}_{wD}$ GWR')
ax.plot(time, int_inv, 'd', c='C2', ms=10, mfc='w', label='$\int{p_{wD}}$ GWR')
ax.set(xscale='log', yscale='log', xlim=(1e1, 1e7))#, ylim=(1e-3, 1e1))
ax.grid()
ax.grid(which='minor', axis='x')
ax.legend()
ax.set(xlabel=r'${t_D} \, / \, {C_D}$ and Effective Shut-In Time Match')
ax.set(ylabel=r'$p_{wD}$, $p^{\prime}_{wD}$, and Wellbore Pressure Drop Match')
plt.title('GWR Inversion of Wellbore Storage and Skin Case\n(Infinite-Acting Radial Flow System)')
plt.savefig('Radial_Flow_WBS.png', dpi=100, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
MIXUP
###Code
alpha_ = 0.4
# def mixup_data(x, y, alpha=alpha_, use_cuda=True):
# if alpha > 0:
# lam = np.random.beta(alpha, alpha)
# else:
# lam = 1
# batch_size = x.size()[0]
# if use_cuda:
# index = torch.randperm(batch_size).cuda()
# else:
# index = torch.randperm(batch_size)
# mixed_x = lam * x + (1 - lam) * x[index, :]
# y_a, y_b = y, y[index]
# return mixed_x, y_a, y_b, lam
# def mixup_criterion(criterion, pred, y_a, y_b, lam):
# return lam * criterion(pred.float().cuda(), y_a.float().cuda()) + (1 - lam) * criterion(pred.float().cuda(), y_b.float().cuda())
def mixup_data(x, y, alpha=1.0, use_cuda=True):
'''Returns mixed inputs, pairs of targets, and lambda'''
if alpha > 0:
lam = np.random.beta(alpha, alpha)
else:
lam = 1
batch_size = x.size()[0]
if use_cuda:
index = torch.randperm(batch_size).cuda()
else:
index = torch.randperm(batch_size)
mixed_x = lam * x + (1 - lam) * x[index, :]
# print(y)
y_a, y_b = y, y[index]
return mixed_x, y_a, y_b, lam
def mixup_criterion(criterion, pred, y_a, y_b, lam):
return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)
class ConcatDataset(torch.utils.data.Dataset):
def __init__(self, *datasets):
self.datasets = datasets
def __getitem__(self, i):
return tuple(d[i] for d in self.datasets)
def __len__(self):
return min(len(d) for d in self.datasets)
plt.ion() # interactive mode
EO_data_transforms = {
'Training': transforms.Compose([
transforms.Grayscale(num_output_channels=3),
transforms.Resize((30,30)),
AutoAugment(),
Cutout(),
# transforms.RandomRotation(15,),
# transforms.RandomResizedCrop(30),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor(),
transforms.Normalize([0.2913437], [0.12694514])
#transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
# transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
#transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276])
]),
'Test': transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize(30),
transforms.ToTensor(),
transforms.Normalize([0.2913437], [0.12694514])
#transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
# transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
# transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276])
]),
'valid_EO': transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize((30,30)),
# AutoAugment(),
# transforms.RandomRotation(15,),
# transforms.RandomResizedCrop(48),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.2913437], [0.12694514])
# transforms.Grayscale(num_output_channels=1),
# transforms.Resize(48),
# transforms.ToTensor(),
# transforms.Normalize([0.5], [0.5])
# transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
# transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276])
]),
}
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'Training': transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize((52,52)),
transforms.RandomRotation(15,),
transforms.RandomResizedCrop(48),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.4062625], [0.12694514])
#transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
# transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
#transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276])
]),
'Test': transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize(48),
transforms.ToTensor(),
transforms.Normalize([0.4062625], [0.12694514])
#transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
# transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
# transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276])
]),
'valid': transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize((52,52)),
transforms.RandomRotation(15,),
transforms.RandomResizedCrop(48),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.4062625], [0.12694514])
# transforms.Grayscale(num_output_channels=1),
# transforms.Resize(48),
# transforms.ToTensor(),
# transforms.Normalize([0.5], [0.5])
# transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
# transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276])
]),
}
# data_dir = '/mnt/sda1/cvpr21/Classification/ram'
# EO_image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
# EO_data_transforms[x])
# for x in ['Training', 'Test']}
# EO_dataloaders = {x: torch.utils.data.DataLoader(EO_image_datasets[x], batch_size=256,
# shuffle=True, num_workers=64, pin_memory=True)
# for x in ['Training', 'Test']}
# EO_dataset_sizes = {x: len(EO_image_datasets[x]) for x in ['Training', 'Test']}
# EO_class_names = EO_image_datasets['Training'].classes
# image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
# data_transforms[x])
# for x in ['Training', 'Test']}
# combine_dataset = ConcatDataset(EO_image_datasets, image_datasets)
# dataloaders = {x: torch.utils.data.DataLoader(combine_dataset[x], batch_size=256,
# shuffle=True, num_workers=64, pin_memory=True)
# for x in ['Training', 'Test']}
# dataset_sizes = {x: len(image_datasets[x]) for x in ['Training', 'Test']}
# class_names = image_datasets['Training'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# def imshow(inp, title=None):
# """Imshow for Tensor."""
# inp = inp.numpy().transpose((1, 2, 0))
# # mean = np.array([0.1786, 0.4739, 0.5329])
# # std = np.array([[0.0632, 0.1361, 0.0606]])
# # inp = std * inp + mean
# inp = np.clip(inp, 0, 1)
# plt.imshow(inp)
# if title is not None:
# plt.title(title)
# plt.pause(0.001) # pause a bit so that plots are updated
# # Get a batch of training data
# EO_inputs, EO_classes = next(iter(EO_dataloaders['Training']))
# inputs, classes, k ,_= next(iter(dataloaders))
# # Make a grid from batch
# EO_out = torchvision.utils.make_grid(EO_inputs)
# out = torchvision.utils.make_grid(inputs)
# imshow(EO_out, title=[EO_class_names[x] for x in classes])
# imshow(out, title=[class_names[x] for x in classes])
from torch.utils import data
from tqdm import tqdm
from PIL import Image
output_dim = 10
class SAR_EO_Combine_Dataset(data.Dataset):
def __init__(self,df_sar,dirpath_sar,transform_sar,df_eo=None,dirpath_eo=None,transform_eo=None,test = False):
self.df_sar = df_sar
self.test = test
self.dirpath_sar = dirpath_sar
self.transform_sar = transform_sar
self.df_eo = df_eo
# self.test = test
self.dirpath_eo = dirpath_eo
self.transform_eo = transform_eo
#image data
# if not self.test:
# self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]+'.png')
# else:
# self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0])
# #labels data
# if not self.test:
# self.label_df = self.df.iloc[:,1]
# Calculate length of df
self.data_len = len(self.df_sar.index)
def __len__(self):
return self.data_len
def __getitem__(self, idx):
image_name_sar = self.df_sar.img_name[idx]
image_name_sar = os.path.join(self.dirpath_sar, image_name_sar)
img_sar = Image.open(image_name_sar)#.convert('RGB')
img_tensor_sar = self.transform_sar(img_sar)
image_name_eo = self.df_eo.img_name[idx]
image_name_eo = os.path.join(self.dirpath_eo, image_name_eo)
img_eo = Image.open(image_name_eo)#.convert('RGB')
img_tensor_eo = self.transform_eo(img_eo)
# image_name = self.df.img_name[idx]
# img = Image.open(image_name)#.convert('RGB')
# img_tensor = self.transform(img)
if not self.test:
image_labels = int(self.df_sar.class_id[idx])
# label_tensor = torch.zeros((1, output_dim))
# for label in image_labels.split():
# label_tensor[0, int(label)] = 1
image_label = torch.tensor(image_labels,dtype= torch.long)
image_label = image_label.squeeze()
image_labels_eo = int(self.df_eo.class_id[idx])
# label_tensor_eo = torch.zeros((1, output_dim))
# for label_eo in image_labels_eo.split():
# label_tensor_eo[0, int(label_eo)] = 1
image_label_eo = torch.tensor(image_labels_eo,dtype= torch.long)
image_label_eo = image_label_eo.squeeze()
# print(image_label_eo)
return (img_tensor_sar,image_label), (img_tensor_eo, image_label_eo)
return (img_tensor_sar)
class SAR_EO_Combine_Dataset2(data.Dataset):
def __init__(self,df_sar,dirpath_sar,transform_sar,df_eo=None,dirpath_eo=None,transform_eo=None,test = False):
self.df_sar = df_sar
self.test = test
self.dirpath_sar = dirpath_sar
self.transform_sar = transform_sar
self.df_eo = df_eo
# self.test = test
self.dirpath_eo = dirpath_eo
self.transform_eo = transform_eo
#image data
# if not self.test:
# self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]+'.png')
# else:
# self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0])
# #labels data
# if not self.test:
# self.label_df = self.df.iloc[:,1]
# Calculate length of df
self.data_len = len(self.df_sar.index)
def __len__(self):
return self.data_len
def __getitem__(self, idx):
image_name_sar = self.df_sar.img_name[idx]
image_name_sar = os.path.join(self.dirpath_sar, image_name_sar)
img_sar = Image.open(image_name_sar)#.convert('RGB')
img_tensor_sar = self.transform_sar(img_sar)
image_name_eo = self.df_eo.img_name[idx]
image_name_eo = os.path.join(self.dirpath_eo, image_name_eo)
img_eo = Image.open(image_name_eo)#.convert('RGB')
img_tensor_eo = self.transform_eo(img_eo)
# image_name = self.df.img_name[idx]
# img = Image.open(image_name)#.convert('RGB')
# img_tensor = self.transform(img)
if not self.test:
image_labels = int(self.df_sar.class_id[idx])
# label_tensor = torch.zeros((1, output_dim))
# for label in image_labels.split():
# label_tensor[0, int(label)] = 1
image_label = torch.tensor(image_labels,dtype= torch.long)
image_label = image_label.squeeze()
image_labels_eo = int(self.df_eo.class_id[idx])
# label_tensor_eo = torch.zeros((1, output_dim))
# for label_eo in image_labels_eo.split():
# label_tensor_eo[0, int(label_eo)] = 1
image_label_eo = torch.tensor(image_labels_eo,dtype= torch.long)
image_label_eo = image_label_eo.squeeze()
# print(image_label_eo)
return (img_tensor_sar,image_label), (img_tensor_eo, image_label_eo)
return (img_tensor_sar)
model = torch.load("../KD1/resnet34_kd114.pt")
import pandas as pd
from torch.utils import data
from tqdm import tqdm
from PIL import Image
class ImageData(data.Dataset):
def __init__(self,df,dirpath,transform,test = False):
self.df = df
self.test = test
self.dirpath = dirpath
self.conv_to_tensor = transform
#image data
if not self.test:
self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]+'.png')
else:
self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0])
#labels data
if not self.test:
self.label_df = self.df.iloc[:,1]
# Calculate length of df
self.data_len = len(self.df.index)
def __len__(self):
return self.data_len
def __getitem__(self, idx):
image_name = self.image_arr[idx]
img = Image.open(image_name)#.convert('RGB')
img_tensor = self.conv_to_tensor(img)
if not self.test:
image_labels = self.label_df[idx]
label_tensor = torch.zeros((1, output_dim))
for label in image_labels.split():
label_tensor[0, int(label)] = 1
image_label = torch.tensor(label_tensor,dtype= torch.float32)
return (img_tensor,image_label.squeeze())
return (img_tensor)
BATCH_SIZE = 1
test_dir = "./data/test" ## Change it to the test file path
test_dir_ls = os.listdir(test_dir)
test_dir_ls.sort()
test_df = pd.DataFrame(test_dir_ls)
test_dataset = ImageData(test_df,test_dir,EO_data_transforms["valid_EO"],test = True)
test_loader = data.DataLoader(dataset=test_dataset,batch_size=BATCH_SIZE,shuffle=False)
output_dim = 10
DISABLE_TQDM = False
predictions = np.zeros((len(test_dataset), output_dim))
i = 0
for test_batch in tqdm(test_loader,disable = DISABLE_TQDM):
test_batch = test_batch.to(device)
batch_prediction = model(test_batch).detach().cpu().numpy()
predictions[i * BATCH_SIZE:(i+1) * BATCH_SIZE, :] = batch_prediction
i+=1
###Output
100%|██████████| 826/826 [00:15<00:00, 53.29it/s]
###Markdown
submission balance for class 0
###Code
m = nn.Softmax(dim=1)
predictions_tensor = torch.from_numpy(predictions)
output_softmax = m(predictions_tensor)
# output_softmax = output_softmax/output_softmax.sum()
pred = np.argmax(predictions,axis = 1)
plot_ls = []
idx = 0
for each_pred in pred:
if each_pred == 0:
plot_ls.append(output_softmax[idx][0].item())
idx+=1
# plot_ls
# idx = 0
# # print(output_softmax)
# for i in pred:
# # print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
# if i == 0:
# new_list = set(predictions[idx])
# new_list.remove(max(new_list))
# index = predictions[idx].tolist().index(max(new_list))
# # index = predictions[idx].index()
# # print(index)
# idx+=1
import matplotlib.pyplot as plt
plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8)
plot_ls.sort()
val = plot_ls[-85]
print(val)
plt.vlines(val, ymin = 0, ymax = 22, colors = 'r')
# print(output_softmax)
idx = 0
counter = 0
for i in pred:
# print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
if i == 0 and output_softmax[idx][0] < val:
new_list = set(predictions[idx])
new_list.remove(max(new_list))
index = predictions[idx].tolist().index(max(new_list))
# index = predictions[idx].index()
# print(index)
pred[idx] = index
output_softmax[idx][0] = -100.0
counter += 1
idx+=1
print(counter)
###Output
374
###Markdown
submission balance for class 1
###Code
plot_ls = []
idx = 0
for each_pred in pred:
if each_pred == 1:
plot_ls.append(output_softmax[idx][1].item())
idx+=1
# plot_ls
# idx = 0
# # print(output_softmax)
# for i in pred:
# # print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
# if i == 0:
# new_list = set(predictions[idx])
# new_list.remove(max(new_list))
# index = predictions[idx].tolist().index(max(new_list))
# # index = predictions[idx].index()
# # print(index)
# idx+=1
import matplotlib.pyplot as plt
plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8)
plot_ls.sort()
val = plot_ls[-85]
print(val)
plt.vlines(val, ymin = 0, ymax = 22, colors = 'r')
# print(output_softmax)
idx = 0
counter = 0
for i in pred:
# print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
if i == 1 and output_softmax[idx][1] < val:
new_list = set(output_softmax[idx])
new_list.remove(max(new_list))
index = output_softmax[idx].tolist().index(max(new_list))
# index = predictions[idx].index()
# print(index)
pred[idx] = index
output_softmax[idx][1] = -100.0
counter += 1
idx+=1
print(counter)
###Output
154
###Markdown
submission balance for class 2
###Code
plot_ls = []
idx = 0
for each_pred in pred:
if each_pred == 2:
plot_ls.append(output_softmax[idx][2].item())
idx+=1
# plot_ls
# idx = 0
# # print(output_softmax)
# for i in pred:
# # print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
# if i == 0:
# new_list = set(predictions[idx])
# new_list.remove(max(new_list))
# index = predictions[idx].tolist().index(max(new_list))
# # index = predictions[idx].index()
# # print(index)
# idx+=1
import matplotlib.pyplot as plt
plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8)
plot_ls.sort()
val = plot_ls[-85]
print(val)
plt.vlines(val, ymin = 0, ymax = 22, colors = 'r')
# print(output_softmax)
idx = 0
counter = 0
for i in pred:
# print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
if i == 2 and output_softmax[idx][2] < val:
new_list = set(output_softmax[idx])
new_list.remove(max(new_list))
index = output_softmax[idx].tolist().index(max(new_list))
# index = predictions[idx].index()
# print(index)
pred[idx] = index
output_softmax[idx][2] = -100.0
counter += 1
idx+=1
print(counter)
###Output
63
###Markdown
submission balance for class 3
###Code
plot_ls = []
idx = 0
for each_pred in pred:
if each_pred == 3:
plot_ls.append(output_softmax[idx][3].item())
idx+=1
# plot_ls
# idx = 0
# # print(output_softmax)
# for i in pred:
# # print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
# if i == 0:
# new_list = set(predictions[idx])
# new_list.remove(max(new_list))
# index = predictions[idx].tolist().index(max(new_list))
# # index = predictions[idx].index()
# # print(index)
# idx+=1
import matplotlib.pyplot as plt
plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8)
plot_ls.sort()
val = plot_ls[-85]
print(val)
plt.vlines(val, ymin = 0, ymax = 22, colors = 'r')
# print(output_softmax)
idx = 0
counter = 0
for i in pred:
# print(predictions_tensor[idx])
# each_output_softmax = output_softmax[idx]/output_softmax[idx].sum()
# print(each_output_softmax)
if i == 3 and output_softmax[idx][3] < val:
new_list = set(output_softmax[idx])
new_list.remove(max(new_list))
index = output_softmax[idx].tolist().index(max(new_list))
# index = predictions[idx].index()
# print(index)
pred[idx] = index
output_softmax[idx][3] = -100.0
counter += 1
idx+=1
print(counter)
# pred = np.argmax(predictions,axis = 1)
pred_list = []
for i in range(len(pred)):
result = [pred[i]]
pred_list.append(result)
pred_list
predicted_class_idx = pred_list
test_df['class_id'] = predicted_class_idx
test_df['class_id'] = test_df['class_id'].apply(lambda x : ' '.join(map(str,list(x))))
test_df = test_df.rename(columns={0: 'image_id'})
test_df['image_id'] = test_df['image_id'].apply(lambda x : x.split('.')[0])
test_df
for (idx, row) in test_df.iterrows():
row.image_id = row.image_id.split("_")[1]
for k in range(10):
i = 0
for (idx, row) in test_df.iterrows():
if row.class_id == str(k):
i+=1
print(i)
test_df
test_df.to_csv('results.csv',index = False)
###Output
_____no_output_____
###Markdown
Create tables and run the ETL
###Code
%run ../src/create_tables.py
%run ../src/etl.py
###Output
71 files found in /data/song_data
1/71 files processed.
2/71 files processed.
3/71 files processed.
4/71 files processed.
5/71 files processed.
6/71 files processed.
7/71 files processed.
8/71 files processed.
9/71 files processed.
10/71 files processed.
11/71 files processed.
12/71 files processed.
13/71 files processed.
14/71 files processed.
15/71 files processed.
16/71 files processed.
17/71 files processed.
18/71 files processed.
19/71 files processed.
20/71 files processed.
21/71 files processed.
22/71 files processed.
23/71 files processed.
24/71 files processed.
25/71 files processed.
26/71 files processed.
27/71 files processed.
28/71 files processed.
29/71 files processed.
30/71 files processed.
31/71 files processed.
32/71 files processed.
33/71 files processed.
34/71 files processed.
35/71 files processed.
36/71 files processed.
37/71 files processed.
38/71 files processed.
39/71 files processed.
40/71 files processed.
41/71 files processed.
42/71 files processed.
43/71 files processed.
44/71 files processed.
45/71 files processed.
46/71 files processed.
47/71 files processed.
48/71 files processed.
49/71 files processed.
50/71 files processed.
51/71 files processed.
52/71 files processed.
53/71 files processed.
54/71 files processed.
55/71 files processed.
56/71 files processed.
57/71 files processed.
58/71 files processed.
59/71 files processed.
60/71 files processed.
61/71 files processed.
62/71 files processed.
63/71 files processed.
64/71 files processed.
65/71 files processed.
66/71 files processed.
67/71 files processed.
68/71 files processed.
69/71 files processed.
70/71 files processed.
71/71 files processed.
30 files found in /data/log_data
1/30 files processed.
2/30 files processed.
3/30 files processed.
4/30 files processed.
5/30 files processed.
6/30 files processed.
7/30 files processed.
8/30 files processed.
9/30 files processed.
10/30 files processed.
11/30 files processed.
12/30 files processed.
13/30 files processed.
14/30 files processed.
15/30 files processed.
16/30 files processed.
17/30 files processed.
18/30 files processed.
19/30 files processed.
20/30 files processed.
21/30 files processed.
22/30 files processed.
23/30 files processed.
24/30 files processed.
25/30 files processed.
26/30 files processed.
27/30 files processed.
28/30 files processed.
29/30 files processed.
30/30 files processed.
###Markdown
Connect to the sparkifyDB and run basic queries
###Code
%load_ext sql
%sql postgresql://student:student@postgresDb/sparkifydb
%sql SELECT 'songplays' as TABLE_NAME, COUNT(*) FROM songplays \
UNION ALL \
SELECT 'songs' as TABLE_NAME, COUNT(*) FROM songs \
UNION ALL \
SELECT 'artists' as TABLE_NAME, COUNT(*) FROM artists \
UNION ALL \
SELECT 'users' as TABLE_NAME, COUNT(*) FROM users \
UNION ALL \
SELECT 'songplays_song_id' as TABLE_NAME, COUNT(*) FROM songplays WHERE songplays.song_id is not null \
UNION ALL \
SELECT 'songplays_artist_id' as TABLE_NAME, COUNT(*) FROM songplays WHERE songplays.artist_id is not null
%sql SELECT * FROM songplays LIMIT 5;
%sql SELECT * FROM songs LIMIT 5;
%sql SELECT * FROM users LIMIT 5;
%sql SELECT * FROM artists LIMIT 5;
%sql select COUNT(*) from songs s join artists a on s.artist_id = a.artist_id
%sql SELECT * FROM time LIMIT 5;
###Output
* postgresql://student:***@postgresDb/sparkifydb
5 rows affected.
###Markdown
House Prices: Advanced Regression Techniques
###Code
import numpy as np
import pandas as pd
import autopreprocessing as ap
df = pd.read_csv("train.csv",low_memory=False)
x = df.drop(["SalePrice"],axis=1)
y = df["SalePrice"]
x , missing_val = ap.processing_missing(x)
missing_val
x, cat_dict = ap.proc_category(x,max_cardi=5)
cat_dict
x.info()
from sklearn.ensemble import RandomForestRegressor
m = RandomForestRegressor(n_jobs=-1)
m.fit(x,y)
m.score(x,y)
###Output
C:\Users\ASUS\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
Test: Decision Tree
###Code
## YOU MUST PASTE YOUR OWN ROUTE INFORMATION HERE
my_route = 'https://fraud-detection-git-test2-fady-missiha-dev.apps.rhods-sb-prod.3sox.p1.openshiftapps.com/'
## YOU MUST PASTE YOUR OWN ROUTE INFORMATION HERE
import requests
test_fraud_Tran = '{ "CASH_OUT": 0.0, "amount": 181.0, "oldbalanceOrg": 181.0, "newbalanceOrig": 0.0, "oldbalanceDest": 0.0, "newbalanceDest": 0.0 }' # fraud
response = requests.post(my_route + '/predictions', test_fraud_Tran)
response.json()
test_non_fraud_Tran = '{ "CASH_OUT": 1.0, "amount": 229133.94, "oldbalanceOrg": 15325.0, "newbalanceOrig": 0.0, "oldbalanceDest": 5083.0, "newbalanceDest": 51513.44 }' # non-fraud
response = requests.post(my_route + '/predictions', test_non_fraud_Tran)
response.json()
test_non_fraud_Tran = '{ "CASH_OUT": 0.0, "amount": 229133.94, "oldbalanceOrg": 15325.0, "newbalanceOrig": 0.0, "oldbalanceDest": 5083.0, "newbalanceDest": 51513.44 }' # non-fraud
response = requests.post(my_route + '/predictions', test_non_fraud_Tran)
response.json()
###Output
_____no_output_____
###Markdown
RandomForest Test
###Code
test_fraud_Tran = '{ "CASH_OUT": 0.0, "amount": 181.0, "oldbalanceOrg": 181.0, "newbalanceOrig": 0.0, "oldbalanceDest": 0.0, "newbalanceDest": 0.0 }' # fraud
response = requests.post(my_route + '/predictionsRF', test_fraud_Tran)
response.json()
test_non_fraud_Tran = '{ "CASH_OUT": 1.0, "amount": 229133.94, "oldbalanceOrg": 15325.0, "newbalanceOrig": 0.0, "oldbalanceDest": 5083.0, "newbalanceDest": 51513.44 }' # non-fraud
response = requests.post(my_route + '/predictionsRF', test_non_fraud_Tran)
response.json()
test_non_fraud_Tran = '{ "CASH_OUT": 0.0, "amount": 229133.94, "oldbalanceOrg": 15325.0, "newbalanceOrig": 0.0, "oldbalanceDest": 5083.0, "newbalanceDest": 51513.44 }' # non-fraud
response = requests.post(my_route + '/predictionsRF', test_non_fraud_Tran)
response.json()
###Output
_____no_output_____
###Markdown
speed test
###Code
import pyarrow as pa
import numpy as np
from arrow_ext import ext
import pyarrow.compute as pc
b = pa.array(np.random.randint(0,5000000,10000000), type=pa.int32())
c = pc.cast(b, pa.string())
from arrow_ext import ext
ext.g
import pyarrow as pa
import numpy as np
from arrow_ext import ext
import pyarrow.compute as pc
b = pa.array(np.random.randint(0,50000,100000), type=pa.int32())
c = pc.cast(b, pa.string())
%%time
f = ext.duplicatesFilter2(c)
%%time
f1 = ext.getUniqueRowIndex2(c,c)
f1
f1 = ext.getUniqueRowIndex(c)
f1
f1
%%time
f2 = ext.getUniqueRowIndex(c)
f2
###Output
_____no_output_____
###Markdown
MIOPY: Use cases
In this tutorial, we demonstrate how MIOPY can be used to study the microRNA/mRNA interaction from expression data. For this tutorial, we use the TCGA-LUAD dataset.
Use Case S1: MicroRNAs targeting immune modulators including PD-L1
We were interested in finding out which are the most important microRNAs regulating immune checkpoints in tumor cells.
Loading the example dataset
###Code
import miopy as mp
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
dfMir, dfRna, metadata = mp.load_dataset()
###Output
_____no_output_____
###Markdown
**We filtered to keep only primary tumor samples**.
###Code
dfExpr = mp.concat_matrix(dfMir,dfRna)
dfExpr = dfExpr.loc[metadata.query('sample_type == "PrimaryTumor"').index,:]
###Output
_____no_output_____
###Markdown
Run Correlation
In the use case from the publication, we used the Immune Checkpoint (ICBI) geneset, but here we reduce the number of genes to keep the computational time down. We can run all the methods with mp.all_methods; each method can also be run individually.
###Code
lGene = open("genesets/geneset_Immune checkpoints [ICBI].txt","r").read().split()
lGene[0:1]
res, pearson = mp.all_methods(dfExpr, lMirUser = None, lGeneUser = lGene[0:5]+["CD274"], n_core = 4, background = True, test = True)
###Output
Obtain Concat Gene and MIR
Loading dataset...
Classifier Rho
Classifier R
Classifier Tau
Background
###Markdown
As a result, the function returns a table with all the microRNA/mRNA pairs and the coefficient obtained by each method.
###Code
res.loc[res["P-Value"] < 0.05,:].sort_values("P-Value")
###Output
_____no_output_____
###Markdown
**Filtering the results**
Let's now run mp.FilterDF() to keep the most important microRNA/mRNA pairs. FilterDF allows filtering the pairs by the coefficients, the adjusted p-value, and/or the number of prediction tools that predict the interaction. In the publication, we used an FDR of 0.1 and a minimum of 10 prediction tools, as in the call below.
###Code
table, matrix = mp.FilterDF(table = res, matrix = pearson, join = "or", low_coef = -0.2, high_coef = 1, pval = 0.1, analysis = "Correlation", min_db = 10)
###Output
_____no_output_____
###Markdown
MIO implements the Borda ranking system, which uses all the metrics in the table to rank the microRNA/mRNA pairs from most to least relevant.
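As a rough illustration of the idea (hypothetical column names and values, not miopy's exact implementation), a Borda-style consensus ranks the pairs under each metric separately and sums the ranks:

```python
import pandas as pd

# Hypothetical per-method scores, one row per microRNA/mRNA pair
scores = pd.DataFrame({
    "R":   [-0.45, -0.20, -0.60],
    "Rho": [-0.40, -0.25, -0.55],
    "Tau": [-0.30, -0.15, -0.50],
}, index=["miR-a/GeneX", "miR-b/GeneY", "miR-c/GeneZ"])

# Rank within each metric (strongest negative correlation = rank 1),
# then sum the ranks; the smallest Borda total is the most relevant pair.
per_metric_ranks = scores.rank(axis=0, ascending=True)
borda_ranking = per_metric_ranks.sum(axis=1).rank(method="min").astype(int)
print(borda_ranking.sort_values())
```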
###Code
table[["Ranking","Mir","Gene"]].head()
###Output
_____no_output_____
###Markdown
Predict Target
MIO integrates a custom database built from a variety of target prediction tools. In MIO, target prediction can be done using only the 40 integrated prediction tools, or using gene expression data as well. In this example, we predict the microRNAs targeting CD274 (PD-L1) using the database alone, and then using the previous correlation results.
**Using only the 40 prediction tools**
###Code
table, matrix = mp.predict_target(lTarget = ["CD274",], min_db = 10)
table.sort_values("Number Prediction Tools", ascending=False).head()
###Output
_____no_output_____
###Markdown
**Using the correlation result**
###Code
table, matrix = mp.predict_target(table = res, matrix = None, lTarget = ["CD274",], lTools = None, method = "or", min_db = 5, low_coef = -0.2, high_coef = 1, pval = 0.1)
table.sort_values("Ranking").head()
###Output
_____no_output_____
###Markdown
Use Case S2: Genes involved in antigen processing and presentation by microRNAs
Deficient or downregulated genes of the antigen processing and presentation machinery have been associated with response prediction to cancer immunotherapy. In order to study which microRNAs are potentially able to downregulate the complete pathway, we perform a correlation analysis using a weighted expression score.
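Conceptually, the per-sample gene-set score can be thought of as an average of standardized expression across the set, which is then correlated with each microRNA. A minimal sketch with made-up arrays (miopy's own weighting may differ):

```python
import numpy as np
from scipy.stats import pearsonr, zscore

rng = np.random.default_rng(0)
gene_expr = rng.normal(size=(50, 20))  # samples x genes in the set (hypothetical)
mir_expr = rng.normal(size=50)         # one microRNA across the same samples (hypothetical)

# Module score: mean z-scored expression of the gene set per sample
module_score = zscore(gene_expr, axis=0).mean(axis=1)

r, p = pearsonr(mir_expr, module_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```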
###Code
lGene = open("genesets/geneset_Antigen Processig and Presentation [ImmPort].txt","r").read().split()
dfCor, dfPval, dfSetScore = mp.gene_set_correlation(dfExpr, lGene, GeneSetName = "Antigen Processig and Presentation [ImmPort]", lMirUser = None, n_core = 8)
###Output
_____no_output_____
###Markdown
gene_set_correlation returns 3 elements: the Pearson coefficients, the p-values, and the calculated module score for each sample and microRNA.
###Code
dfPval.columns = ["P.val"]
table = pd.concat([dfCor, dfPval], axis = 1)
table.sort_values("Antigen Processig and Presentation [ImmPort]").head()
###Output
_____no_output_____
###Markdown
Use Case S3: Identifying a microRNA signature predictive for survival
In the publication we used the TCGA-CRC dataset to identify microRNAs related to microsatellite instability. In this case, we are going to use TCGA-LUAD to predict survival (death status). This is only an example of how to use the function.
###Code
from miopy.feature_selection import feature_selection
data = pd.concat([dfMir.transpose(),metadata.loc[:,"event"]], axis = 1)
data = data.dropna()
top_feature, dAll, DictScore = feature_selection(data, k = 10, topk = 25, group = "event")
###Output
Loading dataset...
Classifier Random Forest
training: 1.0000, test: 0.6316
training: 1.0000, test: 0.6316
training: 1.0000, test: 0.5789
training: 1.0000, test: 0.5526
training: 1.0000, test: 0.6579
training: 1.0000, test: 0.6053
training: 1.0000, test: 0.5676
training: 1.0000, test: 0.6486
training: 1.0000, test: 0.5135
training: 1.0000, test: 0.5946
test mean: 0.5982
Classifier Logistic Regresion
training: 1.0000, test: 0.7105
training: 1.0000, test: 0.5526
training: 1.0000, test: 0.5263
training: 1.0000, test: 0.5000
training: 1.0000, test: 0.6579
training: 1.0000, test: 0.5526
training: 1.0000, test: 0.4865
training: 1.0000, test: 0.5405
training: 1.0000, test: 0.5405
training: 1.0000, test: 0.7297
test mean: 0.5797
Classifier Ridge Classfier
training: 1.0000, test: 0.7105
training: 1.0000, test: 0.6053
training: 1.0000, test: 0.4211
training: 1.0000, test: 0.6316
training: 1.0000, test: 0.5526
training: 1.0000, test: 0.5789
training: 1.0000, test: 0.5135
training: 1.0000, test: 0.5676
training: 1.0000, test: 0.6216
training: 1.0000, test: 0.6216
test mean: 0.5824
Classifier Support Vector Machine Classfier
training: 1.0000, test: 0.6579
training: 1.0000, test: 0.5526
training: 1.0000, test: 0.5263
training: 1.0000, test: 0.5000
training: 1.0000, test: 0.5789
training: 1.0000, test: 0.5263
training: 1.0000, test: 0.5405
training: 1.0000, test: 0.5676
training: 1.0000, test: 0.5405
training: 1.0000, test: 0.7027
test mean: 0.5693
Classifier Ada Classifier
training: 1.0000, test: 0.6842
training: 1.0000, test: 0.5789
training: 1.0000, test: 0.5000
training: 1.0000, test: 0.5526
training: 1.0000, test: 0.6842
training: 1.0000, test: 0.6316
training: 1.0000, test: 0.3784
training: 1.0000, test: 0.4865
training: 1.0000, test: 0.4324
training: 1.0000, test: 0.5405
test mean: 0.5469
Classifier Bagging Classifier
training: 1.0000, test: 0.6053
training: 1.0000, test: 0.6579
training: 1.0000, test: 0.6053
training: 1.0000, test: 0.5789
training: 1.0000, test: 0.6842
training: 1.0000, test: 0.6053
training: 1.0000, test: 0.5676
training: 1.0000, test: 0.5676
training: 1.0000, test: 0.5676
training: 1.0000, test: 0.5405
test mean: 0.5980
Classifier Gradient Boosting Classifier
training: 1.0000, test: 0.6316
training: 1.0000, test: 0.7105
training: 1.0000, test: 0.5526
training: 1.0000, test: 0.5000
training: 1.0000, test: 0.7105
training: 1.0000, test: 0.6316
training: 1.0000, test: 0.5946
training: 1.0000, test: 0.5676
training: 1.0000, test: 0.5946
training: 1.0000, test: 0.5676
test mean: 0.6061
###Markdown
The feature selection returns the top predictors that are most informative in separating the death status of the TCGA-LUAD patients. Now, we can use these predictors to train a model and see how robust these microRNAs are.
###Code
from miopy.classification import classification_cv
results = classification_cv(data, k = 5, name = "Random Forest", group = "event", lFeature = top_feature.index)
###Output
Loading dataset...
Classifier Random Forest
training: 1.0000, test: 0.7763
training: 1.0000, test: 0.6933
training: 1.0000, test: 0.6400
training: 1.0000, test: 0.6800
training: 1.0000, test: 0.5600
Test Mean: 0.6699
###Markdown
Use Case S4: MicroRNA target genes synthetic lethal to immune (therapy) essential genes
In order to identify synthetic lethal partner genes in tumor cells, we have taken advantage of previous efforts and used the ISLE algorithm for the calculation (Lee et al., 2018), which is available within MIO. We were interested in identifying microRNAs targeting genes which are synthetic lethal to immune (therapy) essential genes. We used the option Target Prediction, miRNA Synthetic Lethal Prediction. In addition, MIOPY can perform an over-representation analysis for microRNAs based on the number of synthetic lethal target genes compared to all potential target genes.
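The over-representation analysis can be read as a hypergeometric test per microRNA: out of all potential target genes, how surprising is the number of synthetic lethal genes among this microRNA's targets? A minimal sketch with hypothetical counts (not MIOPY's internal code):

```python
from scipy.stats import hypergeom

N = 2000  # all potential target genes considered (hypothetical)
K = 150   # of these, synthetic lethal partners of the immune essential genes (hypothetical)
n = 80    # genes targeted by one particular microRNA (hypothetical)
k = 15    # of those targets, how many are synthetic lethal (hypothetical)

# P(X >= k): chance of seeing at least k synthetic lethal genes among n random targets
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"ORA p-value: {p_value:.3g}")
```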
###Code
lGene = open("genesets/geneset_Immune essential genes [Patel].txt","r").read().split()
target, matrix, ora = mp.predict_lethality2(lQuery = lGene, lTools = None, method = "or", min_db = 25)
target.sort_values("Number Prediction Tools", ascending=False).head()
ora.sort_values("FDR").head()
###Output
_____no_output_____
###Markdown
Test of LINE Notify Bot Libraries
###Code
import requests
###Output
_____no_output_____
###Markdown
Setting
Access token can be obtained from [LINE Notify](https://notify-bot.line.me/ja/).
###Code
access_token = 'Your Access Token'
# when access token is written in access_token.txt
with open('access_token.txt', 'r') as fin:
access_token = fin.read().strip()
headers = {'Authorization': 'Bearer ' + access_token}
url = "https://notify-api.line.me/api/notify"
###Output
_____no_output_____
###Markdown
Send Message
###Code
message = 'only message'
payload = {'message': message}
requests.post(url, headers=headers, params=payload)
###Output
_____no_output_____
###Markdown
Send Image
###Code
message = 'with image'
image = 'test.png' # png or jpg
payload = {'message': message}
files = {'imageFile': open(image, 'rb')}
requests.post(url, headers=headers, params=payload, files=files)
###Output
_____no_output_____
###Markdown
if message is None, image is not sent and no error occurs (but the status code of the response is 400).
###Code
payload = {'message': ''}
requests.post(url, headers=headers, params=payload, files=files)
###Output
_____no_output_____
###Markdown
Send Sticker
Sticker and its package IDs are chosen from: https://devdocs.line.me/files/sticker_list.pdf
###Code
payload = {
'message': 'with sticker',
'stickerPackageId': 1,
'stickerId': 13,
}
requests.post(url, headers=headers, params=payload)
###Output
_____no_output_____
###Markdown
if message is None, sticker is not sent and no error occurs (but the status code of the response is 400).
###Code
payload = {
'message': '',
'stickerPackageId': 1,
'stickerId': 13,
}
requests.post(url, headers=headers, params=payload, files=files)
###Output
_____no_output_____
###Markdown
if sticker and its package IDs do not exist, message is not sent and no error occurs
###Code
payload = {
'message': 'with sticker',
'stickerPackageId': 1,
'stickerId': 10000,
}
requests.post(url, headers=headers, params=payload, files=files)
###Output
_____no_output_____
###Markdown
Test of LINENotifyBot Module
From Command Line
###Code
!python ../line_notify_bot.py access_token.txt "test from command line" -i test.png -sp 1 -s 13
###Output
_____no_output_____
###Markdown
As Module
###Code
import sys; sys.path.append('../')
from line_notify_bot import LINENotifyBot
bot = LINENotifyBot(access_token=access_token)
bot.send(
message='test of module',
image='test.png', # png or jpg
sticker_package_id=1,
sticker_id=13,
)
###Output
_____no_output_____
###Markdown
Errors if message is None
###Code
bot.send(message='')
###Output
_____no_output_____
###Markdown
if message is not str
###Code
bot.send(message=1)
###Output
_____no_output_____
###Markdown
if image file does not exist
###Code
bot.send(
message='test of image file error',
image='images/test.png',
)
###Output
_____no_output_____
###Markdown
if sticker ID is input but package ID is not input
###Code
bot.send(
message='test of sticker error',
sticker_id=1,
)
###Output
_____no_output_____
###Markdown
if wrong sticker ID is input
###Code
bot.send(
message='test of sticker error',
sticker_package_id=1,
sticker_id=10000,
)
###Output
_____no_output_____
###Markdown
Notebook for testing purposes
Section 1
###Code
# this is a regular cell
a = 1
a
# this is a workflow cell
[10]
a = 100
s = "This is an Python 3 cell"
with open('test_output.txt', 'w') as out:
out.write('something')
%sosrun test
[test]
output: 'test.out'
sh:
echo "test line " >> ${output}
%sossave -f "test_wf.sos"
###Output
Workflow saved to test.sos
|
multi-model-endpoints/models/setup-model-1-torchserve.ipynb | ###Markdown
Torchserve Setup for the First Model
1. Download Model
###Code
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")
###Output
_____no_output_____
###Markdown
2. Save Locally
###Code
from pathlib import Path
import shutil
cur_dir = Path.cwd()
base_dir = cur_dir/'model-1'
model_dir = base_dir/'model'
code_dir = base_dir/'upload_code'
shutil.rmtree(str(model_dir))
tokenizer.save_pretrained(model_dir)
model.save_pretrained(model_dir)
###Output
_____no_output_____
###Markdown
3. Copy inference code inside the model directory
```
model/
|
|- config.json
|- pytorch_model.bin
|- special_tokens_map.json
|- tokenizer_config.json
|- vocab.txt
|- code/
   |- inference.py
```
###Code
shutil.copytree(code_dir, str(model_dir/'code'))
###Output
_____no_output_____
###Markdown
4. Install Dependencies
This is to keep the model decoupled from others
###Code
import subprocess
import os
###Output
_____no_output_____
###Markdown
Important! The python version used to install the dependencies must match the deployed image
###Code
my_env = os.environ.copy()
my_env['PYENV_VERSION'] = '3.6.10'
subprocess.check_call(["pyenv", "exec", "python", "-m", "pip", "install", "--target", str(model_dir/'code'), "transformers==4.2.2"], env=my_env)
###Output
_____no_output_____
###Markdown
5. Add Manifest File
###Code
subprocess.check_call([
"torch-model-archiver", "--model-name", "model",
"--handler", "torchserve/handler_service.py",
"--export-path", "model-1",
"--version", "1",
"--archive-format", "no-archive",
"-f"
])
###Output
_____no_output_____
###Markdown
6. Package model into a tar
###Code
import tarfile
with tarfile.open(base_dir/'model.tar.gz', 'w:gz') as tar:
tar.add(model_dir, arcname='.')
###Output
_____no_output_____
###Markdown
7. Upload model package to S3
###Code
import boto3
s3 = boto3.resource("s3")
s3_bucket = "mlops-mme-torchserve-deployment-bucket-validation"
bucket = s3.Bucket(s3_bucket)
bucket.upload_file(str(base_dir/'model.tar.gz'), 'model-1b.tar.gz')
###Output
_____no_output_____ |
notebooks/Hands-on Part II -- Estimating the Effect of Schooling on Wages - Complete.ipynb | ###Markdown
Estimating the Effect of Schooling on Wages: Instrumental Variables Application
Summary of Contents:
1. [Introduction](intro)
2. [NLSYM Dataset](data)
3. [A Gentle Start: The Naive Approach](naive)
4. [Using Instrumental Variables: 2SLS](2sls)
5. [Bonus: Deep Instrumental Variables](deepiv)
**Important:** This notebook is an end-to-end solution for this problem. If you are looking for a notebook with some room for experimentation, look for the same file name without the "Complete" suffix.
1. Introduction
To measure true causal effects of a treatment $T$ on an outcome $Y$ from observational data, we need to record all features $X$ that might influence both $T$ and $Y$. These $X$'s are called confounders. When some confounders are not recorded in the data, we might get biased estimates of the treatment effect. Here is an example:
* Children of high-income parents might attain higher levels of education (e.g. college) since they can afford it
* Children of high-income parents might also obtain better paying jobs due to parents' connections and knowledge
* At first sight, it might appear as if education has an effect on income, when in fact this could be fully explained by family background
There are several reasons for not recording all possible confounders, such as incomplete data or a confounder that is difficult to quantify (e.g. parental involvement). However, not all is lost! In cases such as these, we can use instrumental variables $Z$, features that affect the outcome only through their effect on the treatment. In this notebook, we use a real-world problem to show how treatment effects can be extracted with the help of instrumental variables.
2. NLSYM Dataset
The **causal impact of schooling on wages** has been studied at length. Although it is generally agreed that there is a positive impact, it is difficult to measure this effect precisely. The core problem is that education levels are not assigned at random in the population and we cannot record all possible confounders. (Think about the value parents assign to education. How would you quantify how valuable parents think their children's education is?) To get around this issue, we can use **proximity to a 4-year college** as an instrumental variable. Having a college nearby can allow individuals (especially low-income ones) to complete more years of education. Hence, if there was a positive treatment effect, we would expect these individuals to have higher wages on average. Note that college proximity is a valid IV since it does not affect wages directly. We use data from the National Longitudinal Survey of Young Men (NLSYM, 1966) to estimate the average treatment effect (ATE) of education on wages (see also [Card, 1999](https://www.nber.org/papers/w4483)). The NLSYM data contains entries from men ages 14-24 that were interviewed in 1966 and again in 1976. The dataset contains the following variables:
* $Y$ (outcome): wages (log)
* $T$ (treatment): years of schooling
* $Z$ (IV): proximity to a 4-year college (binary)
* $X$ (heterogeneity): e.g. parental education
* $W$ (controls): e.g. family composition, location, etc.
The world can then be modelled as:
$$\begin{align}Y & = \theta(X) \cdot T + f(W) + \epsilon\\T & = g(Z, W) + \eta\end{align}$$
where $\epsilon, \eta$ are uncorrelated error terms.
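To build intuition for why a valid instrument identifies the effect, consider the simplest constant-effect case with the controls $W$ partialled out; the exclusion restriction says $\operatorname{Cov}(Z, \epsilon) = 0$, so
$$\operatorname{Cov}(Z, Y) = \theta \operatorname{Cov}(Z, T) + \operatorname{Cov}(Z, \epsilon) \quad\Rightarrow\quad \theta = \frac{\operatorname{Cov}(Z, Y)}{\operatorname{Cov}(Z, T)}$$
With a binary instrument such as college proximity, this ratio is the Wald estimator: the difference in average log wages between the near-college and far-from-college groups, divided by the difference in their average years of schooling.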
###Code
# Python imports
import keras
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
# EconML imports
from econml.dml import DMLCateEstimator
from econml.two_stage_least_squares import NonparametricTwoStageLeastSquares
from econml.deepiv import DeepIVEstimator
%matplotlib inline
# Data processing
df = pd.read_csv("data/card.csv", dtype=float)
# Filter out individuals with low education levels (outliers)
data_filter = df['educ'].values >= 6
# Define some variables
T = df['educ'].values[data_filter]
Z = df['nearc4'].values[data_filter]
Y = df['lwage'].values[data_filter]
# Impute missing values with mean, add dummy columns
# Filter outliers (interviewees with less than 6 years of education)
X_df = df[['exper', 'expersq']].copy()
X_df['fatheduc'] = df['fatheduc'].fillna(value=df['fatheduc'].mean())
X_df['fatheduc_nan'] = df['fatheduc'].isnull() * 1
X_df['motheduc'] = df['motheduc'].fillna(value=df['motheduc'].mean())
X_df['motheduc_nan'] = df['motheduc'].isnull() * 1
X_df[['momdad14', 'sinmom14', 'reg661', 'reg662',
'reg663', 'reg664', 'reg665', 'reg666', 'reg667', 'reg668', 'reg669', 'south66']] = df[['momdad14', 'sinmom14',
'reg661', 'reg662','reg663', 'reg664', 'reg665', 'reg666', 'reg667', 'reg668', 'reg669', 'south66']]
X_df[['black', 'smsa', 'south', 'smsa66']] = df[['black', 'smsa', 'south', 'smsa66']]
columns_to_scale = ['fatheduc', 'motheduc', 'exper', 'expersq']
# Scale continuous variables
scaler = StandardScaler()
X_df[columns_to_scale] = scaler.fit_transform(X_df[columns_to_scale])
X = X_df.values[data_filter]
# Explore data
X_df.head()
###Output
_____no_output_____
###Markdown
3. A Gentle Start: The Naive Approach Let's assume we know nothing about instrumental variables and we want to measure the treatment effect of schooling on wages. We can apply an IV-free method like Double Machine Learning (DML) to do this and extract a treatment effect.
###Code
dml_est = DMLCateEstimator(model_y=RandomForestRegressor(n_estimators=100),
model_t=RandomForestRegressor(n_estimators=100))
dml_est.fit(Y, T, X)
dml_ate = dml_est.effect(X).mean()
print("Average treatment effect: {0:.3f}".format(dml_ate))
###Output
Average treatment effect: 0.065
###Markdown
This treatment effect is smaller than other values obtained in literature via IV. Why could that be? Because DML (like all IV-free methods) assumes that the residual errors are uncorrelated (i.e. $Y - \hat{Y}$ is uncorrelated with $T - \hat{T}$). Let's test this assumption:
###Code
# Split data in 2 parts for cross-fitting
# We do this to avoid over-fitting
T_res, Y_res = np.zeros(T.shape[0]), np.zeros(Y.shape[0])
kf = KFold(n_splits=2, shuffle=True)
for train_index, test_index in kf.split(X):
T_res[test_index] = T[test_index] - \
RandomForestRegressor(n_estimators=100).fit(X[train_index], T[train_index]).predict(X[test_index])
Y_res[test_index] = Y[test_index] - \
RandomForestRegressor(n_estimators=100).fit(X[train_index], Y[train_index]).predict(X[test_index])
plt.scatter(T_res, Y_res)
plt.show()
corr_coeficient = pearsonr(T_res, Y_res)[0]
print("Correlation coefficient between T and Y errors: {0:.2f}".format(corr_coeficient))
###Output
Correlation coefficient between T and Y errors: 0.30
###Markdown
The correlation coefficient between the residuals is quite large, which means that there is some unobserved variables that affect both $T$ and $Y$. To get an accurate estimate in this case, we need to use IVs. 4. Using Intrumental Variables: 2SLS Two stage least square regression procedure (2SLS):1. Fit a model $T \sim W, Z$2. Fit a linear model $Y \sim \hat{T}$ where $\hat{T}$ is the prediction of the model in step 1.The coefficient from 2. above is the average treatment effect.If interested in heterogeneous treatment effects, fit a model $Y \sim \hat{T}\otimes h(X)$, where $h(X)$ is a chosen featurization of the treatment effect. For more information, see the `econml` [documentation](https://econml.azurewebsites.net).
###Code
# For average treatment effects, X is a column of 1s
W = X
Z = Z.reshape(-1, 1)
T = T.reshape(-1, 1)
X_ate = np.ones_like(Z)
# We apply 2SLS from the EconML library
two_sls_est = NonparametricTwoStageLeastSquares(
t_featurizer=PolynomialFeatures(degree=1, include_bias=False),
x_featurizer=PolynomialFeatures(degree=1, include_bias=False),
z_featurizer=PolynomialFeatures(degree=1, include_bias=False),
dt_featurizer=None) # dt_featurizer only matters for marginal_effect
two_sls_est.fit(Y, T, X_ate, W, Z)
two_sls_ate = two_sls_est.effect(np.ones((1,1)))[0]
print("Average treatment effect: {0:.3f}".format(two_sls_ate))
###Output
Average treatment effect: 0.134
###Markdown
5. Bonus: Deep Instrumental Variables For very flexible, but fully non-parametric IV methods, you can use neural networks for the two models in 2SLS and a mixture of gaussians for the featurizer $h(X)$. In `econml`, this method is called DeepIV. The NLSYM dataset is small (on neural net scale) so applying DeepIV is a bit of a stretch. Nevertheless, we apply DeepIV the NLSYM data as an example. You should not read too much into the results.
###Code
# Define treatment model, T ~ X, Z
treatment_model = keras.Sequential([keras.layers.Dense(64, activation='relu', input_shape=(X.shape[1] + 1,)),
keras.layers.Dropout(rate=0.17),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dropout(rate=0.17),
keras.layers.Dense(1)])
# Define outcome model, Y ~ T_hat, X
response_model = keras.Sequential([keras.layers.Dense(64, activation='relu', input_shape=(X.shape[1] + 1,)),
keras.layers.Dropout(rate=0.17),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dropout(rate=0.17),
keras.layers.Dense(1)])
keras_fit_options = { "epochs": 30,
"validation_split": 0.3,
"callbacks": [keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)]}
deepIvEst = DeepIVEstimator(n_components = 10, # number of gaussians in our mixture density network
m = lambda z, x : treatment_model(keras.layers.concatenate([z,x])), # treatment model
h = lambda t, x : response_model(keras.layers.concatenate([t,x])), # response model
n_samples = 1, # number of samples to use to estimate the response
use_upper_bound_loss = False, # whether to use an approximation to the true loss
n_gradient_samples = 1, # number of samples to use in second estimate of the response (to make loss estimate unbiased)
optimizer='adam', # Keras optimizer to use for training - see https://keras.io/optimizers/
first_stage_options = keras_fit_options, # options for training treatment model
second_stage_options = keras_fit_options) # options for training response model
deepIvEst.fit(Y, T, X, Z)
deepIv_effect = deepIvEst.effect(X)
print("Average treatment effect: {0:.3f}".format(deepIv_effect.mean()))
# Heterogeneity of treatment effects
plt.hist(deepIv_effect)
plt.show()
###Output
_____no_output_____ |
Code/Sentiment Analysis.ipynb | ###Markdown
Make Predictions
###Code
# this code block converts text to integers for the machine to understand.
word_index = imdb.get_word_index()
def encode_text(text):
tokens = keras.preprocessing.text.text_to_word_sequence(text)
tokens = [word_index[word] if word in word_index else 0 for word in tokens]
return sequence.pad_sequences([tokens], MAXLEN)[0]
text = "that movie was just amazing, so amazing"
encoded = encode_text(text)
print(encoded)
# This code block converts integers back to text for us to understand.
reverse_word_index = {value: key for (key, value) in word_index.items()}
def decode_integers(integers):
PAD = 0
text = ""
for num in integers:
if num != PAD:
text += reverse_word_index[num] + " "
return text[:-1]
print(decode_integers(encoded))
#predictor
# The more positive a text/review, the higher the output number is. If the number is low, then it's a negative review/text.
def predict(text):
encoded_text = encode_text(text)
pred = np.zeros((1,250))
pred[0] = encoded_text
result = model.predict(pred)
print(result[0])
# To play around, you can add, modify, or delete words in a text and watch the output change.
positive_review = "That movie was awesome!I really loved it and would great watch it again because it was amazingly great"
predict(positive_review)
negative_review = "That movie really sucked. I hated it and wouldn't watch it again. Was one of the worst things I've ever watched"
predict(negative_review)
###Output
[0.8935995]
[0.34049943]
###Markdown
Table of Contents1 Setting up Environment2 Importing Dataset3 Extracting Comments Datasets4 Data Pre-processing5 Sentiment Analysis5.1 Calculating Comment Scores5.2 Calculating Post Scores5.3 Testing6 Exporting Dataset Setting up Environment
###Code
import pandas as pd
import numpy as np
from nltk.sentiment.vader import SentimentIntensityAnalyzer as SIA
###Output
_____no_output_____
###Markdown
Importing Dataset
###Code
filepath = "../Instagram/Cleaned data/"
# filename = "All_posts_2wikipedia.csv"
filename = "All_posts_3pareto.csv"
df = pd.read_csv(filepath + filename)
df
numposts = df.shape[0]
###Output
_____no_output_____
###Markdown
Extracting Comments Datasets
###Code
df["comments"][0]
comments = df["comments"][1039][2:-2].split('\', \'')
print(len(comments))
comments
df_comments = pd.DataFrame(columns=["ID", "comment"])
df["no. of scrapped comments"] = ""
for i in range(len(df)):
comments = df["comments"][i][2:-2].split('\', \'')
num_comments = len(comments)
df["no. of scrapped comments"][i] = num_comments
df_comments = df_comments.append(pd.DataFrame({"ID": [df["ID"][i]] * num_comments,
"comment": comments}))
df_comments
###Output
_____no_output_____
###Markdown
Data Pre-processingtodo: what are the data pre-processing steps required for sentiment analysis? Remove stop words and punctuation? Sentiment Analysis Calculating Comment ScoresSentiment scores are given to each comment in each post, ranging from -1 for very negative to +1 for very positive.
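###Markdown
On the pre-processing question above: VADER is usually applied to raw text, since punctuation, capitalization, and emoji carry sentiment signal, so heavy cleaning is often unnecessary. If stop word and punctuation removal were still desired, a minimal sketch could look like the cell below; it assumes the NLTK stopwords corpus is available (run `nltk.download('stopwords')` once if it is not).
###Code
# Hedged sketch of optional text cleaning; assumes the NLTK stopwords corpus
# has already been downloaded via nltk.download('stopwords').
import string
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))

def clean_text(text):
    # strip punctuation, lowercase, and drop stop words
    text = text.translate(str.maketrans('', '', string.punctuation)).lower()
    return " ".join(word for word in text.split() if word not in stop_words)

clean_text("Really enjoy watching your cooking....")
###Output
_____no_output_____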
###Code
def scores(s):
sia = SIA()
pol_score = sia.polarity_scores(s['comment'])
return s.append(pd.Series(list(pol_score.values()), index=pol_score.keys()))
df_comments = df_comments.apply(scores, axis=1)
df_comments
df_comments.describe()
df_comments[df_comments["ID"] == 1]
###Output
_____no_output_____
###Markdown
Calculating Post Scores
###Code
df["sentiment score"] = np.nan
df.astype({'sentiment score': 'f'}).dtypes
for i in range(len(df)):
df["sentiment score"][i] = df_comments[df_comments["ID"] == i]["compound"].mean()
df
df["sentiment score"].describe()
###Output
_____no_output_____
###Markdown
View dataset where "food" is labelled
###Code
df[~df["food"].isna()]
df[~df["food"].isna()]["sentiment score"].describe()
###Output
_____no_output_____
###Markdown
Testing
###Code
def scores_single(s):
sia = SIA()
pol_score = sia.polarity_scores(s)
return pol_score
for comment in comments:
print(comment)
print(scores_single(comment))
print()
###Output
😱😱🤤
{'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
Really enjoy watching your cooking....🧡
{'neg': 0.0, 'neu': 0.534, 'pos': 0.466, 'compound': 0.5413}
Yum! Looks amazing - thanks for the recipe 🙏
{'neg': 0.0, 'neu': 0.417, 'pos': 0.583, 'compound': 0.7901}
😍 Wow!!!
{'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.6884}
Goodness. The liao more than the rice! 😂😂
{'neg': 0.0, 'neu': 0.68, 'pos': 0.32, 'compound': 0.5093}
Kacang la
{'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
Wahhh super good👍 gonna try this recipe!!!
{'neg': 0.0, 'neu': 0.557, 'pos': 0.443, 'compound': 0.6981}
Nice
{'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.4215}
😍😍😍
{'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
Wow yummy
{'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.802}
Thank you for the recipe!
{'neg': 0.0, 'neu': 0.589, 'pos': 0.411, 'compound': 0.4199}
Wahh a lot of work... but indeed worth the hassle😋😋
{'neg': 0.0, 'neu': 0.773, 'pos': 0.227, 'compound': 0.3291}
😘😘😘
{'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
@leeen 😍
{'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
###Markdown
Exporting Dataset
###Code
df.to_csv(filepath + filename[:-4] + "_sentiments" + ".csv", index=False)
###Output
_____no_output_____ |
50_ode/20_Heun_Method.ipynb | ###Markdown
상미분방정식을 위한 훈의 방법Heun's Method for Ordinary Differential Equations 오일러법 사례 검토Review of Euler Method 다시 한번 다음 1계 미분 방정식을 생각해 보자.Once again, let's think about the following first order differential equation. $$\left\{ \begin{align} a_0 \frac{d}{dt}x(t)+a_1 x(t)&=0 \\ x(0)&=x_0 \\ \end{align}\right.$$ 알고있다시피, python 함수로는 다음과 같이 쓸 수 있다.We know that we can write it as a python function as follows.
###Code
a_0, a_1 = 2.0, 1.0
def dx_dt(t, x):
return - a_1 * x / a_0
###Output
_____no_output_____
###Markdown
훈의 방법Heun's Method 훈의 방법은 독일 수학자 칼 훈의 이름을 딴 것이다.Heun's Method is named after German mathematician Karl Heun. 훈의 방법은 다음 두 값의 평균을 $t_i \le t \le t_{i+1}$ 사이에서 대표적인 $\frac{d}{dt}x$ 값으로 가정한다.Heun's Method assumes the average of following two as the representative $\frac{d}{dt}x$ value within $t_i \le t \le t_{i+1}$ interval. $$ s_i=\frac{d}{dt}x\left(t_{i}\right) \\ s_{i+1}=\frac{d}{dt}x\left(t_{i+1}\right)$$ 그런데, $x(t_{i+1})$의 엄밀해를 알지 못하는 상태에서 어떻게 $s_{i+1}=\frac{d}{dt}x\left(t_{i+1}\right)$ 을 계산할 것인가?Now, how can we calculate $\frac{d}{dt}x=\left.\frac{d}{dt}x\right|_{t=t_{i+1}}$ without knowing the exact solution of $x(t)$? 오일러법으로 구한 $\left.x\right|_{t=t_{i+1}}$의 근사값 $\left.\hat{x}\right|_{t=t_{i+1}}$으로 $\frac{d}{dt}x=\left.\frac{d}{dt}x\right|_{t=t_{i+1}}$을 사용할 것이다.We would use $\frac{d}{dt}x=\left.\frac{d}{dt}x\right|_{t=t_{i+1}}$ using $\left.\hat{x}\right|_{t=t_{i+1}}$, the approximation of $\left.x\right|_{t=t_{i+1}}$ by the Euler Method. 요약 Summary $(t_i, x_i)$ 으로부터 $t_{i+1}$ 지점의 $x_{i+1}$을 구하는 과정을 비교해 보자.Let's compare the steps to find $x_{i+1}$ of $t_{i+1}$ from $(t_i, x_i)$. * 오일러법 Euler's method| Equation 수식 | Description 설명 ||:-------------------:|:-----------------------------------------------:|| $$s_i = f(t_i, x_i)$$ | $t_i$ 와 $x_i$ 로 $t_i$ 에서의 기울기 $s_i$를 계산Calculate slope $s_i$ at $t_i$ using $t_i$ and $x_i$ || $x_{i+1}=x_i + s_i \Delta t$ | $x_i$ 에서 출발하여 기울기 $s_i$를 따라 $\Delta t$ 만큼 전진하여 $x_{i+1}$를 결정Starting from $x_i$, follow slope $s_i$ forward by $\Delta t$ to decide $x_{i+1}$ | * 훈의 방법 (수정오일러법) Heun's method (Modified Euler's method)| Equation 수식 | Description 설명 ||:-------------------:|:-----------------------------------------------:|| $$s_i = f(t_i, x_i)$$ | $t_i$ 와 $x_i$ 로 $t_i$ 에서의 기울기 $s_i$를 계산Calculate slope $s_i$ at $t_i$ using $t_i$ and $x_i$ || $\hat{x}_{i+1}=x_i + s_i \Delta t$ | $x_i$ 에서 출발하여 기울기 $s_i$를 따라 $\Delta t$ 만큼 전진하여 임시로 $\hat{x}_{i+1}$를 결정Starting from $x_i$, follow slope $s_i$ forward by $\Delta t$ to temporarily decide $\hat{x}_{i+1}$ || $ \hat{s}_{i+1}=f(t_{i+1}, \hat{x}_{i+1}) $ | 임시로 찾은 $\hat{x}_{i+1}$ 를 이용하여 $t_{i+1}$ 에서의 기울기 $\hat{s}_{i+1}$ 를 추정Estimate slope $\hat{s}_{i+1}$ at $t_{i+1}$ using temporarily found $\hat{x}_{i+1}$ || $ s_{Heun} = \frac{1}{2}\left( s_i + \hat{s}_{i+1} \right) $ | $s_i$ 와 $\hat{s}_{i+1}$ 의 평균으로 $t_i$ ~ $t_{i+1}$ 구간의 기울기 $ s_{Heun}$ 를 결정Decide slope $ s_{Heun}$ representing $t_i$ ~ $t_{i+1}$ interval by taking average of $s_i$ and $\hat{s}_{i+1}$ || $ x_{i+1} = x_i + s_{Heun} \Delta t $ | 기울기 $ s_{Heun}$ 를 따라 $\Delta t$ 만큼 전진하여 $x_{i+1}$ 를 결정Decide $x_{i+1}$ by going foward by $\Delta t$ following slope $ s_{Heun}$ | | Euler | Heun ||:------:|:------:|| $$ s_i = f(t_i, x_i) $$| $$ s_i = f(t_i, x_i) $$ || $$ x_{i+1}=x_i + s_i \Delta t $$ | $$ \hat{x}_{i+1}=x_i + s_i \Delta t $$ || $$ $$ | $$ \hat{s}_{i+1}=f(t_{i+1}, \hat{x}_{i+1}) $$ || $$ $$ | $$ s_{Heun} = \frac{1}{2}\left(s_i + \hat{s}_{i+1}\right) $$ || $$ $$ | $$ x_{i+1} = x_i + s_{Heun} \Delta t $$ |
###Code
def heun(f, t_array, x_0):
time_list = [t_array[0]]
result_list = [x_0]
x_i = x_0
for k, t_i in enumerate(t_array[:-1]):
# time step
delta_t = t_array[k+1] - t_array[k]
# slope at i
s_i = f(t_i, x_i)
# x[i + 1] by Euler
x_i_plus_1 = x_i + s_i * delta_t
# slope at i + 1
s_i_plus_1 = f(t_array[k+1], x_i_plus_1)
# average of slope
s_average = (s_i + s_i_plus_1) * 0.5
# x[i + 1] by Heun
x_i_plus_1_m = x_i + s_average * delta_t
time_list.append(t_array[k+1])
result_list.append(x_i_plus_1_m)
x_i = x_i_plus_1_m
return time_list, result_list
###Output
_____no_output_____
###Markdown
근사해와 방향장Approximate solutions and direction fields 엄밀해, 오일러법, 훈의 방법을 방향장과 겹쳐 그려보자.Let's overlay the exact solution, the Forward Euler Method, and Heun's Method on the direction field.
###Code
import ode_plot
import ode_solver
###Output
_____no_output_____
###Markdown
$t$와 $x$의 범위Ranges of $t$ and $x$
###Code
t_slopes = py.linspace(0, 6)
x_slopes = py.linspace(0, 6)
###Output
_____no_output_____
###Markdown
초기값Initial value$x(t_0)$
###Code
x_0 = 4.5
###Output
_____no_output_____
###Markdown
$\Delta t = 0.5 $ (sec)
###Code
delta_t_05 = 0.5
t_05_sec = np.arange(t_slopes[0], t_slopes[-1] + delta_t_05*0.5, delta_t_05)
###Output
_____no_output_____
###Markdown
오일러법Euler method
###Code
t_euler_out, x_euler_out = ode_solver.euler(dx_dt, t_05_sec, x_0)
###Output
_____no_output_____
###Markdown
훈의 방법Heun's method매개변수가 모두 같다는 점을 주목하시오.Please note that the arguments are the same.
###Code
t_heun__out, x_heun__out = heun(dx_dt, t_05_sec, x_0)
###Output
_____no_output_____
###Markdown
이제 그려 보자.Now let's plot.
###Code
# Slopes at each (t, x) points
ode_plot.ode_slope_1state(dx_dt, x_slopes, t_slopes)
py.plot(t_euler_out, x_euler_out, '.-', label='Euler')
py.plot(t_heun__out, x_heun__out, '*-', label='Heun')
# Exact solution
exact = ode_plot.ExactPlotterFirstOrderODE(t_slopes)
exact.plot()
# Aspect ratio
py.axis('equal')
# xy limits
py.xlim(left=t_slopes[0], right=t_slopes[-1])
py.ylim(bottom=x_slopes[0], top=x_slopes[-1])
py.legend(loc=0, fontsize='xx-large');
###Output
_____no_output_____
###Markdown
오일러법에 비해 훈의 방법의 근사해가 엄밀해에 비해 오차가 더 적은 것을 알 수 있다.We can see that the approximate solution of Heun's Method is closer to the exact solution than that of the Euler Method. Scipy
###Code
import scipy.integrate as si
sol = si.solve_ivp(dx_dt, (t_heun__out[0], t_heun__out[-1]), [x_0], t_eval=t_heun__out)
py.plot(sol.t, sol.y[0, :], 'o', label='solve_ivp')
py.plot(t_euler_out, x_euler_out, '.-', label='Euler')
py.plot(t_heun__out, x_heun__out, '*-', label='Heun')
# plot exact solution
exact = ode_plot.ExactPlotterFirstOrderODE(t_slopes)
exact.plot()
py.grid(True)
py.xlabel('t(sec)')
py.ylabel('y(t)')
py.legend(loc=0);
import pandas as pd
df = pd.DataFrame(
data={
'euler':x_euler_out,
'heun' :x_heun__out,
'solve_ivp':sol.y[0, :],
'exact':exact.exact(py.array(t_heun__out))
},
index=pd.Series(t_heun__out, name='t(sec)'),
columns=['exact', 'euler', 'heun', 'solve_ivp']
)
df['euler_error'] = df.euler - df.exact
df['heun_error'] = df.heun - df.exact
df['solve_ivp_error'] = df.solve_ivp - df.exact
###Output
_____no_output_____
###Markdown
표 형태Table form
###Code
pd.set_option('display.max_rows', 10)
df
###Output
_____no_output_____
###Markdown
각종 통계Statistics
###Code
df.describe()
###Output
_____no_output_____
###Markdown
이 경우, 훈의 방법의 오차에 대한 의견은?In this case, what do you think about the error of the Heun's method?
###Code
import numpy.linalg as nl
nl.norm(df.euler_error), nl.norm(df.heun_error), nl.norm(df.solve_ivp_error),
###Output
_____no_output_____
###Markdown
연습 문제Exercises 01 다음 미분방정식의 엄밀해를 구하시오:Find exact solution of the following differential equation:$$\begin{align}10 \frac{d}{dt}x(t) + 100 x(t) &= 0 \\x(0) &= 10\end{align}$$ 위 미분방정식의 수치해를 오일러법으로 구하시오.Find numerical solution of the above differential equation using Euler Method. 위 미분방정식의 수치해를 훈의 방법으로 구하고 엄밀해, 오일러법과 비교하시오.Find numerical solution of the above differential equation using Heun's method and compare with exact solution and Euler Method. 02 다음 미분방정식의 수치해를 오일러법으로 구하시오:Find numerical solution of the following differential equation using Euler Method:$$\begin{align}10 \frac{d}{dt}x(t) + 100 x(t) &= sin(t[rad]) \\x(0) &= 0\end{align}$$ 위 미분방정식의 수치해를 훈의 방법으로 구하고 오일러법과 비교하시오.Find numerical solution of the above differential equation using Heun's method and compare with Euler Method. Final Bell마지막 종
###Code
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
###Output
_____no_output_____ |
Hands-on/modelling/Split data.ipynb | ###Markdown
Foundations: Split data into train, validation, and test setUsing the Titanic dataset from [this](https://www.kaggle.com/c/titanic/overview) Kaggle competition.In this section, we will split the data into train, validation, and test set in preparation for fitting a basic model in the next section. 1. Import libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
%matplotlib inline
print("Setup completed")
###Output
Setup completed
###Markdown
2. Read clean/preprocessed data
###Code
titanic = pd.read_csv('../dataset/titanic_clean.csv')
titanic.head()
###Output
_____no_output_____
###Markdown
Split into train (60%), validation (20%), and test (20%) sets
###Code
labels = titanic['Survived']
features = titanic.drop('Survived', axis=1)
labels.head()
features.head()
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.4, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=0.5, random_state=42)
for dataset in [y_train, y_val, y_test]:
print(round(len(dataset) / len(labels), 2))
###Output
0.6
0.2
0.2
###Markdown
Write out all data
###Code
X_train.to_csv('../dataset/train_features.csv', index=False)
X_val.to_csv('../dataset/val_features.csv', index=False)
X_test.to_csv('../dataset/test_features.csv', index=False)
y_train.to_csv('../dataset/train_labels.csv', index=False)
y_val.to_csv('../dataset/val_labels.csv', index=False)
y_test.to_csv('../dataset/test_labels.csv', index=False)
###Output
_____no_output_____ |
examples/model_compress/pruning/legacy/mobilenetv2_end2end/Compressing MobileNetV2 with NNI Pruners.ipynb | ###Markdown
IntroductionIn this tutorial, we give an end-to-end demo of compressing [MobileNetV2](https://arxiv.org/abs/1801.04381) for fine-grained classification using [NNI Pruners](https://nni.readthedocs.io/en/stable/Compression/pruning.html). Although MobileNetV2 is already a highly optimized architecture, we show that we can further reduce its size by over 50% with minimal performance loss using iterative pruning and knowledge distillation. To simulate a real usage scenario, we use the [Stanford Dogs](http://vision.stanford.edu/aditya86/ImageNetDogs/) dataset as the target task, and show how to implement and optimize the following steps:* Model pre-training* Pruning* Model Speedup* Finetuning the pruned modelAlso, we will compare our approach with some baseline channel compression schemes defined by the authors of the MobileNets, and show that NNI pruners can provide superior performance while being easy to use. We release this notebook along with our code under the folder `examples/model_compress/pruning/mobilenet_end2end/`.
###Code
import os
import xml
from PIL import Image
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from nni.compression.pytorch import ModelSpeedup
from nni.compression.pytorch.utils import count_flops_params
from utils import create_model, get_dataloader
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_workers = 16
torch.set_num_threads(num_workers)
###Output
_____no_output_____
###Markdown
Background Pruning MobileNetV2The main building block of MobileNetV2 is the "inverted residual block", where a pointwise convolution first projects into a feature map with more channels, followed by a depthwise convolution, and then a pointwise convolution with linear activation that projects into a feature map with fewer channels (thus "inverted residuals and linear bottlenecks"). With 11 such blocks stacked together, the entire model has 3.4M parameters and takes up about 10MB of storage space (this number is platform-dependent).Now we consider compressing MobileNetV2 by **filter pruning** (also called channel pruning). Recall that, in general, a $k\times k$ convolutional layer has a weight tensor of shape $(out\_channel, \frac{in\_channel}{groups}, k, k)$. If the input has shape $(B, in\_channel, H, W)$, the convolutional layer's output (with padding) would have shape $(B, out\_channel, H, W)$. Suppose we remove $M$ filters from this layer; then the weight would have shape $(out\_channel-M, \frac{in\_channel}{groups}, k, k)$, and the output would then have shape $(B, out\_channel - M, H, W)$. Further, we have the following observations:* The model's number of parameters is directly reduced by $M\times \frac{in\_channel}{groups} \times k \times k$.* We are performing structured pruning, as each filter's weight elements are adjacent. Compared to unstructured pruning (or fine-grained pruning), structured pruning generally allows us to directly remove weights and their connections from the network, resulting in greater compression and speed-up. For this reason, in this tutorial we solely focus on filter-level pruning. * Since the output channel count shrinks, we can also remove weights from the next layer corresponding to these channel dimensions. In NNI, the pruner prunes the weights by just setting the weight values to zero, and then the [ModelSpeedup](https://nni.readthedocs.io/en/stable/Compression/ModelSpeedup.html) tool infers the weight relations and removes pruned weights and connections, which we will also demonstrate later.
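###Markdown
As a quick sanity check of the parameter arithmetic above, the next cell evaluates the reduction formula for one hypothetical pointwise convolution layer; the layer dimensions are made up for illustration and are not taken from MobileNetV2.
###Code
# Hedged illustration of the parameter-reduction formula above.
# The layer dimensions here are hypothetical, not a real MobileNetV2 layer.
out_channel, in_channel, groups, k = 192, 64, 1, 1
M = 96  # number of filters removed (50% of the output channels)

params_before = out_channel * (in_channel // groups) * k * k
params_removed = M * (in_channel // groups) * k * k
print("parameters before pruning:", params_before)
print("parameters removed: {} ({:.0%})".format(params_removed, params_removed / params_before))
###Output
_____no_output_____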
###Code
# check model architecture
model = torch.hub.load('pytorch/vision:v0.8.1', 'mobilenet_v2', pretrained=True).to(device)
print(model)
# check model FLOPs and parameter counts with NNI utils
dummy_input = torch.rand([1, 3, 224, 224]).to(device)
flops, params, results = count_flops_params(model, dummy_input)
print(f"FLOPs: {flops}, params: {params}")
###Output
Using cache found in /home/v-diwu4/.cache/torch/hub/pytorch_vision_v0.8.1
###Markdown
Stanford DogsThe [Stanford Dogs](http://vision.stanford.edu/aditya86/ImageNetDogs/) dataset contains images of 120 breeds of dogs from around the world. It is built using images and annotations from ImageNet for the task of fine-grained image classification. We choose this task to simulate a transfer learning scenario, where a model pre-trained on ImageNet is further transferred to an often simpler downstream task.To download and prepare the data, please run `prepare_data.sh`, which downloads the images and annotations, and preprocesses the images for training.
###Code
# Run prepare_data.sh
!chmod u+x prepare_data.sh
!./prepare_data.sh
###Output
file_list.mat
train_list.mat
test_list.mat
Directory already exists. Nothing done.
###Markdown
Then, you may run the following code block, which shows several instances:
###Code
# Show several examples
# Code adapted from https://www.kaggle.com/mclikmb4/xception-transfer-learning-120-breeds-83-acc
image_path = './data/stanford-dogs/Images/'
breed_list = sorted(os.listdir(image_path))
plt.figure(figsize=(10, 10))
for i in range(9):
plt.subplot(331 + i)
breed = np.random.choice(breed_list)
dog = np.random.choice(os.listdir('./data/stanford-dogs/Annotation/' + breed))
img = Image.open(image_path + breed + '/' + dog + '.jpg')
tree = xml.etree.ElementTree.parse('./data/stanford-dogs/Annotation/' + breed + '/' + dog)
root = tree.getroot()
objects = root.findall('object')
plt.imshow(img)
for o in objects:
bndbox = o.find('bndbox')
xmin = int(bndbox.find('xmin').text)
ymin = int(bndbox.find('ymin').text)
xmax = int(bndbox.find('xmax').text)
ymax = int(bndbox.find('ymax').text)
plt.plot([xmin, xmax, xmax, xmin, xmin], [ymin, ymin, ymax, ymax, ymin])
plt.text(xmin, ymin, o.find('name').text, bbox={'ec': None})
###Output
_____no_output_____
###Markdown
Model Pre-trainingFirst, we obtain a MobileNetV2 model on this task, which will serve as the base model for compression. Note that although this step is often called model "pre-training" in model compression terminology, we are actually finetuning a model pre-trained on ImageNet.
###Code
# This script will save the state dict of the pretrained model to "./pretrained_mobilenet_v2_torchhub/checkpoint_best.pt"
# %run pretrain.py
# %run test.py
###Output
_____no_output_____
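###Markdown
For reference, the next cell is a hedged sketch of the kind of transfer-learning setup this step performs (load the ImageNet-pretrained backbone, replace the classifier head with a 120-class layer, and finetune). It is an illustration under stated assumptions, not the actual contents of `pretrain.py`.
###Code
# Hedged sketch of the transfer-learning setup for pre-training; illustrative only,
# not the actual pretrain.py script. Assumes `device` defined earlier in this notebook.
import torch
import torch.nn as nn

sketch_model = torch.hub.load('pytorch/vision:v0.8.1', 'mobilenet_v2', pretrained=True)
sketch_model.classifier[1] = nn.Linear(sketch_model.last_channel, 120)  # 120 dog breeds
sketch_model = sketch_model.to(device)

optimizer = torch.optim.Adam(sketch_model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# a standard training loop over get_dataloader('train', ...) would follow here
###Output
_____no_output_____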
###Markdown
Compression via PruningIn this section, we first demonstrate how to perform channel pruning with NNI pruners in three steps: * defining a config list* creating a Pruner instance* calling `pruner.compress` and `pruner.export_model` to calculate and export masksThen, we demonstrate the common practices after pruning:* model speedup* further finetuning (with or without knowledge distillation)* evaluationFinally, we present a grid search example to find the balance between model performance and the final model size. We include some of our results and discuss our observations. Note that the code blocks in this section are taken from the file `pruning_experiments.py`. You can directly run the file by specifying several command line arguments and see the end-to-end process. You can also run the file to reproduce our experiments. We will discuss that in the last section. Using NNI Pruners
###Code
from nni.algorithms.compression.pytorch.pruning import (
LevelPruner,
SlimPruner,
FPGMPruner,
TaylorFOWeightFilterPruner,
L1FilterPruner,
L2FilterPruner,
AGPPruner,
ActivationMeanRankFilterPruner,
ActivationAPoZRankFilterPruner
)
pruner_name_to_class = {
'level': LevelPruner,
'l1': L1FilterPruner,
'l2': L2FilterPruner,
'slim': SlimPruner,
'fpgm': FPGMPruner,
'taylor': TaylorFOWeightFilterPruner,
'agp': AGPPruner,
'activationmeanrank': ActivationMeanRankFilterPruner,
'apoz': ActivationAPoZRankFilterPruner
}
# load model from the pretrained checkpoint
model_type = 'mobilenet_v2_torchhub'
checkpoint = "./pretrained_mobilenet_v2_torchhub/checkpoint_best.pt"
pretrained = True
input_size = 224
n_classes = 120
model = create_model(model_type=model_type, pretrained=pretrained, n_classes=n_classes,
input_size=input_size, checkpoint=checkpoint).to(device)
# Defining the config list.
# Note that here we only prune the depthwise convolution and the last pointwise convolution.
# We will let the model speedup tool propagate the sparsity to the first pointwise convolution layer.
pruner_name = 'l1'
sparsity = 0.5
if pruner_name != 'slim':
config_list = [{
'op_names': ['features.{}.conv.1.0'.format(x) for x in range(2, 18)],
'sparsity': sparsity
},{
'op_names': ['features.{}.conv.2'.format(x) for x in range(2, 18)],
'sparsity': sparsity
}]
else:
# For slim pruner, we should specify BatchNorm layers instead of the corresponding Conv2d layers
config_list = [{
'op_names': ['features.{}.conv.1.1'.format(x) for x in range(2, 18)],
'sparsity': sparsity
},{
'op_names': ['features.{}.conv.3'.format(x) for x in range(2, 18)],
'sparsity': sparsity
}]
# Different pruners require different additional parameters, so we put them together in the kwargs dict.
# Please check the docs for detailed information.
kwargs = {}
if pruner_name in ['slim', 'taylor', 'activationmeanrank', 'apoz', 'agp']:
from pruning_experiments import trainer_helper
train_dataloader = get_dataloader('train', './data/stanford-dogs/Processed/train', batch_size=32)
def trainer(model, optimizer, criterion, epoch):
return trainer_helper(model, criterion, optimizer, train_dataloader, device)
kwargs = {
'trainer': trainer,
'optimizer': torch.optim.Adam(model.parameters()),
'criterion': nn.CrossEntropyLoss()
}
if pruner_name == 'agp':
kwargs['pruning_algorithm'] = 'l1'
kwargs['num_iterations'] = 10
kwargs['epochs_per_iteration'] = 1
if pruner_name == 'slim':
kwargs['sparsifying_training_epochs'] = 10
# Create pruner, call pruner.compress(), and export the pruned model
pruner = pruner_name_to_class[pruner_name](model, config_list, **kwargs)
pruner.compress()
pruner.export_model('./pruned_model.pth', './mask.pth')
###Output
[2021-08-31 07:17:21] INFO (nni.compression.pytorch.compressor/MainThread) Model state_dict saved to ./pruned_model.pth
[2021-08-31 07:17:21] INFO (nni.compression.pytorch.compressor/MainThread) Mask dict saved to ./mask.pth
###Markdown
Model Speedup
###Code
# Note: must unwrap the model before speed up
pruner._unwrap_model()
dummy_input = torch.rand(1,3,224,224).to(device)
ms = ModelSpeedup(model, dummy_input, './mask.pth')
ms.speedup_model()
flops, params, results = count_flops_params(model, dummy_input)
print(model)
print(f"FLOPs: {flops}, params: {params}")
###Output
+-------+----------------------+--------+-------------------+----------+---------+
| Index | Name | Type | Weight Shape | FLOPs | #Params |
+-------+----------------------+--------+-------------------+----------+---------+
| 0 | features.0.0 | Conv2d | (32, 3, 3, 3) | 10838016 | 864 |
| 1 | features.1.conv.0.0 | Conv2d | (32, 1, 3, 3) | 3612672 | 288 |
| 2 | features.1.conv.1 | Conv2d | (16, 32, 1, 1) | 6422528 | 512 |
| 3 | features.2.conv.0.0 | Conv2d | (48, 16, 1, 1) | 9633792 | 768 |
| 4 | features.2.conv.1.0 | Conv2d | (48, 1, 3, 3) | 1354752 | 432 |
| 5 | features.2.conv.2 | Conv2d | (16, 48, 1, 1) | 2408448 | 768 |
| 6 | features.3.conv.0.0 | Conv2d | (72, 16, 1, 1) | 3612672 | 1152 |
| 7 | features.3.conv.1.0 | Conv2d | (72, 1, 3, 3) | 2032128 | 648 |
| 8 | features.3.conv.2 | Conv2d | (16, 72, 1, 1) | 3612672 | 1152 |
| 9 | features.4.conv.0.0 | Conv2d | (72, 16, 1, 1) | 3612672 | 1152 |
| 10 | features.4.conv.1.0 | Conv2d | (72, 1, 3, 3) | 508032 | 648 |
| 11 | features.4.conv.2 | Conv2d | (25, 72, 1, 1) | 1411200 | 1800 |
| 12 | features.5.conv.0.0 | Conv2d | (96, 25, 1, 1) | 1881600 | 2400 |
| 13 | features.5.conv.1.0 | Conv2d | (96, 1, 3, 3) | 677376 | 864 |
| 14 | features.5.conv.2 | Conv2d | (25, 96, 1, 1) | 1881600 | 2400 |
| 15 | features.6.conv.0.0 | Conv2d | (96, 25, 1, 1) | 1881600 | 2400 |
| 16 | features.6.conv.1.0 | Conv2d | (96, 1, 3, 3) | 677376 | 864 |
| 17 | features.6.conv.2 | Conv2d | (25, 96, 1, 1) | 1881600 | 2400 |
| 18 | features.7.conv.0.0 | Conv2d | (96, 25, 1, 1) | 1881600 | 2400 |
| 19 | features.7.conv.1.0 | Conv2d | (96, 1, 3, 3) | 169344 | 864 |
| 20 | features.7.conv.2 | Conv2d | (59, 96, 1, 1) | 1110144 | 5664 |
| 21 | features.8.conv.0.0 | Conv2d | (192, 59, 1, 1) | 2220288 | 11328 |
| 22 | features.8.conv.1.0 | Conv2d | (192, 1, 3, 3) | 338688 | 1728 |
| 23 | features.8.conv.2 | Conv2d | (59, 192, 1, 1) | 2220288 | 11328 |
| 24 | features.9.conv.0.0 | Conv2d | (192, 59, 1, 1) | 2220288 | 11328 |
| 25 | features.9.conv.1.0 | Conv2d | (192, 1, 3, 3) | 338688 | 1728 |
| 26 | features.9.conv.2 | Conv2d | (59, 192, 1, 1) | 2220288 | 11328 |
| 27 | features.10.conv.0.0 | Conv2d | (192, 59, 1, 1) | 2220288 | 11328 |
| 28 | features.10.conv.1.0 | Conv2d | (192, 1, 3, 3) | 338688 | 1728 |
| 29 | features.10.conv.2 | Conv2d | (59, 192, 1, 1) | 2220288 | 11328 |
| 30 | features.11.conv.0.0 | Conv2d | (192, 59, 1, 1) | 2220288 | 11328 |
| 31 | features.11.conv.1.0 | Conv2d | (192, 1, 3, 3) | 338688 | 1728 |
| 32 | features.11.conv.2 | Conv2d | (87, 192, 1, 1) | 3273984 | 16704 |
| 33 | features.12.conv.0.0 | Conv2d | (288, 87, 1, 1) | 4910976 | 25056 |
| 34 | features.12.conv.1.0 | Conv2d | (288, 1, 3, 3) | 508032 | 2592 |
| 35 | features.12.conv.2 | Conv2d | (87, 288, 1, 1) | 4910976 | 25056 |
| 36 | features.13.conv.0.0 | Conv2d | (288, 87, 1, 1) | 4910976 | 25056 |
| 37 | features.13.conv.1.0 | Conv2d | (288, 1, 3, 3) | 508032 | 2592 |
| 38 | features.13.conv.2 | Conv2d | (87, 288, 1, 1) | 4910976 | 25056 |
| 39 | features.14.conv.0.0 | Conv2d | (288, 87, 1, 1) | 4910976 | 25056 |
| 40 | features.14.conv.1.0 | Conv2d | (288, 1, 3, 3) | 127008 | 2592 |
| 41 | features.14.conv.2 | Conv2d | (134, 288, 1, 1) | 1891008 | 38592 |
| 42 | features.15.conv.0.0 | Conv2d | (480, 134, 1, 1) | 3151680 | 64320 |
| 43 | features.15.conv.1.0 | Conv2d | (480, 1, 3, 3) | 211680 | 4320 |
| 44 | features.15.conv.2 | Conv2d | (134, 480, 1, 1) | 3151680 | 64320 |
| 45 | features.16.conv.0.0 | Conv2d | (480, 134, 1, 1) | 3151680 | 64320 |
| 46 | features.16.conv.1.0 | Conv2d | (480, 1, 3, 3) | 211680 | 4320 |
| 47 | features.16.conv.2 | Conv2d | (134, 480, 1, 1) | 3151680 | 64320 |
| 48 | features.17.conv.0.0 | Conv2d | (480, 134, 1, 1) | 3151680 | 64320 |
| 49 | features.17.conv.1.0 | Conv2d | (480, 1, 3, 3) | 211680 | 4320 |
| 50 | features.17.conv.2 | Conv2d | (160, 480, 1, 1) | 3763200 | 76800 |
| 51 | features.18.0 | Conv2d | (1280, 160, 1, 1) | 10035200 | 204800 |
| 52 | classifier.1 | Linear | (120, 1280) | 153600 | 153720 |
+-------+----------------------+--------+-------------------+----------+---------+
FLOPs total: 139206976
#Params total: 1074880
MobileNetV2(
(features): Sequential(
(0): ConvBNActivation(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(16, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(48, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=48, bias=False)
(1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(48, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(16, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=72, bias=False)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(72, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(16, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(72, 72, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=72, bias=False)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(72, 25, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(25, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(25, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=96, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(96, 25, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(25, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(25, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=96, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(96, 25, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(25, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(7): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(25, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(96, 59, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(59, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(8): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(59, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 59, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(59, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(9): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(59, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 59, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(59, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(10): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(59, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 59, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(59, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(11): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(59, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 87, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(87, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(12): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(87, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288, bias=False)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(288, 87, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(87, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(13): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(87, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288, bias=False)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(288, 87, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(87, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(14): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(87, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(288, 288, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=288, bias=False)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(288, 134, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(134, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(15): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(134, 480, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(480, 134, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(134, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(16): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(134, 480, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(480, 134, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(134, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(17): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(134, 480, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(480, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(18): ConvBNActivation(
(0): Conv2d(160, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
)
(classifier): Sequential(
(0): Dropout(p=0.2, inplace=False)
(1): Linear(in_features=1280, out_features=120, bias=True)
)
###Markdown
Fine-tuning after PruningUsually, after pruning out some weights from the model, we need further fine-tuning to let the model recover its performance as much as possible. For finetuning, we can either use the same settings as during pretraining, or use an additional technique called [**Knowledge Distillation**](https://arxiv.org/pdf/1503.02531.pdf). The key idea is that the model learns on both the original hard labels and the soft labels produced by a teacher model running on the same input. In our setting, **the model before pruning can conveniently serve as the teacher model**. Empirically, we found that using distillation during fine-tuning consistently improves the performance of the pruned model. We will further discuss related experiments in the following section.Note that knowledge distillation can easily be done with the following lines of code:
###Code
# sample code: training with knowledge distillation
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
def train_with_distillation(student_model, teacher_model, optimizer, train_dataloader, device, alpha=0.99, temperature=8):
student_model.train()
for i, (inputs, labels) in enumerate(tqdm(train_dataloader)):
optimizer.zero_grad()
inputs, labels = inputs.float().to(device), labels.to(device)
with torch.no_grad():
teacher_preds = teacher_model(inputs)
student_preds = student_model(inputs)
soft_loss = nn.KLDivLoss()(F.log_softmax(student_preds/temperature, dim=1),
F.softmax(teacher_preds/temperature, dim=1))
hard_loss = F.cross_entropy(student_preds, labels)
loss = soft_loss * (alpha * temperature * temperature) + hard_loss * (1. - alpha)
loss.backward()
optimizer.step()
"""
###Output
_____no_output_____
###Markdown
Finetuning after pruning:
###Code
from pruning_experiments import run_finetune, run_finetune_distillation, run_eval
use_distillation = True
n_epochs = 10 # set for demo purposes; increase this number for your experiments
learning_rate = 1e-4
weight_decay = 0.0
train_dataloader = get_dataloader('train', './data/stanford-dogs/Processed/train', batch_size=32)
valid_dataloader = get_dataloader('eval', './data/stanford-dogs/Processed/valid', batch_size=32)
test_dataloader = get_dataloader('eval', './data/stanford-dogs/Processed/test', batch_size=32)
if not use_distillation:
run_finetune(model, train_dataloader, valid_dataloader, device,
n_epochs=n_epochs, learning_rate=learning_rate, weight_decay=weight_decay)
else:
alpha = 0.99
temperature = 8
# use model with the original checkpoint as the teacher
teacher_model = create_model(model_type=model_type, pretrained=pretrained, n_classes=n_classes,
input_size=input_size, checkpoint=checkpoint).to(device)
run_finetune_distillation(model, teacher_model, train_dataloader, valid_dataloader, device,
alpha, temperature,
n_epochs=n_epochs, learning_rate=learning_rate, weight_decay=weight_decay)
test_loss, test_acc = run_eval(model, test_dataloader, device)
print('Test loss: {}\nTest accuracy: {}'.format(test_loss, test_acc))
###Output
Using cache found in /home/v-diwu4/.cache/torch/hub/pytorch_vision_v0.8.1
###Markdown
Steps of Optimizing Pruning ParametersSo far, we have shown the end-to-end process of compressing a MobileNetV2 model on Stanford Dogs dataset using NNI pruners. It is crucial to mention that to make sure that the final model has a satisfactory performance, several trials on different sparsity values and pruner settings are necessary. To simplify this process for you, in this section we discuss how we approach the problem and mention some empirical observations. We hope that this section can serve as a good reference of the general process of optimizing the pruning with NNI Pruners.To help you reproduce some of the experiments, we implement `pruning_experiments.py`. Please find examples in the following code blocks for how to run experiments with this script. Step 1: selecting the layer to pruneJust as the first step of using the pruning is writing a `config_list`, the first thing you should consider when pruning a model is **which layer to prune**. This is crucial because some layers are not as sensitive to pruning as the others. In our example, we have several candidates for pruning:* the first pointwise convolution in all layers (the `conv 0.0`'s)* the depthwise convolution in all layers (the `conv 1.0`'s)* the second pointwise convolution in all layers (the `conv 2`'s)* some combination of the previous choicesThe following figure shows our experiment results. We run `L1FilterPruner` to explore some of the previous choices with layer sparsity ranging from 0.1 to 0.9. The x-axis shows the effective global sparsity after `ModelSpeedup`. We observe that jointly pruning the depthwise convolution and the second pointwise convolution often gives higher scores at large global sparsities. Therefore, in the following experiments, we limit the modules to prune to the `conv 1.0`'s and the `conv 2`'s. Thus the config list is always written in the following way:
###Code
config_list = [{
'op_names': ['features.{}.conv.1.0'.format(x) for x in range(2, 18)],
'sparsity': sparsity
},{
'op_names': ['features.{}.conv.2'.format(x) for x in range(2, 18)],
'sparsity': sparsity
}]
###Output
_____no_output_____
###Markdown
To run some experiments for this step, please run `pruning_experiments.py` and specify the following arguments:
###Code
# Example shell script:
"""
for sparsity in 0.2 0.4 0.6 0.8; do
for pruning_mode in 'conv0' 'conv1' 'conv2' 'conv1andconv2' 'all'; do
python pruning_experiments.py \
--experiment_dir pretrained_mobilenet_v2_torchhub/ \
--checkpoint_name 'checkpoint_best.pt' \
--sparsity $sparsity \
--pruning_mode $pruning_mode \
--pruner_name l1 \
--speedup \
--finetune_epochs 30
done
done
"""
###Output
_____no_output_____
###Markdown
Step 2: trying one-shot prunersAfter determining which modules to prune, we consider the next two questions:* **Which global sparsity range should we aim at?*** **Is there any one-shot pruning algorithm outperforming others by a large margin?**The first problem stems from the natural tradeoff between model size and accuracy. As long as we have acceptable performance, we wish the model to be as small as possible. Therefore, in this step, we can run some one-shot pruners with different sparsity settings, and find a range of sparsities over which the model seems to maintain acceptable performance. The following figure summarizes our experiments on three pruners. We perform 30 epochs of final finetuning for each experiment. Starting from the original model (with accuracy 0.8), we observe that when the sparsity is below 0.4, the pruned model can easily recover, with the performance approaching the model before pruning. On the other hand, when the sparsity is above 0.7, the model's performance drops too much even after finetuning. Therefore, we limit our search space to sparsity settings between 0.4 and 0.7 in the experiments for the following step 3 and step 4.In addition, we observe that the slim pruner has better performance in the one-shot pruning setting. However, as we will show later, when we consider iterative pruning, the importance of choosing the base pruning algorithm seems to be outweighed by choosing a correct pruning schedule. To run some experiments for this step, please run `pruning_experiments.py` and specify the following arguments:
###Code
# Example shell script:
"""
for sparsity in 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9; do
for pruning_mode in 'conv1', 'conv1andconv2'; do
python pruning_experiments.py \
--experiment_dir pretrained_mobilenet_v2_torchhub/ \
--checkpoint_name 'checkpoint_best.pt' \
--sparsity $sparsity \
--pruning_mode $pruning_mode \
--pruner_name l1 \
--speedup \
--finetune_epochs 30
done
done
"""
###Output
_____no_output_____
###Markdown
Step 3: determining iterative pruning strategyNow that we have found a good set of modules to prune and a good range of sparsity settings to experiment on, we can shift our focus to iterative pruning. Iterative pruning interleaves pruning with finetuning, and is often shown to be more performant than one-shot pruning, which prunes the model once to the target sparsity. The following figure establishes the superiority of iterative pruning with all other settings kept the same. Then, we consider the following two important hyperparameters for iterative pruning:* the total number of pruning iterations* the number of finetuning epochs between pruning iterationsWe experiment with 2, 4, and 8 iterations, with 1 or 3 intermediate finetuning epochs. The results are summarized in the following figure. We clearly observe that increasing the number of pruning iterations significantly improves the final performance, while increasing the number of epochs only helps slightly. Therefore, we recommend spending effort on **determining a correct (often large) number of pruning iterations**, while you need not spend much effort tuning the number of finetuning epochs in between. In our case, we found iteration numbers between 64 and 128 give the best performance. To run some experiments for this step, please run `pruning_experiments.py` and specify the following arguments:
###Code
# Example shell script:
"""
for sparsity in 0.4 0.5 0.6 0.7; do
for n_iters in 2 4 8 16; do
python pruning_experiments.py \
--experiment_dir pretrained_mobilenet_v2_torchhub/ \
--checkpoint_name 'checkpoint_best.pt' \
--sparsity $sparsity \
--pruning_mode 'conv1andconv2' \
--pruner_name 'agp' \
--agp_n_iters $n_iters \
--speedup \
--finetune_epochs 30 \
done
done
"""
###Output
_____no_output_____
###Markdown
Step 4: determining finetuning strategyFinally, after pruning the model, we recommend **using knowledge distillation for finetuning**, which only involves changing the several lines of code computing the loss (if we reuse the model before pruning as the teacher model). As shown in the following figure, using knowledge distillation during finetuning can improve performance by about 5 percentage points in our task. To run some experiments for this step, please run `pruning_experiments.py` and specify the following arguments:
###Code
# Example shell script:
"""
for sparsity in 0.4 0.5 0.6 0.7; do
python pruning_experiments.py \
--experiment_dir pretrained_mobilenet_v2_torchhub/ \
--checkpoint_name 'checkpoint_best.pt' \
--sparsity $sparsity \
--pruning_mode 'conv1andconv2' \
--pruner_name 'agp' \
--speedup \
--finetune_epochs 80
done
for sparsity in 0.4 0.5 0.6 0.7; do
python pruning_experiments.py \
--experiment_dir pretrained_mobilenet_v2_torchhub/ \
--checkpoint_name 'checkpoint_best.pt' \
--sparsity $sparsity \
--pruning_mode 'conv1andconv2' \
--pruner_name 'agp' \
--speedup \
--finetune_epochs 80 \
        --kd
done
"""
###Output
_____no_output_____ |
SupportVectorMachines.ipynb | ###Markdown
Support Vector Machine Models**Support vector machines (SVMs)** are a widely used and powerful category of machine learning algorithms. There are many variations on the basic idea of an SVM. An SVM attempts to **maximally separate** classes by finding the **support vectors** that give the lowest error rate or maximum separation. SVMs can use many types of **kernel functions**. The most common kernel functions are **linear** and the **radial basis function** or **RBF**. The linear kernel attempts to separate classes by finding hyperplanes in the feature space that maximally separate the classes. The RBF kernel uses a set of local Gaussian-shaped basis functions to find a nonlinear separation of the classes. As a first step, execute the code in the cell below to load the required packages to run the rest of this notebook.
###Code
from sklearn import svm, preprocessing
#from statsmodels.api import datasets
from sklearn import datasets ## Get dataset from sklearn
import sklearn.model_selection as ms
import sklearn.metrics as sklm
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import numpy.random as nr
%matplotlib inline
###Output
_____no_output_____
###Markdown
To get a feel for these data, you will now load and plot them. The code in the cell below does the following:1. Loads the iris data as a Pandas data frame. 2. Adds column names to the data frame.3. Displays all 4 possible scatter plot views of the data. Execute this code and examine the results.
###Code
def plot_iris(iris):
'''Function to plot iris data by type'''
setosa = iris[iris['Species'] == 'setosa']
versicolor = iris[iris['Species'] == 'versicolor']
virginica = iris[iris['Species'] == 'virginica']
fig, ax = plt.subplots(2, 2, figsize=(12,12))
x_ax = ['Sepal_Length', 'Sepal_Width']
y_ax = ['Petal_Length', 'Petal_Width']
for i in range(2):
for j in range(2):
ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = 'x')
ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = 'o')
ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = '+')
ax[i,j].set_xlabel(x_ax[i])
ax[i,j].set_ylabel(y_ax[j])
## Import the dataset from sklearn.datasets
iris = datasets.load_iris()
## Create a data frame from the dictionary
species = [iris.target_names[x] for x in iris.target]
iris = pd.DataFrame(iris['data'], columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width'])
iris['Species'] = species
#print(species)
## Plot views of the iris data
plot_iris(iris)
###Output
_____no_output_____
###Markdown
You can see that Setosa (in blue) is well separated from the other two categories. The Versicolor (in orange) and the Virginica (in green) show considerable overlap. The question is how well our classifier will separate these categories. Scikit Learn classifiers require numerically coded numpy arrays for the features and the label. The code in the cell below does the following processing:1. Creates a numpy array of the features.2. Numerically codes the label using a dictionary lookup, and converts it to a numpy array. Execute this code.
###Code
Features = np.array(iris[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']])
levels = {'setosa':0, 'versicolor':1, 'virginica':2}
Labels = np.array([levels[x] for x in iris['Species']])
###Output
_____no_output_____
###Markdown
Next, execute the code in the cell below to split the dataset into test and training set. Notice that unusually, 100 of the 150 cases are being used as the test dataset.
###Code
## Randomly sample cases to create independent training and test data
nr.seed(1115)
indx = range(Features.shape[0])
indx = ms.train_test_split(indx, test_size = 100)
X_train = Features[indx[0],:]
y_train = np.ravel(Labels[indx[0]])
X_test = Features[indx[1],:]
y_test = np.ravel(Labels[indx[1]])
###Output
_____no_output_____
###Markdown
As is generally the case with machine learning, numeric features must be scaled. The code in the cell below performs the following processing:1. A Z-score scaler object is defined using the `StandardScaler` function from the Scikit Learn preprocessing package. 2. The scaler is fit to the training features. Subsequently, this scaler is used to apply the same scaling to the test data and in production. 3. The training features are scaled using the `transform` method. Execute this code.
###Code
scale = preprocessing.StandardScaler()
scale.fit(X_train)
X_train = scale.transform(X_train)
###Output
_____no_output_____
###Markdown
Now you will define and fit a linear SVM model. The code in the cell below defines a linear SVM object using the `LinearSVC` function from the Scikit Learn SVM package, and then fits the model. Execute this code.
###Code
nr.seed(1115)
svm_mod = svm.LinearSVC()
svm_mod.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Notice that the SVM model object hyper parameters are displayed. Next, the code in the cell below performs the following processing to score the test data subset:1. The test features are scaled using the scaler computed for the training features. 2. The `predict` method is used to compute the scores from the scaled features. Execute this code.
###Code
X_test = scale.transform(X_test)
scores = svm_mod.predict(X_test)
###Output
_____no_output_____
###Markdown
It is time to evaluate the model results. Keep in mind that the problem has been made difficult deliberately, by having more test cases than training cases. The iris data has three species categories. Therefore it is necessary to use evaluation code for a three category problem. The function in the cell below extends code from previous labs to deal with a three category problem. Execute this code and examine the results.
###Code
def print_metrics_3(labels, scores):
conf = sklm.confusion_matrix(labels, scores)
print(' Confusion matrix')
print(' Score Setosa Score Versicolor Score Virginica')
print('Actual Setosa %6d' % conf[0,0] + ' %5d' % conf[0,1] + ' %5d' % conf[0,2])
print('Actual Versicolor %6d' % conf[1,0] + ' %5d' % conf[1,1] + ' %5d' % conf[1,2])
    print('Actual Virginica  %6d' % conf[2,0] + '             %5d' % conf[2,1] + '         %5d' % conf[2,2])
## Now compute and display the accuracy and metrics
print('')
print('Accuracy %0.2f' % sklm.accuracy_score(labels, scores))
metrics = sklm.precision_recall_fscore_support(labels, scores)
print(' ')
print(' Setosa Versicolor Virginica')
print('Num case %0.2f' % metrics[3][0] + ' %0.2f' % metrics[3][1] + ' %0.2f' % metrics[3][2])
print('Precision %0.2f' % metrics[0][0] + ' %0.2f' % metrics[0][1] + ' %0.2f' % metrics[0][2])
print('Recall %0.2f' % metrics[1][0] + ' %0.2f' % metrics[1][1] + ' %0.2f' % metrics[1][2])
print('F1 %0.2f' % metrics[2][0] + ' %0.2f' % metrics[2][1] + ' %0.2f' % metrics[2][2])
print_metrics_3(y_test, scores)
###Output
Confusion matrix
Score Setosa Score Versicolor Score Virginica
Actual Setosa 34 1 0
Actual Versicolor 0 24 10
Actual Virginica       0     3         28
Accuracy 0.86
Setosa Versicolor Virginica
Num case 35.00 34.00 31.00
Precision 1.00 0.86 0.74
Recall 0.97 0.71 0.90
F1 0.99 0.77 0.81
###Markdown
Examine these results. Notice the following:1. The confusion matrix has dimension 3X3. You can see that most cases are correctly classified. 2. The overall accuracy is 0.86. Since the classes are roughly balanced, this metric indicates relatively good performance of the classifier, particularly since it was only trained on 50 cases. 3. The precision, recall and F1 for each of the classes is relatively good. Versicolor has the worst metrics since it has the largest number of misclassified cases. To get a better feel for what the classifier is doing, the code in the cell below displays a set of plots showing correctly (as '+') and incorrectly (as 'o') classified cases, with the species color-coded. Execute this code and examine the results.
###Code
def plot_iris_score(iris, y_test, scores):
'''Function to plot iris data by type'''
## Find correctly and incorrectly classified cases
true = np.equal(scores, y_test).astype(int)
print(true)
## Create data frame from the test data
iris = pd.DataFrame(iris)
levels = {0:'setosa', 1:'versicolor', 2:'virginica'}
iris['Species'] = [levels[x] for x in y_test]
iris.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Species']
## Set up for the plot
fig, ax = plt.subplots(2, 2, figsize=(12,12))
markers = ['o', '+']
x_ax = ['Sepal_Length', 'Sepal_Width']
y_ax = ['Petal_Length', 'Petal_Width']
    for t in range(2): # loop over correct and incorrect classifications
setosa = iris[(iris['Species'] == 'setosa') & (true == t)]
versicolor = iris[(iris['Species'] == 'versicolor') & (true == t)]
virginica = iris[(iris['Species'] == 'virginica') & (true == t)]
# loop over all the dimensions
for i in range(2):
for j in range(2):
ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = markers[t], color = 'blue')
ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = markers[t], color = 'orange')
ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = markers[t], color = 'green')
ax[i,j].set_xlabel(x_ax[i])
ax[i,j].set_ylabel(y_ax[j])
plot_iris_score(X_test, y_test, scores)
###Output
[0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 1 0 1 1
0 1 1 0 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1]
###Markdown
Examine these plots. You can see how the classifier has divided the feature space between the classes. Notice that most of the errors occur in the overlap region between Virginica and Versicolor. This behavior is to be expected. There is an error in classifying Setosa which is a bit surprising, and which probably arises from the projection of the division between classes. Is it possible that a nonlinear SVM would separate these cases better? The code in the cell below uses the `SVC` function to define a nonlinear model using the radial basis function (RBF) kernel. This model is fit to the training data, and the evaluation of the model is displayed. Execute this code, and answer **Question 1** on the course page.
###Code
nr.seed(1115)
svc_mod = svm.SVC()
svc_mod.fit(X_train, y_train)
scores = svc_mod.predict(X_test)  # score the test data with the RBF model fit above
print_metrics_3(y_test, scores)
plot_iris_score(X_test, y_test, scores)
###Output
Confusion matrix
Score Setosa Score Versicolor Score Virginica
Actual Setosa 34 1 0
Actual Versicolor 0 24 10
Actual Virginica       0     3         28
Accuracy 0.86
Setosa Versicolor Virginica
Num case 35.00 34.00 31.00
Precision 1.00 0.86 0.74
Recall 0.97 0.71 0.90
F1 0.99 0.77 0.81
[0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 1 0 1 1
0 1 1 0 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1]
###Markdown
Support Vector Machines: a supervised algorithm for classification that works by finding a **separator**.1. Mapping the data to a **high-dimensional** feature space.2. Finding a **separator**, which is a hyperplane in that feature space, so the boundary in the original space is not necessarily linear.> Transforming the data changes its dimensional representation: for example, instead of x we can use [x, x^2] => kernelling (Linear, Polynomial, RBF, Sigmoid).> Finding the right hyperplane: the hyperplane with the maximum margin between the different classes, and getting the coefficients of the separator.Strengths: accurate in high-dimensional spaces and memory efficient. Weaknesses: the algorithm is prone to over-fitting, gives no probability estimates, and is not very good for more than about 1k rows.Typical applications: image recognition, text category assignment, detecting spam, sentiment analysis, gene expression classification, plus regression, outlier detection and clustering. A toy sketch of the feature-map idea is shown below. Support Vector Machines (SVMs): Benign or Malignant Cell.
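###Markdown
Before downloading the cell-sample data, the next cell is a tiny, self-contained sketch of the feature-map idea mentioned above; the numbers are made-up toy values used only for illustration.
###Code
import numpy as np

# Toy 1-D data: class 1 sits between the two clumps of class 0,
# so no single threshold on x can separate the classes.
x = np.array([-3.0, -2.5, -2.0, -0.5, 0.0, 0.5, 2.0, 2.5, 3.0])
y = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0])

# Mapping x -> [x, x^2] lifts the points into 2-D. In the lifted space the
# classes are separated by the straight line x^2 = 1 (a linear separator),
# because class 1 has x^2 <= 0.25 while class 0 has x^2 >= 4.
X_mapped = np.column_stack([x, x ** 2])
print(X_mapped)
###Output
_____no_output_____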
###Code
# downloading the data.
!wget -O cell_samples.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-Coursera/labs/Data_files/cell_samples.csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pylab as pl
import scipy.optimize as opt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
%matplotlib inline
df = pd.read_csv('/content/cell_samples.csv')
df.info()
df.head()
# benign = 2, malignant = 4.
ax = df[df['Class'] == 4][:50].plot(kind="scatter",x="Clump",y="UnifSize",color="DarkBlue",label="Malignant Cells")
df[df['Class'] == 2][:50].plot(kind='scatter',x="Clump",y="UnifSize",color="Red",label="Benign Cells",ax=ax)
plt.show()
# preprocessing and selection
df.dtypes
# dropping non-numerical values in BareNuc
df = df[pd.to_numeric(df['BareNuc'],errors='coerce').notnull()]
df.info()
# from object type to int64 type
df['BareNuc'] = df['BareNuc'].astype('int64')
df.dtypes
# getting our features
X = df.drop(['ID','Class'],axis=1).values
X[:5]
# our target
Y = df['Class'].values
Y[:5]
print(X.shape,Y.shape)
# train test split
x_train,x_test,y_train,y_test = train_test_split(X,Y,test_size=0.2,random_state=4)
print(x_train.shape,x_test.shape)
print(y_train.shape,y_test.shape)
# kernel functions => linear,Polynomial,Radial basis function (RBF), Sigmoid...
from sklearn import svm
svm_ = svm.SVC(kernel="rbf")
svm_.fit(x_train,y_train)
# prediction
y_pre = svm_.predict(x_test)
y_pre[:5]
Y[:5]
# evaluation
from sklearn.metrics import classification_report,confusion_matrix
import itertools
def plot_confusion_matrix(cm,
classes,
normalize=False,
title="Confusion Matrix",
cmap=plt.cm.Blues):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
print('Normalized')
else:
print('Not Normalized')
print(cm)
plt.imshow(cm,
interpolation='nearest',
cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks,classes,rotation=45)
plt.yticks(tick_marks,classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2
for i,j in itertools.product(range(cm.shape[0]),
range(cm.shape[1])):
plt.text(j,i,format(cm[i,j],fmt),
horizontalalignment='center',
color='white' if cm[i,j] > thresh else 'black')
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# confusion matrix
cnf_matrix = confusion_matrix(y_test,y_pre,labels=[2,4])
np.set_printoptions(precision=2)
print(classification_report(y_test,y_pre))
# plot the confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix,
classes=['Benign(2)','Malignant(4)'])
# f1_score
from sklearn.metrics import f1_score
f1_score(y_test,y_pre,average='weighted')
# jaccard
from sklearn.metrics import jaccard_similarity_score
jaccard_similarity_score(y_test,y_pre)
# lets try the linear kernel
svm_linear = svm.SVC(kernel='linear')
svm_linear.fit(x_train,y_train)
y_pre_linear = svm_linear.predict(x_test)
# evaluation
cnf_matrix_linear = confusion_matrix(y_test,y_pre_linear,labels=[2,4])
plt.figure()
plot_confusion_matrix(cnf_matrix_linear,
classes=['Bengin(2)','Malignant(4)'])
print('avg f1-score: %.4f' % f1_score(y_test,y_pre_linear,average='weighted'))
print('Jaccard score: %.4f' % jaccard_similarity_score(y_test,y_pre_linear))
###Output
_____no_output_____
###Markdown
Classification Algorithm Support Vector Machines Import packages and dataset
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Load the Iris Plant Database
from sklearn import svm, datasets
iris = datasets.load_iris()
# Load the keys
print('Iris Keys')
print(iris.keys())
print('\n')
# Info on the data
print(iris['DESCR'])
# Create a data frame for characteristics/features of the three types of iris
features = pd.DataFrame(iris['data'],columns=iris['feature_names'])
features.info()
# Analyze the features dataset
features.describe()
# Create a data frame for the iris types data
target = pd.DataFrame(iris['target'],columns=['species'])
target.info()
df = pd.concat([features, target], axis=1)
###Output
_____no_output_____
###Markdown
Visualize the Dataset
###Code
sns.pairplot(df, hue='species')
###Output
_____no_output_____
###Markdown
Setosa generally has the shortest and widest sepals. It also has the shortest and narrowest petals. Overall, we do see some correlations between the features. Judging by the clusters, the setosa species seems to be quite different from the other two species. Train the Model by SVM Algorithm
###Code
from sklearn.model_selection import train_test_split
# Train the first 2 features only
X = iris.data[:, :2]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
# Set up the meshgrid
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = (x_max - x_min)/100  # mesh step size based on the data range
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
X_plot = np.c_[xx.ravel(), yy.ravel()]
# Create the SVM with Linear Kernel
svc = svm.SVC(kernel='linear', C=1).fit(X, y)
# Plot the SVM with Linear Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with linear kernel')
# Create the SVM with RBF Kernel
svc = svm.SVC(kernel='rbf', C=1, gamma=1).fit(X, y)
# Plot the SVM with RBF Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.subplot(1, 2, 2)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel')
plt.show()
###Output
_____no_output_____
###Markdown
On the left, the SVC uses the linear kernel and it results in straight-line decision boundaries among the three species of iris. Using this linear kernel, we see that the green points land in the light blue region all by themselves. However, the blue and white points are not perfectly classified. With the RBF kernel, the decision boundaries do not have to be straight lines. The green points again are perfectly enclosed, but the blue and white points still are not perfectly classified. Of course, this requires more tuning of the parameters. The Parameters of the Radial Basis Function (RBF)The linear kernel only requires the C parameter, which is the penalty parameter for misclassifying a data point. The RBF kernel requires 2 parameters, C and gamma. The parameter C deals with the tradeoff between misclassification of the training points and a smooth decision boundary. A high C aims at classifying all training data correctly (low bias, high variance), while a low C tolerates some misclassified data points (high bias, low variance). The parameter gamma defines how far the influence of a single training point reaches. If gamma is high, the decision boundary depends on the points that are very close to it, effectively ignoring points that are far from the decision boundary. If gamma is low, even far away points get taken into account when deciding where to draw the decision boundary. For high gammas, you can end up with islands of decision boundaries. A particular point that is near the boundary will carry a lot of weight. It can pull the decision boundary all the way so that it ends up on the correct side. If gamma is low, points near the boundary and points far away both influence the decision boundary. Thus, when gamma is low, the boundary region is very broad.
###Code
# Create the SVM with RBF Kernel
# Only consider the changes in Gamma while holding C constant
svc = svm.SVC(kernel='rbf', C=1, gamma=1).fit(X, y)
# Plot the SVM with Linear Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel C=1 gamma=1')
# Create the SVM with RBF Kernel
svc = svm.SVC(kernel='rbf', C=1, gamma=10).fit(X, y)
# Plot the SVM with RBF Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.subplot(1, 3, 2)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel C=1 gamma=10')
# Create the SVM with RBF Kernel
svc = svm.SVC(kernel='rbf', C=1, gamma=100).fit(X, y)
# Plot the SVM with RBF Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.subplot(1, 3, 3)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel C=1 gamma=100')
plt.show()
###Output
_____no_output_____
###Markdown
We can see from above that, as gamma increases, the decision boundary becomes more dependent on individual data points, thus creating islands (e.g. when gamma = 100). Looking at gamma=100, both green and blue points pull the boundaries to enclose them. Since some blue and green points sit away from the majority of their class, islands are created to ensure that these points also end up on the correct side of a boundary.
###Code
# Create the SVM with RBF Kernel
# Only consider the changes in C while holding gamma constant
svc = svm.SVC(kernel='rbf', C=1, gamma=1).fit(X, y)
# Plot the SVM with Linear Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel C=1 gamma=1')
# Create the SVM with RBF Kernel
svc = svm.SVC(kernel='rbf', C=100, gamma=1).fit(X, y)
# Plot the SVM with RBF Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.subplot(1, 3, 2)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel C=100 gamma=1')
# Create the SVM with RBF Kernel
svc = svm.SVC(kernel='rbf', C=10000, gamma=1).fit(X, y)
# Plot the SVM with RBF Kernel
Z = svc.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.subplot(1, 3, 3)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel C=10000 gamma=1')
plt.show()
###Output
_____no_output_____
###Markdown
The C parameter deals with misclassification. As C increases, misclassification is penalized more heavily, which means that the SVC classifier works harder to ensure the correct classification of the data points. The boundary again becomes more dependent on the data points (e.g. C=10000). GridSearchCV and Evaluating the Model
###Code
from sklearn.model_selection import GridSearchCV
# Set up parameters by 5-fold cross validation
para = [{'kernel': ['rbf'],
'gamma': [1, 0.1, 0.01, 0.001],
'C': [0.1, 1, 10, 100]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
# 5-fold cross validation to perform grid search to calculate optimal hyper-parameters
clf = GridSearchCV(svm.SVC(), para, verbose=2, cv = 5)
clf.fit(X_train, y_train)
# Print out the best parameters
print(clf.best_params_)
# Applying the test dataset to the above model
clf_predictions = clf.predict(X_test)
# Find out the confusion matrix and the classification report
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(y_test, clf_predictions))
###Output
[[17 1 0]
[ 0 15 2]
[ 0 2 8]]
###Markdown
Since we have chosen 30% of the data to be our test data, we have 45 of the 150 points in this test dataset. This 45 is derived from adding all the entries of the above confusion matrix. The diagonal of the table contains all the correct predictions. To test for overall classification, we use the following:Accuracy = (True Positive + True Negative)/Total or (Add up the Diagonal Entries)/Total In our case, it is (17 + 15 + 8)/ 45 = 40/45 = 0.89. Thus, the overall accuracy of our classification is 89%. We can also find out the misclassification rate. Misclassification Rate = (False Positive + False Negative)/Total or (Add up the non-Diagonal Entries)/TotalIn our case, it is (1+2+2)/45 = 5/45 = 0.11. Thus, the overall misclassification rate is 11%. This can also be found by using 1 - 0.89 = 0.11 based on the accuracy rate found above. A quick check of this arithmetic, computed directly from the confusion matrix, is shown in the next cell.
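###Markdown
The cell below is a small sketch added for illustration: it recomputes the accuracy and misclassification rate directly from the confusion matrix, reusing `y_test` and `clf_predictions` from above.
###Code
# Recompute the accuracy / misclassification arithmetic directly
# from the confusion matrix of the grid-searched model.
cm = confusion_matrix(y_test, clf_predictions)
total = cm.sum()
correct = np.trace(cm)            # sum of the diagonal entries
accuracy = correct / total
print('Correct predictions: %d / %d' % (correct, total))
print('Accuracy: %.2f' % accuracy)
print('Misclassification rate: %.2f' % (1 - accuracy))
###Output
_____no_output_____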
###Code
print(classification_report(y_test, clf_predictions))
###Output
precision recall f1-score support
0 1.00 0.94 0.97 18
1 0.83 0.88 0.86 17
2 0.80 0.80 0.80 10
avg / total 0.89 0.89 0.89 45
###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC #Support Vector Classifier
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # this is the petal length and width
y = (iris["target"] == 2).astype(np.float64) #boolean of whether the flower is an iris virginica or not as a float 64
iris
plt.scatter(iris["data"][:,2], iris["data"][:,3])
plt.show()
y
model = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge"))
])
model.fit(X, y)
model.predict([[5.5, 1.7]])
###Output
_____no_output_____ |
docs/notebooks/0001 - Converting Slocum data to a standard DataFrame.ipynb | ###Markdown
Converting Slocum data to a standard DataFrame
###Code
from IPython.lib.pretty import pprint
import logging
logger = logging.getLogger('gutils')
logger.handlers = [logging.StreamHandler()]
logger.setLevel(logging.DEBUG)
import sys
from pathlib import Path
# Just a hack to be able to `import gutils`
sys.path.append(str(Path('.').absolute().parent.parent))
binary_folder = Path('.').absolute().parent.parent / 'gutils' / 'tests' / 'resources' / 'slocum' / 'real' / 'binary'
bass_binary = binary_folder / 'bass-20160909T1733'
!ls $bass_binary
###Output
8e6d1b16.cac usf-bass-2016-252-1-23.tbd
991560ed.cac usf-bass-2016-252-1-24.sbd
da485e91.cac usf-bass-2016-252-1-24.tbd
usf-bass-2016-252-0-0.tbd usf-bass-2016-252-1-2.sbd
usf-bass-2016-252-1-0.sbd usf-bass-2016-252-1-2.tbd
usf-bass-2016-252-1-0.tbd usf-bass-2016-252-1-3.sbd
usf-bass-2016-252-1-10.sbd usf-bass-2016-252-1-3.tbd
usf-bass-2016-252-1-10.tbd usf-bass-2016-252-1-4.sbd
usf-bass-2016-252-1-11.sbd usf-bass-2016-252-1-4.tbd
usf-bass-2016-252-1-11.tbd usf-bass-2016-252-1-5.sbd
usf-bass-2016-252-1-12.sbd usf-bass-2016-252-1-5.tbd
usf-bass-2016-252-1-12.tbd usf-bass-2016-252-1-6.sbd
usf-bass-2016-252-1-13.sbd usf-bass-2016-252-1-6.tbd
usf-bass-2016-252-1-13.tbd usf-bass-2016-252-1-7.sbd
usf-bass-2016-252-1-14.sbd usf-bass-2016-252-1-7.tbd
usf-bass-2016-252-1-14.tbd usf-bass-2016-252-1-8.sbd
usf-bass-2016-252-1-15.sbd usf-bass-2016-252-1-8.tbd
usf-bass-2016-252-1-15.tbd usf-bass-2016-252-1-9.sbd
usf-bass-2016-252-1-16.sbd usf-bass-2016-252-1-9.tbd
usf-bass-2016-252-1-16.tbd usf-bass-2016-253-0-0.sbd
usf-bass-2016-252-1-17.sbd usf-bass-2016-253-0-0.tbd
usf-bass-2016-252-1-17.tbd usf-bass-2016-253-0-1.sbd
usf-bass-2016-252-1-18.sbd usf-bass-2016-253-0-1.tbd
usf-bass-2016-252-1-18.tbd usf-bass-2016-253-0-2.sbd
usf-bass-2016-252-1-19.sbd usf-bass-2016-253-0-2.tbd
usf-bass-2016-252-1-19.tbd usf-bass-2016-253-0-3.sbd
usf-bass-2016-252-1-1.sbd usf-bass-2016-253-0-3.tbd
usf-bass-2016-252-1-1.tbd usf-bass-2016-253-0-4.sbd
usf-bass-2016-252-1-20.sbd usf-bass-2016-253-0-4.tbd
usf-bass-2016-252-1-20.tbd usf-bass-2016-253-0-5.sbd
usf-bass-2016-252-1-21.sbd usf-bass-2016-253-0-5.tbd
usf-bass-2016-252-1-21.tbd usf-bass-2016-253-0-6.sbd
usf-bass-2016-252-1-22.sbd usf-bass-2016-253-0-6.tbd
usf-bass-2016-252-1-22.tbd usf-bass-2016-253-0-7.tbd
usf-bass-2016-252-1-23.sbd usf-bass-2016-253-0-8.tbd
###Markdown
SlocumMergerConvert binary (*.bd) files into ASCII Merge a subset of binary filesIf you know the flight/science pair you wish to merge
###Code
import tempfile
from gutils.slocum import SlocumMerger
ascii_output = tempfile.mkdtemp()
merger = SlocumMerger(
str(bass_binary),
ascii_output,
globs=[
'usf-bass-2016-252-1-12.sbd',
'usf-bass-2016-252-1-12.tbd'
]
)
# The merge results contain a reference to the new produced ASCII file
# as well as which binary files were involved in its creation
merge_results = merger.convert()
###Output
Converted usf-bass-2016-252-1-12.sbd,usf-bass-2016-252-1-12.tbd to usf_bass_2016_252_1_12_sbd.dat
###Markdown
Merge all files in a directoryThis matches science and flight files together
###Code
merger = SlocumMerger(
str(bass_binary),
ascii_output,
)
# The merge results contain a reference to the new produced ASCII file as well as what binary files went into it.
merge_results = merger.convert()
###Output
Converted usf-bass-2016-252-1-0.sbd,usf-bass-2016-252-1-0.tbd to usf_bass_2016_252_1_0_sbd.dat
Converted usf-bass-2016-252-1-10.sbd,usf-bass-2016-252-1-10.tbd to usf_bass_2016_252_1_10_sbd.dat
Converted usf-bass-2016-252-1-11.sbd,usf-bass-2016-252-1-11.tbd to usf_bass_2016_252_1_11_sbd.dat
Converted usf-bass-2016-252-1-12.sbd,usf-bass-2016-252-1-12.tbd to usf_bass_2016_252_1_12_sbd.dat
Converted usf-bass-2016-252-1-13.sbd,usf-bass-2016-252-1-13.tbd to usf_bass_2016_252_1_13_sbd.dat
Converted usf-bass-2016-252-1-14.sbd,usf-bass-2016-252-1-14.tbd to usf_bass_2016_252_1_14_sbd.dat
Converted usf-bass-2016-252-1-15.sbd,usf-bass-2016-252-1-15.tbd to usf_bass_2016_252_1_15_sbd.dat
Converted usf-bass-2016-252-1-16.sbd,usf-bass-2016-252-1-16.tbd to usf_bass_2016_252_1_16_sbd.dat
Converted usf-bass-2016-252-1-17.sbd,usf-bass-2016-252-1-17.tbd to usf_bass_2016_252_1_17_sbd.dat
Converted usf-bass-2016-252-1-18.sbd,usf-bass-2016-252-1-18.tbd to usf_bass_2016_252_1_18_sbd.dat
Converted usf-bass-2016-252-1-19.sbd,usf-bass-2016-252-1-19.tbd to usf_bass_2016_252_1_19_sbd.dat
Converted usf-bass-2016-252-1-1.sbd,usf-bass-2016-252-1-1.tbd to usf_bass_2016_252_1_1_sbd.dat
Converted usf-bass-2016-252-1-20.sbd,usf-bass-2016-252-1-20.tbd to usf_bass_2016_252_1_20_sbd.dat
Converted usf-bass-2016-252-1-21.sbd,usf-bass-2016-252-1-21.tbd to usf_bass_2016_252_1_21_sbd.dat
Converted usf-bass-2016-252-1-22.sbd,usf-bass-2016-252-1-22.tbd to usf_bass_2016_252_1_22_sbd.dat
Converted usf-bass-2016-252-1-23.sbd,usf-bass-2016-252-1-23.tbd to usf_bass_2016_252_1_23_sbd.dat
Converted usf-bass-2016-252-1-24.sbd,usf-bass-2016-252-1-24.tbd to usf_bass_2016_252_1_24_sbd.dat
Converted usf-bass-2016-252-1-2.sbd,usf-bass-2016-252-1-2.tbd to usf_bass_2016_252_1_2_sbd.dat
Converted usf-bass-2016-252-1-3.sbd,usf-bass-2016-252-1-3.tbd to usf_bass_2016_252_1_3_sbd.dat
Converted usf-bass-2016-252-1-4.sbd,usf-bass-2016-252-1-4.tbd to usf_bass_2016_252_1_4_sbd.dat
Converted usf-bass-2016-252-1-5.sbd,usf-bass-2016-252-1-5.tbd to usf_bass_2016_252_1_5_sbd.dat
Converted usf-bass-2016-252-1-6.sbd,usf-bass-2016-252-1-6.tbd to usf_bass_2016_252_1_6_sbd.dat
Converted usf-bass-2016-252-1-7.sbd,usf-bass-2016-252-1-7.tbd to usf_bass_2016_252_1_7_sbd.dat
Converted usf-bass-2016-252-1-8.sbd,usf-bass-2016-252-1-8.tbd to usf_bass_2016_252_1_8_sbd.dat
Converted usf-bass-2016-252-1-9.sbd,usf-bass-2016-252-1-9.tbd to usf_bass_2016_252_1_9_sbd.dat
Converted usf-bass-2016-253-0-0.sbd,usf-bass-2016-253-0-0.tbd to usf_bass_2016_253_0_0_sbd.dat
Converted usf-bass-2016-253-0-1.sbd,usf-bass-2016-253-0-1.tbd to usf_bass_2016_253_0_1_sbd.dat
Converted usf-bass-2016-253-0-2.sbd,usf-bass-2016-253-0-2.tbd to usf_bass_2016_253_0_2_sbd.dat
Converted usf-bass-2016-253-0-3.sbd,usf-bass-2016-253-0-3.tbd to usf_bass_2016_253_0_3_sbd.dat
Converted usf-bass-2016-253-0-4.sbd,usf-bass-2016-253-0-4.tbd to usf_bass_2016_253_0_4_sbd.dat
Converted usf-bass-2016-253-0-5.sbd,usf-bass-2016-253-0-5.tbd to usf_bass_2016_253_0_5_sbd.dat
Converted usf-bass-2016-253-0-6.sbd,usf-bass-2016-253-0-6.tbd to usf_bass_2016_253_0_6_sbd.dat
###Markdown
What does the ASCII file look like?
###Code
ascii_file = merge_results[0]['ascii']
!cat $ascii_file
###Output
dbd_label: DBD_ASC(dinkum_binary_data_ascii)file
encoding_ver: 2
num_ascii_tags: 14
all_sensors: 0
filename: usf-bass-2016-252-1-0
the8x3_filename: 02470000
filename_extension: sbd
filename_label: usf-bass-2016-252-1-0-sbd(02470000)
mission_name: SLOPE.MI
fileopen_time: Fri_Sep__9_13:40:04_2016
sensors_per_cycle: 33
num_label_lines: 3
num_segments: 1
segment_filename_0: usf-bass-2016-252-1-0
c_heading c_wpt_lat m_altitude m_avg_speed m_ballast_pumped m_battery m_battpos m_depth m_depth_rate m_gps_lat m_gps_lon m_heading m_lat m_leakdetect_voltage m_lon m_mission_avg_speed_climbing m_mission_avg_speed_diving m_pitch m_present_time m_roll m_vacuum m_vehicle_temp m_water_depth m_water_vx m_water_vy sci_bbfl2s_bb_scaled sci_bbfl2s_cdom_scaled sci_bbfl2s_chlor_scaled sci_m_present_time sci_oxy3835_oxygen sci_water_cond sci_water_pressure sci_water_temp
rad lat m m/s cc volts in m m/s lat lon rad lat volts lon m/s m/s rad timestamp rad inHg degC m m/s m/s nodim ppb ug/l timestamp nodim s/m bar degc
4 8 4 4 4 4 4 4 4 8 8 4 8 4 8 4 4 4 8 4 4 4 4 4 4 4 4 4 8 4 4 4 4
0 0 0 0.246461 229.801 15.7242 0.950998 0.452354 -0.00340392 2821.1215 -8017.0038 1.46085 2821.12150001597 2.49734 -8017.00379999988 -0.189884 -0.251538 0.0130771 1473428360.61066 -0.198457 8.8256 29.9688 -1 0 0 NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.918535 0.187865 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428412.19098 NaN NaN NaN -1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.246208 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428426.16879 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN 0 0 NaN NaN NaN NaN NaN NaN 2821.1215 -8017.0038 NaN 2821.12150001597 NaN -8017.00379999988 NaN NaN NaN 1473428431.28223 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0595101 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428452.2485 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0128355 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428467.69785 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.168417 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428472.87552 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN 228.013 NaN NaN 0.246208 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428478.17279 0.0124877 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.0287601 0.40568 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428483.4783 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428484.49551 NaN NaN NaN NaN NaN NaN 0 0 0 1473428484.49551 179.99 0 0 0
NaN NaN NaN NaN NaN NaN NaN 0.444575 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428488.82791 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.58071 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428494.19019 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.518477 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428499.53586 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN 2830 NaN NaN NaN NaN NaN 0.222871 NaN 2821.1311 -8017.0297 NaN 2821.13110001597 NaN -8017.02969999988 NaN NaN NaN 1473428504.59827 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.370674 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428509.72824 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428513.27548 NaN NaN NaN NaN NaN NaN 0.000710932 1.2064 0.192 1473428513.27548 179.99 0 0 0
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428514.44083 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428514.44083 NaN 5.67787 0.023 27.5135
NaN NaN NaN NaN NaN NaN NaN 0.425128 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428514.69376 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428516.79086 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428516.79086 NaN 5.69567 0.027 27.5172
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428519.09698 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428519.09698 NaN 5.69597 0.035 27.5365
NaN NaN NaN NaN NaN NaN NaN 0.479581 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428519.86343 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428521.41492 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428521.41492 NaN 5.69631 0.056 27.534
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428523.63354 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428523.63354 NaN 5.6953 0.07 27.5086
NaN NaN NaN NaN NaN NaN NaN 0.806303 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428524.91858 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428526.04126 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428526.04126 NaN 5.67278 0.084 27.4894
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428529.59787 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428529.59787 NaN 5.64021 0.107 27.5028
NaN NaN NaN NaN NaN NaN NaN 0.907432 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428529.79099 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428531.65271 NaN NaN NaN NaN NaN NaN 0.000834746 0.8352 0.2304 1473428531.65271 180.15 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428533.74435 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428533.74435 NaN 5.63605 0.128 27.4764
NaN NaN NaN NaN NaN NaN NaN 1.16414 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428534.67154 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428538.00861 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428538.00861 NaN 5.63874 0.147 27.458
NaN NaN NaN NaN NaN NaN NaN 1.46364 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428539.55307 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428542.17761 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428542.17761 NaN 5.64294 0.164 27.4483
NaN NaN NaN NaN NaN NaN 0.932307 1.68923 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428544.42581 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428545.58829 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428545.58829 NaN 5.6427 0.185 27.4467
NaN NaN NaN NaN NaN NaN NaN 1.89149 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428549.30762 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428549.91537 NaN NaN NaN NaN NaN NaN 0.000770842 0.7424 0.1792 1473428549.91537 180.06 5.64118 0.205 27.443
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428554.05774 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428554.05774 NaN 5.64134 0.227 27.4375
NaN NaN NaN NaN NaN NaN NaN 2.19098 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428554.1907 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428556.12717 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428556.12717 NaN 5.64262 0.236 27.4355
NaN NaN NaN NaN NaN NaN NaN 2.69273 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428559.06427 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428560.28629 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428560.28629 NaN 5.64297 0.255 27.4311
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428563.64142 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428563.64142 NaN 5.64207 0.277 27.4302
NaN NaN NaN NaN NaN NaN NaN 2.95722 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428563.67258 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428568.03708 NaN NaN NaN NaN NaN NaN 0.000774836 0.6496 0.2304 1473428568.03708 180.11 5.64258 0.299 27.4287
NaN NaN NaN NaN NaN NaN NaN 3.13614 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428568.28598 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428572.28632 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428572.28632 NaN 5.645 0.319 27.4238
NaN NaN NaN NaN NaN NaN NaN 3.40841 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428572.89859 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428575.49786 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428575.49786 NaN 5.64494 0.34 27.4225
NaN NaN NaN NaN NaN NaN NaN 3.599 NaN NaN NaN NaN 2821.13057997066 NaN -8017.02489220558 NaN NaN NaN 1473428577.49075 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428579.60822 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428579.60822 NaN 5.64481 0.361 27.4208
NaN NaN NaN NaN NaN NaN NaN 3.84015 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428582.10736 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428583.78879 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428583.78879 NaN 5.64536 0.381 27.4185
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428585.96313 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428585.96313 NaN 5.64627 0.392 27.4182
NaN NaN 21.5104 NaN NaN NaN NaN 4.08519 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428586.7146 NaN NaN NaN 25.5956 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428587.02817 NaN NaN NaN NaN NaN NaN 0.000786818 0.5568 0.2176 1473428587.02817 180 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428590.42468 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428590.42468 NaN 5.64739 0.412 27.417
NaN NaN NaN NaN NaN NaN NaN 4.32246 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428591.32504 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428593.78668 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428593.78668 NaN 5.64716 0.432 27.4142
NaN NaN NaN NaN NaN NaN NaN 4.54416 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428595.93347 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428597.9715 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428597.9715 NaN 5.64787 0.453 27.4163
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428600.06396 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428600.06396 NaN 5.64849 0.465 27.4149
NaN NaN NaN NaN -198.424 NaN NaN 4.80087 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428600.55002 0.0237333 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428604.14093 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428604.14093 NaN 5.64972 0.484 27.4156
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428605.14731 NaN NaN NaN NaN NaN NaN 0.000774836 1.1136 0.2688 1473428605.14731 179.84 NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.936242 5.00313 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428605.14841 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428608.37436 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428608.37436 NaN 5.65029 0.506 27.4129
NaN NaN NaN NaN NaN NaN NaN 5.25595 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428609.74396 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428611.4939 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428611.4939 NaN 5.65081 0.524 27.4104
NaN NaN NaN NaN NaN NaN NaN 5.46598 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428614.33276 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428615.60056 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428615.60056 NaN 5.65142 0.548 27.4093
NaN NaN NaN NaN NaN NaN NaN 5.7188 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428618.93689 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428619.77155 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428619.77155 NaN 5.65229 0.568 27.4085
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428621.95804 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428621.95804 NaN 5.6526 0.576 27.4082
NaN NaN NaN NaN NaN NaN NaN 5.93662 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428623.52499 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428624.16507 NaN NaN NaN NaN NaN NaN 0.000830752 0.5568 0.256 1473428624.16507 179.76 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428626.4144 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428626.4144 NaN 5.65294 0.597 27.4077
NaN NaN NaN NaN NaN NaN NaN 6.18944 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428628.11267 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428629.77213 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428629.77213 NaN 5.65378 0.615 27.4081
NaN NaN NaN NaN NaN NaN NaN 6.47338 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428632.70032 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428634.03946 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428634.03946 NaN 5.6545 0.639 27.407
NaN NaN NaN NaN NaN NaN NaN 6.70675 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428637.28668 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428638.09018 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428638.09018 NaN 5.65526 0.661 27.4069
NaN NaN NaN NaN NaN NaN NaN 6.94012 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428641.88058 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428642.15778 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428642.15778 NaN 5.65572 0.68 27.4059
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428643.28085 NaN NaN NaN NaN NaN NaN 0.000806788 0.2784 0.2816 1473428643.28085 179.62 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428644.39185 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428644.39185 NaN 5.65596 0.692 27.4039
NaN NaN NaN NaN NaN NaN NaN 7.15794 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428646.47479 NaN NaN NaN 25.5621 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428647.75806 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428647.75806 NaN 5.65657 0.715 27.4031
NaN NaN NaN NaN NaN NaN NaN 7.3952 NaN NaN NaN NaN 2821.13075031956 NaN -8017.02291616793 NaN NaN NaN 1473428651.07559 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428652.12738 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428652.12738 NaN 5.65714 0.734 27.4019
NaN NaN NaN NaN NaN NaN NaN 7.67136 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428655.66843 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428656.23761 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428656.23761 NaN 5.65763 0.756 27.4015
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428658.28024 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428658.28024 NaN 5.65788 0.766 27.4011
NaN NaN 18.4042 NaN NaN NaN NaN 7.86583 0.0439298 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428660.25714 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428661.47556 NaN NaN NaN NaN NaN NaN 0.000830752 0.8352 0.2688 1473428661.47556 179.61 5.65844 0.788 27.4013
NaN NaN NaN NaN NaN NaN NaN 8.08754 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428664.84686 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428665.75806 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428665.75806 NaN 5.65903 0.809 27.3995
NaN NaN NaN NaN NaN NaN 0.940669 8.35592 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428669.43631 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428669.90594 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428669.90594 NaN 5.6595 0.832 27.3988
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428673.97763 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428673.97763 NaN 5.65984 0.852 27.3986
NaN NaN NaN NaN NaN NaN NaN 8.63208 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428674.02298 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428678.10028 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428678.10028 NaN 5.66042 0.873 27.3955
NaN NaN NaN NaN NaN NaN NaN 8.83044 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428678.60883 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428680.15216 NaN NaN NaN NaN NaN NaN 0.000726908 0.5568 0.2944 1473428680.15216 179.54 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428682.24142 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428682.24142 NaN 5.66094 0.899 27.3954
NaN NaN NaN NaN NaN NaN NaN 9.09882 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428683.20056 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428685.48285 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428685.48285 NaN 5.66118 0.916 27.3954
NaN NaN NaN NaN NaN NaN NaN 9.34386 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428687.78989 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428689.57947 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428689.57947 NaN 5.6616 0.937 27.3927
NaN NaN NaN NaN NaN NaN NaN 9.58113 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428692.4007 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428693.63168 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428693.63168 NaN 5.662 0.957 27.3925
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428695.66382 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428695.66382 NaN 5.66222 0.97 27.3923
NaN NaN NaN NaN NaN NaN NaN 9.89618 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428697.02243 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428698.88278 NaN NaN NaN NaN NaN NaN 0.00196505 0.5568 0.2816 1473428698.88278 179.49 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428699.89529 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428699.89529 NaN 5.66265 0.989 27.3917
NaN NaN NaN NaN NaN NaN NaN 9.93896 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428701.61066 NaN NaN NaN 25.812 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428704.01031 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428704.01031 NaN 5.66303 1.013 27.3914
NaN NaN NaN NaN NaN NaN NaN 10.3513 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428706.21323 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428706.27457 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428706.27457 NaN 5.66327 1.025 27.3902
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428708.34531 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428708.34531 NaN 5.66349 1.036 27.3903
NaN NaN NaN NaN NaN NaN NaN 10.4174 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428710.79794 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428711.50339 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428711.50339 NaN 5.66376 1.052 27.3908
NaN NaN NaN NaN NaN NaN NaN 10.8063 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428715.38651 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428715.61343 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428715.61343 NaN 5.66419 1.078 27.3888
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429444.13599 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429444.13599 NaN 5.68712 0.984 27.3961
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429446.46619 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429446.46619 NaN 5.68751 0.959 27.3971
NaN NaN NaN NaN NaN NaN NaN 9.56168 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429448.60583 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429449.94296 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429449.94296 NaN 5.68794 0.91 27.3988
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429452.13477 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429452.13477 NaN 5.68831 0.883 27.3997
NaN NaN NaN NaN NaN NaN NaN 8.91601 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429453.22153 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429454.45193 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429454.45193 NaN 5.68831 0.857 27.3997
NaN NaN NaN NaN NaN NaN NaN 8.34814 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429457.83017 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429457.84586 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429457.84586 NaN 5.68881 0.806 27.4021
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429460.04694 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429460.04694 NaN 5.68906 0.781 27.4037
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429462.23514 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429462.23514 NaN 5.68921 0.754 27.404
NaN NaN NaN NaN NaN NaN NaN 7.7297 NaN NaN NaN NaN 2821.15935937784 NaN -8017.00804595665 NaN NaN NaN 1473429462.43909 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429464.53949 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429464.53949 NaN 5.68931 0.727 27.4048
NaN NaN NaN NaN NaN NaN NaN 7.19294 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429467.04929 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429467.9068 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429467.9068 NaN 5.68956 0.676 27.4063
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429470.04388 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429470.04388 NaN 5.6898 0.65 27.4075
NaN NaN NaN NaN NaN NaN NaN 6.59784 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429471.65921 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429472.23013 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429472.23013 NaN 5.68993 0.624 27.4091
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429474.55945 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429474.55945 NaN 5.69008 0.596 27.4103
NaN NaN NaN NaN NaN NaN NaN 6.00274 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429476.27039 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429477.98892 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429477.98892 NaN 5.69048 0.542 27.4134
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429480.18661 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429480.18661 NaN 5.69069 0.515 27.4159
NaN NaN NaN NaN NaN NaN NaN 5.38819 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429480.87964 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429482.56461 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429482.56461 NaN 5.69087 0.488 27.4168
NaN NaN NaN NaN NaN NaN NaN 4.7542 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429485.48505 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429485.99347 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429485.99347 NaN 5.69102 0.434 27.417
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429488.16776 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429488.16776 NaN 5.69108 0.406 27.4193
NaN NaN NaN NaN NaN NaN NaN 4.1202 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429490.09323 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429490.35413 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429490.35413 NaN 5.69124 0.381 27.4214
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429492.6601 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429492.6601 NaN 5.69149 0.354 27.4241
NaN NaN NaN NaN NaN NaN NaN 3.5251 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429494.70215 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429496.02606 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429496.02606 NaN 5.6922 0.299 27.434
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429498.16367 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429498.16367 NaN 5.69286 0.273 27.442
NaN NaN NaN NaN NaN NaN NaN 2.89888 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429499.31116 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429500.32556 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429500.32556 NaN 5.69338 0.246 27.4454
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429502.65271 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429502.65271 NaN 5.69371 0.219 27.4581
NaN NaN NaN NaN NaN NaN NaN 2.36212 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429503.9277 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429506.0192 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429506.0192 NaN 5.69848 0.165 27.5027
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429508.19385 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429508.19385 NaN 5.69897 0.14 27.5166
NaN NaN NaN NaN NaN NaN NaN 1.58421 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429508.7077 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429511.2684 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429511.2684 NaN 5.70133 0.119 27.5379
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429512.32449 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429512.32449 178.11 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.985223 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429513.43961 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429513.60553 NaN NaN NaN NaN NaN NaN 0.000726908 0.464 0.1792 1473429513.60553 NaN 5.7028 0.096 27.5385
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429517.66168 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429517.66168 NaN 5.70237 0.089 27.5381
NaN NaN NaN NaN NaN NaN NaN 0.817972 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429518.44119 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429521.76654 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429521.76654 NaN 5.70216 0.09 27.5393
NaN NaN NaN NaN NaN NaN NaN 0.829641 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429523.33423 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429525.89575 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429525.89575 NaN 5.70228 0.106 27.5497
NaN NaN NaN NaN NaN NaN NaN 1.05134 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429528.19379 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429530.00281 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429530.00281 NaN 5.70292 0.123 27.5329
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429531.01666 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429531.01666 178.72 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429532.03638 NaN NaN NaN NaN NaN NaN 0.000682974 1.0208 0.2048 1473429532.03638 NaN 5.70295 0.131 27.5291
NaN NaN NaN NaN NaN NaN NaN 1.38585 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429533.3967 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429536.33057 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429536.33057 NaN 5.70222 0.15 27.5146
NaN NaN NaN NaN NaN NaN NaN 1.49475 NaN NaN NaN NaN 2821.16616993654 NaN -8017.00624776751 NaN NaN NaN 1473429538.60843 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429539.5553 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429539.5553 NaN 5.70185 0.169 27.5049
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429543.68158 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429543.68158 NaN 5.70149 0.192 27.4981
NaN NaN NaN NaN NaN NaN NaN 1.82148 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429543.82477 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429547.79324 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429547.79324 NaN 5.70115 0.211 27.4943
NaN NaN NaN NaN NaN NaN NaN 2.13653 NaN NaN NaN 0.530402 NaN NaN NaN NaN NaN -0.144176 1473429549.04248 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429549.86557 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429549.86557 178.8 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429550.98758 NaN NaN NaN NaN NaN NaN 0.000794806 0.928 0.1664 1473429550.98758 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429552.06073 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429552.06073 NaN 5.70096 0.233 27.4922
NaN NaN NaN NaN NaN NaN NaN 2.61494 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429554.26694 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429554.30841 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429554.30841 NaN 5.70097 0.244 27.4823
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429557.71628 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429557.71628 NaN 5.70091 0.266 27.4618
NaN NaN NaN NaN NaN NaN NaN 2.92611 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429558.86404 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429561.90515 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429561.90515 NaN 5.70078 0.287 27.4494
NaN NaN NaN NaN NaN NaN NaN 3.17115 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429563.45425 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429563.9631 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429563.9631 NaN 5.70075 0.297 27.4485
NaN NaN NaN NaN NaN NaN NaN 3.42397 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429568.05075 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429568.11896 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429568.11896 179.27 5.70075 0.319 27.4447
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429569.14853 NaN NaN NaN NaN NaN NaN 0.000826758 0.7424 0.2432 1473429569.14853 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429570.19104 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429570.19104 NaN 5.70075 0.33 27.4425
NaN NaN NaN NaN NaN NaN NaN 3.64178 0.0484787 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429572.64456 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429573.59732 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429573.59732 NaN 5.70075 0.351 27.4378
NaN NaN NaN NaN NaN NaN NaN 3.87905 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429577.24283 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429577.95917 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429577.95917 NaN 5.70069 0.372 27.4355
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429580.20569 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429580.20569 NaN 5.70069 0.384 27.4338
NaN NaN 21.5592 NaN NaN NaN NaN 4.08519 NaN NaN NaN NaN NaN NaN NaN NaN NaN -0.172267 1473429581.83511 NaN NaN NaN 25.6444 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429582.43683 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429582.43683 NaN 5.70072 0.395 27.4335
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429585.82318 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429585.82318 NaN 5.70082 0.418 27.43
NaN NaN NaN NaN NaN NaN NaN 4.3808 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429586.43915 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429586.97433 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429586.97433 179.32 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429588.03674 NaN NaN NaN NaN NaN NaN 0.000870692 0.3712 0.2176 1473429588.03674 NaN 5.70078 0.43 27.429
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429590.28949 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429590.28949 NaN 5.70072 0.441 27.4282
NaN NaN NaN NaN NaN NaN NaN 4.61417 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429591.02597 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429593.70193 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429593.70193 NaN 5.70054 0.463 27.426
NaN NaN NaN NaN NaN NaN NaN 4.87866 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429595.61444 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429597.88562 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429597.88562 NaN 5.70045 0.487 27.4249
NaN NaN NaN NaN NaN NaN NaN 5.11981 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429600.20117 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429602.02325 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429602.02325 NaN 5.7002 0.508 27.4229
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429604.06583 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429604.06583 NaN 5.69977 0.521 27.4222
NaN NaN NaN NaN NaN NaN NaN 5.36874 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429604.79105 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429605.07736 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429605.07736 179.33 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429607.16962 NaN NaN NaN NaN NaN NaN 0.000930602 0.5568 0.2176 1473429607.16962 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429608.33105 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429608.33105 NaN 5.69959 0.542 27.4334
NaN NaN NaN NaN NaN NaN NaN 5.6449 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429609.3848 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429611.49329 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429611.49329 NaN 5.69895 0.562 27.4257
NaN NaN NaN NaN NaN NaN NaN 5.9055 NaN NaN NaN NaN 2821.16616993654 NaN -8017.00624776751 NaN NaN -0.174884 1473429613.96741 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429615.64175 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429615.64175 NaN 5.69904 0.586 27.4362
NaN NaN NaN NaN NaN NaN NaN 6.09609 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429618.55765 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429619.76044 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429619.76044 NaN 5.6991 0.611 27.4231
NaN NaN NaN NaN NaN NaN NaN 6.69897 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429623.14609 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429624.0466 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429624.0466 179.37 5.69913 0.638 27.4207
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429625.17685 NaN NaN NaN NaN NaN NaN 0.000786818 0.6496 0.2432 1473429625.17685 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429626.27789 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429626.27789 NaN 5.69922 0.647 27.4176
NaN NaN NaN NaN NaN NaN NaN 6.90123 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429627.99146 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429629.72098 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429629.72098 NaN 5.69916 0.664 27.4144
NaN NaN NaN NaN NaN NaN NaN 7.07626 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429632.7074 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429633.90378 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429633.90378 NaN 5.69916 0.671 27.418
NaN NaN NaN NaN NaN NaN NaN 7.05292 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429637.431 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429638.25919 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429638.25919 NaN 5.69953 0.669 27.4124
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429641.49829 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429641.49829 NaN 5.69864 0.65 27.4101
NaN NaN NaN NaN NaN NaN NaN 6.82343 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429642.15961 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429642.68805 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429642.68805 179.36 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429643.84424 NaN NaN NaN NaN NaN NaN 0.000930602 0.6496 0.2688 1473429643.84424 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429646.24643 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429646.24643 NaN 5.69439 0.608 27.4105
NaN NaN NaN NaN NaN NaN NaN 6.30613 NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.492577 1473429646.76523 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429648.30573 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429648.30573 NaN 5.69347 0.581 27.413
NaN NaN NaN NaN NaN NaN NaN 5.65657 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429651.37711 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429651.53183 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429651.53183 NaN 5.69304 0.529 27.4153
NaN NaN NaN NaN NaN NaN NaN 5.13537 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429655.98798 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429656.23886 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429656.23886 NaN 5.69289 0.478 27.4172
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429658.31262 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429658.31262 NaN 5.69277 0.45 27.4195
NaN NaN NaN NaN NaN NaN NaN 4.44303 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429660.60373 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429661.47842 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429661.47842 NaN 5.69301 0.395 27.4234
NaN NaN NaN NaN NaN NaN NaN 3.85182 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429665.21356 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429665.69717 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429665.69717 NaN 5.69369 0.338 27.4339
NaN NaN NaN NaN NaN NaN NaN 3.21004 NaN NaN NaN 0.473328 NaN NaN NaN NaN NaN NaN 1473429669.8215 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429670.31308 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429670.31308 NaN 5.69407 0.282 27.4385
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429673.50735 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429673.50735 NaN 5.69714 0.228 27.4789
NaN NaN NaN NaN NaN NaN NaN 2.57994 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429674.42975 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429677.77335 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429677.77335 NaN 5.70093 0.171 27.5224
NaN NaN NaN NaN NaN NaN NaN 2.01206 NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.545692 1473429679.026 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429680.02524 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429680.02524 NaN 5.70207 0.145 27.5238
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429683.50858 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429683.50858 NaN 5.7028 0.091 27.5328
NaN NaN NaN NaN NaN NaN NaN 1.28472 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429683.63525 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429687.72357 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429687.72357 NaN 5.70326 0.045 27.5404
NaN NaN NaN NaN 228.112 NaN 0.948046 0.48736 NaN NaN NaN NaN 2821.17141299325 NaN -8017.00552900455 NaN NaN NaN 1473429688.22528 -0.0579429 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429690.05981 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429690.05981 NaN 5.70445 0.025 27.5497
NaN NaN NaN NaN NaN NaN NaN 0.0867369 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429693.0274 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429693.49521 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429693.49521 NaN 5.7047 0.01 27.5487
NaN NaN NaN NaN NaN NaN NaN 0.0750683 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429697.76962 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429702.51135 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0322833 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429707.25961 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0478414 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429712.04974 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.102295 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429716.80505 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.106185 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429721.55743 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0711787 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429726.30743 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0556205 NaN 2821.2475 -8017.0383 NaN NaN NaN NaN NaN NaN NaN 1473429731.06155 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.00116686 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429735.92755 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429740.77383 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0206146 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429745.61465 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.928864 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429750.44839 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429755.28848 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429760.12601 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0556205 NaN NaN NaN NaN 2821.24890001598 NaN -8017.04209999988 NaN NaN NaN 1473429764.95984 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0322833 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429769.80423 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429774.62967 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429779.48779 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0167251 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429784.6926 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.31622 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429789.90195 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.49125 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429795.01669 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.347337 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429800.12466 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.421238 NaN 2821.2497 -8017.0442 NaN NaN NaN NaN NaN NaN NaN 1473429805.23203 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN 229.553 NaN NaN 0.300662 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429810.33975 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.948538 0.798524 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429815.44958 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.891873 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429820.56409 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.362895 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429825.66925 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.790745 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429830.80325 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.413459 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429836.12595 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.327889 NaN NaN NaN NaN 2821.24970001598 NaN -8017.04419999989 NaN NaN NaN 1473429841.43347 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.409569 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429846.5415 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.518477 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429851.853 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.467913 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429857.16873 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.40568 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429862.48941 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.374564 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429871.99411 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.931077 0.693506 0.048332 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429877.09753 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
###Markdown
SlocumReader

Load the ASCII file into a pandas DataFrame
###Code
import json
from gutils.slocum import SlocumReader
slocum_data = SlocumReader(ascii_file)
print('Mode: ', slocum_data.mode)
print('ASCII: ', slocum_data.ascii_file)
print('Headers: ', json.dumps(slocum_data.metadata, indent=4))
slocum_data.data.columns.tolist()
slocum_data.data.head(20)[[
'sci_m_present_time',
'm_depth',
'm_gps_lat',
'm_gps_lon',
'sci_water_pressure',
'sci_water_temp'
]]
###Output
_____no_output_____
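###Markdown
The merged data is sparse: each row only carries the sensors that reported at that instant, so most cells are NaN. A minimal pandas sketch (using the column names shown above) of keeping just the rows that contain a CTD reading:
###Code
# Keep only the rows where the CTD actually reported (sci_water_temp present)
ctd_rows = slocum_data.data.dropna(subset=['sci_water_temp'])
ctd_rows[['sci_m_present_time', 'sci_water_pressure', 'sci_water_temp']].head()
###Output
_____no_output_____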
###Markdown
Standardize into a glider-independent DataFrame

* Lossless (adds columns)
* Common axis names
* Common variable names used in computations of density, salinity, etc.
* Interpolates GPS coordinates
* Converts to decimal degrees
* Calculates depth from pressure if available
* Calculates pressure from depth if need be
* Calculates density and salinity

A rough sketch of two of these conversions follows the next cell.
###Code
standard = slocum_data.standardize()
# Which columns were added?
set(standard.columns).difference(slocum_data.data.columns)
standard.head(20)[[
't',
'z',
'y',
'x',
'pressure',
'temperature'
]]
###Output
_____no_output_____
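###Markdown
A rough illustration of two of the conversions listed above, assuming the DDMM.MMMM GPS encoding and the `bar` pressure units shown in the ASCII header. This is only a sketch; the actual GUTILS code may use more precise seawater routines.
###Code
# Hypothetical helpers illustrating the conversions, not the GUTILS implementation.

def nmea_to_decimal_degrees(value):
    """Convert DDMM.MMMM (as in m_gps_lat / m_gps_lon) to decimal degrees."""
    degrees = int(value / 100)       # int() truncates toward zero, so negative longitudes work too
    minutes = value - degrees * 100
    return degrees + minutes / 60

def approximate_depth(pressure_bar):
    """Very rough depth estimate: 1 bar ~ 10 dbar ~ 10 m of seawater."""
    return pressure_bar * 10.0

print(nmea_to_decimal_degrees(2821.1215))   # ~28.352 (from m_gps_lat)
print(nmea_to_decimal_degrees(-8017.0038))  # ~-80.283 (from m_gps_lon)
print(approximate_depth(0.452))             # ~4.5 m
###Output
_____no_output_____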
###Markdown
Converting Slocum data to a standard DataFrame
###Code
from IPython.lib.pretty import pprint
import logging
logger = logging.getLogger('gutils')
logger.handlers = [logging.StreamHandler()]
logger.setLevel(logging.DEBUG)
import sys
from pathlib import Path
# Just a hack to be able to `import gutils`
sys.path.append(str(Path('.').absolute().parent.parent))
binary_folder = Path('.').absolute().parent.parent / 'gutils' / 'tests' / 'resources' / 'slocum' / 'real' / 'binary'
bass_binary = binary_folder / 'bass-20160909T1733'
!ls $bass_binary
###Output
8e6d1b16.cac usf-bass-2016-252-1-23.tbd
991560ed.cac usf-bass-2016-252-1-24.sbd
da485e91.cac usf-bass-2016-252-1-24.tbd
usf-bass-2016-252-0-0.tbd usf-bass-2016-252-1-2.sbd
usf-bass-2016-252-1-0.sbd usf-bass-2016-252-1-2.tbd
usf-bass-2016-252-1-0.tbd usf-bass-2016-252-1-3.sbd
usf-bass-2016-252-1-10.sbd usf-bass-2016-252-1-3.tbd
usf-bass-2016-252-1-10.tbd usf-bass-2016-252-1-4.sbd
usf-bass-2016-252-1-11.sbd usf-bass-2016-252-1-4.tbd
usf-bass-2016-252-1-11.tbd usf-bass-2016-252-1-5.sbd
usf-bass-2016-252-1-12.sbd usf-bass-2016-252-1-5.tbd
usf-bass-2016-252-1-12.tbd usf-bass-2016-252-1-6.sbd
usf-bass-2016-252-1-13.sbd usf-bass-2016-252-1-6.tbd
usf-bass-2016-252-1-13.tbd usf-bass-2016-252-1-7.sbd
usf-bass-2016-252-1-14.sbd usf-bass-2016-252-1-7.tbd
usf-bass-2016-252-1-14.tbd usf-bass-2016-252-1-8.sbd
usf-bass-2016-252-1-15.sbd usf-bass-2016-252-1-8.tbd
usf-bass-2016-252-1-15.tbd usf-bass-2016-252-1-9.sbd
usf-bass-2016-252-1-16.sbd usf-bass-2016-252-1-9.tbd
usf-bass-2016-252-1-16.tbd usf-bass-2016-253-0-0.sbd
usf-bass-2016-252-1-17.sbd usf-bass-2016-253-0-0.tbd
usf-bass-2016-252-1-17.tbd usf-bass-2016-253-0-1.sbd
usf-bass-2016-252-1-18.sbd usf-bass-2016-253-0-1.tbd
usf-bass-2016-252-1-18.tbd usf-bass-2016-253-0-2.sbd
usf-bass-2016-252-1-19.sbd usf-bass-2016-253-0-2.tbd
usf-bass-2016-252-1-19.tbd usf-bass-2016-253-0-3.sbd
usf-bass-2016-252-1-1.sbd usf-bass-2016-253-0-3.tbd
usf-bass-2016-252-1-1.tbd usf-bass-2016-253-0-4.sbd
usf-bass-2016-252-1-20.sbd usf-bass-2016-253-0-4.tbd
usf-bass-2016-252-1-20.tbd usf-bass-2016-253-0-5.sbd
usf-bass-2016-252-1-21.sbd usf-bass-2016-253-0-5.tbd
usf-bass-2016-252-1-21.tbd usf-bass-2016-253-0-6.sbd
usf-bass-2016-252-1-22.sbd usf-bass-2016-253-0-6.tbd
usf-bass-2016-252-1-22.tbd usf-bass-2016-253-0-7.tbd
usf-bass-2016-252-1-23.sbd usf-bass-2016-253-0-8.tbd
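###Markdown
The flight (`.sbd`) and science (`.tbd`) files listed above travel in pairs that share a file stem. A small sketch, independent of GUTILS, of grouping them by stem to see which pairs are complete:
###Code
from collections import defaultdict
from pathlib import Path

# Group the binary files by stem; a complete pair has both .sbd and .tbd halves
pairs = defaultdict(list)
for f in sorted(Path(bass_binary).iterdir()):
    if f.suffix in ('.sbd', '.tbd'):
        pairs[f.stem].append(f.suffix)

for stem, suffixes in list(pairs.items())[:5]:
    print(stem, suffixes)
###Output
_____no_output_____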
###Markdown
SlocumMerger

Convert binary (*.bd) files into ASCII

Merge a subset of binary files

If you know the flight/science pair you wish to merge:
###Code
import tempfile
from gutils.slocum import SlocumMerger
ascii_output = tempfile.mkdtemp()
merger = SlocumMerger(
str(bass_binary),
ascii_output,
globs=[
'usf-bass-2016-252-1-12.sbd',
'usf-bass-2016-252-1-12.tbd'
]
)
# The merge results contain a reference to the new produced ASCII file
# as well as which binary files were involved in its creation
merge_results = merger.convert()
###Output
Converted usf-bass-2016-252-1-12.sbd,usf-bass-2016-252-1-12.tbd to usf_bass_2016_252_1_12_sbd.dat
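###Markdown
`merge_results` is a list of dictionaries. The `'ascii'` key (used later in this notebook) points at the produced ASCII file; another key records which binary files went into it, though its exact name is not shown here. A quick look at the first result:
###Code
# Inspect the structure of the first merge result; only the 'ascii' key is
# relied upon elsewhere in this notebook.
first = merge_results[0]
print(first.keys())
print(first['ascii'])
###Output
_____no_output_____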
###Markdown
Merge all files in a directory

This matches science and flight files together.
###Code
merger = SlocumMerger(
str(bass_binary),
ascii_output,
)
# The merge results contain a reference to the new produced ASCII file as well as what binary files went into it.
merge_results = merger.convert()
###Output
Converted usf-bass-2016-252-1-0.sbd,usf-bass-2016-252-1-0.tbd to usf_bass_2016_252_1_0_sbd.dat
Converted usf-bass-2016-252-1-10.sbd,usf-bass-2016-252-1-10.tbd to usf_bass_2016_252_1_10_sbd.dat
Converted usf-bass-2016-252-1-11.sbd,usf-bass-2016-252-1-11.tbd to usf_bass_2016_252_1_11_sbd.dat
Converted usf-bass-2016-252-1-12.sbd,usf-bass-2016-252-1-12.tbd to usf_bass_2016_252_1_12_sbd.dat
Converted usf-bass-2016-252-1-13.sbd,usf-bass-2016-252-1-13.tbd to usf_bass_2016_252_1_13_sbd.dat
Converted usf-bass-2016-252-1-14.sbd,usf-bass-2016-252-1-14.tbd to usf_bass_2016_252_1_14_sbd.dat
Converted usf-bass-2016-252-1-15.sbd,usf-bass-2016-252-1-15.tbd to usf_bass_2016_252_1_15_sbd.dat
Converted usf-bass-2016-252-1-16.sbd,usf-bass-2016-252-1-16.tbd to usf_bass_2016_252_1_16_sbd.dat
Converted usf-bass-2016-252-1-17.sbd,usf-bass-2016-252-1-17.tbd to usf_bass_2016_252_1_17_sbd.dat
Converted usf-bass-2016-252-1-18.sbd,usf-bass-2016-252-1-18.tbd to usf_bass_2016_252_1_18_sbd.dat
Converted usf-bass-2016-252-1-19.sbd,usf-bass-2016-252-1-19.tbd to usf_bass_2016_252_1_19_sbd.dat
Converted usf-bass-2016-252-1-1.sbd,usf-bass-2016-252-1-1.tbd to usf_bass_2016_252_1_1_sbd.dat
Converted usf-bass-2016-252-1-20.sbd,usf-bass-2016-252-1-20.tbd to usf_bass_2016_252_1_20_sbd.dat
Converted usf-bass-2016-252-1-21.sbd,usf-bass-2016-252-1-21.tbd to usf_bass_2016_252_1_21_sbd.dat
Converted usf-bass-2016-252-1-22.sbd,usf-bass-2016-252-1-22.tbd to usf_bass_2016_252_1_22_sbd.dat
Converted usf-bass-2016-252-1-23.sbd,usf-bass-2016-252-1-23.tbd to usf_bass_2016_252_1_23_sbd.dat
Converted usf-bass-2016-252-1-24.sbd,usf-bass-2016-252-1-24.tbd to usf_bass_2016_252_1_24_sbd.dat
Converted usf-bass-2016-252-1-2.sbd,usf-bass-2016-252-1-2.tbd to usf_bass_2016_252_1_2_sbd.dat
Converted usf-bass-2016-252-1-3.sbd,usf-bass-2016-252-1-3.tbd to usf_bass_2016_252_1_3_sbd.dat
Converted usf-bass-2016-252-1-4.sbd,usf-bass-2016-252-1-4.tbd to usf_bass_2016_252_1_4_sbd.dat
Converted usf-bass-2016-252-1-5.sbd,usf-bass-2016-252-1-5.tbd to usf_bass_2016_252_1_5_sbd.dat
Converted usf-bass-2016-252-1-6.sbd,usf-bass-2016-252-1-6.tbd to usf_bass_2016_252_1_6_sbd.dat
Converted usf-bass-2016-252-1-7.sbd,usf-bass-2016-252-1-7.tbd to usf_bass_2016_252_1_7_sbd.dat
Converted usf-bass-2016-252-1-8.sbd,usf-bass-2016-252-1-8.tbd to usf_bass_2016_252_1_8_sbd.dat
Converted usf-bass-2016-252-1-9.sbd,usf-bass-2016-252-1-9.tbd to usf_bass_2016_252_1_9_sbd.dat
Converted usf-bass-2016-253-0-0.sbd,usf-bass-2016-253-0-0.tbd to usf_bass_2016_253_0_0_sbd.dat
Converted usf-bass-2016-253-0-1.sbd,usf-bass-2016-253-0-1.tbd to usf_bass_2016_253_0_1_sbd.dat
Converted usf-bass-2016-253-0-2.sbd,usf-bass-2016-253-0-2.tbd to usf_bass_2016_253_0_2_sbd.dat
Converted usf-bass-2016-253-0-3.sbd,usf-bass-2016-253-0-3.tbd to usf_bass_2016_253_0_3_sbd.dat
Converted usf-bass-2016-253-0-4.sbd,usf-bass-2016-253-0-4.tbd to usf_bass_2016_253_0_4_sbd.dat
Converted usf-bass-2016-253-0-5.sbd,usf-bass-2016-253-0-5.tbd to usf_bass_2016_253_0_5_sbd.dat
Converted usf-bass-2016-253-0-6.sbd,usf-bass-2016-253-0-6.tbd to usf_bass_2016_253_0_6_sbd.dat
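###Markdown
With every segment converted, one way to build a single DataFrame for the whole deployment is to run each produced ASCII file through `SlocumReader` and concatenate the per-segment frames. This is only a sketch and assumes the segments share compatible columns:
###Code
import pandas as pd

# Read each ASCII file produced by the full-directory merge and stack the
# per-segment DataFrames, sorted by the flight computer timestamp.
frames = [SlocumReader(r['ascii']).data for r in merge_results]
deployment = pd.concat(frames, ignore_index=True).sort_values('m_present_time')
print(len(deployment))
###Output
_____no_output_____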
###Markdown
What does the ASCII file look like?
###Code
ascii_file = merge_results[0]['ascii']
!cat $ascii_file
###Output
dbd_label: DBD_ASC(dinkum_binary_data_ascii)file
encoding_ver: 2
num_ascii_tags: 14
all_sensors: 0
filename: usf-bass-2016-252-1-0
the8x3_filename: 02470000
filename_extension: sbd
filename_label: usf-bass-2016-252-1-0-sbd(02470000)
mission_name: SLOPE.MI
fileopen_time: Fri_Sep__9_13:40:04_2016
sensors_per_cycle: 33
num_label_lines: 3
num_segments: 1
segment_filename_0: usf-bass-2016-252-1-0
c_heading c_wpt_lat m_altitude m_avg_speed m_ballast_pumped m_battery m_battpos m_depth m_depth_rate m_gps_lat m_gps_lon m_heading m_lat m_leakdetect_voltage m_lon m_mission_avg_speed_climbing m_mission_avg_speed_diving m_pitch m_present_time m_roll m_vacuum m_vehicle_temp m_water_depth m_water_vx m_water_vy sci_bbfl2s_bb_scaled sci_bbfl2s_cdom_scaled sci_bbfl2s_chlor_scaled sci_m_present_time sci_oxy3835_oxygen sci_water_cond sci_water_pressure sci_water_temp
rad lat m m/s cc volts in m m/s lat lon rad lat volts lon m/s m/s rad timestamp rad inHg degC m m/s m/s nodim ppb ug/l timestamp nodim s/m bar degc
4 8 4 4 4 4 4 4 4 8 8 4 8 4 8 4 4 4 8 4 4 4 4 4 4 4 4 4 8 4 4 4 4
0 0 0 0.246461 229.801 15.7242 0.950998 0.452354 -0.00340392 2821.1215 -8017.0038 1.46085 2821.12150001597 2.49734 -8017.00379999988 -0.189884 -0.251538 0.0130771 1473428360.61066 -0.198457 8.8256 29.9688 -1 0 0 NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.918535 0.187865 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428412.19098 NaN NaN NaN -1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.246208 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428426.16879 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN 0 0 NaN NaN NaN NaN NaN NaN 2821.1215 -8017.0038 NaN 2821.12150001597 NaN -8017.00379999988 NaN NaN NaN 1473428431.28223 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0595101 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428452.2485 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0128355 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428467.69785 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.168417 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428472.87552 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN 228.013 NaN NaN 0.246208 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428478.17279 0.0124877 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.0287601 0.40568 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428483.4783 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428484.49551 NaN NaN NaN NaN NaN NaN 0 0 0 1473428484.49551 179.99 0 0 0
NaN NaN NaN NaN NaN NaN NaN 0.444575 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428488.82791 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.58071 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428494.19019 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.518477 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428499.53586 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN 2830 NaN NaN NaN NaN NaN 0.222871 NaN 2821.1311 -8017.0297 NaN 2821.13110001597 NaN -8017.02969999988 NaN NaN NaN 1473428504.59827 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.370674 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428509.72824 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428513.27548 NaN NaN NaN NaN NaN NaN 0.000710932 1.2064 0.192 1473428513.27548 179.99 0 0 0
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428514.44083 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428514.44083 NaN 5.67787 0.023 27.5135
NaN NaN NaN NaN NaN NaN NaN 0.425128 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428514.69376 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428516.79086 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428516.79086 NaN 5.69567 0.027 27.5172
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428519.09698 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428519.09698 NaN 5.69597 0.035 27.5365
NaN NaN NaN NaN NaN NaN NaN 0.479581 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428519.86343 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428521.41492 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428521.41492 NaN 5.69631 0.056 27.534
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428523.63354 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428523.63354 NaN 5.6953 0.07 27.5086
NaN NaN NaN NaN NaN NaN NaN 0.806303 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428524.91858 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428526.04126 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428526.04126 NaN 5.67278 0.084 27.4894
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428529.59787 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428529.59787 NaN 5.64021 0.107 27.5028
NaN NaN NaN NaN NaN NaN NaN 0.907432 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428529.79099 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428531.65271 NaN NaN NaN NaN NaN NaN 0.000834746 0.8352 0.2304 1473428531.65271 180.15 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428533.74435 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428533.74435 NaN 5.63605 0.128 27.4764
NaN NaN NaN NaN NaN NaN NaN 1.16414 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428534.67154 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428538.00861 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428538.00861 NaN 5.63874 0.147 27.458
NaN NaN NaN NaN NaN NaN NaN 1.46364 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428539.55307 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428542.17761 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428542.17761 NaN 5.64294 0.164 27.4483
NaN NaN NaN NaN NaN NaN 0.932307 1.68923 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428544.42581 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428545.58829 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428545.58829 NaN 5.6427 0.185 27.4467
NaN NaN NaN NaN NaN NaN NaN 1.89149 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428549.30762 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428549.91537 NaN NaN NaN NaN NaN NaN 0.000770842 0.7424 0.1792 1473428549.91537 180.06 5.64118 0.205 27.443
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428554.05774 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428554.05774 NaN 5.64134 0.227 27.4375
NaN NaN NaN NaN NaN NaN NaN 2.19098 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428554.1907 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428556.12717 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428556.12717 NaN 5.64262 0.236 27.4355
NaN NaN NaN NaN NaN NaN NaN 2.69273 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428559.06427 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428560.28629 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428560.28629 NaN 5.64297 0.255 27.4311
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428563.64142 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428563.64142 NaN 5.64207 0.277 27.4302
NaN NaN NaN NaN NaN NaN NaN 2.95722 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428563.67258 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428568.03708 NaN NaN NaN NaN NaN NaN 0.000774836 0.6496 0.2304 1473428568.03708 180.11 5.64258 0.299 27.4287
NaN NaN NaN NaN NaN NaN NaN 3.13614 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428568.28598 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428572.28632 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428572.28632 NaN 5.645 0.319 27.4238
NaN NaN NaN NaN NaN NaN NaN 3.40841 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428572.89859 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428575.49786 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428575.49786 NaN 5.64494 0.34 27.4225
NaN NaN NaN NaN NaN NaN NaN 3.599 NaN NaN NaN NaN 2821.13057997066 NaN -8017.02489220558 NaN NaN NaN 1473428577.49075 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428579.60822 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428579.60822 NaN 5.64481 0.361 27.4208
NaN NaN NaN NaN NaN NaN NaN 3.84015 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428582.10736 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428583.78879 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428583.78879 NaN 5.64536 0.381 27.4185
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428585.96313 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428585.96313 NaN 5.64627 0.392 27.4182
NaN NaN 21.5104 NaN NaN NaN NaN 4.08519 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428586.7146 NaN NaN NaN 25.5956 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428587.02817 NaN NaN NaN NaN NaN NaN 0.000786818 0.5568 0.2176 1473428587.02817 180 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428590.42468 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428590.42468 NaN 5.64739 0.412 27.417
NaN NaN NaN NaN NaN NaN NaN 4.32246 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428591.32504 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428593.78668 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428593.78668 NaN 5.64716 0.432 27.4142
NaN NaN NaN NaN NaN NaN NaN 4.54416 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428595.93347 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428597.9715 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428597.9715 NaN 5.64787 0.453 27.4163
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428600.06396 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428600.06396 NaN 5.64849 0.465 27.4149
NaN NaN NaN NaN -198.424 NaN NaN 4.80087 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428600.55002 0.0237333 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428604.14093 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428604.14093 NaN 5.64972 0.484 27.4156
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428605.14731 NaN NaN NaN NaN NaN NaN 0.000774836 1.1136 0.2688 1473428605.14731 179.84 NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.936242 5.00313 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428605.14841 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428608.37436 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428608.37436 NaN 5.65029 0.506 27.4129
NaN NaN NaN NaN NaN NaN NaN 5.25595 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428609.74396 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428611.4939 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428611.4939 NaN 5.65081 0.524 27.4104
NaN NaN NaN NaN NaN NaN NaN 5.46598 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428614.33276 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428615.60056 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428615.60056 NaN 5.65142 0.548 27.4093
NaN NaN NaN NaN NaN NaN NaN 5.7188 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428618.93689 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428619.77155 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428619.77155 NaN 5.65229 0.568 27.4085
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428621.95804 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428621.95804 NaN 5.6526 0.576 27.4082
NaN NaN NaN NaN NaN NaN NaN 5.93662 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473428623.52499 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 22.0277 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429351.13263 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429352.03485 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429352.03485 NaN 5.66466 2.18 26.7467
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429355.50766 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429355.50766 NaN 5.63225 2.131 26.6511
NaN NaN NaN NaN NaN NaN NaN 21.3937 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429355.76236 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429359.86285 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429359.86285 NaN 5.6425 2.093 27.0508
NaN NaN NaN NaN NaN NaN NaN 21.0709 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429360.37491 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429362.09506 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429362.09506 NaN 5.65234 2.073 27.0944
NaN NaN NaN NaN NaN NaN NaN 20.6702 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429364.98654 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429365.50909 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429365.50909 NaN 5.65667 2.032 27.111
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429369.72723 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429369.72723 NaN 5.65773 1.976 27.1116
NaN NaN NaN NaN NaN NaN NaN 19.8768 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429370.18689 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429372.10626 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429372.10626 NaN 5.65804 1.951 27.1115
NaN NaN NaN NaN NaN NaN NaN 19.3633 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429374.79941 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429375.53036 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429375.53036 NaN 5.65855 1.901 27.1134
NaN NaN NaN NaN NaN NaN NaN 18.7216 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429379.41068 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429379.797 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429379.797 NaN 5.65879 1.844 27.1148
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429382.19254 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429382.19254 NaN 5.65919 1.816 27.1161
NaN NaN NaN NaN NaN NaN NaN 18.107 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429384.03336 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429385.55945 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429385.55945 NaN 5.65934 1.76 27.1168
NaN NaN NaN NaN NaN NaN NaN 17.473 NaN NaN NaN NaN 2821.15042246915 NaN -8017.00652032318 NaN NaN NaN 1473429388.64703 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429389.81747 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429389.81747 NaN 5.65955 1.707 27.1167
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429392.21298 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429392.21298 NaN 5.65967 1.675 27.1169
NaN NaN NaN NaN NaN NaN NaN 16.8585 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429393.2597 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429395.64862 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429395.64862 NaN 5.65976 1.624 27.1165
NaN NaN NaN NaN NaN NaN NaN 16.2361 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429397.86908 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429399.84213 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429399.84213 NaN 5.65991 1.566 27.1183
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429401.98074 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429401.98074 NaN 5.66013 1.537 27.1197
NaN NaN NaN NaN NaN NaN NaN 15.536 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429402.47736 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429404.27316 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429404.27316 NaN 5.66016 1.51 27.1188
NaN NaN NaN NaN NaN NaN NaN 14.9448 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429407.0943 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429407.64365 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429407.64365 NaN 5.66043 1.456 27.1224
NaN NaN NaN NaN NaN NaN NaN 14.2797 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429411.70218 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429411.9003 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429411.9003 NaN 5.66058 1.398 27.1237
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429414.03629 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429414.03629 NaN 5.6608 1.375 27.1266
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429416.32956 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429416.32956 NaN 5.66161 1.351 27.137
NaN NaN NaN NaN NaN NaN NaN 13.7157 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429416.33594 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429419.71274 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429419.71274 NaN 5.66319 1.298 27.1512
NaN NaN NaN NaN NaN NaN NaN 13.0973 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429420.94519 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429423.91061 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429423.91061 NaN 5.66504 1.246 27.1759
NaN NaN NaN NaN NaN NaN NaN 12.5022 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429425.55499 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429426.0976 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429426.0976 NaN 5.67237 1.219 27.282
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429428.34198 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429428.34198 NaN 5.67712 1.189 27.3277
NaN NaN NaN NaN NaN NaN NaN 11.9265 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429430.16565 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429431.77548 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429431.77548 NaN 5.68266 1.138 27.3709
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429433.97958 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429433.97958 NaN 5.68373 1.117 27.375
NaN NaN NaN NaN NaN NaN NaN 11.3975 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429434.77609 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429436.39801 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429436.39801 NaN 5.68483 1.09 27.3827
NaN NaN NaN NaN NaN NaN NaN 10.7363 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429439.38391 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429439.77026 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429439.77026 NaN 5.68623 1.036 27.3915
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429441.97314 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429441.97314 NaN 5.68678 1.01 27.3953
NaN NaN NaN NaN NaN NaN NaN 10.1451 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429443.99399 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429444.13599 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429444.13599 NaN 5.68712 0.984 27.3961
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429446.46619 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429446.46619 NaN 5.68751 0.959 27.3971
NaN NaN NaN NaN NaN NaN NaN 9.56168 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429448.60583 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429449.94296 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429449.94296 NaN 5.68794 0.91 27.3988
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429452.13477 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429452.13477 NaN 5.68831 0.883 27.3997
NaN NaN NaN NaN NaN NaN NaN 8.91601 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429453.22153 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429454.45193 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429454.45193 NaN 5.68831 0.857 27.3997
NaN NaN NaN NaN NaN NaN NaN 8.34814 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429457.83017 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429457.84586 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429457.84586 NaN 5.68881 0.806 27.4021
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429460.04694 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429460.04694 NaN 5.68906 0.781 27.4037
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429462.23514 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429462.23514 NaN 5.68921 0.754 27.404
NaN NaN NaN NaN NaN NaN NaN 7.7297 NaN NaN NaN NaN 2821.15935937784 NaN -8017.00804595665 NaN NaN NaN 1473429462.43909 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429464.53949 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429464.53949 NaN 5.68931 0.727 27.4048
NaN NaN NaN NaN NaN NaN NaN 7.19294 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429467.04929 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429467.9068 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429467.9068 NaN 5.68956 0.676 27.4063
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429470.04388 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429470.04388 NaN 5.6898 0.65 27.4075
NaN NaN NaN NaN NaN NaN NaN 6.59784 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429471.65921 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429472.23013 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429472.23013 NaN 5.68993 0.624 27.4091
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429474.55945 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429474.55945 NaN 5.69008 0.596 27.4103
NaN NaN NaN NaN NaN NaN NaN 6.00274 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429476.27039 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429477.98892 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429477.98892 NaN 5.69048 0.542 27.4134
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429480.18661 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429480.18661 NaN 5.69069 0.515 27.4159
NaN NaN NaN NaN NaN NaN NaN 5.38819 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429480.87964 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429482.56461 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429482.56461 NaN 5.69087 0.488 27.4168
NaN NaN NaN NaN NaN NaN NaN 4.7542 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429485.48505 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429485.99347 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429485.99347 NaN 5.69102 0.434 27.417
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429488.16776 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429488.16776 NaN 5.69108 0.406 27.4193
NaN NaN NaN NaN NaN NaN NaN 4.1202 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429490.09323 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429490.35413 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429490.35413 NaN 5.69124 0.381 27.4214
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429492.6601 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429492.6601 NaN 5.69149 0.354 27.4241
NaN NaN NaN NaN NaN NaN NaN 3.5251 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429494.70215 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429496.02606 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429496.02606 NaN 5.6922 0.299 27.434
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429498.16367 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429498.16367 NaN 5.69286 0.273 27.442
NaN NaN NaN NaN NaN NaN NaN 2.89888 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429499.31116 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429500.32556 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429500.32556 NaN 5.69338 0.246 27.4454
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429502.65271 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429502.65271 NaN 5.69371 0.219 27.4581
NaN NaN NaN NaN NaN NaN NaN 2.36212 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429503.9277 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429506.0192 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429506.0192 NaN 5.69848 0.165 27.5027
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429508.19385 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429508.19385 NaN 5.69897 0.14 27.5166
NaN NaN NaN NaN NaN NaN NaN 1.58421 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429508.7077 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429511.2684 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429511.2684 NaN 5.70133 0.119 27.5379
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429512.32449 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429512.32449 178.11 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.985223 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429513.43961 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429513.60553 NaN NaN NaN NaN NaN NaN 0.000726908 0.464 0.1792 1473429513.60553 NaN 5.7028 0.096 27.5385
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429517.66168 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429517.66168 NaN 5.70237 0.089 27.5381
NaN NaN NaN NaN NaN NaN NaN 0.817972 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429518.44119 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429521.76654 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429521.76654 NaN 5.70216 0.09 27.5393
NaN NaN NaN NaN NaN NaN NaN 0.829641 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429523.33423 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429525.89575 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429525.89575 NaN 5.70228 0.106 27.5497
NaN NaN NaN NaN NaN NaN NaN 1.05134 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429528.19379 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429530.00281 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429530.00281 NaN 5.70292 0.123 27.5329
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429531.01666 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429531.01666 178.72 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429532.03638 NaN NaN NaN NaN NaN NaN 0.000682974 1.0208 0.2048 1473429532.03638 NaN 5.70295 0.131 27.5291
NaN NaN NaN NaN NaN NaN NaN 1.38585 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429533.3967 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429536.33057 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429536.33057 NaN 5.70222 0.15 27.5146
NaN NaN NaN NaN NaN NaN NaN 1.49475 NaN NaN NaN NaN 2821.16616993654 NaN -8017.00624776751 NaN NaN NaN 1473429538.60843 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429539.5553 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429539.5553 NaN 5.70185 0.169 27.5049
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429543.68158 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429543.68158 NaN 5.70149 0.192 27.4981
NaN NaN NaN NaN NaN NaN NaN 1.82148 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429543.82477 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429547.79324 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429547.79324 NaN 5.70115 0.211 27.4943
NaN NaN NaN NaN NaN NaN NaN 2.13653 NaN NaN NaN 0.530402 NaN NaN NaN NaN NaN -0.144176 1473429549.04248 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429549.86557 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429549.86557 178.8 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429550.98758 NaN NaN NaN NaN NaN NaN 0.000794806 0.928 0.1664 1473429550.98758 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429552.06073 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429552.06073 NaN 5.70096 0.233 27.4922
NaN NaN NaN NaN NaN NaN NaN 2.61494 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429554.26694 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429554.30841 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429554.30841 NaN 5.70097 0.244 27.4823
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429557.71628 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429557.71628 NaN 5.70091 0.266 27.4618
NaN NaN NaN NaN NaN NaN NaN 2.92611 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429558.86404 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429561.90515 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429561.90515 NaN 5.70078 0.287 27.4494
NaN NaN NaN NaN NaN NaN NaN 3.17115 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429563.45425 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429563.9631 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429563.9631 NaN 5.70075 0.297 27.4485
NaN NaN NaN NaN NaN NaN NaN 3.42397 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429568.05075 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429568.11896 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429568.11896 179.27 5.70075 0.319 27.4447
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429569.14853 NaN NaN NaN NaN NaN NaN 0.000826758 0.7424 0.2432 1473429569.14853 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429570.19104 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429570.19104 NaN 5.70075 0.33 27.4425
NaN NaN NaN NaN NaN NaN NaN 3.64178 0.0484787 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429572.64456 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429573.59732 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429573.59732 NaN 5.70075 0.351 27.4378
NaN NaN NaN NaN NaN NaN NaN 3.87905 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429577.24283 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429577.95917 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429577.95917 NaN 5.70069 0.372 27.4355
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429580.20569 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429580.20569 NaN 5.70069 0.384 27.4338
NaN NaN 21.5592 NaN NaN NaN NaN 4.08519 NaN NaN NaN NaN NaN NaN NaN NaN NaN -0.172267 1473429581.83511 NaN NaN NaN 25.6444 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429582.43683 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429582.43683 NaN 5.70072 0.395 27.4335
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429585.82318 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429585.82318 NaN 5.70082 0.418 27.43
NaN NaN NaN NaN NaN NaN NaN 4.3808 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429586.43915 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429586.97433 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429586.97433 179.32 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429588.03674 NaN NaN NaN NaN NaN NaN 0.000870692 0.3712 0.2176 1473429588.03674 NaN 5.70078 0.43 27.429
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429590.28949 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429590.28949 NaN 5.70072 0.441 27.4282
NaN NaN NaN NaN NaN NaN NaN 4.61417 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429591.02597 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429593.70193 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429593.70193 NaN 5.70054 0.463 27.426
NaN NaN NaN NaN NaN NaN NaN 4.87866 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429595.61444 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429597.88562 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429597.88562 NaN 5.70045 0.487 27.4249
NaN NaN NaN NaN NaN NaN NaN 5.11981 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429600.20117 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429602.02325 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429602.02325 NaN 5.7002 0.508 27.4229
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429604.06583 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429604.06583 NaN 5.69977 0.521 27.4222
NaN NaN NaN NaN NaN NaN NaN 5.36874 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429604.79105 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429605.07736 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429605.07736 179.33 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429607.16962 NaN NaN NaN NaN NaN NaN 0.000930602 0.5568 0.2176 1473429607.16962 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429608.33105 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429608.33105 NaN 5.69959 0.542 27.4334
NaN NaN NaN NaN NaN NaN NaN 5.6449 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429609.3848 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429611.49329 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429611.49329 NaN 5.69895 0.562 27.4257
NaN NaN NaN NaN NaN NaN NaN 5.9055 NaN NaN NaN NaN 2821.16616993654 NaN -8017.00624776751 NaN NaN -0.174884 1473429613.96741 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429615.64175 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429615.64175 NaN 5.69904 0.586 27.4362
NaN NaN NaN NaN NaN NaN NaN 6.09609 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429618.55765 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429619.76044 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429619.76044 NaN 5.6991 0.611 27.4231
NaN NaN NaN NaN NaN NaN NaN 6.69897 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429623.14609 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429624.0466 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429624.0466 179.37 5.69913 0.638 27.4207
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429625.17685 NaN NaN NaN NaN NaN NaN 0.000786818 0.6496 0.2432 1473429625.17685 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429626.27789 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429626.27789 NaN 5.69922 0.647 27.4176
NaN NaN NaN NaN NaN NaN NaN 6.90123 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429627.99146 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429629.72098 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429629.72098 NaN 5.69916 0.664 27.4144
NaN NaN NaN NaN NaN NaN NaN 7.07626 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429632.7074 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429633.90378 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429633.90378 NaN 5.69916 0.671 27.418
NaN NaN NaN NaN NaN NaN NaN 7.05292 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429637.431 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429638.25919 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429638.25919 NaN 5.69953 0.669 27.4124
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429641.49829 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429641.49829 NaN 5.69864 0.65 27.4101
NaN NaN NaN NaN NaN NaN NaN 6.82343 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429642.15961 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429642.68805 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429642.68805 179.36 NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429643.84424 NaN NaN NaN NaN NaN NaN 0.000930602 0.6496 0.2688 1473429643.84424 NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429646.24643 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429646.24643 NaN 5.69439 0.608 27.4105
NaN NaN NaN NaN NaN NaN NaN 6.30613 NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.492577 1473429646.76523 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429648.30573 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429648.30573 NaN 5.69347 0.581 27.413
NaN NaN NaN NaN NaN NaN NaN 5.65657 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429651.37711 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429651.53183 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429651.53183 NaN 5.69304 0.529 27.4153
NaN NaN NaN NaN NaN NaN NaN 5.13537 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429655.98798 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429656.23886 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429656.23886 NaN 5.69289 0.478 27.4172
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429658.31262 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429658.31262 NaN 5.69277 0.45 27.4195
NaN NaN NaN NaN NaN NaN NaN 4.44303 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429660.60373 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429661.47842 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429661.47842 NaN 5.69301 0.395 27.4234
NaN NaN NaN NaN NaN NaN NaN 3.85182 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429665.21356 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429665.69717 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429665.69717 NaN 5.69369 0.338 27.4339
NaN NaN NaN NaN NaN NaN NaN 3.21004 NaN NaN NaN 0.473328 NaN NaN NaN NaN NaN NaN 1473429669.8215 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429670.31308 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429670.31308 NaN 5.69407 0.282 27.4385
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429673.50735 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429673.50735 NaN 5.69714 0.228 27.4789
NaN NaN NaN NaN NaN NaN NaN 2.57994 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429674.42975 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429677.77335 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429677.77335 NaN 5.70093 0.171 27.5224
NaN NaN NaN NaN NaN NaN NaN 2.01206 NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.545692 1473429679.026 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429680.02524 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429680.02524 NaN 5.70207 0.145 27.5238
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429683.50858 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429683.50858 NaN 5.7028 0.091 27.5328
NaN NaN NaN NaN NaN NaN NaN 1.28472 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429683.63525 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429687.72357 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429687.72357 NaN 5.70326 0.045 27.5404
NaN NaN NaN NaN 228.112 NaN 0.948046 0.48736 NaN NaN NaN NaN 2821.17141299325 NaN -8017.00552900455 NaN NaN NaN 1473429688.22528 -0.0579429 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429690.05981 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429690.05981 NaN 5.70445 0.025 27.5497
NaN NaN NaN NaN NaN NaN NaN 0.0867369 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429693.0274 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429693.49521 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429693.49521 NaN 5.7047 0.01 27.5487
NaN NaN NaN NaN NaN NaN NaN 0.0750683 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429697.76962 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429702.51135 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0322833 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429707.25961 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0478414 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429712.04974 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.102295 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429716.80505 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.106185 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429721.55743 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0711787 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429726.30743 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0556205 NaN 2821.2475 -8017.0383 NaN NaN NaN NaN NaN NaN NaN 1473429731.06155 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.00116686 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429735.92755 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429740.77383 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0206146 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429745.61465 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.928864 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429750.44839 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429755.28848 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429760.12601 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0556205 NaN NaN NaN NaN 2821.24890001598 NaN -8017.04209999988 NaN NaN NaN 1473429764.95984 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0322833 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429769.80423 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429774.62967 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429779.48779 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.0167251 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429784.6926 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.31622 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429789.90195 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.49125 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429795.01669 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.347337 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429800.12466 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.421238 NaN 2821.2497 -8017.0442 NaN NaN NaN NaN NaN NaN NaN 1473429805.23203 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN 229.553 NaN NaN 0.300662 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429810.33975 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.948538 0.798524 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429815.44958 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.891873 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429820.56409 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.362895 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429825.66925 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.790745 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429830.80325 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.413459 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429836.12595 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.327889 NaN NaN NaN NaN 2821.24970001598 NaN -8017.04419999989 NaN NaN NaN 1473429841.43347 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.409569 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429846.5415 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.518477 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429851.853 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.467913 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429857.16873 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.40568 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429862.48941 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN 0.374564 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429871.99411 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 0.931077 0.693506 0.048332 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1473429877.09753 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
###Markdown
SlocumReader Load the ASCII file into a pandas DataFrame
###Code
import json
from gutils.slocum import SlocumReader
slocum_data = SlocumReader(ascii_file)
print('Mode: ', slocum_data.mode)
print('ASCII: ', slocum_data.ascii_file)
print('Headers: ', json.dumps(slocum_data.metadata, indent=4))
slocum_data.data.columns.tolist()
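# sci_m_present_time holds Unix-epoch timestamps in seconds (the 1473429xxx values
# in the raw dump above); converting a few rows makes the preview human readable.
import pandas as pd
print(pd.to_datetime(slocum_data.data['sci_m_present_time'].head(), unit='s'))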
slocum_data.data.head(20)[[
'sci_m_present_time',
'm_depth',
'm_gps_lat',
'm_gps_lon',
'sci_water_pressure',
'sci_water_temp'
]]
###Output
_____no_output_____
###Markdown
Standardize into a glider-independent DataFrame
* Lossless (adds columns)
* Common axis names
* Common variable names used in computations of density, salinity, etc.
* Interpolates GPS coordinates
* Converts to decimal degrees
* Calculates depth from pressure if available
* Calculates pressure from depth if needed
* Calculates density and salinity
###Code
standard = slocum_data.standardize()
# Which columns were added?
set(standard.columns).difference(slocum_data.data.columns)
standard.head(20)[[
't',
'z',
'y',
'x',
'pressure',
'temperature'
]]
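# Optional cross-check (illustrative sketch only, not part of GUTILS): the derived
# depth/density could be recomputed with the TEOS-10 gsw package. Column names such
# as 'salinity'/'density' and pressure being in dbar are assumptions here.
# import gsw
# sa = gsw.SA_from_SP(standard['salinity'], standard['pressure'], standard['x'], standard['y'])
# ct = gsw.CT_from_t(sa, standard['temperature'], standard['pressure'])
# rho_check = gsw.rho(sa, ct, standard['pressure'])            # in-situ density
# z_check = gsw.z_from_p(standard['pressure'], standard['y'])  # height (negative of depth)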
###Output
_____no_output_____ |
Submission/Classification-vgg-baseline.ipynb | ###Markdown
Data preprocessing, and obtain training set and test set
###Code
n_examples = X.shape[0]
# n_train = int(n_examples * 0.9877)
n_train = int(n_examples * 0.8)
train_idx = np.random.choice(range(0,n_examples), size=n_train, replace=False) #Randomly select training sample indices
test_idx = list(set(range(0,n_examples))-set(train_idx)) #Remaining indices form the test set
X_train = X[train_idx] #training samples
X_test = X[test_idx] #testing samples
Y_train = Y[train_idx]
Y_test = Y[test_idx]
print("X_train:",X_train.shape)
print("Y_train:",Y_train.shape)
print("X_test:",X_test.shape)
print("Y_test:",Y_test.shape)
X_train[0]
classes = ['32PSK',
'16APSK',
'32QAM',
'FM',
'GMSK',
'32APSK',
'OQPSK',
'8ASK',
'BPSK',
'8PSK',
'AM-SSB-SC',
'4ASK',
'16PSK',
'64APSK',
'128QAM',
'128APSK',
'AM-DSB-SC',
'AM-SSB-WC',
'64QAM',
'QPSK',
'256QAM',
'AM-DSB-WC',
'OOK',
'16QAM']
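# Sanity check on the label encoding: later cells call list(Y_test[i,:]).index(1), i.e.
# the labels are assumed to be one-hot over the 24 classes above, so argmax maps a row
# of Y back to its modulation name.
print("example label:", classes[int(np.argmax(Y_train[0]))])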
def baseline_cnn_model(X_train,classes):
    decay = 0.00001
    in_shp = X_train.shape[1:] #Dimensions of each sample
    #input layer
    X_input = Input(in_shp)
    # Arrange each sample as (timesteps, channels) = (1024, 2) so Conv1D treats the I/Q pair as channels
    x = Reshape([1024, 2], input_shape=in_shp)(X_input)
    # VGG-style backbone: seven Conv1D + MaxPool1D blocks, each block fed by the previous one
    # (MaxPool1D is referenced via tf.keras.layers in case it is not in the layer imports above)
    x = Conv1D(64, 3, padding='same', activation='relu', kernel_initializer='glorot_uniform')(x)
    x = tf.keras.layers.MaxPool1D(pool_size=2, strides=2, padding='valid')(x)
    x = Conv1D(64, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPool1D(pool_size=2, strides=2, padding='valid')(x)
    x = Conv1D(64, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPool1D(pool_size=2, strides=2, padding='valid')(x)
    x = Conv1D(64, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPool1D(pool_size=2, strides=2, padding='valid')(x)
    x = Conv1D(64, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPool1D(pool_size=2, strides=2, padding='valid')(x)
    x = Conv1D(64, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPool1D(pool_size=2, strides=2, padding='valid')(x)
    x = Conv1D(64, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPool1D(pool_size=2, strides=2, padding='valid')(x)
    # x = EfficientNetB3( weights='imagenet', include_top=False)(input_image)
    X = Flatten()(x)
    #Full Con 1
    X = Dense(128, activation='selu', kernel_initializer='he_normal', name="dense1")(X)
    # X = AlphaDropout(0.3)(X)
    #Full Con 2
    X = Dense(128, activation='selu', kernel_initializer='he_normal', name="dense2")(X)
    # X = AlphaDropout(0.3)(X)
    #Full Con 3
    X = Dense(len(classes), kernel_initializer='he_normal', name="dense3")(X)
    #SoftMax
    X = Activation('softmax')(X)
    return tf.keras.models.Model(inputs=X_input, outputs=X)
model = baseline_cnn_model(X_train,classes)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
#Create Model
# model = Model.Model(inputs=X_input,outputs=X)
model.summary()
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
# perform training ...
# - call the main training loop in keras for our network+dataset
print(tf.test.gpu_device_name())
# mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0"], cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
%%time
filepath = 'cnn_model.h5'
history = model.fit(X_train,
Y_train,
batch_size=32,
epochs=100,
verbose=1,
# validation_data=(X_test, Y_test),
validation_split = 0.2,
callbacks = [
tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='auto'),
# tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='auto')
])
fig, ((ax1, ax2)) = plt.subplots(nrows=1, ncols=2,figsize=(20,6))
ax1.plot(history.history['accuracy'],'b', history.history['val_accuracy'], 'r')
ax1.set_ylabel('Accuracy Rate',fontsize=12)
ax1.set_xlabel('Iteration',fontsize=12)
ax1.set_title('Categorical Cross Entropy ',fontsize=14)
ax1.legend(['Training Accuracy','Validation Accuracy'],fontsize=12,loc='best')
ax2.plot(history.history['loss'], 'b',history.history['val_loss'],'r')
ax2.set_ylabel('Loss',fontsize=12)
ax2.set_xlabel('Iteration',fontsize=12)
ax2.set_title('Learning Curve ',fontsize=14)
ax2.legend(['Training Loss','Validation Loss'],fontsize=12,loc='best')
# plt.savefig('crosse_results.png')
plt.show()
model = load_model(filepath)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues, labels=[]):
plt.figure(figsize=(10, 10))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(labels))
plt.xticks(tick_marks, labels, rotation=45)
plt.yticks(tick_marks, labels)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Plot confusion matrix
batch_size = 1024
test_Y_hat = model.predict(X_test, batch_size=3000)
conf = np.zeros([len(classes),len(classes)])
confnorm = np.zeros([len(classes),len(classes)])
for i in range(0,X_test.shape[0]):
j = list(Y_test[i,:]).index(1)
k = int(np.argmax(test_Y_hat[i,:]))
conf[j,k] = conf[j,k] + 1
for i in range(0,len(classes)):
confnorm[i,:] = conf[i,:] / np.sum(conf[i,:])
plot_confusion_matrix(confnorm, labels=classes)
for i in range(len(confnorm)):
print(classes[i],confnorm[i,i])
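# Overall (all-SNR) test accuracy from the unnormalized confusion matrix: correct
# predictions sit on the diagonal, so accuracy = trace / total count.
print("Overall test accuracy:", np.trace(conf) / np.sum(conf))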
acc={}
Z_test = Z[test_idx]
Z_test = Z_test.reshape((len(Z_test)))
SNRs = np.unique(Z_test)
for snr in SNRs:
X_test_snr = X_test[Z_test==snr]
Y_test_snr = Y_test[Z_test==snr]
pre_Y_test = model.predict(X_test_snr)
conf = np.zeros([len(classes),len(classes)])
confnorm = np.zeros([len(classes),len(classes)])
    for i in range(0,X_test_snr.shape[0]): #iterate over the test samples at this SNR
        j = list(Y_test_snr[i,:]).index(1) #index of the true class
        k = int(np.argmax(pre_Y_test[i,:])) #index of the predicted class
conf[j,k] = conf[j,k] + 1
for i in range(0,len(classes)):
confnorm[i,:] = conf[i,:] / np.sum(conf[i,:])
plt.figure()
plot_confusion_matrix(confnorm, labels=classes, title="ConvNet Confusion Matrix (SNR=%d)"%(snr))
cor = np.sum(np.diag(conf))
ncor = np.sum(conf) - cor
print ("Overall Accuracy %s: "%snr, cor / (cor+ncor))
acc[snr] = 1.0*cor/(cor+ncor)
plt.plot(list(acc.keys()), list(acc.values()))  # cast dict views to lists so matplotlib can plot them
plt.ylabel('ACC')
plt.xlabel('SNR')
plt.grid(True)
plt.show()
###Output
_____no_output_____ |
contrib/report.ipynb | ###Markdown
Report on the open CNPJ data published by the Receita Federal (Brazil's Federal Revenue Service) Load the data
###Code
from os import getcwd
from pathlib import Path
import dask.dataframe as dd
import numpy as np
DATA_DIR = Path(getcwd()).parent / "data" / "csv"
def load(dataset_type, columns):
files = tuple(
path
for path in DATA_DIR.glob("*.csv")
if dataset_type in path.name
)
df = dd.read_csv(
files,
delimiter=";",
encoding="latin1",
header=None,
usecols=columns.keys(),
dtype={key: str for key in columns.keys()},
)
df.columns = columns.values()
return df
###Output
_____no_output_____
###Markdown
CNPJ base (Base do CNPJ)
###Code
base = load("EMPRECSV", {0: "base"})
base.head()
###Output
_____no_output_____
###Markdown
Establishments (Estabelecimentos)
###Code
venues = load(
"ESTABELE",
{
0: "base",
1: "ordem",
2: "digito_verificador",
5: "situacao_cadastral",
6: "data_situacao_cadastral",
10: "data_de_inicio_da_atividade",
28: "situacao_especial",
29: "data_situacao_especial",
},
)
venues.head()
###Output
_____no_output_____
###Markdown
Partners (Quadro societário)
###Code
partners = load("SOCIOCSV", {0: "base", 5: "data_de_entrada"})
partners.head()
###Output
_____no_output_____
###Markdown
Simples & MEI
###Code
taxes = load(
"SIMPLES",
{
0: "base",
2: "data_de_opcao_pelo_simples",
3: "data_exclusao_do_simples",
5: "data_de_opcao_pelo_mei",
6: "data_exclusao_do_mei",
}
)
taxes.head()
###Output
_____no_output_____ |
NLP/Text Classification Using RNN.ipynb | ###Markdown
Building the Model
###Code
# Embedding dimensionality
D = 20
# Hidden state vector size (dimensionality)
M = 15
# Input layer
i = Input(shape=(T,))
# Embedding layer
x = Embedding(V + 1, D)(i)
# LSTM layer
x = LSTM(M, return_sequences=True)(x)
x = GlobalMaxPooling1D()(x)
# Dense layer
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model
r = model.fit(x=data_train, y=y_train, epochs=10, validation_data=(data_test, y_test))
# Loss per iteration
plt.plot(r.history['loss'], label='Loss')
plt.plot(r.history['val_loss'], label='Validation Loss')
plt.legend()
plt.show()
# Accuracy per iteration
plt.plot(r.history['accuracy'], label='Accuracy')
plt.plot(r.history['val_accuracy'], label='Validation accuracy')
plt.legend()
plt.show()
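# Quick held-out check: evaluate() returns [loss, accuracy] for the compiled metrics,
# and the sigmoid output can be thresholded at 0.5 to obtain hard class predictions.
test_loss, test_acc = model.evaluate(data_test, y_test, verbose=0)
print('Test loss:', test_loss, ' Test accuracy:', test_acc)
pred_labels = (model.predict(data_test).flatten() > 0.5).astype(int)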
###Output
_____no_output_____ |
Jupyter_Tutorial/optimize_test.ipynb | ###Markdown
Import Required Packages
###Code
import tensorflow as tf
import tensorflow_addons as tfa
from tqdm import tqdm
import pandas as pd
import sklearn
from sklearn import metrics
import re
import numpy as np
import pickle as pkl
import PIL
import datetime
import os
import random
import shutil
import statistics
import time
###Output
_____no_output_____
###Markdown
Import Required Functions or Methods from Other Files
###Code
import import_ipynb
from util import *
from model import *
###Output
_____no_output_____
###Markdown
Train CLAM Model Train CLAM Model on the Given Training Data
###Code
def nb_optimize(img_features, slide_label, i_model, b_model, c_model, i_optimizer, b_optimizer, c_optimizer,
i_loss_func, b_loss_func, n_class, c1, c2, mutual_ex):
with tf.GradientTape() as i_tape, tf.GradientTape() as b_tape, tf.GradientTape() as c_tape:
att_score, A, h, ins_labels, ins_logits_unnorm, ins_logits, slide_score_unnorm, \
Y_prob, Y_hat, Y_true, predict_slide_label = c_model.call(img_features, slide_label)
ins_labels, ins_logits_unnorm, ins_logits = i_model.call(slide_label, h, A)
ins_loss = list()
for j in range(len(ins_logits)):
i_loss = i_loss_func(tf.one_hot(ins_labels[j], 2), ins_logits[j])
ins_loss.append(i_loss)
if mutual_ex:
I_Loss = tf.math.add_n(ins_loss) / n_class
else:
I_Loss = tf.math.add_n(ins_loss)
slide_score_unnorm, Y_hat, Y_prob, predict_slide_label, Y_true = b_model.call(slide_label, A, h)
B_Loss = b_loss_func(Y_true, Y_prob)
T_Loss = c1 * B_Loss + c2 * I_Loss
i_grad = i_tape.gradient(I_Loss, i_model.trainable_weights)
i_optimizer.apply_gradients(zip(i_grad, i_model.trainable_weights))
b_grad = b_tape.gradient(B_Loss, b_model.trainable_weights)
b_optimizer.apply_gradients(zip(b_grad, b_model.trainable_weights))
c_grad = c_tape.gradient(T_Loss, c_model.trainable_weights)
c_optimizer.apply_gradients(zip(c_grad, c_model.trainable_weights))
return I_Loss, B_Loss, T_Loss, predict_slide_label
def b_optimize(batch_size, n_ins, n_samples, img_features, slide_label, i_model, b_model,
c_model, i_optimizer, b_optimizer, c_optimizer, i_loss_func, b_loss_func,
n_class, c1, c2, mutual_ex):
step_size = 0
Ins_Loss = list()
Bag_Loss = list()
Total_Loss = list()
label_predict = list()
for n_step in range(0, (n_samples // batch_size + 1)):
if step_size < (n_samples - batch_size):
with tf.GradientTape() as i_tape, tf.GradientTape() as b_tape, tf.GradientTape() as c_tape:
att_score, A, h, ins_labels, ins_logits_unnorm, ins_logits, slide_score_unnorm, \
Y_prob, Y_hat, Y_true, predict_label = c_model.call(img_features=img_features[step_size:(step_size + batch_size)],
slide_label=slide_label)
ins_labels, ins_logits_unnorm, ins_logits = i_model.call(slide_label, h, A)
ins_loss = list()
for j in range(len(ins_logits)):
i_loss = i_loss_func(tf.one_hot(ins_labels[j], 2), ins_logits[j])
ins_loss.append(i_loss)
if mutual_ex:
Loss_I = tf.math.add_n(ins_loss) / n_class
else:
Loss_I = tf.math.add_n(ins_loss)
slide_score_unnorm, Y_hat, Y_prob, predict_label, Y_true = b_model.call(slide_label, A, h)
Loss_B = b_loss_func(Y_true, Y_prob)
Loss_T = c1 * Loss_B + c2 * Loss_I
i_grad = i_tape.gradient(Loss_I, i_model.trainable_weights)
i_optimizer.apply_gradients(zip(i_grad, i_model.trainable_weights))
b_grad = b_tape.gradient(Loss_B, b_model.trainable_weights)
b_optimizer.apply_gradients(zip(b_grad, b_model.trainable_weights))
c_grad = c_tape.gradient(Loss_T, c_model.trainable_weights)
c_optimizer.apply_gradients(zip(c_grad, c_model.trainable_weights))
else:
with tf.GradientTape() as i_tape, tf.GradientTape() as b_tape, tf.GradientTape() as c_tape:
att_score, A, h, ins_labels, ins_logits_unnorm, ins_logits, slide_score_unnorm, \
Y_prob, Y_hat, Y_true, predict_label = c_model.call(img_features=img_features[(step_size - n_ins):],
slide_label=slide_label)
ins_labels, ins_logits_unnorm, ins_logits = i_model.call(slide_label, h, A)
ins_loss = list()
for j in range(len(ins_logits)):
i_loss = i_loss_func(tf.one_hot(ins_labels[j], 2), ins_logits[j])
ins_loss.append(i_loss)
if mutual_ex:
Loss_I = tf.math.add_n(ins_loss) / n_class
else:
Loss_I = tf.math.add_n(ins_loss)
slide_score_unnorm, Y_hat, Y_prob, predict_label, Y_true = b_model.call(slide_label, A, h)
Loss_B = b_loss_func(Y_true, Y_prob)
Loss_T = c1 * Loss_B + c2 * Loss_I
i_grad = i_tape.gradient(Loss_I, i_model.trainable_weights)
i_optimizer.apply_gradients(zip(i_grad, i_model.trainable_weights))
b_grad = b_tape.gradient(Loss_B, b_model.trainable_weights)
b_optimizer.apply_gradients(zip(b_grad, b_model.trainable_weights))
c_grad = c_tape.gradient(Loss_T, c_model.trainable_weights)
c_optimizer.apply_gradients(zip(c_grad, c_model.trainable_weights))
Ins_Loss.append(float(Loss_I))
Bag_Loss.append(float(Loss_B))
Total_Loss.append(float(Loss_T))
label_predict.append(predict_label)
step_size += batch_size
I_Loss = statistics.mean(Ins_Loss)
B_Loss = statistics.mean(Bag_Loss)
T_Loss = statistics.mean(Total_Loss)
predict_slide_label = most_frequent(label_predict)
return I_Loss, B_Loss, T_Loss, predict_slide_label
def train_step(i_model, b_model, c_model, train_path, i_optimizer_func, b_optimizer_func,
c_optimizer_func, i_loss_func, b_loss_func, mutual_ex, n_class, c1, c2,
i_learn_rate, b_learn_rate, c_learn_rate, i_l2_decay, b_l2_decay, c_l2_decay,
n_ins, batch_size, batch_op):
loss_total = list()
loss_ins = list()
loss_bag = list()
i_optimizer = i_optimizer_func(learning_rate=i_learn_rate, weight_decay=i_l2_decay)
b_optimizer = b_optimizer_func(learning_rate=b_learn_rate, weight_decay=b_l2_decay)
c_optimizer = c_optimizer_func(learning_rate=c_learn_rate, weight_decay=c_l2_decay)
slide_true_label = list()
slide_predict_label = list()
train_sample_list = os.listdir(train_path)
train_sample_list = random.sample(train_sample_list, len(train_sample_list))
for i in train_sample_list:
print('=', end="")
single_train_data = train_path + i
img_features, slide_label = get_data_from_tf(single_train_data)
        # shuffle the image feature list to reduce the side effect of randomly dropping some
        # patch feature vectors during training when the batch-training option is enabled
img_features = random.sample(img_features, len(img_features))
if batch_op:
I_Loss, B_Loss, T_Loss, predict_slide_label = b_optimize(batch_size=batch_size, n_ins=n_ins, n_samples=len(img_features),
img_features=img_features, slide_label=slide_label,
i_model=i_model, b_model=b_model, c_model=c_model,
i_optimizer=i_optimizer, b_optimizer=b_optimizer,
c_optimizer=c_optimizer, i_loss_func=i_loss_func,
b_loss_func = b_loss_func, n_class=n_class, c1=c1,
c2=c2, mutual_ex=mutual_ex)
else:
I_Loss, B_Loss, T_Loss, predict_slide_label = nb_optimize(img_features=img_features, slide_label=slide_label,
i_model=i_model, b_model=b_model, c_model=c_model,
i_optimizer=i_optimizer, b_optimizer=b_optimizer,
c_optimizer=c_optimizer, i_loss_func=i_loss_func,
b_loss_func=b_loss_func, n_class=n_class, c1=c1, c2=c2,
mutual_ex=mutual_ex)
loss_total.append(float(T_Loss))
loss_ins.append(float(I_Loss))
loss_bag.append(float(B_Loss))
slide_true_label.append(slide_label)
slide_predict_label.append(predict_slide_label)
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(slide_true_label, slide_predict_label).ravel()
train_tn = int(tn)
train_fp = int(fp)
train_fn = int(fn)
train_tp = int(tp)
train_sensitivity = round(train_tp / (train_tp + train_fn), 2)
train_specificity = round(train_tn / (train_tn + train_fp), 2)
train_acc = round((train_tp + train_tn) / (train_tn + train_fp + train_fn + train_tp), 2)
fpr, tpr, thresholds = sklearn.metrics.roc_curve(slide_true_label, slide_predict_label, pos_label=1)
train_auc = round(sklearn.metrics.auc(fpr, tpr), 2)
train_loss = statistics.mean(loss_total)
train_ins_loss = statistics.mean(loss_ins)
train_bag_loss = statistics.mean(loss_bag)
return train_loss, train_ins_loss, train_bag_loss, train_tn, train_fp, train_fn, train_tp, train_sensitivity, \
train_specificity, train_acc, train_auc
###Output
_____no_output_____
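###Markdown
All of the slide-level metrics above are derived from the 2x2 confusion matrix returned by scikit-learn. A tiny self-contained sanity check of the same calls on made-up labels (not project data):
###Code
import sklearn.metrics

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # made-up slide labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # made-up predictions
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # recall on positive slides
specificity = tn / (tn + fp)        # recall on negative slides
accuracy = (tp + tn) / (tn + fp + fn + tp)
fpr, tpr, _ = sklearn.metrics.roc_curve(y_true, y_pred, pos_label=1)
print(tn, fp, fn, tp, sensitivity, specificity, accuracy, sklearn.metrics.auc(fpr, tpr))
###Output
_____no_output_____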
###Markdown
Validating CLAM Model
###Code
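# Validation counterparts of the training functions above: nb_val / b_val run the same
# forward passes and loss computations as nb_optimize / b_optimize, but without
# GradientTape or weight updates; val_step loops over the validation slides.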
def nb_val(img_features, slide_label, i_model, b_model, c_model,
i_loss_func, b_loss_func, n_class, c1, c2, mutual_ex):
att_score, A, h, ins_labels, ins_logits_unnorm, ins_logits, slide_score_unnorm, \
Y_prob, Y_hat, Y_true, predict_slide_label = c_model.call(img_features, slide_label)
ins_labels, ins_logits_unnorm, ins_logits = i_model.call(slide_label, h, A)
ins_loss = list()
for j in range(len(ins_logits)):
i_loss = i_loss_func(tf.one_hot(ins_labels[j], 2), ins_logits[j])
ins_loss.append(i_loss)
if mutual_ex:
I_Loss = tf.math.add_n(ins_loss) / n_class
else:
I_Loss = tf.math.add_n(ins_loss)
slide_score_unnorm, Y_hat, Y_prob, predict_slide_label, Y_true = b_model.call(slide_label, A, h)
B_Loss = b_loss_func(Y_true, Y_prob)
T_Loss = c1 * B_Loss + c2 * I_Loss
return I_Loss, B_Loss, T_Loss, predict_slide_label
def b_val(batch_size, n_ins, n_samples, img_features, slide_label, i_model, b_model,
c_model, i_loss_func, b_loss_func, n_class, c1, c2, mutual_ex):
step_size = 0
Ins_Loss = list()
Bag_Loss = list()
Total_Loss = list()
label_predict = list()
for n_step in range(0, (n_samples // batch_size + 1)):
if step_size < (n_samples - batch_size):
att_score, A, h, ins_labels, ins_logits_unnorm, ins_logits, slide_score_unnorm, \
Y_prob, Y_hat, Y_true, predict_label = c_model.call(img_features=img_features[step_size:(step_size + batch_size)],
slide_label=slide_label)
ins_labels, ins_logits_unnorm, ins_logits = i_model.call(slide_label, h, A)
ins_loss = list()
for j in range(len(ins_logits)):
i_loss = i_loss_func(tf.one_hot(ins_labels[j], 2), ins_logits[j])
ins_loss.append(i_loss)
if mutual_ex:
Loss_I = tf.math.add_n(ins_loss) / n_class
else:
Loss_I = tf.math.add_n(ins_loss)
slide_score_unnorm, Y_hat, Y_prob, predict_label, Y_true = b_model.call(slide_label, A, h)
Loss_B = b_loss_func(Y_true, Y_prob)
Loss_T = c1 * Loss_B + c2 * Loss_I
else:
att_score, A, h, ins_labels, ins_logits_unnorm, ins_logits, slide_score_unnorm, \
Y_prob, Y_hat, Y_true, predict_label = c_model.call(img_features=img_features[(step_size - n_ins):],
slide_label=slide_label)
ins_labels, ins_logits_unnorm, ins_logits = i_model.call(slide_label, h, A)
ins_loss = list()
for j in range(len(ins_logits)):
i_loss = i_loss_func(tf.one_hot(ins_labels[j], 2), ins_logits[j])
ins_loss.append(i_loss)
if mutual_ex:
Loss_I = tf.math.add_n(ins_loss) / n_class
else:
Loss_I = tf.math.add_n(ins_loss)
slide_score_unnorm, Y_hat, Y_prob, predict_label, Y_true = b_model.call(slide_label, A, h)
Loss_B = b_loss_func(Y_true, Y_prob)
Loss_T = c1 * Loss_B + c2 * Loss_I
Ins_Loss.append(float(Loss_I))
Bag_Loss.append(float(Loss_B))
Total_Loss.append(float(Loss_T))
label_predict.append(predict_label)
step_size += batch_size
I_Loss = statistics.mean(Ins_Loss)
B_Loss = statistics.mean(Bag_Loss)
T_Loss = statistics.mean(Total_Loss)
predict_slide_label = most_frequent(label_predict)
return I_Loss, B_Loss, T_Loss, predict_slide_label
def val_step(i_model, b_model, c_model, val_path, i_loss_func, b_loss_func, mutual_ex,
n_class, c1, c2, n_ins, batch_size, batch_op):
loss_t = list()
loss_i = list()
loss_b = list()
slide_true_label = list()
slide_predict_label = list()
val_sample_list = os.listdir(val_path)
val_sample_list = random.sample(val_sample_list, len(val_sample_list))
for i in val_sample_list:
print('=', end="")
single_val_data = val_path + i
img_features, slide_label = get_data_from_tf(single_val_data)
img_features = random.sample(img_features, len(img_features)) # follow the training loop, see details there
if batch_op:
I_Loss, B_Loss, T_Loss, predict_slide_label = b_val(batch_size=batch_size, n_ins=n_ins, n_samples=len(img_features),
img_features=img_features, slide_label=slide_label,
i_model=i_model, b_model=b_model, c_model=c_model,
i_loss_func=i_loss_func, b_loss_func=b_loss_func,
n_class=n_class, c1=c1, c2=c2, mutual_ex=mutual_ex)
else:
I_Loss, B_Loss, T_Loss, predict_slide_label = nb_val(img_features=img_features, slide_label=slide_label,
i_model=i_model, b_model=b_model, c_model=c_model,
i_loss_func=i_loss_func, b_loss_func=b_loss_func,
n_class=n_class, c1=c1, c2=c2, mutual_ex=mutual_ex)
loss_t.append(float(T_Loss))
loss_i.append(float(I_Loss))
loss_b.append(float(B_Loss))
slide_true_label.append(slide_label)
slide_predict_label.append(predict_slide_label)
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(slide_true_label, slide_predict_label).ravel()
val_tn = int(tn)
val_fp = int(fp)
val_fn = int(fn)
val_tp = int(tp)
val_sensitivity = round(val_tp / (val_tp + val_fn), 2)
val_specificity = round(val_tn / (val_tn + val_fp), 2)
val_acc = round((val_tp + val_tn) / (val_tn + val_fp + val_fn + val_tp), 2)
fpr, tpr, thresholds = sklearn.metrics.roc_curve(slide_true_label, slide_predict_label, pos_label=1)
val_auc = round(sklearn.metrics.auc(fpr, tpr), 2)
val_loss = statistics.mean(loss_t)
val_ins_loss = statistics.mean(loss_i)
val_bag_loss = statistics.mean(loss_b)
return val_loss, val_ins_loss, val_bag_loss, val_tn, val_fp, val_fn, val_tp, val_sensitivity, val_specificity, \
val_acc, val_auc
###Output
_____no_output_____
###Markdown
Test Optimized CLAM Model
###Code
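# test_step: runs the trained attention / instance / bag networks over every slide in
# test_path, writes the per-slide predictions to a tab-separated file, and prints
# accuracy, sensitivity, specificity and the total running time.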
def test_step(n_class, n_ins, att_gate, att_only, mil_ins, mut_ex, i_model, b_model, c_model,
test_path, result_path, result_file_name):
start_time = time.time()
slide_true_label = list()
slide_predict_label = list()
sample_names = list()
for i in os.listdir(test_path):
print('>', end="")
single_test_data = test_path + i
img_features, slide_label = get_data_from_tf(single_test_data)
att_score, A, h, ins_labels, ins_logits_unnorm, ins_logits, slide_score_unnorm, \
Y_prob, Y_hat, Y_true, predict_slide_label = s_clam_call(att_net=c_model[0],
ins_net=c_model[1],
bag_net=c_model[2],
img_features=img_features,
slide_label=slide_label,
n_class=n_class, n_ins=n_ins,
att_gate=att_gate, att_only=att_only,
mil_ins=mil_ins, mut_ex=mut_ex)
ins_labels, ins_logits_unnorm, ins_logits = ins_call(m_ins_classifier=i_model,
bag_label=slide_label,
h=h, A=A, n_class=n_class,
n_ins=n_ins, mut_ex=mut_ex)
slide_score_unnorm, Y_hat, Y_prob, predict_slide_label, Y_true = s_bag_call(bag_classifier=b_model,
bag_label=slide_label,
A=A, h=h, n_class=n_class)
slide_true_label.append(slide_label)
slide_predict_label.append(predict_slide_label)
sample_names.append(i)
test_results = pd.DataFrame(list(zip(sample_names, slide_true_label, slide_predict_label)),
columns=['Sample Names', 'Slide True Label', 'Slide Predict Label'])
test_results.to_csv(os.path.join(result_path, result_file_name), sep='\t', index=False)
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(slide_true_label, slide_predict_label).ravel()
test_tn = int(tn)
test_fp = int(fp)
test_fn = int(fn)
test_tp = int(tp)
test_sensitivity = round(test_tp / (test_tp + test_fn), 2)
test_specificity = round(test_tn / (test_tn + test_fp), 2)
test_acc = round((test_tp + test_tn) / (test_tn + test_fp + test_fn + test_tp), 2)
fpr, tpr, thresholds = sklearn.metrics.roc_curve(slide_true_label, slide_predict_label, pos_label=1)
test_auc = round(sklearn.metrics.auc(fpr, tpr), 2)
test_run_time = time.time() - start_time
template = '\n Test Accuracy: {}, Test Sensitivity: {}, Test Specificity: {}, Test Running Time: {}'
print(template.format(f"{float(test_acc):.4%}",
f"{float(test_sensitivity):.4%}",
f"{float(test_specificity):.4%}",
"--- %s mins ---" % int(test_run_time / 60)))
###Output
_____no_output_____
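###Markdown
Once `test_step` has run, the per-slide predictions can be reloaded from the tab-separated results file it writes. A small sketch (the path and file name below are placeholders; reuse whatever was passed as `result_path` / `result_file_name`):
###Code
import os
import pandas as pd

# placeholder path / file name -- use the same arguments given to test_step
results = pd.read_csv(os.path.join('results', 'test_predictions.tsv'), sep='\t')
print(results.head())
# slide-level accuracy recomputed from the saved table
print((results['Slide True Label'] == results['Slide Predict Label']).mean())
###Output
_____no_output_____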
###Markdown
Optimizing the CLAM Model: Training & Validating
###Code
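# train_val: top-level optimization loop -- one train_step and one val_step per epoch,
# with losses and metrics logged to TensorBoard through tf.summary writers.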
def train_val(train_log, val_log, train_path, val_path, i_model, b_model,
c_model, i_optimizer_func, b_optimizer_func, c_optimizer_func,
i_loss_func, b_loss_func, mutual_ex, n_class, c1, c2,
i_learn_rate, b_learn_rate, c_learn_rate,
i_l2_decay, b_l2_decay, c_l2_decay, n_ins,
batch_size, batch_op, epochs):
train_summary_writer = tf.summary.create_file_writer(train_log)
val_summary_writer = tf.summary.create_file_writer(val_log)
for epoch in range(epochs):
# Training Step
start_time = time.time()
train_loss, train_ins_loss, train_bag_loss, train_tn, train_fp, train_fn, train_tp, \
train_sensitivity, train_specificity, train_acc, train_auc = train_step(
i_model=i_model, b_model=b_model, c_model=c_model, train_path=train_path,
i_optimizer_func=i_optimizer_func, b_optimizer_func=b_optimizer_func,
c_optimizer_func=c_optimizer_func, i_loss_func=i_loss_func,
b_loss_func=b_loss_func, mutual_ex=mutual_ex, n_class=n_class,
c1=c1, c2=c2, i_learn_rate=i_learn_rate, b_learn_rate=b_learn_rate,
c_learn_rate=c_learn_rate, i_l2_decay=i_l2_decay, b_l2_decay=b_l2_decay,
c_l2_decay=c_l2_decay, n_ins=n_ins, batch_size=batch_size, batch_op=batch_op)
with train_summary_writer.as_default():
tf.summary.scalar('Total Loss', float(train_loss), step=epoch)
tf.summary.scalar('Instance Loss', float(train_ins_loss), step=epoch)
tf.summary.scalar('Bag Loss', float(train_bag_loss), step=epoch)
tf.summary.scalar('Accuracy', float(train_acc), step=epoch)
tf.summary.scalar('AUC', float(train_auc), step=epoch)
tf.summary.scalar('Sensitivity', float(train_sensitivity), step=epoch)
tf.summary.scalar('Specificity', float(train_specificity), step=epoch)
tf.summary.histogram('True Positive', int(train_tp), step=epoch)
tf.summary.histogram('False Positive', int(train_fp), step=epoch)
tf.summary.histogram('True Negative', int(train_tn), step=epoch)
tf.summary.histogram('False Negative', int(train_fn), step=epoch)
# Validation Step
val_loss, val_ins_loss, val_bag_loss, val_tn, val_fp, val_fn, val_tp, \
val_sensitivity, val_specificity, val_acc, val_auc = val_step(
i_model=i_model, b_model=b_model, c_model=c_model, val_path=val_path,
i_loss_func=i_loss_func, b_loss_func=b_loss_func, mutual_ex=mutual_ex,
n_class=n_class, c1=c1, c2=c2, n_ins=n_ins, batch_size=batch_size, batch_op=batch_op)
with val_summary_writer.as_default():
tf.summary.scalar('Total Loss', float(val_loss), step=epoch)
tf.summary.scalar('Instance Loss', float(val_ins_loss), step=epoch)
tf.summary.scalar('Bag Loss', float(val_bag_loss), step=epoch)
tf.summary.scalar('Accuracy', float(val_acc), step=epoch)
tf.summary.scalar('AUC', float(val_auc), step=epoch)
tf.summary.scalar('Sensitivity', float(val_sensitivity), step=epoch)
tf.summary.scalar('Specificity', float(val_specificity), step=epoch)
tf.summary.histogram('True Positive', int(val_tp), step=epoch)
tf.summary.histogram('False Positive', int(val_fp), step=epoch)
tf.summary.histogram('True Negative', int(val_tn), step=epoch)
tf.summary.histogram('False Negative', int(val_fn), step=epoch)
epoch_run_time = time.time() - start_time
template = '\n Epoch {}, Train Loss: {}, Train Accuracy: {}, Val Loss: {}, Val Accuracy: {}, Epoch Running ' \
'Time: {} '
print(template.format(epoch + 1,
f"{float(train_loss):.8}",
f"{float(train_acc):.4%}",
f"{float(val_loss):.8}",
f"{float(val_acc):.4%}",
"--- %s mins ---" % int(epoch_run_time / 60)))
###Output
_____no_output_____
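###Markdown
The scalars and histograms written by `train_val` can be inspected with TensorBoard. A minimal way to launch it from the notebook (the log directory below is a placeholder; point it at the parent of `train_log` / `val_log`):
###Code
# The TensorBoard notebook extension ships with TensorFlow 2.x; 'logs/' is a placeholder path.
%load_ext tensorboard
%tensorboard --logdir logs/
###Output
_____no_output_____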
###Markdown
Main Functions for Optimizing and Testing the CLAM Model: the optimization entry point saves the trained CLAM model, and the test entry point restores it for evaluation
###Code
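# clam_optimize: trains and validates the CLAM model, then saves the three sub-models;
# clam_test: restores the saved sub-models and evaluates them on the held-out test set.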
def clam_optimize(train_log, val_log, train_path, val_path, i_model, b_model,
c_model, i_optimizer_func, b_optimizer_func, c_optimizer_func,
i_loss_func, b_loss_func, mutual_ex, n_class, c1, c2,
i_learn_rate, b_learn_rate, c_learn_rate, i_l2_decay, b_l2_decay,
c_l2_decay, n_ins, batch_size, batch_op, i_model_dir, b_model_dir,
c_model_dir, m_bag_op, m_clam_op, g_att_op, epochs):
train_val(train_log=train_log, val_log=val_log, train_path=train_path,
val_path=val_path, i_model=i_model, b_model=b_model, c_model=c_model,
i_optimizer_func=i_optimizer_func, b_optimizer_func=b_optimizer_func,
c_optimizer_func=c_optimizer_func, i_loss_func=i_loss_func,
b_loss_func=b_loss_func, mutual_ex=mutual_ex, n_class=n_class,
c1=c1, c2=c2, i_learn_rate=i_learn_rate, b_learn_rate=b_learn_rate,
c_learn_rate=c_learn_rate, i_l2_decay=i_l2_decay, b_l2_decay=b_l2_decay,
c_l2_decay=c_l2_decay, n_ins=n_ins, batch_size=batch_size,
batch_op=batch_op, epochs=epochs)
model_save(i_model=i_model, b_model=b_model, c_model=c_model,
i_model_dir=i_model_dir, b_model_dir=b_model_dir,
c_model_dir=c_model_dir, n_class=n_class, m_bag_op=m_bag_op,
m_clam_op=m_clam_op, g_att_op=g_att_op)
def clam_test(n_class, n_ins, att_gate, att_only, mil_ins, mut_ex, test_path,
result_path, result_file_name, i_model_dir, b_model_dir, c_model_dir,
m_bag_op, m_clam_op):
i_trained_model, b_trained_model, c_trained_model = restore_model(i_model_dir=i_model_dir,
b_model_dir=b_model_dir,
c_model_dir=c_model_dir,
n_class=n_class, m_bag_op=m_bag_op,
m_clam_op=m_clam_op, g_att_op=att_gate)
test_step(n_class=n_class, n_ins=n_ins,
att_gate=att_gate, att_only=att_only,
mil_ins=mil_ins, mut_ex=mut_ex,
i_model=i_trained_model,
b_model=b_trained_model,
c_model=c_trained_model,
test_path=test_path,
result_path=result_path,
result_file_name=result_file_name)
###Output
_____no_output_____ |
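###Markdown
A hypothetical test-only invocation of the entry point above, assuming the three sub-models were already trained and saved by `clam_optimize` (every path and option below is a placeholder, not a value from this project):
###Code
# placeholder arguments -- adjust to the directories and options used during clam_optimize
clam_test(n_class=2, n_ins=8, att_gate=True, att_only=False, mil_ins=True, mut_ex=False,
          test_path='data/test/', result_path='results/', result_file_name='test_predictions.tsv',
          i_model_dir='models/ins/', b_model_dir='models/bag/', c_model_dir='models/clam/',
          m_bag_op=False, m_clam_op=False)
###Output
_____no_output_____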
Projects/4_HMM Tagger/HMM warmup (optional).ipynb | ###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
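###Markdown
A quick optional sanity check that each emission distribution defined above is a proper probability table, i.e. the "yes"/"no" probabilities of every state sum to 1.0 (mirroring the note about the rows of the table summing to 1.0):
###Code
for name, dist in [("Sunny", sunny_emissions), ("Rainy", rainy_emissions)]:
    print(name, "emissions sum to", dist.probability("yes") + dist.probability("no"))
###Output
_____no_output_____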
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then read the explanation below.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
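###Markdown
For intuition, the 6.92% above can be reproduced by hand with a small NumPy forward recursion over the two hidden states, using only the probability tables defined earlier (this is a sketch for illustration, not part of the Pomegranate API):
###Code
import numpy as np

# Manual forward recursion; states ordered as [Sunny, Rainy].
pi = np.array([0.5, 0.5])            # initial distribution P(X_0)
A = np.array([[0.8, 0.2],            # transition row: from Sunny
              [0.4, 0.6]])           # transition row: from Rainy
B = {'yes': np.array([0.1, 0.8]),    # P(yes | Sunny), P(yes | Rainy)
     'no':  np.array([0.9, 0.2])}    # P(no | Sunny),  P(no | Rainy)

obs = ['yes', 'no', 'yes']
alpha = pi * B[obs[0]]               # alpha_1(j) = pi_j * b_j(o_1)
for o in obs[1:]:
    alpha = (alpha @ A) * B[o]       # alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(o_t)
print("P(observations) = {:.2%}".format(alpha.sum()))   # should match the 6.92% above
###Output
_____no_output_____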
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
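###Markdown
As a cross-check, the 10.20% total from the brute-force enumeration above should equal the all-paths (forward) likelihood reported by the model for the same sequence:
###Code
# should print the same total as the enumeration above (about 10.20%)
print("{:.2f}%".format(100 * np.exp(model.log_probability(o))))
###Output
_____no_output_____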
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name = "Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name = "Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name = "Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name = "Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize = (5, 5), filename = "example.png", overwrite = True, show_ends = False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then read the explanation below.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes', 'yes', 'no']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
yes 3% 0% 0% 0%
no 0% 1% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes', 'yes', 'no'] is 1.60%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes', 'yes', 'no']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy', 'Rainy', 'Sunny'] at 0.40%.
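###Markdown
Comparing the two quantities computed above: the forward (all-paths) likelihood of this sequence was about 1.60%, while the single best Viterbi path accounts for about 0.40%, i.e. roughly a quarter of the total probability mass. A quick ratio check reusing the variables from the previous cells:
###Code
# fraction of the all-paths likelihood carried by the single Viterbi path (roughly 0.25 here)
viterbi_fraction = np.exp(viterbi_likelihood) / np.exp(model.log_probability(observations))
print("The Viterbi path carries {:.1%} of the total sequence likelihood".format(viterbi_fraction))
###Output
_____no_output_____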
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)|Weather |$yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| Weather | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network. If the visualization does not work, follow the steps below:- Download and install Graphviz from https://graphviz.gitlab.io/_pages/Download/Download_windows.html- ```conda install graphviz```- Add the Graphviz install path (C:...\graphviz\bin) to Control Panel > System and Security > System > Advanced System Settings > Environment Variables > Path > Edit > New- Very important: restart your Jupyter notebook/machine. I tried restarting the machine and it worked.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then read the explanation below.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
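As a quick sanity check on the reordered matrix (a small sketch assuming the `transitions` array from the cell above is still in scope), the Sunny and Rainy rows should each sum to 1.0, since every day has to transition somewhere:

```python
# rows 1 and 2 of the reordered matrix are Sunny and Rainy; their outgoing
# transition probabilities should each sum to 1.0
assert np.allclose(transitions[1:3].sum(axis=1), 1.0)
```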
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes','no','yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
no 1% 3% 0% 0%
yes 1% 0% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes', 'no', 'yes'] is 1.10%
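For intuition about what `model.forward()` and `model.log_probability()` compute, here is a minimal pure-Python sketch of the forward recursion using hand-coded copies of the emission and transition tables above (an illustration under those assumptions, not Pomegranate's internals); it should reproduce the ~1.10% figure printed above:

```python
# hand-coded copies of the tables defined earlier in this notebook
start_p = {"Sunny": 0.5, "Rainy": 0.5}
trans_p = {"Sunny": {"Sunny": 0.8, "Rainy": 0.2},
           "Rainy": {"Sunny": 0.4, "Rainy": 0.6}}
emit_p  = {"Sunny": {"yes": 0.1, "no": 0.9},
           "Rainy": {"yes": 0.8, "no": 0.2}}

def forward_likelihood(obs):
    """Return P(obs) by summing over all hidden-state paths (forward recursion)."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in start_p}
    for y in obs[1:]:
        alpha = {s: emit_p[s][y] * sum(alpha[r] * trans_p[r][s] for r in alpha)
                 for s in trans_p}
    return sum(alpha.values())

print("{:.2f}%".format(100 * forward_likelihood(['yes', 'no', 'yes', 'no', 'yes'])))
```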
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes','yes','no']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy', 'Rainy', 'Sunny'] at 0.40%.
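A matching pure-Python sketch of the Viterbi recursion (a max over paths instead of a sum, plus backpointers), reusing the hand-coded tables from the forward sketch above; up to tie-breaking it should recover the same path and the ~0.40% likelihood reported above:

```python
def viterbi_decode(obs):
    """Return (most likely hidden-state path, its probability)."""
    # delta[s] = probability of the best path so far that ends in state s
    delta = {s: start_p[s] * emit_p[s][obs[0]] for s in start_p}
    backpointers = []
    for y in obs[1:]:
        # for each state, remember which previous state yields the best path into it
        prev = {s: max(delta, key=lambda r: delta[r] * trans_p[r][s]) for s in trans_p}
        delta = {s: emit_p[s][y] * delta[prev[s]] * trans_p[prev[s]][s] for s in trans_p}
        backpointers.append(prev)
    # trace back from the best final state
    state = max(delta, key=delta.get)
    best_prob = delta[state]
    path = [state]
    for prev in reversed(backpointers):
        state = prev[state]
        path.append(state)
    return list(reversed(path)), best_prob

path, prob = viterbi_decode(['yes', 'no', 'yes', 'yes', 'no'])
print(path, "{:.2f}%".format(100 * prob))
```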
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of the observation sequence under each possible weather sequence of length 3, and compare the individual path likelihoods with the Viterbi path.
###Code
# enumerate all possible 3-day weather sequences with itertools.product
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
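Two quick checks tie this enumeration back to the Pomegranate calls (assuming `k`, `o`, and `vprob` from the cell above are still in scope): the Viterbi likelihood is the largest single term, while the forward likelihood is the sum of all of them:

```python
assert np.isclose(max(k), vprob)                              # Viterbi = best single path
assert np.isclose(sum(k), np.exp(model.log_probability(o)))   # forward = sum over all paths
```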
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week, the hidden state represents the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
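Putting the description above into a single formula: for a hidden state sequence $X_{1:T}$ and observations $Y_{1:T}$, the joint probability under the model factorizes as $P(X_{1:T}, Y_{1:T}) = P(X_1)\,P(Y_1 \mid X_1)\,\prod_{t=2}^{T} P(X_t \mid X_{t-1})\,P(Y_t \mid X_t)$, i.e., one initial-state factor, then one transition factor and one emission factor per time step.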
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
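One more optional check before moving on (a tiny sketch that uses only the `probability()` call already shown above): each emission distribution should put complementary mass on 'yes' and 'no':

```python
for dist in (sunny_emissions, rainy_emissions):
    # P(yes) + P(no) must equal 1.0 for a valid discrete distribution
    assert abs(dist.probability("yes") + dist.probability("no") - 1.0) < 1e-9
```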
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
import os
# Windows workaround: put the Graphviz bin directory on PATH so pydot can find the `dot` executable
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
# confirm that `dot` is now discoverable
!where dot
show_model(model, figsize=(5, 5), filename="_example1.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
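Note that 2.30% is smaller than the 6.92% forward likelihood computed above for the same observations. This is expected: the Viterbi value counts only the single best path, while the forward value sums over every path, so the former can never exceed the latter. A one-line check (assuming `viterbi_likelihood` and `observations` are still in scope):

```python
assert np.exp(viterbi_likelihood) <= np.exp(model.log_probability(observations))
```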
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of the observation sequence under each possible weather sequence of length 3, and compare the individual path likelihoods with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['no', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Sunny', 'Sunny', 'Rainy'] at 5.18%.
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week, the hidden state represents the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
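If you want to peek at the finished network before visualizing it, you can list the node names (a small sketch using only attributes that also appear later in this notebook); the exact ordering is chosen by Pomegranate:

```python
# the two weather states plus the automatically added start and end nodes
print([s.name for s in model.states])
```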
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of the observation sequence under each possible weather sequence of length 3, and compare the individual path likelihoods with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then read the .
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
# log transition probabilities, P(X_t | X_t-1)
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
# log emission probabilities, P(Y_t | X_t)
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy': {'yes': np.log(.8), 'no': np.log(.2)}}
o = observations
k = []  # joint likelihood of the observations with each candidate weather sequence
vprob = np.exp(model.viterbi(o)[0])  # likelihood of the single best (Viterbi) path
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
    # P(X_0) * P(Y_0|X_0) * P(X_1|X_0) * P(Y_1|X_1) * P(X_2|X_1) * P(Y_2|X_2), computed in log space
    k.append(np.exp(np.log(.5) + e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
    # np.isclose instead of == so the Viterbi flag survives floating-point rounding differences
    print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if np.isclose(k[-1], vprob) else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
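###Markdown
As a cross-check, the brute-force total over all eight weather sequences should agree with the all-paths likelihood that the forward algorithm computes internally. The short cell below is a sketch of that comparison: it compares `sum(k)` from the cell above against `model.log_probability()` for the same observation sequence.
###Code
# Cross-check (sketch): the enumerated total over all hidden state paths should
# match the forward-algorithm (all-paths) likelihood reported by the model.
total_enumerated = sum(k)
total_forward = np.exp(model.log_probability(o))
print("Enumerated total over all paths: {:.2f}%".format(100 * total_enumerated))
print("Forward-algorithm total:         {:.2f}%".format(100 * total_forward))
assert np.isclose(total_enumerated, total_forward)
###Output
_____no_output_____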
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then read the .
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then read the .
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the Model

The states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.

Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models
---
Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:

**Likelihood Evaluation**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the model.

We can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.

**Hidden State Decoding**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations.

We can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states.

**Parameter Learning**

Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$.

We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate.

IMPLEMENTATION: Calculate Sequence Likelihood

Calculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.

Fill in the code in the next section with a sample observation sequence, and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
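To make the forward recursion concrete before calling the library, here is a small hand-rolled sketch using plain Python dictionaries holding the probability tables defined above. The 6.92% it prints for ['yes', 'no', 'yes'] matches the all-paths likelihood Pomegranate reports in the next cell.

```python
# forward recursion by hand: alpha[t][s] = P(obs[0..t], X_t = s)
obs = ['yes', 'no', 'yes']
start = {'Sunny': 0.5, 'Rainy': 0.5}
trans = {'Sunny': {'Sunny': 0.8, 'Rainy': 0.2},
         'Rainy': {'Sunny': 0.4, 'Rainy': 0.6}}
emit = {'Sunny': {'yes': 0.1, 'no': 0.9},
        'Rainy': {'yes': 0.8, 'no': 0.2}}

alpha = [{s: start[s] * emit[s][obs[0]] for s in start}]
for t in range(1, len(obs)):
    alpha.append({s: emit[s][obs[t]] * sum(alpha[t - 1][r] * trans[r][s] for r in trans)
                  for s in start})

# summing the final column over states gives the all-paths likelihood
print("P({}) = {:.4f}".format(obs, sum(alpha[-1].values())))  # -> 0.0692
```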
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence

The [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the Viterbi path.

This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.

Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'yes', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Rainy', 'Rainy'] at 9.22%.
###Markdown
Forward likelihood vs Viterbi likelihood

Run the cell below to see the likelihood of the observation sequence under each possible weather sequence of length 3, and compare it with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM
---
You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).

> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.

A simplified diagram of the required network topology is shown below.

![](_example.png)

Describing the Network

$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.

HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.

At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.

In this problem, $t$ corresponds to each day of the week, the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.

For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.)

Initializing an HMM Network with Pomegranate

The Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden States

When the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution.

Observation Emission Probabilities: $P(Y_t | X_t)$

We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)

|         | $yes$ | $no$ |
| ---     | ---   | ---  |
| $Sunny$ | 0.10  | 0.90 |
| $Rainy$ | 0.80  | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding Transitions

Once the states are added to the model, we can build up the desired topology of individual state transitions.

Initial Probability $P(X_0)$:

We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:

| $Sunny$ | $Rainy$ |
| ---     | ---     |
| 0.5     | 0.5     |

State transition probabilities $P(X_{t} | X_{t-1})$

Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)

|         | $Sunny$ | $Rainy$ |
| ---     | ---     | ---     |
| $Sunny$ | 0.80    | 0.20    |
| $Rainy$ | 0.40    | 0.60    |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the Model

The states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.

Run the next cell to inspect the full state transition matrix.
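As a small illustration of the array-style access mentioned above, the sketch below (not part of the original notebook) builds a name-to-index lookup, which is convenient when reading entries straight out of the dense transition matrix; the matrix rows and columns follow the ordering of `model.states`, just as the next cell assumes.

```python
# map each state name to its position in model.states
state_index = {state.name: i for i, state in enumerate(model.states)}

# array syntax: pull a state object back out by index
print(model.states[state_index["Rainy"]].name)   # Rainy

# look up a single transition probability directly from the dense matrix
matrix = model.dense_transition_matrix()
print(matrix[state_index["Rainy"], state_index["Sunny"]])   # ~0.4
```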
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models
---
Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:

**Likelihood Evaluation**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the model.

We can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.

**Hidden State Decoding**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations.

We can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states.

**Parameter Learning**

Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$.

We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate.

IMPLEMENTATION: Calculate Sequence Likelihood

Calculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.

Fill in the code in the next section with a sample observation sequence, and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
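The notebook only demonstrates likelihood evaluation and Viterbi decoding; for completeness, here is a hedged sketch of posterior ("smoothing") decoding. It assumes the `predict_proba()` method of Pomegranate's 0.x `HiddenMarkovModel`, which runs the forward-backward algorithm; verify the method and the ordering of its output columns against your installed version before relying on it.

```python
# posterior probability of each hidden state at every time step,
# given the entire observation sequence (forward-backward smoothing)
posteriors = model.predict_proba(['yes', 'no', 'yes'])
print(posteriors)   # one row per observation, one column per non-silent state
```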
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence

The [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the Viterbi path.

This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.

Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihood

Run the cell below to see the likelihood of the observation sequence under each possible weather sequence of length 3, and compare it with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM
---
You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).

> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.

A simplified diagram of the required network topology is shown below.

![](_example.png)

Describing the Network

$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.

HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.

At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.

In this problem, $t$ corresponds to each day of the week, the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.

For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.)

Initializing an HMM Network with Pomegranate

The Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden States

When the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution.

Observation Emission Probabilities: $P(Y_t | X_t)$

We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)

|         | $yes$ | $no$ |
| ---     | ---   | ---  |
| $Sunny$ | 0.10  | 0.90 |
| $Rainy$ | 0.80  | 0.20 |
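Once the next cell has been run, a quick sanity check like the sketch below (not part of the original notebook) confirms that each emission distribution really is a probability distribution over the two observation symbols, mirroring the "rows sum to 1.0" note above.

```python
# each row of the emission table should sum to 1.0
for name, dist in [("Sunny", sunny_emissions), ("Rainy", rainy_emissions)]:
    total = sum(dist.probability(symbol) for symbol in ("yes", "no"))
    assert abs(total - 1.0) < 1e-9, "{} emissions should sum to 1, got {}".format(name, total)
    print("{} emissions sum to {:.1f}".format(name, total))
```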
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding Transitions

Once the states are added to the model, we can build up the desired topology of individual state transitions.

Initial Probability $P(X_0)$:

We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:

| $Sunny$ | $Rainy$ |
| ---     | ---     |
| 0.5     | 0.5     |

State transition probabilities $P(X_{t} | X_{t-1})$

Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)

|         | $Sunny$ | $Rainy$ |
| ---     | ---     | ---     |
| $Sunny$ | 0.80    | 0.20    |
| $Rainy$ | 0.40    | 0.60    |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="opt.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the Model

The states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.

Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models
---
Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:

**Likelihood Evaluation**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the model.

We can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.

**Hidden State Decoding**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations.

We can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states.

**Parameter Learning**

Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$.

We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate.

IMPLEMENTATION: Calculate Sequence Likelihood

Calculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.

Fill in the code in the next section with a sample observation sequence, and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
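Although this notebook never exercises parameter learning, Pomegranate's `fit()` method implements it via Baum-Welch. The sketch below is illustrative only: the observation sequences are made up, the keyword names follow the 0.x API as best I recall them, and fitting overwrites the hand-specified probabilities, so in practice you would run it on a separate copy of the topology.

```python
# illustrative, made-up training sequences of umbrella observations
sequences = [['yes', 'no', 'no', 'yes', 'yes'],
             ['no', 'no', 'no', 'yes', 'no'],
             ['yes', 'yes', 'no', 'no', 'no']]

# re-estimate transition and emission probabilities in place (Baum-Welch)
model.fit(sequences, algorithm='baum-welch', max_iterations=10)

print(model.dense_transition_matrix())
```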
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence

The [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the Viterbi path.

This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.

Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
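Since the comparison above is easy to get backwards, here is a short sketch (using only methods already shown in this notebook) that makes the relationship explicit: the Viterbi likelihood is the single largest term in the sum the forward algorithm computes, so it can never exceed the all-paths likelihood.

```python
obs = ['yes', 'no', 'yes']
viterbi_logp, _ = model.viterbi(obs)          # best single path
all_paths_logp = model.log_probability(obs)   # sum over every path

print("best single path:   {:.2f}%".format(100 * np.exp(viterbi_logp)))
print("sum over all paths: {:.2f}%".format(100 * np.exp(all_paths_logp)))
assert viterbi_logp <= all_paths_logp
```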
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihood

Run the cell below to see the likelihood of the observation sequence under each possible weather sequence of length 3, and compare it with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM
---
You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).

> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.

A simplified diagram of the required network topology is shown below.

![](_example.png)

Describing the Network

$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.

HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.

At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.

In this problem, $t$ corresponds to each day of the week, the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.

For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.)

Initializing an HMM Network with Pomegranate

The Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden States

When the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution.

Observation Emission Probabilities: $P(Y_t | X_t)$

We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)

|         | $yes$ | $no$ |
| ---     | ---   | ---  |
| $Sunny$ | 0.10  | 0.90 |
| $Rainy$ | 0.80  | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding Transitions

Once the states are added to the model, we can build up the desired topology of individual state transitions.

Initial Probability $P(X_0)$:

We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:

| $Sunny$ | $Rainy$ |
| ---     | ---     |
| 0.5     | 0.5     |

State transition probabilities $P(X_{t} | X_{t-1})$

Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)

|         | $Sunny$ | $Rainy$ |
| ---     | ---     | ---     |
| $Sunny$ | 0.80    | 0.20    |
| $Rainy$ | 0.40    | 0.60    |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
for state in model.states:
if (state.name != "Example Model-start" and state.name != "Example Model-end"):
print(state.name)
###Output
Rainy
Sunny
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the Model

The states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.

Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models
---
Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:

**Likelihood Evaluation**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the model.

We can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.

**Hidden State Decoding**

Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations.

We can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states.

**Parameter Learning**

Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$.

We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate.

IMPLEMENTATION: Calculate Sequence Likelihood

Calculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.

Fill in the code in the next section with a sample observation sequence, and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
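One practical note on why both `forward()` and `log_probability()` return log-likelihoods: multiplying hundreds of per-step probabilities underflows 64-bit floats, while adding their logs stays well-behaved. The sketch below (illustrative, with a made-up long sequence) shows the effect.

```python
# a long, made-up observation sequence: thousands of umbrella-free days
long_obs = ['no'] * 5000

logp = model.log_probability(long_obs)
print("log-likelihood: {:.1f}".format(logp))      # a large negative number
print("likelihood:     {}".format(np.exp(logp)))  # underflows to 0.0 in float64
```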
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence

The [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the Viterbi path.

This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.

Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
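###Markdown
The 2.30% Viterbi likelihood can be checked the same way with a hand-rolled Viterbi recursion (again an illustrative aside, with made-up variable names). Note that for this observation sequence two paths tie for the maximum at the final step, so the recovered state sequence depends on tie-breaking even though the likelihood itself is unambiguous.
###Code
# Hand-rolled Viterbi sketch over the same start/transition/emission tables; illustrative only.
start = {'Sunny': 0.5, 'Rainy': 0.5}
trans = {'Sunny': {'Sunny': 0.8, 'Rainy': 0.2}, 'Rainy': {'Sunny': 0.4, 'Rainy': 0.6}}
emit = {'Sunny': {'yes': 0.1, 'no': 0.9}, 'Rainy': {'yes': 0.8, 'no': 0.2}}
obs = ['yes', 'no', 'yes']
# delta[t][s] = probability of the best path that ends in state s at time t
delta = [{s: start[s] * emit[s][obs[0]] for s in start}]
backptr = []
for t in range(1, len(obs)):
    step, ptr = {}, {}
    for s in start:
        prev, p = max(((r, delta[t - 1][r] * trans[r][s]) for r in start), key=lambda x: x[1])
        step[s] = p * emit[s][obs[t]]
        ptr[s] = prev
    delta.append(step)
    backptr.append(ptr)
# Backtrack from the best final state to recover the path
state = max(delta[-1], key=delta[-1].get)
path = [state]
for ptr in reversed(backptr):
    state = ptr[state]
    path.append(state)
print("best path:", list(reversed(path)),
      "likelihood: {:.2f}%".format(100 * max(delta[-1].values())))
###Output
_____no_output_____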
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
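###Markdown
The Parameter Learning task mentioned in the inference overview is essentially what the part of speech tagger does with a labeled corpus: when the hidden states are observed in the training data, the transition and emission tables can be estimated by simple counting (maximum likelihood estimation). The sketch below uses a tiny made-up labeled week of weather data purely for illustration; it is not part of the original notebook.
###Code
from collections import Counter, defaultdict

# Toy labeled data: (hidden weather, observed umbrella) pairs for one week -- made up for illustration.
days = [('Rainy', 'yes'), ('Sunny', 'no'), ('Sunny', 'no'), ('Sunny', 'yes'), ('Rainy', 'yes')]

transition_counts = Counter(zip([w for w, _ in days[:-1]], [w for w, _ in days[1:]]))
emission_counts = Counter(days)
state_counts = Counter(w for w, _ in days)

# Maximum likelihood estimates: normalize the counts for each source state.
transitions = defaultdict(dict)
for (prev, nxt), c in transition_counts.items():
    transitions[prev][nxt] = c / sum(v for (p, _), v in transition_counts.items() if p == prev)
emissions = defaultdict(dict)
for (state, obs), c in emission_counts.items():
    emissions[state][obs] = c / state_counts[state]

print("estimated transitions:", dict(transitions))
print("estimated emissions:", dict(emissions))
###Output
_____no_output_____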
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden states represent the weather outside (whether it is Rainy or Sunny) and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
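###Markdown
As an aside (not part of the original exercise), the 2x2 Sunny/Rainy block of this matrix also determines the long-run fraction of sunny and rainy days: the stationary distribution, which is a left eigenvector of the transition matrix. A short numpy check is sketched below.
###Code
import numpy as np  # already imported earlier in the notebook

# Stationary distribution of the Sunny/Rainy transition matrix: solve pi = pi @ P.
P = np.array([[0.8, 0.2],   # Sunny -> Sunny, Sunny -> Rainy
              [0.4, 0.6]])  # Rainy -> Sunny, Rainy -> Rainy
eigvals, eigvecs = np.linalg.eig(P.T)  # left eigenvectors of P are right eigenvectors of P.T
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])  # eigenvector for eigenvalue 1
pi = pi / pi.sum()
print("Long-run P(Sunny) = {:.3f}, P(Rainy) = {:.3f}".format(pi[0], pi[1]))  # ~0.667 / 0.333
###Output
_____no_output_____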
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden states represent the weather outside (whether it is Rainy or Sunny) and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
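###Markdown
As a side note on the "two initialization methods" mentioned above: Pomegranate 0.x also exposes a matrix-based constructor, `HiddenMarkovModel.from_matrix`. The sketch below builds the same weather model in a single call, under the assumption that the constructor takes the transition matrix, the emission distributions, and the start probabilities in that order; check the linked documentation for the exact signature of your installed version. The `alt_model` name is only illustrative and is not used elsewhere in this notebook.
###Code
# Matrix-based construction sketch (an assumption -- verify the from_matrix signature
# against the Pomegranate documentation for your installed version before relying on it).
alt_sunny = DiscreteDistribution({"yes": 0.1, "no": 0.9})
alt_rainy = DiscreteDistribution({"yes": 0.8, "no": 0.2})
alt_model = HiddenMarkovModel.from_matrix(
    [[0.8, 0.2],            # Sunny -> Sunny, Sunny -> Rainy
     [0.4, 0.6]],           # Rainy -> Sunny, Rainy -> Rainy
    [alt_sunny, alt_rainy], # emission distribution for each row above
    [0.5, 0.5],             # starting probability for each state
    state_names=["Sunny", "Rainy"],
    name="Alt Example Model")
###Output
_____no_output_____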
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| _ | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| _ | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(10, 10), show_ends=True)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(X|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['no', 'no', 'no']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
no 10% 45% 0% 0%
no 3% 36% 0% 0%
no 2% 27% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['no', 'no', 'no'] is 28.80%
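###Markdown
The "filtering" task mentioned in the inference overview falls out of the same forward matrix: normalizing each row over the two weather states gives $P(X_t | Y_1..Y_t)$, the belief about today's weather given the umbrella observations so far. The sketch below is an aside (not part of the original exercise) that reuses `forward_matrix`, `observations`, and `model` from the cell above, so run that cell first.
###Code
# Filtering: P(X_t | y_1..y_t) is each forward row normalized over the two weather states.
state_names = [s.name for s in model.states]
weather_cols = [state_names.index("Rainy"), state_names.index("Sunny")]
for t, obs in enumerate(observations, start=1):
    row = forward_matrix[t, weather_cols]
    filtered = row / row.sum()
    print("after seeing {!r:5}: P(Rainy)={:.2f}, P(Sunny)={:.2f}".format(obs, filtered[0], filtered[1]))
###Output
_____no_output_____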
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['no', 'no', 'no']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Sunny', 'Sunny', 'Sunny'] at 23.33%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'no']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'no'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 23.33% <-- Viterbi path
('Sunny', 'Sunny', 'Rainy') is 1.30%
('Sunny', 'Rainy', 'Sunny') is 0.65%
('Sunny', 'Rainy', 'Rainy') is 0.22%
('Rainy', 'Sunny', 'Sunny') is 2.59%
('Rainy', 'Sunny', 'Rainy') is 0.14%
('Rainy', 'Rainy', 'Sunny') is 0.43%
('Rainy', 'Rainy', 'Rainy') is 0.14%
The total likelihood of observing ['no', 'no', 'no'] over all possible paths is 28.80%
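###Markdown
A third way to arrive at roughly the same 28.80% figure (besides the forward algorithm and the brute-force enumeration above) is Monte Carlo simulation: sample many three-day weather/umbrella sequences from the tables and count how often ['no', 'no', 'no'] is observed. This sketch is only a sanity check and is not part of the original notebook; the sample size and seed are arbitrary choices.
###Code
import random

random.seed(0)
start = {'Sunny': 0.5, 'Rainy': 0.5}
trans = {'Sunny': {'Sunny': 0.8, 'Rainy': 0.2}, 'Rainy': {'Sunny': 0.4, 'Rainy': 0.6}}
emit = {'Sunny': {'yes': 0.1, 'no': 0.9}, 'Rainy': {'yes': 0.8, 'no': 0.2}}

def draw(dist):
    # sample a key from a {value: probability} dictionary
    return random.choices(list(dist), weights=list(dist.values()))[0]

target, hits, trials = ['no', 'no', 'no'], 0, 100000
for _ in range(trials):
    seq, state = [], draw(start)
    for _ in range(3):
        seq.append(draw(emit[state]))  # emit an umbrella observation from today's weather
        state = draw(trans[state])     # then move to tomorrow's weather
    hits += (seq == target)
print("Monte Carlo estimate: {:.2f}% (exact: 28.80%)".format(100 * hits / trials))
###Output
_____no_output_____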
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden states represent the weather outside (whether it is Rainy or Sunny) and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
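###Markdown
A brief note on why this notebook (and Pomegranate) works with log-probabilities and `np.exp()`: multiplying many per-step probabilities underflows to 0.0 in floating point once sequences get long, while summing log-probabilities stays numerically well-behaved. The sketch below is an aside, not part of the exercise, that makes this concrete with an artificially long sequence; the 0.45 per-step value is just an illustrative number.
###Code
# Why log-probabilities: a long product of small numbers underflows, its log-sum does not.
import math

p_step = 0.45   # e.g. P(start Sunny) * P('no' | Sunny) = 0.5 * 0.9; the exact value is illustrative
n_steps = 1000
direct = 1.0
log_total = 0.0
for _ in range(n_steps):
    direct *= p_step
    log_total += math.log(p_step)
print("direct product:", direct)                  # underflows to 0.0
print("log-probability:", log_total)              # still a finite, usable number
print("recovered via exp:", math.exp(log_total))  # exp() of a very negative number underflows again
###Output
_____no_output_____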
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden states represent the weather outside (whether it is Rainy or Sunny) and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix and check that it matches the probability tables above.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
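A quick sanity check (an editorial sketch, not in the original notebook): every state with outgoing edges should have a row that sums to 1 in the matrix printed above, while the end state's row is all zeros because nothing transitions out of it. The check reuses the `transitions` and `column_order` variables from the previous cell.

```python
# Hedged sketch: verify that each "live" row of the re-ordered transition matrix sums to 1
row_sums = transitions.sum(axis=1)
for name, total in zip(column_order, row_sums):
    print("{:<20s} row sum = {:.1f}".format(name, total))
# start, Sunny, and Rainy rows should each sum to 1; the end row is 0
assert np.allclose(row_sums[:-1], 1.0)
```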
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
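To make the forward recursion concrete, here is a small editorial sketch (not part of the original notebook) that re-implements it directly in numpy from the probability tables defined earlier, with the states ordered [Sunny, Rainy]. It should reproduce the Sunny/Rainy columns above and the 6.92% total.

```python
# Hedged sketch: the forward recursion by hand with numpy (states ordered [Sunny, Rainy]).
import numpy as np

A  = np.array([[0.8, 0.2],               # P(X_t | X_{t-1}): rows = from Sunny, from Rainy
               [0.4, 0.6]])
B  = {"yes": np.array([0.1, 0.8]),       # P(yes | Sunny), P(yes | Rainy)
      "no":  np.array([0.9, 0.2])}
pi = np.array([0.5, 0.5])                # P(X_0)

obs = ['yes', 'no', 'yes']
alpha = pi * B[obs[0]]                   # alpha_1(i) = pi_i * b_i(y_1)
for y in obs[1:]:
    alpha = (alpha @ A) * B[y]           # alpha_t(j) = sum_i alpha_{t-1}(i) A_ij * b_j(y_t)

print("final forward probabilities (Sunny, Rainy):", alpha)
print("total likelihood: {:.2f}%".format(100 * alpha.sum()))   # ~6.92%, matching log_probability above
```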
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
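Likewise, an editorial sketch (not part of the original notebook) of the Viterbi recursion in plain numpy, using the same assumed tables and the state ordering [Sunny, Rainy]; it should recover the ['Rainy', 'Sunny', 'Rainy'] path and the ~2.30% path likelihood reported above.

```python
# Hedged sketch: the Viterbi recursion by hand with numpy (states ordered [Sunny, Rainy]).
import numpy as np

A  = np.array([[0.8, 0.2], [0.4, 0.6]])
B  = {"yes": np.array([0.1, 0.8]), "no": np.array([0.9, 0.2])}
pi = np.array([0.5, 0.5])
names = ["Sunny", "Rainy"]

obs = ['yes', 'no', 'yes']
delta = pi * B[obs[0]]                        # best path probability ending in each state
backpointers = []
for y in obs[1:]:
    scores = delta[:, None] * A               # scores[i, j]: best path into i, then move i -> j
    backpointers.append(scores.argmax(axis=0))
    delta = scores.max(axis=0) * B[y]

# trace the best path backwards from the most likely final state
state = int(delta.argmax())
path = [state]
for bp in reversed(backpointers):
    state = int(bp[state])
    path.append(state)
path.reverse()

print("most likely path:", [names[s] for s in path])
print("path likelihood: {:.2f}%".format(100 * delta.max()))
```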
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of a fixed length-3 observation sequence under each possible weather sequence, and compare the total over all paths with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
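The inference overview earlier in this notebook also lists parameter learning, which is never exercised here. As an editorial sketch only: Pomegranate's `fit` method can re-estimate the transition and emission probabilities from observation sequences with Baum-Welch. The training sequences below are invented for illustration, the `algorithm` keyword is an assumption about the API that may vary by version, and fitting updates the model's parameters in place (overwriting the hand-specified tables).

```python
# Hedged sketch: parameter learning (Baum-Welch) with Pomegranate.
# The sequences are made up for illustration; the 'algorithm' keyword is an
# assumption about the API, and fit() updates model parameters in place.
training_sequences = [
    ['yes', 'no', 'yes', 'no', 'yes'],
    ['no', 'no', 'no', 'yes', 'yes'],
    ['yes', 'yes', 'no', 'no', 'no'],
]
model.fit(training_sequences, algorithm='baum-welch')
print(model.dense_transition_matrix())
```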
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix and check that it matches the probability tables above.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
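The inference overview above also mentions "smoothing". As an editorial sketch (not part of the original notebook), the backward recursion and the forward-backward posteriors can be computed directly in numpy from the assumed probability tables, with states ordered [Sunny, Rainy]; the per-day posteriors answer "what was the weather on day t, given the whole sequence of umbrella observations?"

```python
# Hedged sketch: backward recursion and forward-backward "smoothing" posteriors
# (states ordered [Sunny, Rainy], same hand-specified tables as above).
import numpy as np

A  = np.array([[0.8, 0.2], [0.4, 0.6]])
B  = {"yes": np.array([0.1, 0.8]), "no": np.array([0.9, 0.2])}
pi = np.array([0.5, 0.5])
obs = ['yes', 'no', 'yes']

# forward pass
alphas = [pi * B[obs[0]]]
for y in obs[1:]:
    alphas.append((alphas[-1] @ A) * B[y])

# backward pass: beta_T(i) = 1, beta_t(i) = sum_j A_ij * b_j(y_{t+1}) * beta_{t+1}(j)
betas = [np.ones(2)]
for y in reversed(obs[1:]):
    betas.insert(0, A @ (B[y] * betas[0]))

likelihood = alphas[-1].sum()
for t, (a, b) in enumerate(zip(alphas, betas)):
    posterior = a * b / likelihood          # P(X_t | all observations)
    print("day {}: P(Sunny)={:.2f}, P(Rainy)={:.2f}".format(t + 1, *posterior))
```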
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of a fixed length-3 observation sequence under each possible weather sequence, and compare the total over all paths with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
_____no_output_____
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(10, 10), filename="example.png", overwrite=True, show_ends=True)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix and check that it matches the probability tables above.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
print(column_order)
print(column_names)
print(order_index)
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
_____no_output_____
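As an editorial aside (not part of the original notebook), the tables just inspected define a generative model, and it can help to see what sequences it actually produces. The sketch below samples synthetic (weather, umbrella) pairs with plain numpy, assuming the same hand-specified tables and the state ordering [Sunny, Rainy].

```python
# Hedged sketch: draw synthetic (weather, umbrella) sequences from the
# hand-specified tables to make the generative story concrete. Pure numpy.
import numpy as np

rng = np.random.default_rng(0)
names = ["Sunny", "Rainy"]
A  = np.array([[0.8, 0.2], [0.4, 0.6]])       # P(X_t | X_{t-1})
B  = np.array([[0.1, 0.9], [0.8, 0.2]])       # P(umbrella = yes/no | state)
pi = np.array([0.5, 0.5])                     # P(X_0)

state = rng.choice(2, p=pi)
for day in range(5):
    umbrella = rng.choice(["yes", "no"], p=B[state])
    print("day {}: weather={:5s} umbrella={}".format(day + 1, names[state], umbrella))
    state = rng.choice(2, p=A[state])
```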
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
_____no_output_____
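One practical point worth noting here (an editorial sketch, not part of the original notebook): `forward()` and `log_probability()` return log values because raw forward probabilities underflow double precision on long sequences. The numpy comparison below, using the assumed tables with states ordered [Sunny, Rainy], shows the plain-probability recursion collapsing to zero while the log-space version stays finite.

```python
# Hedged sketch: why HMM libraries work in log space. On a long sequence the
# plain forward recursion underflows to 0.0, while log-sum-exp stays finite.
import numpy as np

A  = np.array([[0.8, 0.2], [0.4, 0.6]])
B  = {"yes": np.array([0.1, 0.8]), "no": np.array([0.9, 0.2])}
pi = np.array([0.5, 0.5])

long_obs = ['yes', 'no'] * 500                       # 1000 observations
alpha, log_alpha = pi * B[long_obs[0]], np.log(pi * B[long_obs[0]])
logA = np.log(A)
for y in long_obs[1:]:
    alpha = (alpha @ A) * B[y]                       # plain probabilities: underflow
    log_alpha = np.array([np.logaddexp.reduce(log_alpha + logA[:, j])
                          for j in range(2)]) + np.log(B[y])

print("plain-probability total:", alpha.sum())       # prints 0.0 after underflow
print("log-space total log-likelihood: {:.1f}".format(np.logaddexp.reduce(log_alpha)))
```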
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
print(viterbi_path)
###Output
_____no_output_____
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of a fixed length-3 observation sequence under each possible weather sequence, and compare the total over all paths with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
_____no_output_____
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix and check that it matches the probability tables above.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[ 0. 0.5 0.5 0. ]
[ 0. 0.8 0.2 0. ]
[ 0. 0.4 0.6 0. ]
[ 0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes','no','yes', 'no','no']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
forward_matrix = np.exp(model.forward(observations))
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
no 1% 3% 0% 0%
no 0% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes', 'no', 'no'] is 2.68%
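The inference overview above also distinguishes "filtering" from "prediction". As an editorial sketch (not part of the original notebook), both fall out of the forward quantities: normalizing the final forward column gives the distribution over today's weather given everything observed so far, and pushing that distribution through the transition matrix predicts tomorrow. The hand-specified tables and the [Sunny, Rainy] state ordering are assumptions carried over from earlier in the notebook.

```python
# Hedged sketch: "filtering" (current-state posterior) and one-step "prediction"
# from the forward recursion (states ordered [Sunny, Rainy]).
import numpy as np

A  = np.array([[0.8, 0.2], [0.4, 0.6]])
B  = {"yes": np.array([0.1, 0.8]), "no": np.array([0.9, 0.2])}
pi = np.array([0.5, 0.5])

obs = ['yes', 'no', 'yes', 'no', 'no']
alpha = pi * B[obs[0]]
for y in obs[1:]:
    alpha = (alpha @ A) * B[y]

filtered = alpha / alpha.sum()          # P(X_today | all observations so far)
predicted = filtered @ A                # P(X_tomorrow | all observations so far)
print("filtered  P(Sunny)={:.2f}, P(Rainy)={:.2f}".format(*filtered))
print("predicted P(Sunny)={:.2f}, P(Rainy)={:.2f}".format(*predicted))
```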
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
observations = ['yes','no','yes', 'no','no']
viterbi_likelihood, viterbi_path = model.viterbi(observations)
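# viterbi_path is a list of (index, State) pairs; entry 0 is the model start state, so it is skipped below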
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Sunny'] at 0.60%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of observing the length-3 sequence under each possible hidden state sequence, and compare the brute-force total with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
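###Markdown
Parameter learning sketch (optional)The cell below is a minimal, hedged sketch of the parameter learning task mentioned above. It is left commented out: `sequences` is a hypothetical list of observation sequences, and the exact `fit()` call should be treated as an assumption about the Pomegranate API rather than something this notebook requires, since the weather model's parameters are already specified.
###Code
# Hypothetical parameter-learning sketch (not executed in this notebook):
# sequences = [['yes', 'no', 'no'], ['no', 'no', 'yes'], ['yes', 'yes', 'no']] # assumed training data
# model.fit(sequences, algorithm='baum-welch') # would re-estimate the transition & emission probabilities
###Output
_____no_output_____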
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week, the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
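# Hedged sketch of the alternative, matrix-based initialization mentioned above (left commented out;
# the exact from_matrix signature is an assumption about the Pomegranate API, and the numbers simply
# restate the emission/transition tables used later in this notebook):
# dists = [DiscreteDistribution({"yes": 0.1, "no": 0.9}), DiscreteDistribution({"yes": 0.8, "no": 0.2})]
# trans_mat = np.array([[0.8, 0.2], [0.4, 0.6]]) # Sunny/Rainy transition probabilities
# starts = np.array([0.5, 0.5]) # equal chance of starting Sunny or Rainy
# alt_model = HiddenMarkovModel.from_matrix(trans_mat, dists, starts, state_names=["Sunny", "Rainy"])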
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then check the printed transition probability from Rainy to Sunny.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of observing the length-3 sequence under each possible hidden state sequence, and compare the brute-force total with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.htmlinitialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
# model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
# model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
model.add_transitions(sunny_state, [sunny_state, rainy_state], [0.8, 0.2])
# TODO: add rainy day transitions using the probabilities specified in the transition table
# model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
# model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
model.add_transitions(rainy_state, [sunny_state, rainy_state], [0.4, 0.6])
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then check the printed transition probability from Rainy to Sunny.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[ 0. 0.5 0.5 0. ]
[ 0. 0.8 0.2 0. ]
[ 0. 0.4 0.6 0. ]
[ 0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of observing the length-3 sequence under each possible hidden state sequence, and compare the brute-force total with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
###Markdown
Intro to Hidden Markov Models (optional)--- IntroductionIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Build a Simple HMM---You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.A simplified diagram of the required network topology is shown below.![](_example.png) Describing the Network$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.In this problem, $t$ corresponds to each day of the week, the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.) Initializing an HMM Network with PomegranateThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
###Output
_____no_output_____
###Markdown
**IMPLEMENTATION**: Add the Hidden StatesWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution. Observation Emission Probabilities: $P(Y_t | X_t)$We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)| | $yes$ | $no$ || --- | --- | --- || $Sunny$ | 0.10 | 0.90 || $Rainy$ | 0.80 | 0.20 |
###Code
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
###Output
Looks good so far!
###Markdown
**IMPLEMENTATION:** Adding TransitionsOnce the states are added to the model, we can build up the desired topology of individual state transitions. Initial Probability $P(X_0)$:We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:| $Sunny$ | $Rainy$ || --- | ---| 0.5 | 0.5 | State transition probabilities $P(X_{t} | X_{t-1})$Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)| | $Sunny$ | $Rainy$ || --- | --- | --- ||$Sunny$| 0.80 | 0.20 ||$Rainy$| 0.40 | 0.60 |
###Code
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
###Output
Great! You've finished the model.
###Markdown
Visualize the Network---We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument True will add the model start & end states that are included in every Pomegranate network.
###Code
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
###Output
_____no_output_____
###Markdown
Checking the ModelThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.Run the next cell to inspect the full state transition matrix, then check the printed transition probability from Rainy to Sunny.
###Code
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
###Output
The state transition matrix, P(Xt|Xt-1):
[[0. 0.5 0.5 0. ]
[0. 0.8 0.2 0. ]
[0. 0.4 0.6 0. ]
[0. 0. 0. 0. ]]
The transition probability from Rainy to Sunny is 40%
###Markdown
Inference in Hidden Markov Models---Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:**Likelihood Evaluation**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the modelWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other observation sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.**Hidden State Decoding**Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observationsWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states. **Parameter Learning**Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$We don't need to learn the model parameters for the weather problem or POS tagging, but parameter learning is supported by Pomegranate. IMPLEMENTATION: Calculate Sequence LikelihoodCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
###Output
Rainy Sunny Example Model-start Example Model-end
<start> 0% 0% 100% 0%
yes 40% 5% 0% 0%
no 5% 18% 0% 0%
yes 5% 2% 0% 0%
The likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%
###Markdown
IMPLEMENTATION: Decoding the Most Likely Hidden State SequenceThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
###Code
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
###Output
The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.
###Markdown
Forward likelihood vs Viterbi likelihoodRun the cells below to see the likelihood of observing the length-3 sequence under each possible hidden state sequence, and compare the brute-force total with the Viterbi path.
###Code
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
###Output
The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...
('Sunny', 'Sunny', 'Sunny') is 2.59%
('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path
('Sunny', 'Rainy', 'Sunny') is 0.07%
('Sunny', 'Rainy', 'Rainy') is 0.86%
('Rainy', 'Sunny', 'Sunny') is 0.29%
('Rainy', 'Sunny', 'Rainy') is 0.58%
('Rainy', 'Rainy', 'Sunny') is 0.05%
('Rainy', 'Rainy', 'Rainy') is 0.58%
The total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%
|
womens_hackathon_python_geospatial.ipynb | ###Markdown
Free cloud computing with Google Drive A GIS programming workshop[Alex Pakalniskis](https://alexpakalniskis.com)![GIS photo](https://new.library.arizona.edu/sites/default/files/styles/featured_image/public/featured_media/gislayers.png?itok=TzJ28PPD)![GRASS GIS](https://grasswiki.osgeo.org/w/images/thumb/Wxgui-pyshell.png/400px-Wxgui-pyshell.png)
###Code
# Install curl (https://curl.haxx.se/), g++ (https://en.wikipedia.org/wiki/GNU_Compiler_Collection), and make (https://en.wikipedia.org/wiki/Make_(software)) on the remote machine you are using
!apt-get install -qq curl g++ make
# Use curl to download the zipped spatial indexing software (libspatialindex) from the Open Source Geospatial Foundation (https://www.osgeo.org/) download server and use tar to extract it
!curl -L http://download.osgeo.org/libspatialindex/spatialindex-src-1.8.5.tar.gz | tar xz
# Python library for operating system specific tasks such as changing directories (https://docs.python.org/3/library/os.html)
import os
# Change current working directory to newly unzipped OSGEO spatial indexing software
os.chdir('spatialindex-src-1.8.5')
# Configure software and dependencies for install on remote Google computer
!./configure
# Build software from its Makefile (https://en.wikipedia.org/wiki/Makefile) by invoking make (https://en.wikipedia.org/wiki/Make_(software))
!make
# Copy built software and files to correct locations for accessing later
!make install
# Use pip Python package manager (https://en.wikipedia.org/wiki/Pip_(package_manager)) to install rtree (http://toblerity.org/rtree/)
# Python-wrapper for libspatialindex, a C++ library (https://libspatialindex.org/) for implementing R-tree data access (https://en.wikipedia.org/wiki/R-tree)
# http://toblerity.org/rtree/
!pip install rtree
# Configure (symbolic) links with ldconfig (https://linux.die.net/man/8/ldconfig)
!ldconfig
# Import rtree and sublibraries. Can use for large data indexing, but we will likely not use it for this workshop demonstration. Go forth and experiment! Have fun.
import rtree
from rtree import index
from rtree.index import Rtree
# Refresh the package lists and install the libspatialindex development headers (libspatialindex-dev) on the remote Google cloud computer, which runs a Linux OS
!sudo apt-get update && apt-get install -y libspatialindex-dev
# Install descartes (https://pypi.org/project/descartes/) to enable plotting planar geometric vector objects in matplotlib
!pip install descartes
# Spatial version of Pandas library for GIS data management and basic analyses (http://geopandas.org/)
!pip install geopandas
# Geographic data science library (https://pysal.readthedocs.io/en/latest/)
!pip install pysal
# Choropleth mapping schemes from the makers of PySAL (https://github.com/pysal/mapclassify)
!pip install mapclassify
# Library for zonal statistics and interpolated point queries (https://pythonhosted.org/rasterstats/)
!pip install rasterstats
# Python-wrapper for leaflet.js JavaScript interactive mapping library (https://python-visualization.github.io/folium/)
!pip install folium
# Upgrade the software
!sudo apt-get upgrade
# Set the current working directory to "home"
os.chdir("/home/")
# Data management and analysis library with emphasis on tabular data. Widely used in research and industry, as it was initially developed for financial analyses of stocks. Comes preloaded in Colaboratory.
import pandas as pd
# Matrix algebra library for multidimensional arrays. Hugely popular in scientific computing, with an interface influenced by the proprietary MATLAB software. Comes preloaded in Colaboratory.
import numpy as np
import geopandas as gpd
# Legendary visualization library. Comes preloaded in Colaboratory.
import matplotlib.pyplot as plt
# Great, newer visualization library built on matplotlib but with more modern styling. Comes preloaded in Colaboratory. (https://seaborn.pydata.org/)
import seaborn as sns
# Colaboratory helper library for uploading local files to, and downloading files from, the remote Google machine
import google.colab.files
import pysal
import mapclassify
# Library for reading and writing geospatial raster data such as TIFFS (https://rasterio.readthedocs.io/en/stable/quickstart.html)
import rasterio
import rasterstats
# Pyplot sublibrary for legend styling
import matplotlib.patches as mpatches
import folium
from folium import plugins
from folium.plugins import MarkerCluster
from folium.plugins import MiniMap
# Scikit-image is a library of image processing algorithms including computer vision protocols
from skimage import data, io, segmentation, color, transform
from skimage.future import graph
# Setting global plot style aesthetics through seaborn
sns.set(context="paper",
palette="colorblind")
###Output
_____no_output_____
###Markdown
Use Python to read and manipulate data from a public Google Sheet in Google Drive
###Code
# Google Sheet with information about Data Centers in Tucson, AZ. Includes Latitude and Longitude: Obtained October 2019
data_url = "https://docs.google.com/spreadsheets/d/1xOpiV58l76stT406ecqlc-wp8MqN3X-hCo-EJxouKsg/view#gid=0"
# There are also great Python libraries like gspread for more direct Google Sheets/Python integration
# For this workshop, I stuck with string manipulation in order to tweak the public-view Google Sheet URL into a downloadable CSV file
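# (A minimal sketch of the gspread route, not run here; it assumes a service-account JSON key file and that the sheet is accessible to that account)
#   import gspread
#   gc = gspread.service_account(filename="my-key.json")    # "my-key.json" is a hypothetical credentials file
#   worksheet = gc.open_by_url(data_url).sheet1              # open the same sheet by its URL
#   records = worksheet.get_all_records()                    # list of dicts, one per spreadsheet row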
# Old suffix that I want to replace
google_suffix = "/view#gid=0"
# New suffix that I am providing
new_google_suffix = "/export?format=csv&gid=0"
# Use `replace` function to create a new string variable with the updated suffix
data = data_url.replace(google_suffix, new_google_suffix)
# Print the updated URL (formatted to download as a CSV file)
print(data)
# Read the CSV-formatted data into Python with pandas
# Pandas is great and can read locally or remotely stored CSV and XLSX files
df = pd.read_csv(data)
# Another commonly used term for this type of tabular dataset is a pandas DataFrame.
# "Data frame" is also a term frequently used in R programming and its tidyverse.
df
df.plot()
###Output
_____no_output_____
###Markdown
Wut? Not the most descriptive or enlightening of figures.
###Code
# Let's use pandas scatter plot function to make a hacky but simple figure of the Tucson data center locations
df.plot.scatter(y="Latitude", x="Longitude", color="black")
# Convert the spreadsheet into a geospatial dataset by joining "Latitude" and "Longitude" into a single Point geometry
# The point geometries can be plotted on maps such as those depicting city boundaries.
# Using GeoDataFrames instead of DataFrames will generally simplify any plotting or analysis.
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude))
gdf
gdf.plot()
###Output
_____no_output_____
###Markdown
See how much less code that requires? Let's plot a few other layers to make the map a bit nicer looking. We'll use some data from the Tucson Open Data portal for GIS (http://gisdata.tucsonaz.gov/) and PublicaMundi (http://www.publicamundi.eu/).
###Code
# GeoJSON data of state boundaries in USA
states_url = "https://raw.githubusercontent.com/PublicaMundi/MappingAPI/master/data/geojson/us-states.json"
# Read state boundary file using geopandas
states = gpd.read_file(states_url)
states.plot(cmap="Accent_r")
# Create a subset of the data for the continental US
continental = states.drop([1,11,51])
continental
continental.plot(cmap="winter")
# You can also create a variable for random color plotting. There will be as many colors/numbers as there are states in the continental US (48) + District of Columbia (1).
vals = np.linspace(0,1,len(continental))
vals
# Shuffle the values so the colorramp is randomized
np.random.shuffle(vals)
# Generate a randomized 256-gradation colormap from the pyplot "Winter" default
cmap = plt.cm.colors.ListedColormap(plt.cm.winter(vals))
# Generate a pyplot figure
fig, ax = plt.subplots(figsize=(10,6))
# Plot the continental US with the randomized "Winter" colormap
continental.plot(ax=ax, cmap=cmap)
# Save the current x and y limits of the continental plot for later reference (these reflect the continental extent, since Alaska and Hawaii were dropped)
cont_xlim = ax.get_xlim()
cont_ylim = ax.get_ylim()
# Jurisdiction boundary data (http://gisdata.tucsonaz.gov/datasets/jurisdiction-boundaries-open-data)
zones = "https://opendata.arcgis.com/datasets/b53bbe832e4e4d94a31730b596487d28_0.geojson"
# Use geopandas to read in the geojson data
zones_gdf = gpd.read_file(zones)
# Display the first five lines of the data set
zones_gdf.head()
# Plot the jurisdictional boundary data
# Use column "NAME" for choropleth mapping
# Feel free to change the colormap to another matplotlib colormap (https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html)
zones_gdf.plot(column="NAME",cmap="tab20b_r", legend=True, figsize=(10,5), legend_kwds={'loc': 'lower left'})
# And some street data (http://gisdata.tucsonaz.gov/datasets/major-streets-and-routes-open-data)
# Again we'll use geopandas to read in the geojson data
tucson_streets = gpd.read_file("https://opendata.arcgis.com/datasets/c6d21082e6d248f0b7db0ff4f6f0ed8e_7.geojson")
# And display the first five rows of the data set
tucson_streets.head()
# Plot the street network data
# Use column "MSR_TYPE" for choropleth mapping
# Feel free to change the colormap to another matplotlib colormap (https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html)
tucson_streets.plot(column="MSR_TYPE",cmap="Dark2", legend=True, figsize=(10,10), legend_kwds={'loc': 'lower left'})
###Output
_____no_output_____
###Markdown
Let's put it all together to map the data centers in Tucson
###Code
# Create a matplotlib figure with a single axis
fig, ax = plt.subplots(figsize=(15,8))
# Plot the continental united states on the axis
continental.plot(ax=ax, color="gray")
# Plot the southern Arizona administrative boundaries
zones_gdf.plot(ax=ax, column="NAME", cmap="cividis_r", legend=True, legend_kwds={'loc': 'lower right'})
# Plot Tucson major streets and roads
tucson_streets.plot(ax=ax, color="gray", hatch="..")
# Plot the data center locations
gdf.plot(ax=ax, color="white", marker="o", markersize=75)
###Output
_____no_output_____
###Markdown
Not ideal. Let's tweak the axis extent using the boundary information of the data center dataframe
###Code
print(gdf.total_bounds)
# Create a matplotlib figure with a single axis
fig, ax = plt.subplots(figsize=(15,8))
# Plot the continental united states on the axis
continental.plot(ax=ax, color="gray")
# Plot the southern Arizona administrative boundaries
zones_gdf.plot(ax=ax, column="NAME", cmap="cividis_r", legend=True, legend_kwds={'loc': 'lower right'})
# Plot Tucson major streets and roads
tucson_streets.plot(ax=ax, color="gray", hatch="..")
# Plot the data center locations
gdf.plot(ax=ax, color="white", marker="o", markersize=75)
# Adjust x and y limits to the total bounds of the data center data
ax.set_xlim(gdf.total_bounds[0]-0.05, gdf.total_bounds[2]+0.05)
ax.set_ylim(gdf.total_bounds[1]-0.05, gdf.total_bounds[3]+0.05)
# Set the figure title
plt.title("Data Centers around Tucson, Arizona", fontsize="xx-large", fontweight="bold")
# Implement tight layout. Entirely optional.
plt.tight_layout()
# Save to the "home" directory on the Google virtual machine
plt.savefig("data_centers_tucson.png")
# Create a variable for the newly created map image
my_photo = "/home/data_centers_tucson.png"
# Read the first of three image layers as a numpy array
my_data = io.imread(my_photo)[:,:,0]
my_data
plt.imshow(my_data)
plt.grid(False)
# Use an image processing algorithm to swirl your map data. Have fun and adjust the parameters like rotation, strength, and radius. Also add in your own data.
swrld = transform.swirl(my_data, rotation=1455, strength=2, radius=1000)
# Create a matplotlib figure
fig, ax = plt.subplots()
# Set the swirled image to the axis
ax.imshow(swrld)
# Turn off axis grid lines
ax.grid(False)
plt.imshow(swrld / my_data*my_data*0.9, cmap="Greys")
plt.grid(False)
plt.title("Slightly Creepy Map", fontsize="xx-large")
plt.savefig("/home/creepy_map.png")
###Output
_____no_output_____
###Markdown
Let's make some interactive maps
###Code
# Display the default map from folium
m = folium.Map(control_scale=True)
m
# Let's use some custom tiles from CartoDB. This requires the tiles and an attribution.
tiles = "https://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}{r}.png"
attr = '© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors © <a href="https://carto.com/attributions">CARTO</a>'
m = folium.Map(tiles=tiles,attr=attr,control_scale=True)
m
# Let's zoom into a location on Earth. I've semi-randomly chosen a latitude and longitude coordinate, then zoomed in for visualization purposes.
m = folium.Map(location=[57,-4],
zoom_start=5,
tiles=tiles,
attr=attr,
control_scale=True)
m
# What about inset maps (which are also interactive). Let's use a different base map provider to contrast the figure aesthetics.
mini_map_tiles = "Stamen Toner"
m = folium.Map(location=[57,-4],
zoom_start=5,
tiles=tiles,
attr=attr,
control_scale=True)
minimap = plugins.MiniMap(toggle_display=True,
tile_layer=mini_map_tiles,
width=100)
m.add_child(minimap)
m
# Let's create a list of the coordinates of Tucson Data Centers. This will come in handy for mapping the data.
coordinates = gdf[['Latitude', 'Longitude']]
coordinates_list = coordinates.values.tolist()
# Calculate the average Latitude and Longitude to center the map perfectly on the Data Center data set
mean_lat=gdf['Latitude'].mean()
mean_lon=gdf['Longitude'].mean()
# Create a variable called "location" to be invoked in a folium map
location = [mean_lat, mean_lon]
location
m = folium.Map(location=location,
tiles = tiles,
attr=attr,
control_scale=True, zoom_start=7)
MarkerCluster(locations=coordinates_list, popups=df["Name"]).add_to(m)
minimap = plugins.MiniMap(toggle_display=True,
tile_layer=mini_map_tiles,
width=100)
m.add_child(minimap)
m
m = folium.Map(location=location,
tiles = tiles,
attr=attr,
control_scale=True)
MarkerCluster(locations=coordinates_list, popups=df["Name"]).add_to(m)
# Can also call the fit_bounds method to fit the interactive map neatly to the bounds of the coordinates_list previously generated
m.fit_bounds([coordinates_list])
minimap = plugins.MiniMap(toggle_display=True,
tile_layer=mini_map_tiles,
width=100)
m.add_child(minimap)
m
m.save("tucson_data_centers.html")
google.colab.files.download("/home/tucson_data_centers.html")
google.colab.files.download("/home/creepy_map.png")
###Output
_____no_output_____ |
SQL for Data Science.ipynb | ###Markdown
This is going to be more technical. I'll make a mini-series on practical applications with SQL if you like :) SELECT Statement```SELECT [DISTINCT] Tablename1.columnname1, Tablename2.columnname3, . . TablenameX.columnnameY,FROM Tablename1[LEFT] JOIN Tablename2 ON conditions[LEFT] JOIN Tablename3 ON conditions..[LEFT] JOIN TablenameX ON conditionsWHERE 1=1 AND conditions[GROUP BY] TablenameN.columnnameM, ...[HAVING] aggregated conditions[ORDER BY] TablenameL.columnnameO, ...[LIMIT] number``` We have this restaurant's orders loaded from a CSV. We're going to continue building on it. You can try coding out these problems as we go along.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pandasql import sqldf
pdsql = lambda q: sqldf(q, globals())
# Fetching/Processing Dataset
orders = pd.read_csv('csv/restaurant-1-orders.csv')
orders.columns = ['number', 'timestamp', 'item', 'quantity', 'price', 'total_products_in_cart']
orders['date'] = pd.to_datetime(orders['timestamp'].str[:10])
orders.sample(2)
print(f"Timeline: {orders['date'].min().date()} to {orders['date'].max().date()}")
###Output
Timeline: 2015-01-09 to 2019-12-07
###Markdown
Dates are stored as ISO-formatted "YYYY-MM-DD" strings, so plain string sorting also puts them in chronological order.
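As a quick illustration (a self-contained snippet, not tied to the dataset):

```python
# ISO-formatted date strings sort the same way the underlying dates do
dates = ["2019-12-07", "2015-01-09", "2016-03-21"]
print(sorted(dates))                 # ['2015-01-09', '2016-03-21', '2019-12-07']
print("2015-01-09" < "2019-12-07")   # True
```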
###Code
# Fetching/Processing Dataset
products = pd.read_csv('csv/restaurant-1-products-price.csv')
products.columns = ['item', 'price']
products.sample(2)
###Output
_____no_output_____
###Markdown
JOIN **Can we verify if orders.price represents item price or item * quantity price?**
###Code
result = pdsql(
"""
SELECT
orders.*,
products.price AS product_price
FROM orders
LEFT JOIN products
ON products.item=orders.item
""")
result.sample(3)
###Output
_____no_output_____
###Markdown
`orders.price` here is the price of a single item. SELECT/WHERE **Which items cost more than \$10?**
###Code
result = pdsql(
"""
SELECT DISTINCT
item
FROM products
WHERE 1=1
AND price > 10
""")
print(f"Number of items > $10 = {result.shape[0]}")
result['item'].tolist()[:10]
###Output
Number of items > $10 = 42
###Markdown
Aggregations **How many orders were placed daily in 2019?**
###Code
result = pdsql(
"""
SELECT
DATE(date) AS date,
COUNT(DISTINCT number) AS num_orders
FROM orders
WHERE 1=1
AND date >= '2019-01-01'
AND date < '2020-01-01'
GROUP BY 1
""")
plt.rcParams.update({'figure.figsize': (17, 3), 'figure.dpi':300})
fig, ax = plt.subplots()
sns.lineplot(data=result.tail(50), x='date', y='num_orders')
plt.grid(linestyle='-', linewidth=0.3)
ax.tick_params(axis='x', rotation=90)
###Output
_____no_output_____
###Markdown
CASE Statements Categorize dates in 2019 based on sale counts: - high yield: > 30 sales - medium yield: 10-30 sales - low yield: < 10 sales
###Code
result = pdsql(
"""
SELECT
date,
num_orders,
CASE WHEN num_orders > 30 THEN 'high'
WHEN num_orders < 10 THEN 'low'
             ELSE 'medium' END AS category
FROM (
SELECT
DATE(date) AS date,
COUNT(DISTINCT number) AS num_orders
FROM orders
WHERE 1=1
AND date >= '2019-01-01'
AND date < '2020-01-01'
GROUP BY 1
) T
""")
result.sample(3)
###Output
_____no_output_____
###Markdown
Common Table Expressions Easier to manage than nested subqueries (most of the time)
###Code
result = pdsql(
"""
WITH daily_orders AS (
SELECT
DATE(date) AS date,
COUNT(DISTINCT number) AS num_orders
FROM orders
WHERE 1=1
AND date >= '2019-01-01'
AND date < '2020-01-01'
GROUP BY 1
)
SELECT
date,
num_orders,
CASE WHEN num_orders > 30 THEN 'high'
WHEN num_orders < 10 THEN 'low'
         ELSE 'medium' END AS category
FROM daily_orders
""")
result.sample(3)
###Output
_____no_output_____
###Markdown
Window Functions **What were the top 3 most expensive orders every day**? Let's break this problem down. Step 1: Get the total price of all orders every day
###Code
result = pdsql("""
WITH order_prices AS (
SELECT
DATE(date) AS date,
number,
SUM(price) AS total_price
FROM orders
WHERE 1=1
AND date >= '2019-01-01'
AND date < '2020-01-01'
GROUP BY 1, 2
)
SELECT * FROM order_prices
""")
result
###Output
_____no_output_____
###Markdown
Step 2: Rank the orders every day from most expensive to least expensive
###Code
result = pdsql("""
WITH order_prices AS (
SELECT
DATE(date) AS date,
number,
SUM(price) AS total_price
FROM orders
WHERE 1=1
AND date >= '2019-01-01'
AND date < '2020-01-01'
GROUP BY 1, 2
)
SELECT
date,
number,
total_price,
ROW_NUMBER() OVER (PARTITION BY date ORDER BY total_price DESC) AS ranking
FROM order_prices
ORDER BY 1, 4
""")
result.head(20)
###Output
_____no_output_____
###Markdown
Step 3: Get the top 3 per day
###Code
result = pdsql("""
WITH order_prices AS (
SELECT
DATE(date) AS date,
number,
SUM(price) AS total_price
FROM orders
WHERE 1=1
AND date >= '2019-01-01'
AND date < '2020-01-01'
GROUP BY 1, 2
)
SELECT *
FROM
(SELECT
date,
number,
total_price,
ROW_NUMBER() OVER (PARTITION BY date ORDER BY total_price DESC) AS ranking
FROM order_prices
ORDER BY 1, 4)
WHERE ranking <= 3
""")
result.head(10)
###Output
_____no_output_____ |
resnet_teeth_unb.ipynb | ###Markdown
Fine Classifier
###Code
!unzip /content/drive/MyDrive/oldteeth/fines.zip
import pandas as pd
data = pd.read_csv('/content/drive/MyDrive/oldteeth/fine.csv')
data
data['fine'].value_counts()
filename = data['filename']
import glob
import cv2
X = []
files = '/content/fines/'
for myfile in filename:
image = cv2.imread(files+myfile)
print(myfile)
image = cv2.resize(image, (128,128))
X.append(image)
len(X)
# Import numpy before it is used below, plus the libraries needed later in this notebook
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.optimizers import Adam
from sklearn.preprocessing import LabelEncoder
x = np.asarray(X)
labelencoder = LabelEncoder()
data['fine'] = labelencoder.fit_transform(data['fine'])
data
arr = []
for i in data['fine']:
arr.append([i])
y = np.asarray(arr)
len(y)
def one_hot(y):
n_values = np.max(y) + 1
y_new = np.eye(n_values)[y[:,0]]
return y_new
y=one_hot(y)
x_train , x_test, y_train, y_test = train_test_split(x, y, test_size =.1, random_state = 27)
y_train[0]
base_model = tf.keras.applications.resnet50.ResNet50(weights= 'imagenet', include_top=False)
dropout_rate = 0.5
model_f = tf.keras.models.Sequential()
model_f.add(base_model)
model_f.add(tf.keras.layers.GlobalMaxPooling2D(name="gap"))
if dropout_rate > 0:
model_f.add(tf.keras.layers.Dropout(dropout_rate, name="dropout_out"))
model_f.add(tf.keras.layers.Dense(6, activation='softmax', name="fc_out"))
model_f.summary()
model_f.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
from tensorflow.keras import callbacks
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
earlystopping = callbacks.EarlyStopping(monitor ="val_loss", mode ="min", patience = 20, restore_best_weights = True,verbose=1)
model_checkpoint = ModelCheckpoint('Fine_classifier.h5', verbose=1, save_best_only=True)
n_folds=1
epochs=200
batch_size=8
def fit_and_evaluate(t_x, val_x, t_y, val_y, EPOCHS=1000, BATCH_SIZE=8):
model = model_f
results = model.fit(t_x, t_y, epochs=EPOCHS, batch_size=BATCH_SIZE, callbacks=[model_checkpoint],
verbose=1, validation_split=0.2)
print("Val Score: ", model.evaluate(val_x, val_y))
return results
#save the model history in a list after fitting so that we can plot later
model_history = []
for i in range(n_folds):
print("Training on Fold: ",i+1)
t_x, val_x, t_y, val_y = train_test_split(x_train, y_train, test_size=0.2,
random_state = np.random.randint(1,1000, 1)[0])
model_history.append(fit_and_evaluate(t_x, val_x, t_y, val_y, epochs, batch_size))
print("======="*12, end="\n\n\n")
df = pd.DataFrame()
for i in model_history[0].history.keys():
df[i] = model_history[0].history[i]
df.to_csv('/content/drive/MyDrive/oldteeth/history_resnet_fine.csv')
df.plot(subplots = True,figsize = (10,4))
###Output
_____no_output_____ |
Stock Sentiment Analysis/Stock Sentiment Analysis.ipynb | ###Markdown
Stock Sentiment Analysis using News Headlines
###Code
import pandas as pd
df=pd.read_csv('Data.csv', encoding = "ISO-8859-1")
df.head()
train = df[df['Date'] < '20150101']
test = df[df['Date'] > '20141231']
# Removing punctuation
data=train.iloc[:,2:27]
data.replace("[^a-zA-Z]"," ",regex=True, inplace=True)
# Renaming column names for ease of access
list1= [i for i in range(25)]
new_Index=[str(i) for i in list1]
data.columns= new_Index
data.head(5)
# Converting headlines to lower case
for index in new_Index:
data[index]=data[index].str.lower()
data.head(1)
# Joining all the headlines in a row into a single string
' '.join(str(x) for x in data.iloc[1,0:25])
headlines = []
for row in range(0,len(data.index)):
headlines.append(' '.join(str(x) for x in data.iloc[row,0:25]))
headlines[0]
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
## implement Bag of Words with bigram counts (ngram_range=(2,2))
countvector=CountVectorizer(ngram_range=(2,2))
traindataset=countvector.fit_transform(headlines)
# implement RandomForest Classifier
randomclassifier=RandomForestClassifier(n_estimators=200,criterion='entropy')
randomclassifier.fit(traindataset,train['Label'])
## Predict for the Test Dataset
test_transform= []
for row in range(0,len(test.index)):
test_transform.append(' '.join(str(x) for x in test.iloc[row,2:27]))
test_dataset = countvector.transform(test_transform)
predictions = randomclassifier.predict(test_dataset)
## Import library to check accuracy
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
matrix=confusion_matrix(test['Label'],predictions)
print(matrix)
score=accuracy_score(test['Label'],predictions)
print(score)
report=classification_report(test['Label'],predictions)
print(report)
###Output
[[141 45]
[ 12 180]]
0.8492063492063492
precision recall f1-score support
0 0.92 0.76 0.83 186
1 0.80 0.94 0.86 192
accuracy 0.85 378
macro avg 0.86 0.85 0.85 378
weighted avg 0.86 0.85 0.85 378
###Markdown
Using Tfidf
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
## implement TF-IDF with bigrams (ngram_range=(2,2))
tfidfvector=TfidfVectorizer(ngram_range=(2,2))
traindataset=tfidfvector.fit_transform(headlines)
# implement RandomForest Classifier
randomclassifier=RandomForestClassifier(n_estimators=200,criterion='entropy')
randomclassifier.fit(traindataset,train['Label'])
## Predict for the Test Dataset
test_transform= []
for row in range(0,len(test.index)):
test_transform.append(' '.join(str(x) for x in test.iloc[row,2:27]))
test_dataset = tfidfvector.transform(test_transform)
predictions = randomclassifier.predict(test_dataset)
matrix=confusion_matrix(test['Label'],predictions)
print(matrix)
score=accuracy_score(test['Label'],predictions)
print(score)
report=classification_report(test['Label'],predictions)
print(report)
###Output
[[148 38]
[ 19 173]]
0.8492063492063492
precision recall f1-score support
0 0.89 0.80 0.84 186
1 0.82 0.90 0.86 192
accuracy 0.85 378
macro avg 0.85 0.85 0.85 378
weighted avg 0.85 0.85 0.85 378
|
examples/Q&A.ipynb | ###Markdown
Backprop Core Example: Q&AQuestion answering lets you ask questions on provided context.
###Code
# Set your API key to do inference on Backprop's platform
# Leave as None to run locally
api_key = None
import backprop
qa = backprop.QA(api_key=api_key)
# A context paragraph about the ISS, segments taken from Wikipedia.
context = """
The International Space Station (ISS) is a modular space station (habitable artificial satellite) in low Earth orbit. It is a multinational collaborative project involving five participating space agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada).
The station serves as a microgravity and space environment research laboratory in which scientific research is conducted in astrobiology, astronomy, meteorology, physics, and other fields.
The station is divided into two sections: the Russian Orbital Segment (ROS), operated by Russia; and the United States Orbital Segment (USOS), which is shared by many nations.
The first ISS component was launched in 1998, and the first long-term residents arrived on 2 November 2000.
The Dragon spacecraft allows the return of pressurised cargo to Earth, which is used, for example, to repatriate scientific experiments for further analysis. As of September 2019, 239 astronauts, cosmonauts, and space tourists from 19 different nations have visited the space station, many of them multiple times; this includes 151 Americans, 47 Russians, nine Japanese, eight Canadians, and five Italians.
"""
qs = ["When was the first piece of the ISS launched?",
"When did the first astronauts get to the ISS?",
"Which spacecraft lets cargo return to Earth?",
"What do they study in the ISS?",
"How many space agencies operate the ISS?"]
for q in qs:
answer = qa(q, context=context)
print(answer)
###Output
1998
2 November 2000
Dragon
astrobiology, astronomy, meteorology, physics, and other fields
five
###Markdown
Previous QA ContextIn the default example above, the question "How many space agencies operate the ISS?" returns a correct answer -- "five". However, we don't get any detail about which agencies those are. Adding previous QA pairs (in the form of (Q, A) tuples) means Backprop can be asked follow-up questions in a natural way.
###Code
# Asking the initial question
first_q = "How many space agencies operate the ISS?"
first_a = qa(first_q, context=context)
qa_pairs = [(first_q, first_a)]
# The follow up doesn't need to be explicit.
# With the context given, "Which are they?" implies we are referring to the space agencies.
follow_up = "Which are they?"
follow_up_ans = qa(follow_up, context=context, prev_qa=qa_pairs)
print(follow_up_ans)
###Output
NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada)
|
docs/rxd-tutorials/thresholds.ipynb | ###Markdown
OverviewSuppose we have an rxd.Reaction or rxd.Rate that should only occur when the concentration is above (or below) a certain threshold. These functions, however, only support continuous rate functions. What can we do?
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
One approach is to use a sigmoid function such as $\tanh(x)$:
###Code
from matplotlib import pyplot as plt
import numpy
x = numpy.linspace(-5, 5)
y = numpy.tanh(x)
plt.grid()
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('tanh(x)')
plt.show()
###Output
_____no_output_____
###Markdown
Consider the following transformation of $\tanh(x)$: $$f(x) = \frac{1 + \tanh(2m(x-a))}{2}$$ One can show that$\displaystyle \lim_{x \to \infty} f(x) = 1$,$\displaystyle \lim_{x \to -\infty} f(x) = 0$,$\displaystyle f(a) = 0.5,$ and $\displaystyle f'(a) = m$. Furthermore $f$ is a sigmoid function that shifts between $0$ and $1$ arbitrarily quickly (parameterized by $m$) around $x=a$. Here, for example, is the graph of $\displaystyle g(x) = \frac{1 + \tanh(2\cdot 10(x-2))}{2}$:
###Code
x = numpy.linspace(0, 4, 1000)
y = (1+numpy.tanh(2*10*(x-2)))/2
plt.grid()
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()
###Output
_____no_output_____
###Markdown
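As a quick check of the slope claim above, differentiating with the chain rule gives $$f'(x) = \frac{d}{dx}\left[\frac{1 + \tanh(2m(x-a))}{2}\right] = \frac{m}{\cosh^2(2m(x-a))},$$ and since $\cosh(0)=1$, indeed $f'(a)=m$.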
Using this logic, we can scale reaction rates by a function of the form $f(x)$ for suitably chosen $a$ and $m$ to approximately threshold them by a concentration. For example, suppose we wish to model a substance (we'll arbitrarily call it IP3) that degrades exponentially (i.e. $y'=-k y$) but only when the concentration is above $0.25$:
###Code
from neuron import h, rxd
from matplotlib import pyplot as plt
h.load_file('stdrun.hoc')
soma = h.Section(name='soma')
cyt = rxd.Region([soma], name='cyt', nrn_region='i')
ip3 = rxd.Species(cyt, name='ip3', charge=0)
k = 2 # degradation rate
threshold = 0.25 # mM... called 'a' in f(x)
m = 100 # steepness of switch
degradation_switch = (1 + rxd.rxdmath.tanh((ip3 - threshold) * 2 * m)) / 2
degradation = rxd.Rate(ip3, -k * ip3 * degradation_switch)
# prior to NEURON 7.7, this first finitialize is necessary for the pointers to exist below
h.finitialize(-65)
t = h.Vector()
ip3_conc = h.Vector()
t.record(h._ref_t)
ip3_conc.record(soma(0.5)._ref_ip3i)
h.finitialize(-65)
h.continuerun(2)
plt.plot(t, ip3_conc)
plt.show()
###Output
_____no_output_____
###Markdown
Reaction-diffusion thresholdsSuppose we have an rxd.Reaction or rxd.Rate that should only occur when the concentration is above (or below) a certain threshold. These functions, however, only support continuous rate functions. What can we do? A version of this notebook may be run online via Google Colab at https://tinyurl.com/rxd-thresholds (make a copy or open in playground mode). One approach is to use a sigmoid function such as $\tanh(x)$:
###Code
from matplotlib import pyplot as plt
import numpy
x = numpy.linspace(-5, 5)
y = numpy.tanh(x)
plt.grid()
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('tanh(x)')
plt.show()
###Output
_____no_output_____
###Markdown
Consider the following transformation of $\tanh(x)$: $$f(x) = \frac{1 + \tanh(2m(x-a))}{2}$$ One can show that$\displaystyle \lim_{x \to \infty} f(x) = 1$,$\displaystyle \lim_{x \to -\infty} f(x) = 0$,$\displaystyle f(a) = 0.5,$ and $\displaystyle f'(a) = m$. Furthermore $f$ is a sigmoid function that shifts between $0$ and $1$ arbitrarily quickly (parameterized by $m$) around $x=a$. Here, for example, is the graph of $\displaystyle g(x) = \frac{1 + \tanh(2\cdot 10(x-2))}{2}$:
###Code
x = numpy.linspace(0, 4, 1000)
y = (1+numpy.tanh(2*10*(x-2)))/2
plt.grid()
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()
###Output
_____no_output_____
###Markdown
Using this logic, we can scale reaction rates by a function of the form $f(x)$ for suitably chosen $a$ and $m$ to approximately threshold them by a concentration. For example, suppose we wish to model a substance (we'll arbitrarily call it IP3) that degrades exponentially (i.e. $y'=-k y$) but only when the concentration is above $0.25$:
###Code
from neuron import h, rxd
from neuron.units import mV, ms, mM
from matplotlib import pyplot as plt
h.load_file('stdrun.hoc')
soma = h.Section(name='soma')
cyt = rxd.Region([soma], name='cyt', nrn_region='i')
ip3 = rxd.Species(cyt, name='ip3', charge=0, initial=1 * mM)
k = 2 # degradation rate
threshold = 0.25 # mM... called 'a' in f(x)
m = 100 # steepness of switch
degradation_switch = (1 + rxd.rxdmath.tanh((ip3 - threshold) * 2 * m)) / 2
degradation = rxd.Rate(ip3, -k * ip3 * degradation_switch)
t = h.Vector().record(h._ref_t)
ip3_conc = h.Vector().record(soma(0.5)._ref_ip3i)
h.finitialize(-65 * mV)
h.continuerun(2 * ms)
plt.plot(t, ip3_conc)
plt.xlabel('t (ms)')
plt.ylabel('[IP3] (mM)')
plt.show()
###Output
_____no_output_____ |
RecurrentQSAR-example-logP.ipynb | ###Markdown
Predicting logP with RNNs and SMILES strings This notebook demonstrates how to build a predictive recurrent neural network for SMILES strings. We will build a regression model for logP with the OpenChem Toolkit (https://github.com/Mariewelt/OpenChem)
###Code
# Cloning OpenChem. Comment this line if you already cloned the repository
! git clone https://github.com/Mariewelt/OpenChem.git
###Output
_____no_output_____
###Markdown
Imports
###Code
import sys
import os
sys.path.append('./OpenChem')
from openchem.models.Smiles2Label import Smiles2Label
from openchem.modules.encoders.rnn_encoder import RNNEncoder
from openchem.modules.mlp.openchem_mlp import OpenChemMLP
from openchem.data.smiles_data_layer import SmilesDataset
from openchem.data.utils import save_smiles_property_file
from openchem.data.utils import create_loader
from openchem.models.openchem_model import build_training, fit, evaluate
import torch.nn as nn
from torch.optim import RMSprop, SGD, Adam
from torch.optim.lr_scheduler import ExponentialLR, StepLR
import torch.nn.functional as F
from sklearn.metrics import r2_score
import pandas as pd
import copy
import pickle
###Output
_____no_output_____
###Markdown
Reading data
###Code
from openchem.data.utils import read_smiles_property_file
data = read_smiles_property_file('./data/logP_labels.csv',
cols_to_read=[1, 2])
smiles = data[0]
labels = data[1]
from openchem.data.utils import get_tokens
tokens, _, _ = get_tokens(smiles)
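# Add a space character to the token alphabet; it is used below as the padding symbol (see 'padding_idx': tokens.index(' ') in the embedding params)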
tokens = tokens + ' '
###Output
_____no_output_____
###Markdown
Model architecture Here we define the architecture of our Recurrent Neural Network (RNN). We will use 2 LSTM layers. For more details on how to build models with OpenChem, visit: https://mariewelt.github.io/OpenChem/
###Code
import torch
from openchem.utils.utils import identity
from openchem.modules.embeddings.basic_embedding import Embedding
model_object = Smiles2Label
model_params = {
'use_cuda': True,
'random_seed': 42,
'world_size': 1,
'task': 'regression',
'data_layer': SmilesDataset,
'use_clip_grad': False,
'batch_size': 128,
'num_epochs': 51,
'logdir': './logs/logp_logs',
'print_every': 1,
'save_every': 5,
#'train_data_layer': train_dataset,
#'val_data_layer': test_dataset,
'eval_metrics': r2_score,
'criterion': nn.MSELoss(),
'optimizer': Adam,
'optimizer_params': {
'lr': 0.005,
},
'lr_scheduler': StepLR,
'lr_scheduler_params': {
'step_size': 15,
'gamma': 0.8
},
'embedding': Embedding,
'embedding_params': {
'num_embeddings': len(tokens),
'embedding_dim': 128,
'padding_idx': tokens.index(' ')
},
'encoder': RNNEncoder,
'encoder_params': {
'input_size': 128,
'layer': "LSTM",
'encoder_dim': 128,
'n_layers': 2,
'dropout': 0.8,
'is_bidirectional': False
},
'mlp': OpenChemMLP,
'mlp_params': {
'input_size': 128,
'n_layers': 2,
'hidden_size': [128, 1],
'activation': [F.relu, identity],
'dropout': 0.0
}
}
try:
os.stat(model_params['logdir'])
except:
os.mkdir(model_params['logdir'])
###Output
_____no_output_____
###Markdown
Initializing data splitter for cross validation
###Code
from sklearn.model_selection import KFold
cross_validation_split = KFold(n_splits=5, shuffle=True)
data = cross_validation_split.split(smiles, labels)
###Output
_____no_output_____
###Markdown
Training cross-validated models
###Code
import os
i = 0
models = []
results = []
for split in data:
print('Cross validation, fold number ' + str(i) + ' in progress...')
train, test = split
X_train = smiles[train]
y_train = labels[train]
X_test = smiles[test]
y_test = labels[test]
save_smiles_property_file('./data/logp_train_fold' + str(i) + '.smi',
X_train, y_train.reshape(-1, 1))
save_smiles_property_file('./data/logp_test_fold' + str(i) + '.smi',
X_test, y_test.reshape(-1, 1))
train_dataset = SmilesDataset('./data/logp_train_fold' + str(i) + '.smi',
delimiter=',', cols_to_read=[0, 1], tokens=tokens)
test_dataset = SmilesDataset('./data/logp_test_fold' + str(i) + '.smi',
delimiter=',', cols_to_read=[0, 1], tokens=tokens)
model_params['train_data_layer'] = train_dataset
model_params['val_data_layer'] = test_dataset
model_params['logdir'] = './logs/logp_logs/fold' + str(i)
logdir = model_params['logdir']
ckpt_dir = logdir + '/checkpoint/'
try:
os.stat(ckpt_dir)
except:
os.mkdir(logdir)
os.mkdir(ckpt_dir)
train_loader = create_loader(train_dataset,
batch_size=model_params['batch_size'],
shuffle=True,
num_workers=4,
pin_memory=True,
sampler=None)
val_loader = create_loader(test_dataset,
batch_size=model_params['batch_size'],
shuffle=False,
num_workers=1,
pin_memory=True)
models.append(model_object(params=model_params).cuda())
criterion, optimizer, lr_scheduler = build_training(models[i], model_params)
results.append(fit(models[i], lr_scheduler, train_loader, optimizer, criterion,
model_params, eval=True, val_loader=val_loader))
i = i+1
###Output
Cross validation, fold number 0 in progress...
TRAINING: [Time: 0m 2s, Epoch: 0, Progress: 0%, Loss: 2.2958]
EVALUATION: [Time: 0m 0s, Loss: 1.3561, Metrics: 0.5885]
TRAINING: [Time: 0m 5s, Epoch: 1, Progress: 1%, Loss: 1.0152]
EVALUATION: [Time: 0m 0s, Loss: 0.5825, Metrics: 0.8231]
TRAINING: [Time: 0m 7s, Epoch: 2, Progress: 3%, Loss: 0.7186]
EVALUATION: [Time: 0m 0s, Loss: 0.7800, Metrics: 0.7640]
TRAINING: [Time: 0m 10s, Epoch: 3, Progress: 5%, Loss: 0.6039]
EVALUATION: [Time: 0m 0s, Loss: 0.5324, Metrics: 0.8384]
TRAINING: [Time: 0m 12s, Epoch: 4, Progress: 7%, Loss: 0.5563]
EVALUATION: [Time: 0m 0s, Loss: 0.4946, Metrics: 0.8499]
TRAINING: [Time: 0m 15s, Epoch: 5, Progress: 9%, Loss: 0.5006]
EVALUATION: [Time: 0m 0s, Loss: 0.5587, Metrics: 0.8310]
TRAINING: [Time: 0m 18s, Epoch: 6, Progress: 11%, Loss: 0.4807]
EVALUATION: [Time: 0m 0s, Loss: 0.4322, Metrics: 0.8676]
TRAINING: [Time: 0m 20s, Epoch: 7, Progress: 13%, Loss: 0.4437]
EVALUATION: [Time: 0m 0s, Loss: 0.7048, Metrics: 0.7858]
TRAINING: [Time: 0m 23s, Epoch: 8, Progress: 15%, Loss: 0.4152]
EVALUATION: [Time: 0m 0s, Loss: 0.6407, Metrics: 0.8040]
TRAINING: [Time: 0m 26s, Epoch: 9, Progress: 17%, Loss: 0.3834]
EVALUATION: [Time: 0m 0s, Loss: 0.3756, Metrics: 0.8848]
TRAINING: [Time: 0m 28s, Epoch: 10, Progress: 19%, Loss: 0.3649]
EVALUATION: [Time: 0m 0s, Loss: 0.3322, Metrics: 0.8982]
TRAINING: [Time: 0m 31s, Epoch: 11, Progress: 21%, Loss: 0.3771]
EVALUATION: [Time: 0m 0s, Loss: 0.4947, Metrics: 0.8497]
TRAINING: [Time: 0m 33s, Epoch: 12, Progress: 23%, Loss: 0.3314]
EVALUATION: [Time: 0m 0s, Loss: 0.3597, Metrics: 0.8904]
TRAINING: [Time: 0m 36s, Epoch: 13, Progress: 25%, Loss: 0.3325]
EVALUATION: [Time: 0m 0s, Loss: 0.3472, Metrics: 0.8935]
TRAINING: [Time: 0m 39s, Epoch: 14, Progress: 27%, Loss: 0.3367]
EVALUATION: [Time: 0m 0s, Loss: 0.4131, Metrics: 0.8737]
TRAINING: [Time: 0m 41s, Epoch: 15, Progress: 29%, Loss: 0.3249]
EVALUATION: [Time: 0m 0s, Loss: 0.3934, Metrics: 0.8810]
TRAINING: [Time: 0m 44s, Epoch: 16, Progress: 31%, Loss: 0.2897]
EVALUATION: [Time: 0m 0s, Loss: 0.2771, Metrics: 0.9147]
TRAINING: [Time: 0m 47s, Epoch: 17, Progress: 33%, Loss: 0.2676]
EVALUATION: [Time: 0m 0s, Loss: 0.3518, Metrics: 0.8921]
TRAINING: [Time: 0m 49s, Epoch: 18, Progress: 35%, Loss: 0.2644]
EVALUATION: [Time: 0m 0s, Loss: 0.2847, Metrics: 0.9127]
TRAINING: [Time: 0m 52s, Epoch: 19, Progress: 37%, Loss: 0.2654]
EVALUATION: [Time: 0m 0s, Loss: 0.3325, Metrics: 0.8984]
TRAINING: [Time: 0m 55s, Epoch: 20, Progress: 39%, Loss: 0.2448]
EVALUATION: [Time: 0m 0s, Loss: 0.2571, Metrics: 0.9208]
TRAINING: [Time: 0m 57s, Epoch: 21, Progress: 41%, Loss: 0.2519]
EVALUATION: [Time: 0m 0s, Loss: 0.2438, Metrics: 0.9247]
TRAINING: [Time: 1m 0s, Epoch: 22, Progress: 43%, Loss: 0.2586]
EVALUATION: [Time: 0m 0s, Loss: 0.2640, Metrics: 0.9194]
TRAINING: [Time: 1m 3s, Epoch: 23, Progress: 45%, Loss: 0.2356]
EVALUATION: [Time: 0m 0s, Loss: 0.3210, Metrics: 0.9018]
TRAINING: [Time: 1m 5s, Epoch: 24, Progress: 47%, Loss: 0.2336]
EVALUATION: [Time: 0m 0s, Loss: 0.2560, Metrics: 0.9215]
TRAINING: [Time: 1m 8s, Epoch: 25, Progress: 49%, Loss: 0.2185]
EVALUATION: [Time: 0m 0s, Loss: 0.2805, Metrics: 0.9139]
TRAINING: [Time: 1m 10s, Epoch: 26, Progress: 50%, Loss: 0.2345]
EVALUATION: [Time: 0m 0s, Loss: 0.2480, Metrics: 0.9249]
TRAINING: [Time: 1m 13s, Epoch: 27, Progress: 52%, Loss: 0.2121]
EVALUATION: [Time: 0m 0s, Loss: 0.3560, Metrics: 0.8932]
TRAINING: [Time: 1m 16s, Epoch: 28, Progress: 54%, Loss: 0.2123]
EVALUATION: [Time: 0m 0s, Loss: 0.2436, Metrics: 0.9252]
TRAINING: [Time: 1m 18s, Epoch: 29, Progress: 56%, Loss: 0.2177]
EVALUATION: [Time: 0m 0s, Loss: 0.2830, Metrics: 0.9126]
TRAINING: [Time: 1m 21s, Epoch: 30, Progress: 58%, Loss: 0.2061]
EVALUATION: [Time: 0m 0s, Loss: 0.2755, Metrics: 0.9154]
TRAINING: [Time: 1m 24s, Epoch: 31, Progress: 60%, Loss: 0.1966]
EVALUATION: [Time: 0m 0s, Loss: 0.2538, Metrics: 0.9220]
TRAINING: [Time: 1m 27s, Epoch: 32, Progress: 62%, Loss: 0.1871]
EVALUATION: [Time: 0m 0s, Loss: 0.3249, Metrics: 0.9010]
TRAINING: [Time: 1m 29s, Epoch: 33, Progress: 64%, Loss: 0.1758]
EVALUATION: [Time: 0m 0s, Loss: 0.2934, Metrics: 0.9099]
TRAINING: [Time: 1m 32s, Epoch: 34, Progress: 66%, Loss: 0.1803]
EVALUATION: [Time: 0m 0s, Loss: 0.3104, Metrics: 0.9047]
TRAINING: [Time: 1m 35s, Epoch: 35, Progress: 68%, Loss: 0.1685]
EVALUATION: [Time: 0m 0s, Loss: 0.2401, Metrics: 0.9264]
TRAINING: [Time: 1m 37s, Epoch: 36, Progress: 70%, Loss: 0.1795]
EVALUATION: [Time: 0m 0s, Loss: 0.2985, Metrics: 0.9087]
TRAINING: [Time: 1m 40s, Epoch: 37, Progress: 72%, Loss: 0.1681]
EVALUATION: [Time: 0m 0s, Loss: 0.2362, Metrics: 0.9273]
TRAINING: [Time: 1m 42s, Epoch: 38, Progress: 74%, Loss: 0.1642]
EVALUATION: [Time: 0m 0s, Loss: 0.2728, Metrics: 0.9160]
TRAINING: [Time: 1m 45s, Epoch: 39, Progress: 76%, Loss: 0.1520]
EVALUATION: [Time: 0m 0s, Loss: 0.3015, Metrics: 0.9077]
TRAINING: [Time: 1m 48s, Epoch: 40, Progress: 78%, Loss: 0.1642]
EVALUATION: [Time: 0m 0s, Loss: 0.2480, Metrics: 0.9240]
TRAINING: [Time: 1m 50s, Epoch: 41, Progress: 80%, Loss: 0.1632]
EVALUATION: [Time: 0m 0s, Loss: 0.2338, Metrics: 0.9285]
TRAINING: [Time: 1m 53s, Epoch: 42, Progress: 82%, Loss: 0.1601]
EVALUATION: [Time: 0m 0s, Loss: 0.2421, Metrics: 0.9254]
TRAINING: [Time: 1m 55s, Epoch: 43, Progress: 84%, Loss: 0.1626]
EVALUATION: [Time: 0m 0s, Loss: 0.2399, Metrics: 0.9264]
TRAINING: [Time: 1m 58s, Epoch: 44, Progress: 86%, Loss: 0.1630]
EVALUATION: [Time: 0m 0s, Loss: 0.2301, Metrics: 0.9294]
TRAINING: [Time: 2m 1s, Epoch: 45, Progress: 88%, Loss: 0.1694]
EVALUATION: [Time: 0m 0s, Loss: 0.2398, Metrics: 0.9263]
TRAINING: [Time: 2m 3s, Epoch: 46, Progress: 90%, Loss: 0.1501]
EVALUATION: [Time: 0m 0s, Loss: 0.2562, Metrics: 0.9215]
TRAINING: [Time: 2m 6s, Epoch: 47, Progress: 92%, Loss: 0.1392]
EVALUATION: [Time: 0m 0s, Loss: 0.2233, Metrics: 0.9312]
TRAINING: [Time: 2m 9s, Epoch: 48, Progress: 94%, Loss: 0.1396]
EVALUATION: [Time: 0m 0s, Loss: 0.2368, Metrics: 0.9281]
TRAINING: [Time: 2m 11s, Epoch: 49, Progress: 96%, Loss: 0.1373]
EVALUATION: [Time: 0m 0s, Loss: 0.2358, Metrics: 0.9274]
TRAINING: [Time: 2m 14s, Epoch: 50, Progress: 98%, Loss: 0.1316]
EVALUATION: [Time: 0m 0s, Loss: 0.2627, Metrics: 0.9192]
Cross validation, fold number 1 in progress...
TRAINING: [Time: 0m 2s, Epoch: 0, Progress: 0%, Loss: 1.8173]
EVALUATION: [Time: 0m 0s, Loss: 1.1259, Metrics: 0.6849]
TRAINING: [Time: 0m 5s, Epoch: 1, Progress: 1%, Loss: 0.8299]
EVALUATION: [Time: 0m 0s, Loss: 0.8562, Metrics: 0.7579]
TRAINING: [Time: 0m 7s, Epoch: 2, Progress: 3%, Loss: 0.6655]
EVALUATION: [Time: 0m 0s, Loss: 0.5297, Metrics: 0.8524]
TRAINING: [Time: 0m 10s, Epoch: 3, Progress: 5%, Loss: 0.5613]
EVALUATION: [Time: 0m 0s, Loss: 0.6256, Metrics: 0.8214]
TRAINING: [Time: 0m 13s, Epoch: 4, Progress: 7%, Loss: 0.5227]
EVALUATION: [Time: 0m 0s, Loss: 0.4516, Metrics: 0.8710]
TRAINING: [Time: 0m 15s, Epoch: 5, Progress: 9%, Loss: 0.4802]
EVALUATION: [Time: 0m 0s, Loss: 0.5253, Metrics: 0.8512]
TRAINING: [Time: 0m 18s, Epoch: 6, Progress: 11%, Loss: 0.4293]
EVALUATION: [Time: 0m 0s, Loss: 0.3818, Metrics: 0.8919]
TRAINING: [Time: 0m 21s, Epoch: 7, Progress: 13%, Loss: 0.4165]
EVALUATION: [Time: 0m 0s, Loss: 0.4413, Metrics: 0.8782]
TRAINING: [Time: 0m 24s, Epoch: 8, Progress: 15%, Loss: 0.3944]
EVALUATION: [Time: 0m 0s, Loss: 0.3568, Metrics: 0.8991]
TRAINING: [Time: 0m 26s, Epoch: 9, Progress: 17%, Loss: 0.3687]
EVALUATION: [Time: 0m 0s, Loss: 0.4999, Metrics: 0.8600]
TRAINING: [Time: 0m 29s, Epoch: 10, Progress: 19%, Loss: 0.3660]
EVALUATION: [Time: 0m 0s, Loss: 0.7382, Metrics: 0.7939]
TRAINING: [Time: 0m 32s, Epoch: 11, Progress: 21%, Loss: 0.3337]
EVALUATION: [Time: 0m 0s, Loss: 0.4114, Metrics: 0.8848]
TRAINING: [Time: 0m 35s, Epoch: 12, Progress: 23%, Loss: 0.3226]
EVALUATION: [Time: 0m 0s, Loss: 0.3865, Metrics: 0.8889]
TRAINING: [Time: 0m 37s, Epoch: 13, Progress: 25%, Loss: 0.3150]
EVALUATION: [Time: 0m 0s, Loss: 0.3179, Metrics: 0.9090]
TRAINING: [Time: 0m 40s, Epoch: 14, Progress: 27%, Loss: 0.2929]
EVALUATION: [Time: 0m 0s, Loss: 0.3241, Metrics: 0.9068]
TRAINING: [Time: 0m 43s, Epoch: 15, Progress: 29%, Loss: 0.3026]
EVALUATION: [Time: 0m 0s, Loss: 0.3181, Metrics: 0.9101]
###Markdown
Evaluating the models
###Code
import numpy as np
rmse = []
r2_scores = []  # renamed so it does not shadow sklearn's r2_score imported above
for i in range(5):
test_dataset = SmilesDataset('./data/logp_test_fold' + str(i) + '.smi',
delimiter=',', cols_to_read=[0, 1], tokens=tokens)
val_loader = create_loader(test_dataset,
batch_size=model_params['batch_size'],
shuffle=False,
num_workers=1,
pin_memory=True)
metrics = evaluate(models[i], val_loader, criterion)
rmse.append(np.sqrt(metrics[0]))
    r2_scores.append(metrics[1])
print("Cross-validated RMSE: ", np.mean(rmse))
print("Cross-validated R^2 score: ", np.mean(r2_score))
###Output
Cross-validated RMSE: 0.49763956473190146
Cross-validated R^2 score: 0.9254044340191147
|
Jupyter Notebook/ML Pipeline Preparation.ipynb | ###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
import nltk
nltk.download(['punkt', 'wordnet','averaged_perceptron_tagger'])
# import libraries
import re
import pandas as pd
import sqlalchemy as db
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
import pickle
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.metrics import classification_report
from sklearn import model_selection
from sklearn.neighbors import KNeighborsClassifier
# load data from database
engine = db.create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('CategorisedMessages', engine)
X = df['message']
Y = df[['related', 'request', 'offer',
'aid_related', 'medical_help', 'medical_products', 'search_and_rescue',
'security', 'military', 'child_alone', 'water', 'food', 'shelter',
'clothing', 'money', 'missing_people', 'refugees', 'death', 'other_aid',
'infrastructure_related', 'transport', 'buildings', 'electricity',
'tools', 'hospitals', 'shops', 'aid_centers', 'other_infrastructure',
'weather_related', 'floods', 'storm', 'fire', 'earthquake', 'cold',
'other_weather', 'direct_report']]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
    # Replace non-alphanumeric characters with spaces
text = re.sub('[^A-Za-z0-9]',' ',text)
# Tokenize the input
tokens = word_tokenize(text)
    # Initialize lemmatizer to reduce words to a standardized (lemma) form
lemmatizer = WordNetLemmatizer()
    # Iterate over each token, lemmatize, lowercase and strip it, then return the cleaned list
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
y_test.columns
df_pred = pd.DataFrame(y_pred)
df_pred.columns=y_test.columns
for column in y_test:
print(column)
print(classification_report(y_test[column],df_pred[column]))
pipeline.get_params()
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'clf__estimator__n_estimators': [10,20]
}
cv = GridSearchCV(pipeline, param_grid=parameters,n_jobs=4)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
df_pred2 = pd.DataFrame(y_pred)
df_pred2.columns=y_test.columns
for column in y_test:
print(column)
print(classification_report(y_test[column],df_pred2[column]))
###Output
related
precision recall f1-score support
0 0.64 0.38 0.47 1487
1 0.83 0.93 0.88 5020
2 0.50 0.45 0.47 47
avg / total 0.79 0.80 0.78 6554
request
precision recall f1-score support
0 0.88 0.98 0.93 5432
1 0.83 0.36 0.50 1122
avg / total 0.87 0.88 0.86 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6528
1 0.00 0.00 0.00 26
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.71 0.88 0.78 3744
1 0.76 0.52 0.61 2810
avg / total 0.73 0.72 0.71 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6013
1 0.72 0.06 0.11 541
avg / total 0.91 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6211
1 0.72 0.08 0.15 343
avg / total 0.94 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.98 6348
1 0.75 0.03 0.06 206
avg / total 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6433
1 1.00 0.02 0.03 121
avg / total 0.98 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6340
1 0.78 0.07 0.12 214
avg / total 0.96 0.97 0.96 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.96 1.00 0.98 6126
1 0.86 0.36 0.50 428
avg / total 0.95 0.95 0.95 6554
food
precision recall f1-score support
0 0.92 0.99 0.95 5788
1 0.83 0.33 0.48 766
avg / total 0.91 0.91 0.90 6554
shelter
precision recall f1-score support
0 0.92 1.00 0.96 5938
1 0.83 0.17 0.28 616
avg / total 0.91 0.92 0.89 6554
clothing
precision recall f1-score support
0 0.98 1.00 0.99 6442
1 0.75 0.11 0.19 112
avg / total 0.98 0.98 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6400
1 0.44 0.03 0.05 154
avg / total 0.96 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6470
1 0.00 0.00 0.00 84
avg / total 0.97 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.96 1.00 0.98 6306
1 0.50 0.02 0.04 248
avg / total 0.95 0.96 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6254
1 0.79 0.11 0.20 300
avg / total 0.95 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5673
1 0.62 0.03 0.06 881
avg / total 0.84 0.87 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6132
1 0.17 0.00 0.00 422
avg / total 0.89 0.94 0.90 6554
transport
precision recall f1-score support
0 0.95 1.00 0.98 6240
1 0.68 0.04 0.08 314
avg / total 0.94 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.98 6218
1 0.74 0.08 0.15 336
avg / total 0.94 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6426
1 0.70 0.05 0.10 128
avg / total 0.98 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6506
1 0.00 0.00 0.00 48
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6484
1 0.00 0.00 0.00 70
avg / total 0.98 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6532
1 0.00 0.00 0.00 22
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6475
1 0.00 0.00 0.00 79
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6261
1 0.33 0.01 0.01 293
avg / total 0.93 0.95 0.93 6554
weather_related
precision recall f1-score support
0 0.84 0.96 0.89 4657
1 0.83 0.56 0.67 1897
avg / total 0.84 0.84 0.83 6554
floods
precision recall f1-score support
0 0.94 1.00 0.97 5984
1 0.89 0.38 0.53 570
avg / total 0.94 0.94 0.93 6554
storm
precision recall f1-score support
0 0.93 0.99 0.96 5907
1 0.76 0.32 0.45 647
avg / total 0.91 0.92 0.91 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6473
1 0.67 0.02 0.05 81
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.96 0.99 0.97 5915
1 0.89 0.58 0.70 639
avg / total 0.95 0.95 0.95 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6428
1 0.50 0.04 0.07 126
avg / total 0.97 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6209
1 0.55 0.02 0.03 345
avg / total 0.93 0.95 0.92 6554
direct_report
precision recall f1-score support
0 0.86 0.98 0.91 5301
1 0.77 0.30 0.44 1253
avg / total 0.84 0.85 0.82 6554
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
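For the first idea, one option is to swap the random forest for the `KNeighborsClassifier` that is already imported above. A minimal sketch, reusing the `tokenize` function and the existing train split (the `n_neighbors` value is just an illustrative starting point, not a tuned choice):

```python
# Same pipeline shape as before, but with a k-nearest-neighbours classifier as the estimator
knn_pipeline = Pipeline([
    ('vect', CountVectorizer(tokenizer=tokenize)),
    ('tfidf', TfidfTransformer()),
    ('clf', MultiOutputClassifier(KNeighborsClassifier(n_neighbors=5)))
])
knn_pipeline.fit(X_train, y_train)
```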
###Code
# Custom transformer that counts the number of adverbs and adjectives in each message and returns the count as a new feature
class AdvAdjExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
token_list = tokenize(text)
pos_tags = nltk.pos_tag(token_list)
adj_adv = 0
for word, tag in pos_tags:
if tag in ['JJ','JJR','JJS','RB','RBR','RBS']:
adj_adv = adj_adv+1
return adj_adv
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
# Add number of adj and adv count as new feature and try again
pipeline2 = Pipeline([
('features',FeatureUnion([
('text_pipeline',Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('adv_adj', AdvAdjExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
parameters2 = {
'clf__estimator__n_estimators': [10,20]
}
cv2 = GridSearchCV(pipeline2, param_grid=parameters2,n_jobs=10)
cv2.fit(X_train, y_train)
y_pred = cv2.predict(X_test)
# Evaluate improvement result
df_pred3 = pd.DataFrame(y_pred)
df_pred3.columns=y_test.columns
for column in y_test:
print(column)
print(classification_report(y_test[column],df_pred3[column]))
###Output
related
precision recall f1-score support
0 0.67 0.32 0.43 1487
1 0.82 0.95 0.88 5018
2 0.46 0.24 0.32 49
avg / total 0.78 0.80 0.78 6554
request
precision recall f1-score support
0 0.89 0.99 0.93 5434
1 0.87 0.38 0.53 1120
avg / total 0.88 0.88 0.86 6554
offer
precision recall f1-score support
0 0.99 1.00 1.00 6518
1 0.00 0.00 0.00 36
avg / total 0.99 0.99 0.99 6554
aid_related
precision recall f1-score support
0 0.74 0.86 0.79 3819
1 0.75 0.57 0.65 2735
avg / total 0.74 0.74 0.73 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6003
1 0.68 0.08 0.15 551
avg / total 0.90 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 6218
1 0.70 0.11 0.19 336
avg / total 0.94 0.95 0.94 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6378
1 0.50 0.03 0.06 176
avg / total 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6434
1 0.33 0.02 0.03 120
avg / total 0.97 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6337
1 0.65 0.06 0.11 217
avg / total 0.96 0.97 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.95 1.00 0.97 6136
1 0.89 0.17 0.29 418
avg / total 0.94 0.95 0.93 6554
food
precision recall f1-score support
0 0.93 0.99 0.96 5832
1 0.83 0.41 0.55 722
avg / total 0.92 0.93 0.91 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 5949
1 0.83 0.22 0.35 605
avg / total 0.92 0.92 0.90 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6461
1 0.67 0.04 0.08 93
avg / total 0.98 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6398
1 0.67 0.01 0.03 156
avg / total 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6471
1 0.00 0.00 0.00 83
avg / total 0.97 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6332
1 0.57 0.02 0.03 222
avg / total 0.95 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6241
1 0.94 0.10 0.18 313
avg / total 0.96 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5684
1 0.70 0.02 0.05 870
avg / total 0.85 0.87 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6133
1 0.20 0.00 0.00 421
avg / total 0.89 0.94 0.90 6554
transport
precision recall f1-score support
0 0.95 1.00 0.98 6240
1 0.67 0.04 0.07 314
avg / total 0.94 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.97 6213
1 0.78 0.06 0.11 341
avg / total 0.94 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6435
1 0.67 0.03 0.06 119
avg / total 0.98 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6514
1 0.00 0.00 0.00 40
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6471
1 0.00 0.00 0.00 83
avg / total 0.97 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.00 0.00 0.00 30
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 1.00 6489
1 0.00 0.00 0.00 65
avg / total 0.98 0.99 0.99 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6271
1 0.00 0.00 0.00 283
avg / total 0.92 0.96 0.94 6554
weather_related
precision recall f1-score support
0 0.85 0.96 0.90 4659
1 0.86 0.59 0.70 1895
avg / total 0.86 0.86 0.85 6554
floods
precision recall f1-score support
0 0.94 0.99 0.97 5980
1 0.87 0.37 0.52 574
avg / total 0.94 0.94 0.93 6554
storm
precision recall f1-score support
0 0.93 0.99 0.96 5884
1 0.82 0.35 0.49 670
avg / total 0.92 0.93 0.91 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6483
1 1.00 0.03 0.05 71
avg / total 0.99 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 5930
1 0.91 0.75 0.82 624
avg / total 0.97 0.97 0.97 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6422
1 0.80 0.09 0.16 132
avg / total 0.98 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6195
1 0.50 0.03 0.05 359
avg / total 0.92 0.95 0.92 6554
direct_report
precision recall f1-score support
0 0.86 0.98 0.92 5281
1 0.80 0.33 0.47 1273
avg / total 0.85 0.85 0.83 6554
###Markdown
9. Export your model as a pickle file
###Code
pickle.dump(cv2, open('message_classification_model.sav', 'wb'))
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py` Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
###Code
import nltk
nltk.download(['punkt', 'wordnet','averaged_perceptron_tagger'])
import sys
import re
import pandas as pd
import sqlalchemy as db
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
import pickle
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.metrics import classification_report
from sklearn import model_selection
from sklearn.neighbors import KNeighborsClassifier
class AdvAdjExtractor(BaseEstimator, TransformerMixin):
    def starting_verb(self, text):
        # Note: despite its name, this method counts adjective and adverb POS tags in the text
token_list = tokenize(text)
pos_tags = nltk.pos_tag(token_list)
adj_adv = 0
for word, tag in pos_tags:
if tag in ['JJ','JJR','JJS','RB','RBR','RBS']:
adj_adv = adj_adv+1
return adj_adv
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
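# Illustrative sketch (hypothetical input): AdvAdjExtractor().transform(["we urgently need clean water"])
# yields a one-column DataFrame whose value is the number of adjective/adverb POS tags in the
# message (2 if 'urgently' is tagged RB and 'clean' JJ, as is typical).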
def load_data(database_filepath):
engine = db.create_engine('sqlite:///'+database_filepath)
df = pd.read_sql_table('CategorisedMessages', engine)
category_names = ['related', 'request', 'offer',
'aid_related', 'medical_help', 'medical_products', 'search_and_rescue',
'security', 'military', 'child_alone', 'water', 'food', 'shelter',
'clothing', 'money', 'missing_people', 'refugees', 'death', 'other_aid',
'infrastructure_related', 'transport', 'buildings', 'electricity',
'tools', 'hospitals', 'shops', 'aid_centers', 'other_infrastructure',
'weather_related', 'floods', 'storm', 'fire', 'earthquake', 'cold',
'other_weather', 'direct_report']
X = df['message']
Y = df[category_names]
return X, Y, category_names
def tokenize(text):
    # Replace non-alphanumeric characters with a space
text = re.sub('[^A-Za-z0-9]',' ',text)
# Tokenize the input
tokens = word_tokenize(text)
    # Initialize the lemmatizer to standardize word forms
lemmatizer = WordNetLemmatizer()
    # Iterate over each token, lemmatize it, and collect the results
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
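# For example, tokenize("We need water!") returns ['we', 'need', 'water']:
# punctuation is stripped, and tokens are lemmatized and lower-cased.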
def build_model():
# Add number of adj and adv count as new feature and try again
pipeline = Pipeline([
('features',FeatureUnion([
('text_pipeline',Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('adv_adj', AdvAdjExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
parameters = {
'clf__estimator__n_estimators': [10,20]
}
return GridSearchCV(pipeline, param_grid=parameters,n_jobs=4)
def evaluate_model(model, X_test, Y_test, category_names):
y_pred = model.predict(X_test)
# Convert to dataframe for ease of iteration
df_y_pred = pd.DataFrame(y_pred)
df_y_pred.columns=category_names
# Evaluate improvement result
for column in category_names:
print(column)
print(classification_report(Y_test[column],df_y_pred[column]))
def save_model(model, model_filepath):
pickle.dump(model, open(model_filepath, 'wb'))
def main():
if len(sys.argv) == 3:
database_filepath, model_filepath = sys.argv[1:]
print('Loading data...\n DATABASE: {}'.format(database_filepath))
X, Y, category_names = load_data(database_filepath)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
print('Building model...')
model = build_model()
print('Training model...')
model.fit(X_train, Y_train)
print('Evaluating model...')
evaluate_model(model, X_test, Y_test, category_names)
print('Saving model...\n MODEL: {}'.format(model_filepath))
save_model(model, model_filepath)
print('Trained model saved!')
else:
print('Please provide the filepath of the disaster messages database '\
'as the first argument and the filepath of the pickle file to '\
'save the model to as the second argument. \n\nExample: python '\
'train_classifier.py ../data/DisasterResponse.db classifier.pkl')
if __name__ == '__main__':
main()
###Output
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
Loading data...
DATABASE: InsertDatabaseName.db
Building model...
Training model...
Evaluating model...
related
precision recall f1-score support
0 0.68 0.32 0.43 1237
1 0.81 0.95 0.88 3970
2 0.62 0.28 0.38 36
avg / total 0.78 0.80 0.77 5243
request
precision recall f1-score support
0 0.89 0.99 0.94 4355
1 0.87 0.43 0.57 888
avg / total 0.89 0.89 0.88 5243
offer
precision recall f1-score support
0 1.00 1.00 1.00 5219
1 0.00 0.00 0.00 24
avg / total 0.99 1.00 0.99 5243
aid_related
precision recall f1-score support
0 0.74 0.87 0.80 3093
1 0.75 0.56 0.64 2150
avg / total 0.74 0.74 0.73 5243
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 4798
1 0.59 0.04 0.07 445
avg / total 0.89 0.92 0.88 5243
medical_products
precision recall f1-score support
0 0.95 1.00 0.97 4974
1 0.65 0.05 0.09 269
avg / total 0.94 0.95 0.93 5243
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 5091
1 0.75 0.08 0.14 152
avg / total 0.97 0.97 0.96 5243
security
precision recall f1-score support
0 0.98 1.00 0.99 5136
1 0.25 0.01 0.02 107
avg / total 0.96 0.98 0.97 5243
military
precision recall f1-score support
0 0.97 1.00 0.98 5058
1 0.76 0.07 0.13 185
avg / total 0.96 0.97 0.95 5243
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5243
avg / total 1.00 1.00 1.00 5243
water
precision recall f1-score support
0 0.95 1.00 0.98 4911
1 0.88 0.30 0.45 332
avg / total 0.95 0.95 0.94 5243
food
precision recall f1-score support
0 0.93 0.99 0.96 4677
1 0.86 0.43 0.57 566
avg / total 0.93 0.93 0.92 5243
shelter
precision recall f1-score support
0 0.93 0.99 0.96 4795
1 0.78 0.22 0.34 448
avg / total 0.92 0.93 0.91 5243
clothing
precision recall f1-score support
0 0.98 1.00 0.99 5159
1 0.67 0.02 0.05 84
avg / total 0.98 0.98 0.98 5243
money
precision recall f1-score support
0 0.98 1.00 0.99 5142
1 0.33 0.01 0.02 101
avg / total 0.97 0.98 0.97 5243
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 5171
1 0.67 0.03 0.05 72
avg / total 0.98 0.99 0.98 5243
refugees
precision recall f1-score support
0 0.97 1.00 0.98 5077
1 0.00 0.00 0.00 166
avg / total 0.94 0.97 0.95 5243
death
precision recall f1-score support
0 0.96 1.00 0.98 4992
1 0.85 0.16 0.27 251
avg / total 0.95 0.96 0.94 5243
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 4530
1 0.44 0.02 0.04 713
avg / total 0.81 0.86 0.81 5243
infrastructure_related
precision recall f1-score support
0 0.93 1.00 0.96 4886
1 0.00 0.00 0.00 357
avg / total 0.87 0.93 0.90 5243
transport
precision recall f1-score support
0 0.95 1.00 0.98 4994
1 0.68 0.05 0.10 249
avg / total 0.94 0.95 0.93 5243
buildings
precision recall f1-score support
0 0.96 1.00 0.98 4985
1 0.69 0.09 0.16 258
avg / total 0.94 0.95 0.94 5243
electricity
precision recall f1-score support
0 0.98 1.00 0.99 5134
1 0.58 0.06 0.12 109
avg / total 0.97 0.98 0.97 5243
tools
precision recall f1-score support
0 0.99 1.00 1.00 5215
1 0.00 0.00 0.00 28
avg / total 0.99 0.99 0.99 5243
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 5188
1 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.98 5243
shops
precision recall f1-score support
0 0.99 1.00 1.00 5212
1 0.00 0.00 0.00 31
avg / total 0.99 0.99 0.99 5243
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 5177
1 0.00 0.00 0.00 66
avg / total 0.97 0.99 0.98 5243
other_infrastructure
precision recall f1-score support
0 0.95 1.00 0.98 5007
1 0.00 0.00 0.00 236
avg / total 0.91 0.95 0.93 5243
weather_related
precision recall f1-score support
0 0.85 0.95 0.90 3779
1 0.83 0.58 0.68 1464
avg / total 0.85 0.85 0.84 5243
floods
precision recall f1-score support
0 0.94 1.00 0.97 4804
1 0.87 0.29 0.43 439
avg / total 0.93 0.94 0.92 5243
storm
precision recall f1-score support
0 0.93 0.99 0.96 4754
1 0.75 0.32 0.45 489
avg / total 0.92 0.93 0.91 5243
fire
precision recall f1-score support
0 0.99 1.00 0.99 5186
1 1.00 0.02 0.03 57
avg / total 0.99 0.99 0.98 5243
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 4732
1 0.88 0.68 0.77 511
avg / total 0.96 0.96 0.96 5243
cold
precision recall f1-score support
0 0.98 1.00 0.99 5141
1 0.64 0.07 0.12 102
avg / total 0.98 0.98 0.97 5243
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 4987
1 0.44 0.03 0.05 256
avg / total 0.93 0.95 0.93 5243
direct_report
precision recall f1-score support
0 0.85 0.98 0.91 4212
1 0.82 0.30 0.44 1031
avg / total 0.85 0.85 0.82 5243
Saving model...
MODEL: classifier.pkl
|
notebooks/book1/02/change_of_vars_demo1d.ipynb | ###Markdown
Monte Carlo approximation on Uniform distribution
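For reference, the analytic density used in the code below follows from the change-of-variables formula: with $X \sim \mathrm{Uniform}(-1,1)$ (so $p_X(x)=\tfrac{1}{2}$) and $Y = X^2$, summing over the two branches $x = \pm\sqrt{y}$ gives, for $0 < y \le 1$,
$$p_Y(y) = 2 \cdot \frac{1}{2} \cdot \left|\frac{d}{dy}\sqrt{y}\right| = \frac{1}{2\sqrt{y}},$$
which is what the code computes (a small constant is added under the square root for numerical stability). The Monte Carlo mean printed below should also be close to $\mathbb{E}[X^2] = \mathrm{Var}(X) = 1/3$.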
###Code
import jax.numpy as jnp
from jax import random
import matplotlib.pyplot as plt
import seaborn as sns
try:
from probml_utils import savefig, latexify, is_latexify_enabled
except:
%pip install git+https://github.com/probml/probml-utils.git
from probml_utils import savefig, latexify, is_latexify_enabled
latexify(width_scale_factor=1, fig_height=2)
x_samples = jnp.linspace(-1, 1, 200)
lower_limit = -1
upper_limit = 1
px_uniform = 1 / (upper_limit - lower_limit) * jnp.ones(len(x_samples))
square_fn = lambda x: x**2
y = square_fn(x_samples)
# analytic
y_pdf = 1 / (2 * jnp.sqrt(y + 1e-2))
# monte carlo
n = 1000
key = random.PRNGKey(0)
uniform_samples = random.uniform(key, shape=(n, 1), minval=lower_limit, maxval=upper_limit)
fn_samples = square_fn(uniform_samples)
print(jnp.mean(fn_samples))
if not is_latexify_enabled():
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15, 3))
else:
fig, ax = plt.subplots(nrows=1, ncols=3)
ax[0].set_title("Uniform distribution")
ax[0].plot(x_samples, px_uniform, "-")
ax[0].set_xlabel("$x$")
ax[0].set_ylabel("$p(x)$")
ax[1].set_title("Analytical p(y), $y(x)$ = $x^2$")
ax[1].plot(y, y_pdf, "-", linewidth=2)
ax[1].set_xlabel("$y$")
ax[1].set_ylabel("$p(y)$")
ax[2].set_title("Monte carlo approximation")
sns.distplot(fn_samples, kde=False, ax=ax[2], bins=20, norm_hist=True, hist_kws=dict(edgecolor="k", linewidth=1))
ax[2].set_xlabel("$y$")
ax[2].set_ylabel("$Frequency$")
sns.despine()
savefig("changeOfVars")
plt.show()
###Output
/home/rohit_khoiwal/.local/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
/home/rohit_khoiwal/.local/lib/python3.8/site-packages/probml_utils/plotting.py:79: UserWarning: set FIG_DIR environment variable to save figures
warnings.warn("set FIG_DIR environment variable to save figures")
|
notebooks/Basic_Diarization.ipynb | ###Markdown
Trying to split
###Code
import numpy as np
## Mel-filterbank
mel_window_length = 25 # In milliseconds
mel_window_step = 10 # In milliseconds
mel_n_channels = 40
## Audio
sampling_rate = 16000
# Number of spectrogram frames in a partial utterance
partials_n_frames = 40 # 400 ms
def compute_partial_slices(n_samples: int, rate, min_coverage):
"""
Computes where to split an utterance waveform and its corresponding mel spectrogram to
obtain partial utterances of <partials_n_frames> each. Both the waveform and the
mel spectrogram slices are returned, so as to make each partial utterance waveform
correspond to its spectrogram.
The returned ranges may be indexing further than the length of the waveform. It is
recommended that you pad the waveform with zeros up to wav_slices[-1].stop.
:param n_samples: the number of samples in the waveform
:param rate: how many partial utterances should occur per second. Partial utterances must
cover the span of the entire utterance, thus the rate should not be lower than the inverse
    of the duration of a partial utterance. Here partial utterances are 40 frames
    (400 ms) long, so the minimum rate is 2.5.
:param min_coverage: when reaching the last partial utterance, it may or may not have
enough frames. If at least <min_pad_coverage> of <partials_n_frames> are present,
then the last partial utterance will be considered by zero-padding the audio. Otherwise,
it will be discarded. If there aren't enough frames for one partial utterance,
this parameter is ignored so that the function always returns at least one slice.
:return: the waveform slices and mel spectrogram slices as lists of array slices. Index
respectively the waveform and the mel spectrogram with these slices to obtain the partial
utterances.
"""
assert 0 < min_coverage <= 1
# Compute how many frames separate two partial utterances
samples_per_frame = int((sampling_rate * mel_window_step / 1000))
n_frames = int(np.ceil((n_samples + 1) / samples_per_frame))
frame_step = int(np.round((sampling_rate / rate) / samples_per_frame))
assert 0 < frame_step, "The rate is too high"
assert frame_step <= partials_n_frames, "The rate is too low, it should be %f at least" % \
(sampling_rate / (samples_per_frame * partials_n_frames))
# Compute the slices
wav_slices, mel_slices = [], []
steps = max(1, n_frames - partials_n_frames + frame_step + 1)
for i in range(0, steps, frame_step):
mel_range = np.array([i, i + partials_n_frames])
wav_range = mel_range * samples_per_frame
mel_slices.append(slice(*mel_range))
wav_slices.append(slice(*wav_range))
# Evaluate whether extra padding is warranted or not
last_wav_range = wav_slices[-1]
coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start)
if coverage < min_coverage and len(mel_slices) > 1:
mel_slices = mel_slices[:-1]
wav_slices = wav_slices[:-1]
return wav_slices, mel_slices
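# Illustrative usage sketch (hypothetical values, not part of the original cell): for a
# 5-second waveform at 16 kHz with 2.5 partial utterances per second,
#   wav_slices, mel_slices = compute_partial_slices(5 * sampling_rate, rate=2.5, min_coverage=0.75)
# returns paired slices: wav_slices[i] indexes the raw waveform and mel_slices[i] the
# corresponding 40-frame (400 ms) window of the mel spectrogram. Remember to zero-pad
# the waveform up to wav_slices[-1].stop before slicing, as noted in the docstring.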
import torch
import torchaudio
wav2mel = torch.jit.load("wav2mel.pt")
dvector = torch.jit.load("dvector.pt").eval()
wav_file_name, reference = file_loader[1]
wav_tensor, sample_rate = torchaudio.load(wav_file_name)
mel_tensor = wav2mel(wav_tensor, sample_rate) # shape: (frames, mel_dim)
emb_tensor = dvector.embed_utterance(mel_tensor) # shape: (emb_dim)
print(mel_tensor.shape)
print(emb_tensor.shape)
###Output
torch.Size([10201, 40])
torch.Size([256])
###Markdown
Loading model
###Code
% cd /content
import torch
import torchaudio
# from data.wav2mel import Wav2Mel
torchaudio.set_audio_backend("sox_io")
# wav2mel = Wav2Mel()
# wav2mel = torch.jit.script(wav2mel)
# wav2mel.save("wav2mel.pt")
# wav2mel = torch.jit.load("log_melspectrogram.pt")
dvector = torch.jit.load("dvector.pt").eval()
# audio_file_name, reference = file_loader[1]
# wav_tensor, sample_rate = torchaudio.load(audio_file_name)
# mel_tensor = wav2mel(wav_tensor, sample_rate) # shape: (frames, mel_dim)
# emb_tensor = dvector.embed_utterance(mel_tensor) # shape: (emb_dim)a
# emb_tensor.shape
# wav_tensor.shape, mel_tensor.shape, emb_tensor.shape
import os
if os.path.exists("/content/Speaker-Diarization-System"):
% cd /content/Speaker-Diarization-System
! git pull
else:
% cd /content
! git clone https://gitlab.com/vaithak/Speaker-Diarization-System.git
% cd /content/Speaker-Diarization-System
!pip install -r requirements.txt
ls
# !pip install resemblyzer
# !pip install spectralcluster
from Utils import DataLoader
from Preprocessing import VAD_chunk
from Clustering import SpectralClustering
from Embedding import concat_segs, get_STFTs, align_embeddings, SpeechEmbedder
import torch
def create_labelling(labels, continuos_times, seg_length_ms = 0.4):
labelling = []
time_idx = 0
start_time = continuos_times[0][0]
end_time = start_time + seg_length_ms
for i in range(len(labels)):
if end_time >= continuos_times[time_idx][1]:
temp = [str(labels[i]), start_time, continuos_times[time_idx][1]]
labelling.append(tuple(temp))
time_idx += 1
start_time = continuos_times[time_idx][0]
end_time = start_time + seg_length_ms
elif i==len(labels)-1:
temp = [str(labels[i]), start_time, end_time]
labelling.append(tuple(temp))
elif labels[i] != labels[i+1]:
temp = [str(labels[i]), start_time, end_time]
labelling.append(tuple(temp))
start_time = end_time
end_time = start_time + seg_length_ms
else:
end_time += seg_length_ms
return labelling
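# Illustrative sketch (hypothetical inputs): with per-window speaker labels labels = [0, 0, 1]
# (one label per 0.4 s embedding window; despite its name, seg_length_ms is given in seconds)
# and a single voiced region continuos_times = [(0.0, 2.0)], create_labelling returns
# (speaker_label, start, end) tuples, e.g. [('0', 0.0, 0.8), ('1', 0.8, 1.2)].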
# from sklearn.cluster import SpectralClustering, KMeans
from Clustering import SpectralClustering
import numpy as np
def get_hypothesis(audio_file_name, embedder_net):
    # Note: embedder_net is unused below; embeddings come from the torchscript d-vector model instead
times, segs = VAD_chunk(2, audio_file_name)
concat_seg, continuos_times = concat_segs(times, segs)
STFT_frames = get_STFTs(concat_seg)
STFT_frames = np.stack(STFT_frames, axis=2)
STFT_frames = torch.tensor(np.transpose(STFT_frames, axes=(2,1,0)))
print(STFT_frames.shape)
embeddings = []
for STFT_frame in STFT_frames:
embeddings.append((dvector.embed_utterance(STFT_frame.reshape(-1,40))).detach().numpy())
print(len(embeddings), embeddings[0].shape)
# embeddings = torch.tensor(embeddings)
# embeddings = embedder_net(STFT_frames)
aligned_embeddings = align_embeddings(embeddings)
    # Get cluster ids for the embeddings
clusterer = SpectralClustering(
min_clusters=4,
max_clusters=20,
p_percentile=0.90,
gaussian_blur_sigma=1
)
# labels = clusterer.predict(aligned_embeddings)
labels = clusterer.predict(aligned_embeddings)
# Get labelling from cluster assignment
labelling = create_labelling(labels, continuos_times)
return labelling, labels, aligned_embeddings
(wav_data), reference = file_loader[2]
labelling, labels, embeddings = get_hypothesis(wav_data, 16000)
len(labelling), len(labels), len(embeddings)
from __future__ import print_function
import time
import numpy as np
import pandas as pd
# from sklearn.datasets import fetch_mldata
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(embeddings)
df = pd.DataFrame(tsne_results, columns=["one", "two"])  # use a list, not a set, so column order is fixed
df["y"] = (labels)
df.head()
np.unique(labels)
# df_subset['tsne-2d-one'] = tsne_results[:,0]
# df_subset['tsne-2d-two'] = tsne_results[:,1]
plt.figure(figsize=(16,10))
sns.scatterplot(
x="one", y="two",
hue="y",
palette=sns.color_palette("hls", len(np.unique(labels))),
data=df,
legend="full",
alpha=0.3
)
from pyannote.core import Segment, Timeline, Annotation
from pyannote.metrics.diarization import DiarizationErrorRate
import webrtcvad
import warnings
def Annotation_from_tuple_arr(labels_arr):
annotate = Annotation()
for label in labels_arr:
annotate[Segment(label[1], label[2])] = str(int(label[0]))
return annotate
hypothesis = Annotation_from_tuple_arr(labelling)
metric = DiarizationErrorRate()
abs(metric(reference, hypothesis))
print(reference)
print(hypothesis)
###Output
_____no_output_____ |
AlphabetSoupCharity_Optimization3.ipynb | ###Markdown
Preprocessing
###Code
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("Resources/charity_data.csv")
application_df.head()
# Drop the non-beneficial 'EIN' and 'STATUS' columns (NAME is kept and binned below)
application_df.drop(['EIN','STATUS'], axis=1, inplace=True)
application_df
# Determine the number of unique values in each column.
application_df.nunique()
# Look at APPLICATION_TYPE value counts for binning
typeCount = application_df['APPLICATION_TYPE'].value_counts()
typeCount
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
application_types_to_replace = list(typeCount[typeCount<50].index)
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
application_df.nunique()
# Look at NAME value counts for binning
nameCount = application_df['NAME'].value_counts()
nameCount
# You may find it helpful to look at NAME value counts >1
nameCount1 = nameCount[nameCount > 1]
nameCount1
# Choose a cutoff value and create a list of names to be replaced
name_types_to_replace = list(nameCount[nameCount < 2].index)
# Replace in dataframe
for ntr in name_types_to_replace:
application_df['NAME'] = application_df['NAME'].replace(ntr,"Other")
# Check to make sure binning was successful
application_df['NAME'].value_counts()
# Convert categorical data to numeric with `pd.get_dummies`
application_df = pd.get_dummies(application_df, dtype = float)
application_df
# Split our preprocessed data into our features and target arrays
y = application_df['IS_SUCCESSFUL'].values
X = application_df.drop(['IS_SUCCESSFUL'], axis = 1)
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 43)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
features = len(X_train_scaled[0])
layer1 = 8
layer2 = 12
layer3 = 18
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=layer1, input_dim=features, activation='relu'))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=layer2, activation='relu'))
#third hidden layer
nn.add(tf.keras.layers.Dense(units=layer3, activation='relu'))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn.fit(X_train_scaled,y_train,epochs=100)
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
training_history = pd.DataFrame(fit_model.history)
training_history.index += 1
training_history.plot(y="accuracy")
training_history.plot(y="loss")
###Output
_____no_output_____
###Markdown
Deliverable 2: Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
number_input_features = len(X_train_scaled[0])
hidden_nodes_layer1 = 180
hidden_nodes_layer2 = 90
hidden_nodes_layer3 = 60
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="tanh"))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="tanh"))
# Third hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation="tanh"))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Import checkpoint dependencies
import os
from tensorflow.keras.callbacks import ModelCheckpoint
# Define the checkpoint path and filenames
os.makedirs("checkpoints/",exist_ok=True)
checkpoint_path = "checkpoints/weights.{epoch:02d}.hdf5"
# Create a callback that saves the model's weights every 5 epochs
cp_callback = ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq="epoch",
period=5)
# Train the model
fit_model = nn.fit(X_train_scaled,y_train,epochs=100,callbacks=[cp_callback])
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn.save("AlphabetSoupCharity_Optimization3.h5")
###Output
_____no_output_____
###Markdown
Deliverable 1: Preprocessing the Data for a Neural Network¶
###Code
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,OneHotEncoder
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("Resources/charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df = application_df.drop(columns = ['EIN', 'NAME'])
# Determine the number of unique values in each column.
column_headers = list(application_df.columns.values)
print(column_headers)
application_df[column_headers].nunique()
# Look at APPLICATION_TYPE value counts for binning
APP_TYPE_COUNTS = application_df.APPLICATION_TYPE.value_counts()
print(APP_TYPE_COUNTS)
# Visualize the value counts of APPLICATION_TYPE
APP_TYPE_COUNTS.plot.density()
# Determine which values to replace (those with counts less than 500)
replace_application = list(APP_TYPE_COUNTS[APP_TYPE_COUNTS < 500].index)
# Replace in dataframe
for app in replace_application:
application_df.APPLICATION_TYPE = application_df.APPLICATION_TYPE.replace(app,"Other")
# Check to make sure binning was successful
application_df.APPLICATION_TYPE.value_counts()
# Look at CLASSIFICATION value counts for binning
CLASSIFICATION_COUNTS = application_df.CLASSIFICATION.value_counts()
print(CLASSIFICATION_COUNTS)
# Visualize the value counts of CLASSIFICATION
CLASSIFICATION_COUNTS.plot.density()
# Determine which values to replace (those with counts less than 1000)
replace_class = list(CLASSIFICATION_COUNTS[CLASSIFICATION_COUNTS < 1000].index)
# Replace in dataframe
for cls in replace_class:
application_df.CLASSIFICATION = application_df.CLASSIFICATION.replace(cls,"Other")
# Check to make sure binning was successful
application_df.CLASSIFICATION.value_counts()
# Generate our categorical variable lists
application_cat = application_df.dtypes[application_df.dtypes== "object"].index.tolist()
application_cat
application_df[application_cat].dtypes
# Create a OneHotEncoder instance
enc = OneHotEncoder(sparse=False)
# Fit and transform the OneHotEncoder using the categorical variable list
encode_df = pd.DataFrame(enc.fit_transform(application_df[application_cat]))
# Add the encoded variable names to the dataframe
encode_df.columns = enc.get_feature_names(application_cat)
encode_df.head()
# Merge one-hot encoded features and drop the originals
application_df = application_df.merge(encode_df, left_index=True, right_index=True)
application_df = application_df.drop(application_cat, 1)
application_df.head()
# Split our preprocessed data into our features and target arrays
y = application_df["IS_SUCCESSFUL"].values
X = application_df.drop(columns= ["IS_SUCCESSFUL"], axis = 1).values
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 10)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Deliverable 2: Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
number_input_features = len(X_train[0])
hidden_nodes_layer1 = 80
hidden_nodes_layer2 = 30
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(
tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="relu")
)
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="relu"))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Check the structure of the model
nn.summary()
# Import checkpoint dependencies
import os
from tensorflow.keras.callbacks import ModelCheckpoint
# Define the checkpoint path and filenames
os.makedirs("checkpoints/",exist_ok=True)
checkpoint_path = "checkpoints/weights.{epoch:02d}.hdf5"
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Create a callback that saves the model's weights every epoch
cp_callback = ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq='epoch')
# Train the model
fit_model = nn.fit(X_train_scaled,y_train,epochs=1000, callbacks=[cp_callback])
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
#Export Model to HDF5
nn.save("AlphabetSoupCharity_Optimization3.h5")
###Output
_____no_output_____
###Markdown
Preprocessing Optimization 3rd attempt In this optimization attempt, I will be removing the additional columns AFFILIATION and ORGANIZATION, since the information in those columns does not appear to be relevant to predicting whether or not applicants will be successful.
###Code
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv(r"C:\Users\tsatr\Deep_Learning_Challenge\Resources\charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df = application_df.drop(['EIN'], 1)
application_df
application_df = application_df.drop(['NAME'], 1)
application_df
application_df = application_df.drop(['AFFILIATION'], 1)
application_df
application_df = application_df.drop(['ORGANIZATION'], 1)
application_df
# Determine the number of unique values in each column.
application_df.nunique()
###Output
_____no_output_____
###Markdown
APPLICATION_TYPE, CLASSIFICATION, and ASK_AMOUNT have more than 10 unique values.
###Code
# Look at APPLICATION_TYPE value counts for binning
application_type_counts = application_df['APPLICATION_TYPE'].value_counts()
application_type_counts
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
#set cutoff to 500, create dictionary from value_counts result then initialize list for app types to replace
cutoff = 500
app_type_count_dict = dict(application_type_counts)
application_types_to_replace = []
#iterate through items in dictionary and add app types with value <cutoff to list
for key, value in app_type_count_dict.items():
if value < cutoff:
application_types_to_replace.append(key)
#create copy of df for reduced application types
red_application_type_df = application_df
# Replace in dataframe
for app in application_types_to_replace:
red_application_type_df['APPLICATION_TYPE'] = red_application_type_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
red_application_type_df['APPLICATION_TYPE'].value_counts()
# Look at CLASSIFICATION value counts for binning
classification_counts = red_application_type_df['CLASSIFICATION'].value_counts()
classification_counts
# You may find it helpful to look at CLASSIFICATION value counts >1
classification_counts.loc[classification_counts > 1]
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
#set cutoff to 1500, create dictionary from value_counts result then initialize list for classifications to replace
cutoff = 1500
classification_counts_dict = dict(classification_counts)
classifications_to_replace = []
#iterate through items in dictionary, add classifications with value <cutoff to list
for key, value in classification_counts_dict.items():
if value < cutoff:
classifications_to_replace.append(key)
#create copy of dataframe for reduced classifications
red_classifications_df = red_application_type_df
# Replace in dataframe
for cls in classifications_to_replace:
red_classifications_df['CLASSIFICATION'] = red_classifications_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
red_classifications_df['CLASSIFICATION'].value_counts()
# Convert categorical data to numeric with `pd.get_dummies`
dummies_df = pd.get_dummies(red_classifications_df)
dummies_df.head
# Split our preprocessed data into our features and target arrays
X = dummies_df.drop('IS_SUCCESSFUL', axis = 1)
y = dummies_df['IS_SUCCESSFUL']
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test, = train_test_split(X, y, random_state=42)
print('X_train:\t{}'.format(X_train.shape))
print('y_train:\t{}'.format(y_train.shape))
print('X_test:\t{}'.format(X_test.shape))
print('y_test:\t{}'.format(y_test.shape))
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
number_input_features = X_train_scaled.shape[1]
hidden_nodes_layer1 = 120
hidden_nodes_layer2 = 30
hidden_nodes_layer3 = 1
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, activation="relu", input_dim=number_input_features))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="relu"))
# Output layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation="sigmoid"))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
import os
#Define the checkpoint path and filenames
os.makedirs("checkpoints_optimized/", exist_ok=True)
checkpoint_dir = "checkpoints_optimized/weights.{epoch:02d}.hdf5"
from keras.callbacks import Callback
from tensorflow.keras.callbacks import ModelCheckpoint
# Create a callback that saves the model's weights every 5 epochs
checkpoint = ModelCheckpoint(filepath = checkpoint_dir, monitor='loss', verbose=1,
save_best_only=True, mode='auto', period=5)
# Train the model (pass the checkpoint callback so the weights are actually saved)
fit_model = nn.fit(X_train_scaled, y_train, epochs=50, callbacks=[checkpoint])
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn.save('AlphabetSoupCharity_Optimization3.h5')
###Output
_____no_output_____ |
3_multiprocessing.ipynb | ###Markdown
Stable Baselines Tutorial - Multiprocessing of environmentsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19Stable-Baselines: https://github.com/hill-a/stable-baselinesDocumentation: https://stable-baselines.readthedocs.io/en/master/RL Baselines zoo: https://github.com/araffin/rl-baselines-zoo IntroductionIn this notebook, you will learn how to use *Vectorized Environments* (aka multiprocessing) to make training faster. You will also see that this speed up comes at a cost of sample efficiency. Install Dependencies and Stable Baselines Using Pip
###Code
# Stable Baselines only supports tensorflow 1.x for now
%tensorflow_version 1.x
!apt install swig cmake libopenmpi-dev zlib1g-dev
!pip install stable-baselines[mpi]==2.10.0
###Output
_____no_output_____
###Markdown
Remove tensorflow warningsTo have a clean output, we will filter tensorflow warnings, mostly due to the migration from tf 1.x to 2.x
###Code
# Filter tensorflow version warnings
import os
# https://stackoverflow.com/questions/40426502/is-there-a-way-to-suppress-the-messages-tensorflow-prints/40426709
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'}
import warnings
# https://stackoverflow.com/questions/15777951/how-to-suppress-pandas-future-warning
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=Warning)
import tensorflow as tf
tf.get_logger().setLevel('INFO')
tf.autograph.set_verbosity(0)
import logging
tf.get_logger().setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
Vectorized Environments and Imports[Vectorized Environments](https://stable-baselines.readthedocs.io/en/master/guide/vec_envs.html) are a method for stacking multiple independent environments into a single environment. Instead of training an RL agent on 1 environment per step, it allows us to train it on n environments per step. This provides two benefits:* Agent experience can be collected more quickly* The experience will contain a more diverse range of states, it usually improves explorationStable-Baselines provides two types of Vectorized Environment:- SubprocVecEnv which run each environment in a separate process- DummyVecEnv which run all environment on the same processIn practice, DummyVecEnv is usually faster than SubprocVecEnv because of communication delays that subprocesses have.
###Code
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import gym
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv, SubprocVecEnv
from stable_baselines.common import set_global_seeds
from stable_baselines import PPO2
###Output
_____no_output_____
###Markdown
Import evaluate function
###Code
from stable_baselines.common.evaluation import evaluate_policy
###Output
_____no_output_____
###Markdown
Define an environment functionThe multiprocessing implementation requires a function that can be called inside the process to instantiate a gym env
###Code
def make_env(env_id, rank, seed=0):
"""
Utility function for multiprocessed env.
:param env_id: (str) the environment ID
:param seed: (int) the inital seed for RNG
:param rank: (int) index of the subprocess
"""
def _init():
env = gym.make(env_id)
# Important: use a different seed for each environment
env.seed(seed + rank)
return env
set_global_seeds(seed)
return _init
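# Illustrative sketch (commented out, demonstration values only): the closures returned by
# make_env are what DummyVecEnv/SubprocVecEnv call in each worker, and the resulting
# vectorized env returns batched data:
# vec_env = DummyVecEnv([make_env('CartPole-v1', i) for i in range(4)])
# obs = vec_env.reset()  # shape (4, obs_dim): one row per environment
# obs, rewards, dones, infos = vec_env.step(np.array([vec_env.action_space.sample()] * 4))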
###Output
_____no_output_____
###Markdown
Stable-Baselines also provides directly an helper to create vectorized environment:
###Code
from stable_baselines.common.cmd_util import make_vec_env
###Output
_____no_output_____
###Markdown
Define a few constants (feel free to try out other environments and algorithms)We will be using the Cartpole environment: [https://gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)![Cartpole](https://cdn-images-1.medium.com/max/1143/1*h4WTQNVIsvMXJTCpXm_TAw.gif)
###Code
env_id = 'CartPole-v1'
# The different number of processes that will be used
PROCESSES_TO_TEST = [1, 2, 4, 8, 16]
NUM_EXPERIMENTS = 3 # RL algorithms can often be unstable, so we run several experiments (see https://arxiv.org/abs/1709.06560)
TRAIN_STEPS = 5000
# Number of episodes for evaluation
EVAL_EPS = 20
ALGO = PPO2
# We will create one environment to evaluate the agent on
eval_env = gym.make(env_id)
###Output
_____no_output_____
###Markdown
Iterate through the different numbers of processesFor each processes, several experiments are run per processThis may take a couple of minutes.
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
# Here we use the "spawn" method for launching the processes, more information is available in the doc
# This is equivalent to make_vec_env(env_id, n_envs=n_procs, vec_env_cls=SubprocVecEnv, vec_env_kwargs=dict(start_method='spawn'))
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='spawn')
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
# Important: when using subprocess, don't forget to close them
# otherwise, you may have memory issues when running a lot of experiments
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
_____no_output_____
###Markdown
Plot the results
###Code
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure(figsize=(9, 4))
plt.subplots_adjust(wspace=0.5)
plt.subplot(1, 2, 1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1, 2, 2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)), PROCESSES_TO_TEST)
plt.xlabel('Processes')
_ = plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
Sample efficiency vs wall clock time trade-off There is clearly a trade-off between sample efficiency, diverse experience and wall clock time. Let's try getting the best performance in a fixed amount of time, say 10 seconds per experiment
###Code
SECONDS_PER_EXPERIMENT = 10
steps_per_experiment = [int(SECONDS_PER_EXPERIMENT * fps) for fps in training_steps_per_second]
reward_averages = []
reward_std = []
training_times = []
for n_procs, train_steps in zip(PROCESSES_TO_TEST, steps_per_experiment):
total_procs += n_procs
print('Running for n_procs = {} for steps = {}'.format(n_procs, train_steps))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='spawn')
# Alternatively, you can use a DummyVecEnv if the communication delays is the bottleneck
# train_env = DummyVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)])
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=train_steps)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
_____no_output_____
###Markdown
Plot the results
###Code
training_steps_per_second = [s / t for s,t in zip(steps_per_experiment, training_times)]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2, c='k', marker='o')
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
DummyVecEnv vs SubprocVecEnv
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
# Here we are using only one process even for n_env > 1
# this is equivalent to DummyVecEnv([make_env(env_id, i + total_procs) for i in range(n_procs)])
train_env = make_vec_env(env_id, n_envs=n_procs)
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
Stable Baselines3 Tutorial - Multiprocessing of environmentsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19/tree/sb3/Stable-Baselines3: https://github.com/DLR-RM/stable-baselines3Documentation: https://stable-baselines3.readthedocs.io/en/master/RL Baselines3 zoo: https://github.com/DLR-RM/rl-baselines3-zoo IntroductionIn this notebook, you will learn how to use *Vectorized Environments* (aka multiprocessing) to make training faster. You will also see that this speed up comes at a cost of sample efficiency. Install Dependencies and Stable Baselines3 Using Pip
###Code
!apt install swig
!pip install stable-baselines3[extra]
###Output
_____no_output_____
###Markdown
Vectorized Environments and Imports[Vectorized Environments](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html) are a method for stacking multiple independent environments into a single environment. Instead of training an RL agent on 1 environment per step, it allows us to train it on n environments per step. This provides two benefits:* Agent experience can be collected more quickly* The experience will contain a more diverse range of states, it usually improves explorationStable-Baselines provides two types of Vectorized Environment:- SubprocVecEnv which run each environment in a separate process- DummyVecEnv which run all environment on the same processIn practice, DummyVecEnv is usually faster than SubprocVecEnv because of communication delays that subprocesses have.
###Code
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import gym
from stable_baselines3.common.vec_env import DummyVecEnv, SubprocVecEnv
from stable_baselines3.common.utils import set_random_seed
from stable_baselines3 import PPO, A2C
###Output
_____no_output_____
###Markdown
Import evaluate function
###Code
from stable_baselines3.common.evaluation import evaluate_policy
###Output
_____no_output_____
###Markdown
Define an environment functionThe multiprocessing implementation requires a function that can be called inside the process to instantiate a gym env
###Code
def make_env(env_id, rank, seed=0):
"""
Utility function for multiprocessed env.
:param env_id: (str) the environment ID
:param seed: (int) the inital seed for RNG
:param rank: (int) index of the subprocess
"""
def _init():
env = gym.make(env_id)
# Important: use a different seed for each environment
env.seed(seed + rank)
return env
set_random_seed(seed)
return _init
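# Illustrative sketch (commented out, demonstration values only): stepping a vectorized env
# takes one action per worker and returns batched results:
# vec_env = SubprocVecEnv([make_env('CartPole-v1', i) for i in range(4)], start_method='fork')
# obs = vec_env.reset()  # batched observations, shape (4, obs_dim)
# obs, rewards, dones, infos = vec_env.step(np.array([0, 1, 0, 1]))  # one action per env
# vec_env.close()  # terminate the worker processes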
###Output
_____no_output_____
###Markdown
Stable-Baselines also provides directly an helper to create vectorized environment:
###Code
from stable_baselines3.common.cmd_util import make_vec_env
###Output
_____no_output_____
###Markdown
Define a few constants (feel free to try out other environments and algorithms)We will be using the Cartpole environment: [https://gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)![Cartpole](https://cdn-images-1.medium.com/max/1143/1*h4WTQNVIsvMXJTCpXm_TAw.gif)
###Code
env_id = 'CartPole-v1'
# The different number of processes that will be used
PROCESSES_TO_TEST = [1, 2, 4, 8, 16]
NUM_EXPERIMENTS = 3 # RL algorithms can often be unstable, so we run several experiments (see https://arxiv.org/abs/1709.06560)
TRAIN_STEPS = 5000
# Number of episodes for evaluation
EVAL_EPS = 20
ALGO = A2C
# We will create one environment to evaluate the agent on
eval_env = gym.make(env_id)
###Output
_____no_output_____
###Markdown
Iterate through the different numbers of processesFor each processes, several experiments are run per processThis may take a couple of minutes.
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
# Here we use the "fork" method for launching the processes, more information is available in the doc
# This is equivalent to make_vec_env(env_id, n_envs=n_procs, vec_env_cls=SubprocVecEnv, vec_env_kwargs=dict(start_method='fork'))
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='fork')
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
# Important: when using subprocess, don't forget to close them
# otherwise, you may have memory issues when running a lot of experiments
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
Running for n_procs = 1
Running for n_procs = 2
Running for n_procs = 4
Running for n_procs = 8
Running for n_procs = 16
###Markdown
Plot the results
###Code
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure(figsize=(9, 4))
plt.subplots_adjust(wspace=0.5)
plt.subplot(1, 2, 1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1, 2, 2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)), PROCESSES_TO_TEST)
plt.xlabel('Processes')
_ = plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
Sample efficiency vs wall clock time trade-off There is clearly a trade-off between sample efficiency, diverse experience and wall clock time. Let's try getting the best performance in a fixed amount of time, say 10 seconds per experiment
###Code
SECONDS_PER_EXPERIMENT = 10
steps_per_experiment = [int(SECONDS_PER_EXPERIMENT * fps) for fps in training_steps_per_second]
reward_averages = []
reward_std = []
training_times = []
for n_procs, train_steps in zip(PROCESSES_TO_TEST, steps_per_experiment):
total_procs += n_procs
print('Running for n_procs = {} for steps = {}'.format(n_procs, train_steps))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='spawn')
# Alternatively, you can use a DummyVecEnv if the communication delays is the bottleneck
# train_env = DummyVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)])
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=train_steps)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
Running for n_procs = 1 for steps = 8285
Running for n_procs = 2 for steps = 10953
Running for n_procs = 4 for steps = 16746
Running for n_procs = 8 for steps = 28450
Running for n_procs = 16 for steps = 36560
###Markdown
Plot the results
###Code
training_steps_per_second = [s / t for s,t in zip(steps_per_experiment, training_times)]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2, c='k', marker='o')
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
DummyVecEnv vs SubprocVecEnv
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
# Here we are using only one process even for n_env > 1
# this is equivalent to DummyVecEnv([make_env(env_id, i + total_procs) for i in range(n_procs)])
train_env = make_vec_env(env_id, n_envs=n_procs)
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
Stable Baselines3 Tutorial - Multiprocessing of environmentsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19/tree/sb3/Stable-Baselines3: https://github.com/DLR-RM/stable-baselines3Documentation: https://stable-baselines3.readthedocs.io/en/master/RL Baselines3 zoo: https://github.com/DLR-RM/rl-baselines3-zoo IntroductionIn this notebook, you will learn how to use *Vectorized Environments* (aka multiprocessing) to make training faster. You will also see that this speed up comes at a cost of sample efficiency. Install Dependencies and Stable Baselines3 Using Pip
###Code
!apt install swig
!pip install stable-baselines3[extra]
###Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
swig3.0
Suggested packages:
swig-doc swig-examples swig3.0-examples swig3.0-doc
The following NEW packages will be installed:
swig swig3.0
0 upgraded, 2 newly installed, 0 to remove and 13 not upgraded.
Need to get 1,100 kB of archives.
After this operation, 5,822 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 swig3.0 amd64 3.0.12-1 [1,094 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 swig amd64 3.0.12-1 [6,460 B]
Fetched 1,100 kB in 1s (1,456 kB/s)
Selecting previously unselected package swig3.0.
(Reading database ... 146374 files and directories currently installed.)
Preparing to unpack .../swig3.0_3.0.12-1_amd64.deb ...
Unpacking swig3.0 (3.0.12-1) ...
Selecting previously unselected package swig.
Preparing to unpack .../swig_3.0.12-1_amd64.deb ...
Unpacking swig (3.0.12-1) ...
Setting up swig3.0 (3.0.12-1) ...
Setting up swig (3.0.12-1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Collecting stable-baselines3[extra]
  Downloading https://files.pythonhosted.org/packages/76/7c/ec89fd9a51c2ff640f150479069be817136c02f02349b5dd27a6e3bb8b3d/stable_baselines3-0.10.0-py3-none-any.whl (145kB)
     |████████████████████████████████| 153kB 15.1MB/s
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (3.2.2)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (1.3.0)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (0.17.3)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (1.7.0+cu101)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (1.19.5)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (1.1.5)
Requirement already satisfied: opencv-python; extra == "extra" in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (4.1.2.30)
Requirement already satisfied: pillow; extra == "extra" in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (7.0.0)
Requirement already satisfied: atari-py~=0.2.0; extra == "extra" in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (0.2.6)
Requirement already satisfied: psutil; extra == "extra" in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (5.4.8)
Requirement already satisfied: tensorboard; extra == "extra" in /usr/local/lib/python3.6/dist-packages (from stable-baselines3[extra]) (2.4.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->stable-baselines3[extra]) (2.8.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->stable-baselines3[extra]) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->stable-baselines3[extra]) (1.3.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->stable-baselines3[extra]) (2.4.7)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym>=0.17->stable-baselines3[extra]) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.17->stable-baselines3[extra]) (1.5.0)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from torch>=1.4.0->stable-baselines3[extra]) (0.8)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch>=1.4.0->stable-baselines3[extra]) (3.7.4.3)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch>=1.4.0->stable-baselines3[extra]) (0.16.0)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->stable-baselines3[extra]) (2018.9)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from atari-py~=0.2.0; extra == "extra"->stable-baselines3[extra]) (1.15.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (0.4.2)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (2.23.0)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (3.12.4)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (1.32.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (1.0.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (1.7.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (3.3.3)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (51.3.3)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (1.17.2)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (0.10.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]) (0.36.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]) (1.3.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard; extra == "extra"->stable-baselines3[extra]) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard; extra == "extra"->stable-baselines3[extra]) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard; extra == "extra"->stable-baselines3[extra]) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard; extra == "extra"->stable-baselines3[extra]) (2020.12.5)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]) (3.3.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]) (4.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]) (4.2.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]) (0.2.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]) (3.4.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= "3"->google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]) (0.4.8)
Installing collected packages: stable-baselines3
Successfully installed stable-baselines3-0.10.0
###Markdown
Vectorized Environments and Imports[Vectorized Environments](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html) are a method for stacking multiple independent environments into a single environment. Instead of training an RL agent on 1 environment per step, it allows us to train it on n environments per step. This provides two benefits:* Agent experience can be collected more quickly* The experience will contain a more diverse range of states, which usually improves explorationStable-Baselines provides two types of Vectorized Environment:- SubprocVecEnv, which runs each environment in a separate process- DummyVecEnv, which runs all environments in the same processIn practice, DummyVecEnv is usually faster than SubprocVecEnv because of the communication overhead that subprocesses introduce.
###Code
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import gym
from stable_baselines3.common.vec_env import DummyVecEnv, SubprocVecEnv
from stable_baselines3.common.utils import set_random_seed
from stable_baselines3 import PPO, A2C
###Output
_____no_output_____
###Markdown
Import evaluate function
###Code
from stable_baselines3.common.evaluation import evaluate_policy
###Output
_____no_output_____
###Markdown
Define an environment functionThe multiprocessing implementation requires a function that can be called inside the process to instantiate a gym env
###Code
def make_env(env_id, rank, seed=0):
"""
Utility function for multiprocessed env.
:param env_id: (str) the environment ID
:param seed: (int) the initial seed for RNG
:param rank: (int) index of the subprocess
"""
def _init():
env = gym.make(env_id)
# Important: use a different seed for each environment
env.seed(seed + rank)
return env
set_random_seed(seed)
return _init
###Output
_____no_output_____
###Markdown
Stable-Baselines also directly provides a helper to create vectorized environments:
###Code
from stable_baselines3.common.cmd_util import make_vec_env
###Output
/usr/local/lib/python3.6/dist-packages/stable_baselines3/common/cmd_util.py:6: FutureWarning: Module ``common.cmd_util`` has been renamed to ``common.env_util`` and will be removed in the future.
"Module ``common.cmd_util`` has been renamed to ``common.env_util`` and will be removed in the future.", FutureWarning
###Markdown
Define a few constants (feel free to try out other environments and algorithms)We will be using the Cartpole environment: [https://gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)![Cartpole](https://cdn-images-1.medium.com/max/1143/1*h4WTQNVIsvMXJTCpXm_TAw.gif)
###Code
env_id = 'CartPole-v1'
# The different number of processes that will be used
PROCESSES_TO_TEST = [1, 2, 4, 8, 16]
NUM_EXPERIMENTS = 3 # RL algorithms can often be unstable, so we run several experiments (see https://arxiv.org/abs/1709.06560)
TRAIN_STEPS = 5000
# Number of episodes for evaluation
EVAL_EPS = 20
ALGO = A2C
# We will create one environment to evaluate the agent on
eval_env = gym.make(env_id)
###Output
_____no_output_____
###Markdown
Iterate through the different numbers of processesFor each number of processes, several experiments are runThis may take a couple of minutes.
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
# Here we use the "fork" method for launching the processes, more information is available in the doc
# This is equivalent to make_vec_env(env_id, n_envs=n_procs, vec_env_cls=SubprocVecEnv, vec_env_kwargs=dict(start_method='fork'))
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='fork')
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
# Important: when using subprocess, don't forget to close them
# otherwise, you may have memory issues when running a lot of experiments
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
Running for n_procs = 1
Running for n_procs = 2
Running for n_procs = 4
Running for n_procs = 8
Running for n_procs = 16
###Markdown
Plot the results
###Code
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure(figsize=(9, 4))
plt.subplots_adjust(wspace=0.5)
plt.subplot(1, 2, 1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1, 2, 2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)), PROCESSES_TO_TEST)
plt.xlabel('Processes')
_ = plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
Sample efficiency vs wall clock time trade-offThere is clearly a trade-off between sample efficiency, diverse experience and wall clock time. Let's try to get the best performance in a fixed amount of time, say 10 seconds per experiment.
###Code
SECONDS_PER_EXPERIMENT = 10
steps_per_experiment = [int(SECONDS_PER_EXPERIMENT * fps) for fps in training_steps_per_second]
reward_averages = []
reward_std = []
training_times = []
for n_procs, train_steps in zip(PROCESSES_TO_TEST, steps_per_experiment):
total_procs += n_procs
print('Running for n_procs = {} for steps = {}'.format(n_procs, train_steps))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='spawn')
# Alternatively, you can use a DummyVecEnv if the communication delays are the bottleneck
# train_env = DummyVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)])
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=train_steps)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
Running for n_procs = 1 for steps = 4773
Running for n_procs = 2 for steps = 8015
Running for n_procs = 4 for steps = 14661
Running for n_procs = 8 for steps = 25520
Running for n_procs = 16 for steps = 40853
###Markdown
Plot the results
###Code
training_steps_per_second = [s / t for s,t in zip(steps_per_experiment, training_times)]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2, c='k', marker='o')
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
DummyVecEnv vs SubprocVecEnv
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
# Here we are using only one process even for n_env > 1
# this is equivalent to DummyVecEnv([make_env(env_id, i + total_procs) for i in range(n_procs)])
train_env = make_vec_env(env_id, n_envs=n_procs)
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
Stable Baselines3 Tutorial - Multiprocessing of environmentsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19/tree/sb3/Stable-Baselines3: https://github.com/DLR-RM/stable-baselines3Documentation: https://stable-baselines3.readthedocs.io/en/master/RL Baselines3 zoo: https://github.com/DLR-RM/rl-baselines3-zoo IntroductionIn this notebook, you will learn how to use *Vectorized Environments* (aka multiprocessing) to make training faster. You will also see that this speed up comes at a cost of sample efficiency. Install Dependencies and Stable Baselines3 Using Pip
###Code
!apt install swig
!pip install stable-baselines3[extra]
###Output
_____no_output_____
###Markdown
Vectorized Environments and Imports[Vectorized Environments](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html) are a method for stacking multiple independent environments into a single environment. Instead of training an RL agent on 1 environment per step, it allows us to train it on n environments per step. This provides two benefits:* Agent experience can be collected more quickly* The experience will contain a more diverse range of states, which usually improves explorationStable-Baselines provides two types of Vectorized Environment:- SubprocVecEnv, which runs each environment in a separate process- DummyVecEnv, which runs all environments in the same processIn practice, DummyVecEnv is usually faster than SubprocVecEnv because of the communication overhead that subprocesses introduce.
###Code
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import gym
from stable_baselines3.common.vec_env import DummyVecEnv, SubprocVecEnv
from stable_baselines3.common.utils import set_random_seed
from stable_baselines3 import PPO, A2C
###Output
_____no_output_____
###Markdown
Import evaluate function
###Code
from stable_baselines3.common.evaluation import evaluate_policy
###Output
_____no_output_____
###Markdown
Define an environment functionThe multiprocessing implementation requires a function that can be called inside the process to instantiate a gym env
###Code
def make_env(env_id, rank, seed=0):
"""
Utility function for multiprocessed env.
:param env_id: (str) the environment ID
:param seed: (int) the initial seed for RNG
:param rank: (int) index of the subprocess
"""
def _init():
env = gym.make(env_id)
# Important: use a different seed for each environment
env.seed(seed + rank)
return env
set_random_seed(seed)
return _init
###Output
_____no_output_____
###Markdown
Stable-Baselines also directly provides a helper to create vectorized environments:
###Code
from stable_baselines3.common.env_util import make_vec_env
###Output
_____no_output_____
###Markdown
Define a few constants (feel free to try out other environments and algorithms)We will be using the Cartpole environment: [https://gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)![Cartpole](https://cdn-images-1.medium.com/max/1143/1*h4WTQNVIsvMXJTCpXm_TAw.gif)
###Code
env_id = 'CartPole-v1'
# The different number of processes that will be used
PROCESSES_TO_TEST = [1, 2, 4, 8, 16]
NUM_EXPERIMENTS = 3 # RL algorithms can often be unstable, so we run several experiments (see https://arxiv.org/abs/1709.06560)
TRAIN_STEPS = 5000
# Number of episodes for evaluation
EVAL_EPS = 20
ALGO = A2C
# We will create one environment to evaluate the agent on
eval_env = gym.make(env_id)
###Output
_____no_output_____
###Markdown
Iterate through the different numbers of processesFor each number of processes, several experiments are runThis may take a couple of minutes.
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
# Here we use the "fork" method for launching the processes, more information is available in the doc
# This is equivalent to make_vec_env(env_id, n_envs=n_procs, vec_env_cls=SubprocVecEnv, vec_env_kwargs=dict(start_method='fork'))
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='fork')
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
# Important: when using subprocess, don't forget to close them
# otherwise, you may have memory issues when running a lot of experiments
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
Running for n_procs = 1
Running for n_procs = 2
Running for n_procs = 4
Running for n_procs = 8
Running for n_procs = 16
###Markdown
Plot the results
###Code
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure(figsize=(9, 4))
plt.subplots_adjust(wspace=0.5)
plt.subplot(1, 2, 1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1, 2, 2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)), PROCESSES_TO_TEST)
plt.xlabel('Processes')
_ = plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
Sample efficiency vs wall clock time trade-offThere is clearly a trade-off between sample efficiency, diverse experience and wall clock time. Let's try to get the best performance in a fixed amount of time, say 10 seconds per experiment.
###Code
SECONDS_PER_EXPERIMENT = 10
steps_per_experiment = [int(SECONDS_PER_EXPERIMENT * fps) for fps in training_steps_per_second]
reward_averages = []
reward_std = []
training_times = []
for n_procs, train_steps in zip(PROCESSES_TO_TEST, steps_per_experiment):
total_procs += n_procs
print('Running for n_procs = {} for steps = {}'.format(n_procs, train_steps))
if n_procs == 1:
# if there is only one process, there is no need to use multiprocessing
train_env = DummyVecEnv([lambda: gym.make(env_id)])
else:
train_env = SubprocVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)], start_method='spawn')
# Alternatively, you can use a DummyVecEnv if the communication delays are the bottleneck
# train_env = DummyVecEnv([make_env(env_id, i+total_procs) for i in range(n_procs)])
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=train_steps)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
###Output
Running for n_procs = 1 for steps = 8285
Running for n_procs = 2 for steps = 10953
Running for n_procs = 4 for steps = 16746
Running for n_procs = 8 for steps = 28450
Running for n_procs = 16 for steps = 36560
###Markdown
Plot the results
###Code
training_steps_per_second = [s / t for s,t in zip(steps_per_experiment, training_times)]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2, c='k', marker='o')
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____
###Markdown
DummyVecEnv vs SubprocVecEnv
###Code
reward_averages = []
reward_std = []
training_times = []
total_procs = 0
for n_procs in PROCESSES_TO_TEST:
total_procs += n_procs
print('Running for n_procs = {}'.format(n_procs))
# Here we are using only one process even for n_env > 1
# this is equivalent to DummyVecEnv([make_env(env_id, i + total_procs) for i in range(n_procs)])
train_env = make_vec_env(env_id, n_envs=n_procs)
rewards = []
times = []
for experiment in range(NUM_EXPERIMENTS):
# it is recommended to run several experiments due to variability in results
train_env.reset()
model = ALGO('MlpPolicy', train_env, verbose=0)
start = time.time()
model.learn(total_timesteps=TRAIN_STEPS)
times.append(time.time() - start)
mean_reward, _ = evaluate_policy(model, eval_env, n_eval_episodes=EVAL_EPS)
rewards.append(mean_reward)
train_env.close()
reward_averages.append(np.mean(rewards))
reward_std.append(np.std(rewards))
training_times.append(np.mean(times))
training_steps_per_second = [TRAIN_STEPS / t for t in training_times]
plt.figure()
plt.subplot(1,2,1)
plt.errorbar(PROCESSES_TO_TEST, reward_averages, yerr=reward_std, capsize=2)
plt.xlabel('Processes')
plt.ylabel('Average return')
plt.subplot(1,2,2)
plt.bar(range(len(PROCESSES_TO_TEST)), training_steps_per_second)
plt.xticks(range(len(PROCESSES_TO_TEST)),PROCESSES_TO_TEST)
plt.xlabel('Processes')
plt.ylabel('Training steps per second')
###Output
_____no_output_____ |
pyspark-ml-taxis/notebooks/04 - NYC Taxi Trips - Part 3 - Analyze - Skeleton.ipynb | ###Markdown
Part 3 - Simple AnalysisIn this step, we perform the first simple analysis of the taxi trip data in order to get a better understanding.
###Code
dwh_basedir = "/user/hadoop/nyc-dwh"
structured_basedir = dwh_basedir + "/structured"
refined_basedir = dwh_basedir + "/refined"
###Output
_____no_output_____
###Markdown
0. Setup Environment 0.1 Spark Session
###Code
from pyspark.sql import SparkSession
import pyspark.sql.functions as f
if not 'spark' in locals():
spark = SparkSession.builder \
.master("local[*]") \
.config("spark.driver.memory","64G") \
.getOrCreate()
spark
###Output
_____no_output_____
###Markdown
0.2 Matplotlib
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
###Output
_____no_output_____
###Markdown
0.3 Geopandas and friends
###Code
import pandas as pd
import geopandas as gpd
import contextily as ctx
from shapely.geometry import Point
# Helper function to fetch background map tiles
def add_basemap(ax, zoom, url='http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'):
xmin, xmax, ymin, ymax = ax.axis()
basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, url=url)
ax.imshow(basemap, extent=extent, interpolation='bilinear')
# restore original x/y limits
ax.axis((xmin, xmax, ymin, ymax))
###Output
_____no_output_____
###Markdown
1. Read Taxi DataNow we can read in the taxi data from the refined zone.
###Code
taxi_trips = spark.read.parquet(refined_basedir + "/taxi-trip")
taxi_trips.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
Just to be sure, let us inspect the schema. It should match exactly the specified one.
###Code
taxi_trips.printSchema()
###Output
_____no_output_____
###Markdown
1.3 Create SampleFor some actions, we only need a subset of the whole data (some visualizations won't even work with the full data), therefore we create a subsample that is small enough to be handled efficiently by Python. We'd like to have around 100,000 records in the sample.Spark offers the required functionality to create a *random* sample of the data, but we need to specify a fraction instead of an absolute number. Therefore we first count the number of records and then derive an appropriate fraction.
###Code
taxi_trips.count()
###Output
_____no_output_____
###Markdown
In order to get around 100,000 records, we use a fraction of 0.001. This will give us 170,000 records, which is good enough for us.
###Code
taxi_trips_sample = # YOUR CODE HERE
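# One possible solution (a hedged sketch, not the official answer): Spark's
# DataFrame.sample draws a random subset by fraction; ~0.1% of the full data
# gives roughly the desired size. The seed is an arbitrary choice for reproducibility.
# taxi_trips_sample = taxi_trips.sample(withReplacement=False, fraction=0.001, seed=42)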
taxi_trips_sample.count()
###Output
_____no_output_____
###Markdown
2. Simple Geo VisualisationIn order to get an understanding of the data, let us first make a geo visualization using some Python functionality. We'd eventually want to draw the pickup locations on top of a map, so that we understand the whole area that is served by the taxi cabs. We might want to use this information later when it comes to the ML part. 2.1 Estimate ExtentAs a first step, let us estimate a realistic extent of the pickup locations. There are some broken records in the data set, which would render the visualization meaningless, therefore we try to estimate extents such that 95% of all points lie within that border.
###Code
quantile = taxi_trips_sample \
.filter((taxi_trips_sample["pickup_longitude"] > -75) & (taxi_trips_sample["pickup_longitude"] < -65)) \
.filter((taxi_trips_sample["pickup_latitude"] > 35) & (taxi_trips_sample["pickup_latitude"] < 45)) \
.stat.approxQuantile(["pickup_longitude", "pickup_latitude"], [0.025,0.975], 0.01)
min_pickup_longitude = quantile[0][0]
max_pickup_longitude = quantile[0][1]
min_pickup_latitude = quantile[1][0]
max_pickup_latitude = quantile[1][1]
print("min_pickup_longitude=" + str(min_pickup_longitude))
print("max_pickup_longitude=" + str(max_pickup_longitude))
print("min_pickup_latitude=" + str(min_pickup_latitude))
print("max_pickup_latitude=" + str(max_pickup_latitude))
###Output
_____no_output_____
###Markdown
2.2 Visualize pickup locationNow by using some appropriate Python libraries, we can visualize the pickup locations nicely on a map. Since the data contains some bogus coordinates and some (maybe correct) outliers, we limit the area to the extents that we estimated above. This means that we throw away all records that lie outside of the core area (only for this visualization, of course!)
###Code
df = taxi_trips_sample.select("pickup_longitude","pickup_latitude") \
.filter((taxi_trips_sample["pickup_longitude"] >= min_pickup_longitude) & (taxi_trips_sample["pickup_longitude"] <= max_pickup_longitude)) \
.filter((taxi_trips_sample["pickup_latitude"] >= min_pickup_latitude) & (taxi_trips_sample["pickup_latitude"] <= max_pickup_latitude)) \
.toPandas()
# Convert DataFrame to GeoDataFrame
coords = pd.Series(zip(df["pickup_longitude"], df["pickup_latitude"]))
geo_df = gpd.GeoDataFrame(df, crs = {'init': 'epsg:4326'}, geometry = coords.apply(Point)).to_crs(epsg=3857)
# ... and make the plot
ax = geo_df.plot(figsize=(15, 10), alpha=0.1)
# Add basemap below
add_basemap(ax, 12)
###Output
_____no_output_____
###Markdown
Retrieve the geo extents for later reuse in more visualizations.
###Code
geo_min_x, geo_max_x = ax.get_xlim()
geo_min_y, geo_max_y = ax.get_ylim()
print("geo_min_x=" + str(geo_min_x))
print("geo_max_x=" + str(geo_max_x))
print("geo_min_y=" + str(geo_min_y))
print("geo_max_y=" + str(geo_max_y))
###Output
_____no_output_____
###Markdown
3. Simple QuestionsUsing the taxi trips table, we can already answer some simple questions. 3.1 Average Fare per Mile
###Code
# Perform aggregation
df = # YOUR CODE HERE
result = # YOUR CODE HERE
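# A possible sketch, mirroring the "fare per minute" cell in 3.2 below. It assumes the
# distance column is named "trip_distance" (check the schema printed above); trips with
# trip_distance == 0 may be worth filtering out first to avoid divisions by zero.
# df = taxi_trips.withColumn("fare_per_mile", taxi_trips["total_amount"]/taxi_trips["trip_distance"])
# result = df.select(
#     f.avg(df["fare_per_mile"]).alias("avg_fare_per_mile"),
#     f.stddev(df["fare_per_mile"]).alias("stddev_fare_per_mile")
# )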
result.toPandas()
###Output
_____no_output_____
###Markdown
3.2 Average fare per minute
###Code
# Perform aggregation
df = taxi_trips.withColumn("fare_per_minute", taxi_trips["total_amount"]/taxi_trips["trip_time_in_secs"]*60)
result = df.select(
f.avg(df["fare_per_minute"]).alias("avg_fare_per_minute"),
f.stddev(df["fare_per_minute"]).alias("stddev_fare_per_minute")
)
result.toPandas()
###Output
_____no_output_____
###Markdown
4. Make some PicturesJust to get a rough feeling about the data, we make some pictures of the taxi trip data. 4.1 Average trips per day of weekLet us see if the average number of trips is the same for every weekday.
###Code
# Step 1: Calculate the number of trips for every day
trips_per_day = # YOUR CODE HERE
# Step 2: Calculate average number of trips per day of week
result = # YOUR CODE HERE
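# One possible two-step sketch (aliases chosen to match the plot below;
# f.dayofweek requires Spark >= 2.3):
# trips_per_day = taxi_trips \
#     .withColumn("pickup_date", f.to_date("pickup_datetime")) \
#     .withColumn("pickup_dayofweek", f.dayofweek("pickup_datetime")) \
#     .groupBy("pickup_date", "pickup_dayofweek") \
#     .count()
# result = trips_per_day \
#     .groupBy("pickup_dayofweek") \
#     .agg(f.avg("count").alias("avg_count")) \
#     .orderBy("pickup_dayofweek")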
pdf = result.toPandas()
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.bar(pdf["pickup_dayofweek"], pdf["avg_count"], align='center', alpha=0.5)
plt.ylabel('Frequency')
plt.title('Day of week')
###Output
_____no_output_____
###Markdown
4.2 Make a Plot of Fare per DayThe next picture contains the total fare amount (including tip and other expenses) for every day in 2013.
###Code
# Calculate two aggregations
# 1. Total fare amount per day
# 2. Total number of trips per day
daily = # YOUR CODE HERE
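# A possible sketch computing both aggregations in one pass (aliases match the plot below):
# daily = taxi_trips \
#     .withColumn("date", f.to_date("pickup_datetime")) \
#     .groupBy("date").agg(
#         f.sum("total_amount").alias("amount"),
#         f.count("total_amount").alias("count")
#     ) \
#     .orderBy("date")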
# Convert to Pandas
pdf = daily.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["amount"], color="red")
plt.plot(pdf["date"],pdf["count"], color="green")
plt.legend(handles=[
mpatches.Patch(color='red', label='Total Fare Amount per Day'),
mpatches.Patch(color='green', label='Total Number of Trips per Day')
])
###Output
_____no_output_____
###Markdown
4.3 Make a plot of average trips for each hourLet us plot the average number of trips and amount of income per hour. This will be done using a two step aggregation:1. Calculate the total number of trips and total fare for every hour in the whole year2. Calculate the average numbers per hour from this data
###Code
# Step 1: Calculate totals for every hour of the year
hourly_trips = taxi_trips \
.withColumn("date", f.to_date('pickup_datetime')) \
.withColumn("hour", f.hour('pickup_datetime')) \
.groupBy("hour", "date").agg(
f.sum("total_amount").alias("total_amount"),
f.count("total_amount").alias("total_count")
)
# Step 2: Calculate average values per hour of day
hourly_avg = hourly_trips \
.groupBy("hour").agg(
f.avg("total_amount").alias("avg_amount"),
f.avg("total_count").alias("avg_count")
)\
.orderBy("hour")
# Convert to Pandas
pdf = hourly_avg.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["hour"],pdf["avg_amount"], color="red")
plt.plot(pdf["hour"],pdf["avg_count"], color="green")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Total Fare per Hour'),
mpatches.Patch(color='green', label='Average Trip Count per Hour')
])
###Output
_____no_output_____
###Markdown
4.4 Passenger CountsAnother simple question is a histogram of the passenger count of all trips. Note that again the data contains some bogus data, therefore we limit the analysis to records with a passenger count less than 20.
###Code
result = taxi_trips \
.filter(f.col("passenger_count") < 20) \
.groupBy("passenger_count") \
.count()
pdf = result.toPandas()
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.bar(pdf["passenger_count"], pdf["count"], align='center', alpha=0.5)
plt.ylabel('Frequency')
plt.title('Number of Passengers')
###Output
_____no_output_____
###Markdown
4.5 Average income by hour by driverThe next plot is a slight variation of the previous one, focusing on the individual driver. The question is, how much money does a driver make on average for a specific hour of the day. Again, this requires a two step aggregation1. Calculate the total amount for every hour of the year for each driver2. Calculate the average income per hour over all days and all drivers
###Code
# Step 1: Calculate totals per driver per hour per day
hourly_totals = taxi_trips \
.withColumn("date", f.to_date('pickup_datetime')) \
.withColumn("hour", f.hour('pickup_datetime')) \
.groupBy("date", "hour", "hack_license").agg(
f.sum("fare_amount").alias("fare_amount"),
f.sum("tip_amount").alias("tip_amount"),
f.count("fare_amount").alias("trip_count")
)
# Step 2: Calculate average per hour
hourly_driver_avg = hourly_totals \
.groupBy("hour").agg(
f.avg("fare_amount").alias("avg_fare_amount"),
f.avg("tip_amount").alias("avg_tip_amount"),
f.avg("trip_count").alias("avg_trip_count")
) \
.orderBy("hour")
# Convert to Pandas
pdf = hourly_driver_avg.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["hour"],pdf["avg_fare_amount"], color="red")
plt.plot(pdf["hour"],pdf["avg_tip_amount"], color="green")
plt.plot(pdf["hour"],pdf["avg_trip_count"], color="blue")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Fare Amount per Hour per Driver'),
mpatches.Patch(color='green', label='Average Tip Amount per Hour per Driver'),
mpatches.Patch(color='blue', label='Average Trip Count per Hour per Driver')
])
###Output
_____no_output_____
###Markdown
4.6 Average income per dayThe next plot is a slight variation of the previous one, now looking at the income on a whole day. The question is, how much money does a driver make on average for a specific day. Again, this requires a two step aggregation1. Calculate the total amount for every day of the year for each driver2. Calculate the average income per day over all days and all drivers
###Code
# Step 1: Calculate totals per driver per date
daily_totals = taxi_trips \
.withColumn("date", f.to_date('pickup_datetime')) \
.groupBy("date", "hack_license").agg(
f.sum("total_amount").alias("amount"),
f.count("total_amount").alias("count")
)
# Step 2: Calculate average per date
daily_average = daily_totals \
.groupBy("date").agg(
f.avg("amount").alias("avg_amount"),
f.avg("count").alias("avg_count")
) \
.orderBy("date")
# Convert to Pandas
pdf = daily_average.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["avg_amount"], color="red")
plt.plot(pdf["date"],pdf["avg_count"], color="blue")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Fare Amount per Day per Driver'),
mpatches.Patch(color='blue', label='Average Trip Count per Day per Driver')
])
###Output
_____no_output_____
###Markdown
4.7 Tip by passenger countDoes the tip depend on the number of passengers? Let us display the average tip amount for each number of passengers.
###Code
tip_by_passengers = taxi_trips \
.filter(taxi_trips["passenger_count"] < 10) \
.groupBy("passenger_count").agg(
f.avg("tip_amount").alias("tip_amount")
) \
.orderBy("passenger_count")
# Convert to Pandas
pdf = tip_by_passengers.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.bar(pdf["passenger_count"], pdf["tip_amount"], align='center', alpha=0.5)
plt.ylabel('Tip Amount')
plt.title('Number of Passengers')
###Output
_____no_output_____
###Markdown
5. Preaggregated Taxi TripsBefore we start to include additional data sets from other sources, let us first focus on a reasonable question which we'd like to answer with machine learning.We are not so much interested in the individual trips, but we'd like to understand at which time a driver can make the most money. We already saw in the pictures above that the average amount of money per hour per driver doesn't vary very much, although most money seems to be made in the evening hours. Unfortunately these numbers do not necessarily tell the whole truth, since we don't have any information about how long a driver was actually working.So the question is: *"Can we predict the overall fares for a specific hour on a specific day?"* We will refine that question a little bit and clarify what information may be used to create the prediction, such that it makes sense from a business point of view.We now prepare the joined trip data to contain data for precisely this question - we will remove the driver's hack license and medallion. 5.1 Extend InformationAs a first step, we add some more columns:* **date** and **hour** - The pickup date and hour (without minutes or seconds)* **lat_idx** and **long_idx** - We map the whole geo range onto a grid and these two columns contain the logical coordinates in this grid.
###Code
min_pickup_longitude=-74.007698
max_pickup_longitude=-73.776711
min_pickup_latitude=40.706902
max_pickup_latitude=40.799072
longitude_grid_size = 10
latitude_grid_size = 5
longitude_grid_diff = (max_pickup_longitude - min_pickup_longitude) / longitude_grid_size
latitude_grid_diff = (max_pickup_latitude - min_pickup_latitude) / latitude_grid_size
extended_trips = taxi_trips \
.withColumn("date", f.to_date(taxi_trips["pickup_datetime"])) \
.withColumn("hour", f.hour(taxi_trips["pickup_datetime"])) \
.withColumn("lat_idx", f.rint((taxi_trips["pickup_latitude"] - min_pickup_latitude)/latitude_grid_diff)) \
.withColumn("long_idx", f.rint((taxi_trips["pickup_longitude"] - min_pickup_longitude)/longitude_grid_diff)) \
.withColumn("lat_idx", f.when((f.col("lat_idx") >= 0) & (f.col("lat_idx") < latitude_grid_size), f.col("lat_idx")).otherwise(-1).cast("int")) \
.withColumn("long_idx", f.when((f.col("long_idx") >= 0) & (f.col("long_idx") < longitude_grid_size), f.col("long_idx")).otherwise(-1).cast("int")) \
extended_trips.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
5.2 Preaggregate and Store into Refined ZoneNow we preaggregate the extended data using the following dimensions* **date** and **hour*** **lat_idx** and **long_idx**In addition to the dimensions, the result will aggregate (sum up) the following metrics* **passenger_count*** **fare_amount*** **tip_amount*** **total_amount**
###Code
hourly_taxi_trips = # YOUR CODE HERE
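# A possible sketch: group by the four dimensions and sum up the metrics. A trip_count
# column is included as well, since the plots in section 6 rely on it.
# hourly_taxi_trips = extended_trips \
#     .groupBy("date", "hour", "lat_idx", "long_idx").agg(
#         f.sum("passenger_count").alias("passenger_count"),
#         f.sum("fare_amount").alias("fare_amount"),
#         f.sum("tip_amount").alias("tip_amount"),
#         f.sum("total_amount").alias("total_amount"),
#         f.count("total_amount").alias("trip_count")
#     )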
hourly_taxi_trips.write.mode("overwrite").parquet(refined_basedir + "/taxi-trips-hourly")
hourly_taxi_trips = spark.read.parquet(refined_basedir + "/taxi-trips-hourly")
hourly_taxi_trips.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
6. More PicturesUsing the preaggregated data set, we can now draw some more pictures. 6.1 Daily Aggregates
###Code
daily = hourly_taxi_trips \
.groupBy("date").agg(
f.sum("fare_amount").alias("fare_amount"),
f.sum("tip_amount").alias("tip_amount"),
f.sum("total_amount").alias("total_amount"),
f.sum("trip_count").alias("trip_count")
)\
.orderBy("date")
# Convert to Pandas
pdf = daily.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["fare_amount"], color="red")
plt.plot(pdf["date"],pdf["tip_amount"], color="green")
plt.plot(pdf["date"],pdf["total_amount"], color="blue")
plt.plot(pdf["date"],pdf["trip_count"], color="violet")
plt.legend(handles=[
mpatches.Patch(color='red', label='Fare Amount'),
mpatches.Patch(color='green', label='Tip Amount'),
mpatches.Patch(color='blue', label='Total Amount'),
mpatches.Patch(color='violet', label='Trip Count')
])
###Output
_____no_output_____
###Markdown
6.2 Average fare and tip per hour
###Code
hourly = hourly_taxi_trips \
.groupBy("hour").agg(
f.avg(hourly_taxi_trips["fare_amount"] / hourly_taxi_trips["trip_count"]).alias("avg_fare_amount"),
f.avg(hourly_taxi_trips["tip_amount"] / hourly_taxi_trips["trip_count"]).alias("avg_tip_amount")
)\
.orderBy("hour")
# Convert to Pandas
pdf = hourly.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["hour"],pdf["avg_fare_amount"], color="red")
plt.plot(pdf["hour"],pdf["avg_tip_amount"], color="green")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Fare Amount'),
mpatches.Patch(color='green', label='Average Tip Amount')
])
###Output
_____no_output_____
###Markdown
Part 3 - Simple AnalysisIn this step, we perform the first simple analysis of the taxi trip data in order to get a better understanding.
###Code
dwh_basedir = "/user/hadoop/nyc-dwh"
structured_basedir = dwh_basedir + "/structured"
refined_basedir = dwh_basedir + "/refined"
###Output
_____no_output_____
###Markdown
0. Setup Environment 0.1 Spark Session
###Code
import pyspark.sql.functions as f
from pyspark.sql import SparkSession
if not 'spark' in locals():
spark = SparkSession.builder \
.master("local[*]") \
.config("spark.driver.memory","64G") \
.getOrCreate()
spark
###Output
_____no_output_____
###Markdown
0.2 Matplotlib
###Code
%matplotlib inline
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
###Output
_____no_output_____
###Markdown
0.3 Geopandas and friends
###Code
import contextily as ctx
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point
# Helper function to fetch background map tiles
def add_basemap(ax, zoom, url='http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'):
xmin, xmax, ymin, ymax = ax.axis()
basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, url=url)
ax.imshow(basemap, extent=extent, interpolation='bilinear')
# restore original x/y limits
ax.axis((xmin, xmax, ymin, ymax))
###Output
_____no_output_____
###Markdown
1. Read Taxi DataNow we can read in the taxi data from the refined zone.
###Code
taxi_trips = spark.read.parquet(refined_basedir + "/taxi-trip")
taxi_trips.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
Just to be sure, let us inspect the schema. It should match exactly the specified one.
###Code
taxi_trips.printSchema()
###Output
_____no_output_____
###Markdown
1.3 Create SampleFor some actions, we only need a subset of the whole data (some visualizations won't even work with the full data), therefore we create a subsample that is small enough to be handled efficiently by Python. We'd like to have around 100,000 records in the sample.Spark offers the required functionality to create a *random* sample of the data, but we need to specify a fraction instead of an absolute number. Therefore we first count the number of records and then derive an appropriate fraction.
###Code
taxi_trips.count()
###Output
_____no_output_____
###Markdown
In order to get around 100,000 records, we use a fraction of 0.001. This will give us 170,000 records, which is good enough for us.
###Code
taxi_trips_sample = # YOUR CODE HERE
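# A hedged sketch of one way to fill this in: DataFrame.sample with a ~0.1% fraction
# (the seed value is arbitrary and only there for reproducibility).
# taxi_trips_sample = taxi_trips.sample(withReplacement=False, fraction=0.001, seed=42)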
taxi_trips_sample.count()
###Output
_____no_output_____
###Markdown
2. Simple Geo VisualisationIn order to get an understanding of the data, let us first make a geo visualization using some Python functionality. We'd eventually want to draw the pickup locations on top of a map, so that we understand the whole area that is served by the taxi cabs. We might want to use this information later when it comes to the ML part. 2.1 Estimate ExtentAs a first step, let us estimate a realistic extent of the pickup locations. There are some broken records in the data set, which would render the visualization meaningless, therefore we try to estimate extents such that 95% of all points lie within that border.
###Code
quantile = taxi_trips_sample \
.filter((taxi_trips_sample["pickup_longitude"] > -75) & (taxi_trips_sample["pickup_longitude"] < -65)) \
.filter((taxi_trips_sample["pickup_latitude"] > 35) & (taxi_trips_sample["pickup_latitude"] < 45)) \
.stat.approxQuantile(["pickup_longitude", "pickup_latitude"], [0.025,0.975], 0.01)
min_pickup_longitude = quantile[0][0]
max_pickup_longitude = quantile[0][1]
min_pickup_latitude = quantile[1][0]
max_pickup_latitude = quantile[1][1]
print("min_pickup_longitude=" + str(min_pickup_longitude))
print("max_pickup_longitude=" + str(max_pickup_longitude))
print("min_pickup_latitude=" + str(min_pickup_latitude))
print("max_pickup_latitude=" + str(max_pickup_latitude))
###Output
_____no_output_____
###Markdown
2.2 Visualize pickup locationNow by using some appropriate Python libraries, we can visualize the pickup locations nicely on a map. Since the data contains some bogus coordinates and some (maybe correct) outliers, we limit the area to the extents that we estimated above. This means that we throw away all records that lie outside of the core area (only for this visualization, of course!)
###Code
df = taxi_trips_sample.select("pickup_longitude","pickup_latitude") \
.filter((taxi_trips_sample["pickup_longitude"] >= min_pickup_longitude) & (taxi_trips_sample["pickup_longitude"] <= max_pickup_longitude)) \
.filter((taxi_trips_sample["pickup_latitude"] >= min_pickup_latitude) & (taxi_trips_sample["pickup_latitude"] <= max_pickup_latitude)) \
.toPandas()
# Convert DataFrame to GeoDataFrame
coords = pd.Series(zip(df["pickup_longitude"], df["pickup_latitude"]))
geo_df = gpd.GeoDataFrame(df, crs = {'init': 'epsg:4326'}, geometry = coords.apply(Point)).to_crs(epsg=3857)
# ... and make the plot
ax = geo_df.plot(figsize=(15, 10), alpha=0.1)
# Add basemap below
add_basemap(ax, 12)
###Output
_____no_output_____
###Markdown
Retrieve the geo extents for later reuse in more visualizations.
###Code
geo_min_x, geo_max_x = ax.get_xlim()
geo_min_y, geo_max_y = ax.get_ylim()
print("geo_min_x=" + str(geo_min_x))
print("geo_max_x=" + str(geo_max_x))
print("geo_min_y=" + str(geo_min_y))
print("geo_max_y=" + str(geo_max_y))
###Output
_____no_output_____
###Markdown
3. Simple QuestionsUsing the taxi trips table, we can already answer some simple questions. 3.1 Average Fare per Mile
###Code
# Perform aggregation
df = # YOUR CODE HERE
result = # YOUR CODE HERE
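# A possible solution, analogous to the fare-per-minute cell in 3.2; it assumes the
# distance column is called "trip_distance" (verify against the schema above), and
# trips with trip_distance == 0 may need to be filtered out first.
# df = taxi_trips.withColumn("fare_per_mile", taxi_trips["total_amount"]/taxi_trips["trip_distance"])
# result = df.select(
#     f.avg(df["fare_per_mile"]).alias("avg_fare_per_mile"),
#     f.stddev(df["fare_per_mile"]).alias("stddev_fare_per_mile")
# )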
result.toPandas()
###Output
_____no_output_____
###Markdown
3.2 Average fare per minute
###Code
# Perform aggregation
df = taxi_trips.withColumn("fare_per_minute", taxi_trips["total_amount"]/taxi_trips["trip_time_in_secs"]*60)
result = df.select(
f.avg(df["fare_per_minute"]).alias("avg_fare_per_minute"),
f.stddev(df["fare_per_minute"]).alias("stddev_fare_per_minute")
)
result.toPandas()
###Output
_____no_output_____
###Markdown
4. Make some PicturesJust to get a rough feeling about the data, we make some pictures of the taxi trip data. 4.1 Average trips per day of weekLet us see if the average number of trips is the same for every weekday.
###Code
# Step 1: Calculate the number of trips for every day
trips_per_day = # YOUR CODE HERE
# Step 2: Calculate average number of trips per day of week
result = # YOUR CODE HERE
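# One way to do the two-step aggregation (column names match the plot below;
# f.dayofweek is available from Spark 2.3 on):
# trips_per_day = taxi_trips \
#     .withColumn("pickup_date", f.to_date("pickup_datetime")) \
#     .withColumn("pickup_dayofweek", f.dayofweek("pickup_datetime")) \
#     .groupBy("pickup_date", "pickup_dayofweek") \
#     .count()
# result = trips_per_day \
#     .groupBy("pickup_dayofweek") \
#     .agg(f.avg("count").alias("avg_count")) \
#     .orderBy("pickup_dayofweek")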
pdf = result.toPandas()
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.bar(pdf["pickup_dayofweek"], pdf["avg_count"], align='center', alpha=0.5)
plt.ylabel('Frequency')
plt.title('Day of week')
###Output
_____no_output_____
###Markdown
4.2 Make a Plot of Fare per DayThe next picture contains the total fare amount (including tip and other expenses) for every day in 2013.
###Code
# Calculate two aggregations
# 1. Total fare amount per day
# 2. Total number of trips per day
daily = # YOUR CODE HERE
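# One way to compute both daily aggregations at once (aliases match the plot below):
# daily = taxi_trips \
#     .withColumn("date", f.to_date("pickup_datetime")) \
#     .groupBy("date").agg(
#         f.sum("total_amount").alias("amount"),
#         f.count("total_amount").alias("count")
#     ) \
#     .orderBy("date")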
# Convert to Pandas
pdf = daily.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["amount"], color="red")
plt.plot(pdf["date"],pdf["count"], color="green")
plt.legend(handles=[
mpatches.Patch(color='red', label='Total Fare Amount per Day'),
mpatches.Patch(color='green', label='Total Number of Trips per Day')
])
###Output
_____no_output_____
###Markdown
4.3 Make a plot of average trips for each hourLet us plot the average number of trips and amount of income per hour. This will be done using a two step aggregation:1. Calculate the total number of trips and total fare for every hour in the whole year2. Calculate the average numbers per hour from this data
###Code
# Step 1: Calculate totals for every hour of the year
hourly_trips = taxi_trips \
.withColumn("date", f.to_date('pickup_datetime')) \
.withColumn("hour", f.hour('pickup_datetime')) \
.groupBy("hour", "date").agg(
f.sum("total_amount").alias("total_amount"),
f.count("total_amount").alias("total_count")
)
# Step 2: Calculate average values per hour of day
hourly_avg = hourly_trips \
.groupBy("hour").agg(
f.avg("total_amount").alias("avg_amount"),
f.avg("total_count").alias("avg_count")
)\
.orderBy("hour")
# Convert to Pandas
pdf = hourly_avg.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["hour"],pdf["avg_amount"], color="red")
plt.plot(pdf["hour"],pdf["avg_count"], color="green")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Total Fare per Hour'),
mpatches.Patch(color='green', label='Average Trip Count per Hour')
])
###Output
_____no_output_____
###Markdown
4.4 Passenger CountsAnother simple question is a histogram of the passenger count of all trips. Note that again the data contains some bogus data, therefore we limit the analysis to records with a passenger count less than 20.
###Code
result = taxi_trips \
.filter(f.col("passenger_count") < 20) \
.groupBy("passenger_count") \
.count()
pdf = result.toPandas()
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.bar(pdf["passenger_count"], pdf["count"], align='center', alpha=0.5)
plt.ylabel('Frequency')
plt.title('Number of Passengers')
###Output
_____no_output_____
###Markdown
4.5 Average income by hour by driverThe next plot is a slight variation of the previous one, focusing on the individual driver. The question is, how much money does a driver make on average for a specific hour of the day. Again, this requires a two step aggregation1. Calculate the total amount for every hour of the year for each driver2. Calculate the average income per hour over all days and all drivers
###Code
# Step 1: Calculate totals per driver per hour per day
hourly_totals = taxi_trips \
.withColumn("date", f.to_date('pickup_datetime')) \
.withColumn("hour", f.hour('pickup_datetime')) \
.groupBy("date", "hour", "hack_license").agg(
f.sum("fare_amount").alias("fare_amount"),
f.sum("tip_amount").alias("tip_amount"),
f.count("fare_amount").alias("trip_count")
)
# Step 2: Calculate average per hour
hourly_driver_avg = hourly_totals \
.groupBy("hour").agg(
f.avg("fare_amount").alias("avg_fare_amount"),
f.avg("tip_amount").alias("avg_tip_amount"),
f.avg("trip_count").alias("avg_trip_count")
) \
.orderBy("hour")
# Convert to Pandas
pdf = hourly_driver_avg.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["hour"],pdf["avg_fare_amount"], color="red")
plt.plot(pdf["hour"],pdf["avg_tip_amount"], color="green")
plt.plot(pdf["hour"],pdf["avg_trip_count"], color="blue")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Fare Amount per Hour per Driver'),
mpatches.Patch(color='green', label='Average Tip Amount per Hour per Driver'),
mpatches.Patch(color='blue', label='Average Trip Count per Hour per Driver')
])
###Output
_____no_output_____
###Markdown
4.6 Average income per dayThe next plot is a slight variation of the previous one, now looking at the income on a whole day. The question is, how much money does a driver make on average for a specific day. Again, this requires a two step aggregation1. Calculate the total amount for every day of the year for each driver2. Calculate the average income per day over all days and all drivers
###Code
# Step 1: Calculate totals per driver per date
daily_totals = taxi_trips \
.withColumn("date", f.to_date('pickup_datetime')) \
.groupBy("date", "hack_license").agg(
f.sum("total_amount").alias("amount"),
f.count("total_amount").alias("count")
)
# Step 2: Calculate average per date
daily_average = daily_totals \
.groupBy("date").agg(
f.avg("amount").alias("avg_amount"),
f.avg("count").alias("avg_count")
) \
.orderBy("date")
# Convert to Pandas
pdf = daily_average.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["avg_amount"], color="red")
plt.plot(pdf["date"],pdf["avg_count"], color="blue")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Fare Amount per Day per Driver'),
mpatches.Patch(color='blue', label='Average Trip Count per Day per Driver')
])
###Output
_____no_output_____
###Markdown
4.6 Tip by passenger count

Does the tip depend on the number of passengers? Let us display the average tip amount for each number of passengers.
###Code
tip_by_passengers = taxi_trips \
.filter(taxi_trips["passenger_count"] < 10) \
.groupBy("passenger_count").agg(
f.avg("tip_amount").alias("tip_amount")
) \
.orderBy("passenger_count")
# Convert to Pandas
pdf = tip_by_passengers.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.bar(pdf["passenger_count"], pdf["tip_amount"], align='center', alpha=0.5)
plt.ylabel('Tip Amount')
plt.title('Number of Passengers')
###Output
_____no_output_____
###Markdown
5. Preaggregated Taxi Trips

Before we start to include additional data sets from other sources, let us first focus on a reasonable question which we'd like to answer with machine learning. We are not so much interested in the individual trips; we'd rather understand at which times a driver can make the most money. We already saw in the pictures above that the average amount of money per hour per driver doesn't vary very much, although most money seems to be made in the evening hours. Unfortunately these numbers do not necessarily tell the whole truth, since we don't have any information about how long a driver was actually working.

So the question is: *"Can we predict the overall fares for a specific hour on a specific day?"* We will refine that question a little bit and clarify what information may be used to create the prediction, such that it makes sense from a business point of view.

We now prepare the joined trip data to contain data for precisely this question - we will remove the driver's hack license and the medallion.

5.1 Extend Information

As a first step, we add some more columns:
* **date** and **hour** - the pickup date and hour (without minutes or seconds)
* **lat_idx** and **long_idx** - we map the whole geo range onto a grid, and these two columns contain the logical coordinates in this grid.
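To make the grid mapping concrete, here is a plain-Python sketch of what the `withColumn` expressions in the next cell compute for a single pickup coordinate (the coordinate values here are made up for illustration):

```python
# Sketch of the grid mapping; the min/max and grid-diff constants are defined in the next cell.
lat, lon = 40.7580, -73.9855   # a hypothetical pickup location
lat_idx = round((lat - min_pickup_latitude) / latitude_grid_diff)
long_idx = round((lon - min_pickup_longitude) / longitude_grid_diff)
# indices falling outside [0, grid_size) are mapped to -1 in the Spark code
```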
###Code
min_pickup_longitude=-74.007698
max_pickup_longitude=-73.776711
min_pickup_latitude=40.706902
max_pickup_latitude=40.799072
longitude_grid_size = 10
latitude_grid_size = 5
longitude_grid_diff = (max_pickup_longitude - min_pickup_longitude) / longitude_grid_size
latitude_grid_diff = (max_pickup_latitude - min_pickup_latitude) / latitude_grid_size
extended_trips = taxi_trips \
.withColumn("date", f.to_date(taxi_trips["pickup_datetime"])) \
.withColumn("hour", f.hour(taxi_trips["pickup_datetime"])) \
.withColumn("lat_idx", f.rint((taxi_trips["pickup_latitude"] - min_pickup_latitude)/latitude_grid_diff)) \
.withColumn("long_idx", f.rint((taxi_trips["pickup_longitude"] - min_pickup_longitude)/longitude_grid_diff)) \
.withColumn("lat_idx", f.when((f.col("lat_idx") >= 0) & (f.col("lat_idx") < latitude_grid_size), f.col("lat_idx")).otherwise(-1).cast("int")) \
.withColumn("long_idx", f.when((f.col("long_idx") >= 0) & (f.col("long_idx") < longitude_grid_size), f.col("long_idx")).otherwise(-1).cast("int")) \
extended_trips.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
5.2 Preaggregate and Store into Refined Zone

Now we preaggregate the extended data using the following dimensions:
* **date** and **hour**
* **lat_idx** and **long_idx**

In addition to the dimensions, the result will aggregate (sum up) the following metrics:
* **passenger_count**
* **fare_amount**
* **tip_amount**
* **total_amount**
###Code
# YOUR CODE HERE - one possible solution (sketch): aggregate over the four dimensions
# and sum up the metrics; a trip_count column is added because later cells rely on it.
hourly_taxi_trips = extended_trips \
    .groupBy("date", "hour", "lat_idx", "long_idx").agg(
        f.sum("passenger_count").alias("passenger_count"),
        f.sum("fare_amount").alias("fare_amount"),
        f.sum("tip_amount").alias("tip_amount"),
        f.sum("total_amount").alias("total_amount"),
        f.count("total_amount").alias("trip_count")
    )
hourly_taxi_trips.write.mode("overwrite").parquet(refined_basedir + "/taxi-trips-hourly")
hourly_taxi_trips = spark.read.parquet(refined_basedir + "/taxi-trips-hourly")
hourly_taxi_trips.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
6. More Pictures

Using the preaggregated data set, we can now draw some more pictures.

6.1 Daily Aggregates
###Code
daily = hourly_taxi_trips \
.groupBy("date").agg(
f.sum("fare_amount").alias("fare_amount"),
f.sum("tip_amount").alias("tip_amount"),
f.sum("total_amount").alias("total_amount"),
f.sum("trip_count").alias("trip_count")
)\
.orderBy("date")
# Convert to Pandas
pdf = daily.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["fare_amount"], color="red")
plt.plot(pdf["date"],pdf["tip_amount"], color="green")
plt.plot(pdf["date"],pdf["total_amount"], color="blue")
plt.plot(pdf["date"],pdf["trip_count"], color="violet")
plt.legend(handles=[
mpatches.Patch(color='red', label='Fare Amount'),
mpatches.Patch(color='green', label='Tip Amount'),
mpatches.Patch(color='blue', label='Total Amount'),
mpatches.Patch(color='violet', label='Trip Count')
])
###Output
_____no_output_____
###Markdown
6.2 Average fare and tip per hour
###Code
hourly = hourly_taxi_trips \
.groupBy("hour").agg(
f.avg(hourly_taxi_trips["fare_amount"] / hourly_taxi_trips["trip_count"]).alias("avg_fare_amount"),
f.avg(hourly_taxi_trips["tip_amount"] / hourly_taxi_trips["trip_count"]).alias("avg_tip_amount")
)\
.orderBy("hour")
# Convert to Pandas
pdf = hourly.toPandas()
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["hour"],pdf["avg_fare_amount"], color="red")
plt.plot(pdf["hour"],pdf["avg_tip_amount"], color="green")
plt.legend(handles=[
mpatches.Patch(color='red', label='Average Fare Amount'),
mpatches.Patch(color='green', label='Average Tip Amount')
])
###Output
_____no_output_____ |
5_arborescences/6_file_de_priorite_et_tas.ipynb | ###Markdown
Priority queues - heap structures

[Companion video](https://vimeo.com/516288038)

Introduction

A "max" (resp. "min") **priority queue** is a *dynamic set* that mainly offers the following operations:
- `.inserer(elt)`: inserts an element,
- `.extraire_max()` (or `_min`): returns the element with the largest key and removes that element,
- `.augmenter_cle(i, valeur)` (or the decrease variant): updates the key at index `i` with the given value, which must *be greater than the old value*.

The goal is to provide these operations at minimal cost. We will see a few applications of priority queues a little later (process scheduling, Huffman coding, ...).

Naive strategy

One can think of using a *linear structure* (array, linked list) and keeping its elements in ascending order as the operations are performed. Here is how one could go about it by subclassing the `list` type:
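As a quick sketch of how such a queue is meant to be used (the `FilePrioriteMax` class is implemented in the next cell; the values are arbitrary):

```python
fpm = FilePrioriteMax()
for v in [7, 42, 3]:
    fpm.inserer(v)             # insert three keys
fpm.augmenter_cle(0, 50)       # raise the key stored at index 0 to 50
print(fpm.extraire_max())      # -> 50
print(fpm.extraire_max())      # -> 42
```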
###Code
class FilePrioriteMax(list):
def inserer(self, valeur):
self.append(valeur)
i = len(self) - 2
while i >= 0 and self[i] > valeur:
self[i], self[i+1] = self[i+1], self[i] # swap
i -= 1
def extraire_max(self):
return self.pop()
def augmenter_cle(self, i, valeur):
if valeur < self[i]:
raise Exception("valeur fournie inférieure à valeur courante")
self[i] = valeur
while i < len(self) - 1 and self[i] > self[i+1]:
self[i], self[i+1] = self[i+1], self[i]
i += 1
###Output
_____no_output_____
###Markdown
This has the merit of being (relatively) simple. Here is an interactive usage example:
###Code
from random import randint
fpm = FilePrioriteMax()
# insert 10 random elements into it
for _ in range(10):
fpm.inserer(randint(1, 100))
print(f"\tfpm = {fpm}\n");
# choose one of the operations to see its effect on the array
while True:
op = input("inserer (1), extraire_max (2) augmenter_cle (3)? ")
if not op in "123":
break
if op == "1":
fpm.inserer(int(input("valeur? ")))
elif op == "2":
print(f"\tmax = {fpm.extraire_max()}")
else:
fpm.augmenter_cle(int(input("index?")), int(input("valeur?")))
print(f"\tfpm = {fpm}\n")
###Output
_____no_output_____
###Markdown
Exercise

What are the respective costs of these three operations?

Solution

The loop in `inserer`, when inserting a value larger than those already in the queue (worst case), performs $n$ iterations, so `inserer` is $O(n)$. Note that if it is used $n$ times to build a queue of size $n$, the cost of building the queue is $O(n^2)$ ($1+2+\cdots+n$).

`extraire_max` is $O(1)$, like the underlying pop on Python lists.

Finally, `augmenter_cle` is $O(n)$, because there will be $n$ iterations if we increase the key of the first value, giving it a value larger than those of all the other elements.

Binary tree organized as a max heap

We can solve this problem more efficiently by settling for a **partial order** (rather than a *total* one as before). The idea is to use a *binary tree* organized as a "**max heap**". Precisely, it must satisfy the following two conditions:

> 1. the binary tree is **complete**: *all its levels are filled from left to right except possibly the last one*.
>
> **Goal**: minimize its height.
>
> 2. every node carries a **key greater than** (or at worst equal to) **those of its children**.
>
> **Goal**: locate the max easily without imposing a total order.

*Be careful not to confuse* a **max heap** with a **binary search tree**: *they have nothing to do with each other!*

A *complete* binary tree has the following property: its level-order numbering has "no gaps". This makes it possible to represent such a tree very compactly with an **array**. Beware that nodes are numbered starting from **1** (for once). If `i` is the number of a node then, when it exists:
- its **parent** has number `i // 2` (integer division of `i` by `2`),
- its **left child** has number `2*i`,
- its **right child** has number `2*i + 1`.

If we denote by `j` the numbering starting from **0**, then `j = i - 1` and the "parent" formula becomes:

    j_parent = i_parent - 1 = i // 2 - 1 = (j+1) // 2 - 1

that is, `j_p = parent(j) = (j+1) // 2 - 1`.

Exercise

Find the other two formulas and implement them.
###Code
def parent(j):  # Python numbering (starting from 0)
pass
def gauche(j):
pass
def droit(j):
pass
def parent(j):  # Python numbering (starting from 0)
return (j+1)//2-1
def gauche(j):
return 2*(j+1)-1
def droit(j):
return 2*(j+1)
###Output
_____no_output_____
###Markdown
The `TasMax` class

From now on, we will use a dedicated class:
###Code
class TasMax:
def __init__(self, tableau=None):
self._t = []
        # note: the heap size may differ from the length of the underlying array _t.
self.taille = 0
if tableau is not None:
            self._construire(tableau)  # see further below
def __len__(self):
return self.taille
    def get_tableau(self):  # for the tests
return self._t.copy()
def _construire(self, tableau):
        pass  # will be implemented a little further below
###Output
_____no_output_____
###Markdown
A note on the word "**heap**"

This word evokes a *weakly* structured collection of data. It is commonly used in computer science to designate a variable memory area of a running program (process) in which the program can perform *allocations/deallocations*. For example, mutable objects such as *list*s are managed in *the heap* because their structure can be modified while the program runs. Python manages the heap automatically, as we experimented with `__del__`. But in a language like **C**, the programmer has to manage this area himself, notably with the `malloc` and `free` primitives. The heap as a *variable memory area* and the *heap data structure* we are talking about here have nothing in common but the name!

The `_entasser_max` procedure

*Note*: the word "*procedure*" designates a function (or a method) that returns nothing and whose role is often to modify a structure by reorganizing it.

To *implement* our **max heap**, we will start by solving the following subproblem:

> Assume that all nodes of the tree rooted at the node of index `i` satisfy the max-heap property (\*) *except possibly node `i` itself (the root)*. We are asked to fix this situation so that the tree rooted at `i` becomes a max heap.
>
> (\*) its value is greater than those of its two children.

The method `._entasser_max(i)` must solve this problem. The idea is to compare the value of node `i` with those of its children and swap if needed. If a swap takes place, node `i` satisfies the max-heap property and there are then two possible cases:
- the subtree that receives the value at its root is a max heap: the problem is solved!
- **otherwise**, we end up with a *similar* problem on a smaller tree...

Exercise

Think about the cost of this procedure, then implement it:
###Code
def _entasser_max(self, i):
pass
TasMax._entasser_max = _entasser_max
del _entasser_max
###Output
_____no_output_____
###Markdown
Solution

The number of operations is clearly at most the height of the tree, which is $\log n$ if the tree contains $n$ values.
###Code
def _entasser_max(self, i):
    i_max = i  # index of the node holding the largest value seen so far
    g, d = gauche(i), droit(i)
    # find the index of the node that holds the largest value,
    # taking care to account for the heap size
if g < self.taille and self._t[g] > self._t[i]:
i_max = g
if d < self.taille and self._t[d] > self._t[i_max]:
i_max = d
    # if needed, swap and recurse
if i_max != i:
self._t[i], self._t[i_max] = self._t[i_max], self._t[i]
self._entasser_max(i_max)
TasMax._entasser_max = _entasser_max
del _entasser_max
###Output
_____no_output_____
###Markdown
We will test this solution in the next section.

Building a heap from an array

We are given an array `tab`. Our goal is to *turn it into a max heap*. The array can already be read as a "*complete*" binary tree, and note that its leaves are (trivially) already max heaps. The idea is to apply `_entasser_max` while "going back up" the tree level by level, starting from the last *internal* node (in the level-order numbering). More precisely, we traverse the tree:
- *from right to left* and *from bottom to top* (hence **by decreasing index**),
- from the last internal node up to the root node.

Exercise

1. Describe, step by step, the construction of a max heap from the array `[14, 9, 20, 15, 19, 12, 10, 13]`.
2. Implement the internal method `_construire(tableau)`.
###Code
def _construire(self, tableau):
    self._t = tableau
    self.taille = len(tableau)
    pass
TasMax._construire = _construire
del _construire
###Output
_____no_output_____
###Markdown
Solution

**1** Here are the steps in the array representation (* marks the parent, _ its child or children):

    [14, 9, 20, *15*, 19, 12, 10, _13_]    15 > 13: nothing to do
    [14, 9, *20*, 15, 19, _12_, _10_, 13]  20 > 12 and 20 > 10: nothing to do
    [14, *9*, 20, _15_, _19_, 12, 10, 13]  swap with 19 and sift down again, but
    [14, 19, 20, 15, *9*, 12, 10, 13]      9 has no child
    [*14*, _19_, _20_, 15, 9, 12, 10, 13]  swap with 20 and sift down again, but
    [20, 19, *14*, 15, 9, _12_, _10_, 13]  14's two children have smaller values

Result: [20, 19, 14, 15, 9, 12, 10, 13].

**2** Implementation of `_construire`
###Code
def _construire(self, tableau):
self._t = tableau
self.taille = len(tableau)
    # i: index of the last internal node
i = parent(self.taille-1)
while i >= 0:
self._entasser_max(i)
i -= 1
TasMax._construire = _construire
del _construire
###Output
_____no_output_____
###Markdown
Test it
###Code
import pytest
# notebook-specific
import ipytest
ipytest.autoconfig()
@pytest.fixture()
def tas_simple():
return TasMax([14, 9, 20, 15, 19, 12, 10, 13])
from random import randint
@pytest.fixture()
def tas_aleatoire():
return TasMax([randint(0, 1000) for _ in range(100)])
%%run_pytest[clean]
def test_construire(tas_simple, tas_aleatoire):
assert tas_simple.get_tableau() == [20, 19, 14, 15, 9, 12, 10, 13]
t = tas_aleatoire.get_tableau()
i = len(tas_aleatoire)-1
while i >= 1:
assert t[parent(i)] >= t[i]
i -= 1
###Output
_____no_output_____
###Markdown
Efficiency

Since the height of a heap is close to $\log n$, where $n$ is the number of nodes in the tree, and since at each iteration the sift-down operation takes time proportional to the height of the tree, we expect an efficiency of $O(n\log n)$ in the worst case. In fact, the situation is better, as one can "feel". It can be shown - it is a bit technical - that:

> the cost of building a heap from an array is $O(n)$.

Heapsort

Even though it has nothing to do with the "priority queue" problem, it turns out that a heap can be used to sort an array, and that this is a *very efficient* sort called **heapsort**. Heapsort starts by transforming the given array into a max heap (for an ascending sort) using the previous algorithm. Once that is done, we traverse the heap from the bottom up to the root *excluded* and, at each iteration:
- we swap the value of the current node with that of the root,
- we "detach" the last node by decrementing the heap size (by one unit),
- we call `_entasser_max` on the root.

Here is an illustration (a green node satisfies the max-heap property, a red node belongs to the array but no longer to the heap). Here is an example of what this gives in the "array" representation: the array is sorted "in place" (no need to return anything).

Exercise

1. Analyze the complexity of heapsort.
2. Implement it as a "static" method (see the note below) of the `TasMax` class.

*Note on static methods*: a **static method** does not rely on instances of a class. It can be seen as a function "attached" to a class. To call it, you use the name of the class rather than that of an instance. For example:

```python
tableau = [....]
TasMax.trier(tableau)   # tableau is sorted in place (no return value)
# from here on, tableau is sorted
```
###Code
def trier(tab):
tas = ___
pass
TasMax.trier = staticmethod(trier)  # the @staticmethod decorator could be used instead
del trier
def trier(tableau):
tas = TasMax(tableau)
while tas.taille > 1:
tas._t[0], tas._t[tas.taille-1] = tas._t[tas.taille-1], tas._t[0]
tas.taille -= 1
tas._entasser_max(0)
TasMax.trier = staticmethod(trier)  # the @staticmethod decorator could be used instead
del trier
%%run_pytest[clean]
def test_trier():
tableau = [randint(0, 100) for i in range(15)]
TasMax.trier(tableau)
for i in range(len(tableau)-1):
assert tableau[i] <= tableau[i+1]
###Output
_____no_output_____
###Markdown
Efficiency

Building the heap is $O(n)$. Then each iteration is dominated by the cost of the sift-down, which is $O(\log n)$. Since there are $n$ iterations, the cost of the loop is $O(n\log n)$. The latter dominates the cost of building the heap and is therefore also the cost of this sort:

> Heapsort costs $O(n\log n)$.

`extraire_max`

The idea is similar to the previous one. If the heap is not empty:
1. swap the value of the root with that of the last node,
2. decrement the heap size (no need for a pop!),
3. then sift down the value now at the root to restore the max-heap property.
###Code
def max(self):
"""Renvoie le max sans le supprimer"""
if self.taille == 0:
raise IndexError("Le tas est vide!")
pass
def extraire_max(self):
"""Renvoie le max tout en le supprimant du tas"""
if self.taille == 0:
raise IndexError("Le tas est vide!")
pass
def _max(self):
"""Renvoie le max sans le supprimer"""
if self.taille == 0:
raise IndexError("Le tas est vide!")
return self._t[0]
def extraire_max(self):
"""Renvoie le max tout en le supprimant du tas"""
if self.taille == 0:
raise IndexError("Le tas est vide!")
r = self._t[0]
self._t[0] = self._t[self.taille-1]
self.taille -= 1
self._entasser_max(0)
return r
TasMax.max = _max; TasMax.extraire_max = extraire_max
del _max; del extraire_max
%%run_pytest[clean]
def test_extraire_max(tas_aleatoire):
N = n = len(tas_aleatoire)
liste = []
for _ in range(N):
liste.append(tas_aleatoire.extraire_max())
n -= 1
assert len(tas_aleatoire) == n
for i in range(N-1):
assert liste[i] >= liste[i+1]
with pytest.raises(IndexError):
tas_aleatoire.extraire_max()
###Output
_____no_output_____
###Markdown
`augmenter_cle`

When the key of a node in a max heap is increased, its **parent** node (and only it) may no longer satisfy the max-heap property. We must therefore "walk up" along the ancestors of the targeted node in order to restore the max-heap property. Here is an example:

Exercise

Implement it!
###Code
def augmenter_cle(self, i, valeur):
"""modifie la valeur du noeud d'index i avec une valeur supérieure à celle qu'il avait antérieurement"""
if i >= self.taille:
raise IndexError("indice non valide")
if valeur < self._t[i]:
raise Exception(f"{valeur} est plus petite que la valeur courante {self._t[i]}")
pass
def augmenter_cle(self, i, valeur):
"""modifie la valeur du noeud d'index i avec une valeur supérieure à celle qu'il avait antérieurement"""
if i >= self.taille:
raise IndexError("indice non valide")
if valeur < self._t[i]:
raise Exception(f"{valeur} est plus petite que la valeur courante {self._t[i]}")
self._t[i] = valeur
ip = parent(i)
while i > 0 and self._t[ip] < self._t[i]:
self._t[ip], self._t[i] = self._t[i], self._t[ip]
i = ip
ip = parent(i)
TasMax.augmenter_cle = augmenter_cle
del augmenter_cle
%%run_pytest[clean]
def test_augmenter_cle(tas_aleatoire):
tab = tas_aleatoire.get_tableau()
with pytest.raises(IndexError):
tas_aleatoire.augmenter_cle(len(tas_aleatoire), 1000)
with pytest.raises(Exception):
i = randint(0, 99)
tas_aleatoire.augmenter_cle(i, tab[i]-1)
maxi = tab[0]
tas_aleatoire.augmenter_cle(len(tas_aleatoire)-1, maxi+1)
assert maxi+1 == tas_aleatoire.max()
###Output
_____no_output_____
###Markdown
`inserer`

The problem is actually quite similar to the previous one: we insert the node right after the last one (as the last leaf) and then walk up its ancestors so as to restore the max-heap property for them. To avoid rewriting code similar to `augmenter_cle`, we can use the following trick:
- insert a node with key "$-\infty$" (at the end of the underlying array), then
- call `augmenter_cle` on that node to give it its final value.

*Note*: a value of $-\infty$ is characterized (among other things) by the property of always being smaller than any other value. Such a value can be produced with `float("-inf")`.

Exercise

1. Implement it, taking care to call `append` on the underlying array only when necessary!
###Code
def inserer(self, valeur):
"""insère la valeur dans le tas"""
pass
TasMax.inserer = inserer
del inserer
def inserer(self, valeur):
"""insère la valeur dans le tas"""
self.taille += 1
if len(self._t) < self.taille:
self._t.append(float("-inf"))
else:
self._t[self.taille - 1] = float("-inf")
self.augmenter_cle(self.taille-1, valeur)
TasMax.inserer = inserer
del inserer
%%run_pytest[clean]
from random import shuffle
def test_inserer(tas_aleatoire):
tab = tas_aleatoire.get_tableau()
shuffle(tab)
tas = TasMax()
for v in tab:
tas.inserer(v)
assert len(tas) == len(tas_aleatoire)
N = len(tas)
for _ in range(N):
assert tas_aleatoire.extraire_max() == tas.extraire_max()
###Output
_____no_output_____ |
python/pandas-datetime.ipynb | ###Markdown
Generating dummy data
###Code
pd.date_range(start='2018-01-01', end='2019-01-01', freq='1H')
pd.date_range(start='2018-01-01', end='2019-01-01', freq='1D')
###Output
_____no_output_____
###Markdown
- To exclude the very last day, do it as shown below
###Code
pd.date_range(start='2018-01-01', end='2019-01-01', freq='1D')[:-1]
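# Alternative sketch: recent pandas versions (>= 1.4) can drop the right endpoint directly
# via the `inclusive` argument (older versions use `closed='left'` instead).
pd.date_range(start='2018-01-01', end='2019-01-01', freq='1D', inclusive='left')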
###Output
_____no_output_____ |
Evaluation/Dynamic-Multiple run.ipynb | ###Markdown
Coverage, computed K times with random n0 and t0
###Code
from scipy.stats import ks_2samp
K = 1000
T = 500
c_stm,c_mio,c_tag,c_dym = [],[],[],[]
for stm in stm_gen:
c_stm.append(ds.coverage(stm,K,T))
print("STM done")
c_orig = ds.coverage(orig_graphs,K,T)
c_stab = ds.coverage(orig_graphs,K,T)
for etn in etn_gen:
c_mio.append(ds.coverage(etn,K,T))
print("ETN done")
for tag in tag_gen:
c_tag.append(ds.coverage(tag,K,T))
print("TAG done")
for dym in dym_gen:
c_dym.append( ds.coverage(dym,K,T))
print("DYM done")
def mean_ks(c_orig,c_gens):
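    # Mean and standard deviation of the KS statistic between the original
    # distribution and each generated run.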
res = []
for i in c_gens:
res.append(ks_2samp(c_orig,i)[0])
return np.mean(res),np.std(res)
print("coverage")
print("orig vs sta \t",ks_2samp(c_orig, c_stab)[0])
print("orig vs mio \t",mean_ks(c_orig, c_mio))
print("orig vs tag \t",mean_ks(c_orig, c_tag))
print("orig vs dym \t",mean_ks(c_orig, c_dym))
print("orig vs stm \t",mean_ks(c_orig, c_stm))
np.save(COV+"/stab",c_stab)
np.save(COV+"/orig",c_orig)
np.save(COV+"/tag",c_tag)
np.save(COV+"/etn",c_mio)
np.save(COV+"/dym",c_dym)
np.save(COV+"/stm",c_stm)
###Output
_____no_output_____
###Markdown
MFPT
###Code
path = "dynamic_results/"+file_name+"/Multiple_run"
MFPTs = path+"/MFPT"
MFPTs
orig_graphs = load_origin_graph(file_name,gap=299)
etn_gen = load_ETNgen_graph(file_name)
dym_gen = load_dym_graph(file_name)
tag_gen = load_tag_graph(file_name)
stm_gen = load_stm_graph(file_name)
print(MFPTs)
K = 1
m_ori = ds.MFPT(orig_graphs,K)
m_ori2 = [x for x in m_ori if x < max(m_ori)-10]
print(1)
m_stb = ds.MFPT(orig_graphs,K)
m_stb2 = [x for x in m_stb if x < max(m_ori)-10]
print(2)
m_stm,m_mio,m_tag,m_dym = [],[],[],[]
c = 0
for stm in stm_gen:
c = c + 1
print("\t",c)
tmp = ds.MFPT(stm,K)
tmp = [x for x in tmp if x < max(m_ori)-10]
m_stm.append(tmp)
print("STM done")
c = 0
for etn in etn_gen:
c = c + 1
print("\t",c)
tmp = ds.MFPT(etn,K)
tmp = [x for x in tmp if x < max(m_ori)-10]
m_mio.append(tmp)
print("ETN done")
c = 0
for tag in tag_gen:
c = c + 1
print("\t",c)
tmp = ds.MFPT(tag,K)
tmp = [x for x in tmp if x < max(m_ori)-10]
m_tag.append(tmp)
print("TAG done")
c = 0
for dym in dym_gen:
c = c + 1
print("\t",c)
tmp = ds.MFPT(dym,K)
tmp = [x for x in tmp if x < max(m_ori)-10]
m_dym.append(tmp)
print("SYM done")
print("orig vs sta \t",ks_2samp(m_ori2, m_stb2)[0])
print("orig vs mio \t",mean_ks(m_ori2, m_mio))
print("orig vs mio \t",mean_ks(m_ori2, m_tag))
print("orig vs mio \t",mean_ks(m_ori2, m_dym))
print("orig vs mio \t",mean_ks(m_ori2, m_stm))
np.save(MFPTs+"/stab",m_stb2)
np.save(MFPTs+"/orig",m_ori2)
np.save(MFPTs+"/tag",m_tag)
np.save(MFPTs+"/etn",m_mio)
np.save(MFPTs+"/dym",m_dym)
np.save(MFPTs+"/stm",m_stm)
MFPTs
###Output
_____no_output_____
###Markdown
SIR model
###Code
import os
path = "dynamic_results/"+file_name+"/Multiple_run"
R0 = path+"/R0/"
la025 = R0+"la025"
la015 = R0+"la015"
la001 = R0+"la001"
if not os.path.exists(la001):
os.makedirs(la025)
os.makedirs(la015)
os.makedirs(la001)
path
orig_graphs = load_origin_graph(file_name,gap=299)
etn_gen = load_ETNgen_graph(file_name)
dym_gen = load_dym_graph(file_name)
tag_gen = load_tag_graph(file_name)
stm_gen = load_stm_graph(file_name)
for lambd in [0.25,0.15,0.01]:
mu =0.005
K = 100
if lambd == 0.25:
la = la025
if lambd == 0.15:
la = la015
if lambd == 0.01:
la = la001
print("R0 lambda",lambd,lambd)
r_ori = ds.compute_r0(K,orig_graphs,lambd,mu)
r_sta = ds.compute_r0(K,orig_graphs,lambd,mu)
r_etn,r_stm,r_tag,r_dym = [],[],[],[]
for etn in etn_gen:
r_etn.append(ds.compute_r0(K,etn,lambd,mu))
print("Done ETN")
for stm in stm_gen:
r_stm.append(ds.compute_r0(K,stm,lambd,mu))
print("Done STM")
for tag in tag_gen:
r_tag.append(ds.compute_r0(K,tag,lambd,mu))
print("Done TAG")
for dym in dym_gen:
r_dym.append(ds.compute_r0(K,dym,lambd,mu))
print("Done DYM")
print("orig vs sta \t",ks_2samp(r_ori, r_sta)[0])
print("orig vs etn \t",mean_ks(r_ori, r_etn))
print("orig vs stm \t",mean_ks(r_ori, r_stm))
print("orig vs tag \t",mean_ks(r_ori, r_tag))
print("orig vs dym \t",mean_ks(r_ori, r_dym))
np.save(la+"/stab",r_sta)
np.save(la+"/orig",r_ori)
np.save(la+"/tag",r_tag)
np.save(la+"/etn",r_etn)
np.save(la+"/dym",r_dym)
np.save(la+"/stm",r_stm)
###Output
R0 lambda 0.25 0.25
Done ETN
Done STM
Done TAG
Done DYM
orig vs sta 0.13
orig vs etn (0.28700000000000003, 0.0681248853210044)
orig vs stm (0.583, 0.027586228448267452)
orig vs tag (0.643, 0.25239057034683365)
orig vs dym (0.184, 0.0749933330370107)
R0 lambda 0.15 0.15
Done ETN
Done STM
Done TAG
Done DYM
orig vs sta 0.1
orig vs etn (0.181, 0.04437341546466759)
orig vs stm (0.6260000000000001, 0.033823069050575534)
orig vs tag (0.5770000000000001, 0.26555790329041234)
orig vs dym (0.11200000000000002, 0.05706137047074842)
R0 lambda 0.01 0.01
Done ETN
Done STM
Done TAG
Done DYM
orig vs sta 0.03
orig vs etn (0.033, 0.024103941586387897)
orig vs stm (0.541, 0.07867019766086775)
orig vs tag (0.306, 0.194381069037085)
orig vs dym (0.043, 0.04405678154382137)
###Markdown
plot
###Code
ORIGINAL_COLOR = "#4C4C4C"
ETN_COLOR = "#5100FF"
STM_COLOR = "#FF6A74"
TAG_COLOR = "#63CA82"
DYM_COLOR = "#FFD579"
ORIGINAL_COLOR = "#020005"
ETN_COLOR = "#ffb000"
STM_COLOR = "#762f22"
TAG_COLOR = "#f3e79d"
DYM_COLOR = "#785478"#"#503850"
ORIGINAL_COLOR = "#020005"
ETN_COLOR = "#20639b"
STM_COLOR = "#3caea3"
TAG_COLOR = "#f6d55c"
DYM_COLOR = "#ed553b"
ORIGINAL_COLOR = "#020005"
ETN_COLOR = "#F3AA20"
STM_COLOR = "#2A445E"
TAG_COLOR = "#841E62"
DYM_COLOR = "#346B6D"
ORIGINAL_COLOR = '#474747' #dark grey
ETN_COLOR = '#fb7041' #'#E5865E' # arancio
TAG_COLOR = '#96ccc8' # light blue
STM_COLOR = '#bad1f2' #8F2E27' # rosso
DYM_COLOR = '#559ca6' # teal
line_width = 1.5
import os
def load_cov(file_name):
ori = np.load("dynamic_results/"+file_name+"/Multiple_run/coverage/orig.npy")
stb = np.load("dynamic_results/"+file_name+"/Multiple_run/coverage/stab.npy")
etn = np.load("dynamic_results/"+file_name+"/Multiple_run/coverage/etn.npy")
tag = np.load("dynamic_results/"+file_name+"/Multiple_run/coverage/tag.npy")
stm = np.load("dynamic_results/"+file_name+"/Multiple_run/coverage/stm.npy")
dym = np.load("dynamic_results/"+file_name+"/Multiple_run/coverage/dym.npy")
return ori,stb,etn,tag,stm,dym
def load_mfpt(file_name):
ori = np.load("dynamic_results/"+file_name+"/Multiple_run/MFPT/orig.npy")
stb = np.load("dynamic_results/"+file_name+"/Multiple_run/MFPT/stab.npy")
etn = np.load("dynamic_results/"+file_name+"/Multiple_run/MFPT/etn.npy", allow_pickle=True)
tag = np.load("dynamic_results/"+file_name+"/Multiple_run/MFPT/tag.npy", allow_pickle=True)
stm = np.load("dynamic_results/"+file_name+"/Multiple_run/MFPT/stm.npy", allow_pickle=True)
dym = np.load("dynamic_results/"+file_name+"/Multiple_run/MFPT/dym.npy", allow_pickle=True)
return ori,stb,etn,tag,stm,dym
def load_r0(file_name,lambd="la001"):
ori = np.load("dynamic_results/"+file_name+"/Multiple_run/R0/"+lambd+"/orig.npy")
stb = np.load("dynamic_results/"+file_name+"/Multiple_run/R0/"+lambd+"/stab.npy")
etn = np.load("dynamic_results/"+file_name+"/Multiple_run/R0/"+lambd+"/etn.npy")
tag = np.load("dynamic_results/"+file_name+"/Multiple_run/R0/"+lambd+"/tag.npy")
stm = np.load("dynamic_results/"+file_name+"/Multiple_run/R0/"+lambd+"/stm.npy")
dym = np.load("dynamic_results/"+file_name+"/Multiple_run/R0/"+lambd+"/dym.npy")
return ori,stb,etn,tag,stm,dym
file_name = "InVS13"
cov = load_cov(file_name)
mfpt = load_mfpt(file_name)
ro_025 = load_r0(file_name,"la025")
ro_015 = load_r0(file_name,"la015")
ro_001 = load_r0(file_name,"la001")
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
def compute_ks_cov_mfpt(cov,mfpt):
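    # KS statistics against the original distributions (index 0 of cov/mfpt):
    # index 1 is the stability baseline (a re-run on the original data), while the
    # remaining entries are the generative models, averaged over their runs.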
mfpt_ms = [ks_2samp(mfpt[0],mfpt[1])[0]]
cov_ms = [ks_2samp(cov[0],cov[1])[0]]
for i in range(len(cov)-2):
tmp = []
tmp2 = []
for j in cov[i+2]:
tmp.append(ks_2samp(cov[0], j)[0])
for j in mfpt[i+2]:
tmp2.append(ks_2samp(mfpt[0], j)[0])
cov_ms.append([np.mean(tmp),np.std(tmp)])
mfpt_ms.append([np.mean(tmp2),np.std(tmp2)])
return cov_ms,mfpt_ms
def plot_cov_mfpt(ax,file_name,legend=False):
if file_name == "LH10":
#ax.set_title("Hospital")
ax.set_ylabel("Hospital")
if file_name == "InVS13":
ax.set_ylabel("Workplace")
if file_name == "High_School11":
ax.set_ylabel("High school")
cov = load_cov(file_name)
mfpt = load_mfpt(file_name)
x = np.arange(2)
cov_ms, mfpt_ms = compute_ks_cov_mfpt(cov,mfpt)
x1 = np.array([cov_ms[0],mfpt_ms[0]])
x2 = np.array([cov_ms[1],mfpt_ms[1]])
x3 = np.array([cov_ms[2],mfpt_ms[2]])
x4 = np.array([cov_ms[3],mfpt_ms[3]])
x5 = np.array([cov_ms[4],mfpt_ms[4]])
error_bar_style = dict(ecolor=ORIGINAL_COLOR, alpha=0.8, lw=1.5, capsize=4, capthick=1)
width = 0.2
rects1 = ax.bar(x - 0.3, x2[:,0], width, yerr=x2[:,1], label='ETN-gen',color=ETN_COLOR, error_kw=error_bar_style)
rects4 = ax.bar(x - 0.1, x3[:,0], width, yerr=x3[:,1], label='STM',color=STM_COLOR, error_kw=error_bar_style)
rects5 = ax.bar(x + 0.1, x5[:,0], width, yerr=x5[:,1], label='TagGen',color=TAG_COLOR, error_kw=error_bar_style)
rects4 = ax.bar(x + 0.3 , x4[:,0], width, yerr=x4[:,1], label='Dymond',color=DYM_COLOR, error_kw=error_bar_style)
ax.plot([-0.45,0.45],[x1[0],x1[0]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([1-0.45,1.45],[x1[1],x1[1]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.tick_params(bottom=False, right=False,left=False)
ax.set_axisbelow(True)
ax.yaxis.grid(True, color='lightgrey')
ax.xaxis.grid(False)
#ax.yaxis.grid(True, color='#FFFFFF')
#ax.set_facecolor('#EFEFEF')
#ax.xaxis.grid(False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_color('#DDDDDD')
labels = ["Coverage","MFPT"]
ax.set_xticks(x)
ax.set_xticklabels(labels,rotation=0)
ax.set_ylim((0,1))
file_name == "High_School11"
cov = load_cov(file_name)
mfpt = load_mfpt(file_name)
x = np.arange(2)
cov_ms, mfpt_ms = compute_ks_cov_mfpt(cov,mfpt)
def compute_ks_r0(la25,la15,la01):
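    # Same idea for the R0 distributions at the three infection rates
    # (lambda = 0.25, 0.15 and 0.01): stability baseline first, then mean/std per model.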
orig_25 = la25[0]
orig_15 = la15[0]
orig_01 = la01[0]
res = [[[ks_2samp(la25[0], la25[1])[0]],[ks_2samp(la15[0], la15[1])[0]],[ks_2samp(la01[0], la01[1])[0]]]]
for i in range(len(la25)-2):
i = i+2
tmp = []
for j in la25[i]:
tmp.append(ks_2samp(orig_25, j)[0])
ks_25 = np.array([np.mean(tmp),np.std(tmp)])
tmp = []
for j in la15[i]:
tmp.append(ks_2samp(orig_15, j)[0])
ks_15 = np.array([np.mean(tmp),np.std(tmp)])
tmp = []
for j in la01[i]:
tmp.append(ks_2samp(orig_01, j)[0])
ks_01 = np.array([np.mean(tmp),np.std(tmp)])
res.append(np.array([ks_25,ks_15,ks_01]))
return res
def plot_r0(ax,file_name,legend=False):
#if file_name == "LH10":
#ax.set_title("Hospital")
# ax.set_ylabel("Hospital")
#if file_name == "InVS13":
# ax.set_ylabel("Workplace")
#if file_name == "High_School11":
# ax.set_ylabel("High school")
r0_025 = load_r0(file_name,"la025")
r0_015 = load_r0(file_name,"la015")
r0_001 = load_r0(file_name,"la001")
x1,x2,x3,x4,x5 = compute_ks_r0(r0_025,r0_015,r0_001)
x = np.arange(3)
width = 0.2
error_bar_style = dict(ecolor=ORIGINAL_COLOR, alpha=0.8, lw=1.5, capsize=4, capthick=1)
rects1 = ax.bar(x - 0.3, x2[:,0], width, label='ETN-gen',color=ETN_COLOR, yerr=x2[:,1], error_kw=error_bar_style)
rects4 = ax.bar(x - 0.1, x3[:,0], width, label='STM',color=STM_COLOR, yerr=x3[:,1], error_kw=error_bar_style)
rects5 = ax.bar(x + 0.1, x5[:,0], width, label='TagGen',color=TAG_COLOR, yerr=x5[:,1], error_kw=error_bar_style)
rects4 = ax.bar(x + 0.3 , x4[:,0], width, label='Dymond',color=DYM_COLOR, yerr=x4[:,1], error_kw=error_bar_style)
ax.plot([-0.45,0.45],[x1[0],x1[0]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([1-0.45,1.45],[x1[1],x1[1]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([2-0.45,2.45],[x1[2],x1[2]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.tick_params(bottom=False, right=False,left=False)
ax.set_axisbelow(True)
#ax.yaxis.grid(True, color='#FFFFFF')
#ax.set_facecolor('#EFEFEF')
#ax.xaxis.grid(False)
ax.yaxis.grid(True, color='lightgrey')
ax.xaxis.grid(False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_color('#DDDDDD')
labels = [r"$\lambda$ = 0.25",r"$\lambda$ = 0.15",r"$\lambda$ = 0.01"]
ax.set_xticks(x)
ax.set_xticklabels(labels,rotation=0)
ax.set_ylim((0,1))
if legend:
c = 0
ORIGINAL_COLOR = "#020005"
ETN_COLOR = "#F3AA20"
STM_COLOR = "#2A445E"
TAG_COLOR = "#841E62"
DYM_COLOR = "#346B6D"
ORIGINAL_COLOR = '#474747' #dark grey
ETN_COLOR = '#fb7041' #'#E5865E' # arancio
TAG_COLOR = '#96ccc8' # light blue
STM_COLOR = '#bad1f2' #8F2E27' # rosso
DYM_COLOR = '#559ca6' # teal
line_width = 1.5
fig, ax = plt.subplots(3,2, figsize=(6,6), gridspec_kw={'width_ratios': [2, 3]})
fig.tight_layout(h_pad=1,w_pad=0)
fig.text(0.16, 1, 'Random walk',fontdict={'size':14})
fig.text(0.66, 1, 'SIR model',fontdict={'size':14})
plot_cov_mfpt(ax[0][0],"LH10")
plot_r0(ax[0][1],"LH10")
plot_cov_mfpt(ax[1][0],"InVS13")
plot_r0(ax[1][1],"InVS13")
plot_cov_mfpt(ax[2][0],"High_School11",legend=True)
plot_r0(ax[2][1],"High_School11",legend=True)
legend_elements = [Line2D([0], [0], color=ORIGINAL_COLOR, lw=3,label='Original'),
Patch(facecolor=ETN_COLOR, edgecolor=ETN_COLOR,label='ETN-Gen'),
Patch(facecolor=STM_COLOR, edgecolor=STM_COLOR,label='STM'),
Patch(facecolor=TAG_COLOR, edgecolor=TAG_COLOR,label='TagGen'),
Patch(facecolor=DYM_COLOR, edgecolor=DYM_COLOR,label='Dymond')]
# Create the figure
ax[2][0].legend(handles=legend_elements,loc='center left', bbox_to_anchor=(-0.2, -0.4),ncol=5)
fig.savefig("dynamic_main_test_WithE_V3.pdf", bbox_inches = 'tight')
def plot_cov_mfpt2(ax,file_name,legend=False):
if file_name == "LH10":
#ax.set_title("Hospital")
ax.set_title("Hospital")
if file_name == "InVS13":
ax.set_title("Workplace")
if file_name == "High_School11":
ax.set_title("High school")
cov = load_cov(file_name)
mfpt = load_mfpt(file_name)
x = np.arange(2)
cov_ms, mfpt_ms = compute_ks_cov_mfpt(cov,mfpt)
x1 = np.array([cov_ms[0],mfpt_ms[0]])
x2 = np.array([cov_ms[1],mfpt_ms[1]])
x3 = np.array([cov_ms[2],mfpt_ms[2]])
x4 = np.array([cov_ms[3],mfpt_ms[3]])
x5 = np.array([cov_ms[4],mfpt_ms[4]])
error_bar_style = dict(ecolor=ORIGINAL_COLOR, alpha=0.8, lw=1.5, capsize=4, capthick=1)
width = 0.2
rects1 = ax.bar(x - 0.3, x2[:,0], width, yerr=x2[:,1], label='ETN-gen',color=ETN_COLOR, error_kw=error_bar_style)
rects4 = ax.bar(x - 0.1, x3[:,0], width, yerr=x3[:,1], label='STM',color=STM_COLOR, error_kw=error_bar_style)
rects5 = ax.bar(x + 0.1, x5[:,0], width, yerr=x5[:,1], label='TagGen',color=TAG_COLOR, error_kw=error_bar_style)
rects4 = ax.bar(x + 0.3 , x4[:,0], width, yerr=x4[:,1], label='Dymond',color=DYM_COLOR, error_kw=error_bar_style)
ax.plot([-0.45,0.45],[x1[0],x1[0]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([1-0.45,1.45],[x1[1],x1[1]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.tick_params(bottom=False, right=False,left=False)
ax.set_axisbelow(True)
ax.yaxis.grid(True, color='lightgrey')
ax.xaxis.grid(False)
#ax.yaxis.grid(True, color='#FFFFFF')
#ax.set_facecolor('#EFEFEF')
#ax.xaxis.grid(False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_color('#DDDDDD')
labels = ["Coverage","MFPT"]
ax.set_xticks(x)
ax.set_xticklabels(labels,rotation=0)
ax.set_ylim((0,1))
def plot_r02(ax,file_name,legend=False):
#if file_name == "LH10":
#ax.set_title("Hospital")
# ax.set_ylabel("Hospital")
#if file_name == "InVS13":
# ax.set_ylabel("Workplace")
#if file_name == "High_School11":
# ax.set_ylabel("High school")
r0_025 = load_r0(file_name,"la025")
r0_015 = load_r0(file_name,"la015")
r0_001 = load_r0(file_name,"la001")
x1,x2,x3,x4,x5 = compute_ks_r0(r0_025,r0_015,r0_001)
x = np.arange(3)
width = 0.2
error_bar_style = dict(ecolor=ORIGINAL_COLOR, alpha=0.8, lw=1.5, capsize=4, capthick=1)
rects1 = ax.bar(x - 0.3, x2[:,0], width, label='ETN-gen',color=ETN_COLOR, yerr=x2[:,1], error_kw=error_bar_style)
rects4 = ax.bar(x - 0.1, x3[:,0], width, label='STM',color=STM_COLOR, yerr=x3[:,1], error_kw=error_bar_style)
rects5 = ax.bar(x + 0.1, x5[:,0], width, label='TagGen',color=TAG_COLOR, yerr=x5[:,1], error_kw=error_bar_style)
rects4 = ax.bar(x + 0.3 , x4[:,0], width, label='Dymond',color=DYM_COLOR, yerr=x4[:,1], error_kw=error_bar_style)
ax.plot([-0.45,0.45],[x1[0],x1[0]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([1-0.45,1.45],[x1[1],x1[1]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([2-0.45,2.45],[x1[2],x1[2]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.tick_params(bottom=False, right=False,left=False)
ax.set_axisbelow(True)
#ax.yaxis.grid(True, color='#FFFFFF')
#ax.set_facecolor('#EFEFEF')
#ax.xaxis.grid(False)
ax.yaxis.grid(True, color='lightgrey')
ax.xaxis.grid(False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_color('#DDDDDD')
labels = [r"$\lambda$ = 0.25",r"$\lambda$ = 0.15",r"$\lambda$ = 0.01"]
ax.set_xticks(x)
ax.set_xticklabels(labels,rotation=0)
ax.set_ylim((0,1))
if legend:
c = 0
fig, ax = plt.subplots(2,3, figsize=(12,6))#, gridspec_kw={'width_ratios': [2, 3]})
fig.tight_layout(h_pad=2,w_pad=-1)
plot_cov_mfpt2(ax[0][0],"LH10")
plot_r02(ax[1][0],"LH10")
plot_cov_mfpt2(ax[0][1],"InVS13")
plot_r02(ax[1][1],"InVS13")
plot_cov_mfpt2(ax[0][2],"High_School11",legend=True)
plot_r02(ax[1][2],"High_School11",legend=True)
ax[0][1].tick_params(axis='y', colors='white')
ax[1][1].tick_params(axis='y', colors='white')
ax[0][2].tick_params(axis='y', colors='white')
ax[1][2].tick_params(axis='y', colors='white')
legend_elements = [Line2D([0], [0], color=ORIGINAL_COLOR, lw=3,label='Original'),
Patch(facecolor=ETN_COLOR, edgecolor=ETN_COLOR,label='ETN-Gen'),
Patch(facecolor=STM_COLOR, edgecolor=STM_COLOR,label='STM'),
Patch(facecolor=TAG_COLOR, edgecolor=TAG_COLOR,label='TagGen'),
Patch(facecolor=DYM_COLOR, edgecolor=DYM_COLOR,label='Dymond')]
# Create the figure
ax[1][0].legend(handles=legend_elements,loc='center left', bbox_to_anchor=(-0.2, -0.2),ncol=5)
fig.text(-0.01, 0.66, 'Random walk',fontdict={'size':14,'color':'#4d4d4d'},weight="bold",rotation=90)
fig.text(-0.01, 0.2, 'SIR model',fontdict={'size':14,'color':'#4d4d4d'},weight="bold",rotation=90)
fig.savefig("dynamic_main_test_WithE_V4.pdf", bbox_inches = 'tight')
def plot_cov_mfpt3(ax,file_name,legend=False):
if file_name == "LH10":
#ax.set_title("Hospital")
ax.set_title("Hospital")
if file_name == "InVS13":
ax.set_title("Workplace")
if file_name == "High_School11":
ax.set_title("High school")
cov = load_cov(file_name)
mfpt = load_mfpt(file_name)
x = np.arange(2)
cov_ms, mfpt_ms = compute_ks_cov_mfpt(cov,mfpt)
x1 = np.array([cov_ms[0],mfpt_ms[0]])
x2 = np.array([cov_ms[1],mfpt_ms[1]])
x3 = np.array([cov_ms[2],mfpt_ms[2]])
x4 = np.array([cov_ms[3],mfpt_ms[3]])
x5 = np.array([cov_ms[4],mfpt_ms[4]])
error_bar_style = dict(ecolor=ORIGINAL_COLOR, alpha=0.8, lw=1.5, capsize=4, capthick=1)
width = 0.2
rects1 = ax.bar(x - 0.3, x2[:,0], width, yerr=x2[:,1], label='ETN-gen',color=ETN_COLOR, error_kw=error_bar_style)
rects4 = ax.bar(x - 0.1, x3[:,0], width, yerr=x3[:,1], label='STM',color=STM_COLOR, error_kw=error_bar_style)
rects5 = ax.bar(x + 0.1, x5[:,0], width, yerr=x5[:,1], label='TagGen',color=TAG_COLOR, error_kw=error_bar_style)
rects4 = ax.bar(x + 0.3 , x4[:,0], width, yerr=x4[:,1], label='Dymond',color=DYM_COLOR, error_kw=error_bar_style)
ax.plot([-0.45,0.45],[x1[0],x1[0]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([1-0.45,1.45],[x1[1],x1[1]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.tick_params(bottom=False, right=False,left=False)
ax.set_axisbelow(True)
ax.yaxis.grid(True, color='lightgrey')
ax.xaxis.grid(False)
#ax.yaxis.grid(True, color='#FFFFFF')
#ax.set_facecolor('#EFEFEF')
#ax.xaxis.grid(False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_color('#DDDDDD')
labels = ["Coverage","MFPT"]
ax.set_xticks(x)
ax.set_xticklabels(labels,rotation=0)
ax.set_ylim((0,1))
def plot_r03(ax,file_name,legend=False):
if file_name == "LH10":
#ax.set_title("Hospital")
ax.set_title("Hospital")
if file_name == "InVS13":
ax.set_title("Workplace")
if file_name == "High_School11":
ax.set_title("High school")
r0_025 = load_r0(file_name,"la025")
r0_015 = load_r0(file_name,"la015")
r0_001 = load_r0(file_name,"la001")
x1,x2,x3,x4,x5 = compute_ks_r0(r0_025,r0_015,r0_001)
x = np.arange(3)
width = 0.2
error_bar_style = dict(ecolor=ORIGINAL_COLOR, alpha=0.8, lw=1.5, capsize=4, capthick=1)
rects1 = ax.bar(x - 0.3, x2[:,0], width, label='ETN-gen',color=ETN_COLOR, yerr=x2[:,1], error_kw=error_bar_style)
rects4 = ax.bar(x - 0.1, x3[:,0], width, label='STM',color=STM_COLOR, yerr=x3[:,1], error_kw=error_bar_style)
rects5 = ax.bar(x + 0.1, x5[:,0], width, label='TagGen',color=TAG_COLOR, yerr=x5[:,1], error_kw=error_bar_style)
rects4 = ax.bar(x + 0.3 , x4[:,0], width, label='Dymond',color=DYM_COLOR, yerr=x4[:,1], error_kw=error_bar_style)
ax.plot([-0.45,0.45],[x1[0],x1[0]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([1-0.45,1.45],[x1[1],x1[1]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.plot([2-0.45,2.45],[x1[2],x1[2]],linewidth=3, label='Stability',color=ORIGINAL_COLOR)
ax.tick_params(bottom=False, right=False,left=False)
ax.set_axisbelow(True)
#ax.yaxis.grid(True, color='#FFFFFF')
#ax.set_facecolor('#EFEFEF')
#ax.xaxis.grid(False)
ax.yaxis.grid(True, color='lightgrey')
ax.xaxis.grid(False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_color('#DDDDDD')
labels = [r"$\lambda$ = 0.25",r"$\lambda$ = 0.15",r"$\lambda$ = 0.01"]
ax.set_xticks(x)
ax.set_xticklabels(labels,rotation=0)
ax.set_ylim((0,1))
if legend:
c = 0
def empty_plot(ax):
ax.plot()
ax.yaxis.grid(False)
ax.xaxis.grid(False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
# No ticks
ax.set_xticks([])
ax.set_yticks([])
fig, ax = plt.subplots(1,7, figsize=(12,3), gridspec_kw={'width_ratios': [1,1,1,0.2,1.5,1.5,1.5]})
fig.tight_layout(w_pad=-1)
plot_cov_mfpt3(ax[0],"LH10")
plot_cov_mfpt3(ax[1],"InVS13")
plot_cov_mfpt3(ax[2],"High_School11",legend=True)
empty_plot(ax[3])
plot_r03(ax[4],"LH10")
plot_r03(ax[5],"InVS13")
plot_r03(ax[6],"High_School11",legend=True)
ax[1].tick_params(axis='y', colors='white')
ax[2].tick_params(axis='y', colors='white')
#ax[4].tick_params(axis='y', colors='white')
ax[5].tick_params(axis='y', colors='white')
ax[6].tick_params(axis='y', colors='white')
legend_elements = [Line2D([0], [0], color=ORIGINAL_COLOR, lw=3,label='Original'),
Patch(facecolor=ETN_COLOR, edgecolor=ETN_COLOR,label='ETN-Gen'),
Patch(facecolor=STM_COLOR, edgecolor=STM_COLOR,label='STM'),
Patch(facecolor=TAG_COLOR, edgecolor=TAG_COLOR,label='TagGen'),
Patch(facecolor=DYM_COLOR, edgecolor=DYM_COLOR,label='Dymond')]
# Create the figure
ax[0].legend(handles=legend_elements,loc='center left', bbox_to_anchor=(-0.2, -0.2),ncol=5)
fig.text(0.17, 1.08, 'Random walk',fontdict={'size':14,'color':'#4d4d4d'},weight="bold")
fig.text(0.682, 1.08, 'SIR model',fontdict={'size':14,'color':'#4d4d4d'},weight="bold")
fig.savefig("dynamic_main_test_WithE_V5.pdf", bbox_inches = 'tight')
###Output
_____no_output_____ |
notebooks/transformation-base.ipynb | ###Markdown
Fine-tuning T5-base for summarization

Based on the code from https://github.com/huggingface/notebooks/blob/master/examples/summarization.ipynb
###Code
import sys, os
ON_COLAB = 'google.colab' in sys.modules
if ON_COLAB:
GIT_ROOT = 'https://github.com/furyhawk/text_summarization/raw/master'
os.system(f'wget {GIT_ROOT}/notebooks/setup.py')
%run -i setup.py
%run "$BASE_DIR/settings.py"
%reload_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'png'
# to print output of all statements and not just the last
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# otherwise text between $ signs will be interpreted as formula and printed in italic
pd.set_option('display.html.use_mathjax', False)
# path to import blueprints packages
sys.path.append(BASE_DIR + '/packages')
###Output
_____no_output_____
###Markdown
If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets as well as other dependencies. Uncomment the following cell and run it.
###Code
! pip install datasets transformers rouge-score nltk
###Output
Requirement already satisfied: datasets in c:\users\furyx\miniconda3\envs\text\lib\site-packages (1.12.1)
Requirement already satisfied: transformers in c:\users\furyx\miniconda3\envs\text\lib\site-packages (4.11.3)
Requirement already satisfied: rouge-score in c:\users\furyx\miniconda3\envs\text\lib\site-packages (0.0.4)
Requirement already satisfied: nltk in c:\users\furyx\miniconda3\envs\text\lib\site-packages (3.6.3)
Requirement already satisfied: numpy>=1.17 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (1.21.2)
Requirement already satisfied: tqdm>=4.62.1 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (4.62.3)
Requirement already satisfied: dill in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (0.3.4)
Requirement already satisfied: pandas in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (1.3.3)
Requirement already satisfied: pyarrow!=4.0.0,>=1.0.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (5.0.0)
Requirement already satisfied: packaging in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (21.0)
Requirement already satisfied: xxhash in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (2.0.2)
Requirement already satisfied: multiprocess in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (0.70.12.2)
Requirement already satisfied: requests>=2.19.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (2.26.0)
Requirement already satisfied: huggingface-hub<0.1.0,>=0.0.14 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (0.0.19)
Requirement already satisfied: aiohttp in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (3.7.4.post0)
Requirement already satisfied: fsspec[http]>=2021.05.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from datasets) (2021.10.0)
Requirement already satisfied: pyyaml>=5.1 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from transformers) (5.4.1)
Requirement already satisfied: filelock in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from transformers) (3.3.0)
Requirement already satisfied: sacremoses in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from transformers) (0.0.43)
Requirement already satisfied: regex!=2019.12.17 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from transformers) (2021.9.30)
Requirement already satisfied: tokenizers<0.11,>=0.10.1 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from transformers) (0.10.3)
Requirement already satisfied: absl-py in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from rouge-score) (0.14.0)
Requirement already satisfied: six>=1.14.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from rouge-score) (1.16.0)
Requirement already satisfied: joblib in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from nltk) (1.0.1)
Requirement already satisfied: click in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from nltk) (8.0.3)
Requirement already satisfied: typing-extensions in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from huggingface-hub<0.1.0,>=0.0.14->datasets) (3.10.0.2)
Requirement already satisfied: pyparsing>=2.0.2 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from packaging->datasets) (2.4.7)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from requests>=2.19.0->datasets) (2021.10.8)
Requirement already satisfied: idna<4,>=2.5 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from requests>=2.19.0->datasets) (3.1)
Requirement already satisfied: charset-normalizer~=2.0.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from requests>=2.19.0->datasets) (2.0.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from requests>=2.19.0->datasets) (1.26.7)
Requirement already satisfied: colorama in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from tqdm>=4.62.1->datasets) (0.4.4)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from aiohttp->datasets) (1.7.0)
Requirement already satisfied: chardet<5.0,>=2.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from aiohttp->datasets) (4.0.0)
Requirement already satisfied: attrs>=17.3.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from aiohttp->datasets) (21.2.0)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from aiohttp->datasets) (5.2.0)
Requirement already satisfied: async-timeout<4.0,>=3.0 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from aiohttp->datasets) (3.0.1)
Requirement already satisfied: pytz>=2017.3 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from pandas->datasets) (2021.1)
Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\furyx\miniconda3\envs\text\lib\site-packages (from pandas->datasets) (2.8.2)
###Markdown
If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed. To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow. First, you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!), then execute the following cell and input your username and password:
###Code
from huggingface_hub import notebook_login
#notebook_login()
###Output
_____no_output_____
###Markdown
Then you need to install Git-LFS. Uncomment the following instructions:
###Code
# !apt install git-lfs # for linux
# !git lfs install # for windows
import os.path
import pandas as pd
import numpy as np
from tqdm import tqdm  # progress bar used in loadDataset below
###Output
_____no_output_____
###Markdown
Load the BBC News Summary dataset

1. Remove newlines from the text elements.
2. Partition the first line of each article off as the title column.
3. Align each summary (label) with its news article (input).
###Code
root_path = f'../data/BBC News Summary'
# root_path = f'/kaggle/input/bbc-news-summary/BBC News Summary'
def loadDataset(root_path):
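    # For each article category, read every article file, take its first line as the
    # title, flatten the remaining text to a single line, and pair it with the
    # corresponding summary file.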
types_of_articles = ['business',
'entertainment', 'politics', 'sport', 'tech']
df = pd.DataFrame(columns=['title', 'article', 'summary'])
for type_of_article in types_of_articles:
# type_of_article = 'business' # entertainment, politices, sport, tech
num_of_article = len(os.listdir(
f"{root_path}/News Articles/{type_of_article}"))
print(f'"Reading {type_of_article} articles"')
dataframe = pd.DataFrame(columns=['title', 'article', 'summary'])
for i in tqdm(range(num_of_article)):
with open(f'{root_path}/News Articles/{type_of_article}/{(i+1):03d}.txt', 'r', encoding="utf8", errors='ignore') as f:
article = f.read().partition("\n")
with open(f'{root_path}/Summaries/{type_of_article}/{(i+1):03d}.txt', 'r', encoding="utf8", errors='ignore') as f:
summary = f.read()
dataframe.loc[i] = [article[0], article[2].replace(
'\n', ' ').replace('\r', ''), summary]
df = df.append(dataframe, ignore_index=True)
return df
fname = 'bbc.csv'
if os.path.isfile(fname):
df = pd.read_csv(fname)
else:
df = loadDataset(root_path)
df.to_csv(fname, index=False)
df.head()
df.info()
#df.to_csv('bbc.csv')
###Output
_____no_output_____
###Markdown
Make sure your version of Transformers is at least 4.11.0 since the functionality was introduced in that version:
###Code
import transformers
print(transformers.__version__)
###Output
4.11.3
###Markdown
You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq).

Fine-tuning a model on a summarization task

In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models for a summarization task. We will use the [BBC dataset](https://www.kaggle.com/pariza/bbc-news-summary), which contains BBC news articles together with their summaries. We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using the `Trainer` API.
###Code
model_checkpoint = "t5-base"
###Output
_____no_output_____
###Markdown
This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`t5-base`](https://huggingface.co/t5-base) checkpoint.

Loading the dataset

We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to build the dataset and get the metric we need for evaluation (to compare our model to the benchmark). Since the articles were already loaded into a dataframe above, we build the `Dataset` directly from it with `Dataset.from_pandas` and only use `load_metric` here.
###Code
from datasets import Dataset, DatasetDict, load_metric
raw_datasets = Dataset.from_pandas(df) # Load from dataframe created earlier
metric = load_metric("rouge")
###Output
_____no_output_____
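###Markdown
As an aside, the same data could also be loaded straight from the `bbc.csv` file written earlier rather than going through pandas. A hedged alternative (assuming the CSV sits in the working directory), left commented out so the DataFrame-based loading above remains the one actually used:
###Code
# from datasets import load_dataset
# raw_datasets = load_dataset("csv", data_files="bbc.csv", split="train")
###Output
_____no_output_____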
###Markdown
Because we built it from a pandas DataFrame, `raw_datasets` is for now a single `Dataset`; after the split below it becomes a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict) with one key each for the training, validation and test sets:
###Code
raw_datasets
raw_datasets.info
###Output
_____no_output_____
###Markdown
Choose whether to use the news headline or the summary as the label:
###Code
# raw_datasets = raw_datasets.remove_columns("title")
###Output
_____no_output_____
###Markdown
Splitting the dataset into train, validation and test splits
###Code
# 90% train, 10% test + validation
train_testvalid = raw_datasets.train_test_split(test_size=0.1)
# Split the 10% test + validation in half test, half validation
test_valid = train_testvalid["test"].train_test_split(test_size=0.5)
# gather everyone if you want to have a single DatasetDict
raw_datasets = DatasetDict({
"train": train_testvalid["train"],
"test": test_valid["test"],
"valid": test_valid["train"]})
raw_datasets
###Output
_____no_output_____
###Markdown
To access an actual element, you need to select a split first, then give an index:
###Code
raw_datasets["train"][0]
###Output
_____no_output_____
###Markdown
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.
###Code
import datasets
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=5):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, datasets.ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
display(HTML(df.to_html()))
show_random_elements(raw_datasets["train"])
###Output
_____no_output_____
###Markdown
The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):
###Code
metric
###Output
_____no_output_____
###Markdown
You can call its `compute` method with your predictions and labels, which need to be lists of decoded strings:
###Code
fake_preds = ["hello there", "general kenobi"]
fake_labels = ["hello there", "general kenobi"]
metric.compute(predictions=fake_preds, references=fake_labels)
###Output
_____no_output_____
###Markdown
Preprocessing the data Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in the format the model expects, as well as generate the other inputs the model requires. To do all of this, instantiate a tokenizer with the `AutoTokenizer.from_pretrained` method, which ensures we get a tokenizer that corresponds to the model architecture used and downloads the vocabulary used when pretraining this specific checkpoint. That vocabulary is cached, so it's not downloaded again the next time we run the cell.
###Code
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) # t5-base
###Output
_____no_output_____
###Markdown
By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library. You can directly call this tokenizer on one sentence or a pair of sentences:
###Code
tokenizer("Hello, this one sentence!")
###Output
_____no_output_____
###Markdown
To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:
###Code
with tokenizer.as_target_tokenizer():
print(tokenizer(["Hello, this one sentence!", "This is another sentence."]))
sentence = "He didnt want to talk about cells on the cell phone because he considered it boring"
inputs = tokenizer.encode(sentence, return_tensors='pt', add_special_tokens=True) # return PyTorch tensors
tokens = tokenizer.convert_ids_to_tokens(list(inputs[0])) # Extract sample of batch index 0 from inputs list of lists
print(tokens)
###Output
['▁He', '▁didn', 't', '▁want', '▁to', '▁talk', '▁about', '▁cells', '▁on', '▁the', '▁cell', '▁phone', '▁because', '▁', 'he', '▁considered', '▁it', '▁boring', '</s>']
###Markdown
If you are using one of the five T5 checkpoints, you have to prefix the inputs with "summarize: " (the model can also translate, and it needs the prefix to know which task to perform).
###Code
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
prefix = "summarize: "
else:
prefix = ""
###Output
_____no_output_____
###Markdown
We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This ensures that an input longer than what the selected model can handle is truncated to the maximum length the model accepts. The padding will be dealt with later on (in a data collator), so we pad examples to the longest length in the batch rather than to the whole dataset.
###Code
max_input_length = 1024
max_target_length = 128
def preprocess_function(examples):
# x var for summarization. Column article.
inputs = [prefix + doc for doc in examples["article"]]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets y. We are using column summary as label.
with tokenizer.as_target_tokenizer():
labels = tokenizer(examples["summary"], max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
###Output
_____no_output_____
###Markdown
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:
###Code
preprocess_function(raw_datasets['train'][:2])
###Output
_____no_output_____
###Markdown
To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.
###Code
# raw_datasets['train'] = raw_datasets['train'].shard(num_shards=100, index=3)
# raw_datasets['validation'] = raw_datasets['validation'].shard(num_shards=100, index=3)
# raw_datasets['test'] = raw_datasets['test'].shard(num_shards=100, index=3)
print(raw_datasets.num_rows)
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
###Output
100%|██████████| 3/3 [00:00<00:00, 3.28ba/s]
100%|██████████| 1/1 [00:00<00:00, 16.13ba/s]
100%|██████████| 1/1 [00:00<00:00, 18.18ba/s]
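###Markdown
If you ever want to redo this preprocessing from scratch instead of reusing the cache discussed just below, the call would look like this (left commented out on purpose):
###Code
# Force the preprocessing to run again instead of loading the cached result
# tokenized_datasets = raw_datasets.map(preprocess_function, batched=True, load_from_cache_file=False)
###Output
_____no_output_____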
###Markdown
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to `map` has changed (and thus requires not using the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to ignore the cached files and force the preprocessing to be applied again, as sketched above. Note that we passed `batched=True` to encode the texts by batches together. This leverages the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to process the texts in a batch concurrently. Fine-tuning the model Now that the data is ready, download the pretrained model and fine-tune it. Since the task is of the sequence-to-sequence kind, use the `AutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model.
###Code
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
###Output
_____no_output_____
###Markdown
Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case. To instantiate a `Seq2SeqTrainer`, we will need to define three more things. The most important is the [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments), a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model; all other arguments are optional. Here we use a batch size of 6 (sized for roughly 32 GB of working memory) and nudge the learning rate up from the usual 2e-5 to 3e-4.
###Code
batch_size = 6
model_name = model_checkpoint.split("/")[-1]
args = Seq2SeqTrainingArguments(
f"{model_name}-finetuned-bbc",
evaluation_strategy = "epoch",
learning_rate=3e-4,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=1,
predict_with_generate=True,
fp16=False,
push_to_hub=True,
)
###Output
_____no_output_____
###Markdown
Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the `batch_size` defined at the top of the cell and customize the weight decay. Since the `Seq2SeqTrainer` will save the model regularly and our dataset is quite large, we tell it to make three saves maximum. We use the `predict_with_generate` option (to properly generate summaries) and leave mixed precision training disabled here (`fp16=False`). The last argument, `push_to_hub=True`, sets everything up so we can push the model to the [Hub](https://huggingface.co/models) regularly during training. Remove it if you didn't follow the installation steps at the top of the notebook. If you want to save your model locally under a name that is different from the repository it will be pushed to, or if you want to push your model under an organization rather than your own namespace, use the `hub_model_id` argument to set the repo name (it needs to be the full name, including your namespace: for instance `"sgugger/t5-finetuned-xsum"` or `"huggingface/t5-finetuned-xsum"`). Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels:
###Code
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
###Output
_____no_output_____
###Markdown
The last thing to define for our `Seq2SeqTrainer` is how to compute the metrics from the predictions. We need to define a function for this, which will just use the `metric` we loaded earlier, and we have to do a bit of pre-processing to decode the predictions into texts:
###Code
import nltk
import numpy as np
def compute_metrics(eval_pred):
predictions, labels = eval_pred
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Rouge expects a newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
# Extract a few results
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
# Add mean generated length
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
result["gen_len"] = np.mean(prediction_lens)
return {k: round(v, 4) for k, v in result.items()}
###Output
_____no_output_____
###Markdown
Then we just need to pass all of this along with our datasets to the `Seq2SeqTrainer`:
###Code
!git lfs install
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["valid"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
###Output
w:\workspace\text_summarization\notebooks\t5-base-finetuned-bbc is already a clone of https://huggingface.co/furyhawk/t5-base-finetuned-bbc. Make sure you pull the latest changes with `repo.git_pull()`.
###Markdown
We can now finetune our model by just calling the `train` method:
###Code
import nltk
nltk.download('punkt')
trainer.train()
###Output
The following columns in the training set don't have a corresponding argument in `T5ForConditionalGeneration.forward` and have been ignored: title, summary, category, article.
***** Running training *****
Num examples = 2002
Num Epochs = 1
Instantaneous batch size per device = 6
Total train batch size (w. parallel, distributed & accumulation) = 6
Gradient Accumulation steps = 1
Total optimization steps = 334
100%|██████████| 334/334 [4:24:29<00:00, 44.91s/it]The following columns in the evaluation set don't have a corresponding argument in `T5ForConditionalGeneration.forward` and have been ignored: title, summary, category, article.
***** Running Evaluation *****
Num examples = 111
Batch size = 6
100%|██████████| 334/334 [4:32:03<00:00, 44.91s/it]
Training completed. Do not forget to share your model on huggingface.co/models =)
100%|██████████| 334/334 [4:32:03<00:00, 48.87s/it]
###Markdown
Upload the trained model to the Hugging Face Hub:
###Code
trainer.push_to_hub()
###Output
Saving model checkpoint to t5-base-finetuned-bbc
Configuration saved in t5-base-finetuned-bbc\config.json
Model weights saved in t5-base-finetuned-bbc\pytorch_model.bin
tokenizer config file saved in t5-base-finetuned-bbc\tokenizer_config.json
Special tokens file saved in t5-base-finetuned-bbc\special_tokens_map.json
Copy vocab file to t5-base-finetuned-bbc\spiece.model
Upload file pytorch_model.bin: 99%|█████████▉| 846M/850M [00:55<00:00, 16.7MB/s]To https://huggingface.co/furyhawk/t5-base-finetuned-bbc
f587d43..a4aea5b main -> main
Upload file pytorch_model.bin: 100%|██████████| 850M/850M [00:57<00:00, 15.5MB/s]
Upload file training_args.bin: 100%|██████████| 2.86k/2.86k [00:56<?, ?B/s]
Dropping the following result as it does not have all the necessary field:
{'task': {'name': 'Sequence-to-sequence Language Modeling', 'type': 'text2text-generation'}}
To https://huggingface.co/furyhawk/t5-base-finetuned-bbc
a4aea5b..40bcfef main -> main
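###Markdown
Once the push has finished, the checkpoint can be loaded back for inference. A minimal sketch, assuming the repository name shown in the log above and a `sample_article` string you supply yourself:
###Code
from transformers import pipeline

summarizer = pipeline("summarization", model="furyhawk/t5-base-finetuned-bbc")
sample_article = "Your BBC-style news article text goes here."  # placeholder input
# Depending on the checkpoint's config you may need to prepend "summarize: " yourself
print(summarizer(sample_article, max_length=128, min_length=30, do_sample=False))
###Output
_____no_output_____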
|
models/NAIVE_BAYES/submission.ipynb | ###Markdown
NB model on kaggle-pet competition DATA LOADING
###Code
import pandas as pd
import numpy as np
import os
os.listdir('../../data')
assert 'out_breed.csv' in os.listdir('../../data')  # this assert breaks if the data is configured incorrectly
breeds = pd.read_csv('../../data/out_breed.csv')
colors = pd.read_csv('../../data/out_color.csv')
states = pd.read_csv('../../data/out_state.csv')
train = pd.read_csv('../../data/out_train.csv')
test = pd.read_csv('../../data/out_test.csv')
sub = pd.read_csv('../../data/out_submission.csv')
###Output
_____no_output_____
###Markdown
MODEL Ensemble
###Code
string_cols = ["Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PhotoAmt","VideoAmt","PetID"]
categorical_col = ["Type","Gender","Vaccinated","Dewormed","Sterilized","Breed1","Breed2","Color1","Color2","Color3","State"]
numerical_col = [col for col in train.columns if col not in string_cols and col not in categorical_col and col != "AdoptionSpeed"]
mapping_sizes = [2, 2, 3, 3, 3, 307, 307, 7, 7, 7, 15]
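# mapping_sizes presumably gives, in the same order as categorical_col, the number of possible
# values for each categorical feature; the ensemble builds one multinomial NB per categorical
# column (plus a Gaussian NB for the numerical features), as the log below shows.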
X = pd.concat([train[numerical_col], train[categorical_col]], axis=1)
Y = train['AdoptionSpeed']
from ensembleNaiveBayes import PredictiveModel
model = PredictiveModel("validation ensemble")
model.validation(X, Y, mapping_sizes, verbose=False)
model.validation(X, Y, mapping_sizes, method=2, verbose=False)
###Output
Mon Mar 25 11:41:54 2019 [base-gaussianNB.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Type.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Gender.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Vaccinated.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Dewormed.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Sterilized.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Breed1.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Breed2.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Color1.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Color2.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Color3.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-State.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-gaussianNB.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Type.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Gender.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Vaccinated.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Dewormed.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Sterilized.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Breed1.__init__] initialized succesfully
Mon Mar 25 11:41:54 2019 [base-multinomialNB-Breed2.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color1.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color2.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color3.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-State.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-gaussianNB.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Type.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Gender.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Vaccinated.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Dewormed.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Sterilized.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Breed1.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Breed2.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color1.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color2.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color3.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-State.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-gaussianNB.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Type.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Gender.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Vaccinated.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Dewormed.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Sterilized.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Breed1.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Breed2.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color1.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color2.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-Color3.__init__] initialized succesfully
Mon Mar 25 11:41:55 2019 [base-multinomialNB-State.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-gaussianNB.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Type.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Gender.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Vaccinated.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Dewormed.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Sterilized.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Breed1.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Breed2.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Color1.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Color2.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-Color3.__init__] initialized succesfully
Mon Mar 25 11:41:56 2019 [base-multinomialNB-State.__init__] initialized succesfully
###Markdown
Submission score
###Code
0.172
###Output
_____no_output_____
###Markdown
Meta Features
###Code
help(model.generate_meta_train)
meta_train = model.generate_meta_train(X, Y, mapping_sizes, n_folds = 5)
meta_train
###Output
_____no_output_____ |
notebooks/time_series/raw/ex6.ipynb | ###Markdown
Introduction Run this cell to set everything up!
###Code
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.time_series.ex6 import *
# Setup notebook
from pathlib import Path
import ipywidgets as widgets
from learntools.time_series.style import * # plot style settings
from learntools.time_series.utils import (create_multistep_example,
load_multistep_data,
make_lags,
make_multistep_target,
plot_multistep)
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import RegressorChain
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBRegressor
comp_dir = Path('../input/store-sales-time-series-forecasting')
store_sales = pd.read_csv(
comp_dir / 'train.csv',
usecols=['store_nbr', 'family', 'date', 'sales', 'onpromotion'],
dtype={
'store_nbr': 'category',
'family': 'category',
'sales': 'float32',
'onpromotion': 'uint32',
},
parse_dates=['date'],
infer_datetime_format=True,
)
store_sales['date'] = store_sales.date.dt.to_period('D')
store_sales = store_sales.set_index(['store_nbr', 'family', 'date']).sort_index()
family_sales = (
store_sales
.groupby(['family', 'date'])
.mean()
.unstack('family')
.loc['2017']
)
test = pd.read_csv(
comp_dir / 'test.csv',
dtype={
'store_nbr': 'category',
'family': 'category',
'onpromotion': 'uint32',
},
parse_dates=['date'],
infer_datetime_format=True,
)
test['date'] = test.date.dt.to_period('D')
test = test.set_index(['store_nbr', 'family', 'date']).sort_index()
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------- Consider the following three forecasting tasks: a. a 3-step forecast using 4 lag features with a 2-step lead time; b. a 1-step forecast using 3 lag features with a 1-step lead time; c. a 3-step forecast using 4 lag features with a 1-step lead time. Run the next cell to see three datasets, each representing one of the tasks above.
###Code
datasets = load_multistep_data()
data_tabs = widgets.Tab([widgets.Output() for _ in enumerate(datasets)])
for i, df in enumerate(datasets):
data_tabs.set_title(i, f'Dataset {i+1}')
with data_tabs.children[i]:
display(df)
display(data_tabs)
###Output
_____no_output_____
###Markdown
1) Match description to dataset. Can you match each task to the appropriate dataset?
###Code
# YOUR CODE HERE: Match the task to the dataset. Answer 1, 2, or 3.
task_a = ____
task_b = ____
task_c = ____
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
task_a = 2
task_b = 1
task_c = 3
q_1.assert_check_passed()
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------- Look at the time indexes of the training and test sets. From this information, can you identify the forecasting task for *Store Sales*?
###Code
print("Training Data", "\n" + "-" * 13 + "\n", store_sales)
print("\n")
print("Test Data", "\n" + "-" * 9 + "\n", test)
###Output
_____no_output_____
###Markdown
2) Identify the forecasting task for the *Store Sales* competition. Try to identify the *forecast origin* and the *forecast horizon*. How many steps are within the forecast horizon? What is the lead time for the forecast? Run this cell after you've thought about your answer.
###Code
# View the solution (Run this cell to receive credit!)
q_2.check()
###Output
_____no_output_____
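###Markdown
Not graded, but as a quick sanity check you can count the distinct forecast dates in the test set yourself; the number should line up with the 16 steps used later in this exercise:
###Code
# Number of distinct dates the test set asks us to predict (the forecast horizon, in days)
print(test.index.get_level_values("date").nunique())
###Output
_____no_output_____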
###Markdown
------------------------------------------------------------------------------- In the tutorial we saw how to create a multistep dataset for a single time series. Fortunately, we can use exactly the same procedure for datasets of multiple series. 3) Create a multistep dataset for *Store Sales*. Create targets suitable for the *Store Sales* forecasting task. Use 4 days of lag features. Drop any missing values from both targets and features.
###Code
# YOUR CODE HERE
y = family_sales.loc[:, 'sales']
# YOUR CODE HERE: Make 4 lag features
X = ____
# YOUR CODE HERE: Make multistep target
y = ____
#_UNCOMMENT_IF(PROD)_
#y, X = y.align(X, join='inner', axis=0)
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
#%%RM_IF(PROD)%%
y = family_sales.loc[:, 'sales']
X = make_lags(y, lags=4).dropna()
y = make_multistep_target(y, steps=16).dropna()
y, X = y.align(X, join='inner', axis=0)
q_3.assert_check_passed()
###Output
_____no_output_____
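###Markdown
For reference, `make_lags` and `make_multistep_target` are imported from `learntools` in the setup cell. A rough sketch of what such shift-based helpers typically look like (an illustration of the idea only, not the library's exact code, so it is left commented out):
###Code
# def make_lags_sketch(ts, lags):
#     return pd.concat({f"y_lag_{i}": ts.shift(i) for i in range(1, lags + 1)}, axis=1)
#
# def make_multistep_target_sketch(ts, steps):
#     return pd.concat({f"y_step_{i + 1}": ts.shift(-i) for i in range(steps)}, axis=1)
###Output
_____no_output_____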
###Markdown
------------------------------------------------------------------------------- In the tutorial, we saw how to forecast with the MultiOutput and Direct strategies on the *Flu Trends* series. Now, you'll apply the DirRec strategy to the multiple time series of *Store Sales*. Make sure you've successfully completed the previous exercise, then run this cell to prepare the data for XGBoost.
###Code
le = LabelEncoder()
X = (X
.stack('family') # wide to long
.reset_index('family') # convert index to column
.assign(family=lambda x: le.fit_transform(x.family)) # label encode
)
y = y.stack('family') # wide to long
display(y)
###Output
_____no_output_____
###Markdown
4) Forecast with the DirRec strategy. Instantiate a model that applies the DirRec strategy to XGBoost.
###Code
from sklearn.multioutput import RegressorChain
# YOUR CODE HERE
model = ____
# Check your answer
q_4.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_4.hint()
#_COMMENT_IF(PROD)_
q_4.solution()
#%%RM_IF(PROD)%%
from sklearn.multioutput import RegressorChain
model = RegressorChain(XGBRegressor())
q_4.assert_check_passed()
###Output
_____no_output_____
###Markdown
Run this cell if you'd like to train this model.
###Code
model.fit(X, y)
y_pred = pd.DataFrame(
model.predict(X),
index=y.index,
columns=y.columns,
).clip(0.0)
###Output
_____no_output_____
###Markdown
And use this code to see a sample of the 16-step predictions this model makes on the training data.
###Code
FAMILY = 'BEAUTY'
START = '2017-04-01'
EVERY = 16
y_pred_ = y_pred.xs(FAMILY, level='family', axis=0).loc[START:]
y_ = family_sales.loc[START:, 'sales'].loc[:, FAMILY]
fig, ax = plt.subplots(1, 1, figsize=(11, 4))
ax = y_.plot(**plot_params, ax=ax, alpha=0.5)
ax = plot_multistep(y_pred_, ax=ax, every=EVERY)
_ = ax.legend([FAMILY, FAMILY + ' Forecast'])
###Output
_____no_output_____
|
D'wave tutorials/5.D-wave Classical Solvers.ipynb | ###Markdown
Solving a QUBO with dimod Samplers\begin{equation}H_{1}^{QUBO}=-4.4x_{1}^2+0.6x_{2}^2-2x_{3}^2+2.8x_{1}x_{2}-0.8x_{2}x_{3}+2.4\end{equation} In the code below, binary variables 0, 1 and 2 stand for $x_{1}$, $x_{2}$ and $x_{3}$ in this Hamiltonian.
###Code
import dimod
import numpy as np

linear = {0: -4.4, 1: 0.6, 2: -2}
quadratic = {(0,1): 2.8, (1,2):-0.8}
offset = 2.4
bqm_qubo = dimod.BinaryQuadraticModel(linear,quadratic,offset,dimod.Vartype.BINARY)
print(bqm_qubo)
print('\n',bqm_qubo.to_numpy_matrix().astype(float))
###Output
BinaryQuadraticModel({0: -4.4, 1: 0.6, 2: -2.0}, {(0, 1): 2.8, (1, 2): -0.8}, 2.4, 'BINARY')
[[-4.4 2.8 0. ]
[ 0. 0.6 -0.8]
[ 0. 0. -2. ]]
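###Markdown
Because this QUBO has only three binary variables, we can also enumerate all $2^3$ assignments with dimod's brute-force reference sampler and check that the minimum energy matches the -4.0 reported by the heuristic samplers below:
###Code
# Brute-force enumeration of all 8 assignments (only sensible for tiny problems)
exact_sampleset = dimod.ExactSolver().sample(bqm_qubo)
print(exact_sampleset.first)
###Output
_____no_output_____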
###Markdown
Using dwave.neal. The neal package also provides a simulated annealing sampler.
###Code
import neal
sampleset = neal.SimulatedAnnealingSampler().sample(bqm_qubo, num_reads=10)
print(sampleset.to_pandas_dataframe())
###Output
0 1 2 energy num_occurrences
0 1 0 1 -4.0 1
1 1 0 1 -4.0 1
2 1 0 1 -4.0 1
3 1 0 1 -4.0 1
4 1 0 1 -4.0 1
5 1 0 1 -4.0 1
6 1 0 1 -4.0 1
7 1 0 1 -4.0 1
8 1 0 1 -4.0 1
9 1 0 1 -4.0 1
###Markdown
Using dwave.greedy. An implementation of a steepest descent solver for binary quadratic models. Steepest descent is the discrete analogue of gradient descent, but the best move is computed using a local minimization rather than computing a gradient. At each step, we determine the dimension along which to descend based on the highest energy drop caused by a variable flip.
###Code
import greedy
sampleset = greedy.SteepestDescentSolver().sample(bqm_qubo)
print(sampleset.to_pandas_dataframe())
import greedy
sampleset = greedy.SteepestDescentSampler().sample(bqm_qubo)
print(sampleset.to_pandas_dataframe())
###Output
0 1 2 energy num_occurrences
0 1 0 1 -4.0 1
###Markdown
Using dwave.tabu. The TabuSampler implements the MST2 multistart tabu search algorithm for quadratic unconstrained binary optimization (QUBO) problems, with a dimod Python wrapper.
###Code
import tabu
sampleset = tabu.TabuSampler().sample(bqm_qubo,num_reads=5)
print(sampleset.to_pandas_dataframe())
##### Creates Random QUBO of specific size ##########
def Random_QUBO(n):
Z = np.zeros((n,n))
for i in range(n):
for j in range(n):
if i==j:
Z[i,j]=np.random.uniform(-2,2)
elif i!=j:
Z[i,j]=Z[j,i]=np.random.uniform(-1,1)
qubo = dimod.BinaryQuadraticModel.from_numpy_matrix(Z)
return qubo.to_numpy_matrix().astype(float)
###Output
_____no_output_____
###Markdown
Using dwave.qbsolv
###Code
from dwave_qbsolv import QBSolv
w = Random_QUBO(15) # creating random QUBO
sampleset = QBSolv().sample_qubo(w, solver_limit=10)  # decomposed into subproblems of at most 10 variables
print(sampleset)
###Output
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 energy num_oc.
0 1 0 1 0 1 0 0 1 1 1 0 1 0 1 0 -11.916516 51
['BINARY', 1 rows, 51 samples, 15 variables]
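###Markdown
QBSolv can also hand the subproblems it creates to an explicit sub-sampler through its `solver` argument; the sketch below reuses the `neal` sampler imported earlier, but treat the exact signature as an assumption to verify against your installed qbsolv version:
###Code
# Decompose the 15-variable QUBO and solve each subproblem with simulated annealing
# sampleset = QBSolv().sample_qubo(w, solver=neal.SimulatedAnnealingSampler(), solver_limit=10)
# print(sampleset)
###Output
_____no_output_____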
|
use_model_example.ipynb | ###Markdown
Using a trained model to separate audio files. In order to use a trained model to run separation on an audio file, we need to: (1) load the model from a checkpoint, and (2) perform the same audio processing that was used to transform the audio into features during training. The class AudioSeparator implements the loading of the model from a checkpoint and the instantiation of the validation set (of the dataset that was used to train the model). The validation set implements the audio processing performed during training, so we can use it for our purpose. *Note*: instantiating the AudioSeparator class will create the validation set, which can be slow if the validation set creation requires a lot of work (e.g. loading a lot of files into RAM). While this is acceptable for most datasets, for practical applications it should be avoided.
###Code
# Some imports that we will need
import librosa # for audio saving
import torch
import numpy as np
from separator import AudioSeparator
###Output
_____no_output_____
###Markdown
The AudioSeparator class needs 2 parameters: the checkpoint of the model to load, and the path to a folder to store the separated audio. We won't need to use the folder, so we can pass any string for this argument.
###Code
separated_audio_folder = "" # anything will do
# Path to the trained model checkpoint
model_ckpt = 'path_to_model_checkpoint.ckpt'
# Instantiate the AudioSeparator
separator = AudioSeparator.from_checkpoint({"checkpoint_path": model_ckpt, "separated_audio_folder": separated_audio_folder})
###Output
_____no_output_____
###Markdown
Now to load the audio that we want to perform source separation upon:
###Code
# Load the audio the same way as during training
audio = separator.data_set.load_audio("path_to_wav_to_separate.wav")
###Output
_____no_output_____
###Markdown
Compute the audio features in the same way as during training:
###Code
# Compute short-time Fourier transform
magnitude, phase = separator.data_set.separated_stft(audio)
# Go from magnitude spectrogram to actual features used during training
features = separator.data_set.stft_magnitude_to_features(magnitude=magnitude)
features = torch.tensor(features).unsqueeze(0) # convert to torch tensor and add channel dimension
# Scale the features as done during the training
if separator.data_set.config['scaling_type'].lower() != "none":
features = separator.data_set.shift_and_scale_features(features,
separator.data_set.config['shift'],
separator.data_set.config['scaling'])
###Output
_____no_output_____
###Markdown
(Most) models can only process input features of a fixed shape, so the features need to be split into chunks of the right shape. The frequency and channel shapes are decided by the processing, so we just need to split along the time dimension.
###Code
features_shape = separator.data_set.features_shape() # (channel, frequency, time)
# Make chunks along time dimension, and stack them in a newly created batch dimension
# Note: the last chunk which would have a smaller size than required is discarded (equivalent to truncate input audio)
# shape of batch: [n_chunks, channel, frequency, time]
batch = torch.stack([features[..., i*features_shape[-1]:(i+1)*features_shape[-1]]
for i in range(features.shape[-1]//features_shape[-1])], 0)
_, masks = separator.model(batch) # Labels have no utility for separation
###Output
_____no_output_____
###Markdown
Shape of masks: (n_chunks, n_classes, frequency, time). In order to separate a specific class, we need to know which mask to select. The classes used in training are listed in separator.data_set.classes:
###Code
print('\n'.join("%s: %s" % (class_name,idx)
for (idx, class_name) in {idx: class_name for idx, class_name in enumerate(separator.data_set.classes)}.items()))
class_idx = 9 # Fill here the class you are interested in !
# Example to plot the masks and spectrograms
# import matplotlib.pyplot as plt
# chunk_idx = 4
# fig, axs = plt.subplots(1, 2, figsize=(10, 5))
# h0 = axs[0].imshow(masks[chunk_idx][class_idx].detach(), aspect='auto', origin='lower')
# h1 = axs[1].imshow(batch[chunk_idx].detach().squeeze(), aspect='auto', origin='lower')
# plt.tight_layout()
# plt.show()
###Output
_____no_output_____
###Markdown
Get the separated spectrograms for all the sources in the data set:
###Code
spectrograms = [separator.separate_spectrogram_in_lin_scale(masks[i].detach(),
features_shape,
magnitude[..., i*features_shape[-1]:(i+1)*features_shape[-1]])
for i in range(batch.shape[0])]
###Output
_____no_output_____
###Markdown
Now select the class we are interested in:
###Code
# Select the class we are interested in:
class_spectrograms = [spec[class_idx].squeeze() for spec in spectrograms]
###Output
_____no_output_____
###Markdown
Put the spectrograms together to have a single spectrogram for the entire recording
###Code
# concatenate along time dimension to produce a single spectrogram for the entire recording:
source_spectrogram = np.concatenate(class_spectrograms, axis=-1)
###Output
_____no_output_____
###Markdown
Synthesize the separated audio from the separated spectrogram and the mixture phase:
###Code
# We need to truncate the phase too.
separated_audio = separator.spectrogram_to_audio(source_spectrogram, phase[..., :source_spectrogram.shape[-1]])
###Output
_____no_output_____
###Markdown
To save the audio to file:
###Code
librosa.output.write_wav("path_to_output.wav", separated_audio, sr=separator.data_set.config['sampling_rate'])
###Output
_____no_output_____ |
notebooks/Step_1_Test_Classifier_&_Create_Input_For_Explainer.ipynb | ###Markdown
Imports
###Code
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import sys
sys.path.append("/ocean/projects/asc170022p/singla/ExplainingBBSmoothly/")
import yaml
from utils import *
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score
from sklearn.metrics import confusion_matrix
import scipy.misc as scm
import warnings
warnings.filterwarnings("ignore")
warnings.filterwarnings("ignore", category=DeprecationWarning)
np.random.seed(0)
main_dir = '/ocean/projects/asc170022p/singla/ExplainingBBSmoothly'
###Output
_____no_output_____
###Markdown
Get classifier training config
###Code
config = os.path.join(main_dir, 'configs/Step_1_StanfordCheXpert_Classifier_256.yaml')
config = yaml.load(open(config))
for k in config.keys():
print(k, config[k])
categories = config['categories'].split(',')
###Output
('output_folder_name', 'Classifier_Output_MIMIC')
('partition', 'test')
('feature_names', 'dense_2,dense_3,dense_4')
('image_dir', '/ocean/projects/asc170022p/shared/Data/chestXRayDatasets/StanfordCheXpert')
('output_csv', '/ocean/projects/asc170022p/singla/ExplainingBBSmoothly/data/MIMIC_CXR_PA_AP_views_image_report_labels.csv')
('do_center_crop', True)
('batch_size', 400)
('num_channel', 1)
('epochs', 10)
('seed', 0)
('classifier_type', 'DenseNet')
('training_columns_to_repeat', '')
('uncertain_label', 0)
('categories', 'Lung Lesion,Pleural Effusion,Edema,Cardiomegaly,Consolidation,Support Devices,No Finding,Pneumonia,Fracture,Atelectasis,Pneumothorax,Enlarged Cardiomediastinum,Lung Opacity,Pleural Other')
('crop_size', 225)
('num_class', 14)
('name', 'StanfordCheXpert_256')
('ckpt_dir_continue', 'output/classifier/StanfordCheXpert_256')
('use_output_csv', True)
('output_csv_names_column', 'lateral_512_jpeg')
('feature', False)
('path_column', 'Path')
('train', '/ocean/projects/asc170022p/shared/Data/chestXRayDatasets/StanfordCheXpert/CheXpert-v1.0-small/train.csv')
('log_dir', 'output/classifier')
('weights_in_batch', 0)
('test', '/ocean/projects/asc170022p/shared/Data/chestXRayDatasets/StanfordCheXpert/CheXpert-v1.0-small/valid.csv')
('input_size', 256)
('only_frontal', 1)
('names', '')
###Markdown
Check classifier on MIMIC dataset - test set
###Code
output_dir = os.path.join(main_dir,config['log_dir'], config['name'], config['output_folder_name'])
print(output_dir)
# Read classifier output
train_or_test = config['partition']
names = np.load(os.path.join(output_dir, 'name_'+train_or_test+'.npy'),allow_pickle=True)
prediction_y = np.load(os.path.join(output_dir, 'prediction_y_'+train_or_test+'.npy'))
true_y = np.load(os.path.join(output_dir, 'true_y_'+train_or_test+'.npy'))
print(names.shape, prediction_y.shape, true_y.shape)
print('True labels: ', np.unique(true_y))
# Compute Metrics
for i in [1,2,3]:
pred_y = (prediction_y[:,i]>0.5).astype(int)
print(categories[i], i)
print("ROC-AUC: ", roc_auc_score(true_y[:,i], prediction_y[:,i]))
print("Accuracy: ", accuracy_score(true_y[:,i],pred_y))
print("Recall: ", recall_score(true_y[:,i], pred_y))
tp = np.sum((prediction_y[true_y[:,i] == 1,i]>0.5).astype(int))
print("Stats: ", np.unique(true_y[:,i], return_counts=True), tp)
print(confusion_matrix(true_y[:,i], pred_y))
###Output
('Pleural Effusion', 1)
('ROC-AUC: ', 0.8658145410469027)
('Accuracy: ', 0.7574814126394052)
('Recall: ', 0.8485785271303128)
('Stats: ', (array([0., 1.], dtype=float32), array([162473, 52727])), 44743)
[[118267 44206]
[ 7984 44743]]
('Edema', 2)
('ROC-AUC: ', 0.853515091026822)
('Accuracy: ', 0.7722537174721189)
('Recall: ', 0.7812955714176069)
('Stats: ', (array([0., 1.], dtype=float32), array([189142, 26058])), 20359)
[[145830 43312]
[ 5699 20359]]
('Cardiomegaly', 3)
('ROC-AUC: ', 0.7646970077741333)
('Accuracy: ', 0.6134386617100371)
('Recall: ', 0.8118751275249949)
('Stats: ', (array([0., 1.], dtype=float32), array([175992, 39208])), 31832)
[[100180 75812]
[ 7376 31832]]
###Markdown
Select a class to create an explanation for
###Code
current_index = 3
name = categories[current_index]
print(name)
df_explain = pd.DataFrame()
df_explain['names'] = names
df_explain[name] = true_y[:,current_index]
df_explain[name+'_prob'] = prediction_y[:,current_index]
df_explain['bin'] = np.floor(df_explain[name+'_prob'].astype('float') * 10).astype('int')
df_explain = df_explain.drop_duplicates()
print(df_explain.shape)
print(np.unique(df_explain['bin'], return_counts=True))
df_explain.head(2)
#Reliability Curve
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,f1_score)
true_label = np.asarray(df_explain[name]).astype(int)
predicted_prob = np.asarray(df_explain[name+'_prob']).astype(float)
fraction_of_positives, mean_predicted_value = calibration_curve(true_label, predicted_prob, n_bins=10)
clf_score = brier_score_loss(true_label, predicted_prob, pos_label=1)
plt.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s (%1.3f)" % ('Data-before binning', clf_score))
plt.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
plt.ylabel('Fraction of positives')
#plt.ylim([-0.05, 1.05])
plt.title('Calibration plots (reliability curve) for full label for '+ name)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Create Training Data for the Explainer
###Code
n = 4000
np.random.seed(0)
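# Draw n examples from each of the 10 predicted-probability bins so the explainer's
# training data covers the whole range of classifier outputs evenly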
for i in range(0,10):
print i
df_bin = df_explain.loc[df_explain['bin'] == i]
df_bin = df_bin.sample(n=n)
if i == 0:
df_bin_all = df_bin
else:
df_bin_all = pd.concat([df_bin, df_bin_all])
print(df_bin_all.shape)
print(np.unique(df_bin[name],return_counts=True))
print(np.unique(df_bin_all['bin'] ,return_counts=True))
#Reliability Curve
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,
f1_score)
true_label = np.asarray(df_bin_all[name]).astype(int)
predicted_prob = np.asarray(df_bin_all[name+'_prob']).astype(float)
fraction_of_positives, mean_predicted_value = calibration_curve(true_label, predicted_prob, n_bins=10)
clf_score = brier_score_loss(true_label, predicted_prob, pos_label=1)
plt.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s (%1.3f)" % ('Data-after binning', clf_score))
plt.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
plt.ylabel('Fraction of positives')
plt.ylim([-0.05, 1.05])
plt.title('Calibration plots (reliability curve) for '+name)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
[Optional] Create Training Data for the Explainer: Calibrated
###Code
n = 4000
for i in range(10):
print i
df_bin = df_explain.loc[df_explain['bin'] == i]
print(df_bin.shape)
print(np.min(df_bin[name+'_prob']), np.max(df_bin[name+'_prob']))
print(np.unique(df_bin[name],return_counts=True))
df_bin_0 = df_explain.loc[(df_explain['bin'] == i) & (df_explain[name] ==0)]
df_bin_1 = df_explain.loc[(df_explain['bin'] == i) & (df_explain[name] ==1)]
n_0 = int((1 - (0.1 * i) ) * n)
if df_bin_0.shape[0] >= n_0:
df_bin = df_bin_0.sample(n=n_0)
else:
df_bin = df_bin_0
n_0 = df_bin_0.shape[0]
n_1 = n - n_0
if df_bin_1.shape[0] >= n_1:
df_bin = pd.concat([df_bin, df_bin_1.sample(n=n_1)])
else:
df_bin = pd.concat([df_bin, df_bin_1])
if i == 0:
df_bin_all = df_bin
else:
df_bin_all = pd.concat([df_bin, df_bin_all])
print(df_bin_all.shape)
print(np.unique(df_bin[name],return_counts=True))
print(np.unique(df_bin_all['bin'] ,return_counts=True))
#Reliability Curve
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,
f1_score)
true_label = np.asarray(df_bin_all[name]).astype(int)
predicted_prob = np.asarray(df_bin_all[name+'_prob']).astype(float)
fraction_of_positives, mean_predicted_value = calibration_curve(true_label, predicted_prob, n_bins=10)
clf_score = brier_score_loss(true_label, predicted_prob, pos_label=1)
plt.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s (%1.3f)" % ('Data-after binning', clf_score))
plt.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
plt.ylabel('Fraction of positives')
plt.ylim([-0.05, 1.05])
plt.title('Calibration plots (reliability curve) for '+name)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Save output to be read by the explainer
###Code
name = name.replace(' ', '_')
experiment_dir = os.path.join(main_dir,config['log_dir'], config['name'], 'Explainer_MIMIC_'+name)
print(experiment_dir,name)
if not os.path.exists(experiment_dir):
os.makedirs(experiment_dir)
df_temp = df_bin_all[['names', 'bin']]
df_temp.to_csv(os.path.join(experiment_dir, 'list_attr_'+name+'.txt'), sep = ' ', index = None, header = None)
print(df_temp.shape)
one_line = str(df_temp.shape[0]) + '\n'
second_line = "0-0.09 0.1-0.19 0.2-0.29 0.3-0.39 0.4-0.49 0.5-0.59 0.6-0.69 0.7-0.79 0.8-0.89 0.9-0.99\n"
with open(os.path.join(experiment_dir, 'list_attr_'+name+'.txt'), 'r+') as fp:
lines = fp.readlines() # lines is list of line, each element '...\n'
lines.insert(0, one_line) # you can use any index if you know the line index
lines.insert(1, second_line)
fp.seek(0) # file pointer locates at the beginning to write the whole file again
fp.writelines(lines)
fp = open(os.path.join(experiment_dir, 'list_attr_'+name+'.txt'), 'rw')
print(fp.readline())
print(fp.readline())
print(fp.readline())
print(fp.readline())
print(fp.readline())
print(fp.readline())
df_bin_all.to_csv(os.path.join(experiment_dir, 'Data_Output_Classifier_'+name+'.csv'), sep = ' ', index = None)
###Output
(40000, 2)
40000
0-0.09 0.1-0.19 0.2-0.29 0.3-0.39 0.4-0.49 0.5-0.59 0.6-0.69 0.7-0.79 0.8-0.89 0.9-0.99
/ocean/projects/asc170022p/shared/Data/chestXRayDatasets/MIMICCXR/2.0.0/files/p17/p17096578/s57569940/view3_lateral_HE_512.jpeg 9
/ocean/projects/asc170022p/shared/Data/chestXRayDatasets/MIMICCXR/2.0.0/files/p12/p12992793/s53329529/view2_lateral_HE_512.jpeg 9
/ocean/projects/asc170022p/shared/Data/chestXRayDatasets/MIMICCXR/2.0.0/files/p15/p15181240/s56046134/view3_lateral_HE_512.jpeg 9
/ocean/projects/asc170022p/shared/Data/chestXRayDatasets/MIMICCXR/2.0.0/files/p10/p10388400/s58237613/view2_lateral_HE_512.jpeg 9
|
jupyter/classification_tree.ipynb | ###Markdown
Classification using a Decision Tree. This notebook is part of the book **Machine Learning menggunakan Python** by **Fahmi Noor Fiqri**. It contains the example code for **Chapter V - K-NEAREST NEIGHBOR**. Data Preparation
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import cross_validate
# Read the data from a CSV file
df = pd.read_csv(r'../datasets/iris.csv')
# Separate the features and the label
X = df.iloc[:, :-1].values
y = df.iloc[:, 4].values
# Perform label encoding
lb = LabelEncoder()
y = lb.fit_transform(y)
###Output
_____no_output_____
###Markdown
Modelling & Evaluation
###Code
# Build the decision tree model
classifier = Pipeline([
('normalize', StandardScaler()),
('classify', DecisionTreeClassifier(random_state=42, max_depth=3))
])
# Evaluate the model with cross-validation and display classification statistics
scoring = ["accuracy", "precision_macro", "recall_macro", "f1_macro"]
scores = cross_validate(classifier, X, y, scoring=scoring, cv=5, return_estimator=True)
accuracy_mean = np.mean(scores["test_accuracy"])
accuracy_std = np.std(scores["test_accuracy"])
print("Akurasi {:.2f} - Standar deviasi {:.2f}".format(accuracy_mean, accuracy_std))
cols = ["fit_time", "score_time", "classifier", *scoring]
cv_result = pd.DataFrame.from_records(zip(*scores.values()), columns=cols)
cv_result.drop("classifier", axis=1).round(4)
# Get the best classifier found during cross-validation
best_classifier_index = scores["test_accuracy"].argmax()
best_classifier = scores["estimator"][best_classifier_index]
# Display the tree
plt.figure(figsize=(10, 6))
tree_classifier = best_classifier.steps[1][1]
plot_tree(tree_classifier, feature_names=df.columns.values[:-1], class_names=lb.classes_, filled=True)
plt.show()
# Predict on new data
pred_input = [[3.0, 1.2, 2.4, 1.1]] # input data
probabilities = best_classifier.predict_proba(pred_input)  # compute class probabilities
predicted = best_classifier.predict(pred_input)  # predict the class
print("Probabilitas:", probabilities)
print("Hasil klasifikasi:", lb.inverse_transform(predicted))
###Output
Probabilitas: [[1. 0. 0.]]
Hasil klasifikasi: ['Iris-setosa']
|
4_2_Robot_Localization/3_2. Sense Function, solution.ipynb | ###Markdown
Sense Function. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing and updating that distribution. You know these steps well, and this time, you're tasked with writing a function `sense` that encompasses this behavior. 1. The robot starts off knowing nothing; the robot is equally likely to be anywhere and so `p` is a uniform distribution. 2. Then the robot senses a grid color: red or green, and updates this distribution `p` according to the values of pHit and pMiss. * The probability that it is sensing the color correctly is `pHit = 0.6`. * The probability that it is sensing the wrong color is `pMiss = 0.2`
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
QUIZ: Complete the sense function so that it outputs an unnormalized distribution, `q`, after sensing. Use the previous exercise as a starting point. `q = [0.04, 0.12, 0.12, 0.04, 0.04]` should be exactly the distribution you get when the sensor measurement is `Z = 'red'`. This complete function should also output the correct `q` for `Z = 'green'`. Note that `pHit` refers to the probability that the robot correctly senses the color of the square it is on, so if a robot senses red *and* is on a red square, we multiply the current location probability (0.2) by pHit. The same goes for a robot that senses green *and* is on a green square.
###Code
# given initial variables
p=[0.2, 0.2, 0.2, 0.2, 0.2]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
## Complete this function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns an unnormalized distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
return q
q = sense(p,Z)
print(q)
display_map(q)
###Output
[0.04000000000000001, 0.12, 0.12, 0.04000000000000001, 0.04000000000000001]
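###Markdown
Note that `q` is unnormalized (its entries sum to 0.36 rather than 1). A minimal sketch of the usual next step, normalizing `q` into a proper probability distribution (this helper is an illustration and not part of the original quiz code):
```python
def normalize(q):
    # divide each entry by the total so the distribution sums to 1
    total = sum(q)
    return [qi / total for qi in q]

print(normalize(q))
```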
|
notebooks/01 - Data Loading.ipynb | ###Markdown
Data Loading Get some data to play with
###Code
from sklearn.datasets import fetch_openml
blood = fetch_openml('blood-transfusion-service-center')
print(blood.DESCR)
blood.data.shape
blood.data
import pandas as pd
X = pd.DataFrame(blood.data, columns=['recency', 'frequency', 'total_amount', 'since_first'])
blood.target.shape
blood.target
y = pd.Series(blood.target)
y.value_counts()
import matplotlib.pyplot as plt
%matplotlib inline
pd.plotting.scatter_matrix(X, c=y=='2', cmap='Paired', figsize=(10, 10));
###Output
_____no_output_____
###Markdown
**Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)** Split the data to get going
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
blood.data.shape
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Exercises Excercise 1Load the iris dataset from the ``sklearn.datasets`` module using the ``load_iris`` function.The function returns a dictionary-like object that has the same attributes as ``digits``.What is the number of classes, features and data points in this dataset?Use a scatterplot to visualize the dataset.You can look at ``DESCR`` attribute to learn more about the dataset.``print(iris.DESCR)``Split the data into training and test set. Exercise 2Usually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path:```pythonimport sklearn.datasetsimport osiris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')```Load the data from there using pandas ``pd.read_csv`` method and bring it into the same format as before with the data in a variable X and the labels in a variable y. The first few lines of ``iris.csv`` file looks like:```150,4,setosa,versicolor,virginica5.1,3.5,1.4,0.2,04.9,3.0,1.4,0.2,04.7,3.2,1.3,0.2,04.6,3.1,1.5,0.2,0``` http://github.com/amueller/ml-workshop-1-of-4
###Code
# %load solutions/load_iris.py
###Output
_____no_output_____
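###Markdown
One possible sketch for Exercise 2, assuming the file layout shown above (the first row of ``iris.csv`` is metadata, so it is skipped; the column names below are made up for readability):
```python
import os
import pandas as pd
import sklearn.datasets

iris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')
names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'label']
iris_df = pd.read_csv(iris_path, skiprows=1, header=None, names=names)
X = iris_df.drop('label', axis=1)
y = iris_df['label']
print(X.shape, y.value_counts())
```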
###Markdown
Data Loading Get some data to play with
###Code
from sklearn.datasets import fetch_openml
blood = fetch_openml('blood-transfusion-service-center')
print(blood.DESCR)
blood.data.shape
blood.data
import pandas as pd
X = pd.DataFrame(blood.data, columns=['recency', 'frequency', 'total_amount', 'since_first'])
blood.target.shape
blood.target
y = pd.Series(blood.target)
y.value_counts()
import matplotlib.pyplot as plt
pd.plotting.scatter_matrix(X, c=y=='2', cmap='Paired', figsize=(10, 10));
###Output
_____no_output_____
###Markdown
**Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)** Split the data to get going
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X.shape
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Exercises Excercise 1Load the iris dataset from the ``sklearn.datasets`` module using the ``load_iris`` function.The function returns a dictionary-like object that has the same attributes as ``blood``.What is the number of classes, features and data points in this dataset?Use a scatterplot to visualize the dataset.You can look at ``DESCR`` attribute to learn more about the dataset.``print(iris.DESCR)``Split the data into training and test set. Exercise 2Usually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path:```pythonimport sklearn.datasetsimport osiris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')```Load the data from there using pandas ``pd.read_csv`` method and bring it into the same format as before with the data in a variable X and the labels in a variable y. The first few lines of ``iris.csv`` file looks like:```150,4,setosa,versicolor,virginica5.1,3.5,1.4,0.2,04.9,3.0,1.4,0.2,04.7,3.2,1.3,0.2,04.6,3.1,1.5,0.2,0``` http://github.com/amueller/ml-workshop-1-of-4
###Code
# %load solutions/load_iris.py
###Output
_____no_output_____
###Markdown
Data Loading Get some data to play with
###Code
from sklearn.datasets import load_digits
import numpy as np
digits = load_digits()
digits.keys()
digits.data.shape
digits.data
digits.target.shape
digits.target
np.bincount(digits.target)
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(digits.data[0].reshape(8, 8), cmap="gray_r")
digits.target[0]
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for x, y, ax in zip(digits.data, digits.target, axes.ravel()):
ax.set_title(y)
ax.imshow(x.reshape(8, 8), cmap="gray_r")
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)** Split the data to get going
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
digits.data.shape
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Exercises Excercise 1Load the iris dataset from the ``sklearn.datasets`` module using the ``load_iris`` function.The function returns a dictionary-like object that has the same attributes as ``digits``.What is the number of classes, features and data points in this dataset?Use a scatterplot to visualize the dataset.You can look at ``DESCR`` attribute to learn more about the dataset.``print(iris.DESCR)`` Exercise 2Usually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path:```pythonimport sklearn.datasetsimport osiris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')```Load the data from there using pandas ``pd.read_csv`` method and bring it into the same format as before with the data in a variable X and the labels in a variable y. The first few lines of ``iris.csv`` file looks like:```150,4,setosa,versicolor,virginica5.1,3.5,1.4,0.2,04.9,3.0,1.4,0.2,04.7,3.2,1.3,0.2,04.6,3.1,1.5,0.2,0```
###Code
# %load solutions/load_iris.py
###Output
_____no_output_____
###Markdown
Getting used to the jupyter notebook
###Code
print(1)
a = [1, 2]
len(a)
sum(a)
###Output
_____no_output_____
###Markdown
Get help by adding a question-mark next to the function name:
###Code
sum?
###Output
_____no_output_____
###Markdown
You can also press `` + `` after the opening parenthesis of a function:
###Code
sum(a)
###Output
_____no_output_____
###Markdown
Data loading with Pandas
###Code
import pandas as pd
# subset of the 1993 US census
data = pd.read_csv("adult.csv", index_col=0)
data.head()
###Output
_____no_output_____
###Markdown
Simple analysis
###Code
data.shape
data.columns
data.income.value_counts()
%matplotlib inline
data.groupby("income").age.hist()
###Output
_____no_output_____
###Markdown
Splitting into training and test data
###Code
X = data.drop("income", axis=1)
y = data.income
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.head()
X_train.shape
###Output
_____no_output_____
###Markdown
ExercisesLoad the "boston house prices" dataset from the ``boston_house_prices.csv`` file using the ``pd.read_csv`` function (you don't need ``index_column`` here).You can find a description of this dataset in the ``boston_house_prices.txt`` file.This is a regression dataset with "MEDV" the median house value in a block in thousand dollars the target.How many features are there and how many samples?Split the data into a training and a test set for learning.Optionally you can plot MEDV vs any of the features using the ``plot`` method of the dataframe (using ``kind="scatter"``).
###Code
# Try to solve it yourself. You can get a solution by uncommenting the line below.
# %load solutions/load_boston.py
###Output
_____no_output_____
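###Markdown
A minimal sketch for the exercise above, assuming ``boston_house_prices.csv`` sits next to the notebook and has a header row containing the column names (including ``MEDV``):
```python
import pandas as pd
from sklearn.model_selection import train_test_split

boston = pd.read_csv("boston_house_prices.csv")
X = boston.drop("MEDV", axis=1)
y = boston.MEDV
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(X.shape, X_train.shape, X_test.shape)
# optional: boston.plot(kind="scatter", x="RM", y="MEDV")  # 'RM' is one plausible feature name
```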
###Markdown
Data Loading Get some data to play with
###Code
# Wisconsin breast cancer diagnostic dataset
# more info at https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)
import pandas as pd
data = pd.read_csv("data/breast_cancer_wisconsin.csv")
data.head()
###Output
_____no_output_____
###Markdown
Data Set Information:Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. A few of the images can be found at [Web Link]Separating plane described above was obtained using Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree Construction Via Linear Programming." Proceedings of the 4th Midwest Artificial Intelligence and Cognitive Science Society, pp. 97-101, 1992], a classification method which uses linear programming to construct a decision tree. Relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes.The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].This database is also available through the UW CS ftp server:ftp ftp.cs.wisc.educd math-prog/cpo-dataset/machine-learn/WDBC/ Attribute Information:1) ID number2) Diagnosis (M = malignant, B = benign)3-32)Ten real-valued features are computed for each cell nucleus:a) radius (mean of distances from center to points on the perimeter)b) texture (standard deviation of gray-scale values)c) perimeterd) areae) smoothness (local variation in radius lengths)f) compactness (perimeter^2 / area - 1.0)g) concavity (severity of concave portions of the contour)h) concave points (number of concave portions of the contour)i) symmetryj) fractal dimension ("coastline approximation" - 1)
###Code
data.head()
X = data.drop('Class', axis=1)
y = data.Class
X.head()
y.value_counts()
import matplotlib.pyplot as plt
# plot first five features
pd.plotting.scatter_matrix(X.iloc[:, :5], c=y, cmap='Paired', figsize=(10, 10));
###Output
_____no_output_____
###Markdown
**Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)** Split the data to get going
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X.shape
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Materials at https://github.com/amueller/ml-workshop-short ExcerciseLoad the 'Pima Indians Diabetes Database' dataset from openml (https://www.openml.org/d/37).The csv file is at ``data/pima_diabetes.csv``, the target column is ``'class'``.What is the number of classes, features and data points in this dataset?Use a scatterplot to visualize the dataset.Split the data into training and test set.
###Code
# %load solutions/load_pima_diabetes.py
###Output
_____no_output_____
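###Markdown
A possible sketch for the exercise, assuming the csv at ``data/pima_diabetes.csv`` with the target column ``'class'`` as stated above:
```python
import pandas as pd
from sklearn.model_selection import train_test_split

pima = pd.read_csv("data/pima_diabetes.csv")
X = pima.drop("class", axis=1)
y = pima["class"]
print(X.shape, y.value_counts())
pd.plotting.scatter_matrix(X.iloc[:, :5], c=(y == y.unique()[0]), figsize=(10, 10));
X_train, X_test, y_train, y_test = train_test_split(X, y)
```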
###Markdown
Data Loading Get some data to play with
###Code
from sklearn.datasets import load_digits
import numpy as np
digits = load_digits()
digits.keys()
digits.data.shape
digits.data
digits.target.shape
digits.target
np.bincount(digits.target)
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(digits.data[0].reshape(8, 8), cmap="gray_r")
plt.imshow(digits.data[0].reshape(1, -1), cmap="gray_r")
digits.target[0]
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for x, y, ax in zip(digits.data, digits.target, axes.ravel()):
ax.set_title(y)
ax.imshow(x.reshape(8, 8), cmap="gray_r")
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)** Split the data to get going
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
digits.data.shape
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Exercises Excercise 1Load the iris dataset from the ``sklearn.datasets`` module using the ``load_iris`` function.The function returns a dictionary-like object that has the same attributes as ``digits``.What is the number of classes, features and data points in this dataset?Use a scatterplot to visualize the dataset.You can look at ``DESCR`` attribute to learn more about the dataset.``print(iris.DESCR)``Split the data into training and test set. Exercise 2Usually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path:```pythonimport sklearn.datasetsimport osiris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')```Load the data from there using pandas ``pd.read_csv`` method and bring it into the same format as before with the data in a variable X and the labels in a variable y. The first few lines of ``iris.csv`` file looks like:```150,4,setosa,versicolor,virginica5.1,3.5,1.4,0.2,04.9,3.0,1.4,0.2,04.7,3.2,1.3,0.2,04.6,3.1,1.5,0.2,0```
###Code
# %load solutions/load_iris.py
###Output
_____no_output_____
###Markdown
Data Loading Get some data to play with
###Code
from sklearn.datasets import load_digits
import numpy as np
digits = load_digits()
digits.keys()
digits.data.shape
digits.data.shape
digits.target.shape
digits.target
np.bincount(digits.target)
import matplotlib.pyplot as plt
%matplotlib inline
# %matplotlib notebook <- interactive interface
plt.matshow(digits.data[0].reshape(8, 8), cmap=plt.cm.Greys)
digits.target[0]
fig, axes = plt.subplots(4, 4)
for x, y, ax in zip(digits.data, digits.target, axes.ravel()):
ax.set_title(y)
ax.imshow(x.reshape(8, 8), cmap="gray_r")
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)** Split the data to get going
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target, test_size=0.25, random_state=1)
digits.data.shape
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Exercises Excercise 1Load the iris dataset from the ``sklearn.datasets`` module using the ``load_iris`` function.The function returns a dictionary-like object that has the same attributes as ``digits``.What is the number of classes, features and data points in this dataset?Use a scatterplot to visualize the dataset.You can look at ``DESCR`` attribute to learn more about the dataset. Exercise 2Usually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path:```pythonimport sklearn.datasetsimport osiris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')```Load the data from there using pandas ``pd.read_csv`` method and bring it into the same format as before with the data in a variable X and the labels in a variable y.
###Code
# %load solutions/load_iris.py
###Output
_____no_output_____ |
data/gen_data.ipynb | ###Markdown
Prepare data for PFD SPM analysisThis does the following:* Loads CPSP's master historical SPM file, limiting to relevant fields (see [documentation](https://static1.squarespace.com/static/5743308460b5e922a25a6dc7/t/5c8179014785d342b6e63abe/1551988993650/SPM+public+use+data+documentation_02142019.pdf)).* Merges with IPUMS ASEC data to get state.* Outputs data with necessary fields. Setup
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Load data
###Code
spm_raw = pd.read_csv('/home/mghenis/datarepo/pub_spm_master.csv.gz',
usecols=['year', 'serial', 'lineno', 'pernum',
'sex', 'age', 'a_sex', 'a_age',
'SPMu_Poor_Metadj_anch_cen', 'marsupwt'])
ipums_raw = pd.read_csv('asec_hh_state.csv.gz')
###Output
_____no_output_____
###Markdown
Preprocess SPM Recode sex and age. Use `female` for clarity.
###Code
spm = spm_raw.copy(deep=True)
spm['female'] = (spm.sex == 'Female') | (spm.a_sex == 'Female')
spm.age = np.where(spm.age.isnull(), spm.a_age, spm.age)
spm.drop(['a_age', 'a_sex', 'sex'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Weight is multiplied by 100 prior to 1991.
###Code
spm.marsupwt = np.where(spm.year < 1991, spm.marsupwt / 100, spm.marsupwt)
###Output
_____no_output_____
###Markdown
IPUMS
###Code
ipums = ipums_raw.copy(deep=True)
ipums.columns = ipums.columns.map(str.lower)
ipums['female'] = ipums.sex == 2
ipums.drop(['sex'], axis=1, inplace=True)
# Recode year to be calendar rather than survey year.
ipums.year -= 1
###Output
_____no_output_____
###Markdown
Merge
###Code
def merge_ipums_cpsp(ipums, cpsp):
""" Merges IPUMS ASEC with CPSP historical SPM poverty file, per CPSP
documentation.
Args:
ipums: Raw IPUMS ASEC. Must include year, serial, lineno, female, age,
and pernum.
cpsp: Raw CPSP historical SPM poverty file, with female added from
sex and a_sex, and age set to a_age when age is null.
Returns:
DataFrame with all relevant fields from ipums and cpsp.
Note: This procedure is required even for household-level fields, as
CPSID (a household identifier) was introduced in the 1989 survey year.
"""
LINK_VARS_1975_1977 = ['year', 'serial', 'pernum', 'female', 'age']
LINK_VARS_OTHER_YEARS = ['year', 'serial', 'lineno', 'female', 'age']
# Create intermediate datasets to merge, dropping unused merge columns.
ipums_1975_1977 = ipums[ipums.year.isin([1975, 1976, 1977])].drop(
['lineno'], axis=1)
cpsp_1975_1977 = cpsp[cpsp.year.isin([1975, 1976, 1977])].drop(
['lineno'], axis=1)
ipums_other_years = ipums[~ipums.year.isin([1975, 1976, 1977])].drop(
['pernum'], axis=1)
cpsp_other_years = cpsp[~cpsp.year.isin([1975, 1976, 1977])].drop(
['pernum'], axis=1)
# Merge.
res_1975_1977 = ipums_1975_1977.merge(cpsp_1975_1977,
on=LINK_VARS_1975_1977)
res_other_years = ipums_other_years.merge(cpsp_other_years,
on=LINK_VARS_OTHER_YEARS)
# Return the concatenation of the two files after re-sorting on year.
return pd.concat([res_1975_1977, res_other_years]).sort_values('year')
merged = merge_ipums_cpsp(ipums, spm)
###Output
_____no_output_____
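###Markdown
An optional sanity check on the merge (illustrative only; it just compares row counts and lists the combined columns):
```python
print(len(spm), len(ipums), len(merged))
print(sorted(merged.columns))
```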
###Markdown
Clean up and exportRename core columns.
###Code
merged.rename({'SPMu_Poor_Metadj_anch_cen': 'poor', 'marsupwt': 'w'}, axis=1,
inplace=True)
###Output
_____no_output_____
###Markdown
Export without all the merging columns.
###Code
merged[['year', 'statefip', 'age', 'female', 'poor', 'w']].to_csv(
'spm_state.csv.gz', index=False)
###Output
_____no_output_____ |
docs/stabilizer-doc.ipynb | ###Markdown
Stabilizer Formalism (`stabilizer`) General Idea: State-Map Duality Every stabilizer state $\rho$ is dual to a Clifford unitary $U$, such that the state can be generated from the zero state $|00\cdots0\rangle$ as$$\rho = U|00\cdots0\rangle\langle 00\cdots0|U^\dagger.$$Both $\rho$ and $U$ describe a stabilizer code:* $\rho$ is a projection operator that specifies the code subspace of the stabilizer code.* $U$ is the encoding Clifford unitary that encodes the logical + syndrome qubits to the physical qubits in the stabilizer code.The package `stabilizer` (based on `paulialg`) provides related functions to represent stabilizer states and Clifford maps. There are two classes defined in this package.* `stabilizer.CliffordMap`. Since the Clifford unitary $U$ maps Pauli operators to Pauli operators, it is sufficient to specify a Clifford unitary by how each single-qubit Pauli operator transforms under the unitary. Such transformation rules are stored in a table called the Clifford map.* `stabilizer.StabilizerState`. The stabilizer state is specified by a set of **stabilizers** and the corresponding **destabilizers**. Using the binary representation of Pauli operators, they can be stored in a table, called the stabilizer tableau. Since both classes need to store a table of Pauli operators, they are both realized as subclasses of `paulialg.PauliList`. Basic Usage Constructors Construct Clifford Maps `identity_map(N)` constructs an identity Clifford map on $N$ qubits.
###Code
stabilizer.identity_map(4)
###Output
_____no_output_____
###Markdown
`random_pauli_map(N)` samples a random Clifford map made of random single-qubit Clifford gates on $N$ qubits, i.e. $U=\prod_i U_i\in\mathrm{Cl}(2)^N$. Each realization specifies a random local Pauli basis.
###Code
stabilizer.random_pauli_map(4)
###Output
_____no_output_____
###Markdown
`random_clifford_map(N)` samples a globally random Clifford map on $N$ qubits, i.e. $U\in\mathrm{Cl}(2^N)$. Each realization specifies a random global stabilizer basis.
###Code
stabilizer.random_clifford_map(4)
###Output
_____no_output_____
###Markdown
`clifford_rotation_map(g)` constructs the Clifford map for a Clifford rotation given its Pauli string generator `g` (e.g. `'-XXYZ'` below).
###Code
stabilizer.clifford_rotation_map('-XXYZ')
###Output
_____no_output_____
###Markdown
Construct Stabilizer States `maximally_mixed_state(N)` constructs a $N$-qubit maximally mixed state (by setting the density matrix to full rank).$$\rho=2^{-N}\mathbb{1}.$$
###Code
stabilizer.maximally_mixed_state(3)
###Output
_____no_output_____
###Markdown
`zero_state(N)` constructs a $N$-qubit all-zero state $$\rho=|0\cdots0\rangle\langle 0\cdots0|=\prod_{i}\frac{1+Z_i}{2}.$$
###Code
stabilizer.zero_state(4)
###Output
_____no_output_____
###Markdown
`one_state(N)` constructs a $N$-qubit all-one state $$\rho=|1\cdots1\rangle\langle 1\cdots1|=\prod_{i}\frac{1-Z_i}{2}.$$
###Code
stabilizer.one_state(4)
###Output
_____no_output_____
###Markdown
`ghz_state(N)` constructs a $N$-qubit GHZ state$$\rho = |\Psi\rangle\langle\Psi|, \qquad \text{with }|\Psi\rangle=\frac{1}{\sqrt{2}}(|0\cdots0\rangle+|1\cdots1\rangle).$$
###Code
stabilizer.ghz_state(4)
###Output
_____no_output_____
###Markdown
`random_pauli_state(N)` samples an $N$-qubit random Pauli state.$$\rho=U|0\cdots0\rangle\langle 0\cdots0|U^\dagger,\qquad\text{with }U\in \mathrm{Cl}(2)^N.$$
###Code
stabilizer.random_pauli_state(4)
###Output
_____no_output_____
###Markdown
`random_clifford_state(N)` samples an $N$-qubit random Clifford (random stabilizer) state.$$\rho=U|0\cdots0\rangle\langle 0\cdots0|U^\dagger,\qquad\text{with }U\in \mathrm{Cl}(2^N).$$
###Code
stabilizer.random_clifford_state(4)
###Output
_____no_output_____
###Markdown
`stabilizer_state(...)` is a universal constructor of a stabilizer state, built by specifying all of its stabilizers.
###Code
stabilizer.stabilizer_state('XXY','-YYI')
###Output
_____no_output_____
###Markdown
A hack to inspect the full stabilizer tableau is to convert the `StabilizerState` to a `PauliList`:
###Code
stabilizer.stabilizer_state('XXY','-YYI')[:]
###Output
_____no_output_____
###Markdown
Users need to ensure that the stabilizers commute with each other; otherwise an error will be raised.
###Code
stabilizer.stabilizer_state('XXY','-YYI','IZZ')
###Output
_____no_output_____
###Markdown
State-Map Conversion Stabilizer states and Clifford maps can be mapped to each other.
###Code
rho = stabilizer.stabilizer_state('XXY','-YYI')
rho
rho.to_map()
rho.to_map().to_state()
###Output
_____no_output_____
###Markdown
* `.to_map()` and `.to_state()` will make new copies of the Pauli string data in memory.* The information about the rank of the density matrix is lost in the Clifford map, so the back conversion will result in a zero-rank stabilizer state. Clifford Map Methods Map Embedding `.embed(small_map, mask)` provides the method to embed a smaller Clifford map on a subset of qubits of the current Clifford map. This is an **in-place** operation: the Clifford map object that provides this method will get modified under the embedding.**Parameters:*** `small_map` is a `CliffordMap` object supported on a subset of qubits.* `mask` is a boolean array specifying the subset of qubits.
###Code
cmap = stabilizer.identity_map(6)
cmap
cmap.embed(random_clifford_map(3), numpy.array([True,False,False,True,True,False]))
###Output
_____no_output_____
###Markdown
Map Composition `.compose(other)` returns the composition of the current Clifford map with another Clifford map. This will return a new Clifford map without modifying either of the input maps. The Clifford map object which initiates this method will be the preceding map in the composition. **Parameters:*** `other` - another `CliffordMap`. Example: composition of a Clifford rotation with its inverse rotation will be an identity map.
###Code
clifford_rotation_map('-XXY').compose(clifford_rotation_map('+XXY'))
###Output
_____no_output_____
###Markdown
Map Inversion `.inverse()` returns the inverse of the current Clifford map. This will return a new Clifford map without modifying the original map. The inverse map is such that its composition with the original map must be the identity map.
###Code
cmap = clifford_rotation_map('Y')
cmap
cmap.inverse()
###Output
_____no_output_____
###Markdown
Test on random maps.
###Code
cmap = random_clifford_map(4)
cmap.inverse().compose(cmap)
cmap.compose(cmap.inverse())
###Output
_____no_output_____
###Markdown
Both left and right compositions are identity. Stabilizer State Methods - Initialization via stabilizer tableau: A stabilizer state can be initialized via `StabilizerState(gs, ps)`, with the rank `self.r=0` by default. One can change the rank with `self.set_r(r)`. For example, let's generate a random stabilizer tableau and list of phases.
###Code
a_random_state =stabilizer.random_clifford_map(2).to_state()
###Output
_____no_output_____
###Markdown
The stabilizer tableau is:
###Code
a_random_state.gs
###Output
_____no_output_____
###Markdown
The phases are:
###Code
a_random_state.ps
###Output
_____no_output_____
###Markdown
We can create a stabilizer state:
###Code
new_state = stabilizer.StabilizerState(gs=a_random_state.gs,ps=a_random_state.ps)
print(new_state)
print('rank: ',new_state.r)
###Output
StabilizerState(
-YZ
+ZX)
rank: 0
###Markdown
We can also change the rank:
###Code
new_state.set_r(1)
print('rank: ',new_state.r)
###Output
rank: 1
###Markdown
- Get active stabilizers One can get the active stabilizers through the attribute `stabilizers`. The active stabilizers will be returned as a `PauliList`.
###Code
state = stabilizer.ghz_state(4)
state.set_r(1)
print(type(state.stabilizers),state.stabilizers)
print(state.stabilizers.gs)
print(state.stabilizers.ps)
###Output
[[0 0 0 0 0 1 0 1]
[0 0 0 1 0 1 0 0]
[0 1 0 1 0 0 0 0]]
[0 0 0]
###Markdown
- Measurement One can perform a sequential measurement on the stabilizer state by calling the `.measure(obs)` method. `obs` is a PauliList or a StabilizerState (only its active stabilizers are measured), and the Pauli operators are measured sequentially. The output of this function is the `readout` and the `log2prob` of that readout. The convention for the readout is: 0 $\rightarrow$ eigenvalue +1; 1 $\rightarrow$ eigenvalue -1. After the measurement, the state will be changed.
###Code
state = stabilizer.ghz_state(2)
print(state)
obs = paulialg.paulis('-YY','XI')
readout, log2prob = state.measure(obs)
print("readout: ", readout)
print("probability: ", 2**(prob))
print("State after measurement: ", state)
###Output
State after measurement: StabilizerState(
+XX
-XI)
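###Markdown
Following the readout convention above (0 $\rightarrow$ eigenvalue +1, 1 $\rightarrow$ eigenvalue -1), the readout can be converted to eigenvalues as a quick illustration (assuming `readout` behaves like a NumPy integer array):
```python
import numpy
eigenvalues = 1 - 2 * numpy.asarray(readout)   # 0 -> +1, 1 -> -1
print(eigenvalues)
```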
###Markdown
- Expectation One can calculate the expectation values of a list of Pauli operators with the `.expect(obs)` method. `obs` can be a `PauliList` or a `StabilizerState`. The output of this function will be a list containing the trace of the stabilizer state with each Pauli operator.
###Code
state = stabilizer.ghz_state(2)
print(state)
obs = paulialg.paulis(['ZZ','XI'])
state.expect(obs)
###Output
_____no_output_____
###Markdown
- Entropy `StabilizerState.entropy(A)` will calculate the entanglement entropy of region *A*. *A* is a binary mask over the qubits, e.g. [1,1,0,0]. This function returns the entanglement entropy in bits ($\text{log}_2$ basis).
###Code
state = stabilizer.ghz_state(4)
print('Entropy: ', state.entropy(numpy.array([1,0,1,0])))
###Output
Entropy: 1
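###Markdown
For comparison, any single qubit of a GHZ state is maximally mixed, so its entanglement entropy should also come out as 1 bit (illustrative check, reusing `state` from above):
```python
print('Single-qubit entropy: ', state.entropy(numpy.array([1,0,0,0])))
```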
###Markdown
- Sample stabilizers from the stabilizer group The `StabilizerState.sample(L)` method will sample `L` (an integer) Pauli strings that are in the stabilizer group defined by the `StabilizerState`.
###Code
state = stabilizer.ghz_state(4)
state.sample(10)
###Output
_____no_output_____
###Markdown
- Generate all stabilizer group elements The `StabilizerState.stabilizer_group()` method will return all the elements of the stabilizer group generated by the active stabilizers of the `StabilizerState`.
###Code
state = stabilizer.ghz_state(4)
state.set_r(2)
state.stabilizer_group()
state = stabilizer.ghz_state(4)
state.set_r(0)
state.stabilizer_group()
###Output
_____no_output_____
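###Markdown
The number of returned elements should be $2^{N-r}$, one for each subset of the $N-r$ active generators (illustrative check; it assumes the returned object supports `len`):
```python
state = stabilizer.ghz_state(4)
state.set_r(2)
print(len(state.stabilizer_group()))   # expect 2**(4-2) = 4
```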
###Markdown
Development
###Code
paulialg.pauli('XX').convert()
###Output
_____no_output_____
###Markdown
The following is out of date Stabilizer State Methods Measurement `.measure(obs)` measures the stabilizer state on a set of commuting observables.**Parameters:*** `obs` - Observables to measure. The following types are supported: * `PauliList` - a list of Pauli operators (the user must ensure that the operators in the list commute, otherwise they cannot be measured simultaneously). * `StabilizerState` - the stabilizers of a stabilizer state are always commuting, so they can be treated as commuting observables for measurement. **Returns:*** `out` - measurement outcome, which can only be $0$, $\pm1$ for independent Pauli observables on a stabilizer state.* `log2prob` - the log2 of the probability of realizing this particular outcome. The following documentation is out of date. StabilizerState `StabilizerState(N, r=None, S=None, b=None)` represents a stabilizer state for an $[N,r]$ stabilizer code (i.e. $N$ physical qubits encoding $r$ logical qubits).**Parameters**- `N`: number of physical qubits.- `r`: number of logical qubits.- `S`: stabilizer tableau.- `b`: sign indicator. Example: A 5-qubit state with 2 logical qubits (being the first 2 physical qubits). The state is stabilized by 3 stabilizers (acting on the last 3 qubits).
###Code
vaeqst.StabilizerState(5, r=2)
###Output
_____no_output_____
###Markdown
Representations Density Matrix An $[N,r]$ stabilizer state is described by the **density matrix** of the following form:$$\rho = \frac{1}{2^r}\prod_{k=1}^{N-r}\frac{1+(-)^{b_k}S_k}{2}.$$* Each stabilizer $S_k$ is a (non-trivial) Pauli operator defined on all $N$ qubits. The stabilizers commute with each other $[S_k,S_{k'}]=0$. They generate an Abelian subgroup $\mathcal{S}=\{\prod_{k=1}^{N-r} S_k^{a_k}|a_k=0,1\}$ of the $N$-qubit Pauli group, called the *stabilizer group*.* Each sign indicator $b_k=0,1$ is a binary variable specifying the eigenspace of the stabilizer.* There are in total $N-r$ stabilizers for an $[N,r]$ stabilizer code (of code rate $r/N$). The simultaneous eigenspace of all stabilizers constitutes the *code subspace*. * The code subspace is $2^r$ dimensional (which is also the rank of the density matrix $\rho$). The stabilizer state $\rho$ is always defined to be the maximally mixed state in the code subspace, such that $\rho$ is also the **projection operator** that projects any state into the code subspace. Binary Representation of Pauli Operators Any Pauli operator can be specified by two one-hot (binary) vectors $x$ and $z$ ($x_i,z_i=0,1$ for $i=1,\cdots,N$):$$\sigma_{(x,z)}=\mathrm{i}^{x\cdot z}\prod_{i=1}^{N}X_i^{x_i}\prod_{i=1}^{N}Z_i^{z_i}.$$* The binary vector $x$ (or $z$) specifies the qubits where the $X$ (or $Z$) operator acts (the $Y$ operator acts where $X$ and $Z$ act simultaneously).* **Multiplication** of two Pauli operators$$\sigma_{(x,z)}\sigma_{(x',z')}=\mathrm{i}^{p(x,z;x',z')}\sigma_{(x+x',z+z')\%2},$$where the power $p$ of $\mathrm{i}$ in the prefactor is given by$$p(x,z;x',z')=\sum_{i=1}^{N}\left(z_ix'_i-x_iz'_i + 2(z_i+z'_i)\left\lfloor\frac{x_i+x'_i}{2}\right\rfloor+2(x_i+x'_i)\left\lfloor\frac{z_i+z'_i}{2}\right\rfloor\right)\mod 4.$$* **Commutation relation**: two Pauli operators either commute or anticommute.$$\sigma_{(x,z)}\sigma_{(x',z')}=(-)^{c(x,z;x',z')}\sigma_{(x',z')}\sigma_{(x,z)},$$where the *anticommutation indicator* $c$ has a simpler form$$c(x,z;x',z')=\frac{p(x,z;x',z')-p(x',z';x,z)}{2}=\sum_{i=1}^{N}\left(z_ix'_i-x_iz'_i\right)\mod 2.$$The binary vectors $x$ and $z$ can be interleaved into a $2N$-component vector $g=(x_0,z_0,x_1,z_1,\cdots)$, which forms the binary representation of a Pauli operator $\sigma_g$. Stabilizer Tableau Each stabilizer state is internally stored in the form of a **stabilizer tableau** $S$, together with the **sign indicator** $b$. For an $[N,r]$ stabilizer state, its stabilizer tableau is a $2N\times 2N$ matrix of the following structure: * Each row is a binary representation $(x,z)$ of a Pauli operator $\sigma_{(x,z)}$.* In total $2N$ Pauli operators are grouped into $N$ stabilizers and $N$ destabilizers, such that for $i,j=0,\cdots,2N-1$:$$\sigma_{g_{i}}\sigma_{g_{j}}=(-)^{\delta_{i+N,j}-\delta_{j+N,i}}\sigma_{g_{j}}\sigma_{g_{i}},$$i.e. the $i$th stabilizer only anticommutes with the $i$th destabilizer, and they commute with all the other operators in the tableau.* The rows $r:N$ correspond to the $N-r$ active stabilizers $S_k$, which stabilize the code subspace (implemented as projection operators). The rows $0:r$ correspond to the $r$ standby (inactive) stabilizers that do not really stabilize the code subspace (but they will act as logical operators in the code subspace).* The rows $N+r:2N$ correspond to the $N-r$ active destabilizers that anticommute with the active stabilizers. 
The rows $N:N+r$ correspond to the $r$ standby destabilizers that anticommute with the standby stabilizers. Although the stabilizer state is only specified by the active stabilizers, the other operators in the stabilizer tableau are still important in order to complete the operator basis, so that the tableau can specify a unitary operator in the Clifford group that generates the state. The algorithm must maintain the algebraic structure between all stabilizers and destabilizers while updating the tableau. Example: the stabilizer tableau is given by `StabilizerState.S`
###Code
rho = vaeqst.StabilizerState(5, r=2)
rho.S
###Output
_____no_output_____
###Markdown
The sign indicator is an $N$-component vector, which only keeps the signs of the stabilizers, because the signs of the destabilizers are not used anywhere.
###Code
rho.b
###Output
_____no_output_____
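###Markdown
A standalone NumPy sketch of the anticommutation indicator $c(x,z;x',z')=\sum_i\left(z_ix'_i-x_iz'_i\right)\bmod 2$ defined above, acting on interleaved binary vectors $g=(x_0,z_0,x_1,z_1,\cdots)$ (for illustration only; this helper is not part of the package API):
```python
import numpy

def anticommute(g1, g2):
    """Return 1 if the two Pauli operators (binary representation) anticommute, else 0."""
    x1, z1 = g1[0::2], g1[1::2]
    x2, z2 = g2[0::2], g2[1::2]
    return int(numpy.sum(z1 * x2 - x1 * z2) % 2)

X0 = numpy.array([1, 0])  # single-qubit X
Z0 = numpy.array([0, 1])  # single-qubit Z
print(anticommute(X0, Z0), anticommute(X0, X0))  # expect 1, 0
```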
###Markdown
Methods Copy `StabilizerState.copy()` returns a copy of the state, such that the original state will not be touched by modifications of the copy. It is useful to copy the state before measurement (as measurement changes the state). Example:
###Code
rho0 = vaeqst.StabilizerState(5, r=2)
rho1 = rho0.copy()
rho1.r = 3
rho0, rho1
###Output
_____no_output_____
###Markdown
Measurement The stabilizer state can serve both as a density matrix and as a measurement operator.* When it serves as a density matrix, $(-)^{b_k}S_k$ are the stabilizers that stabilize the state.* When it serves as a measurement operator, $(-)^{h_k}G_k$ are the commuting observables to be measured in parallel.`StabilizerState.measure(other)` provides the method to measure on a stabilizer state a set of observables specified by another stabilizer state.**Parameters:*** `other`: stabilizer state representing the measurement operator.**Returns:*** `out`: the measurement outcome (vector containing the outcome of each observable).* `log2prob`: the log2 probability to obtain this outcome.**Side Effect:**The state itself will be updated to the post-measurement state. **Algorithm Outline**:We scan over every observable $G_k$ in the measurement operator. For each observable, we continue to scan over all operators in the stabilizer tableau. If the observable $G_k$ anticommutes with:1. at least one active stabilizer (the first of them being $S_p$) $\to$ $G_k$ is an *error* operator that takes the state out of the code subspace $\to$ the measurement will collapse the state to one of the two possible measurement outcomes $G_{k}=\pm 1$ with equal probability, and the state will be *updated*.2. at least one standby stabilizer or destabilizer (the first of them being $S_p$) $\to$ $G_k$ is a *logical* operator that will further stabilize the code subspace $\to$ the measurement will activate a new pair of stabilizer and destabilizer, and the state will be *extended*.3. otherwise, $\to$ $G_k$ is a *trivial* operator in the code subspace $\to$ the measurement is classical, and the state is untouched.| | `update`? | `extend`? ||----|-----------|-----------|| 1. | `True` | `False` || 2. | `True` | `True` || 3. | `False` | `False` |* *update*: * $G_k$ must replace $S_p$ to be the active stabilizer. But to maintain its algebraic relation with the destabilizer, the original $S_p$ can be promoted to become the corresponding destabilizer, such that $S_\tilde{p}\leftarrow S_p$, $S_p \leftarrow G_k$ ($\tilde{p}$ denotes the dual row of $p$). * The sign of $G_k$ is randomly assigned with half-to-half probability.* *extend*: * The number of logical qubits will be reduced by one, $r\leftarrow r-1$. * To include the new stabilizer-destabilizer pair in the system, apart from the steps in the update algorithm, we also need to bring the new stabilizer $S_p$ to row-$r$ and the new destabilizer $S_{\tilde{p}}$ to row-$(N+r)$. If an update (including extension) did not happen after scanning through all the stabilizers and standby destabilizers, then we know that the measurement is classical ($G_k$ is trivial in the code subspace, so it must belong to the stabilizer group). We then continue to scan over the remaining active destabilizers. Each active destabilizer $S_j$ that anticommutes with $G_k$ indicates that $G_k$ contains the component of the corresponding active stabilizer $S_{\tilde{j}}$, which should then be collected. Finally the measurement outcome $x_k=0,1$ is such that the following equation holds$$(-)^{h_k+x_k}G_k=\prod_{\tilde{j}}(-)^{b_{\tilde{j}}}S_{\tilde{j}}.$$ Example:
###Code
rho = vaeqst.GHZState(5)
print('starting from:\n', rho)
obs = vaeqst.RandomPauliState(5)
print('to measure:\n', obs)
out, log2prob = rho.measure(obs)
print('obtain outcome:\n', out, '\nwith probability 2^({})'.format(log2prob))
print('end up with:\n', rho)
###Output
starting from:
StabilizerState(
+ZZIII
+IZZII
+IIZZI
+IIIZZ
+XXXXX)
to measure:
StabilizerState(
+XIIII
-IZIII
+IIXII
+IIIXI
-IIIIX)
obtain outcome:
[1 0 1 0 1]
with probability 2^(-5)
end up with:
StabilizerState(
-XIIII
-IIXII
+IIIXI
+IIIIX
-IZIII)
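###Markdown
As noted in the algorithm outline above, the scan relies on a fast anticommutation test between the observable $G_k$ and each row of the tableau. Below is a minimal sketch of that test in the binary symplectic representation (an illustration only, not the library's implementation; the `[x|z]` layout and the helper name `anticommute` are assumptions):
```python
import numpy as np

def anticommute(p1, p2):
    """True if two Pauli strings anticommute.

    Each Pauli string is a length-2N binary vector [x | z]:
    X_i -> (x_i, z_i) = (1, 0), Z_i -> (0, 1), Y_i -> (1, 1), I_i -> (0, 0).
    Two Paulis anticommute iff the symplectic form x1.z2 + z1.x2 is odd.
    """
    n = len(p1) // 2
    x1, z1 = p1[:n], p1[n:]
    x2, z2 = p2[:n], p2[n:]
    return (np.dot(x1, z2) + np.dot(z1, x2)) % 2 == 1

# N = 2 example: X on qubit 0 anticommutes with Z on qubit 0 but commutes with Z on qubit 1.
X0 = np.array([1, 0, 0, 0])
Z0 = np.array([0, 0, 1, 0])
Z1 = np.array([0, 0, 0, 1])
print(anticommute(X0, Z0), anticommute(X0, Z1))  # True False
```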
###Markdown
Expectation `StabilizerState.expect(other)` provides the method to compute the expectation value of observables on a stabilizer state.$$ e_k = (-)^{h_k} \mathrm{Tr} \rho G_k$$**Parameters:*** `other`: stabilizer state representing the measurement operator.**Returns:*** `expect`: a vector of expectation values $e_k$.This method will not modify the stabilizer state. Example:
###Code
rho = vaeqst.GHZState(5)
print('base state:\n', rho)
obs = vaeqst.RandomCliffordState(5)
print('measurement:\n', obs)
print('expectation values:\n', rho.expect(obs))
###Output
base state:
StabilizerState(
+ZZIII
+IZZII
+IIZZI
+IIIZZ
+XXXXX)
measurement:
StabilizerState(
-IYYZX
-IXYXX
-ZXZXY
+ZYYZX
-ZYZXX)
expectation values:
[0 0 0 0 0]
###Markdown
The expectation values of Pauli operators on stabilizer states are either $0$ or $\pm1$. The expectation value will be $\pm1$ only if the corresponding observable is in the stabilizer group. Fidelity `StabilizerState.fidelity(other)` provides the method to compute the fidelity between the base state $\rho$ and another state $\rho'$.
**Parameters:**
* `other`: another stabilizer state $\rho'$ to compare.
**Returns:**
* `F`: fidelity $F(\rho,\rho')=\left(\mathrm{Tr}\sqrt{\sqrt{\rho}\rho'\sqrt{\rho}}\right)^2$.
**Algorithm Outline:** If both $\rho$ and $\rho'$ are stabilizer states,
$$\rho=\frac{1}{2^{r}}\prod_{j=1}^{N-r}\frac{1+(-)^{b_j}S_j}{2},\quad\rho'=\frac{1}{2^{r'}}\prod_{k=1}^{N-r'}\frac{1+(-)^{h_k}G_{k}}{2}.$$
The stabilizers in $\rho'$ can be classified into three cases on the stabilizer code defined by $\rho$. If $G_k$ anticommutes with
1. an active stabilizer in $\rho$: $G_k$ is an *error* operator, denoted by $k\in \mathcal{E}$;
2. a standby stabilizer or destabilizer in $\rho$: $G_k$ is a *logical* operator, denoted by $k\in \mathcal{L}$;
3. otherwise, $G_k$ is a *trivial* operator (in the code subspace), denoted by $k\in \mathcal{I}$.
The fidelity is given by [arXiv:quant-ph/0505036](https://arxiv.org/abs/quant-ph/0505036)
$$F(\rho,\rho')=2^{r-r'-2|\mathcal{L}|-|\mathcal{E}|}\prod_{k\in\mathcal{I}}\frac{1+e_k}{2},$$
where $e_k$ is the expectation value of $(-)^{h_k}G_k$ on $\rho$. The product $o=\prod_{k\in\mathcal{I}}\frac{1+e_k}{2}$ defines the overlap indicator: $o=0$ if $\rho$ and $\rho'$ are orthogonal, otherwise $o=1$. A small numerical illustration of this formula follows the examples below. Example: verify that fidelity is symmetric.
###Code
rho = vaeqst.GHZState(5)
sig = vaeqst.RandomCliffordState(5)
rho.fidelity(sig), sig.fidelity(rho)
###Output
_____no_output_____
###Markdown
Fidelity of a state with itself is always 1, even if the state is mixed.
###Code
rho = vaeqst.StabilizerState(5, r=2)
rho.fidelity(rho)
###Output
_____no_output_____
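###Markdown
A quick numerical illustration of the fidelity formula above (plain arithmetic with hypothetical counts, not library code): suppose $\rho$ has $r=2$, $\rho'$ has $r'=0$, classifying the stabilizers of $\rho'$ gives $|\mathcal{L}|=1$ and $|\mathcal{E}|=1$, and both in-code expectation values are $e_k=+1$.
```python
r, r_prime = 2, 0          # hypothetical numbers for illustration
n_logical, n_error = 1, 1  # |L| and |E|
e_list = [1, 1]            # e_k for every k in the trivial set I

overlap = 1.0
for e in e_list:
    overlap *= (1 + e) / 2  # becomes 0 as soon as some e_k = -1 (orthogonal states)

F = 2 ** (r - r_prime - 2 * n_logical - n_error) * overlap
print(F)  # 2**(-1) * 1 = 0.5
```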
###Markdown
Entanglement Entropy `StabilizerState.entropy(A)` calculates the entanglement entropy of the stabilizer state in region $A$.**Parameters:*** `A`: a binary (0/1) indicator array of size $N$ specifying the entanglement region: $A_i=1$ if $i\in A$, otherwise $A_i=0$.**Returns:*** Entanglement entropy $S_\rho(A)$ in units of bits (log2 base). Example:
###Code
A = numpy.array([0,1,0,1,0])
vaeqst.RandomCliffordState(5).entropy(A)
###Output
_____no_output_____
###Markdown
Tokenize `StabilizerState.tokenize()` tokenizes the stabilizer basis. This could be useful for machine learning tasks, as the tokens can be encoded by language processing techniques.**Rules:*** 0 = I* 1 = X* 2 = Y* 3 = Z* 4 = +* 5 = -* 6 = (+/-) Example (a toy illustration of these rules is sketched after the output below):
###Code
rho = vaeqst.RandomCliffordState(5)
print('state:\n',rho)
rho.tokenize()
###Output
state:
StabilizerState(
+ZIXZI
-YIIYX
-YZXXZ
+XIYIZ
+ZYIZX)
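###Markdown
As promised above, a toy illustration of the token rules applied to a single signed Pauli string (illustrative only; the exact layout of the real `tokenize()` output may differ):
```python
PAULI_TOKEN = {'I': 0, 'X': 1, 'Y': 2, 'Z': 3}
SIGN_TOKEN = {'+': 4, '-': 5, '?': 6}   # 6 stands for an undetermined (+/-) sign

def tokenize_pauli(signed_pauli):
    """Map e.g. '-YZXXZ' to [5, 2, 3, 1, 1, 3] using the rules listed above."""
    sign, paulis = signed_pauli[0], signed_pauli[1:]
    return [SIGN_TOKEN[sign]] + [PAULI_TOKEN[p] for p in paulis]

print(tokenize_pauli('-YZXXZ'))  # [5, 2, 3, 1, 1, 3]
```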
###Markdown
Sample `Stabilizer.sample(L)` samples $L$ stabilizers from the stabilizer group, returning them in token form. Example:
###Code
rho.sample(3)
###Output
_____no_output_____
###Markdown
RandomCliffordState `RandomCliffordState(N, r=None)` is a subclass of `StabilizerState`. It represents a random stabilizer state generated by unitary transformations sampled uniformly in the global Clifford group.**Parameters**- `N`: number of physical qubits.- `r`: number of logical qubits.Example:
###Code
vaeqst.RandomCliffordState(20)
###Output
_____no_output_____
###Markdown
Random Stabilizer Algorithm The algorithm generates a random stabilizer tableau, corresponding to a uniformly sampled element of the global Clifford group. The problem can be solved iteratively.
* Let $(\mathcal{S}_{N-1}, \mathcal{D}_{N-1})$ be the sets of stabilizers and destabilizers (paired up) of $(N-1)$ qubits.
* The sets can be expanded to $N$ qubits by
$$\mathcal{S}_{N}:\left\{\begin{array}{ll}S_0=U (Z\otimes I^{\otimes (N-1)}) U^\dagger & \\S_{i+1}=U (I\otimes S'_{i}) U^\dagger & \text{for }S_i\in \mathcal{S}_{N-1}\end{array}\right.$$
$$\mathcal{D}_{N}:\left\{\begin{array}{ll}D_0=U (\left\{\begin{array}{c}X\\Y\end{array}\right\}\otimes I^{\otimes (N-1)}) U^\dagger & \\D_{i+1}=U (I\otimes D'_{i}) U^\dagger & \text{for }D_i\in \mathcal{D}_{N-1}\end{array}\right.$$
where $U$ is a random Clifford rotation on $N$ qubits.
* $U$ can be generated by first sampling a random pair of stabilizer $S_0$ and destabilizer $D_0$, and then finding the Clifford rotation that diagonalizes them to the first qubit.

Random Stabilizer-Destabilizer Pair Using the binary representation of Pauli operators (a small sketch of this sampling step is given after the RandomPauliState example below),
* Generate a random non-trivial stabilizer $S_0$ by sampling a binary array of $2N$ components, excluding the all-0 case (if the all-0 array is sampled, reject it and resample until the vector is not all zero).
* Generate a random destabilizer $D_0$ by
  * first sampling a binary array of $2N$ components,
  * if $D_0$ anticommutes with $S_0$: we are done,
  * if $D_0$ commutes with $S_0$: pick the first non-trivial qubit on which $S_0$ acts, and modify the $D_0$ operator on that qubit to flip the commutation relation. The modification is given by the following table.

| S\D  | I 00 | X 10 | Y 11 | Z 01 |
|------|------|------|------|------|
| X 10 | Z 01 | Y 11 | X 10 | I 00 |
| Y 11 | X 10 | I 00 | Z 01 | Y 11 |
| Z 01 | Y 11 | Z 01 | I 00 | X 10 |

The rule is:
$$x_i^{(D)} \to x_i^{(D)} + z_i^{(S)}, \quad z_i^{(D)}\to z_i^{(D)} + x_i^{(S)} + z_i^{(S)}$$

Find Clifford Rotation To diagonalize $S_0\to Z_0$ and $D_0\to X_0\text{ or }Y_0$:
* First diagonalize $S_0$:
  * If $S_0$ commutes with $Z_0$,
    * If $S_0=I_0\otimes A$: take the first non-trivial qubit of $A$, permute it cyclically among $X,Y,Z$ to create $B$, then $$X_0\otimes B: S_0\to X_0\otimes AB$$
    * If $S_0=Z_0\otimes A$: $$X_0\otimes A:S_0\to Y_0\otimes I$$
    * Now $S_0$ has been transformed to anticommute with $Z_0$.
  * If $S_0$ anticommutes with $Z_0$: $$S_0Z_0: S_0\to Z_0\otimes I$$
* Now $S_0=Z_0\otimes I$, and $D_0$ must have been transformed to the form $D_0=\begin{array}{c}X_0\\ Y_0\end{array}\otimes C$.
* Then diagonalize $D_0$ by: $$Z_0\otimes C: Z_0\otimes I\to Z_0\otimes I, \quad D_0\to \begin{array}{c}X_0\\ Y_0\end{array}\otimes I.$$

Collect the Clifford rotations along the way and apply them in reverse order to scramble the stabilizers in $(\mathcal{S}_{N-1}, \mathcal{D}_{N-1})$ into the $N$-qubit system.

RandomPauliState `RandomPauliState(N, r=None)` is a subclass of `StabilizerState`. It represents a random stabilizer state generated by unitary transformations sampled uniformly in the local (onsite) Clifford group.**Parameters**- `N`: number of physical qubits.- `r`: number of logical qubits.Example:
###Code
vaeqst.RandomPauliState(20)
###Output
_____no_output_____
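###Markdown
As a side note to the Random Stabilizer-Destabilizer Pair procedure described above, here is a minimal sketch of that sampling step (an illustration only, not the library's code; the binary `[x|z]` layout and the function name are assumptions): sample $S_0$ as any non-zero binary vector, sample $D_0$, and if the two commute, apply the flip rule $x_i^{(D)} \to x_i^{(D)} + z_i^{(S)}$, $z_i^{(D)}\to z_i^{(D)} + x_i^{(S)} + z_i^{(S)}$ on the first qubit where $S_0$ acts non-trivially.
```python
import numpy as np

def sample_pair(N):
    """Sample a random pair (S0, D0) of anticommuting Pauli strings in binary [x|z] form."""
    rng = np.random.default_rng()
    S = rng.integers(0, 2, 2 * N)
    while not S.any():                      # reject the all-zero (identity) string
        S = rng.integers(0, 2, 2 * N)
    D = rng.integers(0, 2, 2 * N)
    xS, zS = S[:N], S[N:]
    xD, zD = D[:N], D[N:]                   # views into D, so the edits below update D
    if (xS @ zD + zS @ xD) % 2 == 0:        # D commutes with S: flip the relation
        i = next(k for k in range(N) if xS[k] or zS[k])
        xD[i] = (xD[i] + zS[i]) % 2
        zD[i] = (zD[i] + xS[i] + zS[i]) % 2
    return S, D

S0, D0 = sample_pair(5)
xS, zS, xD, zD = S0[:5], S0[5:], D0[:5], D0[5:]
print((xS @ zD + zS @ xD) % 2)              # 1 -> S0 and D0 anticommute
```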
###Markdown
GHZState `GHZState(N)` is a subclass of `StabilizerState`. It represents a GHZ state $\rho=|\Psi\rangle\langle\Psi|$, with $|\Psi\rangle=\frac{1}{\sqrt{2}}(|00\cdots 0\rangle+|11\cdots 1\rangle)$.**Parameters**- `N`: number of physical qubits.Example:
###Code
vaeqst.GHZState(20)
stabilizer.ghz_state(4)
paulialg.pauli('X')@paulialg.pauli('Z')
(paulialg.pauli('X')@paulialg.pauli('Z')).g
(paulialg.pauli('X')@paulialg.pauli('Z')).p
###Output
_____no_output_____ |
teaching_material/session_3/.ipynb_checkpoints/session_3_slides-checkpoint.ipynb | ###Markdown
Session 3: Data Structuring 2*Nicklas Johansen* AgendaIn this session, we will work with different types of data:- Boolean Data- Numeric Operations and Methods- String Operations- Categorical Data- Time Series Data Recap - Loading Packages- Pandas Series- Pandas Data Frames- Series vs DataFrames- Converting Data Types- Indices and Column Names- Viewing Series and Dataframes- Row and Column Selection- Modifying DataFrames- Changing the Index- Changing Column Values- Sorting Data- DO2021 COHORT
###Code
# Loading packages
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import seaborn as sns
###Output
_____no_output_____
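###Markdown
Note: the examples in this section use `my_series3`, which is not defined in this checkpoint copy of the slides. Judging from the outputs shown below, it corresponds to something like:
```python
my_series3 = pd.Series([0, 1, 3], index=['yesterday', 'today', 'tomorrow'])
```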
###Markdown
Boolean Data Logical Expression for Series (1:2)*Can we test an expression for all elements?* Yes: **==**, **!=** work for a single object or Series with same indices. Example:
###Code
print(my_series3)
print()
print(my_series3 == 0)
###Output
_____no_output_____
###Markdown
What datatype is returned? Logical Expression in Series (2:2)*Can we check if elements in a series equal some element in a container?* Yes, the `isin` method. Example:
###Code
my_rng = list(range(2))
print(my_rng)
print()
print(my_series3.isin(my_rng))
###Output
[0, 1]
yesterday True
today True
tomorrow False
dtype: bool
###Markdown
Power of Boolean Series (1:2)*Can we combine boolean Series?* Yes, we can use:- the `&` operator (*and*)- the `|` operator (*or*)
###Code
titanic = sns.load_dataset('titanic')
titanic.head()
print(((titanic.sex == 'female') & (titanic.age >= 30)).head(3)) # selection by multiple columns
###Output
0 False
1 True
2 False
dtype: bool
###Markdown
What datatype was returned? Power of Boolean Series (2:2)*Why do we care for boolean series (and arrays)?* Mainly because we can use them to select rows based on their content.
###Code
print(my_series3)
print()
print(my_series3[my_series3<3])
###Output
yesterday 0
today 1
tomorrow 3
dtype: int64
yesterday 0
today 1
dtype: int64
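###Markdown
The same boolean masks also filter entire DataFrames; for example, reusing the `titanic` frame loaded above:
```python
titanic[(titanic.sex == 'female') & (titanic.age >= 30)].head(3)  # rows matching both conditions
```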
###Markdown
NOTE: Boolean selection is extremely useful for dataframes!! Numeric Operations and Methods Numeric Operations (1:3)*How can we make basic arithmetic operations with arrays, series and dataframes?* It really works just like with Python data, e.g. lists. An example with squaring:
###Code
2 ** 2
num_ser1 = pd.Series([2,3,2,1,1])
num_ser2 = num_ser1 ** 2
print(num_ser1)
print(num_ser2)
###Output
0 2
1 3
2 2
3 1
4 1
dtype: int64
0 4
1 9
2 4
3 1
4 1
dtype: int64
###Markdown
Numeric Operations (2:3)*Are other numeric python operators the same?* Numeric operators `/`, `//`, `-`, `*`, `**` work as expected. So do the comparison operators (`==`, `!=`, `>`, `<`). *Why is this useful?* - vectorized operations are VERY fast;- they require very little code.
###Code
10 / 2
num_ser1 / num_ser1
###Output
_____no_output_____
###Markdown
Numeric Operations (3:3)*Can we also do this with vectors of data?* Yes, we can also do elementwise addition, multiplication, subtraction, etc. of series. Example:
###Code
num_ser1 + num_ser2
###Output
_____no_output_____
###Markdown
Numeric methods (1:4)*OK, these were some quite simple operations with pandas series. Are there other numeric methods?* Yes, pandas series and dataframes have other powerful numeric methods built-in. Consider an example series of 10 million randomly generated observations:
###Code
arr_rand = np.random.randn(10**7) # Draw 10^7 observations from standard normal, arr_rand = np.random.normal(size = 10**7)
s2 = pd.Series(arr_rand) # Convert to pandas series
s2
###Output
_____no_output_____
###Markdown
Numeric methods (2:4)Now, display the median of this distribution:
###Code
s2.median() # Display median
###Output
_____no_output_____
###Markdown
Other useful methods include: `mean`, `quantile`, `min`, `max`, `std`, `describe` and many more.
###Code
np.round(s2.describe(),2) # Display other characteristics of distribution (rounded)
###Output
_____no_output_____
###Markdown
Numeric methods (3:4)An important method is `value_counts`. This counts the number of occurrences of each value. Example:
###Code
cuts = np.arange(-10, 10, 1) # range from -10 to 10 with intervals of unit size
cats = pd.cut(s2, cuts) # cut into categorical data
cats.value_counts()
###Output
_____no_output_____
###Markdown
What is the observation in the `value_counts` output - index or data? Numeric methods (4/4)*Are there other powerful numeric methods?* Yes: examples include - `unique`, `nunique`: the unique elements and the count of unique elements- `cut`, `qcut`: partition series into bins - `diff`: difference between every two consecutive observations- `cumsum`: cumulative sum- `nlargest`, `nsmallest`: the n largest/smallest elements - `idxmin`, `idxmax`: index which is minimal/maximal - `corr`: correlation matrix. Check the [series documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html) for more information. String Operations String Operations (1:3)*Do the numeric python operators also apply to strings?* In some cases yes, and this can be done very elegantly! Consider the following example with a series:
###Code
names_ser1 = pd.Series(['Nicklas', 'Jacob', 'Preben', 'Laila'])
names_ser1
###Output
_____no_output_____
###Markdown
Now add another string:
###Code
names_ser1 + ' works @ SAMF'
###Output
_____no_output_____
###Markdown
String Operations (2/3)*Can two vectors of strings also be combined as with numeric vectors?* Fortunately, yes:
###Code
names_ser2 = pd.Series(['python', 'something with pyramids', 'research', 'admin'])
names_ser1 + ' teaches ' + names_ser2
###Output
_____no_output_____
###Markdown
String Operations (3:3)*Any other types of vectorized operations with strings?* Many. In particular, there is a large set of string-specific operations (see `.str`-notation below). Some examples (see table 7-5 in PDA for more - we will revisit in session 5):
###Code
names_ser1.str.upper() # works similarly with lower()
names_ser1.str.contains('k')
names_ser1.str[0:2] # We can even do vectorized slicing of strings!
###Output
_____no_output_____
###Markdown
Categorical Data The Categorical Data Type*Are string (or object) columns attractive to work with?*
###Code
pd.Series(['Pandas', 'series'])
###Output
_____no_output_____
###Markdown
No, sometimes the categorical data type is better:- Use categorical data when many characters are repeated - Less storage and faster computations- You can put some order (structure) on your string data- It also allows new features: - Plots have bars, violins etc. sorted according to category order Example of Categorical Data (1:2)Simulate data:
###Code
edu_list = ['BSc Political Science', 'Secondary School'] + ['High School']*2
str_ser = pd.Series(edu_list*10**5)
str_ser
###Output
_____no_output_____
###Markdown
Option 1: No order
###Code
cat_ser = str_ser.astype('category')
cat_ser
###Output
_____no_output_____
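###Markdown
To see the storage benefit mentioned above, compare the memory footprint of the string series and its categorical version (exact numbers will vary with pandas version and data):
```python
str_ser.memory_usage(deep=True), cat_ser.memory_usage(deep=True)
```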
###Markdown
Example of Categorical Data (2:2)Option 2: Order
###Code
edu_cats = ['Secondary School', 'High School', 'BSc Political Science']
cats = pd.Categorical(str_ser, categories=edu_cats, ordered=True)
cat_ser2 = pd.Series(cats, index=str_ser.index)
cat_ser2
###Output
_____no_output_____
###Markdown
Numbers as CategoriesIt is natural to think of measures in categories, e.g. small and large. *Can we convert our numerical data to bins in a smart way?* Yes, there are two methods that are useful (and you just applied one of them earlier in this session!):- `cut` which divides data by user specified bins- `qcut` which divides data by user specified quantiles - E.g. median, $q=0.5$; lower quartile threshold, $q=0.25$; etc.
###Code
cat_ser3 = pd.qcut(pd.Series(np.random.normal(size = 10**6)), q = [0,0.025, 0.975, 1])
cat_ser3.cat.categories
cat_ser3.cat.codes.head(5)
###Output
_____no_output_____
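###Markdown
The cell above used `qcut` (quantile-based bins); for completeness, a small sketch of `cut` with user-specified bin edges and labels:
```python
pd.cut(pd.Series([1.2, 3.5, 7.8, 0.4]),
       bins=[0, 2, 5, 10],
       labels=['small', 'medium', 'large'])
```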
###Markdown
Converting to Numeric and BinaryFor regression, we often want our string / categorical variable as dummy variables:- That is, all categories have their own binary column (0 and 1) - Note: We may leave one 'reference' category out here (intro statistics)- Rest as numeric *How can we do this?* Insert dataframe, `df`, into the function as `pd.get_dummies(df)`
###Code
pd.get_dummies(cat_ser3).head(5)
###Output
_____no_output_____
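###Markdown
To leave out a 'reference' category as mentioned above, `get_dummies` accepts `drop_first=True`:
```python
pd.get_dummies(cat_ser3, drop_first=True).head(5)
```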
###Markdown
Time Series Data Temporal Data Type*Why is time so fundamental?* Every measurement made by a human was made at some point in time - therefore, it has a "timestamp"! Formats for Time*How are time stamps measured?* 1. **Datetime** (ISO 8601): Standard calendar - year, month, day (minute, second, millisecond); timezone - can come as string in raw data 2. **Epoch time**: Seconds since January 1, 1970 - 00:00, GMT (Greenwich time zone) - nanoseconds in pandas Time Data in Pandas*Does Pandas store it in a smart way?* Pandas and numpy have native support for temporal data combining datetime and epoch time.
###Code
str_ser2 = pd.Series(['20210101', '20210727', '20210803', '20211224'])
dt_ser = pd.to_datetime(str_ser2)
dt_ser
###Output
_____no_output_____
###Markdown
Example of Passing Temporal Data*How does the input type matter for how time data is passed?*A lot! As we will see, `to_datetime()` may assume either *datetime* or *epoch time* format:
###Code
pd.to_datetime(str_ser2)
pd.to_datetime(str_ser2.astype(int))
###Output
_____no_output_____
###Markdown
Time Series Data*Why are temporal data powerful?*We can easily make and plot time series. Example of $\sim$40 years of Apple stock prices:- Tip: Install in terminal using: *pip install yfinance* in Anaconda Prompt
###Code
! pip install yfinance
import yfinance as yf
plt.plot(yf.download("AAPL", data_source='yahoo')['Adj Close'])
plt.yscale('log')
plt.xlabel('Time')
plt.ylabel('Apple Stock Price')
###Output
[*********************100%***********************] 1 of 1 completed
###Markdown
Time Series Components*What is within the series that we just downloaded? What is a time series?*
###Code
aapl = yf.download("AAPL", data_source='yahoo')['Adj Close']
aapl.head(5)
aapl.head(5).index
###Output
_____no_output_____
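###Markdown
The `aapl` series above is a proper time series (it has a `DatetimeIndex`), so pandas' resampling tools apply directly. A one-line sketch of downsampling the daily prices to monthly means (`'M'` is pandas' month-end frequency alias):
```python
aapl_monthly = aapl.resample('M').mean()  # daily prices -> monthly averages
aapl_monthly.tail()
```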
###Markdown
So in essence, time series in pandas are often just series of data with a time index. Pandas and Time Series*Why is pandas good at handling and processing time series data?*It has specific tools for resampling and interpolating data (one resampling example was sketched above):- See 11.3, 11.5 and 11.6 in the PDA textbook Datetime in Pandas*What other uses might time data have?*We can extract data from datetime columns. These columns have the `dt` accessor and its sub-methods. Example:
###Code
dt_ser2 = pd.Series(aapl.index)
dt_ser2.dt.month #also year, weekday, hour, second
###Output
_____no_output_____ |
jupyter/smokeDev/imgAna_4.jupyter-py36.ipynb | ###Markdown
WikiRecentPhase4 [WikiRecentPhase3](./imgAna_3.ipynb) illustrated processing Wikipedia events continuously with Streams, using [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to find and extract images related to the event. Building on previous notebooks, this notebook extracts faces from the submitted image and analyzes them with machine learning facilities. Overview - Image Analysis utilizing deep learning The previous notebook extracted images from the Wikipedia events. This continues the processing using two deep learning models provided by IBM's [Model Asset Exchange](https://developer.ibm.com/exchanges/models/). - [Facial Recognizer](https://developer.ibm.com/exchanges/models/all/max-facial-recognizer/) to locate faces within an image. - [Facial Emotion Classifier](https://developer.ibm.com/exchanges/models/all/max-facial-emotion-classifier/) to classify the emotions of face(s) located within an image. The facial recognizer locates the faces, Streams extracts them and forwards them on for classification, and the results are eventually presented on a view where this notebook renders the original image, recognized faces and classification. Setup Add credentials for the IBM Streams service ICPD setup With the cell below selected, click the "Connect to instance" button in the toolbar to insert the credentials for the service. See an example. Cloud setup To use a Streams instance running in the cloud, set up a [credential.py](setup_credential.ipynb) Show me After doing the 'Setup' above you can use Menu 'Cell' | 'Run All' to compose, build, submit and start the rendering of the live Wikidata; go to [Show me now](showMeNow) for the rendering.
###Code
# Install components
!pip install sseclient
!pip install --user --upgrade streamsx
# Setup
import pandas as pd
import pandas
from IPython.core.debugger import set_trace
from IPython.display import display, clear_output
import io
from statistics import mean
from collections import deque
from collections import Counter
from collections import OrderedDict
from urllib.parse import urlparse
import matplotlib.pyplot as plt
import ipywidgets as widgets
from ipywidgets import Button, HBox, VBox, Layout
from matplotlib.pyplot import imshow
from PIL import Image, ImageTk
from bs4 import BeautifulSoup
from urllib.parse import urlparse
import numpy as np
%matplotlib inline
from sseclient import SSEClient as EventSource
from ipywidgets import Button, HBox, VBox, Layout
from functools import lru_cache
import requests
from PIL import Image
from io import BytesIO
import copy
import base64
from streamsx.topology.topology import *
import streamsx.rest as rest
from streamsx.topology import context
###Output
_____no_output_____
###Markdown
Support functions for Jupyter
###Code
def catchInterrupt(func):
"""decorator : when interupt occurs the display is lost if you don't catch it
TODO * <view>.stop_data_fetch() # stop
"""
def catch_interrupt(*args, **kwargs):
try:
func(*args, **kwargs)
except (KeyboardInterrupt): pass
return catch_interrupt
#
# Support for locating/rendering views.
def display_view_stop(eventView, period=2):
"""Wrapper for streamsx.rest_primitives.View.display() to have button. """
button = widgets.Button(description="Stop Updating")
display(button)
eventView.display(period=period)
def on_button_clicked(b):
eventView.stop_data_fetch()
b.description = "Stopped"
button.on_click(on_button_clicked)
def view_events(views):
"""
Build interface to display a list of views and
display view when selected from list.
"""
view_names = [view.name for view in views]
nameView = dict(zip(view_names, views))
select = widgets.RadioButtons(
options = view_names,
value = None,
description = 'Select view to display',
disabled = False
)
def on_change(b):
if (b['name'] == 'label'):
clear_output(wait=True)
[view.stop_data_fetch() for view in views ]
display(select)
display_view_stop(nameView[b['new']], period=2)
select.observe(on_change)
display(select)
def find_job(instance, job_name=None):
"""locate job within instance"""
for job in instance.get_jobs():
if job.applicationName.split("::")[-1] == job_name:
return job
else:
return None
def display_views(instance, job_name):
"Locate/promote and display all views of a job"
job = find_job(instance, job_name=job_name)
if job is None:
print("Failed to locate job")
else:
views = job.get_views()
view_events(views)
def list_jobs(_instance=None, cancel=False):
"""
Interactive selection of jobs to cancel.
    Prompts with a SelectMultiple widget; if there are no jobs, you're presented with a blank list.
"""
active_jobs = { "{}:{}".format(job.name, job.health):job for job in _instance.get_jobs()}
selectMultiple_jobs = widgets.SelectMultiple(
options=active_jobs.keys(),
value=[],
rows=len(active_jobs),
description = "Cancel jobs(s)" if cancel else "Active job(s):",
layout=Layout(width='60%')
)
cancel_jobs = widgets.ToggleButton(
value=False,
description='Cancel',
disabled=False,
button_style='warning', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Delete selected jobs',
icon="stop"
)
def on_value_change(change):
for job in selectMultiple_jobs.value:
print("canceling job:", job, active_jobs[job].cancel())
cancel_jobs.disabled = True
selectMultiple_jobs.disabled = True
cancel_jobs.observe(on_value_change, names='value')
if cancel:
return HBox([selectMultiple_jobs, cancel_jobs])
else:
return HBox([selectMultiple_jobs])
def render_image(image_url=None, output_region=None):
"""Write the image into a output region.
Args::
url: image
output_region: output region
.. note:: The creation of the output 'stage', if this is not done the image is rendered in the page and
the output region.
"""
try:
response = requests.get(image_url)
stage = widgets.Output(layout={'border': '1px solid green'})
except:
print("Error on request : ", image_url)
else:
if response.status_code == 200:
with output_region:
stage.append_display_data(widgets.Image(
value=response.content,
#format='jpg',
width=300,
height=400,
))
output_region.clear_output(wait=True)
###Output
_____no_output_____
###Markdown
Connect to the server : ICP4D or Cloud instance. Attempt the import below; if it fails, `cfg` will not be defined and we know we're using Cloud.
###Code
def get_instance():
"""Setup to access your Streams instance.
    ..note::The notebook works within both Cloud and ICP4D.
Refer to the 'Setup' cells above.
Returns:
instance : Access to Streams instance, used for submitting and rendering views.
"""
try:
from icpd_core import icpd_util
import urllib3
global cfg
cfg[context.ConfigParams.SSL_VERIFY] = False
instance = rest.Instance.of_service(cfg)
print("Within ICP4D")
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
except ImportError:
cfg = None
print("Outside ICP4D")
import credential
sc = rest.StreamingAnalyticsConnection(service_name='Streaming3Turbine',
vcap_services=credential.vcap_conf)
instance = sc.get_instances()[0]
return instance,cfg
instance,cfg = get_instance()
###Output
Outside ICP4D
###Markdown
List jobs and cancel.... This page will submit a job named 'WikiPhase4'. If it's running, you'll want to cancel it before submitting a new version. Alternatively, if it is already running and you don't need a new version, there is no need to cancel/submit; you can just proceed to the [Viewing data section](viewingData).
###Code
list_jobs(instance, cancel=True)
###Output
_____no_output_____
###Markdown
Support functions that are executed within Streams. Details of these functions can be found in previous notebooks of this suite.
###Code
def get_events():
"""fetch recent changes from wikievents site using SSE"""
for change in EventSource('https://stream.wikimedia.org/v2/stream/recentchange'):
if len(change.data):
try:
obj = json.loads(change.data)
except json.JSONDecodeError as err:
print("JSON l1 error:", err, "Invalid JSON:", change.data)
except json.decoder.JSONDecodeError as err:
print("JSON l2 error:", err, "Invalid JSON:", change.data)
else:
yield(obj)
class sum_aggregation():
def __init__(self, sum_map={'new_len':'newSum','old_len':'oldSum','delta_len':'deltaSum' }):
"""
Summation of column(s) over a window's tuples.
Args::
            sum_map : specify tuple columns to be summed and the result field.
            tuples : at run time, a list of tuples will flow in; each field is summed.
"""
self.sum_map = sum_map
def __call__(self, tuples)->dict:
"""
Args:
tuples : list of tuples constituting a window, over all the tuples sum using the sum_map key/value
to specify the input and result field.
Returns:
dictionary of fields summations over tuples
"""
summaries = dict()
for summary_field,result_field in self.sum_map.items():
summation = sum([ele[summary_field] for ele in tuples])
summaries.update({result_field : summation})
return(summaries)
import collections
class tally_fields(object):
def __init__(self, top_count=3, fields=['user', 'wiki', 'title']):
"""
Tally fields of a list of tuples.
Args::
fields : fields of tuples that are to be tallied
"""
self.fields = fields
self.top_count = top_count
def __call__(self, tuples)->dict:
"""
Args::
tuples : list of tuples tallying to perform.
return::
dict of tallies
"""
tallies = dict()
for field in self.fields:
stage = [tuple[field] for tuple in tuples if tuple[field] is not None]
tallies[field] = collections.Counter(stage).most_common(self.top_count)
return tallies
import csv
class wiki_lang():
"""
Augment the tuple to include language wiki event.
Mapping is loaded at build time and utilized at runtime.
"""
def __init__(self, fname="wikimap.csv"):
self.wiki_map = dict()
with open(fname, mode='r') as csv_file:
csv_reader = csv.DictReader(csv_file)
for row in csv_reader:
self.wiki_map[row['dbname']] = row
def __call__(self, tuple):
"""using 'wiki' field to look pages code, langauge and native
Args:
tuple: tuple (dict) with a 'wiki' fields
Returns:'
input tuple with 'code', 'language, 'native' fields added to the input tuple.
"""
if tuple['wiki'] in self.wiki_map:
key = tuple['wiki']
tuple['code'] = self.wiki_map[key]['code']
tuple['language'] = self.wiki_map[key]['in_english']
tuple['native'] = self.wiki_map[key]['name_language']
else:
tuple['code'] = tuple['language'] = tuple['native'] = None
return tuple
#@lru_cache(maxsize=None)
def shred_item_image(url):
"""Shred the item page, seeking image.
    Discover if it references an image by shredding the referencing url. If it does, dig deeper
and extract the 'src' link.
Locate the image within the page, locate <a class='image' src=**url** ,..>
This traverses two files, pulls the thumbnail ref and follows to fullsize.
Args:
url: item page to analyse
Returns:
If image found [{name,title,org_url},...]
    .. warning:: this fetches from wikipedia, requesting too frequently is bad manners. Uses the lru_cache()
    so it minimises the requests.
    This can pick up multiple titles on a page that is extracted; we keep only one.
"""
img_urls = list()
try:
rThumb = requests.get(url = url)
#print(r.content)
soupThumb = BeautifulSoup(rThumb.content, "html.parser")
divThumb = soupThumb.find("div", class_="thumb")
if divThumb is None:
print("No thumb found", url )
return img_urls
thumbA = divThumb.find("a", class_="image")
thumbHref = thumbA.attrs['href']
rFullImage = requests.get(url=thumbHref)
soupFull = BeautifulSoup(rFullImage.content, "html.parser")
except Exception as e:
print("Error request.get, url: {} except:{}".format(url, str(e)))
else:
divFull = soupFull.find("div", class_="fullImageLink", id="file")
if (divFull is not None):
fullA = divFull.find("a")
img_urls.append({"title":soupThumb.title.getText(),"img": fullA.attrs['href'],"org_url":url})
finally:
return img_urls
#@lru_cache(maxsize=None)
def shred_jpg_image(url):
"""Shed the jpg page, seeking image, the reference begins with 'Fred:' and
ends with '.jpg'.
Discover if referencing image by shredding referening url. If it is, dig deeper
and extract the 'src' link.
Locate the image within the page,
locate : <div class='fullImageLinks'..>
<a href="..url to image" ...>.</a>
:
</div>
Args:
url: item page to analyse
Returns:
If image found [{name,title,org_url='requesting url'},...]
    .. warning:: this fetches from wikipedia, requesting too frequently is bad manners. Uses the lru_cache()
so it minimises the requests.
"""
img_urls = list()
try:
r = requests.get(url = url)
soup = BeautifulSoup(r.content, "html.parser")
except Exception as e:
print("Error request.get, url: {} except:{}".format(url, str(e)))
else:
div = soup.find("div", class_="fullImageLink")
if (div is not None):
imgA = div.find("a")
img_urls.append({"title":soup.title.getText(),"img":"https:" + imgA.attrs['href'],"org_url":url})
else:
print("failed to find div for",url)
finally:
return img_urls
class soup_image_extract():
"""If the the field_name has a potential a image we
Return:
None : field did not have potenital for an image.
[] : had potential but no url found.
[{title,img,href}]
"""
def __init__(self, field_name="title", url_base="https://www.wikidata.org/wiki/"):
self.url_base = url_base
self.field_name = field_name
def __call__(self, _tuple):
title = _tuple[self.field_name]
img_desc = None
if (title[0] == "Q"):
lnk = self.url_base + title
img_desc = shred_item_image(lnk)
elif title.startswith("File:") and (title.endswith('.JPG') or title.endswith('.jpg')):
lnk = self.url_base + title.replace(' ','_')
img_desc = shred_jpg_image(lnk)
_tuple['img_desc'] = img_desc
return _tuple
class soup_image():
"""If the the field_name has a potential for a image we
Return:
None : field did not have potenital for an image.
[] : had potential but no url found.
[{title,img,href}]
"""
def __init__(self, field_name="title", url_base="https://www.wikidata.org/wiki/"):
self.url_base = url_base
self.field_name = field_name
self.cache_item = None
self.cache_jpg = None
def __call__(self, _tuple):
if self.cache_item is None:
self.cache_item = cache_url_process(shred_item_image)
self.cache_jpg = cache_url_process(shred_jpg_image)
title = _tuple[self.field_name]
img_desc = None
if (title[0] == "Q"):
lnk = self.url_base + title
img_desc = self.cache_item.cache_process(lnk)
print("cache_item", self.cache_item.stats())
elif title.startswith("File:") and (title.endswith('.JPG') or title.endswith('.jpg')):
lnk = self.url_base + title.replace(' ','_')
img_desc = self.cache_jpg.cache_process(lnk)
print("cache_jpg", self.cache_jpg.stats())
#print("cache_jpg", self.cache_jpg.stats())
_tuple['img_desc'] = img_desc
return _tuple
## Support of streams processing
class cache_url_process():
def __init__(self, process_url, cache_max=200):
"""I would use @lru_cache() but I ran into two problems.
- when I got to the server it could not find the function.
            - got a stack overflow when building the topology.
        Args::
            process_url: a function that processes the request when not cached.
            The function will accept a URL and return a dict.
Return::
result from process_url that may be a cached value.
"""
self.urls = OrderedDict()
self.hits = 0
self.attempts = 0
self.process = process_url
self.cache_max = cache_max
def cache_process(self, url):
self.attempts += 1
if url in self.urls:
self.hits += 1
stage = self.urls[url]
            del self.urls[url] # delete and re-insert below to mark as most recently used
self.urls[url] = stage
n = len(self.urls) - self.cache_max
[self.urls.popitem(last=False) for idx in range(n if n > 0 else 0)]
return stage
stage = self.process(url)
self.urls[url] = stage
return stage
def stats(self):
return dict({"attempts":self.attempts,"hits":self.hits,"len":len(self.urls)})
###Output
_____no_output_____
###Markdown
Facial Image Extraction + Emotion Analysis[Jump Table](jumpTable)Using [IBM Facial Recognizer](https://developer.ibm.com/exchanges/models/all/max-facial-recognizer/)
###Code
tmpBufIoOut = None
def facial_fetch(imgurl):
"""Using the facial recognizer get the location of all the faces on the image.
Args:
imgurl : image the recognizer is done on.
Return:
location of found faces
..note:
        - In light of the fact that we're using a free service, it can stop working at any time.
        - Pulls the binary image from wikipedia and forwards it to the service.
"""
predict_url='http://max-facial-recognizer.max.us-south.containers.appdomain.cloud/model/predict'
parsed = urlparse(imgurl)
filename = parsed.path.split('/')[-1]
if (filename.lower().endswith('.svg')):
print("Cannot process svg:", imgurl)
return list(), None
if (filename.lower().endswith('.tif')):
print("Cannot process tif:", imgurl)
return list(), None
try:
page = requests.get(imgurl)
except Exception as e:
print("Image fetch exception:", e)
return None, None
bufIoOut = io.BytesIO(page.content)
files = {'image': (filename, bufIoOut, "image/jpeg")}
try:
r = requests.post(predict_url, files=files)
except Exception as e:
print("Analysis service exception", e)
return None, None
if (r.status_code != 200):
print("Analysis failure:",r.status_code, r.json())
return None, None
analysis = r.json()
return analysis, bufIoOut
def facial_locate(imgurl):
analysis,bufIoOut = facial_fetch(imgurl)
if bufIoOut is None:
return None
    if len(analysis['predictions']) == 0:
print("No predictions found for", imgurl)
return None
return({'bin_image':bufIoOut, 'faces':analysis})
def crop_percent(img_dim, box_extent):
"""get the % of image the cropped image is"""
img_size = img_dim[0] * img_dim[1]
box_size = abs((int(box_extent[0]) - int(box_extent[2])) * (int(box_extent[1])- int(box_extent[3])))
percent = ((box_size/img_size) * 100)
return(percent)
def image_cropper(bin_image, faces):
"""Crop out the faces from a URL.
Args:
url : image images
        faces : list of {region,predictions} that should be cropped
Return:
dict with 'annotated_image' and 'crops'
'crops' is list of dicts with
{image:face image,
probability:chances it's a face,
image_percent:found reqion % of of the image,
detection_box:region of the original image that the image was extacted from}
'crops' empty - nothing found, no faces found
"""
crops = list()
for face in faces['predictions']:
i = Image.open(bin_image)
percent = crop_percent( i.size, face['detection_box'])
img = i.crop(face['detection_box'])
crops.append({'image':img, 'probability':face['probability'],'detection_box':face['detection_box'],'image_percent':percent})
return crops
class facial_image():
"""Extract all the faces from an image, for each found face generate a tuple with a face field.
Args:
- field_name : name of field on input tuple with the image description dict
- img_field : dictionary entry that has url of image
Return:
None - No 'img_desc' field or no faces found
List of tuples composed of the input tuple with a new 'face' field
face a dictionary:
- probability : probability that it's a face
- percentage : % of the field_img that the detection_box occupies
            - detection_box : coordinates of the extracted face within the original image
- bytes_PIL_b64 : cropped image in binary Base64 ascii
..notes:
1. the next operator in line should be the flat_map() that takes the list of tuples and converts
to a stream of tuples.
..code::
'''
## Example of displaying encoded cropped image.
from PIL import Image
from io import BytesIO
import copy
calidUrlImage = "URL of valid Image to be analysized"
minimal_tuple = {'img_desc':[{'img':validUrlImage}]}
fi = facial_image()
crops = fi.__call__(minimal_tuple)
for crop in crops:
cropImg = Image.open(io.BytesIO(base64.b64decode(crop['face']['bytes_PIL_b64'])))
print("Image Size",cropImg.size)
display(cropImg)
'''
"""
def __init__(self, field_name="img_desc", url_base="https://www.wikidata.org/wiki/", image_field='img'):
self.url_base = url_base
self.img_desc = field_name
self.img_field = image_field
self.cache_item = None
def __call__(self, _tuple):
if self.img_desc not in _tuple or len(_tuple[self.img_desc]) == 0:
return None
desc = _tuple[self.img_desc][0]
if self.img_field not in desc:
print("Missing 'img' field in 'img_desc'")
return None
processed = facial_locate(desc[self.img_field])
if processed is None:
return None
crops = image_cropper(processed['bin_image'], processed['faces'])
tuples = list()
for crop in crops:
augmented_tuple = copy.copy(_tuple)
with io.BytesIO() as output:
crop['image'].save(output, format="JPEG")
contents = output.getvalue()
crop['bytes_PIL_b64'] = base64.b64encode(contents).decode('ascii')
del crop['image']
augmented_tuple['face'] = crop
tuples.append(augmented_tuple)
return tuples
###Output
_____no_output_____
###Markdown
codeVerify debug Simulate a test tuple: facial_image
```python
validUrlImage = "https://upload.wikimedia.org/wikipedia/commons/5/52/Bundesarchiv_B_145_Bild-F023358-0012%2C_Empfang_in_der_Landesvertretung_Bayern%2C_Hallstein.jpg"
minimal_tuple = {'img_desc':[{'img':validUrlImage}]}
fi = facial_image()
crops = fi.__call__(minimal_tuple)
for crop in crops:
    cropImg = Image.open(io.BytesIO(base64.b64decode(crop['face']['bytes_PIL_b64'])))
    print("Image Size", cropImg.size)
    display(cropImg)
```
Facial Analysis Functionality [Jump Table](jumpTable) Using the [IBM emotion classifier](https://developer.ibm.com/exchanges/models/all/max-facial-emotion-classifier/) to analyze images that are being added to wikipedia.
###Code
def emotion_crop(bufIoOut, imgurl):
""" Our friends: "https://developer.ibm.com/exchanges/models/all/max-facial-emotion-classifier/"
Analyse an image using the service "http://max-facial-emotion-classifier.max.us-south.containers.appdomain.cloud/".
Send binary image to analysis
    The processing node does not necessarily return a prediction; this could be an indication that it's not a predictable image.
    Args:
        imgurl: the original source image that the cropped region came from
        bufIoOut : the binary cropped image to be analyzed
    Returns:
        None - error encountered
        [] : executed, no prediction.
        [{anger,contempt,disgust,happiness,neutral,sadness,surprise}]
    ..note:
        This utilizes a function put up for free by our friends, $$ == 0.
        It can stop working at any time.
"""
predict_url='http://max-facial-emotion-classifier.max.us-south.containers.appdomain.cloud/model/predict'
parsed = urlparse(imgurl)
filename = parsed.path.split('/')[-1]
files = {'image': (filename, bufIoOut, "image/jpeg")}
try:
r = requests.post(predict_url, files=files)
except Exception as e:
print("Analysis service exception", e)
return None
if (r.status_code != 200):
print("Analysis failure:",r.status_code, r.json())
return None
analysis = r.json()
if len(analysis['predictions']) == 0:
return list()
emotions = analysis['predictions'][0]['emotion_predictions']
return [{emot['label']:float("{0:.2f}".format(emot['probability'])) for emot in emotions}]
class emotion_image():
"""If there is an img entry, attempt to analyize
Args:
field_name : name of field on input tuple with the image description dict
img_field: dictionary entry that has url of image
Return:
None - No 'img_desc' field or no entries in the field
Add a emotion to the tuple.
Empty [] if nothing in img_desc or no emotion could be derived
None : field did not have potenital for an image.
[] : had potential but no url found.
[{title,img,href}]
"""
def __init__(self):
pass
def __call__(self, _tuple):
bufIoOut_decode_image = io.BytesIO(base64.b64decode(_tuple['face']['bytes_PIL_b64']))
url = _tuple['img_desc'][0]['img']
emotion = emotion_crop(bufIoOut_decode_image, url)
_tuple['emotion'] = emotion
return(_tuple)
###Output
_____no_output_____
###Markdown
codeVerify / Simulate a test tuple: facial_image + emotion operators.
```python
validUrlImage = "https://upload.wikimedia.org/wikipedia/commons/5/52/Bundesarchiv_B_145_Bild-F023358-0012%2C_Empfang_in_der_Landesvertretung_Bayern%2C_Hallstein.jpg"
minimal_tuple = {'img_desc':[{'img':validUrlImage}]}
fi = facial_image()
crops = fi.__call__(minimal_tuple)
ei = emotion_image()
for crop in crops:
    cropImg = Image.open(io.BytesIO(base64.b64decode(crop['face']['bytes_PIL_b64'])))
    print("Image Size", cropImg.size)
    display(cropImg)
    print(ei.__call__(crop)['emotion'])
```
Compose, build and submit the Streams application. The following code cell composes the Streams application depicted here: ![stillPhase4.jpg](images/stillPhase4.jpg) This notebook is an extension of the previous ones; I'll only discuss processing beyond 'langAugment'. For details regarding prior processing refer to the previous [notebook](./imgAna_3.ipynb)s. The events output by the map named 'soupActive' have an associated image determined by using the [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) library. At 'facialImgs' the 'img_desc' field is used by facial_image() to extract a list of faces from the image using the [Facial Recognizer](https://developer.ibm.com/exchanges/models/all/max-facial-recognizer/). The list of faces is decomposed into tuples at 'faceImg' by flat_map(), resulting in a tuple for every face found. The tuple includes: a binary version of the cropped face, the original image url, the location of the face within the image, the probability that it is a face, and the percentage of the image the face occupies. The binary version of the cropped face is processed by 'faceEmotion' using the [Facial Emotion Classifier](https://developer.ibm.com/exchanges/models/all/max-facial-emotion-classifier/) service. If the cropped face is deemed worthy of analysis, an emotion score is added to the tuple, where it can be inspected via the 'faceEmotion' view.
###Code
list_jobs(instance, cancel=True)
## TODO If WikiPhase4 is running, cancel before submitting.
def WikiPhase4(jobName=None, wiki_lang_fname=None):
"""
Compose topology.
    -- wiki_lang : csv file mapping database name to language
"""
topo = Topology(name=jobName)
### make sure we sseclient in Streams environment.
topo.add_pip_package('sseclient')
topo.add_pip_package('bs4')
## wiki events
wiki_events = topo.source(get_events, name="wikiEvents")
## select events generated by humans
human_filter = wiki_events.filter(lambda x: x['type']=='edit' and x['bot'] is False, name='humanFilter')
# pare down the humans set of columns
pared_human= human_filter.map(lambda x : {'timestamp':x['timestamp'],
'new_len':x['length']['new'],
'old_len':x['length']['old'],
'delta_len':x['length']['new'] - x['length']['old'],
'wiki':x['wiki'],'user':x['user'],
'title':x['title']},
name="paredHuman")
pared_human.view(buffer_time=1.0, sample_size=200, name="paredEdits", description="Edits done by humans")
## Define window(count)& aggregate
sum_win = pared_human.last(100).trigger(20)
sum_aggregate = sum_win.aggregate(sum_aggregation(sum_map={'new_len':'newSum','old_len':'oldSum','delta_len':'deltaSum' }), name="sumAggregate")
sum_aggregate.view(buffer_time=1.0, sample_size=200, name="aggEdits", description="Aggregations of human edits")
## Define window(count) & tally edits
tally_win = pared_human.last(100).trigger(10)
tally_top = tally_win.aggregate(tally_fields(fields=['user', 'title'], top_count=10), name="talliesTop")
tally_top.view(buffer_time=1.0, sample_size=200, name="talliesCount", description="Top count tallies: user,titles")
## augment filterd/pared edits with language
if cfg is None:
lang_augment = pared_human.map(wiki_lang(fname='../datasets/wikimap.csv'), name="langAugment")
else:
lang_augment = pared_human.map(wiki_lang(fname=os.environ['DSX_PROJECT_DIR']+'/datasets/wikimap.csv'), name="langAugment")
lang_augment.view(buffer_time=1.0, sample_size=200, name="langAugment", description="Language derived from wiki")
## Define window(time) & tally language
time_lang_win = lang_augment.last(datetime.timedelta(minutes=2)).trigger(5)
time_lang = time_lang_win.aggregate(tally_fields(fields=['language'], top_count=10), name="timeLang")
time_lang.view(buffer_time=1.0, sample_size=200, name="talliesTime", description="Top timed tallies: language")
## attempt to extract image using beautifulsoup add img_desc[{}] field
soup_image = lang_augment.map(soup_image_extract(field_name="title", url_base="https://www.wikidata.org/wiki/"),name="imgSoup")
soup_active = soup_image.filter(lambda x: x['img_desc'] is not None and len(x['img_desc']) > 0, name="soupActive")
soup_active.view(buffer_time=1.0, sample_size=200, name="soupActive", description="Image extracted via Bsoup")
## facial extraction -
facial_images = soup_active.map(facial_image(field_name='img_desc'),name="facialImgs")
face_image = facial_images.flat_map(name="faceImg")
face_image.view(buffer_time=10.0, sample_size=20, name="faceImg", description="Face image analysis/extraction")
    ## emotion analysis on image -
    face_emotion = face_image.map(emotion_image(), name="faceEmotion")
    face_emotion.view(buffer_time=10.0, sample_size=20, name="faceEmotion", description="Facial emotion analysis")
return ({"topo":topo,"view":{ }})
###Output
_____no_output_____
###Markdown
Submitting job : ICP or Cloud
###Code
resp = WikiPhase4(jobName="WikiPhase4")
if cfg is not None:
# Disable SSL certificate verification if necessary
cfg[context.ConfigParams.SSL_VERIFY] = False
submission_result = context.submit("DISTRIBUTED",resp['topo'], config=cfg)
if cfg is None:
import credential
cloud = {
context.ConfigParams.VCAP_SERVICES: credential.vcap_conf,
context.ConfigParams.SERVICE_NAME: "Streaming3Turbine",
context.ContextTypes.STREAMING_ANALYTICS_SERVICE:"STREAMING_ANALYTIC",
context.ConfigParams.FORCE_REMOTE_BUILD: True,
}
submission_result = context.submit("STREAMING_ANALYTICS_SERVICE",resp['topo'],config=cloud)
# The submission_result object contains information about the running application, or job
if submission_result.job:
print("JobId: ", submission_result['id'] , "Name: ", submission_result['name'])
###Output
_____no_output_____
###Markdown
Viewing data The running application has a number of views to see what data is moving through the stream. The following cell will fetch the views' queue and display its data when selected.

| view name | description of data in the view | bot |
|---------|-------------|--------------|
| aggEdits | summarised fields | False |
| langAugment | mapped augmented fields | False |
| paredEdits | selected fields | False |
| talliesCount | last 100 messages tallied | False |
| talliesTime | 2 minute windowed | False |
| soupActive | extracted image links | False |
| faceImg | analyse image for faces and extract | False |
| faceEmotion | emotional analysis of facial images | False |

You want to stop fetching the view data when you're done.
###Code
# View the data that is flowing.....
display_views(instance, "WikiPhase4")
###Output
_____no_output_____
###Markdown
Jump Table: - **[Running / Active](runningActive)** - [Access Foundation](accessFoundation)@server - [Compose Submit](composeSubmit)@server - [Language Distribution](languageDistribution) - [Soup Functionality](soupFunctionality)@server - [Image Extraction](imageExtraction) - [Facial Image Extraction](facialFunctionality)@server - [Image Facial Location](imageFacialAnalysis) - [Analysis Functionality](analysisFunctionality)@server - [Image Emotion Analysis](imageEmotionAnalysis) : - ![phase4_1.gif](attachment:phase5.gif) Image Facial Location with [MAX](https://developer.ibm.com/exchanges/models/) Using [IBM Facial Recognizer](https://developer.ibm.com/exchanges/models/all/max-facial-recognizer/) [Jump Table](jumpTable) Access Views / Render Views UI From the server this is getting the cropped images. Streams passes the image through the IBM Facial Recognizer, which extracts the coordinates of potential faces. A new tuple is generated for each potential face, consisting of the
- input tuple, which includes the url of the image being analyzed
- a face dict() consisting of:
  - probability : probability that it's a face
  - image_percent : % of the original image that the found face occupies
  - bytes_PIL_b64 : binary image version of the found face
  - detection_box : region within the original image where the face was detected
###Code
from PIL import Image, ImageDraw # https://pillow.readthedocs.io/en/4.3.x/
import requests # http://docs.python-requests.org/en/master/
def line_box(ele):
"""build a box with lines."""
return (ele[0],ele[1],ele[0],ele[3],ele[2],ele[3],ele[2],ele[1],ele[0],ele[1])
def resize_image(bin_image, basewidth=None, baseheight=None):
"""Resize image proportional to the base, make it fit in cell"""
if basewidth is not None:
wpercent = (basewidth/float(bin_image.size[0]))
hsize = int((float(bin_image.size[1])*float(wpercent)))
return bin_image.resize((basewidth,hsize), Image.ANTIALIAS)
wpercent = (baseheight/float(bin_image.size[1]))
wsize = int((float(bin_image.size[0])*float(wpercent)))
return bin_image.resize((wsize,baseheight), Image.ANTIALIAS)
# example image url: https://m.media-amazon.com/images/S/aplus-media/vc/6a9569ab-cb8e-46d9-8aea-a7022e58c74a.jpg
def face_crop(bin_image, detection_box, percent, probability):
"""Crop out the faces from a URL using detection_box and send to analysis.
Args:
url : image images
faces : list of {region,predictions} that that should be cropped
Return:
dict with 'annotated_image' and 'crops'
'crops' is list of dicts with
{image:face image,
probability:chances it's a face,
image_percent:found reqion % of of the image,
detection_box:region of the original image that the image was extacted from}
'crops' empty - nothing found, no faces found
"""
crops = list()
draw = ImageDraw.Draw(bin_image)
box_width = 5 if percent > .01 else 20
box_fill = "orange" if probability > .90 else "red"
draw.line(line_box(detection_box), fill=box_fill, width=box_width)
#draw.rectangle(detection_box, fill=128)
return {'annotated_image':bin_image}
###Output
_____no_output_____
###Markdown
codeVerify debug Test the scale box```pythongraph_widget = widgets.Output(layout={'border': '1px solid blue','width':'200pt','height':'200pt'})graphboard = VBox([ graph_widget])display(graphboard)scale(graph_widget)```
###Code
order_index = ['surprise', 'happiness', 'contempt', 'neutral', 'sadness', 'anger', 'disgust','fear']
colors = ['hotpink', 'gold', 'lightcoral', 'beige', 'brown', 'red', 'green', 'purple']
def scale(region):
"""Display the scale used on the scoring.
Args:
region to write the scale into
..note: this invoked when the emotion classifier does not return any results. Put
up the scale to understand the score.
"""
with region:
fz = 150
fd = -1.30
plt.text(0.0, 1.0,
"{:^35s}".format("Emotion Anlysis Inconclusive"), size=fz,
ha="left", va="top",
bbox=dict(boxstyle="square",
fc="white",
fill=True)
)
plt.rcParams['font.family'] = 'monospace'
for idx in range(len(colors)):
plt.text(0.0, (fd * idx) + -2,
"{:^35s}".format(order_index[idx]), size=fz,
ha="left", va="top",
bbox=dict(boxstyle="square",
fc=colors[idx],
fill=True
)
)
plt.axis('off')
plt.show()
clear_output(wait=True)
bar_idx = 0
img_dict = dict()
def bar_cell(percentage, probability, emotion, crop_img):
"""In cells below main photo the results of the two
deep learning models are displayed by this function.
"""
global bar_idx
with crops_bar[bar_idx % bar_cells]['image']:
display(resize_image(crop_img,basewidth=100))
clear_output(wait=True)
if len(emotion) > 0:
print(emotion)
with crops_bar[bar_idx % bar_cells]['pie']:
fig1, ax1 = plt.subplots()
emot = [emotion[0][key] for key in order_index]
#df = pandas.DataFrame(emotion[0], index=order_index)
ax1.pie(emot ,
shadow=True, startangle=90, colors=colors)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
clear_output(wait=True)
else:
scale(crops_bar[bar_idx % bar_cells]['pie'])
crops_bar[bar_idx % bar_cells]['probability'].value = "conf : {0:.2f}%".format(probability)
crops_bar[bar_idx % bar_cells]['image_percent'].value = "img {0:.2f}%".format(percentage)
bar_idx += 1
def encode_img(img):
"""must be easier way"""
with io.BytesIO() as output:
img.save(output, format="JPEG")
contents = output.getvalue()
return base64.b64encode(contents).decode('ascii')
def decode_img(bin64):
"""must be easier way"""
img = Image.open(io.BytesIO(base64.b64decode(bin64)))
return img
def render_emotions(emotion_tuples):
"""Using view data display the emotion results.
    ..note: We have the cropped face image, the location and the url of the original image.
    Display the original image with an outline at the face location; I overlay multiple outlines
    on the image by holding the annotated images in a map, which also reduces the number of times I
    pull from wikipedia.
"""
for emotion in emotion_tuples:
img_url = emotion['img_desc'][0]['img']
percent = emotion['face']['image_percent']
probability = emotion['face']['probability']
if (img_url in img_dict):
print("cache", img_url)
bimg = decode_img(img_dict[img_url])
face_crops = face_crop(bimg,emotion['face']['detection_box'], percent, probability)
img_dict[img_url] = encode_img(face_crops['annotated_image'])
with full_widget:
fullImg = face_crops['annotated_image']
dspImg = resize_image(fullImg, baseheight=400)
display(dspImg)
clear_output(wait=True)
else:
print("web", img_url)
r = requests.get(img_url, timeout=4.0)
if r.status_code != requests.codes.ok:
assert False, 'Status code error: {}.'.format(r.status_code)
with Image.open(io.BytesIO(r.content)) as bin_image:
bimg = bin_image
#display(bimg)
face_crops = face_crop(bimg,emotion['face']['detection_box'], percent, probability)
img_dict[img_url] = encode_img(face_crops['annotated_image'])
with full_widget:
fullImg = face_crops['annotated_image']
dspImg = resize_image(fullImg, baseheight=400)
display(dspImg)
clear_output(wait=True)
binImg = emotion['face']['bytes_PIL_b64']
bar_cell(percent,
probability,
emotion['emotion'],
Image.open(io.BytesIO(base64.b64decode(binImg))))
###Output
_____no_output_____
###Markdown
Show me now
###Code
## Setup the 'Dashboard' - Display the images sent to Wikipedia, result of facial extraction followed by emotion (pie chart) analysis
## Next cell populates the 'Dashboard'.....
crops_bar = list() # setup in layout section.
bar_cells = 7
## Layout the dashboard cells
url_widget = widgets.Label(value="Img URL", layout={'border': '1px solid red','width':'100%'})
full_widget = widgets.Output(layout={'border': '1px solid red','width':'100%','height':'300pt'})
title_widget = widgets.Label(value="Title", layout={'border': '1px solid red','width':'30%'})
vbox_bar = list()
for idx in range(bar_cells):
vbox = {
'probability' : widgets.Label(value="prop:{}".format(idx), layout={'border': '1px solid blue','width':'100pt'}),
'image_percent' : widgets.Label(value="image %", layout={'border': '1px solid blue','width':'100pt'}),
'image' : widgets.Output(layout={'border': '1px solid blue','width':'100pt','height':'120pt'}),
'pie' : widgets.Output(layout={'border': '1px solid black','width':'100pt','height':'100pt'})
}
crops_bar.append(vbox)
vbox_bar.append(widgets.VBox([vbox['probability'], vbox['image_percent'], vbox['image'], vbox['pie']]))
display(widgets.VBox([full_widget,widgets.HBox(vbox_bar)]))
# Populate the dashboard - If you want this to run longer set cnt higher
cnt = 40
_view = instance.get_views(name="faceEmotion")[0]
_view.start_data_fetch()
for idx in range(10):
emotion_tuples = _view.fetch_tuples(max_tuples=10, timeout=20)
print("Count of tuples", len(emotion_tuples))
render_emotions(emotion_tuples)
_view.stop_data_fetch()
###Output
_____no_output_____
###Markdown
Cancel jobs when you're done
###Code
list_jobs(instance, cancel=True)
###Output
_____no_output_____ |
examples/qualitative_bankruptcy/multi_classification_image_qualitative_bankruptcy.ipynb | ###Markdown
Running a Federated Cycle with SynergosThis tutorial aims to give you an understanding of how to use the synergos package to run a full federated learning cycle. In a federated learning system, there are many contributory participants, known as Worker nodes, which receive a global model to train on, with their own local dataset. The dataset does not leave the individual Worker nodes at any point, and remains private to the node.The job to synchronize, orchestrate and initiate a federated learning cycle falls on a Trusted Third Party (TTP). The TTP pushes out the global model architecture and parameters for the individual nodes to train on, calling upon the required data, based on tags, e.g. "training", which point to relevant data on the individual nodes. At no point does the TTP receive, copy or access the Worker nodes' local datasets.In this tutorial, you will go through the steps required by each participant (TTP and Worker), by simulating each of them locally with docker containers. Specifically, we will simulate a TTP and 2 Workers. At the end of this, we will have:- Connected the participants- Trained the model- Evaluated the model About the Dataset and TaskThe dataset used in this notebook is on qualitative bankruptcy tabular data, comprising 6 predictor features, and 1 target feature for a binary class. The dataset is available in the same directory as this notebook. Within the dataset directory, `data1` is for Worker 1 and `data2` is for Worker 2. The task to be carried out will be a binary classification.The dataset we have provided is a processed subset of the [original Qualitative Bankruptcy dataset](https://archive.ics.uci.edu/ml/datasets/Qualitative_Bankruptcy).**Reference:**- *Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.* Initiating the docker containers Before we begin, we have to start the docker containers.Firstly, pull the required docker images with the following commands:1. Synergos TTP (Basic):```docker pull gcr.io/synergos-aisg/synergos_ttp:v0.1.0docker tag gcr.io/synergos-aisg/synergos_ttp:v0.1.0 synergos_ttp:v0.1.0```2. Synergos Worker:```docker pull gcr.io/synergos-aisg/synergos_worker:v0.1.0docker tag gcr.io/synergos-aisg/synergos_worker:v0.1.0 synergos_worker:v0.1.0```Next, in separate CLI terminals, run the following command:**Note: For Windows users, it is advisable to use powershell or command prompt based interfaces****Worker 1**```docker run -v :/worker/data -v :/worker/outputs --name worker_1 synergos_worker:v0.1.0 --id worker_1 --logging_variant basic```**Worker 2**```docker run -v :/worker/data -v :/worker/outputs --name worker_2 synergos_worker:v0.1.0 --id worker_2 --logging_variant basic```**TTP**```docker run -p 0.0.0.0:5000:5000 -p 5678:5678 -p 8020:8020 -p 8080:8080 -v :/ttp/mlflow -v :/ttp/data --name ttp --link worker_1 --link worker_2 synergos_ttp:v0.1.0 --id ttp --logging_variant basic -c``` Once ready, for each terminal, you should see that a Flask app is running on http://0.0.0.0:5000 of the container.You are now ready for the next step. ConfigurationIn a new terminal, run `docker inspect bridge` and find the IPv4Address for each container. 
Ideally, the containers should have the following addresses:- worker_1 address: 172.17.0.2- worker_2 address: 172.17.0.3- ttp address: 172.17.0.4If not, just note the relevant IP addresses for each docker container.Run the cells below.**Note: For Windows users, `host` should be Docker Desktop VM's IP. Follow [this](https://stackoverflow.com/questions/58073936/how-to-get-ip-address-of-docker-desktop-vm) for instructions on how to find the IP**
###Code
from synergos import Driver
host = "localhost" # Different for Windows users
port = 5000
# Initiate Driver
driver = Driver(host=host, port=port)
###Output
_____no_output_____
###Markdown
Phase 1: REGISTERSubmitting TTP & Participant metadata 1A. TTP creates a collaboration
###Code
collab_task = driver.collaborations
collab_task.create('test_collaboration')
###Output
_____no_output_____
###Markdown
1B. TTP controller creates a project
###Code
driver.projects.create(
collab_id="test_collaboration",
project_id="test_project",
action="classify",
incentives={
'tier_1': [],
'tier_2': [],
}
)
###Output
_____no_output_____
###Markdown
1C. TTP controller creates an experiment
###Code
driver.experiments.create(
collab_id="test_collaboration",
project_id="test_project",
expt_id="test_experiment",
model=[
{
"activation": "sigmoid",
"is_input": True,
"l_type": "Linear",
"structure": {
"bias": True,
"in_features": 18,
"out_features": 1
}
}
]
)
###Output
_____no_output_____
###Markdown
1D. TTP controller creates a run
###Code
driver.runs.create(
collab_id="test_collaboration",
project_id="test_project",
expt_id="test_experiment",
run_id="test_run",
rounds=2,
epochs=1,
base_lr=0.0005,
max_lr=0.005,
criterion="L1Loss"
)
###Output
_____no_output_____
###Markdown
1E. Participants registers their servers' configurations and roles
###Code
participant_resp_1 = driver.participants.create(
participant_id="worker_1",
)
display(participant_resp_1)
participant_resp_2 = driver.participants.create(
participant_id="worker_2",
)
display(participant_resp_2)
registration_task = driver.registrations
# Add and register worker_1 node
registration_task.add_node(
host='172.17.0.2',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.list_nodes()
registration_task.create(
collab_id="test_collaboration",
project_id="test_project",
participant_id="worker_1",
role="host"
)
registration_task = driver.registrations
registration_task.add_node(
host='172.17.0.3',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.list_nodes()
registration_task.create(
collab_id="test_collaboration",
project_id="test_project",
participant_id="worker_2",
role="guest"
)
###Output
_____no_output_____
###Markdown
1F. Participants registers their tags for a specific project
###Code
driver.tags.create(
collab_id="test_collaboration",
project_id="test_project",
participant_id="worker_1",
train=[["train"]],
evaluate=[["evaluate"]],
predict=[["predict"]]
)
driver.tags.create(
collab_id="test_collaboration",
project_id="test_project",
participant_id="worker_2",
train=[["train"]],
evaluate=[["evaluate"]],
predict=[["predict"]]
)
###Output
_____no_output_____
###Markdown
Phase 2: TRAINAlignment, Training & Optimisation 2A. Perform multiple feature alignment to dynamically configure datasets and models for cross-grid compatibility
###Code
driver.alignments.create(collab_id='test_collaboration',
project_id="test_project",
verbose=False,
log_msg=False)
###Output
_____no_output_____
###Markdown
2B. Trigger training across the federated grid
###Code
model_resp = driver.models.create(
collab_id="test_collaboration",
project_id="test_project",
expt_id="test_experiment",
run_id="test_run",
log_msg=False,
verbose=False
)
display(model_resp)
###Output
_____no_output_____
###Markdown
Phase 3: EVALUATE Validation & Predictions 3A. Perform validation(s) of combination(s)
###Code
driver.validations.create(
collab_id='test_collaboration',
project_id="test_project",
expt_id="test_experiment",
run_id="test_run",
log_msg=False,
verbose=False
)
###Output
_____no_output_____
###Markdown
3B. Perform prediction(s) of combination(s)
###Code
driver.predictions.create(
collab_id="test_collaboration",
tags={"test_project": [["predict"]]},
participant_id="worker_1",
project_id="test_project",
expt_id="test_experiment",
run_id="test_run"
)
###Output
_____no_output_____ |
assignment1/svm_solution.ipynb | ###Markdown
Multiclass Support Vector Machine exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*In this exercise you will: - implement a fully-vectorized **loss function** for the SVM- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** using numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
CIFAR-10 Data Loading and Preprocessing
###Code
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print('Training data shape: ', X_train.shape)
print('Validation data shape: ', X_val.shape)
print('Test data shape: ', X_test.shape)
print('dev data shape: ', X_dev.shape)
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print(mean_image[:10]) # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print(X_train.shape, X_val.shape, X_test.shape, X_dev.shape)
###Output
[130.64189796 135.98173469 132.47391837 130.05569388 135.34804082
131.75402041 130.96055102 136.14328571 132.47636735 131.48467347]
###Markdown
SVM ClassifierYour code for this section will all be written inside **cs231n/classifiers/linear_svm.py**. As you can see, we have prefilled the function `svm_loss_naive` which uses for loops to evaluate the multiclass SVM loss function.
###Code
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.000005)
print('loss: %f' % (loss, ))
###Output
loss: 9.617520
###Markdown
The `grad` returned from the function above is right now all zero. Derive the gradient for the SVM cost function and implement it inline inside the function `svm_loss_naive`. You will find it helpful to interleave your new code inside the existing function.To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
###Code
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad)
###Output
numerical: 8.951270 analytic: 8.826210, relative error: 7.034701e-03
numerical: 24.887771 analytic: 24.887771, relative error: 1.084767e-11
numerical: -3.285003 analytic: -3.285003, relative error: 1.332287e-10
numerical: 8.443979 analytic: 8.523606, relative error: 4.692835e-03
numerical: 14.322607 analytic: 14.322607, relative error: 3.709889e-12
numerical: -26.395935 analytic: -26.350645, relative error: 8.586497e-04
numerical: -40.117003 analytic: -40.117003, relative error: 9.268210e-12
numerical: -45.846247 analytic: -45.813232, relative error: 3.601883e-04
numerical: 9.707326 analytic: 9.687735, relative error: 1.010088e-03
numerical: 0.725204 analytic: 0.725204, relative error: 1.544081e-10
numerical: 9.554058 analytic: 9.554058, relative error: 3.220060e-11
numerical: -8.310320 analytic: -8.310320, relative error: 2.674436e-11
numerical: 8.886163 analytic: 8.886163, relative error: 7.509528e-11
numerical: 22.097148 analytic: 22.097148, relative error: 2.349780e-11
numerical: -16.951624 analytic: -16.948319, relative error: 9.748268e-05
numerical: 24.495631 analytic: 24.495631, relative error: 1.060943e-11
numerical: -6.607718 analytic: -6.537854, relative error: 5.314600e-03
numerical: 1.446586 analytic: 1.446586, relative error: 8.729934e-11
numerical: 7.927922 analytic: 7.780426, relative error: 9.389653e-03
numerical: -6.989631 analytic: -6.943361, relative error: 3.320892e-03
###Markdown
**Inline Question 1**It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? How would changing the margin affect the frequency of this happening? *Hint: the SVM loss function is not strictly speaking differentiable*$\color{blue}{\textit Your Answer:}$ *fill this in.*
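A small one-dimensional illustration of the hint (a sketch added here for illustration, not part of the assignment code; `f`, `x0` and `h` are illustrative names): near the kink of max(0, x), a centered numeric estimate and the analytic subgradient can legitimately disagree.
```
import numpy as np

f = lambda x: np.maximum(0.0, x)               # same kink as the hinge term in the SVM loss
x0, h = 1e-6, 1e-5                             # evaluation point very close to the kink
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)    # centered difference straddles the kink -> ~0.55
analytic = 1.0 if x0 > 0 else 0.0              # the subgradient an implementation would report
print(numeric, analytic)                       # the two disagree at such points
```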
###Code
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# The losses should match but your vectorized implementation should be much faster.
print('difference: %f' % (loss_naive - loss_vectorized))
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss and gradient: computed in %fs' % (toc - tic))
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss and gradient: computed in %fs' % (toc - tic))
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('difference: %f' % difference)
###Output
Naive loss and gradient: computed in 0.082087s
Vectorized loss and gradient: computed in 0.004408s
difference: 0.000000
###Markdown
Stochastic Gradient DescentWe now have vectorized and efficient expressions for the loss and the gradient, and our analytic gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
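Before calling the `LinearSVM` class below, the update rule itself fits in a few lines. This is only a sketch of the idea, reusing `svm_loss_vectorized` and the dev split from above; `W_sgd`, the batch size and the step count are illustrative, while the learning rate and regularization mirror the values used below.
```
W_sgd = 0.0001 * np.random.randn(3073, 10)                  # small random initial weights
num_train = X_dev.shape[0]
for it in range(100):
    idx = np.random.choice(num_train, 128, replace=True)    # sample a minibatch
    loss, grad = svm_loss_vectorized(W_sgd, X_dev[idx], y_dev[idx], 2.5e4)
    W_sgd -= 1e-7 * grad                                     # parameters -= learning_rate * gradient
print('final minibatch loss:', loss)
```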
###Code
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4,
num_iters=1500, verbose=True)
toc = time.time()
print('That took %fs' % (toc - tic))
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print('training accuracy: %f' % (np.mean(y_train == y_train_pred), ))
y_val_pred = svm.predict(X_val)
print('validation accuracy: %f' % (np.mean(y_val == y_val_pred), ))
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.39 on the validation set.
# Note: you may see runtime/overflow warnings during hyper-parameter search.
# This may be caused by extreme values, and is not a bug.
learning_rates = np.linspace(1e-7, 3e-7, 10)
regularization_strengths = np.linspace(2.5e4, 5e4, 10)
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
i = 0
for lr in learning_rates:
for reg in regularization_strengths:
i+=1
svm = LinearSVM()
svm.train(X_train, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val)
val_acc = np.mean(y_val == y_val_pred)
results[(lr,reg)] = (train_acc,val_acc)
if best_val<val_acc:
best_val = val_acc
best_svm = svm
print("\rDone:",i,end="")
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
print("")
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors, cmap=plt.cm.coolwarm)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors, cmap=plt.cm.coolwarm)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____ |
python_data_science/Week_3/Unix Command.ipynb | ###Markdown
Unix Command for Data Scientists Declare Filename
###Code
!ls ./unix
filename = './unix/shakespeare.txt'
!echo $filename
print(filename)
###Output
./unix/shakespeare.txt
./unix/shakespeare.txt
###Markdown
head
###Code
!head -n 3 $filename
###Output
This is the 100th Etext file presented by Project Gutenberg, and
is presented in cooperation with World Library, Inc., from their
Library of the Future and Shakespeare CDROMS. Project Gutenberg
###Markdown
tail
###Code
!tail -n 10 $filename
###Output
PERSONAL USE ONLY, AND (2) ARE NOT DISTRIBUTED OR USED
COMMERCIALLY. PROHIBITED COMMERCIAL DISTRIBUTION INCLUDES BY ANY
SERVICE THAT CHARGES FOR DOWNLOAD TIME OR FOR MEMBERSHIP.>>
End of this Etext of The Complete Works of William Shakespeare
###Markdown
wc
###Code
!wc $filename
!wc -l $filename
###Output
124505 ./unix/shakespeare.txt
###Markdown
cat
###Code
!cat $filename | wc -l
###Output
124505
###Markdown
grep
###Code
!grep -i 'parchment' $filename
## matching pattern one per line and counting the number of lines
!cat $filename | grep -o 'liberty' | wc -l
###Output
71
###Markdown
sed
###Code
!sed -e 's/parchment/manuscript/g' $filename > temp.txt
!grep -i 'manuscript' temp.txt
###Output
If the skin were manuscript, and the blows you gave were ink,
Ham. Is not manuscript made of sheepskins?
of the skin of an innocent lamb should be made manuscript? That
manuscript, being scribbl'd o'er, should undo a man? Some say the
Upon a manuscript, and against this fire
But here's a manuscript with the seal of Caesar;
With inky blots and rotten manuscript bonds;
Nor brass, nor stone, nor manuscript, bears not one,
###Markdown
sort
###Code
!head -n 5 $filename
!head -n 5 $filename | sort
# columns separated by ' ', sort on column 2 (-k2), case insensitive (-f)
!head -n 5 $filename | sort -f -t' ' -k2
###Output
This is the 100th Etext file presented by Project Gutenberg, and
Library of the Future and Shakespeare CDROMS. Project Gutenberg
is presented in cooperation with World Library, Inc., from their
often releases Etexts that are NOT placed in the Public Domain!!
###Markdown
uniq
###Code
!sort $filename | wc -l
!uniq $filename | wc -l
###Output
121532
###Markdown
Count most frequent word in the file using Unix
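The shell pipeline below can also be cross-checked from Python; a quick sketch with `collections.Counter` (plain whitespace splitting, so the counts may differ slightly from the `sed`-based tokenisation):
```
from collections import Counter

with open(filename) as f:
    counts = Counter(f.read().split())
print(counts.most_common(13))
```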
###Code
!sed -e 's/ /\n/g' -e 's/\r//g' < $filename | sed '/^$/d' | sort | uniq -c | sort -nr | head -13
###Output
23244 the
19542 I
18302 and
15623 to
15551 of
12532 a
10824 my
9576 in
9081 you
7851 is
7531 that
7068 And
6948 not
sort: write failed: 'standard output': Broken pipe
sort: write error
###Markdown
Writing output to the file
###Code
!sed -e 's/ /\'$'\n/g' < $filename | sort | uniq -c | sort -nr | head -13 > count_words.txt
!cat count_words.txt
###Output
502289 $
22678 the$
19163 I$
17868 and$
15324 to$
15216 of$
14779
12152 a$
10614 my$
9347 in$
8709 you$
7662 is$
7332 that$
###Markdown
Plot by importing the word counts into Python
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import csv
xTicks = []
y = []
with open('count_words.txt') as csvfile:
plots = csv.reader(csvfile, delimiter=' ')
for row in plots:
y.append(int(row[-2]))
xTicks.append(str(row[-1]))
# Drop the first entry (it counts empty strings/spaces, not a real word)
xTicks = xTicks[1:]
y = y[1:]
# print(y)
# print(xTicks)
x = range(len(y))
plt.figure(figsize=(10, 10))
plt.xticks(x, xTicks, rotation=90) # rotating the label 90 degrees
plt.plot(x,y,'*')
###Output
_____no_output_____ |
00.1_Project_create_and_locate_RS.ipynb | ###Markdown
In Brightway, a project is made of three databases: an inventory database, a biosphere database (with elementary flows and natural compartments) as well as an optional impacts characterization database. Contrary to many LCA software tools, each project is independent and has its own databases. Hence, they can easily be used by different Brightway installations. This creates an empty project in your default anaconda environment
###Code
import brightway2 as bw
bw.projects.create_project('a_dummy_project')
###Output
_____no_output_____
###Markdown
Your project is created, but you're not yet inside it. This will get you into your project.
###Code
bw.projects.set_current('a_dummy_project')
###Output
_____no_output_____
###Markdown
You can check at anytime in which project you're in like so.
###Code
bw.projects.current
###Output
_____no_output_____
###Markdown
You can also check all the projects that are installed on your computer, like below. It returns a list of projects, the number of databases in each project, and their size (in GB).
###Code
bw.projects.report()
###Output
_____no_output_____
###Markdown
Here you can check where the project you created is physically stored on your computer. Not very convenient to find.
###Code
bw.projects.output_dir
###Output
_____no_output_____
###Markdown
And here you can ask to get a list of databases your project contains.
###Code
bw.databases
###Output
_____no_output_____
###Markdown
Which returns an empty list of databases, and that's normal since we have not imported any databases into the project. We can now try to create a project in a more convenient location, in this case a Dropbox folder. For this, we need to specify the path of the project we want to create/access before we load the brightway package. However, you may need to restart this notebook (kernel -> restart) so as to unload the brightway package.
###Code
import os
# This retrieves your Windows username
user=os.getenv('USERNAME')
#This sets where you want the folder of your project to be.
os.environ['BRIGHTWAY2_DIR'] = "C:\\Users\\"+user+"\\Dropbox\\Example_folder\\"
#You import brightway
import brightway2 as bw
#And create/load the project
bw.projects.set_current('a_dummy_project')
###Output
Using environment variable BRIGHTWAY2_DIR for data directory:
C:\Users\ros\Dropbox\Example_folder\
###Markdown
We can now check that, indeed, your project folder is now stored within the Dropbox folder "Example_folder".
###Code
bw.projects.output_dir
###Output
_____no_output_____
###Markdown
Retrieving the Windows variable "USERNAME" and using it in the folder path allows other users with whom the Dropbox folder is shared to run these lines without modifying the path. When sharing a common project folder on a syncing service like Dropbox, you need to make sure that only one user is allowed to write in the project at any given time, otherwise the database may end up corrupted. To do so, you can add the line below. This line accesses the configuration pickle (what's a [pickle](https://pythontips.com/2013/08/02/what-is-pickle-in-python/)?) of your project (each project has a configuration pickle). If one user is working within the project, the other users will be allowed to "read" (access and see data and results), but not modify. Once the user exits the project, the other users will be allowed to write in the project.
###Code
bw.config.p['lockable'] = True
###Output
_____no_output_____
###Markdown
And finally, we can delete our project.
###Code
bw.projects.delete_project("a_dummy_project", delete_dir=False)
###Output
_____no_output_____
###Markdown
If you do not specify a name in bw.projects.delete_project(), the currently active project is deleted. If you specify delete_dir=False, only the project name is deleted, but the data remains. And as we can see, in the folder Example_folder, the project "a_dummy_project" is not listed anymore. Note that the project called "default" is always created when specifying a new location for storing projects.
###Code
bw.projects
###Output
_____no_output_____ |
Numba and C++/Calling C++.ipynb | ###Markdown
Calling C++ Python contains multiple ways of calling functions written in C++. This notebook shows how to use **ctypes** and **cffi** on a **Windows** computer. * **ctypes**: Recommended for calling C++ functions *outside* **Numba**.* **cffi**: Required to call C++ functions *inside* **Numba**. Structs not allowed. From the **consav** package we will use the **cpptools** module to compile and link to C++ files. **Compilers:** Two compiler workflows have been implemented:* **vs**: Free *Microsoft Visual Studio 2017 Community Edition* ([link](https://visualstudio.microsoft.com/downloads/))* **intel:** Costly *Intel Parallel Studio 2018 Composer Edition* ([link](https://software.intel.com/en-us/parallel-studio-xe))For parallelization we will use **OpenMP**.The **installation paths** might need to be adjusted. See arguments to the **cpptools.compile()** function.
###Code
compiler = 'vs'
###Output
_____no_output_____
###Markdown
ctypes
###Code
# use 8 threads in numba
from consav import runtools
runtools.write_numba_config(disable=0,threads=8)
import ctypes as ct
import numpy as np
import numba as nb
from consav import cpptools
# a. main class
# list of elements
parlist = [
('X',nb.double[:]),
('Y',nb.double[:]),
('N',nb.int32),
('a',nb.double),
('b',nb.double),
('threads',nb.int32)
]
# python class
class ParClass():
def __init__(self):
pass
# cpp struct
# return python version of the C++ struct
# write file with struct definition to include in .cpp-file (note: ensures order of fields is the same)
ParStruct = cpptools.setup_struct(parlist,structname='par_struct',structfile='cppfuncs/par_struct.cpp')
# b. compile
cpptools.compile('cppfuncs/example',compiler=compiler) # adjust paths?
# c. settings
par = ParClass()
par.N = 10
par.X = np.linspace(0,10,par.N)
par.Y = np.zeros(par.N)
par.a = 2
par.b = 1
par.threads = 4
# d. link
# list of functions with argument types (long is int)
funcs = [('fun',[ct.POINTER(ParStruct)]),
('fun_nostruct',[ct.POINTER(ct.c_double),
ct.POINTER(ct.c_double),
ct.c_long,
ct.c_double,
ct.c_double,
ct.c_long])]
if compiler == 'vs':
example = cpptools.link('example',funcs,use_openmp_with_vs=True)
else:
example = cpptools.link('example',funcs)
# e. wrapper
def wrapper(par):
p_par = cpptools.get_struct_pointer(par,ParStruct)
example.fun(p_par)
def wrapper_nostruct(X,Y,N,a,b,threads):
p_X = np.ctypeslib.as_ctypes(X)
p_Y = np.ctypeslib.as_ctypes(Y)
example.fun_nostruct(p_X,p_Y,N,a,b,threads)
# f. calls and checks
wrapper(par)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for example.fun')
par.Y = np.zeros(par.N)
wrapper_nostruct(par.X,par.Y,par.N,par.a,par.b,par.threads)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for example.fun_nostruct')
# g. delink (remove dll file)
cpptools.delink(example,'example')
###Output
cpp files compiled
cpp files loaded
all assertions true for example.fun
all assertions true for example.fun_nostruct
cpp files delinked
###Markdown
cffi
###Code
import os
from cffi import FFI
import numba as nb
# a. main class
# list of elements
parlist = [
('X',nb.double[:]),
('Y',nb.double[:]),
('N',nb.int32),
('a',nb.double),
('b',nb.double),
('threads',nb.int32)
]
# python class
@nb.jitclass(parlist)
class ParClass():
def __init__(self):
pass
# b. compile
cpptools.compile('cppfuncs/example',compiler=compiler)
# c. settings
par = ParClass()
par.N = 10
par.X = np.zeros(par.N)
par.Y = np.zeros(par.N)
par.a = 2
par.b = 1
par.threads = 4
# d. link
ffi = FFI()
ffi.cdef(r'''void fun_nostruct(double *X, double *Y, int N, double a, double b, int threads);''')
example = ffi.dlopen("example.dll")
# e. regular call
p_X = ffi.cast('double *', par.X.ctypes.data)
p_Y = ffi.cast('double *', par.Y.ctypes.data)
example.fun_nostruct(p_X,p_Y,par.N,par.a,par.b,par.threads)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for fun_nostruct')
# f. numba call
fun_nostruct_numba = example.fun_nostruct
@nb.njit
def wrapper_nostruct(X,Y,N,a,b,threads):
p_X = ffi.from_buffer(X)
p_Y = ffi.from_buffer(Y)
fun_nostruct_numba(p_X,p_Y,N,a,b,threads)
par.Y = np.zeros(par.N)
wrapper_nostruct(par.X,par.Y,par.N,par.a,par.b,par.threads)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for fun_nostruct (in numba)')
# g. clean up
ffi.dlclose(example)
os.remove('example.dll')
###Output
cpp files compiled
all assertions true for fun_nostruct
all assertions true for fun_nostruct (in numba)
###Markdown
Calling C++ Python contains multiple ways of calling functions written in C++. This notebook shows how to use **ctypes** and **cffi** on a **Windows** computer. * **ctypes**: Recommended for calling C++ functions *outside* **Numba**.* **cffi**: Required to call C++ functions *inside* **Numba**. Structs not allowed. From the **consav** package we will use the **cpptools** module to compile and link to C++ files. **Compilers:** Two compiler workflows have been implemented:* **vs**: Free *Microsoft Visual Studio 2017 Community Edition* ([link](https://visualstudio.microsoft.com/downloads/))* **intel:** Costly *Intel Parallel Studio 2018 Composer Edition* ([link](https://software.intel.com/en-us/parallel-studio-xe))For parallelization we will use **OpenMP**.The **installation paths** might need to be adjusted. See arguments to the **cpptools.compile()** function.
###Code
compiler = 'vs'
###Output
_____no_output_____
###Markdown
ctypes
###Code
# use 8 threads in numba
from consav import runtools
runtools.write_numba_config(disable=0,threads=8)
import ctypes as ct
import numpy as np
import numba as nb
from consav import cpptools
# a. main class
# list of elements
parlist = [
('X',nb.double[::1]),
('Y',nb.double[::1]),
('N',nb.int64),
('a',nb.double),
('b',nb.double),
('threads',nb.int64)
]
# python class
class ParClass():
def __init__(self):
pass
# cpp struct
# return python version of the C++ struct
# write file with struct definition to include in .cpp-file (note: ensures order of fields is the same)
ParStruct = cpptools.setup_struct(parlist,structname='par_struct',structfile='cppfuncs/par_struct.cpp')
# b. compile
cpptools.compile('cppfuncs/example',compiler=compiler) # press shift+tab to see default paths to compilers
# c. settings
par = ParClass()
par.N = 10
par.X = np.linspace(0,10,par.N)
par.Y = np.zeros(par.N)
par.a = 2
par.b = 1
par.threads = 4
# d. link
# list of functions with argument types (long is int)
funcs = [('fun',[ct.POINTER(ParStruct)]),
('fun_nostruct',[ct.POINTER(ct.c_double),
ct.POINTER(ct.c_double),
ct.c_long,
ct.c_double,
ct.c_double,
ct.c_long])]
if compiler == 'vs':
example = cpptools.link('example',funcs,use_openmp_with_vs=True)
else:
example = cpptools.link('example',funcs)
# e. wrapper
def wrapper(par):
p_par = cpptools.get_struct_pointer(par,ParStruct)
example.fun(p_par)
def wrapper_nostruct(X,Y,N,a,b,threads):
p_X = np.ctypeslib.as_ctypes(X)
p_Y = np.ctypeslib.as_ctypes(Y)
example.fun_nostruct(p_X,p_Y,N,a,b,threads)
# f. calls and checks
wrapper(par)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for example.fun')
par.Y = np.zeros(par.N)
wrapper_nostruct(par.X,par.Y,par.N,par.a,par.b,par.threads)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for example.fun_nostruct')
# g. delink (remove dll file)
cpptools.delink(example,'example')
###Output
cpp files compiled
cpp files loaded
all assertions true for example.fun
all assertions true for example.fun_nostruct
cpp files delinked
###Markdown
cffi
###Code
import os
from cffi import FFI
# a. main class
# list of elements
parlist = [
('X',nb.double[::1]),
('Y',nb.double[::1]),
('N',nb.int64),
('a',nb.double),
('b',nb.double),
('threads',nb.int64)
]
# python class
@nb.jitclass(parlist)
class ParClass():
def __init__(self):
pass
# b. compile
cpptools.compile('cppfuncs/example',compiler=compiler)
# c. settings
par = ParClass()
par.N = 10
par.X = np.zeros(par.N)
par.Y = np.zeros(par.N)
par.a = 2
par.b = 1
par.threads = 4
# d. link
ffi = FFI()
ffi.cdef(r'''void fun_nostruct(double *X, double *Y, int N, double a, double b, int threads);''')
example = ffi.dlopen("example.dll")
# e. regular call
p_X = ffi.cast('double *', par.X.ctypes.data)
p_Y = ffi.cast('double *', par.Y.ctypes.data)
example.fun_nostruct(p_X,p_Y,par.N,par.a,par.b,par.threads)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for fun_nostruct')
# f. numba call
fun_nostruct_numba = example.fun_nostruct
@nb.njit
def wrapper_nostruct(X,Y,N,a,b,threads):
p_X = ffi.from_buffer(X)
p_Y = ffi.from_buffer(Y)
fun_nostruct_numba(p_X,p_Y,N,a,b,threads)
par.Y = np.zeros(par.N)
wrapper_nostruct(par.X,par.Y,par.N,par.a,par.b,par.threads)
assert np.allclose(par.X*(par.a+par.b),par.Y)
print('all assertions true for fun_nostruct (in numba)')
# g. clean up
ffi.dlclose(example)
os.remove('example.dll')
###Output
cpp files compiled
all assertions true for fun_nostruct
all assertions true for fun_nostruct (in numba)
|
day10/.ipynb_checkpoints/my_cnn-checkpoint.ipynb | ###Markdown
Convolutional Neural Network with PyTorch Padding Summary- **Valid** Padding (Zero Padding) - Output size < Input Size- **Same** Padding - Output size = Input Size Dimension Calculations- $ O = \frac {W - K + 2P}{S} + 1$ - $O$: output height/length - $W$: input height/length - $K$: filter size (kernel size) - $P$: padding - $ P = \frac{K - 1}{2} $ - $S$: stride Example 1: Output Dimension Calculation for Valid Padding- $W = 4$- $K = 3$- $P = 0$- $S = 1$- $O = \frac {4 - 3 + 2*0}{1} + 1 = \frac {1}{1} + 1 = 1 + 1 = 2 $ Example 2: Output Dimension Calculation for Same Padding- $W = 5$- $K = 3$- $P = \frac{3 - 1}{2} = \frac{2}{2} = 1 $- $S = 1 $- $O = \frac {5 - 3 + 2*1}{1} + 1 = \frac {4}{1} + 1 = 5$ 2. Building a Convolutional Neural Network with PyTorch Model A: - 2 Convolutional Layers - Same Padding (same output size)- 2 Max Pooling Layers- 1 Fully Connected Layer Steps- Step 1: Load Dataset- Step 2: Make Dataset Iterable- Step 3: Create Model Class- Step 4: Instantiate Model Class- Step 5: Instantiate Loss Class- Step 6: Instantiate Optimizer Class- Step 7: Train Model Step 1: Loading MNIST Train Dataset
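The same arithmetic as a small helper that can be used to double-check the two examples above (a sketch added for illustration; the helper name `conv_output_size` is not part of the model code):
```
def conv_output_size(W, K, P, S):
    """O = (W - K + 2P) / S + 1 for a convolution (or pooling) layer."""
    return (W - K + 2 * P) // S + 1

print(conv_output_size(W=4, K=3, P=0, S=1))  # Example 1, valid padding -> 2
print(conv_output_size(W=5, K=3, P=1, S=1))  # Example 2, same padding  -> 5
```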
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
from torch.autograd import Variable
!pip3 install torch
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
print(train_dataset.train_data.size())
print(train_dataset.train_labels.size())
print(test_dataset.test_data.size())
print(test_dataset.test_labels.size())
###Output
torch.Size([10000])
###Markdown
Step 2: Make Dataset Iterable
###Code
batch_size = 100
n_iters = 3000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
###Output
_____no_output_____
###Markdown
Step 3: Create Model Class Output Formula for Convolution- $ O = \frac {W - K + 2P}{S} + 1$ - $O$: output height/length - $W$: input height/length - $K$: **filter size (kernel size) = 5** - $P$: **same padding (non-zero)** - $P = \frac{K - 1}{2} = \frac{5 - 1}{2} = 2$ - $S$: **stride = 1** Output Formula for Pooling- $ O = \frac {W - K}{S} + 1$ - W: input height/width - K: **filter size = 2** - S: **stride size = filter size**, PyTorch defaults the stride to kernel filter size - If using PyTorch default stride, this will result in the formula $ O = \frac {W}{K}$
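Chaining the two formulas shows where the `32 * 7 * 7` readout size in the code below comes from (a sketch; 28x28 MNIST input, kernel 5, same padding 2, pooling kernel 2; the helper name is illustrative):
```
def conv_same_then_pool(W, K_conv=5, P=2, S=1, K_pool=2):
    out = (W - K_conv + 2 * P) // S + 1   # convolution with same padding keeps the size
    return out // K_pool                  # max pooling with default stride halves it

size = conv_same_then_pool(28)     # block 1: 28 -> 28 -> 14
size = conv_same_then_pool(size)   # block 2: 14 -> 14 -> 7
print(size)                        # 7, hence fc1 = nn.Linear(32 * 7 * 7, 10)
```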
###Code
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2)
self.relu1 = nn.ReLU()
# Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2)
self.relu2 = nn.ReLU()
# Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1 (readout)
self.fc1 = nn.Linear(32 * 7 * 7, 10)
def forward(self, x):
# Convolution 1
out = self.cnn1(x)
out = self.relu1(out)
# Max pool 1
out = self.maxpool1(out)
# Convolution 2
out = self.cnn2(out)
out = self.relu2(out)
# Max pool 2
out = self.maxpool2(out)
# Resize
# Original size: (100, 32, 7, 7)
# out.size(0): 100
# New out size: (100, 32*7*7)
out = out.view(out.size(0), -1)
# Linear function (readout)
out = self.fc1(out)
return out
###Output
_____no_output_____
###Markdown
Step 4: Instantiate Model Class
###Code
model = CNNModel()
###Output
_____no_output_____
###Markdown
Step 5: Instantiate Loss Class- Convolutional Neural Network: **Cross Entropy Loss** - _Feedforward Neural Network_: **Cross Entropy Loss** - _Logistic Regression_: **Cross Entropy Loss** - _Linear Regression_: **MSE**
###Code
criterion = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Step 6: Instantiate Optimizer Class- Simplified equation - $\theta = \theta - \eta \cdot \nabla_\theta $ - $\theta$: parameters (our variables) - $\eta$: learning rate (how fast we want to learn) - $\nabla_\theta$: parameters' gradients- Even simpler equation - `parameters = parameters - learning_rate * parameters_gradients` - **At every iteration, we update our model's parameters**
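For intuition, the update that `optimizer.step()` performs for plain SGD can be written out by hand. This is only a sketch of the equivalent manual update (the `lr` value mirrors the learning rate set below); the training loop in this notebook keeps using the optimizer.
```
lr = 0.01
with torch.no_grad():
    for p in model.parameters():
        if p.grad is not None:
            p -= lr * p.grad   # parameters = parameters - learning_rate * parameters_gradients
```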
###Code
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
Parameters In-Depth
###Code
print(model)
print(model.parameters())
print(len(list(model.parameters())))
# Convolution 1: 16 Kernels
print(list(model.parameters())[0].size())
# Convolution 1 Bias: 16 Kernels
print(list(model.parameters())[1].size())
# Convolution 2: 32 Kernels with depth = 16
print(list(model.parameters())[2].size())
# Convolution 2 Bias: 32 Kernels with depth = 16
print(list(model.parameters())[3].size())
# Fully Connected Layer 1
print(list(model.parameters())[4].size())
# Fully Connected Layer Bias
print(list(model.parameters())[5].size())
###Output
<generator object Module.parameters at 0x14586663a0f8>
6
torch.Size([16, 1, 5, 5])
torch.Size([16])
torch.Size([32, 16, 5, 5])
torch.Size([32])
torch.Size([10, 1568])
torch.Size([10])
###Markdown
Step 7: Train Model- Process 1. **Convert inputs/labels to variables** - CNN Input: (1, 28, 28) - Feedforward NN Input: (1, 28*28) 2. Clear gradient buffers 3. Get output given inputs 4. Get loss 5. Get gradients w.r.t. parameters 6. Update parameters using gradients - `parameters = parameters - learning_rate * parameters_gradients` 7. REPEAT
###Code
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Load images
images = images.requires_grad_()
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
# Load images
images = images.requires_grad_()
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs.data, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
correct += (predicted == labels).sum()
accuracy = 100 * correct / total
# Print Loss
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
###Output
Iteration: 500. Loss: 0.3126809000968933. Accuracy: 88
Iteration: 1000. Loss: 0.3046148419380188. Accuracy: 92
Iteration: 1500. Loss: 0.4902287423610687. Accuracy: 93
Iteration: 2000. Loss: 0.1136619970202446. Accuracy: 95
Iteration: 2500. Loss: 0.18004001677036285. Accuracy: 96
Iteration: 3000. Loss: 0.14143550395965576. Accuracy: 96
###Markdown
Model B: - 2 Convolutional Layers - Same Padding (same output size)- 2 **Average Pooling** Layers- 1 Fully Connected Layer Steps- Step 1: Load Dataset- Step 2: Make Dataset Iterable- Step 3: Create Model Class- Step 4: Instantiate Model Class- Step 5: Instantiate Loss Class- Step 6: Instantiate Optimizer Class- Step 7: Train Model
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
from torch.autograd import Variable
'''
STEP 1: LOADING DATASET
'''
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
'''
STEP 2: MAKING DATASET ITERABLE
'''
batch_size = 100
n_iters = 3000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
'''
STEP 3: CREATE MODEL CLASS
'''
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2)
self.relu1 = nn.ReLU()
# Average pool 1
self.avgpool1 = nn.AvgPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2)
self.relu2 = nn.ReLU()
# Average pool 2
self.avgpool2 = nn.AvgPool2d(kernel_size=2)
# Fully connected 1 (readout)
self.fc1 = nn.Linear(32 * 7 * 7, 10)
def forward(self, x):
# Convolution 1
out = self.cnn1(x)
out = self.relu1(out)
# Average pool 1
out = self.avgpool1(out)
# Convolution 2
out = self.cnn2(out)
out = self.relu2(out)
        # Average pool 2
out = self.avgpool2(out)
# Resize
# Original size: (100, 32, 7, 7)
# out.size(0): 100
# New out size: (100, 32*7*7)
out = out.view(out.size(0), -1)
# Linear function (readout)
out = self.fc1(out)
return out
'''
STEP 4: INSTANTIATE MODEL CLASS
'''
model = CNNModel()
'''
STEP 5: INSTANTIATE LOSS CLASS
'''
criterion = nn.CrossEntropyLoss()
'''
STEP 6: INSTANTIATE OPTIMIZER CLASS
'''
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
'''
STEP 7: TRAIN THE MODEL
'''
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Load images as Variable
images = images.requires_grad_()
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
# Load images to a Torch Variable
images = images.requires_grad_()
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs.data, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
correct += (predicted == labels).sum()
accuracy = 100 * correct / total
# Print Loss
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
###Output
Iteration: 500. Loss: 0.45780864357948303. Accuracy: 86
Iteration: 1000. Loss: 0.3461257219314575. Accuracy: 89
Iteration: 1500. Loss: 0.24028310179710388. Accuracy: 90
Iteration: 2000. Loss: 0.17413872480392456. Accuracy: 91
Iteration: 2500. Loss: 0.11797639727592468. Accuracy: 92
Iteration: 3000. Loss: 0.20305947959423065. Accuracy: 93
###Markdown
Average Pooling Test Accuracy < Max Pooling Test Accuracy Model C: - 2 Convolutional Layers - **Valid Padding** (smaller output size)- 2 **Max Pooling** Layers- 1 Fully Connected Layer Steps- Step 1: Load Dataset- Step 2: Make Dataset Iterable- Step 3: Create Model Class- Step 4: Instantiate Model Class- Step 5: Instantiate Loss Class- Step 6: Instantiate Optimizer Class- Step 7: Train Model
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
from torch.autograd import Variable
'''
STEP 1: LOADING DATASET
'''
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
'''
STEP 2: MAKING DATASET ITERABLE
'''
batch_size = 100
n_iters = 3000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
'''
STEP 3: CREATE MODEL CLASS
'''
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=0)
self.relu1 = nn.ReLU()
# Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=0)
self.relu2 = nn.ReLU()
# Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1 (readout)
self.fc1 = nn.Linear(32 * 4 * 4, 10)
def forward(self, x):
# Convolution 1
out = self.cnn1(x)
out = self.relu1(out)
# Max pool 1
out = self.maxpool1(out)
# Convolution 2
out = self.cnn2(out)
out = self.relu2(out)
# Max pool 2
out = self.maxpool2(out)
# Resize
        # Original size: (100, 32, 4, 4)
        # out.size(0): 100
        # New out size: (100, 32*4*4)
out = out.view(out.size(0), -1)
# Linear function (readout)
out = self.fc1(out)
return out
'''
STEP 4: INSTANTIATE MODEL CLASS
'''
model = CNNModel()
'''
STEP 5: INSTANTIATE LOSS CLASS
'''
criterion = nn.CrossEntropyLoss()
'''
STEP 6: INSTANTIATE OPTIMIZER CLASS
'''
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
'''
STEP 7: TRAIN THE MODEL
'''
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Load images as Variable
images = images.requires_grad_()
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
# Load images to a Torch Variable
images = images.requires_grad_()
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs.data, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
correct += (predicted == labels).sum()
accuracy = 100 * correct / total
# Print Loss
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
###Output
Iteration: 500. Loss: 0.3439464271068573. Accuracy: 88
Iteration: 1000. Loss: 0.30488425493240356. Accuracy: 92
Iteration: 1500. Loss: 0.15341302752494812. Accuracy: 94
Iteration: 2000. Loss: 0.19498983025550842. Accuracy: 95
Iteration: 2500. Loss: 0.14073897898197174. Accuracy: 96
Iteration: 3000. Loss: 0.19746263325214386. Accuracy: 96
###Markdown
Deep Learning- 3 ways to expand a convolutional neural network - More convolutional layers - Less aggressive downsampling - Smaller kernel size for pooling (gradually downsampling) - More fully connected layers - Cons - Need a larger dataset - Curse of dimensionality - Does not necessarily mean higher accuracy 3. Building a Convolutional Neural Network with PyTorch (GPU) Model AGPU: 2 things must be on GPU- `model`- `variables` Steps- Step 1: Load Dataset- Step 2: Make Dataset Iterable- Step 3: Create Model Class- **Step 4: Instantiate Model Class**- Step 5: Instantiate Loss Class- Step 6: Instantiate Optimizer Class- **Step 7: Train Model**
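The two device moves in isolation (a sketch only; `CNNModel` and `train_loader` are the ones defined in the full code below, which performs exactly these moves inside the training loop):
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = CNNModel().to(device)                 # 1) put the model's parameters on the GPU
for images, labels in train_loader:           # 2) move every batch before the forward pass
    images, labels = images.to(device), labels.to(device)
    break
```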
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
from torch.autograd import Variable
'''
STEP 1: LOADING DATASET
'''
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
'''
STEP 2: MAKING DATASET ITERABLE
'''
batch_size = 100
n_iters = 3000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
'''
STEP 3: CREATE MODEL CLASS
'''
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=0)
self.relu1 = nn.ReLU()
# Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=0)
self.relu2 = nn.ReLU()
# Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1 (readout)
self.fc1 = nn.Linear(32 * 4 * 4, 10)
def forward(self, x):
# Convolution 1
out = self.cnn1(x)
out = self.relu1(out)
# Max pool 1
out = self.maxpool1(out)
# Convolution 2
out = self.cnn2(out)
out = self.relu2(out)
# Max pool 2
out = self.maxpool2(out)
# Resize
# Original size: (100, 32, 7, 7)
# out.size(0): 100
# New out size: (100, 32*7*7)
out = out.view(out.size(0), -1)
# Linear function (readout)
out = self.fc1(out)
return out
'''
STEP 4: INSTANTIATE MODEL CLASS
'''
model = CNNModel()
#######################
# USE GPU FOR MODEL #
#######################
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
'''
STEP 5: INSTANTIATE LOSS CLASS
'''
criterion = nn.CrossEntropyLoss()
'''
STEP 6: INSTANTIATE OPTIMIZER CLASS
'''
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
'''
STEP 7: TRAIN THE MODEL
'''
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
#######################
# USE GPU FOR MODEL #
#######################
images = images.requires_grad_().to(device)
labels = labels.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
#######################
# USE GPU FOR MODEL #
#######################
images = images.requires_grad_().to(device)
labels = labels.to(device)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs.data, 1)
# Total number of labels
total += labels.size(0)
#######################
# USE GPU FOR MODEL #
#######################
# Total correct predictions
if torch.cuda.is_available():
correct += (predicted.cpu() == labels.cpu()).sum()
else:
correct += (predicted == labels).sum()
accuracy = 100 * correct / total
# Print Loss
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
###Output
Iteration: 500. Loss: 0.2899038791656494. Accuracy: 89
Iteration: 1000. Loss: 0.35760563611984253. Accuracy: 92
Iteration: 1500. Loss: 0.24162068963050842. Accuracy: 94
Iteration: 2000. Loss: 0.2910376489162445. Accuracy: 95
Iteration: 2500. Loss: 0.09718359261751175. Accuracy: 96
Iteration: 3000. Loss: 0.1140664592385292. Accuracy: 96
|
Home_Depot_model_fit.ipynb | ###Markdown
Model fitting
###Code
import math
import os
import random
import pandas as pd
import numpy as np
import pickle as pickle
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from scipy.sparse import hstack, coo_matrix, csr_matrix
%matplotlib inline
with open('feats.pkl', 'rb') as infile:
X = pickle.load(infile)
with open('target.pkl', 'rb') as infile:
y = pickle.load(infile)
n_train = 74067 # Number of rows containing training data
# If X is sparse matrix:
X_train = X[:n_train,:]
X_test = X[n_train:,:]
y_train = y.iloc[:n_train,1]
X_id = X_train[:,0:2].todense()
X_id = pd.DataFrame(X_id)
X_id = X_id.astype('int')
X_id.columns = ['id', 'product_uid']
# If X is dense dataframe
X_train = X.iloc[:n_train,:]
X_test = X.iloc[n_train:,:]
X_train = X_train.fillna(0)
X_test = X_test.fillna(0)
y_train = y.iloc[:n_train,1]
X_id = X_train[['id', 'product_uid']]
''' Returns train/validation indices so that the validation split mimics the test set.
General strategy:
get the last 11.5% of the rows of train in all cases. This fraction is based on the number of rows
in test whose products do not occur in train, so a similar number of rows at the end is held out
and never seen during training.
For the remainder, grab one row from train for each product_uid that occurs > 1.
'''
# Group all by product_uid (which disappears), then count aggregated rows in each column
# select only 'id' column, which is a surrogate column name for the rowcount per uid.
trainend = int(len(X_id)*0.885)
counts = X_id[:trainend].groupby(['product_uid']).count()[['id']]
# Only care about uid's with counts higher than 1 (do not remove single rows)
counts = counts[counts['id'] > 1]
counts = counts.add_suffix('_Count').reset_index()
valid_product_uids = set(counts['product_uid'].values)
inds = []
allowed_uids = X_id.loc[X_id['product_uid'].isin(valid_product_uids)]
# For now, always grab first row of valid product uid.
lastUid = 0
for idx, mrow in allowed_uids.iterrows():
if lastUid == mrow['product_uid']:
continue
lastUid = mrow['product_uid']
inds.append(idx)
test_inds = inds + list(X_id[trainend:].index.values)
train_inds = list(X_id.loc[~X_id.index.isin(test_inds)].index.values)
print("Train: "+str(len(train_inds))+", test: "+str(len(test_inds)))
# If X is sparse matrix:
X_train_train = X_train[train_inds,:]
X_train_test = X_train[test_inds,:]
y_train_train = y_train.iloc[train_inds]
y_train_test = y_train.iloc[test_inds]
# If X is dense dataframe
X_train_train = X_train.iloc[train_inds,:]
X_train_test = X_train.iloc[test_inds,:]
X_train_train = X_train_train.fillna(0)
X_train_test= X_train_test.fillna(0)
y_train_train = y_train.iloc[train_inds]
y_train_test = y_train.iloc[test_inds]
dtrain = xgb.DMatrix(X_train_train, label=y_train_train)
dtest = xgb.DMatrix(X_train_test, label=y_train_test)
evallist = [(dtrain,'train'), (dtest,'test')]
nrounds = 10000
e1 = 20
e2 = 40
lambda1 = np.ones(e1)*0.01
lambda2 = np.ones(e2)*0.01
lambda3 = np.ones(nrounds - e1 - e2)*0.01
learning_rates = np.hstack([lambda1,lambda2,lambda3])
param = {'max_depth':10,
'eta':0.01,
'min_child_weight':1,
'max_delta_step':0,
'gamma':1,
'lambda':1,
'alpha':3,
'colsample_bytree':0.3,
'subsample':1,
'eval_metric':'rmse',
'maximize':False,
'nthread':4}
xgb_fit = xgb.train(param,
dtrain,
nrounds,
evals = evallist,
early_stopping_rounds = 20,
verbose_eval = 50,
learning_rates = learning_rates.tolist())
dtrain = xgb.DMatrix(X_train, label=y_train)
evallist = [(dtrain,'train'), (dtrain,'train')]
# Best score with ~1.4x
nrounds = round(xgb_fit.best_iteration * 1.4)
xgb_fit_full = xgb.train(param,
dtrain,
nrounds,
evals = evallist,
verbose_eval = 50,
learning_rates = learning_rates.tolist())
dtest = xgb.DMatrix(X_test)
preds = xgb_fit_full.predict(dtest)
preds[preds>3] = 3
preds[preds<1] = 1
pred = pd.concat((y.iloc[n_train:,0].reset_index(drop = True),
pd.DataFrame(preds)),
axis = 1,
ignore_index = True)
pred.columns = ['id', 'relevance']
pred.to_csv('pred_clean_stem_v2_1_4.csv', index = False)
###Output
_____no_output_____ |
Sample-based Learning Methods/week5/Planning_Assignment-v2.ipynb | ###Markdown
Assignment: Dyna-Q and Dyna-Q+ Welcome to this programming assignment! In this notebook, you will:1. implement the Dyna-Q and Dyna-Q+ algorithms. 2. compare their performance on an environment which changes to become 'better' than it was before, that is, the task becomes easier. We will give you the environment and infrastructure to run the experiment and visualize the performance. The assignment will be graded automatically by comparing the behavior of your agent to our implementations of the algorithms. The random seed will be set explicitly to avoid different behaviors due to randomness. Please go through the cells in order. The Shortcut Maze EnvironmentIn this maze environment, the goal is to reach the goal state (G) as fast as possible from the starting state (S). There are four actions – up, down, right, left – which take the agent deterministically from a state to the corresponding neighboring states, except when movement is blocked by a wall (denoted by grey) or the edge of the maze, in which case the agent remains where it is. The reward is +1 on reaching the goal state, 0 otherwise. On reaching the goal state G, the agent returns to the start state S to begin a new episode. This is a discounted, episodic task with $\gamma = 0.95$.Later in the assignment, we will use a variant of this maze in which a 'shortcut' opens up after a certain number of timesteps. We will test if the Dyna-Q and Dyna-Q+ agents are able to find the newly-opened shorter route to the goal state. PackagesWe import the libraries required for this assignment. Primarily, we shall be using the following:1. numpy: the fundamental package for scientific computing with Python.2. matplotlib: the library for plotting graphs in Python.3. RL-Glue: the library for reinforcement learning experiments.**Please do not import other libraries** — this will break the autograder.
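As a rough illustration of why speed matters under discounting (the numbers here are only for intuition and are not used anywhere in the assignment): with a single reward of $+1$ at the goal and $\gamma = 0.95$, the return from the start state of an episode that takes $T$ steps is $\gamma^{T-1}$, so a 16-step path is worth about $0.95^{15} \approx 0.46$ while a 30-step path is worth only about $0.95^{29} \approx 0.23$.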
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os, jdc, shutil
from tqdm import tqdm
from rl_glue import RLGlue
from agent import BaseAgent
from maze_env import ShortcutMazeEnvironment
os.makedirs('results', exist_ok=True)
plt.rcParams.update({'font.size': 15})
plt.rcParams.update({'figure.figsize': [8,5]})
###Output
_____no_output_____
###Markdown
Section 1: Dyna-Q Let's start with a quick recap of the tabular Dyna-Q algorithm.Dyna-Q involves four basic steps:1. Action selection: given an observation, select an action to be performed (here, using the $\epsilon$-greedy method).2. Direct RL: using the observed next state and reward, update the action values (here, using one-step tabular Q-learning).3. Model learning: using the observed next state and reward, update the model (here, updating a table as the environment is assumed to be deterministic).4. Planning: update the action values by generating $n$ simulated experiences using certain starting states and actions (here, using the random-sample one-step tabular Q-planning method). This is also known as the 'Indirect RL' step. The process of choosing the state and action to simulate an experience with is known as 'search control'.Steps 1 and 2 are parts of the [tabular Q-learning algorithm](http://www.incompleteideas.net/book/RLbook2018.pdf#page=153) and are denoted by line numbers (a)–(d) in the Tabular Dyna-Q pseudocode from the textbook. Step 3 is performed in line (e), and Step 4 in the block of lines (f).We highly recommend revisiting the Dyna videos in the course and the material in the RL textbook (in particular, [Section 8.2](http://www.incompleteideas.net/book/RLbook2018.pdf#page=183)). Alright, let's begin coding.As you already know by now, you will develop an agent which interacts with the given environment via RL-Glue. More specifically, you will implement the usual methods `agent_start`, `agent_step`, and `agent_end` in your `DynaQAgent` class, along with a couple of helper methods specific to Dyna-Q, namely `update_model` and `planning_step`. We will provide detailed comments in each method describing what your code should do. Let's break this down in pieces and do it one-by-one.First of all, check out the `agent_init` method below. As in earlier assignments, some of the attributes are initialized with the data passed inside `agent_info`. In particular, pay attention to the attributes which are new to `DynaQAgent`, since you shall be using them later.
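Before building the agent piece by piece, here is a compressed sketch (illustrative only, with made-up function and argument names, and with terminal-state handling omitted) of how steps 2–4 fit together for a single environmental interaction in a deterministic, tabular setting:
```
import numpy as np

def dyna_q_interaction(q, model, s, a, r, s_next, alpha, gamma, planning_steps, rng):
    # (2) Direct RL: one-step tabular Q-learning on the real transition
    q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
    # (3) Model learning: the environment is deterministic, so simply record the outcome of (s, a)
    model[(s, a)] = (r, s_next)
    # (4) Planning: apply the same update to `planning_steps` randomly chosen remembered transitions
    keys = list(model.keys())
    for _ in range(planning_steps):
        ps, pa = keys[rng.randint(len(keys))]
        pr, ps_next = model[(ps, pa)]
        q[ps, pa] += alpha * (pr + gamma * np.max(q[ps_next]) - q[ps, pa])

# Example call with arbitrary numbers (54 states and 4 actions, as in the maze)
rng = np.random.RandomState(0)
q, model = np.zeros((54, 4)), {}
dyna_q_interaction(q, model, s=0, a=1, r=0.0, s_next=9, alpha=0.125, gamma=0.95, planning_steps=5, rng=rng)
```
Step 1 (action selection) would wrap this in an $\epsilon$-greedy choice over `q[s]`; the `DynaQAgent` class below organizes exactly these pieces into `agent_start`, `agent_step`, `agent_end`, `update_model`, and `planning_step`.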
###Code
# Do not modify this cell!
class DynaQAgent(BaseAgent):
def agent_init(self, agent_info):
"""Setup for the agent called when the experiment first starts.
Args:
agent_init_info (dict), the parameters used to initialize the agent. The dictionary contains:
{
num_states (int): The number of states,
num_actions (int): The number of actions,
epsilon (float): The parameter for epsilon-greedy exploration,
step_size (float): The step-size,
discount (float): The discount factor,
planning_steps (int): The number of planning steps per environmental interaction
random_seed (int): the seed for the RNG used in epsilon-greedy
planning_random_seed (int): the seed for the RNG used in the planner
}
"""
# First, we get the relevant information from agent_info
# NOTE: we use np.random.RandomState(seed) to set the two different RNGs
# for the planner and the rest of the code
try:
self.num_states = agent_info["num_states"]
self.num_actions = agent_info["num_actions"]
except:
print("You need to pass both 'num_states' and 'num_actions' \
in agent_info to initialize the action-value table")
self.gamma = agent_info.get("discount", 0.95)
self.step_size = agent_info.get("step_size", 0.1)
self.epsilon = agent_info.get("epsilon", 0.1)
self.planning_steps = agent_info.get("planning_steps", 10)
self.rand_generator = np.random.RandomState(agent_info.get('random_seed', 42))
self.planning_rand_generator = np.random.RandomState(agent_info.get('planning_random_seed', 42))
# Next, we initialize the attributes required by the agent, e.g., q_values, model, etc.
# A simple way to implement the model is to have a dictionary of dictionaries,
# mapping each state to a dictionary which maps actions to (reward, next state) tuples.
self.q_values = np.zeros((self.num_states, self.num_actions))
self.actions = list(range(self.num_actions))
self.past_action = -1
self.past_state = -1
self.model = {} # model is a dictionary of dictionaries, which maps states to actions to
# (reward, next_state) tuples
###Output
_____no_output_____
###Markdown
Now let's create the `update_model` method, which performs the 'Model Update' step in the pseudocode. It takes a `(s, a, s', r)` tuple and stores the next state and reward corresponding to a state-action pair.Remember, because the environment is deterministic, an easy way to implement the model is to have a dictionary of encountered states, each mapping to a dictionary of actions taken in those states, which in turn maps to a tuple of next state and reward. In this way, the model can be easily accessed by `model[s][a]`, which would return the `(s', r)` tuple.
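For instance (with purely illustrative state, action, and reward numbers), a model that has seen two transitions out of state 3 could be built and queried like this:
```
model = {}
model[3] = {0: (4, 0)}    # from state 3, action 0 was observed to lead to state 4 with reward 0
model[3][1] = (3, 0)      # a second action later observed from the same state
print(model[3][0])        # (4, 0) -- the (s', r) tuple stored for state 3, action 0
```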
###Code
%%add_to DynaQAgent
# [GRADED]
def update_model(self, past_state, past_action, state, reward):
"""updates the model
Args:
past_state (int): s
past_action (int): a
state (int): s'
reward (int): r
Returns:
Nothing
"""
# Update the model with the (s,a,s',r) tuple (1~4 lines)
### START CODE HERE ###
_ = {past_action: (state, reward)}
if self.model.get(past_state):
self.model[past_state].update(_)
else:
self.model[past_state] = _
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `update_model()`
###Code
# Do not modify this cell!
## Test code for update_model() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,2,0,1)
test_agent.update_model(2,0,1,1)
test_agent.update_model(0,3,1,2)
print("Model: \n", test_agent.model)
###Output
Model:
{0: {2: (0, 1), 3: (1, 2)}, 2: {0: (1, 1)}}
###Markdown
Expected output:```Model: {0: {2: (0, 1), 3: (1, 2)}, 2: {0: (1, 1)}}``` Next, you will implement the planning step, the crux of the Dyna-Q algorithm. You shall be calling this `planning_step` method at every timestep of every trajectory.
###Code
%%add_to DynaQAgent
# [GRADED]
def planning_step(self):
"""performs planning, i.e. indirect RL.
Args:
None
Returns:
Nothing
"""
# The indirect RL step:
# - Choose a state and action from the set of experiences that are stored in the model. (~2 lines)
# - Query the model with this state-action pair for the predicted next state and reward.(~1 line)
# - Update the action values with this simulated experience. (2~4 lines)
# - Repeat for the required number of planning steps.
#
# Note that the update equation is different for terminal and non-terminal transitions.
# To differentiate between a terminal and a non-terminal next state, assume that the model stores
# the terminal state as a dummy state like -1
#
# Important: remember you have a random number generator 'planning_rand_generator' as
# a part of the class which you need to use as self.planning_rand_generator.choice()
# For the sake of reproducibility and grading, *do not* use anything else like
# np.random.choice() for performing search control.
### START CODE HERE ###
for i in range(self.planning_steps):
state = self.planning_rand_generator.choice([*self.model])
action = self.planning_rand_generator.choice([*self.model[state]])
new_state, reward = self.model[state][action]
if new_state == -1:
target = reward
self.q_values[state, action] = self.q_values[state, action] + self.step_size*(
target - self.q_values[state, action])
else:
target = reward + self.gamma*np.max(self.q_values[new_state,:])
self.q_values[state, action] = self.q_values[state, action] + self.step_size*(
target - self.q_values[state, action])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `planning_step()`
###Code
# Do not modify this cell!
## Test code for planning_step() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 5}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,2,1,1)
test_agent.update_model(2,0,1,1)
test_agent.update_model(0,3,0,1)
test_agent.update_model(0,1,-1,1)
test_agent.planning_step()
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Model:
{0: {2: (1, 1), 3: (0, 1), 1: (-1, 1)}, 2: {0: (1, 1)}}
Action-value estimates:
[[0. 0.1 0. 0.2]
[0. 0. 0. 0. ]
[0.1 0. 0. 0. ]]
###Markdown
Expected output:```Model: {0: {2: (1, 1), 3: (0, 1), 1: (-1, 1)}, 2: {0: (1, 1)}}Action-value estimates: [[0. 0.1 0. 0.2 ] [0. 0. 0. 0. ] [0.1 0. 0. 0. ]]```If your output does not match the above, one of the first things to check is to make sure that you haven't changed the `planning_random_seed` in the test cell. Additionally, make sure you have handled terminal updates correctly. Now before you move on to implement the rest of the agent methods, here are the helper functions that you've used in the previous assessments for choosing an action using an $\epsilon$-greedy policy.
###Code
%%add_to DynaQAgent
# Do not modify this cell!
def argmax(self, q_values):
"""argmax with random tie-breaking
Args:
q_values (Numpy array): the array of action values
Returns:
action (int): an action with the highest value
"""
top = float("-inf")
ties = []
for i in range(len(q_values)):
if q_values[i] > top:
top = q_values[i]
ties = []
if q_values[i] == top:
ties.append(i)
return self.rand_generator.choice(ties)
def choose_action_egreedy(self, state):
"""returns an action using an epsilon-greedy policy w.r.t. the current action-value function.
Important: assume you have a random number generator 'rand_generator' as a part of the class
which you can use as self.rand_generator.choice() or self.rand_generator.rand()
Args:
state (List): coordinates of the agent (two elements)
Returns:
The action taken w.r.t. the aforementioned epsilon-greedy policy
"""
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.choice(self.actions)
else:
values = self.q_values[state]
action = self.argmax(values)
return action
###Output
_____no_output_____
###Markdown
Next, you will implement the rest of the agent-related methods, namely `agent_start`, `agent_step`, and `agent_end`.
###Code
%%add_to DynaQAgent
# [GRADED]
def agent_start(self, state):
"""The first method called when the experiment starts,
called after the environment starts.
Args:
state (Numpy array): the state from the
environment's env_start function.
Returns:
(int) the first action the agent takes.
"""
# given the state, select the action using self.choose_action_egreedy()),
# and save current state and action (~2 lines)
### self.past_state = ?
### self.past_action = ?
### START CODE HERE ###
self.past_state = state
self.past_action = self.choose_action_egreedy(state)
### END CODE HERE ###
return self.past_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state from the
environment's step based on where the agent ended up after the
last step
Returns:
(int) The action the agent takes given this state.
"""
# - Direct-RL step (~1-3 lines)
# - Model Update step (~1 line)
# - `planning_step` (~1 line)
# - Action Selection step (~1 line)
# Save the current state and action before returning the action to be performed. (~2 lines)
### START CODE HERE ###
target = reward + self.gamma*np.max(self.q_values[state,:])
self.q_values[self.past_state, self.past_action] = self.q_values[self.past_state, self.past_action] + \
self.step_size*(target - self.q_values[self.past_state, self.past_action])
self.update_model(self.past_state, self.past_action, state, reward)
self.planning_step()
self.past_state, self.past_action = state, self.choose_action_egreedy(state)
### END CODE HERE ###
return self.past_action
def agent_end(self, reward):
"""Called when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# - Direct RL update with this final transition (1~2 lines)
# - Model Update step with this final transition (~1 line)
# - One final `planning_step` (~1 line)
#
# Note: the final transition needs to be handled carefully. Since there is no next state,
# you will have to pass a dummy state (like -1), which you will be using in the planning_step() to
# differentiate between updates with usual terminal and non-terminal transitions.
### START CODE HERE ###
target = reward
self.q_values[self.past_state, self.past_action] = self.q_values[self.past_state, self.past_action] + \
self.step_size*(target - self.q_values[self.past_state, self.past_action])
self.update_model(self.past_state, self.past_action, -1, reward)
self.planning_step()
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `agent_start()`
###Code
# Do not modify this cell!
## Test code for agent_start() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
action = test_agent.agent_start(0)
print("Action:", action)
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Action: 1
Model:
{}
Action-value estimates:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
###Markdown
Expected output:```Action: 1Model: {}Action-value estimates: [[0. 0. 0. 0.] [0. 0. 0. 0.] [0. 0. 0. 0.]]``` Test `agent_step()`
###Code
# Do not modify this cell!
## Test code for agent_step() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"planning_steps": 2,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
actions.append(test_agent.agent_start(0))
actions.append(test_agent.agent_step(1,2))
actions.append(test_agent.agent_step(0,1))
print("Actions:", actions)
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Actions: [1, 3, 1]
Model:
{0: {1: (2, 1)}, 2: {3: (1, 0)}}
Action-value estimates:
[[0. 0.3439 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]]
###Markdown
Expected output:```Actions: [1, 3, 1]Model: {0: {1: (2, 1)}, 2: {3: (1, 0)}}Action-value estimates: [[0. 0.3439 0. 0. ] [0. 0. 0. 0. ] [0. 0. 0. 0. ]]``` Test `agent_end()`
###Code
# Do not modify this cell!
## Test code for agent_end() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"planning_steps": 2,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
actions.append(test_agent.agent_start(0))
actions.append(test_agent.agent_step(1,2))
actions.append(test_agent.agent_step(0,1))
test_agent.agent_end(1)
print("Actions:", actions)
print("Model: \n", test_agent.model)
print("Action-value Estimates: \n", test_agent.q_values)
###Output
Actions: [1, 3, 1]
Model:
{0: {1: (2, 1)}, 2: {3: (1, 0)}, 1: {1: (-1, 1)}}
Action-value Estimates:
[[0. 0.41051 0. 0. ]
[0. 0.1 0. 0. ]
[0. 0. 0. 0.01 ]]
###Markdown
Expected output:```Actions: [1, 3, 1]Model: {0: {1: (2, 1)}, 2: {3: (1, 0)}, 1: {1: (-1, 1)}}Action-value Estimates: [[0. 0.41051 0. 0. ] [0. 0.1 0. 0. ] [0. 0. 0. 0.01 ]]``` Experiment: Dyna-Q agent in the maze environmentAlright. Now we have all the components of the `DynaQAgent` ready. Let's try it out on the maze environment! The next cell runs an experiment on this maze environment to test your implementation. The initial action values are $0$, the step-size parameter is $0.125$, and the exploration parameter is $\epsilon=0.1$. After the experiment, the sum of rewards in each episode should match the correct result.We will try planning steps of $0,5,50$ and compare their performance in terms of the average number of steps taken to reach the goal state in the aforementioned maze environment. For scientific rigor, we will run each experiment $30$ times. In each experiment, we set the initial random-number-generator (RNG) seeds for a fair comparison across algorithms.
###Code
# Do not modify this cell!
def run_experiment(env, agent, env_parameters, agent_parameters, exp_parameters):
# Experiment settings
num_runs = exp_parameters['num_runs']
num_episodes = exp_parameters['num_episodes']
planning_steps_all = agent_parameters['planning_steps']
env_info = env_parameters
agent_info = {"num_states" : agent_parameters["num_states"], # We pass the agent the information it needs.
"num_actions" : agent_parameters["num_actions"],
"epsilon": agent_parameters["epsilon"],
"discount": env_parameters["discount"],
"step_size" : agent_parameters["step_size"]}
all_averages = np.zeros((len(planning_steps_all), num_runs, num_episodes)) # for collecting metrics
log_data = {'planning_steps_all' : planning_steps_all} # that shall be plotted later
for idx, planning_steps in enumerate(planning_steps_all):
print('Planning steps : ', planning_steps)
os.system('sleep 0.5') # to prevent tqdm printing out-of-order before the above print()
agent_info["planning_steps"] = planning_steps
for i in tqdm(range(num_runs)):
agent_info['random_seed'] = i
agent_info['planning_random_seed'] = i
rl_glue = RLGlue(env, agent) # Creates a new RLGlue experiment with the env and agent we chose above
rl_glue.rl_init(agent_info, env_info) # We pass RLGlue what it needs to initialize the agent and environment
for j in range(num_episodes):
rl_glue.rl_start() # We start an episode. Here we aren't using rl_glue.rl_episode()
# like the other assessments because we'll be requiring some
is_terminal = False # data from within the episodes in some of the experiments here
num_steps = 0
while not is_terminal:
reward, _, action, is_terminal = rl_glue.rl_step() # The environment and agent take a step
num_steps += 1 # and return the reward and action taken.
all_averages[idx][i][j] = num_steps
log_data['all_averages'] = all_averages
np.save("results/Dyna-Q_planning_steps", log_data)
def plot_steps_per_episode(file_path):
data = np.load(file_path).item()
all_averages = data['all_averages']
planning_steps_all = data['planning_steps_all']
for i, planning_steps in enumerate(planning_steps_all):
plt.plot(np.mean(all_averages[i], axis=0), label='Planning steps = '+str(planning_steps))
plt.legend(loc='upper right')
plt.xlabel('Episodes')
plt.ylabel('Steps\nper\nepisode', rotation=0, labelpad=40)
plt.axhline(y=16, linestyle='--', color='grey', alpha=0.4)
plt.show()
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 30, # The number of times we run the experiment
"num_episodes" : 40, # The number of episodes per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"epsilon": 0.1,
"step_size" : 0.125,
"planning_steps" : [0, 5, 50] # The list of planning_steps we want to try
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQAgent # The agent
run_experiment(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters)
plot_steps_per_episode('results/Dyna-Q_planning_steps.npy')
shutil.make_archive('results', 'zip', 'results');
###Output
Planning steps : 0
###Markdown
What do you notice?As the number of planning steps increases, the number of steps taken per episode to reach the goal decreases rapidly. Remember that the RNG seed was set the same for all the three values of planning steps, resulting in the same number of steps taken to reach the goal in the first episode. Thereafter, the performance improves. The slowest improvement is when there are $n=0$ planning steps, i.e., for the non-planning Q-learning agent, even though the step size parameter was optimized for it. Note that the grey dotted line shows the minimum number of steps required to reach the goal state under the optimal greedy policy.--- Experiment(s): Dyna-Q agent in the _changing_ maze environment Great! Now let us see how Dyna-Q performs on the version of the maze in which a shorter path opens up after 3000 steps. The rest of the transition and reward dynamics remain the same. Before you proceed, take a moment to think about what you expect to see. Will Dyna-Q find the new, shorter path to the goal? If so, why? If not, why not?
###Code
# Do not modify this cell!
def run_experiment_with_state_visitations(env, agent, env_parameters, agent_parameters, exp_parameters, result_file_name):
# Experiment settings
num_runs = exp_parameters['num_runs']
num_max_steps = exp_parameters['num_max_steps']
planning_steps_all = agent_parameters['planning_steps']
env_info = {"change_at_n" : env_parameters["change_at_n"]}
agent_info = {"num_states" : agent_parameters["num_states"],
"num_actions" : agent_parameters["num_actions"],
"epsilon": agent_parameters["epsilon"],
"discount": env_parameters["discount"],
"step_size" : agent_parameters["step_size"]}
state_visits_before_change = np.zeros((len(planning_steps_all), num_runs, 54)) # For saving the number of
state_visits_after_change = np.zeros((len(planning_steps_all), num_runs, 54)) # state-visitations
cum_reward_all = np.zeros((len(planning_steps_all), num_runs, num_max_steps)) # For saving the cumulative reward
log_data = {'planning_steps_all' : planning_steps_all}
for idx, planning_steps in enumerate(planning_steps_all):
print('Planning steps : ', planning_steps)
os.system('sleep 1') # to prevent tqdm printing out-of-order before the above print()
agent_info["planning_steps"] = planning_steps # We pass the agent the information it needs.
for run in tqdm(range(num_runs)):
agent_info['random_seed'] = run
agent_info['planning_random_seed'] = run
rl_glue = RLGlue(env, agent) # Creates a new RLGlue experiment with the env and agent we chose above
rl_glue.rl_init(agent_info, env_info) # We pass RLGlue what it needs to initialize the agent and environment
num_steps = 0
cum_reward = 0
while num_steps < num_max_steps-1 :
state, _ = rl_glue.rl_start() # We start the experiment. We'll be collecting the
                is_terminal = False                          # state-visitation counts to visualize the learned policy
if num_steps < env_parameters["change_at_n"]:
state_visits_before_change[idx][run][state] += 1
else:
state_visits_after_change[idx][run][state] += 1
while not is_terminal and num_steps < num_max_steps-1 :
reward, state, action, is_terminal = rl_glue.rl_step()
num_steps += 1
cum_reward += reward
cum_reward_all[idx][run][num_steps] = cum_reward
if num_steps < env_parameters["change_at_n"]:
state_visits_before_change[idx][run][state] += 1
else:
state_visits_after_change[idx][run][state] += 1
log_data['state_visits_before'] = state_visits_before_change
log_data['state_visits_after'] = state_visits_after_change
log_data['cum_reward_all'] = cum_reward_all
np.save("results/" + result_file_name, log_data)
def plot_cumulative_reward(file_path, item_key, y_key, y_axis_label, legend_prefix, title):
data_all = np.load(file_path).item()
data_y_all = data_all[y_key]
items = data_all[item_key]
for i, item in enumerate(items):
plt.plot(np.mean(data_y_all[i], axis=0), label=legend_prefix+str(item))
plt.axvline(x=3000, linestyle='--', color='grey', alpha=0.4)
plt.xlabel('Timesteps')
plt.ylabel(y_axis_label, rotation=0, labelpad=60)
plt.legend(loc='upper left')
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
Did you notice that the environment changes after a fixed number of _steps_ and not episodes? This is because the environment is separate from the agent, and the environment changes irrespective of the length of each episode (i.e., the number of environmental interactions per episode) that the agent perceives. And hence we are now plotting the data per step or interaction of the agent and the environment, in order to comfortably see the differences in the behaviours of the agents before and after the environment changes. Okay, now we will first plot the cumulative reward obtained by the agent per interaction with the environment, averaged over 10 runs of the experiment on this changing world.
###Code
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 10, # The number of times we run the experiment
"num_max_steps" : 6000, # The number of steps per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
"change_at_n": 3000
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"epsilon": 0.1,
"step_size" : 0.125,
"planning_steps" : [5, 10, 50] # The list of planning_steps we want to try
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQAgent # The agent
run_experiment_with_state_visitations(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters, "Dyna-Q_shortcut_steps")
plot_cumulative_reward('results/Dyna-Q_shortcut_steps.npy', 'planning_steps_all', 'cum_reward_all', 'Cumulative\nreward', 'Planning steps = ', 'Dyna-Q : Varying planning_steps')
###Output
Planning steps : 5
###Markdown
We observe that the slope of the curves is almost constant. If the agent had discovered the shortcut and begun using it, we would expect to see an increase in the slope of the curves towards the later stages of training. This is because the agent can get to the goal state faster and get the positive reward. Note that the timestep at which the shortcut opens up is marked by the grey dotted line.Note that this trend is constant across the increasing number of planning steps.Now let's check the heatmap of the state visitations of the agent with `planning_steps=10` during training, before and after the shortcut opens up after 3000 timesteps.
###Code
# Do not modify this cell!
def plot_state_visitations(file_path, plot_titles, idx):
data = np.load(file_path).item()
data_keys = ["state_visits_before", "state_visits_after"]
positions = [211,212]
titles = plot_titles
wall_ends = [None,-1]
for i in range(2):
state_visits = data[data_keys[i]][idx]
average_state_visits = np.mean(state_visits, axis=0)
grid_state_visits = np.rot90(average_state_visits.reshape((6,9)).T)
grid_state_visits[2,1:wall_ends[i]] = np.nan # walls
#print(average_state_visits.reshape((6,9)))
plt.subplot(positions[i])
plt.pcolormesh(grid_state_visits, edgecolors='gray', linewidth=1, cmap='viridis')
plt.text(3+0.5, 0+0.5, 'S', horizontalalignment='center', verticalalignment='center')
plt.text(8+0.5, 5+0.5, 'G', horizontalalignment='center', verticalalignment='center')
plt.title(titles[i])
plt.axis('off')
cm = plt.get_cmap()
cm.set_bad('gray')
plt.subplots_adjust(bottom=0.0, right=0.7, top=1.0)
cax = plt.axes([1., 0.0, 0.075, 1.])
cbar = plt.colorbar(cax=cax)
plt.show()
# Do not modify this cell!
plot_state_visitations("results/Dyna-Q_shortcut_steps.npy", ['Dyna-Q : State visitations before the env changes', 'Dyna-Q : State visitations after the env changes'], 1)
###Output
_____no_output_____
###Markdown
What do you observe?The state visitation map looks almost the same before and after the shortcut opens. This means that the Dyna-Q agent hasn't quite discovered and started exploiting the new shortcut.Now let's try increasing the exploration parameter $\epsilon$ to see if it helps the Dyna-Q agent discover the shortcut.
###Code
# Do not modify this cell!
def run_experiment_only_cumulative_reward(env, agent, env_parameters, agent_parameters, exp_parameters):
# Experiment settings
num_runs = exp_parameters['num_runs']
num_max_steps = exp_parameters['num_max_steps']
epsilons = agent_parameters['epsilons']
env_info = {"change_at_n" : env_parameters["change_at_n"]}
agent_info = {"num_states" : agent_parameters["num_states"],
"num_actions" : agent_parameters["num_actions"],
"planning_steps": agent_parameters["planning_steps"],
"discount": env_parameters["discount"],
"step_size" : agent_parameters["step_size"]}
log_data = {'epsilons' : epsilons}
cum_reward_all = np.zeros((len(epsilons), num_runs, num_max_steps))
for eps_idx, epsilon in enumerate(epsilons):
print('Agent : Dyna-Q, epsilon : %f' % epsilon)
os.system('sleep 1') # to prevent tqdm printing out-of-order before the above print()
agent_info["epsilon"] = epsilon
for run in tqdm(range(num_runs)):
agent_info['random_seed'] = run
agent_info['planning_random_seed'] = run
rl_glue = RLGlue(env, agent) # Creates a new RLGlue experiment with the env and agent we chose above
rl_glue.rl_init(agent_info, env_info) # We pass RLGlue what it needs to initialize the agent and environment
num_steps = 0
cum_reward = 0
while num_steps < num_max_steps-1 :
rl_glue.rl_start() # We start the experiment
is_terminal = False
while not is_terminal and num_steps < num_max_steps-1 :
reward, _, action, is_terminal = rl_glue.rl_step() # The environment and agent take a step and return
# the reward, and action taken.
num_steps += 1
cum_reward += reward
cum_reward_all[eps_idx][run][num_steps] = cum_reward
log_data['cum_reward_all'] = cum_reward_all
np.save("results/Dyna-Q_epsilons", log_data)
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 30, # The number of times we run the experiment
"num_max_steps" : 6000, # The number of steps per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
"change_at_n": 3000
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"step_size" : 0.125,
"planning_steps" : 10,
"epsilons": [0.1, 0.2, 0.4, 0.8] # The list of epsilons we want to try
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQAgent # The agent
run_experiment_only_cumulative_reward(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters)
plot_cumulative_reward('results/Dyna-Q_epsilons.npy', 'epsilons', 'cum_reward_all', 'Cumulative\nreward', r'$\epsilon$ = ', r'Dyna-Q : Varying $\epsilon$')
###Output
Agent : Dyna-Q, epsilon : 0.100000
###Markdown
What do you observe?Increasing the exploration via the $\epsilon$-greedy strategy does not seem to be helping. In fact, the agent's cumulative reward decreases because it is spending more and more time trying out the exploratory actions.Can we do better...? Section 2: Dyna-Q+ The motivation behind Dyna-Q+ is to give a bonus reward for actions that haven't been tried for a long time, since there is a greater chance that the dynamics of those actions might have changed.In particular, if the modeled reward for a transition is $r$, and the transition has not been tried in $\tau(s,a)$ time steps, then planning updates are done as if that transition produced a reward of $r + \kappa \sqrt{ \tau(s,a)}$, for some small $\kappa$. Let's implement that!Based on your `DynaQAgent`, create a new class `DynaQPlusAgent` to implement the aforementioned exploration heuristic. Additionally:1. actions that had never been tried before from a state should now be allowed to be considered in the planning step,2. and the initial model for such actions is that they lead back to the same state with a reward of zero.At this point, you might want to refer to the video lectures and [Section 8.3](http://www.incompleteideas.net/book/RLbook2018.pdf#page=188) of the RL textbook for a refresher on Dyna-Q+. As usual, let's break this down in pieces and do it one-by-one.First of all, check out the `agent_init` method below. In particular, pay attention to the attributes which are new to `DynaQPlusAgent` – state-visitation counts $\tau$ and the scaling parameter $\kappa$ – because you shall be using them later.
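For a sense of scale (an illustrative calculation only, not part of the assignment): with $\kappa = 0.001$, a transition that has not been tried for $\tau = 100$ steps receives a planning bonus of $0.001\sqrt{100} = 0.01$, while one untried for $\tau = 10000$ steps receives $0.001\sqrt{10000} = 0.1$, so transitions that have been neglected for a long time gradually become attractive enough for planning to steer the agent back towards them.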
###Code
# Do not modify this cell!
class DynaQPlusAgent(BaseAgent):
def agent_init(self, agent_info):
"""Setup for the agent called when the experiment first starts.
Args:
agent_init_info (dict), the parameters used to initialize the agent. The dictionary contains:
{
num_states (int): The number of states,
num_actions (int): The number of actions,
epsilon (float): The parameter for epsilon-greedy exploration,
step_size (float): The step-size,
discount (float): The discount factor,
planning_steps (int): The number of planning steps per environmental interaction
kappa (float): The scaling factor for the reward bonus
random_seed (int): the seed for the RNG used in epsilon-greedy
planning_random_seed (int): the seed for the RNG used in the planner
}
"""
# First, we get the relevant information from agent_info
# Note: we use np.random.RandomState(seed) to set the two different RNGs
# for the planner and the rest of the code
try:
self.num_states = agent_info["num_states"]
self.num_actions = agent_info["num_actions"]
except:
print("You need to pass both 'num_states' and 'num_actions' \
in agent_info to initialize the action-value table")
self.gamma = agent_info.get("discount", 0.95)
self.step_size = agent_info.get("step_size", 0.1)
self.epsilon = agent_info.get("epsilon", 0.1)
self.planning_steps = agent_info.get("planning_steps", 10)
self.kappa = agent_info.get("kappa", 0.001)
self.rand_generator = np.random.RandomState(agent_info.get('random_seed', 42))
self.planning_rand_generator = np.random.RandomState(agent_info.get('planning_random_seed', 42))
# Next, we initialize the attributes required by the agent, e.g., q_values, model, tau, etc.
# The visitation-counts can be stored as a table as well, like the action values
self.q_values = np.zeros((self.num_states, self.num_actions))
self.tau = np.zeros((self.num_states, self.num_actions))
self.actions = list(range(self.num_actions))
self.past_action = -1
self.past_state = -1
self.model = {}
###Output
_____no_output_____
###Markdown
Now first up, implement the `update_model` method. Note that this is different from Dyna-Q in the aforementioned way.
###Code
%%add_to DynaQPlusAgent
# [GRADED]
def update_model(self, past_state, past_action, state, reward):
"""updates the model
Args:
past_state (int): s
past_action (int): a
state (int): s'
reward (int): r
Returns:
Nothing
"""
# Recall that when adding a state-action to the model, if the agent is visiting the state
# for the first time, then the remaining actions need to be added to the model as well
# with zero reward and a transition into itself. Something like:
## for action in self.actions:
## if action != past_action:
## self.model[past_state][action] = (past_state, 0)
#
# Note: do *not* update the visitation-counts here. We will do that in `agent_step`.
#
# (3 lines)
if past_state not in self.model:
self.model[past_state] = {past_action : (state, reward)}
### START CODE HERE ###
for action in self.actions:
if action != past_action:
self.model[past_state][action] = (past_state, 0)
### END CODE HERE ###
else:
self.model[past_state][past_action] = (state, reward)
###Output
_____no_output_____
###Markdown
Test `update_model()`
###Code
# Do not modify this cell!
## Test code for update_model() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,2,0,1)
test_agent.update_model(2,0,1,1)
test_agent.update_model(0,3,1,2)
test_agent.tau[0][0] += 1
print("Model: \n", test_agent.model)
###Output
Model:
{0: {2: (0, 1), 0: (0, 0), 1: (0, 0), 3: (1, 2)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}
###Markdown
Expected output:```Model: {0: {2: (0, 1), 0: (0, 0), 1: (0, 0), 3: (1, 2)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}```Note that the actions that were not taken from a state are also added to the model, with a loop back into the same state with a reward of 0. Next, you will implement the `planning_step()` method. This will be very similar to the one you implemented in `DynaQAgent`, but here you will be adding the exploration bonus to the reward in the simulated transition.
###Code
%%add_to DynaQPlusAgent
# [GRADED]
def planning_step(self):
"""performs planning, i.e. indirect RL.
Args:
None
Returns:
Nothing
"""
# The indirect RL step:
# - Choose a state and action from the set of experiences that are stored in the model. (~2 lines)
# - Query the model with this state-action pair for the predicted next state and reward.(~1 line)
# - **Add the bonus to the reward** (~1 line)
# - Update the action values with this simulated experience. (2~4 lines)
# - Repeat for the required number of planning steps.
#
# Note that the update equation is different for terminal and non-terminal transitions.
# To differentiate between a terminal and a non-terminal next state, assume that the model stores
# the terminal state as a dummy state like -1
#
# Important: remember you have a random number generator 'planning_rand_generator' as
# a part of the class which you need to use as self.planning_rand_generator.choice()
# For the sake of reproducibility and grading, *do not* use anything else like
# np.random.choice() for performing search control.
### START CODE HERE ###
for i in range(self.planning_steps):
state = self.planning_rand_generator.choice([*self.model])
action = self.planning_rand_generator.choice([*self.model[state]])
new_state, reward = self.model[state][action]
reward = reward + self.kappa * np.sqrt(self.tau[state, action])
if new_state == -1:
target = reward
self.q_values[state, action] = self.q_values[state, action] + self.step_size*(
target - self.q_values[state, action])
else:
target = reward + self.gamma*np.max(self.q_values[new_state,:])
self.q_values[state, action] = self.q_values[state, action] + self.step_size*(
target - self.q_values[state, action])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `planning_step()`
###Code
# Do not modify this cell!
## Test code for planning_step() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 1}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,1,-1,1)
test_agent.tau += 1; test_agent.tau[0][1] = 0
test_agent.update_model(0,2,1,1)
test_agent.tau += 1; test_agent.tau[0][2] = 0 # Note that these counts are manually updated
test_agent.update_model(2,0,1,1) # as we'll code them in `agent_step'
test_agent.tau += 1; test_agent.tau[2][0] = 0 # which hasn't been implemented yet.
test_agent.planning_step()
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Model:
{0: {1: (-1, 1), 0: (0, 0), 2: (1, 1), 3: (0, 0)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}
Action-value estimates:
[[0. 0.10014142 0. 0. ]
[0. 0. 0. 0. ]
[0. 0.00036373 0. 0.00017321]]
###Markdown
Expected output:```Model: {0: {1: (-1, 1), 0: (0, 0), 2: (1, 1), 3: (0, 0)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}Action-value estimates: [[0. 0.10014142 0. 0. ] [0. 0. 0. 0. ] [0. 0.00036373 0. 0.00017321]]``` Again, before you move on to implement the rest of the agent methods, here are the couple of helper functions that you've used in the previous assessments for choosing an action using an $\epsilon$-greedy policy.
###Code
%%add_to DynaQPlusAgent
# Do not modify this cell!
def argmax(self, q_values):
"""argmax with random tie-breaking
Args:
q_values (Numpy array): the array of action values
Returns:
action (int): an action with the highest value
"""
top = float("-inf")
ties = []
for i in range(len(q_values)):
if q_values[i] > top:
top = q_values[i]
ties = []
if q_values[i] == top:
ties.append(i)
return self.rand_generator.choice(ties)
def choose_action_egreedy(self, state):
"""returns an action using an epsilon-greedy policy w.r.t. the current action-value function.
Important: assume you have a random number generator 'rand_generator' as a part of the class
which you can use as self.rand_generator.choice() or self.rand_generator.rand()
Args:
state (List): coordinates of the agent (two elements)
Returns:
The action taken w.r.t. the aforementioned epsilon-greedy policy
"""
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.choice(self.actions)
else:
values = self.q_values[state]
action = self.argmax(values)
return action
###Output
_____no_output_____
###Markdown
Now implement the rest of the agent-related methods, namely `agent_start`, `agent_step`, and `agent_end`. Again, these will be very similar to the ones in the `DynaQAgent`, but you will have to think of a way to update the counts since the last visit.
###Code
%%add_to DynaQPlusAgent
# [GRADED]
def agent_start(self, state):
"""The first method called when the experiment starts, called after
the environment starts.
Args:
state (Numpy array): the state from the
environment's env_start function.
Returns:
(int) The first action the agent takes.
"""
# given the state, select the action using self.choose_action_egreedy(),
# and save current state and action (~2 lines)
### self.past_state = ?
### self.past_action = ?
# Note that the last-visit counts are not updated here.
### START CODE HERE ###
self.past_state = state
self.past_action = self.choose_action_egreedy(state)
### END CODE HERE ###
return self.past_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state from the
environment's step based on where the agent ended up after the
last step
Returns:
(int) The action the agent is taking.
"""
# Update the last-visited counts (~2 lines)
# - Direct-RL step (1~3 lines)
# - Model Update step (~1 line)
# - `planning_step` (~1 line)
# - Action Selection step (~1 line)
# Save the current state and action before returning the action to be performed. (~2 lines)
### START CODE HERE ###
        target = reward + self.gamma*np.max(self.q_values[state,:])  # the exploration bonus is applied only in the planning updates, not in this direct-RL update
self.q_values[self.past_state, self.past_action] = self.q_values[self.past_state, self.past_action] + \
self.step_size*(target - self.q_values[self.past_state, self.past_action])
self.update_model(self.past_state, self.past_action, state, reward)
self.tau = self.tau + 1
self.tau[self.past_state, self.past_action] = 0
self.planning_step()
self.past_state, self.past_action = state, self.choose_action_egreedy(state)
### END CODE HERE ###
return self.past_action
def agent_end(self, reward):
"""Called when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# Again, add the same components you added in agent_step to augment Dyna-Q into Dyna-Q+
### START CODE HERE ###
        target = reward  # terminal transition: no bootstrapping, and the exploration bonus is applied only during planning
self.q_values[self.past_state, self.past_action] = self.q_values[self.past_state, self.past_action] + \
self.step_size*(target - self.q_values[self.past_state, self.past_action])
self.update_model(self.past_state, self.past_action, -1, reward)
self.tau = self.tau + 1
self.tau[self.past_state, self.past_action] = 0
self.planning_step()
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Let's test these methods one-by-one. Test `agent_start()`
###Code
# Do not modify this cell!
## Test code for agent_start() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
action = test_agent.agent_start(0) # state
print("Action:", action)
print("Timesteps since last visit: \n", test_agent.tau)
print("Action-value estimates: \n", test_agent.q_values)
print("Model: \n", test_agent.model)
###Output
Action: 1
Timesteps since last visit:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Action-value estimates:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Model:
{}
###Markdown
Expected output:```Action: 1Timesteps since last visit: [[0. 0. 0. 0.] [0. 0. 0. 0.] [0. 0. 0. 0.]]Action-value estimates: [[0. 0. 0. 0.] [0. 0. 0. 0.] [0. 0. 0. 0.]]Model: {}```Remember the last-visit counts are not updated in `agent_start()`. Test `agent_step()`
###Code
# Do not modify this cell!
## Test code for agent_step() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
actions = []
actions.append(test_agent.agent_start(0)) # state
actions.append(test_agent.agent_step(1,2)) # (reward, state)
actions.append(test_agent.agent_step(0,1)) # (reward, state)
print("Actions:", actions)
print("Timesteps since last visit: \n", test_agent.tau)
print("Action-value estimates: \n", test_agent.q_values)
print("Model: \n", test_agent.model)
###Output
Actions: [1, 3, 1]
Timesteps since last visit:
[[2. 1. 2. 2.]
[2. 2. 2. 2.]
[2. 2. 2. 0.]]
Action-value estimates:
[[1.91000000e-02 2.71000000e-01 0.00000000e+00 1.91000000e-02]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
 [0.00000000e+00 1.83847763e-04 4.24264069e-04 0.00000000e+00]]
Model:
{0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}}
###Markdown
Expected output:```Actions: [1, 3, 1]Timesteps since last visit: [[2. 1. 2. 2.] [2. 2. 2. 2.] [2. 2. 2. 0.]]Action-value estimates: [[1.91000000e-02 2.71000000e-01 0.00000000e+00 1.91000000e-02] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [0.00000000e+00 1.83847763e-04 4.24264069e-04 0.00000000e+00]]Model: {0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}}``` Test `agent_end()`
###Code
# Do not modify this cell!
## Test code for agent_end() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
actions = []
actions.append(test_agent.agent_start(0))
actions.append(test_agent.agent_step(1,2))
actions.append(test_agent.agent_step(0,1))
test_agent.agent_end(1)
print("Actions:", actions)
print("Timesteps since last visit: \n", test_agent.tau)
print("Action-value estimates: \n", test_agent.q_values)
print("Model: \n", test_agent.model)
###Output
Actions: [1, 3, 1]
Timesteps since last visit:
[[3. 2. 3. 3.]
[3. 0. 3. 3.]
[3. 3. 3. 1.]]
Action-value estimates:
[[1.91000000e-02 3.44084848e-01 0.00000000e+00 4.44632051e-02]
[1.91859330e-02 1.90127279e-01 0.00000000e+00 0.00000000e+00]
[0.00000000e+00 1.84847763e-04 4.34264069e-04 1.00000000e-04]]
Model:
{0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}, 1: {1: (-1, 1), 0: (1, 0), 2: (1, 0), 3: (1, 0)}}
###Markdown
Expected output:```Actions: [1, 3, 1]Timesteps since last visit: [[3. 2. 3. 3.] [3. 0. 3. 3.] [3. 3. 3. 1.]]Action-value estimates: [[1.91000000e-02 3.44083848e-01 0.00000000e+00 4.44632051e-02] [1.91732051e-02 1.90000000e-01 0.00000000e+00 0.00000000e+00] [0.00000000e+00 1.83847763e-04 4.24264069e-04 0.00000000e+00]]Model: {0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}, 1: {1: (-1, 1), 0: (1, 0), 2: (1, 0), 3: (1, 0)}} ``` Experiment: Dyna-Q+ agent in the _changing_ environmentOkay, now we're ready to test our Dyna-Q+ agent on the Shortcut Maze. As usual, we will average the results over 30 independent runs of the experiment.
###Code
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 30, # The number of times we run the experiment
"num_max_steps" : 6000, # The number of steps per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
"change_at_n": 3000
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"epsilon": 0.1,
"step_size" : 0.5,
"planning_steps" : [50]
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQPlusAgent # The agent
run_experiment_with_state_visitations(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters, "Dyna-Q+")
shutil.make_archive('results', 'zip', 'results');
###Output
Planning steps : 50
###Markdown
Let's compare the Dyna-Q and Dyna-Q+ agents with `planning_steps=50` each.
###Code
# Do not modify this cell!
def plot_cumulative_reward_comparison(file_name_dynaq, file_name_dynaqplus):
cum_reward_q = np.load(file_name_dynaq).item()['cum_reward_all'][2]
cum_reward_qPlus = np.load(file_name_dynaqplus).item()['cum_reward_all'][0]
plt.plot(np.mean(cum_reward_qPlus, axis=0), label='Dyna-Q+')
plt.plot(np.mean(cum_reward_q, axis=0), label='Dyna-Q')
plt.axvline(x=3000, linestyle='--', color='grey', alpha=0.4)
plt.xlabel('Timesteps')
plt.ylabel('Cumulative\nreward', rotation=0, labelpad=60)
plt.legend(loc='upper left')
plt.title('Average performance of Dyna-Q and Dyna-Q+ agents in the Shortcut Maze\n')
plt.show()
# Do not modify this cell!
plot_cumulative_reward_comparison('results/Dyna-Q_shortcut_steps.npy', 'results/Dyna-Q+.npy')
###Output
_____no_output_____
###Markdown
What do you observe? (For reference, your graph should look like [Figure 8.5 in Chapter 8](http://www.incompleteideas.net/book/RLbook2018.pdf#page=189) of the RL textbook.) The slope of the curve increases for the Dyna-Q+ curve shortly after the shortcut opens up after 3000 steps, which indicates that the rate of receiving the positive reward increases. This implies that the Dyna-Q+ agent finds the shorter path to the goal. To verify this, let us plot the state-visitations of the Dyna-Q+ agent before and after the shortcut opens up.
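For reference, this adaptive behaviour comes from the exploration bonus in the agent's update target shown earlier: the longer a state-action pair has gone untried (tracked in `self.tau`), the larger the bonus, which keeps the agent re-testing the previously blocked route and lets it discover the new shortcut. Written out, the target computed in the direct-RL update above is

$$\text{target} = R + \kappa \sqrt{\tau(S_{t-1}, A_{t-1})} + \gamma \max_{a} Q(S_t, a),$$

with $\kappa$ set by `agent_info["kappa"]` (0.001 in the tests above).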
###Code
# Do not modify this cell!
plot_state_visitations("results/Dyna-Q+.npy", ['Dyna-Q+ : State visitations before the env changes', 'Dyna-Q+ : State visitations after the env changes'], 0)
###Output
_____no_output_____ |
.ipynb_checkpoints/Arvato Project Workbook-checkpoint.ipynb | ###Markdown
Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings.
###Code
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
# magic word for producing visualizations in notebook
%matplotlib inline
from yellowbrick.cluster import KElbowVisualizer # Importing Elbow Method Library
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans # Importing K-Means algorithm
from sklearn.metrics import mean_squared_error # Evaluation metric
from sklearn.model_selection import train_test_split # Preprocessing for training and testing data splits
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
import xgboost as xgb
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score
###Output
C:\Users\Daniel\Anaconda3\lib\site-packages\pandas\compat\_optional.py:138: UserWarning: Pandas requires version '2.7.0' or newer of 'numexpr' (version '2.6.9' currently installed).
warnings.warn(msg, UserWarning)
###Markdown
Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them.
###Code
# load in the data
azdias = pd.read_csv('Udacity_AZDIAS_052018.csv', sep=';')
customers = pd.read_csv('Udacity_CUSTOMERS_052018.csv', sep=';')
# Be sure to add in a lot more cells (both markdown and code) to document your
# approach and findings!
customers.head()
###Output
_____no_output_____
###Markdown
Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Data Exploration: 1. Observe datatypes 2. Find percentage of NaN values
###Code
print("The number of customers left after processing: {}".format(len(customers)))
customers.dtypes.unique()
null_count = []
for i in customers:
value = customers[i].isnull().sum()
null_count.append(value)
print("Percentage of null values in each column:")
for i in range(len(null_count)-1):
print("{}: {:.2f}%".format(customers.columns[i], (null_count[i]/customers.shape[0] * 100)))
for i in customers:
print(i)
print(customers[i].unique())
for i in range(0, customers.shape[1]):
if (customers.iloc[:, i].dtypes == 'object'):
print(customers.columns[i])
customers['CAMEO_DEU_2015'].unique()
customers['CAMEO_DEUG_2015'].unique()
customers['CAMEO_INTL_2015'].unique()
customers['D19_LETZTER_KAUF_BRANCHE'].unique()
customers['EINGEFUEGT_AM'].unique()
customers['OST_WEST_KZ'].unique()
###Output
_____no_output_____
###Markdown
Data Preprocessing- The goal here is to remove any duplicates, remove missing values and drop unnecessary columns.
###Code
customers.drop_duplicates(keep = 'first', inplace = True)
azdias.drop_duplicates(keep = 'first', inplace = True)
def first_preprocessing (dataframe):
"""Cleaning of dataframe for better data processing. """
dataframe = dataframe.copy()
dataframe = dataframe.set_index(['LNR']) # Set the Customer ID to index of dataframe
#dataframe.drop_duplicates(keep = 'first', inplace = True) # Removes any duplicates from the
#dataframe.drop(['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ', 'EINGEFUEGT_AM'], axis=1, inplace=True) # Drops LNR which is the customer id
dataframe.replace(-1, float('NaN'), inplace=True) # -1 values represent missing values, and will be replaced with NaN values
dataframe.replace(0, float('NaN'), inplace=True) # 0 values represent unknown values, and will be replaced with NaN values
dataframe['CAMEO_DEU_2015'].replace('XX', dataframe['CAMEO_DEU_2015'].mode().iloc[0], inplace=True) # Replace unknown string to mode value
dataframe['CAMEO_DEUG_2015'].replace('X', dataframe['CAMEO_DEUG_2015'].mode().iloc[0], inplace=True)
dataframe['CAMEO_DEUG_2015'] = dataframe['CAMEO_DEUG_2015'].apply(pd.to_numeric) # Convert to integer values
dataframe['CAMEO_INTL_2015'].replace('XX', dataframe['CAMEO_INTL_2015'].mode().iloc[0], inplace=True)
dataframe['CAMEO_INTL_2015'] = dataframe['CAMEO_INTL_2015'].apply(pd.to_numeric) # Convert to integer values
new_list = []
for i in dataframe:
dataframe[i] = dataframe[i].fillna(dataframe[i].mode().iloc[0]) # Mode is used to replace NaN values due to categorical values
for i in range(0, dataframe.shape[1]):
if (dataframe.iloc[:, i].dtypes == 'object'): # All object dtypes to be converted to categorical values
dataframe.iloc[:, i] = pd.Categorical(dataframe.iloc[:, i])
dataframe.iloc[:, i] = dataframe.iloc[:, i].cat.codes
dataframe.iloc[:, i] = dataframe.iloc[:, i].astype('int64')
new_list.append(dataframe.columns[i])
return dataframe # return cleaned dataframe
# Preprocessing of Customers dataframe
cleaned_customers = first_preprocessing(customers)
cleaned_customers.head()
# Dropping specific columns with greater than 40% NaN values
cleaned_customers.drop(['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4','KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ', 'EINGEFUEGT_AM'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Data Processing Confirmation- This step is to confirm that the data has been thoroughly cleaned for data analysis.
###Code
print("The number of customers left after processing: {}".format(len(cleaned_customers)))
null_count = []
for i in cleaned_customers:
value = cleaned_customers[i].isnull().sum()
null_count.append(value)
print("Percentage of null values in each column:")
for i in range(len(null_count)-1):
print("{}: {:.2f}%".format(cleaned_customers.columns[i], (null_count[i]/cleaned_customers.shape[0] * 100)))
###Output
Percentage of null values in each column:
AGER_TYP: 0.00%
AKT_DAT_KL: 0.00%
ALTER_HH: 0.00%
ALTERSKATEGORIE_FEIN: 0.00%
ANZ_HAUSHALTE_AKTIV: 0.00%
ANZ_HH_TITEL: 0.00%
ANZ_KINDER: 0.00%
ANZ_PERSONEN: 0.00%
ANZ_STATISTISCHE_HAUSHALTE: 0.00%
ANZ_TITEL: 0.00%
ARBEIT: 0.00%
BALLRAUM: 0.00%
CAMEO_DEU_2015: 0.00%
CAMEO_DEUG_2015: 0.00%
CAMEO_INTL_2015: 0.00%
CJT_GESAMTTYP: 0.00%
CJT_KATALOGNUTZER: 0.00%
CJT_TYP_1: 0.00%
CJT_TYP_2: 0.00%
CJT_TYP_3: 0.00%
CJT_TYP_4: 0.00%
CJT_TYP_5: 0.00%
CJT_TYP_6: 0.00%
D19_BANKEN_ANZ_12: 0.00%
D19_BANKEN_ANZ_24: 0.00%
D19_BANKEN_DATUM: 0.00%
D19_BANKEN_DIREKT: 0.00%
D19_BANKEN_GROSS: 0.00%
D19_BANKEN_LOKAL: 0.00%
D19_BANKEN_OFFLINE_DATUM: 0.00%
D19_BANKEN_ONLINE_DATUM: 0.00%
D19_BANKEN_ONLINE_QUOTE_12: 0.00%
D19_BANKEN_REST: 0.00%
D19_BEKLEIDUNG_GEH: 0.00%
D19_BEKLEIDUNG_REST: 0.00%
D19_BILDUNG: 0.00%
D19_BIO_OEKO: 0.00%
D19_BUCH_CD: 0.00%
D19_DIGIT_SERV: 0.00%
D19_DROGERIEARTIKEL: 0.00%
D19_ENERGIE: 0.00%
D19_FREIZEIT: 0.00%
D19_GARTEN: 0.00%
D19_GESAMT_ANZ_12: 0.00%
D19_GESAMT_ANZ_24: 0.00%
D19_GESAMT_DATUM: 0.00%
D19_GESAMT_OFFLINE_DATUM: 0.00%
D19_GESAMT_ONLINE_DATUM: 0.00%
D19_GESAMT_ONLINE_QUOTE_12: 0.00%
D19_HANDWERK: 0.00%
D19_HAUS_DEKO: 0.00%
D19_KINDERARTIKEL: 0.00%
D19_KONSUMTYP: 0.00%
D19_KONSUMTYP_MAX: 0.00%
D19_KOSMETIK: 0.00%
D19_LEBENSMITTEL: 0.00%
D19_LETZTER_KAUF_BRANCHE: 0.00%
D19_LOTTO: 0.00%
D19_NAHRUNGSERGAENZUNG: 0.00%
D19_RATGEBER: 0.00%
D19_REISEN: 0.00%
D19_SAMMELARTIKEL: 0.00%
D19_SCHUHE: 0.00%
D19_SONSTIGE: 0.00%
D19_SOZIALES: 0.00%
D19_TECHNIK: 0.00%
D19_TELKO_ANZ_12: 0.00%
D19_TELKO_ANZ_24: 0.00%
D19_TELKO_DATUM: 0.00%
D19_TELKO_MOBILE: 0.00%
D19_TELKO_OFFLINE_DATUM: 0.00%
D19_TELKO_ONLINE_DATUM: 0.00%
D19_TELKO_ONLINE_QUOTE_12: 0.00%
D19_TELKO_REST: 0.00%
D19_TIERARTIKEL: 0.00%
D19_VERSAND_ANZ_12: 0.00%
D19_VERSAND_ANZ_24: 0.00%
D19_VERSAND_DATUM: 0.00%
D19_VERSAND_OFFLINE_DATUM: 0.00%
D19_VERSAND_ONLINE_DATUM: 0.00%
D19_VERSAND_ONLINE_QUOTE_12: 0.00%
D19_VERSAND_REST: 0.00%
D19_VERSI_ANZ_12: 0.00%
D19_VERSI_ANZ_24: 0.00%
D19_VERSI_DATUM: 0.00%
D19_VERSI_OFFLINE_DATUM: 0.00%
D19_VERSI_ONLINE_DATUM: 0.00%
D19_VERSI_ONLINE_QUOTE_12: 0.00%
D19_VERSICHERUNGEN: 0.00%
D19_VOLLSORTIMENT: 0.00%
D19_WEIN_FEINKOST: 0.00%
DSL_FLAG: 0.00%
EINGEZOGENAM_HH_JAHR: 0.00%
EWDICHTE: 0.00%
EXTSEL992: 0.00%
FINANZ_ANLEGER: 0.00%
FINANZ_HAUSBAUER: 0.00%
FINANZ_MINIMALIST: 0.00%
FINANZ_SPARER: 0.00%
FINANZ_UNAUFFAELLIGER: 0.00%
FINANZ_VORSORGER: 0.00%
FINANZTYP: 0.00%
FIRMENDICHTE: 0.00%
GEBAEUDETYP: 0.00%
GEBAEUDETYP_RASTER: 0.00%
GEBURTSJAHR: 0.00%
GEMEINDETYP: 0.00%
GFK_URLAUBERTYP: 0.00%
GREEN_AVANTGARDE: 0.00%
HEALTH_TYP: 0.00%
HH_DELTA_FLAG: 0.00%
HH_EINKOMMEN_SCORE: 0.00%
INNENSTADT: 0.00%
KBA05_ALTER1: 0.00%
KBA05_ALTER2: 0.00%
KBA05_ALTER3: 0.00%
KBA05_ALTER4: 0.00%
KBA05_ANHANG: 0.00%
KBA05_ANTG1: 0.00%
KBA05_ANTG2: 0.00%
KBA05_ANTG3: 0.00%
KBA05_ANTG4: 0.00%
KBA05_AUTOQUOT: 0.00%
KBA05_CCM1: 0.00%
KBA05_CCM2: 0.00%
KBA05_CCM3: 0.00%
KBA05_CCM4: 0.00%
KBA05_DIESEL: 0.00%
KBA05_FRAU: 0.00%
KBA05_GBZ: 0.00%
KBA05_HERST1: 0.00%
KBA05_HERST2: 0.00%
KBA05_HERST3: 0.00%
KBA05_HERST4: 0.00%
KBA05_HERST5: 0.00%
KBA05_HERSTTEMP: 0.00%
KBA05_KRSAQUOT: 0.00%
KBA05_KRSHERST1: 0.00%
KBA05_KRSHERST2: 0.00%
KBA05_KRSHERST3: 0.00%
KBA05_KRSKLEIN: 0.00%
KBA05_KRSOBER: 0.00%
KBA05_KRSVAN: 0.00%
KBA05_KRSZUL: 0.00%
KBA05_KW1: 0.00%
KBA05_KW2: 0.00%
KBA05_KW3: 0.00%
KBA05_MAXAH: 0.00%
KBA05_MAXBJ: 0.00%
KBA05_MAXHERST: 0.00%
KBA05_MAXSEG: 0.00%
KBA05_MAXVORB: 0.00%
KBA05_MOD1: 0.00%
KBA05_MOD2: 0.00%
KBA05_MOD3: 0.00%
KBA05_MOD4: 0.00%
KBA05_MOD8: 0.00%
KBA05_MODTEMP: 0.00%
KBA05_MOTOR: 0.00%
KBA05_MOTRAD: 0.00%
KBA05_SEG1: 0.00%
KBA05_SEG10: 0.00%
KBA05_SEG2: 0.00%
KBA05_SEG3: 0.00%
KBA05_SEG4: 0.00%
KBA05_SEG5: 0.00%
KBA05_SEG6: 0.00%
KBA05_SEG7: 0.00%
KBA05_SEG8: 0.00%
KBA05_SEG9: 0.00%
KBA05_VORB0: 0.00%
KBA05_VORB1: 0.00%
KBA05_VORB2: 0.00%
KBA05_ZUL1: 0.00%
KBA05_ZUL2: 0.00%
KBA05_ZUL3: 0.00%
KBA05_ZUL4: 0.00%
KBA13_ALTERHALTER_30: 0.00%
KBA13_ALTERHALTER_45: 0.00%
KBA13_ALTERHALTER_60: 0.00%
KBA13_ALTERHALTER_61: 0.00%
KBA13_ANTG1: 0.00%
KBA13_ANTG2: 0.00%
KBA13_ANTG3: 0.00%
KBA13_ANTG4: 0.00%
KBA13_ANZAHL_PKW: 0.00%
KBA13_AUDI: 0.00%
KBA13_AUTOQUOTE: 0.00%
KBA13_BAUMAX: 0.00%
KBA13_BJ_1999: 0.00%
KBA13_BJ_2000: 0.00%
KBA13_BJ_2004: 0.00%
KBA13_BJ_2006: 0.00%
KBA13_BJ_2008: 0.00%
KBA13_BJ_2009: 0.00%
KBA13_BMW: 0.00%
KBA13_CCM_0_1400: 0.00%
KBA13_CCM_1000: 0.00%
KBA13_CCM_1200: 0.00%
KBA13_CCM_1400: 0.00%
KBA13_CCM_1401_2500: 0.00%
KBA13_CCM_1500: 0.00%
KBA13_CCM_1600: 0.00%
KBA13_CCM_1800: 0.00%
KBA13_CCM_2000: 0.00%
KBA13_CCM_2500: 0.00%
KBA13_CCM_2501: 0.00%
KBA13_CCM_3000: 0.00%
KBA13_CCM_3001: 0.00%
KBA13_FAB_ASIEN: 0.00%
KBA13_FAB_SONSTIGE: 0.00%
KBA13_FIAT: 0.00%
KBA13_FORD: 0.00%
KBA13_GBZ: 0.00%
KBA13_HALTER_20: 0.00%
KBA13_HALTER_25: 0.00%
KBA13_HALTER_30: 0.00%
KBA13_HALTER_35: 0.00%
KBA13_HALTER_40: 0.00%
KBA13_HALTER_45: 0.00%
KBA13_HALTER_50: 0.00%
KBA13_HALTER_55: 0.00%
KBA13_HALTER_60: 0.00%
KBA13_HALTER_65: 0.00%
KBA13_HALTER_66: 0.00%
KBA13_HERST_ASIEN: 0.00%
KBA13_HERST_AUDI_VW: 0.00%
KBA13_HERST_BMW_BENZ: 0.00%
KBA13_HERST_EUROPA: 0.00%
KBA13_HERST_FORD_OPEL: 0.00%
KBA13_HERST_SONST: 0.00%
KBA13_HHZ: 0.00%
KBA13_KMH_0_140: 0.00%
KBA13_KMH_110: 0.00%
KBA13_KMH_140: 0.00%
KBA13_KMH_140_210: 0.00%
KBA13_KMH_180: 0.00%
KBA13_KMH_210: 0.00%
KBA13_KMH_211: 0.00%
KBA13_KMH_250: 0.00%
KBA13_KMH_251: 0.00%
KBA13_KRSAQUOT: 0.00%
KBA13_KRSHERST_AUDI_VW: 0.00%
KBA13_KRSHERST_BMW_BENZ: 0.00%
KBA13_KRSHERST_FORD_OPEL: 0.00%
KBA13_KRSSEG_KLEIN: 0.00%
KBA13_KRSSEG_OBER: 0.00%
KBA13_KRSSEG_VAN: 0.00%
KBA13_KRSZUL_NEU: 0.00%
KBA13_KW_0_60: 0.00%
KBA13_KW_110: 0.00%
KBA13_KW_120: 0.00%
KBA13_KW_121: 0.00%
KBA13_KW_30: 0.00%
KBA13_KW_40: 0.00%
KBA13_KW_50: 0.00%
KBA13_KW_60: 0.00%
KBA13_KW_61_120: 0.00%
KBA13_KW_70: 0.00%
KBA13_KW_80: 0.00%
KBA13_KW_90: 0.00%
KBA13_MAZDA: 0.00%
KBA13_MERCEDES: 0.00%
KBA13_MOTOR: 0.00%
KBA13_NISSAN: 0.00%
KBA13_OPEL: 0.00%
KBA13_PEUGEOT: 0.00%
KBA13_RENAULT: 0.00%
KBA13_SEG_GELAENDEWAGEN: 0.00%
KBA13_SEG_GROSSRAUMVANS: 0.00%
KBA13_SEG_KLEINST: 0.00%
KBA13_SEG_KLEINWAGEN: 0.00%
KBA13_SEG_KOMPAKTKLASSE: 0.00%
KBA13_SEG_MINIVANS: 0.00%
KBA13_SEG_MINIWAGEN: 0.00%
KBA13_SEG_MITTELKLASSE: 0.00%
KBA13_SEG_OBEREMITTELKLASSE: 0.00%
KBA13_SEG_OBERKLASSE: 0.00%
KBA13_SEG_SONSTIGE: 0.00%
KBA13_SEG_SPORTWAGEN: 0.00%
KBA13_SEG_UTILITIES: 0.00%
KBA13_SEG_VAN: 0.00%
KBA13_SEG_WOHNMOBILE: 0.00%
KBA13_SITZE_4: 0.00%
KBA13_SITZE_5: 0.00%
KBA13_SITZE_6: 0.00%
KBA13_TOYOTA: 0.00%
KBA13_VORB_0: 0.00%
KBA13_VORB_1: 0.00%
KBA13_VORB_1_2: 0.00%
KBA13_VORB_2: 0.00%
KBA13_VORB_3: 0.00%
KBA13_VW: 0.00%
KKK: 0.00%
KOMBIALTER: 0.00%
KONSUMNAEHE: 0.00%
KONSUMZELLE: 0.00%
LP_FAMILIE_FEIN: 0.00%
LP_FAMILIE_GROB: 0.00%
LP_LEBENSPHASE_FEIN: 0.00%
LP_LEBENSPHASE_GROB: 0.00%
LP_STATUS_FEIN: 0.00%
LP_STATUS_GROB: 0.00%
MIN_GEBAEUDEJAHR: 0.00%
MOBI_RASTER: 0.00%
MOBI_REGIO: 0.00%
NATIONALITAET_KZ: 0.00%
ONLINE_AFFINITAET: 0.00%
ORTSGR_KLS9: 0.00%
OST_WEST_KZ: 0.00%
PLZ8_ANTG1: 0.00%
PLZ8_ANTG2: 0.00%
PLZ8_ANTG3: 0.00%
PLZ8_ANTG4: 0.00%
PLZ8_BAUMAX: 0.00%
PLZ8_GBZ: 0.00%
PLZ8_HHZ: 0.00%
PRAEGENDE_JUGENDJAHRE: 0.00%
REGIOTYP: 0.00%
RELAT_AB: 0.00%
RETOURTYP_BK_S: 0.00%
RT_KEIN_ANREIZ: 0.00%
RT_SCHNAEPPCHEN: 0.00%
RT_UEBERGROESSE: 0.00%
SEMIO_DOM: 0.00%
SEMIO_ERL: 0.00%
SEMIO_FAM: 0.00%
SEMIO_KAEM: 0.00%
SEMIO_KRIT: 0.00%
SEMIO_KULT: 0.00%
SEMIO_LUST: 0.00%
SEMIO_MAT: 0.00%
SEMIO_PFLICHT: 0.00%
SEMIO_RAT: 0.00%
SEMIO_REL: 0.00%
SEMIO_SOZ: 0.00%
SEMIO_TRADV: 0.00%
SEMIO_VERT: 0.00%
SHOPPER_TYP: 0.00%
SOHO_KZ: 0.00%
STRUKTURTYP: 0.00%
UMFELD_ALT: 0.00%
UMFELD_JUNG: 0.00%
UNGLEICHENN_FLAG: 0.00%
VERDICHTUNGSRAUM: 0.00%
VERS_TYP: 0.00%
VHA: 0.00%
VHN: 0.00%
VK_DHT4A: 0.00%
VK_DISTANZ: 0.00%
VK_ZG11: 0.00%
W_KEIT_KIND_HH: 0.00%
WOHNDAUER_2008: 0.00%
WOHNLAGE: 0.00%
ZABEOTYP: 0.00%
PRODUCT_GROUP: 0.00%
CUSTOMER_GROUP: 0.00%
ONLINE_PURCHASE: 0.00%
ANREDE_KZ: 0.00%
###Markdown
General Population Data Processing
###Code
cleaned_population = first_preprocessing(azdias)
cleaned_population.head()
# Dropping specific columns with greater than 40% NaN values
cleaned_population.drop(['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4','KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ', 'EINGEFUEGT_AM'], axis=1, inplace=True)
cleaned_population.head()
###Output
_____no_output_____
###Markdown
Side note: - Due to the size of the data, processing the data can be computationally intensive. The preferred method for imputation is KNN imputation. However, this takes roughly 1 hour for processing the Customers csv file and likely much longer for the general population. The decision was made to use Mode for imputation instead. KNN imputation would have been more representative of the actual data.
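If compute were not a constraint, a subsampled KNN imputation could look roughly like the sketch below. This is only a sketch of the alternative, not what is run in this notebook: it assumes scikit-learn's `KNNImputer` (version 0.22 or newer), assumes the columns are already numeric, and the `sample_size` cap is a hypothetical knob to keep the runtime manageable.

```python
# Sketch only (not used in this analysis): KNN imputation on a row subsample
# to keep the computation tractable. Assumes numeric columns and sklearn >= 0.22.
import pandas as pd
from sklearn.impute import KNNImputer

def knn_imputation_sampled(dataframe, n_neighbors=5, sample_size=20000, random_state=42):
    sample = dataframe.sample(n=min(sample_size, len(dataframe)), random_state=random_state)
    imputer = KNNImputer(n_neighbors=n_neighbors, weights='uniform')
    imputed = imputer.fit_transform(sample)  # returns a NumPy array
    return pd.DataFrame(imputed, columns=sample.columns, index=sample.index)
```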
###Code
#from sklearn.impute import KNNImputer # Importing K Nearest Neighbors Algorithm
# K Nearest Neighbours algorithm is used to replace values with its nearest neighbours - or most similar row data
#def knn_imputation(dataframe):
# cleaned_dataframe = pd.DataFrame(KNNImputer(n_neighbors=5, weights='uniform', metric='nan_euclidean').fit(dataframe).transform(dataframe), columns = dataframe.columns)
# return cleaned_dataframe
# Code for viewing distinct values within a column
#for i in cleaned_customers:
# print(cleaned_customers[i].unique())
###Output
_____no_output_____
###Markdown
Data Modelling:- Now that the data has been cleaned, it is ready for modelling. In this stage, we will use Principal Component Analysis (PCA) to reduce the dimensionality of the data. In other words, we will take the dataset of 365 columns and reduce it to just a few. - After the dimensionality has been reduced, the data will be clustered into distinct groups of customers using K-means clustering. 1. Principal Component Analysis
###Code
pca = PCA(n_components=20)
X_df = pca.fit(cleaned_customers).transform(cleaned_customers)
PCA_components = pd.DataFrame(X_df)
sum(pca.explained_variance_ratio_)
pc_range = range(1, pca.n_components_+1)
plt.title("Variance vs Number of Principal Components", size=20)
plt.bar(pc_range, pca.explained_variance_ratio_, color='blue')
plt.xlabel('Principal Components')
plt.ylabel('Variance %')
plt.xticks(pc_range)
###Output
_____no_output_____
###Markdown
Observation: From the chart, we can see a distinct drop-off after the second component. This means that most of the variance in the data can be explained using only two principal components.
###Code
pca = PCA(n_components=2)
X_df = pca.fit(cleaned_customers).transform(cleaned_customers)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
2. K-Means Clustering
###Code
model = KMeans()
visualizer = KElbowVisualizer(model, k=(1,15)) # Loop through model to find ideal number of clusters within the data
visualizer.fit(PCA_components)
visualizer.show()
k_means_model = KMeans(n_clusters = 3, init = "k-means++")
k_means_pred = k_means_model.fit_predict(X_df) # Fitting the data onto the K-means clustering algorithm
uniq = np.unique(k_means_pred)
plt.figure(figsize=(15,15))
for i in uniq:
plt.scatter(X_df[k_means_pred == i , 0] , X_df[k_means_pred == i , 1] , label = i)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend()
plt.show() # Plotting the data onto a chart
cleaned_customers['cluster'] = k_means_model.labels_ # Adding extra column to the Customers dataframe to allocate data to separate groups
cleaned_customers.head()
# Locating the central locations of the clusters
array = k_means_model.cluster_centers_
array = array.astype(int)
array
# Allocating the clustered groups onto different dataframes
dataframe_cluster_0 = cleaned_customers[cleaned_customers['cluster'] == 0]
dataframe_cluster_1 = cleaned_customers[cleaned_customers['cluster'] == 1]
dataframe_cluster_2 = cleaned_customers[cleaned_customers['cluster'] == 2]
#define data
data = [len(dataframe_cluster_0), len(dataframe_cluster_1), len(dataframe_cluster_2)]
labels = ['Cluster 1', 'Cluster 2', 'Cluster 3']
#define Seaborn color palette to use
colors = sns.color_palette('pastel')[0:5]
#create pie chart
plt.title('Percentage of Total Customer Individuals in Each Cluster', size = 20)
plt.pie(data, labels = labels, colors = colors, autopct='%.0f%%')
plt.show()
###Output
_____no_output_____
###Markdown
Modelling of General Population
###Code
pca = PCA(n_components=2)
pop_X_df = pca.fit(cleaned_population).transform(cleaned_population)
PCA_components = pd.DataFrame(pop_X_df)
print("The 2 principal components are able to explain {:.2f}% of the data.".format(sum(pca.explained_variance_ratio_) * 100))
k_means_pred = k_means_model.fit_predict(cleaned_population)
cleaned_population['cluster'] = k_means_model.labels_ # Adding extra column to the Customers dataframe to allocate data to separate groups
cleaned_population.head()
# Allocating the clustered groups onto different dataframes
pop_dataframe_cluster_0 = cleaned_population[cleaned_population['cluster'] == 0]
pop_dataframe_cluster_1 = cleaned_population[cleaned_population['cluster'] == 1]
pop_dataframe_cluster_2 = cleaned_population[cleaned_population['cluster'] == 2]
#define data
data = [len(pop_dataframe_cluster_0), len(pop_dataframe_cluster_1), len(pop_dataframe_cluster_2)]
labels = ['Cluster 1', 'Cluster 2', 'Cluster 3']
#define Seaborn color palette to use
colors = sns.color_palette('pastel')[0:5]
#create pie chart
plt.title('Percentage of Total Population Individuals in Each Cluster', size = 20)
plt.pie(data, labels = labels, colors = colors, autopct='%.0f%%')
plt.show()
###Output
_____no_output_____
###Markdown
Feature Importance in Cluster allocation- After finding our clusters, it is important to identify the most influential features from the original dataframe. This will tell us which features yield important information about our customers.
###Code
rfc_df = cleaned_customers.copy()
rfc_y = rfc_df.pop('cluster')
rfc_X = rfc_df[:]
rfc_X.head()
rfc_y.head()
rfc_X_train, rfc_X_test, rfc_y_train, rfc_y_test = train_test_split(rfc_X, rfc_y) # Splitting the data into training and testing datasets
rfc = RandomForestClassifier()
rfc.fit(rfc_X_train, rfc_y_train)
rfc_pred = rfc.predict(rfc_X_test)
print ("Accuracy : {:.2f}%".format(accuracy_score(rfc_y_test, rfc_pred)*100))
rfc_array = rfc.feature_importances_
#df = pd.DataFrame(array.reshape(1, 368), columns=X.columns)
# Arranging the most important features into a list
rfc_importances = []
count = 0
for i in rfc_array:
rfc_importances.append([i, rfc_X.columns[count]])
count += 1
# Sorting the feature importances from maximum importance to least.
rfc_importances.sort(reverse=True)
rfc_labels = []
rfc_values = []
for i in rfc_importances[0:20]:
rfc_labels.append(i[1])
rfc_values.append(i[0])
rfc_importances[0:20]
sns.barplot(rfc_labels, rfc_values)
plt.xticks(rotation=90)
plt.title('Feature Importances in Cluster Allocation', size = 20)
###Output
C:\Users\Daniel\Anaconda3\lib\site-packages\seaborn\_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
Important: The biggest factor seems to be the number of cars in the postal code area. This could indicate the type of financial product Arvato is selling. Exploratory Data Analysis- Here we will be looking at some characteristics of the customer population, as well as the most important features in the dataset. - Due to the nature of the dataset, we will refer to the "DIAS Attributes - Values 2017" csv file to understand the meaning of the categorical values.
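To interpret the categorical codes plotted below, it can also help to look their meanings up programmatically in the attribute spreadsheet rather than by hand. The snippet below is only an illustration: the file name matches the spreadsheet mentioned in Part 0, but the `header=1` offset and the 'Attribute'/'Value'/'Meaning' column names are assumptions about its layout that may need adjusting.

```python
# Illustrative lookup of value meanings from the DIAS attribute spreadsheet.
# The header offset and column names are assumptions about the file layout.
import pandas as pd

attribute_values = pd.read_excel('DIAS Attributes - Values 2017.xlsx', header=1)
attribute_values['Attribute'] = attribute_values['Attribute'].fillna(method='ffill')  # names appear once per block

def describe_attribute(name):
    """Return the value/meaning rows for a single attribute, if present."""
    return attribute_values.loc[attribute_values['Attribute'] == name, ['Value', 'Meaning']]

# Example (hypothetical): describe_attribute('ALTERSKATEGORIE_GROB')
```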
###Code
figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5))
figure.suptitle('Distribution of Age in Each Cluster', fontsize=20)
sns.distplot(dataframe_cluster_0['ALTERSKATEGORIE_GROB'], ax=axes[0], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_1['ALTERSKATEGORIE_GROB'], ax=axes[1], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_2['ALTERSKATEGORIE_GROB'], ax=axes[2], color ='red', bins = 10, kde = False)
###Output
_____no_output_____
###Markdown
Observations:- The first cluster has a younger population, with individuals falling into the categories of under 30 years of age and between 30 and 45 years of age. - The second cluster tends to have a larger share of middle-aged individuals ranging from 46 to 60 years of age. - The third cluster has the largest percentage of individuals over 60 years of age
###Code
figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5))
figure.suptitle('Class of Each Cluster', fontsize=20)
sns.distplot(dataframe_cluster_0['CAMEO_DEUG_2015'], ax=axes[0], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_1['CAMEO_DEUG_2015'], ax=axes[1], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_2['CAMEO_DEUG_2015'], ax=axes[2], color ='red', bins = 10, kde = False)
dataframe_cluster_1['CAMEO_DEUG_2015'].unique()
figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5))
figure.suptitle('Number of Cars in Postal Code Area of Each Cluster', fontsize=20)
sns.distplot(dataframe_cluster_0['KBA13_ANZAHL_PKW'], ax=axes[0], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_1['KBA13_ANZAHL_PKW'], ax=axes[1], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_2['KBA13_ANZAHL_PKW'], ax=axes[2], color ='red', bins = 10, kde = False)
figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5))
figure.suptitle('Development of Most Recent Car Manufacturer in Postal Code Area of Each Cluster', fontsize=20)
sns.distplot(dataframe_cluster_0['KBA05_HERSTTEMP'], ax=axes[0], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_1['KBA05_HERSTTEMP'], ax=axes[1], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_2['KBA05_HERSTTEMP'], ax=axes[2], color ='red', bins = 10, kde = False)
figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15, 5))
figure.suptitle('Number of Buildings in Postal Code Area of Each Cluster', fontsize=20)
sns.distplot(dataframe_cluster_0['PLZ8_GBZ'], ax=axes[0], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_1['PLZ8_GBZ'], ax=axes[1], color ='red', bins = 10, kde = False)
sns.distplot(dataframe_cluster_2['PLZ8_GBZ'], ax=axes[2], color ='red', bins = 10, kde = False)
figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15, 5))
figure.suptitle('Number of Households in Postal Code Area of Each Cluster', fontsize=20)
sns.distplot(dataframe_cluster_0['PLZ8_HHZ'], ax=axes[0], color ='red', bins = 5, kde = False)
sns.distplot(dataframe_cluster_1['PLZ8_HHZ'], ax=axes[1], color ='red', bins = 5, kde = False)
sns.distplot(dataframe_cluster_2['PLZ8_HHZ'], ax=axes[2], color ='red', bins = 5, kde = False)
figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15, 5))
figure.suptitle('Unemployment of Each Cluster', fontsize=20)
sns.distplot(dataframe_cluster_0['RELAT_AB'], ax=axes[0], color ='red', bins = 6, kde = False)
sns.distplot(dataframe_cluster_1['RELAT_AB'], ax=axes[1], color ='red', bins = 6, kde = False)
sns.distplot(dataframe_cluster_2['RELAT_AB'], ax=axes[2], color ='red', bins = 6, kde = False)
filtered_customers = cleaned_customers.filter(['AGER_TYP', 'ALTERSKATEGORIE_GROB', 'ANREDE_KZ', 'BALLRAUM', 'D19_BANKEN_ANZ_12', 'D19_BANKEN_ANZ_24',
'D19_BANKEN_ANZ_24',
'D19_BANKEN_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM', 'D19_VERSAND_DATUM',
'D19_VERSAND_ONLINE_QUOTE_12', 'FINANZ_MINIMALIST', 'FINANZ_SPARER', 'FINANZ_VORSORGER',
'FINANZ_ANLEGER', 'FINANZ_UNAUFFAELLIGER', 'FINANZ_HAUSBAUER', 'FINANZTYP', 'GEBURTSJAHR',
'GFK_URLAUBERTYP', 'GREEN_AVANTGARDE', 'HEALTH_TYP', 'LP_LEBENSPHASE_FEIN',
'LP_LEBENSPHASE_GROB', 'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_STATUS_FEIN',
'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'PRAEGENDE_JUGENDJAHRE', 'RETOURTYP_BK_S',
'SEMIO_SOZ', 'SEMIO_FAM', 'SEMIO_REL', 'SEMIO_MAT', 'SEMIO_VERT', 'SEMIO_LUST',
'SEMIO_ERL', 'SEMIO_KULT', 'SEMIO_RAT', 'SEMIO_KRIT', 'SEMIO_DOM', 'SEMIO_KAEM',
'SEMIO_PFLICHT', 'SEMIO_TRADV', 'SHOPPER_TYP', 'SOHO_FLAG', 'TITEL_KZ',
'VERS_TYP', 'ZABEOTYP', 'GEBAEUDETYP_RASTER', 'KKK', 'MOBI_REGIO',
'ONLINE_AFFINITAET', 'REGIOTYP'])
for i in filtered_customers:
filtered_customers[i] = filtered_customers[i].astype(int)
#sns.set_theme(style="white")
# Obtaining correlation matrix
#corr_df = filtered_customers.copy() #.drop(['cluster'], axis=1)
corr = filtered_customers.corr()
# Matplotlib graph setup
f, ax = plt.subplots(figsize=(20, 20))
# Generating Seaplot colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)
sns.heatmap(corr, cmap=cmap, vmax=1, center=0,
square=True, linewidths=1, cbar_kws={"shrink": 1}, fmt=".2f")
###Output
_____no_output_____
###Markdown
Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld.
###Code
train_csv = pd.read_csv('Udacity_MAILOUT_052018_TRAIN.csv', sep=';')
train_csv.head()
train_csv.drop_duplicates(keep = 'first', inplace = True)
def second_preprocessing (dataframe):
"""Cleaning of dataframe for better data processing. """
dataframe = dataframe.copy()
dataframe = dataframe.set_index(['LNR']) # Set the Customer ID to index of dataframe
#dataframe.drop_duplicates(keep = 'first', inplace = True) # Removes any duplicates from the
    dataframe.drop(['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ', 'EINGEFUEGT_AM'], axis=1, inplace=True) # Drop columns with a large share of missing values
dataframe.replace(-1, float('NaN'), inplace=True) # -1 values represent missing values, and will be replaced with NaN values
dataframe.replace(0, float('NaN'), inplace=True) # 0 values represent unknown values, and will be replaced with NaN values
dataframe['CAMEO_DEU_2015'].replace('XX', dataframe['CAMEO_DEU_2015'].mode().iloc[0], inplace=True) # Replace unknown string to mode value
dataframe['CAMEO_DEUG_2015'].replace('X', dataframe['CAMEO_DEUG_2015'].mode().iloc[0], inplace=True)
dataframe['CAMEO_DEUG_2015'] = dataframe['CAMEO_DEUG_2015'].apply(pd.to_numeric) # Convert to integer values
dataframe['CAMEO_INTL_2015'].replace('XX', dataframe['CAMEO_INTL_2015'].mode().iloc[0], inplace=True)
dataframe['CAMEO_INTL_2015'] = dataframe['CAMEO_INTL_2015'].apply(pd.to_numeric) # Convert to integer values
new_list = []
for i in dataframe:
dataframe[i] = dataframe[i].fillna(dataframe[i].mode().iloc[0]) # Mode is used to replace NaN values due to categorical values
dataframe = pd.get_dummies(dataframe)
"""for i in range(0, dataframe.shape[1]):
if (dataframe.iloc[:, i].dtypes == 'object'): # All object dtypes to be converted to categorical values
dataframe.iloc[:, i] = pd.Categorical(dataframe.iloc[:, i])
dataframe.iloc[:, i] = dataframe.iloc[:, i].cat.codes
dataframe.iloc[:, i] = dataframe.iloc[:, i].astype('int64')
new_list.append(dataframe.columns[i])"""
return dataframe # return cleaned dataframe
# Function for viewing the number of positive responses
def response_counter(response_array):
"""Counts the number of positive and negative responses in purchasing. """
number_of_yes = 0
number_of_no = 0
for i in response_array:
if i == 1:
number_of_yes += 1
else:
number_of_no += 1
return number_of_yes, number_of_no
# Obtaining obtaining target features
y = train_csv.pop('RESPONSE') # Pop off target feature
X = train_csv[:] # Store features in separate variable for processing
X.head()
X = second_preprocessing(X)
X.head()
# Viewing number of target feature rows
print(y.unique())
# Number of unique values in target feature
print(y.nunique())
number_of_yes, number_of_no = response_counter(y)
print("The number of yes responses in target column is {}, and the number of no responses is {}".format(number_of_yes, number_of_no))
###Output
The number of yes responses in target column is 532, and the number of no responses is 42430
###Markdown
Modelling of Supervised Learning Model - The goal is to: - Normalize the features - Create training, testing and validation datasets - Train the model using a Random Forest Classifier or XGBoost Model - Evaluate the model
###Code
sc = StandardScaler()
scaled_X = sc.fit_transform(X)
response_counter(y)
# Split data into training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(scaled_X, y, test_size=0.2, random_state=2, stratify=y)
###Output
_____no_output_____
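The split above only produces training and test partitions; if the separate validation set listed in the plan were wanted, one more stratified split of the training portion would provide it. This is a sketch of that option (using the `X_train`/`y_train` created above), not something run in the rest of the notebook, which relies on GridSearchCV's internal cross-validation instead.

```python
# Optional sketch: carve a validation set out of the training portion.
# Not executed below; GridSearchCV's internal cross-validation is used instead.
X_train_part, X_val, y_train_part, y_val = train_test_split(
    X_train, y_train, test_size=0.2, random_state=2, stratify=y_train)
```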
###Markdown
XGBoost
###Code
xgboost_model = xgb.XGBClassifier(colsample_bytree=0.6, gamma=1, max_depth=5, min_child_weight=10, n_estimators=10, subsample=0.8, use_label_encoder=False)
param_grid = {
    'n_estimators': [10, 100, 200],
    'min_child_weight': [1, 5, 10],
    'gamma': [0.5, 1, 1.5, 2, 5],
    'subsample': [0.6, 0.8, 1.0],
    'colsample_bytree': [0.6, 0.8, 1.0],
    'max_depth': [3, 4, 5]
}
xgboost_model = GridSearchCV(xgboost_model, param_grid, scoring='roc_auc', verbose=3)
xgboost_model.fit(X_train, y_train)
xgb_pred = xgboost_model.predict(X_test)
###Output
_____no_output_____
###Markdown
The best parameters for the XGBoost Grid Search were: - (colsample_bytree=0.6, gamma=1, max_depth=5, min_child_weight=10, n_estimators=10, subsample=0.8)
###Code
# Assessing the accuracy of the XGBoost
from sklearn.metrics import accuracy_score
xgb_pred = xgboost_model.predict(X_test)
print ("Accuracy : {:.2f}%".format(accuracy_score(y_test, xgb_pred)*100))
from sklearn.metrics import classification_report
print(classification_report(y_test, xgb_pred))
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, xgb_pred)
import pickle
filename = 'XGBoost.pkl'
pickle.dump(xgboost_model, open(filename, 'wb'))
###Output
_____no_output_____
###Markdown
Random Forest Classifier
###Code
rfc = RandomForestClassifier(random_state = 42)
parameters = {
'n_estimators': [200, 500],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,5,6,7,8],
'criterion' :['gini', 'entropy']
}
cv = GridSearchCV(rfc, param_grid=parameters, verbose=3, scoring='roc_auc')
cv.fit(X_train, y_train)  # run the grid search rather than fitting the bare classifier
rfc = cv.best_estimator_  # keep the best estimator so the cells below can continue using `rfc`
###Output
_____no_output_____
###Markdown
The best parameters for the Random Forest Classifier were: - ('n_estimators': [200], 'max_features': ['auto'], 'max_depth': [4], 'criterion': ['gini'])
###Code
rfc_pred = rfc.predict(X_test)
print ("Accuracy : {:.2f}%".format(accuracy_score(y_test, rfc_pred)*100))
print(classification_report(y_test, rfc_pred))
print(roc_auc_score(y_test, rfc_pred))
filename = 'RandomForestClassifier.pkl'
pickle.dump(rfc, open(filename, 'wb'))
###Output
_____no_output_____
###Markdown
Test Dataset- The goal is to test the models on Kaggle to validate the results. The results on Kaggle will take precedence over the other performance measures in previous evaluations.
###Code
test_csv = pd.read_csv('Udacity_MAILOUT_052018_TEST.csv', sep=';')
test_csv.head()
X = test_csv[:]
X = second_preprocessing(test_csv)
scaled_X = sc.fit_transform(X)
r_pred = rfc.predict_proba(scaled_X)
xgb_pred = xgboost_model.predict_proba(scaled_X)
pred_csv = pd.DataFrame()
pred_csv['LNR'] = test_csv['LNR']
pred_csv.head()
# Ensemble technique to improve accuracy of predictions on test dataset
pred_csv['rfc_response'] = r_pred[:, 1]
pred_csv['xgb_response'] = xgb_pred[:, 1]
pred_csv['RESPONSE'] = (pred_csv['rfc_response'] + pred_csv['xgb_response'])/2 # Return the average probability between the models
pred_csv.head()
pred_csv.drop(['rfc_response', 'xgb_response'], axis=1, inplace=True)
pred_csv.head()
pred_csv.to_csv("Arvato_Test_prediction.csv", index=False)
###Output
_____no_output_____
###Markdown
Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. Importing Libraries
###Code
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#imports to help me plot my venn diagrams
from matplotlib_venn import venn2
from pylab import rcParams
# import the util.py file where I define my functions
from utils import *
# sklearn
from sklearn.preprocessing import StandardScaler, Imputer, RobustScaler, MinMaxScaler, OneHotEncoder
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import confusion_matrix,precision_recall_fscore_support
from sklearn.utils.multiclass import unique_labels
from sklearn.linear_model import LinearRegression
# magic word for producing visualizations in notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them.
###Code
# load in the data
'''
There are 2 warnings when we read in the datasets:
DtypeWarning: Columns (19,20) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
This warning happens when pandas attempts to guess datatypes on particular columns, I will address this on
the pre-processing steps
'''
azdias = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\azdias.csv")
customers = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\customers.csv")
attributes = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\features.csv")
# I will now check what is the problem with the columns 19 and 20
# getting the name of these columns
print(azdias.iloc[:,19:21].columns)
print(customers.iloc[:,19:21].columns)
# checking the unique values in these columns for possible issues
print(azdias.CAMEO_DEUG_2015.unique())
print(azdias.CAMEO_INTL_2015.unique())
print(customers.CAMEO_DEUG_2015.unique())
print(customers.CAMEO_INTL_2015.unique())
###Output
[nan 8.0 4.0 2.0 6.0 1.0 9.0 5.0 7.0 3.0 '4' '3' '7' '2' '8' '9' '6' '5'
'1' 'X']
[nan 51.0 24.0 12.0 43.0 54.0 22.0 14.0 13.0 15.0 33.0 41.0 34.0 55.0 25.0
23.0 31.0 52.0 35.0 45.0 44.0 32.0 '22' '24' '41' '12' '54' '51' '44'
'35' '23' '25' '14' '34' '52' '55' '31' '32' '15' '13' '43' '33' '45'
'XX']
[1.0 nan 5.0 4.0 7.0 3.0 9.0 2.0 6.0 8.0 '6' '3' '8' '9' '2' '4' '1' '7'
'5' 'X']
[13.0 nan 34.0 24.0 41.0 23.0 15.0 55.0 14.0 22.0 43.0 51.0 33.0 25.0 44.0
54.0 32.0 12.0 35.0 31.0 45.0 52.0 '45' '25' '55' '51' '14' '54' '43'
'22' '15' '24' '35' '23' '12' '44' '41' '52' '31' '13' '34' '32' '33'
'XX']
###Markdown
It seems like the mixed type issue comes from the 'X' placeholder that appears in these columns. There are ints, floats and strings all in the mix
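`mixed_type_fixer` lives in `utils.py`, which is not shown in this notebook. A minimal sketch of what such a fixer could do, replacing the 'X'/'XX' placeholders with NaN and forcing a numeric dtype, is given below; it is an assumption about the helper's behaviour, not its actual implementation.

```python
import numpy as np
import pandas as pd

def mixed_type_fixer(df, cols):
    """Sketch: drop the 'X'/'XX' placeholders and cast the columns to numeric."""
    df = df.copy()
    for col in cols:
        df[col] = df[col].replace({'X': np.nan, 'XX': np.nan})
        df[col] = pd.to_numeric(df[col], errors='coerce')  # anything non-numeric becomes NaN
    return df
```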
###Code
cols = ['CAMEO_DEUG_2015', 'CAMEO_INTL_2015']
azdias = mixed_type_fixer(azdias, cols)
customers = mixed_type_fixer(customers, cols)
###Output
_____no_output_____
###Markdown
Checking if values were fixed. Change this cell to code if you want to perform the checks: `azdias.CAMEO_DEUG_2015.unique()` and `customers.CAMEO_INTL_2015.unique()`. Considering the appearance of these mixed type data entries I created a function to check the dtype of the different attributes. This might be useful in case some attributes have too many category values, which might fragment the data clustering too much.
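`categorical_checker` also comes from `utils.py`. One plausible shape for it, assumed here rather than taken from the real helper, is to pick out the attributes flagged as categorical in the DIAS information and count how many distinct values each has in the dataframe; the 'type' and 'attribute' column names of the attributes frame are assumptions.

```python
import pandas as pd

def categorical_checker(df, attributes, type_col='type', name_col='attribute'):
    """Sketch: number of distinct values per categorical attribute present in df."""
    cat_attrs = attributes.loc[attributes[type_col] == 'categorical', name_col]
    cat_attrs = [a for a in cat_attrs if a in df.columns]
    counts = {a: df[a].nunique(dropna=True) for a in cat_attrs}
    return pd.Series(counts).sort_values(ascending=False)
```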
###Code
#doing a quick check of categorical features and see if some are too granular to be maintained
cat_check = categorical_checker(azdias, attributes)
customers.AKT_DAT_KL.unique()
###Output
_____no_output_____
###Markdown
Based on the categorical info it might be a good idea to drop the CAMEO_DEU_2015 column: it is far too fragmented, with 45 different category values. This is an idea to revisit after testing the models. There is an extra column called Unnamed that seems like an index duplication, so I will now drop it
###Code
#dropping unnamed column
azdias = azdias.drop(azdias.columns[0], axis = 1)
customers = customers.drop(customers.columns[0], axis = 1)
###Output
_____no_output_____
###Markdown
We also have 3 columns that are different between azdias and customers: 'CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'. I will drop those to harmonize the 2 datasets
###Code
customers = customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=False, axis=1)
###Output
_____no_output_____
###Markdown
I will now check the overall shapes of the datasets. Azdias Shape
###Code
# checking how the azdias dataframe looks like
print('Printing dataframe shape')
print(azdias.shape)
print('________________________________________________________')
azdias.head()
###Output
Printing dataframe shape
(891221, 366)
________________________________________________________
###Markdown
Customers Shape
###Code
# checking how the customer dataframe looks like
print('Printing dataframe shape')
print(customers.shape)
print('________________________________________________________')
customers.head()
###Output
Printing dataframe shape
(191652, 366)
________________________________________________________
###Markdown
Attributes shape
###Code
# Check the summary csv file
print(attributes.shape)
attributes.head()
###Output
(332, 5)
###Markdown
On the dataframe shapes: For now it is noted that the 2 initial working dataframes are harmonized in terms of number of columns: azdias: (891221, 366) customers: (191652, 366) attributes: (332, 5)
###Code
#saving the unique attribute names to lists
attributes_list = attributes.attribute.unique().tolist()
azdias_list = list(azdias.columns)
customers_list = list(customers.columns)
#establishing uniqueness of the attributes across the datasets in work
common_to_all = (set(attributes_list) & set(azdias_list) & set(customers_list))
unique_to_azdias = (set(azdias_list) - set(attributes_list) - set(customers_list))
unique_to_customers = (set(customers_list) - set(attributes_list) - set(azdias_list))
unique_to_attributes = (set(attributes_list) - set(customers_list) - set(azdias_list))
unique_to_attributes_vs_azdias = (set(attributes_list) - set(azdias_list))
unique_to_azdias_vs_attributes = (set(azdias_list) - set(attributes_list))
common_azdias_attributes = (set(azdias_list) & set(attributes_list))
print("No of items common to all 3 daframes: " + str(len(common_to_all)))
print("No of items exclusive to azdias: " + str(len(unique_to_azdias)))
print("No of items exclusive to customers: " + str(len(unique_to_customers)))
print("No of items exclusive to attributes: " + str(len(unique_to_attributes)))
print("No of items overlapping between azdias and attributes: " + str(len(common_azdias_attributes)))
print("No of items exclusive to attributes vs azdias: " + str(len(unique_to_attributes_vs_azdias)))
print("No of items exclusive to azdias vs attributes: " + str(len(unique_to_azdias_vs_attributes)))
rcParams['figure.figsize'] = 8, 8
ax = plt.axes()
ax.set_facecolor('lightgrey')
v = venn2(subsets=(len(set(azdias_list) - set(attributes_list)),
                   len(set(attributes_list) - set(azdias_list)),
                   len(common_azdias_attributes)),
          set_labels=('Azdias', 'Attributes'),
          set_colors=['cyan', 'grey']);
plt.title("Attribute presence on Azdias vs DIAS Attributes ")
plt.show()
###Output
_____no_output_____
###Markdown
From this little exploration we got quite a bit of information: - There are 3 extra features in the customers dataset; these correspond to the columns 'CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP' - All the datasets share 327 features between them - The attributes file has 5 columns corresponding to feature information that does not exist in the other datasets Preprocessing Now that I have a bird's-eye view of the data I will proceed with cleaning and handling missing values, re-encode features (since the first portion of this project will involve unsupervised learning), perform some feature engineering and scaling. Assessing missing data and replacing it with nan Before dealing with the missing and unknown data I will save a copy of the dataframes for the purpose of visualizing how much improvement was achieved
###Code
#making dataframes copies pre-cleanup
azdias_pre_cleanup = azdias.copy()
customers_pre_cleanup = customers.copy()
azdias_pre_cleanup['AKT_DAT_KL'].isnull().sum()*100/len(azdias_pre_cleanup['AKT_DAT_KL'])
# I am using feat_fixer to use the information in the attributes dataframe to fill the information
# regarding missing and unknown values
azdias = feat_fixer(azdias, attributes)
customers = feat_fixer(customers, attributes)
###Output
_____no_output_____
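`feat_fixer` is another `utils.py` helper. Based on the description in the cell above (use the attributes table to flag missing and unknown codes), a rough sketch could look like the following; the 'attribute' and 'missing_or_unknown' column names, and the string-encoded list of codes, are assumptions about the attributes file rather than its verified layout.

```python
import ast
import numpy as np

def feat_fixer(df, attributes, name_col='attribute', missing_col='missing_or_unknown'):
    """Sketch: replace each attribute's missing/unknown codes with NaN."""
    df = df.copy()
    for _, row in attributes.iterrows():
        col = row[name_col]
        if col not in df.columns:
            continue
        try:
            codes = ast.literal_eval(str(row[missing_col]))  # e.g. "[-1, 0]" -> [-1, 0]
        except (ValueError, SyntaxError):
            continue
        df[col] = df[col].replace(codes, np.nan)
    return df
```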
###Markdown
Since the next step involves dropping columns missing data over a threshold, it is important to check if there is a column match between azdias and customers before and after the cleanup process. There is a chance that some columns are missing too much data in one dataframe and get dropped while they are abundant in the other, causing a discrepancy in the shape between the 2 dataframes. It is always hard to define a threshold on how much missing data is too much; my first approach will consider over 30% too much. Based on model performance this is an idea to revisit and adjust
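`balance_checker` (also from `utils.py`) only needs to confirm that the two dataframes still share the same set of columns. A minimal sketch, assuming that is all it does, could be:

```python
def balance_checker(df_a, df_b):
    """Sketch: report whether two dataframes have identical column sets."""
    balanced = set(df_a.columns) == set(df_b.columns)
    print('Feature balance between dfs?: ' + str(balanced))
    return balanced
```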
###Code
balance_checker(azdias, customers)
###Output
Feature balance between dfs?: True
###Markdown
Prior to cleanup customers and azdias match
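The next cells use `percentage_of_missing` from `utils.py`. Judging from how its result is consumed below (a frame with 'column_name' and 'percent_missing' columns), a sketch of it might be:

```python
import pandas as pd

def percentage_of_missing(df):
    """Sketch: per-column percentage of missing values."""
    percent_missing = df.isnull().sum() * 100 / len(df)
    return pd.DataFrame({'column_name': df.columns, 'percent_missing': percent_missing.values})
```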
###Code
percent_missing_azdias_df = percentage_of_missing(azdias)
percent_missing_azdias_pc_df = percentage_of_missing(azdias_pre_cleanup)
percent_missing_customers_df = percentage_of_missing(customers)
percent_missing_customers_pc_df = percentage_of_missing(customers_pre_cleanup)
print('Identified missing data in Azdias: ')
print('Pre-cleanup: ' + str(azdias_pre_cleanup.isnull().sum().sum()) + ' Post_cleanup: ' + str(azdias.isnull().sum().sum()))
print('Identified missing data in Customers: ')
print('Pre-cleanup: ' + str(customers_pre_cleanup.isnull().sum().sum()) + ' Post_cleanup: ' + str(customers.isnull().sum().sum()))
print('Azdias columns not missing values(percentage):')
print('Pre-cleanup: ', (percent_missing_azdias_pc_df['percent_missing'] == 0.0).sum())
print('Post-cleanup: ', (percent_missing_azdias_df['percent_missing'] == 0.0).sum())
print('Customers columns not missing values(percentage):')
print('Pre-cleanup: ', (percent_missing_customers_pc_df['percent_missing'] == 0.0).sum())
print('Post-cleanup: ', (percent_missing_customers_df['percent_missing'] == 0.0).sum())
###Output
Azdias columns not missing values(percentage):
Pre-cleanup: 93
Post-cleanup: 88
Customers columns not missing values(percentage):
Pre-cleanup: 93
Post-cleanup: 88
###Markdown
Deciding on what data to maintain based on the percentage missing
###Code
# missing more or less than 30% of the data
azdias_missing_over_30 = split_on_percentage(percent_missing_azdias_df, 30, '>')
azdias_missing_less_30 = split_on_percentage(percent_missing_azdias_df, 30, '<=')
customers_missing_over_30 = split_on_percentage(percent_missing_customers_df, 30, '>')
customers_missing_less_30 = split_on_percentage(percent_missing_customers_df, 30, '<=')
#plotting select features and their missing data percentages
figure, axes = plt.subplots(4, 1, figsize = (15,15), squeeze = False)
azdias_missing_over_30.sort_values(by = 'percent_missing', ascending = False).plot(kind = 'bar', x = 'column_name', y = 'percent_missing',
ax = axes[0][0], color = 'red', title = 'Azdias percentage of missing values over 30%' )
#due to the sheer amount of data points to be plotted this does not make an appealing vis so I will restrict
#the number of plotted points to 40
azdias_missing_less_30.sort_values(by = 'percent_missing', ascending = False)[:40].plot(kind = 'bar', x = 'column_name', y = 'percent_missing',
ax = axes[1][0], title = 'Azdias percentage of missing values less 30%' )
customers_missing_over_30.sort_values(by = 'percent_missing', ascending = False).plot(kind = 'bar', x = 'column_name', y = 'percent_missing',
ax = axes[2][0], color = 'red', title = 'Customers percentage of missing values over 30%' )
#due to the sheer amount of data points to be plotted this does not make an appealing vis so I will restrict
#the number of plotted points to 40
customers_missing_less_30.sort_values(by = 'percent_missing', ascending = False)[:40].plot(kind = 'bar', x = 'column_name', y = 'percent_missing',
ax = axes[3][0], title = 'Customers percentage of missing values less 30%' )
plt.tight_layout()
plt.show()
azdias['AKT_DAT_KL'].isnull().sum()*100/len(azdias['AKT_DAT_KL'])
###Output
_____no_output_____
###Markdown
The vast majority of the columns with missing values are missing less than 30% of their data. Based on this information I will remove columns with more than 30% missing values
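The `columns_to_delete` helper used below is defined earlier in the notebook; a minimal hypothetical equivalent, assuming the percent-missing dataframes carry the `column_name` and `percent_missing` columns used in the plots above, could look like this:
###Code
# Hypothetical sketch (not the notebook's columns_to_delete helper): list the columns
# whose percentage of missing values exceeds a given threshold.
def columns_over_threshold(percent_df, threshold=30):
    over = percent_df[percent_df['percent_missing'] > threshold]
    return list(over['column_name'])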
###Code
#extracting column names with more than 30% values missing so we can drop them from azdias df
azdias_col_delete = columns_to_delete(azdias_missing_over_30)
#extracting column names with more than 30% values missing so we can drop them from customers df
customers_col_delete = columns_to_delete(customers_missing_over_30)
#dropping the columns identified in the previous lists
azdias = azdias.drop(azdias_col_delete, axis = 1)
customers = customers.drop(customers_col_delete, axis = 1)
###Output
_____no_output_____
###Markdown
Now that we dropped columns missing more than 30% of their data let's check if we should also drop rows based on a particular threshold
###Code
#plotting distribution of null values
row_hist(azdias, customers, 30)
###Output
_____no_output_____
###Markdown
Based on this visualization we deduce 2 things: - most of the rows are missing information in fewer than 50 columns - both customers and azdias probably have overlapping rows in which info corresponding to over 200 columns is missing
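The `row_dropper` helper used below is defined earlier in the notebook; a minimal hypothetical equivalent that drops rows missing values in more than a given number of columns could look like this:
###Code
# Hypothetical sketch (not the notebook's row_dropper helper): keep only the rows that
# have missing values in at most `max_missing` columns.
def drop_sparse_rows(df, max_missing=50):
    return df[df.isnull().sum(axis=1) <= max_missing]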
###Code
#deleting rows based on the information acquired in the previous histogram
azdias = row_dropper(azdias, 50)
customers = row_dropper(customers, 50)
#plotting null values distribution after cleanup
row_hist(azdias, customers, 30)
balance_checker(azdias, customers)
azdias.shape
customers.shape
###Output
_____no_output_____
###Markdown
Based on this information the azdias df has a few extra columns when compared to customers:- 'KBA13_SEG_WOHNMOBILE', 'ORTSGR_KLS9', 'KBA13_SEG_SPORTWAGEN', 'KBA13_SEG_OBERKLASSE'- These columns refer to information on the type of car individuals ownThe customers dataframe has a column not present in azdias: - 'AKT_DAT_KL'So to finalize this step I will drop these columns
###Code
azdias = azdias.drop(['KBA13_SEG_WOHNMOBILE', 'ORTSGR_KLS9', 'KBA13_SEG_SPORTWAGEN', 'KBA13_SEG_OBERKLASSE'], inplace=False, axis=1)
customers = customers.drop(['AKT_DAT_KL'], inplace=False, axis=1)
balance_checker(azdias, customers)
###Output
_____no_output_____
###Markdown
Feature Encoding As previously checked using the categorical_checker, there are many features in need of re-encoding for the unsupervised learning portion: - numerical features will be kept as is- ordinal features will be kept as is- categorical features and mixed type features will have to be re-encoded
###Code
#checking for mixed type features
attributes[attributes.type == 'mixed']
#retrieve a list of categorical features for future encoding
cats = attributes[attributes.type == 'categorical']
list(cats['attribute'])
###Output
_____no_output_____
###Markdown
At this point I already dealt with the CAMEO_INTL_2015 column by converting XX to nan. PRAEGENDE_JUGENDJAHRE has 3 dimensions: generation decade, whether people are mainstream or avant-garde, and whether they are from east or west; I will create new features out of this particular column. LP_LEBENSPHASE_GROB seems to encode the same information as the CAMEO column and it is divided between gross (grob) and fine (fein)
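The `special_feature_handler` helper used below is defined earlier in the notebook; a minimal hypothetical sketch of the mainstream/avant-garde split of PRAEGENDE_JUGENDJAHRE (the value-to-movement mapping follows the DIAS attribute documentation) could look like this:
###Code
# Hypothetical sketch (not the notebook's special_feature_handler): derive a MOVEMENT
# feature from PRAEGENDE_JUGENDJAHRE (1.0 = mainstream, 2.0 = avant-garde).
def add_movement(df):
    mainstream = [1.0, 3.0, 5.0, 8.0, 10.0, 12.0, 14.0]
    avantgarde = [2.0, 4.0, 6.0, 7.0, 9.0, 11.0, 13.0, 15.0]
    df.loc[df['PRAEGENDE_JUGENDJAHRE'].isin(mainstream), 'MOVEMENT'] = 1.0
    df.loc[df['PRAEGENDE_JUGENDJAHRE'].isin(avantgarde), 'MOVEMENT'] = 2.0
    return df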
###Code
azdias = special_feature_handler(azdias)
customers = special_feature_handler(customers)
azdias.TITEL_KZ.unique()
azdias.select_dtypes('object').head()
###Output
_____no_output_____
###Markdown
Feature engineering Based on the previous exploration there are a few features that are good candidates for novel feature creation
###Code
azdias_eng = azdias.copy()
customers_eng = customers.copy()
feat_eng(azdias_eng)
feat_eng(customers_eng)
azdias_eng.shape
customers_eng.shape
azdias.TITEL_KZ.unique()
###Output
_____no_output_____
###Markdown
Now that I am done with creating new features and dealing with the most obvious columns I need to encode the remaining categorical features. Considering this post: https://stats.stackexchange.com/questions/224051/one-hot-vs-dummy-encoding-in-scikit-learn there are advantages and drawbacks to choosing one-hot encoding vs dummy encoding. There are also concerns about using dummies altogether https://towardsdatascience.com/one-hot-encoding-is-making-your-tree-based-ensembles-worse-heres-why-d64b282b5769 so I will keep this in mind while moving forward. For now I will go with dummy creation
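On a toy column (hypothetical data), the difference between the two encodings boils down to whether one level is dropped as the reference category:
###Code
# Illustration on hypothetical toy data: one-hot keeps one indicator per level,
# dummy encoding drops the first level as the reference category.
toy = pd.DataFrame({'color': ['red', 'green', 'blue']})
one_hot = pd.get_dummies(toy, columns=['color'])                   # 3 indicator columns
dummies = pd.get_dummies(toy, columns=['color'], drop_first=True)  # 2 indicator columns
print(one_hot.columns.tolist(), dummies.columns.tolist())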
###Code
#finally I will encode all the features that are left
cat_features = ['AGER_TYP','ANREDE_KZ','CAMEO_DEU_2015','CAMEO_DEUG_2015','CJT_GESAMTTYP','D19_BANKEN_DATUM','D19_BANKEN_OFFLINE_DATUM',
'D19_BANKEN_ONLINE_DATUM','D19_GESAMT_DATUM','D19_GESAMT_OFFLINE_DATUM','D19_GESAMT_ONLINE_DATUM','D19_KONSUMTYP',
'D19_TELKO_DATUM','D19_TELKO_OFFLINE_DATUM','D19_TELKO_ONLINE_DATUM','D19_VERSAND_DATUM','D19_VERSAND_OFFLINE_DATUM','D19_VERSAND_ONLINE_DATUM',
'D19_VERSI_DATUM','D19_VERSI_OFFLINE_DATUM','D19_VERSI_ONLINE_DATUM','FINANZTYP','GEBAEUDETYP',
'GFK_URLAUBERTYP','GREEN_AVANTGARDE','KBA05_BAUMAX','LP_FAMILIE_FEIN',
'LP_FAMILIE_GROB','LP_STATUS_FEIN','LP_STATUS_GROB','NATIONALITAET_KZ','OST_WEST_KZ','PLZ8_BAUMAX',
'SHOPPER_TYP','SOHO_KZ','TITEL_KZ','VERS_TYP','ZABEOTYP']
azdias_ohe = pd.get_dummies(azdias_eng, columns = cat_features)
customers_ohe = pd.get_dummies(customers_eng, columns = cat_features)
azdias_ohe.shape
customers_ohe.shape
balance_checker(azdias_ohe, customers_ohe)
###Output
_____no_output_____
###Markdown
Feature scaling Before moving on to dimensionality reduction I need to apply feature scaling, so that the principal component vectors are not dominated by features whose values naturally span larger ranges
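The `feature_scaling` helper used below is defined earlier in the notebook; a minimal hypothetical equivalent, assuming the dataframe contains no remaining missing values, could look like this:
###Code
# Hypothetical sketch (not the notebook's feature_scaling helper): scale a dataframe
# with the requested scikit-learn scaler and return a dataframe with the same columns.
from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler

def scale_features(df, scaler_name='StandardScaler'):
    scalers = {'StandardScaler': StandardScaler(),
               'RobustScaler': RobustScaler(),
               'MinMaxScaler': MinMaxScaler()}
    scaler = scalers[scaler_name]
    return pd.DataFrame(scaler.fit_transform(df), columns=df.columns, index=df.index)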
###Code
#dataframes using StandardScaler
azdias_SS = feature_scaling(azdias_ohe, 'StandardScaler')
customers_SS = feature_scaling(customers_ohe, 'StandardScaler')
#dataframes using RobustScaler
azdias_RS = feature_scaling(azdias_ohe, 'RobustScaler')
customers_RS = feature_scaling(customers_ohe, 'RobustScaler')
#dataframes using MinMaxScaler
azdias_MMS = feature_scaling(azdias_ohe, 'MinMaxScaler')
customers_MMS = feature_scaling(customers_ohe, 'MinMaxScaler')
###Output
_____no_output_____
###Markdown
Dimensionality Reduction Finally I will use PCA (a linear technique) to select only the components that seem to be most impactful
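The `pca_model` helper used below is defined earlier in the notebook; a minimal hypothetical equivalent that fits a PCA with a chosen number of components on an already-scaled dataframe could look like this:
###Code
# Hypothetical sketch (not the notebook's pca_model helper): fit a PCA with n components
# and return the fitted object so its explained variance ratios can be inspected.
from sklearn.decomposition import PCA

def fit_pca(scaled_df, n_components):
    pca = PCA(n_components=n_components)
    pca.fit(scaled_df)
    return pca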
###Code
components_list_azdias = azdias_SS.columns.values
n_components_azdias = len(components_list_azdias)
components_list_customers = customers_SS.columns.values
n_components_customers = len(components_list_customers)
azdias_SS_pca = pca_model(azdias_SS, n_components_azdias)
customers_SS_pca = pca_model(customers_SS, n_components_customers)
azdias_RS_pca = pca_model(azdias_RS, n_components_azdias)
customers_RS_pca = pca_model(customers_RS, n_components_customers)
azdias_MMS_pca = pca_model(azdias_MMS, n_components_azdias)
customers_MMS_pca = pca_model(customers_MMS, n_components_customers)
scree_plots(azdias_SS_pca, azdias_RS_pca, azdias_MMS_pca, ' azdias')
scree_plots(customers_SS_pca, customers_RS_pca, customers_MMS_pca, ' customers')
###Output
_____no_output_____
###Markdown
Each principal component is a directional vector pointing in the direction of highest variance. The further a weight is from 0, the more the component points toward the corresponding feature.
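The `interpret_pca` and `display_interesting_features` helpers used below are defined earlier in the notebook; a minimal hypothetical sketch of mapping one component's weights back to feature names could look like this:
###Code
# Hypothetical sketch (not the notebook's interpret_pca helper): pair the weights of one
# principal component with the column names and sort them, so the strongest positive and
# negative contributors appear at the two ends of the series.
def component_weights(scaled_df, fitted_pca, dimension=0):
    weights = pd.Series(fitted_pca.components_[dimension], index=scaled_df.columns)
    return weights.sort_values(ascending=False)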
###Code
first_dimension = interpret_pca(azdias_SS, n_components_azdias, 1)
first_dimension
display_interesting_features(azdias_SS, azdias_SS_pca, 0)
display_interesting_features(azdias_RS, azdias_RS_pca, 0)
display_interesting_features(azdias_MMS, azdias_MMS_pca, 0)
###Output
_____no_output_____
###Markdown
On this first dimension most of the information seems to be related to household size, purchase power and types of purchases. Based on these plots:- using standard scaler with 300 principal components 90% of the original variance can be represented- using robust scaler with about 150 components we represent 90% of the original variance - using minmax scaler with 250 components we represent 90% of the original variance Moving on I will pick the robust scaler PCA and re-fit it with a number of components that explains over 80% of the variance
###Code
azdias_pca_refit = pca_model(azdias_RS, 110)
explained_variance = azdias_pca_refit.explained_variance_ratio_.sum()
explained_variance
###Output
_____no_output_____
###Markdown
Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. After a lot of data pre-processing we are finally getting to the analysis; I will start by applying KMeans to find relevant clusters. Now that I have reduced the number of components to use, it is important to select the number of clusters to aim for with kmeans
###Code
pca = PCA(110)
azdias_pca_110 = pca.fit_transform(azdias_RS)
def fit_kmeans(data, center):
'''
returns the kmeans score regarding SSE for points to centers
INPUT:
data - the dataset you want to fit kmeans to
center - the number of centers you want (the k value)
OUTPUT:
score - the SSE score for the kmeans model fit to the data
'''
kmeans = KMeans(n_clusters = center)
model = kmeans.fit(data)
# SSE score for kmeans model
score = np.abs(model.score(data))
return score
###Output
_____no_output_____
###Markdown
The elbow method (https://bl.ocks.org/rpgove/0060ff3b656618e9136b) is a way to validate the optimal number of clusters to use for a particular dataset. Training on this dataset can take some time, so settling on a good number of clusters up front means that fewer resources are used.
###Code
scores = []
centers = list(range(1,20))
for center in centers:
print('score appended')
scores.append(fit_kmeans(azdias_pca_110, center))
# Investigate the change in within-cluster distance across number of clusters.
# Plot the original data with clusters
plt.plot(centers, scores, linestyle='--', marker='o', color='b')
plt.ylabel('SSE score')
plt.xlabel('K')
plt.title('SSE vs K')
#Using a regression to determine where it is a good cluster number to divide the population (when the gradient decreases)
l_reg = LinearRegression()
l_reg.fit(X=np.asarray([[9,10,11,12,13,14]]).reshape(6,1), y=scores[8:14])
predicted = l_reg.predict(np.asarray(range(2,9)).reshape(-1,1))
plt.plot(list(range(2,20)),np.asarray(list(predicted.reshape(-1,1)) + list(scores[8:20])),'r')
###Output
_____no_output_____
###Markdown
Based on the plot 9 clusters should be enough to proceed with the kmeans training
###Code
# refitting using just 9 clusters
kmeans = KMeans(9)
kmodel = kmeans.fit(azdias_pca_110)
#and now we can compare the customer data to the general demographics
customers_RS.shape
azdias_RS.shape
azdias_kmeans = kmodel.predict(pca.transform(azdias_RS))
customers_kmeans = kmodel.predict(pca.transform(customers_RS))
###Output
_____no_output_____
###Markdown
Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld.
###Code
mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';')
###Output
_____no_output_____
###Markdown
Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep.
###Code
mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';')
###Output
_____no_output_____
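###Markdown
The submission format described above (an "LNR" id column and a "RESPONSE" likelihood column) can be assembled with a small helper; a minimal hypothetical sketch, assuming a fitted classifier exposing `predict_proba` and an already preprocessed test frame aligned with the training features:
###Code
# Hypothetical sketch of building the Kaggle submission file described above; the
# arguments are placeholders for a fitted classifier, the preprocessed test features,
# and the untouched LNR id column from the raw test file.
def build_submission(fitted_model, test_features, test_ids, path='submission.csv'):
    probs = fitted_model.predict_proba(test_features)[:, 1]
    submission = pd.DataFrame({'LNR': test_ids, 'RESPONSE': probs})
    submission.to_csv(path, index=False)
    return submission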
###Markdown
Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings.
###Code
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import progressbar
from ast import literal_eval
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
# magic word for producing visualizations in notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them.
###Code
# load in the data
#azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';')
#customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';')
# Be sure to add in a lot more cells (both markdown and code) to document your
# approach and findings!
azdias = pd.read_csv('data/azdias.csv')
# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
print(azdias.shape)
azdias.head()
azdias.info()
azdias.describe()
###Output
_____no_output_____
###Markdown
Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Step 1: Preprocessing Step 1.1: Assess Missing DataThe feature summary file from the previous project contains a summary of properties for each demographics data column. The additional columns found in this project will be assessed manually to complete the rest of the properties. The file `DIAS Attributes - Values 2017.xlsx` will be studied for this assessment. This will help make cleaning decisions during this stage of the project.
###Code
# Load in the feature summary file.
feat_info = pd.read_csv('data/feat_info.csv')
print(feat_info.shape)
feat_info.head()
feat_info.info(verbose=True)
# Set attribute as index for `feat_info` dataframe
feat_info.set_index('attribute', inplace=True)
# Delete columns found in the features not found in the general population dataframe
feat_extra = np.setdiff1d(feat_info.index, azdias.columns, assume_unique=True)
feat_info.drop(feat_extra, inplace=True)
feat_info.shape
# Check remaining missing columns in the feature summary file
feat_missing = np.setdiff1d(azdias.columns, feat_info.index, assume_unique=True)
print('There are {} missing features.'.format(len(feat_missing)))
feat_missing
# Create new dataframe of missing features
feat_missing = pd.DataFrame(feat_missing, columns=['attribute'])
feat_missing['information_level'] = np.NaN
feat_missing['type'] = np.NaN
feat_missing['missing_or_unknown'] = '[]'
feat_missing.set_index('attribute', inplace=True)
print('There are {} rows and {} columns.'.format(feat_missing.shape[0], feat_missing.shape[1]))
feat_missing.head()
###Output
There are 0 rows and 3 columns.
###Markdown
Step 1.1.1: Convert Missing Value Codes to NaNsFor simplicity, it will be assumed that there are no missing values in the `feat_missing` columns. Some columns in the `feat_info` dataframe also have no missing value codes, so this assumption makes sense. The column type will be examined later once an assessment of missing data is completed.
###Code
# Identify missing or unknown data values and convert them to NaNs
# Check natural NaNs
azdias.isnull().sum()
# Add dataframe of missing features to the original features information dataframe
feat_info = feat_info.append(feat_missing)
print('There are {} rows and {} columns.'.format(feat_info.shape[0], feat_info.shape[1]))
feat_info.head()
# This should be null
np.setdiff1d(azdias.columns, feat_info.index, assume_unique=True)
feat_info.missing_or_unknown.value_counts()
# Prepare array of strings containing X for parsing
feat_info.missing_or_unknown = feat_info.missing_or_unknown.replace('[-1,XX]', "[-1,'XX']")
feat_info.missing_or_unknown = feat_info.missing_or_unknown.replace('[XX]', "['XX']")
feat_info.missing_or_unknown = feat_info.missing_or_unknown.replace('[-1,X]', "[-1,'X']")
# Convert string representation to a list
feat_info.missing_or_unknown = feat_info.missing_or_unknown.apply(literal_eval)
# Check for conversion
feat_info['missing_or_unknown'].head()
# Finally convert missing value codes to NaNs
cnter = 0
bar = progressbar.ProgressBar(maxval=azdias.shape[1]+1, widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
for column in azdias:
cnter+=1
bar.update(cnter)
mask = azdias[column].isin(feat_info.at[column, 'missing_or_unknown'])
azdias.at[mask, column] = np.NaN
bar.finish()
# Visually check for conversion
azdias.isnull().sum()
azdias.describe()
###Output
_____no_output_____
###Markdown
Step 1.1.2: Assess Missing Data in Each ColumnThere will be few columns that are outliers in terms of the proportion of values that are missing. Matplotlib's hist() function will be used to visualize the distribution of missing value counts to find these columns. For simplicity, these columns will be removed from the dataframe.
###Code
# Perform an assessment of how much missing data there is in each column of the
# dataset.
missing_col = azdias.isnull().sum()/azdias.shape[0]
missing_col.hist()
plt.xlabel('Proportion')
plt.ylabel('Count')
plt.title('Distribution of All Missing Values per Column');
# Investigate patterns in the amount of missing data in each column.
missing_col_sub = missing_col[missing_col <= 0.3]
missing_col_sub.hist()
plt.xlabel('Proportion')
plt.ylabel('Count')
plt.title('Distribution of Missing Values per Column Less than or Equal to 30%');
# Remove the outlier columns from the dataset. (Other data engineering
# tasks such as re-encoding and imputation will be done later.)
col_outlier = missing_col[missing_col > 0.3].index
azdias_sub = azdias.drop(col_outlier, axis=1)
# Show column outliers
col_outlier
# Remove the outlier attributes from `feat_info`
feat_info_new = feat_info[feat_info.index.isin(col_outlier) == False]
feat_info_new.shape
###Output
_____no_output_____
###Markdown
Discussion 1.1.2: Assess Missing Data in Each ColumnThe distribution of the amount of missing data is skewed to the right. Most of the columns have less than or equal to 30% missing data. For the columns less than or equal to 30%, there are two distinct patterns. The first pattern is an almost single bar for no missing values. The second pattern has a slightly normal distribution for missing values less than 30% but not equal to 0. Columns which are related have missing values with almost similar proportions.The columns with missing values greater than 30% are:1. `AGER_TYP` - Best-ager typology2. `ALTER_HH` - Main age within the household3. `ALTER_KIND1` - No information4. `ALTER_KIND2` - No information5. `ALTER_KIND3` - No information6. `ALTER_KIND4` - No information7. `EXTSEL992` - No information8. `GEBURTSJAHR` - Year of birth9. `TITEL_KZ` - Academic title flag10. `KK_KUNDENTYP` - Consumer pattern over past 12 months11. `KBA05_BAUMAX` - Most common building within the microcell Step 1.2: Select and Re-Encode FeaturesChecking for missing data isn't the only way to prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, a few encoding changes or additional assumptions will be made. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. The third column of the feature summary (feat_info) will be checked for a summary of types of measurement.* As mentioned earlier, there are some column types which are unknown. They must be identified first before starting the other steps.* For numeric and interval data, these features can be kept without changes.* Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).* Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'.In the first part of this sub-step, the columns which have unknown types will be examined and assumptions will be made. Next, the categorical and mixed-type features will be investigated to decide whether they will be kept, dropped, or re-encoded. Finally, in the last part, a new data frame will be created with only the selected and engineered columns.
###Code
# How many features are there of each data type?
print("There are {} columns which have missing data type.".format(feat_info_new['type'].isnull().sum()))
feat_info_new['type'].value_counts()
for col in feat_info_new[feat_info_new['type'].isnull()].index:
print(azdias_sub[col].value_counts())
drop_feat = ['LNR', 'EINGEFUEGT_AM', 'VERDICHTUNGSRAUM']
feat_numeric = ['ANZ_STATISTISCHE_HAUSHALTE']
feat_ordinal = ['AKT_DAT_KL',
'ANZ_KINDER',
'CJT_KATALOGNUTZER',
'CJT_TYP_1',
'CJT_TYP_2',
'CJT_TYP_3',
'CJT_TYP_4',
'CJT_TYP_5',
'CJT_TYP_6',
'D19_KONSUMTYP_MAX',
'D19_SOZIALES',
'D19_TELKO_ONLINE_QUOTE_12',
'D19_VERSI_DATUM',
'D19_VERSI_OFFLINE_DATUM',
'D19_VERSI_ONLINE_DATUM',
'D19_VERSI_ONLINE_QUOTE_12',
'DSL_FLAG',
'FIRMENDICHTE',
'HH_DELTA_FLAG',
'KBA13_ANTG1',
'KBA13_ANTG2',
'KBA13_ANTG3',
'KBA13_ANTG4',
'KBA13_BAUMAX',
'KBA13_GBZ',
'KBA13_HHZ',
'KBA13_KMH_210',
'KOMBIALTER',
'KONSUMZELLE',
'MOBI_RASTER',
'RT_KEIN_ANREIZ',
'RT_SCHNAEPPCHEN',
'RT_UEBERGROESSE',
'UMFELD_ALT',
'UMFELD_JUNG',
'UNGLEICHENN_FLAG',
'VHA',
'VHN',
'VK_DHT4A',
'VK_DISTANZ',
'VK_ZG11']
feat_categorical = ['ALTERSKATEGORIE_FEIN',
'D19_LETZTER_KAUF_BRANCHE',
'EINGEZOGENAM_HH_JAHR',
'GEMEINDETYP',
'STRUKTURTYP']
###Output
_____no_output_____
###Markdown
There are three columns which will be dropped. These are also not in the `DIAS Attributes - Values 2017 Updated.xls` file so the straightforward solution is to drop them.1. `LNR` - This is unique for each row so it is not useful for machine learning.2. `EINGEFUEGT_AM` - This can be categorical but there are many levels which may not be captured accurately.3. `VERDICHTUNGSRAUM` - This can also be categorical but there are many levels too.
###Code
# Drop the three features
azdias_sub = azdias_sub.drop(drop_feat, axis=1)
feat_info_new = feat_info_new.drop(drop_feat, axis=0)
azdias_sub.shape[1], feat_info_new.shape[0] # should be the same
# Fill the missing feature types
feat_info_new.loc[feat_numeric , 'type'] = 'numeric'
feat_info_new.loc[feat_ordinal, 'type'] = 'ordinal'
feat_info_new.loc[feat_categorical, 'type'] = 'categorical'
# How many features are there of each data type?
print("There are {} columns which have missing data type.".format(feat_info_new['type'].isnull().sum()))
feat_info_new['type'].value_counts()
###Output
There are 0 columns which have missing data type.
###Markdown
Step 1.2.1: Re-Encode Categorical FeaturesFor categorical data, levels will be encoded as dummy variables. Depending on the number of categories, one of the following will be performed* For binary (two-level) categoricals that take numeric values, they will be kept without needing to do anything.* There is one binary variable that takes on non-numeric values. For this one, the values will be re-encoded as numbers.* For multi-level categoricals (three or more values), the values will be encoded using multiple dummy variables.
###Code
# Assess categorical variables: which are binary, which are multi-level, and
# which one needs to be re-encoded?
categorical = feat_info_new[feat_info_new['type'] == 'categorical']
# Check categorical variable whether it is binary or multi-level
binary = []
multi_level = []
for att in categorical.index:
if len(azdias_sub[att].value_counts()) == 2:
binary.append(att)
else:
multi_level.append(att)
binary
multi_level
# Re-encode the values as numbers for `OST_WEST_KZ`
azdias_sub['OST_WEST_KZ'] = azdias_sub['OST_WEST_KZ'].replace({'O':1.0, 'W':2.0})
azdias_sub['OST_WEST_KZ'].value_counts()
# Re-encode categorical variable(s) to be kept in the analysis.
azdias_enc = pd.get_dummies(azdias_sub, columns=multi_level)
###Output
_____no_output_____
###Markdown
Discussion 1.2.1: Re-Encode Categorical FeaturesThe binary column which had non-numeric values was re-encoded as the values 1.0 and 2.0. The rest of the categorical features were re-encoded using one-hot-encoding through the use of the pd.get_dummies function from pandas. The number of columns increased from 355 to 576 due to the additional columns created by the one-hot-encoding step. Step 1.2.2: Engineer Mixed-Type FeaturesThere are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis.* `PRAEGENDE_JUGENDJAHRE` - combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, two new variables will be created to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.* `CAMEO_INTL_2015` - combines information on two axes: wealth and life stage. The two-digit codes will be broken by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values).
###Code
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
# Engineer new column `DECADE`
azdias_enc['DECADE'] = azdias_enc['PRAEGENDE_JUGENDJAHRE']
# Engineer new column `MOVEMENT`
mainstream = [1.0, 3.0, 5.0, 8.0, 10.0, 12.0, 14.0]
avantgarde = [2.0, 4.0, 6.0, 7.0, 9.0, 11.0, 13.0, 15.0]
main = azdias_enc['PRAEGENDE_JUGENDJAHRE'].isin(mainstream)
azdias_enc.loc[main, 'MOVEMENT'] = 1.0
avant = azdias_enc['PRAEGENDE_JUGENDJAHRE'].isin(avantgarde)
azdias_enc.loc[avant, 'MOVEMENT'] = 2.0
azdias_enc['MOVEMENT'].value_counts()
# Investigate "CAMEO_INTL_2015" and engineer two new variables.
# Engineer new column 'WEALTH'
azdias_enc['WEALTH'] = azdias_enc['CAMEO_INTL_2015'] // 10
# Engineer new column 'LIFE_STAGE'
azdias_enc['LIFE_STAGE'] = azdias_enc['CAMEO_INTL_2015'] % 10
# Check other categories
mixed = feat_info_new[feat_info_new['type'] == 'mixed']
mixed
# Re-encode `LP_LEBENSPHASE_GROB` and `WOHNLAGE` to be kept in the analysis.
azdias_engg = pd.get_dummies(azdias_enc, columns=['LP_LEBENSPHASE_GROB', 'WOHNLAGE'])
###Output
_____no_output_____
###Markdown
Discussion 1.2.2: Engineer Mixed-Type FeaturesThere were multiple ways to deal with the mixed-type features. The details are described below:1. `LP_LEBENSPHASE_FEIN` - This was dropped because it had too many levels which might not be captured accurately.2. `LP_LEBENSPHASE_GROB` - This can be treated as categorical so it was re-encoded using one-hot-encoding.3. `PRAEGENDE_JUGENDJAHRE` - This was engineered using the procedures outlined above.4. `WOHNLAGE` - This can be treated as categorical so it was re-encoded using one-hot-encoding.5. `CAMEO_INTL_2015` - This was engineered using the procedures outlined above.6. `PLZ8_BAUMAX` - This can be treated as interval so nothing was done. Step 1.2.3: Complete Feature SelectionIn order to finish this step up, the dataframe should consist of the following:* All numeric, interval, and ordinal type columns from the original dataset.* Binary categorical features (all numerically-encoded).* Engineered features from other multi-level categorical features and mixed features.
###Code
# Do whatever you need to in order to ensure that the dataframe only contains
# the columns that should be passed to the algorithm functions.
azdias_clean = azdias_engg.drop(['PRAEGENDE_JUGENDJAHRE', 'CAMEO_INTL_2015', 'LP_LEBENSPHASE_FEIN'], axis=1)
azdias_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891221 entries, 0 to 891220
Columns: 595 entries, AKT_DAT_KL to WOHNLAGE_8.0
dtypes: float64(332), uint8(263)
memory usage: 2.4 GB
###Markdown
Step 1.3: Create Cleaning Function
###Code
def clean_data(df):
"""
Perform feature trimming, re-encoding, and engineering for demographics
data
INPUT: Demographics DataFrame
OUTPUT: Trimmed and cleaned demographics DataFrame
"""
# Put in code here to execute all main cleaning steps:
# convert missing value codes into NaNs, ...
for column in df:
if column == 'RESPONSE':
pass
else:
mask = df[column].isin(feat_info.loc[column, 'missing_or_unknown'])
df.loc[mask, column] = np.NaN
# remove selected columns and rows, ...
col_outlier = ['AGER_TYP',
'ALTER_HH',
'ALTER_KIND1',
'ALTER_KIND2',
'ALTER_KIND3',
'ALTER_KIND4',
'EXTSEL992',
'GEBURTSJAHR',
'TITEL_KZ',
'KK_KUNDENTYP',
'KBA05_BAUMAX']
df_sub = df.drop(col_outlier, axis=1)
# drop additional columns
df_sub = df_sub.drop(['LNR', 'EINGEFUEGT_AM', 'VERDICHTUNGSRAUM'], axis=1)
# select, re-encode, and engineer column values.
df_sub['OST_WEST_KZ'] = df_sub['OST_WEST_KZ'].replace({'O':1.0, 'W':2.0})
# change to float for the other three datasets
df_sub['CAMEO_DEUG_2015'] = df_sub['CAMEO_DEUG_2015'].astype('float')
categorical = ['CJT_GESAMTTYP',
'FINANZTYP',
'GFK_URLAUBERTYP',
'LP_FAMILIE_FEIN',
'LP_FAMILIE_GROB',
'LP_STATUS_FEIN',
'LP_STATUS_GROB',
'NATIONALITAET_KZ',
'SHOPPER_TYP',
'ZABEOTYP',
'GEBAEUDETYP',
'CAMEO_DEUG_2015',
'CAMEO_DEU_2015',
'D19_KONSUMTYP',
'ALTERSKATEGORIE_FEIN',
'D19_LETZTER_KAUF_BRANCHE',
'EINGEZOGENAM_HH_JAHR',
'GEMEINDETYP',
'STRUKTURTYP',
'LP_LEBENSPHASE_GROB',
'WOHNLAGE']
df_enc = pd.get_dummies(df_sub, columns=categorical)
# Engineer mixed-type features
df_enc['DECADE'] = df_enc['PRAEGENDE_JUGENDJAHRE']
main = df_enc['PRAEGENDE_JUGENDJAHRE'].isin([1.0, 3.0, 5.0, 8.0, 10.0, 12.0, 14.0])
df_enc.loc[main, 'MOVEMENT'] = 1.0
avant = df_enc['PRAEGENDE_JUGENDJAHRE'].isin([2.0, 4.0, 6.0, 7.0, 9.0, 11.0, 13.0, 15.0])
df_enc.loc[avant, 'MOVEMENT'] = 2.0
if df_enc['CAMEO_INTL_2015'].dtype == 'float64':
df_enc['WEALTH'] = df_enc['CAMEO_INTL_2015'] // 10
df_enc['LIFE_STAGE'] = df_enc['CAMEO_INTL_2015'] % 10
else:
df_enc['WEALTH'] = df_enc['CAMEO_INTL_2015'].str[0].astype('float')
df_enc['LIFE_STAGE'] = df_enc['CAMEO_INTL_2015'].str[1].astype('float')
df_clean = df_enc.drop(['PRAEGENDE_JUGENDJAHRE', 'CAMEO_INTL_2015', 'LP_LEBENSPHASE_FEIN'], axis=1)
# Return the cleaned dataframe.
return df_clean
azdias_clone = clean_data(azdias)
# Should be True
azdias_clone.shape == azdias_clean.shape
###Output
_____no_output_____
###Markdown
Step 2: Feature Transformation Step 2.1: Apply Feature ScalingBefore dimensionality reduction techniques are applied to the data, feature scaling must be performed so that the principal component vectors are not influenced by the natural differences in scale for features. * `sklearn` requires that data not have missing values in order for its estimators to work properly. So, before applying the scaler to the data, the DataFrame must be cleaned of the remaining missing values. This was done by applying an Imputer to replace all missing values. * For the actual scaling function, a StandardScaler instance was used, scaling each feature to mean 0 and standard deviation 1.* For these classes, the .fit_transform() method was used to both fit a procedure to the data as well as apply the transformation to the data at the same time.
###Code
# If you've not yet cleaned the dataset of all NaN values, then investigate and
# do that now.
azdias_null = azdias_clean[azdias_clean.isnull().any(axis=1)] # subset of missing data
# Compare shape between `azdias_clean` and its subset of missing data
(azdias_clean.shape, azdias_null.shape)
# Apply `Imputer` to replace all missing values with the mean
imputer = Imputer()
azdias_impute = imputer.fit_transform(azdias_clean)
# Apply feature scaling to the general population demographics data.
scaler = StandardScaler()
azdias_scaled = scaler.fit_transform(azdias_impute)
###Output
_____no_output_____
###Markdown
Discussion 2.1: Apply Feature ScalingFor simplicity, all missing values were replaced by Imputer using the mean along the columns. There are 298,861 rows with missing values in their columns which is roughly 38% of the total number of rows so it is not advisable to drop these.Feature scaling was then applied using StandardScaler instance from sklearn and the .fit_transform() method to fit the procedure to the data and apply the transformation at the same time. The instance returned an array for use in the next step which is PCA. Step 2.2: Perform Dimensionality ReductionOn the scaled data, dimensionality reduction techniques can now be applied.* `sklearn`'s PCA class will be used to apply principal component analysis on the data, thus finding the vectors of maximal variance in the data. To start, at least half the number of features will be set (so there's enough features to see the general trend in variability).* The ratio of variance explained by each principal component as well as the cumulative variance explained will be checked by plotting the cumulative or sequential values using matplotlib's plot() function. Based on the findings, a value for the number of transformed features will be retained for the clustering part of the project.* Once a choice for the number of components to keep has been made, the PCA instance will be re-fit to perform the decided-on transformation.
###Code
# Apply PCA to the data.
n_components = int(azdias_impute.shape[1] / 2) # Half the number of features
pca = PCA(n_components)
azdias_pca = pca.fit_transform(azdias_scaled)
# Investigate the variance accounted for by each principal component.
ind = np.arange(n_components)
vals = pca.explained_variance_ratio_
plt.figure(figsize=(10, 6))
ax = plt.subplot()
cumvals = np.cumsum(vals)
ax.bar(ind, vals)
ax.plot(ind, cumvals)
for i in range(n_components):
ax.xaxis.set_tick_params(width=0)
ax.yaxis.set_tick_params(width=2, length=12)
ax.set_xlabel("Principal Component")
ax.set_ylabel("Variance Explained (%)")
plt.title('Explained Variance Per Principal Component')
# Re-apply PCA to the data while selecting for number of components to retain.
sum(pca.explained_variance_ratio_)
###Output
_____no_output_____
###Markdown
Discussion 2.2: Perform Dimensionality ReductionAs suggested by the procedure, half of the number of features (297 out of 595) was used as the initial number of components for the principal component analysis of the data. As expected, the amount of original variability explained decreased per subsequent component. The 297 components can explain at least 86% of the variability in the original dataset. This is good enough for further analysis and no further re-application is necessary. Step 2.3: Interpret Principal ComponentsNow that the principal components have been transformed, the weight of each variable on the first few components should be checked to see if they can be interpreted in some fashion.Each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one can be expected to be associated with increases in the other. To contrast, features with different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.To investigate the features, each weight should be mapped to their corresponding feature name, then the features should be sorted according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list.
###Code
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
# Dimension indexing
dimensions = ['Dimension {}'.format(i) for i in range(1,len(pca.components_)+1)]
# PCA components
components = pd.DataFrame(np.round(pca.components_, 4), columns = azdias_clean.keys())
components.index = dimensions
components.head(3)
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
components.iloc[0].sort_values(ascending=False)
# Map weights for the second principal component to corresponding feature names
# and then print the linked values, sorted by weight.
components.iloc[1].sort_values(ascending=False)
# Map weights for the third principal component to corresponding feature names
# and then print the linked values, sorted by weight.
components.iloc[2].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Discussion 2.3: Interpret Principal ComponentsFor each principal component or dimension, the top 3 and bottom 3 weights with their corresponding feature names will be investigated for any associations.**Dimension 1:*** `MOBI_REGIO` **(0.1307)** - moving patterns* `PLZ8_ANTG1` **(0.1247)** - number of 1-2 family houses in the PLZ8* `KBA13_ANTG1` **(0.1244)** - no information* `KBA13_ANTG4` **(-0.1206)** - no information* `KBA13_ANTG3` **(-0.1238)** - no information* `PLZ8_ANTG3` **(-0.1241)** - number of 6-10 family houses in the PLZ8**Interpretation:** The first principal component is strongly correlated with none to very low mobilities and high number of 1-2 family houses. `KBA13` is not described in the attributes file but it can be related to car ownership. Higher car owndership and higher number of 5-6 family houses tend to negatively affect this principal component.**Dimension 2:*** `ONLINE_AFFINITAET` **(0.1490)** - online affinity* `DECADE` **(0.1474)** - decade of the dominating movement in the person's youth* `D19_GESAMT_ANZ_24` **(0.1375)** - transaction activity TOTAL POOL in the last 24 months * `CJT_TYP_5` **(-0.1335)** - no information* `D19_GESAMT_ONLINE_DATUM` **(-0.1336)** - actuality of the last transaction with the complete file ONLINE* `CJT_TYP_4` **(-0.1337)** - no information**Interpretation:** The second principal component increases with higher online affinity, more recent decade of the dominating movement, and higher transactions in the last 24 months. Likewise, `CJT_TYP` is also not described in the attributes file but it may be related to the customer journey typology. This principal component decreases with people with older transactions which is expected because the converse is true.**Dimension 3:*** `KBA13_HERST_BMW_BENZ` **(0.1782)** - share of BMW & Mercedes Benz within the PLZ8* `KBA13_SEG_OBEREMITTELKLASSE` **(0.1525)** - share of upper middle class cars and upper class cars* `KBA13_MERCEDES` **(0.1522)** - share of MERCEDES within the PLZ8* `KBA13_SEG_KLEINWAGEN` **(-0.1251)** - share of small and very small cars in the PLZ8* `KBA13_KMH_140_210` **(-0.1255)** - share of cars with max speed between 140 and 210 km/h within the PLZ8* `KBA13_SITZE_5` **(-0.1471)** - number of cars with 5 seats in the PLZ8**Interpretation:** The third principal component is primarily affected by car ownership. People who own a lot of luxury cars such as BWM and Mercedes increase this component. On the other hand, people who own budget cars such as Ford Fiesta decrease this component. Step 3: Clustering Step 3.1 Apply Clustering to General PopulationNow, it's time to see how the data clusters in the principal components space. In this substep, k-means clustering will be applied to the dataset and the average within-cluster distances from each point to their assigned cluster's centroid will be used to decide on a number of clusters to keep.* sklearn's KMeans class will be used to perform k-means clustering on the PCA-transformed data.* Then, the average difference from each point to its assigned cluster's center will be computed.* The above two steps will be performed for a 30 different cluster counts to see how the average distance decreases with an increasing number of clusters. * Once final number of clusters to use is selected, KMeans instance will be re-fit to perform the clustering operation.
###Code
# Over a number of different cluster counts...
scores = []
centers = list(range(1,31))
for center in centers:
# run k-means clustering on the data and...
kmeans = MiniBatchKMeans(n_clusters=center, random_state=28)
model = kmeans.fit(azdias_pca)
# compute the average within-cluster distances.
score = np.abs(model.score(azdias_pca))
scores.append(score)
# Investigate the change in within-cluster distance across number of clusters.
plt.figure(figsize=(10, 6))
ax = plt.subplot()
plt.plot(centers, scores, linestyle='--', marker='o', color='b');
plt.xlabel('K');
plt.xticks(np.arange(1, 31, step=1))
plt.ylabel('SSE');
plt.title('SSE vs. K');
# Re-fit the k-means model with the selected number of clusters and obtain
# cluster predictions for the general population demographics data.
kmeans = KMeans(n_clusters=30, random_state=42, n_jobs=-1)
azdias_preds = kmeans.fit_predict(azdias_pca)
azdias_cluster = pd.DataFrame(np.round(azdias_pca, 4), columns = dimensions)
azdias_cluster.insert(loc=0, column='Cluster', value=azdias_preds)
azdias_cluster.head()
###Output
_____no_output_____
###Markdown
Discussion 3.1 Apply Clustering to General PopulationInstead of using the KMeans class to perform k-means clustering on the PCA-transformed data for a number of different cluster counts, an alternative called MiniBatchKMeans was used. The user guide says that it uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. MiniBatchKMeans converges faster than KMeans, but the quality of the results is reduced. In practice this difference in quality can be quite small. KMeans did not finish after one hour, hence the MiniBatchKMeans variant was used to provide a significantly faster solution.The scree plot shows that the score or the sum of the squared errors (SSE) generally decreased as the number of clusters increased. As the instruction suggested, the maximum number of clusters used was 30. The 'elbow method' is not applicable to the plot because there is no visible leveling observed. Even though 30 clusters did not produce the lowest SSE, it was still used as the number of clusters for the full KMeans clustering operation. Step 3.2 Apply All Steps to the Customer DataNow that the clusters and cluster centers for the general population have been obtained, it's time to see how the customer data maps on to those clusters. The fits from the general population will be used to clean, transform, and cluster the customer data. In the last step, there will be an interpretation of how the general population fits apply to the customer data.
###Code
# This is to set the object type since these two columns are mixed type
dtypes = {"CAMEO_DEUG_2015": object, "CAMEO_INTL_2015": object}
# Load in the customer demographics data.
customers = pd.read_csv('data/customers.csv', dtype=dtypes)
customers_sub = customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], axis=1)
print(customers_sub.shape)
customers_sub.head()
# Apply preprocessing onto the customer data
customers_clean = clean_data(customers_sub)
# Check for missing column in `customers_clean`
missing = list(np.setdiff1d(azdias_clean.columns, customers_clean.columns))
missing
# Add the missing column with default value of 0
for m in missing:
customers_clean[m] = 0
customers_clean[m] = customers_clean[m].astype('uint8')
print(customers_clean.shape)
customers_clean.head()
# Apply feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
customers_impute = imputer.transform(customers_clean)
customers_scaled = scaler.transform(customers_impute)
customers_pca = pca.transform(customers_scaled)
customers_preds = kmeans.predict(customers_pca)
customers_cluster = pd.DataFrame(np.round(customers_pca, 4), columns = dimensions)
customers_cluster.insert(loc=0, column='Cluster', value=customers_preds)
customers_cluster.head()
###Output
_____no_output_____
###Markdown
Step 3.3 Compare Customer Data to Demographics DataAt this point, there are clustered data based on demographics of the general population of Germany, and the customer data for a mail-order sales company has been mapped onto those demographic clusters. In this final substep, the two cluster distributions will be compared to see where the strongest customer base for the company is.
###Code
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.
def show_proportion(df_cluster, title='Default Title'):
#get order of bars by frequency
cluster_counts = df_cluster['Cluster'].value_counts()
cluster_order = cluster_counts.index
#compute largest proportion
n_model = df_cluster.shape[0]
max_cluster_count = cluster_counts.iloc[0]
max_prop = max_cluster_count / n_model
#establish tick locations and create plot
base_color = sns.color_palette()[0]
tick_props = np.arange(0, max_prop, 0.02)
tick_names = ['{:0.2f}'.format(v) for v in tick_props]
sns.countplot(data=df_cluster, y='Cluster', color=base_color, order=cluster_order)
plt.xticks(tick_props * n_model, tick_names)
plt.xlabel('proportion')
plt.title(title);
fig, ax = plt.subplots(figsize=(12,8))
show_proportion(azdias_cluster, title='Proportion of Each Cluster for the General Population')
fig, ax = plt.subplots(figsize=(12,8))
show_proportion(customers_cluster, title='Proportion of Each Cluster for the Customer Data')
# Check for overrepresentation and underrepresentation of clusters between the two datasets
azdias_prop = azdias_cluster['Cluster'].value_counts() / azdias_cluster.shape[0]
customers_prop = customers_cluster['Cluster'].value_counts() / customers_cluster.shape[0]
diff_prop = customers_prop - azdias_prop
max_index = diff_prop.sort_values(ascending=False).index[0]
max_diff = diff_prop.sort_values(ascending=False).iloc[0]
min_index = diff_prop.sort_values(ascending=False).index[-6]
min_diff = diff_prop.sort_values(ascending=False).iloc[-6]
diff_prop.sort_values(ascending=False)
fig, ax = plt.subplots(figsize=(12,8))
diff_prop.sort_values()[:25].plot.barh(color=sns.color_palette()[0])
plt.title("Difference in Proportions Between the General Population and Customer Data", fontsize=16)
plt.xlabel("Difference", fontsize=12)
plt.ylabel("Cluster", fontsize=12);
# Function to transform centroids back to the original data space based on cluster number
def infer_cluster(index):
# Subset the customers_cluster dataframe by the selected index
cluster = customers_cluster[customers_cluster['Cluster'] == index]
cluster_drop = cluster.drop('Cluster', axis=1)
# Perform inverse PCA and inverse scaling to return to the original values
cluster_pca = pca.inverse_transform(cluster_drop)
cluster_scaler = scaler.inverse_transform(cluster_pca)
# Create a new dataframe of the cluster and reuse the feature columns
cluster_final = pd.DataFrame(cluster_scaler, columns=customers_clean.columns)
return cluster_final
# Columns to infer based on Step 2.3
infer_columns = ['MOBI_REGIO',
'PLZ8_ANTG1',
'KBA13_ANTG1',
'ONLINE_AFFINITAET',
'DECADE',
'D19_GESAMT_ANZ_24',
'KBA13_HERST_BMW_BENZ',
'KBA13_SEG_OBEREMITTELKLASSE',
'KBA13_MERCEDES']
# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?
print('The cluster which is the most overrepresented is cluster {} with a difference of {}.'
.format(max_index, np.round(max_diff, 4)))
over_cluster = infer_cluster(max_index)
print(over_cluster.shape)
over_cluster.head()
over_cluster[infer_columns].describe()
# What kinds of people are part of a cluster that is underrepresented in the
# customer data compared to the general population?
print('The cluster which is the most underrepresented is cluster {} with a difference of {}.'
.format(min_index, np.round(min_diff, 4)))
under_cluster = infer_cluster(min_index)
print(under_cluster.shape)
under_cluster.head()
under_cluster[infer_columns].describe()
# Compare the differences between the target and non-target groups
over_mean = over_cluster[infer_columns].describe().loc['mean']
under_mean = under_cluster[infer_columns].describe().loc['mean']
mean_df = pd.concat([over_mean, under_mean], axis=1)
mean_df.columns = ['Target', 'Non-Target']
# Plot the means of the columns of interests for the target and non-target groups
fig, ax = plt.subplots(figsize=(16,9))
plt.title("Mean Values of Columns Between Target and Non-Target Customers", fontsize=16)
plt.xlabel("Mean", fontsize=12)
mean_df.plot.barh(ax=ax);
###Output
_____no_output_____
###Markdown
Discussion 3.3 Compare Customer Data to Demographics DataSince there are over 595 features, it will be impractical to check each feature to interpret the two clusters. Based on the principal component interpretations from step 2.3, the columns to be interpreted on the original data of the chosen clusters can be identified.Only three out of the nine features above are clearly different. These are `D19_GESAMT_ANZ_24`, `DECADE`, and `MOBI_REGIO`. The target customers are less mobile, their dominant movement in their youth dates to earlier decades (which probably means they are older), and they have very few to no transactions in the past 24 months. Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. Exploring the DataThe last column `RESPONSE` will be the target label (whether or not the individual became a customer). All other applicable columns are features about each individual in the mailout campaign.
###Code
#mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';')
mailout_train = pd.read_csv('data/mailout_train.csv', dtype=dtypes)
mailout_train.head()
# Total number of records
n_records = mailout_train.shape[0]
# Number of records where the individual became a customer
n_customer = mailout_train[mailout_train['RESPONSE'] == 1].shape[0]
# Number of records where individual did not become a customer
n_not_customer = mailout_train[mailout_train['RESPONSE'] == 0].shape[0]
# Percentage of individuals who became customers
customer_perc = (n_customer / n_records) * 100
# Print the results
print("Total number of records: {}".format(n_records))
print("Individuals who became customers: {}".format(n_customer))
print("Individuals who did not become customers: {}".format(n_not_customer))
print("Percentage of individuals who became customers: {}%".format(customer_perc))
sns.countplot("RESPONSE",data=mailout_train)
###Output
_____no_output_____
###Markdown
Out of all the 42,962 individuals in the mailout campaign, only 1.24% of the individuals became customers. The dataset is highly imbalanced because of the disproportionate number of customers and non-customers. Preparing the DataBefore the imbalanced dataset can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as preprocessing. Fortunately, the features are similar to those in the general population and customers datasets.
###Code
# Prepare the data using the function created earlier
mailout_train_clean = clean_data(mailout_train)
print(mailout_train_clean.shape)
mailout_train_clean.head()
# Check for missing columns in `mailout_train_clean`
missing = list(np.setdiff1d(customers_clean.columns, mailout_train_clean.columns))
missing
# Add the missing column with default value of 0
for m in missing:
mailout_train_clean[m] = 0
mailout_train_clean[m] = mailout_train_clean[m].astype('uint8')
mailout_train_clean.info()
# Split the data into features and target label
response_raw = mailout_train_clean['RESPONSE']
features_raw = mailout_train_clean.drop('RESPONSE', axis = 1)
###Output
_____no_output_____
###Markdown
Shuffle and Split the DataThe dataset has now been preprocessed and is ready for machine learning. As always, we will now split the data (both features and their labels) into training and test sets. Due to class imbalance, a variation of KFold that returns stratified folds will be implemented. The folds made will preserve the percentage of samples for each class.
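As a quick illustration of why stratification matters here, the minimal sketch below uses a small synthetic label vector (a stand-in for the highly imbalanced `RESPONSE` column, not the real data) and checks that each fold keeps roughly the same positive rate as the whole set.

```python
# Minimal sketch: stratified folds preserve the positive-class rate (toy data only).
import numpy as np
from sklearn.model_selection import StratifiedKFold

toy_y = np.array([1] * 12 + [0] * 988)   # ~1.2% positives, similar in spirit to the mailout data
toy_X = np.zeros((len(toy_y), 3))        # the features are irrelevant for the split itself

for fold, (train_idx, test_idx) in enumerate(StratifiedKFold(n_splits=5).split(toy_X, toy_y), 1):
    print('Fold {}: positive rate in the held-out fold = {:.3f}'.format(fold, toy_y[test_idx].mean()))
# Each fold should report roughly 0.012, matching the overall class balance.
```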
###Code
# Import StratifiedKFold
from sklearn.model_selection import StratifiedKFold
# Initialize 5 stratified folds
skf = StratifiedKFold(n_splits=5, random_state=28)
skf.get_n_splits(features_raw, response_raw)
print(skf)
###Output
StratifiedKFold(n_splits=5, random_state=28, shuffle=False)
###Markdown
Evaluating Model PerformanceSince there is a large output class imbalance, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the model will be using ROC-AUC to evaluate performance. Aside from the Kaggle competition (see section below) using ROC-AUC as the score, the metric is suitable for binary classification problems such as this. Jason Brownlee, in his [article](https://machinelearningmastery.com/assessing-comparing-classifier-performance-roc-curves-2/), explains that "ROC curves give us the ability to assess the performance of the classifier over its entire operating range. The most widely-used measure is the area under the curve (AUC). The AUC can be used to compare the performance of two or more classifiers. A single threshold can be selected and the classifiers’ performance at that point compared, or the overall performance can be compared by considering the AUC". Compared to the F1 score, the ROC does not require optimizing a threshold for each label.There are a number of approaches to deal with class imbalance which have already been explained by numerous blog posts from different experts. This particular [article](https://www.analyticsvidhya.com/blog/2017/03/imbalanced-classification-problem/) from Analytics Vidhya describes the following techniques:1. Random Over-sampling2. Random Under-sampling3. Cluster-Based Over-sampling4. Informed Over Sampling: Synthetic Minority Over-sampling Technique (SMOTE)5. Modified synthetic minority oversampling technique (MSMOTE)The above approaches deal with handling imbalanced data by resampling original data to provide balanced classes. The same article also provides an alternative approach of modifying existing classification algorithms to make them appropriate for imbalanced data sets.1. Bagging-Based2. Boosting-Based3. Adaptive Boosting4. Gradient Tree Boosting5. Extreme Gradient BoostingThe last three algorithms will be investigated to determine which is best at modeling the data.
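For reference, here is a minimal sketch of the first technique on the list, random over-sampling; it is illustrative only and is not applied in the modeling below (which instead relies on boosting-based models scored with ROC-AUC). The dataframe and column names are assumptions based on this notebook.

```python
# Minimal sketch of random over-sampling: resample the minority class with replacement
# until both classes are the same size. Assumes a dataframe with a binary 'RESPONSE' column.
import pandas as pd

def random_oversample(df, target='RESPONSE', random_state=28):
    majority = df[df[target] == 0]
    minority = df[df[target] == 1]
    minority_upsampled = minority.sample(n=len(majority), replace=True, random_state=random_state)
    # Shuffle the combined frame so the upsampled rows are not grouped at the end
    return pd.concat([majority, minority_upsampled]).sample(frac=1, random_state=random_state)

# Hypothetical usage:
# balanced = random_oversample(mailout_train_clean)
# balanced['RESPONSE'].value_counts()   # both classes now equally represented
```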
###Code
# Import libraries for machine learning pipeline
from sklearn.ensemble import AdaBoostRegressor # Adaptive Boosting
from sklearn.ensemble import GradientBoostingRegressor # Gradient Tree Boosting
from xgboost.sklearn import XGBRegressor # Extreme Gradient Boosting
clf_A = AdaBoostRegressor(random_state=28)
clf_B = GradientBoostingRegressor(random_state=28)
clf_C = XGBRegressor(random_state=28)
model_scores = {}
for i, clf in enumerate([clf_A, clf_B, clf_C]):
# Create machine learning pipeline
pipeline = Pipeline([
('imp', imputer),
('scale', scaler),
('clf', clf)
])
scores = []
j = 0
# Perform 5-fold validation
for train_index, test_index in skf.split(features_raw, response_raw):
j+=1
print('Classifier {}: Fold {}...'.format(i+1, j))
# Split the data into training and test sets
X_train, X_test = features_raw.iloc[train_index], features_raw.iloc[test_index]
y_train, y_test = response_raw.iloc[train_index], response_raw.iloc[test_index]
# Train using the pipeline
pipeline.fit(X_train, y_train)
#Predict on the test data
y_pred = pipeline.predict(X_test)
score = roc_auc_score(y_test, y_pred)
scores.append(score)
print(score)
model_scores[clf] = scores
scores_df = pd.DataFrame(model_scores)
scores_df.columns = ['AdaBoostRegressor', 'GradientBoostingRegressor', 'XGBRegressor']
scores_df
scores_df.describe()
###Output
_____no_output_____
###Markdown
Improving ResultsAmong the three models, namely `AdaBoostRegressor`, `GradientBoostingRegressor`, and `XGBRegressor`, the last one performed best based on the mean score. It beat out the other two models by a small margin. In fact, Extreme Gradient Boosting is an advanced and more efficient implementation of Gradient Boosting, so it makes sense that it should have the highest score.Another [article](https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/) from Analytics Vidhya lists the advantages of XGBoost:1. Regularization2. Parallel Processing3. High Flexibility4. Handling Missing Values5. Tree Pruning6. Built-in Cross Validation7. Continue on Existing Model
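As a side note on advantage 6, the built-in cross-validation is exposed through `xgb.cv` in XGBoost's native API. The sketch below runs it on a small synthetic binary-classification problem (not the mailout data), with illustrative parameter values rather than the tuned settings developed below.

```python
# Minimal sketch of XGBoost's built-in cross-validation on synthetic data.
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(28)
X_demo = rng.normal(size=(500, 10))
y_demo = (X_demo[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

cv_results = xgb.cv(
    params={'objective': 'binary:logistic', 'eta': 0.1, 'max_depth': 3},
    dtrain=xgb.DMatrix(X_demo, label=y_demo),
    num_boost_round=50,
    nfold=5,
    stratified=True,
    metrics='auc',
    seed=28)
print(cv_results['test-auc-mean'].iloc[-1])  # mean held-out AUC after the last boosting round
```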
###Code
# Create function to make a machine learning pipeline
def create_pipeline(clf):
# Create machine learning pipeline
pipeline = Pipeline([
('imp', imputer),
('scale', scaler),
('clf', clf)
])
return pipeline
# Create function to do 5-fold cross-validation above
def cross_validate(clf):
pipeline = create_pipeline(clf)
scores = []
i = 0
# Perform 5-fold validation
for train_index, test_index in skf.split(features_raw, response_raw):
i+=1
print('Fold {}'.format(i))
# Split the data into training and test sets
X_train, X_test = features_raw.iloc[train_index], features_raw.iloc[test_index]
y_train, y_test = response_raw.iloc[train_index], response_raw.iloc[test_index]
# Train using the pipeline
pipeline.fit(X_train, y_train)
#Predict on the test data
y_pred = pipeline.predict(X_test)
score = roc_auc_score(y_test, y_pred)
scores.append(score)
print("Score: {}".format(score))
return scores
# Create initial model
clf_0 = XGBRegressor(
objective = 'binary:logistic', #logistic regression for binary classification
scale_pos_weight = 1, #because of high class imbalance
random_state = 28)
tuned_scores0 = cross_validate(clf_0)
np.array(tuned_scores0).mean()
###Output
_____no_output_____
###Markdown
Tune `max_depth` and `min_child_weight`These will be tuned first because they have the highest impact on the model outcome.
###Code
pipeline = Pipeline([
('imp', imputer),
('scale', scaler),
('clf', clf_0)
])
parameters_1 = {
'clf__max_depth': range(3,10,2),
'clf__min_child_weight': range(1,6,2)
}
cv = GridSearchCV(estimator=pipeline, param_grid=parameters_1, scoring='roc_auc',n_jobs=-1, iid=False, cv=5)
cv.fit(features_raw, response_raw)
cv.grid_scores_, cv.best_params_, cv.best_score_
# Update model
clf_1 = XGBRegressor(
max_depth = 3,
min_child_weight = 5,
objective = 'binary:logistic',
scale_pos_weight = 1,
random_state = 28)
tuned_scores1 = cross_validate(clf_1)
np.array(tuned_scores1).mean()
###Output
_____no_output_____
###Markdown
Tune `gamma`
###Code
pipeline = Pipeline([
('imp', imputer),
('scale', scaler),
('clf', clf_1)
])
parameters_2 = {
'clf__gamma': [i/10.0 for i in range(0,5)]
}
cv = GridSearchCV(estimator=pipeline, param_grid=parameters_2, scoring='roc_auc',n_jobs=-1, iid=False, cv=5)
cv.fit(features_raw, response_raw)
cv.grid_scores_, cv.best_params_, cv.best_score_
clf_2 = XGBRegressor(
gamma = 0.2,
max_depth = 3,
min_child_weight = 1,
objective = 'binary:logistic',
scale_pos_weight = 1,
random_state = 28)
tuned_scores2 = cross_validate(clf_2)
np.array(tuned_scores2).mean()
###Output
_____no_output_____
###Markdown
Tune `subsample` and `colsample_bytree`
###Code
pipeline = Pipeline([
('imp', imputer),
('scale', scaler),
('clf', clf_2)
])
parameters_3 = {
'clf__subsample':[i/10.0 for i in range(6,10)],
'clf__colsample_bytree':[i/10.0 for i in range(6,10)]
}
cv = GridSearchCV(estimator=pipeline, param_grid=parameters_3, scoring='roc_auc',n_jobs=-1, iid=False, cv=5)
cv.fit(features_raw, response_raw)
cv.grid_scores_, cv.best_params_, cv.best_score_
clf_3 = XGBRegressor(
subsample = 0.6,
colsample_bytree = 0.7,
gamma = 0.2,
max_depth = 3,
min_child_weight = 1,
objective = 'binary:logistic',
scale_pos_weight = 1,
random_state = 28)
tuned_scores3 = cross_validate(clf_3)
np.array(tuned_scores3).mean()
###Output
_____no_output_____
###Markdown
Tuning Regularization Parameters
###Code
pipeline = Pipeline([
('imp', imputer),
('scale', scaler),
('clf', clf_3)
])
parameters_4 = {
'clf__reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
cv = GridSearchCV(estimator=pipeline, param_grid=parameters_4, scoring='roc_auc',n_jobs=-1, iid=False, cv=5)
cv.fit(features_raw, response_raw)
cv.grid_scores_, cv.best_params_, cv.best_score_
clf_4 = XGBRegressor(
reg_alpha = 0.1,
subsample = 0.6,
colsample_bytree = 0.7,
gamma = 0.2,
max_depth = 3,
min_child_weight = 1,
objective = 'binary:logistic',
scale_pos_weight = 1,
random_state = 28)
tuned_scores4 = cross_validate(clf_4)
np.array(tuned_scores4).mean()
pipeline = Pipeline([
('imp', imputer),
('scale', scaler),
('clf', clf_4)
])
parameters_5 = {
'clf__reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
cv = GridSearchCV(estimator=pipeline, param_grid=parameters_5, scoring='roc_auc',n_jobs=-1, iid=False, cv=5)
cv.fit(features_raw, response_raw)
cv.grid_scores_, cv.best_params_, cv.best_score_
###Output
/anaconda3/lib/python3.7/site-packages/sklearn/model_selection/_search.py:762: DeprecationWarning: The grid_scores_ attribute was deprecated in version 0.18 in favor of the more elaborate cv_results_ attribute. The grid_scores_ attribute will not be available from 0.20
DeprecationWarning)
###Markdown
The score is less than the previous score, but the searched values span a wide range, so it is worth checking again with a narrower range.
###Code
clf_5 = XGBRegressor(
reg_alpha = 0.05,
subsample = 0.6,
colsample_bytree = 0.7,
gamma = 0.2,
max_depth = 3,
min_child_weight = 1,
objective = 'binary:logistic',
scale_pos_weight = 1,
random_state = 28)
tuned_scores5 = cross_validate(clf_5)
np.array(tuned_scores5).mean()
###Output
_____no_output_____
###Markdown
Reduce `learning_rate` and increase `n_estimators`
###Code
clf_final = XGBRegressor(
learning_rate = 0.01,
n_estimators = 1000,
reg_alpha = 0.05,
subsample = 0.6,
colsample_bytree = 0.7,
gamma = 0.2,
max_depth = 3,
min_child_weight = 1,
objective = 'binary:logistic',
scale_pos_weight = 1,
random_state = 28)
final_scores = cross_validate(clf_final)
np.array(final_scores).mean()
###Output
_____no_output_____
###Markdown
Check Most Important Features
###Code
def feature_plot(importances, X_train, y_train, num_feat=5):
# Display the five most important features
indices = np.argsort(importances)[::-1]
columns = X_train.columns.values[indices[:num_feat]]
values = importances[indices][:num_feat]
# Create the plot
fig = plt.figure(figsize = (16,9))
plt.title("Normalized Weights for the Most Predictive Features", fontsize = 16)
plt.barh(np.arange(num_feat), values[::-1], height = 0.6, align="center", \
label = "Feature Weight")
plt.barh(np.arange(num_feat) - 0.3, np.cumsum(values)[::-1], height = 0.2, align = "center", \
label = "Cumulative Feature Weight")
plt.yticks(np.arange(num_feat), columns[::-1])
plt.xlabel("Weight", fontsize = 12)
plt.ylabel('')
plt.legend(loc = 'upper right')
plt.tight_layout()
plt.show()
importances = clf_5.feature_importances_
feature_plot(importances, features_raw, response_raw, 10)
over_cluster['D19_SOZIALES'].describe(), under_cluster['D19_SOZIALES'].describe()
over_cluster['D19_SOZIALES'].hist()
plt.xlabel('Value')
plt.ylabel('Count')
plt.title('Distribution of `D19_SOZIALES` for the Over Represented Cluster');
under_cluster['D19_SOZIALES'].hist()
plt.xlabel('Value')
plt.ylabel('Count')
plt.title('Distribution of `D19_SOZIALES` for the Under Represented Cluster');
###Output
_____no_output_____
###Markdown
Extending this feature to the cluster representations from the earlier dataset, there is a distinct difference between the over-represented cluster and the under-represented cluster. The over-represented cluster is concentrated almost entirely in a single bar with a mean value of 1.68, while the under-represented cluster has a more even, roughly normal distribution with a lower mean of 1.15. Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep.
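Since AUC is the competition metric, only the ranking of the submitted scores matters; the toy sketch below (made-up labels and scores, not project data) shows that any monotonic rescaling of the scores leaves the ROC-AUC unchanged, which is why the raw regressor outputs can be submitted as-is.

```python
# Minimal sketch: ROC-AUC depends only on the ordering of the scores.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.65, 0.05, 0.20, 0.90])

print(roc_auc_score(y_true, scores))             # AUC of the raw scores
print(roc_auc_score(y_true, 100 * scores - 3))   # identical AUC after a monotonic rescaling
```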
###Code
#mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';')
mailout_test = pd.read_csv('data/mailout_test.csv')
print(mailout_test.shape)
mailout_test.head()
mailout_test_clean = clean_data(mailout_test)
print(mailout_test_clean.shape)
mailout_test_clean.head()
# Check for missing columns in `mailout_test_clean`
missing = list(np.setdiff1d(customers_clean.columns, mailout_test_clean.columns))
missing
# Add the missing column with default value of 0
for m in missing:
mailout_test_clean[m] = 0
mailout_test_clean[m] = mailout_test_clean[m].astype('uint8')
mailout_test_clean.info()
pipeline = create_pipeline(clf_final)
pipeline.fit(features_raw, response_raw)
#Predict on the test data
predictions = pipeline.predict(mailout_test_clean)
###Output
_____no_output_____
###Markdown
Submission
###Code
submission = pd.DataFrame(index=mailout_test['LNR'].astype('int32'), data=predictions)
submission.rename(columns={0: "RESPONSE"}, inplace=True)
submission.head()
submission.to_csv('submission.csv')
###Output
_____no_output_____
###Markdown
Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings.
###Code
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# import seaborn as sns
# magic word for producing visualizations in notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them.
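As a rough outline of such a pre-processing function, the sketch below collects the typical steps (drop selected columns, drop rows with too many missing values, impute, one-hot encode); the threshold and the column list are placeholders only, and the concrete choices depend on the exploration that follows.

```python
# Rough, hedged outline of a reusable pre-processing function; the threshold and the
# drop list are placeholders, not the values actually chosen in this analysis.
import pandas as pd

def preprocess(df, drop_cols=None, row_nan_threshold=0.2):
    df = df.copy()
    if drop_cols is not None:
        df = df.drop(columns=[c for c in drop_cols if c in df.columns])
    # Drop rows that are mostly missing
    df = df[df.isnull().mean(axis=1) < row_nan_threshold]
    # Impute remaining gaps with the column mode and one-hot encode categoricals
    df = df.fillna(df.mode().iloc[0])
    return pd.get_dummies(df)
```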
###Code
# load in the data
azdias = pd.read_csv('../data_Arvato/Udacity_AZDIAS_052018.csv')
customers = pd.read_csv('../data_Arvato/Udacity_CUSTOMERS_052018.csv')
# function to get count for each group
def groupcount(data,sortcol = None):
'''
input:
data: df[col] or dr.col,dataframe name followed by the column which used to get the group
sortcol: int, should be 0, 1 or 2, defines which columns used to sort by
use case: groupcount(df.col1, 0)
'''
if data.isnull().sum()>0:
if data.dtype == 'O':
data = data.astype(str)
temp = data.fillna('None').value_counts().reset_index(name='Count')
else:
temp = data.value_counts().reset_index(name='Count')
temp = temp.rename(index=str,columns={'index': 'Group'})
temp['percent'] = round(temp.Count / temp.Count.sum() *100,1)
if sortcol==1:
temp = temp.sort_values('Group')
elif sortcol==2:
temp = temp.sort_values('Count')
return temp
# Function to calculate missing values by column# Funct
def missing_values_table(df):
import pandas as pd
# Total missing values
mis_val = df.isnull().sum()
# Percentage of missing values
mis_val_percent = 100 * df.isnull().sum() / len(df)
# Make a table with the results
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
# Rename the columns
mis_val_table_ren_columns = mis_val_table.rename(
columns = {0 : 'Missing Values', 1 : '% of Total Values'})
# Sort the table by percentage of missing descending
mis_val_table_ren_columns = mis_val_table_ren_columns[
mis_val_table_ren_columns.iloc[:,1] != 0].sort_values(
'% of Total Values', ascending=False).round(1)
#Calculate how many columns has bigger than 95% missing, save result to table
BIGmissingN = mis_val_table_ren_columns.loc[mis_val_table_ren_columns['% of Total Values']>95].count()[1]
# Print some summary information
# print ("Total columns: " + str(df.shape[1]) + "\n"
# + "Columns that have missing values: " + str(mis_val_table_ren_columns.shape[0]) + "\n"
# + 'Missing>95% column number: ' + str(BIGmissingN))
# Return the dataframe with missing information
return mis_val_table_ren_columns
###Output
_____no_output_____
###Markdown
Based on the following analysis, we can confirm the three extra columns in the customer data, from which we can see: 1. about half of the customers buy both COSMETIC_AND_FOOD; 2. the rest of the customers don't have a strong preference between cosmetics and food; 3. around 70% of customers are multi-buyers; 4. the majority of customers are in-store buyers (91%). Based on No. 4, my questions are: 1. does store location play an important role for target customers? 2. are the 9% online customers far from the store?
###Code
print('azdias data shape: {0}; customer data shape: {1}'.format(azdias.shape, customers.shape))
print('extra columns for customer data: ', [e for e in customers.columns if e not in azdias.columns])
print(groupcount(customers['PRODUCT_GROUP']))
print(groupcount(customers['CUSTOMER_GROUP']))
print(groupcount(customers['ONLINE_PURCHASE']))
###Output
azdias data shape: (891221, 366); customer data shape: (891221, 366)
extra columns for customer data: ['PRODUCT_GROUP', 'CUSTOMER_GROUP', 'ONLINE_PURCHASE']
Group Count percent
0 COSMETIC_AND_FOOD 100860 52.6
1 FOOD 47382 24.7
2 COSMETIC 43410 22.7
Group Count percent
0 MULTI_BUYER 132238 69.0
1 SINGLE_BUYER 59414 31.0
Group Count percent
0 0 174356 91.0
1 1 17296 9.0
###Markdown
There are 6 categorical features in the data.
###Code
print(azdias.dtypes.value_counts())
cat_vars = [e for e in azdias.columns if azdias[e].dtypes == 'object']
azdias[cat_vars].head()
azdias.head()
missing_values_table(azdias)
###Output
_____no_output_____
###Markdown
Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld.
###Code
mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';')
###Output
_____no_output_____
###Markdown
Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep.
###Code
mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';')
###Output
_____no_output_____
###Markdown
Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, I have analyzed demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. I have used unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, I use a model to predict which individuals are most likely to convert into becoming customers for the company. The data has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.
###Code
# import libraries here; add more as necessary
import numpy as np
import time
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import Imputer
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# magic word for producing visualizations in notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood.
###Code
# load in the data
azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';')
customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';')
feat_info = pd.read_excel('DIAS Attributes - Values 2017.xlsx')
del feat_info['Unnamed: 0']
feat_info.ffill(inplace=True)
feat_info
feat_list = feat_info['Attribute'].unique()
# should be an empty array
diff_not_in_feat = list(set(feat_list) - set(azdias))
print('Features in feat_list that are not in azdias:')
print(len(diff_not_in_feat))
print(diff_not_in_feat)
print('-------------')
# features that are not in feat_info (no information regarding type)
diff_not_in_azdias = list(set(azdias) - set(feat_list))
print('Features in azdias that are not in feat_list:')
print(len(diff_not_in_azdias))
print(diff_not_in_azdias)
# note: drop_columns is defined in a later cell, so this line only evaluates after that cell has been run
len(drop_columns)
###Output
_____no_output_____
###Markdown
Dropping some features in azdias that are not in feat_list. Since these features do not come with any attribute details, I am simply removing them.
###Code
azdias.drop(['D19_KINDERARTIKEL', 'D19_LEBENSMITTEL', 'D19_NAHRUNGSERGAENZUNG', 'D19_BUCH_CD', 'CJT_KATALOGNUTZER', 'KONSUMZELLE', 'CJT_TYP_6', 'D19_LOTTO', 'D19_VERSI_OFFLINE_DATUM', 'D19_WEIN_FEINKOST', 'D19_VERSAND_REST', 'ALTER_KIND3', 'KK_KUNDENTYP', 'D19_GARTEN', 'D19_FREIZEIT', 'D19_TECHNIK', 'D19_BANKEN_LOKAL', 'D19_TIERARTIKEL', 'D19_VERSI_ONLINE_DATUM', 'D19_BEKLEIDUNG_GEH', 'D19_SAMMELARTIKEL', 'CJT_TYP_1', 'D19_VERSI_ONLINE_QUOTE_12', 'ARBEIT', 'KBA13_ANTG2', 'ANZ_KINDER', 'D19_TELKO_REST', 'ALTER_KIND4', 'D19_TELKO_ONLINE_QUOTE_12', 'HH_DELTA_FLAG', 'KBA13_KMH_210', 'D19_RATGEBER', 'CJT_TYP_3', 'D19_VERSI_DATUM', 'UNGLEICHENN_FLAG', 'VERDICHTUNGSRAUM', 'STRUKTURTYP', 'LNR', 'VK_DHT4A', 'D19_HAUS_DEKO', 'D19_DIGIT_SERV', 'D19_VOLLSORTIMENT', 'D19_DROGERIEARTIKEL', 'EINGEFUEGT_AM', 'RT_SCHNAEPPCHEN', 'ALTER_KIND2', 'D19_KONSUMTYP_MAX', 'GEMEINDETYP', 'KBA13_ANTG4', 'D19_SOZIALES', 'D19_BANKEN_GROSS', 'D19_HANDWERK', 'KOMBIALTER', 'KBA13_ANTG3', 'D19_BILDUNG', 'KBA13_ANTG1', 'AKT_DAT_KL', 'D19_VERSICHERUNGEN', 'KBA13_CCM_1401_2500', 'VHN', 'KBA13_GBZ', 'MOBI_RASTER', 'D19_BANKEN_REST', 'VK_DISTANZ', 'VHA', 'KBA13_HHZ', 'CJT_TYP_2', 'D19_TELKO_MOBILE', 'D19_ENERGIE', 'D19_SONSTIGE', 'EINGEZOGENAM_HH_JAHR', 'D19_REISEN', 'D19_BANKEN_DIREKT', 'CJT_TYP_5', 'VK_ZG11', 'ALTERSKATEGORIE_FEIN', 'UMFELD_ALT', 'CAMEO_INTL_2015', 'ALTER_KIND1', 'SOHO_KZ', 'D19_BIO_OEKO', 'D19_KOSMETIK', 'D19_BEKLEIDUNG_REST', 'RT_KEIN_ANREIZ', 'D19_SCHUHE', 'RT_UEBERGROESSE', 'FIRMENDICHTE', 'ANZ_STATISTISCHE_HAUSHALTE', 'UMFELD_JUNG', 'EXTSEL992', 'DSL_FLAG', 'CJT_TYP_4', 'D19_LETZTER_KAUF_BRANCHE', 'KBA13_BAUMAX'],axis=1,inplace=True)
customers.drop(['D19_KINDERARTIKEL', 'D19_LEBENSMITTEL', 'D19_NAHRUNGSERGAENZUNG', 'D19_BUCH_CD', 'CJT_KATALOGNUTZER', 'KONSUMZELLE', 'CJT_TYP_6', 'D19_LOTTO', 'D19_VERSI_OFFLINE_DATUM', 'D19_WEIN_FEINKOST', 'D19_VERSAND_REST', 'ALTER_KIND3', 'KK_KUNDENTYP', 'D19_GARTEN', 'D19_FREIZEIT', 'D19_TECHNIK', 'D19_BANKEN_LOKAL', 'D19_TIERARTIKEL', 'D19_VERSI_ONLINE_DATUM', 'D19_BEKLEIDUNG_GEH', 'D19_SAMMELARTIKEL', 'CJT_TYP_1', 'D19_VERSI_ONLINE_QUOTE_12', 'ARBEIT', 'KBA13_ANTG2', 'ANZ_KINDER', 'D19_TELKO_REST', 'ALTER_KIND4', 'D19_TELKO_ONLINE_QUOTE_12', 'HH_DELTA_FLAG', 'KBA13_KMH_210', 'D19_RATGEBER', 'CJT_TYP_3', 'D19_VERSI_DATUM', 'UNGLEICHENN_FLAG', 'VERDICHTUNGSRAUM', 'STRUKTURTYP', 'LNR', 'VK_DHT4A', 'D19_HAUS_DEKO', 'D19_DIGIT_SERV', 'D19_VOLLSORTIMENT', 'D19_DROGERIEARTIKEL', 'EINGEFUEGT_AM', 'RT_SCHNAEPPCHEN', 'ALTER_KIND2', 'D19_KONSUMTYP_MAX', 'GEMEINDETYP', 'KBA13_ANTG4', 'D19_SOZIALES', 'D19_BANKEN_GROSS', 'D19_HANDWERK', 'KOMBIALTER', 'KBA13_ANTG3', 'D19_BILDUNG', 'KBA13_ANTG1', 'AKT_DAT_KL', 'D19_VERSICHERUNGEN', 'KBA13_CCM_1401_2500', 'VHN', 'KBA13_GBZ', 'MOBI_RASTER', 'D19_BANKEN_REST', 'VK_DISTANZ', 'VHA', 'KBA13_HHZ', 'CJT_TYP_2', 'D19_TELKO_MOBILE', 'D19_ENERGIE', 'D19_SONSTIGE', 'EINGEZOGENAM_HH_JAHR', 'D19_REISEN', 'D19_BANKEN_DIREKT', 'CJT_TYP_5', 'VK_ZG11', 'ALTERSKATEGORIE_FEIN', 'UMFELD_ALT', 'CAMEO_INTL_2015', 'ALTER_KIND1', 'SOHO_KZ', 'D19_BIO_OEKO', 'D19_KOSMETIK', 'D19_BEKLEIDUNG_REST', 'RT_KEIN_ANREIZ', 'D19_SCHUHE', 'RT_UEBERGROESSE', 'FIRMENDICHTE', 'ANZ_STATISTISCHE_HAUSHALTE', 'UMFELD_JUNG', 'EXTSEL992', 'DSL_FLAG', 'CJT_TYP_4', 'D19_LETZTER_KAUF_BRANCHE', 'KBA13_BAUMAX'],axis=1,inplace=True)
customers.drop(['PRODUCT_GROUP', 'ONLINE_PURCHASE', 'CUSTOMER_GROUP'],axis=1,inplace=True)
#looking for nans
column_nans = azdias.isnull().mean()
plt.hist(column_nans, bins = np.arange(0,1+.05,.05))
plt.ylabel('# of features')
plt.xlabel('prop. of missing values')
unknowns = []
for attribute in feat_info['Attribute'].unique():
_ = feat_info.loc[feat_info['Attribute'] == attribute, 'Value'].astype(str).str.cat(sep=',')
_ = _.split(',')
unknowns.append(_)
unknowns = pd.concat([pd.Series(feat_info['Attribute'].unique()), pd.Series(unknowns)], axis=1)
unknowns.columns = ['attribute', 'missing_or_unknown']
print(unknowns)
start = time.time()
for row in unknowns['attribute']:
print(row)
if row in azdias.columns:
na_map = unknowns.loc[unknowns['attribute'] == row, 'missing_or_unknown'].iloc[0]
na_idx = azdias.loc[:, row].isin(na_map)
azdias.loc[na_idx, row] = np.NaN
else:
continue
end = time.time()
elapsed = end - start
elapsed
column_nans = azdias.isnull().mean()
plt.hist(column_nans, bins = np.arange(0,1+.05,.05))
plt.ylabel('# of features')
plt.xlabel('prop. of missing values')
col_drops = column_nans[column_nans>0.2].index
col_drops
col_drops = ['AGER_TYP', 'ALTER_HH', 'D19_BANKEN_ANZ_12', 'D19_BANKEN_ANZ_24',
'D19_BANKEN_ONLINE_QUOTE_12', 'D19_GESAMT_ANZ_12', 'D19_GESAMT_ANZ_24',
'D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP', 'D19_TELKO_ANZ_12',
'D19_TELKO_ANZ_24', 'D19_VERSAND_ANZ_12', 'D19_VERSAND_ANZ_24',
'D19_VERSAND_ONLINE_QUOTE_12', 'D19_VERSI_ANZ_12', 'D19_VERSI_ANZ_24',
'GREEN_AVANTGARDE', 'KBA05_AUTOQUOT', 'KBA05_BAUMAX', 'KONSUMNAEHE',
'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_STATUS_FEIN',
'LP_STATUS_GROB', 'MOBI_REGIO', 'PLZ8_BAUMAX', 'RELAT_AB', 'TITEL_KZ']
azdias.drop(col_drops,axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
Correlation Coefficient = 0.8: A fairly strong positive relationship. Correlation Coefficient = 0.6: A moderate positive relationship. So, I have chosen a threshold value of 0.7.
###Code
# find correlation matrix
corr_matrix = azdias.corr().abs()
upper_limit = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))
# identify columns to drop based on threshold limit
drop_columns = [column for column in upper_limit.columns if any(upper_limit[column] > .7)]
drop_columns = ['D19_BANKEN_ONLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM',
'FINANZ_SPARER', 'FINANZ_UNAUFFAELLIGER', 'FINANZ_VORSORGER', 'INNENSTADT', 'KBA05_KRSHERST1', 'KBA05_KRSHERST2', 'KBA05_KRSHERST3', 'KBA05_KW2',
'KBA05_SEG2', 'KBA05_SEG5', 'KBA05_SEG9', 'KBA05_ZUL4', 'KBA13_BJ_2000', 'KBA13_BJ_2006', 'KBA13_HALTER_25', 'KBA13_HALTER_30', 'KBA13_HALTER_35', 'KBA13_HALTER_40', 'KBA13_HALTER_50', 'KBA13_HALTER_55', 'KBA13_HALTER_66', 'KBA13_HERST_BMW_BENZ',
'KBA13_HERST_SONST', 'KBA13_KMH_140', 'KBA13_KMH_211', 'KBA13_KMH_250', 'KBA13_KRSHERST_FORD_OPEL', 'KBA13_KW_30', 'KBA13_KW_61_120', 'KBA13_MERCEDES', 'KBA13_OPEL', 'KBA13_SEG_KLEINWAGEN',
'KBA13_SEG_MINIVANS', 'KBA13_SEG_VAN', 'KBA13_SITZE_5', 'KBA13_VORB_1', 'KBA13_VORB_2', 'KBA13_VW', 'LP_LEBENSPHASE_FEIN',
'LP_LEBENSPHASE_GROB', 'ONLINE_AFFINITAET', 'ORTSGR_KLS9', 'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_GBZ', 'PLZ8_HHZ', 'PRAEGENDE_JUGENDJAHRE',
'REGIOTYP', 'SEMIO_KAEM', 'SEMIO_VERT', 'ANREDE_KZ', 'ALTERSKATEGORIE_GROB']
azdias.drop(drop_columns,axis=1,inplace=True)
azdias.shape
rows_nans = azdias.isnull().mean(axis=1)
plt.hist(rows_nans, bins = np.arange(0,1+.05,.05))
plt.ylabel('# of rows')
plt.xlabel('prop. of missing values')
azdias.shape
azdias=azdias.fillna(azdias.mode().iloc[0])
azdias[azdias.duplicated()]
azdias.drop_duplicates(inplace=True)
azdias.shape
#looking for nans
column_nans = azdias.isnull().mean()
plt.hist(column_nans, bins = np.arange(0,1+.05,.05))
plt.ylabel('# of features')
plt.xlabel('prop. of missing values')
#looking for nans
rows_nans = azdias.isnull().mean(axis=1)
plt.hist(rows_nans)
plt.ylabel('# of rows')
plt.xlabel('prop. of missing values')
drop_col = column_nans[column_nans>0.2].index
drop_col
azdias.drop(drop_col,axis=1,inplace=True)
azdias.shape
# finding numeric and categorical columns
Numeric_columns=azdias.select_dtypes(include=np.number).columns.tolist()
categorical_col=set(azdias.columns).difference(set(Numeric_columns))
# numeric cols to numeric
print(categorical_col)
azdias['CAMEO_DEU_2015'].unique()
azdias['CAMEO_DEU_2015'].value_counts()
azdias.drop('CAMEO_DEU_2015',inplace=True,axis=1)
azdias['CAMEO_DEUG_2015'].unique()
azdias['CAMEO_DEUG_2015'].value_counts()
azdias[['CAMEO_DEUG_2015']] = azdias[['CAMEO_DEUG_2015']].replace(['X'],np.NaN)
azdias[['CAMEO_DEUG_2015']] = azdias[['CAMEO_DEUG_2015']].replace(['XX','X'],np.NaN)
azdias['CAMEO_DEUG_2015'].value_counts()
azdias=pd.get_dummies(azdias)
azdias.shape
Numeric_columns=azdias.select_dtypes(include=np.number).columns.tolist()
categorical_col=set(azdias.columns).difference(set(Numeric_columns))
#numeric cols to numeric
print(categorical_col)
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
# import libraries
import numpy as np
import time
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import Imputer
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# magic word for producing visualizations in notebook
%matplotlib inline
# load in the data
azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';')
customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';')
azdias.shape
drop_columns = ['D19_KINDERARTIKEL', 'D19_LEBENSMITTEL', 'D19_NAHRUNGSERGAENZUNG',
'D19_BUCH_CD', 'CJT_KATALOGNUTZER', 'KONSUMZELLE', 'CJT_TYP_6',
'D19_LOTTO', 'D19_VERSI_OFFLINE_DATUM', 'D19_WEIN_FEINKOST', 'D19_VERSAND_REST',
'ALTER_KIND3', 'KK_KUNDENTYP', 'D19_GARTEN', 'D19_FREIZEIT', 'D19_TECHNIK',
'D19_BANKEN_LOKAL', 'D19_TIERARTIKEL', 'D19_VERSI_ONLINE_DATUM', 'D19_BEKLEIDUNG_GEH',
'D19_SAMMELARTIKEL', 'CJT_TYP_1', 'D19_VERSI_ONLINE_QUOTE_12', 'ARBEIT', 'KBA13_ANTG2',
'ANZ_KINDER', 'D19_TELKO_REST', 'ALTER_KIND4', 'D19_TELKO_ONLINE_QUOTE_12', 'HH_DELTA_FLAG',
'KBA13_KMH_210', 'D19_RATGEBER', 'CJT_TYP_3', 'D19_VERSI_DATUM', 'UNGLEICHENN_FLAG',
'VERDICHTUNGSRAUM', 'STRUKTURTYP', 'LNR', 'VK_DHT4A', 'D19_HAUS_DEKO', 'D19_DIGIT_SERV',
'D19_VOLLSORTIMENT', 'D19_DROGERIEARTIKEL', 'EINGEFUEGT_AM', 'RT_SCHNAEPPCHEN', 'ALTER_KIND2',
'D19_KONSUMTYP_MAX', 'GEMEINDETYP', 'KBA13_ANTG4', 'D19_SOZIALES', 'D19_BANKEN_GROSS',
'D19_HANDWERK', 'KOMBIALTER', 'KBA13_ANTG3', 'D19_BILDUNG', 'KBA13_ANTG1', 'AKT_DAT_KL',
'D19_VERSICHERUNGEN', 'KBA13_CCM_1401_2500', 'VHN', 'KBA13_GBZ', 'MOBI_RASTER',
'D19_BANKEN_REST', 'VK_DISTANZ', 'VHA', 'KBA13_HHZ', 'CJT_TYP_2', 'D19_TELKO_MOBILE',
'D19_ENERGIE', 'D19_SONSTIGE', 'EINGEZOGENAM_HH_JAHR', 'D19_REISEN', 'D19_BANKEN_DIREKT',
'CJT_TYP_5', 'VK_ZG11', 'ALTERSKATEGORIE_FEIN', 'UMFELD_ALT', 'CAMEO_INTL_2015', 'ALTER_KIND1',
'SOHO_KZ', 'D19_BIO_OEKO', 'D19_KOSMETIK', 'D19_BEKLEIDUNG_REST', 'RT_KEIN_ANREIZ', 'D19_SCHUHE',
'RT_UEBERGROESSE', 'FIRMENDICHTE', 'ANZ_STATISTISCHE_HAUSHALTE', 'UMFELD_JUNG', 'EXTSEL992',
'DSL_FLAG', 'CJT_TYP_4', 'D19_LETZTER_KAUF_BRANCHE', 'KBA13_BAUMAX',
'AGER_TYP', 'ALTER_HH', 'D19_BANKEN_ANZ_12', 'D19_BANKEN_ANZ_24','D19_BANKEN_ONLINE_QUOTE_12',
'D19_GESAMT_ANZ_12', 'D19_GESAMT_ANZ_24','D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP',
'D19_TELKO_ANZ_12','D19_TELKO_ANZ_24', 'D19_VERSAND_ANZ_12', 'D19_VERSAND_ANZ_24',
'D19_VERSAND_ONLINE_QUOTE_12', 'D19_VERSI_ANZ_12','D19_VERSI_ANZ_24','GREEN_AVANTGARDE',
'KBA05_AUTOQUOT', 'KBA05_BAUMAX', 'KONSUMNAEHE','LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB',
'LP_STATUS_FEIN','LP_STATUS_GROB', 'MOBI_REGIO', 'PLZ8_BAUMAX', 'RELAT_AB', 'TITEL_KZ',
'D19_BANKEN_ONLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM',
'D19_VERSAND_ONLINE_DATUM', 'FINANZ_SPARER',
'FINANZ_UNAUFFAELLIGER', 'FINANZ_VORSORGER', 'INNENSTADT', 'KBA05_KRSHERST1', 'KBA05_KRSHERST2',
'KBA05_KRSHERST3', 'KBA05_KW2', 'KBA05_SEG2', 'KBA05_SEG5', 'KBA05_SEG9', 'KBA05_ZUL4',
'KBA13_BJ_2000', 'KBA13_BJ_2006', 'KBA13_HALTER_25', 'KBA13_HALTER_30', 'KBA13_HALTER_35',
'KBA13_HALTER_40', 'KBA13_HALTER_50', 'KBA13_HALTER_55', 'KBA13_HALTER_66', 'KBA13_HERST_BMW_BENZ',
'KBA13_HERST_SONST', 'KBA13_KMH_140', 'KBA13_KMH_211', 'KBA13_KMH_250', 'KBA13_KRSHERST_FORD_OPEL',
'KBA13_KW_30', 'KBA13_KW_61_120', 'KBA13_MERCEDES', 'KBA13_OPEL', 'KBA13_SEG_KLEINWAGEN',
'KBA13_SEG_MINIVANS', 'KBA13_SEG_VAN', 'KBA13_SITZE_5', 'KBA13_VORB_1', 'KBA13_VORB_2',
'KBA13_VW', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'ONLINE_AFFINITAET', 'ORTSGR_KLS9',
'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_GBZ', 'PLZ8_HHZ', 'PRAEGENDE_JUGENDJAHRE','REGIOTYP',
'SEMIO_KAEM', 'SEMIO_VERT', 'ANREDE_KZ', 'ALTERSKATEGORIE_GROB','BALLRAUM',
'CJT_GESAMTTYP', 'D19_BANKEN_DATUM','D19_BANKEN_OFFLINE_DATUM','CAMEO_DEU_2015']
azdias.drop(drop_columns,inplace=True,axis=1)
#looking for nans
rows_nans = azdias.isnull().mean(axis=1);
drop_rows = rows_nans[rows_nans>=0.2].index
azdias.drop(drop_rows,axis=0,inplace=True)
azdias[['CAMEO_DEUG_2015']] = azdias[['CAMEO_DEUG_2015']].replace(['XX','X'],np.NaN)
azdias['CAMEO_DEUG_2015']=azdias['CAMEO_DEUG_2015'].apply(pd.to_numeric)
# finding numeric and categorical columns
Numeric_columns=azdias.select_dtypes(include=np.number).columns.tolist()
categorical_col=set(azdias.columns).difference(set(Numeric_columns))
# numeric cols to numeric
print(categorical_col)
azdias[Numeric_columns]=azdias[Numeric_columns].apply(pd.to_numeric)
azdias=azdias.fillna(azdias.mode().iloc[0])
azdias=pd.get_dummies(azdias)
# finding numeric and categorical columns
Numeric_columns=azdias.select_dtypes(include=np.number).columns.tolist()
categorical_col=set(azdias.columns).difference(set(Numeric_columns))
# numeric cols to numeric
print(categorical_col)
azdias.shape
#looking for nans
rows_nans = azdias.isnull().mean(axis=1)
plt.hist(rows_nans)
plt.ylabel('# of rows')
plt.xlabel('prop. of missing values')
#looking for nans
column_nans = azdias.isnull().mean()
plt.hist(column_nans, bins = np.arange(0,1+.05,.05))
plt.ylabel('# of features')
plt.xlabel('prop. of missing values')
azdias.to_csv('azdias.csv')
drop_columns =['CAMEO_DEU_2015', 'CAMEO_DEUG_2015', 'CAMEO_INTL_2015', 'D19_LETZTER_KAUF_BRANCHE', 'EINGEFUEGT_AM','OST_WEST_KZ']
['D19_KINDERARTIKEL', 'D19_LEBENSMITTEL', 'D19_NAHRUNGSERGAENZUNG',
'D19_BUCH_CD', 'CJT_KATALOGNUTZER', 'KONSUMZELLE', 'CJT_TYP_6',
'D19_LOTTO', 'D19_VERSI_OFFLINE_DATUM', 'D19_WEIN_FEINKOST', 'D19_VERSAND_REST',
'ALTER_KIND3', 'KK_KUNDENTYP', 'D19_GARTEN', 'D19_FREIZEIT', 'D19_TECHNIK',
'D19_BANKEN_LOKAL', 'D19_TIERARTIKEL', 'D19_VERSI_ONLINE_DATUM', 'D19_BEKLEIDUNG_GEH',
'D19_SAMMELARTIKEL', 'CJT_TYP_1', 'D19_VERSI_ONLINE_QUOTE_12', 'ARBEIT', 'KBA13_ANTG2',
'ANZ_KINDER', 'D19_TELKO_REST', 'ALTER_KIND4', 'D19_TELKO_ONLINE_QUOTE_12', 'HH_DELTA_FLAG',
'KBA13_KMH_210', 'D19_RATGEBER', 'CJT_TYP_3', 'D19_VERSI_DATUM', 'UNGLEICHENN_FLAG',
'VERDICHTUNGSRAUM', 'STRUKTURTYP', 'LNR', 'VK_DHT4A', 'D19_HAUS_DEKO', 'D19_DIGIT_SERV',
'D19_VOLLSORTIMENT', 'D19_DROGERIEARTIKEL', 'EINGEFUEGT_AM', 'RT_SCHNAEPPCHEN', 'ALTER_KIND2',
'D19_KONSUMTYP_MAX', 'GEMEINDETYP', 'KBA13_ANTG4', 'D19_SOZIALES', 'D19_BANKEN_GROSS',
'D19_HANDWERK', 'KOMBIALTER', 'KBA13_ANTG3', 'D19_BILDUNG', 'KBA13_ANTG1', 'AKT_DAT_KL',
'D19_VERSICHERUNGEN', 'KBA13_CCM_1401_2500', 'VHN', 'KBA13_GBZ', 'MOBI_RASTER',
'D19_BANKEN_REST', 'VK_DISTANZ', 'VHA', 'KBA13_HHZ', 'CJT_TYP_2', 'D19_TELKO_MOBILE',
'D19_ENERGIE', 'D19_SONSTIGE', 'EINGEZOGENAM_HH_JAHR', 'D19_REISEN', 'D19_BANKEN_DIREKT',
'CJT_TYP_5', 'VK_ZG11', 'ALTERSKATEGORIE_FEIN', 'UMFELD_ALT', 'CAMEO_INTL_2015', 'ALTER_KIND1',
'SOHO_KZ', 'D19_BIO_OEKO', 'D19_KOSMETIK', 'D19_BEKLEIDUNG_REST', 'RT_KEIN_ANREIZ', 'D19_SCHUHE',
'RT_UEBERGROESSE', 'FIRMENDICHTE', 'ANZ_STATISTISCHE_HAUSHALTE', 'UMFELD_JUNG', 'EXTSEL992',
'DSL_FLAG', 'CJT_TYP_4', 'D19_LETZTER_KAUF_BRANCHE', 'KBA13_BAUMAX',
'AGER_TYP', 'ALTER_HH', 'D19_BANKEN_ANZ_12', 'D19_BANKEN_ANZ_24','D19_BANKEN_ONLINE_QUOTE_12',
'D19_GESAMT_ANZ_12', 'D19_GESAMT_ANZ_24','D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP',
'D19_TELKO_ANZ_12','D19_TELKO_ANZ_24', 'D19_VERSAND_ANZ_12', 'D19_VERSAND_ANZ_24',
'D19_VERSAND_ONLINE_QUOTE_12', 'D19_VERSI_ANZ_12','D19_VERSI_ANZ_24','GREEN_AVANTGARDE',
'KBA05_AUTOQUOT', 'KBA05_BAUMAX', 'KONSUMNAEHE','LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB',
'LP_STATUS_FEIN','LP_STATUS_GROB', 'MOBI_REGIO', 'PLZ8_BAUMAX', 'RELAT_AB', 'TITEL_KZ',
'D19_BANKEN_ONLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM',
'D19_VERSAND_ONLINE_DATUM', 'FINANZ_SPARER',
'FINANZ_UNAUFFAELLIGER', 'FINANZ_VORSORGER', 'INNENSTADT', 'KBA05_KRSHERST1', 'KBA05_KRSHERST2',
'KBA05_KRSHERST3', 'KBA05_KW2', 'KBA05_SEG2', 'KBA05_SEG5', 'KBA05_SEG9', 'KBA05_ZUL4',
'KBA13_BJ_2000', 'KBA13_BJ_2006', 'KBA13_HALTER_25', 'KBA13_HALTER_30', 'KBA13_HALTER_35',
'KBA13_HALTER_40', 'KBA13_HALTER_50', 'KBA13_HALTER_55', 'KBA13_HALTER_66', 'KBA13_HERST_BMW_BENZ',
'KBA13_HERST_SONST', 'KBA13_KMH_140', 'KBA13_KMH_211', 'KBA13_KMH_250', 'KBA13_KRSHERST_FORD_OPEL',
'KBA13_KW_30', 'KBA13_KW_61_120', 'KBA13_MERCEDES', 'KBA13_OPEL', 'KBA13_SEG_KLEINWAGEN',
'KBA13_SEG_MINIVANS', 'KBA13_SEG_VAN', 'KBA13_SITZE_5', 'KBA13_VORB_1', 'KBA13_VORB_2',
'KBA13_VW', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'ONLINE_AFFINITAET', 'ORTSGR_KLS9',
'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_GBZ', 'PLZ8_HHZ', 'PRAEGENDE_JUGENDJAHRE','REGIOTYP',
'SEMIO_KAEM', 'SEMIO_VERT', 'ANREDE_KZ', 'ALTERSKATEGORIE_GROB','BALLRAUM',
'CJT_GESAMTTYP', 'D19_BANKEN_DATUM','D19_BANKEN_OFFLINE_DATUM','CAMEO_DEU_2015',
'CUSTOMER_GROUP', 'PRODUCT_GROUP','ONLINE_PURCHASE']
customers.drop(drop_columns,inplace=True,axis=1)
#looking for nans
rows_nans = customers.isnull().mean(axis=1);
drop_rows = rows_nans[rows_nans>=0.2].index
customers.drop(drop_rows,axis=0,inplace=True)
customers[['CAMEO_DEUG_2015']] = customers[['CAMEO_DEUG_2015']].replace(['XX','X'],np.NaN)
customers['CAMEO_DEUG_2015']=customers['CAMEO_DEUG_2015'].apply(pd.to_numeric)
# finding numeric and categorical columns
Numeric_columns=customers.select_dtypes(include=np.number).columns.tolist()
categorical_col=set(customers.columns).difference(set(Numeric_columns))
# numeric cols to numeric
print(categorical_col)
customers[Numeric_columns]=customers[Numeric_columns].apply(pd.to_numeric)
customers=customers.fillna(customers.mode().iloc[0])
customers=pd.get_dummies(customers)
# finding numeric and categorical columns
Numeric_columns=customers.select_dtypes(include=np.number).columns.tolist()
categorical_col=set(customers.columns).difference(set(Numeric_columns))
# numeric cols to numeric
print(categorical_col)
customers.shape
#looking for nans
rows_nans = customers.isnull().mean(axis=1)
plt.hist(rows_nans)
plt.ylabel('# of rows')
plt.xlabel('prop. of missing values')
#looking for nans
column_nans = azdias.isnull().mean()
plt.hist(column_nans, bins = np.arange(0,1+.05,.05))
plt.ylabel('# of features')
plt.xlabel('prop. of missing values')
customers.to_csv('customers.csv')
print(azdias.shape)
print(customers.shape)
###Output
(751331, 183)
(135144, 183)
###Markdown
Part 1: Customer Segmentation Report. Using unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany.
###Code
azdias = pd.read_csv('azdias.csv')
azdias.head()
del azdias['Unnamed: 0']
customers = pd.read_csv('customers.csv')
customers.head()
del customers['Unnamed: 0']
scaler = StandardScaler()
azdias_scaled = scaler.fit_transform(azdias)
customers_scaled = scaler.transform(customers)
pca=PCA()
pca.fit(azdias_scaled)
plt.figure(figsize=(10,8))
plt.plot(pca.explained_variance_ratio_.cumsum())
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
As a rule of thumb, retaining more than 80% of the explained variance is necessary. So I am taking 100 components, which captures about 90% of the variance.
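A quick way to back this choice programmatically is to read the component count straight off the cumulative explained-variance curve; the minimal sketch below assumes the full `PCA()` fit from the cell above is still available as `pca`.

```python
# Minimal sketch: smallest number of components reaching 90% cumulative explained variance,
# using the full PCA fit from the previous cell.
import numpy as np

cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components_90 = int(np.argmax(cum_var >= 0.90)) + 1
print('Components needed for 90% of the variance:', n_components_90)
```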
###Code
pca = PCA(n_components=100)
pca_azdias = pca.fit_transform(azdias_scaled)
pca_azdias = pd.DataFrame(pca_azdias)
pca_customers = pca.transform(customers_scaled)
pca_customers = pd.DataFrame(pca_customers)
# shape of reduced components
print(pca_azdias.shape)
print(pca_customers.shape)
pca
def map_weights(pca, cols, component):
comp = pd.DataFrame({'feature names': cols, 'weight': pca.components_[component], 'weight_abs': np.abs(pca.components_[component])})
return comp.sort_values(by=['weight_abs'], ascending=False)
comp_0 = map_weights(pca, azdias.columns, 0)
comp_0.head()
###Output
_____no_output_____
###Markdown
So in Germany, the share of cars and vans (like BMW) in the microcell is increasing.
###Code
comp_1 = map_weights(pca, azdias.columns, 1)
comp_1.head()
###Output
_____no_output_____
###Markdown
People in East Germany are more likely to own middle-class cars compared to those in West Germany.
###Code
comp_2 = map_weights(pca, azdias.columns, 2)
comp_2.head()
###Output
_____no_output_____
###Markdown
In PLZ8 regions, the number of cells and buildings is increasing, along with the share of cars, and most car owners are 46 to 60 years old.
###Code
pca_azdias_sample=pca_azdias.sample(20000)
wcss=[]
score=[]
for i in range(1,21):
#print(i)
kmeans_pca=KMeans(n_clusters=i,init='k-means++',random_state=42)
model=kmeans_pca.fit(pca_azdias_sample)
wcss.append(model.inertia_)
plt.figure(figsize=(14,6))
plt.plot(range(1,21),wcss,marker='o',linestyle='--')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
I think **5** is the right number of clusters for KMeans.
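To make the elbow reading a little more concrete, the relative drop in WCSS for each additional cluster can be printed from the `wcss` list computed above; this is just a sketch, and the visual elbow in the plot remains the main criterion.

```python
# Minimal sketch: percentage decrease in WCSS for each additional cluster,
# based on the `wcss` list from the loop above.
import numpy as np

wcss_arr = np.array(wcss)
pct_drop = 100 * (wcss_arr[:-1] - wcss_arr[1:]) / wcss_arr[:-1]
for k, drop in zip(range(2, len(wcss_arr) + 1), pct_drop):
    print('k={}: WCSS drops by {:.1f}% relative to k={}'.format(k, drop, k - 1))
# The drops should level off around the chosen k, here taken to be 5.
```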
###Code
kmeans = KMeans(5)
model = kmeans.fit(pca_azdias)
prediction_azdias = model.predict(pca_azdias)
print(prediction_azdias)
len(prediction_azdias)
prediction_customers = model.predict(pca_customers)
print(prediction_customers)
len(prediction_customers)
cust_dict = {}
for c in set(prediction_customers):
cust_dict[c] = sum(prediction_customers == c)
print(cust_dict)
y_dict = {}
for c in set(prediction_azdias):
y_dict[c] = sum(prediction_azdias == c)
print(y_dict)
{i: np.where(kmeans.labels_ == i)[0] for i in range(kmeans.n_clusters)}
kmeans.labels_
kmeans.n_clusters
azdias['labels'] = kmeans.labels_
azdias.shape
kmeans.labels_.shape
azdias['labels']
cluster1 = azdias[azdias['labels']==1]
cluster1.shape
cluster2 = azdias[azdias['labels']==2]
cluster2.shape
cluster2.describe()
cluster1.describe()
###Output
_____no_output_____
###Markdown
There is no possible way to compare these clusters. Part 2: Supervised Learning ModelNow that we've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld.
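Directly eyeballing the two `describe()` tables is admittedly hard to interpret. One option, analogous to the inverse-transform comparison used earlier in this document, would be to map the KMeans centroids back into the original feature space and look at the features that separate the clusters most; the sketch below assumes `kmeans`, `pca`, `scaler`, and `azdias` from the cells above.

```python
# Minimal sketch: map the KMeans centroids (which live in PCA space) back to the original
# feature scale so the clusters can be compared feature by feature.
import pandas as pd

feature_cols = azdias.drop('labels', axis=1).columns
centroids_original = scaler.inverse_transform(pca.inverse_transform(kmeans.cluster_centers_))
centroid_df = pd.DataFrame(centroids_original, columns=feature_cols)

# Features whose centroid values differ most strongly across the five clusters
centroid_df.std().sort_values(ascending=False).head(10)
```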
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
%matplotlib inline
train_dataset = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv',sep=';')
train_dataset.drop(['CAMEO_DEU_2015', 'CAMEO_DEUG_2015', 'CAMEO_INTL_2015', 'D19_LETZTER_KAUF_BRANCHE', 'EINGEFUEGT_AM','OST_WEST_KZ'],axis=1,inplace=True)
X = train_dataset.drop('RESPONSE',axis=1)
y = train_dataset['RESPONSE']
X_train,X_test,y_train,y_test = train_test_split(X,y,random_state=42,test_size=0.3)
X_train,X_val,y_train,y_val = train_test_split(X_train,y_train,random_state=42,test_size=0.3)
import xgboost
from xgboost import XGBRegressor
model = XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bytree=0.7, eval_metric='auc', gamma=1.0,
learning_rate=0.01, max_delta_step=0, max_depth=7,
min_child_weight=1, missing=None, n_estimators=250, n_jobs=-1,
nthread=None, objective='binary:logistic', random_state=42,
reg_alpha=1e-07, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=1, subsample=0.5)
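# Note: with objective='binary:logistic', this regressor's predict() returns probabilities in (0, 1),
# which is what roc_auc_score expects below; an XGBClassifier with predict_proba() would be the more
# conventional choice for a binary target.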
model.fit(X_train,y_train)
from sklearn.metrics import roc_auc_score
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
pred_val = model.predict(X_val)
roc_auc_score(y_train,pred_train)
roc_auc_score(y_test,pred_test)
roc_auc_score(y_val,pred_val)
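# Illustrative sketch (not run here): a cross-validated AUC gives a more stable estimate than a single split, e.g.
# from sklearn.model_selection import cross_val_score
# cross_val_score(model, X, y, cv=5, scoring='roc_auc').mean()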
# Calculate feature importances
importances = model.feature_importances_
# Sort feature importances in descending order
indices = np.argsort(importances)[::-1][:5]
# Rearrange feature names so they match the sorted feature importances
names = [X.columns[i] for i in indices][:5]
# Create plot
plt.figure()
# Create plot title
plt.title("Feature Importance")
# Add bars
plt.bar(range(5), importances[indices])
# Add feature names as x-axis labels
plt.xticks(range(5) ,names, rotation=90)
# Show plot
plt.show()
###Output
_____no_output_____
###Markdown
Part 3: Kaggle Competition
###Code
testdata = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';')
testdata.drop(['CAMEO_DEU_2015', 'CAMEO_DEUG_2015', 'CAMEO_INTL_2015', 'D19_LETZTER_KAUF_BRANCHE', 'EINGEFUEGT_AM','OST_WEST_KZ'],axis=1,inplace=True)
testpred = model.predict(testdata)
kaggle_sub = pd.DataFrame(index=testdata['LNR'].astype('int32'), data=testpred)
kaggle_sub.rename(columns={0: "RESPONSE"}, inplace=True)
kaggle_sub.to_csv('kagglesubmission.csv')
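# Quick sanity check of the submission before uploading (illustrative): row count and a peek at the predictions.
print(kaggle_sub.shape)
kaggle_sub.head()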
###Output
_____no_output_____ |
notebooks/raw_notebooks/AF_Climatology_Compute.ipynb | ###Markdown
Calculate 12-Month Climatology for intpp-hist & Decadal Climatologies for intpp-ssp585. Andrea Fassbender, 10/17/2019
###Code
%matplotlib inline
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# Load Historical Data Files
ds_lon = xr.open_dataset('Data/ds_lon.nc')
ds_lat = xr.open_dataset('Data/ds_lat.nc')
hist = xr.open_dataset('Data/hist_intpp_mask.nc')
ds_hist = hist.squeeze()
#ds_hist_oa = xr.open_dataset('Data/hist_oa_mask.nc')
#note: to merge datasets into array
#hist = xr.merge([ds_hist_oa.squeeze(), ds_hist.squeeze()])
ds_hist
###Output
_____no_output_____
###Markdown
Historical 12-Month Climatology
###Code
#Calculate climatologies and StDevs
clim_m_hist = ds_hist.groupby('time.month').mean('time')
clim_std_hist = ds_hist.groupby('time.month').std('time')
clim_m_hist
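# Illustrative note: the monthly climatology can also be used to form anomalies, e.g.
# hist_anom = ds_hist.groupby('time.month') - clim_m_hist
# (xarray matches each timestamp to its month in the climatology).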
# Load Riley's function (Hilary sent me this via slack direct message)
# note: the function operates in the order of the variable indices in the dataset, and thus returns
# lat/lon, even though you enter your inputs as lon/lat
def find_indices(xgrid, ygrid, xpoint, ypoint):
"""Returns the i, j index for a latitude/longitude point on a grid.
.. note::
Longitude and latitude points (``xpoint``/``ypoint``) should be in the same
range as the grid itself (e.g., if the longitude grid is 0-360, should be
200 instead of -160).
Args:
xgrid (array_like): Longitude meshgrid (shape M, N)
ygrid (array_like): Latitude meshgrid (shape M, N)
xpoint (int or double): Longitude of point searching for on grid.
ypoint (int or double): Latitude of point searching for on grid.
Returns:
i, j (int):
Keys for the inputted grid that lead to the lat/lon point the user is
seeking.
Examples:
>>> import esmtools as et
>>> import numpy as np
>>> x = np.linspace(0, 360, 37)
>>> y = np.linspace(-90, 90, 19)
>>> xx, yy = np.meshgrid(x, y)
>>> xp = 20
>>> yp = -20
>>> i, j = et.spatial.find_indices(xx, yy, xp, yp)
>>> print(xx[i, j])
20.0
>>> print(yy[i, j])
-20.0
"""
dx = xgrid - xpoint
dy = ygrid - ypoint
reduced_grid = abs(dx) + abs(dy)
min_ix = np.nanargmin(reduced_grid)
i, j = np.unravel_index(min_ix, reduced_grid.shape)
return i, j
#historical
fig = plt.figure(figsize=(14, 3))
# Irregular levels to illustrate the use of a proportional colorbar
levels = np.arange(0, 150, 5)
levels2 = np.arange(0, 30, 1)
ax = fig.add_subplot(1, 3, 1)
(clim_m_hist.intpp*(10**8)).isel(month=4).plot(levels=levels)
ax.set_ylabel('Latitude')
ax.set_xlabel('Longitude')
plt.title('April IntPP Mean')
ax = fig.add_subplot(1, 3, 2)
(clim_std_hist.intpp*(10**8)).isel(month=4).plot(levels=levels2)
ax.set_ylabel('Latitude')
ax.set_xlabel('Longitude')
plt.title('April IntPP StDev')
# Identify indices associated with OSP, KEO, and NA
[a,b]=find_indices(ds_lon.lon, ds_lat.lat, 360-145, 50)
[a2,b2]=find_indices(ds_lon.lon, ds_lat.lat, 144.6, 32.4)
[a3,b3]=find_indices(ds_lon.lon, ds_lat.lat, 360-47.2, 50)
[a4,b4]=find_indices(ds_lon.lon, ds_lat.lat, 100, -45)
ax = fig.add_subplot(1, 3, 3)
var = clim_m_hist.intpp.isel(nlon=b,nlat=a)*10**8
var2 = clim_m_hist.intpp.isel(nlon=b2,nlat=a2)*10**8
var3 = clim_m_hist.intpp.isel(nlon=b3,nlat=a3)*10**8
var4 = clim_m_hist.intpp.isel(nlon=b4,nlat=a4)*10**8
#var_st = hist.clim_std[:, a, b].squeeze()*10**8
# the above code (vs. the below) is a way to remove the member dimension and select
# all times at the a and b indices for lat/lon using slice
var_st = clim_std_hist.intpp.isel(nlon=b,nlat=a)*10**8
var2_st = clim_std_hist.intpp.isel(nlon=b2,nlat=a2)*10**8
var3_st = clim_std_hist.intpp.isel(nlon=b3,nlat=a3)*10**8
var4_st = clim_std_hist.intpp.isel(nlon=b4,nlat=a4)*10**8
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.plot(var,'k',label='OSP')
plt.plot(var+var_st,':k')
plt.plot(var-var_st,':k')
plt.plot(var2,'r',label='KEO')
plt.plot(var2+var2_st,':r')
plt.plot(var2-var2_st,':r')
plt.plot(var3,'b',label='NA')
plt.plot(var3+var3_st,':b')
plt.plot(var3-var3_st,':b')
plt.plot(var4,'y',label='SO')
plt.plot(var4+var4_st,':y')
plt.plot(var4-var4_st,':y')
plt.legend()
plt.title('Location Comparisons')
###Output
_____no_output_____
###Markdown
SSP585 Decadal Climatologies
###Code
# Load SSP585 Data Files
#ds_85_oa = xr.open_dataset('Data/ssp585_oa_mask.nc')
ssp85 = xr.open_dataset('Data/ssp585_intpp_mask.nc')
ds_85 = ssp85.squeeze()
#ds_85.squeeze().intpp
#ds_85.squeeze().intpp.isel(time=3).plot()
ds_85
# # Create xarrays to save mean and stdev climatologies into (else you get arrays with no coordainte info...)
# # Create string of years bounding decades to loop through
# dlist = np.arange(2020,2100,10)
# # Pre-allocate: length of dlist = number of decades; 12 = months in climatology; lat/lon from ds_85
# pre = np.zeros((len(dlist), 12, ds_85.intpp.shape[1], ds_85.intpp.shape[2])) * np.nan
# times = pd.date_range(start=2025,end=2195,periods=len(dlist))
# # Create empty data arrays
# #clim_m_85 = xr.DataArray(pre, coords=[times, np.arange(1,13), ds_85.nlat, ds_85.nlon], dims=['time','month', 'lat', 'lon'])
# #clim_std_85 = xr.DataArray(pre, coords=[times, np.arange(1,13), ds_85.nlat, ds_85.nlon], dims=['time','month', 'lat', 'lon'])
# Calculate climatologies for each decade #OMG...we're doing it!!!
# Create string of years bounding decades to loop through
dlist = np.arange(2020,2100,10)
times = pd.date_range(start=2025,end=2195,periods=len(dlist))
# Start the loop. note, using enumerate allows you to link i to the index (0,1,2) and t to the values of dlist
for i, t in enumerate(dlist):
print(str(t)+'-01-01', str(t+9)+'-12-31')
ds_85_sub = ds_85.sel(time=slice(str(t)+'-01-01', str(t+9)+'-12-31'))
m_85_sub = ds_85_sub.groupby('time.month').mean('time')
std_85_sub = ds_85_sub.groupby('time.month').std('time')
if i == 0:
clim_m_85 = m_85_sub
clim_std_85 = std_85_sub
else:
clim_m_85 = xr.concat([clim_m_85, m_85_sub],dim='time')
clim_std_85 = xr.concat([clim_std_85, std_85_sub],dim='time')
clim_m_85['time'] = xr.DataArray((dlist), dims=['time'])
clim_std_85['time'] = xr.DataArray((dlist), dims=['time'])
# clim_m_85[i,:,:,:] = m_85_sub
# clim_std_85[i,:,:,:] = std_85_sub
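# Illustrative alternative: the same decadal climatology can be built without the if/else by
# collecting each decade in a list and concatenating once, e.g.
# decades = [ds_85.sel(time=slice(f'{t}-01-01', f'{t+9}-12-31')).groupby('time.month').mean('time') for t in dlist]
# clim_m_85_alt = xr.concat(decades, dim=pd.Index(dlist, name='time'))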
type(clim_m_85)
clim_std_85
#len(d_clim[:,1,1,1])
# plot to make sure things worked
fig = plt.figure(figsize=(12, 5))
#set colorbar intervals
levels = np.arange(0, 160, 10)
for i in range(clim_m_85.intpp.shape[0]):
ax = fig.add_subplot(2, 4, i+1)
#plt.plot(d_clim[i,5,:,:]*(10**8))
(clim_m_85.intpp*(10**8)).isel(time=i,month=4).plot(levels=levels)
    if i == 0:
        ax.set_ylabel('Latitude')
    elif i == 4:
        ax.set_ylabel('Latitude')
if i<4:
plt.title('April ' + str(dlist[i]) + ' IntNPP')
elif i>3:
ax.set_xlabel('Longitude')
plt.title('')
# plot to make sure things worked
fig = plt.figure(figsize=(12, 5))
#set colorbar intervals
levels = np.arange(0, 160, 10)
for i in range(clim_m_85.intpp.shape[0]):
ax = fig.add_subplot(2, 4, i+1)
#plt.plot(d_clim[i,10,:,:]*(10**8))
(clim_m_85.intpp*(10**8)).isel(time=i,month=10).plot(levels=levels)
ax.set_ylabel('')
ax.set_xlabel('')
    if i == 0:
        ax.set_ylabel('Latitude')
    elif i == 4:
        ax.set_ylabel('Latitude')
if i<4:
plt.title('October ' + str(dlist[i]) + ' IntNPP')
elif i>3:
ax.set_xlabel('Longitude')
plt.title('')
###Output
_____no_output_____
###Markdown
Make figs above for delta(IntPP) (ssp585-hist) by decade (may + oct)
###Code
clim_m_85
clim_m_hist
var = clim_m_85.isel(time=i,month=5).squeeze() - clim_m_hist.intpp.isel(month=5).squeeze()
var.intpp.squeeze()
# Plot results
fig = plt.figure(figsize=(15, 4))
#set colorbar intervals
levels = np.arange(-30,32, 2)
for i in range(clim_m_85.intpp.shape[0]):
ax = fig.add_subplot(2, 4, i+1)
# Decadal climatology minus historical
var = clim_m_85.isel(time=i,month=5).squeeze() - clim_m_hist.intpp.isel(month=5).squeeze()
(var.intpp*(10**8)).plot(levels=levels)
ax.set_ylabel('')
ax.set_xlabel('')
    if i == 0:
        ax.set_ylabel('Latitude')
    elif i == 4:
        ax.set_ylabel('Latitude')
if i<4:
plt.title('May ' + str(dlist[i]) + '-Hist $\Delta$IntNPP')
elif i>3:
ax.set_xlabel('Longitude')
plt.title('')
# Plot results
fig = plt.figure(figsize=(15, 4))
#set colorbar intervals
levels = np.arange(-30,32, 2)
for i in range(clim_m_85.intpp.shape[0]):
ax = fig.add_subplot(2, 4, i+1)
# Decadal climatology minus historical
var = clim_m_85.isel(time=i,month=10).squeeze() - clim_m_hist.intpp.isel(month=10).squeeze()
(var.intpp*(10**8)).plot(levels=levels)
ax.set_ylabel('')
ax.set_xlabel('')
if i == 0:
ax.set_ylabel('Latitude')
elif i==4:
ax.set_ylabel('Latitude')
if i<4:
plt.title('Oct. ' + str(dlist[i]) + '-Hist $\Delta$IntNPP')
elif i>3:
ax.set_xlabel('Longitude')
plt.title('')
###Output
_____no_output_____
###Markdown
Look at Specific Locations
###Code
fig = plt.figure(figsize=(12, 6))
# Identify indices associated with OSP, KEO, NA, and SO
[a,b]=find_indices(ds_lon.lon, ds_lat.lat, 360-145, 50)#OSP
[a2,b2]=find_indices(ds_lon.lon, ds_lat.lat, 144.6, 32.4)#KEO
[a3,b3]=find_indices(ds_lon.lon, ds_lat.lat, 360-47.2, 50)#NA
[a4,b4]=find_indices(ds_lon.lon, ds_lat.lat, 100, -45)#SO
var = clim_m_hist.intpp.isel(nlon=b,nlat=a)*10**8
var2 = clim_m_hist.intpp.isel(nlon=b2,nlat=a2)*10**8
var3 = clim_m_hist.intpp.isel(nlon=b3,nlat=a3)*10**8
var4 = clim_m_hist.intpp.isel(nlon=b4,nlat=a4)*10**8
var_st = clim_std_hist.intpp.isel(nlon=b,nlat=a)*10**8
var2_st = clim_std_hist.intpp.isel(nlon=b2,nlat=a2)*10**8
var3_st = clim_std_hist.intpp.isel(nlon=b3,nlat=a3)*10**8
var4_st = clim_std_hist.intpp.isel(nlon=b4,nlat=a4)*10**8
ax = fig.add_subplot(2,2,1)
plt.plot(var,'k',label='Hist')
plt.plot(var+var_st,':k')
plt.plot(var-var_st,':k')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('OSP')
plt.ylabel('mol m-2 s-1')
ax = fig.add_subplot(2,2,2)
plt.plot(var2,'k',label='Hist')
plt.plot(var2+var2_st,':k')
plt.plot(var2-var2_st,':k')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('KEO')
plt.ylabel('mol m-2 s-1')
ax = fig.add_subplot(2,2,3)
plt.plot(var3,'k',label='Hist')
plt.plot(var3+var3_st,':k')
plt.plot(var3-var3_st,':k')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('North Atlantic')
plt.ylabel('mol m-2 s-1')
ax = fig.add_subplot(2,2,4)
plt.plot(var4,'k',label='Hist')
plt.plot(var4+var4_st,':k')
plt.plot(var4-var4_st,':k')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('Southern Ocean')
plt.ylabel('mol m-2 s-1')
var = clim_m_85.intpp.isel(time=7,nlon=b,nlat=a)*10**8
var2 = clim_m_85.intpp.isel(time=7,nlon=b2,nlat=a2)*10**8
var3 = clim_m_85.intpp.isel(time=7,nlon=b3,nlat=a3)*10**8
var4 = clim_m_85.intpp.isel(time=7,nlon=b4,nlat=a4)*10**8
var_st = clim_std_85.intpp.isel(time=7,nlon=b,nlat=a)*10**8
var2_st = clim_std_85.intpp.isel(time=7,nlon=b2,nlat=a2)*10**8
var3_st = clim_std_85.intpp.isel(time=7,nlon=b3,nlat=a3)*10**8
var4_st = clim_std_85.intpp.isel(time=7,nlon=b4,nlat=a4)*10**8
ax = fig.add_subplot(2,2,1)
plt.plot(var,'r',label='2090s')
plt.plot(var+var_st,':r')
plt.plot(var-var_st,':r')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('OSP')
plt.ylabel('mol m-2 s-1')
ax = fig.add_subplot(2,2,2)
plt.plot(var2,'r',label='2090s')
plt.plot(var2+var2_st,':r')
plt.plot(var2-var2_st,':r')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('KEO')
plt.ylabel('mol m-2 s-1')
ax = fig.add_subplot(2,2,3)
plt.plot(var3,'r',label='2090s')
plt.plot(var3+var3_st,':r')
plt.plot(var3-var3_st,':r')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('North Atlantic')
plt.ylabel('mol m-2 s-1')
ax = fig.add_subplot(2,2,4)
plt.plot(var4,'r',label='2090s')
plt.plot(var4+var4_st,':r')
plt.plot(var4-var4_st,':r')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.legend()
plt.title('Southern Ocean')
plt.ylabel('mol m-2 s-1')
###Output
/ncar/usr/jupyterhub/envs/cmip6-201910a/lib/python3.7/site-packages/ipykernel_launcher.py:71: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
/ncar/usr/jupyterhub/envs/cmip6-201910a/lib/python3.7/site-packages/ipykernel_launcher.py:81: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
/ncar/usr/jupyterhub/envs/cmip6-201910a/lib/python3.7/site-packages/ipykernel_launcher.py:91: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
/ncar/usr/jupyterhub/envs/cmip6-201910a/lib/python3.7/site-packages/ipykernel_launcher.py:101: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
|
code/빅데이터경진대회_이유민/어린이_보행자_교통사고_다발_지역.ipynb | ###Markdown
Areas with frequent child pedestrian traffic accidents https://www.data.go.kr/data/15003493/fileData.do 0. Module imports and preprocessing
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
cd C:\Users\82104\Desktop\4-2\uos_bigdata_contest
df = pd.read_csv('./도로교통공단_교통사고 다발지역_20200714.csv', encoding='ansi')
df
df.info() # nice data with no missing values~
df['사고년도'].value_counts() # 2020 is missing..? proceeding anyway for now
df['사고유형구분'].value_counts() # let's pull out the pedestrian-zone & school-zone child accidents
df[df['시군구명']=='서울특별시 종로구1'] # lat/lon differ by less than 0.01 (relative to Jongno-gu) > group these under Seoul and extract them
seoul = df[df['시도명']=='서울특별시']
seoul.reset_index(inplace=True)
seoul
skid=seoul[seoul['사고유형구분'].isin(['보행어린이','스쿨존어린이'])] # keep both child-accident types; the original ('보행어린이' or '스쿨존어린이') evaluated to '보행어린이' only
skid.reset_index(inplace=True)
skid # child accident areas within Seoul
skid['시군구명'].value_counts()
len(skid['시군구명'].value_counts())
# let's tidy up the districts..
skid['서울구'] = 'a'
# there are no fewer than 25 districts......!
skid.loc[skid['시군구명'].str.contains('강남구'), ['서울구']] = '강남구'
skid.loc[skid['시군구명'].str.contains('강동구'), ['서울구']] = '강동구'
skid.loc[skid['시군구명'].str.contains('강서구'), ['서울구']] = '강서구'
skid.loc[skid['시군구명'].str.contains('강북구'), ['서울구']] = '강북구'
skid.loc[skid['시군구명'].str.contains('관악구'), ['서울구']] = '관악구'
skid.loc[skid['시군구명'].str.contains('광진구'), ['서울구']] = '광진구'
skid.loc[skid['시군구명'].str.contains('구로구'), ['서울구']] = '구로구'
skid.loc[skid['시군구명'].str.contains('금천구'), ['서울구']] = '금천구'
skid.loc[skid['시군구명'].str.contains('노원구'), ['서울구']] = '노원구'
skid.loc[skid['시군구명'].str.contains('동대문구'), ['서울구']] = '동대문구'
skid.loc[skid['시군구명'].str.contains('도봉구'), ['서울구']] = '도봉구'
skid.loc[skid['시군구명'].str.contains('동작구'), ['서울구']] = '동작구'
skid.loc[skid['시군구명'].str.contains('마포구'), ['서울구']] = '마포구'
skid.loc[skid['시군구명'].str.contains('서대문구'), ['서울구']] = '서대문구'
skid.loc[skid['시군구명'].str.contains('성동구'), ['서울구']] = '성동구'
skid.loc[skid['시군구명'].str.contains('성북구'), ['서울구']] = '성북구'
skid.loc[skid['시군구명'].str.contains('서초구'), ['서울구']] = '서초구'
skid.loc[skid['시군구명'].str.contains('송파구'), ['서울구']] = '송파구'
skid.loc[skid['시군구명'].str.contains('영등포구'), ['서울구']] = '영등포구'
skid.loc[skid['시군구명'].str.contains('용산구'), ['서울구']] = '용산구'
skid.loc[skid['시군구명'].str.contains('양천구'), ['서울구']] = '양천구'
skid.loc[skid['시군구명'].str.contains('은평구'), ['서울구']] = '은평구'
skid.loc[skid['시군구명'].str.contains('종로구'), ['서울구']] = '종로구'
skid.loc[skid['시군구명'].str.contains('중구'), ['서울구']] = '중구'
skid.loc[skid['시군구명'].str.contains('중랑구'), ['서울구']] = '중랑구'
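# Illustrative alternative: the 25 assignments above could be generated by a loop over the district names, e.g.
# for gu in ['강남구','강동구','강서구','강북구','관악구','광진구','구로구','금천구','노원구','동대문구','도봉구','동작구','마포구',
#            '서대문구','성동구','성북구','서초구','송파구','영등포구','용산구','양천구','은평구','종로구','중구','중랑구']:
#     skid.loc[skid['시군구명'].str.contains(gu), '서울구'] = gu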
len(skid[skid['서울구']=='a']) # fortunately everything was mapped cleanly ^_^
skid
# create a severity feature used to set the circle radius
skid['심각도'] = skid['발생건수']+skid['사상자수'] + skid['사망자수'] + skid['중상자수'] + skid['경상자수'] + skid['부상자수']
skid
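# Illustrative note: this severity score weights every count equally; a weighted variant that
# emphasises deaths and serious injuries is one possible refinement (hypothetical column name):
# skid['심각도_가중'] = skid['발생건수'] + 3*skid['사망자수'] + 2*skid['중상자수'] + skid['경상자수'] + skid['부상자수']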
skid.to_csv('./skid.csv')
df = pd.read_csv('./skid.csv')
df
###Output
_____no_output_____
###Markdown
2. Visualizing the coordinates with folium
###Code
import folium
# does the Seoul map render correctly?
lat=37.5502
lon=126.982
map_s = folium.Map(location=[lat, lon], zoom_start=11)
map_s
# import the coordinate reference information
import json
geo_path="./서울시_행정구역_시군구_정보.json" # (coordinate system: WGS1984)
geo_str=json.load(open(geo_path, encoding='utf-8'))
###Output
_____no_output_____
###Markdown
Creating the map
###Code
# 1. create the map object
map_s = folium.Map(location=[37.5502, 126.982], zoom_start=11)
map_s
# latitude/longitude of the accident-prone areas
lat = skid['위도']
lon = skid['경도']
name = skid['사고지역위치명']
###Output
_____no_output_____
###Markdown
Marker icon color values: 'lightgreen', 'darkgreen', 'darkblue', 'cadetblue', 'orange', 'lightred', 'darkred', 'green', 'blue', 'black', 'lightblue', 'white', 'lightgray', 'red', 'pink', 'beige', 'gray', 'purple', 'darkpurple'
###Code
type(skid)
###Output
_____no_output_____
###Markdown
Marker visualization - 1
###Code
# zoom_start: zoom level 1~22
map_osm = folium.Map(location=[37.566651, 126.978428], zoom_start=12)
map_osm
for i in skid.index:
lat = skid.loc[i, '위도']
lon = skid.loc[i, '경도']
name = skid.loc[i, '사고지역위치명']
    # display the extracted info on the map
marker = folium.Marker([lat,lon], popup=name)
marker.add_to(map_osm)
map_osm
#map_osm.save('map_test.html')
###Output
_____no_output_____
###Markdown
Marker visualization - 2
###Code
# zoom_start: zoom level 1~22
map_osm = folium.Map(location=[37.566651, 126.978428], zoom_start=11.3)
map_osm
for i in skid.index:
lat = skid.loc[i, '위도']
lon = skid.loc[i, '경도']
name = skid.loc[i, '사고지역위치명']
ser = skid.loc[i, '심각도']
    # display the extracted info on the map
marker = folium.CircleMarker([lat,lon], radius=int(ser), color='#3186cc', fill_color='#3186cc',alpha=1, popup=name)
marker.add_to(map_osm)
map_osm
#map_osm.save('map_test2.html')
#map_s = folium.Map(location=[37.5502, 126.982], zoom_start=11)
#for i in skid.index:
# folium.Marker([lat[i], lon[i]], popup='region', icon=folium.Icon(color='darkblue')).add_to(map_s)
#map_s
## save to a file
#map_osm3.save('map_osm3.html') # path where the file will be saved
###Output
_____no_output_____ |
cheatsheets/cuML/cuML_TimeSeries.ipynb | ###Markdown
cuML Cheat Sheets sample code(c) 2020 NVIDIA, Blazing SQLDistributed under Apache License 2.0 Imports
###Code
import cudf
import cuml
import numpy as np
import cupy as cp
###Output
_____no_output_____
###Markdown
Create time series dataset
###Code
X = cuml.make_arima(
batch_size=10
, n_obs=100
, order=(2,1,2)
, seasonal_order=(0,1,2,12)
, output_type='cudf'
, random_state=np.random.randint(1e9)
)
X.head()
X.loc[:, 0].to_pandas().plot(kind='line')
###Output
_____no_output_____
###Markdown
--- Time series models--- ExponentialSmoothing()
###Code
exp_smooth = cuml.ExponentialSmoothing(
X
, seasonal='mul'
, seasonal_periods=2
, ts_num=10
)
exp_smooth.fit()
cudf.concat([X, exp_smooth.forecast(4)]).reset_index().loc[:,0].to_pandas().plot(kind='line')
exp_smooth.get_level()
exp_smooth.get_season()
exp_smooth.get_trend()
exp_smooth.score()
###Output
_____no_output_____
###Markdown
tsa.ARIMA()
###Code
arima = cuml.ARIMA(X, (2,1,2), (0,1,2,12), fit_intercept=False)
arima.fit()
arima.forecast(4)
cudf.concat([X, arima.forecast(4)]).reset_index().loc[:,0].to_pandas().plot(kind='line')
df = X.loc[:,0].to_frame(name='observed')
df['pred'] = arima.predict(0, 100).loc[:,0]
df.loc[13:].to_pandas().plot(kind='line')
###Output
[W] [06:02:24.927397] WARNING(`predict`): predictions before 13 are undefined, will be set to NaN
|
Write A Data Science Blog Post - Black Friday.ipynb | ###Markdown
Business UnderstandingBlack Friday is the 'official' kick off to the holiday shopping season, the most important shopping period. The 2018 Black Friday opened a shopping season which became the highest U.S. ecommerce sales day in history with $7.9 billion in revenue. It's important for the sellers to look into the historical sales data and prepare early for the next shopping season so as not to lose ground. I'm going to perform some exploration on the Black Friday Dataset from Kaggle. The tasks or questions I will target are as below:Question 1: Which users spent most during Black Friday, list the top 20 spending usersQuestion 2: How about the user distribution by age group? And also consider genderQuestion 3: Which products are most popular during Black Friday, list the top 20Question 4: Look at the users again, this time focus on grouping by occupation in different citiesQuestion 5: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs PurchaseUnderstanding these questions may provide some advice for the sellers to better understand customer purchase behaviour against different products so that the seller can prepare well for the next shopping season. Data UnderstandingThis project will use the Black Friday Dataset from Kaggle, which is a sample of the transactions made in a retail store. Below are the steps to look at and understand the dataset.
###Code
# Read in the Complete Dataset
BlackFriday_Dataset = pd.read_csv('./BlackFriday.csv')
BlackFriday_Dataset.head()
# Get the Basic info of the dataset
BlackFriday_Dataset.describe()
BlackFriday_Dataset.info()
num_rows = BlackFriday_Dataset.shape[0] #Provide the number of rows in the dataset
num_cols = BlackFriday_Dataset.shape[1] #Provide the number of columns in the dataset
print("Row number: {}".format(num_rows))
print("Column number: {}".format(num_cols))
# To check the column names in the dataset
BlackFriday_Dataset.columns
###Output
_____no_output_____
###Markdown
Prepare DataSome data preparation steps need to be done before using the dataset for exploration, including:1. Checking columns with missing values and analyzing the impact2. Dealing with missing values3. Label encoding for categorical variables such as Gender, Age, City_Category, Stay_In_Current_City_Years
###Code
# Data Preparation Step 1: check how many missing values are in the dataset
BlackFriday_Dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
After checking, missing values only exist in the columns "Product_Category_2" & "Product_Category_3". From the description of the dataset, the min values of "Product_Category_2" & "Product_Category_3" are non-zero. My understanding is that a missing value in "Product_Category_2" & "Product_Category_3" means the customer didn't purchase products in these two categories. Thus we can use "0" to fill in the missing values.
###Code
# Data Preparation Step 2: Fill the missing cells with zero (assign the result back so it persists)
BlackFriday_Dataset = BlackFriday_Dataset.fillna(0)
# Data Preparation Step 3: Label encoding for categorical variables
# Label encode the features: Gender, Age, City_Category, Stay_In_Current_City_Years
le = LabelEncoder()
BlackFriday_Dataset['Gender_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Gender'].astype(str))
BlackFriday_Dataset['Age_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Age'].astype(str))
BlackFriday_Dataset['City_Category_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['City_Category'].astype(str))
BlackFriday_Dataset['Stay_In_Current_City_Years_encode'] = le.fit_transform(BlackFriday_Dataset['Stay_In_Current_City_Years'].astype(str))
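# Illustrative note: LabelEncoder imposes an arbitrary ordering on the categories; a true one-hot
# encoding is also possible with pandas if preferred, e.g.
# BlackFriday_onehot = pd.get_dummies(BlackFriday_Dataset, columns=['Gender', 'Age', 'City_Category', 'Stay_In_Current_City_Years'])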
BlackFriday_Dataset.head()
###Output
_____no_output_____
###Markdown
Answer Questions base on datasetI have come up some question to be answered by the Data exploration
###Code
# Question 1: Which User spent most during black Friday, list the top 20 spending users
plt.figure(figsize = (20,8))
BlackFriday_Dataset.groupby('User_ID')['Purchase'].sum().nlargest(20).sort_values().plot(kind='barh',color='green');
###Output
_____no_output_____
###Markdown
It's important for the seller to identify high quality customers. These customers with higher purchase amounts should be valued. Understanding the needs of these customers will help the merchant make more suitable operational decisions, such as product type, pricing, after-sales, etc. Loyalty programs and advertisements should be used to keep these customers shopping with the merchant.
###Code
# Question 2: How about the User Distribution by Age Group? And also consider Gender
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Age'], palette='gray')
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Age'],hue=BlackFriday_Dataset['Gender'],palette='Purples')
###Output
_____no_output_____
###Markdown
We can see from the plot that most of the users who participate in the Black Friday sale are from age groups 26-35, 36-45 and 18-25, which is reasonable as these customers are in the golden age of their lives. They make more money than other age groups, and they also have more shopping needs compared to other age groups. From the second plot, we can see that for all age groups, male customers shop more than female customers. I think this is because the most worthwhile things to buy on Black Friday are electrical appliances, small appliances, and game consoles, and Apple products, especially the iPad, are at their best price of the year. Such products are clearly more popular with male customers.
###Code
# Question 3: Which products are most popular during Black Friday, list the top 20
plt.figure(figsize = (20,8))
BlackFriday_Dataset.groupby('Product_ID')['Purchase'].count().nlargest(20).sort_values().plot(kind='barh',color='orange');
###Output
_____no_output_____
###Markdown
Listing the most popular products may help the merchant adjust their business strategy and prepare better for the next shopping season in order to increase revenue and profit.
###Code
# Question 4: Look at the users again, this time focus on group by Occupation in different city
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Occupation'], hue = BlackFriday_Dataset["City_Category"],palette='Blues_r')
###Output
_____no_output_____
###Markdown
The plot shows that for almost all occupation categories, users from City B did more shopping compared to users from City A & City C. I think the reason is that City B is larger than City A & City C and thus has a larger population. Customers from occupations 0, 4 and 7 did more shopping than other occupations.
###Code
# Question 5: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase
Correlation_DF = BlackFriday_Dataset[['Gender_onehot_encode', 'Age_onehot_encode', 'Occupation', 'City_Category_onehot_encode',
'Stay_In_Current_City_Years_encode', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3', 'Purchase']]
Correlation_DF.corr()
colormap = plt.cm.inferno
plt.figure(figsize=(16,12))
plt.title('Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase', y=1.05, size=15)
sns.heatmap(Correlation_DF.corr(),linewidths=0.1,vmax=1.0,
square=True, cmap='Blues', linecolor='white', annot=True)
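# Rank the encoded features by the strength of their linear correlation with Purchase (illustrative aid for reading the heatmap).
Correlation_DF.corr()['Purchase'].drop('Purchase').abs().sort_values(ascending=False)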
###Output
_____no_output_____
###Markdown
Business UnderstandingBlack Friday is the 'official' kick off to the holiday shopping season, the most important shopping period. The 2018 Black Friday opened a shopping season which became the highest U.S. ecommerce sales day in history with $7.9 billion in revenue. It's important for the sellers to look into the historical sales data and prepare early for the next shopping season so as not to lose ground. I'm going to perform some exploration on the Black Friday Dataset from Kaggle. The tasks or questions I will target are as below:Question 1: Which users spent most during Black Friday, list the top 20 spending usersQuestion 2: Which products are most popular during Black Friday, list the top 20Question 3: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs PurchaseUnderstanding these questions may provide some advice for the sellers to better understand customer purchase behaviour against different products so that the seller can prepare well for the next shopping season. Data UnderstandingThis project will use the Black Friday Dataset from Kaggle, which is a sample of the transactions made in a retail store. Below are the steps to look at and understand the dataset.
###Code
# Read in the Complete Dataset
BlackFriday_Dataset = pd.read_csv('./BlackFriday.csv')
BlackFriday_Dataset.head()
# Get the Basic info of the dataset
BlackFriday_Dataset.describe()
BlackFriday_Dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 537577 entries, 0 to 537576
Data columns (total 12 columns):
User_ID 537577 non-null int64
Product_ID 537577 non-null object
Gender 537577 non-null object
Age 537577 non-null object
Occupation 537577 non-null int64
City_Category 537577 non-null object
Stay_In_Current_City_Years 537577 non-null object
Marital_Status 537577 non-null int64
Product_Category_1 537577 non-null int64
Product_Category_2 370591 non-null float64
Product_Category_3 164278 non-null float64
Purchase 537577 non-null int64
dtypes: float64(2), int64(5), object(5)
memory usage: 39.0+ MB
###Markdown
Prepare DataSome data preparation steps need to be done before using the dataset for exploration, including:1. Checking columns with missing values and analyze impact2. Dealing with missing values3. One-Hot Encoding for Categorical variables such as Club, Nationality, Preferred Positions
###Code
# Data Preparation Step 1: check how many missing values are in the dataset
BlackFriday_Dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
After checking, missing values only exist in the columns "Product_Category_2" & "Product_Category_3". From the description of the dataset, the min values of "Product_Category_2" & "Product_Category_3" are non-zero. My understanding is that a missing value in "Product_Category_2" & "Product_Category_3" means the customer didn't purchase products in these two categories. Thus we can use "0" to fill in the missing values.
###Code
# Data Preparation Step 2: Fill the missing cells with zero (assign the result back so it persists)
BlackFriday_Dataset = BlackFriday_Dataset.fillna(0)
# Data Preparation Step 3: Label encoding for categorical variables
# Label encode the features: Gender, Age, City_Category, Stay_In_Current_City_Years
le = LabelEncoder()
BlackFriday_Dataset['Gender_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Gender'].astype(str))
BlackFriday_Dataset['Age_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Age'].astype(str))
BlackFriday_Dataset['City_Category_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['City_Category'].astype(str))
BlackFriday_Dataset['Stay_In_Current_City_Years_encode'] = le.fit_transform(BlackFriday_Dataset['Stay_In_Current_City_Years'].astype(str))
BlackFriday_Dataset.head()
###Output
_____no_output_____
###Markdown
Answer Questions base on datasetI have come up some question to be answered by the Data exploration
###Code
# Question 1: Which User spent most during black Friday, list the top 20 spending users
plt.figure(figsize = (20,8))
BlackFriday_Dataset.groupby('User_ID')['Purchase'].sum().nlargest(20).sort_values().plot(kind='barh')
###Output
_____no_output_____
###Markdown
It's important for the seller to identify high quality customers. These customers with higher purchase amounts should be valued. Understanding the needs of these customers will help the merchant make more suitable operational decisions, such as product type, pricing, after-sales, etc. Loyalty programs and advertisements should be used to keep these customers shopping with the merchant.
###Code
# Question 3: Which products are most popular during Black Friday, list the top 20
plt.figure(figsize = (20,8))
BlackFriday_Dataset.groupby('Product_ID')['Purchase'].count().nlargest(20).sort_values().plot(kind='barh')
###Output
_____no_output_____
###Markdown
Listing the most popular products may help the merchant adjust their business strategy and prepare better for the next shopping season in order to increase revenue and profit.
###Code
# Question 3: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase
Correlation_DF = BlackFriday_Dataset[['Gender_onehot_encode', 'Age_onehot_encode', 'Occupation', 'City_Category_onehot_encode',
'Stay_In_Current_City_Years_encode', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3', 'Purchase']]
Correlation_DF.corr()
colormap = plt.cm.inferno
plt.figure(figsize=(16,12))
plt.title('Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase', y=1.05, size=15)
sns.heatmap(Correlation_DF.corr(),linewidths=0.1,vmax=1.0,
square=True, cmap=colormap, linecolor='white', annot=True)
###Output
_____no_output_____
###Markdown
Business UnderstandingBlack Friday is an informal name for the Friday following Thanksgiving Day in the United States, which is celebrated on the fourth Thursday of November. The day after Thanksgiving has been regarded as the beginning of America's Christmas shopping season since 1952. On Black Friday 2018 American consumers spent $7.9 billion, making it the biggest online shopping day in U.S. history. It is necessary for the trader to get insights into the customers to increase revenue. The dataset was downloaded from Kaggle. The following questions will be answered to help the sellers.First: What is the user distribution by age and gender?Second: What are the top 20 most popular products?Third: What is the correlation between multiple attributes (Gender, Age, Occupation, etc.)?By using the CRISP-DM approach I believe this will give informative feedback to the sellers to guide them during this important period. Data UnderstandingThe dataset was provided by Kaggle; it is a sample of the transactions made in a retail store.
###Code
df = pd.read_csv('./BlackFriday.csv')
df.head()
# General describation of the data
df.describe()
print("number of rows",df.shape[0])
print("number of columns",df.shape[1])
print("columns:")
df.columns
###Output
columns:
###Markdown
Prepare DataIn this step I will prepare the data for my analysis:- I will treat the null values- Encode the categorical data so it can be representative
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
"Product_Category_2" & "Product_Category_3" are the only two attributes that have missing values, and as we can see from the description the min values of "Product_Category_2" & "Product_Category_3" are non-zero. So I conclude that a missing value in "Product_Category_2" & "Product_Category_3" means the customer didn't buy anything in these two categories, so I will replace them with 0.
###Code
#fillna fills all null values in the dataframe
df.update(df.fillna(0))
# Label encode the features: Gender, Age, City_Category, Stay_In_Current_City_Years
le = LabelEncoder()
df['Gender_onehot_encode'] = le.fit_transform(df['Gender'].astype(str))
df['Age_onehot_encode'] = le.fit_transform(df['Age'].astype(str))
df['City_Category_onehot_encode'] = le.fit_transform(df['City_Category'].astype(str))
df['Stay_In_Current_City_Years_encode'] = le.fit_transform(df['Stay_In_Current_City_Years'].astype(str))
df.head()
#double ckeck if there is null vlaues
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Answers based on dataset it is necessary for the sellers to identify the audience, that will help him to target what kind of ads he/she will invest in such as type of ad type of product etc. Q1:User distribution by age and gender?
###Code
plt.figure(figsize = (20,8))
u, counts = np.unique(df['Age'], return_counts=True)
plt.bar(u.astype(str), counts,align="center", ec="k", width=1)
plt.figure(figsize = (20,8))
sns.countplot(df['Age'],hue=df['Gender'])
###Output
_____no_output_____
###Markdown
We can observe from the plot that most of the users who participate in the Black Friday sale are from age groups 26-35, 36-45 and 18-25, which is reasonable since each group has its own financial concerns and needs to take advantage of these discounts: the 26-35 group is just starting their careers or starting work after college, the 18-25 group does not make a lot of money and needs this period of discounts, and finally the 36-45 group needs to plan for their retirement. From the second plot, we can see that for all age groups, male customers shop more compared to female customers. What are the top 20 most popular products?
###Code
plt.figure(figsize = (20,8))
products = df.groupby('Product_ID')['Purchase'].count().nlargest(20).sort_values()
sns.barplot(products.index, products.values)
###Output
_____no_output_____
###Markdown
Q3: What is the correlation between multiple attributes (Gender, Age, Occupation, etc.)?
###Code
Corre = df[['Gender_onehot_encode', 'Age_onehot_encode', 'Occupation', 'City_Category_onehot_encode',
'Stay_In_Current_City_Years_encode', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3', 'Purchase']]
Corre.corr()
colormap = plt.cm.inferno
plt.figure(figsize=(16,12))
plt.title('Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase', y=1.05, size=15)
sns.heatmap(Corre.corr(),linewidths=0.1,vmax=1.0,
square=True, cmap=colormap, linecolor='Blue', annot=True)
###Output
_____no_output_____
###Markdown
Business UnderstandingBlack Friday is the 'official' kick off to the holiday shopping season, the most important shopping period. The 2018 Black Friday opened a shopping season which became the highest U.S. ecommerce sales day in history with $7.9 billion in revenue. It's important for the sellers to look into the historical sales data and prepare early for the next shopping season so as not to lose ground. I'm going to perform some exploration on the Black Friday Dataset from Kaggle. The tasks or questions I will target are as below:Question 1: Which users spent most during Black Friday, list the top 20 spending usersQuestion 2: How about the user distribution by age group? And also consider genderQuestion 3: Which products are most popular during Black Friday, list the top 20Question 4: Look at the users again, this time focus on grouping by occupation in different citiesQuestion 5: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs PurchaseUnderstanding these questions may provide some advice for the sellers to better understand customer purchase behaviour against different products so that the seller can prepare well for the next shopping season. Data UnderstandingThis project will use the Black Friday Dataset from Kaggle, which is a sample of the transactions made in a retail store. Below are the steps to look at and understand the dataset.
###Code
# Read in the Complete Dataset
BlackFriday_Dataset = pd.read_csv('./BlackFriday.csv')
BlackFriday_Dataset.head()
# Get the Basic info of the dataset
BlackFriday_Dataset.describe()
BlackFriday_Dataset.info()
num_rows = BlackFriday_Dataset.shape[0] #Provide the number of rows in the dataset
num_cols = BlackFriday_Dataset.shape[1] #Provide the number of columns in the dataset
print("Row number: {}".format(num_rows))
print("Column number: {}".format(num_cols))
# To check the column names in the dataset
BlackFriday_Dataset.columns
###Output
_____no_output_____
###Markdown
Prepare DataSome data preparation steps need to be done before using the dataset for exploration, including:1. Checking columns with missing values and analyze impact2. Dealing with missing values3. One-Hot Encoding for Categorical variables such as Club, Nationality, Preferred Positions
###Code
# Data Preparation Step 1: check how many missing values are in the dataset
BlackFriday_Dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
After checking, missing values only exist in the columns "Product_Category_2" & "Product_Category_3". From the description of the dataset, the min values of "Product_Category_2" & "Product_Category_3" are non-zero. My understanding is that a missing value in "Product_Category_2" & "Product_Category_3" means the customer didn't purchase products in these two categories. Thus we can use "0" to fill in the missing values.
###Code
# Data Preparation Step 2: Fill the missing cells with zero (assign the result back so it persists)
BlackFriday_Dataset = BlackFriday_Dataset.fillna(0)
# Data Preparation Step 3: Label encoding for categorical variables
# Label encode the features: Gender, Age, City_Category, Stay_In_Current_City_Years
le = LabelEncoder()
BlackFriday_Dataset['Gender_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Gender'].astype(str))
BlackFriday_Dataset['Age_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Age'].astype(str))
BlackFriday_Dataset['City_Category_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['City_Category'].astype(str))
BlackFriday_Dataset['Stay_In_Current_City_Years_encode'] = le.fit_transform(BlackFriday_Dataset['Stay_In_Current_City_Years'].astype(str))
BlackFriday_Dataset.head()
###Output
_____no_output_____
###Markdown
Answer Questions base on datasetI have come up some question to be answered by the Data exploration
###Code
def plot_groupby(col1, col2, agg='count'):
    plt.figure(figsize = (20,8))
    BlackFriday_Dataset.groupby(col1)[col2].agg(agg).nlargest(20).sort_values().plot(kind='barh')
# Question 1: Which User spent most during black Friday, list the top 20 spending users
plot_groupby('User_ID','Purchase', agg='sum')  # rank users by total spend, not by number of transactions
###Output
_____no_output_____
###Markdown
It's important for the seller to identify high quality customers. These customers with higher purchase amounts should be valued. Understanding the needs of these customers will help the merchant make more suitable operational decisions, such as product type, pricing, after-sales, etc. Loyalty programs and advertisements should be used to keep these customers shopping with the merchant.
###Code
# Question 2: How about the User Distribution by Age Group? And also consider Gender
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Age'])
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Age'],hue=BlackFriday_Dataset['Gender'])
###Output
_____no_output_____
###Markdown
We can see from the plot that most of the users who participate in the Black Friday sale are from age groups 26-35, 36-45 and 18-25, which is reasonable as these customers are in the golden age of their lives. They make more money than other age groups, and they also have more shopping needs compared to other age groups. From the second plot, we can see that for all age groups, male customers shop more than female customers. I think this is because the most worthwhile things to buy on Black Friday are electrical appliances, small appliances, and game consoles, and Apple products, especially the iPad, are at their best price of the year. Such products are clearly more popular with male customers.
###Code
# Question 3: Which products are most popular during Black Friday, list the top 20
plot_groupby('Product_ID','Purchase')
###Output
_____no_output_____
###Markdown
List out the most popular products may help the merchant adjust their business strategy and can prepare for the next shopping season better so that to Increase revenue and profit.
###Code
# Question 4: Look at the users again, this time focus on group by Occupation in different city
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Occupation'], hue = BlackFriday_Dataset["City_Category"])
###Output
_____no_output_____
###Markdown
The plot shows that for almost all occupation categories, users from City B did more shopping compared to users from City A & City C. I think the reason is that City B is larger than City A & City C and thus has a larger population. Customers from occupations 0, 4 and 7 did more shopping than other occupations.
###Code
# Question 5: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase
Correlation_DF = BlackFriday_Dataset[['Gender_onehot_encode', 'Age_onehot_encode', 'Occupation', 'City_Category_onehot_encode',
'Stay_In_Current_City_Years_encode', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3', 'Purchase']]
Correlation_DF.corr()
colormap = plt.cm.inferno
plt.figure(figsize=(16,12))
plt.title('Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase', y=1.05, size=15)
sns.heatmap(Correlation_DF.corr(),linewidths=0.1,vmax=1.0,
square=True, cmap=colormap, linecolor='white', annot=True)
###Output
_____no_output_____
###Markdown
IntroductionBy using this dataset, I wish to explore what impacts the 'Black Friday' sales. Black Friday is the official kick off to the holiday season. It is one of the most important times for a retailer or a wholesaler, and thus it is important to understand what impacts user buying behaviour. Using these insights, I wish to explain in this blog post how a business can improve its throughput.I will try to answer the following questions:Question 1: A quick analysis of the top-spending users to find out who has the highest CLVQuestion 2: How is the distribution of users across age and gender?Question 3: Which products garner the maximum sales during Black Friday?Question 4: How are the users spread across cities and occupations?Question 5: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase Data UnderstandingThis project will use the Black Friday Dataset from Kaggle, which is a sample of the transactions made in a retail store. Below are the steps to look at and understand the dataset.
###Code
# Read in the Complete Dataset
BlackFriday_Dataset = pd.read_csv('./BlackFriday.csv')
BlackFriday_Dataset.head()
# Get the Basic info of the dataset
BlackFriday_Dataset.describe()
BlackFriday_Dataset.info()
#Provide the number of rows in the dataset
num_rows = BlackFriday_Dataset.shape[0]
#Provide the number of columns in the dataset
num_cols = BlackFriday_Dataset.shape[1]
print("No of rows: {}".format(num_rows))
print("Number of Columns: {}".format(num_cols))
# To check the column names in the dataset
BlackFriday_Dataset.columns
###Output
_____no_output_____
###Markdown
Prepare DataSome data preparation steps need to be done before using the dataset for exploration, including:1. Checking columns with missing values and analyzing the impact2. Dealing with missing values3. Label encoding for categorical variables such as Gender, Age, City_Category, Stay_In_Current_City_Years
###Code
# Data Preparation Step 1: check how many missing values are in the dataset
BlackFriday_Dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
After checking, missing values only exist in the columns "Product_Category_2" & "Product_Category_3". From the description of the dataset, the min values of "Product_Category_2" & "Product_Category_3" are non-zero. My understanding is that a missing value in "Product_Category_2" & "Product_Category_3" means the customer didn't purchase products in these two categories. Thus we can use "0" to fill in the missing values.
###Code
# Data Preparation Step 2: Fill the missing cells with zero (assign the result back so it persists)
BlackFriday_Dataset = BlackFriday_Dataset.fillna(0)
# Data Preparation Step 3: Label encoding for categorical variables
# Label encode the features: Gender, Age, City_Category, Stay_In_Current_City_Years
le = LabelEncoder()
BlackFriday_Dataset['Gender_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Gender'].astype(str))
BlackFriday_Dataset['Age_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['Age'].astype(str))
BlackFriday_Dataset['City_Category_onehot_encode'] = le.fit_transform(BlackFriday_Dataset['City_Category'].astype(str))
BlackFriday_Dataset['Stay_In_Current_City_Years_encode'] = le.fit_transform(BlackFriday_Dataset['Stay_In_Current_City_Years'].astype(str))
BlackFriday_Dataset.head()
###Output
_____no_output_____
###Markdown
Answer Questions base on datasetI have come up some question to be answered by the Data exploration
###Code
# Question 1: Which User spent most during black Friday, list the top 20 spending users
plt.figure(figsize = (20,8))
BlackFriday_Dataset.groupby('User_ID')['Purchase'].sum().nlargest(20).sort_values().plot(kind='barh')
###Output
/Users/sagar/miniconda3/envs/ml/lib/python3.6/site-packages/ipykernel_launcher.py:3: FutureWarning: `Series.plot()` should not be called with positional arguments, only keyword arguments. The order of positional arguments will change in the future. Use `Series.plot(kind='barh')` instead of `Series.plot('barh',)`.
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
It's important for the seller to identify high quality customers. These customers with higher purchase amounts should be valued. Understanding the needs of these customers will help the merchant make more suitable operational decisions, such as product type, pricing, after-sales, etc. Loyalty programs and advertisements should be used to keep these customers shopping with the merchant.
###Code
# Question 2: How about the User Distribution by Age Group? And also consider Gender
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Age'])
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Age'],hue=BlackFriday_Dataset['Gender'])
###Output
_____no_output_____
###Markdown
We can see from the plot that most of the users who participate in the Black Friday sale are from age groups 26-35, 36-45 and 18-25, which is reasonable as these customers are in the golden age of their lives. They make more money than other age groups, and they also have more shopping needs compared to other age groups. From the second plot, we can see that for all age groups, male customers shop more than female customers. I think this is because the most worthwhile things to buy on Black Friday are electrical appliances, small appliances, and game consoles, and Apple products, especially the iPad, are at their best price of the year. Such products are clearly more popular with male customers.
###Code
# Question 3: Which products are most popular during Black Friday, list the top 20
plt.figure(figsize = (20,8))
BlackFriday_Dataset.groupby('Product_ID')['Purchase'].count().nlargest(20).sort_values().plot(kind='barh')
###Output
/Users/sagar/miniconda3/envs/ml/lib/python3.6/site-packages/ipykernel_launcher.py:3: FutureWarning: `Series.plot()` should not be called with positional arguments, only keyword arguments. The order of positional arguments will change in the future. Use `Series.plot(kind='barh')` instead of `Series.plot('barh',)`.
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Listing the most popular products may help the merchant adjust their business strategy and prepare better for the next shopping season in order to increase revenue and profit.
###Code
# Question 4: Look at the users again, this time focus on group by Occupation in different city
plt.figure(figsize = (20,8))
sns.countplot(BlackFriday_Dataset['Occupation'], hue = BlackFriday_Dataset["City_Category"])
###Output
_____no_output_____
###Markdown
The plot shows that for almost all occupation categories, users from City B did more shopping compared to users from City A & City C. I think the reason is that City B is larger than City A & City C and thus has a larger population. Customers from occupations 0, 4 and 7 did more shopping than other occupations.
###Code
# Question 5: Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase
Correlation_DF = BlackFriday_Dataset[['Gender_onehot_encode', 'Age_onehot_encode', 'Occupation', 'City_Category_onehot_encode',
'Stay_In_Current_City_Years_encode', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3', 'Purchase']]
Correlation_DF.corr()
colormap = plt.cm.inferno
plt.figure(figsize=(16,12))
plt.title('Correlation between Gender, Age, Occupation, City_Category, Stay_In_Current_City_Years, Marital_Status, Product_Category_x vs Purchase', y=1.05, size=15)
sns.heatmap(Correlation_DF.corr(),linewidths=0.1,vmax=1.0,
square=True, cmap=colormap, linecolor='white', annot=True)
###Output
_____no_output_____ |
samples/notebooks/Z-Tests/demo-notebook.ipynb | ###Markdown
Orbit Notebook to create demo related resources and trigger regression testing
###Code
import json
from aws_orbit_sdk.common import get_workspace
workspace = get_workspace()
###Output
_____no_output_____
###Markdown
Workspace details
###Code
workspace
# Orbit Environment Name
env_name = workspace['env_name']
###Output
_____no_output_____
###Markdown
Executing lake-creator notebooks
###Code
run_creator_notebooks = {
"compute": {
"node_type": "ec2",
"container": {
"p_concurrent" :1
}
},
"tasks": [
{
"notebookName": "Example-1-Build-Lake.ipynb",
"sourcePath": "/efs/shared/samples/notebooks/A-LakeCreator",
"targetPath": "/efs/shared/regression/notebooks/A-LakeCreator/",
"params": {
}
}
]
}
with open("run_creator_notebooks.json", 'w') as f:
json.dump(run_creator_notebooks, f)
!orbit run notebook \
--env $env_name \
--team lake-creator \
--user regression \
--delay 60 \
--max-attempts 40 \
--wait \
run_creator_notebooks.json
###Output
_____no_output_____
###Markdown
Executing Admin notebooks
###Code
run_admin_notebooks = {
"compute": {
"node_type": "ec2",
"container": {
"p_concurrent" :1
}
},
"tasks": [
{
"notebookName": "run-admin-regression-notebooks.ipynb",
"sourcePath": "/efs/shared/samples/notebooks/Z-Tests",
"targetPath": "/efs/shared/regression/notebooks/Z-Tests/",
"params": {
}
}
]
}
with open("run_admin_notebooks.json", 'w') as f:
json.dump(run_admin_notebooks, f)
!orbit run notebook \
--env $env_name \
--team lake-creator \
--user regression \
--delay 60 \
--max-attempts 40 \
--wait \
run_admin_notebooks.json
###Output
_____no_output_____
###Markdown
Executing lake-user notebooks
###Code
run_lake_user_notebooks = {
"compute": {
"node_type": "ec2",
"container": {
"p_concurrent" :1
}
},
"tasks": [
{
"notebookName": "run-user-regression-notebooks.ipynb",
"sourcePath": "/efs/shared/samples/notebooks/Z-Tests",
"targetPath": "/efs/shared/regression/notebooks/Z-Tests/",
"params": {
}
}
]
}
with open("run_lake_user_notebooks.json", 'w') as f:
json.dump(run_lake_user_notebooks, f)
!orbit run notebook \
--env $env_name \
--team lake-user \
--user regression \
--delay 60 \
--max-attempts 40 \
--wait \
run_lake_user_notebooks.json
###Output
_____no_output_____ |
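###Markdown
The three task specifications above share the same structure and differ only in notebook name and folder. As an optional refactor (a sketch built only from the fields already shown in this notebook), a small helper can generate them:
###Code
# Optional helper: build a task spec from the notebook name and folder.
# Uses only the fields already present in the JSON specs above.
def make_task_spec(notebook_name, folder):
    return {
        "compute": {"node_type": "ec2", "container": {"p_concurrent": 1}},
        "tasks": [
            {
                "notebookName": notebook_name,
                "sourcePath": f"/efs/shared/samples/notebooks/{folder}",
                "targetPath": f"/efs/shared/regression/notebooks/{folder}/",
                "params": {},
            }
        ],
    }
with open("run_lake_user_notebooks.json", "w") as f:
    json.dump(make_task_spec("run-user-regression-notebooks.ipynb", "Z-Tests"), f)
###Output
_____no_output_____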
examples/dictionary_based_classification.ipynb | ###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 3 dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], with options for improved efficiency and usability via contracting (cBOSS)\[3\] and the Temporal Dictionary Ensemble (TDE)\[4\]In this notebook, we will demonstrate how to use BOSS, cBOSS and TDE on the ItalyPowerDemand dataset. These classifiers are currently only compatible with univariate time series datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[5\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[6\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646). 1. Imports
###Code
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.dictionary_based import TemporalDictionaryEnsemble
from sklearn import metrics
from sktime.datasets import load_italy_power_demand
###Output
_____no_output_____
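###Markdown
Before loading any data, the sliding-window/word/histogram idea described above can be illustrated with a small, library-free sketch. This is a toy illustration only: it replaces SFA's Fourier-based approximation with a naive mean-split of window segments, so the words it produces are not the ones BOSS or TDE would build, but the counting step is the same.
###Code
import numpy as np
from collections import Counter
def toy_word_histogram(series, w=16, l=4):
    """Slide a window of length w across the series, summarise each window by
    l segment means, map each mean to a 2-letter alphabet (above/below the
    window mean) and count the resulting words."""
    words = []
    for start in range(len(series) - w + 1):
        window = np.asarray(series[start:start + w], dtype=float)
        segment_means = np.array([seg.mean() for seg in np.array_split(window, l)])
        letters = (segment_means > window.mean()).astype(int)
        words.append("".join(str(b) for b in letters))
    return Counter(words)
toy_word_histogram(np.sin(np.linspace(0, 8 * np.pi, 100)))
###Output
_____no_output_____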
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split='train', return_X_y=True)
X_test, y_test = load_italy_power_demand(split='test', return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very few parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble()
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.86
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = BOSSEnsemble(randomised_ensemble=True,
n_parameter_samples=250,
max_ensemble_size=50)
# cBOSS with a 5 minute build time contract
#cboss = BOSSEnsemble(randomised_ensemble=True,
# time_limit=5,
# max_ensemble_size=50)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.9
###Markdown
5. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[5\]; From Word Extraction for Time Series Classification (WEASEL)\[6\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(n_parameter_samples=250,
max_ensemble_size=100,
randomly_selected_params=50)
# TDE with a 5 minute build time contract
#tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=100,
# randomly_selected_params=50)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 0.98
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 3 dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], with options for improved efficiency and usability via contracting (cBOSS)\[3\] and the Temporal Dictionary Ensemble (TDE)\[4\]In this notebook, we will demonstrate how to use BOSS, cBOSS and TDE on the ItalyPowerDemand dataset. These classifiers are currently only compatible with univariate time series datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[5\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[6\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646). 1. Imports
###Code
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.dictionary_based import WEASEL
from sktime.classification.dictionary_based import TemporalDictionaryEnsemble
from sklearn import metrics
from sktime.datasets import load_italy_power_demand
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split='train', return_X_y=True)
X_test, y_test = load_italy_power_demand(split='test', return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very few parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.9
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = BOSSEnsemble(randomised_ensemble=True,
n_parameter_samples=250,
max_ensemble_size=50)
# cBOSS with a 5 minute build time contract
#cboss = BOSSEnsemble(randomised_ensemble=True,
# time_limit=5,
# max_ensemble_size=50)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.94
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 1.0
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[5\]; From Word Extraction for Time Series Classification (WEASEL)\[6\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(n_parameter_samples=250,
max_ensemble_size=100,
randomly_selected_params=50)
# TDE with a 5 minute build time contract
#tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=100,
# randomly_selected_params=50)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 0.98
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univariate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\] and TDE has multivariate capabilities.In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand and BasicMotions datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_basic_motions, load_italy_power_demand
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_basic_motions(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_basic_motions(split="test", return_X_y=True)
X_train_mv = X_train_mv[:20]
y_train_mv = y_train_mv[:20]
X_test_mv = X_test_mv[:20]
y_test_mv = y_test_mv[:20]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(20, 6) (20,) (20, 6) (20,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very few parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.94
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 1 minute build time contract
# cboss = ContractableBOSS(time_limit_in_minutes=1,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.96
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB). Univariate
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.96
###Markdown
MultivariateWEASEL+MUSE (Multivariate Symbolic Extension) is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements. Univariate
###Code
# Recommended non-contract TDE parameters
tde_u = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 1 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit_in_minutes=1,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_u.fit(X_train, y_train)
tde_u_preds = tde_u.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_u_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Multivariate
###Code
# Recommended non-contract TDE parameters
tde_mv = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 1 minute build time contract
# tde_m = TemporalDictionaryEnsemble(time_limit_in_minutes=1,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_mv.fit(X_train_mv, y_train_mv)
tde_mv_preds = tde_mv.predict(X_test_mv)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test_mv, tde_mv_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 3 dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], with options for improved efficiency and usability via contracting (cBOSS)\[3\] and the Temporal Dictionary Ensemble (TDE)\[4\]In this notebook, we will demonstrate how to use BOSS, cBOSS and TDE on the ArrowHead dataset. These classifiers are currently only compatible with univariate time series datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[5\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[6\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646). 1. Imports
###Code
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.dictionary_based import TemporalDictionaryEnsemble
from sklearn import metrics
from sktime.datasets import load_arrow_head
###Output
_____no_output_____
###Markdown
2. Load dataFor more details on the data set, see the [univariate time series classification notebook](https://github.com/alan-turing-institute/sktime/blob/master/examples/02_classification_univariate.ipynb).
###Code
X_train, y_train = load_arrow_head(split='train', return_X_y=True)
X_test, y_test = load_arrow_head(split='test', return_X_y=True)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(36, 1) (36,) (175, 1) (175,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very few parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble()
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.8571428571428571
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = BOSSEnsemble(randomised_ensemble=True, n_parameter_samples=250,
max_ensemble_size=50)
# cBOSS with a 5 minute build time contract
#cboss = BOSSEnsemble(randomised_ensemble=True,
# time_limit=5,
# max_ensemble_size=50)
cboss.fit(X_train,y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.8628571428571429
###Markdown
5. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[5\]; From Word Extraction for Time Series Classification (WEASEL)\[6\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(n_parameter_samples=250,
max_ensemble_size=100,
randomly_selected_params=50)
# TDE with a 5 minute build time contract
#tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=100,
# randomly_selected_params=50)
tde.fit(X_train,y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 0.8171428571428572
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univariate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\].In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand dataset. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very few parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.9
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.9
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.98
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
7. WEASEL+MUSE (Multivariate Symbolic Extension)WEASEL+MUSE is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
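###Markdown
As a wrap-up for this run, a small sketch collecting the accuracies computed above into one printout (it assumes the `*_preds` variables from the cells above are still in scope; MUSE is scored on the multivariate JapaneseVowels split, the others on ItalyPowerDemand).
###Code
# Summarise the accuracies from the cells above -- illustrative only.
results = {
    "BOSS": metrics.accuracy_score(y_test, boss_preds),
    "cBOSS": metrics.accuracy_score(y_test, cboss_preds),
    "WEASEL": metrics.accuracy_score(y_test, weasel_preds),
    "TDE": metrics.accuracy_score(y_test, tde_preds),
    "MUSE": metrics.accuracy_score(y_test_mv, muse_preds),
}
for name, acc in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {acc:.3f}")
###Output
_____no_output_____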
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univariate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\].In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand dataset. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very few parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.9
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.9
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.98
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
7. WEASEL+MUSE (Multivariate Symbolic Extension)WEASEL+MUSE is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univariate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\] and TDE has multivariate capabilities.In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand and JapaneseVowels datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very few parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.88
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.92
###Markdown
5. Word Extraction for Time Series Classification (WEASEL) UnivariateWEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.98
###Markdown
MultivariateWEASEL+MUSE (Multivariate Symbolic Extension) is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
###Markdown
6. Temporal Dictionary Ensemble (TDE) UnivariateTDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde_u = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_u.fit(X_train, y_train)
tde_u_preds = tde_u.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_u_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Multivariate
###Code
# Recommended non-contract TDE parameters
tde_m = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde_m = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_m.fit(X_train_mv, y_train_mv)
tde_m_preds = tde_m.predict(X_test_mv)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test_mv, tde_m_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univariate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\].In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand dataset. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
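###Markdown
Before fitting any classifiers, the windowing, discretisation and histogram idea described above can be illustrated on a toy series. The sketch below uses a deliberately simple quantile-based discretisation for intuition only (it is not the SFA transform used by the classifiers in this notebook); the window length, word length and alphabet are arbitrary choices.
###Code
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
series = rng.normal(size=30)  # a small toy univariate series
window_length, word_length, alphabet = 8, 4, "abc"


def window_to_word(window, word_length, alphabet):
    # approximation: mean of consecutive chunks; discretisation: bin the means into letters
    chunks = np.array_split(window, word_length)
    means = np.array([chunk.mean() for chunk in chunks])
    bin_edges = np.quantile(window, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[np.searchsorted(bin_edges, m)] for m in means)


words = [
    window_to_word(series[i : i + window_length], word_length, alphabet)
    for i in range(len(series) - window_length + 1)
]
histogram = Counter(words)  # the series is now represented as a bag of words
print(histogram)
###Output
_____no_output_____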
###Markdown
3. Bag of SFA Symbols (BOSS) BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform. The classifier performs a grid search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window). Of the classifiers searched, only those within 92\% accuracy of the best classifier are retained. Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier. As tuning is handled inside the classifier, BOSS has very few parameters to alter and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
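# For a per-class breakdown, the confusion matrix could also be inspected
# (optional; this only assumes the sklearn metrics module imported above):
# print(metrics.confusion_matrix(y_test, boss_preds))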
###Output
BOSS Accuracy: 0.9
###Markdown
4. Contractable BOSS (cBOSS) cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed. cBOSS utilises a filtered random selection of parameters to find its ensemble members. Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement. An exponential weighting scheme for the predictions of the base classifiers is introduced. A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble. The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.9
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.98
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
7. WEASEL+MUSE (Multivariate Symbolic Extension)WEASEL+MUSE is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
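###Markdown
As a quick side-by-side comparison, the accuracies of the classifiers fitted above can be collected in one place. This is a minimal sketch that reuses only variables already defined in this notebook.
###Code
results = {
    "BOSS": metrics.accuracy_score(y_test, boss_preds),
    "cBOSS": metrics.accuracy_score(y_test, cboss_preds),
    "WEASEL": metrics.accuracy_score(y_test, weasel_preds),
    "TDE": metrics.accuracy_score(y_test, tde_preds),
    "MUSE (multivariate)": metrics.accuracy_score(y_test_mv, muse_preds),
}
for name, accuracy in results.items():
    print(f"{name}: {accuracy:.2f}")
###Output
_____no_output_____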
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univeriate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\] and TDE has multivariate capabilities.In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand and JapaneseVowels datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand, load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
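# The loaders return pandas DataFrames (assumed nested panel format in this
# sktime version: one column per dimension, each cell holding a series);
# uncomment to peek at the first rows:
# X_train.head()
# X_train_mv.head()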
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very little parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.94
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameters samples $k$ is introduced. of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.96
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB). Univariate
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
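# A fuller per-class summary is also available via sklearn (optional):
# print(metrics.classification_report(y_test, weasel_preds))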
###Output
WEASEL Accuracy: 0.96
###Markdown
MultivariateWEASEL+MUSE (Multivariate Symbolic Extension) is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements. Univariate
###Code
# Recommended non-contract TDE parameters
tde_u = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_u.fit(X_train, y_train)
tde_u_preds = tde_u.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_u_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Multivariate
###Code
# Recommended non-contract TDE parameters
tde_mv = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde_m = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_mv.fit(X_train_mv, y_train_mv)
tde_mv_preds = tde_mv.predict(X_test_mv)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test_mv, tde_mv_preds)))
###Output
TDE Accuracy: 1.0
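###Markdown
A fitted sktime classifier is an ordinary Python object, so a trained ensemble can usually be persisted and reloaded with the standard library. This is a minimal sketch using pickle; the file name is an arbitrary choice.
###Code
import pickle

with open("tde_multivariate.pkl", "wb") as f:
    pickle.dump(tde_mv, f)

with open("tde_multivariate.pkl", "rb") as f:
    tde_loaded = pickle.load(f)

print("Reloaded TDE Accuracy: " + str(metrics.accuracy_score(y_test_mv, tde_loaded.predict(X_test_mv))))
###Output
_____no_output_____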
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univeriate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\] and TDE has multivariate capabilities.In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand and JapaneseVowels datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
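###Markdown
Before fitting, it can be useful to check the class distribution of the training sets. The sketch below uses numpy, which is not otherwise imported in this notebook.
###Code
import numpy as np

print(np.unique(y_train, return_counts=True))
print(np.unique(y_train_mv, return_counts=True))
###Output
_____no_output_____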
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very little parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.9
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameters samples $k$ is introduced. of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.9
###Markdown
5. Word Extraction for Time Series Classification (WEASEL) UnivariateWEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.98
###Markdown
MultivariateWEASEL+MUSE (Multivariate Symbolic Extension) is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
###Markdown
6. Temporal Dictionary Ensemble (TDE) UnivariateTDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde_u = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_u.fit(X_train, y_train)
tde_u_preds = tde_u.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_u_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Multivariate
###Code
# Recommended non-contract TDE parameters
tde_m = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde_m = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_m.fit(X_train_mv, y_train_mv)
tde_m_preds = tde_m.predict(X_test_mv)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test_mv, tde_m_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univeriate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\] and TDE has multivariate capabilities.In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand and JapaneseVowels datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very little parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.94
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameters samples $k$ is introduced. of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.96
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB). Univariate
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.96
###Markdown
MultivariateWEASEL+MUSE (Multivariate Symbolic Extension) is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements. Univariate
###Code
# Recommended non-contract TDE parameters
tde_u = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_u.fit(X_train, y_train)
tde_u_preds = tde_u.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_u_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
Multivariate
###Code
# Recommended non-contract TDE parameters
tde_mv = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde_m = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_mv.fit(X_train_mv, y_train_mv)
tde_mv_preds = tde_mv.predict(X_test_mv)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test_mv, tde_mv_preds)))
###Output
TDE Accuracy: 1.0
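###Markdown
Beyond hard predictions, the fitted ensembles also provide probability estimates through the sklearn-style predict_proba method, with one column per class; the column order is assumed here to follow the classifier's classes_ attribute.
###Code
tde_mv_probas = tde_mv.predict_proba(X_test_mv)  # one row per test case, one column per class
print(tde_mv.classes_)
print(tde_mv_probas[:5])
###Output
_____no_output_____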
###Markdown
Dictionary based time series classification in sktime Dictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification. Dictionary based classifiers have the same broad structure. A sliding window of length $w$ is run across a series. For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters. The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram. Classification is based on the histograms of the words extracted from the series, rather than the raw data. Currently 3 dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words. These are the Bag of SFA Symbols (BOSS)\[2\], with options for improved efficiency and usability via contracting (cBOSS)\[3\], and the Temporal Dictionary Ensemble (TDE)\[4\]. In this notebook, we will demonstrate how to use BOSS, cBOSS and TDE on the ItalyPowerDemand dataset. These algorithms are currently only compatible with univariate time series datasets. References: \[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527). \[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530. \[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham. \[4\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. \[5\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089. \[6\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646). 1. Imports
###Code
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.dictionary_based import WEASEL
from sktime.classification.dictionary_based import TemporalDictionaryEnsemble
from sklearn import metrics
from sktime.datasets import load_italy_power_demand
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split='train', return_X_y=True)
X_test, y_test = load_italy_power_demand(split='test', return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very little parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.9
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameters samples $k$ is introduced. of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = BOSSEnsemble(randomised_ensemble=True,
n_parameter_samples=250,
max_ensemble_size=50)
# cBOSS with a 5 minute build time contract
#cboss = BOSSEnsemble(randomised_ensemble=True,
# time_limit=5,
# max_ensemble_size=50)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.94
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 1.0
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[5\]; From Word Extraction for Time Series Classification (WEASEL)\[6\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(n_parameter_samples=250,
max_ensemble_size=100,
randomly_selected_params=50)
# TDE with a 5 minute build time contract
#tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=100,
# randomly_selected_params=50)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 0.98
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univeriate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\].In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand dataset. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.dictionary_based import ContractableBOSS
from sktime.classification.dictionary_based import WEASEL
from sktime.classification.dictionary_based import TemporalDictionaryEnsemble
from sktime.classification.dictionary_based import MUSE
from sklearn import metrics
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split='train', return_X_y=True)
X_test, y_test = load_italy_power_demand(split='test', return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very little parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.9
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameters samples $k$ is introduced. of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250,
max_ensemble_size=50,
random_state=47)
# cBOSS with a 5 minute build time contract
#cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.9
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False,
random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 0.98
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47)
# TDE with a 5 minute build time contract
#tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 1.0
###Markdown
7. WEASEL+MUSE (Multivariate Symbolic Extension)WEASEL+MUSE is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
MUSE Accuracy: 1.0
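###Markdown
When deciding between the recommended fixed parameters and a time_limit contract, it helps to know roughly how long a build takes on the data at hand. The sketch below simply times a small ContractableBOSS fit with the standard library; the reduced parameter values are arbitrary choices to keep the run short.
###Code
import time

start = time.perf_counter()
ContractableBOSS(n_parameter_samples=25, max_ensemble_size=10, random_state=47).fit(X_train, y_train)
print(f"cBOSS build time: {time.perf_counter() - start:.2f} s")
###Output
_____no_output_____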
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently 4 univeriate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise into words.These are the Bag of SFA Symbols (BOSS)\[2\], the Contractable Bag of SFA Symbols (cBOSS)\[3\], Word Extraction for Time Series Classification (WEASEL)\[4\] and the Temporal Dictionary Ensemble (TDE)\[5\]. WEASEL has a multivariate extension called MUSE\[7\] and TDE has multivariate capabilities.In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the ItalyPowerDemand and JapaneseVowels datasets. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[5\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[6\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sklearn import metrics
from sktime.classification.dictionary_based import (
MUSE,
WEASEL,
BOSSEnsemble,
ContractableBOSS,
TemporalDictionaryEnsemble,
)
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train_mv, y_train_mv = load_japanese_vowels(split="train", return_X_y=True)
X_test_mv, y_test_mv = load_japanese_vowels(split="test", return_X_y=True)
X_train_mv = X_train_mv[:50]
y_train_mv = y_train_mv[:50]
X_test_mv = X_test_mv[:50]
y_test_mv = y_test_mv[:50]
print(X_train_mv.shape, y_train_mv.shape, X_test_mv.shape, y_test_mv.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
(50, 12) (50,) (50, 12) (50,)
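###Markdown
JapaneseVowels is a multivariate problem: the 12 columns of X_train_mv correspond to the 12 dimensions of each case. Assuming the nested panel format used by these loaders (each cell holding one series), a single cell can be inspected as follows.
###Code
first_cell = X_train_mv.iloc[0, 0]  # series for dimension 0 of the first training case
print(type(first_cell))
print(len(first_cell))
###Output
_____no_output_____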
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs grid-search through a large number of individual classifiers for parameters $l$, $\alpha$, $w$ and $p$ (normalise each window).Of the classifiers searched only those within 92\% accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, BOSS distance, in conjunction with a nearest neighbour classifier.As tuning is handled inside the classifier, BOSS has very little parameters to be altered and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
_____no_output_____
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameters samples $k$ is introduced. of which the top $s$ (max ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = ContractableBOSS(n_parameter_samples=250, max_ensemble_size=50, random_state=47)
# cBOSS with a 5 minute build time contract
# cboss = ContractableBOSS(time_limit=5,
# max_ensemble_size=50,
# random_state=47)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
_____no_output_____
###Markdown
5. Word Extraction for Time Series Classification (WEASEL) UnivariateWEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA by bigrams, feature selection using Anova-f-test and Information Gain Binning (IGB).
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False, random_state=47)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
_____no_output_____
###Markdown
MultivariateWEASEL+MUSE (Multivariate Symbolic Extension) is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train_mv, y_train_mv)
muse_preds = muse.predict(X_test_mv)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test_mv, muse_preds)))
###Output
_____no_output_____
###Markdown
6. Temporal Dictionary Ensemble (TDE) UnivariateTDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS\[3\] are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[6\]; From Word Extraction for Time Series Classification (WEASEL)\[4\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde_u = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_u.fit(X_train, y_train)
tde_u_preds = tde_u.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_u_preds)))
###Output
_____no_output_____
###Markdown
Multivariate
###Code
# Recommended non-contract TDE parameters
tde_m = TemporalDictionaryEnsemble(
n_parameter_samples=250,
max_ensemble_size=50,
randomly_selected_params=50,
random_state=47,
)
# TDE with a 5 minute build time contract
# tde_m = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=50,
# randomly_selected_params=50,
# random_state=47)
tde_m.fit(X_train_mv, y_train_mv)
tde_m_preds = tde_m.predict(X_test_mv)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test_mv, tde_m_preds)))
###Output
_____no_output_____
###Markdown
Dictionary based time series classification in sktimeDictionary based approaches adapt the bag of words model commonly used in signal processing, computer vision and audio processing for time series classification.Dictionary based classifiers have the same broad structure.A sliding window of length $w$ is run across a series.For each window, the real valued series of length $w$ is converted through approximation and discretisation processes into a symbolic string of length $l$, which consists of $\alpha$ possible letters.The occurrence in a series of each 'word' from the dictionary defined by $l$ and $\alpha$ is counted, and once the sliding window has completed the series is transformed into a histogram.Classification is based on the histograms of the words extracted from the series, rather than the raw data.Currently four univariate dictionary based classifiers are implemented in sktime, all making use of the Symbolic Fourier Approximation (SFA)\[1\] transform to discretise series into words.These are the Bag of SFA Symbols (BOSS)\[2\], with options for improved efficiency and usability via contracting (cBOSS)\[3\], the Temporal Dictionary Ensemble (TDE)\[4\] and Word Extraction for Time Series Classification (WEASEL)\[6\]. WEASEL has a multivariate extension called MUSE\[7\].In this notebook, we will demonstrate how to use BOSS, cBOSS, WEASEL and TDE on the univariate ItalyPowerDemand dataset, and WEASEL+MUSE on the multivariate JapaneseVowels dataset. A toy sketch of the bag-of-words transform is given after the data loading below. References:\[1\] Schäfer, P., & Högqvist, M. (2012). SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology (pp. 516-527).\[2\] Schäfer, P. (2015). The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6), 1505-1530.\[3\] Middlehurst, M., Vickers, W., & Bagnall, A. (2019). Scalable dictionary classifiers for time series classification. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 11-19). Springer, Cham.\[4\] Middlehurst, M., Large, J., Cawley, G., & Bagnall, A. (2020). The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.\[5\] Large, J., Bagnall, A., Malinowski, S., & Tavenard, R. (2019). On time series classification with dictionary-based classifiers. Intelligent Data Analysis, 23(5), 1073-1089.\[6\] Schäfer, P., & Leser, U. (2017). Fast and accurate time series classification with WEASEL. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 637-646).\[7\] Schäfer, P., & Leser, U. (2018). Multivariate time series classification with WEASEL+MUSE. 3rd ECML/PKDD Workshop on AALTD. 1. Imports
###Code
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.dictionary_based import WEASEL
from sktime.classification.dictionary_based import TemporalDictionaryEnsemble
from sktime.classification.dictionary_based import MUSE
from sklearn import metrics
from sktime.datasets import load_italy_power_demand
from sktime.datasets.base import load_japanese_vowels # multivariate dataset
###Output
_____no_output_____
###Markdown
2. Load data
###Code
X_train, y_train = load_italy_power_demand(split='train', return_X_y=True)
X_test, y_test = load_italy_power_demand(split='test', return_X_y=True)
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(67, 1) (67,) (50, 1) (50,)
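###Markdown
Before the classifiers themselves, a toy sketch of the generic bag-of-words transform described in the introduction: slide a window of length $w$ across a series, discretise each window into a word of length $l$ over $\alpha$ letters, and count the resulting words. A simple quantile-based discretisation stands in for SFA here, purely for illustration.
###Code
# Toy bag-of-words transform for a single series (illustration only): slide a
# window of length w, discretise each window into a word of length l over an
# alphabet of size a via quantile bins, and count the words. Real dictionary
# classifiers use the SFA transform rather than this quantile discretisation.
import numpy as np
from collections import Counter
def toy_bag_of_words(series, w=8, l=4, a=3):
    series = np.asarray(series, dtype=float)
    breakpoints = np.quantile(series, np.linspace(0, 1, a + 1)[1:-1])  # a - 1 cut points
    letters = np.array(list("abcdefghijklmnopqrstuvwxyz"[:a]))
    word_counts = Counter()
    for start in range(len(series) - w + 1):
        window = series[start:start + w]
        paa = window.reshape(l, w // l).mean(axis=1)  # piecewise aggregate approximation (w divisible by l)
        word_counts["".join(letters[np.digitize(paa, breakpoints)])] += 1
    return word_counts
print(toy_bag_of_words(X_train.iloc[0, 0]))  # word histogram for the first training series
###Output
_____no_output_____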
###Markdown
3. Bag of SFA Symbols (BOSS)BOSS is an ensemble of individual BOSS classifiers making use of the SFA transform.The classifier performs a grid search through a large number of individual classifiers for the parameters $l$, $\alpha$, $w$ and $p$ (whether to normalise each window).Of the classifiers searched, only those within 92\% of the accuracy of the best classifier are retained.Individual BOSS classifiers use a non-symmetric distance function, the BOSS distance, in conjunction with a nearest neighbour classifier; a sketch of this distance is given after the example below.As tuning is handled inside the classifier, BOSS has very few parameters to alter and generally should be run using default settings.
###Code
boss = BOSSEnsemble(random_state=47)
boss.fit(X_train, y_train)
boss_preds = boss.predict(X_test)
print("BOSS Accuracy: " + str(metrics.accuracy_score(y_test, boss_preds)))
###Output
BOSS Accuracy: 0.9
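###Markdown
A minimal sketch of the non-symmetric BOSS distance used by the individual classifiers: squared differences of word counts are summed only over words that occur in the first histogram, so the distance from $a$ to $b$ generally differs from the distance from $b$ to $a$. This is an illustration only, not the sktime implementation.
###Code
# Minimal sketch of the non-symmetric BOSS distance between two word
# histograms: only words present in the first histogram contribute.
def boss_distance(hist_a, hist_b):
    return sum((count - hist_b.get(word, 0)) ** 2
               for word, count in hist_a.items() if count > 0)
h1 = {"abc": 3, "abd": 1}
h2 = {"abc": 1, "bcd": 4}
print(boss_distance(h1, h2), boss_distance(h2, h1))  # 5 vs 20 -> asymmetric
###Output
_____no_output_____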
###Markdown
4. Contractable BOSS (cBOSS)cBOSS significantly speeds up BOSS with no significant difference in accuracy by improving how the ensemble is formed.cBOSS utilises a filtered random selection of parameters to find its ensemble members.Each ensemble member is built on a 70% subsample of the train data, using random sampling without replacement; a sketch of this subsampling step follows the example below.An exponential weighting scheme for the predictions of the base classifiers is introduced.A new parameter for the number of parameter samples $k$ is introduced, of which the top $s$ (the maximum ensemble size) with the highest accuracy are kept for the final ensemble.The $k$ parameter is replaceable with a time limit $t$ through contracting.
###Code
# Recommended non-contract cBOSS parameters
cboss = BOSSEnsemble(randomised_ensemble=True,
n_parameter_samples=250,
max_ensemble_size=50)
# cBOSS with a 5 minute build time contract
#cboss = BOSSEnsemble(randomised_ensemble=True,
# time_limit=5,
# max_ensemble_size=50)
cboss.fit(X_train, y_train)
cboss_preds = cboss.predict(X_test)
print("cBOSS Accuracy: " + str(metrics.accuracy_score(y_test, cboss_preds)))
###Output
cBOSS Accuracy: 0.98
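###Markdown
A minimal sketch of the subsampling step described above: each ensemble member is trained on a 70% sample of the training cases drawn without replacement. This is an illustration only, not the sktime internals.
###Code
# Draw a 70% subsample of training case indices, without replacement, for one
# ensemble member (illustration only).
import numpy as np
def member_subsample(n_cases, proportion=0.7, seed=None):
    rng = np.random.default_rng(seed)
    return rng.choice(n_cases, size=int(n_cases * proportion), replace=False)
subsample_idx = member_subsample(len(X_train), seed=47)
print(len(X_train), "training cases ->", len(subsample_idx), "used by this member")
###Output
_____no_output_____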
###Markdown
5. Word Extraction for Time Series Classification (WEASEL)WEASEL transforms time series into feature vectors using a sliding-window approach, and these feature vectors are then analysed by a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set than BOSS. It extends SFA with bigrams, ANOVA f-test feature selection and Information Gain Binning (IGB); the binning comparison after the example below illustrates the `binning_strategy` choices.
###Code
weasel = WEASEL(binning_strategy="equi-depth", anova=False)
weasel.fit(X_train, y_train)
weasel_preds = weasel.predict(X_test)
print("WEASEL Accuracy: " + str(metrics.accuracy_score(y_test, weasel_preds)))
###Output
WEASEL Accuracy: 1.0
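###Markdown
The `binning_strategy` argument above controls how the continuous Fourier coefficients are split into alphabet bins. A minimal sketch of the difference between equi-width and equi-depth breakpoints is shown below; Information Gain Binning additionally uses the class labels and is omitted here. This is an illustration only, not the sktime internals.
###Code
# Sketch of equi-width vs equi-depth breakpoints for a 4-letter alphabet:
# equi-width splits the value range evenly, equi-depth places the same number
# of training values in each bin via quantiles (illustration only).
import numpy as np
values = np.random.default_rng(47).normal(size=1000)  # stand-in for one Fourier coefficient
alphabet_size = 4
equi_width = np.linspace(values.min(), values.max(), alphabet_size + 1)[1:-1]
equi_depth = np.quantile(values, np.linspace(0, 1, alphabet_size + 1)[1:-1])
print("equi-width breakpoints:", np.round(equi_width, 3))
print("equi-depth breakpoints:", np.round(equi_depth, 3))
###Output
_____no_output_____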
###Markdown
6. Temporal Dictionary Ensemble (TDE)TDE aggregates the best components of 3 classifiers extending from the original BOSS algorithm. The ensemble structure and improvements of cBOSS are used; Spatial pyramids are introduced from Spatial BOSS (S-BOSS)\[5\]; From Word Extraction for Time Series Classification (WEASEL)\[6\] bigrams and Information Gain Binning (IGB), a replacement for the multiple coefficient binning (MCB) used by SFA, are included.Two new parameters are included in the ensemble parameter search, the number of spatial pyramid levels $h$ and whether to use IGB or MCB $b$.A Gaussian processes regressor is used to select new parameter sets to evaluate for the ensemble, predicting the accuracy of a set of parameter values using past classifier performances.Inheriting the cBOSS ensemble structure, the number of parameter samples $k$, time limit $t$ and max ensemble size $s$ remain as parameters to be set accounting for memory and time requirements.
###Code
# Recommended non-contract TDE parameters
tde = TemporalDictionaryEnsemble(n_parameter_samples=250,
max_ensemble_size=100,
randomly_selected_params=50)
# TDE with a 5 minute build time contract
#tde = TemporalDictionaryEnsemble(time_limit=5,
# max_ensemble_size=100,
# randomly_selected_params=50)
tde.fit(X_train, y_train)
tde_preds = tde.predict(X_test)
print("TDE Accuracy: " + str(metrics.accuracy_score(y_test, tde_preds)))
###Output
TDE Accuracy: 1.0
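###Markdown
A minimal sketch of the spatial pyramid idea TDE borrows from S-BOSS: word counts are kept per pyramid level and region (whole series, halves, quarters, ...), so the final histogram retains coarse information about where in the series each word occurred. This is an illustration only, not the sktime internals.
###Code
# Toy spatial pyramid over a precomputed sequence of window words: level 0
# counts words over the whole series, level 1 over each half, level 2 over
# each quarter; counts are kept separate by prefixing (level, region).
from collections import Counter
def pyramid_histogram(window_words, levels=2):
    counts = Counter()
    for level in range(levels + 1):
        n_regions = 2 ** level
        region_len = max(1, len(window_words) // n_regions)
        for r in range(n_regions):
            region = window_words[r * region_len:(r + 1) * region_len]
            counts.update((level, r, word) for word in region)
    return counts
print(pyramid_histogram(["ab", "ab", "ba", "bb", "ab", "ba", "ba", "bb"]))
###Output
_____no_output_____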
###Markdown
*** 7. Multivariate Time SeriesWe can use WEASEL+MUSE for multivariate time series. 7.1 Load the Training Data
###Code
X_train, y_train = load_japanese_vowels(split="train", return_X_y=True)
X_test, y_test = load_japanese_vowels(split="test", return_X_y=True)
X_train = X_train[:50]
y_train = y_train[:50]
X_test = X_test[:50]
y_test = y_test[:50]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(50, 12) (50,) (50, 12) (50,)
###Markdown
7.2 WEASEL+MUSE (Multivariate Symbolic Extension)WEASEL+MUSE is the multivariate extension of WEASEL.
###Code
muse = MUSE()
muse.fit(X_train, y_train)
muse_preds = muse.predict(X_test)
print("MUSE Accuracy: " + str(metrics.accuracy_score(y_test, muse_preds)))
###Output
MUSE Accuracy: 1.0
|