## Simple examples

Say we have a class:
```python
class ProductionClass(object):
    def method(self, *args):
        # This does something we do not want to actually run in the test
        # ...
        pass
```
To mock `ProductionClass.method`, do this:
```python
from unittest.mock import MagicMock

thing = ProductionClass()
thing.method = MagicMock(return_value=3)
thing.method(3, 4, 5, key='value')
thing.method.assert_called_with(3, 4, 5, key='value')
```
## A more practical use case

- Mocking a module or system call
- Mocking an object or method
- Remember that after testing you want to restore the original state
- Use `mock.patch`

## An example

Write code to remove the generated files from LaTeX compilation, i.e. remove the `*.aux`, `*.log`, `*.pdf`, etc.

Here is a simple attempt:
```python
# clean_tex.py
import os


def cleanup(tex_file_pth):
    base = os.path.splitext(tex_file_pth)[0]
    for ext in ('.aux', '.log'):
        f = base + ext
        if os.path.exists(f):
            os.remove(f)
```
## Testing this with mock
```python
from unittest import mock

from clean_tex import cleanup


@mock.patch('clean_tex.os.remove')
def test_cleanup_removes_extra_files(mock_remove):
    cleanup('foo.tex')
    expected = [mock.call('foo.' + x) for x in ('aux', 'log')]
    mock_remove.assert_has_calls(expected)
```
- Note the mocked argument that is passed.
- Note that we did not mock `os.remove` itself.
- Mock where the object is looked up.

## Doing more
```python
from unittest import mock

from clean_tex import cleanup


@mock.patch('clean_tex.os.path')
@mock.patch('clean_tex.os.remove')
def test_cleanup_does_not_fail_when_files_dont_exist(mock_remove, mock_path):
    # Set up the mock_path so the files appear not to exist
    mock_path.exists.return_value = False
    cleanup('foo.tex')
    mock_remove.assert_not_called()
```
- Note the order of the passed arguments: decorators are applied bottom-up, so the patch closest to the function is passed first.
- Note the name of the method.

## Patching instance methods

Use `mock.patch.object` to patch an instance method:
```python
@mock.patch.object(ProductionClass, 'method')
def test_method(mock_method):
    obj = ProductionClass()
    obj.method(1)
    mock_method.assert_called_once_with(1)
```
Mock works as a context manager:
```python
with mock.patch.object(ProductionClass, 'method') as mock_method:
    obj = ProductionClass()
    obj.method(1)
    mock_method.assert_called_once_with(1)
```
## More articles on mock

- https://docs.python.org/3/library/unittest.mock.html
- https://www.toptal.com/python/an-introduction-to-mocking-in-python

## Pytest

Pytest offers many useful and convenient features, such as fixtures and parametrized tests (see the sketch after the debugging notes below).

## Odds and ends

### Linters

- `pyflakes`
- `flake8`

### IPython goodies

- Use `%run`
- Use `%pdb`
- `%debug`

### Debugging

- Debug with `%run`
- `pdb.set_trace()`
- IPython set trace:
```python
# Note: Tracer is deprecated in newer IPython releases; prefer
# `from IPython.core.debugger import set_trace; set_trace()`.
from IPython.core.debugger import Tracer; Tracer()()
```
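Pytest's parametrization pairs naturally with the mocking shown above. Here is a minimal sketch of a parametrized test for the `cleanup` function from `clean_tex.py`; the file names are illustrative.

```python
import pytest
from unittest import mock

import clean_tex


# pytest runs this test once per parameter value.
@pytest.mark.parametrize('tex_file', ['foo.tex', 'bar.tex'])
def test_cleanup_removes_generated_files(tex_file):
    # Patch in the clean_tex namespace, where the names are looked up.
    with mock.patch('clean_tex.os.remove') as mock_remove, \
         mock.patch('clean_tex.os.path.exists', return_value=True):
        clean_tex.cleanup(tex_file)
    assert mock_remove.call_count == 2  # one call each for .aux and .log
```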
# Support Vector Regression with MinMaxScaler

## Required Packages
```python
import warnings

import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.svm import SVR
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

warnings.filterwarnings('ignore')
```
## Initialization

Filepath of the CSV file:
```python
file_path = ""
```
List of features required for model training:
```python
features = []
```
Target feature for prediction.
```python
target = ''
```
## Data Fetching

Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```python
df = pd.read_csv(file_path)
df.head()
```
## Feature Selection

Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
```python
X = df[features]
Y = df[target]
```
## Data Preprocessing

Since most of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace the null values. The snippet below defines functions that remove null values, if any exist, and encode the string-class columns in the dataset as dummy (one-hot) variables.
```python
def NullClearner(df):
    # Fill numeric columns with the mean, other columns with the mode
    if isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"]):
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df


def EncodeX(df):
    # One-hot encode string/categorical columns
    return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```python
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
```
## Correlation Map

In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```python
f, ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt='.1f', ax=ax, mask=matrix)
plt.show()
```
## Data Splitting

The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is used to fit/train the model; the second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```python
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=123)
```
## Model

Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection. A support vector machine is a discriminative classifier formally defined by a separating hyperplane: given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies new cases. In two-dimensional space, this hyperplane is a line dividing the plane into two segments, with each class or group on one side.

Here we will use SVR. The SVR implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples.

### Model Tuning Parameters

1. `C` : float, default=1.0
   > Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty.
2. `kernel` : {'linear', 'poly', 'rbf', 'sigmoid', 'precomputed'}, default='rbf'
   > Specifies the kernel type to be used in the algorithm. It must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. If none is given, 'rbf' will be used. If a callable is given, it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples).
3. `gamma` : {'scale', 'auto'} or float, default='scale'
   > Gamma is a hyperparameter that we have to set before training the model. Gamma decides how much curvature we want in the decision boundary.
4. `degree` : int, default=3
   > Degree of the polynomial kernel function ('poly'). Ignored by all other kernels. Using degree 1 is similar to using a linear kernel. Increasing the degree parameter leads to longer training times.

### Data Scaling

MinMaxScaler transforms features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one. This transformation is often used as an alternative to zero-mean, unit-variance scaling. For more information on MinMaxScaler, [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html).
```python
model = make_pipeline(MinMaxScaler(), SVR())
model.fit(x_train, y_train)
```
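The pipeline above uses SVR's defaults. A sketch of how the tuning parameters described earlier could be passed explicitly -- the values here are illustrative, not tuned for any particular dataset:

```python
# Illustrative hyperparameter choices; tune them for your data,
# e.g. with sklearn.model_selection.GridSearchCV.
tuned_model = make_pipeline(
    MinMaxScaler(),
    SVR(C=10.0, kernel='rbf', gamma='scale', degree=3),
)
tuned_model.fit(x_train, y_train)
```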
## Model Accuracy

We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.

> **score**: The **score** function returns the coefficient of determination (R²) of the prediction.
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
Accuracy score 42.32 %
```
> **r2_score**: The **r2_score** function computes the coefficient of determination (R²), the proportion of the variance in the target explained by our model.

> **mae**: The **mean absolute error** function calculates the total error as the absolute average distance between the real data and the predicted data.

> **mse**: The **mean squared error** function squares the errors, penalizing the model more heavily for large errors.
```python
y_pred = model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test, y_pred) * 100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test, y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test, y_pred)))
```
```
R2 Score: 42.32 %
Mean Absolute Error 0.48
Mean Squared Error 0.38
```
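As a quick illustration of what these metrics compute, a minimal sketch recomputing them by hand with numpy, using the `y_pred` from the cell above:

```python
errors = np.asarray(y_test) - y_pred           # residuals
mae = np.mean(np.abs(errors))                  # mean absolute error
mse = np.mean(errors ** 2)                     # mean squared error
r2 = 1 - mse / np.var(np.asarray(y_test))      # coefficient of determination
print(mae, mse, r2)
```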
## Prediction Plot

We plot the actual observations (`y_test`) in green and the model's predictions on `x_test` in red for the first 20 test records, with the record number on the x-axis and the target value on the y-axis.
```python
plt.figure(figsize=(14, 10))
plt.plot(range(20), y_test[0:20], color="green")
plt.plot(range(20), model.predict(x_test[0:20]), color="red")
plt.legend(["Actual", "Prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
# Vertex AI client library: Custom training image classification model for online prediction for A/B testing

## Overview

This tutorial demonstrates how to use the Vertex AI Python client library to train and deploy a custom image classification model for A/B testing of online predictions.

## Dataset

The dataset used for this tutorial is the [CIFAR10 dataset](https://www.tensorflow.org/datasets/catalog/cifar10) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use is built into TensorFlow. The trained model predicts which of ten classes an image belongs to: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.

## Objective

In this tutorial, you learn how to create multiple instances of a custom model from a Python script in a Docker container using the Vertex AI client library, and then deploy them for A/B testing of online predictions. You can alternatively create custom models from the command line using gcloud or online using the Google Cloud Console.

The steps performed include:

- Create a Vertex AI custom job for training a model.
- Train two instances (A and B) of the TensorFlow model.
- Retrieve and load the model artifacts.
- View the model evaluations.
- Upload each model instance as a Vertex AI `Model` resource.
- Deploy the model instances to the same serving `Endpoint` resource.
- Make a prediction.
- Review the results from the two model instances.
- Undeploy the `Model` resources.

## Costs

This tutorial uses billable components of Google Cloud (GCP):

* Vertex AI
* Cloud Storage

Learn about [Vertex AI pricing](https://cloud.google.com/ai-platform-unified/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

## Installation

Install the latest version of the Vertex AI client library.
```python
import sys

if "google.colab" in sys.modules:
    USER_FLAG = ""
else:
    USER_FLAG = "--user"

! pip3 install -U google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of the *google-cloud-storage* library as well.
```python
! pip3 install -U google-cloud-storage $USER_FLAG
```
### Restart the kernel

Once you've installed the Vertex AI client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```python
import os

if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
## Before you begin

### GPU runtime

*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**.

### Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex AI APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Vertex AI Notebooks.
5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID
### Region

You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.

- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`

You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. For the latest support per region, see the [Vertex AI locations documentation](https://cloud.google.com/ai-platform-unified/docs/general/locations).
REGION = "us-central1" # @param {type: "string"}
### Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on created resources, you create a timestamp for each session instance and append it to the names of the resources created in this tutorial.
```python
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account

**If you are using Vertex AI Notebooks**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.

**Otherwise**, follow these steps:

1. In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and click **Create**.
4. In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click **Create**. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```python
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

When you submit a custom training job using the Vertex AI client library, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. You can then create an `Endpoint` resource based on this output in order to serve online predictions.

Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```python
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```python
! gsutil ls -al $BUCKET_NAME
```
### Set up variables

Next, set up some variables used throughout the tutorial.

#### Import libraries and define constants

Import the Vertex AI client library into our Python environment.
```python
import os
import sys
import time

import google.cloud.aiplatform_v1 as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```
#### Vertex AI constants

Set up the following constants for Vertex AI:

- `API_ENDPOINT`: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex AI location root path for dataset, model, job, pipeline and endpoint resources.
```python
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)

# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### Hardware Accelerators

Set the hardware accelerators (e.g., GPU), if any, for training and prediction.

Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify `(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)`.

For GPU, available accelerators include:

- `aip.AcceleratorType.NVIDIA_TESLA_K80`
- `aip.AcceleratorType.NVIDIA_TESLA_P100`
- `aip.AcceleratorType.NVIDIA_TESLA_P4`
- `aip.AcceleratorType.NVIDIA_TESLA_T4`
- `aip.AcceleratorType.NVIDIA_TESLA_V100`

Otherwise, specify `(None, None)` to use a container image that runs on a CPU.

*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. This is a known issue, fixed in TF 2.3, caused by static graph ops that are generated in the serving function. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
if os.getenv("IS_TESTING_TRAIN_GPU"): TRAIN_GPU, TRAIN_NGPU = ( aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_TRAIN_GPU")), ) else: TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) if os.getenv("IS_TESTING_DEPOLY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = ( aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPOLY_GPU")), ) else: DEPLOY_GPU, DEPLOY_NGPU = (None, None)
#### Container (Docker) image

Next, set the Docker container images for training and prediction.

Training images:

- TensorFlow 1.15
  - `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest`
  - `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest`
- TensorFlow 2.1
  - `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest`
  - `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest`
- TensorFlow 2.2
  - `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest`
  - `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest`
- TensorFlow 2.3
  - `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest`
  - `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest`
- TensorFlow 2.4
  - `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest`
  - `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest`
- XGBoost
  - `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1`
- Scikit-learn
  - `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest`
- PyTorch
  - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest`
  - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest`
  - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest`
  - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest`

For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).

Prediction images:

- TensorFlow 1.15
  - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest`
  - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest`
- TensorFlow 2.1
  - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest`
  - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest`
- TensorFlow 2.2
  - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest`
  - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest`
- TensorFlow 2.3
  - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest`
  - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest`
- XGBoost
  - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest`
  - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest`
  - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest`
  - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest`
- Scikit-learn
  - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest`
  - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest`
  - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`

For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2-1" if TF[0] == "2": if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf2-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf2-cpu.{}".format(TF) else: if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf-cpu.{}".format(TF) TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION) DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
#### Machine Type

Next, set the machine type to use for training and prediction.

- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
  - `machine type`
    - `n1-standard`: 3.75GB of memory per vCPU
    - `n1-highmem`: 6.5GB of memory per vCPU
    - `n1-highcpu`: 0.9GB of memory per vCPU
  - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96\]

*Note: The following is not supported for training:*

- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs

*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.*
if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Deploy machine type", DEPLOY_COMPUTE)
## Tutorial

Now you are ready to start creating your own custom model and training for CIFAR10.

### Set up clients

The Vertex AI client library works as a client/server model. On your side (the Python script) you will create a client that sends requests to and receives responses from the Vertex AI server.

You will use different clients in this tutorial for different steps in the workflow, so set them all up upfront.

- Model Service for `Model` resources.
- Endpoint Service for deployment.
- Job Service for batch jobs and custom training.
- Prediction Service for serving.
```python
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}


def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client


def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client


def create_endpoint_client():
    client = aip.EndpointServiceClient(client_options=client_options)
    return client


def create_prediction_client():
    client = aip.PredictionServiceClient(client_options=client_options)
    return client


clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()

for client in clients.items():
    print(client)
```
### Train a model

There are two ways you can train a custom model using a container image:

- **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
- **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model.

#### Prepare your custom job specification

Now that your clients are ready, your first step is to create a job specification for your custom training job. The job specification consists of the following:

- `worker_pool_spec`: The specification of the type of machine(s) you will use for training and how many (single or distributed).
- `python_package_spec`: The specification of the Python package to be installed with the pre-built container.

#### Prepare your machine specification

Now define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training.

- `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8.
- `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial, if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU.
- `accelerator_count`: The number of accelerators.
```python
if TRAIN_GPU:
    machine_spec = {
        "machine_type": TRAIN_COMPUTE,
        "accelerator_type": TRAIN_GPU,
        "accelerator_count": TRAIN_NGPU,
    }
else:
    machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
```
#### Prepare your disk specification (optional)

Now define the disk specification for your custom training job. This tells Vertex AI what type and size of disk to provision in each machine instance for the training.

- `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
- `boot_disk_size_gb`: Size of disk in GB.
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard] DISK_SIZE = 200 # GB disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
#### Examine the training package

##### Package layout

Before you start the training, look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout:

- PKG-INFO
- README.md
- setup.cfg
- setup.py
- trainer
  - \_\_init\_\_.py
  - task.py

The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.

The file `trainer/task.py` is the Python script for executing the custom training job. *Note*: when it is referred to in the worker pool specification, the directory slash is replaced with a dot (`trainer.task`) and the file suffix (`.py`) is dropped.

##### Package assembly

In the following cells, you will assemble the training package.
```python
# Make folder for Python training script
! rm -rf custom
! mkdir custom

# Add package information
! touch custom/README.md

setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg

setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py

pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO

# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
```
##### Contents of task.py

In the next cell, you write the contents of the training script, task.py. We won't go into detail; it's just there for you to browse. In summary, the script:

- Gets the directory where to save the model artifacts from the command line (`--model_dir`), and if not specified, from the environment variable `AIP_MODEL_DIR`.
- Loads the CIFAR10 dataset from TF Datasets (tfds).
- Builds a model using the TF.Keras model API.
- Compiles the model (`compile()`).
- Sets a training distribution strategy according to the argument `args.distribute`.
- Trains the model (`fit()`) with epochs and steps according to the arguments `args.epochs` and `args.steps`.
- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
```python
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10

import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()

parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
                    default=os.getenv("AIP_MODEL_DIR"), type=str,
                    help='Model dir.')
parser.add_argument('--lr', dest='lr',
                    default=0.01, type=float,
                    help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
                    default=10, type=int,
                    help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
                    default=200, type=int,
                    help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
                    help='distributed training strategy')
args = parser.parse_args()

print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())

# Single Machine, single compute device
if args.distribute == 'single':
    if tf.test.is_gpu_available():
        strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
    else:
        strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
    strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))

# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64


def make_datasets_unbatched():
    # Scaling CIFAR10 data from (0, 255] to (0., 1.]
    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255.0
        return image, label

    datasets, info = tfds.load(name='cifar10', with_info=True, as_supervised=True)
    return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()


# Build the Keras model
def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
        metrics=['accuracy'])
    return model


# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)

with strategy.scope():
    # Creation of dataset, and model building/compiling need to be within
    # `strategy.scope()`.
    model = build_and_compile_cnn_model()

model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
```
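Before submitting the job, you could smoke-test the script locally with tiny settings; this sketch assumes TensorFlow and `tensorflow_datasets` are installed in the notebook environment:

```python
# Quick local smoke test of the training script.
! python custom/trainer/task.py --model-dir=/tmp/cifar10_test \
    --epochs=1 --steps=5 --distribute=single
```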
#### Store the training script on your Cloud Storage bucket

Next, package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
```python
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
```
#### Define the worker pool specification for Model A

Next, define the worker pool specification for your custom training job. The worker pool specification consists of the following:

- `replica_count`: The number of instances to provision of this machine type.
- `machine_spec`: The hardware specification.
- `disk_spec`: (optional) The disk storage specification.
- `python_package`: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.

Let's dive deeper into the Python package specification:

- `executor_image_spec`: The Docker image configured for your custom training job.
- `package_uris`: A list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the Docker image.
- `python_module`: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking `trainer.task.py` -- note that it is not necessary to append the `.py` suffix.
- `args`: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
  - `"--model-dir=" + MODEL_DIR`: The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:
    - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or
    - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification.
  - `"--epochs=" + EPOCHS`: The number of epochs for training.
  - `"--steps=" + STEPS`: The number of steps (batches) per epoch.
  - `"--distribute=" + TRAIN_STRATEGY`: The training distribution strategy to use for single or distributed training.
    - `"single"`: single device.
    - `"mirror"`: all GPU devices on a single compute instance.
    - `"multi"`: all GPU devices on all compute instances.
JOB_NAME = "custom_job_A" + TIMESTAMP MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME) MODEL_DIR_A = MODEL_DIR if not TRAIN_NGPU or TRAIN_NGPU < 2: TRAIN_STRATEGY = "single" else: TRAIN_STRATEGY = "mirror" EPOCHS = 20 STEPS = 100 DIRECT = True if DIRECT: CMDARGS = [ "--model-dir=" + MODEL_DIR, "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] else: CMDARGS = [ "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] worker_pool_spec = [ { "replica_count": 1, "machine_spec": machine_spec, "disk_spec": disk_spec, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"], "python_module": "trainer.task", "args": CMDARGS, }, } ]
#### Assemble a job specification

Now assemble the complete description for the custom job specification:

- `display_name`: The human-readable name you assign to this custom job.
- `job_spec`: The specification for the custom job.
  - `worker_pool_specs`: The specification for the machine VM instances.
  - `base_output_directory`: This tells the service the Cloud Storage location where to save the model artifacts (when variable `DIRECT = False`). The service will then pass the location to the training script as the environment variable `AIP_MODEL_DIR`, and the path will be of the form `<output_uri_prefix>/model`.
```python
if DIRECT:
    job_spec = {"worker_pool_specs": worker_pool_spec}
else:
    job_spec = {
        "worker_pool_specs": worker_pool_spec,
        "base_output_directory": {"output_uri_prefix": MODEL_DIR},
    }

custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
```
#### Train the model

Now start the training of your custom training job on Vertex AI. Use the helper function `create_custom_job`, which takes the following parameter:

- `custom_job`: The specification for the custom job.

The helper function calls the job client service's `create_custom_job` method, with the following parameters:

- `parent`: The Vertex AI location path to `Dataset`, `Model` and `Endpoint` resources.
- `custom_job`: The specification for the custom job.

You will display a handful of the fields returned in the `response` object; the two of most interest are:

- `response.name`: The Vertex AI fully qualified identifier assigned to this custom training job. You save this identifier for use in subsequent steps.
- `response.state`: The current state of the custom training job.
```python
def create_custom_job(custom_job):
    response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
    print("name:", response.name)
    print("display_name:", response.display_name)
    print("state:", response.state)
    print("create_time:", response.create_time)
    print("update_time:", response.update_time)
    return response


response = create_custom_job(custom_job)
```
Now get the unique identifier for the custom job you created.
```python
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]

print(job_id)
```
#### Get information on a custom job

Next, use the helper function `get_custom_job`, which takes the following parameter:

- `name`: The Vertex AI fully qualified identifier for the custom job.

The helper function calls the job client service's `get_custom_job` method, with the following parameter:

- `name`: The Vertex AI fully qualified identifier for the custom job.

If you recall, you got the Vertex AI fully qualified identifier for the custom job in the `response.name` field when you called the `create_custom_job` method, and saved the identifier in the variable `job_id`.
```python
def get_custom_job(name, silent=False):
    response = clients["job"].get_custom_job(name=name)
    if silent:
        return response

    print("name:", response.name)
    print("display_name:", response.display_name)
    print("state:", response.state)
    print("create_time:", response.create_time)
    print("update_time:", response.update_time)
    return response


response = get_custom_job(job_id)
```
#### Wait for training to complete

Training the above model may take upwards of 20 minutes. Once your model is done training, you can calculate the actual training time by subtracting the job's creation time from its last update time. For your model, you will need to know the location of the saved model, which the Python script saved in your Cloud Storage bucket at `MODEL_DIR + '/saved_model.pb'`.
```python
while True:
    response = get_custom_job(job_id, True)
    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_path_to_deploy_A = None
        if response.state == aip.JobState.JOB_STATE_FAILED:
            break
    else:
        if not DIRECT:
            MODEL_DIR_A = MODEL_DIR_A + "/model"
        model_path_to_deploy_A = MODEL_DIR_A
        print("Training Time:", response.update_time - response.create_time)
        break
    time.sleep(60)

print("model_to_deploy:", model_path_to_deploy_A)
```
#### Define the worker pool specification for Model B

Next, define the worker pool specification for the second custom training job, as you did for Model A. The worker pool specification consists of the following:

- `replica_count`: The number of instances to provision of this machine type.
- `machine_spec`: The hardware specification.
- `disk_spec`: (optional) The disk storage specification.
JOB_NAME = "custom_job_B" + TIMESTAMP MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME) MODEL_DIR_B = MODEL_DIR if not TRAIN_NGPU or TRAIN_NGPU < 2: TRAIN_STRATEGY = "single" else: TRAIN_STRATEGY = "mirror" EPOCHS = 20 STEPS = 100 DIRECT = True if DIRECT: CMDARGS = [ "--model-dir=" + MODEL_DIR, "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] else: CMDARGS = [ "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] worker_pool_spec = [ { "replica_count": 1, "machine_spec": machine_spec, "disk_spec": disk_spec, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"], "python_module": "trainer.task", "args": CMDARGS, }, } ]
#### Assemble a job specification

Now assemble the complete description for the Model B custom job specification, exactly as you did for Model A.
```python
if DIRECT:
    job_spec = {"worker_pool_specs": worker_pool_spec}
else:
    job_spec = {
        "worker_pool_specs": worker_pool_spec,
        "base_output_directory": {"output_uri_prefix": MODEL_DIR},
    }

custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
```
#### Train the model

Now start the training of the second custom training job on Vertex AI, using the same `create_custom_job` helper function as for Model A.
```python
def create_custom_job(custom_job):
    response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
    print("name:", response.name)
    print("display_name:", response.display_name)
    print("state:", response.state)
    print("create_time:", response.create_time)
    print("update_time:", response.update_time)
    return response


response = create_custom_job(custom_job)
```
Now get the unique identifier for the custom job you created.
```python
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]

print(job_id)
```
#### Wait for training to complete

As before, wait for the second training job to complete (this may take upwards of 20 minutes), and record the location of the saved Model B artifacts.
```python
while True:
    response = get_custom_job(job_id, True)
    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_path_to_deploy_B = None
        if response.state == aip.JobState.JOB_STATE_FAILED:
            break
    else:
        if not DIRECT:
            MODEL_DIR_B = MODEL_DIR_B + "/model"
        model_path_to_deploy_B = MODEL_DIR_B
        print("Training Time:", response.update_time - response.create_time)
        break
    time.sleep(60)

print("model_to_deploy:", model_path_to_deploy_B)
```
#### Load the saved models

Your model instances are stored in TensorFlow SavedModel format in a Cloud Storage bucket. Load them from the Cloud Storage bucket, and then you can do some things with them, like evaluate the models and make a prediction.

To load, use the TF.Keras `model.load_model()` method, passing it the Cloud Storage path where each model is saved -- specified by `MODEL_DIR_A` and `MODEL_DIR_B`.
```python
import tensorflow as tf

model_A = tf.keras.models.load_model(MODEL_DIR_A)
model_B = tf.keras.models.load_model(MODEL_DIR_B)
```
#### Evaluate the model

Now find out how good the model is.

##### Load evaluation data

You will load the CIFAR10 test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is itself a tuple of two elements: the image data and the corresponding labels.

You don't need the training data, which is why it is loaded as `(_, _)`.

Before you can run the data through evaluation, you need to preprocess it:

- `x_test`: Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single-byte integer pixel with a 32-bit floating point number between 0 and 1.
- `y_test`: The labels are currently scalar (sparse). If you look back at the `compile()` step in the `trainer/task.py` script, you will find that the model was compiled for sparse labels, so nothing more needs to be done.
```python
import numpy as np
from tensorflow.keras.datasets import cifar10

(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)

print(x_test.shape, y_test.shape)
```
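The notebook does not show the evaluation call itself; a minimal sketch using the models and preprocessed test data from above:

```python
# Each evaluate() call returns [loss, accuracy], matching the metrics
# set in the compile() step of trainer/task.py.
results_A = model_A.evaluate(x_test, y_test, verbose=0)
results_B = model_B.evaluate(x_test, y_test, verbose=0)
print("Model A (loss, accuracy):", results_A)
print("Model B (loss, accuracy):", results_B)
```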
#### Adding client instance to model outputs

For A/B testing, each model needs to output two items in addition to the prediction:

- The model instance, whether it is A or B.
- An identifier for the client session where the prediction request originates.

The model identifier is baked into the prediction result returned by the `predict()` method, and you can use it to determine which model instance, A or B, made the prediction.

Now, why do you need to know the client session? In A/B testing, you're not comparing the models' objective performance -- you've done that already, in both model evaluation and post-deployment in continuous evaluation. You're comparing a business objective: did the customer click through the display ad, did they select a recommendation, was there a transaction conversion, etc. The business objective is measured on the client session, so you have to associate the model instance with the client session.

##### Adding client session output for A/B testing

In the TF.Keras Functional API, when we build the model using the `Model()` class, we pass two parameters, the input tensor and the output layer, connecting the inputs to the outputs:

```python
my_model = Model(inputs, outputs)
```

We will use this method to pass through client session identification at prediction time with your trained model instances. The syntax for specifying multiple outputs looks like this:

```python
my_model = Model(inputs, [outputs1, outputs2])
```

This assumes that the application server, which makes the prediction request, will add the client session ID to the prediction request. When the prediction response is received by the application server, it records both the model instance and the client session ID. An analysis program will then process these records to measure which model, A or B, better optimized the business objective.

##### Build the wrapper model

Let's get started. We can do this in three lines of Keras code per model:

1. Create a `Lambda()` layer. This layer takes as input the softmax output from the model and outputs the softmax output along with a numerical identifier representing the model instance. Because this is a model, the identifier needs to be:
   - output as a number,
   - output as a graph operator constant using `tf.constant()`,
   - given a tensor shape (not scalar), by specifying the value as a list and then converting it to a tensor.
2. Create a wrapper model around the original model, where:
   - the input is the original model input,
   - the output is the Lambda layer.

When you deploy the model, you will use the wrapper version instead of the original model.
import tensorflow as tf from tensorflow.keras import Input, Model from tensorflow.keras.layers import Lambda softmax = model_A.outputs[0] outputs = Lambda(lambda z: (z, tf.convert_to_tensor([tf.constant(0)])))(softmax) wrapper_model_A = Model(model_A.inputs, outputs) softmax = model_B.outputs[0] outputs = Lambda(lambda z: (z, tf.convert_to_tensor([tf.constant(1)])))(softmax) wrapper_model_B = Model(model_B.inputs, outputs)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
Local PredictionLet's now do a local prediction with one of your wrapper A/B models. You will pass three instances (images) for prediction, and get back:- The softmax prediction for each instance request.- The model A/B identifier. In this case 0 for A.
wrapper_model_A.predict(x_test[0:3])
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
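Since the wrapper has two outputs, `predict()` returns a list of two arrays. A small sketch of how the application-server side could unpack them (nothing new is assumed beyond the wrapper models above):
```
probs, model_id = wrapper_model_A.predict(x_test[0:3])
print(probs.shape)  # (3, 10) -- one softmax vector per CIFAR10 image
print(model_id)     # [0] -- identifies that model A produced these predictions
```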
Serving function for image dataTo pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).When you send a prediction or explanation request, the content of the request is base64 decoded into a TensorFlow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function uses `preprocess_fn` to decode the `tf.string` into a preprocessed image tensor matching the input requirements of the model:- `io.decode_jpeg` - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB).- `image.convert_image_dtype` - Converts integer pixel values to float32 and rescales (normalizes) them to the range [0, 1], matching the x/255 preprocessing used at training time. No further division by 255 is needed.- `image.resize` - Resizes the image to match the input shape of the model.At this point, the data can be passed to the model (`m_call`).
CONCRETE_INPUT = "numpy_inputs"


def _preprocess(bytes_input):
    decoded = tf.io.decode_jpeg(bytes_input, channels=3)
    # convert_image_dtype casts to float32 *and* rescales pixel values to [0, 1],
    # matching the x/255 normalization used at training time, so no further
    # division by 255 is needed here.
    decoded = tf.image.convert_image_dtype(decoded, tf.float32)
    resized = tf.image.resize(decoded, size=(32, 32))
    return resized


@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
    decoded_images = tf.map_fn(
        _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
    )
    return {
        CONCRETE_INPUT: decoded_images
    }  # User needs to make sure the key matches the model's input


def save_with_serving_fn(wrapper_model, save_path):
    """Fuse the preprocessing step to the given wrapper model and save it."""
    m_call = tf.function(wrapper_model.call).get_concrete_function(
        [tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
    )

    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def serving_fn(bytes_inputs):
        images = preprocess_fn(bytes_inputs)
        prob = m_call(**images)
        return prob

    tf.saved_model.save(
        wrapper_model, save_path, signatures={"serving_default": serving_fn}
    )


# The same serving function is attached to both A/B wrapper models.
save_with_serving_fn(wrapper_model_A, model_path_to_deploy_A)
save_with_serving_fn(wrapper_model_B, model_path_to_deploy_B)

loaded_A = tf.saved_model.load(model_path_to_deploy_A)
loaded_B = tf.saved_model.load(model_path_to_deploy_B)

serving_input = list(
    loaded_A.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
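Before deploying, you can sanity-check the reloaded serving signature locally. This is a minimal sketch, assuming the signature's input keyword matches the serving function's argument name (`bytes_inputs`); the REST layer performs the base64 step, so locally you feed raw JPEG bytes directly:
```
# Encode one test image as JPEG bytes and run it through the serving signature.
jpeg = tf.io.encode_jpeg(tf.cast(x_test[0] * 255, tf.uint8))
result = loaded_A.signatures["serving_default"](bytes_inputs=tf.constant([jpeg.numpy()]))
print({key: value.shape for key, value in result.items()})
```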
Upload the modelUse this helper function `upload_model` to upload your model, stored in SavedModel format, to the `Model` service, which will instantiate a Vertex AI `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex AI `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.The helper function takes the following parameters:- `display_name`: A human readable name for the `Model` resource.- `image_uri`: The container image for the model deployment.- `model_uri`: The Cloud Storage path to the SavedModel artifact. For this tutorial, these are the Cloud Storage locations where the wrapper models were saved (`model_path_to_deploy_A` and `model_path_to_deploy_B`).The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:- `parent`: The Vertex AI location root path for `Dataset`, `Model` and `Endpoint` resources.- `model`: The specification for the Vertex AI `Model` resource instance.Let's now dive deeper into the Vertex AI model specification `model`. This is a dictionary object that consists of the following fields:- `display_name`: A human readable name for the `Model` resource.- `metadata_schema_uri`: Since your model was built without a Vertex AI `Dataset` resource, you will leave this blank (`''`).- `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format.- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier, `DEPLOY_GPU != None`, to use a GPU; otherwise only a CPU is allocated.Uploading a model into a Vertex AI `Model` resource returns a long running operation, since it may take a few moments. You call `response.result()`, which is a synchronous call that returns when the Vertex AI `Model` resource is ready.The helper function returns the Vertex AI fully qualified identifier for the corresponding Vertex AI `Model` instance, `upload_model_response.model`. You will save the identifiers for subsequent steps in the variables `model_to_deploy_id_A` and `model_to_deploy_id_B`.
IMAGE_URI = DEPLOY_IMAGE


def upload_model(display_name, image_uri, model_uri):
    model = {
        "display_name": display_name,
        "metadata_schema_uri": "",
        "artifact_uri": model_uri,
        "container_spec": {
            "image_uri": image_uri,
            "command": [],
            "args": [],
            "env": [{"name": "env_name", "value": "env_value"}],
            "ports": [{"container_port": 8080}],
            "predict_route": "",
            "health_route": "",
        },
    }
    response = clients["model"].upload_model(parent=PARENT, model=model)
    print("Long running operation:", response.operation.name)
    upload_model_response = response.result(timeout=180)
    print("upload_model_response")
    print(" model:", upload_model_response.model)
    return upload_model_response.model


# Suffix the display names so the A and B uploads are distinguishable in the console.
model_to_deploy_id_A = upload_model(
    "cifar10-A-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy_A
)
model_to_deploy_id_B = upload_model(
    "cifar10-B-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy_B
)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
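If you want to confirm that both uploads succeeded, here is a small sketch using the `Model` client's `list_models` method, filtering by display name in Python for simplicity:
```
# List Model resources in this project/region and print the two cifar10 uploads.
for m in clients["model"].list_models(parent=PARENT):
    if "cifar10" in m.display_name:
        print(m.name, "->", m.display_name)
```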
Deploy the `Model` resourceNow deploy the trained Vertex AI custom `Model` resource. This requires two steps:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource. Create an `Endpoint` resourceUse this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:- `display_name`: A human readable name for the `Endpoint` resource.The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:- `display_name`: A human readable name for the `Endpoint` resource.Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex AI fully qualified identifier for the `Endpoint` resource: `response.name`.
ENDPOINT_NAME = "cifar10_endpoint-" + TIMESTAMP def create_endpoint(display_name): endpoint = {"display_name": display_name} response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint) print("Long running operation:", response.operation.name) result = response.result(timeout=300) print("result") print(" name:", result.name) print(" display_name:", result.display_name) print(" description:", result.description) print(" labels:", result.labels) print(" create_time:", result.create_time) print(" update_time:", result.update_time) return result result = create_endpoint(ENDPOINT_NAME)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
Now get the unique identifier for the `Endpoint` resource you created.
# The full unique ID for the endpoint endpoint_id = result.name # The short numeric ID for the endpoint endpoint_short_id = endpoint_id.split("/")[-1] print(endpoint_id)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
Compute instance scalingYou have several choices on scaling the compute instances for handling your online prediction requests:- Single instance: The online prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual scaling: The online prediction requests are split across a fixed number of compute instances that you manually specify. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances is provisioned and online prediction requests are evenly distributed across them.- Auto scaling: The online prediction requests are split across a scalable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed (and to de-provision down to), and set the maximum (`MAX_NODES`) number of compute instances to scale up to, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count` in your subsequent deployment request.
MIN_NODES = 1 MAX_NODES = 1
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
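To make the three strategies concrete, here is an illustrative sketch of the corresponding `min_replica_count`/`max_replica_count` pairs; the node counts are example values only:
```
# Example replica settings for the three scaling strategies described above.
SINGLE_INSTANCE = dict(min_replica_count=1, max_replica_count=1)
MANUAL_SCALING = dict(min_replica_count=3, max_replica_count=3)  # fixed fleet of 3
AUTO_SCALING = dict(min_replica_count=1, max_replica_count=5)    # scale 1..5 with load
```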
Deploy `Model` resource to the `Endpoint` resourceUse this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:- `model`: The Vertex AI fully qualified model identifier of the model to deploy from the training pipeline.- `deployed_model_display_name`: A human readable name for the deployed model.- `endpoint`: The Vertex AI fully qualified endpoint identifier to deploy the model to.The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:- `endpoint`: The Vertex AI fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.- `deployed_model`: The requirements specification for deploying the model.- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. - If there is only one model, then specify it as **{ "0": 100 }**, where "0" refers to the model being deployed and 100 means 100% of the traffic. - If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify it as **{ "0": percent, model_id: percent, ... }**, where `model_id` is the model ID of an existing model already deployed to the endpoint. The percents must add up to 100.Let's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the following fields:- `model`: The Vertex AI fully qualified model identifier of the (uploaded) model to deploy.- `display_name`: A human readable name for the deployed model.- `disable_container_logging`: (Optional; not set in the cell below.) Disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled while debugging the deployment and then disabled in production.- `dedicated_resources`: This refers to how many compute instances (replicas) are scaled for serving prediction requests. - `machine_spec`: The compute instance to provision. Use the variable you set earlier, `DEPLOY_GPU != None`, to use a GPU; otherwise only a CPU is allocated. - `min_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`. Traffic SplitLet's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary, and it can be a bit confusing at first. You can deploy more than one instance of your model to an endpoint, and then set how much of the traffic (as a percent) goes to each instance.Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1, but have it receive, say, only 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision. ResponseThe method returns a long running operation `response`. You will wait synchronously for the operation to complete by calling `response.result()`, which blocks until the model is deployed.
If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
DEPLOYED_NAME = "cifar10_deployed-" + TIMESTAMP def deploy_model( model, deployed_model_display_name, endpoint, traffic_split={"0": 100} ): # Accelerators can be used only if the model specifies a GPU image. if DEPLOY_GPU: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_type": DEPLOY_GPU, "accelerator_count": DEPLOY_NGPU, } else: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_count": 0, } deployed_model = { "model": model, "display_name": deployed_model_display_name, "dedicated_resources": { "min_replica_count": MIN_NODES, "max_replica_count": MAX_NODES, "machine_spec": machine_spec, }, } response = clients["endpoint"].deploy_model( endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split ) print("Long running operation:", response.operation.name) result = response.result() print("result") deployed_model = result.deployed_model print(" deployed_model") print(" id:", deployed_model.id) print(" model:", deployed_model.model) print(" display_name:", deployed_model.display_name) print(" create_time:", deployed_model.create_time) return deployed_model.id deployed_model_id_A = deploy_model( model_to_deploy_id_A, DEPLOYED_NAME + "-A", endpoint_id, {"0": 100} ) deployed_model_id_B = deploy_model( model_to_deploy_id_B, DEPLOYED_NAME + "-B", endpoint_id, {"0": 50, deployed_model_id_A: 50}, )
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
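For the A/B scenario in this notebook, the second `deploy_model` call splits traffic 50/50 between the two instances. As an illustrative sketch of the canary pattern described above (the percentages are example values, not what this notebook uses):
```
# Give a new challenger model ("0", the one being deployed) 10% of traffic,
# keeping 90% on the already-deployed model A.
canary_split = {"0": 10, deployed_model_id_A: 90}
```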
Make an online prediction requestNow make an online prediction request to your deployed models. Get test itemYou will use an example out of the test (holdout) portion of the dataset as a test item.
test_image = x_test[0] test_label = y_test[0] print(test_image.shape)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
Prepare the request contentYou are going to send the CIFAR10 image as a compressed JPEG image, instead of as raw uncompressed bytes:- `cv2.imwrite`: Use OpenCV to write the uncompressed image to disk as a compressed JPEG image. - Denormalize the image data from the [0, 1] range back to [0, 255]. - Convert the 32-bit floating point values to 8-bit unsigned integers. - Convert from the dataset's RGB channel order to the BGR order that OpenCV expects.- `tf.io.read_file`: Read the compressed JPEG image back into memory as raw bytes.- `base64.b64encode`: Encode the raw bytes into a base64 encoded string.
import base64

import cv2

# OpenCV assumes BGR channel order, so convert from the dataset's RGB first;
# otherwise the red and blue channels would be swapped in the saved JPEG.
bgr_image = cv2.cvtColor((test_image * 255).astype(np.uint8), cv2.COLOR_RGB2BGR)
cv2.imwrite("tmp.jpg", bgr_image)

# Read the compressed JPEG back as raw bytes, then base64-encode for transport.
jpeg_bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(jpeg_bytes.numpy()).decode("utf-8")
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
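As a quick round-trip check, decoding the base64 string and then the JPEG should recover an image with the original shape:
```
# Verify the encode/decode round trip before sending the request.
decoded_bytes = base64.b64decode(b64str)
recovered = tf.io.decode_jpeg(decoded_bytes, channels=3)
print(recovered.shape)  # (32, 32, 3)
```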
Send the prediction requestOk, now you have a test image. Use this helper function `predict_image`, which takes the following parameters:- `image`: The test image data as a base64 encoded string.- `endpoint`: The Vertex AI fully qualified identifier for the `Endpoint` resource where the `Model` resources were deployed.- `parameters_dict`: Additional parameters for serving.This function calls the prediction client service's `predict` method with the following parameters:- `endpoint`: The Vertex AI fully qualified identifier for the `Endpoint` resource where the `Model` resources were deployed.- `instances`: A list of instances (encoded images) to predict.- `parameters`: Additional parameters for serving.To pass the image data to the prediction service, in the previous step you encoded the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network. The serving binary decodes the base64 content back to raw bytes before invoking the serving function.Each instance in the prediction request is a dictionary entry of the form: {serving_input: {'b64': content}}- `serving_input`: The name of the input layer of the underlying model, retrieved from the serving signature earlier.- `'b64'`: A key that indicates the content is base64 encoded.- `content`: The compressed JPEG image bytes as a base64 encoded string.Since the `predict()` service can take multiple images (instances), you send your single image as a list of one image. As a final step, you package the instances list into Google's protobuf format -- which is what is passed to the `predict()` service.The `response` object returns a list, where each element corresponds to the corresponding image in the request. You will see in the output for each prediction:- The softmax confidence level, between 0 and 1, for each of the classes.- The model instance identifier added by the wrapper model (0 for A, 1 for B), which tells you which model served the request.Finally, when you are done experimenting, undeploy both models from the endpoint with the `undeploy_model` helper.
# These imports may already be present earlier in the notebook; they are needed
# here to package the instances into protobuf Values.
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value


def predict_image(image, endpoint, parameters_dict):
    # The format of each instance should conform to the deployed model's prediction input schema.
    instances_list = [{serving_input: {"b64": image}}]
    instances = [json_format.ParseDict(s, Value()) for s in instances_list]

    response = clients["prediction"].predict(
        endpoint=endpoint, instances=instances, parameters=parameters_dict
    )
    print("response")
    print(" deployed_model_id:", response.deployed_model_id)
    predictions = response.predictions
    print("predictions")
    for prediction in predictions:
        print(" prediction:", dict(prediction))


predict_image(b64str, endpoint_id, None)


def undeploy_model(deployed_model_id, endpoint):
    response = clients["endpoint"].undeploy_model(
        endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
    )
    print(response)


undeploy_model(deployed_model_id_A, endpoint_id)
undeploy_model(deployed_model_id_B, endpoint_id)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True

# Delete the dataset using the Vertex AI fully qualified identifier for the dataset
try:
    if delete_dataset and "dataset_id" in globals():
        clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the training pipeline using the Vertex AI fully qualified identifier for the pipeline
try:
    if delete_pipeline and "pipeline_id" in globals():
        clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
    print(e)

# Delete both uploaded models using their Vertex AI fully qualified identifiers
# (this notebook uploads two models, A and B)
try:
    if delete_model and "model_to_deploy_id_A" in globals():
        clients["model"].delete_model(name=model_to_deploy_id_A)
    if delete_model and "model_to_deploy_id_B" in globals():
        clients["model"].delete_model(name=model_to_deploy_id_B)
except Exception as e:
    print(e)

# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
    if delete_endpoint and "endpoint_id" in globals():
        clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the batch prediction job using the Vertex AI fully qualified identifier for the batch job
try:
    if delete_batchjob and "batch_job_id" in globals():
        clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

# Delete the custom job using the Vertex AI fully qualified identifier for the custom job
try:
    if delete_customjob and "job_id" in globals():
        clients["job"].delete_custom_job(name=job_id)
except Exception as e:
    print(e)

# Delete the hyperparameter tuning job using the Vertex AI fully qualified identifier for the job
try:
    if delete_hptjob and "hpt_job_id" in globals():
        clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r $BUCKET_NAME
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_online_ab_testing.ipynb
rastringer/ai-platform-samples
This is a TensorFlow 2 implementation of the GoogLeNet model, based on the paper linked below: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43022.pdf A few of the major differences between the paper and this implementation are:1. The number of classification categories here is 5 (the Linnaeus 5 dataset), compared to the paper's 1000.2. In the paper, the conv layers use the ReLU activation; this implementation uses ELU instead.
#For any array manipulations import numpy as np #For plotting graphs import matplotlib.pyplot as plt # For loading data from the file system import os # For randomly selecting data from the dataset import random # For displaying the confusion matrix in a pretty way import pandas # loading tensorflow packages import tensorflow as tf from tensorflow.keras import Model, Input from tensorflow.keras.layers import Conv2D, MaxPool2D, AveragePooling2D, Concatenate, Dense, BatchNormalization, Dropout, Flatten from tensorflow.keras.callbacks import ModelCheckpoint from tensorflow.keras.initializers import RandomNormal print(tf.__version__)
2.5.0
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Initializations
from tensorflow.python.client import device_lib


def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']


print("devices =", tf.config.list_physical_devices())
print(get_available_gpus())

# Shape of the input images
height = width = 224
channels = 3
input_shape = (224, 224, 3)

# batch_size is the number of images in a batch used to train and test the model
batch_size = 100

# num_classes is the number of categories that input images have to be classified into
# This has to be set based on the input dataset
# For the dataset the paper uses, num_classes = 1000
num_classes = 5

# Based on available computational power, size_factor can be varied.
# This will determine the model complexity and the number of trainable parameters
# This affects the feature map size of all conv layers
# The model from the paper uses size_factor=64
size_factor = 32

checkpoint_filePath = '/content/drive/MyDrive/MachineLearning/GoogleNet_experiment_1.h5'


# Initializing the random seeds so that we get consistent results
def set_seed(seed=31415):
    np.random.seed(seed)
    tf.random.set_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    os.environ['TF_DETERMINISTIC_OPS'] = '1'


set_seed()
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Data Loading and Preprocessing Procuring the dataset
path='/content/Linnaeus 5 256X256' # Check if the folder with the dataset already exists, if not copy it from the saved location if not os.path.isdir(path): !cp '/content/drive/MyDrive/MachineLearning/Linnaeus 5 256X256.rar' '/content/' get_ipython().system_raw("unrar x '/content/Linnaeus 5 256X256.rar'") categories = os.listdir(os.path.join(path, 'train')) print(len(categories), " categories found =", categories)
5 categories found = ['dog', 'berry', 'other', 'bird', 'flower']
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Training and Validation Datasets
train_image_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    os.path.join(path, 'train')
    , labels='inferred'
    , label_mode='categorical'
    , class_names=categories
    , batch_size=batch_size
    , image_size=(256, 256)
    , shuffle=True
    , seed=2
    , validation_split=0.1
    , subset='training'
)

validation_image_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    os.path.join(path, 'train')
    , labels='inferred'
    , label_mode='categorical'
    , class_names=categories
    , batch_size=batch_size
    , image_size=(256, 256)
    , shuffle=True
    , seed=2
    , validation_split=0.1
    , subset='validation'
)

print("Training class names found =", train_image_dataset.class_names)


def crop_images(images, labels):
    '''
    Expects the categories to be names of subfolders, with the images belonging
    to each category stored inside them. The images are read and resized to
    256x256x3, then cropped to 224x224x3 the way the paper describes
    (randomly choosing between the four corners and the center).
    '''
    # In order to clip the image from either the top-left, top-right, bottom-left,
    # bottom-right or center, we create a list of possible start positions
    corners_list = [0, (256 - input_shape[0]) // 2, 256 - input_shape[0]]
    # Sampling one number from the list of start positions
    offset_height = offset_width = random.sample(corners_list, 1)[0]
    images = tf.image.per_image_standardization(images - 127)
    images = images / tf.math.reduce_max(tf.math.abs(images))
    # Labels pass through unchanged; the auxiliary arm was dropped from the
    # final model, so a single label per image is sufficient.
    return tf.image.crop_to_bounding_box(images, offset_height, offset_width,
                                         input_shape[0], input_shape[0]), labels


validation_datasource = validation_image_dataset.map(crop_images)
validation_datasource = validation_datasource.cache().prefetch(buffer_size=tf.data.AUTOTUNE).shuffle(batch_size)

training_datasource = train_image_dataset.map(crop_images)
training_datasource = training_datasource.cache().prefetch(buffer_size=tf.data.AUTOTUNE).shuffle(batch_size)

for images, labels in training_datasource:
    print("images =", images.shape)
    print("labels =", type(labels))
    break
images = (100, 224, 224, 3) labels = <class 'tensorflow.python.framework.ops.EagerTensor'>
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
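Before training, it can help to eyeball a few cropped, standardized images (rescaled back into [0, 1] purely for display). A small optional sketch using only the imports and variables already defined in this notebook:
```
# Visual sanity check of the cropping and standardization.
for images, labels in training_datasource.take(1):
    plt.figure(figsize=(8, 2))
    for i in range(4):
        plt.subplot(1, 4, i + 1)
        img = (images[i] - tf.reduce_min(images[i])) / (
            tf.reduce_max(images[i]) - tf.reduce_min(images[i]))
        plt.imshow(img.numpy())
        plt.title(categories[int(tf.argmax(labels[i]))])
        plt.axis('off')
    plt.show()
```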
Test Data
test_image_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    os.path.join(path, 'test')
    , labels='inferred'
    , label_mode='categorical'
    , class_names=categories
    , batch_size=batch_size
    , image_size=(256, 256)
    , seed=2
)


def test_data_crop_images(images, labels):
    '''
    Defining a separate function for the test data because labels do not have
    to be concatenated during testing, and the map function does not allow
    passing extra arguments to a single function.
    Images are read and resized to 256x256x3, then cropped to 224x224x3 the
    way the paper describes (randomly choosing between the four corners and
    the center).
    '''
    # In order to clip the image from either the top-left, top-right, bottom-left,
    # bottom-right or center, we create a list of possible start positions
    corners_list = [0, (256 - input_shape[0]) // 2, 256 - input_shape[0]]
    # Sampling one number from the list of start positions
    offset_height = offset_width = random.sample(corners_list, 1)[0]
    images = tf.image.per_image_standardization(images - 127)
    images = images / tf.math.reduce_max(tf.math.abs(images))
    return tf.image.crop_to_bounding_box(images, offset_height, offset_width,
                                         input_shape[0], input_shape[0]), labels


test_datasource = test_image_dataset.map(test_data_crop_images)
test_datasource = test_datasource.cache().prefetch(buffer_size=tf.data.AUTOTUNE).shuffle(batch_size)
Found 2000 files belonging to 5 classes.
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Building the GoogleNet Architecture Define the inception block
def inception_block(input, intermediate_filter_size, output_filter_size
                    , kernel_initializer, bias_initializer
                    , use_bias=True, name_prefix=''):
    '''
    input = input tensor to be operated on
    intermediate_filter_size = dictionary with keys 3 and 5
                {3: filter size of Conv1x1 in the Conv3x3 pipeline,
                 5: filter size of Conv1x1 in the Conv5x5 pipeline }
    output_filter_size = dictionary with keys 1, 3, 5 and 'proj'
                {1: filter size of the Conv1x1 filter in the Conv1x1 pipeline,
                 3: filter size of the Conv3x3 filter in the Conv3x3 pipeline,
                 5: filter size of the Conv5x5 filter in the Conv5x5 pipeline,
                 'proj': filter size of the Conv1x1 projection in the MaxPool pipeline }
    name_prefix = string that will be prefixed to each of the layers' names in the block
    '''
    # Conv 1x1 pipeline taking the input and feeding directly to the output
    conv1 = Conv2D(filters=output_filter_size[1], kernel_size=1, strides=1
                   , activation='elu', padding='same', use_bias=use_bias
                   , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                   , name=name_prefix + 'conv1')(input)

    # Defining the Conv 1x1 -> Conv 3x3 pipeline
    conv1_3 = Conv2D(filters=intermediate_filter_size[3], kernel_size=1, strides=1
                     , activation='elu', padding='same', use_bias=use_bias
                     , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                     , name=name_prefix + 'conv1_3')(input)
    conv3 = Conv2D(filters=output_filter_size[3], kernel_size=3, strides=1
                   , activation='elu', padding='same', use_bias=use_bias
                   , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                   , name=name_prefix + 'conv3')(conv1_3)

    # Defining the Conv1x1 -> Conv5x5 pipeline
    conv1_5 = Conv2D(filters=intermediate_filter_size[5], kernel_size=1, strides=1
                     , activation='elu', padding='same', use_bias=use_bias
                     , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                     , name=name_prefix + 'conv1_5')(input)
    conv5 = Conv2D(filters=output_filter_size[5], kernel_size=5, strides=1
                   , activation='elu', padding='same', use_bias=use_bias
                   , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                   , name=name_prefix + 'conv5')(conv1_5)

    # Defining the MaxPool pipeline
    max_pool = MaxPool2D(pool_size=3, strides=1, padding='same'
                         , name=name_prefix + 'maxpool')(input)
    conv_projection = Conv2D(filters=output_filter_size['proj'], kernel_size=1, strides=1
                             , activation='elu', padding='same', use_bias=use_bias
                             , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                             , name=name_prefix + 'proj')(max_pool)

    # Concatenating the output of the above pipelines along the channel axis
    output = Concatenate(axis=3, name=name_prefix + 'concat')([conv1, conv3, conv5, conv_projection])
    return output
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
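A quick shape check of the block on a dummy input confirms that the four pipelines concatenate as expected. For GoogLeNet's 3a configuration with `size_factor=64`, the concatenated depth should be 64 + 128 + 32 + 32 = 256:
```
# Sanity check: build one inception block standalone and inspect its output shape.
init = tf.keras.initializers.GlorotUniform()
x = Input(shape=(28, 28, 192))
y = inception_block(x, intermediate_filter_size={3: 96, 5: 16},
                    output_filter_size={1: 64, 3: 128, 5: 32, 'proj': 32},
                    kernel_initializer=init, bias_initializer=init,
                    name_prefix='check_')
print(y.shape)  # expected: (None, 28, 28, 256)
```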
Defining the Auxillary Branch block
def auxillary_branch(input, num_classes, kernel_initializer
                     , bias_initializer
                     , filter_size=128
                     , use_bias=True, name_prefix=''):
    avg_pool = AveragePooling2D(pool_size=5, strides=3, padding='valid'
                                , name=name_prefix + 'avg_pool')(input)
    conv = Conv2D(filters=filter_size, kernel_size=1, strides=1
                  , padding='same', activation='elu', use_bias=use_bias
                  , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                  , name=name_prefix + 'conv')(avg_pool)
    # Flatten before the fully connected layer, matching the paper's branch layout
    # (avg pool -> 1x1 conv -> FC 1024 -> dropout 0.7 -> softmax)
    flatten = Flatten(name=name_prefix + 'flatten')(conv)
    dense = Dense(units=1024, activation='elu'
                  , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                  , name=name_prefix + 'fc')(flatten)
    dropout = Dropout(0.7, name=name_prefix + 'dropout')(dense)
    output = Dense(units=num_classes, activation='softmax'
                   , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                   , name=name_prefix + 'output')(dropout)
    return output
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Define the actual model
def build_googleNet(input_shape, size_factor=64, activation='elu'
                    , use_bias=True, num_classes=10):
    '''
    input_shape = tuple of 3 numbers (height, width, channels)
    size_factor = int (default 64). As per the paper, this should be 64. Since all the
                  convolutions are sized as multiples of 64, this is made configurable
                  to allow training a lighter version of the network if needed
    activation = str (default 'elu'). Since this is a big network, I have chosen to go
                 with the elu activation to give the layers a way out of ending up
                 with dead relus
    Note: the global batch_size is used as the Input layer's batch size.
    '''
    kernel_initializer = tf.keras.initializers.GlorotUniform()
    bias_initializer = tf.keras.initializers.GlorotUniform()

    input = Input(shape=input_shape, batch_size=batch_size, name="main_input")

    # The first portion of the GoogleNet architecture is similar to AlexNet/LeNet,
    # i.e. a single pipeline. Defining this here.
    # Conv layer with output 112x112x64
    conv_layer_1 = Conv2D(filters=size_factor, kernel_size=7, strides=2, activation='elu'
                          , padding='same', use_bias=use_bias
                          , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                          , name='conv_layer_1')(input)
    # MaxPool will give out 56x56x64
    maxpool_1 = MaxPool2D(pool_size=3, strides=2, padding='same'
                          , name='maxpool_1')(conv_layer_1)
    # Adding a Norm layer as per the textbook
    norm_1 = BatchNormalization(name='norm_1')(maxpool_1)

    # The paper says the next Conv3x3 layer needs a reduction layer as well
    # Defining the reduction Conv layer with output 56x56x64
    conv_layer_2a = Conv2D(filters=size_factor, kernel_size=1, strides=1, activation='elu'
                           , padding='same', use_bias=use_bias
                           , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                           , name='conv_layer_2')(norm_1)
    # Now defining the actual Conv3x3 layer with output 56x56x192
    conv_layer_2b = Conv2D(filters=size_factor*3, kernel_size=3, strides=1, activation='elu'
                           , padding='same', use_bias=use_bias
                           , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
                           , name='conv_layer_3')(conv_layer_2a)
    # max_pool brings the output to 28x28x192
    max_pool_2 = MaxPool2D(pool_size=3, strides=2, padding='same'
                           , name='maxpool_2')(conv_layer_2b)

    # Creating the first inception block 3a with output 28x28x256
    inception3a = inception_block(max_pool_2, intermediate_filter_size={3: 96, 5: 16}
                                  , output_filter_size={1: size_factor
                                                        , 3: size_factor*2
                                                        , 5: size_factor//2
                                                        , 'proj': size_factor//2}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_3a_')
    # Inception block 3b with output 28x28x480
    inception3b = inception_block(inception3a, intermediate_filter_size={3: 128, 5: 32}
                                  , output_filter_size={1: size_factor*2
                                                        , 3: size_factor*3
                                                        , 5: int(size_factor*1.5)
                                                        , 'proj': size_factor}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_3b_')
    # The paper specifies a MaxPool layer here, output 14x14x480
    max_pool_3 = MaxPool2D(pool_size=3, strides=2, padding='same', name='max_pool_3')(inception3b)

    # Inception block 4a, output 14x14x512
    inception4a = inception_block(max_pool_3, intermediate_filter_size={3: 112, 5: 24}
                                  , output_filter_size={1: size_factor*3
                                                        , 3: int(size_factor*3.25)   # 208
                                                        , 5: int(size_factor*0.75)   # 48
                                                        , 'proj': size_factor}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_4a_')

    # First auxiliary output with size 1x1xnum_classes -- disabled, see note below
    # output_aux1 = auxillary_branch(inception4a, num_classes
    #                                , filter_size=size_factor*2
    #                                , kernel_initializer=kernel_initializer
    #                                , bias_initializer=bias_initializer
    #                                , use_bias=use_bias, name_prefix='aux1_')

    # Inception block 4b with output 14x14x512
    inception4b = inception_block(inception4a, intermediate_filter_size={3: 96, 5: 16}
                                  , output_filter_size={1: int(size_factor*2.5)   # 160
                                                        , 3: int(size_factor*3.5) # 224
                                                        , 5: size_factor
                                                        , 'proj': size_factor}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_4b_')
    # Inception block 4c with output 14x14x512
    inception4c = inception_block(inception4b, intermediate_filter_size={3: 144, 5: 32}
                                  , output_filter_size={1: size_factor*2
                                                        , 3: size_factor*4   # 256
                                                        , 5: size_factor
                                                        , 'proj': size_factor}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_4c_')
    # Inception block 4d with output 14x14x528
    inception4d = inception_block(inception4c, intermediate_filter_size={3: 128, 5: 24}
                                  , output_filter_size={1: int(size_factor*1.75)  # 112
                                                        , 3: int(size_factor*4.5) # 288
                                                        , 5: size_factor
                                                        , 'proj': size_factor}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_4d_')

    # Second auxiliary output with size 1x1xnum_classes
    # output_aux2 = auxillary_branch(inception4d, num_classes, filter_size=size_factor*2
    #                                , kernel_initializer=kernel_initializer, bias_initializer=bias_initializer
    #                                , use_bias=use_bias, name_prefix='aux2_')
    # The paper later said that just one auxiliary branch is sufficient and there is
    # very negligible benefit from the second; hence it is removed here and excluded
    # from the final output.

    # Inception block 4e with output 14x14x832
    inception4e = inception_block(inception4d, intermediate_filter_size={3: 160, 5: 32}
                                  , output_filter_size={1: size_factor*4   # 256
                                                        , 3: size_factor*5 # 320
                                                        , 5: size_factor*2 # 128
                                                        , 'proj': size_factor*2}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_4e_')
    # The paper specifies a MaxPool layer here, output 7x7x832
    max_pool_4 = MaxPool2D(pool_size=3, strides=2, padding='same', name='max_pool_4')(inception4e)

    # Inception block 5a with output 7x7x832
    inception5a = inception_block(max_pool_4, intermediate_filter_size={3: 160, 5: 32}
                                  , output_filter_size={1: size_factor*4   # 256
                                                        , 3: size_factor*5 # 320
                                                        , 5: size_factor*2 # 128
                                                        , 'proj': size_factor*2}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_5a_')
    # Inception block 5b with output 7x7x1024
    inception5b = inception_block(inception5a, intermediate_filter_size={3: 192, 5: 48}
                                  , output_filter_size={1: size_factor*6   # 384
                                                        , 3: size_factor*6 # 384
                                                        , 5: size_factor*2 # 128
                                                        , 'proj': size_factor*2}
                                  , use_bias=use_bias
                                  , kernel_initializer=kernel_initializer
                                  , bias_initializer=bias_initializer
                                  , name_prefix='incep_5b_')

    # Avg Pool as specified by the paper, output shape 1x1x1024
    avg_pool = AveragePooling2D(pool_size=7, strides=1, padding='valid'
                                , name='avg_pool')(inception5b)
    dropout = Dropout(rate=0.4, name='dropout')(avg_pool)
    flatten = Flatten(name='flatten')(dropout)

    # Final FC layer with output shape 1x1xnum_classes
    pipeline_output = Dense(units=num_classes, activation='softmax'
                            , kernel_initializer=kernel_initializer
                            , bias_initializer=bias_initializer
                            , name='main_output')(flatten)

    # The auxiliary branches in the paper are used only during training;
    # they are not included in the output here, as their benefit is marginal
    model = Model(inputs=input, outputs=pipeline_output, name='googleNet')
    model.summary()
    return model


model = build_googleNet(input_shape, size_factor=64, activation='elu', num_classes=num_classes, use_bias=True)
Model: "googleNet" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== main_input (InputLayer) [(100, 224, 224, 3)] 0 __________________________________________________________________________________________________ conv_layer_1 (Conv2D) (100, 112, 112, 64) 9472 main_input[0][0] __________________________________________________________________________________________________ maxpool_1 (MaxPooling2D) (100, 56, 56, 64) 0 conv_layer_1[0][0] __________________________________________________________________________________________________ norm_1 (BatchNormalization) (100, 56, 56, 64) 256 maxpool_1[0][0] __________________________________________________________________________________________________ conv_layer_2 (Conv2D) (100, 56, 56, 64) 4160 norm_1[0][0] __________________________________________________________________________________________________ conv_layer_3 (Conv2D) (100, 56, 56, 192) 110784 conv_layer_2[0][0] __________________________________________________________________________________________________ maxpool_2 (MaxPooling2D) (100, 28, 28, 192) 0 conv_layer_3[0][0] __________________________________________________________________________________________________ incep_3a_conv1_3 (Conv2D) (100, 28, 28, 96) 18528 maxpool_2[0][0] __________________________________________________________________________________________________ incep_3a_conv1_5 (Conv2D) (100, 28, 28, 16) 3088 maxpool_2[0][0] __________________________________________________________________________________________________ incep_3a_maxpool (MaxPooling2D) (100, 28, 28, 192) 0 maxpool_2[0][0] __________________________________________________________________________________________________ incep_3a_conv1 (Conv2D) (100, 28, 28, 64) 12352 maxpool_2[0][0] __________________________________________________________________________________________________ incep_3a_conv3 (Conv2D) (100, 28, 28, 128) 110720 incep_3a_conv1_3[0][0] __________________________________________________________________________________________________ incep_3a_conv5 (Conv2D) (100, 28, 28, 32) 12832 incep_3a_conv1_5[0][0] __________________________________________________________________________________________________ incep_3a_proj (Conv2D) (100, 28, 28, 32) 6176 incep_3a_maxpool[0][0] __________________________________________________________________________________________________ incep_3a_concat (Concatenate) (100, 28, 28, 256) 0 incep_3a_conv1[0][0] incep_3a_conv3[0][0] incep_3a_conv5[0][0] incep_3a_proj[0][0] __________________________________________________________________________________________________ incep_3b_conv1_3 (Conv2D) (100, 28, 28, 128) 32896 incep_3a_concat[0][0] __________________________________________________________________________________________________ incep_3b_conv1_5 (Conv2D) (100, 28, 28, 32) 8224 incep_3a_concat[0][0] __________________________________________________________________________________________________ incep_3b_maxpool (MaxPooling2D) (100, 28, 28, 256) 0 incep_3a_concat[0][0] __________________________________________________________________________________________________ incep_3b_conv1 (Conv2D) (100, 28, 28, 128) 32896 incep_3a_concat[0][0] __________________________________________________________________________________________________ incep_3b_conv3 (Conv2D) (100, 28, 28, 192) 221376 incep_3b_conv1_3[0][0] 
__________________________________________________________________________________________________ incep_3b_conv5 (Conv2D) (100, 28, 28, 96) 76896 incep_3b_conv1_5[0][0] __________________________________________________________________________________________________ incep_3b_proj (Conv2D) (100, 28, 28, 64) 16448 incep_3b_maxpool[0][0] __________________________________________________________________________________________________ incep_3b_concat (Concatenate) (100, 28, 28, 480) 0 incep_3b_conv1[0][0] incep_3b_conv3[0][0] incep_3b_conv5[0][0] incep_3b_proj[0][0] __________________________________________________________________________________________________ max_pool_3 (MaxPooling2D) (100, 14, 14, 480) 0 incep_3b_concat[0][0] __________________________________________________________________________________________________ incep_4a_conv1_3 (Conv2D) (100, 14, 14, 112) 53872 max_pool_3[0][0] __________________________________________________________________________________________________ incep_4a_conv1_5 (Conv2D) (100, 14, 14, 24) 11544 max_pool_3[0][0] __________________________________________________________________________________________________ incep_4a_maxpool (MaxPooling2D) (100, 14, 14, 480) 0 max_pool_3[0][0] __________________________________________________________________________________________________ incep_4a_conv1 (Conv2D) (100, 14, 14, 192) 92352 max_pool_3[0][0] __________________________________________________________________________________________________ incep_4a_conv3 (Conv2D) (100, 14, 14, 208) 209872 incep_4a_conv1_3[0][0] __________________________________________________________________________________________________ incep_4a_conv5 (Conv2D) (100, 14, 14, 48) 28848 incep_4a_conv1_5[0][0] __________________________________________________________________________________________________ incep_4a_proj (Conv2D) (100, 14, 14, 64) 30784 incep_4a_maxpool[0][0] __________________________________________________________________________________________________ incep_4a_concat (Concatenate) (100, 14, 14, 512) 0 incep_4a_conv1[0][0] incep_4a_conv3[0][0] incep_4a_conv5[0][0] incep_4a_proj[0][0] __________________________________________________________________________________________________ incep_4b_conv1_3 (Conv2D) (100, 14, 14, 96) 49248 incep_4a_concat[0][0] __________________________________________________________________________________________________ incep_4b_conv1_5 (Conv2D) (100, 14, 14, 16) 8208 incep_4a_concat[0][0] __________________________________________________________________________________________________ incep_4b_maxpool (MaxPooling2D) (100, 14, 14, 512) 0 incep_4a_concat[0][0] __________________________________________________________________________________________________ incep_4b_conv1 (Conv2D) (100, 14, 14, 160) 82080 incep_4a_concat[0][0] __________________________________________________________________________________________________ incep_4b_conv3 (Conv2D) (100, 14, 14, 224) 193760 incep_4b_conv1_3[0][0] __________________________________________________________________________________________________ incep_4b_conv5 (Conv2D) (100, 14, 14, 64) 25664 incep_4b_conv1_5[0][0] __________________________________________________________________________________________________ incep_4b_proj (Conv2D) (100, 14, 14, 64) 32832 incep_4b_maxpool[0][0] __________________________________________________________________________________________________ incep_4b_concat (Concatenate) (100, 14, 14, 512) 0 incep_4b_conv1[0][0] incep_4b_conv3[0][0] 
[Truncated model.summary() output, condensed from the flattened table: it lists inception blocks 4b through 5b, each a Concatenate of four branches (a 1x1 conv, a 1x1 reduce feeding a 3x3 conv, a 1x1 reduce feeding a 5x5 conv, and a 3x3 maxpool feeding a 1x1 projection), followed by max_pool_4 (14x14 -> 7x7), avg_pool (7x7 -> 1x1), dropout, flatten, and the 5-class main_output Dense layer (5,125 params).]
Total params: 5,968,053
Trainable params: 5,967,925
Non-trainable params: 128
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
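As a quick sanity check on the truncated summary above: a Conv2D layer with bias has (k * k * C_in + 1) * C_out parameters. A small sketch verifying two of the listed layers:

def conv2d_params(k, c_in, c_out):
    # kernel weights plus one bias per output channel
    return (k * k * c_in + 1) * c_out

print(conv2d_params(1, 512, 128))  # incep_4c_conv1 -> 65664
print(conv2d_params(3, 144, 256))  # incep_4c_conv3 -> 332032

Both values match the counts reported by model.summary().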
Model Figure
tf.keras.utils.plot_model(model)
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Define Callbacks & Optimizer

Learning Rate Modification
def lr_schedule(epoch, learning_rate):
    # The paper talks about reducing the learning rate by 4% every 8 epochs,
    # so check whether another 8 epochs are complete
    if epoch > 7 and epoch % 8 == 0:
        # Reduce the learning rate by 4%
        # print("lr_schedule: epoch =", epoch)
        return learning_rate * 0.96
    else:
        return learning_rate

lrScheduler = tf.keras.callbacks.LearningRateScheduler(schedule=lr_schedule, verbose=1)
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
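To confirm the schedule behaves as the comment describes, here is a small sketch (using only the lr_schedule function defined above) that traces the compounded learning rate the LearningRateScheduler callback would apply:

lr = 1e-3
for epoch in range(25):
    lr = lr_schedule(epoch, lr)
    if epoch > 7 and epoch % 8 == 0:
        print("epoch {}: lr reduced to {:.6f}".format(epoch, lr))
# epoch 8: lr reduced to 0.000960
# epoch 16: lr reduced to 0.000922
# epoch 24: lr reduced to 0.000885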
Checkpoint Definition
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_filePath,
                                                monitor='val_loss',
                                                verbose=1,
                                                save_best_only=True,
                                                save_weights_only=False)
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
EarlyStopping Definition
earlyStopper = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                min_delta=0.0001,
                                                patience=9,
                                                verbose=1,
                                                restore_best_weights=True)
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Define Callbacks list
callbacks = [earlyStopper, checkpoint, lrScheduler]
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Define Optimizer
# The paper calls for SGD with a momentum of 0.9
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)
# optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Compile the Model
# First phase of training will be with the aux1 branch as output, ignoring the rest of the model
# aux1_model = Model(inputs=model.get_layer('main_input').input, outputs=model.get_layer('aux1_output').output)
# aux1_model.summary()
# aux1_model.reset_states()

model.compile(optimizer=optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Train the Model
metrics = model.fit(training_datasource,
                    batch_size=batch_size,
                    epochs=50,
                    callbacks=callbacks,
                    validation_data=validation_datasource,
                    shuffle=True)
# tf.keras.models.save_model(model, checkpoint_filePath, save_format='h5')
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
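Because the ModelCheckpoint above saves the full model (save_weights_only=False) whenever val_loss improves, the best model can be reloaded after training. A short sketch, assuming at least one checkpoint was written to checkpoint_filePath:

best_model = tf.keras.models.load_model(checkpoint_filePath)
best_model.evaluate(validation_datasource)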
Loss and Accuracy Plots
acc = metrics.history['accuracy']
val_acc = metrics.history['val_accuracy']
loss = metrics.history['loss']
val_loss = metrics.history['val_loss']
epochs_range = range(len(acc))

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Test the Model
predictions = []
actuals = []
for i, (images, labels) in enumerate(test_datasource):
    pred = model(images)
    for j in range(len(labels)):
        actuals.append(labels[j])
        predictions.append(pred[j])

# Print a few labels and predictions to ensure that there are no dead ReLUs
for j in range(10):
    print(labels[j].numpy(), "\t", pred[j].numpy())
[0. 0. 0. 1. 0.] 	 [5.2678269e-01 4.5223912e-04 1.5960732e-01 3.1243265e-01 7.2512066e-04]
[1. 0. 0. 0. 0.] 	 [9.3688345e-01 3.9315761e-05 4.2963952e-02 2.0080591e-02 3.2674830e-05]
[1. 0. 0. 0. 0.] 	 [5.6217247e-01 3.2935925e-05 6.1456640e-03 4.3134734e-01 3.0162817e-04]
[0. 1. 0. 0. 0.] 	 [2.6929042e-08 9.9104989e-01 8.4609836e-03 1.1975218e-05 4.7719607e-04]
[0. 0. 0. 1. 0.] 	 [1.1177908e-04 2.9847620e-09 7.3413408e-05 9.9979931e-01 1.5532069e-05]
[1. 0. 0. 0. 0.] 	 [6.4884788e-01 1.4298371e-07 3.5084713e-01 3.0439568e-04 4.9051403e-07]
[1. 0. 0. 0. 0.] 	 [9.9992180e-01 4.6575969e-06 3.7354968e-07 7.2269759e-05 9.6628423e-07]
[0. 0. 0. 1. 0.] 	 [2.2897586e-01 2.6874379e-03 5.8660603e-01 1.8155751e-01 1.7324543e-04]
[0. 0. 0. 1. 0.] 	 [0.4059488  0.00825622 0.0023726  0.5712473  0.01217511]
[0. 0. 0. 1. 0.] 	 [0.2535905  0.12598905 0.01019515 0.60805327 0.002172  ]
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
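The actuals and predictions lists collected above can also be reduced to an overall top-1 accuracy with the same argmax convention the confusion matrix below uses, for example:

y_true_idx = np.argmax(actuals, axis=1)
y_pred_idx = np.argmax(predictions, axis=1)
print("test accuracy: {:.4f}".format(np.mean(y_true_idx == y_pred_idx)))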
Confusion Matrix
import pandas as pd

pd.DataFrame(tf.math.confusion_matrix(np.argmax(actuals, axis=1),
                                      np.argmax(predictions, axis=1),
                                      num_classes=num_classes,
                                      dtype=tf.dtypes.int32).numpy(),
             columns=test_image_dataset.class_names,
             index=test_image_dataset.class_names)
_____no_output_____
CC0-1.0
DeepLearningForVisionSystems_Ch5_InceptionGoogleNet.ipynb
mkkadambi/machine-learning
Implementing the Gradient Descent Algorithm

In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Some helper functions for plotting and drawing lines

def plot_points(X, y):
    admitted = X[np.argwhere(y == 1)]
    rejected = X[np.argwhere(y == 0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s=25, color='blue', edgecolor='k')
    plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s=25, color='red', edgecolor='k')

def display(m, b, color='g--'):
    plt.xlim(-0.05, 1.05)
    plt.ylim(-0.05, 1.05)
    x = np.arange(-10, 10, 0.1)
    plt.plot(x, m * x + b, color)
_____no_output_____
MIT
4 - Neural Networks/1. Introduction To Neural Nets/1. Gradient Descent/GradientDescent.ipynb
2series/Artificial-Intelligence
Reading and plotting the data
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0, 1]])
y = np.array(data[2])
plot_points(X, y)
plt.show()
_____no_output_____
MIT
4 - Neural Networks/1. Introduction To Neural Nets/1. Gradient Descent/GradientDescent.ipynb
2series/Artificial-Intelligence
TODO: Implementing the basic functions

Now it's your turn to shine. Implement the following formulas, as explained in the text.

- Sigmoid activation function

$$\sigma(x) = \frac{1}{1+e^{-x}}$$

- Output (prediction) formula

$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$

- Error function

$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$

- The function that updates the weights

$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$

$$ b \longrightarrow b + \alpha (y - \hat{y})$$
# Implement the following functions

# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Output (prediction) formula
def output_formula(features, weights, bias):
    return sigmoid(np.matmul(features, weights) + bias)

# Error (log-loss) formula
def error_formula(y, output):
    return -y * np.log(output) - (1 - y) * np.log(1 - output)

# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
    yhat = output_formula(x, weights, bias)
    weights = weights + learnrate * (y - yhat) * x
    bias = bias + learnrate * (y - yhat)
    return weights, bias
_____no_output_____
MIT
4 - Neural Networks/1. Introduction To Neural Nets/1. Gradient Descent/GradientDescent.ipynb
2series/Artificial-Intelligence
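A quick hand-checkable test of these functions (values chosen arbitrarily): with w = (1, 1), b = 0 and x = (0.5, 0.5), the linear score is 1, so the prediction is sigmoid(1) which is about 0.7311, and the log-loss for y = 1 is -log(0.7311), about 0.3133:

x = np.array([0.5, 0.5])
w = np.array([1.0, 1.0])
b = 0.0
yhat = output_formula(x, w, b)
print(yhat)                    # ~0.7311
print(error_formula(1, yhat))  # ~0.3133
w, b = update_weights(x, 1, w, b, 0.1)
print(w, b)                    # w ~ [1.0134, 1.0134], b ~ 0.0269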
Training function

This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
np.random.seed(44)

epochs = 100
learnrate = 0.01

def train(features, targets, epochs, learnrate, graph_lines=False):
    errors = []
    n_records, n_features = features.shape
    last_loss = None
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
    bias = 0
    for e in range(epochs):
        print("starting epoch:{}".format(e))
        for x, y in zip(features, targets):
            weights, bias = update_weights(x, y, weights, bias, learnrate)

        # Print out the log-loss error on the training set
        out = output_formula(features, weights, bias)
        loss = np.mean(error_formula(targets, out))
        errors.append(loss)
        if e % (epochs / 10) == 0:
            print("\n========== Epoch", e, "==========")
            if last_loss and last_loss < loss:
                print("Train loss: ", loss, " WARNING - Loss Increasing")
            else:
                print("Train loss: ", loss)
            last_loss = loss
            predictions = out > 0.5
            accuracy = np.mean(predictions == targets)
            print("Accuracy: ", accuracy)
    return weights, bias, errors

def train_plot(features, targets, weights, bias):
    # Plot the solution boundary
    plt.title("Solution boundary")
    display(-weights[0] / weights[1], -bias / weights[1], 'black')
    plot_points(features, targets)
    plt.show()

def train_err(errors):
    plt.title("Error Plot")
    plt.xlabel('Number of epochs')
    plt.ylabel('Error')
    plt.plot(errors)
    plt.show()
_____no_output_____
MIT
4 - Neural Networks/1. Introduction To Neural Nets/1. Gradient Descent/GradientDescent.ipynb
2series/Artificial-Intelligence
Time to train the algorithm!

When we run the function, we'll obtain the following:

- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
weights, bias, errors = train(X, y, epochs, learnrate, True)
train_plot(X, y, weights, bias)
train_err(errors)
_____no_output_____
MIT
4 - Neural Networks/1. Introduction To Neural Nets/1. Gradient Descent/GradientDescent.ipynb
2series/Artificial-Intelligence
hard-coded arguments

explain GCN model
# get args from the main_gnn CLI
class Argument(object):
    name = "args"

args = Argument()
args.batch_size = 256
args.num_workers = 0
args.num_layers = 5
args.emb_dim = 600
args.drop_ratio = 0
args.graph_pooling = "sum"
args.checkpoint_dir = "models/gin-virtual/checkpoint"
args.device = 0

device = torch.device("cuda:" + str(args.device)) if torch.cuda.is_available() else torch.device("cpu")
# device = "cpu"
device

shared_params = {
    'num_layers': args.num_layers,
    'emb_dim': args.emb_dim,
    'drop_ratio': args.drop_ratio,
    'graph_pooling': args.graph_pooling
}
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb
load model
from gnn import GNN

"""
LOAD checkpoint data
"""
checkpoint = torch.load(os.path.join(args.checkpoint_dir, 'checkpoint.pt'))
checkpoint.keys()

gnn_name = "gin-virtual"
gnn_type = "gin"
virtual_node = True

model = GNN(gnn_type=gnn_type, virtual_node=virtual_node, **shared_params).to(device)
model.load_state_dict(checkpoint["model_state_dict"])
model.state_dict()
model.eval()
type(model)

optimizer = optim.Adam(model.parameters(), lr=0.001)
scheduler = StepLR(optimizer, step_size=300, gamma=0.25)
reg_criterion = torch.nn.L1Loss()
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb
load data
### importing OGB-LSC
from ogb.lsc import PygPCQM4MDataset, PCQM4MEvaluator

dataset = PygPCQM4MDataset(root='dataset/')
split_idx = dataset.get_idx_split()
split_idx["train"], split_idx["test"], split_idx["valid"]

valid_loader = DataLoader(dataset[split_idx["valid"]], batch_size=args.batch_size, shuffle=False, num_workers=args.num_workers)
# valid_loader = DataLoader(dataset[queryID], batch_size=args.batch_size, shuffle=False, num_workers=args.num_workers)
valid_loader
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb
triplet loss
""" load triplet dataset """ name = "valid" anchor_loader = DataLoader(dataset[split_idx[name]], batch_size=args.batch_size, shuffle=True, num_workers = args.num_workers) positive_loader = DataLoader(dataset[split_idx[name]], batch_size=args.batch_size, shuffle=True, num_workers = args.num_workers) negative_loader = DataLoader(dataset[split_idx[name]], batch_size=args.batch_size, shuffle=True, num_workers = args.num_workers) """ get embedding """ model_activation = {} def get_activation(name): def hook(model, input, output): model_activation[name] = output return hook model.gnn_node.register_forward_hook(get_activation('gnn_node')) """ define triplet loss """ import torch import torch.nn as nn import torch.nn.functional as F from torch import Tensor from torch_geometric.nn import global_add_pool class TripletLossRegression(nn.Module): """ anchor, positive, negative are node-level embeddings of a GNN before they are sent to a pooling layer, and hence are expected to be matrices. anchor_gt, positive_gt, and negative_gt are ground truth tensors that correspond to the ground-truth values of the anchor, positive, and negative respectively. """ def __init__(self, margin: float = 0.0, eps=1e-6): super(TripletLossRegression, self).__init__() self.margin = margin self.eps = eps def forward(self, anchor_batch, negative_batch, positive_batch, anchor: Tensor, negative: Tensor, positive: Tensor, anchor_gt: Tensor, negative_gt: Tensor, positive_gt: Tensor) -> Tensor: anchor = global_add_pool(anchor, anchor_batch) positive = global_add_pool(positive, positive_batch) negative = global_add_pool(negative, negative_batch) pos_distance = torch.linalg.norm(positive - anchor, dim=1) negative_distance = torch.linalg.norm(negative - anchor, dim=1) coeff = torch.div(torch.abs(negative_gt - anchor_gt) , (torch.abs(positive_gt - anchor_gt) + self.eps)) loss = F.relu((pos_distance - coeff * negative_distance) + self.margin) return torch.mean(loss) # def triplet_loss_train(model, device, anchor_loader, negative_loader, positive_loader, optimizer, gnn_name): model.train() loss_accum = 0 triplet_loss_criterion = TripletLossRegression() for step, (anchor_batch, negative_batch, positive_batch) in \ enumerate(zip(tqdm(anchor_loader, desc="Iteration"), negative_loader, positive_loader)): anchor_batch = anchor_batch.to(device) pred_anchor = model(anchor_batch).view(-1,) anchor_embed = model_activation['gnn_node'] negative_batch = negative_batch.to(device) pred_neg = model(negative_batch).view(-1,) neg_embed = model_activation['gnn_node'] positive_batch = positive_batch.to(device) pred_pos= model(positive_batch).view(-1,) pos_embed = model_activation['gnn_node'] optimizer.zero_grad() mae_loss = reg_criterion(pred_anchor, anchor_batch.y) tll_loss = triplet_loss_criterion(anchor_batch.batch, negative_batch.batch, positive_batch.batch, anchor_embed, neg_embed, pos_embed, anchor_batch.y, negative_batch.y, positive_batch.y) loss = mae_loss + tll_loss if gnn_name == 'gin-virtual-bnn': kl_loss = model.get_kl_loss()[0] loss += kl_loss loss.backward() optimizer.step() loss_accum += loss.detach().cpu().item() # return loss_accum / (step + 1) loss_accum / (step + 1) raise Exception("") """ IMPORTANT: GRAPH QUERY ID Pick the graph """ selectedID = 75088 #0 #131054 queryID = split_idx["valid"][selectedID:selectedID + 1] queryID list(valid_loader)
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb
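Before using it in the training loop above, TripletLossRegression can be smoke-tested on synthetic node embeddings (shapes here are arbitrary): three "graphs" of 4 nodes each, all assigned to graph index 0, with scalar ground-truth targets:

emb_dim = 8
anchor = torch.randn(4, emb_dim)
positive = torch.randn(4, emb_dim)
negative = torch.randn(4, emb_dim)
batch = torch.zeros(4, dtype=torch.long)  # every node belongs to graph 0
criterion = TripletLossRegression(margin=0.1)
loss = criterion(batch, batch, batch,
                 anchor, negative, positive,
                 torch.tensor([1.0]), torch.tensor([3.0]), torch.tensor([1.5]))
print(loss)  # a non-negative scalar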
predict
batch = list(valid_loader)[0]
data = batch[0]
data

batch = batch.to(device)
with torch.no_grad():
    pred = model(batch).view(-1,)
pred

# .item() assumes the loader holds a single graph (see queryID above)
y_true = data.y.item()
y_pred = pred.item()
y_true, y_pred
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb
plot sample
import networkx as nx
import matplotlib.pyplot as plt

def plotGraph(data, y_pred, y_true, ax, printnodelabel=False, printedgelabel=False):
    edges = data.edge_index.T.tolist()
    edges = np.array(edges)
    edges = [(x[0][0], x[0][1], {"feat": str(x[1])}) for x in list(zip(edges.tolist(), data.edge_attr.tolist()))]
    nodes = [(x[0], {"feat": str(x[1])}) for x in enumerate(data.x.tolist())]

    G = nx.Graph()
    G.add_nodes_from(nodes)
    G.add_edges_from(edges)

    nodelabels = nx.get_node_attributes(G, 'feat')
    edgelabels = nx.get_edge_attributes(G, "feat")
    pos = nx.spring_layout(G)

    ax.set_title("pred={:.2f}, true={:.2f}".format(y_pred, y_true))
    if printnodelabel:
        nx.draw(G, pos, labels=nodelabels, ax=ax, node_size=40)
    else:
        nx.draw(G, pos, ax=ax, node_size=40)
    if printedgelabel:
        nx.draw_networkx_edge_labels(G, pos, ax=ax, edge_labels=edgelabels)

fig, ax = plt.subplots()
plotGraph(data, y_pred, y_true, ax, False, True)
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb
perturb edge features

Each edge feature column can take (5, 6, 2) possible values, respectively.
import ogb.utils as utils

edgeFeatDims = utils.features.get_bond_feature_dims()
edgeFeatDims

perturb_data_list = []
for _ in range(5000):
    # clone the original data
    pData = data.clone()
    # create random noise
    randomNoise = np.random.randint(low=-4, high=4, size=data.edge_attr.shape)
    randomNoise = torch.tensor(randomNoise)
    # add edge_attr noise, clipping each column to its valid range
    pData.edge_attr += randomNoise
    pData.edge_attr[:, 0] = pData.edge_attr[:, 0].clip(0, edgeFeatDims[0] - 1)
    pData.edge_attr[:, 1] = pData.edge_attr[:, 1].clip(0, edgeFeatDims[1] - 1)
    pData.edge_attr[:, 2] = pData.edge_attr[:, 2].clip(0, edgeFeatDims[2] - 1)
    perturb_data_list.append(pData)

len(perturb_data_list)

valid_loader = DataLoader(perturb_data_list, batch_size=args.batch_size, shuffle=False, num_workers=args.num_workers)

# get data
batch = list(valid_loader)[0]
batch = batch.to(device)
with torch.no_grad():
    pred = model(batch)  # .view(-1,)
pred.shape

plt.title("Perturb edge features. Label: {:.2f}".format(y_true))
plt.hist(pred.view(-1).tolist())
plt.axvline(y_pred, c="r")
plt.show()
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb
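The histogram gives a visual impression; the spread can also be quantified directly from the pred tensor computed above, for example:

preds = np.array(pred.view(-1).tolist())
print("std of perturbed predictions:", preds.std())
print("max deviation from the unperturbed prediction:", np.abs(preds - y_pred).max())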
Given fixed node features and topology, perturbing the edge features does not disturb the output much.

perturb node features
nodeDims = utils.features.get_atom_feature_dims()
nodeDims

perturb_data_list = []
for _ in range(1000):
    # clone the original data
    pData = data.clone()
    # create random noise
    randomNoise = np.random.randint(low=-1, high=1, size=data.x.shape)
    randomNoise = torch.tensor(randomNoise)
    # add node-feature noise, clipping every column to its own valid range
    pData.x += randomNoise
    for col in range(pData.x.shape[1]):
        pData.x[:, col] = pData.x[:, col].clip(0, nodeDims[col] - 1)
    perturb_data_list.append(pData)

len(perturb_data_list)

# perturb_data_list = [data]
# for i in range(1):
#     pData = data.clone()
#     # pData.x[-1, 0] = torch.tensor(i)
#     pData.x[-1] = torch.tensor([5, 0, 4, 5, 3, 0, 2, 0, 0])
#     perturb_data_list.append(pData)

valid_loader = DataLoader(perturb_data_list, batch_size=args.batch_size, shuffle=False, num_workers=args.num_workers)

# get data
batch = list(valid_loader)[0]
batch = batch.to(device)
with torch.no_grad():
    pred = model(batch)  # .view(-1,)
pred.shape  # , pred

plt.title("Perturb node features. Label: {:.2f}".format(y_true))
plt.hist(pred.view(-1).tolist())
plt.axvline(y_pred, c="r")
plt.show()
_____no_output_____
MIT
examples/lsc/pcqm4m/.ipynb_checkpoints/triplet-loss-checkpoint.ipynb
edwardelson/ogb