``` # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Vertex client library: Custom training tabular regression model with pipeline for online prediction with training pipeline <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_online_pipeline.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_online_pipeline.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> ## Overview This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom tabular regression model for online prediction, using a training pipeline. ### Dataset The dataset used for this tutorial is the [Boston Housing Prices dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD. ### Objective In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a prediction on the deployed model by sending data. You can alternatively create custom models using `gcloud` command-line tool or online using Google Cloud Console. The steps performed include: - Create a Vertex custom job for training a model. - Create a `TrainingPipeline` resource. - Train a TensorFlow model with the `TrainingPipeline` resource. - Retrieve and load the model artifacts. - View the model evaluation. - Upload the model as a Vertex `Model` resource. - Deploy the `Model` resource to a serving `Endpoint` resource. - Make a prediction. - Undeploy the `Model` resource. ### Costs This tutorial uses billable components of Google Cloud (GCP): * Vertex AI * Cloud Storage Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. ## Installation Install the latest version of Vertex client library. ``` import os import sys # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install -U google-cloud-aiplatform $USER_FLAG ``` Install the latest GA version of *google-cloud-storage* library as well. ``` ! 
pip3 install -U google-cloud-storage $USER_FLAG ``` ### Restart the kernel Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages. ``` if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) ``` ## Before you begin ### GPU runtime *Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** ### Set up your Google Cloud project **The following steps are required, regardless of your notebook environment.** 1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs. 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) 3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component) 4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook. 5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. **Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. ``` PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID ``` #### Region You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. - Americas: `us-central1` - Europe: `europe-west4` - Asia Pacific: `asia-east1` You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations) ``` REGION = "us-central1" # @param {type: "string"} ``` #### Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. ``` from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") ``` ### Authenticate your Google Cloud account **If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. **Otherwise**, follow these steps: In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page. **Click Create service account**. In the **Service account name** field, enter a name, and click **Create**. In the **Grant this service account access to project** section, click the Role drop-down list. 
Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. ``` # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' ``` ### Create a Cloud Storage bucket **The following steps are required, regardless of your notebook environment.** When you submit a custom training job using the Vertex client library, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex runs the code from this package. In this tutorial, Vertex also saves the trained model that results from your job in the same bucket. You can then create an `Endpoint` resource based on this output in order to serve online predictions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. ``` BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP ``` **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket. ``` ! gsutil mb -l $REGION $BUCKET_NAME ``` Finally, validate access to your Cloud Storage bucket by examining its contents: ``` ! gsutil ls -al $BUCKET_NAME ``` ### Set up variables Next, set up some variables used throughout the tutorial. ### Import libraries and define constants #### Import Vertex client library Import the Vertex client library into our Python environment. ``` import time from google.cloud.aiplatform import gapic as aip from google.protobuf import json_format from google.protobuf.json_format import MessageToJson, ParseDict from google.protobuf.struct_pb2 import Struct, Value ``` #### Vertex constants Setup up the following constants for Vertex: - `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services. - `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources. ``` # API service endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION ``` #### CustomJob constants Set constants unique to CustomJob training: - Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for. 
``` CUSTOM_TASK_GCS_PATH = ( "gs://google-cloud-aiplatform/schema/trainingjob/definition/custom_task_1.0.0.yaml" ) ``` #### Hardware Accelerators Set the hardware accelerators (e.g., GPU), if any, for training and prediction. Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100 Otherwise specify `(None, None)` to use a container image to run on a CPU. *Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support. ``` if os.getenv("IS_TESTING_TRAIN_GPU"): TRAIN_GPU, TRAIN_NGPU = ( aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_TRAIN_GPU")), ) else: TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) if os.getenv("IS_TESTING_DEPOLY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = ( aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPOLY_GPU")), ) else: DEPLOY_GPU, DEPLOY_NGPU = (None, None) ``` #### Container (Docker) image Next, we will set the Docker container images for training and prediction - TensorFlow 1.15 - `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest` - TensorFlow 2.4 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest` - XGBoost - `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1` - Scikit-learn - `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest` - Pytorch - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest` For the latest list, see [Pre-built containers for training](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). 
- TensorFlow 1.15 - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest` - XGBoost - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest` - Scikit-learn - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest` For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers) ``` if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2-1" if TF[0] == "2": if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf2-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf2-cpu.{}".format(TF) else: if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf-cpu.{}".format(TF) TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION) DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) ``` #### Machine Type Next, set the machine type to use for training and prediction. - Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \] *Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs *Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*. ``` if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Deploy machine type", DEPLOY_COMPUTE) ``` # Tutorial Now you are ready to start creating your own custom model and training for Boston Housing. ## Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. - Model Service for `Model` resources. - Pipeline Service for training. - Endpoint Service for deployment. 
- Job Service for batch jobs and custom training. - Prediction Service for serving. ``` # client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_pipeline_client(): client = aip.PipelineServiceClient(client_options=client_options) return client def create_endpoint_client(): client = aip.EndpointServiceClient(client_options=client_options) return client def create_prediction_client(): client = aip.PredictionServiceClient(client_options=client_options) return client clients = {} clients["model"] = create_model_client() clients["pipeline"] = create_pipeline_client() clients["endpoint"] = create_endpoint_client() clients["prediction"] = create_prediction_client() for client in clients.items(): print(client) ``` ## Train a model There are two ways you can train a custom model using a container image: - **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model. - **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model. ## Prepare your custom job specification Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following: - `worker_pool_spec` : The specification of the type of machine(s) you will use for training and how many (single or distributed) - `python_package_spec` : The specification of the Python package to be installed with the pre-built container. ### Prepare your machine specification Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators. ``` if TRAIN_GPU: machine_spec = { "machine_type": TRAIN_COMPUTE, "accelerator_type": TRAIN_GPU, "accelerator_count": TRAIN_NGPU, } else: machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0} ``` ### Prepare your disk specification (optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB. ``` DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard] DISK_SIZE = 200 # GB disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE} ``` ### Define the worker pool specification Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following: - `replica_count`: The number of instances to provision of this machine type. - `machine_spec`: The hardware specification. - `disk_spec` : (optional) The disk storage specification. - `python_package`: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module. 
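This tutorial uses a single replica. Purely as an illustration, a multi-worker variant of the worker pool specification might look like the following sketch; the layout (first pool is the primary replica, a second pool holds the additional workers) and the placeholder `sketch_package_spec` are assumptions, and the real `python_package_spec` fields are described next.

```
# Sketch only -- not used in this tutorial.
# Assumed layout for multi-worker training (--distribute=multi): the first
# worker pool is the primary replica, the second pool holds extra workers.
sketch_package_spec = {}  # placeholder; see the python_package_spec fields below

sketch_worker_pool_spec = [
    {
        "replica_count": 1,               # primary replica
        "machine_spec": machine_spec,
        "disk_spec": disk_spec,
        "python_package_spec": sketch_package_spec,
    },
    {
        "replica_count": 2,               # additional workers
        "machine_spec": machine_spec,
        "disk_spec": disk_spec,
        "python_package_spec": sketch_package_spec,
    },
]
```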
Let's now dive deeper into the Python package specification: - `executor_image_uri`: This is the Docker image which is configured for your custom training job. - `package_uris`: This is a list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the Docker image. - `python_module`: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking `trainer.task` -- note that the `.py` file suffix is not appended. - `args`: The command line arguments to pass to the corresponding Python module. In this example, you will be setting: - `"--model-dir=" + MODEL_DIR`: The Cloud Storage location in which to store the model artifacts. There are two ways to tell the training script where to save the model artifacts: - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification. - `"--epochs=" + EPOCHS`: The number of epochs for training. - `"--steps=" + STEPS`: The number of steps (batches) per epoch. - `"--distribute=" + TRAIN_STRATEGY`: The training distribution strategy to use for single or distributed training. - `"single"`: single device. - `"mirror"`: all GPU devices on a single compute instance. - `"multi"`: all GPU devices on all compute instances. - `"--param-file=" + PARAM_FILE`: The Cloud Storage location for storing feature normalization values. ``` JOB_NAME = "custom_job_" + TIMESTAMP MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME) if not TRAIN_NGPU or TRAIN_NGPU < 2: TRAIN_STRATEGY = "single" else: TRAIN_STRATEGY = "mirror" EPOCHS = 20 STEPS = 100 PARAM_FILE = BUCKET_NAME + "/params.txt" DIRECT = True if DIRECT: CMDARGS = [ "--model-dir=" + MODEL_DIR, "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, "--param-file=" + PARAM_FILE, ] else: CMDARGS = [ "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, "--param-file=" + PARAM_FILE, ] worker_pool_spec = [ { "replica_count": 1, "machine_spec": machine_spec, "disk_spec": disk_spec, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": [BUCKET_NAME + "/trainer_boston.tar.gz"], "python_module": "trainer.task", "args": CMDARGS, }, } ] ``` ### Examine the training package #### Package layout Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout. - PKG-INFO - README.md - setup.cfg - setup.py - trainer - \_\_init\_\_.py - task.py The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image. The file `trainer/task.py` is the Python script for executing the custom training job. *Note*: when we referred to it in the worker pool specification, we replaced the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`). #### Package Assembly In the following cells, you will assemble the training package. 
``` # Make folder for Python training script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! touch custom/trainer/__init__.py ``` #### Task.py contents In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary: - Get the directory where to save the model artifacts from the command line (`--model_dir`), and if not specified, then from the environment variable `AIP_MODEL_DIR`. - Loads Boston Housing dataset from TF.Keras builtin datasets - Builds a simple deep neural network model using TF.Keras model API. - Compiles the model (`compile()`). - Sets a training distribution strategy according to the argument `args.distribute`. - Trains the model (`fit()`) with epochs specified by `args.epochs`. - Saves the trained model (`save(args.model_dir)`) to the specified model directory. - Saves the maximum value for each feature `f.write(str(params))` to the specified parameters file. ``` %%writefile custom/trainer/task.py # Single, Mirror and Multi-Machine Distributed Training for Boston Housing import tensorflow_datasets as tfds import tensorflow as tf from tensorflow.python.client import device_lib import numpy as np import argparse import os import sys tfds.disable_progress_bar() parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.') parser.add_argument('--lr', dest='lr', default=0.001, type=float, help='Learning rate.') parser.add_argument('--epochs', dest='epochs', default=20, type=int, help='Number of epochs.') parser.add_argument('--steps', dest='steps', default=100, type=int, help='Number of steps per epoch.') parser.add_argument('--distribute', dest='distribute', type=str, default='single', help='distributed training strategy') parser.add_argument('--param-file', dest='param_file', default='/tmp/param.txt', type=str, help='Output file for parameters') args = parser.parse_args() print('Python Version = {}'.format(sys.version)) print('TensorFlow Version = {}'.format(tf.__version__)) print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found'))) # Single Machine, single compute device if args.distribute == 'single': if tf.test.is_gpu_available(): strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") else: strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0") # Single Machine, multiple compute device elif args.distribute == 'mirror': strategy = tf.distribute.MirroredStrategy() # Multiple Machine, multiple compute device elif args.distribute == 'multi': strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # Multi-worker configuration print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync)) def make_dataset(): # Scaling Boston Housing data features def scale(feature): max = 
np.max(feature) feature = (feature / max).astype(np.float) return feature, max (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) params = [] for _ in range(13): x_train[_], max = scale(x_train[_]) x_test[_], _ = scale(x_test[_]) params.append(max) # store the normalization (max) value for each feature with tf.io.gfile.GFile(args.param_file, 'w') as f: f.write(str(params)) return (x_train, y_train), (x_test, y_test) # Build the Keras model def build_and_compile_dnn_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(1, activation='linear') ]) model.compile( loss='mse', optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr)) return model NUM_WORKERS = strategy.num_replicas_in_sync # Here the batch size scales up by number of workers since # `tf.data.Dataset.batch` expects the global batch size. BATCH_SIZE = 16 GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS with strategy.scope(): # Creation of dataset, and model building/compiling need to be within # `strategy.scope()`. model = build_and_compile_dnn_model() # Train the model (x_train, y_train), (x_test, y_test) = make_dataset() model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE) model.save(args.model_dir) ``` #### Store training script on your Cloud Storage bucket Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket. ``` ! rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz ``` ## Train the model using a `TrainingPipeline` resource Now start your custom training job using a training pipeline on Vertex. To train your custom model, do the following steps: 1. Create a Vertex `TrainingPipeline` resource for the `Dataset` resource. 2. Execute the pipeline to start the training. ### Create a `TrainingPipeline` resource You may ask, what do we use a pipeline for? We typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of: 1. The pipeline is reusable for subsequent training jobs. 2. It can be containerized and run as a batch job. 3. It can be distributed. 4. All the steps are associated with the same pipeline job for tracking progress. #### The `training_pipeline` specification First, you need to describe a pipeline specification. Let's look into the *minimal* requirements for constructing a `training_pipeline` specification for a custom job: - `display_name`: A human readable name for the pipeline job. - `training_task_definition`: The training task schema. - `training_task_inputs`: A dictionary describing the requirements for the training job. - `model_to_upload`: A dictionary describing the specification for the (uploaded) Vertex custom `Model` resource. - `display_name`: A human readable name for the `Model` resource. - `artifact_uri`: The Cloud Storage path where the model artifacts are stored in SavedModel format. - `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the custom model will serve predictions. 
``` from google.protobuf import json_format from google.protobuf.struct_pb2 import Value MODEL_NAME = "custom_pipeline-" + TIMESTAMP PIPELINE_DISPLAY_NAME = "custom-training-pipeline" + TIMESTAMP training_task_inputs = json_format.ParseDict( {"workerPoolSpecs": worker_pool_spec}, Value() ) pipeline = { "display_name": PIPELINE_DISPLAY_NAME, "training_task_definition": CUSTOM_TASK_GCS_PATH, "training_task_inputs": training_task_inputs, "model_to_upload": { "display_name": PIPELINE_DISPLAY_NAME + "-model", "artifact_uri": MODEL_DIR, "container_spec": {"image_uri": DEPLOY_IMAGE}, }, } print(pipeline) ``` #### Create the training pipeline Use this helper function `create_pipeline`, which takes the following parameter: - `training_pipeline`: The full specification for the pipeline training job. The helper function calls the pipeline client service's `create_pipeline` method, which takes the following parameters: - `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources. - `training_pipeline`: The full specification for the pipeline training job. The helper function will return the Vertex fully qualified identifier assigned to the training pipeline, which is saved as `pipeline.name`. ``` def create_pipeline(training_pipeline): try: pipeline = clients["pipeline"].create_training_pipeline( parent=PARENT, training_pipeline=training_pipeline ) print(pipeline) except Exception as e: print("exception:", e) return None return pipeline response = create_pipeline(pipeline) ``` Now save the unique identifier of the training pipeline you created. ``` # The full unique ID for the pipeline pipeline_id = response.name # The short numeric ID for the pipeline pipeline_short_id = pipeline_id.split("/")[-1] print(pipeline_id) ``` ### Get information on a training pipeline Now get pipeline information for just this training pipeline instance. The helper function gets the pipeline information for just this pipeline by calling the pipeline client service's `get_training_pipeline` method, with the following parameter: - `name`: The Vertex fully qualified pipeline identifier. When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`. ``` def get_training_pipeline(name, silent=False): response = clients["pipeline"].get_training_pipeline(name=name) if silent: return response print("pipeline") print(" name:", response.name) print(" display_name:", response.display_name) print(" state:", response.state) print(" training_task_definition:", response.training_task_definition) print(" training_task_inputs:", dict(response.training_task_inputs)) print(" create_time:", response.create_time) print(" start_time:", response.start_time) print(" end_time:", response.end_time) print(" update_time:", response.update_time) print(" labels:", dict(response.labels)) return response response = get_training_pipeline(pipeline_id) ``` # Deployment Training the above model may take upwards of 20 minutes. Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex `Model` resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_upload.name`. 
``` while True: response = get_training_pipeline(pipeline_id, True) if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED: print("Training job has not completed:", response.state) model_to_deploy_id = None if response.state == aip.PipelineState.PIPELINE_STATE_FAILED: raise Exception("Training Job Failed") else: model_to_deploy = response.model_to_upload model_to_deploy_id = model_to_deploy.name print("Training Time:", response.end_time - response.start_time) break time.sleep(60) print("model to deploy:", model_to_deploy_id) if not DIRECT: MODEL_DIR = MODEL_DIR + "/model" model_path_to_deploy = MODEL_DIR ``` ## Load the saved model Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction. To load, you use the TF.Keras `model.load_model()` method passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`. ``` import tensorflow as tf model = tf.keras.models.load_model(MODEL_DIR) ``` ## Evaluate the model Now let's find out how good the model is. ### Load evaluation data You will load the Boston Housing test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home). You don't need the training data, and hence why we loaded it as `(_, _)`. Before you can run the data through evaluation, you need to preprocess it: x_test: 1. Normalize (rescaling) the data in each column by dividing each value by the maximum value of that column. This will replace each single value with a 32-bit floating point number between 0 and 1. ``` import numpy as np from tensorflow.keras.datasets import boston_housing (_, _), (x_test, y_test) = boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) def scale(feature): max = np.max(feature) feature = (feature / max).astype(np.float32) return feature # Let's save one data item that has not been scaled x_test_notscaled = x_test[0:1].copy() for _ in range(13): x_test[_] = scale(x_test[_]) x_test = x_test.astype(np.float32) print(x_test.shape, x_test.dtype, y_test.shape) print("scaled", x_test[0]) print("unscaled", x_test_notscaled) ``` ### Perform the model evaluation Now evaluate how well the model in the custom job did. ``` model.evaluate(x_test, y_test) ``` ## Upload the model for serving Next, you will upload your TF.Keras model from the custom job to Vertex `Model` service, which will create a Vertex `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. ### How does the serving function work When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`. 
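This tutorial keeps the model's default serving signature, which already accepts the 13 numeric features. Purely as an illustration, a serving function that folds the training-time scaling into the exported model might look like the following sketch; the per-feature maximums are placeholder values, and the commented-out `tf.saved_model.save` call only indicates where such a signature would be attached.

```
# Illustrative sketch only -- not part of this tutorial's workflow.
import tensorflow as tf

wrapped = tf.keras.models.load_model(MODEL_DIR)            # the trained model
feature_max = tf.constant([100.0] * 13, dtype=tf.float32)  # placeholder per-feature maximums

@tf.function(input_signature=[tf.TensorSpec([None, 13], tf.float32, name="raw_features")])
def serving_fn(raw_features):
    scaled = raw_features / feature_max          # preprocessing: rescale the raw features
    outputs = wrapped(scaled, training=False)    # invoke the underlying model
    return {"median_value_1k_usd": outputs}      # post-processing: name the output

# To export it, you would save the model with this signature, for example:
# tf.saved_model.save(wrapped, MODEL_DIR + "-wrapped",
#                     signatures={"serving_default": serving_fn})
```

The pieces of this sketch map onto the two parts described next.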
The serving function consists of two parts: - `preprocessing function`: - Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph). - Performs the same preprocessing of the data that was done when training the underlying model -- e.g., normalizing, scaling, etc. - `post-processing function`: - Converts the model output to the format expected by the receiving application -- e.g., compresses the output. - Packages the output for the receiving application -- e.g., add headings, make JSON object, etc. Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content. One consideration when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function indicating that you are using an EagerTensor, which is not supported. ## Get the serving function signature You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request. ``` loaded = tf.saved_model.load(model_path_to_deploy) serving_input = list( loaded.signatures["serving_default"].structured_input_signature[1].keys() )[0] print("Serving function input:", serving_input) ``` ### Upload the model Use this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate a Vertex `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions. The helper function takes the following parameters: - `display_name`: A human readable name for the `Model` resource. - `image_uri`: The container image for the model deployment. - `model_uri`: The Cloud Storage path to the SavedModel artifact. For this tutorial, this is the Cloud Storage location where `trainer/task.py` saved the model artifacts, which we specified in the variable `MODEL_DIR`. The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters: - `parent`: The Vertex location root path for `Dataset`, `Model` and `Endpoint` resources. - `model`: The specification for the Vertex `Model` resource instance. Let's now dive deeper into the Vertex model specification `model`. This is a dictionary object that consists of the following fields: - `display_name`: A human readable name for the `Model` resource. - `metadata_schema_uri`: Since your model was built without a Vertex `Dataset` resource, you will leave this blank (`''`). - `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format. 
- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. Uploading a model into a Vertex `Model` resource returns a long running operation, since it may take a few moments. You call `response.result()`, which is a synchronous call and will return when the Vertex `Model` resource is ready. The helper function returns the Vertex fully qualified identifier for the corresponding Vertex `Model` instance, `upload_model_response.model`. You will save the identifier for subsequent steps in the variable `model_to_deploy_id`. ``` IMAGE_URI = DEPLOY_IMAGE def upload_model(display_name, image_uri, model_uri): model = { "display_name": display_name, "metadata_schema_uri": "", "artifact_uri": model_uri, "container_spec": { "image_uri": image_uri, "command": [], "args": [], "env": [{"name": "env_name", "value": "env_value"}], "ports": [{"container_port": 8080}], "predict_route": "", "health_route": "", }, } response = clients["model"].upload_model(parent=PARENT, model=model) print("Long running operation:", response.operation.name) upload_model_response = response.result(timeout=180) print("upload_model_response") print(" model:", upload_model_response.model) return upload_model_response.model model_to_deploy_id = upload_model( "boston-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy ) ``` ### Get `Model` resource information Now let's get the model information for just your model. Use this helper function `get_model`, with the following parameter: - `name`: The Vertex unique identifier for the `Model` resource. This helper function calls the Vertex `Model` client service's method `get_model`, with the following parameter: - `name`: The Vertex unique identifier for the `Model` resource. ``` def get_model(name): response = clients["model"].get_model(name=name) print(response) get_model(model_to_deploy_id) ``` ## Deploy the `Model` resource Now deploy the trained Vertex custom `Model` resource. This requires two steps: 1. Create an `Endpoint` resource for deploying the `Model` resource to. 2. Deploy the `Model` resource to the `Endpoint` resource. ### Create an `Endpoint` resource Use this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter: - `display_name`: A human readable name for the `Endpoint` resource. The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter: - `display_name`: A human readable name for the `Endpoint` resource. Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the `Endpoint` resource is ready. The helper function returns the Vertex fully qualified identifier for the `Endpoint` resource: `response.name`. 
``` ENDPOINT_NAME = "boston_endpoint-" + TIMESTAMP def create_endpoint(display_name): endpoint = {"display_name": display_name} response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint) print("Long running operation:", response.operation.name) result = response.result(timeout=300) print("result") print(" name:", result.name) print(" display_name:", result.display_name) print(" description:", result.description) print(" labels:", result.labels) print(" create_time:", result.create_time) print(" update_time:", result.update_time) return result result = create_endpoint(ENDPOINT_NAME) ``` Now get the unique identifier for the `Endpoint` resource you created. ``` # The full unique ID for the endpoint endpoint_id = result.name # The short numeric ID for the endpoint endpoint_short_id = endpoint_id.split("/")[-1] print(endpoint_id) ``` ### Compute instance scaling You have several choices on scaling the compute instances for handling your online prediction requests: - Single Instance: The online prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one. - Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them. - Auto Scaling: The online prediction requests are split across a scaleable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions. The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request. ``` MIN_NODES = 1 MAX_NODES = 1 ``` ### Deploy `Model` resource to the `Endpoint` resource Use this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters: - `model`: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline. - `deploy_model_display_name`: A human readable name for the deployed model. - `endpoint`: The Vertex fully qualified endpoint identifier to deploy the model to. The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters: - `endpoint`: The Vertex fully qualified `Endpoint` resource identifier to deploy the `Model` resource to. - `deployed_model`: The requirements specification for deploying the model. - `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. - If only one model, then specify as **{ "0": 100 }**, where "0" refers to this model being uploaded and 100 means 100% of the traffic. - If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ "0": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100. 
Let's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the minimum required fields: - `model`: The Vertex fully qualified model identifier of the (uploaded) model to deploy. - `display_name`: A human readable name for the deployed model. - `disable_container_logging`: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production. - `dedicated_resources`: This specifies how many compute instances (replicas) are provisioned for serving prediction requests. - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. - `min_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`. #### Traffic Split Let's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary. This might be a bit confusing at first: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance. Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision. #### Response The method returns a long running operation `response`. We will wait synchronously for the operation to complete by calling `response.result()`, which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources. ``` DEPLOYED_NAME = "boston_deployed-" + TIMESTAMP def deploy_model( model, deployed_model_display_name, endpoint, traffic_split={"0": 100} ): if DEPLOY_GPU: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_type": DEPLOY_GPU, "accelerator_count": DEPLOY_NGPU, } else: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_count": 0, } deployed_model = { "model": model, "display_name": deployed_model_display_name, "dedicated_resources": { "min_replica_count": MIN_NODES, "max_replica_count": MAX_NODES, "machine_spec": machine_spec, }, "disable_container_logging": False, } response = clients["endpoint"].deploy_model( endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split ) print("Long running operation:", response.operation.name) result = response.result() print("result") deployed_model = result.deployed_model print(" deployed_model") print(" id:", deployed_model.id) print(" model:", deployed_model.model) print(" display_name:", deployed_model.display_name) print(" create_time:", deployed_model.create_time) return deployed_model.id deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id) ``` ## Make an online prediction request Now do an online prediction to your deployed model. 
### Get test item You will use an example out of the test (holdout) portion of the dataset as a test item. ``` test_item = x_test[0] test_label = y_test[0] print(test_item.shape) ``` ### Send the prediction request OK, now you have a test data item. Use this helper function `predict_data`, which takes the parameters: - `data`: The test data item as a numpy 1D array of floating point values. - `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed. - `parameters_dict`: Additional parameters for serving. This function uses the prediction client service and calls the `predict` method with the parameters: - `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed. - `instances`: A list of instances (data items) to predict. - `parameters`: Additional parameters for serving. To pass the test data to the prediction service, you package it for transmission to the serving binary as follows: 1. Convert the data item from a 1D numpy array to a 1D Python list. 2. Convert the prediction request to a serialized Google protobuf (`json_format.ParseDict()`). Each instance in the prediction request is a dictionary entry of the form: {input_name: content} - `input_name`: the name of the input layer of the underlying model. - `content`: The data item as a 1D Python list. Since the `predict()` service can take multiple data items (instances), you will send your single data item as a list of one data item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the `predict()` service. The `response` object returns a list, where each element corresponds to the corresponding instance in the request. You will see in the output for each prediction: - `predictions` -- the predicted median value of a house in units of 1K USD. ``` def predict_data(data, endpoint, parameters_dict): parameters = json_format.ParseDict(parameters_dict, Value()) # The format of each instance should conform to the deployed model's prediction input schema. instances_list = [{serving_input: data.tolist()}] instances = [json_format.ParseDict(s, Value()) for s in instances_list] response = clients["prediction"].predict( endpoint=endpoint, instances=instances, parameters=parameters ) print("response") print(" deployed_model_id:", response.deployed_model_id) predictions = response.predictions print("predictions") for prediction in predictions: print(" prediction:", prediction) predict_data(test_item, endpoint_id, None) ``` ## Undeploy the `Model` resource Now undeploy your `Model` resource from the serving `Endpoint` resource. Use this helper function `undeploy_model`, which takes the following parameters: - `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed. - `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed. This function calls the endpoint client service's method `undeploy_model`, with the following parameters: - `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed. - `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed. - `traffic_split`: How to split traffic among any remaining deployed models on the `Endpoint` resource, as illustrated in the sketch below. 
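For instance, if one other model remained deployed on the endpoint, a `traffic_split` for the undeploy request could give it all of the traffic; the deployed model ID below is a placeholder.

```
# Hypothetical example: the key is a placeholder ID of a deployed model that
# remains on the endpoint after this model is undeployed.
remaining_traffic_split = {"9876543210987654321": 100}
```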
Since this is the only deployed model on the `Endpoint` resource, you simply can leave `traffic_split` empty by setting it to {}. ``` def undeploy_model(deployed_model_id, endpoint): response = clients["endpoint"].undeploy_model( endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={} ) print(response) undeploy_model(deployed_model_id, endpoint_id) ``` # Cleaning up To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: - Dataset - Pipeline - Model - Endpoint - Batch Job - Custom Job - Hyperparameter Tuning Job - Cloud Storage Bucket ``` delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True # Delete the dataset using the Vertex fully qualified identifier for the dataset try: if delete_dataset and "dataset_id" in globals(): clients["dataset"].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex fully qualified identifier for the pipeline try: if delete_pipeline and "pipeline_id" in globals(): clients["pipeline"].delete_training_pipeline(name=pipeline_id) except Exception as e: print(e) # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model and "model_to_deploy_id" in globals(): clients["model"].delete_model(name=model_to_deploy_id) except Exception as e: print(e) # Delete the endpoint using the Vertex fully qualified identifier for the endpoint try: if delete_endpoint and "endpoint_id" in globals(): clients["endpoint"].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the batch job using the Vertex fully qualified identifier for the batch job try: if delete_batchjob and "batch_job_id" in globals(): clients["job"].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) # Delete the custom job using the Vertex fully qualified identifier for the custom job try: if delete_customjob and "job_id" in globals(): clients["job"].delete_custom_job(name=job_id) except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job try: if delete_hptjob and "hpt_job_id" in globals(): clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME ```
# GRU 236 * Operate on 16000 GenCode 34 seqs. * 5-way cross validation. Save best model per CV. * Report mean accuracy from final re-validation with best 5. * Use Adam with a learn rate decay schdule. ``` NC_FILENAME='ncRNA.gc34.processed.fasta' PC_FILENAME='pcRNA.gc34.processed.fasta' DATAPATH="" try: from google.colab import drive IN_COLAB = True PATH='/content/drive/' drive.mount(PATH) DATAPATH=PATH+'My Drive/data/' # must end in "/" NC_FILENAME = DATAPATH+NC_FILENAME PC_FILENAME = DATAPATH+PC_FILENAME except: IN_COLAB = False DATAPATH="" EPOCHS=200 SPLITS=5 K=3 VOCABULARY_SIZE=4**K+1 # e.g. K=3 => 64 DNA K-mers + 'NNN' EMBED_DIMEN=16 FILENAME='GRU236' NEURONS=64 ACT="tanh" DROP=0.5 import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import ShuffleSplit from sklearn.model_selection import cross_val_score from sklearn.model_selection import RepeatedKFold from sklearn.model_selection import StratifiedKFold import tensorflow as tf from tensorflow import keras from keras.wrappers.scikit_learn import KerasRegressor from keras.models import Sequential from keras.layers import Bidirectional from keras.layers import GRU from keras.layers import Dense from keras.layers import LayerNormalization import time dt='float32' tf.keras.backend.set_floatx(dt) ``` ## Build model ``` def compile_model(model): adam_default_learn_rate = 0.001 schedule = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate = adam_default_learn_rate*10, #decay_steps=100000, decay_rate=0.96, staircase=True) decay_steps=10000, decay_rate=0.99, staircase=True) # learn rate = initial_learning_rate * decay_rate ^ (step / decay_steps) alrd = tf.keras.optimizers.Adam(learning_rate=schedule) bc=tf.keras.losses.BinaryCrossentropy(from_logits=False) print("COMPILE...") #model.compile(loss=bc, optimizer=alrd, metrics=["accuracy"]) model.compile(loss=bc, optimizer="adam", metrics=["accuracy"]) print("...COMPILED") return model def build_model(): embed_layer = keras.layers.Embedding( #VOCABULARY_SIZE, EMBED_DIMEN, input_length=1000, input_length=1000, mask_zero=True) #input_dim=[None,VOCABULARY_SIZE], output_dim=EMBED_DIMEN, mask_zero=True) input_dim=VOCABULARY_SIZE, output_dim=EMBED_DIMEN, mask_zero=True) #rnn1_layer = keras.layers.Bidirectional( rnn1_layer = keras.layers.GRU(NEURONS, return_sequences=True, input_shape=[1000,EMBED_DIMEN], activation=ACT, dropout=DROP)#)#bi #rnn2_layer = keras.layers.Bidirectional( rnn2_layer = keras.layers.GRU(NEURONS, return_sequences=False, activation=ACT, dropout=DROP)#)#bi dense1_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt) #drop1_layer = keras.layers.Dropout(DROP) dense2_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt) #drop2_layer = keras.layers.Dropout(DROP) output_layer = keras.layers.Dense(1, activation="sigmoid", dtype=dt) mlp = keras.models.Sequential() mlp.add(embed_layer) mlp.add(rnn1_layer) mlp.add(rnn2_layer) mlp.add(dense1_layer) #mlp.add(drop1_layer) mlp.add(dense2_layer) #mlp.add(drop2_layer) mlp.add(output_layer) mlpc = compile_model(mlp) return mlpc ``` ## Load and partition sequences ``` # Assume file was preprocessed to contain one line per seq. # Prefer Pandas dataframe but df does not support append. # For conversion to tensor, must avoid python lists. 
def load_fasta(filename,label): DEFLINE='>' labels=[] seqs=[] lens=[] nums=[] num=0 with open (filename,'r') as infile: for line in infile: if line[0]!=DEFLINE: seq=line.rstrip() num += 1 # first seqnum is 1 seqlen=len(seq) nums.append(num) labels.append(label) seqs.append(seq) lens.append(seqlen) df1=pd.DataFrame(nums,columns=['seqnum']) df2=pd.DataFrame(labels,columns=['class']) df3=pd.DataFrame(seqs,columns=['sequence']) df4=pd.DataFrame(lens,columns=['seqlen']) df=pd.concat((df1,df2,df3,df4),axis=1) return df def separate_X_and_y(data): y= data[['class']].copy() X= data.drop(columns=['class','seqnum','seqlen']) return (X,y) ``` ## Make K-mers ``` def make_kmer_table(K): npad='N'*K shorter_kmers=[''] for i in range(K): longer_kmers=[] for mer in shorter_kmers: longer_kmers.append(mer+'A') longer_kmers.append(mer+'C') longer_kmers.append(mer+'G') longer_kmers.append(mer+'T') shorter_kmers = longer_kmers all_kmers = shorter_kmers kmer_dict = {} kmer_dict[npad]=0 value=1 for mer in all_kmers: kmer_dict[mer]=value value += 1 return kmer_dict KMER_TABLE=make_kmer_table(K) def strings_to_vectors(data,uniform_len): all_seqs=[] for seq in data['sequence']: i=0 seqlen=len(seq) kmers=[] while i < seqlen-K+1 -1: # stop at minus one for spaced seed #kmer=seq[i:i+2]+seq[i+3:i+5] # SPACED SEED 2/1/2 for K=4 kmer=seq[i:i+K] i += 1 value=KMER_TABLE[kmer] kmers.append(value) pad_val=0 while i < uniform_len: kmers.append(pad_val) i += 1 all_seqs.append(kmers) pd2d=pd.DataFrame(all_seqs) return pd2d # return 2D dataframe, uniform dimensions def make_kmers(MAXLEN,train_set): (X_train_all,y_train_all)=separate_X_and_y(train_set) X_train_kmers=strings_to_vectors(X_train_all,MAXLEN) # From pandas dataframe to numpy to list to numpy num_seqs=len(X_train_kmers) tmp_seqs=[] for i in range(num_seqs): kmer_sequence=X_train_kmers.iloc[i] tmp_seqs.append(kmer_sequence) X_train_kmers=np.array(tmp_seqs) tmp_seqs=None labels=y_train_all.to_numpy() return (X_train_kmers,labels) def make_frequencies(Xin): Xout=[] VOCABULARY_SIZE= 4**K + 1 # plus one for 'NNN' for seq in Xin: freqs =[0] * VOCABULARY_SIZE total = 0 for kmerval in seq: freqs[kmerval] += 1 total += 1 for c in range(VOCABULARY_SIZE): freqs[c] = freqs[c]/total Xout.append(freqs) Xnum = np.asarray(Xout) return (Xnum) def make_slice(data_set,min_len,max_len): slice = data_set.query('seqlen <= '+str(max_len)+' & seqlen>= '+str(min_len)) return slice ``` ## Cross validation ``` def do_cross_validation(X,y,given_model): cv_scores = [] fold=0 splitter = ShuffleSplit(n_splits=SPLITS, test_size=0.1) #, random_state=37863) for train_index,valid_index in splitter.split(X): fold += 1 X_train=X[train_index] # use iloc[] for dataframe y_train=y[train_index] X_valid=X[valid_index] y_valid=y[valid_index] # Avoid continually improving the same model. 
model = compile_model(keras.models.clone_model(given_model)) bestname=DATAPATH+FILENAME+".cv."+str(fold)+".best" mycallbacks = [keras.callbacks.ModelCheckpoint( filepath=bestname, save_best_only=True, monitor='val_accuracy', mode='max')] print("FIT") start_time=time.time() history=model.fit(X_train, y_train, # batch_size=10, default=32 works nicely epochs=EPOCHS, verbose=1, # verbose=1 for ascii art, verbose=0 for none callbacks=mycallbacks, validation_data=(X_valid,y_valid) ) end_time=time.time() elapsed_time=(end_time-start_time) print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time)) pd.DataFrame(history.history).plot(figsize=(8,5)) plt.grid(True) plt.gca().set_ylim(0,1) plt.show() best_model=keras.models.load_model(bestname) scores = best_model.evaluate(X_valid, y_valid, verbose=0) print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100)) cv_scores.append(scores[1] * 100) print() print("%d-way Cross Validation mean %.2f%% (+/- %.2f%%)" % (fold, np.mean(cv_scores), np.std(cv_scores))) ``` ## Train on RNA lengths 200-1Kb ``` MINLEN=200 MAXLEN=1000 print("Load data from files.") nc_seq=load_fasta(NC_FILENAME,0) pc_seq=load_fasta(PC_FILENAME,1) train_set=pd.concat((nc_seq,pc_seq),axis=0) nc_seq=None pc_seq=None print("Ready: train_set") #train_set subset=make_slice(train_set,MINLEN,MAXLEN)# One array to two: X and y print ("Data reshape") (X_train,y_train)=make_kmers(MAXLEN,subset) #print ("Data prep") #X_train=make_frequencies(X_train) print ("Compile the model") model=build_model() print ("Summarize the model") print(model.summary()) # Print this only once model.save(DATAPATH+FILENAME+'.model') print ("Cross valiation") do_cross_validation(X_train,y_train,model) print ("Done") ```
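The header of this notebook mentions reporting the mean accuracy from a final re-validation with the five best saved models, but the cells above only run the cross-validation itself. Below is a minimal sketch of that last step. It assumes the per-fold checkpoints written by `do_cross_validation` (`FILENAME+".cv.<fold>.best"`) are still on disk and that a held-out `(X_val, y_val)` split has been prepared the same way as the training data; this helper is an addition, not part of the original notebook.

```
def revalidate_best_models(X_val, y_val, splits=SPLITS):
    # Reload each fold's best checkpoint and re-evaluate it on a held-out split.
    accuracies = []
    for fold in range(1, splits+1):
        bestname = DATAPATH+FILENAME+".cv."+str(fold)+".best"
        best_model = keras.models.load_model(bestname)
        loss, acc = best_model.evaluate(X_val, y_val, verbose=0)
        print("Fold %d best model accuracy: %.2f%%" % (fold, acc*100))
        accuracies.append(acc*100)
    print("Mean re-validation accuracy: %.2f%% (+/- %.2f%%)" %
          (np.mean(accuracies), np.std(accuracies)))
```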
``` import numpy as np import pandas as pd import holoviews as hv import networkx as nx hv.extension('bokeh') %opts Graph [width=400 height=400] ``` Visualizing and working with network graphs is a common problem in many different disciplines. HoloViews provides the ability to represent and visualize graphs very simply and easily with facilities for interactively exploring the nodes and edges of the graph, especially using the bokeh plotting interface. The ``Graph`` ``Element`` differs from other elements in HoloViews in that it consists of multiple sub-elements. The data of the ``Graph`` element itself are the abstract edges between the nodes. By default the element will automatically compute concrete ``x`` and ``y`` positions for the nodes and represent them using a ``Nodes`` element, which is stored on the Graph. The abstract edges and concrete node positions are sufficient to render the ``Graph`` by drawing straight-line edges between the nodes. In order to supply explicit edge paths we can also declare ``EdgePaths``, providing explicit coordinates for each edge to follow. To summarize a ``Graph`` consists of three different components: * The ``Graph`` itself holds the abstract edges stored as a table of node indices. * The ``Nodes`` hold the concrete ``x`` and ``y`` positions of each node along with a node ``index``. The ``Nodes`` may also define any number of value dimensions, which can be revealed when hovering over the nodes or to color the nodes by. * The ``EdgePaths`` can optionally be supplied to declare explicit node paths. #### A simple Graph Let's start by declaring a very simple graph connecting one node to all others. If we simply supply the abstract connectivity of the ``Graph``, it will automatically compute a layout for the nodes using the ``layout_nodes`` operation, which defaults to a circular layout: ``` # Declare abstract edges N = 8 node_indices = np.arange(N, dtype=np.int32) source = np.zeros(N, dtype=np.int32) target = node_indices padding = dict(x=(-1.2, 1.2), y=(-1.2, 1.2)) simple_graph = hv.Graph(((source, target),)).redim.range(**padding) simple_graph ``` #### Accessing the nodes and edges We can easily access the ``Nodes`` and ``EdgePaths`` on the ``Graph`` element using the corresponding properties: ``` simple_graph.nodes + simple_graph.edgepaths ``` #### Supplying explicit paths Next we will extend this example by supplying explicit edges: ``` def bezier(start, end, control, steps=np.linspace(0, 1, 100)): return (1-steps)**2*start + 2*(1-steps)*steps*control+steps**2*end x, y = simple_graph.nodes.array([0, 1]).T paths = [] for node_index in node_indices: ex, ey = x[node_index], y[node_index] paths.append(np.column_stack([bezier(x[0], ex, 0), bezier(y[0], ey, 0)])) bezier_graph = hv.Graph(((source, target), (x, y, node_indices), paths)).redim.range(**padding) bezier_graph ``` ## Interactive features #### Hover and selection policies Thanks to Bokeh we can reveal more about the graph by hovering over the nodes and edges. The ``Graph`` element provides an ``inspection_policy`` and a ``selection_policy``, which define whether hovering and selection highlight edges associated with the selected node or nodes associated with the selected edge, these policies can be toggled by setting the policy to ``'nodes'`` (the default) and ``'edges'``. 
``` bezier_graph.options(inspection_policy='edges') ``` In addition to changing the policy we can also change the colors used when hovering and selecting nodes: ``` %%opts Graph [tools=['hover', 'box_select']] (edge_hover_line_color='green' node_hover_fill_color='red') bezier_graph.options(inspection_policy='nodes') ``` #### Additional information We can also associate additional information with the nodes and edges of a graph. By constructing the ``Nodes`` explicitly we can declare additional value dimensions, which are revealed when hovering and/or can be mapped to the color by specifying the ``color_index``. We can also associate additional information with each edge by supplying a value dimension to the ``Graph`` itself, which we can map to a color using the ``edge_color_index``. ``` %%opts Graph [color_index='Type' edge_color_index='Weight'] (cmap='Set1' edge_cmap='viridis') node_labels = ['Output']+['Input']*(N-1) np.random.seed(7) edge_labels = np.random.rand(8) nodes = hv.Nodes((x, y, node_indices, node_labels), vdims='Type') graph = hv.Graph(((source, target, edge_labels), nodes, paths), vdims='Weight').redim.range(**padding) graph + graph.options(inspection_policy='edges') ``` If you want to supply additional node information without speciying explicit node positions you may pass in a ``Dataset`` object consisting of various value dimensions. ``` %%opts Graph [color_index='Label'] (cmap='Set1') node_info = hv.Dataset(node_labels, vdims='Label') hv.Graph(((source, target), node_info)).redim.range(**padding) ``` ## Working with NetworkX NetworkX is a very useful library when working with network graphs and the Graph Element provides ways of importing a NetworkX Graph directly. Here we will load the Karate Club graph and use the ``circular_layout`` function provided by NetworkX to lay it out: ``` %%opts Graph [tools=['hover']] G = nx.karate_club_graph() hv.Graph.from_networkx(G, nx.layout.circular_layout).redim.range(**padding) ``` #### Animating graphs Like all other elements ``Graph`` can be updated in a ``HoloMap`` or ``DynamicMap``. Here we animate how the Fruchterman-Reingold force-directed algorithm lays out the nodes in real time. ``` %%opts Graph G = nx.karate_club_graph() def get_graph(iteration): np.random.seed(10) return hv.Graph.from_networkx(G, nx.spring_layout, iterations=iteration) hv.HoloMap({i: get_graph(i) for i in range(5, 30, 5)}, kdims='Iterations').redim.range(x=(-1.2, 1.2), y=(-1.2, 1.2)) ``` ## Real world graphs As a final example let's look at a slightly larger graph. We will load a dataset of a Facebook network consisting a number of friendship groups identified by their ``'circle'``. We will load the edge and node data using pandas and then color each node by their friendship group using many of the things we learned above. ``` %opts Nodes Graph [width=800 height=800 xaxis=None yaxis=None] %%opts Graph [color_index='circle'] %%opts Graph (node_size=10 edge_line_width=1) colors = ['#000000']+hv.Cycle('Category20').values edges_df = pd.read_csv('../assets/fb_edges.csv') fb_nodes = hv.Nodes(pd.read_csv('../assets/fb_nodes.csv')).sort() fb_graph = hv.Graph((edges_df, fb_nodes), label='Facebook Circles') fb_graph = fb_graph.redim.range(x=(-0.05, 1.05), y=(-0.05, 1.05)).options(cmap=colors) fb_graph ``` ## Bundling graphs The datashader library provides algorithms for bundling the edges of a graph and HoloViews provides convenient wrappers around the libraries. 
Note that these operations need ``scikit-image`` which you can install using: ``` conda install scikit-image ``` or ``` pip install scikit-image ``` ``` from holoviews.operation.datashader import datashade, bundle_graph bundled = bundle_graph(fb_graph) bundled ``` ## Datashading graphs For graphs with a large number of edges we can datashade the paths and display the nodes separately. This loses some of the interactive features but will let you visualize quite large graphs: ``` %%opts Nodes [color_index='circle'] (size=10 cmap=colors) Overlay [show_legend=False] datashade(bundled, normalization='linear', width=800, height=800) * bundled.nodes ``` ### Applying selections Alternatively we can select the nodes and edges by an attribute that resides on either. In this case we will select the nodes and edges for a particular circle and then overlay just the selected part of the graph on the datashaded plot. Note that selections on the ``Graph`` itself will select all nodes that connect to one of the selected nodes. In this way a smaller subgraph can be highlighted and the larger graph can be datashaded. ``` %%opts Graph (node_fill_color='white') datashade(bundle_graph(fb_graph), normalization='linear', width=800, height=800) *\ bundled.select(circle='circle15') ``` To select just nodes that are in 'circle15' set the ``selection_mode='nodes'`` overriding the default of 'edges': ``` bundled.select(circle='circle15', selection_mode='nodes') ```
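As a variation on the datashading workflow above, you can also datashade the raw (unbundled) edges directly and overlay the nodes. The cell below is only an illustrative sketch that reuses objects already defined in this notebook (`fb_graph`, `datashade`, `colors`); it is not part of the original guide.

```
%%opts Nodes [color_index='circle'] (size=10 cmap=colors) Overlay [show_legend=False]
datashade(fb_graph, normalization='linear', width=800, height=800) * fb_graph.nodes
```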
# How have Airbnb prices changed due to COVID-19? ## Business Understanding This is the most recent data (Oct, 2020) taken from the official website Airbnb http://insideairbnb.com/get-the-data.html In this Notebook, we'll look at this data, clean up, analyze, visualize, and model. And we will answer the following questions for Business Understanding: 1. What correlates best with the price? 2. How has price and busyness changed over the course of COVID-19? 4. Can we predict the price based on its features? Let's begin! ``` #import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import geopandas as gpd #ml libraries from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import AdaBoostRegressor from sklearn.ensemble import GradientBoostingRegressor import xgboost as xgb from xgboost import plot_importance from keras import backend as K import tensorflow as tf import time from tensorflow import keras from keras import models, layers, optimizers, regularizers from IPython.display import SVG from keras.utils.vis_utils import model_to_dot from tensorflow.keras.callbacks import EarlyStopping from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import AdaBoostRegressor from sklearn.ensemble import GradientBoostingRegressor from sklearn.model_selection import learning_curve from sklearn.preprocessing import StandardScaler, MinMaxScaler #metrics from sklearn.metrics import r2_score, mean_squared_error %matplotlib inline def printColunmsInfo(df): '''takes dataframe, prints columns info''' df.info() print("\n") printTotalRowsAndColumns(df) print("---------------------------------------") def printTotalRowsAndColumns(df): '''print number of columns and rows''' print("Total columns: ", df.shape[1]) print("Total rows: ", df.shape[0]) def stringToNumConverter(string): '''deletes not numbers symbols from string''' newString = "" if pd.notna(string): for i in string: if i.isdigit() or i == ".": newString += i return newString def create_dummy_df(df, cat_cols, dummy_na): '''creates dummy''' for col in cat_cols: try: # for each cat add dummy var, drop original column df = pd.concat([df.drop(col, axis=1), pd.get_dummies(df[col], prefix=col, prefix_sep='_', drop_first=True, dummy_na=dummy_na)], axis=1) except: continue return df def dateToCategorical(row): '''changes column from date type to categorical''' if row.year <= 2016: return "4+ years" elif row.year <= 2018: return "2-3 years" elif row.year <= 2019: return "1-2 years" elif row.year == 2020: if row.month > 8: return "0-1 month" elif row.month > 2: return "2-6 months" elif row.month <= 2: return "this year" else : return "no reviews" def appendToMetricsdf(df, model_name, train_r2, test_r2, train_mse, test_mse): '''appends new row to metrics_df''' new_row = {"Model Name" : model_name, "r-squared train" : train_r2, "r-squared train test" : test_r2, "MSE train" : train_mse, "MSE test" : test_mse } df = df.append(new_row, ignore_index=True) return df def r2_keras(y_true, y_pred): '''calculates r2_score''' SS_res = K.sum(K.square(y_true - y_pred)) SS_tot = K.sum(K.square(y_true - K.mean(y_true))) return (1 - SS_res/(SS_tot + K.epsilon()) ) #load data sf_cal = pd.read_csv("datasets/calendar.csv", low_memory=False, index_col=0) sf_list = pd.read_csv("datasets/listings.csv") ``` ## Cleaning the Data ### Listing Data Frame First, let's look on Listing Data Frame. It is the biggest table. 
We won't need some columns because they don't make much sense for our purposes. So we will drop them. ``` sf_list = sf_list[['id', 'host_since', 'host_is_superhost', 'host_listings_count', 'host_response_time', 'host_response_rate', 'host_acceptance_rate','neighbourhood_cleansed', 'latitude', 'longitude', 'property_type', 'room_type', 'accommodates', 'bathrooms', 'bedrooms', 'beds', 'amenities', 'minimum_nights', 'maximum_nights', 'review_scores_rating', 'review_scores_accuracy', 'review_scores_cleanliness', 'review_scores_checkin', 'review_scores_communication', 'review_scores_location', 'review_scores_value', 'availability_30', 'number_of_reviews', 'last_review', 'reviews_per_month', 'price']] sf_list.head() ``` We have left the following columns: * __'id'__ — we'll use to join tables * __host_since__ and __last_review__ — datatype data, we transform to categorical * __'host_response_time'__ — categorical data * __host_is_superhost__ — boolean data * __'host_response_rate'__ and __'host_acceptance_rate'__ — as a percentage, we will change to integer * __neighbourhood_cleansed'__ — neighbourhood name * __'latitude', 'longitude'__ — сoordinates, we use them for visualisation * __'room_type'__ and __property_type__ — categorical data * __'accommodates', 'bathrooms', 'bedrooms', 'beds'__ — numerical values describing property * __'amenities'__ — can be used to identify words associated with amenities * __'minimum_nights', 'maximum_nights'__ — numerical values * __'review_scores_rating'__ — numbers between 20 and 100 * __'review_scores_accuracy', 'review_scores_cleanliness', 'review_scores_checkin', review_scores_communication', 'review_scores_location', 'review_scores_value'__ — numbers between 2 and 10 * __availability_30__, __number_of_reviews__, __reviews_per_month__ — numerical * __'price'__ — target value Let's convert string data to numeric. ``` #converting datatype of price column to integer sf_list["price"] = sf_list["price"].apply(lambda string: ''.join(i for i in string if i.isdigit())[:-2]) sf_list["price"] = pd.to_numeric(sf_list["price"], downcast="integer") #host_response_rate and host_acceptance_rate types to float sf_list["host_response_rate"] = sf_list["host_acceptance_rate"].apply(lambda string: stringToNumConverter(string)) sf_list["host_response_rate"] = pd.to_numeric(sf_list["host_response_rate"], downcast="float") sf_list["host_acceptance_rate"] = sf_list["host_acceptance_rate"].apply(lambda string: stringToNumConverter(string)) sf_list["host_acceptance_rate"] = pd.to_numeric(sf_list["host_acceptance_rate"], downcast="float") #converting t, f value to 1 or 0 sf_list["host_is_superhost"] = sf_list["host_is_superhost"].apply((lambda string: 1 if string == "t" else 0)) #converting datatype of date columns to datetime sf_list["last_review"] = pd.to_datetime(arg=sf_list["last_review"], errors="coerce") sf_list["host_since"] = pd.to_datetime(arg=sf_list["host_since"], errors="coerce") print("Listing Data Frame") printColunmsInfo(sf_list) ``` ### Amenities Data Frame Consider the data about the amenities. This column is a set of lists enclosed in strings. So I had to use the *eval*. If you know a more elegant method, please let me know. Then we'll add columns for each amenitie, remove the common and very rare amenities. 
``` amenitiesList = [] for item in sf_list["amenities"].value_counts().reset_index()["index"]: item = eval(item) for i in item: if i not in amenitiesList: amenitiesList.append(i) print("Total amenities: ", len(amenitiesList)) print(amenitiesList) amenities_df = sf_list[["id", "amenities"]] #we don't need "amenities" in original data frame anymore sf_list.drop(["amenities"], axis=1, inplace=True) amenitiesDict = {} for item in range(amenities_df.shape[0]): i_id, amenitiesSet = amenities_df.loc[item, "id"], set(eval(amenities_df.loc[item, "amenities"])) amenitiesDict[i_id] = amenitiesSet for amenitie in amenitiesList: bilist = [] for amId in amenities_df["id"]: if amenitie in amenitiesDict[amId]: bilist.append(1) else: bilist.append(0) amenities_df.insert(loc=len(amenities_df.columns), column=amenitie, value=bilist, allow_duplicates=True) print(amenities_df.shape) ``` ### Calendar Data Frame ``` sf_cal.head() ``` This Data Frame has folowing columns: * __listing_id__ — id values, we'll use to join tables * __date__ — we need to change datatype to datetime * __available__ — it has to be boolean, so we need to change it * __minimum_nights, maximum_nights__ — we have same columns in Listing Data Frame, drop they later * __adjusted_price__, __price__ — target values ``` #converting datatype of price and adjusted_price columns to integer sf_cal["price"] = sf_cal["price"].apply(lambda string: stringToNumConverter(string)) sf_cal["price"] = pd.to_numeric(sf_cal["price"], downcast="integer") sf_cal["adjusted_price"] = sf_cal["adjusted_price"].apply(lambda string: stringToNumConverter(string)) sf_cal["adjusted_price"] = pd.to_numeric(sf_cal["adjusted_price"], downcast="integer") #converting datatype of date columns to datetime sf_cal["date"] = pd.to_datetime(arg=sf_cal["date"], errors="coerce") #converting t, f value to Boolean datatype sf_cal["available"] = sf_cal["available"].apply((lambda string: True if string == "t" else False)) print("Calendar Data Frame") printColunmsInfo(sf_cal) ``` ## Data Understanding Let's analyze the data to answer the questions given at the beginning: #### 1. What correlates best with price? ##### Does Amenities correlate with price? ``` amen_price_corr_neg = amenities_df.merge(sf_list[["id", "price"]], on="id").corr()[["id", "price"]].sort_values(by="price").head(10) amen_price_corr_pos = amenities_df.merge(sf_list[["id", "price"]], on="id").corr()[["id", "price"]].sort_values(by="price").drop("price", axis=0).tail(10) #negative correlation amen_price_corr_neg.drop("id", axis=1).style.bar(color="#00677e", align="mid") #positive correlation amen_price_corr_pos.drop("id", axis=1).style.bar(color="#cd4a4c") ``` As you can see, air conditioning, gym, and building staff are highly correlated with price. The rest of the amenities correlate either weakly or not at all. ##### Does Review Scores correlate with price? ``` plt.subplots(figsize=(9, 6)) sns.heatmap(sf_list[['review_scores_rating', 'review_scores_accuracy', 'review_scores_cleanliness', 'review_scores_checkin', 'review_scores_communication', 'review_scores_location', 'review_scores_value', "number_of_reviews", 'price']].corr(), annot=True, fmt=".2f") ``` Review Scores correlate weakly with price, but they correlate well with each other. ##### Does Housing Characteristics correlate with price? ``` plt.subplots(figsize=(9, 6)) sns.heatmap(sf_list[['accommodates', 'bathrooms', 'bedrooms', 'beds', 'price']].corr(), annot=True, fmt=".2f") ``` There is an obvious correlation. 
The more people you can accommodate, the more expensive it is to rent a room. Same about bedrooms and beds. But the number of bathrooms does not have a strong impact. Some more dependencies on the price, which we will use in modeling: ``` sf_list.groupby(["room_type"]).mean().reset_index()[["room_type","price"]].style.bar(color="#cd4a4c") sf_list.groupby(["property_type"]).mean().reset_index()[["property_type","price"]].sort_values(by="price", ascending=False).style.bar(color="#cd4a4c") sf_list.groupby(["host_response_time"]).mean().reset_index()[["host_response_time","price"]].style.bar(color="#cd4a4c") sf_list.groupby(["host_is_superhost"]).mean().reset_index()[["host_is_superhost","price"]] sf_list[["number_of_reviews","price"]].corr() plt.subplots(figsize=(9, 6)) sns.heatmap(sf_list[["host_response_rate", "host_acceptance_rate", "minimum_nights", "maximum_nights", "number_of_reviews", "price"]].corr(), annot=True, fmt=".2f") ``` ##### How about Neighbourhoods? Let's find the most expensive neighbourhood. ``` #coordinates of San Francisco sf_latitude, sf_longitude = 37.7647993, -122.4629897 #the necessary data for the map sf_map = gpd.read_file("planning_neighborhoods/planning_neighborhoods.shp") sf_neig_mean = sf_list.groupby(["neighbourhood_cleansed"]).mean().reset_index() sf_map = sf_map.merge(sf_neig_mean, left_on="neighborho", right_on="neighbourhood_cleansed") vmin, vmax = 100, 1300 fig, ax = plt.subplots(figsize = (20, 20)) ax.set_title("Average price in each neighborhood of San Francisco", fontdict={"fontsize": "25", "fontweight" : "3"}) sf_map.plot(column="price", cmap="OrRd", linewidth=0.8, ax=ax, edgecolor="0.8") texts = [] for x, y, label in zip(sf_map.centroid.geometry.x, sf_map.centroid.geometry.y, sf_map["neighbourhood_cleansed"]): texts.append(plt.text(x, y, label, fontsize = 8)) sm = plt.cm.ScalarMappable(cmap="OrRd", norm=plt.Normalize(vmin=vmin, vmax=vmax)) # empty array for the data range sm._A = [] # add the colorbar to the figure cbar = fig.colorbar(sm) ax.axis("off") plt.show() sf_list.groupby(["neighbourhood_cleansed"]).mean().reset_index()[["neighbourhood_cleansed","price"]].sort_values(by="price", ascending=False).style.bar(color="#cd4a4c") ``` As you can see from the map, the high price is more related to the location. The most expensive areas are Golden Gate Park and Financial District. If you look at my previous research, you understand that Golden Gate Park is quite safe, unlike the Financial District which pretty criminal. All this data can be used to predict prices. But before that, let's answer the second question. #### 2. How has price and busyness changed over the course of COVID-19? Let's start by looking at price changes over the past year. ``` per = sf_cal.date.dt.to_period("M") g = sf_cal.groupby(per) ax = sns.set_palette("viridis") plt.figure(figsize=(16,6)) sns.barplot(x=g.mean().reset_index()["date"], y=g.mean().reset_index()["price"]) plt.xlabel("Month", fontsize=20) plt.ylabel("Price per night", fontsize=20) plt.title("Average Price per night in San Francisco", fontsize=25) plt.show() ``` During the covid period, the average price per night rose by about $33. And it does not stop growing linearly. Next one is busyness. 
``` ax = sns.set_palette("viridis") plt.figure(figsize=(16,6)) sns.barplot(x=g.mean().reset_index()["date"], y=g.mean().reset_index()["available"]) plt.xlabel("Month", fontsize=20) plt.ylabel("Availability, proportion", fontsize=20) plt.title("Average Availability in San Francisco", fontsize=25) plt.show() ``` September last year was quite popular (wonderful weather). Then the decline began. But with the onset of covid, the decline intensified and reached its peak (half of the housing is vacant) by May. As expected, the covid did not affect the Airbnb business in the best way. Prices have gone up and there are fewer customers. The indicators have not yet returned to their previous values. To answer the last question, we have to prepare the data for modeling. ### Can we predict the price based on its features? ## Prepare Data #### Working with NaNs and categorical variables Let's turn "last_review" and "host_since" from date type to categorical values. For that, we create new columns and fill them in. ``` sf_list["since_last_review"] = sf_list["last_review"].apply(lambda row : dateToCategorical(row)) sf_list["host_since_cat"] = sf_list["host_since"].apply(lambda row : dateToCategorical(row)) #drop all Nans in "price" columns drop_sf_list = sf_list.dropna(subset=["price"], axis=0) #create data frame with categorical values cat_sf_list = drop_sf_list[["id", "neighbourhood_cleansed", "room_type",'property_type', "since_last_review", "host_since_cat"]] #create data frame with nimerical mean_sf_list = drop_sf_list[["id", "accommodates", "review_scores_rating", "bathrooms", "bedrooms", "beds", "review_scores_accuracy", "review_scores_cleanliness", "availability_30", "number_of_reviews", "reviews_per_month", "review_scores_communication", "review_scores_location", "review_scores_value", "host_is_superhost", "host_listings_count", "price"]] num_cols = ["accommodates", "review_scores_rating", "bathrooms", "bedrooms", "beds", "review_scores_accuracy", "review_scores_cleanliness", "availability_30", "number_of_reviews", "reviews_per_month", "review_scores_communication", "review_scores_location", "review_scores_value", "host_is_superhost", "host_listings_count", "price"] for col in num_cols: mean_sf_list[col] = mean_sf_list[col].astype('float64').replace(0.0, 0.01) mean_sf_list[col] = np.log(mean_sf_list[col]) #fill the mean fill_mean = lambda col: col.fillna(col.mean()) mean_sf_list = mean_sf_list.apply(fill_mean, axis=0) #create dummy data frame cat_cols_lst = ["neighbourhood_cleansed", "room_type",'property_type', "since_last_review", "host_since_cat"] dummy_sf_list = create_dummy_df(cat_sf_list, cat_cols_lst, dummy_na=False) ``` After all, we'll merge tree Data Frames: mean_sf_list, dummy_sf_list and amenities_df. ``` full_sf_list = dummy_sf_list.merge(amenities_df.drop(["amenities"], axis=1), on="id").merge(mean_sf_list, on="id") ``` ## Data Modeling Let's start modeling. We will try several models and compare the results. 
``` #preparation train and test data X = full_sf_list.drop(["price"], axis=1) y = full_sf_list["price"] #scaling scaler = StandardScaler() X = pd.DataFrame(scaler.fit_transform(X), columns=list(X.columns)) #split into train and test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=42) #writing the metrics for every model in DataFrame metrics_columns = ["Model Name", "r-squared train", "r-squared train test", "MSE train", "MSE test"] metrics_df = pd.DataFrame(columns=metrics_columns) ``` ## Predicting Price ### AdaBoost regressor ``` adaboost_model = AdaBoostRegressor(n_estimators=20) adaboost_model.fit(X_train, y_train) #predict and score the model y_test_preds = adaboost_model.predict(X_test) y_train_preds = adaboost_model.predict(X_train) #scoring model test_r2 = round(r2_score(y_test, y_test_preds), 4) train_r2 = round(r2_score(y_train, y_train_preds), 4) test_mse = round(mean_squared_error(y_test, y_test_preds), 4) train_mse = round(mean_squared_error(y_train, y_train_preds), 4) print('r-squared score for training set was {}. r-squared score for test set was {}.'.format(train_r2, test_r2)) print('MSE score for training set was {}. MSE score for test set was {}.'.format(train_mse, test_mse)) #add row to metrics metrics_df = appendToMetricsdf(metrics_df, "AdaBoost regressor", train_r2, test_r2, train_mse, test_mse) ``` ### Gradient Boosting for regression ``` gradboost_model = GradientBoostingRegressor(n_estimators=300) gradboost_model.fit(X_train, y_train) #predict and score the model y_test_preds = gradboost_model.predict(X_test) y_train_preds = gradboost_model.predict(X_train) #scoring model test_r2 = round(r2_score(y_test, y_test_preds), 4) train_r2 = round(r2_score(y_train, y_train_preds), 4) test_mse = round(mean_squared_error(y_test, y_test_preds), 4) train_mse = round(mean_squared_error(y_train, y_train_preds), 4) print('r-squared score for training set was {}. r-squared score for test set was {}.'.format(train_r2, test_r2)) print('MSE score for training set was {}. MSE score for test set was {}.'.format(train_mse, test_mse)) metrics_df = appendToMetricsdf(metrics_df, "Gradient Boosting", train_r2, test_r2, train_mse, test_mse) ``` ### Extreme Gradient Boosting ``` xgb_reg = xgb.XGBRegressor() xgb_reg.fit(X_train, y_train) y_train_preds = xgb_reg.predict(X_train) y_test_preds = xgb_reg.predict(X_test) #scoring model test_r2 = round(r2_score(y_test, y_test_preds), 4) train_r2 = round(r2_score(y_train, y_train_preds), 4) test_mse = round(mean_squared_error(y_test, y_test_preds), 4) train_mse = round(mean_squared_error(y_train, y_train_preds), 4) print('r-squared score for training set was {}. r-squared score for test set was {}.'.format(train_r2, test_r2)) print('MSE score for training set was {}. 
MSE score for test set was {}.'.format(train_mse, test_mse)) metrics_df = appendToMetricsdf(metrics_df, "Extreme Gradient Boosting", train_r2, test_r2, train_mse, test_mse) ``` ### Neural Network ``` #building the model model = models.Sequential() model.add(layers.Dense(128, input_shape=(X_train.shape[1],), activation='relu')) model.add(layers.Dense(256, activation='relu')) model.add(layers.Dense(256, activation='relu')) model.add(layers.Dense(1, activation='linear')) #compiling the model model.compile(optimizer='adam', loss='mse', metrics=[r2_keras]) #model summary print(model.summary()) # Training the model model_start = time.time() model_history = model.fit(X_train, y_train, epochs=500, batch_size=256, validation_data=(X_test, y_test)) model_end = time.time() print(f"Time taken to run: {round((model_end - model_start)/60,1)} minutes") #evaluate model loss_train = model_history.history['loss'] loss_val = model_history.history['val_loss'] plt.figure(figsize=(8,6)) plt.plot(model_history.history['loss']) plt.plot(model_history.history['val_loss']) plt.title('Training and Test loss at each epoch') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() score_train = model.evaluate(X_train, y_train, verbose = 0) score_test = model.evaluate(X_test, y_test, verbose = 0) train_r2 = round(score_train[1], 4) test_r2 = round(score_test[1], 4) train_mse = round(score_train[0], 4) test_mse = round(score_test[0], 4) metrics_df = appendToMetricsdf(metrics_df, "Neural Network", train_r2, test_r2, train_mse, test_mse) ``` ## Evaluate the Results Let's take a look at our results and compare them with each other. ``` metrics_df ``` The AdaBoost regressor showed bad r2 score. The predictions of this model are not similar to the real values. Gradient Boosting and Extreme Gradient Boosting showed similar results, but Gradient Boosting is slightly better. Finally, I trained a neural network that performs worse than Gradient Boosting and shows overfitting. ## Thank you!
<img src="http://akhavanpour.ir/notebook/images/srttu.gif" alt="SRTTU" style="width: 150px;"/>

[![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision)

# Text generation with an LSTM recurrent network in Keras

The code is adapted from Chapter 8 of the book [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff) and from the companion notebooks of the book's author and Keras developer, [François Chollet](http://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/8.1-text-generation-with-lstm.ipynb).

```
import keras
keras.__version__
```

# Text generation with LSTM

## Implementing character-level LSTM text generation

Let's put these ideas in practice in a Keras implementation. The first thing we need is a lot of text data that we can use to learn a language model. You could use any sufficiently large text file or set of text files -- Wikipedia, the Lord of the Rings, etc. In this example we will use some of the writings of Nietzsche, the late-19th century German philosopher (translated to English). The language model we will learn will thus be specifically a model of Nietzsche's writing style and topics of choice, rather than a more generic model of the English language.

### Dataset

```
import keras
import numpy as np

path = keras.utils.get_file(
    'nietzsche.txt',
    origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('Corpus length:', len(text))
```

Next, we will extract partially overlapping sequences of length `maxlen`, one-hot encode them and pack them into a 3D Numpy array `x` of shape `(sequences, maxlen, unique_characters)`. Simultaneously, we prepare an array `y` containing the corresponding targets: the one-hot encoded characters that come right after each extracted sequence.

```
# Length of extracted character sequences
maxlen = 60

# We sample a new sequence every `step` characters
step = 3

# This holds our extracted sequences
sentences = []

# This holds the targets (the follow-up characters)
next_chars = []

for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])
print('Number of sequences:', len(sentences))

# List of unique characters in the corpus
chars = sorted(list(set(text)))
print('Unique characters:', len(chars))
# Dictionary mapping unique characters to their index in `chars`
char_indices = dict((char, chars.index(char)) for char in chars)

# Next, one-hot encode the characters into binary arrays.
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1
```

## Building the network

Our network is a single `LSTM` layer followed by a `Dense` classifier and softmax over all possible characters.
But let us note that recurrent neural networks are not the only way to do sequence data generation; 1D convnets also have proven extremely successful at it in recent times. ``` from keras import layers model = keras.models.Sequential() model.add(layers.LSTM(128, input_shape=(maxlen, len(chars)))) model.add(layers.Dense(len(chars), activation='softmax')) ``` Since our targets are one-hot encoded, we will use `categorical_crossentropy` as the loss to train the model: ``` optimizer = keras.optimizers.RMSprop(lr=0.01) model.compile(loss='categorical_crossentropy', optimizer=optimizer) ``` ## Training the language model and sampling from it Given a trained model and a seed text snippet, we generate new text by repeatedly: * 1) Drawing from the model a probability distribution over the next character given the text available so far * 2) Reweighting the distribution to a certain "temperature" * 3) Sampling the next character at random according to the reweighted distribution * 4) Adding the new character at the end of the available text This is the code we use to reweight the original probability distribution coming out of the model, and draw a character index from it (the "sampling function"): ``` def sample(preds, temperature=1.0): preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) ``` Finally, this is the loop where we repeatedly train and generated text. We start generating text using a range of different temperatures after every epoch. This allows us to see how the generated text evolves as the model starts converging, as well as the impact of temperature in the sampling strategy. ``` import random import sys for epoch in range(1, 60): print('epoch', epoch) # Fit the model for 1 epoch on the available training data model.fit(x, y, batch_size=128, epochs=1) # Select a text seed at random start_index = random.randint(0, len(text) - maxlen - 1) generated_text = text[start_index: start_index + maxlen] print('--- Generating with seed: "' + generated_text + '"') for temperature in [0.2, 0.5, 1.0, 1.2]: print('------ temperature:', temperature) sys.stdout.write(generated_text) # We generate 400 characters for i in range(400): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] sys.stdout.write(next_char) sys.stdout.flush() print() ``` As you can see, a low temperature results in extremely repetitive and predictable text, but where local structure is highly realistic: in particular, all words (a word being a local pattern of characters) are real English words. With higher temperatures, the generated text becomes more interesting, surprising, even creative; it may sometimes invent completely new words that sound somewhat plausible (such as "eterned" or "troveration"). With a high temperature, the local structure starts breaking down and most words look like semi-random strings of characters. Without a doubt, here 0.5 is the most interesting temperature for text generation in this specific setup. Always experiment with multiple sampling strategies! A clever balance between learned structure and randomness is what makes generation interesting. 
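To make the effect of temperature more concrete, here is a small self-contained illustration (an addition, not part of the original notebook) that applies the same reweighting used in `sample` to a toy distribution: low temperatures sharpen it towards the most likely character, while high temperatures flatten it towards uniform.

```
import numpy as np

def reweight_distribution(original_distribution, temperature=0.5):
    # Same math as in `sample`: log, divide by temperature, exponentiate, renormalize.
    distribution = np.log(original_distribution) / temperature
    distribution = np.exp(distribution)
    return distribution / np.sum(distribution)

toy = np.array([0.5, 0.3, 0.15, 0.05])
for t in [0.2, 0.5, 1.0, 1.2]:
    print(t, np.round(reweight_distribution(toy, t), 3))
```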
Note that by training a bigger model, longer, on more data, you can achieve generated samples that will look much more coherent and realistic than ours. But of course, don't expect to ever generate any meaningful text, other than by random chance: all we are doing is sampling data from a statistical model of which characters come after which characters. Language is a communication channel, and there is a distinction between what communications are about, and the statistical structure of the messages in which communications are encoded. To evidence this distinction, here is a thought experiment: what if human language did a better job at compressing communications, much like our computers do with most of our digital communications? Then language would be no less meaningful, yet it would lack any intrinsic statistical structure, thus making it impossible to learn a language model like we just did. ## Take aways * We can generate discrete sequence data by training a model to predict the next tokens(s) given previous tokens. * In the case of text, such a model is called a "language model" and could be based on either words or characters. * Sampling the next token requires balance between adhering to what the model judges likely, and introducing randomness. * One way to handle this is the notion of _softmax temperature_. Always experiment with different temperatures to find the "right" one.
## Swarm intelligence agent Last checked score: 1062.9 ``` def swarm(obs, conf): def send_scout_carrier(x, y): """ send scout carrier to explore current cell and, if possible, cell above """ points = send_scouts(x, y) # if cell above exists if y > 0: cell_above_points = send_scouts(x, y - 1) # cell above points have lower priority if points < m1 and points < (cell_above_points - 1): # current cell's points will be negative points -= cell_above_points return points def send_scouts(x, y): """ send scouts to get points from all axes of the cell """ axes = explore_axes(x, y) points = combine_points(axes) return points def explore_axes(x, y): """ find points, marks, zeros and amount of in_air cells of all axes of the cell, "NE" = North-East etc. """ return { "NE -> SW": [ explore_direction(x, lambda z : z + 1, y, lambda z : z - 1), explore_direction(x, lambda z : z - 1, y, lambda z : z + 1) ], "E -> W": [ explore_direction(x, lambda z : z + 1, y, lambda z : z), explore_direction(x, lambda z : z - 1, y, lambda z : z) ], "SE -> NW": [ explore_direction(x, lambda z : z + 1, y, lambda z : z + 1), explore_direction(x, lambda z : z - 1, y, lambda z : z - 1) ], "S -> N": [ explore_direction(x, lambda z : z, y, lambda z : z + 1), explore_direction(x, lambda z : z, y, lambda z : z - 1) ] } def explore_direction(x, x_fun, y, y_fun): """ get points, mark, zeros and amount of in_air cells of this direction """ # consider only opponents mark mark = 0 points = 0 zeros = 0 in_air = 0 for i in range(one_mark_to_win): x = x_fun(x) y = y_fun(y) # if board[x][y] is inside board's borders if y >= 0 and y < conf.rows and x >= 0 and x < conf.columns: # mark of the direction will be the mark of the first non-empty cell if mark == 0 and board[x][y] != 0: mark = board[x][y] # if board[x][y] is empty if board[x][y] == 0: zeros += 1 if (y + 1) < conf.rows and board[x][y + 1] == 0: in_air += 1 elif board[x][y] == mark: points += 1 # stop searching for marks in this direction else: break return { "mark": mark, "points": points, "zeros": zeros, "in_air": in_air } def combine_points(axes): """ combine points of different axes """ points = 0 # loop through all axes for axis in axes: # if mark in both directions of the axis is the same # or mark is zero in one or both directions of the axis if (axes[axis][0]["mark"] == axes[axis][1]["mark"] or axes[axis][0]["mark"] == 0 or axes[axis][1]["mark"] == 0): # combine points of the same axis points += evaluate_amount_of_points( axes[axis][0]["points"] + axes[axis][1]["points"], axes[axis][0]["zeros"] + axes[axis][1]["zeros"], axes[axis][0]["in_air"] + axes[axis][1]["in_air"], m1, m2, axes[axis][0]["mark"] ) else: # if marks in directions of the axis are different and none of those marks is 0 for direction in axes[axis]: points += evaluate_amount_of_points( direction["points"], direction["zeros"], direction["in_air"], m1, m2, direction["mark"] ) return points def evaluate_amount_of_points(points, zeros, in_air, m1, m2, mark): """ evaluate amount of points in one direction or entire axis """ # if points + zeros in one direction or entire axis >= one_mark_to_win # multiply amount of points by one of the multipliers or keep amount of points as it is if (points + zeros) >= one_mark_to_win: if points >= one_mark_to_win: points *= m1 elif points == two_marks_to_win: points = points * m2 + zeros - in_air else: points = points + zeros - in_air else: points = 0 return points ################################################################################# # one_mark_to_win points multiplier m1 
= 100 # two_marks_to_win points multiplier m2 = 10 # define swarm's mark swarm_mark = obs.mark # define opponent's mark opp_mark = 2 if swarm_mark == 1 else 1 # define one mark to victory one_mark_to_win = conf.inarow - 1 # define two marks to victory two_marks_to_win = conf.inarow - 2 # define board as two dimensional array board = [] for column in range(conf.columns): board.append([]) for row in range(conf.rows): board[column].append(obs.board[conf.columns * row + column]) # define board center board_center = conf.columns // 2 # start searching for the_column from board center x = board_center # shift to left/right from board center shift = 0 # THE COLUMN !!! the_column = { "x": x, "points": float("-inf") } # searching for the_column while x >= 0 and x < conf.columns: # find first empty cell starting from bottom of the column y = conf.rows - 1 while y >= 0 and board[x][y] != 0: y -= 1 # if column is not full if y >= 0: # send scout carrier to get points points = send_scout_carrier(x, y) # evaluate which column is THE COLUMN !!! if points > the_column["points"]: the_column["x"] = x the_column["points"] = points # shift x to right or left from swarm center shift *= -1 if shift >= 0: shift += 1 x = board_center + shift # Swarm's final decision :) return the_column["x"] ``` #### Converting the agent into a python file so that it can be submitted ``` import inspect import os def write_agent_to_file(function, file): with open(file, "a" if os.path.exists(file) else "w") as f: f.write(inspect.getsource(function)) print(function, "written to", file) write_agent_to_file(swarm, os.getcwd() + "\\submission.py") ```
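Before submitting, it can help to sanity-check the agent locally with the `kaggle_environments` package. The cell below is a hedged sketch (assuming the package is installed and that this agent targets the ConnectX environment), not part of the original notebook:

```
# Quick local check of the swarm agent (assumes kaggle_environments is installed
# and that this agent is written for the ConnectX environment).
from kaggle_environments import evaluate, make

env = make("connectx", debug=True)
env.run([swarm, "random"])                      # one game vs. the built-in random agent
env.render(mode="ipython", width=500, height=450)

# Rough strength estimate over a handful of episodes.
print(evaluate("connectx", [swarm, "random"], num_episodes=10))
print(evaluate("connectx", [swarm, "negamax"], num_episodes=10))
```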
# Getting to know LSTMs better Created: September 13, 2018 Author: Thamme Gowda Goals: - To get batches of *unequal length sequences* encoded correctly! - Know how the hidden states flow between encoders and decoders - Know how the multiple stacked LSTM layers pass hidden states Example: a simple bi-directional LSTM which takes 3d input vectors and produces 2d output vectors. ``` import torch from torch import nn lstm = nn.LSTM(3, 2, batch_first=True, bidirectional=True) # Lets create a batch input. # 3 sequences in batch (the first dim) , see batch_first=True # Then the logest sequence is 4 time steps, ==> second dimension # Each time step has 3d vector which is input ==> last dimension pad_seq = torch.rand(3, 4, 3) # That is nice for the theory # but in practice we are dealing with un equal length sequences # among those 3 sequences in the batch, lets us say # first sequence is the longest, with 4 time steps --> no padding needed # second seq is 3 time steps --> pad the last time step pad_seq[1, 3, :] = 0.0 # third seq is 2 time steps --> pad the last two steps pad_seq[2, 2:, :] = 0.0 print("Padded Input:") print(pad_seq) # so we got these lengths lens = [4,3,2] print("Sequence Lenghts: ", lens) # lets send padded seq to LSTM out,(h_t, c_t) = lstm(pad_seq) print("All Outputs:") print(out) ``` ^^ Output is 2x2d=4d vector since it is bidirectional forward 2d, backward 2d are concatenated Total vectors=12: 3 seqs in batch x 4 time steps;; each vector is 4d > Hmm, what happened to my padding time steps? Will padded zeros mess with the internal weights of LSTM when I do backprop? --- Lets look at the last Hidden state ``` print(h_t) ``` Last hidden state is a 2d (same as output) vectors, but 2 for each step because of bidirectional rnn There are 3 of them since there were three seqs in the batch each corresponding to the last step But the definition of *last time step* is bit tricky For the left-to-right LSTM, it is the last step of input For the right-to-left LSTM, it is the first step of input This makes sense now. --- Lets look at $c_t$: ``` print("Last c_t:") print(c_t) ``` This should be similar to the last hidden state. ## Question: > what happened to my padding time steps? Did the last hidden state exclude the padded time steps? I can see that last hidden state of the forward LSTM didnt distinguish padded zeros. Lets see output of each time steps and last hidden state of left-to-right LSTM, again. We know that the lengths (after removing padding) are \[4,3,2] ``` print("All time stamp outputs:") print(out[:, :, :2]) print("Last hidden state (forward LSTM):") print(h_t[0]) ``` *Okay, Now I get it.* When building sequence to sequence (for Machine translation) I cant pass last hidden state like this to a decoder. We have to inform the LSTM about lengths. How? Thats why we have `torch.nn.utils.rnn.pack_padded_sequence` ``` print("Padded Seqs:") print(pad_seq) print("Lens:", lens) print("Pack Padded Seqs:") pac_pad_seq = torch.nn.utils.rnn.pack_padded_sequence(pad_seq, lens, batch_first=True) print(pac_pad_seq) ``` Okay, this is doing some magic -- getting rid of all padded zeros -- Cool! `batch_sizes=tensor([3, 3, 2, 1]` seems to be the main ingredient of this magic. `[3, 3, 2, 1]` I get it! We have 4 time steps in batch. - First two step has all 3 seqs in the batch. - third step is made of first 2 seqs in batch. - Fourth step is made of first seq in batch I now understand why the sequences in the batch has to be sorted by descending order of lengths! 
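A side note (not from the original write-up): more recent PyTorch releases add an `enforce_sorted` flag to `pack_padded_sequence`, so sorting by descending length is only mandatory on older versions. A minimal sketch, assuming a PyTorch version that has this flag:

```
# Assumes a PyTorch version where pack_padded_sequence supports enforce_sorted.
# With enforce_sorted=False, the batch no longer has to be pre-sorted by length.
unsorted_lens = [2, 4, 3]
packed_unsorted = torch.nn.utils.rnn.pack_padded_sequence(
    torch.rand(3, 4, 3), unsorted_lens, batch_first=True, enforce_sorted=False)
```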
Now let us send it to LSTM and see what it produces ``` pac_pad_out, (pac_ht, pac_ct) = lstm(pac_pad_seq) # Lets first look at output. this is packed output print(pac_pad_out) ``` Okay this is packed output. Sequences are of unequal lengths. Now we need to restore the output by padding 0s for shorter sequences. ``` pad_out = nn.utils.rnn.pad_packed_sequence(pac_pad_out, batch_first=True, padding_value=0) print(pad_out) ``` Output looks good! Now Let us look at the hidden state. ``` print(pac_ht) ``` This is great. As we see the forward (or Left-to-right) LSTM's last hidden state is proper as per the lengths. So should be the c_t. Let us concatenate forward and reverse LSTM's hidden states ``` torch.cat([pac_ht[0],pac_ht[1]], dim=1) ``` ---- # Multi Layer LSTM Let us redo the above hacking to understand how 2 layer LSTM works ``` n_layers = 2 inp_size = 3 out_size = 2 lstm2 = nn.LSTM(inp_size, out_size, num_layers=n_layers, batch_first=True, bidirectional=True) pac_out, (h_n, c_n) = lstm2(pac_pad_seq) print("Packed Output:") print(pac_out) pad_out = nn.utils.rnn.pad_packed_sequence(pac_out, batch_first=True, padding_value=0) print("Pad Output:") print(pad_out) print("Last h_n:") print(h_n) print("Last c_n:") print(c_n) ``` The LSTM output looks similar to single layer LSTM. However the ht and ct states are bigger -- since there are two layers. Now its time to RTFM. > h_n of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the hidden state for `t = seq_len`. Like output, the layers can be separated using `h_n.view(num_layers, num_directions, batch, hidden_size)` and similarly for c_n. ``` batch_size = 3 num_dirs = 2 l_n_h_n = h_n.view(n_layers, num_dirs, batch_size, out_size)[-1] # last layer last time step hidden state print(l_n_h_n) last_hid = torch.cat([l_n_h_n[0], l_n_h_n[1]], dim=1) print("last layer last time stamp hidden state") print(last_hid) print("Padded Outputs :") print(pad_out) ```
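To wrap up, the pack → LSTM → unpack round trip demonstrated in this notebook can be collected into one small helper. This is only a restatement of the calls used above (nothing new is assumed beyond the `lstm2`, `pad_seq` and `lens` already defined):

```
def run_lstm_packed(lstm_module, padded_batch, lengths):
    """Pack a padded batch, run it through an LSTM, and unpack the outputs."""
    packed = torch.nn.utils.rnn.pack_padded_sequence(padded_batch, lengths, batch_first=True)
    packed_out, (h_n, c_n) = lstm_module(packed)
    out, out_lens = torch.nn.utils.rnn.pad_packed_sequence(packed_out, batch_first=True, padding_value=0)
    return out, out_lens, h_n, c_n

# Same results as the manual steps above
out, out_lens, h_n, c_n = run_lstm_packed(lstm2, pad_seq, lens)
print(out.shape, out_lens, h_n.shape)
```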
## Differential Privacy - Simple Database Queries The database is going to be a VERY simple database with only one boolean column. Each row corresponds to a person. Each value corresponds to whether or not that person has a certain private attribute (such as whether they have a certain disease, or whether they are above/below a certain age). We are then going to learn how to know whether a database query over such a small database is differentially private or not - and more importantly - what techniques we can employ to ensure various levels of privacy #### Create a Simple Database To do this, initialize a random list of 1s and 0s (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database. ``` import torch # the number of entries in our DB / this of it as number of people in the DB num_entries = 5000 db = torch.rand(num_entries) > 0.5 db ``` ## Generate Parallel Databases > "When querying a database, if I removed someone from the database, would the output of the query change?". In order to check for this, we create "parallel databases" which are simply databases with one entry removed. We'll create a list of every parallel database to the one currently contained in the "db" variable. Then, create a helper function which does the following: - creates the initial database (db) - creates all parallel databases ``` def create_parallel_db(db, remove_index): return torch.cat((db[0:remove_index], db[remove_index+1:])) def create_parallel_dbs(db): parallel_dbs = list() for i in range(len(db)): pdb = create_parallel_db(db, i) parallel_dbs.append(pdb) return parallel_dbs def create_db_and_parallels(num_entries): # generate dbs and parallel dbs on the fly db = torch.rand(num_entries) > 0.5 pdbs = create_parallel_dbs(db) return db, pdbs db, pdbs = create_db_and_parallels(10) pdbs print("Real database:", db) print("Size of real DB", db.size()) print("A sample parallel DB", pdbs[0]) print("Size of parallel DB", pdbs[0].size()) ``` # Towards Evaluating The Differential Privacy of a Function Intuitively, we want to be able to query our database and evaluate whether or not the result of the query is leaking "private" information. > This is about evaluating whether the output of a query changes when we remove someone from the database. Specifically, we want to evaluate the *maximum* amount the query changes when someone is removed (maximum over all possible people who could be removed). To find how much privacy is leaked, we'll iterate over each person in the database and **measure** the difference in the output of the query relative to when we query the entire database. Just for the sake of argument, let's make our first "database query" a simple sum. Aka, we're going to count the number of 1s in the database. ``` db, pdbs = create_db_and_parallels(200) def query(db): return db.sum() query(db) # the output of the parallel dbs is different from the db query query(pdbs[1]) full_db_result = query(db) print(full_db_result) sensitivity = 0 sensitivity_scale = [] for pdb in pdbs: pdb_result = query(pdb) db_distance = torch.abs(pdb_result - full_db_result) if(db_distance > sensitivity): sensitivity_scale.append(db_distance) sensitivity = db_distance sensitivity ``` #### Sensitivity > The maximum amount the query changes when removing an individual from the DB. 
# Evaluating the Privacy of a Function

The maximum difference between each parallel db's query result and the query result for the real database (which was 1 here) is called "sensitivity". It depends on the function we chose for the query. The "sum" query will always have a sensitivity of exactly 1.

We can also calculate sensitivity for other functions. Let's calculate sensitivity for the "mean" function.

```
def sensitivity(query, num_entries=1000):
    db, pdbs = create_db_and_parallels(num_entries)
    full_db_result = query(db)
    max_distance = 0
    for pdb in pdbs:
        # for each parallel db, execute the query (sum, or mean, ..., etc)
        pdb_result = query(pdb)
        db_distance = torch.abs(pdb_result - full_db_result)
        if (db_distance > max_distance):
            max_distance = db_distance
    return max_distance

# our query is now the mean
def query(db):
    return db.float().mean()

sensitivity(query)
```

Wow! That sensitivity is WAY lower. Note the intuition here.

> "Sensitivity" is measuring how sensitive the output of the query is to a person being removed from the database.

For a simple sum, this is always 1, but for the mean, removing a person is going to change the result of the query by roughly 1 divided by the size of the database. Thus, "mean" is a VASTLY less "sensitive" function (query) than SUM.

# Calculating L1 Sensitivity For Threshold

To calculate the sensitivity for the "threshold" function:

- First compute the sum over the database (i.e. sum(db)) and return whether that sum is greater than a certain threshold.
- Then, create databases of size 10 and a threshold of 5 and calculate the sensitivity of the function.
- Finally, re-initialize the database 10 times and calculate the sensitivity each time.

```
def query(db, threshold=5):
    """ Query that adds a threshold of 5, and returns whether sum is > threshold or not. """
    return (db.sum() > threshold).float()

for i in range(10):
    sens = sensitivity(query, num_entries=10)
    print(sens)
```

# A Basic Differencing Attack

Sadly none of the functions we've looked at so far are differentially private (despite them having varying levels of sensitivity). The most basic type of attack can be done as follows.

Let's say we wanted to figure out a specific person's value in the database. All we would have to do is query for the sum of the entire database and then the sum of the entire database without that person!

## Performing a Differencing Attack on Row 10 (How privacy can fail)

We'll construct a database and then demonstrate how one can use two different sum queries to expose the value of the person represented by row 10 in the database (note, you'll need to use a database with at least 10 rows).

```
db, _ = create_db_and_parallels(100)
db

# create a parallel db with that person (index 10) removed
pdb = create_parallel_db(db, remove_index=10)
pdb

# differencing attack using sum query
sum(db) - sum(pdb)

# a differencing attack using mean query
sum(db).float() / len(db) - sum(pdb).float() / len(pdb)

# differencing using a threshold (cast to float first, since subtracting two bool tensors is not supported)
(sum(db).float() > 50).float() - (sum(pdb).float() > 50).float()
```

# Local Differential Privacy

Differential privacy always requires a form of randomness or noise added to the query to protect from things like differencing attacks.

To explain this, let's look at Randomized Response.

### Randomized Response (Local Differential Privacy)

Let's say I have a group of people I wish to survey about a very taboo behavior which I think they will lie about (say, I want to know if they have ever committed a certain kind of crime).
I'm not a policeman, I'm just trying to collect statistics to understand the higher level trend in society.

So, how do we do this? One technique is to add randomness to each person's response by giving each person the following instructions (assuming I'm asking a simple yes/no question):

- Flip a coin 2 times.
- If the first coin flip is heads, answer honestly.
- If the first coin flip is tails, answer according to the second coin flip (heads for yes, tails for no)!

Thus, each person is now protected with "plausible deniability". If they answer "Yes" to the question "have you committed X crime?", it might be because they actually did, or it might be because they are answering according to a random coin flip. Each person has a high degree of protection. Furthermore, we can recover the underlying statistics with some accuracy, as the "true statistics" are simply averaged with a 50% probability. Thus, if we collect a bunch of samples and it turns out that 60% of people answer yes, then we know that the TRUE distribution is actually centered around 70%, because 70% averaged with 50% (a coin flip) is 60%, which is the result we obtained.

However, it should be noted that, especially when we only have a few samples, this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy.

> NOTE: **The greater the privacy protection (plausible deniability), the less accurate the results.**

Let's implement this local DP scheme for the database from before! The main goals are to:

* Get the most accurate query with the **greatest** amount of privacy
* Get the greatest fit with trust models in the actual world (don't waste trust)

Let's implement local differential privacy:

```
db, pdbs = create_db_and_parallels(100)
db

def query(db):
    true_result = torch.mean(db.float())
    # local differential privacy is adding noise to data: replacing some
    # of the values with random values
    first_coin_flip = (torch.rand(len(db)) > 0.5).float()
    second_coin_flip = (torch.rand(len(db)) > 0.5).float()
    # differentially private DB ...
    augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip
    # the result is skewed if we just do:
    # torch.mean(augmented_db.float())
    # so we remove the skew that was introduced by the differential privacy
    dp_result = torch.mean(augmented_db.float()) * 2 - 0.5
    return dp_result, true_result

db, pdbs = create_db_and_parallels(10)
private_result, true_result = query(db)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset
db, pdbs = create_db_and_parallels(100)
private_result, true_result = query(db)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset even further
db, pdbs = create_db_and_parallels(1000)
private_result, true_result = query(db)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")
```

As we have seen,

> The more data we have, the less the noise will tend to affect the output of the query.

# Varying Amounts of Noise

We are going to augment the randomized response query to allow for varying amounts of randomness to be added. To do this, we bias the coin flip to be higher or lower and then run the same experiment.

We'll need to both adjust the likelihood of the first coin flip AND the de-skewing at the end (where we compute the de-skewed result).

```
# noise sets the probability that a person answers at random: the first coin flip
# comes up heads (answer honestly) with probability 1 - noise.
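# (Added note sketching the algebra behind the de-skewing used below, under the convention
#  first_coin_flip = (torch.rand(len(db)) > noise): a fraction (1 - noise) of people answer
#  honestly and a fraction noise answer with a fair coin, so
#      E[augmented_db.mean()] = (1 - noise) * true_mean + noise * 0.5
#  Solving for true_mean gives the de-skewing formula:
#      true_mean_estimate = (skewed_mean - 0.5 * noise) / (1 - noise)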
noise = 0.2

true_result = torch.mean(db.float())

# let's add the noise to the data: replacing some of the values with random values
first_coin_flip = (torch.rand(len(db)) > noise).float()
second_coin_flip = (torch.rand(len(db)) > 0.5).float()

# differentially private DB ...
augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip

# the raw mean of the augmented DB is skewed towards 0.5
sk_result = augmented_db.float().mean()

# de-skew it with the general formula; the earlier "* 2 - 0.5" version is only correct when noise == 0.5
dp_result = (sk_result - 0.5 * noise) / (1 - noise)

print('True result:', true_result)
print('Skewed result:', sk_result)
print('De-skewed result:', dp_result)

def query(db, noise=0.2):
    """noise is the probability that a person answers at random (first coin flip tails);
    with the default of 0.2, 80% of people answer honestly."""
    true_result = torch.mean(db.float())
    # local diff privacy is adding noise to data: replacing some
    # of the values with random values
    first_coin_flip = (torch.rand(len(db)) > noise).float()
    second_coin_flip = (torch.rand(len(db)) > 0.5).float()
    # differentially private DB ...
    augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip
    # the result is skewed if we just take torch.mean(augmented_db.float()),
    # so we remove the skew introduced by the differential privacy
    sk_result = augmented_db.float().mean()
    private_result = (sk_result - 0.5 * noise) / (1 - noise)
    return private_result, true_result

# test varying noise
db, pdbs = create_db_and_parallels(10)
private_result, true_result = query(db, noise=0.2)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset
db, pdbs = create_db_and_parallels(100)
private_result, true_result = query(db, noise=0.4)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset even further
db, pdbs = create_db_and_parallels(10000)
private_result, true_result = query(db, noise=0.8)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")
```

From the analysis above, with more data, it's easier to protect privacy with noise. It becomes a lot easier to learn about general characteristics in the DB because the algorithm has more data points to look at and compare with each other.

So differential privacy mechanisms have helped us filter out any information unique to individual data entities while trying to let through information that is consistent across multiple different people in the dataset.

> The larger the dataset, the easier it is to protect privacy.

# The Formal Definition of Differential Privacy

The previous method of adding noise was called "Local Differential Privacy" because we added noise to each datapoint individually. This is necessary for some situations wherein the data is SO sensitive that individuals do not trust noise to be added later. However, it comes at a very high cost in terms of accuracy.

Alternatively, we can add noise AFTER the data has been aggregated by a function. This kind of noise can allow for similar levels of protection with a lower effect on accuracy. However, participants must be able to trust that no-one looked at their datapoints _before_ the aggregation took place. In some situations this works out well; in others (such as an individual hand-surveying a group of people), this is less realistic.
Nevertheless, global differential privacy is incredibly important because it allows us to perform differential privacy on smaller groups of individuals with lower amounts of noise. Let's revisit our sum function.

```
db, pdbs = create_db_and_parallels(100)

def query(db):
    return torch.sum(db.float())

def M(db, noise=0):
    # a "mechanism" M is simply the query with some noise added to its output;
    # how much noise to add is the question answered in the rest of this section
    return query(db) + noise

query(db)
```

So the idea here is that we want to add noise to the output of our function. We actually have two different kinds of noise we can add - Laplacian Noise or Gaussian Noise. However, before we do so, we need to dive into the formal definition of Differential Privacy.

![alt text](dp_formula.png "Title")

_Image From: "The Algorithmic Foundations of Differential Privacy" - Cynthia Dwork and Aaron Roth - https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf_

This definition does not _create_ differential privacy; instead it is a measure of how much privacy is afforded by a query M. Specifically, it's a comparison between running the query M on a database (x) and on a parallel database (y). As you remember, parallel databases are defined to be the same as a full database (x) with one entry/person removed.

Thus, this definition says that FOR ALL parallel databases, the probability that the randomized query M produces any given output on database (x) is at most e^epsilon times the probability that it produces the same output on database (y), and that this constraint is allowed to fail with probability at most delta. This is why the definition is called "epsilon-delta" differential privacy.

# Epsilon

Let's unpack the intuition of this for a moment.

Epsilon Zero: If a query satisfied this inequality where epsilon was set to 0, then that would mean that the query's output distribution is exactly the same for every parallel database as for the full database. As you may remember, when we calculated the "threshold" function, often the sensitivity was 0. In that case, the epsilon also happened to be zero.

Epsilon One: If a query satisfied this inequality with epsilon 1, then the output distributions are allowed to differ more - more precisely, the probabilities under the two random distributions M(x) and M(y) may differ by up to a factor of e^1 (because all these queries have some amount of randomness in them, just like we observed in the last section).

# Delta

Delta is basically the probability that epsilon breaks. Namely, sometimes the epsilon is different for some queries than it is for others. For example, you may remember when we were calculating the sensitivity of threshold, most of the time the sensitivity was 0 but sometimes it was 1. Thus, we could describe this as "epsilon zero but non-zero delta", which would say that epsilon is perfect except for some fraction of the time when it's arbitrarily higher. Note that this expression doesn't represent the full tradeoff between epsilon and delta.

# How To Add Noise for Global Differential Privacy

Global Differential Privacy adds noise to the output of a query. We'll add noise to the output of our query so that it satisfies a certain epsilon-delta differential privacy threshold.

There are two kinds of noise we can add: Gaussian noise or Laplacian noise. Generally speaking Laplacian is better, but both are still valid. Now to the hard question...

### How much noise should we add?
The amount of noise necessary to add to the output of a query is a function of four things:

- the type of noise (Gaussian/Laplacian)
- the sensitivity of the query/function
- the desired epsilon (ε)
- the desired delta (δ)

Thus, for each type of noise we're adding, we have a different way of calculating how much to add as a function of sensitivity, epsilon, and delta.

Laplacian noise is increased/decreased according to a "scale" parameter b. We choose "b" based on the following formula.

`b = sensitivity(query) / epsilon`

In other words, if we set b to be this value, then we know that we will have a privacy leakage of <= epsilon. Furthermore, the nice thing about Laplace is that it guarantees this with delta == 0. There are some tunings where we can have very low epsilon and a non-zero delta, but we'll ignore them for now.

### Querying Repeatedly

If we query the database multiple times, we can simply add the epsilons (even if we change the amount of noise and the individual epsilons are not the same).

# Create a Differentially Private Query

Let's create a query function which sums over the database and adds just the right amount of noise such that it satisfies an epsilon constraint. We'll build the query both for "sum" and for "mean", and use the correct sensitivity measure for each.

```
epsilon = 0.001

import numpy as np

db, pdbs = create_db_and_parallels(100)
db

def sum_query(db):
    return db.sum()

def laplacian_mechanism(db, query, sensitivity):
    beta = sensitivity / epsilon
    noise = torch.tensor(np.random.laplace(0, beta, 1))
    return query(db) + noise

# the sum query over a 0/1 database has sensitivity 1
laplacian_mechanism(db, sum_query, 1)

def mean_query(db):
    return torch.mean(db.float())

# the mean query over 100 entries has sensitivity 1/100
laplacian_mechanism(db, mean_query, 1 / 100)
```

# Differential Privacy for Deep Learning

So what does all of this have to do with Deep Learning? Well, these mechanisms form the core primitives for how Differential Privacy provides guarantees in the context of Deep Learning.

### Perfect Privacy

> "a query to a database returns the same value even if we remove any person from the database".

In the context of Deep Learning, we have a similar standard.

> Training a model on a dataset should return the same model even if we remove any person from the dataset.

Thus, we've replaced "querying a database" with "training a model on a dataset". In essence, the training process is a kind of query. However, one should note that this adds two points of complexity which database queries did not have:

1. do we always know where "people" are referenced in the dataset?
2. neural models rarely train to the same output model, even on identical data

The answer to (1) is to treat each training example as a single, separate person. Strictly speaking, this is often overly zealous, as some training examples have no relevance to people and others may reference multiple or partial people (consider an image with multiple people contained within it). Thus, localizing exactly where "people" are referenced, and thus how much your model would change if people were removed, is challenging.

The answer to (2) is also an open problem. To address both, let's look at PATE.

## Scenario: A Health Neural Network

You work for a hospital and you have a large collection of images of your patients. However, you don't know what's in them. You would like to use these images to develop a neural network which can automatically classify them; however, since your images aren't labeled, they aren't sufficient to train a classifier.
However, being a cunning strategist, you realize that you can reach out to 10 partner hospitals which have annotated data. It is your hope to train your new classifier on their datasets so that you can automatically label your own. While these hospitals are interested in helping, they have privacy concerns regarding information about their patients. Thus, you will use the following technique to train a classifier which protects the privacy of patients in the other hospitals.

- 1) You'll ask each of the 10 hospitals to train a model on their own datasets (all of which have the same kinds of labels)
- 2) You'll then use each of the 10 partner models to predict on your local dataset, generating 10 labels for each of your datapoints
- 3) Then, for each local data point (now with 10 labels), you will perform a DP query to generate the final true label. This query is a "max" function, where "max" is the most frequent label across the 10 labels. We will need to add Laplacian noise to make this differentially private to a certain epsilon/delta constraint.
- 4) Finally, we will retrain a new model on our local dataset, which now has labels. This will be our final "DP" model.

So, let's walk through these steps. I will assume you're already familiar with how to train/predict a deep neural network, so we'll skip steps 1 and 2 and work with example data. We'll focus instead on step 3, namely how to perform the DP query for each example using toy data.

So, let's say we have 10,000 training examples, and we've got 10 labels for each example (from our 10 "teacher models" which were trained directly on private data). Each label is chosen from a set of 10 possible labels (categories) for each image.

```
import numpy as np

num_teachers = 10     # we're working with 10 partner hospitals
num_examples = 10000  # the size of OUR dataset
num_labels = 10       # number of labels for our classifier

# fake predictions
fake_preds = (
    np.random.rand(
        num_teachers, num_examples
    ) * num_labels).astype(int).transpose(1, 0)

fake_preds[:, 0]

# Step 3: Perform a DP query to generate the final true label/outputs,
# Use the argmax function to find the most frequent label across all 10 labels,
# Then finally add some noise to make it differentially private.

new_labels = list()
for an_image in fake_preds:
    # count the labels the hospitals came up with (cast to float so we can add Laplacian noise)
    label_counts = np.bincount(an_image, minlength=num_labels).astype(float)

    epsilon = 0.1
    beta = 1 / epsilon

    for i in range(len(label_counts)):
        # for each label, add some noise to the counts
        label_counts[i] += np.random.laplace(0, beta)

    new_label = np.argmax(label_counts)
    new_labels.append(new_label)

# new_labels
new_labels[:10]
```

# PATE Analysis

```
# let's say the hospitals came up with these outputs... 9, 9, 3, 6, ..., 2
labels = np.array([9, 9, 3, 6, 9, 9, 9, 9, 8, 2])
counts = np.bincount(labels, minlength=10)
print(counts)
query_result = np.argmax(counts)
query_result
```

If every hospital says the result is 9, then we have very low sensitivity: we could remove a person from the dataset and the query result would still be 9, so we have not leaked any information about that person.

Core assumption: the same patient was not present at any two of these hospitals.

Removing any one of these hospitals then acts as a proxy for removing one person, which means that if we do remove one hospital, the query result should not be different.
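To make that intuition a bit more concrete, here is a small sketch (with made-up vote vectors) that repeats the noisy argmax many times for a unanimous set of teacher labels and for a split one; the more the teachers agree, the more often the noisy query still returns the consensus label:

```
def noisy_argmax(votes, num_labels=10, epsilon=0.1):
    # count the votes and add Laplacian noise with scale 1/epsilon to each count
    counts = np.bincount(votes, minlength=num_labels).astype(float)
    counts += np.random.laplace(0, 1 / epsilon, size=num_labels)
    return np.argmax(counts)

unanimous = np.array([9] * 10)                     # every hospital answers 9
split = np.array([9, 9, 3, 6, 9, 2, 9, 8, 9, 2])   # hospitals disagree

for name, votes in [("unanimous", unanimous), ("split", split)]:
    answers = np.array([noisy_argmax(votes) for _ in range(1000)])
    print(f"{name}: label 9 returned {np.mean(answers == 9):.0%} of the time")
```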
``` from syft.frameworks.torch.differential_privacy import pate num_teachers, num_examples, num_labels = (100, 100, 10) # generate fake predictions/labels preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int) indices = (np.random.rand(num_examples) * num_labels).astype(int) # true answers preds[:,0:10] *= 0 # perform PATE to find the data depended epsilon and data independent epsilon data_dep_eps, data_ind_eps = pate.perform_analysis( teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5 ) print('Data Independent Epsilon', data_ind_eps) print('Data Dependent Epsilon', data_dep_eps) assert data_dep_eps < data_ind_eps data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5) print("Data Independent Epsilon:", data_ind_eps) print("Data Dependent Epsilon:", data_dep_eps) preds[:,0:50] *= 0 data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5, moments=20) print("Data Independent Epsilon:", data_ind_eps) print("Data Dependent Epsilon:", data_dep_eps) ``` # Where to Go From Here Read: - Algorithmic Foundations of Differential Privacy: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf - Deep Learning with Differential Privacy: https://arxiv.org/pdf/1607.00133.pdf - The Ethical Algorithm: https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205 Topics: - The Exponential Mechanism - The Moment's Accountant - Differentially Private Stochastic Gradient Descent Advice: - For deployments - stick with public frameworks! - Join the Differential Privacy Community - Don't get ahead of yourself - DP is still in the early days # Application of DP in Private Federated Learning DP works by adding statistical noise either at the input level or output level of the model so that you can mask out individual user contribution, but at the same time gain insight into th overall population without sacrificing privacy. > Case: Figure out average money one has in their pockets. We could go and ask someone how much they have in their wallet. They pick a random number between -100 and 100. Add that to the real value, say $20 and a picked number of 100. resulting in 120. That way, we have no way to know what the actual amount of money in their wallet is. When sufficiently large numbers of people submit these results, if we take the average, the noise will cancel out and we'll start seeing the true average. Apart from statistical use cases, we can apply DP in Private Federated learning. Suppose you want to train a model using distributed learning across a number of user devices. One way to do that is to get all the private data from the devices, but that's not very privacy friendly. Instead, we send the model from the server back to the devices. The devices will then train the model using their user data, and only send the privatized model updates back to the server. Server will then aggregate the updates and make an informed decision of the overall model on the server. As you do more and more rounds, slowly the model converges to the true population without private user data having to leave the devices. If you increase the level of privacy, the model converges a bit slower and vice versa. # Project: For the final project for this section, you're going to train a DP model using this PATE method on the MNIST dataset, provided below. 
``` import torchvision.datasets as datasets mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None) train_data = mnist_trainset.train_data train_targets = mnist_trainset.train_labels test_data = mnist_trainset.test_data test_targets = mnist_trainset.test_labels ```
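As one possible starting point for step 1 of the project (just a sketch of how the data could be split, not a prescribed solution), the training set loaded above can be partitioned into 10 disjoint shards, one per "teacher" hospital:

```
num_teachers = 10
shard_size = len(train_data) // num_teachers   # 60000 // 10 = 6000 examples per teacher

# disjoint (images, labels) shards, one per partner hospital / teacher model
teacher_shards = [
    (train_data[i * shard_size:(i + 1) * shard_size],
     train_targets[i * shard_size:(i + 1) * shard_size])
    for i in range(num_teachers)
]

print(len(teacher_shards), teacher_shards[0][0].shape, teacher_shards[0][1].shape)
```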
``` import os from tqdm import tqdm from typing import Optional, List, Dict from dataclasses import dataclass, field import torch from transformers import AutoModel, AutoTokenizer # bluebert models BlueBERT_MODELCARD = [ 'bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12', 'bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16', 'bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12', 'bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16' ] # googlebert models GoogleBERT_MODELCARD = [ 'google/bert_uncased_L-2_H-128_A-2', 'google/bert_uncased_L-4_H-128_A-2', 'google/bert_uncased_L-6_H-128_A-2', 'google/bert_uncased_L-2_H-512_A-2', 'google/bert_uncased_L-4_H-512_A-2', 'google/bert_uncased_L-6_H-512_A-2', ] @dataclass class EhrKgNode2IdMapping: ''' This class could be only implemented, as the form of "entity2id.txt" (or "node2id.txt" in the feature) ''' exp_path: str file_name: str = field(default='entity2id.txt') # actually it means node2id.txt (they all have entities and literals) kg_special_token_ids: dict = field(default_factory=lambda: {"PAD":0,"MASK":1}) skip_first_line: bool = True def get_lines(self): file_path = os.path.join(self.exp_path, self.file_name) with open(file_path) as f: lines = f.read().splitlines() if self.skip_first_line: lines = lines[1:] return lines def get_id2literal(self) -> dict: lines = self.get_lines() lines_literal = list(filter(None, [self._get_literal(line) for line in lines])) id2literal = {self._make_id2key(line) : self._make_str2val(line) for line in lines_literal} return id2literal def get_id2entity(self) -> dict: ''' actually means (entity => node)''' lines = self.get_lines() id2entity = {self._make_id2key(line) : self._make_str2val(line) for line in lines} return id2entity def _get_literal(self, line: str) -> str: (node, node_id) = line.split('\t') _check_node = node.split('^^') if len(_check_node) == 2: literal = _check_node[0].replace("\"","") # clean " return literal + '\t' + node_id def _make_id2key(self, line: str) -> int: _id = int(line.split('\t')[1]) _add = len(self.kg_special_token_ids) # len(config.kg_special_token_ids) key = (_id + _add) return key def _make_str2val(self, line: str) -> str: val = line.split('\t')[0].split('^^')[0] return val _no_default = object() @dataclass class EhrKgNode2EmbeddingMapping(EhrKgNode2IdMapping): model_name_or_path: str = _no_default # kg_special_token_ids: dict = field(default_factory={"PAD":0,"MASK":1}) # tokenizer_name: Optional[str] = field( # default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} # ) def __post_init__(self): if self.model_name_or_path is _no_default: raise TypeError("__init__ missing 1 required argument: 'model_name_or_path'") def _load_model_and_tokenizer(self): # load model if self.model_name_or_path: model = AutoModel.from_pretrained(self.model_name_or_path) else: raise ValueError("There is no (pre-trained) model name or path.") # load tokenizer if self.model_name_or_path: tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path) else: raise ValueError("There is no (pre-trained) tokenizer name or path.") return model, tokenizer def get_literal_embeddings_from_model(self): model, tokenizer = self._load_model_and_tokenizer() # load (pre-trained) model and tokenizer id2literal = self.get_id2literal() # get mapping dict def _convert_to_model_input(literal: str, tokenizer) -> List[str]: return tokenizer(text=literal, return_tensors='pt', padding=True, truncation=True) id2literalembedding = {} for k, v in tqdm(id2literal.items()): encoded_input = 
_convert_to_model_input(literal=v, tokenizer=tokenizer)
            # the checkpoints listed above are all BERT-style encoders, so the pooled [CLS]
            # representation is available as `pooler_output` (the old tuple-unpacking
            # `_, output = model(...)` breaks with recent versions of transformers)
            output = model(**encoded_input).pooler_output
            id2literalembedding[k] = output.cpu().detach()
        return id2literalembedding

    def save_literal_embeddings_from_model(self, save_file_dir: str, save_file_name: str = 'id2literalembedding.pt'):
        if not os.path.isdir(save_file_dir):
            os.mkdir(save_file_dir)
        save_file_path = os.path.join(save_file_dir, save_file_name)
        id2literalembedding = self.get_literal_embeddings_from_model()
        torch.save(id2literalembedding, save_file_path)
```

## 0. PATH

```
os.getcwd()

EXP_PATH = os.getcwd()  # file directory
FILE_NAME = 'entity2id.txt'  # mapping file
```

## 1. EhrKgNode2IdMapping

```
ehrkg_node2id_mapping = EhrKgNode2IdMapping(exp_path=EXP_PATH,
                                            file_name=FILE_NAME,
                                            kg_special_token_ids={"PAD":0,"MASK":1},
                                            skip_first_line=True)
```

### get id2entity: dict

```
id2entity = ehrkg_node2id_mapping.get_id2entity()
```

### get id2literal: dict

```
id2literal = ehrkg_node2id_mapping.get_id2literal()
```

## 2. EhrKgNode2EmbeddingMapping

```
model_name_or_path = GoogleBERT_MODELCARD[2]
print(model_name_or_path)

ehrkg_node2embedding_mapping = EhrKgNode2EmbeddingMapping(exp_path=EXP_PATH,
                                                          file_name=FILE_NAME,
                                                          kg_special_token_ids={"PAD":0,"MASK":1},
                                                          skip_first_line=True,
                                                          model_name_or_path=model_name_or_path)
```

### get id2literalembeddings: dict

```
id2literalembeddings = ehrkg_node2embedding_mapping.get_literal_embeddings_from_model()
```

### save id2literalembeddings

```
SAVE_FILE_DIR = os.getcwd()

ehrkg_node2embedding_mapping.save_literal_embeddings_from_model(save_file_dir=SAVE_FILE_DIR)
```
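As a quick sanity check (a minimal sketch, assuming the default `save_file_name` used above), the saved mapping can be loaded back with `torch.load` and inspected:

```
saved_path = os.path.join(SAVE_FILE_DIR, 'id2literalembedding.pt')  # default save_file_name
id2literalembedding = torch.load(saved_path)

print(f"Literal nodes with embeddings: {len(id2literalembedding)}")
sample_id, sample_embedding = next(iter(id2literalembedding.items()))
print(sample_id, sample_embedding.shape)  # expected: torch.Size([1, hidden_size])
```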
# Lesson 9 Practice: Supervised Machine Learning Use this notebook to follow along with the lesson in the corresponding lesson notebook: [L09-Supervised_Machine_Learning-Lesson.ipynb](./L09-Supervised_Machine_Learning-Lesson.ipynb). ## Instructions Follow along with the teaching material in the lesson. Throughout the tutorial sections labeled as "Tasks" are interspersed and indicated with the icon: ![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/16/Apps-gnome-info-icon.png). You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. For each task, use the cell below it to write and test your code. You may add additional cells for any task as needed or desired. ## Task 1a: Setup Import the following package sets: + packages for data management + pacakges for visualization + packages for machine learning Remember to activate the `%matplotlib inline` magic. ``` %matplotlib inline # Data Management import numpy as np import pandas as pd # Visualization import seaborn as sns import matplotlib.pyplot as plt # Machine learning from sklearn import model_selection from sklearn import preprocessing from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis ``` ## Task 2a: Data Exploration After reviewing the data in sections 2.1, 2.2, 2.3 and 2.4 do you see any problems with this iris dataset? If so, please describe them in the practice notebook. If not, simply indicate that there are no issues. ## Task 2b: Make Assumptions After reviewing the data in sections 2.1, 2.2, 2.3 and 2.4 are there any columns that would make poor predictors of species? **Hint**: columns that are poor predictors are: + those with too many missing values + those with no difference in variation when grouped by the outcome class + variables with high levels of collinearity ## Task 3a: Practice with the random forest classifier Now that you have learned how to perform supervised machine learning using a variety of algorithms, lets practice using a new algorithm we haven't looked at yet: the Random Forest Classifier. The random forest classifier builds multiple decision trees and merges them together. Review the sklearn [online documentation for the RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). For this task: 1. Perform a 10-fold cross-validation strategy to see how well the random forest classifier performs with the iris data 2. Use a boxplot to show the distribution of accuracy 3. Use the `fit` and `predict` functions to see how well it performs with the testing data. 4. Plot the confusion matrix 5. Print the classification report. 
```
iris = sns.load_dataset('iris')

X = iris.loc[:, 'sepal_length':'petal_width'].values
Y = iris['species'].values

X = preprocessing.robust_scale(X)
Xt, Xv, Yt, Yv = model_selection.train_test_split(X, Y, test_size=0.2, random_state=10)

# shuffle=True is required when a random_state is given to KFold
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=10)

results = {
    'LogisticRegression': np.zeros(10),
    'LinearDiscriminantAnalysis': np.zeros(10),
    'KNeighborsClassifier': np.zeros(10),
    'DecisionTreeClassifier': np.zeros(10),
    'GaussianNB': np.zeros(10),
    'SVC': np.zeros(10),
    'RandomForestClassifier': np.zeros(10)
}
results

# Create the RandomForestClassifier object prepared for a multinomial outcome validation set.
alg = RandomForestClassifier()

# Execute the cross-validation strategy
results['RandomForestClassifier'] = model_selection.cross_val_score(alg, Xt, Yt, cv=kfold,
                                                                    scoring="accuracy", error_score=np.nan)

# Take a look at the scores for each of the 10-fold runs.
results['RandomForestClassifier']

pd.DataFrame(results).plot(kind="box", rot=90);

# Create a new RandomForestClassifier object with defaults.
alg = RandomForestClassifier()

# Create a new model using all of the training data.
alg.fit(Xt, Yt)

# Using the testing data, predict the iris species.
predictions = alg.predict(Xv)

# Let's see the predictions
predictions

accuracy_score(Yv, predictions)

labels = ['versicolor', 'virginica', 'setosa']
cm = confusion_matrix(Yv, predictions, labels=labels)
print(cm)
```
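Tasks 4 and 5 ask for a plotted confusion matrix and a printed classification report; the cell above only prints the raw matrix. A minimal sketch of one way to finish those two steps, reusing the `cm`, `labels`, `Yv` and `predictions` variables from the previous cell and the libraries already imported:

```
# Plot the confusion matrix as a heatmap (uses the cm and labels computed above).
fig, ax = plt.subplots(figsize=(5, 4))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=labels, yticklabels=labels, ax=ax)
ax.set_xlabel('Predicted species')
ax.set_ylabel('True species')
ax.set_title('Random forest confusion matrix')
plt.show()

# Print the per-class precision, recall and F1 scores.
print(classification_report(Yv, predictions, labels=labels))
```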
``` import numpy as np import time import tensorflow as tf from tensorflow.keras.datasets import mnist from tensorflow.keras import models, layers from tensorflow.keras.models import Sequential from tensorflow.keras import optimizers from tensorflow.keras.layers import Dense ``` Much as any computer program can be ultimately reduced to a small set of binary operations on binary inputs (AND, OR, NOR, and so on), all transformations learned by deep neural networks can be reduced to a handful of tensor operations applied to tensors of numeric data. For instance, it’s possible to add tensors, multiply tensors, and so on. A Keras layer instance looks like this ``` Dense(512, activation='relu') ``` This layer can be interpreted as a function, which takes as input a matrix and returns another matrix — a new representation for the input tensor. Specifically, the function is as follows (where W is a matrix and b is a vector, both attributes of the layer). We have three tensor operations here: a dot product (dot) between the input tensor and a tensor named W; an addition (+) between the resulting matrix and a vector b; and, finally, a relu operation. relu(x) is max(x, 0) ``` # output = relu(dot(W, input) + b) ``` ### Element-wise operations The **relu** operation and **addition** are element-wise operations: operations that are applied independently to each entry in the tensors being considered. This means these operations are highly amenable to massively parallel implementations. If you want to write a naive Python implementation of an element-wise operation, you use a for loop, as in this naive implementation of an element-wise **relu** operation: ``` def naive_relu(x): assert len(x.shape) == 2 x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] = max(x[i, j], 0) return x def naive_add(x, y): assert len(x.shape) == 2 assert x.shape == y.shape x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] += y[i, j] return x ``` On the same principle, you can do element-wise multiplication, subtraction, and so on. In practice, when dealing with NumPy arrays, these operations are available as well-optimized built-in NumPy functions, which themselves delegate the heavy lifting to a Basic Linear Algebra Subprograms (BLAS) implementation if you have one installed. BLAS are low-level, highly parallel, efficient tensor-manipulation routines that are typically implemented in Fortran or C. In NumPy, you can do the following element-wise operation, and it will be blazing fast ``` # z = x + y # z = np.maximum(z, 0) ``` Time the difference: ``` x = np.random.random((20, 100)) y = np.random.random((20, 100)) time_start = time.time() for _ in range(1000): z = x + y z = np.maximum(z, 0) duration = time.time() - time_start print(f"Duration: {duration} sec") time_start = time.time() for _ in range(1000): z = naive_add(x, y) z = naive_relu(z) duration = time.time() - time_start print(f"Duration: {duration} sec") ``` ### Broadcasting When possible, and if there’s no ambiguity, the smaller tensor will be broadcasted to match the shape of the larger tensor. Broadcasting consists of two steps: Axes (called broadcast axes) are added to the smaller tensor to match the ndim of the larger tensor. The smaller tensor is repeated alongside these new axes to match the full shape of the larger tensor. Example - Consider X with shape (32, 10) and y with shape (10,). First, we add an empty first axis to y, whose shape becomes (1, 10). 
Then, we repeat y 32 times alongside this new axis, so that we end up with a tensor Y with shape (32, 10), where Y[i, :] == y for i in range(0, 32). At this point, we can proceed to add X and Y, because they have the same shape.

```
def naive_add_matrix_and_vector(x, y):
    assert len(x.shape) == 2
    assert len(y.shape) == 1
    assert x.shape[1] == y.shape[0]
    x = x.copy()
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] += y[j]
    return x

x = np.random.random((64, 3, 32, 10))
y = np.random.random((32, 10))

z = np.maximum(x, y)
```

### Tensor product

The tensor product, or dot product (not to be confused with an element-wise product, the * operator), is one of the most common, most useful tensor operations.

In NumPy, a tensor product is done using the np.dot function (because the mathematical notation for tensor product is usually a dot).

```
x = np.random.random((32,))
y = np.random.random((32,))

z = np.dot(x, y)
z

# naive implementation of the dot product of two vectors
def naive_vector_dot(x, y):
    assert len(x.shape) == 1
    assert len(y.shape) == 1
    assert x.shape[0] == y.shape[0]
    z = 0.
    for i in range(x.shape[0]):
        z += x[i] * y[i]
    return z

zz = naive_vector_dot(x, y)
zz

# naive implementation of the dot product of a matrix and a vector
def naive_matrix_vector_dot(x, y):
    assert len(x.shape) == 2
    assert len(y.shape) == 1
    assert x.shape[1] == y.shape[0]
    z = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            z[i] += x[i, j] * y[j]
    return z
```

As soon as one of the two tensors has an ndim greater than 1, dot is no longer symmetric, which is to say that dot(x, y) isn't the same as dot(y, x).

The most common application may be the dot product between two matrices. You can take the dot product of two matrices x and y (dot(x, y)) if and only if x.shape[1] == y.shape[0], i.e. an (m, n) matrix with an (n, p) matrix. The result is a matrix with shape (x.shape[0], y.shape[1]), where the coefficients are the dot products between the rows of x and the columns of y.

Here's the naive implementation:

```
def naive_matrix_dot(x, y):
    assert len(x.shape) == 2
    assert len(y.shape) == 2
    assert x.shape[1] == y.shape[0]
    z = np.zeros((x.shape[0], y.shape[1]))
    for i in range(x.shape[0]):
        for j in range(y.shape[1]):
            row_x = x[i, :]
            column_y = y[:, j]
            z[i, j] = naive_vector_dot(row_x, column_y)
    return z
```

### Tensor reshaping

Reshaping a tensor means rearranging its rows and columns to match a target shape. Naturally, the reshaped tensor has the same total number of coefficients as the initial tensor. Reshaping is best understood via simple examples:

```
x = np.array([[0., 1.],
              [2., 3.],
              [4., 5.]])
print(x.shape)

x = x.reshape((6, 1))
x

x = x.reshape((2, 3))
x
```

A special case of reshaping that's commonly encountered is transposition. Transposing a matrix means exchanging its rows and its columns, so that x[i, :] becomes x[:, i]:

```
x = np.zeros((300, 20))
print(x.shape)
x = np.transpose(x)
print(x.shape)
```

### Geometric interpretation of tensor operations

Because the contents of the tensors manipulated by tensor operations can be interpreted as coordinates of points in some geometric space, all tensor operations have a geometric interpretation. For instance, let's consider addition.
We’ll start with the following vector: ### The engine of neural networks: gradient-based optimization Derivative of a tensor operation: the gradient Stochastic gradient descent Chaining derivatives: the Backpropagation algorithm The chain rule The Gradient Tape in TensorFlow - The API through which you can leverage TensorFlow’s powerful automatic differentiation capabilities is the GradientTape. ``` x = tf.Variable(0.) with tf.GradientTape() as tape: y = 2 * x + 3 grad_of_y_wrt_x = tape.gradient(y, x) grad_of_y_wrt_x W = tf.Variable(tf.random.uniform((2, 2))) b = tf.Variable(tf.zeros((2,))) x = tf.random.uniform((2, 2)) with tf.GradientTape() as tape: y = tf.matmul(W, x) + b grad_of_y_wrt_W_and_b = tape.gradient(y, [W, b]) grad_of_y_wrt_W_and_b (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype('float32') / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype('float32') / 255 model = models.Sequential([ layers.Dense(512, activation='relu'), layers.Dense(10, activation='softmax') ]) model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics="accuracy") model.fit(train_images, train_labels, epochs=5, batch_size=128) ``` Implementing from scratch in TensorFlow Let’s implement a simple Python class NaiveDense that creates two TensorFlow variables W and b, and exposes a call method that applies the above transformation. ``` class NaiveDense: def __init__(self, input_size, output_size, activation): self.activation = activation w_shape = (input_size, output_size) # create a matrix W of shape "(input_size, output_size)", initialized with random values w_initial_value = tf.random.uniform(w_shape, minval=0, maxval=1e-1) self.W = tf.Variable(w_initial_value) b_shape = (output_size,) # create a vector b os shape (output_size, ), initialized with zeros b_initial_value = tf.zeros(b_shape) self.b = tf.Variable(b_initial_value) def __call__(self, inputs): # apply the forward pass return self.activation(tf.matmul(inputs, self.W) + self.b) @property def weights(self): # convinience method for rettrieving the layer weights return [self.W, self.b] ``` A simple Sequential class - create a NaiveSequential class to chain these layers. It wraps a list of layers, and exposes a call methods that simply call the underlying layers on the inputs, in order. It also features a weights property to easily keep track of the layers' parameters. ``` class NaiveSequential: def __init__(self, layers): self.layers = layers def __call__(self, inputs): x = inputs for layer in self.layers: x = layer(x) return x @property def weights(self): weights = [] for layer in self.layers: weights += layer.weights return weights ``` Using this NaiveDense class and this NaiveSequential class, we can create a mock Keras model: ``` model = NaiveSequential([ NaiveDense(input_size=28 * 28, output_size=512, activation=tf.nn.relu), NaiveDense(input_size=512, output_size=10, activation=tf.nn.softmax) ]) assert len(model.weights) == 4 ``` A batch generator Next, we need a way to iterate over the MNIST data in mini-batches. 
This is easy:

```
class BatchGenerator:
    def __init__(self, images, labels, batch_size=128):
        self.index = 0
        self.images = images
        self.labels = labels
        self.batch_size = batch_size

    def next(self):
        images = self.images[self.index : self.index + self.batch_size]
        labels = self.labels[self.index : self.index + self.batch_size]
        self.index += self.batch_size
        return images, labels
```

Running one training step

The most difficult part of the process is the "training step": updating the weights of the model after running it on one batch of data. We need to:

1. Compute the predictions of the model for the images in the batch
2. Compute the loss value for these predictions given the actual labels
3. Compute the gradient of the loss with regard to the model's weights
4. Move the weights by a small amount in the direction opposite to the gradient

To compute the gradient, we will use the TensorFlow GradientTape object

```
learning_rate = 1e-3

def update_weights(gradients, weights):
    for g, w in zip(gradients, weights):
        # move each weight against its gradient; assign_sub is the equivalent of -= for TensorFlow variables
        w.assign_sub(g * learning_rate)

def one_training_step(model, images_batch, labels_batch):
    # run the "forward pass" (compute the model's predictions under the GradientTape scope)
    with tf.GradientTape() as tape:
        predictions = model(images_batch)
        per_sample_losses = tf.keras.losses.sparse_categorical_crossentropy(
            labels_batch, predictions)
        average_loss = tf.reduce_mean(per_sample_losses)
    # compute the gradient of the loss with regard to the weights; `gradients` is a list
    # where each entry corresponds to a weight from the model.weights list
    gradients = tape.gradient(average_loss, model.weights)
    # update the weights using the gradients
    update_weights(gradients, model.weights)
    return average_loss
```

In practice, you will almost never implement a weight update step like this by hand. Instead, you would use an Optimizer instance from Keras.
Like this: ``` optimizer = optimizers.SGD(learning_rate=1e-3) def update_weights(gradients, weights): optimizer.apply_gradients(zip(gradients, weights)) ``` The full training loop An epoch of training simply consists of the repetition of the training step for each batch in the training data, and the full training loop is simply the repetition of one epoch: ``` def fit(model, images, labels, epochs, batch_size=128): for epoch_counter in range(epochs): print('Epoch %d' % epoch_counter) batch_generator = BatchGenerator(images, labels) for batch_counter in range(len(images) // batch_size): images_batch, labels_batch = batch_generator.next() loss = one_training_step(model, images_batch, labels_batch) if batch_counter % 100 == 0: print('loss at batch %d: %.2f' % (batch_counter, loss)) from tensorflow.keras.datasets import mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype('float32') / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype('float32') / 255 fit(model, train_images, train_labels, epochs=10, batch_size=128) ``` Evaluating the model We can evaluate the model by taking the argmax of its predictions over the test images, and comparing it to the expected labels: ``` predictions = model(test_images) predictions = predictions.numpy() # calling .numpy() to a TensorFlow tensor converts it to a NumPy tensor predicted_labels = np.argmax(predictions, axis=1) matches = predicted_labels == test_labels # print('accuracy: %.2f' % matches.average()) print(f"Accuracy: {np.average(matches)}") ```
```
import numpy as np
import pandas as pd
from tqdm import tqdm

from utils import clean_target
from categorical_ordinal import get_categorical_ordinal_columns
from categorical_nominal import get_categorical_nominal_columns
from columns_transformers import ColumnSelector

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder, MinMaxScaler
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error
```

<font color="orange"> <b>Feature groups:</b></font>

- **Categorical Ordinal:**
    - TP_ (17-4-3 = 10)
    - Questions: ```["Q001", "Q002", "Q003", "Q004", "Q005","Q006", "Q007", "Q008", "Q009", "Q010", "Q011", "Q012", "Q013", "Q014", "Q015", "Q016", "Q017", "Q019", "Q022", "Q024"]``` (20)
- **Categorical Nominal:**
    - IN_ : All Binary (52)
    - TP_ : ```["TP_SEXO", "TP_ESTADO_CIVIL", "TP_COR_RACA", "TP_NACIONALIDADE"]``` (4)
    - SG_ : (4-1 = 3)
    - Questions: ```["Q018", "Q020", "Q021", "Q023", "Q025"]``` (5)
- **Numerical:**
    - NU_IDADE (1)
- Dropped:
    - Identifier: ```[NU_INSCRICAO]``` (1)
    - More than 40% missing: ```['CO_ESCOLA', 'NO_MUNICIPIO_ESC', 'SG_UF_ESC', 'TP_DEPENDENCIA_ADM_ESC', 'TP_LOCALIZACAO_ESC', 'TP_SIT_FUNC_ESC']``` (4)
    - NO_M (too many categories): ```['NO_MUNICIPIO_RESIDENCIA', 'NO_MUNICIPIO_NASCIMENTO', 'NO_MUNICIPIO_PROVA']``` (3)
    - NU_NOTA: target variables (5)

```
train_df = pd.read_parquet("data/train.parquet")
clean_target(train_df)
#test= pd.read_parquet("data/test.parquet")

categorical_ordinal_columns = get_categorical_ordinal_columns(train_df)
qtd_categorical_ordinal_columns = len(categorical_ordinal_columns)
print(f"Number of categorical ordinal features: {qtd_categorical_ordinal_columns}")

categorical_nominal_columns = get_categorical_nominal_columns(train_df)
qtd_categorical_nominal_columns = len(categorical_nominal_columns)
print(f"Number of categorical nominal features: {qtd_categorical_nominal_columns}")

drop_columns = ["NU_INSCRICAO", "CO_ESCOLA", "NO_MUNICIPIO_ESC", "SG_UF_ESC", "TP_DEPENDENCIA_ADM_ESC", "TP_LOCALIZACAO_ESC", "TP_SIT_FUNC_ESC", "NO_MUNICIPIO_RESIDENCIA", "NO_MUNICIPIO_NASCIMENTO", "NO_MUNICIPIO_PROVA"]
qtd_drop_columns = len(drop_columns)
print(f"Number of columns dropped: {qtd_drop_columns}")

target_columns = train_df.filter(regex="NU_NOTA").columns.tolist()
qtd_target_columns = len(target_columns)
print(f"Number of targets: {qtd_target_columns}")

numerical_columns = ["NU_IDADE"]
qtd_numerical_columns = len(numerical_columns)
print(f"Number of numerical features: {qtd_numerical_columns}")

target_columns = train_df.filter(regex="NU_NOTA").columns.tolist()
qtd_target_columns = len(target_columns)
print(f"Number of targets: {qtd_target_columns}")

all_columns = drop_columns + categorical_nominal_columns + categorical_ordinal_columns + numerical_columns + target_columns
qtd_total = qtd_drop_columns + qtd_categorical_nominal_columns + qtd_categorical_ordinal_columns + qtd_numerical_columns + qtd_target_columns
print(f"Total columns: {qtd_total}")
```

## **Create Pipeline**

```
"""
Categorical variables with ordinal values that have missing data:
- TP_ENSINO: NaN is assumed to represent the missing category described in the metadata.
- TP_STATUS_REDACAO: mapped to another class (the student missed the exam).
"""
categorical_ordinal_pipe = Pipeline([
    ('selector', ColumnSelector(categorical_ordinal_columns)),
    ('imputer', SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=0)),
    ('encoder', OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1))
])

"""
Categorical variables with nominal values that have missing data:
- SG_UF_NASCIMENTO: mapped to a new category.
"""
categorical_nominal_pipe = Pipeline([
    ('selector', ColumnSelector(categorical_nominal_columns)),
    ('imputer', SimpleImputer(missing_values=np.nan, strategy='constant', fill_value="missing")),
    ('encoder', OneHotEncoder(drop="first", handle_unknown='ignore'))
])

numerical_pipe = Pipeline([
    ('selector', ColumnSelector(numerical_columns)),
    ('imputer', SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=0)),
    ('scaler', MinMaxScaler())
])

preprocessor = FeatureUnion([
    ('categorical_ordinal', categorical_ordinal_pipe),
    ('categorical_nominal', categorical_nominal_pipe),
    ('numerical', numerical_pipe)
])

kwargs_regressor = {"n_estimators": 50, "n_jobs": -1, "verbose": 2}

pipe = Pipeline([
    ('preprocessor', preprocessor),
    ('feature_selection', VarianceThreshold(threshold=0.05)),
    ('model', RandomForestRegressor(**kwargs_regressor))
])

n_samples = 1000

# sample once so that X and y refer to the same rows
sample_df = train_df.sample(n_samples)
X = sample_df.drop(columns=target_columns + drop_columns)
y = sample_df.filter(regex="NU_NOTA")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

def split_target(y):
    y_nu_nota_cn = y["NU_NOTA_CN"]
    y_nu_nota_ch = y["NU_NOTA_CH"]
    y_nu_nota_lc = y["NU_NOTA_LC"]
    y_nu_nota_mt = y["NU_NOTA_MT"]
    y_nu_nota_redacao = y["NU_NOTA_REDACAO"]
    return (y_nu_nota_cn, y_nu_nota_ch, y_nu_nota_lc, y_nu_nota_mt, y_nu_nota_redacao)

y_train_cn, y_train_ch, y_train_lc, y_train_mt, y_train_redacao = split_target(y_train)
y_test_cn, y_test_ch, y_test_lc, y_test_mt, y_test_redacao = split_target(y_test)

y_structure = {"NU_NOTA_CN": [y_train_cn, y_test_cn],
               "NU_NOTA_CH": [y_train_ch, y_test_ch],
               "NU_NOTA_LC": [y_train_lc, y_test_lc],
               "NU_NOTA_MT": [y_train_mt, y_test_mt],
               "NU_NOTA_REDACAO": [y_train_redacao, y_test_redacao]}

from joblib import dump

for key, ys in tqdm(y_structure.items()):
    pipe.fit(X_train, ys[0])
    dump(pipe, f"models/model_{key}.joblib")

    y_train_hat = pipe.predict(X_train)
    ys.append(y_train_hat)

    y_test_hat = pipe.predict(X_test)
    ys.append(y_test_hat)

for key, ys in tqdm(y_structure.items()):
    train_error = mean_squared_error(ys[0], ys[2], squared=False)
    test_error = mean_squared_error(ys[1], ys[3], squared=False)
    print(key)
    print(f"Train: {train_error}")
    print(f"Test: {test_error}\n")
```
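To reuse one of the fitted pipelines later, here is a minimal sketch (assuming the `models/model_<TARGET>.joblib` files written by the loop above) that loads the math-score model back and scores it on the held-out split:

```
from joblib import load

# load the pipeline fitted for NU_NOTA_MT and evaluate it on the test split from above
pipe_mt = load("models/model_NU_NOTA_MT.joblib")
y_test_mt_hat = pipe_mt.predict(X_test)

rmse = mean_squared_error(y_test_mt, y_test_mt_hat, squared=False)
print(f"NU_NOTA_MT test RMSE: {rmse:.2f}")
```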
``` # importing all the required libraries import pandas as pd from google.colab import files import io import spacy from sklearn.feature_extraction.text import CountVectorizer from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder import keras from keras.utils import to_categorical from keras import Sequential from keras.layers import Dense from keras.layers import Input from keras.layers import Softmax from sklearn.metrics import classification_report, confusion_matrix import matplotlib.pyplot as plt import sklearn.decomposition import keras.callbacks import pickle import re import nltk from nltk.stem import PorterStemmer uploaded = files.upload() ``` Defining custom early stopper classes for early stopping of model.fit keras method ``` class CustomStopper(keras.callbacks.EarlyStopping): def __init__(self, monitor='val_loss', min_delta=0, patience=10, verbose=0, mode='auto', start_epoch = 30): # add argument for starting epoch super(CustomStopper, self).__init__() self.start_epoch = start_epoch def on_epoch_end(self, epoch, logs=None): if epoch > self.start_epoch: super().on_epoch_end(epoch, logs) ``` Defining variables to be passes in the various methods ``` filename = 'Data_v1.xlsx' #file name of the uploaded dataset file modelName = 'Model1' #name of the model, this will be used to save model evaluation and history numEpochs = 150 # maximum number of epochs if early stopping doesnt work batchsize = 50 # batchsize which will be used in each step by optimizer defined in the model optimizer = 'adadelta' #optimizer to be used in model.fit keras method ``` Method to read uploaded file. Returns back the text input samples and target labels for each. Transforms X to a vector which holds the number of occurences of each word for every sample ``` def mypreprocessor(text): porter_stemmer = PorterStemmer() words=re.split("\\s+",text) stemmed_words=[porter_stemmer.stem(word=word) for word in words] return ' '.join(stemmed_words) def Preprocessing(): X = pd.read_excel(list(uploaded.items())[0][0],usecols="H") #pass usecols as the column containing all the training samples y = pd.read_excel(list(uploaded.items())[0][0],usecols="F") #pass usecols as the column containing all the target labels X = [str(i) for i in X.extracted_text.to_list()] #the property used with X. 
should match column name in excel # for i in range(len(X)): # X[i] = re.sub(r'(\s\d+\s)|{\d+}|\(\d+\)','',X[i]) # X[i] = re.sub(r'gain-of-function|gain of function|toxic gain of function|activating mutation|constitutively active|hypermorph|ectopic expression|neomorph|gain of interaction|function protein|fusion transcript','GOF',X[i]) # X[i] = re.sub(r'haploinsufficiency|haploinsufficient|hypomorph|amorph|null mutation|hemizygous','HI',X[i]) # X[i] = re.sub(r'dominant-negative|dominant negative|antimorph','DN',X[i]) # X[i] = re.sub(r'loss of function|loss-of-function','LOF',X[i]) # X = preprocess_data(X) y = y.mutation_consequence.to_list() # vocabulary = ['gain-of-function','gain of function', # 'toxic gain of function','activating mutation', # 'constitutively active','hypermorph','ectopic expression', # 'neomorph','gain of interaction','function protein','fusion transcript', # 'haploinsufficiency','haploinsufficient','hypomorph','amorph', # 'null mutation','hemizygous','dominant-negative','dominant negative','antimorph', # 'loss of function','loss-of-function'] X=TfidfVectorizer(X,preprocessor=mypreprocessor,max_df=200 ,ngram_range=(1, 2)).fit(X).transform(X) # X=CountVectorizer(X,preprocessor=mypreprocessor,max_df=200 ,ngram_range=(1, 2)).fit(X).transform(X) return X, y ``` Method to split the dataset into training and testing. Changes y to one-hot encoded vector, e.g if target class is 3, then returns [0,0,0,1,0] for 5 target classes ``` def TrainTestSplit(X, y): X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state=100, stratify = y) #split the dataset, test_Size variable defines the size of the test dataset, stratify column makes sure even distribution of target labels X_train = X_train.toarray() #changing to numpy array to work with keras sequential model X_test = X_test.toarray() #changing to numpy array to work with keras sequential model le = LabelEncoder() y_train = to_categorical(le.fit(y_train).transform(y_train)) y_test = to_categorical(le.fit(y_test).transform(y_test)) return X_train, X_test, y_train, y_test, le.classes_ # returns training and test datasets, as well as class names ``` Defining the model to be used for training the datasets. ``` def ModelBuild(X, y): inputs = keras.layers.Input(shape=(len(X_train[0]),)) dense1 = keras.layers.Dense(200, activation="relu")(inputs) #fully connected with input vectors # dropout = keras.layers.Dropout(0.2)(dense1) #regularization layer if required dense2 = keras.layers.Dense(50, activation="relu")(dense1) #fully connected with Layer 1 # dropout2 = keras.layers.Dropout(0.1)(dense2) #regularization layer if required # dense3 = keras.layers.Dense(50, activation="relu")(dense2) outputs = keras.layers.Dense(len(y_train[0]), activation="sigmoid")(dense2) #output layer model = keras.Model(inputs=inputs, outputs=outputs) return model ``` Method to show summary of the model as well as the shape in diagram form ``` def PlotModel(model, filename): model.summary() keras.utils.plot_model(model, filename, show_shapes=True) ``` Method to compile the defined model as well as run the training. 
Returns a history variable which can be used to plot training and validation loss as well as accuracy at every epoch ``` def PlotTraining(model, X_test, y_test): model.compile(loss='categorical_crossentropy',optimizer=optimizer,metrics=[keras.metrics.CategoricalAccuracy(),'accuracy']) # EarlyStoppage = CustomStopper() es = keras.callbacks.EarlyStopping(monitor='val_accuracy', baseline=0.7, patience=30) history = model.fit(X_train, y_train,validation_split=0.2,epochs=numEpochs, batch_size=batchsize) #,callbacks = [es] ) - use this for early stopping model.evaluate(X_test, y_test) return history ``` Plots the validation and training accuracy at every epoch using a history object obtained by model.fit in the previous step ``` def plot(history): # list all data in history print(history.keys()) # summarize history for accuracy plt.plot(history['accuracy']) plt.plot(history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history['loss']) plt.plot(history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() ``` Calling the methods to run all the required steps in the pipeline ``` X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) # svd = sklearn.decomposition.TruncatedSVD(n_components=60, n_iter=5, random_state=42) # X_train = svd.fit(X_train).transform(X_train) # svd = sklearn.decomposition.TruncatedSVD(n_components=60, n_iter=5, random_state=42) # X_test = svd.fit(X_test).transform(X_test) model = ModelBuild(X_train, y_train) PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi) ``` Running above solution with reduced text and preprocessing ``` uploaded = files.upload() modelName = 'Model2' #name of the model, this will be used to save model evaluation and history numEpochs = 150 # maximum number of epochs if early stopping doesnt work batchsize = 50 # batchsize which will be used in each step by optimizer defined in the model optimizer = 'adam' #optimizer to be used in model.fit keras method ``` Calling the above pipeline again with new parameters ``` X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) model = ModelBuild(X_train, y_train) 
PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi) ``` Running the model with TF-IDF vectorizer instead of CountVectorizer (adam optimizer) ``` uploaded = files.upload() modelName = 'Model3' #name of the model, this will be used to save model evaluation and history numEpochs = 150 # maximum number of epochs if early stopping doesnt work batchsize = 20 # batchsize which will be used in each step by optimizer defined in the model optimizer = 'adam' #optimizer to be used in model.fit keras method from sklearn.feature_extraction.text import TfidfVectorizer ``` For the next step, go to Preprocessing method and change CountVectorizer to TfIdfVectorizer ``` X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) model = ModelBuild(X_train, y_train) PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi) print(pickle.load(open('Model3_ClassificationReport','rb'))) ``` Running model with adadelta optimizer and tfidf vectorizer ``` modelName = 'Model5' #name of the model, this will be used to save model evaluation and history numEpochs = 200 # maximum number of epochs if early stopping doesnt work batchsize = 50 # batchsize which will be used in each step by optimizer 
defined in the model optimizer = 'adam' #optimizer to be used in model.fit keras method X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) model = ModelBuild(X_train, y_train) PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) X_train.shape with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi) ``` Code to download the files as zip folders (to load the models and datasets for prediction/evaluation use pickle.load) ``` # !zip -r '/Model1.zip' 'Model1Folder' files.download('/Model1.zip') !zip -r '/Model2.zip' 'Model2Folder' files.download('/Model2.zip') !zip -r '/Model3.zip' 'Model3Folder' files.download('/Model3.zip') !zip -r '/Model4.zip' 'Model4Folder' files.download('/Model4.zip') !zip -r '/Model5.zip' 'Model5Folder' files.download('/Model5.zip') ```
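As noted above, the saved artifacts can be reloaded later with `pickle.load` for prediction or evaluation. The cell below is a minimal sketch of that reload step, assuming the file names produced by the cells above (e.g. for `Model1`); it is an illustration, not part of the original pipeline.

```
import pickle
from keras.models import load_model
from sklearn.metrics import classification_report

modelName = 'Model1'  # any of the models saved above

model = load_model(modelName + '.h5')               # restore the trained network
with open('/content/%s_test' % modelName, 'rb') as f:
    X_test = pickle.load(f)
with open('/content/%s_Labeltest' % modelName, 'rb') as f:
    y_test = pickle.load(f)

y_pred = model.predict(X_test).argmax(axis=-1)      # predicted class indices
print(classification_report(y_test.argmax(axis=-1), y_pred))
```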
github_jupyter
# Time handling

Last year in this course, people asked: "how do you handle times?" That's a good question...

## Exercise

What is the ambiguity in these cases?

1. Meet me for lunch at 12:00
2. The meeting is at 14:00
3. How many hours are between 01:00 and 06:00 (in the morning)
4. When does the new year start?

Local times are a *political* construction and subject to change. They differ depending on where you are. Human times are messy. If you try to do things with human times, you can expect to be sad. But still, *actual* time advances at the same rate all over the world (excluding relativity). There *is* a way to handle this cleanly.

## What are timezones?

A timezone specifies a certain *local time* at a certain location on earth. If you specify a timestamp such as 14:00 on 1 October 2019, it is **naive** if it does not include a timezone. Depending on where you are standing, you can experience this timestamp at different times. If it includes a timezone, it is **aware**. An aware timestamp exactly specifies a certain time across the whole world (but depending on where you are standing, your localtime may be different).

**UTC** (coordinated universal time) is a certain timezone - the basis of all other timezones.

Unix computers have a designated **localtime** timezone, which is used by default to display things. This is in the `TZ` environment variable.

The **tz database** (or zoneinfo) is an open source, comprehensive, updated catalog of all timezones across the whole planet since 1970. It contains things like `EET` and `EEST`, but also geographic locations like `Europe/Helsinki`, because the abbreviations can change. [Wikipedia](https://en.wikipedia.org/wiki/Tz_database) and [list of all zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).

## unixtime

Unixtime is zero at 00:00 UTC on 1 January 1970, and increases at a rate of one per second. This defines a single unique time everywhere in the world.

You can find unixtime with the `date +%s` command:

```
!date +%s
```

You can convert from unixtime to real (local) time using the date command again:

```
!date -d @1234567890
```

There are functions which take (unixtime + timezone) and produce the timestamp (year, month, day, hour, minute, second), and vice versa.

Unix time has two main benefits:

* Unambiguous: it defines a single time.
* You can do math on the times (compute differences, add time, etc.) and it just works.

## Recommendations

When you have times, always store them in unixtime in numerical format. When you need a human time (e.g. "what hour was this time"), use a function to compute that property *in a given timezone*.

If you store the other time components, for example hour and minute, this is just for convenience, and you should *not* assume that you can go back to the unixtime to do math.

[Richard's python time reference](http://rkd.zgib.net/wiki/DebianNotes/PythonTime) is the only comprehensive catalog of Python time handling that he knows of.

## Exercises

To do these, you have to search for the functions yourself.

### 1. Convert this unixtime to localtime in Helsinki

```
ts = 1570078806
```

### 2. Convert the same time to UTC

### Convert that unixtime to a pandas `Timestamp`

You'll need to search the docs some...

## Localization and conversion

If you are given a time like "14:00 1 October 2019" and you want to convert it to a different timezone, can you? No, because there is no timezone attached yet. You have to **localize** it by applying a timezone, then you can convert.
```
import pytz
tz = pytz.timezone("Asia/Tokyo")
tz

# Make a timestamp from a real time.  We don't know when this is...
import pandas as pd
import datetime
dt = pd.Timestamp(datetime.datetime(2019, 10, 1, 14, 0))
dt

dt.timestamp()

# Localize it - interpret it as a certain timezone
localized = dt.tz_localize(tz)
localized

dt.timestamp()

converted = localized.tz_convert(pytz.timezone('Europe/Helsinki'))
converted
```

And we notice it does the conversion... if we don't localize first, then this doesn't work.

## Exercises

### 1. Convert this timestamp to a pandas timestamp in Europe/Helsinki and Asia/Tokyo

```
ts = 1570078806
```

### Print the day of the year and hour of this unixtime

## From the command line

```
!date
!date -d "15:00"
!date -d "15:00 2019-10-31"
!date -d "15:00 2019-10-31" +%s
!date -d @1572526800
!TZ=America/New_York date -d @1572526800
!date -d '2019-10-01 14:00 CEST'
```

## See also

* Julian day - days since 1 January year 4713 BCE, or Gregorian ordinal - days since 1 January year 1. Useful if you need to do date, instead of time, arithmetic.
* [Richard's python-time reference](http://rkd.zgib.net/wiki/DebianNotes/PythonTime)
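To tie the unixtime discussion together, here is a small Python sketch (added as an illustration, reusing the `ts` value from the exercises) of the unixtime to human-time round trip that the shell `date` examples above perform:

```
import datetime
import pytz

ts = 1570078806
hel = pytz.timezone('Europe/Helsinki')

# unixtime + timezone -> aware human timestamp
dt_hel = datetime.datetime.fromtimestamp(ts, tz=hel)
dt_utc = datetime.datetime.fromtimestamp(ts, tz=pytz.utc)
print(dt_hel, '|', dt_utc)
print('day of year:', dt_hel.timetuple().tm_yday, ' hour:', dt_hel.hour)

# aware timestamp -> unixtime again
print(dt_hel.timestamp())   # the number we started from
```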
github_jupyter
### Hyper Parameter Tuning

One of the primary objectives and challenges in the machine learning process is improving the performance score based on data patterns and observed evidence. To achieve this objective, almost all machine learning algorithms have a specific set of parameters that need to be estimated from the dataset in order to maximize the performance score. The best way to choose good hyperparameters is through trial and error over combinations of parameter values. Scikit-learn provides GridSearch and RandomSearch functions to facilitate an automatic and reproducible approach to hyperparameter tuning.

```
from IPython.display import Image
Image(filename='../Chapter 4 Figures/Hyper_Parameter_Tuning.png', width=1000)
```

### GridSearch

```
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn import metrics
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

seed = 2017

# read the data in
df = pd.read_csv("Data/Diabetes.csv")

X = df.iloc[:, :8].values     # independent variables
y = df['class'].values        # dependent variable

# normalize
X = StandardScaler().fit_transform(X)

# evaluate the model by splitting into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=seed)

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
num_trees = 100

clf_rf = RandomForestClassifier(random_state=seed).fit(X_train, y_train)

# note: 'auto' (an alias of 'sqrt') is no longer accepted by newer scikit-learn releases
rf_params = {
    'n_estimators': [100, 250, 500, 750, 1000],
    'criterion': ['gini', 'entropy'],
    'max_features': [None, 'sqrt', 'log2'],
    'max_depth': [1, 3, 5, 7, 9]
}

# setting verbose=10 prints progress messages as batches of tasks complete
grid = GridSearchCV(clf_rf, rf_params, scoring='roc_auc', cv=kfold, verbose=10, n_jobs=-1)
grid.fit(X_train, y_train)

print('Best Parameters: ', grid.best_params_)

results = cross_val_score(grid.best_estimator_, X_train, y_train, cv=kfold)
print("Accuracy - Train CV: ", results.mean())
print("Accuracy - Train : ", metrics.accuracy_score(grid.best_estimator_.predict(X_train), y_train))
print("Accuracy - Test : ", metrics.accuracy_score(grid.best_estimator_.predict(X_test), y_test))
```

### RandomSearch

```
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint

# specify parameters and distributions to sample from
param_dist = {'n_estimators': sp_randint(100, 1000),
              'criterion': ['gini', 'entropy'],
              'max_features': [None, 'sqrt', 'log2'],
              'max_depth': [None, 1, 3, 5, 7, 9]
             }

# run randomized search
n_iter_search = 20
random_search = RandomizedSearchCV(clf_rf, param_distributions=param_dist, cv=kfold,
                                   n_iter=n_iter_search, verbose=10, n_jobs=-1, random_state=seed)
random_search.fit(X_train, y_train)
# report(random_search.cv_results_)   # a sketch of this helper is given after this section

print('Best Parameters: ', random_search.best_params_)

results = cross_val_score(random_search.best_estimator_, X_train, y_train, cv=kfold)
print("Accuracy - Train CV: ", results.mean())
print("Accuracy - Train : ", metrics.accuracy_score(random_search.best_estimator_.predict(X_train), y_train))
print("Accuracy - Test : ", metrics.accuracy_score(random_search.best_estimator_.predict(X_test), y_test))

from bayes_opt import BayesianOptimization
from sklearn.model_selection import cross_val_score

def rfccv(n_estimators, min_samples_split, max_features):
    return cross_val_score(RandomForestClassifier(n_estimators=int(n_estimators),
                                                  min_samples_split=int(min_samples_split),
                                                  max_features=min(max_features, 0.999),
                                                  random_state=2017),
                           X_train, y_train, scoring='f1', cv=kfold).mean()

rfcBO = BayesianOptimization(f=rfccv,
                             pbounds={'n_estimators': (100, 1000),
                                      'min_samples_split': (2, 25),
                                      'max_features': (0.1, 0.999)},
                             random_state=seed)
rfcBO.set_gp_params(alpha=1e-5)   # noise level of the underlying Gaussian process

rfcBO.maximize(n_iter=10)
print('RFC: %f' % rfcBO.max['target'])
```
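The commented-out `report(random_search.cv_results_)` call above refers to a helper that is not defined in this notebook. A minimal sketch of such a helper (an assumption about its intent, following the common pattern of ranking the entries in `cv_results_`) could look like this:

```
import numpy as np

def report(cv_results, n_top=3):
    """Print the top n_top parameter settings from a fitted *SearchCV's cv_results_."""
    for rank in range(1, n_top + 1):
        for idx in np.flatnonzero(cv_results['rank_test_score'] == rank):
            print('Rank {0}: mean test score {1:.3f} (std {2:.3f})'.format(
                rank, cv_results['mean_test_score'][idx], cv_results['std_test_score'][idx]))
            print('  parameters:', cv_results['params'][idx])
```

With this in place, the commented line in the RandomSearch cell can simply be uncommented.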
github_jupyter
``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \ -O /tmp/sarcasm.json import numpy as np import json import tensorflow as tf from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences vocab_size = 1000 embedding_dim = 16 max_length = 120 trunc_type='post' padding_type='post' oov_tok = "<OOV>" training_size = 20000 with open("/tmp/sarcasm.json", 'r') as f: datastore = json.load(f) sentences = [] labels = [] urls = [] for item in datastore: sentences.append(item['headline']) labels.append(item['is_sarcastic']) training_sentences = sentences[0:training_size] testing_sentences = sentences[training_size:] training_labels = labels[0:training_size] testing_labels = labels[training_size:] tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok) tokenizer.fit_on_texts(training_sentences) word_index = tokenizer.word_index training_sequences = tokenizer.texts_to_sequences(training_sentences) training_padded = pad_sequences(training_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type) testing_sequences = tokenizer.texts_to_sequences(testing_sentences) testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type) training_padded = np.array(training_padded) training_labels = np.array(training_labels) testing_padded = np.array(testing_padded) testing_labels = np.array(testing_labels) model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length), tf.keras.layers.Conv1D(128, 5, activation='relu'), tf.keras.layers.GlobalMaxPooling1D(), tf.keras.layers.Dense(24, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy']) model.summary() num_epochs = 10 history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=1) import matplotlib.pyplot as plt def plot_graphs(history, string): plt.plot(history.history[string]) plt.plot(history.history['val_'+string]) plt.xlabel("Epochs") plt.ylabel(string) plt.legend([string, 'val_'+string]) plt.show() plot_graphs(history, 'accuracy') plot_graphs(history, 'loss') ```
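The cells above train the model and plot its learning curves but never show it scoring unseen text. Below is a brief usage sketch (the two headlines are made-up examples, not taken from the dataset), assuming the `tokenizer` and `model` defined above are still in scope:

```
sample_headlines = [
    "scientists baffled as local man finally reads terms and conditions",
    "city council approves new budget for road maintenance"
]
sample_seq = tokenizer.texts_to_sequences(sample_headlines)
sample_pad = pad_sequences(sample_seq, maxlen=max_length,
                           padding=padding_type, truncating=trunc_type)
print(model.predict(sample_pad))   # values near 1 suggest sarcasm, near 0 suggest none
```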
github_jupyter
``` from time import time import secrets import flickrapi import requests import os import pandas as pd import pickle import logging def get_photos(image_tag): # setup dataframe for data raw_photos = pd.DataFrame(columns=['latitude', 'longitude','farm','server','id','secret']) # initialize api flickr = flickrapi.FlickrAPI(secrets.api_key, secrets.api_secret, format='parsed-json') errors = '' try: # search photos based on settings photos = flickr.photos.search(tags=image_tag, sort='relevance', content_type=1, #photos only extras='description,geo,url_c', has_geo=1, geo_context=2, #outdoors per_page=100, page=1 ) # append photo details: description and getags raw_photos = raw_photos.append(pd.DataFrame(photos['photos']['photo']) [['latitude', 'longitude','farm','server','id','secret']], ignore_index=True) # construct url from pieces raw_photos['url'] = 'https://farm'+ raw_photos.farm.astype(str) + '.staticflickr.com/' + raw_photos.server.astype(str) + '/'+ raw_photos.id.astype(str) + '_' + raw_photos.secret.astype(str) + '.jpg' # need a try/except here for images less than 'per page' print('..downloading photos') download_images(raw_photos, image_tag) # save data print('..saving metadata') with open('data/%s/%s.pkl' %(image_tag, image_tag), 'wb') as f: pickle.dump(raw_photos, f) f.close() del raw_photos except: print('Could not get info for: %s. '%image_tag) errors = image_tag return errors def create_folder(path): if not os.path.isdir(path): os.makedirs(path) def download_images(df, keyword): path = ''.join(['data/',keyword]) create_folder(path) print('...df length: %d' %len(df.index)) print('...going through each row of dataframe') for idx, row in df.iterrows(): try: image_path = ''.join([path,'/',row.id,'.jpg']) response = requests.get(row.url)#, stream=True) with open(image_path, 'wb') as outfile: outfile.write(response.content) outfile.close() except: print('...Error occured at idx: %d'%idx) print('...download completed.') places = pd.read_csv('IndoorOutdoor_places205.csv', names=['key','label']) places.head() # retrieve all outdoor scene categories. We clean up the 'key' column, remove duplicates, and re-index the dataframe. places['key'] = places['key'].str[3:].str.split('/',1,expand=True) places = places[places.label == 2] places = places.drop_duplicates(ignore_index=True) places['key'] = places['key'].str.strip('\'') places['key'] = places['key'].replace(to_replace='_',value=' ',regex=True) places.head(-20) places.count() #should have 132 errors = [] for idx, row in places.iterrows(): # change this idx when it crashes. It will give an error for a few indices. It probably means Flickr does not have # geotagged images for these keywords. We skip over those. Should have a total of 130 keywords at the end. if idx < 0: pass else: start = time() error = get_photos(row.key) end = time() print('%20s in %.2e seconds.' %(row.key, end-start)) # should vary between 3-8 seconds depending on the keyword. if error != '': errors.append(error) # we test loading the pickle file. keyword = 'basilica' with open('data/%s/%s.pkl' %(keyword,keyword), 'rb') as f: test = pickle.load(f) f.close() test.head() # we test loading the image. from PIL import Image image = Image.open('data/%s/%s.jpg'%(keyword,test.id[0])) image.show() ```
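As a follow-up to the spot checks above, here is a small sketch (an addition, not part of the original workflow) that compares the photo ids stored in a keyword's metadata pickle against the .jpg files that actually reached disk, using the folder layout created by `download_images`:

```
import os
import pickle

keyword = 'basilica'   # any keyword folder created above
with open('data/%s/%s.pkl' % (keyword, keyword), 'rb') as f:
    meta = pickle.load(f)

on_disk = {name[:-4] for name in os.listdir('data/%s' % keyword) if name.endswith('.jpg')}
missing = set(meta.id.astype(str)) - on_disk
print('%d of %d photos downloaded, %d missing' % (len(on_disk), len(meta), len(missing)))
```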
github_jupyter
``` import numpy as np import pandas as pd from os import makedirs from os.path import join, exists #from nilearn.input_data import NiftiLabelsMasker from nilearn.connectome import ConnectivityMeasure from nilearn.plotting import plot_anat, plot_roi import bct #from nipype.interfaces.fsl import InvWarp, ApplyWarp import datetime subjects = ['101', '102', '103', '104', '106', '107', '108', '110', '212', '213', '214', '215', '216', '217', '218', '219', '320', '321', '322', '323', '324', '325', '327', '328', '329', '330', '331', '332', '333', '334', '335', '336', '337', '338', '339', '340', '341', '342', '343', '344', '345', '346', '347', '348', '349', '350', '451', '452', '453', '455', '456', '457', '458', '459', '460', '462', '463', '464', '465', '467', '468', '469', '470', '502', '503', '571', '572', '573', '574', '575', '577', '578', '579', '580', '581', '582', '584', '585', '586', '587', '588', '589', '590', '591', '592', '593', '594', '595', '596', '597', '598', '604', '605', '606', '607', '608', '609', '610', '611', '612', '613', '614', '615', '616', '617', '618', '619', '620', '621', '622', '623', '624', '625', '626', '627', '628', '629', '630', '631', '633', '634'] #subjects = ['101', '102'] sink_dir = '/Users/katherine/Dropbox/Projects/physics-retrieval/data/output' shen = '/home/kbott006/physics-retrieval/shen2015_2mm_268_parcellation.nii.gz' craddock = '/home/kbott006/physics-retrieval/craddock2012_tcorr05_2level_270_2mm.nii.gz' masks = {'shen2015': shen, 'craddock2012': craddock} sessions = [0,1] sesh = ['pre', 'post'] tasks = ['rest'] kappa_upper = 0.21 kappa_lower = 0.31 lab_notebook_dir = sink_dir index = pd.MultiIndex.from_product([subjects, sessions], names=['subject', 'session']) lab_notebook = pd.DataFrame(index=index, columns=['start', 'end', 'errors']) correlation_measure = ConnectivityMeasure(kind='correlation') index = pd.MultiIndex.from_product([subjects, sessions, tasks, masks.keys()], names=['subject', 'session', 'task', 'mask']) df = pd.DataFrame(columns=['lEff1', 'clustCoeff1'], index=index, dtype=np.float64) for subject in subjects: for session in sessions: lab_notebook.at[(subject, session),'start'] = str(datetime.datetime.now()) for task in tasks: for mask in masks.keys(): try: #shen_masker = NiftiLabelsMasker(xfmd_masks['shen2015'], background_label=0, standardize=True, detrend=True,t_r=3.) #craddock_masker = NiftiLabelsMasker(xfmd_masks['craddock2012'], background_label=0, standardize=True, detrend=True,t_r=3.) 
#confounds = '/home/data/nbc/physics-learning/anxiety-physics/output/{1}/{0}/{0}_confounds.txt'.format(subject, sesh[session]) #epi_data = join(data_dir, subject, 'session-{0}'.format(session), 'resting-state/resting-state-0/endor1.feat', 'filtered_func_data.nii.gz') #shen_ts = shen_masker.fit_transform(epi_data, confounds) #shen_corrmat = correlation_measure.fit_transform([shen_ts])[0] #np.savetxt(join(sink_dir, sesh[session], subject, '{0}-session-{1}-rest_network_corrmat_shen2015.csv'.format(subject, session)), shen_corrmat, delimiter=",") corrmat = np.genfromtxt(join(sink_dir, '{0}-session-{1}-{2}_network_corrmat_{3}.csv'.format(subject, session, task, mask)), delimiter=",") print(corrmat.shape) #craddock_ts = craddock_masker.fit_transform(epi_data, confounds) #craddock_corrmat = correlation_measure.fit_transform([craddock_ts])[0] #np.savetxt(join(sink_dir, sesh[session], subject, '{0}-session-{1}-rest_network_corrmat_craddock2012.csv'.format(subject, session)), craddock_corrmat, delimiter=",") ge_s = [] ge_c = [] md_s = [] md_c = [] for p in np.arange(kappa_upper, kappa_lower, 0.02): thresh = bct.threshold_proportional(corrmat, p, copy=True) #network measures of interest here #global efficiency ge = bct.efficiency_wei(thresh, local=True) ge_s.append(ge) #modularity md = bct.clustering_coef_wu(thresh) md_s.append(md) ge_s = np.asarray(ge_s) md_s = np.asarray(md_s) leff = np.trapz(ge_s, dx=0.01, axis=0) print('local efficiency:', leff[0]) ccoef = np.trapz(md_s, dx=0.01, axis=0) for j in np.arange(1, 270): df.at[(subject, session, task, mask), 'lEff{0}'.format(j)] = leff[j-1] df.at[(subject, session, task, mask), 'clustCoeff{0}'.format(j)] = ccoef[j-1] #df.to_csv(join(sink_dir, 'resting-state_graphtheory_shen+craddock.csv'), sep=',') lab_notebook.at[(subject, session),'end'] = str(datetime.datetime.now()) except Exception as e: print(e, subject, session) lab_notebook.at[(subject,session),'errors'] = [e, str(datetime.datetime.now())] df.to_csv(join(sink_dir, 'resting-state_nodal-graphtheory_shen+craddock.csv'), sep=',') df.to_csv(join(sink_dir, 'resting-state_nodal-graphtheory_shen+craddock_{0}.csv'.format(str(datetime.datetime.today()))), sep=',') lab_notebook.to_csv(join(lab_notebook_dir, 'LOG_resting-state-graph-theory_{0}.csv'.format(str(datetime.datetime.now())))) df for j in np.arange(1, 269): print(ccoef[j-1]) ```
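Since the nodal measures are written out with a four-level index, the following short sketch (added for convenience, reusing `sink_dir` from above) shows how the results could be read back for downstream analysis:

```
import pandas as pd
from os.path import join

# index_col restores the (subject, session, task, mask) MultiIndex written above
results = pd.read_csv(join(sink_dir, 'resting-state_nodal-graphtheory_shen+craddock.csv'),
                      index_col=[0, 1, 2, 3])
print(results.filter(like='lEff').head())   # nodal local-efficiency columns
```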
github_jupyter
# Notebook 2: Gradient Descent ## Learning Goal The goal of this notebook is to gain intuition for various gradient descent methods by visualizing and applying these methods to some simple two-dimensional surfaces. Methods studied include ordinary gradient descent, gradient descent with momentum, NAG, ADAM, and RMSProp. ## Overview In this notebook, we will visualize what different gradient descent methods are doing using some simple surfaces. From the onset, we emphasize that doing gradient descent on the surfaces is different from performing gradient descent on a loss function in Machine Learning (ML). The reason is that in ML not only do we want to find good minima, we want to find good minima that generalize well to new data. Despite this crucial difference, we can still build intuition about gradient descent methods by applying them to simple surfaces (see related blog posts [here](http://ruder.io/optimizing-gradient-descent/) and [here](http://tiao.io/notes/visualizing-and-animating-optimization-algorithms-with-matplotlib/)). ## Surfaces We will consider three simple surfaces: a quadratic minimum of the form $$z=ax^2+by^2,$$ a saddle-point of the form $$z=ax^2-by^2,$$ and [Beale's Function](https://en.wikipedia.org/wiki/Test_functions_for_optimization), a convex function often used to test optimization problems of the form: $$z(x,y) = (1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2$$ These surfaces can be plotted using the cells below. ``` #This cell sets up basic plotting functions awe #we will use to visualize the gradient descent routines. #Make plots interactive #%matplotlib notebook #Make plots static %matplotlib inline #Make 3D plots from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from matplotlib import cm #from matplotlib import animation from IPython.display import HTML from matplotlib.colors import LogNorm #from itertools import zip_longest #Import Numpy import numpy as np #Define function for plotting def plot_surface(x, y, z, azim=-60, elev=40, dist=10, cmap="RdYlBu_r"): fig = plt.figure() ax = fig.add_subplot(111, projection='3d') plot_args = {'rstride': 1, 'cstride': 1, 'cmap':cmap, 'linewidth': 20, 'antialiased': True, 'vmin': -2, 'vmax': 2} ax.plot_surface(x, y, z, **plot_args) ax.view_init(azim=azim, elev=elev) ax.dist=dist ax.set_xlim(-1, 1) ax.set_ylim(-1, 1) ax.set_zlim(-2, 2) plt.xticks([-1, -0.5, 0, 0.5, 1], ["-1", "-1/2", "0", "1/2", "1"]) plt.yticks([-1, -0.5, 0, 0.5, 1], ["-1", "-1/2", "0", "1/2", "1"]) ax.set_zticks([-2, -1, 0, 1, 2]) ax.set_zticklabels(["-2", "-1", "0", "1", "2"]) ax.set_xlabel("x", fontsize=18) ax.set_ylabel("y", fontsize=18) ax.set_zlabel("z", fontsize=18) return fig, ax; def overlay_trajectory_quiver(ax,obj_func,trajectory, color='k'): xs=trajectory[:,0] ys=trajectory[:,1] zs=obj_func(xs,ys) ax.quiver(xs[:-1], ys[:-1], zs[:-1], xs[1:]-xs[:-1], ys[1:]-ys[:-1],zs[1:]-zs[:-1],color=color,arrow_length_ratio=0.3) return ax; def overlay_trajectory(ax,obj_func,trajectory,label,color='k'): xs=trajectory[:,0] ys=trajectory[:,1] zs=obj_func(xs,ys) ax.plot(xs,ys,zs, color, label=label) return ax; def overlay_trajectory_contour_M(ax,trajectory, label,color='k',lw=2): xs=trajectory[:,0] ys=trajectory[:,1] ax.plot(xs,ys, color, label=label,lw=lw) ax.plot(xs[-1],ys[-1],color+'>', markersize=14) return ax; def overlay_trajectory_contour(ax,trajectory, label,color='k',lw=2): xs=trajectory[:,0] ys=trajectory[:,1] ax.plot(xs,ys, color, label=label,lw=lw) return ax; #DEFINE SURFACES WE WILL WORK WITH #Define monkey saddle and gradient 
def monkey_saddle(x,y): return x**3 - 3*x*y**2 def grad_monkey_saddle(params): x=params[0] y=params[1] grad_x= 3*x**2-3*y**2 grad_y= -6*x*y return [grad_x,grad_y] #Define saddle surface def saddle_surface(x,y,a=1,b=1): return a*x**2-b*y**2 def grad_saddle_surface(params,a=1,b=1): x=params[0] y=params[1] grad_x= a*x grad_y= -1*b*y return [grad_x,grad_y] # Define minima_surface def minima_surface(x,y,a=1,b=1): return a*x**2+b*y**2-1 def grad_minima_surface(params,a=1,b=1): x=params[0] y=params[1] grad_x= 2*a*x grad_y= 2*b*y return [grad_x,grad_y] def beales_function(x,y): return np.square(1.5-x+x*y)+np.square(2.25-x+x*y*y)+np.square(2.625-x+x*y**3) return f def grad_beales_function(params): x=params[0] y=params[1] grad_x=2*(1.5-x+x*y)*(-1+y)+2*(2.25-x+x*y**2)*(-1+y**2)+2*(2.625-x+x*y**3)*(-1+y**3) grad_y=2*(1.5-x+x*y)*x+4*(2.25-x+x*y**2)*x*y+6*(2.625-x+x*y**3)*x*y**2 return [grad_x,grad_y] def contour_beales_function(): #plot beales function x, y = np.meshgrid(np.arange(-4.5, 4.5, 0.2), np.arange(-4.5, 4.5, 0.2)) fig, ax = plt.subplots(figsize=(10, 6)) z=beales_function(x,y) cax = ax.contour(x, y, z, levels=np.logspace(0, 5, 35), norm=LogNorm(), cmap="RdYlBu_r") ax.plot(3,0.5, 'r*', markersize=18) ax.set_xlabel('$x$') ax.set_ylabel('$y$') ax.set_xlim((-4.5, 4.5)) ax.set_ylim((-4.5, 4.5)) return fig,ax #Make plots of surfaces plt.close() # closes previous plots x, y = np.mgrid[-1:1:31j, -1:1:31j] fig1,ax1=plot_surface(x,y,monkey_saddle(x,y)) fig2,ax2=plot_surface(x,y,saddle_surface(x,y)) fig3,ax3=plot_surface(x,y,minima_surface(x,y,5),0) #Contour plot of Beale's Function fig4,ax4 =contour_beales_function() plt.show() ``` ## Gradient descent with and without momentum In this notebook, we will visualize various gradient descent algorithms used in machine learning. We will be especially interested in trying to understand how various hyperparameters -- especially the learning rate -- affect our performance. Here, we confine ourselves primarily to looking at the performance in the absence of noise. However, we encourage the reader to experiment with playing with the noise strength below and seeing what differences introducing stochasticity makes. Throughout, we denote the parameters by $\theta$ and the energy function we are trying to minimize by $E(\theta)$. <b>Gradient Descent</b> We start by considering a simple gradient descent method. In this method, we will take steps in the direction of the local gradient. Given some parameters $\theta$, we adjust the parameters at each iteration so that $$\theta_{t+1}= \theta_t - \eta_t \nabla_\theta E(\theta),$$ where we have introduced the learning rate $\eta_t$ that controls how large a step we take. In general, the algorithm is extremely sensitive to the choice of $\eta_t$. If $\eta_t$ is too large, then one can wildly oscillate around minima and miss important structure at small scales. This problem is amplified if our gradient computations are noisy and inexact (as is often the case in machine learning applications). If $\eta_t$ is too small, then the learning/minimization procedure becomes extremely slow. This raises the natural question: <i> What sets the natural scale for the learning rate and how can we adaptively choose it?</i> We discuss this extensively in Section IV of the review. <b>Gradient Descent with Momentum</b> One problem with gradient descent is that it has no memory of where the "ball rolling down the hill" comes from. This can be an issue when there are many shallow minima in our landscape. 
If we make an analogy with a ball rolling down a hill, the lack of memory is equivalent to having no inertia or momentum (i.e. completely overdamped dynamics). Without momentum, the ball has no kinetic energy and cannot climb out of shallow minima. Momentum becomes especially important when we start thinking about stochastic gradient descent with noisy, stochastic estimates of the gradient. In this case, we should remember where we were coming from and not react drastically to each new update. Inspired by this, we can add a memory or momentum term to the stochastic gradient descent term above: $$ v_{t}=\gamma v_{t-1}+\eta_{t}\nabla_\theta E(\theta_t),\\ \theta_{t+1}= \theta_t -v_{t}, $$ with $0\le \gamma < 1$ called the momentum parameter. When $\gamma=0$, this reduces to ordinary gradient descent, and increasing $\gamma$ increases the inertial contribution to the gradient. From the equations above, we can see that typical memory lifetimes of the gradient is given by $(1-\gamma)^{-1}$. For $\gamma=0$ as in gradient descent, the lifetime is just one step. For $\gamma=0.9$, we typically remember a gradient for ten steps. We will call this gradient descent with classical momentum or CM for short. A final widely used variant of gradient descent with momentum is called the Nesterov accelerated gradient (NAG). In NAG, rather than calculating the gradient at the current position, one calculates the gradient at the position momentum will carry us to at time $t+1$, namely, $\theta_t -\gamma v_{t-1}$. Thus, the update becomes $$ v_{t}=\gamma v_{t-1}+\eta_{t}\nabla_\theta E(\theta_t-\gamma v_{t-1})\\ \theta_{t+1}= \theta_t -v_{t} $$ ``` #This writes a simple gradient descent, gradient descent+ momentum, #nesterov. #Mean-gradient based methods def gd(grad, init, n_epochs=1000, eta=10**-4, noise_strength=0): #This is a simple optimizer params=np.array(init) param_traj=np.zeros([n_epochs+1,2]) param_traj[0,]=init v=0; for j in range(n_epochs): noise=noise_strength*np.random.randn(params.size) v=eta*(np.array(grad(params))+noise) params=params-v param_traj[j+1,]=params return param_traj def gd_with_mom(grad, init, n_epochs=5000, eta=10**-4, gamma=0.9,noise_strength=0): params=np.array(init) param_traj=np.zeros([n_epochs+1,2]) param_traj[0,]=init v=0 for j in range(n_epochs): noise=noise_strength*np.random.randn(params.size) v=gamma*v+eta*(np.array(grad(params))+noise) params=params-v param_traj[j+1,]=params return param_traj def NAG(grad, init, n_epochs=5000, eta=10**-4, gamma=0.9,noise_strength=0): params=np.array(init) param_traj=np.zeros([n_epochs+1,2]) param_traj[0,]=init v=0 for j in range(n_epochs): noise=noise_strength*np.random.randn(params.size) params_nesterov=params-gamma*v v=gamma*v+eta*(np.array(grad(params_nesterov))+noise) params=params-v param_traj[j+1,]=params return param_traj ``` ## Experiments with GD, CM, and NAG Before introducing more complicated situations, let us experiment with these methods to gain some intuition. Let us look at the dependence of GD on learning rate in a simple quadratic minima of the form $z=ax^2+by^2-1$. Make plots below for $\eta=0.1,0.5,1,1.01$ and $a=1$ and $b=1$. (to do this, you would have to add additional arguments to the function `gd` above in order to pass the new values of `a` and `b`; otherwise the default values `a=1` and `b=1` will be used by the gradient) <ul> <li> What are the qualitatively different behaviors that arise as $\eta$ is increased? <li> What does this tell us about the importance of choosing learning parameters? 
How do these change if we change $a$ and $b$ above? In particular how does anisotropy change the learning behavior? <li> Make similar plots for CM and NAG? How do the learning rates for these procedures compare with those for GD? </ul> ``` # Investigate effect of learning rate in GD plt.close() a,b = 1.0,1.0 x, y = np.meshgrid(np.arange(-4.5, 4.5, 0.2), np.arange(-4.5, 4.5, 0.2)) fig, ax = plt.subplots(figsize=(10, 6)) z=np.abs(minima_surface(x,y,a,b)) ax.contour(x, y, z, levels=np.logspace(0.0, 5, 35), norm=LogNorm(), cmap="RdYlBu_r") ax.plot(0,0, 'r*', markersize=18) #initial point init1=[-2,4] init2=[-1.7,4] init3=[-1.5,4] init4=[-3,4.5] eta1=0.1 eta2=0.5 eta3=1 eta4=1.01 gd_1=gd(grad_minima_surface,init1, n_epochs=100, eta=eta1) gd_2=gd(grad_minima_surface,init2, n_epochs=100, eta=eta2) gd_3=gd(grad_minima_surface,init3, n_epochs=100, eta=eta3) gd_4=gd(grad_minima_surface,init4, n_epochs=10, eta=eta4) #print(gd_1) overlay_trajectory_contour(ax,gd_1,'$\eta=$%s'% eta1,'g--*', lw=0.5) overlay_trajectory_contour(ax,gd_2,'$\eta=$%s'% eta2,'b-<', lw=0.5) overlay_trajectory_contour(ax,gd_3,'$\eta=$%s'% eta3,'->', lw=0.5) overlay_trajectory_contour(ax,gd_4,'$\eta=$%s'% eta4,'c-o', lw=0.5) plt.legend(loc=2) plt.show() fig.savefig("GD3regimes.pdf", bbox_inches='tight') ``` ## Gradient Descents that utilize the second moment In stochastic gradient descent, with and without momentum, we still have to specify a schedule for tuning the learning rates $\eta_t$ as a function of time. As discussed in Sec. IV in the context of Newton's method, this presents a number of dilemmas. The learning rate is limited by the steepest direction which can change depending on where in the landscape we are. To circumvent this problem, ideally our algorithm would take large steps in shallow, flat directions and small steps in steep, narrow directions. Second-order methods accomplish this by calculating or approximating the Hessian and normalizing the learning rate by the curvature. However, this is very computationally expensive for extremely large models. Ideally, we would like to be able to adaptively change our step size to match the landscape without paying the steep computational price of calculating or approximating Hessians. Recently, a number of methods have been introduced that accomplish this by tracking not only the gradient but also the second moment of the gradient. These methods include AdaGrad, AdaDelta, RMS-Prop, and ADAM. Here, we discuss the latter of these two as representatives of this class of algorithms. In RMS prop (Root-Mean-Square propagation), in addition to keeping a running average of the first moment of the gradient, we also keep track of the second moment through a moving average. The update rule for RMS prop is given by $$ \mathbf{g}_t = \nabla_\theta E(\boldsymbol{\theta}) \\ \mathbf{s}_t =\beta \mathbf{s}_{t-1} +(1-\beta)\mathbf{g}_t^2 \nonumber \\ \boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_t + \eta_t { \mathbf{g}_t \over \sqrt{\mathbf{s}_t +\epsilon}}, \nonumber \\ $$ where $\beta$ controls the averaging time of the second moment and is typically taken to be about $\beta=0.9$, $\eta_t$ is a learning rate typically chosen to be $10^{-3}$, and $\epsilon\sim 10^{-8}$ is a small regularization constant to prevent divergences. It is clear from this formula that the learning rate is reduced in directions where the norm of the gradient is consistently large. This greatly speeds up the convergence by allowing us to use a larger learning rate for flat directions. 
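As a quick numerical illustration (a small sketch added here, not part of the original text), a single RMSprop-style update on the anisotropic quadratic surface $z=x^2+10y^2$ shows the per-direction rescaling at work:

```
import numpy as np

# gradient of z = a*x^2 + b*y^2 at (x, y) = (1, 1) with a=1, b=10
g = np.array([2.0 * 1 * 1, 2.0 * 10 * 1])
s = (1 - 0.9) * g**2                    # running second moment after one step (s_0 = 0)
step = 1e-3 * g / np.sqrt(s + 1e-8)     # eta * g / sqrt(s + eps)
print(step)                             # both components are ~3.2e-3
```

Both directions receive nearly the same effective step even though the raw gradients differ by a factor of ten.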
A related algorithm is the ADAM optimizer. In ADAM, we keep a running average of both the first and second moment of the gradient and use this information to adaptively change the learning rate for different parameters. In addition to keeping a running average of the first and second moments of the gradient, ADAM performs an additional bias correction to account for the fact that we are estimating the first two moments of the gradient using a running average (denoted by the hats in the update rule below). The update rule for ADAM is given by (where multiplication and division are understood to be element wise operations) $$ \mathbf{g}_t = \nabla_\theta E(\boldsymbol{\theta}) \\ \mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + (1-\beta_1) \mathbf{g}_t \nonumber \\ \mathbf{s}_t =\beta_2 \mathbf{s}_{t-1} +(1-\beta_2)\mathbf{g}_t^2 \nonumber \\ \hat{\mathbf{m}}_t={\mathbf{m}_t \over 1-\beta_1} \nonumber \\ \hat{\mathbf{s}}_t ={\mathbf{s}_t \over1-\beta_2} \nonumber \\ \boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_t + \eta_t { \hat{\mathbf{m}}_t \over \sqrt{\hat{\mathbf{s}}_t +\epsilon}}, \nonumber $$ where $\beta_1$ and $\beta_2$ set the memory lifetime of the first and second moment and are typically take to be $0.9$ and $0.99$ respectively, and $\eta$ and $\epsilon$ are identical to RMSprop. ``` ################################################################################ # Methods that exploit first and second moments of gradient: RMS-PROP and ADAMS ################################################################################ def rms_prop(grad, init, n_epochs=5000, eta=10**-3, beta=0.9,epsilon=10**-8,noise_strength=0): params=np.array(init) param_traj=np.zeros([n_epochs+1,2]) param_traj[0,]=init#Import relevant packages grad_sq=0; for j in range(n_epochs): noise=noise_strength*np.random.randn(params.size) g=np.array(grad(params))+noise grad_sq=beta*grad_sq+(1-beta)*g*g v=eta*np.divide(g,np.sqrt(grad_sq+epsilon)) params= params-v param_traj[j+1,]=params return param_traj def adams(grad, init, n_epochs=5000, eta=10**-4, gamma=0.9, beta=0.99,epsilon=10**-8,noise_strength=0): params=np.array(init) param_traj=np.zeros([n_epochs+1,2]) param_traj[0,]=init v=0; grad_sq=0; for j in range(n_epochs): noise=noise_strength*np.random.randn(params.size) g=np.array(grad(params))+noise v=gamma*v+(1-gamma)*g grad_sq=beta*grad_sq+(1-beta)*g*g v_hat=v/(1-gamma) grad_sq_hat=grad_sq/(1-beta) params=params-eta*np.divide(v_hat,np.sqrt(grad_sq_hat+epsilon)) param_traj[j+1,]=params return param_traj ``` ## Experiments with ADAM and RMSprop In this section, we will experiment with ADAM and RMSprop. To do so, we will use a function commonly used in optimization protocols: $$ f(x,y)=(1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2. $$ This function has a global minimum at $(x,y)=(3,0.5)$. We will use GD, GD with classical momentum, NAG, RMSprop, and ADAM to find minima starting at different initial conditions. One of the things you should experiment with is the learning rate and the number of steps, $N_{\mathrm{steps}}$ we take. Initially, we have set $N_{\mathrm{steps}}=10^4$ and the learning rate for ADAM/RMSprop to $\eta=10^{-3}$ and the learning rate for the remaining methods to $10^{-6}$. <ul> <li> Examine the plot for these default values. What do you see? <li> Make a plot when the learning rate of all methods is $\eta=10^{-6}$? How does your plot change? <li> Now set the learning rate for all algorithms to $\eta=10^{-3}$? What goes wrong? Why? 
</ul> ``` plt.close() #Make static plot of the results Nsteps=10**4 lr_l=10**-3 lr_s=10**-6 init1=np.array([4,3]) fig1, ax1=contour_beales_function() gd_trajectory1=gd(grad_beales_function,init1,Nsteps, eta=lr_s, noise_strength=0) gdm_trajectory1=gd_with_mom(grad_beales_function,init1,Nsteps,eta=lr_s, gamma=0.9,noise_strength=0) NAG_trajectory1=NAG(grad_beales_function,init1,Nsteps,eta=lr_s, gamma=0.9,noise_strength=0) rms_prop_trajectory1=rms_prop(grad_beales_function,init1,Nsteps,eta=lr_l, beta=0.9,epsilon=10**-8,noise_strength=0) adam_trajectory1=adams(grad_beales_function,init1,Nsteps,eta=lr_l, gamma=0.9, beta=0.99,epsilon=10**-8,noise_strength=0) overlay_trajectory_contour_M(ax1,gd_trajectory1, 'GD','k') overlay_trajectory_contour_M(ax1,gd_trajectory1, 'GDM','m') overlay_trajectory_contour_M(ax1,NAG_trajectory1, 'NAG','c--') overlay_trajectory_contour_M(ax1,rms_prop_trajectory1,'RMS', 'b-.') overlay_trajectory_contour_M(ax1,adam_trajectory1,'ADAMS', 'r') plt.legend(loc=2) #init2=np.array([1.5,1.5]) #gd_trajectory2=gd(grad_beales_function,init2,Nsteps, eta=10**-6, noise_strength=0) #gdm_trajectory2=gd_with_mom(grad_beales_function,init2,Nsteps,eta=10**-6, gamma=0.9,noise_strength=0) #NAG_trajectory2=NAG(grad_beales_function,init2,Nsteps,eta=10**-6, gamma=0.9,noise_strength=0) #rms_prop_trajectory2=rms_prop(grad_beales_function,init2,Nsteps,eta=10**-3, beta=0.9,epsilon=10**-8,noise_strength=0) #adam_trajectory2=adams(grad_beales_function,init2,Nsteps,eta=10**-3, gamma=0.9, beta=0.99,epsilon=10**-8,noise_strength=0) #overlay_trajectory_contour_M(ax1,gdm_trajectory2, 'GDM','m') #overlay_trajectory_contour_M(ax1,NAG_trajectory2, 'NAG','c--') #overlay_trajectory_contour_M(ax1,rms_prop_trajectory2,'RMS', 'b-.') #overlay_trajectory_contour_M(ax1,adam_trajectory2,'ADAMS', 'r') init3=np.array([-1,4]) gd_trajectory3=gd(grad_beales_function,init3,10**5, eta=lr_s, noise_strength=0) gdm_trajectory3=gd_with_mom(grad_beales_function,init3,10**5,eta=lr_s, gamma=0.9,noise_strength=0) NAG_trajectory3=NAG(grad_beales_function,init3,Nsteps,eta=lr_s, gamma=0.9,noise_strength=0) rms_prop_trajectory3=rms_prop(grad_beales_function,init3,Nsteps,eta=lr_l, beta=0.9,epsilon=10**-8,noise_strength=0) adam_trajectory3=adams(grad_beales_function,init3,Nsteps,eta=lr_l, gamma=0.9, beta=0.99,epsilon=10**-8,noise_strength=0) overlay_trajectory_contour_M(ax1,gd_trajectory3, 'GD','k') overlay_trajectory_contour_M(ax1,gdm_trajectory3, 'GDM','m') overlay_trajectory_contour_M(ax1,NAG_trajectory3, 'NAG','c--') overlay_trajectory_contour_M(ax1,rms_prop_trajectory3,'RMS', 'b-.') overlay_trajectory_contour_M(ax1,adam_trajectory3,'ADAMS', 'r') init4=np.array([-2,-4]) gd_trajectory4=gd(grad_beales_function,init4,Nsteps, eta=lr_s, noise_strength=0) gdm_trajectory4=gd_with_mom(grad_beales_function,init4,Nsteps,eta=lr_s, gamma=0.9,noise_strength=0) NAG_trajectory4=NAG(grad_beales_function,init4,Nsteps,eta=lr_s, gamma=0.9,noise_strength=0) rms_prop_trajectory4=rms_prop(grad_beales_function,init4,Nsteps,eta=lr_l, beta=0.9,epsilon=10**-8,noise_strength=0) adam_trajectory4=adams(grad_beales_function,init4,Nsteps,eta=lr_l, gamma=0.9, beta=0.99,epsilon=10**-8,noise_strength=0) overlay_trajectory_contour_M(ax1,gd_trajectory4, 'GD','k') overlay_trajectory_contour_M(ax1,gdm_trajectory4, 'GDM','m') overlay_trajectory_contour_M(ax1,NAG_trajectory4, 'NAG','c--') overlay_trajectory_contour_M(ax1,rms_prop_trajectory4,'RMS', 'b-.') overlay_trajectory_contour_M(ax1,adam_trajectory4,'ADAMS', 'r') plt.show() ```
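Referring back to the earlier suggestion that exploring different `a` and `b` requires passing them into `gd`, one lightweight alternative (a sketch, not part of the original notebook) is to close over them with a lambda so the optimizers above stay unchanged:

```
# reuse gd/gd_with_mom/NAG on an anisotropic bowl (a=1, b=10) without changing their signatures
a, b = 1, 10
grad_aniso = lambda params: grad_minima_surface(params, a=a, b=b)

traj = gd(grad_aniso, init=[-2, 4], n_epochs=100, eta=0.04)  # eta is limited by the steep y-direction
print(traj[-1])   # compare the final point against the isotropic runs above
```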
github_jupyter
____ <center> <h1 style="background-color:#975be5; color:white"><br>01-Linear Regression Project<br></h1></center> ____ <div align="right"> <b><a href="https://keytodatascience.com/">KeytoDataScience.com </a></b> </div> Congratulations !! KeytoDataScience just got some contract work with an Ecommerce company based in New York City that sells clothing online but they also have in-store style and clothing advice sessions. Customers come in to the store, have sessions/meetings with a personal stylist, then they can go home and order either on a mobile app or website for the clothes they want. __The company is trying to decide whether to focus their efforts on their mobile app experience or their website. They've hired you on contract on behalf of KeytoDataScience to help them figure it out!__ Let's get started! Just follow the steps below to analyze the customer data (Emails and Addresses in data set are fake). ## 1 Imports **Import pandas, numpy, matplotlib, and seaborn. (You'll import sklearn as you need it.)** ## 2 Get the Data We'll work with the Ecommerce Customers csv file from the company. It has Customer info, suchas Email, Address, and their color Avatar. Then it also has numerical value columns: * Avg. Session Length: Average session of in-store style advice sessions. * Time on App: Average time spent on App in minutes * Time on Website: Average time spent on Website in minutes * Length of Membership: How many years the customer has been a member. **Read in the Ecommerce Customers csv file as a DataFrame called customers.** **Check the head of customers, and check out its info() and describe() methods.** ## 3 Exploratory Data Analysis **Let's explore the data!** For the rest of the exercise we'll only be using the numerical data of the csv file. **Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?** **Do the same but with the Time on App column instead.** **Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.** **Let's explore these types of relationships across the entire data set. Use [pairplot](https://stanford.edu/~mwaskom/software/seaborn/tutorial/axis_grids.html#plotting-pairwise-relationships-with-pairgrid-and-pairplot) to recreate the plot below.(Don't worry about the the colors)** **Based off this plot what looks to be the most correlated feature with Yearly Amount Spent?** ``` # Length of Membership ``` **Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.** ## 4 Training and Testing Data Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets. ** Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. ** **Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101** ## 5 Training the Model Now its time to train our model on our training data! **Import LinearRegression from sklearn.linear_model** **Create an instance of a LinearRegression() model named lm.** **Train/fit lm on the training data.** **Print out the coefficients of the model** ## 6 Predicting Test Data Now that we have fit our model, let's evaluate its performance by predicting off the test values! **Use lm.predict() to predict off the X_test set of the data.** ** Create a scatterplot of the real test values versus the predicted values. 
**

## 7 Evaluating the Model

__Let's evaluate our model performance by calculating:__

- R-squared (R2) or Explained variance score
- Mean Absolute Error
- Mean Squared Error
- Root Mean Squared Error

## 8 Residuals

You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data.

**Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().**

## 9 Conclusion

We still want to figure out the answer to the original question: do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Membership Time is what is really important. Let's see if we can interpret the coefficients at all to get an idea.

**Recreate the dataframe below.**

**How can you interpret these coefficients?**

Interpreting the coefficients:

- Holding all other features fixed, a 1 unit increase in **Avg. Session Length** is associated with an **increase of 25.98 total dollars spent**.
- Holding all other features fixed, a 1 unit increase in **Time on App** is associated with an **increase of 38.59 total dollars spent**.
- Holding all other features fixed, a 1 unit increase in **Time on Website** is associated with an **increase of 0.19 total dollars spent**.
- Holding all other features fixed, a 1 unit increase in **Length of Membership** is associated with an **increase of 61.27 total dollars spent**.

**Do you think the company should focus more on their mobile app or on their website?**

This is tricky; there are two ways to think about it: develop the website to catch up to the performance of the mobile app, or develop the app further since that is what is already working better. This sort of answer really depends on the other factors going on at the company, so you would probably want to explore the relationship between Length of Membership and the App or the Website before coming to a conclusion!

____
<center>
<h1 style="background-color:#975be5; color:white"><br>Great Job!<br></h1><br></center>

____

Congrats on your contract work! The company loved the insights!

<div align="right">
<b><a href="https://keytodatascience.com/">KeytoDataScience.com </a></b>
</div>
github_jupyter
# Lab exercise 3

```
# load the required libraries
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as ss

#@title helper function
# run this cell, but do not worry about the implementation details
def plot_frequency_response(f, Hm, fc=None, ylim_min=None):
    """Plot the frequency response of a filter.

    Args
        f (numpy.ndarray) : frequencies
        Hm (numpy.ndarray) : absolute values of the transfer function
        fc (number) : cutoff frequency
        ylim_min (number): minimum value on the y-axis for the dB scale

    Returns
        (matplotlib.figure.Figure, matplotlib.axes._subplots.AxesSubplot)
    """
    Hc = 1 / np.sqrt(2)
    if fc is None:
        fc_idx = np.where(np.isclose(Hm, Hc, rtol=1e-03))[0][0]
        fc = f[fc_idx]
    H_db = 20 * np.log10(Hm)

    fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 7.5))

    ax[0, 0].plot(f, Hm, label='$H(f)$')
    ax[0, 0].plot(fc, Hc, 'o', label='$H(f_c)$')
    ax[0, 0].vlines(fc, Hm.min(), Hc, linestyle='--')
    ax[0, 0].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={Hc:.3f}$', (fc * 1.4, Hc))
    ax[0, 0].set_xscale('log')
    ax[0, 0].set_ylabel('$|V_{out}$ / $V_{in}$|')
    ax[0, 0].set_title('log scale')
    ax[0, 0].legend(loc='lower left')
    ax[0, 0].grid()

    ax[0, 1].plot(f, Hm, label='$H(f)$')
    ax[0, 1].plot(fc, Hc, 'o', label='$H(f_c)$')
    ax[0, 1].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={Hc:.3f}$', (fc * 1.4, Hc))
    ax[0, 1].set_title('linear scale')
    ax[0, 1].legend()
    ax[0, 1].grid()

    ax[1, 0].plot(f, H_db, label='$H_{dB}(f)$')
    ax[1, 0].plot(fc, H_db.max() - 3, 'o', label='$H_{dB}(f_c)$')
    ax[1, 0].vlines(fc, H_db.min(), H_db.max() - 3, linestyle='--')
    ax[1, 0].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={H_db.max() - 3:.3f} dB$', (fc * 1.4, H_db.max() - 3))
    ax[1, 0].set_xscale('log')
    ax[1, 0].set_xlabel('$f$ [Hz]')
    ax[1, 0].set_ylabel('$20 \\cdot \\log$ |$V_{out}$ / $V_{in}$|')
    if ylim_min:
        ax[1, 0].set_ylim((ylim_min, 10))
    ax[1, 0].legend(loc='lower left')
    ax[1, 0].grid()

    ax[1, 1].plot(f, H_db, label='$H_{dB}(f)$')
    ax[1, 1].plot(fc, H_db.max() - 3, 'o', label='$H_{dB}(f_c)$')
    ax[1, 1].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={H_db.max() - 3:.3f} dB$', (fc * 1.4, H_db.max() - 3))
    ax[1, 1].set_xlabel('$f$ [Hz]')
    if ylim_min:
        ax[1, 1].set_ylim((ylim_min, 10))
    ax[1, 1].legend()
    ax[1, 1].grid()

    fig.tight_layout()
    return fig, ax
```

### Passive high-pass filters

In this case the high-pass filter is realized with a resistor and an inductor connected in series, where the output is taken as the voltage across the inductor, $V_{out}$.

Assuming that the input signal, $V_{in}$, is a sinusoidal voltage source, we can move the analysis to the frequency domain using the impedance model. This way we avoid the need for differential calculus, and the whole calculation reduces to a simple algebraic problem.

<center>
    <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/bb/Series-RL.svg/768px-Series-RL.svg.png" alt="simple-rl-highpass" width="400"/>
</center>

The expression for the transfer function is obtained as the ratio of the output and input voltages.
The output voltage (the voltage across the inductor, $V_{out}$) is defined through the voltage division of the input voltage as follows:

$$
\begin{align}
V_{out} &= \frac{Z_l}{Z_l + Z_r} \cdot V_{in} \\
H(\omega) = \frac{V_{out}}{V_{in}} &= \frac{Z_l}{Z_l + Z_r} = \frac{j\omega L}{j\omega L + R} = \frac{1}{1+R/(j\omega L)}
\end{align}
$$

Since $H$ is a function of frequency, there are two limiting cases:

* for extremely low frequencies, where $\omega \sim 0$, it follows that $H(\omega) \rightarrow 0$;
* for extremely high frequencies, where $\omega \rightarrow \infty$, it follows that $H(\omega) \rightarrow 1$.

We additionally need to define the already mentioned *cut-off* frequency, $f_c$, at which the magnitude of the frequency response, $H$, drops by a factor of $\sqrt 2$, i.e. by $3$ dB:

$$
\begin{align}
f_c &= \frac{R}{2 \pi L}
\end{align}
$$

Link for interactive experimentation with a passive high-pass filter: http://sim.okawa-denshi.jp/en/LRtool.php

#### Task 1

The first task is to implement the function `cutoff_frequency`, which takes the resistance, `R`, and the inductance, `L`, as inputs and returns the *cutoff* frequency of the high-pass filter.

```
def cutoff_frequency(R, L):
    """Cutoff frequency of a high-pass RL filter.

    Args:
        R (number) : resistance of the resistor
        L (number) : inductance of the inductor

    Returns:
        number
    """
    #######################################################
    ## TO-DO: implement the cutoff frequency calculation ##
    # After that, comment out the following line.
    raise NotImplementedError('Implement the cutoff frequency calculation.')
    #######################################################

    # define the cutoff frequency
    fc = ...
    return fc
```

What is the *cutoff* frequency for a resistance of $200 \Omega$ and an inductance of $100 mH$?

```
R = ...  # resistance
L = ...  # inductance

fc = cutoff_frequency(...)  # cutoff frequency

print(f'R = {R/1000} kΩ')
print(f'L = {L*1000} mH')
print(f'the cutoff frequency is {fc:.2f} Hz, '
      'the expected value is 318.31 Hz')
```

#### Task 2

The second task is to implement the function `rl_highpass`, which takes the resistance, `R`, the inductance, `L`, and the frequency, `f`, as inputs and returns the transfer function of the passive high-pass RL filter.

```
def rl_highpass(R, L, f):
    """Transfer function of an RL high-pass filter.

    Args:
        R (number) : resistance of the resistor
        L (number) : inductance
        f (number or numpy.ndarray) : frequency/frequencies

    Returns:
        float or numpy.ndarray
    """
    ######################################################
    ## TO-DO: implement the transfer function           ##
    # After that, comment out the following line.
    raise NotImplementedError('Implement the transfer function.')
    ######################################################

    # define the transfer function, keeping in mind that `f` can be either a
    # number (int, float) or a 1-D array (`numpy.ndarray`)
    H = ...
    return H
```

What is the value of the transfer function at the *cutoff* frequency for a resistance of $200 \Omega$ and an inductance of $100 mH$?

```
R = ...  # resistance
L = ...  # inductance

Hc = rl_highpass(...)  # transfer function at the cutoff frequency

print(f'R = {R:.2f} Ω')
print(f'L = {L * 1000:.2f} mH')
print(f'the gain at the cutoff frequency is {abs(Hc):.4f}, '
      'the expected value is 1/√2\n\n'
      'check that the obtained result is correct')

# cell for checking the result
```

Convert the value of the transfer function at the *cutoff* frequency to decibels and confirm the claim that the magnitude of the frequency response, $H$, drops by $3$ dB at the *cutoff* frequency.

```
Hc_dB = ...
# conversion of the transfer function at the cutoff frequency to the dB scale
print(Hc_dB)
```

For a range of $10000$ frequency values up to $10 kHz$, and for a resistance of $200 \Omega$ and an inductance of $100 mH$, calculate the values of the transfer function.

```
f = np.linspace(..., num=10000)

H = rl_highpass(...)  # transfer function
```

Since the values of the transfer function are complex quantities, think about what needs to be done with them before we plot them.

```
Hm = ...  # conversion to absolute values
```

Visualize the dependence of the transfer function on frequency using `matplotlib` and the function `matplotlib.pyplot.plot`.

```
plt.plot(...)
plt.xlabel('f [Hz]')
plt.ylabel('H(f)')
plt.show()
```

Now visualize the results using the already implemented function `plot_frequency_response`.

Note: to check how to use that function, use the following command:

```python
help(plot_frequency_response)
```

or simply

```python
plot_frequency_response?
```

```
# check how the function is used

fig, ax = plot_frequency_response(...)  # plot the obtained results
```

### Current-voltage characteristic of the RL high-pass filter

```
def time_constant(L, R):
    """Time constant of the RL high-pass filter.

    Args:
        R (number) : resistance of the resistor
        L (number) : inductance

    Returns:
        float or numpy.ndarray
    """
    ###################################################################
    ## TO-DO: implement the function that computes the time constant ##
    # After that, comment out the following line.
    raise NotImplementedError('Implement the time constant.')
    ###################################################################

    # define the time constant
    tau = ...
    return tau

tau = time_constant(L, R)  # time constant
```

Which physical quantity is associated with the time constant? Explain.

```
def rl_current(t, t_switch, V, R, L):
    """Current through the RL high-pass filter.

    Args:
        t (number or numpy.ndarray) : time instant(s) at which the current is evaluated
        t_switch (number) : instant at which the current changes sign
        V (number) : input voltage
        R (number) : resistance of the resistor
        L (number) : inductance

    Returns:
        float or numpy.ndarray
    """
    I0 = V / R
    i = np.where(t < t_switch,
                 I0 * (1 - np.exp((-R / L) * t)),
                 I0 * np.exp((-R / L) * (t - t_switch)))
    return i

V = 5  # input voltage
tau = time_constant(L, R)  # time constant of the filter
t_switch = tau * 4.4  # time at which the current changes sign
T = 2 * t_switch  # period
t = np.linspace(0, T)  # time instants at which the current is evaluated

i_rl = rl_current(t, t_switch, V, R, L)  # RL current
i = V / R * np.sin(2 * np.pi * t / T)  # sinusoidal current

# visualization of the RL current
plt.figure()
plt.plot(t, i_rl, label='current')
plt.plot(t, i, label='on-off cycle')
plt.plot([t.min(), t_switch, t.max()], [0, 0, 0], 'rx')
plt.hlines(0, t.min(), t.max(), 'k')
plt.vlines(t_switch, i.min(), i.max(), 'k')
plt.xlabel('t [s]')
plt.ylabel('i(t) [A]')
plt.legend()
plt.grid()
plt.show()
```

### Band-pass filters

The following code uses several different types of band-pass filters (Hamming, Kaiser, Remez) and compares them with the ideal transfer function.
```
def bandpass_firwin(ntaps, lowcut, highcut, fs, window='hamming'):
    taps = ss.firwin(ntaps, [lowcut, highcut], nyq=0.5 * fs, pass_zero=False,
                     window=window, scale=False)
    return taps

def bandpass_kaiser(ntaps, lowcut, highcut, fs, width):
    atten = ss.kaiser_atten(ntaps, width / (0.5 * fs))
    beta = ss.kaiser_beta(atten)
    taps = ss.firwin(ntaps, [lowcut, highcut], nyq=0.5 * fs, pass_zero=False,
                     window=('kaiser', beta), scale=False)
    return taps

def bandpass_remez(ntaps, lowcut, highcut, fs, width):
    delta = 0.5 * width
    edges = [0, lowcut - delta, lowcut + delta,
             highcut - delta, highcut + delta, 0.5 * fs, ]
    taps = ss.remez(ntaps, edges, [0, 1, 0], Hz=fs)
    return taps

fs = 63.0
lowcut = 0.7
highcut = 4.0
ntaps = 128

taps_hamming = bandpass_firwin(ntaps, lowcut, highcut, fs)
taps_kaiser16 = bandpass_kaiser(ntaps, lowcut, highcut, fs, width=1.6)
taps_kaiser10 = bandpass_kaiser(ntaps, lowcut, highcut, fs, width=1.0)
taps_remez = bandpass_remez(ntaps, lowcut, highcut, fs=fs, width=1.0)

plt.figure()

w, h = ss.freqz(taps_hamming, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Hamming window')

w, h = ss.freqz(taps_kaiser16, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Kaiser, width = 1.6')

w, h = ss.freqz(taps_kaiser10, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Kaiser, width = 1.0')

w, h = ss.freqz(taps_remez, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Remez, width = 1.0')

h = np.where((fs * 0.5 / np.pi * w < lowcut) | (fs * 0.5 / np.pi * w > highcut), 0, 1)
plt.plot(fs * 0.5 / np.pi * w, h, 'k-', label='ideal response')
plt.fill_between(fs * 0.5 / np.pi * w, h, color='gray', alpha=0.1)

plt.xlim(0, 8.0)
plt.grid()
plt.legend(loc='upper right')
plt.xlabel('f (Hz)')
plt.ylabel('H(f)')
plt.show()
```
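To see how such FIR coefficients are actually applied to a signal, here is a small additional sketch (not part of the original exercise) that filters a synthetic two-tone signal with a Hamming-window band-pass design and `scipy.signal.lfilter`. It reuses the sampling rate and passband from above; the `fs=` keyword is the newer equivalent of the `nyq=0.5 * fs` form used in the cell above.

```python
import numpy as np
import scipy.signal as ss
import matplotlib.pyplot as plt

fs = 63.0                          # sampling rate, as above
t = np.arange(0, 20, 1 / fs)       # 20 seconds of samples
# two tones: 2 Hz lies inside the 0.7-4.0 Hz passband, 12 Hz does not
x = np.sin(2 * np.pi * 2 * t) + 0.7 * np.sin(2 * np.pi * 12 * t)

# Hamming-window band-pass FIR design
taps = ss.firwin(128, [0.7, 4.0], fs=fs, pass_zero=False, window='hamming')
y = ss.lfilter(taps, 1.0, x)       # apply the FIR filter to the signal

plt.plot(t, x, alpha=0.4, label='input: 2 Hz + 12 Hz')
plt.plot(t, y, label='band-pass filtered')
plt.xlabel('t [s]')
plt.legend()
plt.grid()
plt.show()
```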
# **EXPERIMENT 1** Aim: Exploring variable in a dataset Objectives: Exploring Variables in a Dataset Learn how to open and examine a dataset. Practice classifying variables by their type: quantitative or categorical. Learn how to handle categorical variables whose values are numerically coded. Link to experiment: https://upscfever.com/upsc-fever/en/data/en-exercises-1.html ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt depression = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/depression.csv') friends = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/friends.csv') actor_age = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/actor_age.csv') grad_data = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/grad_data.csv') ratings = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/ratings.csv') ``` ## **Question 1** What are the categorical variables in depression dataset? ``` depression.head(10), depression.dtypes ``` The categorical Variables in depression dataset are- 1. Hospt 2. Treat 3. Outcome 4. Gender ## **QUESTION 2** What are the quantitative variables in depression dataset? ``` depression.head(10), depression.dtypes ``` Quantitative variables in depression dataset are- 1. Time 2. AcuteT 3. Age # **QUESTION 3** Describe the distribution of the variable "friends" in dataset - Survey that asked 1,200 U.S. college students about their body perception ``` print("Datatype\n", friends.dtypes) print("\n") print("Shape of Dataset - ", friends.shape) friends.Friends.value_counts() friends.Friends.value_counts().plot(kind='pie') ``` ## **QUESTION 4** Describe the distribution of the ages of the Best Actor Oscar winners. Be sure to address shape, center, spread and outliers (Dataset - Best Actor Oscar winners (1970-2013)) ``` actor_age.describe() np.median(actor_age['Age']) actor_age.boxplot(column='Age') actor_age.shape actor_age.hist(column='Age') ``` Shape: Skewed to the right, 44 rows and 1 column Center (Median): 43.5 Spread: The standard deviation is 9.749153 Outlier: 76, there are no lower outliers ## **QUESTION 5** Getting information from the output: a. How many observations are in this data set? b. What is the mean age of the actors who won the Oscar? c. What is the five-number summary of the distribution? (Dataset - Best Actor Oscar winners (1970-2013)) ``` actor_age.describe() ``` a) No. of Observations (count)- 44 b) Mean age of actors (mean)- 44.977273 c) Five-number summary of distribution is min- 29 First Quartile (25%)- 38 Second Quartile (Median) (50%)- 43.5 Third Quartile (75%)- 50 max- 76 ## **QUESTION 6** Get information from the five-number summary: a. Half of the actors won the Oscar before what age? b. What is the range covered by all the actors' ages? c. What is the range covered by the middle 50% of the ages? (Dataset - Best Actor Oscar winners (1970-2013)) ``` actor_age.describe() ``` a) Half of the actors won oscar before the age of 43.5 b) Range of age for all actors in 29-76 c) Range covered by middle 50% of the ages- 38-50.25 ## **QUESTION 7** What are the standard deviations of the three rating distributions? Was your intuition correct? 
(Dataset - 27 students in the class were asked to rate the instructor on a number scale of 1 to 9) ``` ratings.head(10) ratings.describe() ``` Standard Deviation for Class. I- 1.568929 Standard Deviation for Class. II- 4.0 Standard Deviation for Class. III- 2.631174 No my intuition wasn't correct. ## **QUESTION 8** Assume that the average rating in each of the three classes is 5 (which should be visually reasonably clear from the histograms), and recall the interpretation of the SD as a "typical" or "average" distance between the data points and their mean. Judging from the table and the histograms, which class would have the largest standard deviation, and which one would have the smallest standard deviation? Explain your reasoning (Dataset - 27 students in the class were asked to rate the instructor on a number scale of 1 to 9) ``` ratings.head() ratings.describe() ratings.hist(column='Class.I') ratings.hist(column='Class.II') ratings.hist(column='Class.III') ``` Seeing the tables and histograms Class 1 has the least standard deviation as maximum values lie in the center. Class 2 has the most standard deviation as maximum values lie at different ends of the histogram and very few in the center. ``` ```
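As a quick numerical cross-check of the "typical distance from the mean" interpretation used above, the sketch below recomputes each class's standard deviation explicitly. It only assumes the `ratings` DataFrame loaded earlier in this notebook, with its `Class.I`, `Class.II`, and `Class.III` columns.

```python
import numpy as np

# Compare pandas' sample standard deviation (ddof=1) with an explicit
# root-mean-square distance of the ratings from their mean.
for col in ['Class.I', 'Class.II', 'Class.III']:
    x = ratings[col]
    manual_sd = np.sqrt(((x - x.mean()) ** 2).sum() / (len(x) - 1))
    print(col, round(manual_sd, 6), round(x.std(), 6))
```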
# Example Map Plotting

### At the start of a Jupyter notebook you need to import all modules that you will use

```
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import griddata
import cartopy
import cartopy.crs as ccrs                 # For plotting maps
import cartopy.feature as cfeature         # For plotting maps
from cartopy.util import add_cyclic_point  # For plotting maps
import datetime
```

### Define the directories and file of interest for your results. This can be shortened to fewer lines as well.

```
#result_dir = "/home/buchholz/Documents/code_database/untracked/my-notebook/Janyl_plotting/"
result_dir = "../../data/"
file = "CAM_chem_merra2_FCSD_1deg_QFED_monthly_2019.nc"

#the netcdf file is now held in an xarray dataset named 'nc' and can be referenced later in the notebook
nc_load = xr.open_dataset(result_dir+file)

#to see what the netCDF file contains, just call the variable you read it into
nc_load
```

### Extract the variable of choice at the time and level of choice

```
#extract grid variables
lat = nc_load['lat']
lon = nc_load['lon']

#extract variable
var_sel = nc_load['PM25']
print(var_sel)
#print(var_sel[0][0][0][0])

#select the surface level at a specific time
#var_srf = var_sel.isel(time=0, lev=55)

#select the surface level for an average over three times
var_srf = var_sel.isel(time=[2,3,4], lev=55)  # MAM chosen
var_srf = var_srf.mean('time')
var_srf = var_srf*1e09  # scale by 1e9 for plotting

print(var_srf.shape)

# Add cyclic point to avoid white line over Africa
var_srf_cyc, lon_cyc = add_cyclic_point(var_srf, coord=lon)
```

### Plot the value over a specific region

```
plt.figure(figsize=(20,8))

#Define projection
ax = plt.axes(projection=ccrs.PlateCarree())

#define contour levels
clev = np.arange(0, 100, 1)

#plot the data
plt.contourf(lon_cyc,lat,var_srf_cyc,clev,cmap='Spectral_r',extend='both')

# add coastlines
#ax.coastlines()
ax.add_feature(cfeature.COASTLINE)

#add lat lon grids
ax.gridlines(draw_labels=True, color='grey', alpha=0.5, linestyle='--')

#longitude limits in degrees
ax.set_xlim(20,120)
#latitude limits in degrees
ax.set_ylim(5,60)

# Title
plt.title("CAM-chem 2019 PM$_{2.5}$")

#axes
# y-axis
ax.text(-0.09, 0.55, 'Latitude', va='bottom', ha='center',
        rotation='vertical', rotation_mode='anchor',
        transform=ax.transAxes)
# x-axis
ax.text(0.5, -0.10, 'Longitude', va='bottom', ha='center',
        rotation='horizontal', rotation_mode='anchor',
        transform=ax.transAxes)
# legend
ax.text(1.18, 0.5, 'PM$_{2.5}$', va='bottom', ha='center',
        rotation='vertical', rotation_mode='anchor',
        transform=ax.transAxes)

plt.colorbar()
plt.show()
```

### Add location markers

```
## Now let's look at the surface plot again, but this time add markers for observations at several points.
#first we need to define our observational data into an array
#this can also be imported from text files using various routines

# Kyzylorda, Urzhar, Almaty, Balkhash
obs_lat = np.array([44.8488,47.0870,43.2220,46.2161])
obs_lon = np.array([65.4823,81.6315,76.8512,74.3775])
obs_names = ["Kyzylorda", "Urzhar", "Almaty", "Balkhash"]
num_obs = obs_lat.shape[0]

plt.figure(figsize=(20,8))

#Define projection
ax = plt.axes(projection=ccrs.PlateCarree())

#define contour levels
clev = np.arange(0, 100, 1)

#plot the data
plt.contourf(lon_cyc,lat,var_srf_cyc,clev,cmap='Spectral_r')

# add coastlines
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS)

#add lat lon grids
ax.gridlines(draw_labels=True, color='grey', alpha=0.5, linestyle='--')

#longitude limits in degrees
ax.set_xlim(20,120)
#latitude limits in degrees
ax.set_ylim(5,60)

# Title
plt.title("CAM-chem 2019 PM$_{2.5}$")

#axes
# y-axis
ax.text(-0.09, 0.55, 'Latitude', va='bottom', ha='center',
        rotation='vertical', rotation_mode='anchor',
        transform=ax.transAxes)
# x-axis
ax.text(0.5, -0.10, 'Longitude', va='bottom', ha='center',
        rotation='horizontal', rotation_mode='anchor',
        transform=ax.transAxes)
# legend
ax.text(1.18, 0.5, 'PM$_{2.5}$', va='bottom', ha='center',
        rotation='vertical', rotation_mode='anchor',
        transform=ax.transAxes)

#convert your observation lat/lon to Lambert-Conformal grid points
#xpt,ypt = m(obs_lon,obs_lat)

#to specify the color of each point it is easiest to plot individual points in a loop
for i in range(num_obs):
    plt.plot(obs_lon[i], obs_lat[i], linestyle='none', marker="o", markersize=8,
             alpha=0.8, c="black", markeredgecolor="black", markeredgewidth=1,
             transform=ccrs.PlateCarree())
    plt.text(obs_lon[i] - 0.8, obs_lat[i] - 0.5, obs_names[i],
             fontsize=20, horizontalalignment='right',
             transform=ccrs.PlateCarree())

plt.colorbar()
plt.show()

cartopy.config['data_dir']
```
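If you also want the model values at the marked locations (for example, to compare against surface measurements), one straightforward option is to sample the nearest model grid cell with xarray. This is only a sketch using the `var_srf`, `obs_lat`, `obs_lon`, and `obs_names` objects defined above; note that the model longitudes run 0-360, which happens to match these sites.

```python
# Sample the plotted field at the model grid cell nearest to each site.
for name, la, lo in zip(obs_names, obs_lat, obs_lon):
    value = var_srf.sel(lat=la, lon=lo, method='nearest')
    print(f"{name}: {float(value):.2f}")
```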
# Description

This notebook shows how to drive a group of seven LIFX Tile chains, each with 5 tiles laid out horizontally as follows:

T1 [0] [1] [2] [3] [4]

T2 [0] [1] [2] [3] [4]

T3 [0] [1] [2] [3] [4]

T4 [0] [1] [2] [3] [4]

T5 [0] [1] [2] [3] [4]

T6 [0] [1] [2] [3] [4]

T7 [0] [1] [2] [3] [4]

Care should be taken to ensure that the LIFX Tiles are all facing up so that the 0,0 position is in the expected place.

The program will take a jpg or png located alongside the notebook and create an image to display across all seven tile chains (35 tiles). The image will be reduced from its original size to a 40x56 pixel matrix (the `(40, 56)` size used in the code below), so resolution will not be great. You've been warned.

```
!pip install pylifxtiles
# `_thread` is part of the Python standard library, so no extra install is needed for it.

# Main Program for converting a single image to tiles
# Full running function with all dependencies
# imports the RGB to HSBK conversion function from the LIFX LAN library
import _thread as thread
from lifxlan import LifxLAN
from lifxlan.utils import RGBtoHSBK
from pylifxtiles import tiles
from pylifxtiles import actions
from matplotlib import image
from PIL import Image

# modify this variable to point at the image you want to display
source_image = './images/meghan.jpg'

def main():
    lan = LifxLAN()
    tilechain_lights = lan.get_tilechain_lights()
    print(len(tilechain_lights))
    if len(tilechain_lights) != 0:
        for tile in tilechain_lights:
            if tile.get_label() == 'T1':
                print(tile.get_label())
                T1 = tile
            if tile.get_label() == 'T2':
                print(tile.get_label())
                T2 = tile
            if tile.get_label() == 'T3':
                print(tile.get_label())
                T3 = tile
            if tile.get_label() == 'T4':
                print(tile.get_label())
                T4 = tile
            if tile.get_label() == 'T5':
                print(tile.get_label())
                T5 = tile
            if tile.get_label() == 'T6':
                print(tile.get_label())
                T6 = tile
            if tile.get_label() == 'T7':
                print(tile.get_label())
                T7 = tile
    tc_list = [T1, T2, T3, T4, T5, T6, T7]
    try:
        thread.start_new_thread(display_image,(source_image,(40,56), tc_list))
    except KeyboardInterrupt:
        print("Done.")

# combined function
# resize image and force a new shape and save to disk
def display_image(image_to_display,image_size, tilechain_list):
    # load the image
    my_image = Image.open(image_to_display)
    # report the size of the image
    #print(my_image.size)
    # resize image and ignore original aspect ratio
    img_resized = my_image.resize(image_size)
    # changing the file extension from jpg to png changes output brightness. You might need to play with this.
img_resized.save('./images/resized_image.jpg') data = image.imread('./images/resized_image.jpg') target_tcs = [] for row in data: temp_row = [] for pixel in row: temp_row.append(RGBtoHSBK(pixel)) target_tcs.append(temp_row) #print ("length of target_tcs is " + str(len(target_tcs))) tcsplit = tiles.split_tilechains(target_tcs) #print ("legnth of tcssplit is " + str(len(tcsplit))) #print ("length tilelist is " + str(len(tilechain_list))) for tile in range(len(tilechain_list)): print (tile) tilechain_list[tile].set_tilechain_colors(tiles.split_combined_matrix(tcsplit[tile]),rapid=True) if __name__ == "__main__": main() ``` # test write to three tiles ``` #Main Program for Convert Single Image to Tiles # Full running function with all dependencies #imports RGB to HSBK conversion function from LIFX LAN library from lifxlan import LifxLAN from lifxlan.utils import RGBtoHSBK from pylifxtiles import tiles from pylifxtiles import actions from matplotlib import image from PIL import Image # modify this variable to the name of the specific LIFX Tilechain as shown in the LIFX app source_image = './images/Youtubelogo.jpg' def main(): lan = LifxLAN() tilechain_lights = lan.get_tilechain_lights() print(len(tilechain_lights)) if len(tilechain_lights) != 0: for tile in tilechain_lights: if tile.get_label() == 'T1': print(tile.get_label()) T1 = tile if tile.get_label() =='T2': print(tile.get_label()) T2 = tile if tile.get_label() == 'T3': print(tile.get_label()) T3 = tile if tile.get_label() == 'T4': print(tile.get_label()) T4 = tile tc_list = [T2, T3, T4] try: display_image(source_image,(40,24), tc_list) except KeyboardInterrupt: print("Done.") #combined function # resize image and force a new shape and save to disk def display_image(image_to_display,image_size, tilechain_list): # load the image my_image = Image.open(image_to_display) # report the size of the image #print(my_image.size) # resize image and ignore original aspect ratio img_resized = my_image.resize(image_size) #changing the file extension from jpg to png changes output brightness. You might need to play with this. img_resized.save('./images/resized_image.jpg') data = image.imread('./images/resized_image.jpg') target_tcs = [] for row in data: temp_row = [] for pixel in row: temp_row.append(RGBtoHSBK(pixel)) target_tcs.append(temp_row) print ("length of target_tcs is " + str(len(target_tcs))) tcsplit = tiles.split_tilechains(target_tcs) print ("legnth of tcssplit is " + str(len(tcsplit))) print ("length tilelist is " + str(len(tilechain_list))) for tile in range(len(tilechain_list)): print (tile) tilechain_list[tile].set_tilechain_colors(tiles.split_combined_matrix(tcsplit[tile]),rapid=True) if __name__ == "__main__": main() import threading ```
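To see what the per-pixel conversion step produces without talking to any lights, here is a small standalone sketch (my own illustration, not a pylifxtiles API) that builds a tiny 8x8 test image and converts each pixel to an HSBK tuple with lifxlan's `RGBtoHSBK`, the same helper used in the cells above; its exact behavior on these tuples is assumed from how it is used there.

```python
import numpy as np
from PIL import Image
from lifxlan.utils import RGBtoHSBK

# Build a tiny 8x8 test image: red on the left half, blue on the right.
pixels = np.zeros((8, 8, 3), dtype=np.uint8)
pixels[:, :4] = (255, 0, 0)
pixels[:, 4:] = (0, 0, 255)
img = Image.fromarray(pixels)

# Convert every pixel to an HSBK tuple, the color format the tiles expect.
hsbk_rows = [[RGBtoHSBK(tuple(int(c) for c in px)) for px in row]
             for row in np.asarray(img)]
print(hsbk_rows[0][0])   # HSBK of a red pixel
print(hsbk_rows[0][-1])  # HSBK of a blue pixel
```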
``` import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm ``` # Import Risk INFORM index ``` path = "C:\\batch8_worldbank\\datasets\\tempetes\\INFORM_Risk_2021.xlsx" xl = pd.ExcelFile(path) xl.sheet_names inform_df = xl.parse(xl.sheet_names[2]) inform_df.columns = inform_df.iloc[0] inform_df = inform_df[2:] inform_df.head() ``` # Import emdat ``` path = "C:\\batch8_worldbank\\datasets\\tempetes\\wb_disasters_bdd.xlsx" disasters_df = pd.read_excel(path) disasters_df.head() disasters_df['ISO'] max(disasters_df['Year']) ``` # Filter on storms ``` storms_df = disasters_df[disasters_df["Disaster Type"]=="Storm"] ``` # Number of storms, nb people affected and total damages by country by decade ``` nb_storms_by_year_by_country = storms_df.groupby(["Start Year", "ISO"]).aggregate({"Disaster Type":"count", "No Affected": "sum", "Total Damages ('000 US$)":"sum"}) nb_storms_by_year_by_country = nb_storms_by_year_by_country.reset_index() nb_storms_by_year_by_country = nb_storms_by_year_by_country.rename(columns={"Start Year": "year", "Disaster Type": "storms_count", "No Affected": "total_nb_affected", "Total Damages ('000 US$)": "total_damages"}) nb_storms_by_year_by_country["decade"] = nb_storms_by_year_by_country["year"].apply(lambda row: (row//10)*10) nb_storms_by_decade_by_country = nb_storms_by_year_by_country.groupby(["decade", "ISO"]).aggregate({"storms_count":"sum", "total_nb_affected":"sum", "total_damages":"sum"}) nb_storms_by_decade_by_country = nb_storms_by_decade_by_country.reset_index() nb_storms_by_decade_by_country.head() max(nb_storms_by_decade_by_country["decade"]) ``` # Keep observations on decades 2000, 2010 and 2020 to increase nb of datapoints ``` nb_storms_by_decade_by_country_2020 = nb_storms_by_decade_by_country[nb_storms_by_decade_by_country["decade"]>=2000] nb_storms_by_decade_by_country_2020.head() nb_storms_by_decade_by_country_2020.shape nb_storms_by_decade_by_country_2020.columns inform_df.columns # Merge on ISO nb_storms_by_decade_by_country_2020_with_inform = pd.merge(nb_storms_by_decade_by_country_2020, inform_df, how="left", left_on="ISO", right_on="ISO3") nb_storms_by_decade_by_country_2020_with_inform.head() nb_storms_by_decade_by_country_2020_with_inform.shape nb_storms_by_decade_by_country_2020_with_inform_filt_col = nb_storms_by_decade_by_country_2020_with_inform[["decade", "ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]] nb_storms_by_decade_by_country_2020_with_inform_filt_col.dtypes nb_storms_by_decade_by_country_2020_with_inform_filt_col["INFORM RISK"] = nb_storms_by_decade_by_country_2020_with_inform_filt_col["INFORM RISK"].astype("float") nb_storms_by_decade_by_country_2020_with_inform_filt_col.head() nb_storms_inform_by_country_cor = nb_storms_by_decade_by_country_2020_with_inform_filt_col[["ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]] corr = nb_storms_inform_by_country_cor.corr() sm.graphics.plot_corr(corr, xnames=list(corr.columns)) plt.show() ``` # Keep observations on decades 2010 and 2020 ``` nb_storms_inform_by_country_2010_2020 = nb_storms_by_decade_by_country_2020_with_inform_filt_col[nb_storms_by_decade_by_country_2020_with_inform_filt_col["decade"]>=2010] nb_storms_inform_by_country_2010_2020_cor = nb_storms_inform_by_country_2010_2020[["ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]] corr = nb_storms_inform_by_country_2010_2020_cor.corr() sm.graphics.plot_corr(corr, xnames=list(corr.columns)) plt.show() ``` # Keep 
observations on decade 2020 (decade of INFORM index) ``` nb_storms_inform_by_country_2020_only = nb_storms_by_decade_by_country_2020_with_inform_filt_col[nb_storms_by_decade_by_country_2020_with_inform_filt_col["decade"]==2020] nb_storms_inform_by_country_2020_only.head() nb_storms_inform_by_country_2020_only_cor = nb_storms_inform_by_country_2020_only[["ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]] corr = nb_storms_inform_by_country_2020_only_cor.corr() sm.graphics.plot_corr(corr, xnames=list(corr.columns)) plt.show() ```
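Storm damages and affected counts are heavily skewed, so a rank-based correlation can be a useful complement to the Pearson correlations plotted above. Here is a minimal sketch on the same 2020-only dataframe, dropping the non-numeric `ISO` column explicitly:

```python
# Spearman (rank) correlation is less sensitive to the extreme damage and
# affected values than the default Pearson correlation used above.
numeric_cols = nb_storms_inform_by_country_2020_only_cor.drop(columns="ISO")
spearman_corr = numeric_cols.corr(method="spearman")

sm.graphics.plot_corr(spearman_corr, xnames=list(spearman_corr.columns))
plt.show()
```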
# Db2 Jupyter Notebook Extensions Tutorial The SQL code tutorials for Db2 rely on a Jupyter notebook extension, commonly refer to as a "magic" command. The beginning of all of the notebooks begin with the following command which will load the extension and allow the remainder of the notebook to use the %sql magic command. <pre> &#37;run db2.ipynb </pre> The cell below will load the Db2 extension. Note that it will take a few seconds for the extension to load, so you should generally wait until the "Db2 Extensions Loaded" message is displayed in your notebook. ``` %run db2.ipynb ``` ## Options There are two options that can be set with the **`%sql`** command. These options are: - **`MAXROWS n`** - The maximum number of rows that you want to display as part of a SQL statement. Setting MAXROWS to -1 will return all output, while maxrows of 0 will suppress all output. - **`RUNTIME n`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time. To set an option use the following syntax: ``` %sql option option_name value option_name value .... ``` The following example sets all three options: ``` %sql option maxrows 100 runtime 2 ``` The values will be saved between Jupyter notebooks sessions. ## Connections to Db2 Before any SQL commands can be issued, a connection needs to be made to the Db2 database that you will be using. The connection can be done manually (through the use of the CONNECT command), or automatically when the first `%sql` command is issued. The Db2 magic command tracks whether or not a connection has occured in the past and saves this information between notebooks and sessions. When you start up a notebook and issue a command, the program will reconnect to the database using your credentials from the last session. In the event that you have not connected before, the system will prompt you for all the information it needs to connect. This information includes: - Database name (SAMPLE) - Hostname - localhost (enter an IP address if you need to connect to a remote server) - PORT - 50000 (this is the default but it could be different) - Userid - DB2INST1 - Password - No password is provided so you have to enter a value - Maximum Rows - 10 lines of output are displayed when a result set is returned There will be default values presented in the panels that you can accept, or enter your own values. All of the information will be stored in the directory that the notebooks are stored on. Once you have entered the information, the system will attempt to connect to the database for you and then you can run all of the SQL scripts. More details on the CONNECT syntax will be found in a section below. If you have credentials available from Db2 on Cloud or DSX, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS <var>` syntax to connect to the database. ```Python db2blu = { "uid" : "xyz123456", ...} %sql CONNECT CREDENTIALS db2blu ``` If the connection is successful using the credentials, the variable will be saved to disk so that you can connected from within another notebook using the same syntax. The next statement will force a CONNECT to occur with the default values. If you have not connected before, it will prompt you for the information. 
``` %sql CONNECT ``` ## Line versus Cell Command The Db2 extension is made up of one magic command that works either at the LINE level (`%sql`) or at the CELL level (`%%sql`). If you only want to execute a SQL command on one line in your script, use the `%sql` form of the command. If you want to run a larger block of SQL, then use the `%%sql` form. Note that when you use the `%%sql` form of the command, the entire contents of the cell is considered part of the command, so you cannot mix other commands in the cell. The following is an example of a line command: ``` %sql VALUES 'HELLO THERE' ``` If you have SQL that requires multiple lines, of if you need to execute many lines of SQL, then you should be using the CELL version of the `%sql` command. To start a block of SQL, start the cell with `%%sql` and do not place any SQL following the command. Subsequent lines can contain SQL code, with each SQL statement delimited with the semicolon (`;`). You can change the delimiter if required for procedures, etc... More details on this later. ``` %%sql VALUES 1, 2, 3 ``` If you are using a single statement then there is no need to use a delimiter. However, if you are combining a number of commands then you must use the semicolon. ``` %%sql DROP TABLE STUFF; CREATE TABLE STUFF (A INT); INSERT INTO STUFF VALUES 1,2,3; SELECT * FROM STUFF; ``` The script will generate messages and output as it executes. Each SQL statement that generates results will have a table displayed with the result set. If a command is executed, the results of the execution get listed as well. The script you just ran probably generated an error on the DROP table command. ## Options Both forms of the `%sql` command have options that can be used to change the behavior of the code. For both forms of the command (`%sql`, `%%sql`), the options must be on the same line as the command: <pre> %sql -t ... %%sql -t </pre> The only difference is that the `%sql` command can have SQL following the parameters, while the `%%sql` requires the SQL to be placed on subsequent lines. There are a number of parameters that you can specify as part of the `%sql` statement. * `-d` - Use alternative delimiter * `-t` - Time the statement execution * `-q` - Suppress messages * `-j` - JSON formatting of a column * `-a` - Show all output * `-pb` - Bar chart of results * `-pp` - Pie chart of results * `-pl` - Line chart of results * `-i` - Interactive mode with Pixiedust * `-sampledata` Load the database with the sample EMPLOYEE and DEPARTMENT tables * `-r` - Return the results into a variable (list of rows) * `-e` - Echo macro substitution Multiple parameters are allowed on a command line. Each option should be separated by a space: <pre> %sql -a -j ... </pre> A `SELECT` statement will return the results as a dataframe and display the results as a table in the notebook. If you use the assignment statement, the dataframe will be placed into the variable and the results will not be displayed: <pre> r = %sql SELECT * FROM EMPLOYEE </pre> The sections below will explain the options in more detail. ## Delimiters The default delimiter for all SQL statements is the semicolon. However, this becomes a problem when you try to create a trigger, function, or procedure that uses SQLPL (or PL/SQL). Use the `-d` option to turn the SQL delimiter into the at (`@`) sign and `-q` to suppress error messages. The semi-colon is then ignored as a delimiter. For example, the following SQL will use the `@` sign as the delimiter. 
``` %%sql -d -q DROP TABLE STUFF @ CREATE TABLE STUFF (A INT) @ INSERT INTO STUFF VALUES 1,2,3 @ SELECT * FROM STUFF @ ``` The delimiter change will only take place for the statements following the `%%sql` command. Subsequent cells in the notebook will still use the semicolon. You must use the `-d` option for every cell that needs to use the semicolon in the script. ## Limiting Result Sets The default number of rows displayed for any result set is 10. You have the option of changing this option when initially connecting to the database. If you want to override the number of rows display you can either update the control variable, or use the -a option. The `-a` option will display all of the rows in the answer set. For instance, the following SQL will only show 10 rows even though we inserted 15 values: ``` %sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ``` You will notice that the displayed result will split the visible rows to the first 5 rows and the last 5 rows. Using the `-a` option will display all values in a scrollable table. ``` %sql -a values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ``` To change the default value of rows displayed, you can use the `%sql option maxrow` command to set the value to something else. A value of 0 or -1 means unlimited output. ``` %sql option maxrows 5 %sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ``` A special note regarding the output from a `SELECT` statement. If the SQL statement is the last line of a block, the results will be displayed by default (unless you assigned the results to a variable). If the SQL is in the middle of a block of statements, the results will not be displayed. To explicitly display the results you must use the display function (or pDisplay if you have imported another library like pixiedust which overrides the pandas display function). ``` # Set the maximum back %sql option maxrows 10 %sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ``` ## Quiet Mode Every SQL statement will result in some output. You will either get an answer set (`SELECT`), or an indication if the command worked. For instance, the following set of SQL will generate some error messages since the tables will probably not exist: ``` %%sql DROP TABLE TABLE_NOT_FOUND; DROP TABLE TABLE_SPELLED_WRONG; ``` If you know that these errors may occur you can silence them with the -q option. ``` %%sql -q DROP TABLE TABLE_NOT_FOUND; DROP TABLE TABLE_SPELLED_WRONG; ``` SQL output will not be suppressed, so the following command will still show the results. ``` %%sql -q DROP TABLE TABLE_NOT_FOUND; DROP TABLE TABLE_SPELLED_WRONG; VALUES 1,2,3; ``` ## Variables in %sql Blocks Python variables can be passed to a `%sql` line command, and to a `%%sql` block. For both forms of the `%sql` command you can pass variables by placing a colon in front of the variable name. ```python %sql SELECT * FROM EMPLOYEE WHERE EMPNO = :empno ``` The following example illustrates the use of a variable in the SQL. ``` empno = '000010' %sql SELECT * FROM EMPLOYEE WHERE EMPNO = :empno ``` You can doublecheck that the substitution took place by using the `-e` option which echos the SQL command after substitution. ``` %sql -e SELECT * FROM EMPLOYEE WHERE EMPNO = :empno ``` Note that the variable `:empno` did not have quotes around it, although it is a string value. The `%sql` call will examine the contents of the variable and add quotes around strings so you do not have to supply them in the SQL command. Variables can also be array types. Arrays are expanded into multiple values, each separated by commas. 
This is useful when building SQL `IN` lists. The following example searches for 3 employees based on their employee number. ``` empnos = ['000010','000020','000030'] %sql SELECT * FROM EMPLOYEE WHERE EMPNO IN (:empnos) ``` You can reference individual array items using this technique as well. If you wanted to search for only the first value in the `empnos` array, use `:empnos[0]` instead. ``` %sql SELECT * FROM EMPLOYEE WHERE EMPNO IN (:empnos[0]) ``` One final type of variable substitution that is allowed is for dictionaries. Python dictionaries resemble JSON objects and can be used to insert JSON values into Db2. For instance, the following variable contains company information in a JSON structure. ``` customer = { "name" : "Aced Hardware Stores", "city" : "Rockwood", "employees" : 14 } ``` Db2 has builtin functions for dealing with JSON objects. There is another Jupyter notebook which goes through this in detail. Rather than using those functions, the following code will create a Db2 table with a string column that will contain the contents of this JSON record. ``` %%sql DROP TABLE SHOWJSON; CREATE TABLE SHOWJSON (INJSON VARCHAR(256)); ``` To insert the Dictionary (JSON Record) into this Db2 table, you only need to use the variable name as one of the fields being inserted. ``` %sql INSERT INTO SHOWJSON VALUES :customer ``` Selecting from this table will show that the data has been inserted as a string. ``` %sql select * from showjson ``` If you want to retrieve the data from a column that contains JSON records, you must use the `-j` flag to insert the contents back into a variable. ``` v = %sql -j SELECT * FROM SHOWJSON ``` The variable `v` now contains the original JSON record for you to use. ``` v ``` ## SQL Character Strings Character strings require special handling when dealing with Db2. The single quote character `'` is reserved for delimiting string constants, while the double quote `"` is used for naming columns that require special characters. You cannot use the double quote character to delimit strings that happen to contain the single quote character. What Db2 requires you do is placed two quotes in a row to have them interpreted as a single quote character. For instance, the next statement will select one employee from the table who has a quote in their last name: `O'CONNELL`. ``` %sql SELECT * FROM EMPLOYEE WHERE LASTNAME = 'O''CONNELL' ``` Python handles quotes differently! You can assign a string to a Python variable using single or double quotes. The following assignment statements are not identical! ``` lastname = "O'CONNELL" print(lastname) lastname = 'O''CONNELL' print(lastname) ``` If you use the same syntax as Db2, Python will remove the quote in the string! It interprets this as two strings (O and CONNELL) being concatentated together. That probably isn't what you want! So the safest approach is to use double quotes around your string when you assign it to a variable. Then you can use the variable in the SQL statement as shown in the following example. ``` lastname = "O'CONNELL" %sql -e SELECT * FROM EMPLOYEE WHERE LASTNAME = :lastname ``` Notice how the string constant was updated to contain two quotes when inserted into the SQL statement. This is done automatically by the `%sql` magic command, so there is no need to use the two single quotes when assigning a string to a variable. However, you must use the two single quotes when using constants in a SQL statement. 
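If you ever need to build such a constant yourself in plain Python (outside of the `%sql` variable substitution shown above), the doubling rule is easy to apply. This is just an illustrative helper, not part of the Db2 extension:

```
def sql_quote(value):
    """Return value as a SQL string constant, with single quotes doubled."""
    return "'" + value.replace("'", "''") + "'"

print(sql_quote("O'CONNELL"))   # -> 'O''CONNELL'
```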
## Builtin Variables There are 5 predefined variables defined in the program: - database - The name of the database you are connected to - uid - The userid that you connected with - hostname = The IP address of the host system - port - The port number of the host system - max - The maximum number of rows to return in an answer set Theses variables are all part of a structure called _settings. To retrieve a value, use the syntax: ```python db = _settings['database'] ``` There are also 3 variables that contain information from the last SQL statement that was executed. - sqlcode - SQLCODE from the last statement executed - sqlstate - SQLSTATE from the last statement executed - sqlerror - Full error message returned on last statement executed You can access these variables directly in your code. The following code segment illustrates the use of the SQLCODE variable. ``` empnos = ['000010','999999'] for empno in empnos: ans1 = %sql -r SELECT SALARY FROM EMPLOYEE WHERE EMPNO = :empno if (sqlcode != 0): print("Employee "+ empno + " left the company!") else: print("Employee "+ empno + " salary is " + str(ans1[1][0])) ``` ## Timing SQL Statements Sometimes you want to see how the execution of a statement changes with the addition of indexes or other optimization changes. The `-t` option will run the statement on the LINE or one SQL statement in the CELL for exactly one second. The results will be displayed and optionally placed into a variable. The syntax of the command is: <pre> sql_time = %sql -t SELECT * FROM EMPLOYEE </pre> For instance, the following SQL will time the VALUES clause. ``` %sql -t VALUES 1,2,3,4,5,6,7,8,9 ``` When timing a statement, no output will be displayed. If your SQL statement takes longer than one second you will need to modify the runtime options. You can use the `%sql option runtime` command to change the duration the statement runs. ``` %sql option runtime 5 %sql -t VALUES 1,2,3,4,5,6,7,8,9 %sql option runtime 1 ``` ## JSON Formatting Db2 supports querying JSON that is stored in a column within a table. Standard output would just display the JSON as a string. For instance, the following statement would just return a large string of output. ``` %%sql VALUES '{ "empno":"000010", "firstnme":"CHRISTINE", "midinit":"I", "lastname":"HAAS", "workdept":"A00", "phoneno":[3978], "hiredate":"01/01/1995", "job":"PRES", "edlevel":18, "sex":"F", "birthdate":"08/24/1963", "pay" : { "salary":152750.00, "bonus":1000.00, "comm":4220.00} }' ``` Adding the -j option to the `%sql` (or `%%sql`) command will format the first column of a return set to better display the structure of the document. Note that if your answer set has additional columns associated with it, they will not be displayed in this format. ``` %%sql -j VALUES '{ "empno":"000010", "firstnme":"CHRISTINE", "midinit":"I", "lastname":"HAAS", "workdept":"A00", "phoneno":[3978], "hiredate":"01/01/1995", "job":"PRES", "edlevel":18, "sex":"F", "birthdate":"08/24/1963", "pay" : { "salary":152750.00, "bonus":1000.00, "comm":4220.00} }' ``` JSON fields can be inserted into Db2 columns using Python dictionaries. This makes the input and output of JSON fields much simpler. For instance, the following code will create a Python dictionary which is similar to a JSON record. ``` employee = { "firstname" : "John", "lastname" : "Williams", "age" : 45 } ``` The field can be inserted into a character column (or BSON if you use the JSON functions) by doing a direct variable insert. 
``` %%sql -q DROP TABLE SHOWJSON; CREATE TABLE SHOWJSON(JSONIN VARCHAR(128)); ``` An insert would use a variable parameter (colon in front of the variable) instead of a character string. ``` %sql INSERT INTO SHOWJSON VALUES (:employee) %sql SELECT * FROM SHOWJSON ``` An assignment statement to a variable will result in an equivalent Python dictionary type being created. Note that we must use the raw `-j` flag to make sure we only get the data and not a data frame. ``` x = %sql -j SELECT * FROM SHOWJSON print("First Name is " + x[0]["firstname"] + " and the last name is " + x[0]['lastname']) ``` ## Plotting Sometimes it would be useful to display a result set as either a bar, pie, or line chart. The first one or two columns of a result set need to contain the values need to plot the information. The three possible plot options are: * `-pb` - bar chart (x,y) * `-pp` - pie chart (y) * `-pl` - line chart (x,y) The following data will be used to demonstrate the different charting options. ``` %sql values 1,2,3,4,5 ``` Since the results only have one column, the pie, line, and bar charts will not have any labels associated with them. The first example is a bar chart. ``` %sql -pb values 1,2,3,4,5 ``` The same data as a pie chart. ``` %sql -pp values 1,2,3,4,5 ``` And finally a line chart. ``` %sql -pl values 1,2,3,4,5 ``` If you retrieve two columns of information, the first column is used for the labels (X axis or pie slices) and the second column contains the data. ``` %sql -pb values ('A',1),('B',2),('C',3),('D',4),('E',5) ``` For a pie chart, the first column is used to label the slices, while the data comes from the second column. ``` %sql -pp values ('A',1),('B',2),('C',3),('D',4),('E',5) ``` Finally, for a line chart, the x contains the labels and the y values are used. ``` %sql -pl values ('A',1),('B',2),('C',3),('D',4),('E',5) ``` The following SQL will plot the number of employees per department. ``` %%sql -pb SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE GROUP BY WORKDEPT ``` The final option for plotting data is to use interactive mode `-i`. This will display the data using an open-source project called Pixiedust. You can view the results in a table and then interactively create a plot by dragging and dropping column names into the appropriate slot. The next command will place you into interactive mode. ``` %sql -i select * from employee ``` ## Sample Data Many of the Db2 notebooks depend on two of the tables that are found in the `SAMPLE` database. Rather than having to create the entire `SAMPLE` database, this option will create and populate the `EMPLOYEE` and `DEPARTMENT` tables in your database. Note that if you already have these tables defined, they will not be dropped. ``` %sql -sampledata ``` ## Result Sets By default, any `%sql` block will return the contents of a result set as a table that is displayed in the notebook. The results are displayed using a feature of pandas dataframes. The following select statement demonstrates a simple result set. ``` %sql select * from employee fetch first 3 rows only ``` You can assign the result set directly to a variable. ``` x = %sql select * from employee fetch first 3 rows only ``` The variable x contains the dataframe that was produced by the `%sql` statement so you access the result set by using this variable or display the contents by just referring to it in a command line. ``` x ``` There is an additional way of capturing the data through the use of the `-r` flag. 
<pre> var = %sql -r select * from employee </pre> Rather than returning a dataframe result set, this option will produce a list of rows. Each row is a list itself. The column names are found in row zero (0) and the data rows start at 1. To access the first column of the first row, you would use var[1][0] to access it. ``` rows = %sql -r select * from employee fetch first 3 rows only print(rows[1][0]) ``` The number of rows in the result set can be determined by using the length function and subtracting one for the header row. ``` print(len(rows)-1) ``` If you want to iterate over all of the rows and columns, you could use the following Python syntax instead of creating a for loop that goes from 0 to 41. ``` for row in rows: line = "" for col in row: line = line + str(col) + "," print(line) ``` If you don't want the header row, modify the first line to start at the first row instead of row zero. ``` for row in rows[1:]: line = "" for col in row: line = line + str(col) + "," print(line) ``` Since the data may be returned in different formats (like integers), you should use the str() function to convert the values to strings. Otherwise, the concatenation function used in the above example will fail. For instance, the 6th field is a birthdate field. If you retrieve it as an individual value and try and concatenate a string to it, you get the following error. ``` try: print("Birth Date="+rows[1][6]) except Exception as err: print("Oops... Something went wrong!") print(err) ``` You can fix this problem by adding the str function to convert the date. ``` print("Birth Date="+str(rows[1][6])) ``` ## Development SQL The previous set of `%sql` and `%%sql` commands deals with SQL statements and commands that are run in an interactive manner. There is a class of SQL commands that are more suited to a development environment where code is iterated or requires changing input. The commands that are associated with this form of SQL are: - AUTOCOMMIT - COMMIT/ROLLBACK - PREPARE - EXECUTE In addition, the `sqlcode`, `sqlstate` and `sqlerror` fields are populated after every statement so you can use these variables to test for errors. Autocommit is the default manner in which SQL statements are executed. At the end of the successful completion of a statement, the results are commited to the database. There is no concept of a transaction where multiple DML/DDL statements are considered one transaction. The `AUTOCOMMIT` command allows you to turn autocommit `OFF` or `ON`. This means that the set of SQL commands run after the `AUTOCOMMIT OFF` command are executed are not commited to the database until a `COMMIT` or `ROLLBACK` command is issued. `COMMIT (WORK)` will finalize all of the transactions (`COMMIT`) to the database and `ROLLBACK` will undo all of the changes. If you issue a `SELECT` statement during the execution of your block, the results will reflect all of your changes. If you `ROLLBACK` the transaction, the changes will be lost. `PREPARE` is typically used in a situation where you want to repeatidly execute a SQL statement with different variables without incurring the SQL compilation overhead. For instance: ``` x = %sql PREPARE SELECT LASTNAME FROM EMPLOYEE WHERE EMPNO=? for y in ['000010','000020','000030']: %sql execute :x using :y ``` `EXECUTE` is used to execute a previously compiled statement. ## Db2 CONNECT Statement As mentioned at the beginning of this notebook, connecting to Db2 is automatically done when you issue your first `%sql` statement. 
Usually the program will prompt you with what options you want when connecting to a database. The other option is to use the CONNECT statement directly. The CONNECT statement is similar to the native Db2 CONNECT command, but includes some options that allow you to connect to databases that has not been catalogued locally. The CONNECT command has the following format: <pre> %sql CONNECT TO &lt;database&gt; USER &lt;userid&gt; USING &lt;password | ?&gt; HOST &lt;ip address&gt; PORT &lt;port number&gt; </pre> If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request. If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. If you want to force the program to connect to a different database (with prompting), use the CONNECT RESET command. The next time you run a SQL statement, the program will prompt you for the the connection and will force the program to reconnect the next time a SQL statement is executed. #### Credits: IBM 2018, George Baklarz [[email protected]]
# Welcome to the matched filtering tutorial!

### Installation

Make sure you have PyCBC and some basic lalsuite tools installed. You can do this in a terminal with pip:

```
! pip install lalsuite pycbc
```

<span style="color:gray">Jess notes: this notebook was made with a PyCBC 1.8.0 kernel. </span>

### Learning goals

With this tutorial, you learn how to:

* Generate source waveforms detectable by LIGO, Virgo, KAGRA
* Use PyCBC to run a matched filter search on gravitational wave detector data
* Estimate the significance of a trigger given a background distribution
* **Challenge**: Code up a trigger coincidence algorithm

This tutorial borrows heavily from tutorials made for the [LIGO-Virgo Open Data Workshop](https://www.gw-openscience.org/static/workshop1/course.html) by Alex Nitz. You can find PyCBC documentation and additional examples [here](http://pycbc.org/pycbc/latest/html/py-modindex.html).

Let's get started!

___

## Generate a gravitational wave signal waveform

We'll use a popular waveform approximant ([SEOBNRv4](https://arxiv.org/pdf/1611.03703.pdf)) to generate waveforms that would be detectable by LIGO, Virgo, or KAGRA.

First we import the packages we'll need.

```
from pycbc.waveform import get_td_waveform
import pylab
```

Let's see what these waveforms look like for different component masses. We'll assume the two compact objects have equal masses, and we'll set a lower frequency bound of 30 Hz (determined by the sensitivity of our detectors).

We can also set a time sample rate with `get_td_waveform`. Let's try a rate of 4096 Hz.

Let's make a plot of the plus polarization (`hp`) to get a feel for what the waveforms look like.

```
for m in [5, 10, 30, 100]:
    hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
                         mass1=m,
                         mass2=m,
                         delta_t=1.0/4096,
                         f_lower=30)

    pylab.plot(hp.sample_times, hp, label='$M_{\odot 1,2}=%s$' % m)

pylab.legend(loc='upper left')
pylab.ylabel('GW strain (plus polarization)')
pylab.grid()
pylab.xlabel('Time (s)')
pylab.show()
```

Now let's see what happens if we decrease the lower frequency bound from 30 Hz to 15 Hz.

```
for m in [5, 10, 30, 100]:
    hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
                         mass1=m,
                         mass2=m,
                         delta_t=1.0/4096,
                         f_lower=15)

    pylab.plot(hp.sample_times, hp, label='$M_{\odot 1,2}=%s$' % m)

pylab.legend(loc='upper left')
pylab.ylabel('GW strain (plus polarization)')
pylab.grid()
pylab.xlabel('Time (s)')
pylab.show()
```

---

### Exercise 1

What happens to the waveform when the total mass (let's say 20 M<sub>sol</sub>) stays the same, but the mass ratio between the component masses changes?

Compare the waveforms for a m<sub>1</sub> = m<sub>2</sub> = 10 M<sub>sol</sub> system, and a m<sub>1</sub> = 2 M<sub>sol</sub>, m<sub>2</sub> = 18 M<sub>sol</sub> system. What do you notice?

```
# complete
```

### Exercise 2

How much longer (in signal duration) would LIGO and Virgo (and KAGRA) be able to detect a 1.4-1.4 M<sub>sol</sub> binary neutron star system if our detectors were sensitive down to 10 Hz instead of 30 Hz? **Note: you'll need to use a different waveform approximant here. Try TaylorF2.**

<span style="color:gray">Jess notes: this would be a major benefit of next-generation ("3G") ground-based gravitational wave detectors.</span>

```
# complete
```

---

### Distance vs. signal amplitude

Let's see what happens when we scale the distance (in units of Megaparsecs) for a system with a total mass of 20 M<sub>sol</sub>.
<span style="color:gray">Note: redshift effects are not included here.</span>

```
for d in [100, 500, 1000]:
    hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
                         mass1=10,
                         mass2=10,
                         delta_t=1.0/4096,
                         f_lower=30,
                         distance=d)

    pylab.plot(hp.sample_times, hp, label='Distance=%s Mpc' % d)

pylab.grid()
pylab.xlabel('Time (s)')
pylab.ylabel('GW strain (plus polarization)')
pylab.legend(loc='upper left')
pylab.show()
```

---

## Run a matched filter search on gravitational wave detector data

PyCBC also maintains a catalog of open data as PyCBC time series objects, easy to manipulate with PyCBC tools. Let's try using that and importing the data around the first detection, GW150914.

```
import pylab
from pycbc.catalog import Merger
from pycbc.filter import resample_to_delta_t, highpass

merger = Merger("GW150914")

# Get the data from the Hanford detector
strain = merger.strain('H1')
```

### Data pre-conditioning

Once we've imported the open data from this alternate source, the first thing we'll need to do is **pre-condition** the data. This serves a few purposes:

* 1) reduces the dynamic range of the data
* 2) suppresses high amplitudes at low frequencies, which can introduce numerical artifacts
* 3) if we don't need high frequency information, downsampling allows us to compute our matched filter result faster

Let's try highpassing above 15 Hz and downsampling to 2048 Hz, and we'll make a plot to see what the result looks like:

```
# Remove the low frequency content and downsample the data to 2048Hz
strain = resample_to_delta_t(highpass(strain, 15.0), 1.0/2048)

pylab.plot(strain.sample_times, strain)
pylab.xlabel('Time (s)')
```

Notice the large amplitude excursions in the data at the start and end of our data segment. This is **spectral leakage** caused by filters we applied to the boundaries ringing off the discontinuities where the data suddenly starts and ends (for a time up to the length of the filter).

To avoid this we should trim the ends of the data in all steps of our filtering. Let's try cropping a couple seconds off of either side.

```
# Remove 2 seconds of data from both the beginning and end
conditioned = strain.crop(2, 2)

pylab.plot(conditioned.sample_times, conditioned)
pylab.xlabel('Time (s)')
```

That's better.

### Calculating the spectral density of the data

Optimal matched filtering requires *whitening*; weighting the frequency components of the potential signal and data by the estimated noise amplitude.

Let's compute the power spectral density (PSD) of our conditioned data.

```
from pycbc.psd import interpolate, inverse_spectrum_truncation

# Estimate the power spectral density
# We use 4 second samples of our time series in Welch's method.
psd = conditioned.psd(4)

# Now that we have the psd we need to interpolate it to match our data
# and then limit the filter length of 1 / PSD. After this, we can
# directly use this PSD to filter the data in a controlled manner

psd = interpolate(psd, conditioned.delta_f)

# 1/PSD will now act as a filter with an effective length of 4 seconds
# Since the data has been highpassed above 15 Hz, and will have low values
# below this we need to inform the function to not include frequencies
# below this frequency.
psd = inverse_spectrum_truncation(psd, 4 * conditioned.sample_rate, low_frequency_cutoff=15) ``` ---- ### Define a signal model Recall that matched filtering is essentially integrating the inner product between your data and your signal model in frequency or time (after weighting frequencies correctly) as you slide your signal model over your data in time. If there is a signal in the data that matches your 'template', we will see a large value of this inner product (the SNR, or 'signal to noise ratio') at that time. In a full search, we would grid over the parameters and calculate the SNR time series for each template in our template bank Here we'll define just one template. Let's assume equal masses (which is within the posterior probability of GW150914). Because we want to match our signal model with each time sample in our data, let's also rescale our signal model vector to match the same number of time samples as our data vector (**<- very important!**). Let's also plot the output to see what it looks like. ``` m = 36 # Solar masses hp, hc = get_td_waveform(approximant="SEOBNRv4_opt", mass1=m, mass2=m, delta_t=conditioned.delta_t, f_lower=20) # We should resize the vector of our template to match our data hp.resize(len(conditioned)) pylab.plot(hp) pylab.xlabel('Time samples') ``` Note that the waveform template currently begins at the start of the vector. However, we want our SNR time series (the inner product between our data and our template) to track with the approximate merger time. To do this, we need to shift our template so that the merger is approximately at the first bin of the data. For this reason, waveforms returned from `get_td_waveform` have their merger stamped with time zero, so we can easily shift the merger into the right position to compute our SNR time series. Let's try shifting our template time and plot the output. ``` template = hp.cyclic_time_shift(hp.start_time) pylab.plot(template) pylab.xlabel('Time samples') ``` --- ### Calculate an SNR time series Now that we've pre-conditioned our data and defined a signal model, we can compute the output of our matched filter search. ``` from pycbc.filter import matched_filter import numpy snr = matched_filter(template, conditioned, psd=psd, low_frequency_cutoff=20) pylab.figure(figsize=[10, 4]) pylab.plot(snr.sample_times, abs(snr)) pylab.xlabel('Time (s)') pylab.ylabel('SNR') ``` Note that as we expect, there is some corruption at the start and end of our SNR time series by the template filter and the PSD filter. To account for this, we can smoothly zero out 4 seconds (the length of the PSD filter) at the beginning and end for the PSD filtering. We should remove an 4 additional seconds at the beginning to account for the template length, although this is somewhat generous for so short a template. A longer signal such as from a BNS, would require much more padding at the beginning of the vector. ``` snr = snr.crop(4 + 4, 4) pylab.figure(figsize=[10, 4]) pylab.plot(snr.sample_times, abs(snr)) pylab.ylabel('Signal-to-noise') pylab.xlabel('Time (s)') pylab.show() ``` Finally, now that the output is properly cropped, we can find the peak of our SNR time series and estimate the merger time and associated SNR of any event candidate within the data. ``` peak = abs(snr).numpy().argmax() snrp = snr[peak] time = snr.sample_times[peak] print("We found a signal at {}s with SNR {}".format(time, abs(snrp))) ``` You found the first gravitational wave detection in LIGO Hanford data! Nice work. 
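If you want a closer look at the peak, here is a small optional snippet (just a sketch reusing the `snr` and `time` variables defined above; it is nothing more than a zoomed version of the plot you already made):

```
# Optional: zoom in on the SNR time series around the recovered merger time
pylab.figure(figsize=[10, 4])
pylab.plot(snr.sample_times, abs(snr))
pylab.xlim(time - 0.1, time + 0.1)  # +/- 0.1 s window around the peak
pylab.xlabel('Time (s)')
pylab.ylabel('SNR')
pylab.show()
```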
---

### Exercise 3

How does the SNR change if you re-compute the matched filter result using a signal model with component masses that are closer to the current estimates for GW150914, say m<sub>1</sub> = 36 M<sub>sol</sub> and m<sub>2</sub> = 31 M<sub>sol</sub>?

```
# complete
```

### Exercise 4

**Network SNR** is the quadrature sum of the single-detector SNRs from each contributing detector. GW150914 was detected by H1 and L1. Try calculating the network SNR (you'll need to estimate the SNR in L1 first), and compare your answer to the network PyCBC SNR as reported in the [GWTC-1 catalog](https://arxiv.org/abs/1811.12907).

```
# complete
```

---

## Estimate the single-detector significance of an event candidate

Great, we found a large spike in SNR! What are the chances this is a real astrophysical signal? How often would detector noise produce this by chance?

Let's plot a histogram of SNR values output by our matched filtering analysis for this time and see how much this trigger stands out.

```
# import what we need
from scipy.stats import norm
from math import pi
from math import exp

# make a histogram of SNR values
background = (abs(snr))

# plot the histogram to check out any other outliers
pylab.hist(background, bins=50)
pylab.xlabel('SNR')
pylab.semilogy()

# use norm.fit to fit a normal (Gaussian) distribution
(mu, sigma) = norm.fit(background)

# print out the mean and standard deviation of the fit
print('The fit mean = %f and the fit std dev = %f' % (mu, sigma))
```

### Exercise 5

At what single-detector SNR is the significance of a trigger > 5 sigma?

Remember that sigma is constant for a normal distribution (read: this should be simple multiplication now that we have estimated what 1 sigma is).

```
# complete
```

---

## Challenge

Our matched filter analysis assumes the noise is *stationary* and *Gaussian*, which is not a good assumption, and this short data set isn't representative of all the various things that can go bump in the detector (remember the phone?).

**The simple significance estimate above won't work as soon as we encounter a glitch!** We need a better noise background estimate, and we can leverage our detector network to help make our signals stand out from our background.

Observing a gravitational wave signal in multiple detectors is an important cross-check to minimize the impact of transient detector noise. Our strategy:

* We look for loud triggers within a time window to identify foreground events that occur within the gravitational wave travel time (v=c) between detectors, but could come from any sky position.
* We use time slides to estimate the noise background for a network of detectors.

If you still have time, try coding up an algorithm that checks for time coincidence between triggers in different detectors (a minimal skeleton of the check is sketched after the cell below). Remember that the maximum gravitational wave travel time between the LIGO detectors is ~10 ms. Check your code with the GPS times for the H1 and L1 triggers you identified for GW150914.

```
# complete if time
```
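A minimal sketch of the core of such a coincidence test (the trigger times here are hypothetical placeholders; you would extract them from each detector's SNR time series as above):

```
# Sketch only: core of a two-detector time-coincidence check.
# time_h1 and time_l1 are placeholder GPS trigger times that you would
# compute from the H1 and L1 SNR time series.
max_travel_time = 0.010  # ~10 ms maximum gravitational wave travel time between the LIGO sites

def is_coincident(time_h1, time_l1, window=max_travel_time):
    """Return True if two single-detector triggers are consistent in time."""
    return abs(time_h1 - time_l1) <= window
```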
github_jupyter
<a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/qo20b88v1hbjztubt06609ovs85q8fau.png" width="400px" align="center"></a> <h1 align="center"><font size="5">RESTRICTED BOLTZMANN MACHINES</font></h1> <h3>Introduction</h3> <b>Restricted Boltzmann Machine (RBM):</b> RBMs are shallow neural nets that learn to reconstruct data by themselves in an unsupervised fashion. <h4>Why are RBMs important?</h4> It can automatically extract <b>meaningful</b> features from a given input. <h4>How does it work?</h4> RBM is a 2 layer neural network. Simply, RBM takes the inputs and translates those into a set of binary values that represents them in the hidden layer. Then, these numbers can be translated back to reconstruct the inputs. Through several forward and backward passes, the RBM will be trained, and a trained RBM can reveal which features are the most important ones when detecting patterns. <h4>What are the applications of RBM?</h4> RBM is useful for <a href='http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf'> Collaborative Filtering</a>, dimensionality reduction, classification, regression, feature learning, topic modeling and even <b>Deep Belief Networks</b>. <h4>Is RBM a generative or Discriminative model?</h4> RBM is a generative model. Let me explain it by first, see what is different between discriminative and generative models: <b>Discriminative:</b> Consider a classification problem in which we want to learn to distinguish between Sedan cars (y = 1) and SUV cars (y = 0), based on some features of cars. Given a training set, an algorithm like logistic regression tries to find a straight line—that is, a decision boundary—that separates the suv and sedan. <b>Generative:</b> looking at cars, we can build a model of what Sedan cars look like. Then, looking at SUVs, we can build a separate model of what SUV cars look like. Finally, to classify a new car, we can match the new car against the Sedan model, and match it against the SUV model, to see whether the new car looks more like the SUV or Sedan. Generative Models specify a probability distribution over a dataset of input vectors. We can do both supervise and unsupervised tasks with generative models: <ul> <li>In an unsupervised task, we try to form a model for P(x), where P is the probability given x as an input vector.</li> <li>In the supervised task, we first form a model for P(x|y), where P is the probability of x given y(the label for x). For example, if y = 0 indicates whether a car is a SUV or y = 1 indicates indicate a car is a Sedan, then p(x|y = 0) models the distribution of SUVs’ features, and p(x|y = 1) models the distribution of Sedans’ features. If we manage to find P(x|y) and P(y), then we can use <code>Bayes rule</code> to estimate P(y|x), because: $$p(y|x) = \frac{p(x|y)p(y)}{p(x)}$$</li> </ul> Now the question is, can we build a generative model, and then use it to create synthetic data by directly sampling from the modeled probability distributions? Lets see. 
<h2>Table of Contents</h2> <ol> <li><a href="#ref1">Initialization</a></li> <li><a href="#ref2">RBM layers</a></li> <li><a href="#ref3">What RBM can do after training?</a></li> <li><a href="#ref4">How to train the model?</a></li> <li><a href="#ref5">Learned features</a></li> </ol> <p></p> </div> <br> <hr> <a id="ref1"></a> <h3>Initialization</h3> First we have to load the utility file which contains different utility functions that are not connected in any way to the networks presented in the tutorials, but rather help in processing the outputs into a more understandable way. ``` import urllib.request with urllib.request.urlopen("http://deeplearning.net/tutorial/code/utils.py") as url: response = url.read() target = open('utils.py', 'w') target.write(response.decode('utf-8')) target.close() ``` Now, we load in all the packages that we use to create the net including the TensorFlow package: ``` import tensorflow as tf import numpy as np from tensorflow.examples.tutorials.mnist import input_data #!pip install pillow from PIL import Image from utils import tile_raster_images import matplotlib.pyplot as plt %matplotlib inline ``` <hr> <a id="ref2"></a> <h3>RBM layers</h3> An RBM has two layers. The first layer of the RBM is called the <b>visible</b> (or input layer). Imagine that our toy example, has only vectors with 7 values, so the visible layer must have j=7 input nodes. The second layer is the <b>hidden</b> layer, which possesses i neurons in our case. Each hidden node can have either 0 or 1 values (i.e., si = 1 or si = 0) with a probability that is a logistic function of the inputs it receives from the other j visible units, called for example, p(si = 1). For our toy sample, we'll use 2 nodes in the hidden layer, so i = 2. <center><img src="https://ibm.box.com/shared/static/eu26opvcefgls6vnwuo29uwp0nudmokh.png" alt="RBM Model" style="width: 400px;"></center> Each node in the first layer also has a <b>bias</b>. We will denote the bias as “v_bias” for the visible units. The <b>v_bias</b> is shared among all visible units. Here we define the <b>bias</b> of second layer as well. We will denote the bias as “h_bias” for the hidden units. The <b>h_bias</b> is shared among all hidden units ``` v_bias = tf.placeholder("float", [7]) h_bias = tf.placeholder("float", [2]) ``` We have to define weights among the input layer and hidden layer nodes. In the weight matrix, the number of rows are equal to the input nodes, and the number of columns are equal to the output nodes. Let <b>W</b> be the Tensor of 7x2 (7 - number of visible neurons, 2 - number of hidden neurons) that represents weights between neurons. ``` W = tf.constant(np.random.normal(loc=0.0, scale=1.0, size=(7, 2)).astype(np.float32)) ``` <hr> <a id="ref3"></a> <h3>What RBM can do after training?</h3> Think RBM as a model that has been trained based on images of a dataset of many SUV and Sedan cars. Also, imagine that the RBM network has only two hidden nodes, one for the weight and, and one for the size of cars, which in a sense, their different configurations represent different cars, one represent SUV cars and one for Sedan. In a training process, through many forward and backward passes, RBM adjust its weights to send a stronger signal to either the SUV node (0, 1) or the Sedan node (1, 0) in the hidden layer, given the pixels of images. Now, given a SUV in hidden layer, which distribution of pixels should we expect? RBM can give you 2 things. First, it encodes your images in hidden layer. 
Second, it gives you the probability of observing a case, given some hidden values. <h3>How to inference?</h3> RBM has two phases: <ul> <li>Forward Pass</li> <li>Backward Pass or Reconstruction</li> </ul> <b>Phase 1) Forward pass:</b> Input one training sample (one image) <b>X</b> through all visible nodes, and pass it to all hidden nodes. Processing happens in each node in the hidden layer. This computation begins by making stochastic decisions about whether to transmit that input or not (i.e. to determine the state of each hidden layer). At the hidden layer's nodes, <b>X</b> is multiplied by a <b>$W_{ij}$</b> and added to <b>h_bias</b>. The result of those two operations is fed into the sigmoid function, which produces the node’s output, $p({h_j})$, where j is the unit number. $p({h_j})= \sigma(\sum_i w_{ij} x_i)$, where $\sigma()$ is the logistic function. Now lets see what $p({h_j})$ represents. In fact, it is the probabilities of the hidden units. And, all values together are called <b>probability distribution</b>. That is, RBM uses inputs x to make predictions about hidden node activations. For example, imagine that the values of $h_p$ for the first training item is [0.51 0.84]. It tells you what is the conditional probability for each hidden neuron to be at Phase 1): <ul> <li>p($h_{1}$ = 1|V) = 0.51</li> <li>($h_{2}$ = 1|V) = 0.84</li> </ul> As a result, for each row in the training set, <b>a vector/tensor</b> is generated, which in our case it is of size [1x2], and totally n vectors ($p({h})$=[nx2]). We then turn unit $h_j$ on with probability $p(h_{j}|V)$, and turn it off with probability $1 - p(h_{j}|V)$. Therefore, the conditional probability of a configuration of h given v (for a training sample) is: $$p(\mathbf{h} \mid \mathbf{v}) = \prod_{j=0}^H p(h_j \mid \mathbf{v})$$ Now, sample a hidden activation vector <b>h</b> from this probability distribution $p({h_j})$. That is, we sample the activation vector from the probability distribution of hidden layer values. Before we go further, let's look at a toy example for one case out of all input. Assume that we have a trained RBM, and a very simple input vector such as [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], lets see what would be the output of forward pass: ``` sess = tf.Session() X = tf.constant([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]]) v_state = X print ("Input: ", sess.run(v_state)) h_bias = tf.constant([0.1, 0.1]) print ("hb: ", sess.run(h_bias)) print ("w: ", sess.run(W)) # Calculate the probabilities of turning the hidden units on: h_prob = tf.nn.sigmoid(tf.matmul(v_state, W) + h_bias) #probabilities of the hidden units print ("p(h|v): ", sess.run(h_prob)) # Draw samples from the distribution: h_state = tf.nn.relu(tf.sign(h_prob - tf.random_uniform(tf.shape(h_prob)))) #states print ("h0 states:", sess.run(h_state)) ``` <b>Phase 2) Backward Pass (Reconstruction):</b> The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers. So, in the second phase (i.e. reconstruction phase), the samples from the hidden layer (i.e. h) play the role of input. That is, <b>h</b> becomes the input in the backward pass. The same weight matrix and visible layer biases are used to go through the sigmoid function. The produced output is a reconstruction which is an approximation of the original input. 
``` vb = tf.constant([0.1, 0.2, 0.1, 0.1, 0.1, 0.2, 0.1]) print ("b: ", sess.run(vb)) v_prob = sess.run(tf.nn.sigmoid(tf.matmul(h_state, tf.transpose(W)) + vb)) print ("p(vi∣h): ", v_prob) v_state = tf.nn.relu(tf.sign(v_prob - tf.random_uniform(tf.shape(v_prob)))) print ("v probability states: ", sess.run(v_state)) ``` RBM learns a probability distribution over the input, and then, after being trained, the RBM can generate new samples from the learned probability distribution. As you know, <b>probability distribution</b>, is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment. The (conditional) probability distribution over the visible units v is given by $p(\mathbf{v} \mid \mathbf{h}) = \prod_{i=0}^V p(v_i \mid \mathbf{h}),$ where, $p(v_i \mid \mathbf{h}) = \sigma\left( a_i + \sum_{j=0}^H w_{ji} h_j \right)$ so, given current state of hidden units and weights, what is the probability of generating [1. 0. 0. 1. 0. 0. 0.] in reconstruction phase, based on the above <b>probability distribution</b> function? ``` inp = sess.run(X) print(inp) print(v_prob[0]) v_probability = 1 for elm, p in zip(inp[0],v_prob[0]) : if elm ==1: v_probability *= p else: v_probability *= (1-p) v_probability ``` How similar X and V vectors are? Of course, the reconstructed values most likely will not look anything like the input vector because our network has not trained yet. Our objective is to train the model in such a way that the input vector and reconstructed vector to be same. Therefore, based on how different the input values look to the ones that we just reconstructed, the weights are adjusted. <hr> <h2>MNIST</h2> We will be using the MNIST dataset to practice the usage of RBMs. The following cell loads the MNIST dataset. ``` mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels ``` Lets look at the dimension of the images. ``` trX[1].shape ``` MNIST images have 784 pixels, so the visible layer must have 784 input nodes. For our case, we'll use 50 nodes in the hidden layer, so i = 50. ``` vb = tf.placeholder("float", [784]) hb = tf.placeholder("float", [50]) ``` Let <b>W</b> be the Tensor of 784x50 (784 - number of visible neurons, 50 - number of hidden neurons) that represents weights between the neurons. ``` W = tf.placeholder("float", [784, 50]) ``` Lets define the visible layer: ``` v0_state = tf.placeholder("float", [None, 784]) ``` Now, we can define hidden layer: ``` h0_prob = tf.nn.sigmoid(tf.matmul(v0_state, W) + hb) #probabilities of the hidden units h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random_uniform(tf.shape(h0_prob)))) #sample_h_given_X ``` Now, we define reconstruction part: ``` v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb) v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random_uniform(tf.shape(v1_prob)))) #sample_v_given_h ``` <h3>What is objective function?</h3> <b>Goal</b>: Maximize the likelihood of our data being drawn from that distribution <b>Calculate error:</b> In each epoch, we compute the "error" as a sum of the squared difference between step 1 and step n, e.g the error shows the difference between the data and its reconstruction. <b>Note:</b> tf.reduce_mean computes the mean of elements across dimensions of a tensor. 
``` err = tf.reduce_mean(tf.square(v0_state - v1_state)) ``` <a id="ref4"></a> <h3>How to train the model?</h3> <b>Warning!!</b> The following part discuss how to train the model which needs some algebra background. Still, you can skip this part and run the next cells. As mentioned, we want to give a high probability to the input data we train on. So, in order to train an RBM, we have to maximize the product of probabilities assigned to all rows v (images) in the training set V (a matrix, where each row of it is treated as a visible vector v): <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/d42e9f5aad5e1a62b11b119c9315236383c1864a"> Which is equivalent, maximizing the expected log probability of V: <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/ba0ceed99dca5ff1d21e5ace23f5f2223f19efc0"> So, we have to update the weights wij to increase p(v) for all v in our training data during training. So we have to calculate the derivative: $$\frac{\partial \log p(\mathbf v)}{\partial w_{ij}}$$ This cannot be easily done by typical <b>gradient descent (SGD)</b>, so we can use another approach, which has 2 steps: <ol> <li>Gibbs Sampling</li> <li>Contrastive Divergence</li> </ol> <h3>Gibbs Sampling</h3> First, given an input vector v we are using p(h|v) for prediction of the hidden values h. <ul> <li>$p(h|v) = sigmoid(X \otimes W + hb)$</li> <li>h0 = sampleProb(h0)</li> </ul> Then, knowing the hidden values, we use p(v|h) for reconstructing of new input values v. <ul> <li>$p(v|h) = sigmoid(h0 \otimes transpose(W) + vb)$</li> <li>$v1 = sampleProb(v1)$ (Sample v given h)</li> </ul> This process is repeated k times. After k iterations we obtain an other input vector vk which was recreated from original input values v0 or X. Reconstruction steps: <ul> <li> Get one data point from data set, like <i>x</i>, and pass it through the net</li> <li>Pass 0: (x) $\Rightarrow$ (h0) $\Rightarrow$ (v1) (v1 is reconstruction of the first pass)</li> <li>Pass 1: (v1) $\Rightarrow$ (h1) $\Rightarrow$ (v2) (v2 is reconstruction of the second pass)</li> <li>Pass 2: (v2) $\Rightarrow$ (h2) $\Rightarrow$ (v3) (v3 is reconstruction of the third pass)</li> <li>Pass n: (vk) $\Rightarrow$ (hk+1) $\Rightarrow$ (vk+1)(vk is reconstruction of the nth pass)</li> </ul> <h4>What is sampling here (sampleProb)?</h4> In forward pass: We randomly set the values of each hi to be 1 with probability $sigmoid(v \otimes W + hb)$. - To sample h given v means to sample from the conditional probability distribution P(h|v). It means that you are asking what are the probabilities of getting a specific set of values for the hidden neurons, given the values v for the visible neurons, and sampling from this probability distribution. In reconstruction: We randomly set the values of each vi to be 1 with probability $ sigmoid(h \otimes transpose(W) + vb)$. <h3>contrastive divergence (CD-k)</h3> The update of the weight matrix is done during the Contrastive Divergence step. Vectors v0 and vk are used to calculate the activation probabilities for hidden values h0 and hk. The difference between the outer products of those probabilities with input vectors v0 and vk results in the update matrix: $\Delta W =v0 \otimes h0 - vk \otimes hk$ Contrastive Divergence is actually matrix of values that is computed and used to adjust values of the W matrix. Changing W incrementally leads to training of W values. 
Then on each step (epoch), W is updated to a new value W' through the equation below: $W' = W + alpha * \Delta W$ <b>What is Alpha?</b> Here, alpha is some small step rate and is also known as the "learning rate". Ok, lets assume that k=1, that is we just get one more step: ``` h1_prob = tf.nn.sigmoid(tf.matmul(v1_state, W) + hb) h1_state = tf.nn.relu(tf.sign(h1_prob - tf.random_uniform(tf.shape(h1_prob)))) #sample_h_given_X alpha = 0.01 W_Delta = tf.matmul(tf.transpose(v0_state), h0_prob) - tf.matmul(tf.transpose(v1_state), h1_prob) update_w = W + alpha * W_Delta update_vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0) update_hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0) ``` Let's start a session and initialize the variables: ``` cur_w = np.zeros([784, 50], np.float32) cur_vb = np.zeros([784], np.float32) cur_hb = np.zeros([50], np.float32) prv_w = np.zeros([784, 50], np.float32) prv_vb = np.zeros([784], np.float32) prv_hb = np.zeros([50], np.float32) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) ``` Lets look at the error of the first run: ``` sess.run(err, feed_dict={v0_state: trX, W: prv_w, vb: prv_vb, hb: prv_hb}) #Parameters epochs = 5 batchsize = 100 weights = [] errors = [] for epoch in range(epochs): for start, end in zip( range(0, len(trX), batchsize), range(batchsize, len(trX), batchsize)): batch = trX[start:end] cur_w = sess.run(update_w, feed_dict={ v0_state: batch, W: prv_w, vb: prv_vb, hb: prv_hb}) cur_vb = sess.run(update_vb, feed_dict={v0_state: batch, W: prv_w, vb: prv_vb, hb: prv_hb}) cur_hb = sess.run(update_hb, feed_dict={ v0_state: batch, W: prv_w, vb: prv_vb, hb: prv_hb}) prv_w = cur_w prv_vb = cur_vb prv_hb = cur_hb if start % 10000 == 0: errors.append(sess.run(err, feed_dict={v0_state: trX, W: cur_w, vb: cur_vb, hb: cur_hb})) weights.append(cur_w) print ('Epoch: %d' % epoch,'reconstruction error: %f' % errors[-1]) plt.plot(errors) plt.xlabel("Batch Number") plt.ylabel("Error") plt.show() ``` What is the final weight after training? ``` uw = weights[-1].T print (uw) # a weight matrix of shape (50,784) ``` <a id="ref5"></a> <h3>Learned features</h3> We can take each hidden unit and visualize the connections between that hidden unit and each element in the input vector. In our case, we have 50 hidden units. Lets visualize those. Let's plot the current weights: <b>tile_raster_images</b> helps in generating an easy to grasp image from a set of samples or weights. It transform the <b>uw</b> (with one flattened image per row of size 784), into an array (of size $25\times20$) in which images are reshaped and laid out like tiles on a floor. ``` tile_raster_images(X=cur_w.T, img_shape=(28, 28), tile_shape=(5, 10), tile_spacing=(1, 1)) import matplotlib.pyplot as plt from PIL import Image %matplotlib inline image = Image.fromarray(tile_raster_images(X=cur_w.T, img_shape=(28, 28) ,tile_shape=(5, 10), tile_spacing=(1, 1))) ### Plot image plt.rcParams['figure.figsize'] = (18.0, 18.0) imgplot = plt.imshow(image) imgplot.set_cmap('gray') ``` Each tile in the above visualization corresponds to a vector of connections between a hidden unit and visible layer's units. Let's look at one of the learned weights corresponding to one of hidden units for example. In this particular square, the gray color represents weight = 0, and the whiter it is, the more positive the weights are (closer to 1). Conversely, the darker pixels are, the more negative the weights. 
The positive pixels will increase the probability of activation in hidden units (after multiplying by input/visible pixels), and negative pixels will decrease the probability of a unit hidden to be 1 (activated). So, why is this important? So we can see that this specific square (hidden unit) can detect a feature (e.g. a "/" shape) and if it exists in the input. ``` from PIL import Image image = Image.fromarray(tile_raster_images(X =cur_w.T[10:11], img_shape=(28, 28),tile_shape=(1, 1), tile_spacing=(1, 1))) ### Plot image plt.rcParams['figure.figsize'] = (4.0, 4.0) imgplot = plt.imshow(image) imgplot.set_cmap('gray') ``` Let's look at the reconstruction of an image now. Imagine that we have a destructed image of figure 3. Lets see if our trained network can fix it: First we plot the image: ``` !wget -O destructed3.jpg https://ibm.box.com/shared/static/vvm1b63uvuxq88vbw9znpwu5ol380mco.jpg img = Image.open('destructed3.jpg') img ``` Now let's pass this image through the net: ``` # convert the image to a 1d numpy array sample_case = np.array(img.convert('I').resize((28,28))).ravel().reshape((1, -1))/255.0 ``` Feed the sample case into the network and reconstruct the output: ``` hh0_p = tf.nn.sigmoid(tf.matmul(v0_state, W) + hb) #hh0_s = tf.nn.relu(tf.sign(hh0_p - tf.random_uniform(tf.shape(hh0_p)))) hh0_s = tf.round(hh0_p) hh0_p_val,hh0_s_val = sess.run((hh0_p, hh0_s), feed_dict={ v0_state: sample_case, W: prv_w, hb: prv_hb}) print("Probability nodes in hidden layer:" ,hh0_p_val) print("activated nodes in hidden layer:" ,hh0_s_val) # reconstruct vv1_p = tf.nn.sigmoid(tf.matmul(hh0_s_val, tf.transpose(W)) + vb) rec_prob = sess.run(vv1_p, feed_dict={ hh0_s: hh0_s_val, W: prv_w, vb: prv_vb}) ``` Here we plot the reconstructed image: ``` img = Image.fromarray(tile_raster_images(X=rec_prob, img_shape=(28, 28),tile_shape=(1, 1), tile_spacing=(1, 1))) plt.rcParams['figure.figsize'] = (4.0, 4.0) imgplot = plt.imshow(img) imgplot.set_cmap('gray') ``` <hr> ## Want to learn more? Running deep learning programs usually needs a high performance platform. __PowerAI__ speeds up deep learning and AI. Built on IBM’s Power Systems, __PowerAI__ is a scalable software platform that accelerates deep learning and AI with blazing performance for individual users or enterprises. The __PowerAI__ platform supports popular machine learning libraries and dependencies including TensorFlow, Caffe, Torch, and Theano. You can use [PowerAI on IMB Cloud](https://cocl.us/ML0120EN_PAI). Also, you can use __Watson Studio__ to run these notebooks faster with bigger datasets.__Watson Studio__ is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, __Watson Studio__ enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of __Watson Studio__ users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX).This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies. ### Thanks for completing this lesson! 
Notebook created by: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>

### References:

- https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine
- http://deeplearning.net/tutorial/rbm.html
- http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf
- http://imonad.com/rbm/restricted-boltzmann-machine/

<hr>

Copyright &copy; 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
github_jupyter
<a href="https://colab.research.google.com/github/terrainthesky-hub/DS-Unit-2-Kaggle-Challenge/blob/master/module4-classification-metrics/Lesley_Rich_224_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #Confusion matrix is at the bottom!! ************** import pandas as pd import os from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(r'C:\Users\Lesley\Downloads\train_features.csv'), pd.read_csv(r'C:\Users\Lesley\Downloads\train_labels.csv')) test = pd.read_csv(r'C:\Users\Lesley\Downloads\test_features.csv') train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['status_group'], random_state=42) train.shape, val.shape, test.shape import numpy as np def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # About 3% of the time, latitude has small values near zero, # outside Tanzania, so we'll treat these values like zero. X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. # Also create a "missing indicator" column, because the fact that # values are missing may be a predictive signal. cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) X[col+'_MISSING'] = X[col].isnull() # Drop duplicate columns duplicates = ['quantity_group', 'payment_type'] X = X.drop(columns=duplicates) # Drop recorded_by (never varies) and id (always varies, random) unusable_variance = ['recorded_by'] X = X.drop(columns=unusable_variance) # Convert date_recorded to datetime X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True) # Extract components from date_recorded, then drop the original column X['year_recorded'] = X['date_recorded'].dt.year X['month_recorded'] = X['date_recorded'].dt.month X['day_recorded'] = X['date_recorded'].dt.day X = X.drop(columns='date_recorded') # Engineer feature: how many years from construction_year to date_recorded X['years'] = X['year_recorded'] - X['construction_year'] X['years_MISSING'] = X['years'].isnull() # return the wrangled dataframe return X train = wrangle(train) val = wrangle(val) test = wrangle(test) target = 'status_group' train_features = train.drop(columns=[target, 'id']) numeric_features = train_features.select_dtypes(include='number').columns.tolist() cardinality = train_features.select_dtypes(exclude='number').nunique() categorical_features = cardinality[cardinality <= 50].index.tolist() features = numeric_features + categorical_features X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.model_selection import cross_val_score from sklearn.pipeline import make_pipeline from sklearn.feature_selection import f_regression, SelectKBest from sklearn.ensemble import RandomForestClassifier pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), RandomForestClassifier() ) k = 3 score = cross_val_score(pipeline, X_train, y_train, cv=k, scoring='accuracy') print(f'Accuracy for {k} folds', score) from scipy.stats import randint, uniform from sklearn.model_selection import GridSearchCV, RandomizedSearchCV pipeline = make_pipeline( 
    ce.OrdinalEncoder(),
    SimpleImputer(),
    RandomForestClassifier()
)

param_distributions = {
    'simpleimputer__strategy': ['mean', 'median'],
    'randomforestclassifier__n_estimators': [23, 24, 25, 26, 27, 28, 29, 30],
    'randomforestclassifier__max_depth': [5, 10, 15, 20, 25, None],
    'randomforestclassifier__max_features': uniform(0, 1),
    'randomforestclassifier__min_samples_leaf': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'randomforestclassifier__min_samples_split': [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
}

search = RandomizedSearchCV(
    pipeline,
    param_distributions=param_distributions,
    n_iter=10,
    cv=3,
    scoring='accuracy',
    verbose=10,
    return_train_score=True,
    n_jobs=-1)

search.fit(X_train, y_train);

pipeline.named_steps['randomforestclassifier']

pipeline = search.best_estimator_
pipeline

print('Best hyperparameters', search.best_params_)

# Import sklearn before checking its version
import sklearn
sklearn.__version__

!pip install --user --upgrade scikit-learn

import sklearn
sklearn.__version__

y_pred = pipeline.predict(X_test)

path = r'C:\Users\Lesley\Desktop\Lambda\Lesley_Rich'
submission = test[['id']].copy()
submission['status_group'] = y_pred
# submission['status_group']
submission.to_csv(os.path.join(path, 'DecisionTreeWaterPumpSub3.csv'), index=False)

from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt

plot_confusion_matrix(pipeline, X_val, y_val,
                      values_format='.0f', xticks_rotation='vertical')
plt.show()
```
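As an optional follow-up (a sketch, not part of the original assignment), the same fitted pipeline can be used to print the validation accuracy and a per-class precision/recall summary alongside the confusion matrix:

```
# Sketch: summarize validation performance of the tuned pipeline
from sklearn.metrics import accuracy_score, classification_report

y_val_pred = pipeline.predict(X_val)
print('Validation accuracy:', accuracy_score(y_val, y_val_pred))
print(classification_report(y_val, y_val_pred))
```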
github_jupyter
### Question 1

Create a function that takes a number as an argument and returns True or False depending on whether the number is symmetrical or not. A number is symmetrical when it is the same as its reverse.

#### Examples

- is_symmetrical(7227) ➞ True
- is_symmetrical(12567) ➞ False
- is_symmetrical(44444444) ➞ True
- is_symmetrical(9939) ➞ False
- is_symmetrical(1112111) ➞ True

```
def is_symmetrical(n):
    rev = str(n)[::-1]
    #print(rev)
    if rev == str(n):
        return True
    return False

is_symmetrical(7227)
is_symmetrical(12567)
is_symmetrical(44444444)
is_symmetrical(9939)
is_symmetrical(1112111)
```

### Question 2

Given a string of numbers separated by a comma and space, return the product of the numbers.

#### Examples

- multiply_nums("2, 3") ➞ 6
- multiply_nums("1, 2, 3, 4") ➞ 24
- multiply_nums("54, 75, 453, 0") ➞ 0
- multiply_nums("10, -2") ➞ -20

```
def multiply_nums(s):
    s = s.replace(' ', "")
    s = s.split(',')
    sum = 1
    for i in s:
        sum = sum * int(i)
    return sum

multiply_nums("2, 3")
multiply_nums("1, 2, 3, 4")
multiply_nums("54, 75, 453, 0")
multiply_nums("10, -2")
```

### Question 3

Create a function that squares every digit of a number.

#### Examples

- square_digits(9119) ➞ 811181
- square_digits(2483) ➞ 416649
- square_digits(3212) ➞ 9414

#### Notes

The function receives an integer and must return an integer.

```
def square_digits(n):
    sq = ''.join(str(int(i)**2) for i in str(n))
    return int(sq)

square_digits(9119)
square_digits(2483)
square_digits(3212)
```

### Question 4

Create a function that sorts a list and removes all duplicate items from it.

#### Examples

- setify([1, 3, 3, 5, 5]) ➞ [1, 3, 5]
- setify([4, 4, 4, 4]) ➞ [4]
- setify([5, 7, 8, 9, 10, 15]) ➞ [5, 7, 8, 9, 10, 15]
- setify([3, 3, 3, 2, 1]) ➞ [1, 2, 3]

```
def setify(l):
    m = []
    l.sort()
    l = set(l)
    for i in l:
        m.append(i)
    return m

setify([1, 3, 3, 5, 5])
setify([4, 4, 4, 4])
setify([5, 7, 8, 9, 10, 15])
setify([3, 3, 3, 2, 1])
```

### Question 5

Create a function that returns the mean of all digits.

#### Examples

- mean(42) ➞ 3
- mean(12345) ➞ 3
- mean(666) ➞ 6

#### Notes

- The mean of all digits is the sum of the digits divided by how many digits there are (e.g. the mean of the digits in 512 is (5+1+2)/3 = 8/3, truncated to the integer 2).
- The mean will always be an integer.

```
def mean(n):
    sum = 0
    length = len(str(n))
    for i in str(n):
        sum = sum + int(i)
    return int(sum/length)

mean(42)
mean(12345)
mean(666)
```
github_jupyter
```
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler

df_train = pd.read_excel('wpbc.train.xlsx')
df_test = pd.read_excel('wpbc.test.xlsx')

train = df_train
test = df_test

train.shape

test.shape

train.describe()

import seaborn
import matplotlib.pyplot as plt

def plot_df(df, name):
    corr = df[df.columns].corr()
    mask = np.zeros_like(corr, dtype=bool)
    mask[np.triu_indices_from(mask)] = True
    plt.figure(figsize=(20, 15))
    seaborn.set(font_scale=1.2)
    seaborn.heatmap(corr, mask=mask, center=0, annot=True,
                    square=True, linewidths=3, alpha=0.7)
    plt.title(name)

plot_df(train, 'Train')

print(train.columns)
class_name = input("Choose the class: ")

minmax_scaler = MinMaxScaler()
standard_scaler = StandardScaler()

temp_tr_ans = train[class_name]
temp_ts_ans = test[class_name]

class_count = len(temp_tr_ans.unique())
print(class_count)

tr_data = train.drop([class_name], axis=1)
ts_data = test.drop([class_name], axis=1)

# # Fill missing values if missing entries are encoded as 0
# from sklearn.impute import SimpleImputer
# rep_0 = SimpleImputer(missing_values=0, strategy="mean")
# tr_data = rep_0.fit_transform(tr_data)
# ts_data = rep_0.fit_transform(ts_data)

# Fill missing values if missing entries are encoded as '?' - first replace '?' with a placeholder number (e.g. 333)
from sklearn.impute import SimpleImputer
rep_0 = SimpleImputer(missing_values=333, strategy="mean")
tr_data = rep_0.fit_transform(tr_data)
ts_data = rep_0.fit_transform(ts_data)

mm_tr_data = minmax_scaler.fit_transform(tr_data)
mm_ts_data = minmax_scaler.transform(ts_data)

std_tr_data = standard_scaler.fit_transform(tr_data)
std_ts_data = standard_scaler.transform(ts_data)

tr_ans, _ = pd.factorize(temp_tr_ans, sort=True)
ts_ans, _ = pd.factorize(temp_ts_ans, sort=True)

tr_ans

import tensorflow as tf
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import ParameterGrid
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Dropout
from sklearn import metrics
from tensorflow.keras.regularizers import l2
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import confusion_matrix

# real Version
def create_model(hidden_layers = 1, neurons =1, init_mode = 'uniform', activation = 'elu', kernel_regularizer=l2(0.001)):
    model = Sequential()
    model.add(Dense(neurons, input_dim=len(mm_tr_data.T), kernel_initializer=init_mode, activation=activation))
    for i in range(hidden_layers):
        model.add(Dense(neurons, kernel_initializer=init_mode, kernel_regularizer=kernel_regularizer))
        model.add(BatchNormalization())
        model.add(Activation(activation))
        model.add(Dropout(0.2))
    if class_count == 2:
        model.add(Dense(1,activation='sigmoid'))
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    elif class_count != 2:
        model.add(Dense(class_count, activation='softmax'))
        model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

keras_model = KerasClassifier(build_fn=create_model, epochs=64, batch_size=16)

leaky_relu = tf.nn.leaky_relu

hidden_layers = [4,8,12]
neurons = [32, 64, 128]
activation = ['elu', leaky_relu]
init_mode = ['glorot_uniform', 'he_normal']

param_grid = 
dict(hidden_layers = hidden_layers, neurons = neurons, init_mode = init_mode, activation = activation) minmax_grid = GridSearchCV(estimator=keras_model, param_grid=param_grid, n_jobs= -1, cv=3) std_grid = GridSearchCV(estimator=keras_model, param_grid=param_grid, n_jobs= -1, cv=3) import warnings warnings.filterwarnings("ignore") minmax_grid_result = minmax_grid.fit(mm_tr_data, tr_ans) std_grid_result = std_grid.fit(std_tr_data, tr_ans) print("Scaler = minmax") print("Best: %f using %s" % (minmax_grid_result.best_score_, minmax_grid_result.best_params_)) means = minmax_grid_result.cv_results_['mean_test_score'] stds = minmax_grid_result.cv_results_['std_test_score'] params = minmax_grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) print("Scaler = standard") print("Best: %f using %s" % (std_grid_result.best_score_, std_grid_result.best_params_)) means = std_grid_result.cv_results_['mean_test_score'] stds = std_grid_result.cv_results_['std_test_score'] params = std_grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) pred = minmax_grid.predict(mm_ts_data) accuracy = accuracy_score(pred, ts_ans) ts_ans = ts_ans.astype(float) precision, recall, fbeta_score, support = precision_recall_fscore_support(ts_ans, pred) conf_mat = confusion_matrix(ts_ans, pred) print("Accuracy = ", accuracy) print("Confusion Matrix") print("{0}".format(metrics.confusion_matrix(ts_ans, pred))) print("") print("Classification Report") print(metrics.classification_report(ts_ans, pred)) pred = std_grid.predict(std_ts_data) accuracy = accuracy_score(pred, ts_ans) ts_ans = ts_ans.astype(float) precision, recall, fbeta_score, support = precision_recall_fscore_support(ts_ans, pred) conf_mat = confusion_matrix(ts_ans, pred) print("Accuracy = ", accuracy) print("Confusion Matrix") print("{0}".format(metrics.confusion_matrix(ts_ans, pred))) print("") print("Classification Report") print(metrics.classification_report(ts_ans, pred)) # # testbed Version # def create_model(hidden_layers = 1, neurons =1, init_mode = 'uniform', activation = 'elu'): # model = Sequential() # model.add(Dense(neurons, input_dim=len(tr_data.T), kernel_initializer=init_mode, activation=activation)) # for i in range(hidden_layers): # model.add(Dense(neurons, kernel_initializer=init_mode)) # model.add(BatchNormalization()) # model.add(Activation(activation)) # model.add(Dropout(0.2)) # if class_count == 2: # model.add(Dense(1,activation='sigmoid')) # model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # elif class_count != 2: # model.add(Dense(class_count-1, activation='softmax')) # model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # return model # hidden_layers = [5, 10] # neurons = [32, 64] # activation = ['elu'] # init_mode = ['he_uniform'] # keras_model = KerasClassifier(build_fn=create_model, epochs=4, batch_size=4) # param_grid = dict(hidden_layers = hidden_layers, neurons = neurons, init_mode = init_mode, activation = activation) # grid = GridSearchCV(estimator=keras_model, param_grid=param_grid, n_jobs= -1, cv=2) ```
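An optional sketch (not in the original notebook) to put the two preprocessing strategies side by side once both grid searches have finished:

```
# Sketch: compare the test accuracy of the two scaling strategies
from sklearn.metrics import accuracy_score

mm_pred = minmax_grid.predict(mm_ts_data)
std_pred = std_grid.predict(std_ts_data)
print("MinMax scaler test accuracy  :", accuracy_score(ts_ans, mm_pred))
print("Standard scaler test accuracy:", accuracy_score(ts_ans, std_pred))
```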
github_jupyter
<a href="https://colab.research.google.com/github/Max-FM/IAA-Social-Distancing/blob/master/Differential_Imaging.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> #Differential Imaging **Warning:** This notebook will likely cause Google Colab to crash. It is advised to run the notebook locally, either by downloading and running through Jupyter or by connecting to a local runtime. **Disclaimer:** Satellite images are not publicly available in the GitHub repository in order to avoid potential legal issues. The images used are available internally to other researchers at the University of Portsmouth [here](https://drive.google.com/drive/folders/1GGK6HksIM7jISqC71g0KpzSJnPjFkWO2?usp=sharing). Access is restricted to external persons and all external access requests will be denied. Should the user wish to acquire the images themselves, the corresponding shapefiles are publicly available in the repository. ###Import Files ``` import rasterio as rio import rioxarray as riox import numpy as np import xarray as xr import matplotlib.pyplot as plt from glob import glob ``` ###Define Filepaths ``` fdir = '/home/foxleym/Downloads' filepaths = glob(f'{fdir}/Southsea2020_PSScene4Band_Explorer/files/*_SR_clip.tif') ``` ###Create 4-Band Median Raster ``` blueList = [] greenList = [] redList = [] nirList = [] for i, file in enumerate(filepaths): blueList.append(riox.open_rasterio(file)[0,:,:]) greenList.append(riox.open_rasterio(file)[1,:,:]) redList.append(riox.open_rasterio(file)[2,:,:]) nirList.append(riox.open_rasterio(file)[3,:,:]) blue_median = xr.concat(blueList, "t").median(dim="t") green_median = xr.concat(greenList, "t").median(dim="t") red_median = xr.concat(redList, "t").median(dim="t") nir_median = xr.concat(nirList, "t").median(dim="t") median_raster = xr.concat([blue_median, green_median, red_median, nir_median], dim='band') del(blueList, greenList, redList, nirList, blue_median, green_median, red_median, nir_median) median_raster.rio.to_raster(f'{fdir}/Southsea2020_PSScene4Band_Explorer/Southsea2020Median.tif') ``` ###Obtain Median RBG Raster and Plot ``` def normalize(array): """Normalizes numpy arrays into scale 0.0 - 1.0""" array_min, array_max = array.min(), array.max() return ((array - array_min)/(array_max - array_min)) def make_composite(band_1, band_2, band_3): """Converts three raster bands into a composite image""" return normalize(np.dstack((band_1, band_2, band_3))) b, g, r, nir = median_raster rgb = make_composite(r, g, b) plt.figure(figsize=(15,15)) plt.imshow(rgb) plt.xticks([]) plt.yticks([]) ``` ###Perform Image Subtractions ``` subtractions = [] for f in filepaths: fname = f.split('/')[-1].split('.')[0] raster = riox.open_rasterio(f) subtraction = raster - median_raster subtractions.append(subtraction) subtraction.rio.to_raster(f'{fdir}/Southsea2020_PSScene4Band_Explorer/files/{fname}_MEDDIFF.tif') ``` ###Convert to RBG and Plot ``` b_0, g_0, r_0, nir_0 = raster b_med, g_med, r_med, nir_med = median_raster b_sub, g_sub, r_sub, nir_sub = subtractions[0] rgb_0 = make_composite(r_0, g_0, b_0) rgb_med = make_composite(r_med, g_med, b_med) rgb_sub = make_composite(r_sub, g_sub, b_sub) rgb_list = [rgb_0, rgb_med, rgb_sub] fig, ax = plt.subplots(nrows = 3, figsize=(15,15)) for i, rgb in enumerate(rgb_list): ax[i].imshow(rgb) ax[i].set_xticks([]) ax[i].set_yticks([]) plt.tight_layout() ```
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt import torch import pandas as pd from scipy.misc import derivative import time data= pd.read_csv("Thurber_Data.txt",names=['y','x'], sep=" ") data y = torch.from_numpy(data['y'].to_numpy(np.float64)) x = torch.from_numpy(data['x'].to_numpy(np.float64)) # b = torch.tensor([1000,1000,400,40,0.7,0.3,0.03],requires_grad=True) b = torch.tensor([1300,1500,500,75,1,0.4,0.05],requires_grad=True) plt.plot(x.numpy(),y.numpy()) ## Numerical Differentiation # b = np.array([1000,1000,400,40,0.7,0.3,0.03]).reshape(-1,1) b = np.array([1300,1500,500,75,1,0.4,0.05]).reshape(-1,1) x=x.detach().numpy() u=0.1 #beta multiply identity matrix beta = 10 V_prev = 0 def f0(b0): return (b0 + b[1]*x + b[2]*np.square(x) + b[3]*np.power(x,3)) / (1 + b[4]*x + b[5]*np.square(x) + b[6]*np.power(x,3))-y.detach().numpy() def f1(b1): return(b[0] + b1*x + b[2]*np.square(x) + b[3]*np.power(x,3)) / (1 + b[4]*x + b[5]*np.square(x) + b[6]*np.power(x,3))-y.detach().numpy() def f2(b2): return (b[0] + b[1]*x + b2*np.square(x) + b[3]*np.power(x,3)) / (1 + b[4]*x + b[5]*np.square(x) + b[6]*np.power(x,3))-y.detach().numpy() def f3(b3): return (b[0] + b[1]*x + b[2]*np.square(x) + b3*np.power(x,3)) / (1 + b[4]*x + b[5]*np.square(x) + b[6]*np.power(x,3))-y.detach().numpy() def f4(b4): return (b[0] + b[1]*x + b[2]*np.square(x) + b[3]*np.power(x,3)) / (1 + b4*x + b[5]*np.square(x) + b[6]*np.power(x,3))-y.detach().numpy() def f5(b5): return (b[0] + b[1]*x + b[2]*np.square(x) + b[3]*np.power(x,3)) / (1 + b[4]*x + b5*np.square(x) + b[6]*np.power(x,3))-y.detach().numpy() def f6(b6): return (b[0] + b[1]*x + b[2]*np.square(x) + b[3]*np.power(x,3)) / (1 + b[4]*x + b[5]*np.square(x) + b6*np.power(x,3))-y.detach().numpy() start_time = time.time() for c in range(500): y_pred = (b[0] + b[1]*x + b[2]*np.square(x) + b[3]*np.power(x,3)) / (1 + b[4]*x + b[5]*np.square(x) + b[6]*np.power(x,3)) error = (y_pred - y.detach().numpy()).reshape(-1,1) d_b0 = derivative(f0,b[0] , dx=1e-6) d_b1 = derivative(f1,b[1] , dx=1e-6) d_b2 = derivative(f2,b[2] , dx=1e-6) d_b3 = derivative(f3,b[3] , dx=1e-6) d_b4 = derivative(f4,b[4] , dx=1e-6) d_b5 = derivative(f5,b[5] , dx=1e-6) d_b6 = derivative(f6,b[6] , dx=1e-6) jacobian = np.transpose(np.array([d_b0,d_b1,d_b2,d_b3,d_b4,d_b5,d_b6])) dParam = np.matmul(np.matmul(np.linalg.inv((np.matmul(np.transpose(jacobian),jacobian)+u*np.identity(len(b)))),np.transpose(jacobian)),error) b -= dParam V = np.sum(np.square(error)) if(V > V_prev): u *= beta else: u /= beta V_prev = V print("c: ",c," error: ",V," B:", b) if V < 5.6427082397E+03: break print("time taken to execute: ",time.time()-start_time) def Jacobian(loss,params,numParams): jacobian = torch.empty(len(loss), numParams) for i in range(len(loss)): loss[i].backward(retain_graph=True) for n in range(numParams): jacobian[i][n] = params.grad[n] params.grad.zero_() return jacobian ## Automatic Differentiation num_param = len(b) u=0.1 #beta multiply identity matrix beta = 10 error_prev = 0 start_time = time.time() for c in range(200): y_pred = (b[0] + b[1]*x + b[2]*torch.square(x) + b[3]*torch.pow(x,3)) / (1 + b[4]*x + b[5]*torch.square(x) + b[6]*torch.pow(x,3)) loss = y_pred-y error = torch.sum(torch.square(loss)) #residual sum of squares print("",c," error is: ",error.detach().numpy()," b is ", b.detach().numpy()) jacobian = Jacobian(loss,b,len(b)) dParam = torch.matmul(torch.matmul(torch.inverse(torch.matmul(torch.transpose(jacobian,-1,0),jacobian)+u*torch.eye(num_param, 
num_param)),torch.transpose(jacobian,-1,0)),loss.float()) with torch.no_grad(): b -=dParam if(error > error_prev): u *= beta else: u /= beta error_prev = error if error< 5.642708245E+03: #3.9050739624 given residual sum of squares break print("time taken to execute: ",time.time()-start_time) plt.plot(y_pred.detach(),'g.', y,'r') ```
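For reference, both loops above implement the same Levenberg-Marquardt update, written here directly from the code, where $J$ is the Jacobian of the residuals $r = \hat{y} - y$, $\mu$ is the damping term (`u` in the code) and $\beta$ is the parameter vector:

$$\Delta\beta = \left(J^{\top}J + \mu I\right)^{-1} J^{\top} r, \qquad \beta \leftarrow \beta - \Delta\beta$$

After each step, $\mu$ is multiplied by the factor `beta` (10) if the error increased and divided by it otherwise, which is what moves the method between gradient-descent-like and Gauss-Newton-like behaviour.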
github_jupyter
# oneDPL - Gamma Correction example

#### Sections
- [Gamma Correction](#Gamma-Correction)
- [Why use buffer iterators?](#Why-use-buffer-iterators?)
- _Lab Exercise:_ [Gamma Correction](#Lab-Exercise:-Gamma-Correction)
- [Image outputs](#Image-outputs)

## Learning Objectives

* Build a sample __DPC++ application__ to perform image processing (gamma correction) using oneDPL.

## Gamma Correction

Gamma correction is an image processing algorithm where we enhance the image brightness and contrast levels to have a better view of the image.

The example below creates a bitmap image and applies gamma correction to it using the DPC++ library, offloading the computation to a device. Once we run the program, we can view the original image and the gamma corrected image in the corresponding cells below.

In the program below we write a data parallel algorithm using the DPC++ library to leverage the computational power of __heterogeneous computers__. The DPC++ platform model includes a host computer and a device. The host offloads computation to the device, which could be a __GPU, FPGA, or a multi-core CPU__.

We create a buffer, which is responsible for moving data around and counting dependencies. The DPC++ library provides the `oneapi::dpl::begin()` and `oneapi::dpl::end()` interfaces for getting buffer iterators, and we use them as shown below.

### Why use buffer iterators?

Using buffer iterators will ensure that memory is not copied back and forth between each algorithm execution on the device. The code example below shows how the same example above is implemented using buffer iterators, which make sure the memory stays on the device until the buffer is destructed.

Pass the policy object to the `std::for_each` Parallel STL algorithm, and pass the __'begin'__ and __'end'__ buffer iterators as the second and third arguments. The `oneapi::dpl::execution::dpcpp_default` object, defined in the oneapi::dpl::execution namespace, is a predefined object of the device_policy class, created with a default kernel name and a default queue. Use it to create customized policy objects, or pass it directly when invoking an algorithm. The Parallel STL API handles the data transfer and compute.

### Lab Exercise: Gamma Correction

* In this example the student will learn how to use the oneDPL library to perform gamma correction.
* Follow the __Steps 1 to 3__ in the code below to create a SYCL buffer, create buffer iterators, and then call the std::for_each function with DPC++ support.

1. Select the code cell below, __follow the STEPS 1 to 3__ in the code comments, and click run ▶ to save the code to file.
2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
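As a reminder of the arithmetic that the `gamma_f` lambda in the code cell below performs (grayscale conversion followed by a gamma of 2), each pixel is mapped as:

$$v = \frac{0.3R + 0.59G + 0.11B}{255}, \qquad I_{\text{out}} = 255 \, v^{2}$$

Since $v^2 \le v$ for $v$ in $[0, 1]$, squaring the normalized luminance darkens the mid-tones, which is what you will see when comparing the two output images at the end.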
``` %%writefile gamma-correction/src/main.cpp //============================================================== // Copyright © 2019 Intel Corporation // // SPDX-License-Identifier: MIT // ============================================================= #include <oneapi/dpl/algorithm> #include <oneapi/dpl/execution> #include <oneapi/dpl/iterator> #include <iomanip> #include <iostream> #include <CL/sycl.hpp> #include "utils.hpp" using namespace sycl; using namespace std; int main() { // Image size is width x height int width = 1440; int height = 960; Img<ImgFormat::BMP> image{width, height}; ImgFractal fractal{width, height}; // Lambda to process image with gamma = 2 auto gamma_f = [](ImgPixel &pixel) { auto v = (0.3f * pixel.r + 0.59f * pixel.g + 0.11f * pixel.b) / 255.0f; auto gamma_pixel = static_cast<uint8_t>(255 * v * v); if (gamma_pixel > 255) gamma_pixel = 255; pixel.set(gamma_pixel, gamma_pixel, gamma_pixel, gamma_pixel); }; // fill image with created fractal int index = 0; image.fill([&index, width, &fractal](ImgPixel &pixel) { int x = index % width; int y = index / width; auto fractal_pixel = fractal(x, y); if (fractal_pixel < 0) fractal_pixel = 0; if (fractal_pixel > 255) fractal_pixel = 255; pixel.set(fractal_pixel, fractal_pixel, fractal_pixel, fractal_pixel); ++index; }); string original_image = "fractal_original.png"; string processed_image = "fractal_gamma.png"; Img<ImgFormat::BMP> image2 = image; image.write(original_image); // call standard serial function for correctness check image.fill(gamma_f); // use default policy for algorithms execution auto policy = oneapi::dpl::execution::dpcpp_default; // We need to have the scope to have data in image2 after buffer's destruction { // ****Step 1: Uncomment the below line to create a buffer, being responsible for moving data around and counting dependencies //buffer<ImgPixel> b(image2.data(), image2.width() * image2.height()); // create iterator to pass buffer to the algorithm // **********Step 2: Uncomment the below lines to create buffer iterators. These are passed to the algorithm //auto b_begin = oneapi::dpl::begin(b); //auto b_end = oneapi::dpl::end(b); //*****Step 3: Uncomment the below line to call std::for_each with DPC++ support //std::for_each(policy, b_begin, b_end, gamma_f); } image2.write(processed_image); // check correctness if (check(image.begin(), image.end(), image2.begin())) { cout << "success\n"; } else { cout << "fail\n"; return 1; } cout << "Run on " << policy.queue().get_device().template get_info<info::device::name>() << "\n"; cout << "Original image is in " << original_image << "\n"; cout << "Image after applying gamma correction on the device is in " << processed_image << "\n"; return 0; } ``` #### Build and Run Select the cell below and click run ▶ to compile and execute the code: ``` ! chmod 755 q; chmod 755 run_gamma_correction.sh; if [ -x "$(command -v qsub)" ]; then ./q run_gamma_correction.sh; else ./run_gamma_correction.sh; fi ``` _If the Jupyter cells are not responsive or if they error out when you compile the code samples, please restart the Jupyter Kernel: "Kernel->Restart Kernel and Clear All Outputs" and compile the code samples again_ ### Image outputs once you run the program sucessfuly it creates gamma corrected image and the original image. You can see the difference by running the two cells below and visually compare it. 
##### View the gamma corrected Image

Select the cell below and click run ▶ to view the gamma-corrected image generated by the program:

```
from IPython.display import display, Image
display(Image(filename='gamma-correction/build/src/fractal_gamma.png'))
```

##### View the original Image

Select the cell below and click run ▶ to view the original image:

```
from IPython.display import display, Image
display(Image(filename='gamma-correction/build/src/fractal_original.png'))
```

# Summary

In this module you learned how to apply gamma correction to images using the Data Parallel C++ Library.

<html><body><span style="color:Red"><h1>Reset Notebook</h1></span></body></html>

##### Should you be experiencing any issues with your notebook or just want to start fresh, run the cell below.

```
from IPython.display import display, Markdown, clear_output
import ipywidgets as widgets

button = widgets.Button(
    description='Reset Notebook',
    disabled=False,
    button_style='',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='This will update this notebook, overwriting any changes.',
    icon='check'  # (FontAwesome names without the `fa-` prefix)
)
out = widgets.Output()

def on_button_clicked(_):
    # "linking function with output"
    with out:
        # what happens when we press the button
        clear_output()
        !rsync -a --size-only /data/oneapi_workshop/oneAPI_Essentials/07_DPCPP_Library/ ~/oneAPI_Essentials/07_DPCPP_Library
        print('Notebook reset -- now click reload on browser.')

# linking button and function together using a button's method
button.on_click(on_button_clicked)
# displaying button and its output together
widgets.VBox([button, out])
```
# DECOMON tutorial #3

## Local Robustness to Adversarial Attacks for classification tasks

## Introduction

After training a model, we want to make sure that the model will give the same output for any image "close" to the initial one, showing some robustness to perturbation.

In this notebook, we start from a classifier built on the MNIST dataset that, given a hand-written digit as input, predicts the digit. This is the first part of the notebook.

<img src="./data/Plot-of-a-Subset-of-Images-from-the-MNIST-Dataset.png" alt="examples of hand-written digits" width="600"/>

In the second part of the notebook, we investigate the robustness of this model to unstructured modifications of the input space: adversarial attacks. For this kind of attack, **we vary the magnitude of the perturbation of the initial image** and want to assess that, despite this noise, the classifier's prediction remains unchanged.

<img src="./data/illustration_adv_attacks.jpeg" alt="illustration of adversarial attacks" width="600"/>

We will show how to use the decomon module to assess the robustness of the prediction to noise.

## The notebook

### imports

```
import os
import tensorflow.keras as keras
import matplotlib.pyplot as plt
import matplotlib.patches as patches
%matplotlib inline
import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import Dense
from tensorflow.keras.datasets import mnist
from ipywidgets import interact, interactive, fixed, interact_manual
from ipykernel.pylab.backend_inline import flush_figures
import ipywidgets as widgets
import time
import sys
sys.path.append('..')
import os.path
import os
import pickle as pkl

from contextlib import closing
import time

import tensorflow as tf

import decomon
from decomon.wrapper import refine_boxes
x_min = np.ones((3, 4, 5))
x_max = 2*x_min

refine_boxes(x_min, x_max, 10)
```

### load images

We load MNIST data from keras datasets.

```
img_rows, img_cols = 28, 28
(x_train, y_train_), (x_test, y_test_) = mnist.load_data()

x_train = x_train.reshape((-1, 784))
x_test = x_test.reshape((-1, 784))
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

y_train = keras.utils.to_categorical(y_train_)
y_test = keras.utils.to_categorical(y_test_)
```

### learn the model (classifier for MNIST images)

For the model, we use a small fully connected network made of Dense layers with 100 units and ReLU activation functions. **Decomon** is compatible with a large set of Keras layers, so do not hesitate to modify the architecture.

```
model = Sequential()
model.add(Dense(100, activation='relu', input_dim=784))
model.add(Dense(100, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile('adam', 'categorical_crossentropy', metrics='acc')

model.fit(x_train, y_train, batch_size=32, shuffle=True, validation_split=0.2, epochs=5)
model.evaluate(x_test, y_test, batch_size=32)
```

After training, the assessment of the model's performance on data that was not seen during training shows pretty good results: around 0.97 (the maximum value is 1). It means that out of 100 images, the model was able to guess the correct digit for 97 images. But how can we guarantee that we will get this performance for images different from the ones in the test dataset?

- If we perturb "a little" an image that was well predicted, will the model stay correct?
- Up to which perturbation?
- Can we guarantee that the model will output the same digit for a given perturbation? This is where decomon comes in. <img src="./data/decomon.jpg" alt="Decomon!" width="400"/> ### Applying Decomon for Local Robustness to misclassification In this section, we detail how to prove local robustness to misclassification. Misclassification can be studied with the global optimisation of a function f: $$ f(x; \Omega) = \max_{z\in \Omega} \text{NN}_{j\not= i}(z) - \text{NN}_i(z)\;\; \text{s.t}\;\; i = argmax\;\text{NN}(x)$$ If the maximum of f is **negative**, this means that whathever the input sample from the domain, the value outputs by the neural network NN for class i will always be greater than the value output for another class. Hence, there will be no misclassification possible. This is **adversarial robustness**. <img src="./data/tuto_3_formal_robustness.png" alt="Decomon!" width="400"/> In that order, we will use the [decomon](https://gheprivate.intra.corp/CRT-DataScience/decomon/tree/master/decomon) library. Decomon combines several optimization trick, including linear relaxation to get state-of-the-art outer approximation. To use **decomon** for **adversarial robustness** we first need the following imports: + *from decomon.models import convert*: to convert our current Keras model into another neural network nn_model. nn_model will output the same prediction that our model and adds extra information that will be used to derive our formal bounds. For a sake of clarity, how to get such bounds is hidden to the user + *from decomon import get_adv_box*: a genereric method to get an upper bound of the funtion f described previously. If the returned value is negative, then we formally assess the robustness to misclassification. + *from decomon import check_adv_box*: a generic method that computes the maximum of a lower bound of f. Eventually if this value is positive, it demonstrates that the function f takes positive value. It results that a positive value formally proves the existence of misclassification. ``` import decomon from decomon.models import convert from decomon import get_adv_box, get_upper_box, get_lower_box, check_adv_box, get_upper_box ``` For computational efficiency, we convert the model into its decomon version once and for all. Note that the decomon method will work on the non-converted model. To obtain more refined guarantees, we activate an option denoted **forward**. You can speed up the method by removing this option in the convert method. ``` decomon_model = convert(model) from decomon import build_formal_adv_model adv_model = build_formal_adv_model(decomon_model) x_=x_train[:1] eps=1e-2 z = np.concatenate([x_[:, None]-eps, x_[:, None]+eps], 1) get_adv_box(decomon_model, x_,x_, source_labels=y_train[0].argmax()) adv_model.predict([x_, z, y_train[:1]]) # compute gradient import tensorflow as tf x_tensor = tf.convert_to_tensor(x_, dtype=tf.float32) from tensorflow.keras.layers import Concatenate with tf.GradientTape() as t: t.watch(x_tensor) z_tensor = Concatenate(1)([x_tensor[:,None]-eps,\ x_tensor[:, None]+eps]) output = adv_model([x_, z_tensor, y_train[:1]]) result = output gradients = t.gradient(output, x_tensor) mask = gradients.numpy() # scale between 0 and 1. 
mask = (mask-mask.min()) plt.imshow(gradients.numpy().reshape((28,28))) img_mask = np.zeros((784,)) img_mask[np.argsort(mask[0])[::-1][:100]]=1 plt.imshow(img_mask.reshape((28,28))) plt.imshow(mask.reshape((28,28))) plt.imshow(x_.reshape((28,28))) ``` We offer an interactive visualisation of the basic adversarial robustness method from decomon **get_adv_upper**. We randomly choose 10 test images use **get_adv_upper** to assess their robustness to misclassification pixel perturbations. The magnitude of the noise on each pixel is independent and bounded by the value of the variable epsilon. The user can reset the examples and vary the noise amplitude. Note one of the main advantage of decomon: **we can assess robustness on batches of data!** Circled in <span style="color:green">green</span> are examples that are formally assessed to be robust, <span style="color:orange">orange</span> examples that could be robust and <span style="color:red">red</span> examples that are formally non robust ``` def frame(epsilon, reset=0, filename='./data/.hidden_index.pkl'): n_cols = 5 n_rows = 2 n_samples = n_cols*n_rows if reset: index = np.random.permutation(len(x_test))[:n_samples] with closing(open(filename, 'wb')) as f: pkl.dump(index, f) # save data else: # check that file exists if os.path.isfile(filename): with closing(open(filename, 'rb')) as f: index = pkl.load(f) else: index = np.arange(n_samples) with closing(open(filename, 'wb')) as f: pkl.dump(index, f) #x = np.concatenate([x_test[0:1]]*10, 0) x = x_test[index] x_min = np.maximum(x - epsilon, 0) x_max = np.minimum(x + epsilon, 1) n_cols = 5 n_rows = 2 fig, axs = plt.subplots(n_rows, n_cols) fig.set_figheight(n_rows*fig.get_figheight()) fig.set_figwidth(n_cols*fig.get_figwidth()) plt.subplots_adjust(hspace=0.2) # increase vertical separation axs_seq = axs.ravel() source_label = np.argmax(model.predict(x), 1) start_time = time.process_time() upper = get_adv_box(decomon_model, x_min, x_max, source_labels=source_label) lower = check_adv_box(decomon_model, x_min, x_max, source_labels=source_label) end_time = time.process_time() count = 0 time.sleep(1) r_time = "{:.2f}".format(end_time - start_time) fig.suptitle('Formal Robustness to Adversarial Examples with eps={} running in {} seconds'.format(epsilon, r_time), fontsize=16) for i in range(n_cols): for j in range(n_rows): ax= axs[j, i] ax.imshow(x[count].reshape((28,28)), cmap='Greys') robust='ROBUST' if lower[count]>=0: color='red' robust='NON ROBUST' elif upper[count]<0: color='green' else: color='orange' robust='MAYBE ROBUST' ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # Create a Rectangle patch rect = patches.Rectangle((0,0),27,27,linewidth=3,edgecolor=color,facecolor='none') ax.add_patch(rect) ax.set_title(robust) count+=1 interact(frame, epsilon = widgets.FloatSlider(value=0., min=0., max=5./255., step=0.0001, continuous_update=False, readout_format='.4f',), reset = widgets.IntSlider(value=0., min=0, max=1, step=1, continuous_update=False), fast = widgets.IntSlider(value=1., min=0, max=1, step=1, continuous_update=False) ) ``` As explained previously, the method **get_adv_upper** output a constant upper bound that is valid on the whole domain. Sometimes, this bound can be too lose and needs to be refined by splitting the input domain into sub domains. Several heuristics are possible and you are free to develop your own or take an existing one of the shelf.
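
The paragraph above mentions refining the bound by splitting the input domain into sub-domains. As a purely illustrative example, here is a minimal sketch of one such heuristic: bisect the box along a single pixel and keep the worse (maximum) of the two upper bounds, which is still valid on the whole box. The helper name `refined_upper_bound` and the choice of pixel are assumptions, and `x_min`/`x_max` are assumed to be 2-D arrays of flattened images as in the `frame` function above; only `get_adv_box` comes from decomon, used with the same signature as earlier in this notebook.

```
import numpy as np
from decomon import get_adv_box

def refined_upper_bound(decomon_model, x_min, x_max, source_labels, pixel):
    # Hypothetical helper (not part of decomon): split the box [x_min, x_max]
    # in two along one pixel and return the max of the two upper bounds.
    mid = (x_min[:, pixel] + x_max[:, pixel]) / 2.0

    # left half: the chosen pixel ranges over [x_min, mid]
    x_max_left = x_max.copy()
    x_max_left[:, pixel] = mid
    upper_left = get_adv_box(decomon_model, x_min, x_max_left, source_labels=source_labels)

    # right half: the chosen pixel ranges over [mid, x_max]
    x_min_right = x_min.copy()
    x_min_right[:, pixel] = mid
    upper_right = get_adv_box(decomon_model, x_min_right, x_max, source_labels=source_labels)

    return np.maximum(upper_left, upper_right)
```

If the refined bound becomes negative, robustness is proved on the whole original box; otherwise the split can be applied recursively, for example along the pixel with the largest gradient as computed earlier in the notebook.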
``` import argparse import copy import sys sys.path.append('../../') import sopa.src.models.odenet_cifar10.layers as cifar10_models from sopa.src.models.odenet_cifar10.utils import * parser = argparse.ArgumentParser() # Architecture params parser.add_argument('--is_odenet', type=eval, default=True, choices=[True, False]) parser.add_argument('--network', type=str, choices=['metanode34', 'metanode18', 'metanode10', 'metanode6', 'metanode4', 'premetanode34', 'premetanode18', 'premetanode10', 'premetanode6', 'premetanode4'], default='premetanode10') parser.add_argument('--in_planes', type=int, default=64) # Type of layer's output normalization parser.add_argument('--normalization_resblock', type=str, default='NF', choices=['BN', 'GN', 'LN', 'IN', 'NF']) parser.add_argument('--normalization_odeblock', type=str, default='NF', choices=['BN', 'GN', 'LN', 'IN', 'NF']) parser.add_argument('--normalization_bn1', type=str, default='NF', choices=['BN', 'GN', 'LN', 'IN', 'NF']) parser.add_argument('--num_gn_groups', type=int, default=32, help='Number of groups for GN normalization') # Type of layer's weights normalization parser.add_argument('--param_normalization_resblock', type=str, default='PNF', choices=['WN', 'SN', 'PNF']) parser.add_argument('--param_normalization_odeblock', type=str, default='PNF', choices=['WN', 'SN', 'PNF']) parser.add_argument('--param_normalization_bn1', type=str, default='PNF', choices=['WN', 'SN', 'PNF']) # Type of activation parser.add_argument('--activation_resblock', type=str, default='ReLU', choices=['ReLU', 'GeLU', 'Softsign', 'Tanh', 'AF']) parser.add_argument('--activation_odeblock', type=str, default='ReLU', choices=['ReLU', 'GeLU', 'Softsign', 'Tanh', 'AF']) parser.add_argument('--activation_bn1', type=str, default='ReLU', choices=['ReLU', 'GeLU', 'Softsign', 'Tanh', 'AF']) args, unknown_args = parser.parse_known_args() # Initialize Neural ODE model config = copy.deepcopy(args) norm_layers = (get_normalization(config.normalization_resblock), get_normalization(config.normalization_odeblock), get_normalization(config.normalization_bn1)) param_norm_layers = (get_param_normalization(config.param_normalization_resblock), get_param_normalization(config.param_normalization_odeblock), get_param_normalization(config.param_normalization_bn1)) act_layers = (get_activation(config.activation_resblock), get_activation(config.activation_odeblock), get_activation(config.activation_bn1)) model = getattr(cifar10_models, config.network)(norm_layers, param_norm_layers, act_layers, config.in_planes, is_odenet=config.is_odenet) model ```
``` %matplotlib inline # Write your imports here import sympy as sp import math import numpy as np import matplotlib.pyplot as plt ``` # High-School Maths Exercise ## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow ### Problem 1. Markdown Jupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while. First, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press <kbd>Ctrl</kbd> + <kbd>Enter</kbd>. Second, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D). Let me give you a... #### Quick Introduction to Markdown ##### Text and Paragraphs There are several things that you can do. As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below: ``` This is some text. This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing). This text is displayed in a new paragraph. And this is yet another paragraph. ``` **Result:** This is some text. This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing). This text is displayed in a new paragraph. And this is yet another paragraph. ##### Headings There are six levels of headings. Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six "#" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). Have a look: ``` # Heading 1 ## Heading 2 ### Heading 3 #### Heading 4 ##### Heading 5 ###### Heading 6 ``` **Result:** # Heading 1 ## Heading 2 ### Heading 3 #### Heading 4 ##### Heading 5 ###### Heading 6 It is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly. ##### Emphasis You can create emphasized (stonger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\*) or underscores (\_)). In order to "escape" a symbol, prefix it with a backslash (\). You can also strike thorugh your text in order to signify a correction. ``` **bold** __bold__ *italic* _italic_ This is \*\*not \*\* bold. I ~~didn't make~~ a mistake. ``` **Result:** **bold** __bold__ *italic* _italic_ This is \*\*not\*\* bold. I ~~didn't make~~ a mistake. ##### Lists You can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press <kbd>Tab</kbd> once (it will be converted to 4 spaces). To create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway... ``` 1. This is 2. A list 10. With many 9. Items 1. Some of which 2. Can 3. Be nested 42. 
You can also * Mix * list * types ``` **Result:** 1. This is 2. A list 10. With many 9. Items 1. Some of which 2. Can 3. Be nested 42. You can also * Mix * list * types To create an unordered list, type an asterisk, plus or minus at the beginning: ``` * This is * An + Unordered - list ``` **Result:** * This is * An + Unordered - list ##### Links There are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works: ``` This is [a link](http://google.com) to Google. ``` **Result:** This is [a link](http://google.com) to Google. ##### Images They are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text): ``` ![Alt text](http://i.imgur.com/dkY1gph.jpg) Do you know that "taco cat" is a palindrome? Thanks to The Oatmeal :) ``` **Result:** ![Alt text](http://i.imgur.com/dkY1gph.jpg) Do you know that "taco cat" is a palindrome? Thanks to The Oatmeal :) If you want to resize images or do some more advanced stuff, just use HTML. Did I mention these cells support HTML, CSS and JavaScript? Now I did. ##### Tables These are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you. ``` | Cell1 | Cell2 | Cell3 | |-------|-------|-------| | 1.1 | 1.2 | 1.3 | | 2.1 | 2.2 | 2.3 | | 3.1 | 3.2 | 3.3 | ``` **Result:** | Cell1 | Cell2 | Cell3 | |-------|-------|-------| | 1.1 | 1.2 | 1.3 | | 2.1 | 2.2 | 2.3 | | 3.1 | 3.2 | 3.3 | ##### Code Just use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks. <pre> ```python def square(x): return x ** 2 ``` This is `inline` code. No syntax highlighting here. </pre> **Result:** ```python def square(x): return x ** 2 ``` This is `inline` code. No syntax highlighting here. **Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook). <p style="color: #d9534f">Write some Markdown here.</p> # This is my highlight with a _italic_ word <p style="text-align:right">by LuGe</p> ### few python code ```python def multiply_by(x, y): return x * y ``` previous python method will `multiply` any two numbers in other words: $result = x * y$ ``` def multiply_by(x, y): return x * y res = multiply_by(4, 7.21324) print(res) ``` ### Problem 2. Formulas and LaTeX Writing math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer. There are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$. Most commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. 
For example, to make a fraction, you typically would write `$$ \frac{a}{b} $$`: $$ \frac{a}{b} $$. [Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there. You're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. It's an image, so don't try to cheat by copy/pasting :D. Note that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course. ![Math formulas and equations](math.jpg) <p style="color: #d9534f">Write your formulas here.</p> Equation of a line: $$y = ax+b$$ Roots of quadratic equasion $ax^2 + bx + c = 0$ $$x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$ Taylor series expansion: $$f(x)\arrowvert_{x=a}=f(a)+f'(a)(x-a)+\frac{f^n(a)}{2!}(x-a)^2+\dots+\frac{f^{(n)}(a)}{n!}(x-a)^n+\dots$$ Binominal theoren: $$ (x+y)^n=\left(\begin{array}{cc}n \\0 \end{array}\right)x^ny^0+\left(\begin{array}{cc}n \\1 \end{array}\right)x^{n-1}y^1+\dots+ \left(\begin{array}{cc}n \\n \end{array}\right)x^0y^n=\sum^n_{k=0}\left(\begin{array}{cc}n \\k \end{array}\right)x^{n-k}y^k$$ An integral(this one is a lot of fun to solve:D): $$\int_{+\infty}^{-\infty}e^{-x^{2}}dx=\sqrt\pi$$ A short matrix: $$\left(\begin{array}{cc}2&1&3 \\2&6&8\\6&8&18 \end{array}\right)$$ A long matrix: $$A=\left(\begin{array}{cc}a_{11}&a_{12}&\dots&a_{1n} \\a_{21}&a_{22}&\dots&a_{2n}\\\vdots&\vdots&\ddots&\vdots \\a_{m1}&a_{m2}&\dots&a_{mn}\end{array}\right)$$ ### Problem 3. Solving with Python Let's first do some symbolic computation. We need to import `sympy` first. **Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!** Let's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook): ```python import sympy ``` Next, create symbols for all variables and parameters. You may prefer to do this in one pass or separately: ```python x = sympy.symbols('x') a, b, c = sympy.symbols('a b c') ``` Now solve: ```python sympy.solve(a * x**2 + b * x + c) ``` Hmmmm... we didn't expect that :(. We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second paramter: ```python sympy.solve(a * x**2 + b * x + c, x) ``` Finally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas. ``` sp.init_printing() x = sp.symbols('x') a,b,c = sp.symbols('a b c') sp.solve(a*x**2 + b*x + c, x) ``` How about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation? Remember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative. If $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$ If $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$ If $b^2 - 4ac < 0$, the equation has zero real roots Write a function which returns the roots. 
In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. In the third case, return an empty list: `[]`. ``` def solve_quadratic_equation(a, b, c): d = b**2 - 4*a*c if a ==0 and b != 0: return [-c/b] elif a==0: return [] elif d < 0: return [] elif d == 0: return[-b/2*a] else: d = math.sqrt(d) return[(-b - d)/2*a,(-b + d)/2*a ] # Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests print(solve_quadratic_equation(1, -1, -2)) # [-1.0, 2.0] print(solve_quadratic_equation(1, -8, 16)) # [4.0] print(solve_quadratic_equation(1, 1, 1)) # [] print(solve_quadratic_equation(0, 1, 1)) # [-1.0] print(solve_quadratic_equation(0, 0, 1)) # [] ``` **Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time). ### Problem 4. Equation of a Line Let's go back to our linear equations and systems. There are many ways to define what "linear" means, but they all boil down to the same thing. The equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. We know that there are several ways to know what one particular function means. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case). The function produces a straight line and we can see it. How do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are too many, we won't notice - the plot will look smooth. Now, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics: * All elements in it must be of the same type * All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. This is very powerful, easy to use and saves us A LOT of looping. There's one more thing: it's blazingly fast because all computations are done in C, instead of Python. First let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**: ```python import numpy as np ``` Import that at the top cell and don't forget to re-run it. Next, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)). ```python x = np.linspace(-3, 5, 1000) ``` Now, let's generate our function variable ```python y = 2 * x + 3 ``` We can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. 
`matplotlib` is the most commnly used one and we usually give it an alias as well. ```python import matplotlib.pyplot as plt ``` Now, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a "magic string": `%matplotlib inline`. This hints Jupyter to display all plots inside the notebook. However, it's a good practice to call `show()` after our plot is ready. ```python plt.plot(x, y) plt.show() ``` ``` k = np.arange(1, 7, 1) print(k) x = np.linspace(-3, 5, 1000) ##y = 2 * x + 3 y = [2 * current + 3 for current in x] plt.plot(x,y) ax = plt.gca() ax.spines["bottom"].set_position("zero") ax.spines["left"].set_position("zero") ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) xticks = ax.xaxis.get_major_ticks() xticks[4].label1.set_visible(False) yticks = ax.yaxis.get_major_ticks() yticks[2].label1.set_visible(False) ax.text(-0.3,-1, '0', fontsize = 12) plt.show() ``` It doesn't look too bad bit we can do much better. See how the axes don't look like they should? Let's move them to zeto. This can be done using the "spines" of the plot (i.e. the borders). All `matplotlib` figures can have many plots (subfigures) inside them. That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for "axis". Let's save it in a variable (in order to prevent multiple calculations and to make code prettier). Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right one. ```python ax = plt.gca() ax.spines["bottom"].set_position("zero") ax.spines["left"].set_position("zero") ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ``` **Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting. This should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :). ### * Problem 5. Linearizing Functions Why is the line equation so useful? The main reason is because it's so easy to work with. Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. A commonly used method for linearizing functions is through algebraic transformations. Try to linearize $$ y = ae^{bx} $$ Hint: The inverse operation of $e^{x}$ is $\ln(x)$. Start by taking $\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :). ``` x = np.linspace(-5,5,5000) y = 0.5 * np.exp(0.5 * x) plt.plot(x, y) plt.show plt.title('exponent') ``` ### * Problem 6. Generalizing the Plotting Function Let's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot. Note: We can also pass *lambda expressions* (anonymous functions) like this: ```python lambda x: x + 2``` This is a shorter way to write ```python def some_anonymous_function(x): return x + 2 ``` We'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. 
These may include titles, legends, colors, fonts, etc. Let's stick to the basics now. Write a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point. **BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. to allow it to be used with `numpy` broadcasting): ```python f_vectorized = np.vectorize(f) y = f_vectorized(x) ``` ``` def plot_math_function(f, min_x, max_x, num_points): x = np.linspace(min_x, max_x, num_points) f_vectorized = np.vectorize(f) y = f_vectorized(x) plt.plot(x,y) plt.show() plot_math_function(lambda x: 2 * x + 3, -3, 5, 1000) plot_math_function(lambda x: -x + 8, -1, 10, 1000) plot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000) plot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000) plot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000) ``` ### * Problem 7. Solving Equations Graphically Now that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the "=" sign ans seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions. To do this, we'll need to improve our plotting function yet once. This time we'll need to take multiple functions and plot them all on the same graph. Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions. ```python vectorized_fs = [np.vectorize(f) for f in functions] ys = [vectorized_f(x) for vectorized_f in vectorized_fs] ``` ``` def plot_math_functions(functions, min_x, max_x, num_points): # Write your code here pass plot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000) plot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000) ``` This is also a way to plot the solutions of systems of equation, like the one we solved last time. Let's actually try it. ``` plot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000) ``` ### Problem 8. Trigonometric Functions We already saw the graph of the function $y = \sin(x)$. But, how do we define the trigonometric functions once again? Let's quickly review that. <img src="angle-in-right-triangle.png" style="max-height: 200px" alt="Right triangle" /> The two basic trigonometric functions are defined as the ratio of two sides: $$ \sin(x) = \frac{\text{opposite}}{\text{hypotenuse}} $$ $$ \cos(x) = \frac{\text{adjacent}}{\text{hypotenuse}} $$ And also: $$ \tan(x) = \frac{\text{opposite}}{\text{adjacent}} = \frac{\sin(x)}{\cos(x)} $$ $$ \cot(x) = \frac{\text{adjacent}}{\text{opposite}} = \frac{\cos(x)}{\sin(x)} $$ This is fine, but using this, "right-triangle" definition, we're able to calculate the trigonometric functions of angles up to $90^\circ$. But we can do better. Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a "unit circle". 
<img src="triangle-unit-circle.png" style="max-height: 300px" alt="Trigonometric unit circle" /> We can now see exactly the same picture. The $x$-coordinate of the point in the circle corresponds to $\cos(\alpha)$ and the $y$-coordinate - to $\sin(\alpha)$. What did we get? We're now able to define the trigonometric functions for all degrees up to $360^\circ$. After that, the same values repeat: these functions are **periodic**: $$ \sin(k.360^\circ + \alpha) = \sin(\alpha), k = 0, 1, 2, \dots $$ $$ \cos(k.360^\circ + \alpha) = \cos(\alpha), k = 0, 1, 2, \dots $$ We can, of course, use this picture to derive other identities, such as: $$ \sin(90^\circ + \alpha) = \cos(\alpha) $$ A very important property of the sine and cosine is that they accept values in the range $(-\infty; \infty)$ and produce values in the range $[-1; 1]$. The two other functions take values in the range $(-\infty; \infty)$ **except when their denominators are zero** and produce values in the same range. #### Radians A degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. It's called the **radian** and can be written as $\text{rad}$ or without any designation, so $\sin(2)$ means "sine of two radians". ![Radian definition](radian.gif) It's defined as *the central angle of an arc with length equal to the circle's radius* and $1\text{rad} \approx 57.296^\circ$. We know that the circle circumference is $C = 2\pi r$, therefore we can fit exactly $2\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\circ$ or $2\pi\ \text{rad}$. Also, $\pi rad = 180^\circ$. (Some people prefer using $\tau = 2\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.) **NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\text{[deg]} = 180/\pi.\text{[rad]}, \text{[rad]} = \pi/180.\text{[deg]}$. This can be done using `np.deg2rad()` and `np.rad2deg()` respectively. #### Inverse trigonometric functions All trigonometric functions have their inverses. If you plug in, say $\pi/4$ in the $\sin(x)$ function, you get $\sqrt{2}/2$. The inverse functions (also called, arc-functions) take arguments in the interval $[-1; 1]$ and return the angle that they correspond to. Take arcsine for example: $$ \arcsin(y) = x: sin(y) = x $$ $$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} $$ Please note that this is NOT entirely correct. From the relations we found: $$\sin(x) = sin(2k\pi + x), k = 0, 1, 2, \dots $$ it follows that $\arcsin(x)$ has infinitely many values, separated by $2k\pi$ radians each: $$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} + 2k\pi, k = 0, 1, 2, \dots $$ In most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**. Note 1: There are inverse functions for all four basic trigonometric functions: $\arcsin$, $\arccos$, $\arctan$, $\text{arccot}$. These are sometimes written as $\sin^{-1}(x)$, $cos^{-1}(x)$, etc. These definitions are completely equivalent. Just notice the difference between $\sin^{-1}(x) := \arcsin(x)$ and $\sin(x^{-1}) = \sin(1/x)$. #### Exercise Use the plotting function you wrote above to plot the inverse trigonometric functions. Use `numpy` (look up how to use inverse trigonometric functions). 
``` x = np.linspace(-10,10) plt.plot(x, np.arctan(x)) plt.plot(x, np.sin(x)) plt.plot(x, np.cos(x)) plt.show() x = np.linspace(-10, 10) plt.plot(x, np.arccosh(x)) plt.show() ``` ### ** Problem 9. Perlin Noise This algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :). #### Noise Noise is just random values. We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course. We can generate noise in however many dimensions we want. For example, if we want to generate a single dimension, we just pick N random values and call it a day. If we want to generate a 2D noise space, we can take an approach which is similar to what we already did with `np.meshgrid()`. $$ \text{noise}(x, y) = N, N \in [n_{min}, n_{max}] $$ This function takes two coordinates and returns a single number N between $n_{min}$ and $n_{max}$. (This is what we call a "scalar field"). Random variables are always connected to **distributions**. We'll talk about these a great deal but now let's just say that these define what our noise will look like. In the most basic case, we can have "uniform noise" - that is, each point in our little noise space $[n_{min}, n_{max}]$ will have an equal chance (probability) of being selected. #### Perlin noise There are many more distributions but right now we'll want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain. #### Algorithm ... Now you're on your own :). Research how the algorithm is implemented (note that this will require that you understand some other basic concepts like vectors and gradients). #### Your task 1. Research about the problem. See what articles, papers, Python notebooks, demos, etc. other people have created 2. Create a new notebook and document your findings. Include any assumptions, models, formulas, etc. that you're using 3. Implement the algorithm. Try not to copy others' work, rather try to do it on your own using the model you've created 4. Test and improve the algorithm 5. (Optional) Create a cool demo :), e.g. using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time) 6. Communicate the results (e.g. in the Softuni forum) Hint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource. It can show you both how to organize your notebook (which is important) and how to implement the algorithm.
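
To make the "uniform noise" baseline described above concrete, here is a minimal sketch. Note that this is plain uniform noise, not Perlin noise, and the grid size and seed are arbitrary choices.

```
import numpy as np
import matplotlib.pyplot as plt

def uniform_noise(width, height, n_min=0.0, n_max=1.0, seed=42):
    # Each point of the 2D grid gets an independent value in [n_min, n_max]
    rng = np.random.default_rng(seed)
    return rng.uniform(n_min, n_max, size=(height, width))

noise = uniform_noise(128, 128)
plt.imshow(noise, cmap="gray")
plt.show()
```

Perlin noise replaces these independent values with smoothly interpolated gradient contributions, which is exactly what the resources linked above walk through.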
1/14 최초 구현 by 소연 수정 및 테스트 시 본 파일이 아닌 사본 사용을 부탁드립니다. ``` import os, sys from google.colab import drive drive.mount('/content/drive') %cd /content/drive/Shareddrives/KPMG_Ideation import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from pprint import pprint from krwordrank.word import KRWordRank from copy import deepcopy import kss import itertools import unicodedata import requests from functools import reduce from bs4 import BeautifulSoup import string import torch from textrankr import TextRank from lexrankr import LexRank from nltk.corpus import stopwords from nltk.tokenize import word_tokenize, sent_tokenize from pydub import AudioSegment from konlpy.tag import Okt import re import nltk # nltk.download('punkt') # import pre-trained model -- frameBERT (pytorch GPU 환경 필요) %cd /content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT !pip install transformers import frame_parser path="/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT" parser = frame_parser.FrameParser(model_path=path, language='ko') ##### below are permanently installed packages ##### # nb_path = '/content/notebooks' # os.symlink('/content/drive/Shareddrives/KPMG_Ideation', nb_path) # sys.path.insert(0, nb_path) # !pip install --target=$nb_path pydub # !pip install --target=$nb_path kss # %cd /content/drive/Shareddrives/KPMG_Ideation/hanspell # !python setup.py install # !pip install --target=$nb_path transformers # !apt-get update # !apt-get g++ openjdk-8-jdk # !pip3 install --target=$nb_path konlpy # !pip install --target=$nb_path soykeyword # !pip install --target=$nb_path krwordrank # !pip install --target=$nb_path bert # !pip install --target=$nb_path textrankr # !pip install --target=$nb_path lexrankr # Due to google api credentials, SpeechRecognition needs to be installed everytime !pip install SpeechRecognition import speech_recognition as sr # !pip install --upgrade google-cloud-speech def to_wav(audio_file_name): if audio_file_name.split('.')[1] == 'mp3': sound = AudioSegment.from_mp3(audio_file_name) audio_file_name = audio_file_name.split('.')[0] + '.wav' sound.export(audio_file_name, format="wav") if audio_file_name.split('.')[1] == 'm4a': sound = AudioSegment.from_file(file_name,'m4a') audio_file_name = audio_file_name.replace('m4a','wav') sound.export(audio_file_name, format="wav") #!/usr/bin/env python3 files_path = '' file_name = '' startMin = 0 startSec = 0 endMin = 4 endSec = 30 # Time to miliseconds startTime = startMin*60*1000+startSec*1000 endTime = endMin*60*1000+endSec*1000 %cd /content/drive/Shareddrives/KPMG_Ideation/data file_name='audio_only_1.m4a' track = AudioSegment.from_file(file_name,'m4a') wav_filename = file_name.replace('m4a', 'wav') file_handle = track.export(wav_filename, format='wav') song = AudioSegment.from_wav('audio_only_1.wav') extract = song[startTime:endTime] # Saving as wav extract.export('result.wav', format="wav") AUDIO_FILE = os.path.join(os.path.dirname(os.path.abspath('data')), "result.wav") # use the audio file as the audio source r = sr.Recognizer() with sr.AudioFile(AUDIO_FILE) as source: audio = r.record(source) # read the entire audio file # recognize speech using Google Speech Recognition try: # for testing purposes, we're just using the default API key # to use another API key, use `r.recognize_google(audio, key="GOOGLE_SPEECH_RECOGNITION_API_KEY")` # instead of `r.recognize_google(audio)` txt = r.recognize_google(audio, language='ko') print("Google Speech Recognition:" + txt) 
except sr.UnknownValueError: print("Google Speech Recognition could not understand audio") except sr.RequestError as e: print("Could not request results from Google Speech Recognition service; {0}".format(e)) %cd /content/drive/Shareddrives/KPMG_Ideation/hanspell from hanspell import spell_checker chked="" line = kss.split_sentences(txt) for i in range(len(line)): line[i] = spell_checker.check(line[i])[2] print("Checked spelling ",line[i]) chked += "".join(line[i]) chked += ". " chked okt = Okt() class Text(): def __init__(self, text): text = re.sub("'", ' ', text) paragraphs = text.split('\n') self.text = text self.paragraphs = [i for i in paragraphs if i] self.counts = len(self.paragraphs) self.docs = [kss.split_sentences(paragraph) for paragraph in paragraphs if kss.split_sentences(paragraph)] self.newtext = deepcopy(self.text) print("TEXT") def findall(self, p, s): i = s.find(p) while i != -1: yield i i = s.find(p, i + 1) def countMatcher(self, sentences, paragraph_no): paragraph = self.docs[paragraph_no] total_no = len(paragraph) vec = [0] * total_no for idx, candidate in enumerate(paragraph): for sentence in sentences: if sentence[:4] in candidate: vec[idx] += 1 return vec class Highlight(Text): def __init__(self, text): super().__init__(text) print("Highlight") wordrank_extractor = KRWordRank(min_count=3, max_length=10) self.keywords, rank, graph = wordrank_extractor.extract(self.paragraphs) self.path = "/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT" p = [] kw = [] for k, v in self.keywords.items(): p.append(okt.pos(k)) kw.append(k) words = self.text.split(' ') s = set() keylist = [word for i in kw for word in words if i in word] keylist = [i for i in keylist if len(i)>2] for i in keylist: if len(i)>2: s.add(i) # print("KEYLIST: ",keylist) p = [okt.pos(word) for word in s] self.s = set() for idx in range(len(p)): ls = p[idx] for tags in ls: word,tag = tags if tag == "Noun": if len(word)>=2: self.s.add(word) self.keys = [] for temp in self.s: self.keys.append(" " + str(temp)) print("KEYWORDS: ", self.keys) def add_tags_conj(self, txt): conj = '그리고, 그런데, 그러나, 그래도, 그래서, 또는, 및, 즉, 게다가, 따라서, 때문에, 아니면, 왜냐하면, 단, 오히려, 비록, 예를 들어, 반면에, 하지만, 그렇다면, 바로, 이에 대해' conj = conj.replace("'", "") self.candidates = conj.split(",") self.newtext = deepcopy(txt) self.idx = [(i, i + len(candidate)) for candidate in self.candidates for i in self.findall(candidate, txt)] for i in range(len(self.idx)): try: self.idx = [(start, start + len(candidate)) for candidate in self.candidates for start in self.findall(candidate, self.newtext)] word = self.newtext[self.idx[i][0]:self.idx[i][1]] self.newtext = word.join([self.newtext[:self.idx[i][0]], self.newtext[self.idx[i][1]:]]) except: pass return self.newtext class Summarize(Highlight): def __init__(self, text, paragraph_no): super().__init__(text) print("length of paragraphs ",len(self.paragraphs)) self.txt = self.paragraphs[paragraph_no] self.paragraph_no = paragraph_no def summarize(self): url = "https://api.smrzr.io/v1/summarize?num_sentences=5&algorithm=kmeans" headers = { 'content-type': 'raw/text', 'origin': 'https://smrzr.io', 'referer': 'https://smrzr.io/', 'sec-fetch-dest': 'empty', 'sec-fetch-mode': 'cors', 'sec-fetch-site': 'same-site', "user-agent": "Mozilla/5.0" } resp = requests.post(url, headers=headers, data= self.txt.encode('utf-8')) assert resp.status_code == 200 summary = resp.json()['summary'] temp = summary.split('\n') print("BERT: ", temp) return temp def summarizeTextRank(self): tr = TextRank(sent_tokenize) 
summary = tr.summarize(self.txt, num_sentences=5).split('\n') print("Textrank: ",summary) return summary def summarizeLexRank(self): lr = LexRank() lr.summarize(self.txt) summaries = lr.probe() print("Lexrank: ",summaries) return summaries def ensembleSummarize(self): a = np.array(self.countMatcher(self.summarize(), self.paragraph_no)) try: b = np.array(self.countMatcher(self.summarizeLexRank(), self.paragraph_no)) except: b = np.zeros_like(a) c = np.array(self.countMatcher(self.summarizeTextRank(),self.paragraph_no)) result= a+b+c i, = np.where(result == max(result)) txt, index = self.docs[self.paragraph_no][i[0]], i[0] return txt, index result = chked high = Highlight(result) summarizer = Summarize(chked, 0) sum, id = summarizer.ensembleSummarize() print("summarized ",sum) sum ``` - 사용자 인식(speaker identification)이 됐으면 좋겠다 -- clova NOTE 사용시 해결 > 무료 api는 supervised만 있는 듯 >google speech api는 한국어 speaker diarization 지원 X - 시간단위로 잘리는 것 루프 만들기 - 기본 웹프레임워크 만들기 - 아웃풋 어떤 모양일지?
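
As a rough sketch of the time-based loop mentioned in the notes above, the recording could be split into fixed-length chunks and each chunk transcribed separately. This reuses the `AudioSegment` and `speech_recognition` setup from the cells above; the chunk length and file names are illustrative choices, not part of the original notebook.

```
# Minimal sketch: split the recording into fixed-length chunks and
# transcribe each chunk with the same recognizer settings used above.
chunk_ms = 60 * 1000  # 60-second chunks (illustrative)
song = AudioSegment.from_wav('audio_only_1.wav')

r = sr.Recognizer()
pieces = []
for i, start in enumerate(range(0, len(song), chunk_ms)):
    chunk = song[start:start + chunk_ms]
    chunk_name = 'chunk_{}.wav'.format(i)
    chunk.export(chunk_name, format='wav')
    with sr.AudioFile(chunk_name) as source:
        audio = r.record(source)
    try:
        pieces.append(r.recognize_google(audio, language='ko'))
    except sr.UnknownValueError:
        pass  # skip chunks that could not be transcribed

txt = ' '.join(pieces)
```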
## Networks and Simulation ### Packages ``` %%writefile magic_functions.py from tqdm import tqdm from multiprocess import Pool import scipy import networkx as nx import random import pandas as pd import numpy as np import rpy2.robjects as robjects from rpy2.robjects import pandas2ri from sklearn.metrics.pairwise import cosine_similarity from tqdm.notebook import tqdm import warnings warnings.filterwarnings("ignore") import pickle from scipy import stats ### read percentage of organizations in each region and market cap range p_reg = pd.read_excel('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/prob_mats.xlsx', 'reg',index_col=0) p_med = pd.read_excel('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/prob_mats.xlsx', 'med',index_col=0) ``` ### Generating network with desired characteristics ``` def create_network(N,nr,er,asa,bs_n,m_size): ### Graph generation ## Total organizations N=N ## region specific N n_regions_list=[int(0.46*N),int( 0.16*N),int( 0.38*N)] if (len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])!=N): if (len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])-N)>0: n_regions_list[0] = n_regions_list[0]+len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])-N else: n_regions_list[0] = n_regions_list[0]-len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])+N g = nx.random_partition_graph(n_regions_list, p_in= 0.60, p_out=0.15, seed=123, directed=True) edge_list_df=pd.DataFrame(list(g.edges(data=True))) edge_list_df.columns=['source','target','weight'] ### #calculate n of b,bs,s nr_n=[int(nr[0]*n_regions_list[0]),int(nr[1]*n_regions_list[0]),int(nr[2]*n_regions_list[0])] er_n=[int(er[0]*n_regions_list[1]),int(er[1]*n_regions_list[1]),int(er[2]*n_regions_list[1])] asa_n=[int(asa[0]*n_regions_list[2]),int(asa[1]*n_regions_list[2]),int(asa[2]*n_regions_list[2])] if (np.sum(nr_n)<n_regions_list[0]): nr_n[0]=nr_n[0]+(n_regions_list[0]-np.sum(nr_n)) if (np.sum(er_n)<n_regions_list[1]): er_n[0]=er_n[0]+(n_regions_list[1]-np.sum(er_n)) if (np.sum(asa_n)<n_regions_list[2]): asa_n[0]=asa_n[0]+(n_regions_list[2]-np.sum(asa_n)) ## if bs n controlled k_diff=nr_n[2]-int((nr_n[0]+nr_n[2])/((nr_n[0]/nr_n[2])+1+bs_n)) nr_n[2]=nr_n[2]-k_diff nr_n[0]=nr_n[0]+k_diff k_diff=er_n[2]-int((er_n[0]+er_n[2])/((er_n[0]/er_n[2])+1+bs_n)) er_n[2]=er_n[2]-k_diff er_n[0]=er_n[0]+k_diff k_diff=asa_n[2]-int((asa_n[0]+asa_n[2])/((asa_n[0]/asa_n[2])+1+bs_n)) asa_n[2]=asa_n[2]-k_diff asa_n[0]=asa_n[0]+k_diff # choose b , s , bs #nr list1=range(0,n_regions_list[0]) random.seed(10) list1_0=random.sample(list1, nr_n[0]) random.seed(10) list1_1=random.sample(pd.DataFrame(set(list1)-set(list1_0)).iloc[:,0].tolist(),nr_n[1]) random.seed(10) list1_2=random.sample(pd.DataFrame(set(list1)-(set(list1_1).union(set(list1_0)))).iloc[:,0].tolist(),nr_n[2]) #eur list2=range(0+n_regions_list[0],n_regions_list[1]+n_regions_list[0]) random.seed(10) list2_0=random.sample(list2, er_n[0]) random.seed(10) list2_1=random.sample(pd.DataFrame(set(list2)-set(list2_0)).iloc[:,0].tolist(),er_n[1]) random.seed(10) list2_2=random.sample(pd.DataFrame(set(list2)-(set(list2_1).union(set(list2_0)))).iloc[:,0].tolist(),er_n[2]) #asi list3=range(0+n_regions_list[0]+n_regions_list[1],n_regions_list[2]+n_regions_list[0]+n_regions_list[1]) random.seed(10) list3_0=random.sample(list3, asa_n[0]) random.seed(10) list3_1=random.sample(pd.DataFrame(set(list3)-set(list3_0)).iloc[:,0].tolist(),asa_n[1]) random.seed(10) 
list3_2=random.sample(pd.DataFrame(set(list3)-(set(list3_1).union(set(list3_0)))).iloc[:,0].tolist(),asa_n[2]) # nodes_frame=pd.DataFrame(range(N),columns=['nodes']) nodes_frame['partition']=n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'] nodes_frame['category']="" nodes_frame['category'][list1_0]="buyer" nodes_frame['category'][list2_0]="buyer" nodes_frame['category'][list3_0]="buyer" nodes_frame['category'][list1_1]="both" nodes_frame['category'][list2_1]="both" nodes_frame['category'][list3_1]="both" nodes_frame['category'][list1_2]="sup" nodes_frame['category'][list2_2]="sup" nodes_frame['category'][list3_2]="sup" # params_sn=pd.read_csv('skew_norm_params_reg_tier_mark_size.csv',index_col=0) nodes_frame['ms']="" ########### draw a market size based on region and tier for i in nodes_frame['nodes']: ps = params_sn.loc[(params_sn['tier']==nodes_frame['category'][i])&((params_sn['reg']==nodes_frame['partition'][i]))] #print(ps) np.random.seed(seed=123) nodes_frame['ms'][i] = stats.skewnorm(ps['ae'], ps['loce'], ps['scalee']).rvs(1)[0] nqn1=np.quantile(nodes_frame['ms'],0.05) nqn3=np.quantile(nodes_frame['ms'],0.5) nodes_frame['ms']=nodes_frame['ms']+ m_size*nodes_frame['ms'] dummy=pd.DataFrame(columns=['ms']) dummy['ms']=range(0,N) for i in range(0,N): if nodes_frame.iloc[i,3]<=nqn1: dummy['ms'][i]="low" elif nodes_frame.iloc[i,3]<=nqn3: dummy['ms'][i]="med" else: dummy['ms'][i]="high" nodes_frame['ms2']=dummy['ms'] buy_list=list1_0+list2_0+list3_0 sup_list=list1_2+list2_2+list3_2 edge_list_df_new=edge_list_df.drop([i for i, e in enumerate(list(edge_list_df['source'])) if e in set(sup_list)],axis=0) new_index=range(edge_list_df_new.shape[0]) edge_list_df_new.index=new_index edge_list_df_new=edge_list_df_new.drop([i for i, e in enumerate(list(edge_list_df_new['target'])) if e in set(buy_list)],axis=0) new_index=range(edge_list_df_new.shape[0]) edge_list_df_new.index=new_index g = nx.DiGraph( ) # Add edges and edge attributes for i, elrow in edge_list_df_new.iterrows(): g.add_edge(elrow[0], elrow[1], attr_dict=elrow[2]) return [edge_list_df_new,nodes_frame,g] ``` ### Generate initial attributes #### Python wrapper ``` def sample_lab_attr_all_init(N): # Defining the R script and loading the instance in Python r = robjects.r r['source']('sampling_for_attributes_normal.R') # Loading the function we have defined in R. sampling_for_attributes_r2 = robjects.globalenv['sampling_for_attributes_normal'] #Invoking the R function and getting the result df_result_r = sampling_for_attributes_r2(N) #Converting it back to a pandas dataframe. 
df_result = pandas2ri.rpy2py(df_result_r) return(df_result) ``` #### R function for beta distributed tolerance ``` library(bnlearn) library(stats) sampling_for_attributes_normal <- function(N){ #' Preprocessing df to filter country #' #' data_orgs<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv') library(bnlearn) library(stats) my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit.rds") x_s <- seq(0, 1, length.out = N) y1<-dbeta(x_s, 1.1, 0.5)*100 x<-y1 #N_reg=N for (i in 1:length(x)){ ## S3 method for class 'bn.fit' sampled_data<-rbn(my_model, n = 500) sampled_data[,c(1:7)] <- lapply(sampled_data[,c(1:7)], as.numeric) sampled_data[sampled_data <=0] <- NA sampled_data[sampled_data >=100] <- NA r_ind<-rowMeans(sampled_data, na.rm=FALSE) sampled_data<-sampled_data[!is.na(r_ind),] #head(sampled_data) sampled_data$score= as.numeric(rowMeans(sampled_data))#as.matrix(rowMeans(sampled_data)) sc_diffs=abs(x[i]-sampled_data$score) if(i==1){ sampled_data_f<-sampled_data[sc_diffs==min(sc_diffs),] }else{ sampled_data_f<-rbind(sampled_data_f,sampled_data[sc_diffs==min(sc_diffs),]) } } return(sampled_data_f) } ``` ### Generate new attributes #### Python wrapper ``` def sample_lab_attr_new_B(N,reg,s_av1,s_av2): # Defining the R script and loading the instance in Python r = robjects.r r['source']('sampling_for_attributes.R') # Loading the function we have defined in R. sampling_for_attributes_r = robjects.globalenv['sampling_for_attributes'] #Invoking the R function and getting the result df_result_r = sampling_for_attributes_r(N,reg) #print(df_result_r.head()) #Converting it back to a pandas dataframe. df_result = pandas2ri.rpy2py(df_result_r) """if (s_av2-s_av1)<2: s_av2=np.min([s_av2+2,64.28571]) if (s_av2-s_av1)>0: s_av1=np.max([s_av1-2,0]) else: s_av1=np.max([s_av2-2,0]) if s_av2>100: s_av2=100""" sampled_data=df_result.loc[((df_result['score']>=(s_av1)) & (df_result['score']<=(s_av2)))] if sampled_data.shape[0]==0: s_av=np.mean([s_av1,s_av2]) s_th=s_av*0.05 sampled_data=df_result.loc[((df_result['score']>=(s_av-s_th)) | (df_result['score']<=(s_av+s_th)))] tmp_vector=np.abs(sampled_data['score']-s_av) #tmp_vector2=np.abs(df_result['score']-s_av) sampled_data=sampled_data.loc[tmp_vector==np.min(tmp_vector)] return(sampled_data.sample()) ``` #### R function to sample new attributes ``` sampling_for_attributes <- function(N,reg){ #' Preprocessing df to filter country #' #' library(bnlearn) #my_model <- readRDS("C:/Users/ADMIN/OneDrive/Documents/IIM_R1/proj2/model_fit.rds") if ( reg==1){ #C:/Users/ADMIN/OneDrive/Documents my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_1.rds") }else if(reg==2){ my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_2.rds") }else{ my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_3.rds") } N_reg=N N=N+500 ## S3 method for class 'bn.fit' sampled_data<-rbn(my_model, n = N) sampled_data[,c(1:7)] <- lapply(sampled_data[,c(1:7)], as.numeric) sampled_data[sampled_data <=0] <- NA sampled_data[sampled_data >=100] <- NA r_ind<-rowMeans(sampled_data, na.rm=FALSE) sampled_data<-sampled_data[!is.na(r_ind),] #head(sampled_data) sampled_data<-sampled_data[sample(nrow(sampled_data), N_reg), ] rownames(sampled_data) <- seq(length=nrow(sampled_data)) sampled_data #sampled_data$score= (0.38752934*sampled_data[,1]+ 0.37163856*sampled_data[,2]+ 0.32716766*sampled_data[,3]+ 0.39613783*sampled_data[,4]+ 
0.38654069*sampled_data[,5]+0.38654069*sampled_data[,6]+ 0.38589444*sampled_data[,7])/(0.38752934+ 0.37163856+ 0.32716766+ 0.39613783+ 0.38654069+0.38654069+ 0.38589444)
  sampled_data$score= as.numeric(rowMeans(sampled_data)) #as.matrix(rowMeans(sampled_data))
  return(sampled_data)
}
```

### Bayesian Network fit to attributes of an organization

```
library(bnlearn)

data<-read.csv('C:/Users/ADMIN/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
head(data)

# split by region, then keep only the seven practice columns
data1<-data[data$Region=='North America',]
data2<-data[data$Region=='Europe',]
data3<-data[data$Region=='Asia',]
data<-data[,c(2:8)]
data1<-data1[,c(2:8)]
data2<-data2[,c(2:8)]
data3<-data3[,c(2:8)]
dim(data)
summary(data)
data[,c(1:7)] <- lapply(data[,c(1:7)], as.numeric)
data1[,c(1:7)] <- lapply(data1[,c(1:7)], as.numeric)
data2[,c(1:7)] <- lapply(data2[,c(1:7)], as.numeric)
data3[,c(1:7)] <- lapply(data3[,c(1:7)], as.numeric)

# learn one network structure overall and one per region with hill-climbing
bn.scores <- hc(data)
bn.scores1 <- hc(data1)
bn.scores2 <- hc(data2)
bn.scores3 <- hc(data3)
plot(bn.scores)
plot(bn.scores1)
plot(bn.scores2)
plot(bn.scores3)
bn.scores

# fit the parameters of each learned structure
fit = bn.fit(bn.scores,data )
fit1 = bn.fit(bn.scores1,data1 )
fit2 = bn.fit(bn.scores2,data2 )
fit3 = bn.fit(bn.scores3,data3 )
fit
bn.fit.qqplot(fit)
bn.fit.xyplot(fit)
bn.fit.histogram(fit)
bn.fit.histogram(fit1)
bn.fit.histogram(fit2)
bn.fit.histogram(fit3)

# save the per-region fits used by the attribute samplers above
saveRDS(fit1, file = "model_fit_1.rds")
saveRDS(fit2, file = "model_fit_2.rds")
saveRDS(fit3, file = "model_fit_3.rds")

## S3 method for class 'bn' (usage signature only):
## rbn(bn.scores, n = 1000, data, fit = "mle", ..., debug = FALSE)

## S3 method for class 'bn.fit'
sampled_data<-rbn(fit, n = 1000)
head(sampled_data)
write.csv(sampled_data,file = 'C:/Users/ADMIN/OneDrive/Documents/IIM_R1/proj2/sampled_data.csv')
```

### R scripts for cumulative and probability density of a tolerance score

```
prob_cdf<-function(cur_sc,reg){
  data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  #head(data)
  reg_lt=unique(data$Region)
  data<-data[data$Region==reg_lt[reg],]
  #hist(data$Tolerance,probability=TRUE)
  #lines(density(data$Tolerance),col="red")
  ecdff<-ecdf(data$Tolerance)
  p=1-ecdff(cur_sc)
  return(p)
}

prob_cdf_m<-function(cur_sc,msh){
  data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  #head(data)
  m_lt=c(unique(data$market_cap)[1],unique(data$market_cap)[3],unique(data$market_cap)[2])
  data<-data[data$market_cap==m_lt[msh],]
  #hist(data$Tolerance,probability=TRUE)
  #lines(density(data$Tolerance),col="red")
  ecdff<-ecdf(data$Tolerance)
  p=1-ecdff(cur_sc)
  return(p)
}

prob_pdf<-function(cur_sc,reg){
  #library(MEPDF)
  data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  #head(data)
  reg_lt=unique(data$Region)
  data<-data[data$Region==reg_lt[reg],]
  #hist(data$Tolerance,probability=TRUE)
  #lines(density(data$Tolerance),col="red")
  #ecdff<-epdf(data$Tolerance)
  kd=density(data$Tolerance)
  p= kd$y[which(abs(kd$x-cur_sc)==min(abs(kd$x-cur_sc)))]
  return(p)
}

prob_pdf_m<-function(cur_sc,msh){
  #library(MEPDF)
  data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  #head(data)
  m_lt=c(unique(data$market_cap)[1],unique(data$market_cap)[3],unique(data$market_cap)[2])
  data<-data[data$market_cap==m_lt[msh],]
  #hist(data$Tolerance,probability=TRUE)
  #lines(density(data$Tolerance),col="red")
  kd=density(data$Tolerance)
  p= kd$y[which(abs(kd$x-cur_sc)==min(abs(kd$x-cur_sc)))]
  return(p)
}
```

### Python scripts for running R scripts

```
def prob_cdf(cur_sc,reg):
    # Defining the R script and loading the instance in Python
    r = robjects.r
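    # Each wrapper below follows the same pattern: source prob_cdf.R so the R helpers
    # (prob_cdf, prob_cdf_m, prob_pdf, prob_pdf_m) become available via robjects.globalenv,
    # then call the matching helper and return its result.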
r['source']('prob_cdf.R') # Loading the function we have defined in R. prob_cdf_r = robjects.globalenv['prob_cdf'] #Invoking the R function and getting the result df_result_p= prob_cdf_r(cur_sc,reg) #Converting it back to a pandas dataframe. #df_result = pandas2ri.rpy2py(df_result_r) return(df_result_p) def prob_cdf_m(cur_sc,msh): # Defining the R script and loading the instance in Python r = robjects.r r['source']('prob_cdf.R') # Loading the function we have defined in R. prob_cdf_m = robjects.globalenv['prob_cdf_m'] #Invoking the R function and getting the result df_result_p= prob_cdf_m(cur_sc,msh) #Converting it back to a pandas dataframe. #df_result = pandas2ri.rpy2py(df_result_r) return(df_result_p) def prob_pdf(cur_sc,reg): # Defining the R script and loading the instance in Python r = robjects.r r['source']('prob_cdf.R') # Loading the function we have defined in R. prob_pdf_r = robjects.globalenv['prob_pdf'] #Invoking the R function and getting the result df_result_p= prob_pdf_r(cur_sc,reg) #Converting it back to a pandas dataframe. #df_result = pandas2ri.rpy2py(df_result_r) return(df_result_p) def prob_pdf_m(cur_sc,msh): # Defining the R script and loading the instance in Python r = robjects.r r['source']('prob_cdf.R') # Loading the function we have defined in R. prob_pdf_m = robjects.globalenv['prob_pdf_m'] #Invoking the R function and getting the result df_result_p= prob_pdf_m(cur_sc,msh) #Converting it back to a pandas dataframe. #df_result = pandas2ri.rpy2py(df_result_r) return(df_result_p) ``` ### Simulation ``` def simulation_continous (node_attr,edge_list_df,num_sim,W,bs1,bs2,N,r_on,m_on,p_reg,p_med,probs_mat,probs_mat2,run_iter,alpha1,alpha2,alpha3,Tmp,rgn,mcp): #nodes and edges N=N node_attr = node_attr edge_list_df = edge_list_df #P's blanck_data_tot=np.empty([N,32,num_sim],dtype='object') #blanck_data_tot2=np.empty([N,4,num_sim],dtype='object') for i in tqdm (range (num_sim), desc="Running i ..."): blanck_data=np.empty([N,32],dtype='object') #blanck_data2=np.empty([N,4],dtype='object') # node attr to edge attr df_3=cosine_similarity(node_attr.iloc[:,:8]) df_4=pd.DataFrame(df_3) df_4.values[[np.arange(len(df_4))]*2] = np.nan #mat_data.head() edge_list_2=df_4.stack().reset_index() edge_list_2.columns=['source','target','weight'] #edge_list_2.head() edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target']) #edge_list_f.head() edge_list_f.drop('weight_x',axis=1,inplace=True) edge_list_f.columns=['source','target','weight'] #edge_list_f.head() st = ["high","low"] ########################################################### for j in tqdm (range(0,N), desc="Running j..."): #N=np.float(N) if len(list(np.where(edge_list_f.iloc[:,1]==j)[0]))>=1: #################################################################################################### ########################################## MIMETIC################################################## #################################################################################################### st=["high","low"] st=pd.DataFrame(st) st.columns=['state'] #Index in node attributes df['partitions'] == jth row partition column p_tier_ind = [i for i, e in enumerate(list(node_attr['tier'])) if e in set([node_attr.iloc[j,10]])] t_node_attr = node_attr.iloc[p_tier_ind,:] #t_node_attr=t_node_attr.reset_index().iloc[:,1:] #t_node_attr.head() t_node_attr_score=t_node_attr['score'].copy() t_node_attr_score=t_node_attr_score.reset_index().iloc[:,1:] #t_node_attr_score 
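                    # Mimetic pressure: every peer in the focal node's tier is labelled 'high' or
                    # 'low' relative to node j's score; the resulting state shares, average cosine
                    # distances and average scores become tier_p, cur_node and score_m below.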
#t_node_attr.index[tnr] for tnr in range(0,t_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<t_node_attr_score['score'][tnr]: t_node_attr['state'][t_node_attr.index[tnr]]='high' else: t_node_attr['state'][t_node_attr.index[tnr]]='low' tier_p=pd.DataFrame(t_node_attr['state'].value_counts()/np.sum(t_node_attr['state'].value_counts())) tier_p=tier_p.reset_index() tier_p.columns=['state','t_p'] #tier_p t_tier_p=pd.merge(st,tier_p,how="left",left_on=['state'],right_on='state') t_tier_p=t_tier_p.fillna(0.01) tier_p=t_tier_p #tier_p ############################################################################### #d_tier.index #pd.DataFrame(node_attr.iloc[p_tier_ind,-2-2-1]) #df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j].reset_index().iloc[:,-1] #states and distances #d_tier=pd.concat([node_attr.iloc[p_tier_ind,-2-2-1], # df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1) d_tier=pd.concat([t_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1) #print(Ld) #d_tier=d_tier.drop([j]) #d_tier=d_tier.reset_index() d_tier=d_tier.fillna(1) #and average disances per state d_tier_avg=d_tier.groupby(['state']).mean(str(j)) #d_tier_avg s_tier_avg=pd.DataFrame(t_node_attr.groupby(['state']).mean()['score']) s_tier_avg=pd.merge(st,s_tier_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) #s_tier_avg ## state local prob and avg distance mimetic_p=pd.merge(tier_p,d_tier_avg, how='left', left_on=['state'], right_on = ['state']) mimetic_p=pd.merge(mimetic_p,s_tier_avg, how='left', left_on=['state'], right_on = ['state']) #mimetic_p mimetic_p.columns=['state','tier_p','cur_node','score_m'] mimetic_p['tier_p'] = mimetic_p['tier_p']/np.sum(mimetic_p['tier_p']) #mimetic_p #round(mimetic_p['score_m'][0]) ################################################ region_ind = [i for i, e in enumerate(list(p_reg.columns)) if e in set([node_attr.iloc[j,9]])] ms_ind = [i for i, e in enumerate(list(p_med.columns)) if e in set([node_attr.iloc[j,12]])] h_reg=prob_pdf(round(round(mimetic_p['score_m'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(mimetic_p['score_m'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(mimetic_p['score_m'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(mimetic_p['score_m'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index mimetic_p['pbreg_m']=pbreg['pbreg'] mimetic_p['pbm_m']=pbm['pbm'] #mimetic_p #################################################################################################### ########################################## Local & Global / inform reg & normative ################# #################################################################################################### #Index in node attributes df for rows with target column == j prnt_ind = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,1]==j].iloc[:,0])] #Index in node 
attributes df for rows with target column == j prnt_ind2 = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,0]==j].iloc[:,1])] l_node_attr = node_attr.iloc[prnt_ind,:] l_node_attr_score=l_node_attr['score'].copy() l_node_attr_score=l_node_attr_score.reset_index().iloc[:,1:] #len(l_node_attr.iloc[:,-2-2-1]) #l_node_attr.loc[j] for tnr in range(0,l_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<l_node_attr_score['score'][tnr]: l_node_attr['state'][l_node_attr.index[tnr]]='high' else: l_node_attr['state'][l_node_attr.index[tnr]]='low' l2_node_attr = node_attr.iloc[prnt_ind2,:] l2_node_attr_score=l2_node_attr['score'].copy() l2_node_attr_score=l2_node_attr_score.reset_index().iloc[:,1:] for tnr in range(0,l2_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<l2_node_attr_score['score'][tnr]: l2_node_attr['state'][l2_node_attr.index[tnr]]='high' else: l2_node_attr['state'][l2_node_attr.index[tnr]]='low' #Lp1 if len(prnt_ind2)>0: #states prob of parent nodes(can also clculate d*count probabilities) Lp1 = pd.DataFrame(l_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l_node_attr.iloc[:,-2-2-1].value_counts())) Lp1 = Lp1.reset_index() #states prob of parent nodes(can also clculate d*count probabilities) Lp2 = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts())) Lp2 = Lp2.reset_index() Lp1=pd.merge(st,Lp1,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp2=pd.merge(st,Lp2,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp=pd.merge(Lp1,Lp2,how="left",left_on=['state_x'],right_on='state_x') #print(Lp.head()) Lp['state']=bs1*Lp['state_y_x']+bs2*Lp['state_y_y'] Lp=Lp.iloc[:,[0,5]] Lp.columns=['index','state'] #print(Lp1.head()) #print(Lp2.head()) else: #states prob of parent nodes(can also clculate d*count probabilities) Lp = pd.DataFrame(l_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l_node_attr.iloc[:,-2-2-1].value_counts())) Lp = Lp.reset_index() #print(Lp) Lp=pd.merge(st,Lp,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp=Lp.iloc[:,[0,2]] #print(Lp) Lp.columns=['index','state'] #Lp.head() if len(prnt_ind2)>0: #states and distances Ld1=pd.concat([l_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind,:].index),j] ],axis=1) Lad1=Ld1.groupby(['state']).mean() #states and distances Ld2=pd.concat([l2_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1) #Lp2.head() Lad2=Ld2.groupby(['state']).mean() Lad1=pd.merge(st,Lad1,how="left",left_on=['state'],right_on='state').fillna(0.01) Lad2=pd.merge(st,Lad2,how="left",left_on=['state'],right_on='state').fillna(0.01) Lad=pd.merge(Lad1,Lad2,how="left",left_on=['state'],right_on='state').fillna(0.01) #print(Lad) Lad['state_n']=bs1*Lad[str(j)+'_x']+bs2*Lad[str(j)+'_y'] Lad=Lad.iloc[:,[0,3]] Lad.columns=['state',str(j)] Lad.index=Lad['state'] Lad=Lad.iloc[:,1] #print(Lad.head()) s_l1_avg=pd.DataFrame(l_node_attr.groupby(['state']).mean()['score']) s_l1_avg=pd.merge(st,s_l1_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l2_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score']) s_l2_avg=pd.merge(st,s_l2_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l_avg=pd.merge(s_l1_avg,s_l2_avg,how="left",left_on=['state'],right_on='state') #print(s_l_avg) s_l_avg['score_n']=bs1*s_l_avg['score'+'_x']+bs2*s_l_avg['score'+'_y'] s_l_avg=s_l_avg.iloc[:,[0,3]] 
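                        # Upstream (incoming-edge) and downstream (outgoing-edge) neighbour summaries
                        # are blended with weights bs1 and bs2 to form the local (normative) state
                        # probabilities, distances and scores.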
s_l_avg.columns=['state','score'] else: #states and distances Ld=pd.concat([l_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind,:].index),j] ],axis=1) #print(Ld) #and average disances per state Lad=Ld.groupby(['state']).mean()#str(j) Lad=pd.merge(st,Lad,how="left",left_on=['state'],right_on='state').fillna(0.01) Lad=Lad.reset_index() s_l_avg=pd.DataFrame(l_node_attr.groupby(['state']).mean()['score']) s_l_avg=pd.merge(st,s_l_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l_avg=s_l_avg.reset_index() #Lad.head() #print(Lad) #Lad #s_l_avg #print(dist_local) if len(prnt_ind2)>0: ## state local prob and avg distance dist_local=pd.merge(Lp,Lad, how='left', left_on=['index'], right_on = ['state']) dist_local.columns=['state','local_prob','cur_node_l'] #dist_local dist_local=pd.merge(dist_local,s_l_avg, how='left', left_on=['state'], right_on = ['state']) else : #bs1*s_l_avg['score'+'_x']+s_l_avg*Lad['score'+'_y'] dist_local=pd.merge(Lp,Lad, how='left', left_on=['index'], right_on = ['state']) dist_local=dist_local.iloc[:,[0,1,4]] dist_local.columns=['state','local_prob','cur_node_l'] #dist_local #print(s_l_avg) dist_local=pd.merge(dist_local,s_l_avg, how='left', left_on=['state'], right_on = ['state']) dist_local=dist_local.iloc[:,[0,1,2,4]] #dist_local dist_local.columns=['state','local_prob','cur_node_l','score_l'] dist_local['local_prob']=dist_local['local_prob']/np.sum(dist_local['local_prob']) #print(dist_local) h_reg=prob_pdf(round(round(dist_local['score_l'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(dist_local['score_l'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(dist_local['score_l'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(dist_local['score_l'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index dist_local['pbreg_l']=pbreg['pbreg'] dist_local['pbm_l']=pbm['pbm'] #dist_local ## global prob #glb_p=pd.DataFrame(node_attr['state'].value_counts()/np.sum(node_attr['state'].value_counts())) #glb_p=glb_p.reset_index() #glb_p.columns=['state','g_p'] st=["high","low"] st=pd.DataFrame(st) st.columns=['state'] #Index in node attributes df['partitions'] == jth row partition column p_region_ind = [i for i, e in enumerate(list(node_attr['partition'])) if e in set([node_attr.iloc[j,9]])] r_node_attr = node_attr.iloc[p_region_ind,:] r_node_attr_score=r_node_attr['score'].copy() r_node_attr_score=r_node_attr_score.reset_index().iloc[:,1:] for tnr in range(0,r_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<r_node_attr_score['score'][tnr]: r_node_attr['state'][r_node_attr.index[tnr]]='high' else: r_node_attr['state'][r_node_attr.index[tnr]]='low' glb_p=pd.DataFrame(r_node_attr['state'].value_counts()/np.sum(r_node_attr['state'].value_counts())) glb_p=glb_p.reset_index() glb_p.columns=['state','g_p'] t_glb_p=pd.merge(st,glb_p,how="left",left_on=['state'],right_on='state') t_glb_p=t_glb_p.fillna(0.01) glb_p=t_glb_p 
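                    # Global pressure within node j's region: state shares (glb_p), average cosine
                    # distances (gad) and average scores (s_g_avg) are merged into dist_global below.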
#print(glb_p) #states and distances gd=pd.concat([r_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[p_region_ind,:].index),j] ],axis=1) #print(gd) #and average disances per state gad=gd.groupby(['state']).mean(str(j)) gad=pd.merge(st,gad,how="left",left_on=['state'],right_on='state') #gad.reset_index(inplace=True) #print(gad) s_g_avg=pd.DataFrame(r_node_attr.groupby(['state']).mean()['score']) s_g_avg=pd.merge(st,s_g_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) #s_g_avg ## state local prob and avg distance dist_global=pd.merge(glb_p,gad, how='left', left_on=['state'], right_on = ['state']) dist_global=pd.merge(dist_global,s_g_avg, how='left', left_on=['state'], right_on = ['state']) #dist_local dist_global.columns=['state','glob_prob','cur_node_g','score_g'] dist_global['glob_prob'] =dist_global['glob_prob']/np.sum(dist_global['glob_prob']) #print(dist_global) h_reg=prob_pdf(round(round(dist_global['score_g'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(dist_global['score_g'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(dist_global['score_g'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(dist_global['score_g'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index dist_global['pbreg_g']=pbreg['pbreg'] dist_global['pbm_g']=pbm['pbm'] #dist_global #print('glb_p') if (((i+1)*(j+1)) % 5000) ==0: print(dist_global) ## all memetic dist_local_global=pd.merge(dist_global,dist_local, how='left', left_on=['state'], right_on = ['state']) dist_local_global=dist_local_global.fillna(0.01) #dist_local_global['m_p']=dist_local_global.product(axis=1)/np.sum(dist_local_global.product(axis=1)) #print(dist_local_global) # #################################################################################################### ########################################## All_ Pressures ########################################## #################################################################################################### # # All presures all_p = pd.merge(mimetic_p,dist_local_global,how='left', left_on=['state'], right_on = ['state']) all_p=all_p.fillna(0.01) #all_p = pd.merge(all_p,mimetic_p,how='left', left_on=['state'], right_on = ['state']) #all_p #= all_p.iloc[:,[0,4,5,6]] #0.25*all_p.iloc[:,3:5].product(axis=1) #all_p.iloc[:,3:5] #w1=w2=w3=w4=0.25 #all_p #w1*all_p['tier_p'][0]*all_p['cur_node'][0]*all_p['score_m'][0] #w1=w2=w3=w4=0.25 all_p_tpd=all_p.copy() #all_p_tpd all_p_new=pd.DataFrame(all_p_tpd['state']) #print(all_p_new) all_p_new['tier_p']=(all_p_tpd['tier_p']*all_p_tpd['cur_node'])/np.sum(all_p_tpd['tier_p']*all_p_tpd['cur_node']) all_p_new['glob_prob']=(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])/np.sum(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g']) all_p_new['local_prob']=(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])/np.sum(all_p_tpd['local_prob']*all_p_tpd['cur_node_l']) 
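                    # The regulatory (region) and market-size pressure ratios from the mimetic, local
                    # and global blocks are averaged and normalised next; the component scores
                    # (score_m, score_l, score_g) are carried along for the attribute update.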
all_p_new['pbreg']=(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])/np.sum(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l']) all_p_new['pbm']=(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])/np.sum(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l']) all_p_new['score_m']=all_p_tpd['score_m'] all_p_new['score_l']=all_p_tpd['score_l'] all_p_new['score_g']=all_p_tpd['score_g'] #pd.DataFrame(all_p_new) all_p=all_p_new """ if r_on==1: rpbr =[all_p['pbreg_m'][0],all_p['pbreg_l'][0],all_p['pbreg_g'][0]] else: rpbr =[1,1,1] if m_on==1: mpbm =[all_p['pbm_m'][0],all_p['pbm_l'][0],all_p['pbm_g'][0]] else: mpbm =[1,1,1] ptotalh=np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)/(1+np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)) ptotall=1-ptotalh ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal']) ptot.index=all_p.index all_p['ptotal']=ptot['ptotal'] #all_p """ if r_on==1: rpbr =all_p['pbreg'][0] else: rpbr =0 if m_on==1: mpbm =all_p['pbm'][0] else: mpbm =0 rpmp=(all_p['pbreg']*all_p['pbm'])/np.sum(all_p['pbreg']*all_p['pbm']) if r_on==0: rpmp[0]=1 rpmp[1]=1 ###### multivariate normal nsd2=list() for repeat in range(0,100): nsd=list() for mni in range(0,3): nsd.append(np.random.normal(0,1)) nsd2.append(np.random.multivariate_normal([0]*3,([nsd]*3))) #for ni,nsd2i in enumerate(nsd2): # nsd2[ni]=np.round_(nsd2i,2) nsd2=list(np.round_(pd.DataFrame(nsd2).mean(axis=0),2)) ## 2 if (j==0): nsd3=list() for repeat in range(0,100): nsd=list() for mni in range(0,3): nsd.append(np.random.normal(0,1)) nsd3.append(np.random.multivariate_normal([0]*3,([nsd]*3))) #for ni,nsd2i in enumerate(nsd2): # nsd2[ni]=np.round_(nsd2i,2) nsd3=list(np.round_(pd.DataFrame(nsd3).mean(axis=0),2)) #### normal epsilon_l=list() for repeat in range(0,100): epsilon_l.append(np.random.normal(0,1)) epsilon=np.mean(epsilon_l) #### """ if ((node_attr.iloc[j,9]==rgn)&(node_attr.iloc[j,12]==mcp)): w1=W[0] w2=W[1] w3=W[2] else: w1=W[3] w2=W[4] w3=W[5] """ #### if ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[0] w2=W[1] w3=W[2] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[3] w2=W[4] w3=W[5] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[6] w2=W[7] w3=W[8] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[9] w2=W[10] w3=W[11] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[12] w2=W[13] w3=W[14] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[15] w2=W[16] w3=W[17] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[18] w2=W[19] w3=W[20] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[21] w2=W[22] w3=W[23] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[24] w2=W[25] w3=W[26] else : w1=0.333 w2=0.333 w3=0.333 #ptotalh=np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))/(1+np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))) 
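                    # Desired-state probability: a tempered logistic of the weighted tier/local/global
                    # pressures (weights w1-w3 plus the alpha shifts, the correlated noise terms
                    # nsd2/nsd3 and the idiosyncratic epsilon), scaled by the regulatory/market
                    # adjustment rpmp. Note that every elif above tests the same ("NrA","high") pair,
                    # so W[0:3] (or the 0.333 default) is always selected; the branches presumably
                    # were meant to enumerate the region x market-size combinations.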
#ptotalh=((np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)/(1+np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)))*(rpmp[0])) ptotalh=((np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[0])) #ptotalh=ptotalh/np.sum(ptotalh) #ptotall=1-ptotalh ptotall=(1/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[1]) ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal']) ptot.index=all_p.index all_p['ptotal']=ptot['ptotal'] all_p['ptotal']=all_p['ptotal']/np.sum(all_p['ptotal']) #all_p #print(all_p) #0.6224593312018546 """ d_s_ind=np.where(all_p['ptotal']==np.max(all_p['ptotal']))[0][0] """ if np.count_nonzero([w1,w2,w3])!=0: if all_p['ptotal'][0]>0.6224593312018546: #0.6224593312018546: d_s_ind=0 elif all_p['ptotal'][0]<0.6224593312018546: #0.6224593312018546: d_s_ind=1 else: d_s_ind = 1 if np.random.random()<0.5 else 0 else: if all_p['ptotal'][0]>0.5: d_s_ind=0 elif all_p['ptotal'][0]<0.5: d_s_ind=1 else: d_s_ind = 1 if np.random.random()<0.5 else 0 """u = np.random.uniform() if all_p['ptotal'][0]>u: d_s_ind=0 else: d_s_ind=1""" #print(d_s_ind) """ if r_on==1: rpbr =[all_p['pbreg_m'][d_s_ind],all_p['pbreg_l'][d_s_ind],all_p['pbreg_g'][d_s_ind]] else: rpbr =[1,1,1] if m_on==1: mpbm =[all_p['pbm_m'][d_s_ind],all_p['pbm_l'][d_s_ind],all_p['pbm_g'][d_s_ind]] else: mpbm =[1,1,1] """ if r_on==1: rpbr =all_p['pbreg'][d_s_ind] else: rpbr =0 if m_on==1: mpbm =all_p['pbm'][d_s_ind] else: mpbm =0 """s_av=(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]*all_p['score_m'][d_s_ind]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]*all_p['score_l'][d_s_ind]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2]*all_p['score_g'][d_s_ind])/(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2])""" """s_av=(w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind])/(w1+w2+w3)""" # s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) # s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) if w1==0: s_av1=np.min([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) elif w2==0: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]]) elif w3==0: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]]) elif np.count_nonzero([w1,w2,w3])==1: 
s_av1=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]] s_av2=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]] elif np.count_nonzero([w1,w2,w3])==0: s_av1=node_attr['score'][j] s_av2=node_attr['score'][j] else: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) #s_av #region_ind #print(all_p) probs_mat[i,j]=np.max(all_p['ptotal']) if i==0: probs_mat2[i,j]=np.max(all_p['ptotal']) else: probs_mat2[i+j,:]=probs_mat[i,j] ## hihest prob label #desired_state = random.choices(list(all_p['state']),list(all_p['all']))[0] #desired_state = all_p['state'][d_s_ind] #desired_state #desired_state = list(all_p.loc[all_p['all']==np.max(all_p['all'])]['state'])[0] ##### draw attributes with given label """sample_df_1=sample_lab_attr_new(np.float(N),region_ind[0],s_av,0.05*s_av)""" """if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,s_av2+0.12)""" if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],(s_av1+s_av2)/2,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,(s_av1+s_av2)/2) #################################################################################################### ########################################## Update attributes ###################################### #################################################################################################### ## update node attributes for k,replc in enumerate(sample_df_1.values[0]): node_attr.iloc[j,k]=replc ## update edge attributes # node attr to edge attr df_3=cosine_similarity(node_attr.iloc[:,:8]) df_4=pd.DataFrame(df_3) df_4.values[[np.arange(len(df_4))]*2] = np.nan #mat_data.head() edge_list_2=df_4.stack().reset_index() edge_list_2.columns=['source','target','weight'] #edge_list_2.head() edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target']) #edge_list_f.head() edge_list_f.drop('weight_x',axis=1,inplace=True) edge_list_f.columns=['source','target','weight'] for k,replc in enumerate(node_attr.iloc[j,:].values): blanck_data[j,k]=replc for k,replc in enumerate(all_p.iloc[0,1:].values): blanck_data[j,k+13]=replc blanck_data[j,29]=j blanck_data[j,30]=i blanck_data[j,31]=all_p['state'][d_s_ind] #blanck_data2[:,:2,i]=np.array(edge_list_f) else: ####2 #################################################################################################### ########################################## MIMETIC################################################## #################################################################################################### st=["high","low"] st=pd.DataFrame(st) st.columns=['state'] #Index in node attributes df['partitions'] == jth row partition column p_tier_ind = [i for i, e in enumerate(list(node_attr['tier'])) if e in set([node_attr.iloc[j,10]])] t_node_attr = node_attr.iloc[p_tier_ind,:] #t_node_attr=t_node_attr.reset_index().iloc[:,1:] #t_node_attr.head() 
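                    # Branch for nodes with no incoming edges: mimetic pressure is computed from
                    # same-tier peers exactly as above, while the local component further down uses
                    # only node j's outgoing-edge neighbours (prnt_ind2).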
t_node_attr_score=t_node_attr['score'].copy() t_node_attr_score=t_node_attr_score.reset_index().iloc[:,1:] #t_node_attr_score #t_node_attr.index[tnr] for tnr in range(0,t_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<t_node_attr_score['score'][tnr]: t_node_attr['state'][t_node_attr.index[tnr]]='high' else: t_node_attr['state'][t_node_attr.index[tnr]]='low' tier_p=pd.DataFrame(t_node_attr['state'].value_counts()/np.sum(t_node_attr['state'].value_counts())) tier_p=tier_p.reset_index() tier_p.columns=['state','t_p'] #tier_p t_tier_p=pd.merge(st,tier_p,how="left",left_on=['state'],right_on='state') t_tier_p=t_tier_p.fillna(0.01) tier_p=t_tier_p #tier_p #d_tier.index #pd.DataFrame(node_attr.iloc[p_tier_ind,-2-2-1]) #df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j].reset_index().iloc[:,-1] #states and distances #d_tier=pd.concat([node_attr.iloc[p_tier_ind,-2-2-1], # df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1) d_tier=pd.concat([t_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1) #print(Ld) #d_tier=d_tier.drop([j]) #d_tier=d_tier.reset_index() d_tier=d_tier.fillna(1) #and average disances per state d_tier_avg=d_tier.groupby(['state']).mean(str(j)) #d_tier_avg s_tier_avg=pd.DataFrame(t_node_attr.groupby(['state']).mean()['score']) s_tier_avg=pd.merge(st,s_tier_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) #s_tier_avg ## state local prob and avg distance mimetic_p=pd.merge(tier_p,d_tier_avg, how='left', left_on=['state'], right_on = ['state']) mimetic_p=pd.merge(mimetic_p,s_tier_avg, how='left', left_on=['state'], right_on = ['state']) #mimetic_p mimetic_p.columns=['state','tier_p','cur_node','score_m'] mimetic_p['tier_p'] = mimetic_p['tier_p']/np.sum(mimetic_p['tier_p']) #mimetic_p #round(mimetic_p['score_m'][0]) ################################################ regulatary mem region_ind = [i for i, e in enumerate(list(p_reg.columns)) if e in set([node_attr.iloc[j,9]])] ms_ind = [i for i, e in enumerate(list(p_med.columns)) if e in set([node_attr.iloc[j,12]])] h_reg=prob_pdf(round(round(mimetic_p['score_m'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(mimetic_p['score_m'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(mimetic_p['score_m'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(mimetic_p['score_m'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index mimetic_p['pbreg_m']=pbreg['pbreg'] mimetic_p['pbm_m']=pbm['pbm'] #mimetic_p #################################################################################################### ########################################## Local & Global / inform reg & normative ################# #################################################################################################### #Index in node attributes df for rows with target column == j """prnt_ind = [i for i, e in enumerate(list(node_attr.index)) if e in 
set(edge_list_f.loc[edge_list_f.iloc[:,1]==j].iloc[:,0])]""" #Index in node attributes df for rows with target column == j prnt_ind2 = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,0]==j].iloc[:,1])] """l_node_attr = node_attr.iloc[prnt_ind,:] l_node_attr_score=l_node_attr['score'].copy() l_node_attr_score=l_node_attr_score.reset_index().iloc[:,1:] #len(l_node_attr.iloc[:,-2-2-1]) #l_node_attr.loc[j] for tnr in range(0,l_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<l_node_attr_score['score'][tnr]: l_node_attr['state'][l_node_attr.index[tnr]]='high' else: l_node_attr['state'][l_node_attr.index[tnr]]='low'""" l2_node_attr = node_attr.iloc[prnt_ind2,:] l2_node_attr_score=l2_node_attr['score'].copy() l2_node_attr_score=l2_node_attr_score.reset_index().iloc[:,1:] for tnr in range(0,l2_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<l2_node_attr_score['score'][tnr]: l2_node_attr['state'][l2_node_attr.index[tnr]]='high' else: l2_node_attr['state'][l2_node_attr.index[tnr]]='low' #Lp1 """if len(prnt_ind2)>0: #states prob of parent nodes(can also clculate d*count probabilities) Lp1 = pd.DataFrame(l_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l_node_attr.iloc[:,-2-2-1].value_counts())) Lp1 = Lp1.reset_index() #states prob of parent nodes(can also clculate d*count probabilities) Lp2 = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts())) Lp2 = Lp2.reset_index() Lp1=pd.merge(st,Lp1,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp2=pd.merge(st,Lp2,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp=pd.merge(Lp1,Lp2,how="left",left_on=['state_x'],right_on='state_x') #print(Lp.head()) Lp['state']=bs1*Lp['state_y_x']+bs2*Lp['state_y_y'] Lp=Lp.iloc[:,[0,5]] Lp.columns=['index','state'] #print(Lp1.head()) #print(Lp2.head()) else:""" #states prob of parent nodes(can also clculate d*count probabilities) Lp = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts())) Lp = Lp.reset_index() #print(Lp) Lp=pd.merge(st,Lp,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp=Lp.iloc[:,[0,2]] #print(Lp) Lp.columns=['index','state'] #print(Lp) #Lp.head() """if len(prnt_ind2)>0: #states and distances Ld1=pd.concat([l_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind,:].index),j] ],axis=1) Lad1=Ld1.groupby(['state']).mean() #states and distances Ld2=pd.concat([l2_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1) #Lp2.head() Lad2=Ld2.groupby(['state']).mean() Lad1=pd.merge(st,Lad1,how="left",left_on=['state'],right_on='state').fillna(0.01) Lad2=pd.merge(st,Lad2,how="left",left_on=['state'],right_on='state').fillna(0.01) Lad=pd.merge(Lad1,Lad2,how="left",left_on=['state'],right_on='state').fillna(0.01) #print(Lad) Lad['state_n']=bs1*Lad[str(j)+'_x']+bs2*Lad[str(j)+'_y'] Lad=Lad.iloc[:,[0,3]] Lad.columns=['state',str(j)] #print(Lad.head()) s_l1_avg=pd.DataFrame(l_node_attr.groupby(['state']).mean()['score']) s_l1_avg=pd.merge(st,s_l1_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l2_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score']) s_l2_avg=pd.merge(st,s_l2_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l_avg=pd.merge(s_l1_avg,s_l2_avg,how="left",left_on=['state'],right_on='state') #print(s_l_avg) 
s_l_avg['score_n']=bs1*s_l_avg['score'+'_x']+bs2*s_l_avg['score'+'_y'] s_l_avg=s_l_avg.iloc[:,[0,3]] s_l_avg.columns=['state','score'] else:""" #states and distances Ld=pd.concat([l2_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1) #print(Ld) #and average disances per state Lad=Ld.groupby(['state']).mean()#str(j) #print(Lad) Lad=pd.merge(st,Lad,how="left",left_on=['state'],right_on='state').fillna(0.01) #print(Lad) Lad=Lad.reset_index() #print(Lad) s_l_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score']) s_l_avg=pd.merge(st,s_l_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l_avg=s_l_avg.reset_index() #Lad.head() #print(Lad) #Lad.index=Lad['state'] #Lad=Lad.iloc[:,1:] #print(Lad) #Lad #s_l_avg #bs1*s_l_avg['score'+'_x']+s_l_avg*Lad['score'+'_y'] ## state local prob and avg distance dist_local=pd.merge(Lp,Lad, how='left', left_on=['index'], right_on = ['state']) dist_local=dist_local.iloc[:,[0,1,4]] dist_local.columns=['state','local_prob','cur_node_l'] #dist_local #print(s_l_avg) dist_local=pd.merge(dist_local,s_l_avg, how='left', left_on=['state'], right_on = ['state']) dist_local=dist_local.iloc[:,[0,1,2,4]] #print(dist_local) #dist_local=dist_local.drop(['index']) #dist_local dist_local.columns=['state','local_prob','cur_node_l','score_l'] dist_local['local_prob']=dist_local['local_prob']/np.sum(dist_local['local_prob']) #print(dist_local) h_reg=prob_pdf(round(round(dist_local['score_l'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(dist_local['score_l'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(dist_local['score_l'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(dist_local['score_l'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index dist_local['pbreg_l']=pbreg['pbreg'] dist_local['pbm_l']=pbm['pbm'] #dist_local ## global prob #glb_p=pd.DataFrame(node_attr['state'].value_counts()/np.sum(node_attr['state'].value_counts())) #glb_p=glb_p.reset_index() #glb_p.columns=['state','g_p'] st=["high","low"] st=pd.DataFrame(st) st.columns=['state'] #Index in node attributes df['partitions'] == jth row partition column p_region_ind = [i for i, e in enumerate(list(node_attr['partition'])) if e in set([node_attr.iloc[j,9]])] r_node_attr = node_attr.iloc[p_region_ind,:] r_node_attr_score=r_node_attr['score'].copy() r_node_attr_score=r_node_attr_score.reset_index().iloc[:,1:] for tnr in range(0,r_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<r_node_attr_score['score'][tnr]: r_node_attr['state'][r_node_attr.index[tnr]]='high' else: r_node_attr['state'][r_node_attr.index[tnr]]='low' glb_p=pd.DataFrame(r_node_attr['state'].value_counts()/np.sum(r_node_attr['state'].value_counts())) glb_p=glb_p.reset_index() glb_p.columns=['state','g_p'] t_glb_p=pd.merge(st,glb_p,how="left",left_on=['state'],right_on='state') t_glb_p=t_glb_p.fillna(0.01) glb_p=t_glb_p #print(glb_p) #states and 
distances gd=pd.concat([r_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[p_region_ind,:].index),j] ],axis=1) #print(gd) #and average disances per state gad=gd.groupby(['state']).mean(str(j)) gad=pd.merge(st,gad,how="left",left_on=['state'],right_on='state') #gad.reset_index(inplace=True) #print(gad) s_g_avg=pd.DataFrame(r_node_attr.groupby(['state']).mean()['score']) s_g_avg=pd.merge(st,s_g_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) #s_g_avg ## state local prob and avg distance dist_global=pd.merge(glb_p,gad, how='left', left_on=['state'], right_on = ['state']) dist_global=pd.merge(dist_global,s_g_avg, how='left', left_on=['state'], right_on = ['state']) #dist_local dist_global.columns=['state','glob_prob','cur_node_g','score_g'] dist_global['glob_prob'] =dist_global['glob_prob']/np.sum(dist_global['glob_prob']) #print(dist_global) h_reg=prob_pdf(round(round(dist_global['score_g'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(dist_global['score_g'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(dist_global['score_g'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(dist_global['score_g'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index dist_global['pbreg_g']=pbreg['pbreg'] dist_global['pbm_g']=pbm['pbm'] #dist_global #print('glb_p') #if (((i+1)*(j+1)) % 5000) ==0: print(dist_global) ## all memetic dist_local_global=pd.merge(dist_global,dist_local, how='left', left_on=['state'], right_on = ['state']) dist_local_global=dist_local_global.fillna(0.01) #dist_local_global['m_p']=dist_local_global.product(axis=1)/np.sum(dist_local_global.product(axis=1)) #print(dist_local_global) # #################################################################################################### ########################################## All_ Pressures ########################################## #################################################################################################### # # All presures all_p = pd.merge(mimetic_p,dist_local_global,how='left', left_on=['state'], right_on = ['state']) all_p=all_p.fillna(0.01) #all_p = pd.merge(all_p,mimetic_p,how='left', left_on=['state'], right_on = ['state']) #all_p #= all_p.iloc[:,[0,4,5,6]] #0.25*all_p.iloc[:,3:5].product(axis=1) #all_p.iloc[:,3:5] #w1=w2=w3=w4=0.25 #all_p #w1*all_p['tier_p'][0]*all_p['cur_node'][0]*all_p['score_m'][0] #w1=w2=w3=w4=0.25 all_p_tpd=all_p.copy() #all_p_tpd all_p_new=pd.DataFrame(all_p_tpd['state']) #print(all_p_new) all_p_new['tier_p']=(all_p_tpd['tier_p']*all_p_tpd['cur_node'])/np.sum(all_p_tpd['tier_p']*all_p_tpd['cur_node']) all_p_new['glob_prob']=(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])/np.sum(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g']) all_p_new['local_prob']=(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])/np.sum(all_p_tpd['local_prob']*all_p_tpd['cur_node_l']) 
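                    # As in the first branch: regulatory and market-size pressure ratios are averaged
                    # over the three components and normalised, with the component scores carried along.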
all_p_new['pbreg']=(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])/np.sum(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l']) all_p_new['pbm']=(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])/np.sum(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l']) all_p_new['score_m']=all_p_tpd['score_m'] all_p_new['score_l']=all_p_tpd['score_l'] all_p_new['score_g']=all_p_tpd['score_g'] #pd.DataFrame(all_p_new) all_p=all_p_new """ if r_on==1: rpbr =[all_p['pbreg_m'][0],all_p['pbreg_l'][0],all_p['pbreg_g'][0]] else: rpbr =[1,1,1] if m_on==1: mpbm =[all_p['pbm_m'][0],all_p['pbm_l'][0],all_p['pbm_g'][0]] else: mpbm =[1,1,1] ptotalh=np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)/(1+np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)) ptotall=1-ptotalh ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal']) ptot.index=all_p.index all_p['ptotal']=ptot['ptotal'] #all_p """ if r_on==1: rpbr =all_p['pbreg'][0] else: rpbr =0 if m_on==1: mpbm =all_p['pbm'][0] else: mpbm =0 rpmp=(all_p['pbreg']*all_p['pbm'])/np.sum(all_p['pbreg']*all_p['pbm']) if r_on==0: rpmp[0]=1 rpmp[1]=1 ###### multivariate normal nsd2=list() for repeat in range(0,100): nsd=list() for mni in range(0,3): nsd.append(np.random.normal(0,1)) nsd2.append(np.random.multivariate_normal([0]*3,([nsd]*3))) #for ni,nsd2i in enumerate(nsd2): # nsd2[ni]=np.round_(nsd2i,2) nsd2=list(np.round_(pd.DataFrame(nsd2).mean(axis=0),2)) ## 2 if (j==0): nsd3=list() for repeat in range(0,100): nsd=list() for mni in range(0,3): nsd.append(np.random.normal(0,1)) nsd3.append(np.random.multivariate_normal([0]*3,([nsd]*3))) #for ni,nsd2i in enumerate(nsd2): # nsd2[ni]=np.round_(nsd2i,2) nsd3=list(np.round_(pd.DataFrame(nsd3).mean(axis=0),2)) #### normal epsilon_l=list() for repeat in range(0,100): epsilon_l.append(np.random.normal(0,1)) epsilon=np.mean(epsilon_l) """ if ((node_attr.iloc[j,9]==rgn)&(node_attr.iloc[j,12]==mcp)): w1=W[0] w2=W[1] w3=W[2] else: w1=W[3] w2=W[4] w3=W[5] """ #### if ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[0] w2=W[1] w3=W[2] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[3] w2=W[4] w3=W[5] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[6] w2=W[7] w3=W[8] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[9] w2=W[10] w3=W[11] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[12] w2=W[13] w3=W[14] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[15] w2=W[16] w3=W[17] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[18] w2=W[19] w3=W[20] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[21] w2=W[22] w3=W[23] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[24] w2=W[25] w3=W[26] else : w1=0.333 w2=0.333 w3=0.333 #ptotalh=np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))/(1+np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))) 
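                    # Same tempered-logistic desired-state choice and attribute resampling as in the
                    # first branch; only the construction of the local component differs.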
#ptotalh=((np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)/(1+np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)))*(rpmp[0])) ptotalh=((np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[0])) #ptotalh=ptotalh/np.sum(ptotalh) #ptotall=1-ptotalh ptotall=(1/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[1]) ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal']) ptot.index=all_p.index all_p['ptotal']=ptot['ptotal'] all_p['ptotal']=all_p['ptotal']/np.sum(all_p['ptotal']) #all_p #print(all_p) #0.6224593312018546 """ d_s_ind=np.where(all_p['ptotal']==np.max(all_p['ptotal']))[0][0] """ if np.count_nonzero([w1,w2,w3])!=0: if all_p['ptotal'][0]>0.6224593312018546: #0.6224593312018546: d_s_ind=0 elif all_p['ptotal'][0]<0.6224593312018546: #0.6224593312018546: d_s_ind=1 else: d_s_ind = 1 if np.random.random()<0.5 else 0 else: if all_p['ptotal'][0]>0.5: d_s_ind=0 elif all_p['ptotal'][0]<0.5: d_s_ind=1 else: d_s_ind = 1 if np.random.random()<0.5 else 0 """u = np.random.uniform() if all_p['ptotal'][0]>u: d_s_ind=0 else: d_s_ind=1""" #print(d_s_ind) """ if r_on==1: rpbr =[all_p['pbreg_m'][d_s_ind],all_p['pbreg_l'][d_s_ind],all_p['pbreg_g'][d_s_ind]] else: rpbr =[1,1,1] if m_on==1: mpbm =[all_p['pbm_m'][d_s_ind],all_p['pbm_l'][d_s_ind],all_p['pbm_g'][d_s_ind]] else: mpbm =[1,1,1] """ if r_on==1: rpbr =all_p['pbreg'][d_s_ind] else: rpbr =0 if m_on==1: mpbm =all_p['pbm'][d_s_ind] else: mpbm =0 """s_av=(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]*all_p['score_m'][d_s_ind]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]*all_p['score_l'][d_s_ind]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2]*all_p['score_g'][d_s_ind])/(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2])""" """s_av=(w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind])/(w1+w2+w3)""" # s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) # s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) if w1==0: s_av1=np.min([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) elif w2==0: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]]) elif w3==0: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]]) elif np.count_nonzero([w1,w2,w3])==1: 
s_av1=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]] s_av2=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]] elif np.count_nonzero([w1,w2,w3])==0: s_av1=node_attr['score'][j] s_av2=node_attr['score'][j] else: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) #s_av #region_ind #print(all_p) probs_mat[i,j]=np.max(all_p['ptotal']) if i==0: probs_mat2[i,j]=np.max(all_p['ptotal']) else: probs_mat2[i+j,:]=probs_mat[i,j] ## hihest prob label #desired_state = random.choices(list(all_p['state']),list(all_p['all']))[0] #desired_state = all_p['state'][d_s_ind] #desired_state #desired_state = list(all_p.loc[all_p['all']==np.max(all_p['all'])]['state'])[0] ##### draw attributes with given label """sample_df_1=sample_lab_attr_new(np.float(N),region_ind[0],s_av,0.05*s_av)""" """if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,s_av2+0.12)""" if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],(s_av1+s_av2)/2,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,(s_av1+s_av2)/2) #################################################################################################### ########################################## Update attributes ###################################### #################################################################################################### ## update node attributes for k,replc in enumerate(sample_df_1.values[0]): node_attr.iloc[j,k]=replc ## update edge attributes # node attr to edge attr df_3=cosine_similarity(node_attr.iloc[:,:8]) df_4=pd.DataFrame(df_3) df_4.values[[np.arange(len(df_4))]*2] = np.nan #mat_data.head() edge_list_2=df_4.stack().reset_index() edge_list_2.columns=['source','target','weight'] #edge_list_2.head() edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target']) #edge_list_f.head() edge_list_f.drop('weight_x',axis=1,inplace=True) edge_list_f.columns=['source','target','weight'] for k,replc in enumerate(node_attr.iloc[j,:].values): blanck_data[j,k]=replc for k,replc in enumerate(all_p.iloc[0,1:].values): blanck_data[j,k+13]=replc blanck_data[j,29]=j blanck_data[j,30]=i blanck_data[j,31]=all_p['state'][d_s_ind] #blanck_data2[:,:2,i]=np.array(edge_list_f) blanck_data_tot[:,:,i]=pd.DataFrame(blanck_data) #if i>= 2: #if i%5==0: #probs_mat_pr.append(np.prod(np.log(probs_mat[i,:]),axis=1)) edge_list_f.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+str(i)+"_edge_attr.csv") reshaped_bd = np.vstack(blanck_data_tot[:,:,i] for i in range(num_sim)) reshaped_bd_df=pd.DataFrame(reshaped_bd) reshaped_bd_df.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+ "other_node_attr.csv") print('Complete') ``` ### Parallelization of simulation ``` def process_func(run_iter): #print('@@@@@ run iter @@@@@ --' + str(run_iter)) stc=scn_params.iloc[run_iter,20] Tmp=scn_params.iloc[run_iter,21] if stc==1: nr=[0.22,0.35,0.43] er=[0.38,0.13,0.50] asa=[0.22,0.06,0.72] 
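        # nr / er / asa appear to be the per-region (North America, Europe, Asia) shares of
        # buyer / both / supplier nodes passed to create_network when the structural flag stc
        # is set; the else branch below uses roughly equal shares instead.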
else: nr=[0.33,0.335,0.335] er=[0.33,0.335,0.335] asa=[0.33,0.335,0.335] #W=[scn_params.iloc[run_iter,5],scn_params.iloc[run_iter,6],scn_params.iloc[run_iter,7],scn_params.iloc[run_iter,8],scn_params.iloc[run_iter,9],scn_params.iloc[run_iter,10]] W=[scn_params.iloc[run_iter,23],scn_params.iloc[run_iter,24],scn_params.iloc[run_iter,25],scn_params.iloc[run_iter,26], scn_params.iloc[run_iter,27],scn_params.iloc[run_iter,28],scn_params.iloc[run_iter,29],scn_params.iloc[run_iter,30], scn_params.iloc[run_iter,31],scn_params.iloc[run_iter,32],scn_params.iloc[run_iter,33],scn_params.iloc[run_iter,34], scn_params.iloc[run_iter,35],scn_params.iloc[run_iter,36],scn_params.iloc[run_iter,37],scn_params.iloc[run_iter,38], scn_params.iloc[run_iter,39],scn_params.iloc[run_iter,40],scn_params.iloc[run_iter,41],scn_params.iloc[run_iter,42], scn_params.iloc[run_iter,43],scn_params.iloc[run_iter,44],scn_params.iloc[run_iter,45],scn_params.iloc[run_iter,46], scn_params.iloc[run_iter,47],scn_params.iloc[run_iter,48],scn_params.iloc[run_iter,49]] N=scn_params.iloc[run_iter,0] bs_n=scn_params.iloc[run_iter,3] m_size=scn_params.iloc[run_iter,4] bs1=scn_params.iloc[run_iter,1] bs2=scn_params.iloc[run_iter,2] rgn=scn_params.iloc[run_iter,13] mcp=scn_params.iloc[run_iter,14] ################################################### create network network_created=create_network(N,nr,er,asa,bs_n,m_size) #graph g=network_created[2] #centrality deg_cent = nx.degree_centrality(g) in_deg_cent = nx.in_degree_centrality(g) out_deg_cent = nx.out_degree_centrality(g) eigen_cent = nx.eigenvector_centrality(g) #katz_cent = nx.katz_centrality(g) closeness_cent = nx.closeness_centrality(g) #betw_cent = nx.betweenness_centrality(g) #vote_cent = nx.voterank(g) deg=pd.DataFrame(list(deg_cent.values()),columns=['deg']) indeg=pd.DataFrame(list(in_deg_cent.values()),columns=['indeg']) outdeg=pd.DataFrame(list(out_deg_cent.values()),columns=['outdeg']) eigencent=pd.DataFrame(list(eigen_cent.values()),columns=['eigdeg']) closeness=pd.DataFrame(list(closeness_cent.values()),columns=['closedeg']) all_net_p=pd.concat([deg,indeg,outdeg,closeness,eigencent],axis=1) #tier and ms nodes_frame=network_created[1] #edge list edge_list_df_new=network_created[0] edge_list_df=edge_list_df_new.copy() n_regions_list=[int(0.46*N),int( 0.16*N),int( 0.38*N)] if (len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])!=N): if (len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])-N)>0: n_regions_list[0] = n_regions_list[0]+len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])-N else: n_regions_list[0] = n_regions_list[0]-len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])+N #print(n_regions_list) #till partition ###################################################initial attributes at random one at a time #### method 1 """init_samples = sample_lab_attr_all_init(np.float(N),30,10)""" init_samples = sample_lab_attr_all_init(np.float(N)) #### method 2 """init_samples1 = initial_random_attr(n_regions_list[0],np.array([0.2,0.3,0.5])) init_samples1=init_samples1.reset_index().iloc[:,1:] init_samples2 = initial_random_attr(n_regions_list[1],np.array([0.3,0.3,0.4])) init_samples2=init_samples2.reset_index().iloc[:,1:] init_samples3 = initial_random_attr(n_regions_list[2],np.array([0.5,0.3,0.2])) init_samples3=init_samples3.reset_index().iloc[:,1:] init_samples=pd.concat([init_samples1,init_samples2,init_samples3],axis=0)""" ### method 3 """init_samples1 = 
sample_lab_attr_all(n_regions_list[0],1) init_samples1=init_samples1.reset_index().iloc[:,1:] init_samples2 = sample_lab_attr_all(n_regions_list[1],2) init_samples2=init_samples2.reset_index().iloc[:,1:] init_samples3 = sample_lab_attr_all(n_regions_list[2],3) init_samples3=init_samples3.reset_index().iloc[:,1:] init_samples=pd.concat([init_samples1,init_samples2,init_samples3],axis=0)""" ############ #init_samples.head() init_samples=init_samples.reset_index() #init_samples.head() init_samples=init_samples.iloc[:,1:] #init_samples.index node_attr=init_samples node_attr['state']="high" node_attr['partition']="" for i in range(0,node_attr.shape[0]): if i<n_regions_list[0]: node_attr['partition'][i]='NrA' elif i< (n_regions_list[0]+n_regions_list[1]): node_attr['partition'][i]='Eur' else: node_attr['partition'][i]='Asia' #tier and MS merge with attributes node_attr = pd.concat([node_attr,nodes_frame.iloc[:,2:]],axis=1) #node_attr.columns node_attr.columns=['X1..Commitment...Governance', 'X2..Traceability.and.Risk.Assessment', 'X3..Purchasing.Practices', 'X4..Recruitment', 'X5..Worker.Voice', 'X6..Monitoring', 'X7..Remedy', 'score', 'state', 'partition','tier','ms','ms2'] # #node_attr.info() # region wise reg assumption and market size assumption #p_reg_org=p_reg.copy() #p_med_org=p_med.copy() # init_node_attrs_df=node_attr.copy() init_edge_attrs_df=edge_list_df.copy() import os os.mkdir(folder_location+"sc_"+str(run_iter+1)) node_attr.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+str(0)+ "_node_attr.csv") df_3=cosine_similarity(node_attr.iloc[:,:8]) df_4=pd.DataFrame(df_3) df_4.values[[np.arange(len(df_4))]*2] = np.nan #mat_data.head() edge_list_2=df_4.stack().reset_index() edge_list_2.columns=['source','target','weight'] #edge_list_2.head() edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target']) #edge_list_f.head() edge_list_f.drop('weight_x',axis=1,inplace=True) edge_list_f.columns=['source','target','weight'] edge_list_f.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+"initial_edge_attr.csv") ##### run simulation for all print("@@@@@@@@@@@@@@@ -- "+str(run_iter)) """ w1=scn_params.iloc[run_iter,5] w2=scn_params.iloc[run_iter,6] w3=scn_params.iloc[run_iter,7] w4=scn_params.iloc[run_iter,8] w5=scn_params.iloc[run_iter,9] """ r_on=scn_params.iloc[run_iter,15] m_on=scn_params.iloc[run_iter,16] alpha1=scn_params.iloc[run_iter,17] alpha2=scn_params.iloc[run_iter,18] alpha3=scn_params.iloc[run_iter,19] if N==500: num_sim=20 else: num_sim=20 probs_mat=np.zeros((num_sim,N)) probs_mat2=np.zeros((((num_sim-1)*N)+1,N)) ## Initial node and edge attributes node_attr=init_node_attrs_df.copy() edge_list_df=init_edge_attrs_df.copy() ################################################## simulation simulation_continous(node_attr=node_attr,edge_list_df=edge_list_df,num_sim=num_sim,W=W,bs1=bs1,bs2=bs2,N=N,r_on=r_on,m_on=m_on,p_reg=p_reg,p_med=p_med,probs_mat=probs_mat,probs_mat2=probs_mat2,run_iter=run_iter,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3,Tmp=Tmp,rgn=rgn,mcp=mcp) lik_probs_mat=pd.DataFrame(probs_mat) lik_probs_mat2=pd.DataFrame(probs_mat2) lik_probs_mat.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+"lik_probs_mat.csv") lik_probs_mat2.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+"lik_probs_mat2.csv") del lik_probs_mat ``` ### Running simulation ``` #scenarios scn_params=pd.read_csv('simulation_log/testscenarios_uniform_parallel_W_alpha_new_sense_simple_new_missing_reg/testscenarios_uniform_parallel_W_alpha_new_sense_simple.csv') 
# Organizational data
data_orgs=pd.read_csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')

folder_location='simulation_log/testscenarios_uniform_parallel_W_alpha_new_sense_simple_new_missing_reg/'

# Imports required by the parallel run below
from multiprocessing import Pool
from tqdm import tqdm

from magic_functions import process_func

# One entry per scenario row in scn_params
frames_list = range(0,31)#range(0,8)

# Fan the scenario indices out over worker processes
# (max_pool is assumed to be defined earlier in the notebook)
with Pool(max_pool) as p:
    pool_outputs = list(
        tqdm(
            p.imap(process_func, frames_list),
            total=len(frames_list)
        )
    )
```
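For reference, the cell above follows the standard `multiprocessing.Pool` fan-out pattern: `imap` streams one scenario index at a time to `process_func`, and `tqdm` reports progress. The snippet below is a self-contained toy version of the same pattern, with a hypothetical stand-in worker in place of `process_func`; the pool size of 4 is an arbitrary choice.

```
from multiprocessing import Pool
from tqdm import tqdm

def toy_worker(run_iter):
    # Stand-in for process_func: pretend to run one scenario and return a result
    return run_iter ** 2

if __name__ == "__main__":
    frames_list = range(0, 31)      # one entry per scenario
    with Pool(4) as p:              # 4 worker processes (arbitrary choice)
        results = list(tqdm(p.imap(toy_worker, frames_list), total=len(frames_list)))
    print(results[:5])
```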
I started this competition investigating neural networks with this kernel https://www.kaggle.com/mulargui/keras-nn Now switching to using ensembles in this new kernel. As of today V6 is the most performant version. You can find all my notes and versions at https://github.com/mulargui/kaggle-Classify-forest-types ``` # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) #load data dftrain=pd.read_csv('/kaggle/input/learn-together/train.csv') dftest=pd.read_csv('/kaggle/input/learn-together/test.csv') ####### DATA PREPARATION ##### #split train data in features and labels y = dftrain.Cover_Type x = dftrain.drop(['Id','Cover_Type'], axis=1) # split test data in features and Ids Ids = dftest.Id x_predict = dftest.drop('Id', axis=1) # one data set with all features X = pd.concat([x,x_predict],keys=[0,1]) ###### FEATURE ENGINEERING ##### #https://www.kaggle.com/mancy7/simple-eda #Soil_Type7, Soil_Type15 are non-existent in the training set, nothing to learn #I have problems with np.where if I do this, postponed #X.drop(["Soil_Type7", "Soil_Type15"], axis = 1, inplace=True) #https://www.kaggle.com/evimarp/top-6-roosevelt-national-forest-competition from itertools import combinations from bisect import bisect X['Euclidean_distance_to_hydro'] = (X.Vertical_Distance_To_Hydrology**2 + X.Horizontal_Distance_To_Hydrology**2)**.5 cols = [ 'Horizontal_Distance_To_Roadways', 'Horizontal_Distance_To_Fire_Points', 'Horizontal_Distance_To_Hydrology', ] X['distance_mean'] = X[cols].mean(axis=1) X['distance_sum'] = X[cols].sum(axis=1) X['distance_road_fire'] = X[cols[:2]].mean(axis=1) X['distance_hydro_fire'] = X[cols[1:]].mean(axis=1) X['distance_road_hydro'] = X[[cols[0], cols[2]]].mean(axis=1) X['distance_sum_road_fire'] = X[cols[:2]].sum(axis=1) X['distance_sum_hydro_fire'] = X[cols[1:]].sum(axis=1) X['distance_sum_road_hydro'] = X[[cols[0], cols[2]]].sum(axis=1) X['distance_dif_road_fire'] = X[cols[0]] - X[cols[1]] X['distance_dif_hydro_road'] = X[cols[2]] - X[cols[0]] X['distance_dif_hydro_fire'] = X[cols[2]] - X[cols[1]] # Vertical distances measures colv = ['Elevation', 'Vertical_Distance_To_Hydrology'] X['Vertical_dif'] = X[colv[0]] - X[colv[1]] X['Vertical_sum'] = X[colv].sum(axis=1) SHADES = ['Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm'] X['shade_noon_diff'] = X['Hillshade_9am'] - X['Hillshade_Noon'] X['shade_3pm_diff'] = X['Hillshade_Noon'] - X['Hillshade_3pm'] X['shade_all_diff'] = X['Hillshade_9am'] - X['Hillshade_3pm'] X['shade_sum'] = X[SHADES].sum(axis=1) X['shade_mean'] = X[SHADES].mean(axis=1) X['ElevationHydro'] = X['Elevation'] - 0.25 * X['Euclidean_distance_to_hydro'] X['ElevationV'] = X['Elevation'] - X['Vertical_Distance_To_Hydrology'] X['ElevationH'] = X['Elevation'] - 0.19 * X['Horizontal_Distance_To_Hydrology'] X['Elevation2'] = X['Elevation']**2 X['ElevationLog'] = np.log1p(X['Elevation']) X['Aspect_cos'] = np.cos(np.radians(X.Aspect)) X['Aspect_sin'] = np.sin(np.radians(X.Aspect)) #df['Slope_sin'] = np.sin(np.radians(df.Slope)) X['Aspectcos_Slope'] = X.Slope * X.Aspect_cos #df['Aspectsin_Slope'] = df.Slope * df.Aspect_sin cardinals = [i for i in range(45, 361, 90)] points = ['N', 'E', 'S', 'W'] X['Cardinal'] = X.Aspect.apply(lambda x: points[bisect(cardinals, x) % 4]) d = {'N': 0, 'E': 1, 'S': 0, 
'W':-1} X['Cardinal'] = X.Cardinal.apply(lambda x: d[x]) #https://www.kaggle.com/jakelj/basic-ensemble-model X['Avg_shade'] = ((X['Hillshade_9am'] + X['Hillshade_Noon'] + X['Hillshade_3pm']) / 3) X['Morn_noon_int'] = ((X['Hillshade_9am'] + X['Hillshade_Noon']) / 2) X['noon_eve_int'] = ((X['Hillshade_3pm'] + X['Hillshade_Noon']) / 2) #adding features based on https://douglas-fraser.com/forest_cover_management.pdf pages 21,22 #note: not all climatic and geologic codes have a soil type columns=['Soil_Type1', 'Soil_Type2', 'Soil_Type3', 'Soil_Type4', 'Soil_Type5', 'Soil_Type6'] X['Climatic2'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type7', 'Soil_Type8'] X['Climatic3'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type9', 'Soil_Type10', 'Soil_Type11', 'Soil_Type12', 'Soil_Type13'] X['Climatic4'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type14', 'Soil_Type15'] X['Climatic5'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type16', 'Soil_Type17', 'Soil_Type18'] X['Climatic6'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type19', 'Soil_Type20', 'Soil_Type21', 'Soil_Type22', 'Soil_Type23', 'Soil_Type24', 'Soil_Type25', 'Soil_Type26', 'Soil_Type27', 'Soil_Type28', 'Soil_Type29', 'Soil_Type30', 'Soil_Type31', 'Soil_Type32', 'Soil_Type33', 'Soil_Type34'] X['Climatic7'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type35', 'Soil_Type36', 'Soil_Type37', 'Soil_Type38', 'Soil_Type39', 'Soil_Type40'] X['Climatic8'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type14', 'Soil_Type15', 'Soil_Type16', 'Soil_Type17', 'Soil_Type19', 'Soil_Type20', 'Soil_Type21'] X['Geologic1'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type9', 'Soil_Type22', 'Soil_Type23'] X['Geologic2'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type7', 'Soil_Type8'] X['Geologic5'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type1', 'Soil_Type2', 'Soil_Type3', 'Soil_Type4', 'Soil_Type5', 'Soil_Type6', 'Soil_Type10', 'Soil_Type11', 'Soil_Type12', 'Soil_Type13', 'Soil_Type18', 'Soil_Type24', 'Soil_Type25', 'Soil_Type26', 'Soil_Type27', 'Soil_Type28', 'Soil_Type29', 'Soil_Type30', 'Soil_Type31', 'Soil_Type32', 'Soil_Type33', 'Soil_Type34', 'Soil_Type35', 'Soil_Type36', 'Soil_Type37', 'Soil_Type38', 'Soil_Type39', 'Soil_Type40'] X['Geologic7'] = np.select([X[columns].sum(1).gt(0)], [1]) #Reversing One-Hot-Encoding to Categorical attributes, several articles recommend it for decision tree algorithms #Doing it for Soil_Type, Wilderness_Area, Geologic and Climatic X['Soil_Type']=np.where(X.loc[:, 'Soil_Type1':'Soil_Type40'])[1] +1 X.drop(X.loc[:,'Soil_Type1':'Soil_Type40'].columns, axis=1, inplace=True) X['Wilderness_Area']=np.where(X.loc[:, 'Wilderness_Area1':'Wilderness_Area4'])[1] +1 X.drop(X.loc[:,'Wilderness_Area1':'Wilderness_Area4'].columns, axis=1, inplace=True) X['Climatic']=np.where(X.loc[:, 'Climatic2':'Climatic8'])[1] +1 X.drop(X.loc[:,'Climatic2':'Climatic8'].columns, axis=1, inplace=True) X['Geologic']=np.where(X.loc[:, 'Geologic1':'Geologic7'])[1] +1 X.drop(X.loc[:,'Geologic1':'Geologic7'].columns, axis=1, inplace=True) from sklearn.preprocessing import StandardScaler StandardScaler(copy=False).fit_transform(X) # Adding Gaussian Mixture features to perform some unsupervised learning hints from the full data #https://www.kaggle.com/arateris/2-layer-k-fold-learning-forest-cover #https://www.kaggle.com/stevegreenau/stacking-multiple-classifiers-clustering from sklearn.mixture import 
GaussianMixture X['GM'] = GaussianMixture(n_components=15).fit_predict(X) #https://www.kaggle.com/arateris/2-layer-k-fold-learning-forest-cover # Add PCA features from sklearn.decomposition import PCA pca = PCA(n_components=0.99).fit(X) trans = pca.transform(X) for i in range(trans.shape[1]): col_name= 'pca'+str(i+1) X[col_name] = trans[:,i] #https://www.kaggle.com/kwabenantim/forest-cover-stacking-multiple-classifiers # Scale and bin features from sklearn.preprocessing import MinMaxScaler MinMaxScaler((0, 100),copy=False).fit_transform(X) #X = np.floor(X).astype('int8') print("Completed feature engineering!") #break it down again in train and test x,x_predict = X.xs(0),X.xs(1) ###### THIS IS THE ENSEMBLE MODEL SECTION ###### #https://www.kaggle.com/kwabenantim/forest-cover-stacking-multiple-classifiers import random randomstate = 1 random.seed(randomstate) np.random.seed(randomstate) from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier ab_clf = AdaBoostClassifier(n_estimators=200, base_estimator=DecisionTreeClassifier( min_samples_leaf=2, random_state=randomstate), random_state=randomstate) #max_features = min(30, x.columns.size) max_features = 30 from sklearn.ensemble import ExtraTreesClassifier et_clf = ExtraTreesClassifier(n_estimators=300, min_samples_leaf=2, min_samples_split=2, max_depth=50, max_features=max_features, random_state=randomstate, n_jobs=1) from lightgbm import LGBMClassifier lg_clf = LGBMClassifier(n_estimators=300, num_leaves=128, verbose=-1, random_state=randomstate, n_jobs=1) from sklearn.ensemble import RandomForestClassifier rf_clf = RandomForestClassifier(n_estimators=300, random_state=randomstate, n_jobs=1) #Added a KNN classifier to the ensemble #https://www.kaggle.com/edumunozsala/feature-eng-and-a-simple-stacked-model from sklearn.neighbors import KNeighborsClassifier knn_clf = KNeighborsClassifier(n_neighbors=y.nunique(), n_jobs=1) #added several more classifiers at once #https://www.kaggle.com/edumunozsala/feature-eng-and-a-simple-stacked-model from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier bag_clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(criterion = 'entropy', max_depth=None, min_samples_split=2, min_samples_leaf=1,max_leaf_nodes=None, max_features='auto', random_state = randomstate), n_estimators=500,max_features=0.75, max_samples=1.0, random_state=randomstate,n_jobs=1,verbose=0) from sklearn.linear_model import LogisticRegression lr_clf = LogisticRegression(max_iter=1000, n_jobs=1, solver= 'lbfgs', multi_class = 'multinomial', random_state=randomstate, verbose=0) #https://www.kaggle.com/bustam/6-models-for-forest-classification from catboost import CatBoostClassifier cat_clf = CatBoostClassifier(n_estimators =300, eval_metric='Accuracy', metric_period=200, max_depth = None, random_state=randomstate, verbose=0) #https://www.kaggle.com/jakelj/basic-ensemble-model from sklearn.experimental import enable_hist_gradient_boosting from sklearn.ensemble import HistGradientBoostingClassifier hbc_clf = HistGradientBoostingClassifier(max_iter = 500, max_depth =25, random_state = randomstate) ensemble = [('AdaBoostClassifier', ab_clf), ('ExtraTreesClassifier', et_clf), ('LGBMClassifier', lg_clf), #('KNNClassifier', knn_clf), ('BaggingClassifier', bag_clf), #('LogRegressionClassifier', lr_clf), #('CatBoostClassifier', cat_clf), #('HBCClassifier', hbc_clf), ('RandomForestClassifier', rf_clf) ] #Cross-validating classifiers from sklearn.model_selection import 
cross_val_score for label, clf in ensemble: score = cross_val_score(clf, x, y, cv=10, scoring='accuracy', verbose=0, n_jobs=-1) print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (score.mean(), score.std(), label)) # Fitting stack from mlxtend.classifier import StackingCVClassifier stack = StackingCVClassifier(classifiers=[ab_clf, et_clf, lg_clf, bag_clf, rf_clf], meta_classifier=rf_clf, cv=10, stratify=True, shuffle=True, use_probas=True, use_features_in_secondary=True, verbose=0, random_state=randomstate) stack = stack.fit(x, y) print("Completed modeling!") #make predictions y_predict = stack.predict(x_predict) y_predict = pd.Series(y_predict, index=x_predict.index, dtype=y.dtype) print("Completed predictions!") # Save predictions to a file for submission output = pd.DataFrame({'Id': Ids, 'Cover_Type': y_predict}) output.to_csv('submission.csv', index=False) #create a link to download the file from IPython.display import FileLink FileLink(r'submission.csv') ```
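The cell above cross-validates the individual classifiers but never reports a score for the stacked model itself. One optional sanity check is to hold out a validation split before fitting the stack on the full training set. The sketch below reuses `stack`, `x`, `y` and `randomstate` from the cells above; the 20% split size is an assumption, not part of the original kernel.

```
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hold out 20% of the labelled data to score the stacked ensemble
x_tr, x_val, y_tr, y_val = train_test_split(x, y, test_size=0.2,
                                            stratify=y, random_state=randomstate)
stack_check = clone(stack).fit(x_tr, y_tr)
print("Hold-out accuracy: %0.3f" % accuracy_score(y_val, stack_check.predict(x_val)))
```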
## DATASET GENERATION ``` import numpy as np import os from scipy.misc import imread, imresize import matplotlib.pyplot as plt %matplotlib inline cwd = os.getcwd() print ("PACKAGES LOADED") print ("CURRENT FOLDER IS [%s]" % (cwd) ) ``` ### CONFIGURATION ``` # FOLDER LOCATIONS paths = ["../../img_dataset/celebs/Arnold_Schwarzenegger" , "../../img_dataset/celebs/Junichiro_Koizumi" , "../../img_dataset/celebs/Vladimir_Putin" , "../../img_dataset/celebs/George_W_Bush"] categories = ['Terminator', 'Koizumi', 'Putin', 'Bush'] # CONFIGURATIONS imgsize = [64, 64] use_gray = 1 data_name = "custom_data" print ("YOUR IMAGES SHOULD BE AT") for i, path in enumerate(paths): print (" [%d/%d] %s" % (i, len(paths), path)) print ("DATA WILL BE SAVED TO \n [%s]" % (cwd + '/data/' + data_name + '.npz')) ``` ### RGB2GRAY ``` def rgb2gray(rgb): if len(rgb.shape) is 3: return np.dot(rgb[...,:3], [0.299, 0.587, 0.114]) else: return rgb ``` ### LOAD IMAGES ``` nclass = len(paths) valid_exts = [".jpg",".gif",".png",".tga", ".jpeg"] imgcnt = 0 for i, relpath in zip(range(nclass), paths): path = cwd + "/" + relpath flist = os.listdir(path) for f in flist: if os.path.splitext(f)[1].lower() not in valid_exts: continue fullpath = os.path.join(path, f) currimg = imread(fullpath) # CONVERT TO GRAY (IF REQUIRED) if use_gray: grayimg = rgb2gray(currimg) else: grayimg = currimg # RESIZE graysmall = imresize(grayimg, [imgsize[0], imgsize[1]])/255. grayvec = np.reshape(graysmall, (1, -1)) # SAVE curr_label = np.eye(nclass, nclass)[i:i+1, :] if imgcnt is 0: totalimg = grayvec totallabel = curr_label else: totalimg = np.concatenate((totalimg, grayvec), axis=0) totallabel = np.concatenate((totallabel, curr_label), axis=0) imgcnt = imgcnt + 1 print ("TOTAL %d IMAGES" % (imgcnt)) ``` ### DIVIDE INTO TRAINING AND TEST ``` def print_shape(string, x): print ("SHAPE OF [%s] IS [%s]" % (string, x.shape,)) randidx = np.random.randint(imgcnt, size=imgcnt) trainidx = randidx[0:int(4*imgcnt/5)] testidx = randidx[int(4*imgcnt/5):imgcnt] trainimg = totalimg[trainidx, :] trainlabel = totallabel[trainidx, :] testimg = totalimg[testidx, :] testlabel = totallabel[testidx, :] print_shape("totalimg", totalimg) print_shape("totallabel", totallabel) print_shape("trainimg", trainimg) print_shape("trainlabel", trainlabel) print_shape("testimg", testimg) print_shape("testlabel", testlabel) ``` ### SAVE TO NPZ ``` savepath = cwd + "/data/" + data_name + ".npz" np.savez(savepath, trainimg=trainimg, trainlabel=trainlabel , testimg=testimg, testlabel=testlabel , imgsize=imgsize, use_gray=use_gray, categories=categories) print ("SAVED TO [%s]" % (savepath)) ``` ### LOAD NPZ ``` # LOAD cwd = os.getcwd() loadpath = cwd + "/data/" + data_name + ".npz" l = np.load(loadpath) print (l.files) # Parse data trainimg_loaded = l['trainimg'] trainlabel_loaded = l['trainlabel'] testimg_loaded = l['testimg'] testlabel_loaded = l['testlabel'] categories_loaded = l['categories'] print ("[%d] TRAINING IMAGES" % (trainimg_loaded.shape[0])) print ("[%d] TEST IMAGES" % (testimg_loaded.shape[0])) print ("LOADED FROM [%s]" % (savepath)) ``` ### PLOT LOADED DATA ``` ntrain_loaded = trainimg_loaded.shape[0] batch_size = 5; randidx = np.random.randint(ntrain_loaded, size=batch_size) for i in randidx: currimg = np.reshape(trainimg_loaded[i, :], (imgsize[0], -1)) currlabel_onehot = trainlabel_loaded[i, :] currlabel = np.argmax(currlabel_onehot) if use_gray: currimg = np.reshape(trainimg[i, :], (imgsize[0], -1)) plt.matshow(currimg, cmap=plt.get_cmap('gray')) plt.colorbar() else: 
currimg = np.reshape(trainimg[i, :], (imgsize[0], imgsize[1], 3)) plt.imshow(currimg) title_string = ("[%d] CLASS-%d (%s)" % (i, currlabel, categories_loaded[currlabel])) plt.title(title_string) plt.show() ```
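A portability note on the loader above: `scipy.misc.imread` and `scipy.misc.imresize` were removed from recent SciPy releases. If this dataset-generation notebook needs to run on a newer environment, an equivalent grayscale loader can be written with Pillow instead. This is a minimal sketch under the assumption that Pillow is installed, not a drop-in part of the original notebook.

```
import numpy as np
from PIL import Image

def load_gray_vector(fullpath, imgsize=(64, 64)):
    # Open, convert to grayscale, resize, scale to [0, 1] and flatten to a row vector
    img = Image.open(fullpath).convert("L")
    img = img.resize(imgsize)
    return np.asarray(img, dtype=np.float64).reshape(1, -1) / 255.0
```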
``` from pynq import Overlay from pynq import PL from pprint import pprint pprint(PL.ip_dict) print(PL.timestamp) ol2 = Overlay('base.bit') ol2.download() pprint(PL.ip_dict) print(PL.timestamp) PL.interrupt_controllers PL.gpio_dict a = PL.ip_dict for i,j in enumerate(a): print(i,j,a[j]) a['SEG_rgbled_gpio_Reg'] b = [value for key, value in a.items() if 'mb_bram_ctrl' in key.lower()] print(b) addr_base,addr_range,state = a['SEG_rgbled_gpio_Reg'] addr_base a = [None] a*10 import re tcl_name = 'parse.txt' pat1 = 'connect_bd_net' pat2 = '[get_bd_pins processing_system7_0/GPIO_O]' result = {} gpio_pool1 = set() gpio_pool2 = set() with open(tcl_name, 'r') as f: for line in f: if not line.startswith('#') and (pat1 in line) and (pat2 in line): gpio_pool1 = gpio_pool1.union(set(re.findall( '\[get_bd_pins (.+?)/Din\]', line, re.IGNORECASE))) while gpio_pool1: gpio_net = gpio_pool1.pop() if not gpio_net in gpio_pool2: pat3 = '[get_bd_pins ' + gpio_net + '/Din]' gpio_pool2.add(gpio_net) with open(tcl_name, 'r') as f: for line in f: if not line.startswith('#') and (pat1 in line) and \ (pat3 in line): gpio_pool1 = gpio_pool1.union(set(re.findall( '\[get_bd_pins (.+?)/Din\]', line, re.IGNORECASE))) gpio_pool1.discard(gpio_net) gpio_list = list(gpio_pool2) print(gpio_list) """ index = 0 match = [] for i in gpio_list: pat4 = "create_bd_cell -type ip -vlnv (.+?) " + i + "($| )" with open(tcl_name, 'r') as f: for line in f: if not line.startswith('#'): m = re.search(pat4, line, re.IGNORECASE) if m: match.append(m.group(2)) continue print(match) """ with open('parse.txt') as f: file_str =''.join(line.replace('\n',' ').replace('\r','') for line in f and not line.startswith('#')) print(file_str) for j in gpio_list: pat5 = "set_property -dict \[ list \\\\ "+\ "CONFIG.DIN_FROM {([0-9]+)} \\\\ "+\ "CONFIG.DIN_TO {([0-9]+)} \\\\ "+\ "CONFIG.DIN_WIDTH {([0-9]+)} \\\\ "+\ "CONFIG.DOUT_WIDTH {([0-9]+)} \\\\ "+\ "\] \$" + j print(pat5) m = re.search(pat5,file_str,re.IGNORECASE) if m: index = m.group(1) result[j] = [int(index), None] print(result) str1 = 'create_bd_cell -type ip -vlnv xilinx.com:ip:xlslice:1.0 mb3_timer_capture_4' str2 = 'set mb3_timer_capture_5 [ create_bd_cell -type ip -vlnv xilinx.com:ip:xlslice:1.0 mb3_timer_capture_5 ]' pat1 = "create_bd_cell -type ip -vlnv (.+?) (.+?)($| )" match1 = re.search(pat1, str2, re.IGNORECASE) match1.group(2) with open('parse.txt') as f: data=''.join(line.replace('\n',' ').replace('\r','') for line in f) print(data) str1 = "[123 456\ $2]" pat1 = "\[(.+?) (.+?)\\\\ \$(.+?)]" m = re.search(pat1, str1, re.IGNORECASE) if m: print(m.group(1)) print(m.group(2)) print(type(m.group(1))) a = [1,2,3] print(a[-1]) print(a) import re prop_name_regex = "CONFIG.DIN_FROM {([0-9]+)} \\\\" str1 = "CONFIG.DIN_FROM {13} \\" m = re.search(prop_name_regex,str1) if m: print(m.group(1)) a = {1:'mb_1_reset', 2:'mb_2_reset'} res = dict((v,[k,None]) for k,v in a.items() if k>1) print(res) a = {1:'mb_1_reset', 2:'mb_2_reset'} b = a.copy() a.clear() print(b) a = {1:['mb_1_reset',None], 2:['mb_2_reset','running']} a = {i:j for i,j in a.items() if j[1] is not None} print(a) import re str1 = " set processing_system7_0 [ create_bd_cell -type ip -vlnv "+\ "xilinx.com:ip:processing_system7:5.5 processing_system7_0 ]" ip_regex = "create_bd_cell -type ip -vlnv " + \ "(.+?):ip:(.+?):(.+?) (.+?) " m = re.search(ip_regex,str1) print(m.groups()) import numpy as np a = np.random.randint(0,32,10,dtype=np.uint32) print(a) ```
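One small idiom from the scratch cell above, isolated for clarity: when flattening a Tcl file into a single string while skipping comment lines, the filter belongs in an `if` clause of the generator expression. A minimal standalone sketch, assuming `parse.txt` is in the working directory:

```
with open('parse.txt') as f:
    file_str = ''.join(line.replace('\n', ' ').replace('\r', '')
                       for line in f if not line.startswith('#'))
print(file_str[:200])
```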
# Classifying Fashion-MNIST Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world. <img src='assets/fashion-mnist-sprite.png' width=500px> In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this. First off, let's load the dataset through torchvision. ``` import torch from torchvision import datasets, transforms import helper # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) ``` Here we can see one of the images. ``` image, label = next(iter(trainloader)) helper.imshow(image[0,:]); ``` ## Building the network Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers. ``` # TODO: Define your network architecture here from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) def forward(self, x): x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim = 1) return x ``` # Train the network Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) ( something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`). Then write the training code. Remember the training pass is a fairly straightforward process: * Make a forward pass through the network to get the logits * Use the logits to calculate the loss * Perform a backward pass through the network with `loss.backward()` to calculate the gradients * Take a step with the optimizer to update the weights By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4. 
```
# TODO: Create the network, define the criterion and optimizer
model = Classifier()
print(list(model.parameters()))

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr = 0.003)

# TODO: Train the network here
epochs = 5

for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()

        # Call the model directly (rather than model.forward) so hooks run
        output = model(images)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper

# Test out your network!

dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)

# TODO: Calculate the class probabilities (softmax) for img
ps = torch.exp(model(img))

# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
```
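The cells above track only the training loss. A quick evaluation pass over the test set gives a better sense of generalisation; this is a small optional sketch that reuses `model` and `testloader` from the cells above.

```
# Compute overall accuracy on the Fashion-MNIST test set
correct, total = 0, 0
model.eval()
with torch.no_grad():
    for images, labels in testloader:
        log_ps = model(images)
        preds = log_ps.argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.shape[0]
model.train()
print(f"Test accuracy: {correct/total:.3f}")
```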
``` import cirq from cirq_iqm import Adonis, circuit_from_qasm from cirq_iqm.iqm_gates import IsingGate, XYGate ``` # The Adonis architecture Qubit connectivity: ``` QB1 | QB4 - QB3 - QB2 | QB5 ``` Construct an `IQMDevice` instance representing the Adonis architecture ``` adonis = Adonis() print(adonis.NATIVE_GATES) print(adonis.NATIVE_GATE_INSTANCES) print(adonis.qubits) ``` # Creating a quantum circuit Create a quantum circuit and insert native gates ``` a, b, c = adonis.qubits[:3] circuit = cirq.Circuit(device=adonis) circuit.append(cirq.X(a)) circuit.append(cirq.PhasedXPowGate(phase_exponent=0.3, exponent=0.5)(c)) circuit.append(cirq.CZ(a, c)) circuit.append(cirq.YPowGate(exponent=1.1)(c)) print(circuit) ``` ----- Insert non-native gates, which are immediately decomposed into native ones ``` circuit.append(IsingGate(0.2)(a, c)) circuit.append(XYGate(0.5)(a, c)) circuit.append(cirq.HPowGate(exponent=-0.4)(a)) print(circuit) ``` # Optimizing a quantum circuit Use the `IQMDevice.simplify_circuit` method to run a sequence of optimization passes on a circuit ``` circuit = cirq.Circuit(device=adonis) circuit.append(cirq.H(a)) circuit.append(cirq.CNOT(a, c)) circuit.append(cirq.measure(a, c, key='result')) print(circuit) adonis.simplify_circuit(circuit) print(circuit) ``` # Simulating a quantum circuit Circuits that contain IQM-native gates can be simulated using the standard Cirq simulators ``` sim = cirq.Simulator() samples = sim.run(circuit, repetitions=100) print('Samples:') print(samples.histogram(key='result')) print('\nState before the measurement:') result = sim.simulate(circuit[:-1]) print(result) ``` Note that the above output vector represents the state before the measurement in the optimized circuit, not the original one, which would have the same phase for both terms. `IQMDevice.simplify_circuit` has eliminated a `ZPowGate` which has no effect on the measurement. --- # Creating a quantum circuit from an OpenQASM 2.0 program The OpenQASM standard gate set has been extended with the IQM native gates ``` qasm_program = """ OPENQASM 2.0; include "qelib1.inc"; qreg q[3]; creg meas[3]; rx(1.7) q[1]; h q[0]; cx q[1], q[2]; ising(-0.6) q[0], q[2]; // OpenQASM extension """ circuit = circuit_from_qasm(qasm_program) print(circuit) ``` Decompose the circuit for the Adonis architecture ``` decomposed = adonis.map_circuit(circuit) print(decomposed) ``` See the `examples` directory for more examples.
# Advanced Lane Finding Project ## The goals / steps of this project are the following: * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images. * Apply a distortion correction to raw images. * Use color transforms, gradients, etc., to create a thresholded binary image. * Apply a perspective transform to rectify binary image ("birds-eye view"). * Detect lane pixels and fit to find the lane boundary. * Determine the curvature of the lane and vehicle position with respect to center. * Warp the detected lane boundaries back onto the original image. * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position. [//]: # (Image References) [image1]: ./writeup_images/image_1_chess_distorsion.png "Distorsion Correction Chessborad" [image2]: ./writeup_images/image_2_distorsion_straight_lines.png "Distorsion Correction" [image3]: ./writeup_images/image_3_warped.png "Warped Image" [image4]: ./writeup_images/image_4_color_1.png [image5]: ./writeup_images/image_5_color.png [image6]: ./writeup_images/image_6_color.png [image7]: ./writeup_images/image_7_thresh.png [image8]: ./writeup_images/image_8_thresh.png [image9]: ./writeup_images/image_9_thresh.png [image10]: ./writeup_images/image_10_thresh.png [image11]: ./writeup_images/image_11_thresh.png [image12]: ./writeup_images/image_12_thresh.png [image13]: ./writeup_images/image_13_thresh.png [image14]: ./writeup_images/image_14_thresh.png [image15]: ./writeup_images/image_15_thresh.png [image16]: ./writeup_images/image_16_thresh.png [image17]: ./writeup_images/image_17_thresh.png [image18]: ./writeup_images/image_18_thresh.png [image19]: ./writeup_images/image_19_thresh.png [image20]: ./writeup_images/image_20_thresh.png [image21]: ./writeup_images/image_window_final_1.png [image22]: ./writeup_images/image_window_final_2.png [image23]: ./writeup_images/image_window_final_3.png [image24]: ./writeup_images/final_1.png [image25]: ./writeup_images/final_2.png [image26]: ./writeup_images/final_3.png [image27]: ./writeup_images/final_4.png [image28]: ./writeup_images/final_5.png [image29]: ./writeup_images/final_6.png [video1]: ./project_video_FINAL.mp4 "Video1" [video2]: ./challenge_video_output_FINAL.mp4 "Video2" ### Camera Calibration #### 1. Briefly state how you computed the camera matrix and distortion coefficients. Provide an example of a distortion corrected calibration image. 
The code for this step is as follows: ```python def camera_calibration(img, objpoints, imgpoints): original = img.copy() gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Find the chessboard corners ret, corners = cv2.findChessboardCorners(gray, (9,6),None) # If found, add object points, image points if ret == True: objpoints.append(objp) imgpoints.append(corners) # Draw and display the corners img = cv2.drawChessboardCorners(img, (9,6), corners, ret) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) f.subplots_adjust(hspace = .2, wspace=.05) #cv2.imwrite('origin'+str(i+1)+'.jpg',original) #cv2.imwrite('corners_detected'+str(i+1)+'.jpg',img) ax1.imshow(original) ax1.set_title('Original Image '+str(i+1), fontsize=30) ax2.imshow(img) ax2.set_title('Corners detected '+str(i+1),fontsize=30) return objpoints, imgpoints def cal_matrix(imge, objpoints, imgpoints): # Use cv2.calibrateCamera() and cv2.undistort() ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, imge.shape[:-1],None,None) return mtx, dist def undistort(img,mtx,dist): undist = cv2.undistort(img,mtx,dist,None,mtx) return undist objp = np.zeros((6*9,3), np.float32) objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2) #Arrays to store object points and image points from all the images. objpoints = [] # 3d points in real world space imgpoints = [] # 2d points in image plane. # Make a list of calibration images images = glob.glob('camera_cal/calibration*.jpg') # Step through the list and search for chessboard corners for i, fname in enumerate(images): img = mpimg.imread(fname) objpoints, imgpoints = camera_calibration(img, objpoints, imgpoints) dst = mpimg.imread('camera_cal/calibration1.jpg') frame = mpimg.imread('test_images/test4.jpg') mtx, dist = cal_matrix(dst, objpoints, imgpoints) #Chessboard image Undistortion udst = undistort(dst, mtx,dist) #Frame of the Video Undistortion undistorted = undistort(frame, mtx,dist) ``` I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, `objp` is just a replicated array of coordinates, and `objpoints` will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. `imgpoints` will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection. This is donde using the `camera_calibration(img, objpoints, imgpoints)` function. I then used the output `objpoints` and `imgpoints` to compute the camera calibration (mtx) and distortion coefficients (dst) using the `cal_matrix(imge, objpoints, imgpoints)` function. I applied this distortion correction to the test image of the chessboard using the `undistort(img,mtx,dist)` function and obtained this result: ![alt text][image1] ### Pipeline (single images) #### 1. Provide an example of a distortion-corrected image. To demonstrate this step, I will describe how I apply the distortion correction to one of the test images. As it was descreibed above, once the mtx and dst matrices are calculated, the distrosion correction of the previous image is made using the `undistort(img,mtx,dist)` function and obtained this result: ![alt text][image2] #### 2. Describe how (and identify where in your code) you used color transforms, gradients or other methods to create a thresholded binary image. Provide an example of a binary image result. 
I used a combination of color and gradient thresholds to generate a binary image as follows: ```python def abs_sobel_thresh(img, orient='x', sobel_kernel=3, thresh=(50,100)): # Calculate directional gradient # Apply threshold # Apply the following steps to img # 1) Convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # 2) Take the derivative in x or y given orient = 'x' or 'y' if (orient=='x'): sobel = cv2.Sobel(gray, cv2.CV_64F, 1, 0) elif (orient == 'y'): sobel = cv2.Sobel(gray, cv2.CV_64F, 0, 1) # 3) Take the absolute value of the derivative or gradient abs_sobel = np.absolute(sobel) # 4) Scale to 8-bit (0 - 255) then convert to type = np.uint8 scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel)) # 5) Create a mask of 1's where the scaled gradient magnitude # is > thresh_min and < thresh_max grad_binary = np.zeros_like(scaled_sobel) grad_binary[(scaled_sobel >= thresh[0]) & (scaled_sobel <= thresh[1])] = 1 return grad_binary def mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)): # Calculate gradient magnitude # Apply threshold # Convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)[:,:,2] # Take both Sobel x and y gradients sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel) sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel) # Calculate the gradient magnitude gradmag = np.sqrt(sobelx**2 + sobely**2) # Rescale to 8 bit scale_factor = np.max(gradmag)/255 gradmag = (gradmag/scale_factor).astype(np.uint8) # Create a binary image of ones where threshold is met, zeros otherwise mag_binary = np.zeros_like(gradmag) mag_binary[(gradmag >= mag_thresh[0]) & (gradmag <= mag_thresh[1])] = 1 return mag_binary def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)): # Calculate gradient direction # Apply threshold gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Calculate the x and y gradients sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel) sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel) # Take the absolute value of the gradient direction, # apply a threshold, and create a binary image result absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx)) dir_binary = np.zeros_like(absgraddir) dir_binary[(absgraddir >= thresh[0]) & (absgraddir <= thresh[1])] = 1 return dir_binary def img_thresh(img, s_sobel_thresh=(8, 100), sx_thresh=(10, 100)): img = np.copy(img) ksize = 3 # Apply each of the thresholding functions # Sobel x gradx = abs_sobel_thresh(img, orient='x', sobel_kernel=ksize, thresh=(50, 150)) # Sobel y grady = abs_sobel_thresh(img, orient='y', sobel_kernel=ksize, thresh=(50, 150)) # Magnitud mag_binary = mag_thresh(img, sobel_kernel=ksize, mag_thresh=(30, 100)) # Dir dir_binary = dir_threshold(img, sobel_kernel=ksize, thresh=(0.7, 1.3)) combined_grad = np.zeros_like(dir_binary) combined_grad[((gradx == 1) & (grady == 1)) | ((mag_binary == 1) & (dir_binary == 1))] = 1 #def color_thresh_combined(img, s_thresh, l_thresh, v_thresh, b_thresh): v_thresh = [230,255] s_thresh = [235,255] l_thresh = [215,255] b_thresh = [230,255] lab_b_thresh = [195,255] hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV) V_binary = hsv[:,:,2] V_binary = V_binary*(255/np.max(V_binary)) V_thresh_binary= np.zeros_like(V_binary) V_thresh_binary[(V_binary >= v_thresh[0]) & (V_binary <= v_thresh[1])] = 1 hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS) S_binary = hls[:,:,2] max_sat = np.max(S_binary) if max_sat >= 245: S_binary = S_binary*(210/np.max(S_binary)) # Threshold x gradient S_thresh_binary= np.zeros_like(S_binary) 
S_thresh_binary[(S_binary >= s_thresh[0]) & (S_binary <= s_thresh[1])] = 1 luv = cv2.cvtColor(img, cv2.COLOR_RGB2LUV) L_binary = luv[:,:,0] max_l = np.max(L_binary) L_binary = L_binary*(255/np.max(L_binary)) # Threshold x gradient L_thresh_binary= np.zeros_like(L_binary) L_thresh_binary[(L_binary >= l_thresh[0]) & (L_binary <= l_thresh[1])] = 1 lab = cv2.cvtColor(img, cv2.COLOR_RGB2Lab) LAB_B_binary = lab[:,:,2] max_value = np.max(LAB_B_binary) if ((max_value <= 190)&((max_l < 252)|(max_sat < 220))): if (max_value <= 170): LAB_B_binary = LAB_B_binary*(210/np.max(LAB_B_binary)) else: LAB_B_binary = LAB_B_binary*(255/np.max(LAB_B_binary)) lab_B_thresh_binary= np.zeros_like(LAB_B_binary) lab_B_thresh_binary[(LAB_B_binary >= lab_b_thresh[0]) & (LAB_B_binary <= lab_b_thresh[1])] = 1 B_binary = img[:,:,0] max_blue = np.max(B_binary) #print(max_blue) # Threshold x gradient if max_blue <= 238: B_binary= B_binary*(255/np.max(B_binary)) B_thresh_binary = np.zeros_like(B_binary) B_thresh_binary[(B_binary >= b_thresh[0]) & (B_binary <= b_thresh[1])] = 1 color_binary= np.zeros_like(B_binary) color_binary[((V_thresh_binary == 1) | (S_thresh_binary == 1) | (L_thresh_binary == 1) | (B_thresh_binary == 1))] = 1 # Sobel x sobelx = cv2.Sobel(L_binary, cv2.CV_64F, 1, 0) # Take the derivative in x abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx)) # Threshold x gradient sxbinary = np.zeros_like(scaled_sobel) sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1 # Threshold color channel s_binary = np.zeros_like(S_binary) sobel = cv2.Sobel(S_binary , cv2.CV_64F, 1, 0) abs_sobel = np.absolute(sobel) # 4) Scale to 8-bit (0 - 255) then convert to type = np.uint8 scaled_sobel_s = np.uint8(255*abs_sobel/np.max(abs_sobel)) s_binary[(scaled_sobel_s >= s_sobel_thresh[0]) & (scaled_sobel_s <= s_sobel_thresh[1])] = 1 # Stack each channel #color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255 combined = np.zeros_like(sxbinary) combined[((color_binary == 1)|(lab_B_thresh_binary == 1)|((s_binary == 1) & (sxbinary == 1)))] = 1 return combined ``` Regarding color transforms, I use and filter the Value Channel of the HSV color space with a threshold of v_thresh = [230,255], the Saturation Channel of the HLS color space with a threshold of s_thresh = [235,255], the Light Channel of the LUV color space with a threshold of l_thresh = [215,255] and the Blue Channel of the RGB color space with a threshold of b_thresh = [230,255]. Here are some examples of my output for this step. ### Image 1 ![alt text][image4] ### Image 2 ![alt text][image5] ### Image 3 ![alt text][image6] As it can be seen, it works ok for image 1 and 2. However, when the image is dark or shadowed as in Image 3 (from the challenge video), the color spaces thresholds are not enough to find the lanes. Therefore, I use the gradient of the Saturation Channel and the Light Channel with a threshold of s_sobel_thresh = (8, 100) and sx_thresh = (10, 100), respectively. The color `Yellow` of the lines is something we can also take advantage of. In the LAB color space, positive values of B Channel respresent the color yellow. Thus, the B Channel of the LAB color space with a threshold of lab_b_thresh = [195,255] is used. Here are some examples of my output for this step. 
### Image 4 ![alt text][image7] ### Image 5 ![alt text][image8] ### Image 6 ![alt text][image9] ### Image 7 ![alt text][image10] ### Image 8 ![alt text][image11] ### Image 9 ![alt text][image12] ### Image 10 ![alt text][image13] ### Image 11 ![alt text][image14] ### Image 12 ![alt text][image15] ### Image 13 ![alt text][image16] ### Image 14 ![alt text][image17] ### Image 15 ![alt text][image18] ### Image 16 ![alt text][image19] ### Image 17 ![alt text][image20] From these images, one can observed that `Light Sobel Threshold` and `Saturation Sobel Threshold` images are really noisy. However, if they are added with an logic `and` operator, as in `Light & Saturation Thresholds` image, I can easly identify lines in shadowed and dark images. The `Combined Thresholds` image shows the combiantion of all color sapces, B_channel of the LAB color space and the Light and Saturation Sobel binary images with an `or` operator. #### 3. Describe how (and identify where in your code) you performed a perspective transform and provide an example of a transformed image. The code for my perspective transform includes a function called `perspective_transform(img, offset = 320)`, which appears as follows: ```python def perspective_transform(img, offset = 320): #define 4 source points src = np.float32([[,],[,],[,],[,]]) #Note: you could pick any four of the detected corners # as long as those four corners define a rectangle #One especially smart way to do this would be to use four well-chosen # corners that were automatically detected during the undistortion steps #We recommend using the automatic detection of corners in your cod src = np.float32([(0.451*img.shape[1], 0.6388*img.shape[0]), (0.1585*img.shape[1], img.shape[0]), (0.88*img.shape[1], img.shape[0]), (0.55*img.shape[1], 0.6388*img.shape[0])]) # For destination points, I'm arbitrarily choosing some points to be # a nice fit for displaying our warped result # again, not exact, but close enough for our purposes dst = np.float32([(offset, 0), (offset, img.shape[0]), (img.shape[1]-offset, img.shape[0]), (img.shape[1]-offset, 0)]) # d) use cv2.getPerspectiveTransform() to get M, the transform matrix M = cv2.getPerspectiveTransform(src,dst) inv_M = cv2.getPerspectiveTransform(dst,src) # e) use cv2.warpPerspective() to warp your image to a top-down view warped = cv2.warpPerspective(img,M,(img.shape[1], img.shape[0]),flags=cv2.INTER_LINEAR) return warped, inv_M ``` The function takes as inputs an image (`img`), as well as the offset to define the destination points (`dst`). Inside the functions is defined the source (`src`) points. 
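Putting the calibration, thresholding and warping steps together, a minimal per-frame sketch (reusing `frame`, `mtx` and `dist` from the calibration section above, with the default destination offset) would look like this:

```python
# Undistort a raw frame, threshold it, then warp it to a birds-eye view
undist = undistort(frame, mtx, dist)
combined = img_thresh(undist)
binary_warped, inv_M = perspective_transform(combined, offset=320)
```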
I chose the hardcode the source and destination points in the following manner: ```python offset= 250 src = np.float32([(0.451*img.shape[1], 0.6388*img.shape[0]), (0.1585*img.shape[1], img.shape[0]), (0.88*img.shape[1], img.shape[0]), (0.55*img.shape[1], 0.6388*img.shape[0])]) # For destination points, I'm arbitrarily choosing some points to be # a nice fit for displaying our warped result # again, not exact, but close enough for our purposes dst = np.float32([(offset, 0), (offset, img.shape[0]), (img.shape[1]-offset, img.shape[0]), (img.shape[1]-offset, 0)]) # d) use cv2.getPerspectiveTransform() to get M, the transform matrix ``` This resulted in the following source and destination points: | Source | Destination | |:-------------:|:-------------:| | 585, 460 | 320, 0 | | 203, 720 | 320, 720 | | 1127, 720 | 960, 720 | | 695, 460 | 960, 0 | I verified that my perspective transform was working as expected by showing the test image and its warped counterpart to verify that the lines appear parallel in the warped image. ![alt text][image3] #### 4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial? The indentification of the lane-line pixels is done by two functions. First, the `find_lane_pixels(binary_warped)` function which code is: ```python def find_lane_pixels(binary_warped): # Take a histogram of the bottom half of the image histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0) # Create an output image to draw on and visualize the result out_img = np.dstack((binary_warped, binary_warped, binary_warped)) # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]//2) leftx_base = np.argmax(histogram[:midpoint]) rightx_base = np.argmax(histogram[midpoint:]) + midpoint # HYPERPARAMETERS # Choose the number of sliding windows nwindows = 9 # Set the width of the windows +/- margin margin = 100 # Set minimum number of pixels found to recenter window minpix = 30 # Set height of windows - based on nwindows above and image shape window_height = np.int(binary_warped.shape[0]//nwindows) # Identify the x and y positions of all nonzero pixels in the image nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Current positions to be updated later for each window in nwindows leftx_current = leftx_base rightx_current = rightx_base # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height ### TO-DO: Find the four below boundaries of the window ### win_xleft_low = leftx_current - margin # Update this win_xleft_high = leftx_current + margin # Update this win_xright_low = rightx_current - margin # Update this win_xright_high = rightx_current + margin # Update this # Draw the windows on the visualization image cv2.rectangle(out_img,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 2) cv2.rectangle(out_img,(win_xright_low,win_y_low), (win_xright_high,win_y_high),(0,255,0), 2) ### TO-DO: Identify the nonzero pixels in x and y within the window ### good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < 
win_xleft_high)).nonzero()[0]
        good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & 
        (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]

        # Append these indices to the lists
        left_lane_inds.append(good_left_inds)
        right_lane_inds.append(good_right_inds)

        ### TO-DO: If you found > minpix pixels, recenter next window ###
        ### (`right` or `leftx_current`) on their mean position ###
        if len(good_left_inds) > minpix:
            leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
        if len(good_right_inds) > minpix:
            rightx_current = np.int(np.mean(nonzerox[good_right_inds]))

    # Concatenate the arrays of indices (previously was a list of lists of pixels)
    try:
        left_lane_inds = np.concatenate(left_lane_inds)
        right_lane_inds = np.concatenate(right_lane_inds)
    except ValueError:
        # Avoids an error if the above is not implemented fully
        pass

    # Extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds]
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]

    out_img[lefty, leftx] = [255, 0, 0]
    out_img[righty, rightx] = [0, 0, 255]

    left_fit, right_fit = (None, None)
    # Fit a second order polynomial to each
    if len(leftx) != 0:
        left_fit = np.polyfit(lefty, leftx, 2)
    if len(rightx) != 0:
        right_fit = np.polyfit(righty, rightx, 2)

    return left_fit, right_fit, leftx, lefty, rightx, righty, out_img
```

It takes the `binary_warped` image as input and uses a histogram of the bottom half to find the x coordinates of the peaks where most pixels accumulate. Then, from the bottom to the top of the image, a search is performed through sliding windows. The number of windows is preset and the starting points are the x coordinates previously found in the histogram step. During this search, each time pixels are found the window is re-centered for the next step. Finally, I fit my lane lines with a 2nd order polynomial using `np.polyfit(y, x, 2)`, like this:

### Image 1 Window_Search
![alt text][image21]

### Image 2 Window_Search
![alt text][image22]

The second function is used when prior information is available. Once a polynomial has been found for the lane lines, it is not necessary to do a blind search. The `search_around_poly(binary_warped, left_fit_search, right_fit_search)` function searches within a region defined by the previous polynomial fit and a margin. Its code is:

```python
def search_around_poly(binary_warped, left_fit_search, right_fit_search):
    # HYPERPARAMETER
    # Choose the width of the margin around the previous polynomial to search
    # The quiz grader expects 100 here, but feel free to tune on your own!
margin = 80 nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) ### TO-DO: Set the area of search based on activated x-values ### ### within the +/- margin of our polynomial function ### ### Hint: consider the window areas for the similarly named variables ### ### in the previous quiz, but change the windows to our new search area ### left_lane_inds = ((nonzerox > (left_fit_search[0]*(nonzeroy**2) + left_fit_search[1]*nonzeroy + left_fit_search[2] - margin)) & (nonzerox < (left_fit_search[0]*(nonzeroy**2) + left_fit_search[1]*nonzeroy + left_fit_search[2] + margin))) right_lane_inds = ((nonzerox > (right_fit_search[0]*(nonzeroy**2) + right_fit_search[1]*nonzeroy + right_fit_search[2] - margin)) & (nonzerox < (right_fit_search[0]*(nonzeroy**2) + right_fit_search[1]*nonzeroy + right_fit_search[2] + margin))) # Again, extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] ## Visualization ## # Create an image to draw on and an image to show the selection window out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255 window_img = np.zeros_like(out_img) # Color in left and right line pixels out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0] out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255] # Fit new polynomials left_fit_new, right_fit_new = (None, None) if len(leftx) != 0: # Fit a second order polynomial to each left_fit_new = np.polyfit(lefty, leftx, 2) if len(rightx) != 0: right_fit_new = np.polyfit(righty, rightx, 2) ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0]) if left_fit_new is not None: left_fitx = left_fit_new[0]*ploty**2 + left_fit_new[1]*ploty + left_fit_new[2] left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))]) left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, ploty])))]) left_line_pts = np.hstack((left_line_window1, left_line_window2)) cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0)) if right_fit_new is not None: right_fitx = right_fit_new[0]*ploty**2 + right_fit_new[1]*ploty + right_fit_new[2] right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))]) right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, ploty])))]) right_line_pts = np.hstack((right_line_window1, right_line_window2)) cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0)) # Draw the lane onto the warped blank image result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0) return left_fit_new, right_fit_new, leftx, lefty, rightx, righty, result ``` ![alt text][image23] #### 5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center. I did this in lines # through # in my code in `my_other_file.py` #### 6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly. 
I implemented this step in lines as follows: ```python def draw_lines(img, inv_M, left_fit, right_fit): ploty = np.linspace(0, img.shape[0]-1, img.shape[0]) left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] margin = 50 out_img = np.zeros_like(img).astype(np.uint8) left_line_window0 = np.array([np.flipud(np.transpose(np.vstack([left_fitx-margin, ploty])))]) left_line_window1 = np.array([np.transpose(np.vstack([left_fitx, ploty]))]) left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) right_line_window3 = np.array([np.transpose(np.vstack([right_fitx+margin, ploty]))]) central_line_pts = np.hstack((left_line_window1, left_line_window2)) left_side_pts = np.hstack((left_line_window1,left_line_window0)) right_side_pts = np.hstack((left_line_window2, right_line_window3)) # Draw the lane onto the warped blank image cv2.fillPoly(out_img, np.int_([left_side_pts]), (255,255,0)) cv2.fillPoly(out_img, np.int_([right_side_pts]), (255,255, 0)) cv2.fillPoly(out_img, np.int_([central_line_pts]), (0,255, 0)) warped_image = cv2.warpPerspective(out_img,inv_M,(out_img.shape[1], out_img.shape[0]),flags=cv2.INTER_LINEAR) result = cv2.addWeighted(img, 1, warped_image , 0.3, 0) # Plot the polynomial lines onto the image #plt.plot(left_fitx, ploty, color='yellow') #plt.plot(right_fitx, ploty, color='yellow') return result def draw_info(img,left_curverad, right_curverad, center_difference, side_position): # Display radius of curvature and vehicle offset cv2.putText(img, 'Coded by Juan ALVAREZ', (10, 50), cv2.FONT_HERSHEY_PLAIN, 2, (255, 63, 150), 4) # Display radius of curvature and vehicle offset cv2.putText(img, 'Radius of Curvature of Left line is ' + str(round(left_curverad/1000, 3)) + '(Km)', (10, 100), cv2.FONT_HERSHEY_PLAIN, 2, (255, 63, 150), 4) cv2.putText(img, 'Radius of Curvature of Right line is ' + str(round(right_curverad/1000, 3)) + '(Km)', (10, 150), cv2.FONT_HERSHEY_PLAIN, 2, (255, 63, 150), 4) cv2.putText(img, 'Vehicle is ' + str(abs(round(center_difference, 3))) + 'm ' + side_position + ' of center', (10, 200), cv2.FONT_HERSHEY_PLAIN, 2, (255, 63, 150), 4) return img ``` I use two functions. The `draw_lines(img, inv_M, left_fit, right_fit)` and `draw_info(img,left_curverad, right_curverad, center_difference, side_position)` functions. The fist one perfomrs the inverse perspective transform with the matrix inv_M and plot the lines and the region between lines on the original image. The second one draws the information of my name, the radius of curvature and the position of the car with respect of the center of the camera and the found lines. Here is an example of my result on a test image: ![alt text][image25] --- ### Pipeline (video) #### 1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!). Here's a [link to my video result of the project video](./project_video_FINAL.mp4) Here's a [link to my video result of the challenge video](./challenge_video_output_FINAL.mp4) --- ### Discussion #### 1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust? 
Here I'll talk about the approach I took, what techniques I used, what worked and why, where the pipeline might fail, and how I might improve it if I were going to pursue this project further.

First, the lines were found by thresholding color spaces and gradients of the image and combining the results. This approach presented problems when the frame was darker and blurrier; dark shadows represent a huge challenge for my approach. Furthermore, when a car approaches the line, it affects the line detections and distorts the fitting.

I also made a `Line` class to keep track of the lines during the video, calculate some attributes of each line, and perform a sanity check before drawing a new line. The sanity check consists in checking that the detected lines:

* are parallel
* have a similar curvature with respect to the last line detected
* have a similar horizontal distance to the car

I save the fit values of the lines of the last 5 detections. If a line is not detected, the first of the 5 records is deleted. The code for the `Line` class is:

```python
class Line():
    def __init__(self):
        # was the line detected in the last iteration?
        self.detected = False
        # x values of the last n fits of the line
        self.recent_xfitted = []
        # average x values of the fitted line over the last n iterations
        self.bestx = None
        # polynomial coefficients averaged over the last n iterations
        self.best_fit = None
        # polynomial coefficients for the most recent fit
        self.current_fit = [np.array([False])]
        # radius of curvature of the line in some units
        self.radius_of_curvature = None
        # distance in meters of vehicle center from the line
        self.line_base_pos = None
        # difference in fit coefficients between last and new fits
        self.diffs = np.array([0, 0, 0], dtype='float')
        # plot y
        self.ploty = None
        # coordinates of base position
        self.base_xy = None
        # x values for detected line pixels
        self.allx = None
        # y values for detected line pixels
        self.ally = None
        self.reset = False

    def calculate_radius_of_curvature(self):
        if self.best_fit is not None and self.ploty is not None:
            ym_per_pix = 30/720   # meters per pixel in y dimension
            xm_per_pix = 3.7/700  # meters per pixel in x dimension
            # Define y-value where we want radius of curvature
            # We'll choose the maximum y-value, corresponding to the bottom of the image
            y_eval = np.max(self.ploty)
            if self.allx is not None:
                # Fit new polynomials to x,y in world space
                fit_world = np.polyfit(self.ploty*ym_per_pix, self.bestx*xm_per_pix, 2)
                # Calculate the radius of curvature R_curve
                radius = (1+(2*fit_world[0]*y_eval+fit_world[1])**2)**(3/2)/(np.absolute(2*fit_world[0]))
                return radius

    def update_radius_of_curvature(self, radius):
        self.radius_of_curvature = radius

    def calculate_line_base_pos(self, center):
        self.line_base_pos = self.base_xy[0] - center

    def update_base_xy(self):
        y_val = np.max(self.ploty)
        x_val = self.best_fit[0]*y_val**2 + self.best_fit[1]*y_val + self.best_fit[2]
        self.base_xy = (x_val, y_val)

    def update_line_fit(self, line_fit, x_coordinates, y_coordinates):
        # add a found fit to the line, up to n
        if line_fit is not None:
            if self.best_fit is not None:
                # if we have a best fit, see how this new fit compares
                self.diffs = abs(line_fit - self.best_fit)
                if (self.diffs[0] > 0.001 or \
                    self.diffs[1] > 1 or \
                    self.diffs[2] > 100) and \
                    len(self.current_fit) > 0:
                    # bad fit! abort! abort! ... well, unless there are no fits in the current_fit queue, then we'll take it
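                    # (Descriptive note: the thresholds above compare the new fit's
                    # coefficients against the running best fit; a large jump in any
                    # coefficient marks the new fit as suspect.)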
                    self.detected = False
                else:
                    self.detected = True
                    self.allx = x_coordinates
                    self.ally = y_coordinates
                    fitx = line_fit[0]*self.ploty**2 + line_fit[1]*self.ploty + line_fit[2]
                    self.recent_xfitted.append(fitx)
                    self.current_fit.append(line_fit)
                    if len(self.current_fit) > 5:
                        # throw out old fits, keep newest n
                        self.current_fit = self.current_fit[len(self.current_fit)-5:]
                        self.recent_xfitted = self.recent_xfitted[len(self.recent_xfitted)-5:]
                    self.best_fit = np.average(self.current_fit, axis=0)
                    radius = self.calculate_radius_of_curvature()
                    self.radius_of_curvature = radius
                    self.update_base_xy()
                    self.bestx = np.average(self.recent_xfitted, axis=0)
            else:
                self.detected = True
                self.current_fit = [line_fit]
                self.allx = x_coordinates
                self.ally = y_coordinates
                fitx = line_fit[0]*self.ploty**2 + line_fit[1]*self.ploty + line_fit[2]
                self.recent_xfitted = [fitx]
                self.bestx = fitx
                self.best_fit = line_fit
                radius = self.calculate_radius_of_curvature()
                self.radius_of_curvature = radius
                self.update_base_xy()
        # or remove one from the history, if not found
        else:
            self.detected = False
            if len(self.current_fit) > 1:
                # delete last line_fit
                self.current_fit = self.current_fit[:len(self.current_fit)-1]
                self.best_fit = np.average(self.current_fit, axis=0)
                self.recent_xfitted = self.recent_xfitted[:len(self.recent_xfitted)-1]
                self.bestx = np.average(self.recent_xfitted, axis=0)
```

The sanity check is done by this code:

```python
# SANITY CHECK
if left_fit is not None and right_fit is not None:
    # calculate the x position of each fit at the bottom of the image (y = image height)
    left_fit_bottom = left_fit[0]*height**2 + left_fit[1]*height + left_fit[2]
    right_fit_bottom = right_fit[0]*height**2 + right_fit[1]*height + right_fit[2]
    interception_bottom_difference = abs(right_fit_bottom - left_fit_bottom)

    left_fit_middle = left_fit[0]*(height/6)**2 + left_fit[1]*(height/6) + left_fit[2]
    right_fit_middle = right_fit[0]*(height/6)**2 + right_fit[1]*(height/6) + right_fit[2]
    interception_middle_difference = abs(right_fit_middle - left_fit_middle)

    if (abs(0.43*width - interception_bottom_difference) > 0.15*width) or (abs(0.42*width - interception_middle_difference) > 0.2*width):
        left_fit = None
        right_fit = None
    else:
        if (Left.radius_of_curvature is not None) and (abs(measure_radius_of_curvature(leftx, lefty) - Left.radius_of_curvature) > 3*Left.radius_of_curvature):
            left_fit = None
        if (Right.radius_of_curvature is not None) and (abs(measure_radius_of_curvature(rightx, righty) - Right.radius_of_curvature) > 3*Right.radius_of_curvature):
            right_fit = None
```

This helped a great deal in deciding accurately whether or not to draw a new line. Detecting the lines with more robust methods would make the pipeline considerably stronger.
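For reference, here is a minimal sketch of what the `measure_radius_of_curvature()` helper used in the sanity check could look like. This is an assumption on my part (the exact implementation is not shown in this writeup); it reuses the same meters-per-pixel conversions as the `Line` class:

```python
def measure_radius_of_curvature(x_values, y_values):
    """Radius of curvature (in meters) of a second-order fit to pixel coordinates."""
    ym_per_pix = 30/720   # meters per pixel in y dimension (same assumption as in Line)
    xm_per_pix = 3.7/700  # meters per pixel in x dimension (same assumption as in Line)
    # Fit a second-order polynomial in world space
    fit_world = np.polyfit(y_values*ym_per_pix, x_values*xm_per_pix, 2)
    # Evaluate the curvature at the bottom of the image (closest to the car)
    y_eval = np.max(y_values) * ym_per_pix
    return ((1 + (2*fit_world[0]*y_eval + fit_world[1])**2)**1.5) / np.absolute(2*fit_world[0])
```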
# Capsule Networks (CapsNets) Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017). Inspired in part from Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow). # Introduction Watch [this video](https://www.youtube.com/embed/pPN8d0E3900) to understand the key ideas behind Capsule Networks: ``` from IPython.display import HTML # Display the video in an iframe: HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/pPN8d0E3900" frameborder="0" allowfullscreen></iframe>""") ``` # Imports To support both Python 2 and Python 3: ``` from __future__ import division, print_function, unicode_literals ``` To plot pretty figures: ``` %matplotlib inline import matplotlib import matplotlib.pyplot as plt ``` We will need NumPy and TensorFlow: ``` import numpy as np import tensorflow as tf ``` # Reproducibility Let's reset the default graph, in case you re-run this notebook without restarting the kernel: ``` tf.reset_default_graph() ``` Let's set the random seeds so that this notebook always produces the same output: ``` np.random.seed(42) tf.set_random_seed(42) ``` # Load MNIST Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets, time will tell. ``` from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data/") ``` Let's look at what these hand-written digit images look like: ``` n_samples = 5 plt.figure(figsize=(n_samples * 2, 3)) for index in range(n_samples): plt.subplot(1, n_samples, index + 1) sample_image = mnist.train.images[index].reshape(28, 28) plt.imshow(sample_image, cmap="binary") plt.axis("off") plt.show() ``` And these are the corresponding labels: ``` mnist.train.labels[:n_samples] ``` Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-) Note: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss. ``` Loss ↑ ┌─────────┴─────────┐ Labels → Margin Loss Reconstruction Loss ↑ ↑ Length Decoder ↑ ↑ Digit Capsules ────Mask────┘ ↖↑↗ ↖↑↗ ↖↑↗ Primary Capsules ↑ Input Images ``` We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go! # Input Images Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale). 
``` X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X") ``` # Primary Capsules The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector: ``` caps1_n_maps = 32 caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules caps1_n_dims = 8 ``` To compute their outputs, we first apply two regular convolutional layers: ``` conv1_params = { "filters": 256, "kernel_size": 9, "strides": 1, "padding": "valid", "activation": tf.nn.relu, } conv2_params = { "filters": caps1_n_maps * caps1_n_dims, # 256 convolutional filters "kernel_size": 9, "strides": 2, "padding": "valid", "activation": tf.nn.relu } conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params) conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params) ``` Note: since we used a kernel size of 9 and no padding (for some reason, that's what `"valid"` means), the image shrunk by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps. Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8). ``` caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims], name="caps1_raw") ``` Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper: $\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$ The `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis). **Caution**, a nasty bug is waiting to bite you: the derivative of $\|\mathbf{s}\|$ is undefined when $\|\mathbf{s}\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$. ``` def squash(s, axis=-1, epsilon=1e-7, name=None): with tf.name_scope(name, default_name="squash"): squared_norm = tf.reduce_sum(tf.square(s), axis=axis, keep_dims=True) safe_norm = tf.sqrt(squared_norm + epsilon) squash_factor = squared_norm / (1. + squared_norm) unit_vector = s / safe_norm return squash_factor * unit_vector ``` Now let's apply this function to get the output $\mathbf{u}_i$ of each primary capsules $i$ : ``` caps1_output = squash(caps1_raw, name="caps1_output") ``` Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins. 
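Before moving on, here is a tiny optional sanity check (my own addition, not part of the original notebook) that the `squash()` function defined above behaves as expected: short vectors should be shrunk toward zero length and long vectors pushed toward (but never beyond) unit length.

```
test_vectors = tf.constant([[0.1, 0.0, 0.0],    # short vector -> length close to 0
                            [10.0, 0.0, 0.0]],  # long vector  -> length close to 1
                           dtype=tf.float32)
squashed = squash(test_vectors)  # squash() as defined above, axis=-1 by default

with tf.Session() as sess:
    print(sess.run(tf.norm(squashed, axis=-1)))  # roughly [0.0099, 0.99]
```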
# Digit Capsules To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm. ## Compute the Predicted Output Vectors The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each: ``` caps2_n_caps = 10 caps2_n_dims = 16 ``` For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\mathbf{W}_{i,j}$ must have a shape of (16, 8). To compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$ and the second contains matrices $\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get: $ \pmatrix{ \mathbf{A} & \mathbf{B} & \mathbf{C} \\ \mathbf{D} & \mathbf{E} & \mathbf{F} } \times \pmatrix{ \mathbf{G} & \mathbf{H} & \mathbf{I} \\ \mathbf{J} & \mathbf{K} & \mathbf{L} } = \pmatrix{ \mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\ \mathbf{DJ} & \mathbf{EK} & \mathbf{FL} } $ We can apply this function to compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer): $ \pmatrix{ \mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\ \mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10} } \times \pmatrix{ \mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\ \mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152} } = \pmatrix{ \hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\ \hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152} } $ The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\mathbf{u}_1$ to $\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want. Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. 
So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices. Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation to 0.01. ``` init_sigma = 0.01 W_init = tf.random_normal( shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims), stddev=init_sigma, dtype=tf.float32, name="W_init") W = tf.Variable(W_init, name="W") ``` Now we can create the first array by repeating `W` once per instance: ``` batch_size = tf.shape(X)[0] W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled") ``` That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension: ``` caps1_output_expanded = tf.expand_dims(caps1_output, -1, name="caps1_output_expanded") caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2, name="caps1_output_tile") caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1], name="caps1_output_tiled") ``` Let's check the shape of the first array: ``` W_tiled ``` Good, and now the second: ``` caps1_output_tiled ``` Yes! Now, to get all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier: ``` caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled, name="caps2_predicted") ``` Let's check the shape: ``` caps2_predicted ``` Perfect, for each instance in the batch (we don't know the batch size yet, hence the "?") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm! ## Routing by agreement First let's initialize the raw routing weights $b_{i,j}$ to zero: ``` raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1], dtype=np.float32, name="raw_weights") ``` We will see why we need the last two dimensions of size 1 in a minute. 
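As a side note, here is a tiny illustration of my own (not from the notebook) of the "arrays of matrices" behaviour of `tf.matmul()` described above, which is exactly what we are relying on for `caps2_predicted`:

```
A = tf.constant(np.random.rand(2, 3, 16, 8), dtype=tf.float32)  # a 2x3 grid of 16x8 matrices
B = tf.constant(np.random.rand(2, 3, 8, 1), dtype=tf.float32)   # a 2x3 grid of 8x1 matrices
AB = tf.matmul(A, B)  # itemwise matrix multiplication: a 2x3 grid of 16x1 matrices

with tf.Session() as sess:
    print(sess.run(tf.shape(AB)))  # [2 3 16 1]
```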
### Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper): ``` routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights") ``` Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper): ``` weighted_predictions = tf.multiply(routing_weights, caps2_predicted, name="weighted_predictions") weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True, name="weighted_sum") ``` There are a couple important details to note here: * To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier. * The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ : ``` caps2_output_round_1 = squash(weighted_sum, axis=-2, name="caps2_output_round_1") caps2_output_round_1 ``` Good! We have ten 16D output vectors for each instance, as expected. ### Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. 
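As a quick standalone check of that math reminder (again an illustration of my own, not part of the notebook), `tf.matmul()` with `transpose_a=True` does return the scalar product of two column vectors as a 1×1 matrix:

```
a = tf.constant([[1.], [2.], [3.]])      # 3x1 column vector
b = tf.constant([[4.], [5.], [6.]])      # 3x1 column vector
dot = tf.matmul(a, b, transpose_a=True)  # 1x1 matrix: 1*4 + 2*5 + 3*6 = 32

with tf.Session() as sess:
    print(sess.run(dot))  # [[32.]]
```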
So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules: ``` caps2_predicted ``` And now let's look at the shape of `caps2_output_round_1`, which holds 10 outputs vectors of 16D each, for each instance: ``` caps2_output_round_1 ``` To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension: ``` caps2_output_round_1_tiled = tf.tile( caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1], name="caps2_output_round_1_tiled") ``` And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$): ``` agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled, transpose_a=True, name="agreement") ``` We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper). ``` raw_weights_round_2 = tf.add(raw_weights, agreement, name="raw_weights_round_2") ``` The rest of round 2 is the same as in round 1: ``` routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2, dim=2, name="routing_weights_round_2") weighted_predictions_round_2 = tf.multiply(routing_weights_round_2, caps2_predicted, name="weighted_predictions_round_2") weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2, axis=1, keep_dims=True, name="weighted_sum_round_2") caps2_output_round_2 = squash(weighted_sum_round_2, axis=-2, name="caps2_output_round_2") ``` We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here: ``` caps2_output = caps2_output_round_2 ``` ### Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop. Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want less than 5 routing iterations, so the graph won't grow too big. However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph, it would be a dynamic loop. For example, here is how to build a small loop that computes the sum of squares from 1 to 100: ``` def condition(input, counter): return tf.less(counter, 100) def loop_body(input, counter): output = tf.add(input, tf.square(counter)) return output, tf.add(counter, 1) with tf.name_scope("compute_sum_of_squares"): counter = tf.constant(1) sum_of_squares = tf.constant(0) result = tf.while_loop(condition, loop_body, [sum_of_squares, counter]) with tf.Session() as sess: print(sess.run(result)) ``` As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. 
The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop. Also note that during training, TensorFlow will automagically handle backpropagation through the loop, so you don't need to worry about that.

Of course, we could have used this one-liner instead! ;-)

```
sum([i**2 for i in range(1, 100 + 1)])
```

Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and more abundant than GPU RAM, this can really make a big difference.

# Estimated Class Probabilities (Length)

The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:

```
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
    with tf.name_scope(name, default_name="safe_norm"):
        squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
                                     keep_dims=keep_dims)
        return tf.sqrt(squared_norm + epsilon)

y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
```

To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:

```
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba")
```

Let's look at the shape of `y_proba_argmax`:

```
y_proba_argmax
```

That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:

```
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")

y_pred
```

Okay, we are now ready to define the training operations, starting with the losses.

# Labels

First, we will need a placeholder for the labels:

```
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
```

# Margin loss

The paper uses a special margin loss to make it possible to detect two or more different digits in each image:

$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$

* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.
* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$.
* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.

```
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
```

Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:

```
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
```

A small example should make it clear what this does:

```
with tf.Session():
    print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
```

Now let's compute the norm of the output vector for each output capsule and each instance.
First, let's verify the shape of `caps2_output`: ``` caps2_output ``` The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`: ``` caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True, name="caps2_output_norm") ``` Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10): ``` present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm), name="present_error_raw") present_error = tf.reshape(present_error_raw, shape=(-1, 10), name="present_error") ``` Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it: ``` absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus), name="absent_error_raw") absent_error = tf.reshape(absent_error_raw, shape=(-1, 10), name="absent_error") ``` We are ready to compute the loss for each instance and each digit: ``` L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error, name="L") ``` Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss: ``` margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss") ``` # Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. ## Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default): ``` mask_with_labels = tf.placeholder_with_default(False, shape=(), name="mask_with_labels") ``` Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise. ``` reconstruction_targets = tf.cond(mask_with_labels, # condition lambda: y, # if True lambda: y_pred, # if False name="reconstruction_targets") ``` Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but: 1. 
whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_layers` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all. 2. we will always need to feed a value for the `y` placeholder (even if `mask_with_layers` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function: ``` reconstruction_mask = tf.one_hot(reconstruction_targets, depth=caps2_n_caps, name="reconstruction_mask") ``` Let's check the shape of `reconstruction_mask`: ``` reconstruction_mask ``` Let's compare this to the shape of `caps2_output`: ``` caps2_output ``` Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible: ``` reconstruction_mask_reshaped = tf.reshape( reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1], name="reconstruction_mask_reshaped") ``` At last! We can apply the mask: ``` caps2_output_masked = tf.multiply( caps2_output, reconstruction_mask_reshaped, name="caps2_output_masked") caps2_output_masked ``` One last reshape operation to flatten the decoder's inputs: ``` decoder_input = tf.reshape(caps2_output_masked, [-1, caps2_n_caps * caps2_n_dims], name="decoder_input") ``` This gives us an array of shape (_batch size_, 160): ``` decoder_input ``` ## Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer: ``` n_hidden1 = 512 n_hidden2 = 1024 n_output = 28 * 28 with tf.name_scope("decoder"): hidden1 = tf.layers.dense(decoder_input, n_hidden1, activation=tf.nn.relu, name="hidden1") hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") decoder_output = tf.layers.dense(hidden2, n_output, activation=tf.nn.sigmoid, name="decoder_output") ``` ## Reconstruction Loss Now let's compute the reconstruction loss. It is just the squared difference between the input image and the reconstructed image: ``` X_flat = tf.reshape(X, [-1, n_output], name="X_flat") squared_difference = tf.square(X_flat - decoder_output, name="squared_difference") reconstruction_loss = tf.reduce_sum(squared_difference, name="reconstruction_loss") ``` ## Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training): ``` alpha = 0.0005 loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss") ``` # Final Touches ## Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. 
For this, we can simply compare `y` and `y_pred`, convert the boolean values to float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:

```
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
```

## Training Operations

The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:

```
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
```

## Init and Saver

And let's add the usual variable initializer, as well as a `Saver`:

```
init = tf.global_variables_initializer()
saver = tf.train.Saver()
```

And... we're done with the construction phase! Please take a moment to celebrate. :)

# Training

Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning or dropout; we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note:
* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),
* we must not forget to feed `mask_with_labels=True` during training,
* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),
* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model,
* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end.

*Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
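Before launching the run, it can be worth confirming that TensorFlow actually sees a GPU. This is an optional check of my own, not part of the original notebook:

```
from tensorflow.python.client import device_lib

gpu_devices = [d.name for d in device_lib.list_local_devices()
               if d.device_type == "GPU"]
print("GPUs visible to TensorFlow:", gpu_devices)
```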
``` n_epochs = 10 batch_size = 50 restore_checkpoint = True n_iterations_per_epoch = mnist.train.num_examples // batch_size n_iterations_validation = mnist.validation.num_examples // batch_size best_loss_val = np.infty checkpoint_path = "./my_capsule_network" with tf.Session() as sess: if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path): saver.restore(sess, checkpoint_path) else: init.run() for epoch in range(n_epochs): for iteration in range(1, n_iterations_per_epoch + 1): X_batch, y_batch = mnist.train.next_batch(batch_size) # Run the training operation and measure the loss: _, loss_train = sess.run( [training_op, loss], feed_dict={X: X_batch.reshape([-1, 28, 28, 1]), y: y_batch, mask_with_labels: True}) print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format( iteration, n_iterations_per_epoch, iteration * 100 / n_iterations_per_epoch, loss_train), end="") # At the end of each epoch, # measure the validation loss and accuracy: loss_vals = [] acc_vals = [] for iteration in range(1, n_iterations_validation + 1): X_batch, y_batch = mnist.validation.next_batch(batch_size) loss_val, acc_val = sess.run( [loss, accuracy], feed_dict={X: X_batch.reshape([-1, 28, 28, 1]), y: y_batch}) loss_vals.append(loss_val) acc_vals.append(acc_val) print("\rEvaluating the model: {}/{} ({:.1f}%)".format( iteration, n_iterations_validation, iteration * 100 / n_iterations_validation), end=" " * 10) loss_val = np.mean(loss_vals) acc_val = np.mean(acc_vals) print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format( epoch + 1, acc_val * 100, loss_val, " (improved)" if loss_val < best_loss_val else "")) # And save the model if it improved: if loss_val < best_loss_val: save_path = saver.save(sess, checkpoint_path) best_loss_val = loss_val ``` Training is finished, we reached over 99.3% accuracy on the validation set after just 5 epochs, things are looking good. Now let's evaluate the model on the test set. # Evaluation ``` n_iterations_test = mnist.test.num_examples // batch_size with tf.Session() as sess: saver.restore(sess, checkpoint_path) loss_tests = [] acc_tests = [] for iteration in range(1, n_iterations_test + 1): X_batch, y_batch = mnist.test.next_batch(batch_size) loss_test, acc_test = sess.run( [loss, accuracy], feed_dict={X: X_batch.reshape([-1, 28, 28, 1]), y: y_batch}) loss_tests.append(loss_test) acc_tests.append(acc_test) print("\rEvaluating the model: {}/{} ({:.1f}%)".format( iteration, n_iterations_test, iteration * 100 / n_iterations_test), end=" " * 10) loss_test = np.mean(loss_tests) acc_test = np.mean(acc_tests) print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format( acc_test * 100, loss_test)) ``` We reach 99.43% accuracy on the test set. Pretty nice. :) # Predictions Now let's make some predictions! We first fix a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions: ``` n_samples = 5 sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1]) with tf.Session() as sess: saver.restore(sess, checkpoint_path) caps2_output_value, decoder_output_value, y_pred_value = sess.run( [caps2_output, decoder_output, y_pred], feed_dict={X: sample_images, y: np.array([], dtype=np.int64)}) ``` Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. 
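As a quick cross-check (my own addition, not in the original notebook), the predicted classes can be recomputed in NumPy directly from the returned output vectors: their lengths are the class probabilities, so the longest vector per instance should match `y_pred_value`.

```
# caps2_output_value has shape (n_samples, 1, 10, 16, 1)
lengths = np.sqrt(np.sum(np.square(caps2_output_value), axis=-2))  # (n_samples, 1, 10, 1)
manual_pred = np.argmax(lengths, axis=2).squeeze()                 # (n_samples,)
print(manual_pred)   # should match y_pred_value
print(y_pred_value)
```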
And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions: ``` sample_images = sample_images.reshape(-1, 28, 28) reconstructions = decoder_output_value.reshape([-1, 28, 28]) plt.figure(figsize=(n_samples * 2, 3)) for index in range(n_samples): plt.subplot(1, n_samples, index + 1) plt.imshow(sample_images[index], cmap="binary") plt.title("Label:" + str(mnist.test.labels[index])) plt.axis("off") plt.show() plt.figure(figsize=(n_samples * 2, 3)) for index in range(n_samples): plt.subplot(1, n_samples, index + 1) plt.title("Predicted:" + str(y_pred_value[index])) plt.imshow(reconstructions[index], cmap="binary") plt.axis("off") plt.show() ``` The predictions are all correct, and the reconstructions look great. Hurray! # Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `cap2_output_value` NumPy array: ``` caps2_output_value.shape ``` Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1): ``` def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11): steps = np.linspace(min, max, n_steps) # -0.25, -0.15, ..., +0.25 pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15 tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1]) tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps output_vectors_expanded = output_vectors[np.newaxis, np.newaxis] return tweaks + output_vectors_expanded ``` Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder: ``` n_steps = 11 tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps) tweaked_vectors_reshaped = tweaked_vectors.reshape( [-1, 1, caps2_n_caps, caps2_n_dims, 1]) ``` Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces: ``` tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps) with tf.Session() as sess: saver.restore(sess, checkpoint_path) decoder_output_value = sess.run( decoder_output, feed_dict={caps2_output: tweaked_vectors_reshaped, mask_with_labels: True, y: tweak_labels}) ``` Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances: ``` tweak_reconstructions = decoder_output_value.reshape( [caps2_n_dims, n_steps, n_samples, 28, 28]) ``` Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row): ``` for dim in range(3): print("Tweaking output dimension #{}".format(dim)) plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5)) for row in range(n_samples): for col in range(n_steps): plt.subplot(n_samples, n_steps, row * n_steps + col + 1) plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary") plt.axis("off") plt.show() ``` # Conclusion I tried to make the code in this notebook as flat and linear as possible, to make it easier to follow, but of course in practice you would want to wrap the code in nice reusable functions and classes. 
For example, you could try implementing your own `PrimaryCapsuleLayer` and `DenseRoutingCapsuleLayer` classes, with parameters for the number of capsules, the number of routing iterations, whether to use a dynamic loop or a static loop, and so on. For an example of a modular implementation of Capsule Networks based on TensorFlow, take a look at the [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow) project.

That's all for today, I hope you enjoyed this notebook!
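To make the refactoring suggestion above a little more concrete, here is a rough, untested sketch of my own (the class and parameter names are assumptions, not part of the notebook) of what a reusable primary-capsule wrapper could look like. It just packages the conv → reshape → squash steps used earlier:

```
# Rough sketch only (not a tested implementation): wrapping the primary-capsule
# computation used earlier in this notebook into a reusable class.
class PrimaryCapsuleLayer(object):
    def __init__(self, n_maps=32, n_dims=8, kernel_size=9, strides=2, name="primary_caps"):
        self.n_maps = n_maps
        self.n_dims = n_dims
        self.kernel_size = kernel_size
        self.strides = strides
        self.name = name

    def __call__(self, inputs):
        with tf.name_scope(self.name):
            conv = tf.layers.conv2d(inputs,
                                    filters=self.n_maps * self.n_dims,
                                    kernel_size=self.kernel_size,
                                    strides=self.strides,
                                    padding="valid",
                                    activation=tf.nn.relu)
            # assumes the spatial dimensions are statically known
            grid_size = int(conv.shape[1]) * int(conv.shape[2])
            raw = tf.reshape(conv, [-1, grid_size * self.n_maps, self.n_dims])
            return squash(raw)  # squash() as defined earlier in the notebook
```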
# Detecting depression in Tweets using Baye's Theorem # Installing and importing libraries ``` !pip install wordcloud !pip install nltk import nltk nltk.download('punkt') from nltk.tokenize import word_tokenize from nltk.corpus import stopwords from nltk.stem import PorterStemmer import matplotlib.pyplot as plt from wordcloud import WordCloud from math import log, sqrt import pandas as pd import numpy as np import re %matplotlib inline ``` # Loading the Data ``` tweets = pd.read_csv('sentiment_tweets3.csv') tweets.head(20) tweets.drop(['Unnamed: 0'], axis = 1, inplace = True) tweets['label'].value_counts() tweets.info() ``` # Splitting the Data in Training and Testing Sets As you can see, I used almost all the data for training: 98% and the rest for testing. ``` totalTweets = 8000 + 2314 trainIndex, testIndex = list(), list() for i in range(tweets.shape[0]): if np.random.uniform(0, 1) < 0.98: trainIndex += [i] else: testIndex += [i] trainData = tweets.iloc[trainIndex] testData = tweets.iloc[testIndex] tweets.info() trainData['label'].value_counts() trainData.head() testData['label'].value_counts() testData.head() ``` # Wordcloud Analysis ``` depressive_words = ' '.join(list(tweets[tweets['label'] == 1]['message'])) depressive_wc = WordCloud(width = 512,height = 512, collocations=False, colormap="Blues").generate(depressive_words) plt.figure(figsize = (10, 8), facecolor = 'k') plt.imshow(depressive_wc) plt.axis('off') plt.tight_layout(pad = 0) plt.show() positive_words = ' '.join(list(tweets[tweets['label'] == 0]['message'])) positive_wc = WordCloud(width = 512,height = 512, collocations=False, colormap="Blues").generate(positive_words) plt.figure(figsize = (10, 8), facecolor = 'k') plt.imshow(positive_wc) plt.axis('off'), plt.tight_layout(pad = 0) plt.show() ``` #Pre-processing the data for the training: Tokenization, stemming, and removal of stop words ``` def process_message(message, lower_case = True, stem = True, stop_words = True, gram = 2): if lower_case: message = message.lower() words = word_tokenize(message) words = [w for w in words if len(w) > 2] if gram > 1: w = [] for i in range(len(words) - gram + 1): w += [' '.join(words[i:i + gram])] return w if stop_words: sw = stopwords.words('english') words = [word for word in words if word not in sw] if stem: stemmer = PorterStemmer() words = [stemmer.stem(word) for word in words] return words class TweetClassifier(object): def __init__(self, trainData, method = 'tf-idf'): self.tweets, self.labels = trainData['message'], trainData['label'] self.method = method def train(self): self.calc_TF_and_IDF() if self.method == 'tf-idf': self.calc_TF_IDF() else: self.calc_prob() def calc_prob(self): self.prob_depressive = dict() self.prob_positive = dict() for word in self.tf_depressive: self.prob_depressive[word] = (self.tf_depressive[word] + 1) / (self.depressive_words + \ len(list(self.tf_depressive.keys()))) for word in self.tf_positive: self.prob_positive[word] = (self.tf_positive[word] + 1) / (self.positive_words + \ len(list(self.tf_positive.keys()))) self.prob_depressive_tweet, self.prob_positive_tweet = self.depressive_tweets / self.total_tweets, self.positive_tweets / self.total_tweets def calc_TF_and_IDF(self): noOfMessages = self.tweets.shape[0] self.depressive_tweets, self.positive_tweets = self.labels.value_counts()[1], self.labels.value_counts()[0] self.total_tweets = self.depressive_tweets + self.positive_tweets self.depressive_words = 0 self.positive_words = 0 self.tf_depressive = dict() self.tf_positive = dict() 
self.idf_depressive = dict() self.idf_positive = dict() for i in range(noOfMessages): message_processed = process_message(self.tweets.iloc[i]) count = list() #To keep track of whether the word has ocured in the message or not. #For IDF for word in message_processed: if self.labels.iloc[i]: self.tf_depressive[word] = self.tf_depressive.get(word, 0) + 1 self.depressive_words += 1 else: self.tf_positive[word] = self.tf_positive.get(word, 0) + 1 self.positive_words += 1 if word not in count: count += [word] for word in count: if self.labels.iloc[i]: self.idf_depressive[word] = self.idf_depressive.get(word, 0) + 1 else: self.idf_positive[word] = self.idf_positive.get(word, 0) + 1 def calc_TF_IDF(self): self.prob_depressive = dict() self.prob_positive = dict() self.sum_tf_idf_depressive = 0 self.sum_tf_idf_positive = 0 for word in self.tf_depressive: self.prob_depressive[word] = (self.tf_depressive[word]) * log((self.depressive_tweets + self.positive_tweets) \ / (self.idf_depressive[word] + self.idf_positive.get(word, 0))) self.sum_tf_idf_depressive += self.prob_depressive[word] for word in self.tf_depressive: self.prob_depressive[word] = (self.prob_depressive[word] + 1) / (self.sum_tf_idf_depressive + len(list(self.prob_depressive.keys()))) for word in self.tf_positive: self.prob_positive[word] = (self.tf_positive[word]) * log((self.depressive_tweets + self.positive_tweets) \ / (self.idf_depressive.get(word, 0) + self.idf_positive[word])) self.sum_tf_idf_positive += self.prob_positive[word] for word in self.tf_positive: self.prob_positive[word] = (self.prob_positive[word] + 1) / (self.sum_tf_idf_positive + len(list(self.prob_positive.keys()))) self.prob_depressive_tweet, self.prob_positive_tweet = self.depressive_tweets / self.total_tweets, self.positive_tweets / self.total_tweets def classify(self, processed_message): pDepressive, pPositive = 0, 0 for word in processed_message: if word in self.prob_depressive: pDepressive += log(self.prob_depressive[word]) else: if self.method == 'tf-idf': pDepressive -= log(self.sum_tf_idf_depressive + len(list(self.prob_depressive.keys()))) else: pDepressive -= log(self.depressive_words + len(list(self.prob_depressive.keys()))) if word in self.prob_positive: pPositive += log(self.prob_positive[word]) else: if self.method == 'tf-idf': pPositive -= log(self.sum_tf_idf_positive + len(list(self.prob_positive.keys()))) else: pPositive -= log(self.positive_words + len(list(self.prob_positive.keys()))) pDepressive += log(self.prob_depressive_tweet) pPositive += log(self.prob_positive_tweet) return pDepressive >= pPositive def predict(self, testData): result = dict() for (i, message) in enumerate(testData): processed_message = process_message(message) result[i] = int(self.classify(processed_message)) return result def metrics(labels, predictions): true_pos, true_neg, false_pos, false_neg = 0, 0, 0, 0 for i in range(len(labels)): true_pos += int(labels.iloc[i] == 1 and predictions[i] == 1) true_neg += int(labels.iloc[i] == 0 and predictions[i] == 0) false_pos += int(labels.iloc[i] == 0 and predictions[i] == 1) false_neg += int(labels.iloc[i] == 1 and predictions[i] == 0) precision = true_pos / (true_pos + false_pos) recall = true_pos / (true_pos + false_neg) Fscore = 2 * precision * recall / (precision + recall) accuracy = (true_pos + true_neg) / (true_pos + true_neg + false_pos + false_neg) print("Precision: ", precision) print("Recall: ", recall) print("F-score: ", Fscore) print("Accuracy: ", accuracy) sc_tf_idf = TweetClassifier(trainData, 'tf-idf') 
sc_tf_idf.train() preds_tf_idf = sc_tf_idf.predict(testData['message']) metrics(testData['label'], preds_tf_idf) sc_bow = TweetClassifier(trainData, 'bow') sc_bow.train() preds_bow = sc_bow.predict(testData['message']) metrics(testData['label'], preds_bow) ``` # Predictions with TF-IDF # Depressive Tweets ``` pm = process_message('Lately I have been feeling unsure of myself as a person & an artist') sc_tf_idf.classify(pm) pm = process_message('Extreme sadness, lack of energy, hopelessness') sc_tf_idf.classify(pm) pm = process_message('Hi hello depression and anxiety are the worst') sc_tf_idf.classify(pm) pm = process_message('I am officially done with @kanyewest') sc_tf_idf.classify(pm) pm = process_message('Feeling down...') sc_tf_idf.classify(pm) pm = process_message('My depression will not let me work out') sc_tf_idf.classify(pm) ``` # Positive Tweets ``` pm = process_message('Loving how me and my lovely partner is talking about what we want.') sc_tf_idf.classify(pm) pm = process_message('Very rewarding when a patient hugs you and tells you they feel great after changing the diet and daily habits') sc_tf_idf.classify(pm) pm = process_message('Happy Thursday everyone. Thought today was Wednesday so super happy tomorrow is Friday yayyyyy') sc_tf_idf.classify(pm) pm = process_message('It’s the little things that make me smile. Got our new car today and this arrived with it') sc_tf_idf.classify(pm) ``` # Predictions with Bag-of-Words (BOW) # Depressive tweets ``` pm = process_message('Hi hello depression and anxiety are the worst') sc_bow.classify(pm) pm = process_message('My depression will not let me work out') sc_bow.classify(pm) pm = process_message('Feeling down...') sc_bow.classify(pm) ``` # Positive Tweets ``` pm = process_message('Loving how me and my lovely partner is talking about what we want.') sc_bow.classify(pm) pm = process_message('Very rewarding when a patient hugs you and tells you they feel great after changing the diet and daily habits') sc_bow.classify(pm) pm = process_message('Happy Thursday everyone. Thought today was Wednesday so super happy tomorrow is Friday yayyyyy') sc_bow.classify(pm) ```
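As an optional cross-check (not part of the original notebook), the same precision, recall and F-score can be computed with scikit-learn, assuming it is installed and that `preds_tf_idf` and `testData` are available as above:

```
from sklearn.metrics import classification_report

# preds_tf_idf is a dict {row_index: 0/1}; align it with the test labels
y_true = testData['label'].tolist()
y_hat = [preds_tf_idf[i] for i in range(len(y_true))]

print(classification_report(y_true, y_hat, target_names=['positive', 'depressive']))
```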
<a href="https://colab.research.google.com/github/Anmol42/IDP-sem4/blob/main/notebooks/Sig-mu_vae.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import torch import torchvision import torch.nn as nn import matplotlib.pyplot as plt import torch.nn.functional as F import torchvision.transforms as transforms import numpy as np from torch.utils.data.dataloader import DataLoader from google.colab import drive drive.mount('/content/drive') !unzip -q /content/drive/MyDrive/Datasets/faces.zip ## Silenced the unzip action from skimage.io import imread_collection path = "/content/faces/*.jpg" train_ds = imread_collection(path) from skimage.io import imread_collection from skimage.color import rgb2lab,lab2rgb from skimage.transform import resize def get_img_data(path): train_ds = imread_collection(path) images = torch.zeros(len(train_ds),3,128,128) for i,im in enumerate(train_ds): im = resize(im, (128,128,3), anti_aliasing=True) image = rgb2lab(im) image = torch.Tensor(image) image = image.permute(2,0,1) images[i]=image return images def normalize_data(data): data[:,0] = data[:,0]/100 data[:,1:] = data[:,1:]/128 return data images = get_img_data(path) images = normalize_data(images) batch_size = 100 class component(nn.Module): def __init__(self): super(component,self).__init__() self.conv1 = nn.Sequential(nn.Conv2d(1,8,kernel_size=3,padding=1,stride=2), nn.BatchNorm2d(8), nn.LeakyReLU()) self.conv2 = nn.Sequential(nn.Conv2d(8,16,kernel_size=5,padding=2,stride=2), nn.BatchNorm2d(16), nn.LeakyReLU()) self.conv3 = nn.Sequential(nn.Conv2d(16,32,kernel_size=3,padding=1,stride=2), nn.BatchNorm2d(32), nn.LeakyReLU()) self.conv4 = nn.Sequential(nn.Conv2d(32,64,kernel_size=5,padding=2,stride=2), #size is 8x8 at this point nn.LeakyReLU()) # BottleNeck self.bottleneck = nn.Sequential(nn.Conv2d(64,128,kernel_size=3,stride=2,padding=1), nn.LeakyReLU()) # size 4x4 self.linear = nn.Linear(128*4*4,256) def forward(self,xb,z): out1 = self.conv1(xb) out2 = self.conv2(out1) out3 = self.conv3(out2) out4 = self.conv4(out3) out5 = self.bottleneck(out4) out5 = out5.view(z.shape[0],-1) out6 = self.linear(out5) return out6 ## generator model class generator(nn.Module): def __init__(self,component): # z is input noise super(generator,self).__init__() self.sigma = component() self.mu = component() self.deconv7 = nn.Sequential(nn.ConvTranspose2d(256,128,kernel_size=4,stride=2,padding=1), nn.ReLU()) self.deconv6 = nn.Sequential(nn.ConvTranspose2d(128,64,kernel_size=4,stride=2,padding=1), nn.ReLU()) self.deconv5 = nn.Sequential(nn.ConvTranspose2d(64,64,kernel_size=4,stride=2,padding=1), nn.ReLU()) self.deconv4 = nn.Sequential(nn.ConvTranspose2d(64,32,kernel_size=4,stride=2,padding=1), nn.ReLU()) self.deconv3 = nn.Sequential(nn.ConvTranspose2d(32,16,kernel_size=4,stride=2,padding=1), nn.ReLU()) self.deconv2 = nn.Sequential(nn.ConvTranspose2d(16,8,kernel_size=4,stride=2,padding=1), nn.ReLU()) self.deconv1 = nn.Sequential(nn.ConvTranspose2d(8,2,kernel_size=4,stride=2,padding=1), nn.Tanh()) self.linear = nn.Linear(128*4*4,512) def forward(self,xb,z): sig = self.sigma(xb,z) mm = self.mu(xb,z) noise = z*sig + mm out5 = self.deconv7(noise.unsqueeze(2).unsqueeze(2)) out5 = self.deconv6(out5) out5 = self.deconv5(out5) out5 = self.deconv4(out5) out5 = self.deconv3(out5) out5 = self.deconv2(out5) out5 = self.deconv1(out5) return torch.cat((xb,out5),1) ## discriminator class discriminator(nn.Module): def __init__(self): super(discriminator,self).__init__() 
self.network = nn.Sequential( nn.Conv2d(3,8,kernel_size=3,stride=1), nn.MaxPool2d(kernel_size=2), nn.ReLU(), nn.Conv2d(8,16,kernel_size=5), nn.MaxPool2d(kernel_size=2), nn.ReLU(), nn.Conv2d(16,32,kernel_size=3), nn.MaxPool2d(kernel_size=2), nn.ReLU(), nn.Conv2d(32,64,kernel_size=3), nn.MaxPool2d(kernel_size=2), nn.ReLU(), nn.Flatten() ) self.linear1 = nn.Linear(64*25,128) self.linear2 = nn.Linear(128,1) def forward(self,x): out = self.network(x) out = self.linear1(out) out = self.linear2(out) out = torch.sigmoid(out) return out gen_model = generator(component) dis_model = discriminator() train_dl = DataLoader(images[:10000],batch_size,shuffle=True,pin_memory=True,num_workers=2) val_dl = DataLoader(images[10000:11000],batch_size, num_workers=2,pin_memory=True) test_dl = DataLoader(images[11000:],batch_size,num_workers=2) bceloss = nn.BCEWithLogitsLoss() #minimise this # t is whether the image is fake or real; x is prob vect of patches being real/fake. def loss_inf(x,t): # probability vector from discriminator as input return int(t)*(bceloss(x,torch.ones_like(x))) + (1-int(t))*bceloss(x,torch.zeros_like(x)) l1loss = nn.L1Loss() def gen_loss(x,y): return l1loss(x,y) def to_device(data, device): """Move tensor(s) to chosen device""" if isinstance(data, (list,tuple)): return [to_device(x, device) for x in data] return data.to(device, non_blocking=True) class DeviceDataLoader(): """Wrap a dataloader to move data to a device""" def __init__(self, dl, device): self.dl = dl self.device = device def __iter__(self): """Yield a batch of data after moving it to device""" for b in self.dl: yield to_device(b, self.device) def __len__(self): """Number of batches""" return len(self.dl) train_dl = DeviceDataLoader(train_dl,'cuda') val_dl = DeviceDataLoader(val_dl,'cuda') test_dl = DeviceDataLoader(test_dl,'cuda') gen_model.to('cuda') dis_model.to('cuda') def fit(epochs,lr_g,lr_d,generator,discriminator,batch_size,opt_func=torch.optim.Adam): gen_optimize = opt_func(generator.parameters(),lr_g) dis_optimize = opt_func(discriminator.parameters(),lr_d) train_g_history,train_d_history = [],[] val_g_history, val_d_history = [],[] for epoch in range(epochs): epoch_loss_g = torch.zeros(1).to('cuda') epoch_loss_d = torch.zeros(1).to('cuda') noise = torch.randn(batch_size,256).to('cuda') for batch in train_dl: for i in range(5): out = generator(batch[:,0].unsqueeze(1),noise) # gives a,b channel for LAB color scheme real_score = discriminator(batch) # how real is the og input image fake_score = discriminator(out) # how real is the generated image loss_d = loss_inf(real_score,1) + loss_inf(fake_score,0)# discriminator #print(loss_d.item()) loss_d.backward() dis_optimize.zero_grad() dis_optimize.step() out = generator(batch[:,0].unsqueeze(1),noise) # gives a,b channel for LAB color scheme real_score = discriminator(batch) # how real is the og input image fake_score = discriminator(out) # how real is the generated image loss_g = 4*gen_loss(out,batch) + loss_inf(fake_score,1) loss_g.backward() gen_optimize.step() gen_optimize.zero_grad() with torch.no_grad(): epoch_loss_g += loss_g epoch_loss_d += loss_d train_d_history.append(epoch_loss_d) train_g_history.append(epoch_loss_g) epoch_loss_g = 0 epoch_loss_d = 0 for batch in val_dl: with torch.no_grad(): out = generator(batch[:,0].unsqueeze(1),noise) # gives a,b channel for LAB color scheme real_score = discriminator(batch) # how real is the og input image fake_score = discriminator(out) # how real is the generated image loss_d = loss_inf(real_score,1) + 
loss_inf(fake_score,0)# discriminator loss_g = 4*gen_loss(out,batch) + loss_inf(fake_score,1) epoch_loss_g += loss_g epoch_loss_d += loss_d val_g_history.append(epoch_loss_g.item()) val_d_history.append(epoch_loss_d.item()) if epoch % 3 == 0: print("Gen Epoch Loss",epoch_loss_g) print("Discriminator Epoch loss",epoch_loss_d) return train_d_history,train_g_history,val_d_history,val_g_history loss_h = fit(6,0.001,0.001,gen_model,dis_model,batch_size,opt_func=torch.optim.Adam) import matplotlib.pyplot as plt plt.plot(loss_h[1]) from skimage.color import rgb2lab,lab2rgb,rgb2gray def tensor_to_pic(tensor : torch.Tensor) -> np.ndarray: tensor[0] *= 100 tensor[1:]*= 128 image = tensor.permute(1,2,0).detach().cpu().numpy() image = lab2rgb(image) return image def show_images(n,dataset = images,gen=gen_model,dis=dis_model) -> None: gen_model.eval() dis_model.eval() z = torch.randn(1,256).to('cuda') #z = torch.ones_like(z) image_tensor = dataset[n].to('cuda') gen_tensor = gen(image_tensor[0].unsqueeze(0).unsqueeze(0),z)[0] image = tensor_to_pic(image_tensor) #print(torch.sum(gen_tensor)) gray = np.zeros_like(image) bw = rgb2gray(image) gray[:,:,0],gray[:,:,1],gray[:,:,2] = bw,bw,bw gen_image = tensor_to_pic(gen_tensor) to_be_shown = np.concatenate((gray,gen_image,image),axis=1) plt.figure(figsize=(15,15)) plt.imshow(to_be_shown) plt.show() i = np.random.randint(3500,20000) print(i) show_images(i) ## Shows generated and coloured images side by side ```
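For a fuller picture of training, one can also plot all four loss histories returned by `fit` (in order: train discriminator, train generator, validation discriminator, validation generator). A minimal sketch, converting the CUDA tensors accumulated in the training histories to plain floats first:

```
def to_floats(history):
    # training losses are accumulated as CUDA tensors, validation losses as plain floats
    return [h.item() if torch.is_tensor(h) else float(h) for h in history]

labels = ['train D', 'train G', 'val D', 'val G']
plt.figure(figsize=(8, 4))
for hist, label in zip(loss_h, labels):
    plt.plot(to_floats(hist), label=label)
plt.xlabel('epoch')
plt.ylabel('summed loss per epoch')
plt.legend()
plt.show()
```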
# MicroGrid Energy Management

## Summary

The goal of the Microgrid problem is to compute an optimal power flow within the distributed sources, loads, storages and a main grid. On a given time horizon $H$, the optimal power flow problem aims to find the optimal command of the components, e.g. charging/discharging for storage, selling/buying for external power sources, turning on/off intelligent loads, etc., knowing that the power balance must be respected $\forall t\in[0,H]$.

This problem can be formulated as a mixed integer linear program, for which constraints, variables and objectives are organized using pyomo blocks.

<img src="figures/mg_pv_bat_eol_house.png" width="500">

## Problem Statement

The Energy Management problem can be formulated mathematically as a mixed integer linear problem using the following model. Plan:

1. Definition of sets
2. Definition of distributed sources, loads and charges
3. Definition of the global constraint (power balance)
4. Definition of the objective

### Sets

The microgrid modelling requires only one `ContinuousSet`: the time. We note $H$ the horizon in seconds.

```
from pyomo.environ import *
from pyomo.dae import ContinuousSet, Integral

H = 60*60*24  # Time horizon in seconds

m = AbstractModel()
m.time = ContinuousSet(initialize=(0, H))
```

### Blocks

The microgrid is created by connecting units together, such as batteries, loads, renewable sources, etc. In the pyomo vocabulary, such a component is called a block. As a first step, the microgrid is constituted of a renewable power source (PV panel), a critical load, and a connection to the main grid. A quick description of the useful blocks follows:

- **Main grid**: A block that describes the model of the distribution grid connection. A base version, named `AbsMainGridV0`, is available in `microgrids.maingrids`.
- **Renewable power source**: A block that describes the model of PV panels. This will be modeled by a deterministic power profile using a `Param` indexed by the time. Such a block is available in `microgrids.sources.AbsFixedPowerSource`.
- **Power load**: A block that describes the model of a critical load. This will be modeled by a deterministic power profile using a `Param` indexed by the time. Such a block is available in `microgrids.sources.AbsFixedPowerLoad`.

Blocks are added to the main problem as follows:

```
from batteries import AbsBatteryV0
from maingrids import AbsMainGridV0
from sources import AbsFixedPowerLoad, AbsFixedPowerSource

m.mg = AbsMainGridV0()
m.s = AbsFixedPowerSource()
m.l = AbsFixedPowerLoad()
```

Each block is described by a set of constraints, variables, parameters and expressions. One can print any pyomo object using the `pprint` method, for example `m.mg.pprint()`. One can access the documentation of any object using the built-in `doc` attribute or the `help` function (for inheritance): `print(m.mg.doc)`, `help(m.mg)`. Pop-up documentation shortcut: `Shift+Tab`.

Let's have a look at the main grid block:

### Global Power Constraint

The electrical connection between blocks is modelled using constraints, aka Kirchhoff's laws or power balance:

$$\sum P_{sources}(t) = \sum P_{loads}(t), \forall t \in [0, H] $$

```
@m.Constraint(m.time)
def power_balance(m, t):
    return m.mg.p[t] + m.s.p[t] == m.l.p[t]
```

### Objective

As a first hypothesis, we will only consider a fixed selling/buying price of energy $c$, such that:

$$J = \int_{0}^{H} c \cdot p_{mg}(t) \, dt$$
Pyomo allows you to take integrals over a continuous set as follows:

```
m.int = Integral(m.time, wrt=m.time, rule=lambda m, i: m.mg.inst_cost[i])
m.obj = Objective(expr=m.int)
```

## Instantiate the problem, discretize it and solve it

`m` is a pyomo Abstract Model, as we saw in the pyomo tutorial, which means that the structure of the problem is now completely defined and may be used for different scenarios or cases. The following steps concern:

1. Loading data (scenario, predictions and sizing of the components)
2. Instantiating the problem
3. Discretization
4. Solving

### Loading data

Parameters, loads and source profiles are already defined in the file `data/data_models.py`. One can load it and plot the PV and load profiles as follows:

```
%run data/data_models.py
df_s[['P_pv', 'P_load_1']].plot(figsize=(15,3))
```

### Problem instantiation

The abstract model is instantiated with the previously defined data dictionary, as follows:

```
inst = m.create_instance(data)
```

### Discretization

After instantiation, one can discretize the problem equations over the time horizon. In the following, we choose a number of finite elements $nfe = 96$, i.e. one every $15~min$ for $H = 1\ day$.

```
from pyomo.environ import TransformationFactory

inst = m.create_instance(data)
nfe = 60*60*24 // (15*60)  # 96 finite elements, i.e. one every 15 minutes
TransformationFactory('dae.finite_difference').apply_to(inst, nfe=nfe)
```

### Solving

The problem is solved as follows:

```
opt = SolverFactory("glpk")
res = opt.solve(inst, load_solutions=True)
```

## Post-Processing

```
import pandas as pd
from utils import pplot

index = pd.date_range(start=TSTART, end=TEND, periods=nfe+1)
pplot(inst.mg.p, inst.l.p, inst.s.p, index=index, marker='x', figsize=(10,5))
```
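For reference, the block mechanism itself is plain pyomo: a block bundles its own variables, parameters, constraints and expressions, and exposes them to the parent model. The following is a minimal hand-written sketch of a grid-connection block in the same spirit as `AbsMainGridV0`; the names, the flat price and the cost expression are illustrative assumptions, not the actual `microgrids` API.

```
from pyomo.environ import ConcreteModel, Block, Var, Param, Expression, Reals
from pyomo.dae import ContinuousSet

demo = ConcreteModel()
demo.time = ContinuousSet(bounds=(0, 60*60*24))

def maingrid_rule(b):
    """A toy grid-connection block: one power variable and an instantaneous cost."""
    t = b.model().time
    b.p = Var(t, within=Reals)                      # power bought (>0) or sold (<0), in W
    b.price = Param(initialize=0.15 / 3.6e6)        # assumed flat price, currency per J
    b.inst_cost = Expression(t, rule=lambda blk, tt: blk.price * blk.p[tt])

demo.mg = Block(rule=maingrid_rule)
demo.mg.pprint()
```

An `inst_cost` expression like this is exactly the kind of quantity that the `Integral` objective above accumulates over the horizon.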
# Analyzing interstellar reddening and calculating synthetic photometry ## Authors Kristen Larson, Lia Corrales, Stephanie T. Douglas, Kelle Cruz Input from Emir Karamehmetoglu, Pey Lian Lim, Karl Gordon, Kevin Covey ## Learning Goals - Investigate extinction curve shapes - Deredden spectral energy distributions and spectra - Calculate photometric extinction and reddening - Calculate synthetic photometry for a dust-reddened star by combining `dust_extinction` and `synphot` - Convert from frequency to wavelength with `astropy.unit` equivalencies - Unit support for plotting with `astropy.visualization` ## Keywords dust extinction, synphot, astroquery, units, photometry, extinction, physics, observational astronomy ## Companion Content * [Bessell & Murphy (2012)](https://ui.adsabs.harvard.edu/#abs/2012PASP..124..140B/abstract) ## Summary In this tutorial, we will look at some extinction curves from the literature, use one of those curves to deredden an observed spectrum, and practice invoking a background source flux in order to calculate magnitudes from an extinction model. The primary libraries we'll be using are [dust_extinction](https://dust-extinction.readthedocs.io/en/latest/) and [synphot](https://synphot.readthedocs.io/en/latest/), which are [Astropy affiliated packages](https://www.astropy.org/affiliated/). We recommend installing the two packages in this fashion: ``` pip install synphot pip install dust_extinction ``` This tutorial requires v0.7 or later of `dust_extinction`. To ensure that all commands work properly, make sure you have the correct version installed. If you have v0.6 or earlier installed, run the following command to upgrade ``` pip install dust_extinction --upgrade ``` ``` import pathlib import matplotlib.pyplot as plt %matplotlib inline import numpy as np import astropy.units as u from astropy.table import Table from dust_extinction.parameter_averages import CCM89, F99 from synphot import units, config from synphot import SourceSpectrum,SpectralElement,Observation,ExtinctionModel1D from synphot.models import BlackBodyNorm1D from synphot.spectrum import BaseUnitlessSpectrum from synphot.reddening import ExtinctionCurve from astroquery.simbad import Simbad from astroquery.mast import Observations import astropy.visualization ``` # Introduction Dust in the interstellar medium (ISM) extinguishes background starlight. The wavelength dependence of the extinction is such that short-wavelength light is extinguished more than long-wavelength light, and we call this effect *reddening*. If you're new to extinction, here is a brief introduction to the types of quantities involved. The fractional change to the flux of starlight is $$ \frac{dF_\lambda}{F_\lambda} = -\tau_\lambda $$ where $\tau$ is the optical depth and depends on wavelength. Integrating along the line of sight, the resultant flux is an exponential function of optical depth, $$ \tau_\lambda = -\ln\left(\frac{F_\lambda}{F_{\lambda,0}}\right). $$ With an eye to how we define magnitudes, we usually change the base from $e$ to 10, $$ \tau_\lambda = -2.303\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right), $$ and define an extinction $A_\lambda = 1.086 \,\tau_\lambda$ so that $$ A_\lambda = -2.5\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right). $$ There are two basic take-home messages from this derivation: * Extinction introduces a multiplying factor $10^{-0.4 A_\lambda}$ to the flux. * Extinction is defined relative to the flux without dust, $F_{\lambda,0}$. 
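To make the first take-home point concrete, here is a small standalone check in plain numpy (not part of the tutorial's pipeline): one magnitude of extinction suppresses the flux by a factor of $10^{-0.4} \approx 0.398$, and dividing the observed flux by that same factor recovers the un-extinguished value.

```
import numpy as np

A_lam = 1.0                               # extinction in magnitudes at some wavelength
factor = 10 ** (-0.4 * A_lam)             # multiplying factor applied to the flux
F_0 = 2.5e-13                             # un-extinguished flux (arbitrary units)
F_obs = F_0 * factor                      # observed (reddened) flux
F_dered = F_obs / factor                  # dereddening recovers F_0
print(factor, np.isclose(F_dered, F_0))   # ~0.398, True
```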
Once astropy and the affiliated packages are installed, we can import from them as needed: # Example 1: Investigate Extinction Models The `dust_extinction` package provides various models for extinction $A_\lambda$ normalized to $A_V$. The shapes of normalized curves are relatively (and perhaps surprisingly) uniform in the Milky Way. The little variation that exists is often parameterized by the ratio of extinction ($A_V$) to reddening in the blue-visual ($E_{B-V}$), $$ R_V \equiv \frac{A_V}{E_{B-V}} $$ where $E_{B-V}$ is differential extinction $A_B-A_V$. In this example, we show the $R_V$-parameterization for the Clayton, Cardelli, & Mathis (1989, CCM) and the Fitzpatrick (1999) models. [More model options are available in the `dust_extinction` documentation.](https://dust-extinction.readthedocs.io/en/latest/dust_extinction/model_flavors.html) ``` # Create wavelengths array. wav = np.arange(0.1, 3.0, 0.001)*u.micron for model in [CCM89, F99]: for R in (2.0,3.0,4.0): # Initialize the extinction model ext = model(Rv=R) plt.plot(1/wav, ext(wav), label=model.name+' R='+str(R)) plt.xlabel('$\lambda^{-1}$ ($\mu$m$^{-1}$)') plt.ylabel('A($\lambda$) / A(V)') plt.legend(loc='best') plt.title('Some Extinction Laws') plt.show() ``` Astronomers studying the ISM often display extinction curves against inverse wavelength (wavenumber) to show the ultraviolet variation, as we do here. Infrared extinction varies much less and approaches zero at long wavelength in the absence of wavelength-independent, or grey, extinction. # Example 2: Deredden a Spectrum Here we deredden (unextinguish) the IUE ultraviolet spectrum and optical photometry of the star $\rho$ Oph (HD 147933). First, we will use astroquery to fetch the archival [IUE spectrum from MAST](https://archive.stsci.edu/iue/): ``` download_dir = pathlib.Path('~/.astropy/cache/astroquery/Mast').expanduser() download_dir.mkdir(exist_ok=True) obsTable = Observations.query_object("HD 147933", radius="1 arcsec") obsTable_spec = obsTable[obsTable['dataproduct_type'] == 'spectrum'] obsTable_spec obsids = obsTable_spec[39]['obsid'] dataProductsByID = Observations.get_product_list(obsids) manifest = Observations.download_products(dataProductsByID, download_dir=str(download_dir)) ``` We read the downloaded files into an astropy table: ``` t_lwr = Table.read(download_dir / 'mastDownload/IUE/lwr05639/lwr05639mxlo_vo.fits') print(t_lwr) ``` The `.quantity` extension in the next lines will read the Table columns into Quantity vectors. Quantities keep the units of the Table column attached to the numpy array values. ``` wav_UV = t_lwr['WAVE'][0,].quantity UVflux = t_lwr['FLUX'][0,].quantity ``` Now, we use astroquery again to fetch photometry from Simbad to go with the IUE spectrum: ``` custom_query = Simbad() custom_query.add_votable_fields('fluxdata(U)','fluxdata(B)','fluxdata(V)') phot_table=custom_query.query_object('HD 147933') Umag=phot_table['FLUX_U'] Bmag=phot_table['FLUX_B'] Vmag=phot_table['FLUX_V'] ``` To convert the photometry to flux, we look up some [properties of the photometric passbands](http://ned.ipac.caltech.edu/help/photoband.lst), including the flux of a magnitude zero star through the each passband, also known as the zero-point of the passband. 
``` wav_U = 0.3660 * u.micron zeroflux_U_nu = 1.81E-23 * u.Watt/(u.m*u.m*u.Hz) wav_B = 0.4400 * u.micron zeroflux_B_nu = 4.26E-23 * u.Watt/(u.m*u.m*u.Hz) wav_V = 0.5530 * u.micron zeroflux_V_nu = 3.64E-23 * u.Watt/(u.m*u.m*u.Hz) ``` The zero-points that we found for the optical passbands are not in the same units as the IUE fluxes. To make matters worse, the zero-point fluxes are $F_\nu$ and the IUE fluxes are $F_\lambda$. To convert between them, the wavelength is needed. Fortunately, astropy provides an easy way to make the conversion with *equivalencies*: ``` zeroflux_U = zeroflux_U_nu.to(u.erg/u.AA/u.cm/u.cm/u.s, equivalencies=u.spectral_density(wav_U)) zeroflux_B = zeroflux_B_nu.to(u.erg/u.AA/u.cm/u.cm/u.s, equivalencies=u.spectral_density(wav_B)) zeroflux_V = zeroflux_V_nu.to(u.erg/u.AA/u.cm/u.cm/u.s, equivalencies=u.spectral_density(wav_V)) ``` Now we can convert from photometry to flux using the definition of magnitude: $$ F=F_0\ 10^{-0.4\, m} $$ ``` Uflux = zeroflux_U * 10.**(-0.4*Umag) Bflux = zeroflux_B * 10.**(-0.4*Bmag) Vflux = zeroflux_V * 10.**(-0.4*Vmag) ``` Using astropy quantities allow us to take advantage of astropy's unit support in plotting. [Calling `astropy.visualization.quantity_support` explicitly turns the feature on.](http://docs.astropy.org/en/stable/units/quantity.html#plotting-quantities) Then, when quantity objects are passed to matplotlib plotting functions, the axis labels are automatically labeled with the unit of the quantity. In addition, quantities are converted automatically into the same units when combining multiple plots on the same axes. ``` astropy.visualization.quantity_support() plt.plot(wav_UV,UVflux,'m',label='UV') plt.plot(wav_V,Vflux,'ko',label='U, B, V') plt.plot(wav_B,Bflux,'ko') plt.plot(wav_U,Uflux,'ko') plt.legend(loc='best') plt.ylim(0,3E-10) plt.title('rho Oph') plt.show() ``` Finally, we initialize the extinction model, choosing values $R_V = 5$ and $E_{B-V} = 0.5$. This star is famous in the ISM community for having large-$R_V$ dust in the line of sight. ``` Rv = 5.0 # Usually around 3, but about 5 for this star. Ebv = 0.5 ext = F99(Rv=Rv) ``` To extinguish (redden) a spectrum, multiply by the `ext.extinguish` function. To unextinguish (deredden), divide by the same `ext.extinguish`, as we do here: ``` plt.semilogy(wav_UV,UVflux,'m',label='UV') plt.semilogy(wav_V,Vflux,'ko',label='U, B, V') plt.semilogy(wav_B,Bflux,'ko') plt.semilogy(wav_U,Uflux,'ko') plt.semilogy(wav_UV,UVflux/ext.extinguish(wav_UV,Ebv=Ebv),'b', label='dereddened: EBV=0.5, RV=5') plt.semilogy(wav_V,Vflux/ext.extinguish(wav_V,Ebv=Ebv),'ro', label='dereddened: EBV=0.5, RV=5') plt.semilogy(wav_B,Bflux/ext.extinguish(wav_B,Ebv=Ebv),'ro') plt.semilogy(wav_U,Uflux/ext.extinguish(wav_U,Ebv=Ebv),'ro') plt.legend(loc='best') plt.title('rho Oph') plt.show() ``` Notice that, by dereddening the spectrum, the absorption feature at 2175 Angstrom is removed. This feature can also be seen as the prominent bump in the extinction curves in Example 1. That we have smoothly removed the 2175 Angstrom feature suggests that the values we chose, $R_V = 5$ and $E_{B-V} = 0.5$, are a reasonable model for the foreground dust. Those experienced with dereddening should notice that that `dust_extinction` returns $A_\lambda/A_V$, while other routines like the IDL fm_unred procedure often return $A_\lambda/E_{B-V}$ by default and need to be divided by $R_V$ in order to compare directly with `dust_extinction`. 
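As a quick numerical illustration of that convention difference (a sketch using the same `F99` model; the wavelengths are arbitrary sample points): multiplying the `dust_extinction` output by $R_V$ gives the $A_\lambda/E_{B-V}$ normalization used by fm_unred-style routines, and dividing back by $R_V$ recovers the $A_\lambda/A_V$ form.

```
import numpy as np
import astropy.units as u
from dust_extinction.parameter_averages import F99

Rv = 3.1
ext_demo = F99(Rv=Rv)
wav_demo = np.array([0.36, 0.44, 0.55]) * u.micron   # roughly U, B, V

alam_over_av = ext_demo(wav_demo)       # dust_extinction convention: A(lambda)/A(V)
alam_over_ebv = alam_over_av * Rv       # fm_unred-style convention: A(lambda)/E(B-V)
print(alam_over_ebv / Rv)               # identical to alam_over_av
```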
# Example 3: Calculate Color Excess with `synphot` Calculating broadband *photometric* extinction is harder than it might look at first. All we have to do is look up $A_\lambda$ for a particular passband, right? Under the right conditions, yes. In general, no. Remember that we have to integrate over a passband to get synthetic photometry, $$ A = -2.5\log\left(\frac{\int W_\lambda F_{\lambda,0} 10^{-0.4A_\lambda} d\lambda}{\int W_\lambda F_{\lambda,0} d\lambda} \right), $$ where $W_\lambda$ is the fraction of incident energy transmitted through a filter. See the detailed appendix in [Bessell & Murphy (2012)](https://ui.adsabs.harvard.edu/#abs/2012PASP..124..140B/abstract) for an excellent review of the issues and common misunderstandings in synthetic photometry. There is an important point to be made here. The expression above does not simplify any further. Strictly speaking, it is impossible to convert spectral extinction $A_\lambda$ into a magnitude system without knowing the wavelength dependence of the source's original flux across the filter in question. As a special case, if we assume that the source flux is constant in the band (i.e. $F_\lambda = F$), then we can cancel these factors out from the integrals, and extinction in magnitudes becomes the weighted average of the extinction factor across the filter in question. In that special case, $A_\lambda$ at $\lambda_{\rm eff}$ is a good approximation for magnitude extinction. In this example, we will demonstrate the more general calculation of photometric extinction. We use a blackbody curve for the flux before the dust, apply an extinction curve, and perform synthetic photometry to calculate extinction and reddening in a magnitude system. First, let's get the filter transmission curves: ``` # Optional, for when the STScI ftp server is not answering: config.conf.vega_file = 'http://ssb.stsci.edu/cdbs/calspec/alpha_lyr_stis_008.fits' config.conf.johnson_u_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_u_004_syn.fits' config.conf.johnson_b_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_b_004_syn.fits' config.conf.johnson_v_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_v_004_syn.fits' config.conf.johnson_r_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_r_003_syn.fits' config.conf.johnson_i_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_i_003_syn.fits' config.conf.bessel_j_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_j_003_syn.fits' config.conf.bessel_h_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_h_004_syn.fits' config.conf.bessel_k_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_k_003_syn.fits' u_band = SpectralElement.from_filter('johnson_u') b_band = SpectralElement.from_filter('johnson_b') v_band = SpectralElement.from_filter('johnson_v') r_band = SpectralElement.from_filter('johnson_r') i_band = SpectralElement.from_filter('johnson_i') j_band = SpectralElement.from_filter('bessel_j') h_band = SpectralElement.from_filter('bessel_h') k_band = SpectralElement.from_filter('bessel_k') ``` If you are running this with your own python, see the [synphot documentation](https://synphot.readthedocs.io/en/latest/#installation-and-setup) on how to install your own copy of the necessary files. Next, let's make a background flux to which we will apply extinction. Here we make a 10,000 K blackbody using the model mechanism from within `synphot` and normalize it to $V$ = 10 in the Vega-based magnitude system. ``` # First, create a blackbody at some temperature. 
sp = SourceSpectrum(BlackBodyNorm1D, temperature=10000) # sp.plot(left=1, right=15000, flux_unit='flam', title='Blackbody') # Get the Vega spectrum as the zero point flux. vega = SourceSpectrum.from_vega() # vega.plot(left=1, right=15000) # Normalize the blackbody to some chosen magnitude, say V = 10. vmag = 10. v_band = SpectralElement.from_filter('johnson_v') sp_norm = sp.normalize(vmag * units.VEGAMAG, v_band, vegaspec=vega) sp_norm.plot(left=1, right=15000, flux_unit='flam', title='Normed Blackbody') ``` Now we initialize the extinction model and choose an extinction of $A_V$ = 2. To get the `dust_extinction` model working with `synphot`, we create a wavelength array and make a spectral element with the extinction model as a lookup table. ``` # Initialize the extinction model and choose the extinction, here Av = 2. ext = CCM89(Rv=3.1) Av = 2. # Create a wavelength array. wav = np.arange(0.1, 3, 0.001)*u.micron # Make the extinction model in synphot using a lookup table. ex = ExtinctionCurve(ExtinctionModel1D, points=wav, lookup_table=ext.extinguish(wav, Av=Av)) sp_ext = sp_norm*ex sp_ext.plot(left=1, right=15000, flux_unit='flam', title='Normed Blackbody with Extinction') ``` Synthetic photometry refers to modeling an observation of a star by multiplying the theoretical model for the astronomical flux through a certain filter response function, then integrating. ``` # "Observe" the star through the filter and integrate to get photometric mag. sp_obs = Observation(sp_ext, v_band) sp_obs_before = Observation(sp_norm, v_band) # sp_obs.plot(left=1, right=15000, flux_unit='flam', # title='Normed Blackbody with Extinction through V Filter') ``` Next, `synphot` performs the integration and computes magnitudes in the Vega system. ``` sp_stim_before = sp_obs_before.effstim(flux_unit='vegamag', vegaspec=vega) sp_stim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega) print('before dust, V =', np.round(sp_stim_before,1)) print('after dust, V =', np.round(sp_stim,1)) # Calculate extinction and compare to our chosen value. Av_calc = sp_stim - sp_stim_before print('$A_V$ = ', np.round(Av_calc,1)) ``` This is a good check for us to do. We normalized our spectrum to $V$ = 10 mag and added 2 mag of visual extinction, so the synthetic photometry procedure should reproduce these chosen values, and it does. Now we are ready to find the extinction in other passbands. We calculate the new photometry for the rest of the Johnson optical and the Bessell infrared filters. We calculate extinction $A = \Delta m$ and plot color excess, $E(\lambda - V) = A_\lambda - A_V$. Notice that `synphot` calculates the effective wavelength of the observations for us, which is very useful for plotting the results. We show reddening with the model extinction curve for comparison in the plot. 
``` bands = [u_band,b_band,v_band,r_band,i_band,j_band,h_band,k_band] for band in bands: # Calculate photometry with dust: sp_obs = Observation(sp_ext, band, force='extrap') obs_effstim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega) # Calculate photometry without dust: sp_obs_i = Observation(sp_norm, band, force='extrap') obs_i_effstim = sp_obs_i.effstim(flux_unit='vegamag', vegaspec=vega) # Extinction = mag with dust - mag without dust # Color excess = extinction at lambda - extinction at V color_excess = obs_effstim - obs_i_effstim - Av_calc plt.plot(sp_obs_i.effective_wavelength(), color_excess,'or') print(np.round(sp_obs_i.effective_wavelength(),1), ',', np.round(color_excess,2)) # Plot the model extinction curve for comparison plt.plot(wav,Av*ext(wav)-Av,'--k') plt.ylim([-2,2]) plt.xlabel('$\lambda$ (Angstrom)') plt.ylabel('E($\lambda$-V)') plt.title('Reddening of T=10,000K Background Source with Av=2') plt.show() ``` ## Exercise Try changing the blackbody temperature to something very hot or very cool. Are the color excess values the same? Have the effective wavelengths changed? Note that the photometric extinction changes because the filter transmission is not uniform. The observed throughput of the filter depends on the shape of the background source flux.
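One possible starting point for the exercise, reusing the objects defined above (the 30,000 K temperature is an arbitrary choice, and the V-band extinction is recomputed for the new source):

```
# Repeat the color-excess calculation for a hotter background source.
sp_hot = SourceSpectrum(BlackBodyNorm1D, temperature=30000)
sp_hot_norm = sp_hot.normalize(vmag * units.VEGAMAG, v_band, vegaspec=vega)
sp_hot_ext = sp_hot_norm * ex   # same Av = 2 extinction curve as before

av_hot = (Observation(sp_hot_ext, v_band).effstim(flux_unit='vegamag', vegaspec=vega)
          - Observation(sp_hot_norm, v_band).effstim(flux_unit='vegamag', vegaspec=vega))

for band in bands:
    obs = Observation(sp_hot_ext, band, force='extrap')
    obs_i = Observation(sp_hot_norm, band, force='extrap')
    color_excess = (obs.effstim(flux_unit='vegamag', vegaspec=vega)
                    - obs_i.effstim(flux_unit='vegamag', vegaspec=vega) - av_hot)
    print(np.round(obs_i.effective_wavelength(), 1), ',', np.round(color_excess, 2))
```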
``` # Checkout www.pygimli.org for more examples %matplotlib inline ``` # 2D ERT modeling and inversion ``` import matplotlib.pyplot as plt import numpy as np import pygimli as pg import pygimli.meshtools as mt from pygimli.physics import ert ``` Create geometry definition for the modelling domain. worldMarker=True indicates the default boundary conditions for the ERT ``` world = mt.createWorld(start=[-50, 0], end=[50, -50], layers=[-1, -8], worldMarker=True) ``` Create some heterogeneous circular anomaly ``` block = mt.createCircle(pos=[-4.0, -5.0], radius=[1, 1.8], marker=4, boundaryMarker=10, area=0.01) circle = mt.createCircle(pos=[4.0, -5.0], radius=[1, 1.8], marker=5, boundaryMarker=10, area=0.01) poly = mt.createPolygon([(1,-4), (2,-1.5), (4,-2), (5,-2), (8,-3), (5,-3.5), (3,-4.5)], isClosed=True, addNodes=3, interpolate='spline', marker=5) ``` Merge geometry definition into a Piecewise Linear Complex (PLC) ``` geom = world + block + circle # + poly ``` Optional: show the geometry ``` pg.show(geom) ``` Create a Dipole Dipole ('dd') measuring scheme with 21 electrodes. ``` scheme = ert.createData(elecs=np.linspace(start=-20, stop=20, num=42), schemeName='dd') ``` Put all electrode (aka sensors) positions into the PLC to enforce mesh refinement. Due to experience, its convenient to add further refinement nodes in a distance of 10% of electrode spacing to achieve sufficient numerical accuracy. ``` for p in scheme.sensors(): geom.createNode(p) geom.createNode(p - [0, 0.01]) # Create a mesh for the finite element modelling with appropriate mesh quality. mesh = mt.createMesh(geom, quality=34) # Create a map to set resistivity values in the appropriate regions # [[regionNumber, resistivity], [regionNumber, resistivity], [...] rhomap = [[1, 50.], [2, 50.], [3, 50.], [4, 150.], [5, 15]] # Take a look at the mesh and the resistivity distribution pg.show(mesh, data=rhomap, label=pg.unit('res'), showMesh=True) ``` Perform the modeling with the mesh and the measuring scheme itself and return a data container with apparent resistivity values, geometric factors and estimated data errors specified by the noise setting. The noise is also added to the data. Here 1% plus 1µV. Note, we force a specific noise seed as we want reproducable results for testing purposes. ``` data = ert.simulate(mesh, scheme=scheme, res=rhomap, noiseLevel=1, noiseAbs=1e-6, seed=1337, verbose=False) pg.info(np.linalg.norm(data['err']), np.linalg.norm(data['rhoa'])) pg.info('Simulated data', data) pg.info('The data contains:', data.dataMap().keys()) pg.info('Simulated rhoa (min/max)', min(data['rhoa']), max(data['rhoa'])) pg.info('Selected data noise %(min/max)', min(data['err'])*100, max(data['err'])*100) # data['k'] ``` Optional: you can filter all values and tokens in the data container. Its possible that there are some negative data values due to noise and huge geometric factors. So we need to remove them. ``` data.remove(data['rhoa'] < 0) # data.remove(data['k'] < -20000.0) pg.info('Filtered rhoa (min/max)', min(data['rhoa']), max(data['rhoa'])) # You can save the data for further use data.save('simple.dat') # You can take a look at the data ert.show(data, cMap="RdBu_r") ``` Initialize the ERTManager, e.g. with a data container or a filename. ``` mgr = ert.ERTManager('simple.dat') ``` Run the inversion with the preset data. The Inversion mesh will be created with default settings. 
``` inv = mgr.invert(lam=10, verbose=False) #np.testing.assert_approx_equal(mgr.inv.chi2(), 0.7, significant=1) ``` Let the ERTManger show you the model of the last successful run and how it fits the data. Shows data, model response, and model. ``` mgr.showResultAndFit(cMap="RdBu_r") meshPD = pg.Mesh(mgr.paraDomain) # Save copy of para mesh for plotting later ``` You can also provide your own mesh (e.g., a structured grid if you like them) Note, that x and y coordinates needs to be in ascending order to ensure that all the cells in the grid have the correct orientation, i.e., all cells need to be numbered counter-clockwise and the boundary normal directions need to point outside. ``` inversionDomain = pg.createGrid(x=np.linspace(start=-21, stop=21, num=43), y=-pg.cat([0], pg.utils.grange(0.5, 8, n=8))[::-1], marker=2) ``` The inversion domain for ERT problems needs a boundary that represents the far regions in the subsurface of the halfspace. Give a cell marker lower than the marker for the inversion region, the lowest cell marker in the mesh will be the inversion boundary region by default. ``` grid = pg.meshtools.appendTriangleBoundary(inversionDomain, marker=1, xbound=50, ybound=50) pg.show(grid, markers=True) #pg.show(grid, markers=True) ``` The Inversion can be called with data and mesh as argument as well ``` model = mgr.invert(data, mesh=grid, lam=10, verbose=False) # np.testing.assert_approx_equal(mgr.inv.chi2(), 0.951027, significant=3) ``` You can of course get access to mesh and model and plot them for your own. Note that the cells of the parametric domain of your mesh might be in a different order than the values in the model array if regions are used. The manager can help to permutate them into the right order. ``` # np.testing.assert_approx_equal(mgr.inv.chi2(), 1.4, significant=2) maxC = 150 modelPD = mgr.paraModel(model) # do the mapping pg.show(mgr.paraDomain, modelPD, label='Model', cMap='RdBu_r', logScale=True, cMin=15, cMax=maxC) pg.info('Inversion stopped with chi² = {0:.3}'.format(mgr.fw.chi2())) fig, (ax1, ax2, ax3) = plt.subplots(3,1, sharex=True, sharey=True, figsize=(8,7)) pg.show(mesh, rhomap, ax=ax1, hold=True, cMap="RdBu_r", logScale=True, orientation="vertical", cMin=15, cMax=maxC) pg.show(meshPD, inv, ax=ax2, hold=True, cMap="RdBu_r", logScale=True, orientation="vertical", cMin=15, cMax=maxC) mgr.showResult(ax=ax3, cMin=15, cMax=maxC, cMap="RdBu_r", orientation="vertical") labels = ["True model", "Inversion unstructured mesh", "Inversion regular grid"] for ax, label in zip([ax1, ax2, ax3], labels): ax.set_xlim(mgr.paraDomain.xmin(), mgr.paraDomain.xmax()) ax.set_ylim(mgr.paraDomain.ymin(), mgr.paraDomain.ymax()) ax.set_title(label) ```
### N-gram language models or how to write scientific papers (4 pts) We shall train our language model on a corpora of [ArXiv](http://arxiv.org/) articles and see if we can generate a new one! ![img](https://media.npr.org/assets/img/2013/12/10/istock-18586699-monkey-computer_brick-16e5064d3378a14e0e4c2da08857efe03c04695e-s800-c85.jpg) _data by neelshah18 from [here](https://www.kaggle.com/neelshah18/arxivdataset/)_ _Disclaimer: this has nothing to do with actual science. But it's fun, so who cares?!_ ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline # Alternative manual download link: https://yadi.sk/d/_nGyU2IajjR9-w # !wget "https://www.dropbox.com/s/99az9n1b57qkd9j/arxivData.json.tar.gz?dl=1" -O arxivData.json.tar.gz # !tar -xvzf arxivData.json.tar.gz data = pd.read_json("./arxivData.json") data.sample(n=5) # assemble lines: concatenate title and description lines = data.apply(lambda row: row['title'] + ' ; ' + row['summary'], axis=1).tolist() sorted(lines, key=len)[:3] ``` ### Tokenization You know the dril. The data is messy. Go clean the data. Use WordPunctTokenizer or something. ``` from nltk.tokenize import WordPunctTokenizer # Task: convert lines (in-place) into strings of space-separated tokens. import & use WordPunctTokenizer tokenizer = WordPunctTokenizer() lines = [' '.join(tokenizer.tokenize(line.lower())) for line in lines] assert sorted(lines, key=len)[0] == \ 'differential contrastive divergence ; this paper has been retracted .' assert sorted(lines, key=len)[2] == \ 'p = np ; we claim to resolve the p =? np problem via a formal argument for p = np .' ``` ### N-Gram Language Model (1point) A language model is a probabilistic model that estimates text probability: the joint probability of all tokens $w_t$ in text $X$: $P(X) = P(w_1, \dots, w_T)$. It can do so by following the chain rule: $$P(w_1, \dots, w_T) = P(w_1)P(w_2 \mid w_1)\dots P(w_T \mid w_1, \dots, w_{T-1}).$$ The problem with such approach is that the final term $P(w_T \mid w_1, \dots, w_{T-1})$ depends on $n-1$ previous words. This probability is impractical to estimate for long texts, e.g. $T = 1000$. One popular approximation is to assume that next word only depends on a finite amount of previous words: $$P(w_t \mid w_1, \dots, w_{t - 1}) = P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1})$$ Such model is called __n-gram language model__ where n is a parameter. For example, in 3-gram language model, each word only depends on 2 previous words. $$ P(w_1, \dots, w_n) = \prod_t P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1}). $$ You can also sometimes see such approximation under the name of _n-th order markov assumption_. The first stage to building such a model is counting all word occurences given N-1 previous words ``` from tqdm import tqdm from collections import defaultdict, Counter # special tokens: # - unk represents absent tokens, # - eos is a special token after the end of sequence UNK, EOS = "_UNK_", "_EOS_" def count_ngrams(lines, n): """ Count how many times each word occured after (n - 1) previous words :param lines: an iterable of strings with space-separated tokens :returns: a dictionary { tuple(prefix_tokens): {next_token_1: count_1, next_token_2: count_2}} When building counts, please consider the following two edge cases - if prefix is shorter than (n - 1) tokens, it should be padded with UNK. 
For n=3,
          empty prefix: "" -> (UNK, UNK)
          short prefix: "the" -> (UNK, the)
          long prefix: "the new approach" -> (new, approach)
        - you should add a special token, EOS, at the end of each sequence
          "... with deep neural networks ." -> (..., with, deep, neural, networks, ., EOS)
          count the probability of this token just like all others.
    """
    counts = defaultdict(Counter)
    # counts[(word1, word2)][word3] = how many times word3 occurred after (word1, word2)
    for line in lines:
        # pad short prefixes with UNK and close every sequence with EOS
        tokens = [UNK] * (n - 1) + line.split() + [EOS]
        for i in range(n - 1, len(tokens)):
            prefix = tuple(tokens[i - n + 1: i])
            counts[prefix][tokens[i]] += 1
    return counts


# let's test it
dummy_lines = sorted(lines, key=len)[:100]
dummy_counts = count_ngrams(dummy_lines, n=3)

assert set(map(len, dummy_counts.keys())) == {2}, "please only count {n-1}-grams"
assert len(dummy_counts[('_UNK_', '_UNK_')]) == 78
assert dummy_counts['_UNK_', 'a']['note'] == 3
assert dummy_counts['p', '=']['np'] == 2
assert dummy_counts['author', '.']['_EOS_'] == 1
```

Once we can count N-grams, we can build a probabilistic language model. The simplest way to compute probabilities is in proportion to counts:

$$ P(w_t | prefix) = { Count(prefix, w_t) \over \sum_{\hat w} Count(prefix, \hat w) } $$

```
class NGramLanguageModel:
    def __init__(self, lines, n):
        """
        Train a simple count-based language model:
        compute probabilities P(w_t | prefix) given ngram counts

        :param n: computes probability of next token given (n - 1) previous words
        :param lines: an iterable of strings with space-separated tokens
        """
        assert n >= 1
        self.n = n

        counts = count_ngrams(lines, self.n)

        # compute token probabilities given counts
        self.probs = defaultdict(Counter)
        # probs[(word1, word2)][word3] = P(word3 | word1, word2)

        # populate self.probs with actual probabilities
        <YOUR CODE>

    def get_possible_next_tokens(self, prefix):
        """
        :param prefix: string with space-separated prefix tokens
        :returns: a dictionary {token : its probability} for all tokens with positive probabilities
        """
        prefix = prefix.split()
        prefix = prefix[max(0, len(prefix) - self.n + 1):]
        prefix = [ UNK ] * (self.n - 1 - len(prefix)) + prefix
        return self.probs[tuple(prefix)]

    def get_next_token_prob(self, prefix, next_token):
        """
        :param prefix: string with space-separated prefix tokens
        :param next_token: the next token to predict probability for
        :returns: P(next_token|prefix) a single number, 0 <= P <= 1
        """
        return self.get_possible_next_tokens(prefix).get(next_token, 0)
```

Let's test it!
``` dummy_lm = NGramLanguageModel(dummy_lines, n=3) p_initial = dummy_lm.get_possible_next_tokens('') # '' -> ['_UNK_', '_UNK_'] assert np.allclose(p_initial['learning'], 0.02) assert np.allclose(p_initial['a'], 0.13) assert np.allclose(p_initial.get('meow', 0), 0) assert np.allclose(sum(p_initial.values()), 1) p_a = dummy_lm.get_possible_next_tokens('a') # '' -> ['_UNK_', 'a'] assert np.allclose(p_a['machine'], 0.15384615) assert np.allclose(p_a['note'], 0.23076923) assert np.allclose(p_a.get('the', 0), 0) assert np.allclose(sum(p_a.values()), 1) assert np.allclose(dummy_lm.get_possible_next_tokens('a note')['on'], 1) assert dummy_lm.get_possible_next_tokens('a machine') == \ dummy_lm.get_possible_next_tokens("there have always been ghosts in a machine"), \ "your 3-gram model should only depend on 2 previous words" ``` Now that you've got a working n-gram language model, let's see what sequences it can generate. But first, let's train it on the whole dataset. ``` lm = NGramLanguageModel(lines, n=3) ``` The process of generating sequences is... well, it's sequential. You maintain a list of tokens and iteratively add next token by sampling with probabilities. $ X = [] $ __forever:__ * $w_{next} \sim P(w_{next} | X)$ * $X = concat(X, w_{next})$ Instead of sampling with probabilities, one can also try always taking most likely token, sampling among top-K most likely tokens or sampling with temperature. In the latter case (temperature), one samples from $$w_{next} \sim {P(w_{next} | X) ^ {1 / \tau} \over \sum_{\hat w} P(\hat w | X) ^ {1 / \tau}}$$ Where $\tau > 0$ is model temperature. If $\tau << 1$, more likely tokens will be sampled with even higher probability while less likely tokens will vanish. ``` def get_next_token(lm, prefix, temperature=1.0): """ return next token after prefix; :param temperature: samples proportionally to lm probabilities ^ (1 / temperature) if temperature == 0, always takes most likely token. Break ties arbitrarily. """ <YOUR CODE> from collections import Counter test_freqs = Counter([get_next_token(lm, 'there have') for _ in range(10000)]) assert 250 < test_freqs['not'] < 450 assert 8500 < test_freqs['been'] < 9500 assert 1 < test_freqs['lately'] < 200 test_freqs = Counter([get_next_token(lm, 'deep', temperature=1.0) for _ in range(10000)]) assert 1500 < test_freqs['learning'] < 3000 test_freqs = Counter([get_next_token(lm, 'deep', temperature=0.5) for _ in range(10000)]) assert 8000 < test_freqs['learning'] < 9000 test_freqs = Counter([get_next_token(lm, 'deep', temperature=0.0) for _ in range(10000)]) assert test_freqs['learning'] == 10000 print("Looks nice!") ``` Let's have fun with this model ``` prefix = 'artificial' # <- your ideas :) for i in range(100): prefix += ' ' + get_next_token(lm, prefix) if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0: break print(prefix) prefix = 'bridging the' # <- more of your ideas for i in range(100): prefix += ' ' + get_next_token(lm, prefix, temperature=0.5) if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0: break print(prefix) ``` __More in the homework:__ nucleous sampling, top-k sampling, beam search(not for the faint of heart). ### Evaluating language models: perplexity (1point) Perplexity is a measure of how well does your model approximate true probability distribution behind data. __Smaller perplexity = better model__. 
To compute perplexity on one sentence, use: $$ {\mathbb{P}}(w_1 \dots w_N) = P(w_1, \dots, w_N)^{-\frac1N} = \left( \prod_t P(w_t \mid w_{t - n}, \dots, w_{t - 1})\right)^{-\frac1N}, $$ On the corpora level, perplexity is a product of probabilities of all tokens in all sentences to the power of 1, divided by __total length of all sentences__ in corpora. This number can quickly get too small for float32/float64 precision, so we recommend you to first compute log-perplexity (from log-probabilities) and then take the exponent. ``` def perplexity(lm, lines, min_logprob=np.log(10 ** -50.)): """ :param lines: a list of strings with space-separated tokens :param min_logprob: if log(P(w | ...)) is smaller than min_logprop, set it equal to min_logrob :returns: corpora-level perplexity - a single scalar number from the formula above Note: do not forget to compute P(w_first | empty) and P(eos | full_sequence) PLEASE USE lm.get_next_token_prob and NOT lm.get_possible_next_tokens """ <YOUR CODE> return <...> lm1 = NGramLanguageModel(dummy_lines, n=1) lm3 = NGramLanguageModel(dummy_lines, n=3) lm10 = NGramLanguageModel(dummy_lines, n=10) ppx1 = perplexity(lm1, dummy_lines) ppx3 = perplexity(lm3, dummy_lines) ppx10 = perplexity(lm10, dummy_lines) ppx_missing = perplexity(lm3, ['the jabberwock , with eyes of flame , ']) # thanks, L. Carrol print("Perplexities: ppx1=%.3f ppx3=%.3f ppx10=%.3f" % (ppx1, ppx3, ppx10)) assert all(0 < ppx < 500 for ppx in (ppx1, ppx3, ppx10)), "perplexity should be nonnegative and reasonably small" assert ppx1 > ppx3 > ppx10, "higher N models should overfit and " assert np.isfinite(ppx_missing) and ppx_missing > 10 ** 6, "missing words should have large but finite perplexity. " \ " Make sure you use min_logprob right" assert np.allclose([ppx1, ppx3, ppx10], (318.2132342216302, 1.5199996213739575, 1.1838145037901249)) ``` Now let's measure the actual perplexity: we'll split the data into train and test and score model on test data only. ``` from sklearn.model_selection import train_test_split train_lines, test_lines = train_test_split(lines, test_size=0.25, random_state=42) for n in (1, 2, 3): lm = NGramLanguageModel(n=n, lines=train_lines) ppx = perplexity(lm, test_lines) print("N = %i, Perplexity = %.5f" % (n, ppx)) # whoops, it just blew up :) ``` ### LM Smoothing The problem with our simple language model is that whenever it encounters an n-gram it has never seen before, it assigns it with the probabilitiy of 0. Every time this happens, perplexity explodes. To battle this issue, there's a technique called __smoothing__. The core idea is to modify counts in a way that prevents probabilities from getting too low. The simplest algorithm here is Additive smoothing (aka [Lapace smoothing](https://en.wikipedia.org/wiki/Additive_smoothing)): $$ P(w_t | prefix) = { Count(prefix, w_t) + \delta \over \sum_{\hat w} (Count(prefix, \hat w) + \delta) } $$ If counts for a given prefix are low, additive smoothing will adjust probabilities to a more uniform distribution. Not that the summation in the denominator goes over _all words in the vocabulary_. 
Here's an example code we've implemented for you: ``` class LaplaceLanguageModel(NGramLanguageModel): """ this code is an example, no need to change anything """ def __init__(self, lines, n, delta=1.0): self.n = n counts = count_ngrams(lines, self.n) self.vocab = set(token for token_counts in counts.values() for token in token_counts) self.probs = defaultdict(Counter) for prefix in counts: token_counts = counts[prefix] total_count = sum(token_counts.values()) + delta * len(self.vocab) self.probs[prefix] = {token: (token_counts[token] + delta) / total_count for token in token_counts} def get_possible_next_tokens(self, prefix): token_probs = super().get_possible_next_tokens(prefix) missing_prob_total = 1.0 - sum(token_probs.values()) missing_prob = missing_prob_total / max(1, len(self.vocab) - len(token_probs)) return {token: token_probs.get(token, missing_prob) for token in self.vocab} def get_next_token_prob(self, prefix, next_token): token_probs = super().get_possible_next_tokens(prefix) if next_token in token_probs: return token_probs[next_token] else: missing_prob_total = 1.0 - sum(token_probs.values()) missing_prob_total = max(0, missing_prob_total) # prevent rounding errors return missing_prob_total / max(1, len(self.vocab) - len(token_probs)) #test that it's a valid probability model for n in (1, 2, 3): dummy_lm = LaplaceLanguageModel(dummy_lines, n=n) assert np.allclose(sum([dummy_lm.get_next_token_prob('a', w_i) for w_i in dummy_lm.vocab]), 1), "I told you not to break anything! :)" for n in (1, 2, 3): lm = LaplaceLanguageModel(train_lines, n=n, delta=0.1) ppx = perplexity(lm, test_lines) print("N = %i, Perplexity = %.5f" % (n, ppx)) # optional: try to sample tokens from such a model ``` ### Kneser-Ney smoothing (2 points) Additive smoothing is simple, reasonably good but definitely not a State of The Art algorithm. Your final task in this notebook is to implement [Kneser-Ney](https://en.wikipedia.org/wiki/Kneser%E2%80%93Ney_smoothing) smoothing. It can be computed recurrently, for n>1: $$P_{kn}(w_t | prefix_{n-1}) = { \max(0, Count(prefix_{n-1}, w_t) - \delta) \over \sum_{\hat w} Count(prefix_{n-1}, \hat w)} + \lambda_{prefix_{n-1}} \cdot P_{kn}(w_t | prefix_{n-2})$$ where - $prefix_{n-1}$ is a tuple of {n-1} previous tokens - $lambda_{prefix_{n-1}}$ is a normalization constant chosen so that probabilities add up to 1 - Unigram $P_{kn}(w_t | prefix_{n-2})$ corresponds to Kneser Ney smoothing for {N-1}-gram language model. - Unigram $P_{kn}(w_t)$ is a special case: how likely it is to see x_t in an unfamiliar context See lecture slides or wiki for more detailed formulae. __Your task__ is to - implement KneserNeyLanguageModel - test it on 1-3 gram language models - find optimal (within reason) smoothing delta for 3-gram language model with Kneser-Ney smoothing ``` class KneserNeyLanguageModel(NGramLanguageModel): """ A template for Kneser-Ney language model. Default delta may be suboptimal. """ def __init__(self, lines, n, delta=1.0): self.n = n <YOUR CODE> def get_possible_next_tokens(self, prefix): < YOUR CODE > def get_next_token_prob(self, prefix, next_token): <YOUR CODE> #test that it's a valid probability model for n in (1, 2, 3): dummy_lm = KneserNeyLanguageModel(dummy_lines, n=n) assert np.allclose(sum([dummy_lm.get_next_token_prob('a', w_i) for w_i in dummy_lm.vocab]), 1), "I told you not to break anything! :)" for n in (1, 2, 3): lm = KneserNeyLanguageModel(train_lines, n=n, smoothing=<...>) ppx = perplexity(lm, test_lines) print("N = %i, Perplexity = %.5f" % (n, ppx)) ```
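The last sub-task (finding a good smoothing delta) does not depend on which smoothing class you end up with. A minimal sketch of the sweep, shown here with the already-implemented `LaplaceLanguageModel` as a stand-in until your `KneserNeyLanguageModel` passes its tests; the candidate delta values are arbitrary:

```
best_delta, best_ppx = None, float('inf')
for delta in (0.01, 0.03, 0.1, 0.3, 1.0):
    lm = LaplaceLanguageModel(train_lines, n=3, delta=delta)
    ppx = perplexity(lm, test_lines)
    print("delta = %.2f, Perplexity = %.5f" % (delta, ppx))
    if ppx < best_ppx:
        best_delta, best_ppx = delta, ppx

print("Best delta:", best_delta)
```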
``` # load data and write out sentence and target import pandas as pd loaded_set = pd.read_excel("Dataset/"+"training.xlsx") loaded_set['Sentence'] from transformers import AutoModel, AutoTokenizer # german tokens for bert tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased") #model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased") tokens_num=[] for sen in loaded_set['Sentence']: tokenized = (tokenizer.tokenize(sen)) tokens_num.append( ['[CLS]'] + tokenized + ['[SEP]']) # get max_seq length lens = [len(i) for i in tokens_num] max_seq_length = max(lens) max_seq_length = int(1.5*max_seq_length) #max_seq_length = 256 tokens_num[0] tokenizer.convert_tokens_to_ids(tokens_num[0]) def manual_features(x): letter_count = [] avarange_letter_per_word = [] num_words = [] num_letters_array = [] longest_word_length = [] shortest_word_length = [] genitiv = [] akkusativ = [] dativ = [] dass = [] for sen in x: current_sen_split = sen.split() num_words.append(len(current_sen_split)) num_letters = [] if "des" in sen: genitiv.append(1) else: genitiv.append(0) if "dem" in sen: akkusativ.append(1) else: akkusativ.append(0) if "den" in sen: dativ.append(1) else: dativ.append(0) if "dass" in sen: dass.append(1) else: dass.append(0) for y in range(len(current_sen_split)): current_word = current_sen_split[y] num_letters.append(len(current_word)) current_lettercount = sum(num_letters) letter_count.append(current_lettercount) avarange_letter_per_word.append(current_lettercount/len(current_sen_split)) longest_word_length.append(max(num_letters)) shortest_word_length.append(min(num_letters)) feature_dict = { 'dativ':dativ, 'akkusativ': akkusativ, 'genitiv': genitiv, 'dass': dass, 'num_words':num_words, 'letter_count':letter_count, 'avarange_letter_per_word':avarange_letter_per_word, 'longest_word_length':longest_word_length, 'shortest_word_length':shortest_word_length, } feature_dataframe = pd.DataFrame(data=feature_dict) scaler = StandardScaler() feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']] = scaler.fit_transform(feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']]) feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']] = scaler.transform(feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']]) tensorX = tf.constant(feature_dataframe.values) return tensorX import numpy as np from sklearn.preprocessing import StandardScaler scaler = StandardScaler() def encode_names(n, tokenizer): tokens = list(tokenizer.tokenize(n)) tokens.append('[SEP]') return tokenizer.convert_tokens_to_ids(tokens) def bert_encode(string_list, tokenizer, max_seq_length): num_examples = len(string_list) letter_count = [] avarange_letter_per_word = [] num_words = [] num_letters_array = [] longest_word_length = [] shortest_word_length = [] genitiv = [] akkusativ = [] dativ = [] dass = [] for sen in string_list: current_sen_split = sen.split() num_words.append(len(current_sen_split)) num_letters = [] if "des" in sen: genitiv.append(1) else: genitiv.append(0) if "dem" in sen: akkusativ.append(1) else: akkusativ.append(0) if "den" in sen: dativ.append(1) else: dativ.append(0) if "dass" in sen: dass.append(1) else: dass.append(0) for y in range(len(current_sen_split)): current_word = current_sen_split[y] num_letters.append(len(current_word)) 
current_lettercount = sum(num_letters) letter_count.append(current_lettercount) avarange_letter_per_word.append(current_lettercount/len(current_sen_split)) longest_word_length.append(max(num_letters)) shortest_word_length.append(min(num_letters)) feature_dict = { 'num_words':num_words, 'avarange_letter_per_word':avarange_letter_per_word, 'longest_word_length':longest_word_length, } feature_dataframe = pd.DataFrame(data=feature_dict) scaler = StandardScaler() feature_dataframe[['num_words', 'longest_word_length', 'avarange_letter_per_word']] = scaler.fit_transform(feature_dataframe[['num_words', 'longest_word_length', 'avarange_letter_per_word']]) feature_dataframe[['num_words', 'longest_word_length', 'avarange_letter_per_word']] = scaler.transform(feature_dataframe[['num_words', 'longest_word_length', 'avarange_letter_per_word']]) X_train_mF = tf.constant(feature_dataframe.values) string_tokens = tf.ragged.constant([ encode_names(n, tokenizer) for n in np.array(string_list)]) cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*string_tokens.shape[0] input_word_ids = tf.concat([cls, string_tokens], axis=-1) input_mask = tf.ones_like(input_word_ids).to_tensor(shape=(None, max_seq_length)) type_cls = tf.zeros_like(cls) type_tokens = tf.ones_like(string_tokens) input_type_ids = tf.concat( [type_cls, type_tokens], axis=-1).to_tensor(shape=(None, max_seq_length)) scaler_input_word_ids = scaler.fit_transform(input_type_ids) inputs = { #'sc': scaler_input_word_ids, #'input_word_ids': input_word_ids, 'input_word_ids': input_word_ids.to_tensor(shape=(None, max_seq_length)), 'input_mask': input_mask, 'input_type_ids': input_type_ids, 'X_train_mF': X_train_mF } return inputs from sklearn.model_selection import train_test_split x = loaded_set['Sentence'] y = loaded_set['MOS'] x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=32) y_train = round(y_train, 2) y_test = round(y_test, 2) import tensorflow as tf X_train = bert_encode(x_train, tokenizer, max_seq_length) X_test = bert_encode(x_test, tokenizer, max_seq_length) import tensorflow_hub as hub bert_layer = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2", trainable=False) embedding_size = 768 max_seq_length = max_seq_length #length of the tokenised tensor input_word_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name="input_word_ids") input_mask = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name="input_mask") segment_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name="segment_ids") X_train_mF = tf.keras.layers.Input(shape=(3,), dtype=tf.int32, name="X_train_mF") pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, segment_ids]) dropout = tf.keras.layers.Dropout(0.2)(pooled_output) reshaped_bert = tf.keras.layers.Reshape((6,128))(dropout) dense_mf_1 = tf.keras.layers.Dense(20)(X_train_mF) dense_mf_2 = tf.keras.layers.Dense(128)(dense_mf_1) #dense_mf_3 = tf.keras.layers.Dense(24)(dropout_mf) dropout_mf = tf.keras.layers.Dropout(0.3)(dense_mf_2) reshaped_mf = tf.keras.layers.Reshape((1,128))(dense_mf_2) #concatinated_3 = tf.concat([concatinated_1, concatinated_2 ], 1) #reshaped_mf = tf.keras.layers.Reshape((1,24))(dense_mf_3) concatinated = tf.concat([reshaped_bert, reshaped_mf], 1) gru_1_out = tf.keras.layers.GRU(200, return_sequences=True, activation='relu')(concatinated) gru_2_out = tf.keras.layers.GRU(100, return_sequences=True, activation='relu')(gru_1_out) flat = 
tf.keras.layers.Flatten()(gru_2_out)
dropout_2 = tf.keras.layers.Dropout(0.3)(flat)

# Regression head: three dense layers feeding a single linear output unit.
dense_2 = tf.keras.layers.Dense(300)(dropout_2)
dense_3 = tf.keras.layers.Dense(100)(dense_2)
dense_4 = tf.keras.layers.Dense(50)(dense_3)
# Predict from the last dense layer (the original code predicted from dense_2,
# which left dense_3 and dense_4 unused).
pred = tf.keras.layers.Dense(1)(dense_4)

model = tf.keras.Model(
    inputs={
        'input_word_ids': input_word_ids,
        'input_mask': input_mask,
        'input_type_ids': segment_ids,
        'X_train_mF': X_train_mF
    },
    outputs=pred)

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss="mean_absolute_error",
              metrics=["mean_squared_error"])

model.summary()

epochs = 50
batch_size = 15
model.fit(X_train, y_train.values, epochs=epochs, batch_size=batch_size)

import numpy as np

# Predict MOS scores for the held-out sentences and round to two decimals.
pred = model.predict(X_test)
rounded_pred = np.around(pred, decimals=2)
rounded_pred

def rmse(predictions, targets):
    """Root mean squared error between predictions and targets."""
    return np.sqrt(((predictions - targets) ** 2).mean())

# Flatten the (N, 1) prediction array so it aligns elementwise with y_test.
rmse(rounded_pred.flatten(), y_test.values)
```
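As a quick sanity check on the hand-rolled `rmse` above, one could compare it against scikit-learn's mean squared error (a small sketch, not part of the original notebook; it assumes `rounded_pred` and `y_test` from the cells above).

```
# Sketch: cross-check the custom rmse() against scikit-learn.
# Assumes rounded_pred and y_test from the cells above.
import numpy as np
from sklearn.metrics import mean_squared_error

sk_rmse = np.sqrt(mean_squared_error(y_test.values, rounded_pred.flatten()))
print("custom rmse :", rmse(rounded_pred.flatten(), y_test.values))
print("sklearn rmse:", sk_rmse)
```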
This page was created from a Jupyter notebook. The original notebook can be found [here](https://github.com/klane/databall/blob/master/notebooks/parameter-tuning.ipynb). It investigates tuning model parameters to achieve better performance. First we must import the necessary installed modules. ``` import itertools import numpy as np import matplotlib.pyplot as plt import seaborn as sns from functools import partial from sklearn.linear_model import LogisticRegression from sklearn.svm import LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neural_network import MLPClassifier from hyperopt import hp ``` Next we need to import a few local modules. ``` import os import sys import warnings warnings.filterwarnings('ignore') module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) from databall.database import Database from databall.plotting import format_538, plot_metrics, plot_matrix from databall.model_selection import calculate_metrics, optimize_params, train_test_split import databall.util as util ``` Apply the FiveThirtyEight plot style. ``` plt.style.use('fivethirtyeight') ``` # Data As before, we collect the stats and betting data from the database and create training and test sets where the 2016 season is reserved as the test set. ``` database = Database('../data/nba.db') games = database.betting_stats(window=10) x_train, y_train, x_test, y_test = train_test_split(games, 2006, 2016, xlabels=util.stat_names() + ['SEASON']) ``` The stats below are the box score stats used during [feature selection](feature-selection.md). I decided to further explore these because they are readily available from multiple sources and do not require any calculation of advanced stats by users. ``` stats = ['FGM', 'FGA', 'FG3M', 'FG3A', 'FTM', 'FTA', 'OREB', 'DREB', 'AST', 'TOV', 'STL', 'BLK'] stats = ['TEAM_' + s for s in stats] + ['POSSESSIONS'] stats += [s + '_AWAY' for s in stats] + ['HOME_SPREAD'] ``` # Logistic Regression The plots below show `LogisticRegression` model performance using different combinations of three parameters in a grid search: `penalty` (type of norm), `class_weight` (where "balanced" indicates weights are inversely proportional to class frequencies and the default is one), and `dual` (flag to use the dual formulation, which changes the equation being optimized). For each combination, models were trained with different `C` values, which controls the inverse of the regularization strength. All models have similar accuracy, ROC area, and precision/recall area for all `C` values tested. However, their individual precision and recall metrics change wildly with C. We are more interested in accuracy for this specific problem because accuracy directly controls profit. Using a grid search is not the most efficient parameter tuning method because grid searches do not use information from prior runs to aid future parameter choices. You are at the mercy of the selected grid points. 
``` # Create functions that return logistic regression models with different parameters models = [partial(LogisticRegression, penalty='l1'), partial(LogisticRegression, penalty='l1', class_weight='balanced'), partial(LogisticRegression), partial(LogisticRegression, class_weight='balanced'), partial(LogisticRegression, dual=True), partial(LogisticRegression, class_weight='balanced', dual=True)] start = -8 stop = -2 C_vec = np.logspace(start=start, stop=stop, num=20) results = calculate_metrics(models, x_train, y_train, stats, 'C', C_vec, k=6) legend = ['L1 Norm', 'L1 Norm, Balanced Class', 'L2 Norm (Default)', 'L2 Norm, Balanced Class', 'L2 Norm, Dual Form', 'L2 Norm, Balanced Class, Dual Form'] fig, ax = plot_metrics(C_vec, results, 'Regularization Parameter', log=True) ax[-1].legend(legend, fontsize=16, bbox_to_anchor=(1.05, 1), borderaxespad=0) [a.set_xlim(10**start, 10**stop) for a in ax] [a.set_ylim(-0.05, 1.05) for a in ax] title = 'Grid searches are not the most efficient' subtitle = 'Grid search of logistic regression hyperparameters' format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle, xoff=(-0.22, 3.45), yoff=(-1.54, -1.64), toff=(-.16, 1.25), soff=(-0.16, 1.12), n=100) plt.show() ``` An alternative solution is to use an optimization algorithm that minimizes a loss function to select the hyperparameters. I experimented with the hyperopt package for this, which accepts a parameter search space and loss function as its inputs. The search space consists of discrete choices and ranges on continuous variables. I swapped out the `class_weight` and `dual` variables in favor of `fit_intercept` and `intercept_scaling`, which controls whether to include an intercept in the `LogisticRegression` model and a scaling factor. The scaling factor can help reduce the effect of regularization on the intercept. I chose cross-validation accuracy as the loss function (actually 1-accuracy since the optimizer minimizes the loss function) since we are interested in increasing profits. The optimal hyperparameters are displayed below. ``` space_log = {} space_log['C'] = hp.loguniform('C', -8*np.log(10), -2*np.log(10)) space_log['intercept_scaling'] = hp.loguniform('intercept_scaling', -8*np.log(10), 8*np.log(10)) space_log['penalty'] = hp.choice('penalty', ['l1', 'l2']) space_log['fit_intercept'] = hp.choice('fit_intercept', [False, True]) model = LogisticRegression() best_log, param_log = optimize_params(model, x_train, y_train, stats, space_log, max_evals=1000) print(best_log) ``` The search history is displayed below. The intercept scale factor tended toward high values, even though the default value is 1.0. ``` labels = ['Regularization', 'Intercept Scale', 'Penalty', 'Intercept'] fig, ax = plot_matrix(param_log.index.values, param_log[[k for k in space_log.keys()]].values, 'Iteration', labels, 2, 2, logy=[True, True, False, False]) [a.set_yticks([0, 1]) for a in ax[2:]] ax[2].set_yticklabels(['L1', 'L2']) ax[3].set_yticklabels(['False', 'True']) title = 'Hyperopt is more flexible than a grid search' subtitle = 'Hyperopt search of logistic regression hyperparameters' format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle, xoff=(-0.18, 2.25), yoff=(-1.42, -1.52), toff=(-.16, 1.25), soff=(-0.16, 1.12), n=80, bottomtick=np.nan) plt.show() ``` The cross-validation accuracy history shows that many models performed about the same despite their parameter values given the band of points just below 51% accuracy. 
The optimizer was also unable to find a model that significantly improved accuracy. ``` fig = plt.figure(figsize=(12, 6)) plt.plot(param_log.index.values, param_log['accuracy'], '.', markersize=5) title = 'Improvements are hard to come by' subtitle = 'Accuracy of logistic regression hyperparameter optimization history' format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy', title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2), toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5) plt.show() ``` # Support Vector Machine The [`LinearSVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC) class is similar to a generic [`SVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) with a linear kernel, but is implemented with liblinear instead of libsvm. The documentation states that `LinearSVC` scales better to large sample sizes since `SVC`'s fit time complexity is more than quadratic with the number of samples. I initially tried `SVC`, but the training time was too costly. `LinearSVC` proved to be must faster for this application. The code below sets up a `LinearSVC` hyperparameter search space using four parameters: `C` (penalty of the error term), `loss` (the loss function), `fit_intercept` (identical to `LogisticRegression`), and `intercept_scaling` (identical to `LogisticRegression`). I limited the number of evaluations to 500 to reduce the computational cost. ``` space_svm = {} space_svm['C'] = hp.loguniform('C', -8*np.log(10), -2*np.log(10)) space_svm['intercept_scaling'] = hp.loguniform('intercept_scaling', -8*np.log(10), 8*np.log(10)) space_svm['loss'] = hp.choice('loss', ['hinge', 'squared_hinge']) space_svm['fit_intercept'] = hp.choice('fit_intercept', [False, True]) model = LinearSVC() best_svm, param_svm = optimize_params(model, x_train, y_train, stats, space_svm, max_evals=500) print(best_svm) ``` The search history below is similar to the logistic regression history, but hyperopt appears to test more intercept scales with low values than before. This is also indicated by the drastic reduction in the intercept scale compared to logistic regression. ``` labels = ['Regularization', 'Intercept Scale', 'Loss', 'Intercept'] fig, ax = plot_matrix(param_svm.index.values, param_svm[[k for k in space_svm.keys()]].values, 'Iteration', labels, 2, 2, logy=[True, True, False, False]) [a.set_yticks([0, 1]) for a in ax[2:]] ax[2].set_yticklabels(['Hinge', 'Squared\nHinge']) ax[3].set_yticklabels(['False', 'True']) title = 'Hyperopt is more flexible than a grid search' subtitle = 'Hyperopt search of support vector machine hyperparameters' format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle, xoff=(-0.24, 2.25), yoff=(-1.42, -1.52), toff=(-.22, 1.25), soff=(-0.22, 1.12), n=80, bottomtick=np.nan) plt.show() ``` The plot below shows the `LinearSVC` cross-validation accuracy history. There is a band of points similar to what we observed for logistic regression below 51% accuracy. The support vector machine model does not perform much better than logistic regression, and several points fall below 50% accuracy. 
``` fig = plt.figure(figsize=(12, 6)) plt.plot(param_svm.index.values, param_svm['accuracy'], '.', markersize=5) title = 'Improvements are hard to come by' subtitle = 'Accuracy of support vector machine hyperparameter optimization history' format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy', title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2), toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5) plt.show() ``` # Random Forest The code below builds a [`RandomForestClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier) hyperparameter search space using the parameters `n_estimators` (number of decision trees in the forest), `class_weight` (identical to the `LogisticRegression` grid search), `criterion` (function to evaluate split quality), and `bootstrap` (controls whether bootstrap samples are used when building trees). I reduced the number of function evaluations to 100 in the interest of computational time. ``` space_rf = {} space_rf['n_estimators'] = 10 + hp.randint('n_estimators', 40) space_rf['criterion'] = hp.choice('criterion', ['gini', 'entropy']) space_rf['class_weight'] = hp.choice('class_weight', [None, 'balanced']) space_rf['bootstrap'] = hp.choice('bootstrap', [False, True]) model = RandomForestClassifier(random_state=8) best_rf, param_rf = optimize_params(model, x_train, y_train, stats, space_rf, max_evals=100) print(best_rf) ``` The random forest hyperparameter search history is displayed below. ``` labels = ['Estimators', 'Criterion', 'Class Weight', 'Bootstrap'] fig, ax = plot_matrix(param_rf.index.values, param_rf[[k for k in space_rf.keys()]].values, 'Iteration', labels, 2, 2) [a.set_yticks([0, 1]) for a in ax[1:]] ax[1].set_yticklabels(['Gini', 'Entropy']) ax[2].set_yticklabels(['None', 'Balanced']) ax[3].set_yticklabels(['False', 'True']) title = 'Hyperopt is more flexible than a grid search' subtitle = 'Hyperopt search of random forest hyperparameters' format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle, xoff=(-0.26, 2.25), yoff=(-1.42, -1.52), toff=(-.24, 1.25), soff=(-0.24, 1.12), n=80, bottomtick=np.nan) plt.show() ``` The cross-validation accuracy history shows the random forest model performs slightly worse than logistic regression. ``` fig = plt.figure(figsize=(12, 6)) plt.plot(param_rf.index.values, param_rf['accuracy'], '.', markersize=5) title = 'Improvements are hard to come by' subtitle = 'Accuracy of random forest hyperparameter optimization history' format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy', title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2), toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5) plt.show() ``` # Neural Network The code below builds a [`MLPClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier) hyperparameter search space using the parameters `hidden_layer_sizes` (number of neurons in each hidden layer), `alpha` (controls the L2 regularization similar to the `C` parameter in `LogisticRegression` and `LinearSVC`), `activation` (network activation function), and `solver` (the algorithm used to optimize network weights). The network structure was held to a single hidden layer. I kept the number of function evaluations at 100 in the interest of computational time. 
``` space_mlp = {} space_mlp['hidden_layer_sizes'] = 10 + hp.randint('hidden_layer_sizes', 40) space_mlp['alpha'] = hp.loguniform('alpha', -8*np.log(10), 3*np.log(10)) space_mlp['activation'] = hp.choice('activation', ['relu', 'logistic', 'tanh']) space_mlp['solver'] = hp.choice('solver', ['lbfgs', 'sgd', 'adam']) model = MLPClassifier() best_mlp, param_mlp = optimize_params(model, x_train, y_train, stats, space_mlp, max_evals=100) print(best_mlp) ``` The multi-layer perceptron hyperparameter search history is displayed below. ``` labels = ['Hidden Neurons', 'Regularization', 'Activation', 'Solver'] fig, ax = plot_matrix(param_mlp.index.values, param_mlp[[k for k in space_mlp.keys()]].values, 'Iteration', labels, 2, 2, logy=[False, True, False, False]) [a.set_yticks([0, 1, 2]) for a in ax[2:]] ax[2].set_yticklabels(['RELU', 'Logistic', 'Tanh']) ax[3].set_yticklabels(['LBFGS', 'SGD', 'ADAM']) title = 'Hyperopt is more flexible than a grid search' subtitle = 'Hyperopt search of multi-layer perceptron hyperparameters' format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle, xoff=(-0.26, 2.25), yoff=(-1.42, -1.52), toff=(-.24, 1.25), soff=(-0.24, 1.12), n=80, bottomtick=np.nan) plt.show() ``` The cross-validation history suggests the multi-layer perceptron performs the best of the four models, albeit the improvement is minor. ``` fig = plt.figure(figsize=(12, 6)) plt.plot(param_mlp.index.values, param_mlp['accuracy'], '.', markersize=5) title = 'Improvements are hard to come by' subtitle = 'Accuracy of multi-layer perceptron hyperparameter optimization history' format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy', title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2), toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5) plt.show() ```
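One caveat when reading the printed `best_*` dictionaries: if `optimize_params` returns hyperopt's raw result, entries defined with `hp.choice` are reported as indices into the choice lists rather than as the chosen values. The sketch below (an assumption about that return value, not something shown in this notebook) illustrates how such a dictionary could be decoded with hyperopt's `space_eval`.

```
# Sketch: decode an index-encoded hyperopt result back into parameter values.
# Assumes best_mlp is hyperopt's raw best dictionary for the space_mlp defined above.
from hyperopt import space_eval

best_mlp_params = space_eval(space_mlp, best_mlp)
print(best_mlp_params)
```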
### What is DCT (discrete cosine transformation) ? - This notebook creates arbitrary consumption functions at both 1-dimensional and 2-dimensional grids and illustrate how DCT approximates the full-grid function with different level of accuracies. - This is used in [DCT-Copula-Illustration notebook](DCT-Copula-Illustration.ipynb) to plot consumption functions approximated by DCT versus original consumption function at full grids. - Written by Tao Wang - June 19, 2019 ``` # Setup def in_ipynb(): try: if str(type(get_ipython())) == "<class 'ipykernel.zmqshell.ZMQInteractiveShell'>": return True else: return False except NameError: return False # Determine whether to make the figures inline (for spyder or jupyter) # vs whatever is the automatic setting that will apply if run from the terminal if in_ipynb(): # %matplotlib inline generates a syntax error when run from the shell # so do this instead get_ipython().run_line_magic('matplotlib', 'inline') else: get_ipython().run_line_magic('matplotlib', 'auto') # Import tools import scipy.fftpack as sf # scipy discrete fourier transform import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy.linalg as lag from scipy import misc from matplotlib import cm ## DCT in 1 dimension grids= np.linspace(0,100,100) # this represents the grids on which consumption function is defined.i.e. m or k c =grids + 50*np.cos(grids*2*np.pi/40) # this is an arbitrary example of consumption function c_dct = sf.dct(c,norm='ortho') # set norm =ortho is important ind=np.argsort(abs(c_dct))[::-1] # get indices of dct coefficients(absolute value) in descending order ## DCT in 1 dimension for difference accuracy levels fig = plt.figure(figsize=(5,5)) fig.suptitle('DCT compressed c function with different accuracy levels') lvl_lst = np.array([0.5,0.9,0.99]) plt.plot(c,'r*',label='c at full grids') c_dct = sf.dct(c,norm='ortho') # set norm =ortho is important ind=np.argsort(abs(c_dct))[::-1] for idx in range(len(lvl_lst)): i = 1 # starts the loop that finds the needed indices so that an target level of approximation is achieved while lag.norm(c_dct[ind[0:i]].copy())/lag.norm(c_dct) < lvl_lst[idx]: i = i + 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions used") c_dct_rdc=c.copy() c_dct_rdc[ind[needed+1:]] = 0 c_approx = sf.idct(c_dct_rdc) plt.plot(c_approx,label=r'c approx at ${}$'.format(lvl_lst[idx])) plt.legend(loc=0) ## Blockwise DCT. For illustration but not used in BayerLuetticke. 
## But it illustrates how doing dct in more finely devided blocks give a better approximation size = c.shape c_dct = np.zeros(size) c_approx=np.zeros(size) fig = plt.figure(figsize=(5,5)) fig.suptitle('DCT compressed c function with different number of basis funcs') nbs_lst = np.array([20,50]) plt.plot(c,'r*',label='c at full grids') for i in range(len(nbs_lst)): delta = np.int(size[0]/nbs_lst[i]) for pos in np.r_[:size[0]:delta]: c_dct[pos:(pos+delta)] = sf.dct(c[pos:(pos+delta)],norm='ortho') c_approx[pos:(pos+delta)]=sf.idct(c_dct[pos:(pos+delta)]) plt.plot(c_dct,label=r'Nb of blocks= ${}$'.format(nbs_lst[i])) plt.legend(loc=0) # DCT in 2 dimensions def dct2d(x): x0 = sf.dct(x.copy(),axis=0,norm='ortho') x_dct = sf.dct(x0.copy(),axis=1,norm='ortho') return x_dct def idct2d(x): x0 = sf.idct(x.copy(),axis=1,norm='ortho') x_idct= sf.idct(x0.copy(),axis=0,norm='ortho') return x_idct # arbitrarily generate a consumption function at different grid points grid0=20 grid1=20 grids0 = np.linspace(0,20,grid0) grids1 = np.linspace(0,20,grid1) c2d = np.zeros([grid0,grid1]) # create an arbitrary c functions at 2-dimensional grids for i in range(grid0): for j in range(grid1): c2d[i,j]= grids0[i]*grids1[j] - 50*np.sin(grids0[i]*2*np.pi/40)+10*np.cos(grids1[j]*2*np.pi/40) ## do dct for 2-dimensional c at full grids c2d_dct=dct2d(c2d) ## convert the 2d to 1d for easier manipulation c2d_dct_flt = c2d_dct.flatten(order='F') ind2d = np.argsort(abs(c2d_dct_flt.copy()))[::-1] # get indices of dct coefficients(abosolute value) # in the decending order # DCT in 2 dimensions for different levels of accuracy fig = plt.figure(figsize=(15,10)) fig.suptitle('DCT compressed c function with different accuracy levels') lvl_lst = np.array([0.999,0.99,0.9,0.8,0.5]) ax=fig.add_subplot(2,3,1) ax.imshow(c2d) ax.set_title(r'$1$') for idx in range(len(lvl_lst)): i = 1 while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]: i += 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used") c2d_dct_rdc=c2d_dct.copy() idx_urv = np.unravel_index(np.sort(ind2d[needed+1:]),(grid0,grid1),order='F') c2d_dct_rdc[idx_urv] = 0 c2d_approx = idct2d(c2d_dct_rdc) ax = fig.add_subplot(2,3,idx+2) ax.set_title(r'${}$'.format(lvl_lst[idx])) ax.imshow(c2d_approx) ## surface plot of c at full grids and dct approximates with different accuracy levels fig = plt.figure(figsize=(15,10)) fig.suptitle('DCT compressed c function in different accuracy levels') lvl_lst = np.array([0.999,0.99,0.9,0.8,0.5]) ax=fig.add_subplot(2,3,1,projection='3d') ax.plot_surface(grids0,grids1,c2d,cmap=cm.coolwarm) ax.set_title(r'$1$') for idx in range(len(lvl_lst)): i = 1 while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]: i += 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used") c2d_dct_rdc=c2d_dct.copy() idx_urv = np.unravel_index(ind2d[needed+1:],(grid0,grid1)) c2d_dct_rdc[idx_urv] = 0 c2d_approx = idct2d(c2d_dct_rdc) ax = fig.add_subplot(2,3,idx+2,projection='3d') ax.set_title(r'${}$'.format(lvl_lst[idx])) ax.plot_surface(grids0,grids1,c2d_approx,cmap=cm.coolwarm) # surface plot of absoulte value of differences of c at full grids and approximated fig = plt.figure(figsize=(15,10)) fig.suptitle('Differences(abosolute value) of DCT compressed with c at full grids in different accuracy levels') lvl_lst = np.array([0.999,0.99,0.9,0.8,0.5]) ax=fig.add_subplot(2,3,1,projection='3d') c2d_diff = abs(c2d-c2d) 
ax.plot_surface(grids0,grids1,c2d_diff,cmap=cm.coolwarm) ax.set_title(r'$1$') for idx in range(len(lvl_lst)): i = 1 while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]: i += 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used") c2d_dct_rdc=c2d_dct.copy() idx_urv = np.unravel_index(ind2d[needed+1:],(grid0,grid1)) c2d_dct_rdc[idx_urv] = 0 c2d_approx = idct2d(c2d_dct_rdc) c2d_approx_diff = abs(c2d_approx - c2d) ax = fig.add_subplot(2,3,idx+2,projection='3d') ax.set_title(r'${}$'.format(lvl_lst[idx])) ax.plot_surface(grids0,grids1,c2d_approx_diff,cmap= 'OrRd',linewidth=1) ax.view_init(20, 90) ```
``` import pymongo import pandas as pd import numpy as np from pymongo import MongoClient from bson.objectid import ObjectId import datetime import matplotlib.pyplot as plt from collections import defaultdict %matplotlib inline import json plt.style.use('ggplot') import seaborn as sns from math import log10, floor from time import time from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer from sklearn.cluster import KMeans, MiniBatchKMeans ``` # CU Woot Math Method 2 for unsupervosed discovery of new behavior traits ## 1) Convert response field dictionary into a document ## 2) Develop word vector using term frequency - inverse document frequency ## 3) Use K-Means to cluster documents ## 4) Map traits to clusters to validate technique In the first results presented to Woot Math a 100K sample of the entire data set was chosen. In this report, I'll start with the same type of analysis to develop the same heat map. In the meeting Sean and Brent suggested using just one of the qual_id and repeat the experiment and then look at the samples in clusers without traits. I'll do that in a subsequent analysis ## Part 1. Heat map with 100 K sample of all qual_id's ``` ## Connect to local DB client = MongoClient('localhost', 27017) print ("Setup db access") # # Get collections from mongodb # #db = client.my_test_db db = client.test chunk = 100000 start = 0 end = start + chunk #reponses = db.anon_student_task_responses.find({'correct':False})[start:end] reponses = db.anon_student_task_responses.find()[start:end] df_responses = pd.DataFrame(list(reponses)) print (df_responses.shape) ## Make the documents to be analyzed ## Functions for turning dictionary into document def make_string_from_list(key, elem_list): # Append key to each item in list ans = '' for elem in elem_list: ans += key + '_' + elem def make_string(elem, key=None, top=True): ans = '' if not elem: return ans if top: top = False top_keys = [] for idx in range(len(elem.keys())): top_keys.append(True) for idx, key in enumerate(elem.keys()): if top_keys[idx]: top = True top_keys[idx] = False ans += ' ' else: top = False #print ('ans = ', ans) #print (type(elem[key])) if type(elem[key]) is str or\ type(elem[key]) is int: #print ('add value', elem[key]) value = str(elem[key]) #ans += key + '_' + value + ' ' + value + ' ' ans += key + '_' + value + ' ' elif type(elem[key]) is list: #print ('add list', elem[key]) temp_elem = dict() for item in elem[key]: temp_elem[key] = item ans += make_string(temp_elem, top) elif type(elem[key]) is dict: #print ('add dict', elem[key]) for item_key in elem[key].keys(): temp_elem = dict() temp_elem[item_key] = elem[key][item_key] ans += key + '_' + make_string(temp_elem, top) elif type(elem[key]) is float: #print ('add dict', elem[key]) sig = 2 value = elem[key] value = round(value, sig-int( floor(log10(abs(value))))-1) value = str(value) #ans += key + '_' + value + ' ' + value + ' ' ans += key + '_' + value + ' ' # ans += ' ' + key + ' ' #print ('not handled', elem[key]) return ans # Makes the cut & paste below easier df3 = df_responses df3['response_doc'] = df3['response'].map(make_string) df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ') df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace('/','_')) df3['response_doc'] = df3['response_doc'] + ' ' + df3['txt'] df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ') df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace("\n", "")) df3['response_doc'] = df3['response_doc'].map(lambda x: 
x.replace("?", " ")) ``` ## Sample Documents ``` for idx in range(20): print ("Sample number:", idx, "\n", df3.iloc[idx]['response_doc']) data_samples = df3['response_doc'] n_features = 1000 n_samples = len(data_samples) n_topics = 50 n_top_words = 20 print("Extracting tf-idf features ...") tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=n_features, stop_words='english') t0 = time() tfidf = tfidf_vectorizer.fit_transform(data_samples) print("done in %0.3fs." % (time() - t0)) # Number of clusters true_k = 100 km = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1, init_size=1000, batch_size=1000, random_state=62) print("Clustering with %s" % km) t0 = time() km.fit(tfidf) print("done in %0.3fs" % (time() - t0)) print() print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(true_k): print("Cluster %d:\n" % i, end='') for ind in order_centroids[i, :30]: print(' --- %s\n' % terms[ind], end='') print() df3['cluster_100'] = km.labels_ df3['trait_1'] = df3['behavioral_traits'].apply(lambda x : x[0] if len(x) > 0 else 'None' ) df3['trait_2'] = df3['behavioral_traits'].apply(lambda x : x[1] if len(x) > 1 else 'None' ) df_trait_1 = df3.groupby(['cluster_100', 'trait_1']).size().unstack(fill_value=0) df_trait_2 = df3.groupby(['cluster_100', 'trait_2']).size().unstack(fill_value=0) df_cluster_100 = df3.groupby('cluster_100') df_trait_1.index.rename('cluster_100', inplace=True) df_trait_2.index.rename('cluster_100', inplace=True) df_traits = pd.concat([df_trait_1, df_trait_2], axis=1) df_traits = df_traits.drop('None', axis=1) #df_traits_norm = (df_traits - df_traits.mean()) / (df_traits.max() - df_traits.min()) df_traits_norm = (df_traits / (df_traits.sum()) ) fig = plt.figure(figsize=(18.5, 16)) cmap = sns.cubehelix_palette(light=.95, as_cmap=True) sns.heatmap(df_traits_norm, cmap=cmap, linewidths=.5) #sns.heatmap(df_traits_norm, cmap="YlGnBu", linewidths=.5) ```
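Mapping clusters to the known behavioral traits is one way to validate the technique; a label-free complement would be an internal cluster-quality measure. The sketch below (not part of the original analysis) computes a sampled silhouette score, assuming the `tfidf` matrix and the fitted `km` model from the cells above; sampling keeps the pairwise-distance computation tractable for roughly 100K documents.

```
# Sketch: label-free check of cluster separation on a random subsample.
# Assumes tfidf and km from the cells above.
from sklearn.metrics import silhouette_score

score = silhouette_score(tfidf, km.labels_, sample_size=5000, random_state=62)
print("Sampled silhouette score:", score)
```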
## Analysis of stock prices using PCA / Notebook 3

In this notebook we will study the dimensionality of stock price sequences, and show that they lie between the dimension 1 of smooth functions and the dimension 2 of rapidly varying functions.

Benoit Mandelbrot and Richard Hudson wrote a book titled [The Misbehavior of Markets: A Fractal View of Financial Turbulence](https://www.amazon.com/gp/product/0465043577?ie=UTF8&tag=trivisonno-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0465043577). In this book they demonstrate that financial sequences have a fractal dimension that is higher than one. In other words, the changes in stock prices are more similar to a random walk than to a smooth differentiable curve.

In this notebook we will estimate the fractal dimension of sequences corresponding to the log of the price of a stock. We will do the same for some other, non-random sequences. We will use the [Box Counting](https://en.wikipedia.org/wiki/Box_counting) method to estimate the dimension.

### Box Counting

For the sake of simplicity, let's start with a simple smooth curve corresponding to $\sin(x)$. Intuitively speaking, the dimension of this curve should be 1. Let's see how we measure that using box-counting.

The idea is simple: we split the 2D plane into smaller and smaller rectangles and count the number of rectangles that touch the curve. The gridlines in the figure below partition the figure into $16 \times 16 = 256$ rectangles. The yellow shading corresponds to the partition of the figure into $8 \times 8$ rectangles, the green corresponds to the partition into $16\times 16$ (which is the same as the grid), and the blue and the red correspond to partitions into $32\times32$ and $64 \times 64$ respectively. You can see that as the boxes get smaller, their number increases.

![Sinusoid](figs/Sinusoid.BoxCount.png)

The dimension is defined by the relation between the size of the rectangles and the number of rectangles that touch the curve. More precisely, we say that the size of a rectangle in an $n \times n$ partition is $\epsilon=1/n$. We denote by $N(\epsilon)$ the number of rectangles of size $\epsilon$ that touch the curve. Then if $d$ is the dimension, the relationship between $N(\epsilon)$ and $\epsilon$ is

$$ N(\epsilon) = \frac{C}{\epsilon^d} $$

for some constant $C$. Taking $\log$s of both sides we get

$$ (1)\;\;\;\;\;\;\;\;\;\;\;\;\log N(\epsilon) = \log C + d \log \frac{1}{\epsilon} $$

We can use this equation to estimate $d$ as follows: let $\epsilon_2 \ll \epsilon_1$ be two sizes that are far apart (say $\epsilon_1=1/4$ and $\epsilon_2=1/1024$), and let $N(\epsilon_1),N(\epsilon_2)$ be the corresponding box counts. Then by taking the difference between Equation (1) at the two sizes we get the estimate

$$ d \approx \frac{\log N(\epsilon_1) - \log N(\epsilon_2)}{\log \epsilon_2- \log \epsilon_1} $$

Note that this is only an estimate: it depends on the particular values of $\epsilon_1$ and $\epsilon_2$. We can refer to it as the "dimension" only if we get approximately the same number for any choice of the two sizes (as well as of other details, such as the extent of the function).
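To make the two-scale estimate concrete, here is a small self-contained sketch (not from the original notebook) that box-counts a densely sampled sine curve on the unit square and applies the formula above; for a smooth curve we expect a value close to 1.

```
# Sketch: two-scale box-counting dimension estimate for a smooth curve.
import numpy as np

def box_count(x, y, n):
    # index of the n x n grid cell containing each sample of the curve (x, y in [0, 1])
    ix = np.minimum((x * n).astype(int), n - 1)
    iy = np.minimum((y * n).astype(int), n - 1)
    return len(set(zip(ix, iy)))

# densely sampled sine curve, rescaled to the unit square
t = np.linspace(0, 2 * np.pi, 200000)
x = t / t.max()
y = (np.sin(t) + 1) / 2

eps1, eps2 = 1.0 / 4, 1.0 / 1024
N1, N2 = box_count(x, y, 4), box_count(x, y, 1024)
d = (np.log(N1) - np.log(N2)) / (np.log(eps2) - np.log(eps1))
print(f"N(1/4) = {N1}, N(1/1024) = {N2}, estimated dimension = {d:.2f}")
```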
Here are similar figures for the sequences of two stocks:

![AMZN](figs/AMZN.BoxCount.png)

![IBM](figs/IBM.BoxCount.png)

```
import findspark
findspark.init()

from pyspark import SparkContext
#sc.stop()
sc = SparkContext(master="local[3]")

from pyspark.sql import *
sqlContext = SQLContext(sc)

%pylab inline
import numpy as np

# load the S&P 500 price table
df = sqlContext.read.csv('../Data/SP500.csv', header='true', inferSchema='true')
df.count()

# the price columns end with '_P'; strip the suffix to get the ticker names
columns = df.columns
col = [c for c in columns if '_P' in c]
tickers = [a[:-2] for a in col]
tickers[:10], len(tickers)

def get_seq(ticker):
    """Return the price sequence of a ticker, with missing values removed."""
    key = ticker + "_P"
    L = df.select(key).collect()
    L = [x[key] for x in L if not x[key] is None]
    return L
```

#### We generate graphs like the ones below for your analysis of the dimensionality of the stocks

![Graph for Analysing Stocks](figs/plots.png)

```
pickleFile = "Tester/Dimensionality.pkl"
```

## Finding Dimension

We find the dimension for a particular ticker using its sequence of data.

###### <span style="color:blue">Sample Input:</span>
```python
dimension = Box_count([sequence of AAPL], 'AAPL')
```
###### <span style="color:magenta">Sample Output:</span>

dimension = 1.28

```
from scipy.optimize import curve_fit
import pandas as pd

def f(x, A, Df):
    '''
    User defined function for scipy.optimize.curve_fit(), which will find optimal
    values for A (the intercept log C) and Df (the slope, i.e. the dimension).
    '''
    return Df * x + A

def count_boxes(PriceSequence, n):
    """Count how many cells of an n x n grid are touched by the log-price curve."""
    length = len(PriceSequence)
    # work with the log of the price, as discussed in the introduction
    # (use a list rather than a lazy map object so the sequence can be indexed below)
    PriceSequence = [np.log(x) for x in PriceSequence]
    maxP = max(PriceSequence)
    minP = min(PriceSequence)
    full_x = np.linspace(0, length, n+1).tolist()
    full_y = np.linspace(minP, maxP, n+1).tolist()
    x_spacing = full_x[1] - full_x[0]
    y_spacing = full_y[1] - full_y[0]
    counts = np.zeros((n, n))
    boxpoints = n + 1
    for i in range(length - 1):
        # sample the straight segment between consecutive points and mark the cells it crosses
        (x1, x2) = (i, i+1)
        (y1, y2) = (PriceSequence[i], PriceSequence[i+1])
        xPoints = np.linspace(x1, x2, boxpoints).tolist()
        yPoints = np.linspace(y1, y2, boxpoints).tolist()
        for j in range(boxpoints):
            # clamp the indices so points on the upper boundary fall into the last cell
            xindex = min(int(xPoints[j] / x_spacing), n - 1)
            yindex = min(int((yPoints[j] - minP) / y_spacing), n - 1)
            if counts[xindex][yindex] == 0:
                counts[xindex][yindex] = 1
    return np.sum(counts)

def Box_count(LL, ticker):
    ## Your Implementation goes here
    r = np.array([2.0**i for i in range(0, 10)])   # r = 1/epsilon, the number of boxes per axis
    N = np.array([count_boxes(LL, int(ri)) for ri in r])
    # fit log N = Df * log r + A; the slope Df is the dimension estimate
    popt, pcov = curve_fit(f, np.log(r), np.log(N))
    logC, dimension = popt
    return dimension
```
# PySDDR: An Advanced Tutorial

In the beginner's guide only tabular data was used as input to the PySDDR framework. In this advanced tutorial we show the effects when combining structured and unstructured data. Currently, the framework only supports images as unstructured data.

We will use the MNIST dataset as a source for the unstructured data and generate additional tabular features corresponding to it. Our outcome in this tutorial is simulated based on linear and non-linear effects of tabular data and a linear effect of the number shown on the MNIST image. Our model is not provided with the (true) number, but instead has to learn the number effect from the image (together with the structured data effects):

\begin{equation*}
y = \sin(x_1) - 3x_2 + x_3^4 + 3\cdot number + \epsilon
\end{equation*}

with $\epsilon \sim \mathcal{N}(0, \sigma^2)$, where $number$ is the number on the MNIST image. The aim of training is for the model to be able to output a latent effect representing the number depicted in the MNIST image.

We start by importing the sddr module and other required libraries.

```
# import the sddr module
from sddr import Sddr

import torch
import torch.nn as nn
import torch.optim as optim

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# set seeds for reproducibility
torch.manual_seed(1)
np.random.seed(1)
```

### User inputs

First the user defines the data to be used. The data is loaded and, if it does not already exist, a column needs to be added to the tabular data describing the correspondence between the unstructured and the structured data. In the example below we add a column where each item contains the name of the image to which the current row of tabular data corresponds.

```
data_path = '../data/mnist_data/tab.csv'
data = pd.read_csv(data_path, delimiter=',')

# append a column for the numbers: each data point contains the file name of the corresponding image
for i in data.index:
    data.loc[i, 'numbers'] = f'img_{i}.jpg'
```

Next the distribution, formulas and training parameters are defined. The size of each image is ```28x28```, so our neural network starts with a layer that flattens the input, followed by a linear layer with input size ```28x28``` and output size ```128```. Finally, this is followed by a ```ReLU``` activation.

Here the unstructured data is not pre-loaded, as it would typically be too large to load in one step. Instead, the path to the directory in which it is stored is provided along with the data type (for now only 'images' is supported). The images are then loaded in batches using PyTorch's dataloader. Note that here again the key given in the ```unstructured_data``` dictionary must match the name it is given in the formula, in this case ```'numbers'```.
Similarly the keys of the ```deep_models_dict``` must also match the names in the formula, in this case ```'dnn'``` ``` # define distribution and the formula for the distibutional parameter distribution = 'Normal' formulas = {'loc': '~ -1 + spline(x1, bs="bs", df=10) + x2 + dnn(numbers) + spline(x3, bs="bs", df=10)', 'scale': '~1' } # define the deep neural networks' architectures and output shapes used in the above formula deep_models_dict = { 'dnn': { 'model': nn.Sequential(nn.Flatten(1, -1), nn.Linear(28*28,128), nn.ReLU()), 'output_shape': 128}, } # define your training hyperparameters train_parameters = { 'batch_size': 8000, 'epochs': 1000, 'degrees_of_freedom': {'loc':9.6, 'scale':9.6}, 'optimizer' : optim.Adam, 'val_split': 0.15, 'early_stop_epsilon': 0.001, 'dropout_rate': 0.01 } # provide the location and datatype of the unstructured data unstructured_data = { 'numbers' : { 'path' : '../data/mnist_data/mnist_images', 'datatype' : 'image' } } # define output directory output_dir = './outputs' ``` ### Initialization The sddr instance is initialized with the parameters given by the user in the previous step: ``` sddr = Sddr(output_dir=output_dir, distribution=distribution, formulas=formulas, deep_models_dict=deep_models_dict, train_parameters=train_parameters, ) ``` ### Training The sddr network is trained with the data defined above and the loss curve is plotted. ``` sddr.train(structured_data=data, target="y_gen", unstructured_data = unstructured_data, plot=True) ``` ### Evaluation - Visualizing the partial effects In this case the data is assumed to follow a normal distribution, in which case two distributional parameters, loc and scale, need to be estimated. Below we plot the partial effects of each smooth term. Remember the partial effects are computed by: partial effect = smooth_features * coefs (weights) In other words the smoothing terms are multiplied with the weights of the Structured Head. We use the partial effects to interpret whether our model has learned correctly. ``` partial_effects_loc = sddr.eval('loc',plot=True) partial_effects_scale = sddr.eval('scale',plot=True) ``` As we can see the distributional parameter loc has two parial effects, one sinusoidal and one quadratic. The parameter scale expectedly has no partial effect since the formula only includes an intercept. Next we retrieve our ground truth data and compare it with the model's estimation ``` # compare prediction of neural network with ground truth data_pred = data.loc[:,:] ground_truth = data.loc[:,'y_gen'] # predict returns partial effects and a distributional layer that gives statistical information about the prediction distribution_layer, partial_effect = sddr.predict(data_pred, clipping=True, plot=False, unstructured_data = unstructured_data) # retrieve the mean and variance of the distributional layer predicted_mean = distribution_layer.loc[:,:].T predicted_variance = distribution_layer.scale[0] # and plot the result plt.scatter(ground_truth, predicted_mean) print(f"Predicted variance for first sample: {predicted_variance}") ``` The comparison shows that for most samples the predicted and true values are directly propotional. 
Next we want to check if the model learned the correct correspondence of images and numbers ``` # we create a copy of our original structured data where we set all inputs but the images to be zero data_pred_copy = data.copy() data_pred_copy.loc[:,'x1'] = 0 data_pred_copy.loc[:,'x2'] = 0 data_pred_copy.loc[:,'x3'] = 0 # and make a prediction using only the images distribution_layer, partial_effect = sddr.predict(data_pred_copy, clipping=True, plot=False, unstructured_data = unstructured_data) # add the predicted mean value to our tabular data data_pred_copy['predicted_number'] = distribution_layer.loc[:,:].numpy().flatten() # and compare the true number on the images with the predicted number ax = sns.boxplot(x="y_true", y="predicted_number", data=data_pred_copy) ax.set_xlabel("true number"); ax.set_ylabel("predicted latent effect of number"); ``` Observing the boxplot figure we see that as the true values, i.e. numbers depicted on images, are increasing, so too are the medians of the predicted distributions. Therefore the partial effect of the neural network is directly correlated with the number depicted in the MNIST images, proving that our neural network, though simple, has learned from the unstructured data.
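To put a number on the trend visible in the boxplot, one could also compute a rank correlation between the true digit and the predicted latent effect (a small sketch, not part of the original tutorial; it assumes the `data_pred_copy` frame built above).

```
# Sketch: quantify the trend seen in the boxplot.
# Assumes data_pred_copy with the 'y_true' and 'predicted_number' columns from above.
from scipy.stats import spearmanr

rho, pval = spearmanr(data_pred_copy['y_true'], data_pred_copy['predicted_number'])
print(f"Spearman rank correlation: {rho:.3f} (p = {pval:.2g})")
```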
<center> <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> </center> # Simple Linear Regression Estimated time needed: **15** minutes ## Objectives After completing this lab you will be able to: * Use scikit-learn to implement simple Linear Regression * Create a model, train it, test it and use the model ### Importing Needed packages ``` import matplotlib.pyplot as plt import pandas as pd import pylab as pl import numpy as np %matplotlib inline ``` ### Downloading Data To download the data, we will use !wget to download it from IBM Object Storage. ``` !wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv ``` **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) ## Understanding the Data ### `FuelConsumption.csv`: We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01) * **MODELYEAR** e.g. 2014 * **MAKE** e.g. Acura * **MODEL** e.g. ILX * **VEHICLE CLASS** e.g. SUV * **ENGINE SIZE** e.g. 4.7 * **CYLINDERS** e.g 6 * **TRANSMISSION** e.g. A6 * **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9 * **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9 * **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2 * **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 ## Reading the data in ``` df = pd.read_csv("FuelConsumption.csv") # take a look at the dataset df.head() ``` ### Data Exploration Let's first have a descriptive exploration on our data. ``` # summarize the data df.describe() ``` Let's select some features to explore more. ``` cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']] cdf.head(9) ``` We can plot each of these features: ``` viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']] viz.hist() plt.show() ``` Now, let's plot each of these features against the Emission, to see how linear their relationship is: ``` plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue') plt.xlabel("FUELCONSUMPTION_COMB") plt.ylabel("Emission") plt.show() plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show() ``` ## Practice Plot **CYLINDER** vs the Emission, to see how linear is their relationship is: ``` # write your code here ``` <details><summary>Click here for the solution</summary> ```python plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue') plt.xlabel("Cylinders") plt.ylabel("Emission") plt.show() ``` </details> #### Creating train and test dataset Train/Test Split involves splitting the dataset into training and testing sets that are mutually exclusive. 
After which, you train with the training set and test with the testing set. This will provide a more accurate evaluation on out-of-sample accuracy because the testing dataset is not part of the dataset that have been used to train the model. Therefore, it gives us a better understanding of how well our model generalizes on new data. This means that we know the outcome of each data point in the testing dataset, making it great to test with! Since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly an out-of-sample testing. Let's split our dataset into train and test sets. 80% of the entire dataset will be used for training and 20% for testing. We create a mask to select random rows using **np.random.rand()** function: ``` msk = np.random.rand(len(df)) < 0.8 train = cdf[msk] test = cdf[~msk] ``` ### Simple Regression Model Linear Regression fits a linear model with coefficients B = (B1, ..., Bn) to minimize the 'residual sum of squares' between the actual value y in the dataset, and the predicted value yhat using linear approximation. #### Train data distribution ``` plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show() ``` #### Modeling Using sklearn package to model data. ``` from sklearn import linear_model regr = linear_model.LinearRegression() train_x = np.asanyarray(train[['ENGINESIZE']]) train_y = np.asanyarray(train[['CO2EMISSIONS']]) regr.fit(train_x, train_y) # The coefficients print ('Coefficients: ', regr.coef_) print ('Intercept: ',regr.intercept_) ``` As mentioned before, **Coefficient** and **Intercept** in the simple linear regression, are the parameters of the fit line. Given that it is a simple linear regression, with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data. Notice that all of the data must be available to traverse and calculate the parameters. #### Plot outputs We can plot the fit line over the data: ``` plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue') plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r') plt.xlabel("Engine size") plt.ylabel("Emission") ``` #### Evaluation We compare the actual values and predicted values to calculate the accuracy of a regression model. Evaluation metrics provide a key role in the development of a model, as it provides insight to areas that require improvement. There are different model evaluation metrics, lets use MSE here to calculate the accuracy of our model based on the test set: * Mean Absolute Error: It is the mean of the absolute value of the errors. This is the easiest of the metrics to understand since it’s just average error. * Mean Squared Error (MSE): Mean Squared Error (MSE) is the mean of the squared error. It’s more popular than Mean Absolute Error because the focus is geared more towards large errors. This is due to the squared term exponentially increasing larger errors in comparison to smaller ones. * Root Mean Squared Error (RMSE). * R-squared is not an error, but rather a popular metric to measure the performance of your regression model. It represents how close the data points are to the fitted regression line. The higher the R-squared value, the better the model fits your data. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). 
```
from sklearn.metrics import r2_score

test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_ = regr.predict(test_x)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_))
```

## Exercise

Let's see what the evaluation metrics are if we train a regression model using the `FUELCONSUMPTION_COMB` feature.

Start by selecting `FUELCONSUMPTION_COMB` as the train_x data from the `train` dataframe, then select `FUELCONSUMPTION_COMB` as the test_x data from the `test` dataframe.

```
train_x = #ADD CODE

test_x = #ADD CODE
```

<details><summary>Click here for the solution</summary>

```python
train_x = train[["FUELCONSUMPTION_COMB"]]

test_x = test[["FUELCONSUMPTION_COMB"]]
```

</details>

Now train a Linear Regression model using the `train_x` you created and the `train_y` created previously.

```
regr = linear_model.LinearRegression()

#ADD CODE
```

<details><summary>Click here for the solution</summary>

```python
regr = linear_model.LinearRegression()

regr.fit(train_x, train_y)
```

</details>

Find the predictions using the model's `predict` function and the `test_x` data.

```
predictions = #ADD CODE
```

<details><summary>Click here for the solution</summary>

```python
predictions = regr.predict(test_x)
```

</details>

Finally use the `predictions` and the `test_y` data and find the Mean Absolute Error value using the `np.absolute` and `np.mean` functions, as done previously.

```
#ADD CODE
```

<details><summary>Click here for the solution</summary>

```python
print("Mean Absolute Error: %.2f" % np.mean(np.absolute(predictions - test_y)))
```

</details>

We can see that the MAE is much worse than it is when we train using `ENGINESIZE`.

<h2>Want to learn more?</h2>

IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">SPSS Modeler</a>

Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">Watson Studio</a>

### Thank you for completing this lab!
## Author Saeed Aghabozorgi ### Other Contributors <a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01" target="_blank">Joseph Santarcangelo</a> Azim Hirjani ## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ------------- | ---------------------------------- | | 2020-11-03 | 2.1 | Lakshmi Holla | Changed URL of the csv | | 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab | | | | | | | | | | | ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/>
``` # Copyright 2021 Google LLC # Use of this source code is governed by an MIT-style # license that can be found in the LICENSE file or at # https://opensource.org/licenses/MIT. # Author(s): Kevin P. Murphy ([email protected]) and Mahmoud Soliman ([email protected]) ``` <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a> <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/figures//chapter16_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Cloning the pyprobml repo ``` !git clone https://github.com/probml/pyprobml %cd pyprobml/scripts ``` # Installing required software (This may take few minutes) ``` !apt-get install octave -qq > /dev/null !apt-get install liboctave-dev -qq > /dev/null %%capture %load_ext autoreload %autoreload 2 DISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!' from google.colab import files def interactive_script(script, i=True): if i: s = open(script).read() if not s.split('\n', 1)[0]=="## "+DISCLAIMER: open(script, 'w').write( f'## {DISCLAIMER}\n' + '#' * (len(DISCLAIMER) + 3) + '\n\n' + s) files.view(script) %run $script else: %run $script def show_image(img_path): from google.colab.patches import cv2_imshow import cv2 img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED) img=cv2.resize(img,(600,600)) cv2_imshow(img) ``` ## Figure 16.1:<a name='16.1'></a> <a name='fig:knn'></a> (a) Illustration of a $K$-nearest neighbors classifier in 2d for $K=5$. The nearest neighbors of test point $\mathbf x $ have labels $\ 1, 1, 1, 0, 0\ $, so we predict $p(y=1|\mathbf x , \mathcal D ) = 3/5$. (b) Illustration of the Voronoi tesselation induced by 1-NN. Adapted from Figure 4.13 of <a href='#Duda01'>[DHS01]</a> . Figure(s) generated by [knn_voronoi_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/knn_voronoi_plot.py) ``` interactive_script("knn_voronoi_plot.py") ``` ## Figure 16.2:<a name='16.2'></a> <a name='knnThreeClass'></a> Decision boundaries induced by a KNN classifier. (a) $K=1$. (b) $K=2$. (c) $K=5$. (d) Train and test error vs $K$. Figure(s) generated by [knn_classify_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/knn_classify_demo.py) ``` interactive_script("knn_classify_demo.py") ``` ## Figure 16.3:<a name='16.3'></a> <a name='curse'></a> Illustration of the curse of dimensionality. (a) We embed a small cube of side $s$ inside a larger unit cube. (b) We plot the edge length of a cube needed to cover a given volume of the unit cube as a function of the number of dimensions. Adapted from Figure 2.6 from <a href='#HastieBook'>[HTF09]</a> . Figure(s) generated by [curse_dimensionality.py](https://github.com/probml/pyprobml/blob/master/scripts/curse_dimensionality.py) ``` interactive_script("curse_dimensionality.py") ``` ## Figure 16.4:<a name='16.4'></a> <a name='fig:LCA'></a> Illustration of latent coincidence analysis (LCA) as a directed graphical model. The inputs $\mathbf x , \mathbf x ' \in \mathbb R ^D$ are mapped into Gaussian latent variables $\mathbf z , \mathbf z ' \in \mathbb R ^L$ via a linear mapping $\mathbf W $. If the two latent points coincide (within length scale $\kappa $) then we set the similarity label to $y=1$, otherwise we set it to $y=0$. From Figure 1 of <a href='#Der2012'>[ML12]</a> . Used with kind permission of Lawrence Saul. 
``` show_image("/content/pyprobml/notebooks/figures/images/LCA-PGM.png") ``` ## Figure 16.5:<a name='16.5'></a> <a name='fig:tripletNet'></a> Networks for deep metric learning. (a) Siamese network. (b) Triplet network. From Figure 5 of <a href='#Kaya2019'>[MH19]</a> . Used with kind permission of Mahmut Kaya. . ``` show_image("/content/pyprobml/notebooks/figures/images/siameseNet.png") show_image("/content/pyprobml/notebooks/figures/images/tripletNet.png") ``` ## Figure 16.6:<a name='16.6'></a> <a name='fig:tripletBound'></a> Speeding up triplet loss minimization. (a) Illustration of hard vs easy negatives. Here $a$ is the anchor point, $p$ is a positive point, and $n_i$ are negative points. Adapted from Figure 4 of <a href='#Kaya2019'>[MH19]</a> . (b) Standard triplet loss would take $8 \times 3 \times 4 = 96$ calculations, whereas using a proxy loss (with one proxy per class) takes $8 \times 2 = 16$ calculations. From Figure 1 of <a href='#Do2019cvpr'>[Tha+19]</a> . Used with kind permission of Gustavo Cerneiro. ``` show_image("/content/pyprobml/notebooks/figures/images/hard-negative-mining.png") show_image("/content/pyprobml/notebooks/figures/images/tripletBound.png") ``` ## Figure 16.7:<a name='16.7'></a> <a name='fig:SEC'></a> Adding spherical embedding constraint to a deep metric learning method. Used with kind permission of Dingyi Zhang. ``` show_image("/content/pyprobml/notebooks/figures/images/SEC.png") ``` ## Figure 16.8:<a name='16.8'></a> <a name='smoothingKernels'></a> A comparison of some popular normalized kernels. Figure(s) generated by [smoothingKernelPlot.m](https://github.com/probml/pmtk3/blob/master/demos/smoothingKernelPlot.m) ``` !octave -W smoothingKernelPlot.m >> _ ``` ## Figure 16.9:<a name='16.9'></a> <a name='parzen'></a> A nonparametric (Parzen) density estimator in 1d estimated from 6 data points, denoted by x. Top row: uniform kernel. Bottom row: Gaussian kernel. Left column: bandwidth parameter $h=1$. Right column: bandwidth parameter $h=2$. Adapted from http://en.wikipedia.org/wiki/Kernel_density_estimation . Figure(s) generated by [Kernel_density_estimation](http://en.wikipedia.org/wiki/Kernel_density_estimation) [parzen_window_demo2.py](https://github.com/probml/pyprobml/blob/master/scripts/parzen_window_demo2.py) ``` interactive_script("parzen_window_demo2.py") ``` ## Figure 16.10:<a name='16.10'></a> <a name='kernelRegression'></a> An example of kernel regression in 1d using a Gaussian kernel. Figure(s) generated by [kernelRegressionDemo.m](https://github.com/probml/pmtk3/blob/master/demos/kernelRegressionDemo.m) ``` !octave -W kernelRegressionDemo.m >> _ ``` ## References: <a name='Duda01'>[DHS01]</a> R. O. Duda, P. E. Hart and D. G. Stork. "Pattern Classification". (2001). <a name='HastieBook'>[HTF09]</a> T. Hastie, R. Tibshirani and J. Friedman. "The Elements of Statistical Learning". (2009). <a name='Kaya2019'>[MH19]</a> K. Mahmut and B. HasanSakir. "Deep Metric Learning: A Survey". In: Symmetry (2019). <a name='Der2012'>[ML12]</a> D. Matthew and S. LawrenceK. "Latent Coincidence Analysis: A Hidden Variable Model forDistance Metric Learning". (2012). <a name='Do2019cvpr'>[Tha+19]</a> D. Thanh-Toan, T. Toan, R. Ian, K. Vijay, H. Tuan and C. Gustavo. "A Theoretically Sound Upper Bound on the Triplet Loss forImproving the Efficiency of Deep Distance Metric Learning". (2019).
# Inference and Validation

Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.

As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:

```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```

The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.

```
import torch
from torchvision import datasets, transforms

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])

# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```

Here I'll create a model like normal, using the same one from my solution for part 4.

```
from torch import nn, optim
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.log_softmax(self.fc4(x), dim=1)

        return x
```

The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.

```
model = Classifier()

images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))

# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```

With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
```

Now we can check if the predicted classes match the labels. This is simple to do by comparing `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.

If we do

```python
equals = top_class == labels
```

`equals` will have shape `(64, 64)`; try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels`, which returns 64 True/False boolean values for each row.

```
equals = top_class == labels.view(*top_class.shape)
#print(equals)
```

Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it were that simple. If you try `torch.mean(equals)`, you'll get an error

```
RuntimeError: mean is not implemented for type torch.ByteTensor
```

This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor; to get the actual value as a float we'll need to do `accuracy.item()`.

```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```

The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:

```python
# turn off gradients
with torch.no_grad():
    # validation pass here
    for images, labels in testloader:
        ...
```

>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.

```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

epochs = 30
steps = 0

train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:

        optimizer.zero_grad()

        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

    else:
        # validation pass: no gradients needed, accumulate accuracy over the whole test set
        accuracy = 0
        with torch.no_grad():
            for images, labels in testloader:
                ps = torch.exp(model(images))
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))

        print(f'Accuracy: {accuracy.item()/len(testloader)*100:.2f}%')
```

## Overfitting

If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.

<img src='assets/overfitting.png' width=450px>

The network learns the training set better and better, resulting in lower training losses.
However, it starts having problems generalizing to data outside the training set, leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training, then later choose the model with the lowest validation loss.

The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop units during training. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.

```python
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        # output so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)

        return x
```

During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.

```python
# turn off gradients
with torch.no_grad():

    # set model to evaluation mode
    model.eval()

    # validation pass here
    for images, labels in testloader:
        ...

# set model back to train mode
model.train()
```

> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
```
from torch import nn, optim
import torch.nn.functional as F

class Classifier2(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        x = F.log_softmax(self.fc4(x), dim=1)

        return x

model = Classifier2()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

epochs = 30
steps = 0

train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        model.train()
        optimizer.zero_grad()

        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

    else:
        # validation pass: dropout off, no gradients, averaged over the test set
        model.eval()
        accuracy = 0
        with torch.no_grad():
            for images, labels in testloader:
                ps = torch.exp(model(images))
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))

        print(f'Accuracy: {accuracy.item()/len(testloader)*100:.2f}%')
```

## Inference

Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.

```
# Import helper module (should be in the repo)
import helper

# Test out your network!

model.eval()

dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)

# Calculate the class probabilities (softmax) for img
with torch.no_grad():
    output = model.forward(img)

ps = torch.exp(output)

# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
```

## Next Up!

In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
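As a minimal preview sketch of that workflow (the file name here is chosen arbitrarily), the usual pattern is to save only the state dict and rebuild the architecture before loading it back:

```python
# Save only the learned parameters (the state dict), not the whole model object
torch.save(model.state_dict(), 'fashion_checkpoint.pth')

# Later: rebuild the architecture, then load the parameters back in
model_restored = Classifier2()
model_restored.load_state_dict(torch.load('fashion_checkpoint.pth'))
model_restored.eval()  # switch to inference mode before making predictions
```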
# TimeEval shared parameter optimization result analysis ``` # Automatically reload packages: %load_ext autoreload %autoreload 2 # imports import json import warnings import pandas as pd import numpy as np import scipy as sp import plotly.offline as py import plotly.graph_objects as go import plotly.figure_factory as ff import plotly.express as px from plotly.subplots import make_subplots from pathlib import Path from timeeval import Datasets ``` ## Configuration Target parameters that were optimized in this run (per algorithm): ``` algo_param_mapping = { "HBOS": ["n_bins"], "MultiHMM": ["n_bins"], "MTAD-GAT": ["context_window_size", "mag_window_size", "score_window_size"], "PST": ["n_bins"] } ``` Define data and results folder: ``` # constants and configuration data_path = Path("../../data") / "test-cases" result_root_path = Path("../timeeval_experiments/results") experiment_result_folder = "2021-10-04_shared-optim2" # build paths result_paths = [d for d in result_root_path.iterdir() if d.is_dir()] print("Available result directories:") display(result_paths) result_path = result_root_path / experiment_result_folder print("\nSelecting:") print(f"Data path: {data_path.resolve()}") print(f"Result path: {result_path.resolve()}") ``` Load results and dataset metadata: ``` def extract_hyper_params(param_names): def extract(value): params = json.loads(value) result = None for name in param_names: try: value = params[name] result = pd.Series([name, value], index=["optim_param_name", "optim_param_value"]) break except KeyError: pass if result is None: raise ValueError(f"Parameters {param_names} not found in '{value}'") return result return extract # load results print(f"Reading results from {result_path.resolve()}") df = pd.read_csv(result_path / "results.csv") # add dataset_name column df["dataset_name"] = df["dataset"].str.split(".").str[0] # add optim_params column df[["optim_param_name", "optim_param_value"]] = "" for algo in algo_param_mapping: df_algo = df.loc[df["algorithm"] == algo] df.loc[df_algo.index, ["optim_param_name", "optim_param_value"]] = df_algo["hyper_params"].apply(extract_hyper_params(algo_param_mapping[algo])) # load dataset metadata dmgr = Datasets(data_path) ``` Define plotting functions: ``` def load_scores_df(algorithm_name, dataset_id, optim_params, repetition=1): params_id = df.loc[(df["algorithm"] == algorithm_name) & (df["collection"] == dataset_id[0]) & (df["dataset"] == dataset_id[1]) & (df["optim_param_name"] == optim_params[0]) & (df["optim_param_value"] == optim_params[1]), "hyper_params_id"].item() path = ( result_path / algorithm_name / params_id / dataset_id[0] / dataset_id[1] / str(repetition) / "anomaly_scores.ts" ) return pd.read_csv(path, header=None) def plot_scores(algorithm_name, dataset_name): if isinstance(algorithm_name, tuple): algorithms = [algorithm_name] elif not isinstance(algorithm_name, list): raise ValueError("Please supply a tuple (algorithm_name, optim_param_name, optim_param_value) or a list thereof as first argument!") else: algorithms = algorithm_name # construct dataset ID dataset_id = ("GutenTAG", f"{dataset_name}.unsupervised") # load dataset details df_dataset = dmgr.get_dataset_df(dataset_id) # check if dataset is multivariate dataset_dim = df.loc[df["dataset_name"] == dataset_name, "dataset_input_dimensionality"].unique().item() dataset_dim = dataset_dim.lower() auroc = {} df_scores = pd.DataFrame(index=df_dataset.index) skip_algos = [] algos = [] for algo, optim_param_name, optim_param_value in algorithms: optim_params = 
f"{optim_param_name}={optim_param_value}" algos.append((algo, optim_params)) # get algorithm metric results try: auroc[(algo, optim_params)] = df.loc[ (df["algorithm"] == algo) & (df["dataset_name"] == dataset_name) & (df["optim_param_name"] == optim_param_name) & (df["optim_param_value"] == optim_param_value), "ROC_AUC" ].item() except ValueError: warnings.warn(f"No ROC_AUC score found! Probably {algo} with params {optim_params} was not executed on {dataset_name}.") auroc[(algo, optim_params)] = -1 skip_algos.append((algo, optim_params)) continue # load scores training_type = df.loc[df["algorithm"] == algo, "algo_training_type"].values[0].lower().replace("_", "-") try: df_scores[(algo, optim_params)] = load_scores_df(algo, ("GutenTAG", f"{dataset_name}.{training_type}"), (optim_param_name, optim_param_value)).iloc[:, 0] except (ValueError, FileNotFoundError): warnings.warn(f"No anomaly scores found! Probably {algo} was not executed on {dataset_name} with params {optim_params}.") df_scores[(algo, optim_params)] = np.nan skip_algos.append((algo, optim_params)) algorithms = [a for a in algos if a not in skip_algos] # Create plot fig = make_subplots(2, 1) if dataset_dim == "multivariate": for i in range(1, df_dataset.shape[1]-1): fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, i], name=f"channel-{i}"), 1, 1) else: fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, 1], name="timeseries"), 1, 1) fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset["is_anomaly"], name="label"), 2, 1) for item in algorithms: algo, optim_params = item fig.add_trace(go.Scatter(x=df_scores.index, y=df_scores[item], name=f"{algo}={auroc[item]:.4f} ({optim_params})"), 2, 1) fig.update_xaxes(matches="x") fig.update_layout( title=f"Results of {','.join(np.unique([a for a, _ in algorithms]))} on {dataset_name}", height=400 ) return py.iplot(fig) ``` ## Analyze TimeEval results ``` df[["algorithm", "dataset_name", "status", "AVERAGE_PRECISION", "PR_AUC", "RANGE_PR_AUC", "ROC_AUC", "execute_main_time", "optim_param_name", "optim_param_value"]] ``` --- ### Errors ``` df_error_counts = df.pivot_table(index=["algo_training_type", "algorithm"], columns=["status"], values="repetition", aggfunc="count") df_error_counts = df_error_counts.fillna(value=0).astype(np.int64) ``` #### Aggregation of errors per algorithm grouped by algorithm training type ``` for tpe in ["SEMI_SUPERVISED", "SUPERVISED", "UNSUPERVISED"]: if tpe in df_error_counts.index: print(tpe) display(df_error_counts.loc[tpe]) ``` #### Slow algorithms Algorithms, for which more than 50% of all executions ran into the timeout. ``` df_error_counts[df_error_counts["Status.TIMEOUT"] > (df_error_counts["Status.ERROR"] + df_error_counts["Status.OK"])] ``` #### Broken algorithms Algorithms, which failed for at least 50% of the executions. 
``` error_threshold = 0.5 df_error_counts[df_error_counts["Status.ERROR"] > error_threshold*( df_error_counts["Status.TIMEOUT"] + df_error_counts["Status.ERROR"] + df_error_counts["Status.OK"] )] ``` #### Detail errors ``` algo_list = ["MTAD-GAT", "MultiHMM"] error_list = ["OOM", "Segfault", "ZeroDivisionError", "IncompatibleParameterConfig", "WrongDBNState", "SyntaxError", "other"] errors = pd.DataFrame(0, index=error_list, columns=algo_list, dtype=np.int_) for algo in algo_list: df_tmp = df[(df["algorithm"] == algo) & (df["status"] == "Status.ERROR")] for i, run in df_tmp.iterrows(): path = result_path / run["algorithm"] / run["hyper_params_id"] / run["collection"] / run["dataset"] / str(run["repetition"]) / "execution.log" with path.open("r") as fh: log = fh.read() if "status code '139'" in log: errors.loc["Segfault", algo] += 1 elif "status code '137'" in log: errors.loc["OOM", algo] += 1 elif "Expected n_neighbors <= n_samples" in log: errors.loc["IncompatibleParameterConfig", algo] += 1 elif "ZeroDivisionError" in log: errors.loc["ZeroDivisionError", algo] += 1 elif "does not have key" in log: errors.loc["WrongDBNState", algo] += 1 elif "NameError" in log: errors.loc["SyntaxError", algo] += 1 else: print(f'\n\n#### {run["dataset"]} ({run["optim_param_name"]}:{run["optim_param_value"]})') print(log) errors.loc["other", algo] += 1 errors.T ``` --- ### Parameter assessment ``` sort_by = ("ROC_AUC", "mean") metric_agg_type = ["mean", "median"] time_agg_type = "mean" aggs = { "AVERAGE_PRECISION": metric_agg_type, "RANGE_PR_AUC": metric_agg_type, "PR_AUC": metric_agg_type, "ROC_AUC": metric_agg_type, "train_main_time": time_agg_type, "execute_main_time": time_agg_type, "repetition": "count" } df_tmp = df.reset_index() df_tmp = df_tmp.groupby(by=["algorithm", "optim_param_name", "optim_param_value"]).agg(aggs) df_tmp = df_tmp.reset_index() df_tmp = df_tmp.sort_values(by=["algorithm", "optim_param_name", sort_by], ascending=False) df_tmp = df_tmp.set_index(["algorithm", "optim_param_name", "optim_param_value"]) with pd.option_context("display.max_rows", None, "display.max_columns", None): display(df_tmp) ``` #### Selected parameters - HBOS: `n_bins=20` (more is better) - MultiHMM: `n_bins=5` (8 is slightly better, but takes way longer. The scores are very bad anyway!) - MTAD-GAT: `context_window_size=30,mag_window_size=40,score_window_size=52` (very slow) - PST: `n_bins=5` (less is better) > **Note** > > MTAD-GAT is very slow! Exclude from further runs! ``` plot_scores([("MultiHMM", "n_bins", 5), ("MultiHMM", "n_bins", 8)], "sinus-type-mean") plot_scores([("MTAD-GAT", "context_window_size", 30), ("MTAD-GAT", "context_window_size", 40)], "sinus-type-mean") ```
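A possible programmatic counterpart to reading the aggregated table above is to pick, for each algorithm and parameter name, the value with the highest mean ROC_AUC. This sketch assumes `df_tmp` as constructed in the aggregation cell, with `("ROC_AUC", "mean")` among its columns:

```python
# Index labels (algorithm, optim_param_name, optim_param_value) of the best
# mean ROC_AUC per (algorithm, parameter name)
best_by_roc = (
    df_tmp[("ROC_AUC", "mean")]
    .groupby(level=["algorithm", "optim_param_name"])
    .idxmax()
)
display(best_by_roc)
```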
## Tacotron 2 inference code Edit the variables **checkpoint_path** and **text** to match yours and run the entire code to generate plots of mel outputs, alignments and audio synthesis from the generated mel-spectrogram using Griffin-Lim. #### Import libraries and setup matplotlib ``` import matplotlib %matplotlib inline import matplotlib.pylab as plt import IPython.display as ipd import sys sys.path.append('waveglow/') import numpy as np import torch from hparams import create_hparams from model import Tacotron2 from layers import TacotronSTFT, STFT from audio_processing import griffin_lim from train import load_model from text import text_to_sequence from denoiser import Denoiser def plot_data(data, figsize=(16, 4)): fig, axes = plt.subplots(1, len(data), figsize=figsize) for i in range(len(data)): axes[i].imshow(data[i], aspect='auto', origin='bottom', interpolation='none') ``` #### Setup hparams ``` hparams = create_hparams() hparams.sampling_rate = 22050 ``` #### Load model from checkpoint ``` checkpoint_path = "tacotron2_statedict.pt" model = load_model(hparams) model.load_state_dict(torch.load(checkpoint_path)['state_dict']) _ = model.cuda().eval().half() ``` #### Load WaveGlow for mel2audio synthesis and denoiser ``` waveglow_path = 'waveglow_256channels.pt' waveglow = torch.load(waveglow_path)['model'] waveglow.cuda().eval().half() for m in waveglow.modules(): if 'Conv' in str(type(m)): setattr(m, 'padding_mode', 'zeros') for k in waveglow.convinv: k.float() denoiser = Denoiser(waveglow) ``` #### Prepare text input ``` #%%timeit 77.9 µs ± 237 ns text = "Waveglow is really awesome!" sequence = np.array(text_to_sequence(text, ['english_cleaners']))[None, :] sequence = torch.autograd.Variable( torch.from_numpy(sequence)).cuda().long() ``` #### Decode text input and plot results ``` #%%timeit 240 ms ± 9.72 ms mel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence) plot_data((mel_outputs.float().data.cpu().numpy()[0], mel_outputs_postnet.float().data.cpu().numpy()[0], alignments.float().data.cpu().numpy()[0].T)) ``` #### Synthesize audio from spectrogram using WaveGlow ``` #%%timeit 193 ms ± 4.87 ms with torch.no_grad(): audio = waveglow.infer(mel_outputs_postnet, sigma=0.666) ipd.Audio(audio[0].data.cpu().numpy(), rate=hparams.sampling_rate) ``` #### (Optional) Remove WaveGlow bias ``` audio_denoised = denoiser(audio, strength=0.01)[:, 0] ipd.Audio(audio_denoised.cpu().numpy(), rate=hparams.sampling_rate) ``` #### Save result as wav ``` import librosa # save librosa.output.write_wav('./out.wav', audio[0].data.cpu().numpy().astype(np.float32), 22050) # check y, sr = librosa.load('out.wav') ipd.Audio(y, rate=sr) ```
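For convenience, the steps above can be wrapped into a single helper. This is only a sketch reusing the objects already defined in this notebook (`model`, `waveglow`, `denoiser`, `hparams`); the defaults mirror the values used in the cells above.

```python
def synthesize(text, sigma=0.666, denoise_strength=0.01):
    # Text -> sequence of symbol ids
    sequence = np.array(text_to_sequence(text, ['english_cleaners']))[None, :]
    sequence = torch.from_numpy(sequence).cuda().long()

    # Tacotron 2: sequence -> mel-spectrogram; WaveGlow: mel -> audio; then denoise
    with torch.no_grad():
        _, mel_outputs_postnet, _, _ = model.inference(sequence)
        audio = waveglow.infer(mel_outputs_postnet, sigma=sigma)
        audio = denoiser(audio, strength=denoise_strength)[:, 0]

    return audio[0].data.cpu().numpy()

ipd.Audio(synthesize("Speech synthesis in one call."), rate=hparams.sampling_rate)
```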
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#label-identity-hairstyle" data-toc-modified-id="label-identity-hairstyle-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>label identity hairstyle</a></span></li><li><span><a href="#Prepare-hairstyle-images" data-toc-modified-id="Prepare-hairstyle-images-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Prepare hairstyle images</a></span></li><li><span><a href="#prepare-hairstyle-manifest" data-toc-modified-id="prepare-hairstyle-manifest-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>prepare hairstyle manifest</a></span></li></ul></div> ``` from query.models import Video, FaceIdentity, Identity from esper.widget import * from esper.prelude import collect, esper_widget import pickle import os import random get_ipython().magic('matplotlib inline') get_ipython().magic('reload_ext autoreload') get_ipython().magic('autoreload 2') ``` # label identity hairstyle ``` identity_hair_dict = {} identities = Identity.objects.all() identity_list = [(i.id, i.name) for i in identities] identity_list.sort() # 154 hair_color_3 = {0: 'black', 1: 'white', 2: 'blond'} hair_color_5 = {0: 'black', 1: 'white', 2: 'blond', 3: 'brown', 4: 'gray'} hair_length = {0: 'long', 1: 'medium', 2: 'short', 3: 'bald'} identity_label = [id for id in identity_label if id not in identity_hair_dict] # idx += 1 # iid = identity_list[idx][0] # name = identity_list[idx][1] # iid = identity_label[idx] # print(name) print(iid) result = qs_to_result( FaceIdentity.objects \ .filter(identity__id=1365) \ .filter(probability__gt=0.8), limit=30) esper_widget(result) ''' {'black' : 0, 'white': 1, 'blond' : 2}, # hair_color_3 {'black' : 0, 'white': 1, 'blond' : 2, 'brown' : 3, 'gray' : 4}, # hair_color_5 {'long' : 0, 'medium' : 1, 'short' : 2, 'bald' : 3} # hair_length ''' label = identity_hair_dict[iid] = (2,2,0) print(hair_color_3[label[0]], hair_color_5[label[1]], hair_length[label[2]]) pickle.dump(identity_hair_dict, open('/app/data/identity_hair_dict.pkl', 'wb')) ``` # Prepare hairstyle images ``` faceIdentities = FaceIdentity.objects \ .filter(identity__name='melania trump') \ .filter(probability__gt=0.9) \ .select_related('face__frame__video') faceIdentities_sampled = random.sample(list(faceIdentities), 1000) print("Load %d face identities" % len(faceIdentities_sampled)) identity_grouped = collect(list(faceIdentities_sampled), lambda identity: identity.face.frame.video.id) print("Group into %d videos" % len(identity_grouped)) face_dict = {} for video_id, fis in identity_grouped.items(): video = Video.objects.filter(id=video_id)[0] face_list = [] for i in fis: face_id = i.face.id frame_id = i.face.frame.number identity_id = i.identity.id x1, y1, x2, y2 = i.face.bbox_x1, i.face.bbox_y1, i.face.bbox_x2, i.face.bbox_y2 bbox = (x1, y1, x2, y2) face_list.append((frame_id, face_id, identity_id, bbox)) face_list.sort() face_dict[video.path] = face_list print("Preload face bbox done") if __name__ == "__main__": solve_parallel(face_dict, res_dict_path='/app/result/clothing/fina_dict.pkl', workers=10) ``` # prepare hairstyle manifest ``` img_list = os.listdir('/app/result/clothing/images/') len(img_list) group_by_identity = {} for name in img_list: iid = int(name.split('_')[0]) if iid not in group_by_identity: group_by_identity[iid] = [] else: group_by_identity[iid].append(name) identity_label = [id for id, img_list in group_by_identity.items() if len(img_list) > 10] identity_label.sort() 
identity_hair_dict = pickle.load(open('/app/data/identity_hair_dict.pkl', 'rb')) NUM_PER_ID = 1000 hairstyle_manifest = [] for iid, img_list in group_by_identity.items(): if len(img_list) > 10 and iid in identity_hair_dict: if len(img_list) < NUM_PER_ID: img_list_sample = img_list else: img_list_sample = random.sample(img_list, NUM_PER_ID) attrib = identity_hair_dict[iid] hairstyle_manifest += [(path, attrib) for path in img_list_sample] random.shuffle(hairstyle_manifest) len(hairstyle_manifest) pickle.dump(hairstyle_manifest, open('/app/result/clothing/hairstyle_manifest.pkl', 'wb')) ```
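Before handing the manifest to a training pipeline, it can be useful to check how balanced the labels are. A small sketch using the manifest and the label dictionaries defined earlier in this notebook (each manifest entry is assumed to be a `(image filename, (hair_color_3, hair_color_5, hair_length))` pair, as built above):

```python
from collections import Counter

color3_counts = Counter(hair_color_3[attrib[0]] for _, attrib in hairstyle_manifest)
color5_counts = Counter(hair_color_5[attrib[1]] for _, attrib in hairstyle_manifest)
length_counts = Counter(hair_length[attrib[2]] for _, attrib in hairstyle_manifest)

print('hair color (3 classes):', dict(color3_counts))
print('hair color (5 classes):', dict(color5_counts))
print('hair length:', dict(length_counts))
```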
```
import numpy as np
import pandas as pd
```

### Loading the dataset

```
data = pd.read_csv("student-data.csv")
data.head()

data.shape

type(data)
```

### Exploratory data analysis

```
import matplotlib.pyplot as plt
import seaborn as sns

a = data.plot()

data.info()

data.isnull().sum()

a = sns.heatmap(data.isnull(),cmap='Blues')

a = sns.heatmap(data.isnull(),cmap='Blues',yticklabels=False)
```

#### This indicates that there are no null values in the dataset

```
a = sns.heatmap(data.isna(),yticklabels=False)
```

#### This heatmap indicates that there are no 'NA' values in the dataset

```
sns.set(style='darkgrid')
sns.countplot(data=data,x='reason')
```

This shows how often each reason for choosing the school occurs. A count plot can be thought of as a histogram across a categorical, instead of quantitative, variable.

```
data.head(7)
```

Calculating the total number of students who passed:

```
passed = data.loc[data.passed == 'yes']
passed.shape

tot_passed=passed.shape[0]
print('total passed students is: {} '.format(tot_passed))
```

Calculating the total number of students who failed:

```
failed = data.loc[data.passed == 'no']
print('total failed students is: {}'.format(failed.shape[0]))
```

### Feature Engineering

```
data.head()
```

To identify the feature and target variables, let's first do some feature engineering!

```
data.columns

data.columns[-1]
```

Here 'passed' is our target variable, since we need to develop a model that predicts the likelihood that a given student will pass, quantifying whether an intervention is necessary.

```
target = data.columns[-1]

data.columns[:-1]

#initially taking all columns as our feature variables
feature = list(data.columns[:-1])

data[target].head()

data[feature].head()
```

Now we put the feature and target data into separate dataframes.

```
featuredata = data[feature]
targetdata = data[target]
```

Now we need to convert several non-numeric columns, like 'internet', into numerical form for the model to process.

```
def preprocess_features(X):
    output = pd.DataFrame(index = X.index)

    for col, col_data in X.iteritems():
        if col_data.dtype == object:
            col_data = col_data.replace(['yes', 'no'], [1, 0])

        if col_data.dtype == object:
            col_data = pd.get_dummies(col_data, prefix = col)

        output = output.join(col_data)

    return output

featuredata = preprocess_features(featuredata)

type(featuredata)

featuredata.head()

featuredata.drop(['address_R','sex_F'],axis=1,inplace=True)

featuredata.columns

featuredata.drop(['famsize_GT3','Pstatus_A',],axis=1,inplace=True)
```

### MODEL IMPLEMENTATION

## Decision tree

```
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

model=DecisionTreeClassifier()

X_train, X_test, y_train, y_test = train_test_split(featuredata, targetdata, test_size=0.33, random_state=6)

model.fit(X_train,y_train)

from sklearn.metrics import accuracy_score

predictions = model.predict(X_test)

accuracy_score(y_test,predictions)*100
```

## K-Nearest Neighbours

```
from sklearn.neighbors import KNeighborsClassifier

new_classifier = KNeighborsClassifier(n_neighbors=7)

new_classifier.fit(X_train,y_train)

predictions2 = new_classifier.predict(X_test)

accuracy_score(y_test,predictions2)*100
```

## SVM

```
from sklearn import svm

clf = svm.SVC(random_state=6)

clf.fit(featuredata,targetdata)

clf.score(featuredata,targetdata)

predictions3= clf.predict(X_test)

accuracy_score(y_test,predictions3)*100
```

## Model application areas

#### KNN

k-NN is often used in search applications where you are looking for "similar" items, that is, when your task is some form of "find items similar to this one". The way you measure similarity is by creating a vector representation of the items and then comparing the vectors using an appropriate distance metric (the Euclidean distance, for example). The biggest use case of k-NN search might be recommender systems: if you know a user likes a particular item, you can recommend similar items to them.

KNN strengths: effective for larger datasets, robust to noisy training data.

KNN weaknesses: you need to choose the value of k, and the computational cost at prediction time is high.

#### Decision tree

Decision trees can handle both numerical and categorical data.

Decision tree strengths: decision trees implicitly perform feature selection, require relatively little effort for data preparation, and are easy to interpret and explain to executives.

Decision tree weaknesses: prone to overfitting, and not well suited to continuous variables.

#### SVM

SVMs are used, for example, to classify parts of an image as face or non-face and draw a bounding box around the face (facial recognition), and they are widely used to recognize handwritten characters (handwriting recognition).

Strengths: SVMs can model non-linear decision boundaries, and there are many kernels to choose from. They are also fairly robust against overfitting, especially in high-dimensional space.

Weaknesses: SVMs are memory intensive, trickier to tune due to the importance of picking the right kernel, and don't scale well to larger datasets.

## Choosing the best model

In this case, I will be using the SVM model to predict the outcomes; it reached 80.15% accuracy here. SVM is a supervised machine learning algorithm which can be used for classification or regression problems. It uses a technique called the kernel trick to transform the data and then, based on these transformations, finds an optimal boundary between the possible outputs.
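One caveat with the comparison above is that the SVM was fit on the full `featuredata` before being scored on `X_test`, which is part of that same data, so its accuracy is likely optimistic. A k-fold cross-validation sketch (not in the original notebook) gives a fairer side-by-side comparison of the three models:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm

# Compare the three classifiers on the same preprocessed features/target
models = {
    "Decision tree": DecisionTreeClassifier(random_state=6),
    "KNN (k=7)": KNeighborsClassifier(n_neighbors=7),
    "SVM": svm.SVC(random_state=6),
}
for name, clf_cv in models.items():
    scores = cross_val_score(clf_cv, featuredata, targetdata, cv=5)
    print(f"{name}: {scores.mean()*100:.2f}% (+/- {scores.std()*100:.2f}%)")
```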
### k-means clustering ``` import warnings warnings.filterwarnings('ignore') %matplotlib inline import scipy as sc import scipy.stats as stats from scipy.spatial.distance import euclidean import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.colors as mcolors plt.style.use('fivethirtyeight') plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.serif'] = 'Ubuntu' plt.rcParams['font.monospace'] = 'Ubuntu Mono' plt.rcParams['font.size'] = 10 plt.rcParams['axes.labelsize'] = 10 plt.rcParams['axes.labelweight'] = 'bold' plt.rcParams['axes.titlesize'] = 10 plt.rcParams['xtick.labelsize'] = 8 plt.rcParams['ytick.labelsize'] = 8 plt.rcParams['legend.fontsize'] = 10 plt.rcParams['figure.titlesize'] = 12 plt.rcParams['image.cmap'] = 'jet' plt.rcParams['image.interpolation'] = 'none' plt.rcParams['figure.figsize'] = (16, 8) plt.rcParams['lines.linewidth'] = 2 plt.rcParams['lines.markersize'] = 8 colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09'] cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]]) rv0 = stats.multivariate_normal(mean=[3, 3], cov=[[.3, .3],[.3,.4]]) rv1 = stats.multivariate_normal(mean=[1.5, 1], cov=[[.5, -.5],[-.5,.7]]) rv2 = stats.multivariate_normal(mean=[0, 1.2], cov=[[.15, .1],[.1,.3]]) rv3 = stats.multivariate_normal(mean=[3.2, 1], cov=[[.2, 0],[0,.1]]) z0 = rv0.rvs(size=300) z1 = rv1.rvs(size=300) z2 = rv2.rvs(size=300) z3 = rv3.rvs(size=300) z=np.concatenate((z0, z1, z2, z3), axis=0) fig, ax = plt.subplots() ax.scatter(z0[:,0], z0[:,1], s=40, color='C0', alpha =.8, edgecolors='k', label=r'$C_0$') ax.scatter(z1[:,0], z1[:,1], s=40, color='C1', alpha =.8, edgecolors='k', label=r'$C_1$') ax.scatter(z2[:,0], z2[:,1], s=40, color='C2', alpha =.8, edgecolors='k', label=r'$C_2$') ax.scatter(z3[:,0], z3[:,1], s=40, color='C3', alpha =.8, edgecolors='k', label=r'$C_3$') plt.xlabel('$x$') plt.ylabel('$y$') plt.legend() plt.show() cc='xkcd:turquoise' fig = plt.figure(figsize=(16,8)) ax = fig.gca() plt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.8) plt.ylabel('$x_2$', fontsize=12) plt.xlabel('$x_1$', fontsize=12) plt.title('Data set', fontsize=12) plt.show() # Number of clusters nc = 3 # X coordinates of random centroids C_x = np.random.sample(nc)*(np.max(z[:,0])-np.min(z[:,0]))*.7+np.min(z[:,0])*.7 # Y coordinates of random centroids C_y = np.random.sample(nc)*(np.max(z[:,1])-np.min(z[:,1]))*.7+np.min(z[:,0])*.7 C = np.array(list(zip(C_x, C_y)), dtype=np.float32) fig = plt.figure(figsize=(16,8)) ax = fig.gca() plt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.5) for i in range(nc): plt.scatter(C_x[i], C_y[i], marker='*', s=500, c=colors[i], edgecolors='k', linewidth=1.5) plt.ylabel('$x_2$', fontsize=12) plt.xlabel('$x_1$', fontsize=12) plt.title('Data set', fontsize=12) plt.show() C_list = [] errors = [] # Cluster Labels(0, 1, 2, 3) clusters = np.zeros(z.shape[0]) C_list.append(C) # Error func. 
- Distance between new centroids and old centroids error = np.linalg.norm([euclidean(C[i,:], [0,0]) for i in range(nc)]) errors.append(error) print("Error: {0:3.5f}".format(error)) for l in range(10): # Assigning each value to its closest cluster for i in range(z.shape[0]): distances = [euclidean(z[i,:], C[j,:]) for j in range(nc)] cluster = np.argmin(distances) clusters[i] = cluster # Storing the old centroid values C = np.zeros([nc,2]) # Finding the new centroids by taking the average value for i in range(nc): points = [z[j,:] for j in range(z.shape[0]) if clusters[j] == i] C[i] = np.mean(points, axis=0) error = np.linalg.norm([euclidean(C[i,:], C_list[-1][i,:]) for i in range(nc)]) errors.append(error) C_list.append(C) fig = plt.figure(figsize=(16,8)) ax = fig.gca() for cl in range(nc): z1 = z[clusters==cl] plt.scatter(z1[:,0],z1[:,1], c=colors[cl], marker='o', s=40, edgecolors='k', alpha=.7) for i in range(nc): plt.scatter(C[i,0], C[i,1], marker='*', s=400, c=colors[i], edgecolors='k', linewidth=1.5) plt.ylabel('$x_2$', fontsize=12) plt.xlabel('$x_1$', fontsize=12) plt.title('Data set', fontsize=12) plt.show() C_list print("Error: {0:3.5f}".format(error)) errors ```
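As a sanity check of the hand-written loop, the resulting centroids can be compared with scikit-learn's `KMeans` on the same data. A short sketch (not part of the original notebook), reusing `z`, `nc` and `C` from the cells above:

```python
from sklearn.cluster import KMeans

# Fit scikit-learn's k-means on the same data with the same number of clusters
km = KMeans(n_clusters=nc, n_init=10, random_state=0).fit(z)

print("scikit-learn centroids:\n", km.cluster_centers_)
print("manual centroids:\n", C)
```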
#1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. ``` !pip install git+https://github.com/google/starthinker ``` #2. Get Cloud Project ID To run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ``` CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ``` #3. Get Client Credentials To read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ``` CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ``` #4. Enter SmartSheet Sheet To BigQuery Parameters Move sheet data into a BigQuery table. 1. Specify <a href='https://smartsheet-platform.github.io/api-docs/' target='_blank'>SmartSheet</a> token. 1. Locate the ID of a sheet by viewing its properties. 1. Provide a BigQuery dataset ( must exist ) and table to write the data into. 1. StarThinker will automatically map the correct schema. Modify the values below for your use case, can be done multiple times, then click play. ``` FIELDS = { 'auth_read': 'user', # Credentials used for reading data. 'auth_write': 'service', # Credentials used for writing data. 'token': '', # Retrieve from SmartSheet account settings. 'sheet': '', # Retrieve from sheet properties. 'dataset': '', # Existing BigQuery dataset. 'table': '', # Table to create from this report. 'schema': '', # Schema provided in JSON list format or leave empty to auto detect. 'link': True, # Add a link to each row as the first column. } print("Parameters Set To: %s" % FIELDS) ``` #5. Execute SmartSheet Sheet To BigQuery This does NOT need to be modified unless you are changing the recipe, click play. ``` from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'smartsheet': { 'auth': 'user', 'token': {'field': {'name': 'token','kind': 'string','order': 2,'default': '','description': 'Retrieve from SmartSheet account settings.'}}, 'sheet': {'field': {'name': 'sheet','kind': 'string','order': 3,'description': 'Retrieve from sheet properties.'}}, 'link': {'field': {'name': 'link','kind': 'boolean','order': 7,'default': True,'description': 'Add a link to each row as the first column.'}}, 'out': { 'bigquery': { 'auth': 'user', 'dataset': {'field': {'name': 'dataset','kind': 'string','order': 4,'default': '','description': 'Existing BigQuery dataset.'}}, 'table': {'field': {'name': 'table','kind': 'string','order': 5,'default': '','description': 'Table to create from this report.'}}, 'schema': {'field': {'name': 'schema','kind': 'json','order': 6,'description': 'Schema provided in JSON list format or leave empty to auto detect.'}} } } } } ] json_set_fields(TASKS, FIELDS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True) project.execute(_force=True) ```
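If you prefer to pin the schema rather than rely on auto-detection, the `schema` field above takes a BigQuery-style JSON list. The following is a hypothetical example only, with invented column names; adjust it to the columns of your own sheet:

```python
# Hypothetical schema value (column names invented for illustration)
FIELDS['schema'] = [
    {"name": "task_name", "type": "STRING", "mode": "NULLABLE"},
    {"name": "due_date", "type": "DATE", "mode": "NULLABLE"},
    {"name": "estimate_hours", "type": "FLOAT", "mode": "NULLABLE"},
]
```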
<a href="https://colab.research.google.com/github/NataliaDiaz/colab/blob/master/MI203-td2_tree_and_forest.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # TD: prédiction du vote 2016 aux Etats-Unis par arbres de décisions et méthodes ensemblistes La séance d'aujourd'hui porte sur la prévision du vote en 2016 aux États-Unis. Précisément, les données d'un recensement sont fournies avec diverses informations par comté à travers les États-Unis. L'objectif est de construire des prédicteurs de leur couleur politique (républicain ou démocrate) à partir de ces données. Exécuter les commandes suivantes pour charger l'environnement. ``` %matplotlib inline from pylab import * import numpy as np import os import random import matplotlib.pyplot as plt ``` # Accès aux données * Elles sont disponibles: https://github.com/stepherbin/teaching/tree/master/ENSTA/TD2 * Charger le fichier the combined_data.csv sur votre drive puis monter le depuis colab ``` USE_COLAB = True UPLOAD_OUTPUTS = False if USE_COLAB: # mount the google drive from google.colab import drive drive.mount('/content/drive', force_remount=True) # download data on GoogleDrive data_dir = "/content/drive/My Drive/teaching/ENSTA/TD_tree/" else: data_dir = "data/" import pandas as pd census_data = pd.read_csv( os.path.join(data_dir, 'combined_data.csv') ) ``` # Analyse préliminaire des données Les données sont organisées en champs: * fips = code du comté à 5 chiffres, le premier ou les deux premiers chiffres indiquent l'état. * votes = nombre de votants * etc.. Regarder leur structure, quantité, nature. Où se trouvent les informations pour former les ensembles d'apprentissage et de test? Où se trouvent les classes à prédire? Visualiser quelques distributions. Le format de données python est décrit ici: https://pandas.pydata.org/pandas-docs/stable/reference/frame.html ``` # Exemples de moyens d'accéder aux caractéristiques des données print(census_data.shape ) print(census_data.columns.values) print(census_data['fips']) print(census_data.head(3)) iattr = 10 attrname = census_data.columns[iattr] print("Mean of {} is {:.1f}".format(attrname,np.array(census_data[attrname]).mean())) ######################### ## METTRE VOTRE CODE ICI ######################### print("Nombre de données = {}".format(7878912123)) # à modifier print("Nombre d'attributs utiles = {}".format(4564564654)) # à modifier #hist.... ``` La classe à prédire ('Democrat') n'est décrite que par un seul attribut binaire. Calculer la répartition des couleurs politiques (quel est a priori la probabilité qu'un comté soit démocrate vs. républicain) ``` ######################### ## METTRE VOTRE CODE ICI ######################### print("La probabilité qu'un comté soit démocrate est de {:.2f}%%".format(100*proba_dem)) ``` # Préparation du chantier d'apprentissage On va préparer les ensembles d'apprentissage et de test. Pour éviter des problèmes de format de données, on choisit une liste d'attributs utiles dans la liste "feature_cols" ci dessous. L'ensemble de test sera constitué des comtés d'un seul état. 
Info: https://scikit-learn.org/stable/model_selection.html

List of states and their 2-digit FIPS codes: https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard_state_code

```
## Subsets of informative attributes for what follows
feature_cols = ['BLACK_FEMALE_rate', 'BLACK_MALE_rate',
                'Percent of adults with a bachelor\'s degree or higher, 2011-2015',
                'ASIAN_MALE_rate', 'ASIAN_FEMALE_rate', '25-29_rate', 'age_total_pop', '20-24_rate',
                'Deep_Pov_All', '30-34_rate',
                'Density per square mile of land area - Population',
                'Density per square mile of land area - Housing units',
                'Unemployment_rate_2015', 'Deep_Pov_Children', 'PovertyAllAgesPct2014',
                'TOT_FEMALE_rate', 'PerCapitaInc', 'MULTI_FEMALE_rate', '35-39_rate', 'MULTI_MALE_rate',
                'Percent of adults completing some college or associate\'s degree, 2011-2015',
                '60-64_rate', '55-59_rate', '65-69_rate', 'TOT_MALE_rate', '85+_rate', '70-74_rate',
                '80-84_rate', '75-79_rate',
                'Percent of adults with a high school diploma only, 2011-2015',
                'WHITE_FEMALE_rate', 'WHITE_MALE_rate',
                'Amish', 'Buddhist', 'Catholic', 'Christian Generic', 'Eastern Orthodox', 'Hindu',
                'Jewish', 'Mainline Christian', 'Mormon', 'Muslim', 'Non-Catholic Christian', 'Other',
                'Other Christian', 'Other Misc', 'Pentecostal / Charismatic', 'Protestant Denomination',
                'Zoroastrian']

filtered_cols = ['Percent of adults with a bachelor\'s degree or higher, 2011-2015',
                 'Percent of adults completing some college or associate\'s degree, 2011-2015',
                 'Percent of adults with a high school diploma only, 2011-2015',
                 'Density per square mile of land area - Population',
                 'Density per square mile of land area - Housing units',
                 'WHITE_FEMALE_rate', 'WHITE_MALE_rate', 'BLACK_FEMALE_rate', 'BLACK_MALE_rate',
                 'ASIAN_FEMALE_rate', 'Catholic', 'Christian Generic', 'Jewish', '70-74_rate',
                 'D', 'R']

## 1-state test split
def county_data(census_data, fips_code=17):
    #fips_code 48=Texas, 34=New Jersey, 31=Nebraska, 17=Illinois, 06=California, 36=New York
    mask = census_data['fips'].between(fips_code*1000, fips_code*1000 + 999)
    census_data_train = census_data[~mask]
    census_data_test = census_data[mask]

    XTrain = census_data_train[feature_cols]
    yTrain = census_data_train['Democrat']
    XTest = census_data_test[feature_cols]
    yTest = census_data_test['Democrat']

    return XTrain, yTrain, XTest, yTest

STATE_FIPS_CODE = 17
X_train, y_train, X_test, y_test = county_data(census_data, STATE_FIPS_CODE)

#print(X_train.head(2))
#print(y_test.head(2))
```

# Learning a decision tree

We will use the scikit-learn library.

* Build the tree on the training data
* Predict the vote on the test counties
* Compute the error and the confusion matrix

Vary some parameters (max depth, purity, criterion, ...) and visualize their influence.

Info: https://scikit-learn.org/stable/modules/tree.html

Info: https://scikit-learn.org/stable/modules/model_evaluation.html

```
from sklearn import tree

#########################
## PUT YOUR CODE HERE
#########################
```

The following instructions let you visualize the tree. Interpret the content of the representation.
```
import graphviz
dot_data = tree.export_graphviz(clf, out_file=None)
graph = graphviz.Source(dot_data)

dot_data = tree.export_graphviz(clf, out_file=None,
                                feature_names=X_train.columns.values,
                                class_names=["R","D"],
                                filled=True, rounded=True,
                                special_characters=True)
graph = graphviz.Source(dot_data)
graph

# Prediction and evaluation
#########################
## PUT YOUR CODE HERE
#########################
```

---

# Bagging

The goal of this part is to build a bagging approach **by hand**. The principle of the approach is to:

* Learn and collect several trees on random samplings of the training data
* Aggregate the predictions by voting
* Evaluate the aggregated predictions
* Compare with the individual trees and with the previous result

Use the scikit-learn train/test set construction functions https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html to generate the sampled subsets.

**After the lecture, compare** with the scikit-learn ensemble functions: https://scikit-learn.org/stable/modules/ensemble.html

Numpy tips: [np.arange](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.arange.html), [numpy.sum](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.sum.html), [numpy.mean](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.mean.html), [numpy.where](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.where.html)

```
from sklearn.model_selection import train_test_split

# Training data: X_train, y_train, idx_train
# Test data: X_test, y_test, idx_test

# The steps for designing the predictor (training) are:
# - Build the data subsets
# - Train a tree
# - Add the tree to the forest
#
# For testing

def learn_forest(XTrain, yTrain, nb_trees, depth=15):
    #########################
    ## COMPLETE THE CODE
    #########################
    forest = []
    singleperf=[]
    for ss in range(nb_trees):
        # bagging for subset

        # single tree training

        # grow the forest

        # single tree evaluation

    return forest,singleperf

def predict_forest(forest, XTest, yTest = None):
    singleperf=[]
    all_preds=[]
    nb_trees = len(forest)

    #########################
    ## PUT YOUR CODE HERE
    #########################

    if (yTest is not None):
        return final_pred,singleperf
    else:
        return final_pred

#########################
## PUT YOUR CODE HERE
#########################

X_train, y_train, X_test, y_test = county_data(census_data, 6)
F,singleperf = learn_forest(X_train, y_train, 20, depth=15)
pred, singleperftest = predict_forest(F, X_test, y_test)

acc = perf.balanced_accuracy_score( y_test, pred )
print("Correct prediction rate = {:.2f}%".format(100*acc))
print(mean(singleperftest))
#print(singleperftest)
#print(singleperf)
```
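For reference, here is one possible way to fill in the skeleton above (a sketch, not the official solution). It subsamples with `train_test_split` as suggested, and it assumes `perf` is an alias for `sklearn.metrics`, since the evaluation cell uses `perf.balanced_accuracy_score`; it also assumes the 'Democrat' labels are binary 0/1 values.

```python
import numpy as np
from sklearn import tree
from sklearn import metrics as perf
from sklearn.model_selection import train_test_split

def learn_forest(XTrain, yTrain, nb_trees, depth=15):
    forest = []
    singleperf = []
    for ss in range(nb_trees):
        # Random subset of the training data (here: 67% without replacement)
        X_sub, _, y_sub, _ = train_test_split(XTrain, yTrain, train_size=0.67)
        # Single tree training
        clf = tree.DecisionTreeClassifier(max_depth=depth)
        clf.fit(X_sub, y_sub)
        # Grow the forest
        forest.append(clf)
        # Single tree evaluation on the full training set
        singleperf.append(perf.balanced_accuracy_score(yTrain, clf.predict(XTrain)))
    return forest, singleperf

def predict_forest(forest, XTest, yTest=None):
    # Predictions of every tree, shape (nb_trees, n_samples)
    all_preds = np.array([clf.predict(XTest) for clf in forest])
    # Majority vote over the trees (assumes 0/1 labels)
    final_pred = (np.mean(all_preds, axis=0) >= 0.5).astype(int)
    if yTest is not None:
        singleperf = [perf.balanced_accuracy_score(yTest, p) for p in all_preds]
        return final_pred, singleperf
    return final_pred
```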
Osnabrück University - Machine Learning (Summer Term 2018) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack # Exercise Sheet 08 ## Introduction This week's sheet should be solved and handed in before the end of **Sunday, June 3, 2018**. If you need help (and Google and other resources were not enough), feel free to contact your groups' designated tutor or whomever of us you run into first. Please upload your results to your group's Stud.IP folder. ## Assignment 0: Math recap (Conditional Probability) [2 Bonus Points] This exercise is supposed to be very easy and is voluntary. There will be a similar exercise on every sheet. It is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check if you are comfortable with them. Usually you should have no problem to answer these questions offhand, but if you feel unsure, this is a good time to look them up again. You are always welcome to discuss questions with the tutors or in the practice session. Also, if you have a (math) topic you would like to recap, please let us know. **a)** Explain the idea of conditional probability. How is it defined? Conditional probability is the probability that an event A happens, given that another event B happened. For example: The probability of rain is $$P(weather="rain") = 0.3$$. But if you observe, if the street is wet you would get the conditional probability $$P(weather= "rain" |~ street="wet") = 0.95$$ The definition is: $$ P(A|B) = \frac{P(A,B)}{P(B)} $$ **b)** What is Bayes' theorem? What are its applications? Bayes Theorem states: $$ P(B|A) = \frac{P(A|B) \cdot P(B)}{P(A)} $$ The most important application is in reasoning backwards from event to cause (from data to parameters of your distribution): $$ P(\Theta|Data) = \frac{P(Data|\Theta)P(\Theta)}{P(Data)}$$ **c)** What does the law of total probability state? The law of total probability states, that the probabilty of an event occuring is the same as the sum of the probabilities of this event occuring together with all possible states of an other event: $$P(A) = \sum_b P(A,B=b) = \sum_b P(A|B=b) P(B=b)$$ ## Assignment 1: Multilayer Perceptron (MLP) [10 Points] Last week you implemented a simple perceptron. We discussed that one can use multiple perceptrons to build a network. This week you will build your own MLP. Again the following code cells are just a guideline. If you feel like it, just follow the algorithm steps and implement the MLP yourself. ### Implementation In the following you will be guided through implementing an MLP step by step. Instead of sticking to this guide, you are free to take a complete custom approach instead if you wish. We will take a bottom-up approach: Starting from an individual **perceptron** (aka neuron), we will derive a **layer of perceptrons** and end up with a **multilayer perceptron** (aka neural network). Each step will be implemented as its own python *class*. Such a class defines a type of element which can be instantiated multiple times. You can think of the relation between such instances and their designated classes as individuals of a specific population (e.g. Bernard and Bianca are both individuals of the population mice). Class definitions contain methods, which can be used to manipulate instance of that class or to make it perform specific actions — again, taking the population reference, each mouse of the mice population would for example have the method `eat_cheese()`. 
To guide you along, all required classes and functions are outlined in valid python code with extensive comments. You just need to fill in the gaps. For each method the [docstring](https://www.python.org/dev/peps/pep-0257/#what-is-a-docstring) (the big comment contained by triple quotes at the beginning of the method) describes the arguments that each specific method accepts (`Args`) and the values it is expected to return (`Returns`). ### Perceptron Similar to last week you here need to implement a perceptron. But instead of directly applying it, we will define a class which is reusable to instantiate a theoretically infinite amount of individual perceptrons. We will need the following three functionalities: #### Weight initialization The weights are initialized by sampling values from a standard normal distribution. There are as many weights as there are values in the input vector and an additional one for the perceptron's bias. #### Forward-Propagation / Activation Calculate the weighted sums of a neuron's inputs and apply it's activation function $\sigma$. The output vector $o$ of perceptron $j$ of layer $k$ given an input $x$ (the output of the previous layer) in a neural network is given by the following formula. Note: $N$ gives the number of values of a given vector, $w_{j,0}(k)$ specifies the bias of perceptron $j$ in layer $k$ and $w_{j,1...N(x)}(k)$ the other weights of perceptron $j$ in layer $k$. $$o_{k,j}(x) = \sigma\left(w_{j,0}(k)+\sum\limits_{i=1}^{N(x)} x_i w_{j,i}(k)\right)$$ Think of the weights $w(k)$ as a matrix being located in-between layer $k$ and the layer located *to its left* in the network. So values flowing from layer $k-1$ to layer $k$ are weighted by the values of $w(k)$. As activation function we will use the sigmoid function because of its nice derivative (needed later): $$\begin{align*} \sigma(x) &= \frac{1}{1 + \exp{(-x)}}\\ \frac{d\sigma}{dx}(x) &= \sigma(x) \cdot (1 - \sigma(x)) \end{align*}$$ #### Back-Propagation / Adaptation In order to learn something the perceptron needs to slowly adjust its weights. Each weight $w_{j,i}$ in layer $k$ is adjusted by a value $\Delta w_{j,i}$ given a learning rate $\epsilon$, the previous layer's output (or, for the first hidden layer, the network's input) $o_{k-1,i}(x)$ and the layer's error signals $\delta(k)$ (which will be calculated by the MultilayerPerceptron): $$\Delta w_{j,i}(k) = \epsilon\, \delta_j(k) o_{k-1,i}(x)$$ ``` import numpy as np # Activation function σ. # We use scipy's builtin because it fixes some NaN problems for us. # sigmoid = lambda x: 1 / (1 + np.exp(-x)) from scipy.special import expit as sigmoid class Perceptron: """Single neuron handling its own weights and bias.""" def __init__(self, dim_in, act_func=sigmoid): """Initialize a new neuron with its weights and bias. Args: dim_in (int): Dimensionality of the data coming into this perceptron. In a network of perceptrons this basically represents the number of neurons in the layer before this neuron's layer. Used for generating the perceptron's weights vector, which not only includes one weight per input but also an additional bias weight. act_fun (function): Function to apply on activation. """ self.act_func = act_func # Set self.weights ### BEGIN SOLUTION self.weights = np.random.normal(size=dim_in + 1) ### END SOLUTION def activate(self, x): """Activate this neuron with a specific input. Calculate the weighted sum of inputs and apply the activation function. Args: x (ndarray): Vector of input values. 
Returns: float: A real number representing the perceptron's activation after calculating the weighted sum of inputs and applying the perceptron's activation function. """ # Return the activation value ### BEGIN SOLUTION return self.act_func(self.weights @ np.append(1, x)) ### END SOLUTION def adapt(self, x, delta, rate=0.03): """Adapt this neuron's weights by a specific delta. Args: x (ndarray): Vector of input values. delta (float): Weight adaptation delta value. rate (float): Learning rate. """ # Adapt self.weights according to the update rule ### BEGIN SOLUTION self.weights += rate * delta * np.append(1, x) ### END SOLUTION _p = Perceptron(2) assert _p.weights.size == 3, "Should have a weight per input and a bias." assert isinstance(_p.activate([2, 1]), float), "Should activate as scalar." assert -1 <= _p.activate([100, 100]) <= 1, "Should activate using sigmoid." _p.weights = np.array([.5, .5, .5]) _p.adapt(np.array([2, 3]), np.array(.5)) assert np.allclose(_p.weights, [0.515, 0.53, 0.545]), \ "Should update weights correctly." ``` ### PerceptronLayer A `PerceptronLayer` is a combination of multiple `Perceptron` instances. It mainly is concerened with passing input and delta values to its individual neurons. There is no math to be done here! #### Initialization When initializing a `PerceptronLayer` (like this: `layer = PerceptronLayer(5, 3)`), the `__init__` function is called. It creates a list of `Perceptron`s: For each output value there must be one perceptron. Each of those perceptrons receives the same inputs and the same activation function as the perceptron layer. #### Activation During the activation step, the perceptron layer activates each of its perceptrons. These values will not only be needed for forward propagation but will also be needed for implementing backpropagation in the `MultilayerPerceptron` (coming up next). #### Adaptation To update its perceptrons, the perceptron layer adapts each one with the corresponding delta. For this purpose, the MLP passes a list of input values and a list of deltas to the adaptation function. The inputs are passed to *all* perceptrons. The list of deltas is exactly as long as the list of perceptrons: The first delta is for the first perceptron, the second for the second, etc. The delta values themselves will be computed by the MLP. ``` class PerceptronLayer: """Layer of multiple neurons. Attributes: perceptrons (list): List of perceptron instances in the layer. """ def __init__(self, dim_in, dim_out, act_func=sigmoid): """Initialize the layer as a list of individual neurons. A layer contains as many neurons as it has outputs, each neuron has as many input weights (+ bias) as the layer has inputs. Args: dim_in (int): Dimensionality of the expected input values, also the size of the previous layer of a neural network. dim_out (int): Dimensionality of the output, also the requested amount of in this layer and the input dimension of the next layer. act_func (function): Activation function to use in each perceptron of this layer. """ # Set self.perceptrons to a list of Perceptrons ### BEGIN SOLUTION self.perceptrons = [Perceptron(dim_in, act_func) for _ in range(dim_out)] ### END SOLUTION def activate(self, x): """Activate this layer by activating each individual neuron. Args: x (ndarray): Vector of input values. Retuns: ndarray: Vector of output values which can be used as input to another PerceptronLayer instance. 
""" # return the vector of activation values ### BEGIN SOLUTION return np.array([p.activate(x) for p in self.perceptrons]) ### END SOLUTION def adapt(self, x, deltas, rate=0.03): """Adapt this layer by adapting each individual neuron. Args: x (ndarray): Vector of input values. deltas (ndarray): Vector of delta values. rate (float): Learning rate. """ # Update all the perceptrons in this layer ### BEGIN SOLUTION for perceptron, delta in zip(self.perceptrons, deltas): perceptron.adapt(x, delta, rate) ### END SOLUTION @property def weight_matrix(self): """Helper property for getting this layer's weight matrix. Returns: ndarray: All the weights for this perceptron layer. """ return np.asarray([p.weights for p in self.perceptrons]).T _l = PerceptronLayer(3, 2) assert len(_l.perceptrons) == 2, "Should have as many perceptrons as outputs." assert len(_l.activate([1,2,3])) == 2, "Should provide correct output amount." ``` ### MultilayerPerceptron #### Forward-Propagation / Activation Propagate the input value $x$ through each layer of the network, employing the output of the previous layer as input to the next layer. #### Back-Propagation / Adaptation This is the most complex step of the whole task. Split into three separate parts: 1. ***Forward propagation***: Compute the outputs for each individual layer – similar to the forward-propagation step above, but we need to keep track of the intermediate results to compute each layer's errors. That means: Store the input as the first "output" and then activate each of the network's layers using the *previous* layer's output and store the layer's activation result. 2. ***Backward propagation***: Calculate each layer's error signals $\delta_i(k)$. The important part here is to do so from the last to the first array, because each layer's error depends on the error from its following layer. Note: The first part of this formula makes use of the activation functions derivative $\frac{d\sigma}{dx}(k)$. $$\delta_i(k) = o_i(k)\ (1 - o_i(k))\ \sum\limits_{j=1}^{N(k+1)} w_{ji}(k+1,k)\delta_j(k+1)$$ (*Hint*: For the last layer (i.e. the first you calculate the $\delta$ for) the sum in the formula above is the total network error. For all preceding layers $k$ you need to recalculate `e` using the $\delta$ and weights of layer $k+1$. We already implemented a helper function for you to access the weights of a specific layer. Check the `PerceptronLayer` if you did not find it yet.) 3. ***Adaptation***: Call each layers adaptation function with its input, its designated error signals and the given learning rate. Hint: The last two steps can be performed in a single loop if you wish, but make sure to use the non-updated weights for the calculation of the next layer's error signals! ``` class MultilayerPerceptron: """Network of perceptrons, also a set of multiple perceptron layers. Attributes: layers (list): List of perceptron layers in the network. """ def __init__(self, *layers): """Initialize a new network, madeup of individual PerceptronLayers. Args: *layers: Arbitrarily many PerceptronLayer instances. """ self.layers = layers def activate(self, x): """Activate network and return the last layer's output. Args: x (ndarray): Vector of input values. Returns: (ndarray): Vector of output values from the last layer of the network after propagating forward through the network. 
""" # Propagate activation through the network # and return output for last layer ### BEGIN SOLUTION for layer in self.layers: x = layer.activate(x) return x ### END SOLUTION def adapt(self, x, t, rate=0.03): """Adapt the whole network given an input and expected output. Args: x (ndarray): Vector of input values. t (ndarray): Vector of target values (expected outputs). rate (float): Learning rate. """ # Activate each layer and collect intermediate outputs. ### BEGIN SOLUTION outputs = [x] for layer in self.layers: outputs.append(layer.activate(outputs[-1])) ### END SOLUTION # Calculate error 'e' between t and network output. ### BEGIN SOLUTION e = t - outputs[-1] ### END SOLUTION # Backpropagate error through the network computing # intermediate delta and adapting each layer. ### BEGIN SOLUTION for k, layer in reversed(list(enumerate(self.layers, 1))): layer_input = outputs[k - 1] layer_output = outputs[k] delta = (layer_output * (1 - layer_output)) * e e = (layer.weight_matrix @ delta)[1:] layer.adapt(layer_input, delta, rate) ### END SOLUTION ``` ### Classification #### Problem Definition Before we start, we need a problem to solve. In the following cell we first generate some three dimensional data (= $\text{input_dim}$) between 0 and 1 and label all data according to a binary classification: If the data is close to the center (radius < 2.5), it belongs to one class, if it is further away from the center it belongs to the other class. In the cell below we visualize the data set. ``` def uniform(a, b, n=1): """Returns n floats uniformly distributed between a and b.""" return (b - a) * np.random.random_sample(n) + a n = 1000 radius = 5 r = np.append(uniform(0, radius * .5, n // 2), uniform(radius * .7, radius, n // 2)) angle = uniform(0, 2 * np.pi, n) x = r * np.sin(angle) + uniform(-radius, radius, n) y = r * np.cos(angle) + uniform(-radius, radius, n) inputs = np.vstack((x, y)).T targets = np.less(np.linalg.norm(inputs, axis=1), radius * .5) %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots(num='Data') ax.set(title='Labeled Data') ax.scatter(*inputs.T, 2, c=targets, cmap='RdYlBu') plt.show() ``` ### Model Design The following cell already contains a simple model with a single layer. Play around with some different configurations! ``` MLP = MultilayerPerceptron( PerceptronLayer(2, 1), ) # Adapt this MLP ### BEGIN SOLUTION MLP = MultilayerPerceptron( PerceptronLayer(2, 4), PerceptronLayer(4, 2), PerceptronLayer(2, 1), ) ### END SOLUTION ``` ### Training Train the network on random samples from the data. Try adjusting the epochs and watch the training performance closely using different models. ``` %matplotlib notebook from matplotlib import cm EPOCHS = 200000 max_accuracy = 0 fig, ax = plt.subplots(num='Training') scatter = ax.scatter(*inputs.T, 2) plt.show() for epoch in range(1, EPOCHS + 1): sample_index = np.random.randint(0, len(targets)) MLP.adapt(inputs[sample_index], targets[sample_index]) if (epoch % 2500) == 0: outputs = np.squeeze([MLP.activate(x) for x in inputs]) predictions = np.round(outputs) accuracy = np.sum(predictions == targets) / len(targets) * 100 if accuracy > max_accuracy: max_accuracy = accuracy scatter.set_color(cm.RdYlBu(outputs)) ax.set(title=f'Training {epoch / EPOCHS * 100:.0f}%: {accuracy:.2f}%. 
Best accuracy: {max_accuracy:.2f}%') fig.canvas.draw() ``` ### Evaluation ``` %matplotlib inline fig, ax = plt.subplots(nrows=2, ncols=2) ax[0, 0].scatter(*inputs.T, 2, c=outputs, cmap='RdYlBu') ax[0, 0].set_title('Continuous Classification') ax[0, 1].set_title('Discretized Classification') ax[0, 1].scatter(*inputs.T, 2, c=np.round(outputs), cmap='RdYlBu') ax[1, 0].set_title('Original Labels') ax[1, 0].scatter(*inputs.T, 2, c=targets, cmap='RdYlBu') ax[1, 1].set_title('Wrong Classifications') ax[1, 1].scatter(*inputs.T, 2, c=(targets != np.round(outputs)), cmap='OrRd') plt.show() ``` ## Results Document your results in the following cell. We are interested in which network configurations you tried and what accuracies they resulted in. Did you run into problems during training? Was it steady or did it get stuck? Did you recognize anything about the training process? How could we get better results? Tell us! **Answer:** 2 hidden and one output layer with a total of 7 neurons can already stably render results of 90%+ (with some data generation luck). During training the model sometimes gets stuck in saddle points for a long time. One way to tackle this would be to compute noisy gradients instead of the real gradients -- something that *stochastic gradient descent*, the main method most frameworks for working with neural networks use by default, makes use of as well. Some more information on that specific problem and solution [here](http://www.offconvex.org/2016/03/22/saddlepoints/). Another problem with our training approach is that we train on the complete dataset without a training/evaluation split! If we would split the data we could also make use of "early stopping": Instead of using the final state of the network for our evaluation, we could use the one which got the best max accuracy on the evaluation set during training by saving it whenever the max accuracy goes up. ## Assignment 2: MLP and RBFN [10 Points] This exercise is aimed at deepening the understanding of Radial Basis Function Networks and how they relate to Multilayer Perceptrons. Not all of the answers can be found directly in the slides - so when answering the (more algorithmic) questions, first take a minute and think about how you would go about solving them and if nothing comes to mind search the internet for a little bit. If you are interested in a real life application of both algorithms and how they compare take a look at this paper: [Comparison between Multi-Layer Perceptron and Radial Basis Function Networks for Sediment Load Estimation in a Tropical Watershed](http://file.scirp.org/pdf/JWARP20121000014_80441700.pdf) ![Schematic of a RBFN](RBFN.png) We have prepared a little example that shows how radial basis function approximation works in Python. This is not an example implementation of a RBFN but illustrates the work of the hidden neurons. ``` %matplotlib inline import numpy as np from numpy.random import uniform from scipy.interpolate import Rbf import matplotlib import matplotlib.pyplot as plt from matplotlib import cm def func(x, y): """ This is the example function that should be fitted. Its shape could be described as two peaks close to each other - one going up, the other going down """ return (x + y) * np.exp(-4.0 * (x**2 + y**2)) # number of training points (you may try different values here) training_size = 50 # sample 'training_size' data points from the input space [-1,1]x[-1,1] ... x = uniform(-1.0, 1.0, size=training_size) y = uniform(-1.0, 1.0, size=training_size) # ... 
and compute function values for them.
fvals = func(x, y)

# get the approximation via RBF
new_func = Rbf(x, y, fvals)

# Plot both functions:
# create a 100x100 grid of input values
x_grid, y_grid = np.mgrid[-1:1:100j, -1:1:100j]

fig, ax = plt.subplots(ncols=2, sharey=True, figsize=(10, 6))

# This plot represents the original function
f_orig = func(x_grid, y_grid)
img = ax[0].imshow(f_orig, extent=[-1, 1, -1, 1], cmap='RdBu')
ax[0].set(title='Original Function')

# This plots the approximation of the original function by the RBF
# if the plot looks strange try to run it again, the sampling
# in the beginning is random
f_new = new_func(x_grid, y_grid)
plt.imshow(f_new, extent=[-1, 1, -1, 1], cmap='RdBu')
ax[1].set(title='RBF Result', xlim=[-1, 1], ylim=[-1, 1])

# scatter the datapoints that have been used by the RBF
plt.scatter(x, y, color='black')

fig.colorbar(img, ax=ax)
plt.show()
```

### Radial Basis Function Networks

#### What are radial basis functions?

Radial basis functions are all functions that fulfill the following criterion: the value of the function at a certain point depends only on the distance of that point to the origin or to some other fixed center point. In mathematical notation this spells out to $\phi (\mathbf {x} )=\phi (\|\mathbf {x} \|)$ or $\phi (\mathbf {x} ,\mathbf {c} )=\phi (\|\mathbf {x} -\mathbf {c} \|)$. Notice that it is not necessary (but most common) to use the norm as the measure of distance.

#### What is the structure of a RBFN? You may also use the notation from the picture included above.

RBFNs are networks that contain only one hidden layer. The input is connected to all the hidden units. Each of the hidden units has a different radial basis function that is *sensitive* to ranges in the input domain. The output is then a linear combination of the outputs of those functions.

#### How is a RBFN trained? Note: all input data has to be normalized.

Training a RBFN is a two-step process. First the functions in the hidden layer are initialized. This can either be done by sampling from the input data or by first performing a k-means clustering, where k is the number of nodes that have to be initialized. The second step fits a linear model with coefficients $w_{i}$ to the hidden layer's outputs with respect to some objective function. The objective function depends on the task: it can be the least-squares function, or the weights can be adapted by gradient descent.

### Comparison to the Multilayer Perceptron

#### What do both models have in common? Where do they differ?

|RBFN |MLP |
|---------------------|---------------------|
| non-linear layered feedforward network | non-linear layered feedforward network |
| hidden neurons use radial basis functions, output neurons use a linear function | input, hidden and output layers all use the same activation function |
| universal approximator | universal approximator |
| learning usually affects only one or a few RBFs | learning affects many weights throughout the network |

#### How can classification in both networks be visualized?

![Classification](Solution_Classification.png)

#### When would you use a RBFN instead of a Multilayer Perceptron?

RBFNs are more robust to noise and should therefore be used when the data contains false positives.
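The two-step training procedure described above can be sketched in a few lines of numpy. This is our own illustration (Gaussian basis functions, centers sampled from the training data, least-squares fit of the output weights); it is not part of the exercise and not how `scipy.interpolate.Rbf` is implemented:

```
import numpy as np

def train_rbfn(X, y, n_centers=10, width=1.0, seed=0):
    """Two-step RBFN training: pick centers, then solve a linear least-squares problem."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize the hidden layer by sampling centers from the input data.
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    # Hidden layer activations: Gaussian RBF of the distance to each center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-(dists / width) ** 2)
    # Step 2: fit the linear output weights with least squares.
    weights, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, weights

def rbfn_predict(X, centers, weights, width=1.0):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(dists / width) ** 2) @ weights

# Toy usage on the two-peak function from the cell above.
X = np.random.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1]) * np.exp(-4.0 * (X ** 2).sum(axis=1))
centers, weights = train_rbfn(X, y, n_centers=25, width=0.3)
print(np.abs(rbfn_predict(X, centers, weights, width=0.3) - y).mean())
```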
<a href="https://colab.research.google.com/github/Laelapz/Some_Tests/blob/main/BERTimbau.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Tem caracteres em chinês? Pq eles pegam a maior distribuição do dataset??? Tirado do Twitter? (Alguns nomes/sobrenomes) O Dataset do Bert base inglês parecia mais organizado Cade o alfabeto? Tem muitas subwords ``` !pip install transformers from transformers import AutoTokenizer # Or BertTokenizer from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads from transformers import AutoModel # or BertModel, for BERT without pretraining heads model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-base-portuguese-cased') tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-base-portuguese-cased', do_lower_case=False) import torch with open("vocabulary.txt", 'w') as f: # For each token... for token in tokenizer.vocab.keys(): # Write it out and escape any unicode characters. f.write(token + '\n') one_chars = [] one_chars_hashes = [] # For each token in the vocabulary... for token in tokenizer.vocab.keys(): # Record any single-character tokens. if len(token) == 1: one_chars.append(token) # Record single-character tokens preceded by the two hashes. elif len(token) == 3 and token[0:2] == '##': one_chars_hashes.append(token) print('Number of single character tokens:', len(one_chars), '\n') # Print all of the single characters, 40 per row. # For every batch of 40 tokens... for i in range(0, len(one_chars), 40): # Limit the end index so we don't go past the end of the list. end = min(i + 40, len(one_chars) + 1) # Print out the tokens, separated by a space. print(' '.join(one_chars[i:end])) print('Number of single character tokens with hashes:', len(one_chars_hashes), '\n') # Print all of the single characters, 40 per row. # Strip the hash marks, since they just clutter the display. tokens = [token.replace('##', '') for token in one_chars_hashes] # For every batch of 40 tokens... for i in range(0, len(tokens), 40): # Limit the end index so we don't go past the end of the list. end = min(i + 40, len(tokens) + 1) # Print out the tokens, separated by a space. print(' '.join(tokens[i:end])) print('Are the two sets identical?', set(one_chars) == set(tokens)) import matplotlib.pyplot as plt import seaborn as sns import numpy as np sns.set(style='darkgrid') # Increase the plot size and font size. sns.set(font_scale=1.5) plt.rcParams["figure.figsize"] = (10,5) # Measure the length of every token in the vocab. token_lengths = [len(token) for token in tokenizer.vocab.keys()] # Plot the number of tokens of each length. sns.countplot(token_lengths) plt.title('Vocab Token Lengths') plt.xlabel('Token Length') plt.ylabel('# of Tokens') print('Maximum token length:', max(token_lengths)) num_subwords = 0 subword_lengths = [] # For each token in the vocabulary... for token in tokenizer.vocab.keys(): # If it's a subword... if len(token) >= 2 and token[0:2] == '##': # Tally all subwords num_subwords += 1 # Measure the sub word length (without the hashes) length = len(token) - 2 # Record the lengths. subword_lengths.append(length) vocab_size = len(tokenizer.vocab.keys()) print('Number of subwords: {:,} of {:,}'.format(num_subwords, vocab_size)) # Calculate the percentage of words that are '##' subwords. 
prcnt = float(num_subwords) / vocab_size * 100.0 print('%.1f%%' % prcnt) sns.countplot(subword_lengths) plt.title('Subword Token Lengths (w/o "##")') plt.xlabel('Subword Length') plt.ylabel('# of ## Subwords') ```
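To see the `##` subwords in action, one can tokenize a short Portuguese sentence with the tokenizer loaded above. The sentence here is an arbitrary example we chose; the exact split depends on the vocabulary:

```
sentence = "A tokenização divide palavras raras em subpalavras."

tokens = tokenizer.tokenize(sentence)
ids = tokenizer.convert_tokens_to_ids(tokens)

# Tokens starting with '##' are continuations of the previous token.
for token, token_id in zip(tokens, ids):
    kind = "subword continuation" if token.startswith("##") else "word start"
    print(f"{token_id:>6}  {token:<15} {kind}")
```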
# The Binomial Distribution This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python. Copyright 2020 Allen B. Downey License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) The following cell downloads `utils.py`, which contains some utility function we'll need. ``` from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print('Downloaded ' + local) download('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py') ``` If everything we need is installed, the following cell should run with no error messages. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` ## The Euro problem revisited In [a previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/07_euro.ipynb) I presented a problem from David MacKay's book, [*Information Theory, Inference, and Learning Algorithms*](http://www.inference.org.uk/mackay/itila/p0.html): > A statistical statement appeared in The Guardian on Friday January 4, 2002: > > >"When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110. ‘It looks very suspicious to me’, said Barry Blight, a statistics lecturer at the London School of Economics. ‘If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%’." > > But [asks MacKay] do these data give evidence that the coin is biased rather than fair? To answer this question, we made these modeling decisions: * If you spin a coin on edge, there is some probability, $x$, that it will land heads up. * The value of $x$ varies from one coin to the next, depending on how the coin is balanced and other factors. We started with a uniform prior distribution for $x$, then updated it 250 times, once for each spin of the coin. Then we used the posterior distribution to compute the MAP, posterior mean, and a credible interval. But we never really answered MacKay's question. In this notebook, I introduce the binomial distribution and we will use it to solve the Euro problem more efficiently. Then we'll get back to MacKay's question and see if we can find a more satisfying answer. ## Binomial distribution Suppose I tell you that a coin is "fair", that is, the probability of heads is 50%. If you spin it twice, there are four outcomes: `HH`, `HT`, `TH`, and `TT`. All four outcomes have the same probability, 25%. If we add up the total number of heads, it is either 0, 1, or 2. The probability of 0 and 2 is 25%, and the probability of 1 is 50%. More generally, suppose the probability of heads is `p` and we spin the coin `n` times. What is the probability that we get a total of `k` heads? The answer is given by the binomial distribution: $P(k; n, p) = \binom{n}{k} p^k (1-p)^{n-k}$ where $\binom{n}{k}$ is the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), usually pronounced "n choose k". We can compute this expression ourselves, but we can also use the SciPy function `binom.pmf`: ``` from scipy.stats import binom n = 2 p = 0.5 ks = np.arange(n+1) a = binom.pmf(ks, n, p) a ``` If we put this result in a Series, the result is the distribution of `k` for the given values of `n` and `p`. 
``` pmf_k = pd.Series(a, index=ks) pmf_k ``` The following function computes the binomial distribution for given values of `n` and `p`: ``` def make_binomial(n, p): """Make a binomial PMF. n: number of spins p: probability of heads returns: Series representing a PMF """ ks = np.arange(n+1) a = binom.pmf(ks, n, p) pmf_k = pd.Series(a, index=ks) return pmf_k ``` And here's what it looks like with `n=250` and `p=0.5`: ``` pmf_k = make_binomial(n=250, p=0.5) pmf_k.plot() plt.xlabel('Number of heads (k)') plt.ylabel('Probability') plt.title('Binomial distribution'); ``` The most likely value in this distribution is 125: ``` pmf_k.idxmax() ``` But even though it is the most likely value, the probability that we get exactly 125 heads is only about 5%. ``` pmf_k[125] ``` In MacKay's example, we got 140 heads, which is less likely than 125: ``` pmf_k[140] ``` In the article MacKay quotes, the statistician says, ‘If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%’. We can use the binomial distribution to check his math. The following function takes a PMF and computes the total probability of values greater than or equal to `threshold`. ``` def prob_ge(pmf, threshold): """Probability of values greater than a threshold. pmf: Series representing a PMF threshold: value to compare to returns: probability """ ge = (pmf.index >= threshold) total = pmf[ge].sum() return total ``` Here's the probability of getting 140 heads or more: ``` prob_ge(pmf_k, 140) ``` It's about 3.3%, which is less than 7%. The reason is that the statistician includes all values "as extreme as" 140, which includes values less than or equal to 110, because 140 exceeds the expected value by 15 and 110 falls short by 15. The probability of values less than or equal to 110 is also 3.3%, so the total probability of values "as extreme" as 140 is about 7%. The point of this calculation is that these extreme values are unlikely if the coin is fair. That's interesting, but it doesn't answer MacKay's question. Let's see if we can. ## Estimating x As promised, we can use the binomial distribution to solve the Euro problem more efficiently. Let's start again with a uniform prior: ``` xs = np.arange(101) / 100 uniform = pd.Series(1, index=xs) uniform /= uniform.sum() ``` We can use `binom.pmf` to compute the likelihood of the data for each possible value of $x$. ``` k = 140 n = 250 xs = uniform.index likelihood = binom.pmf(k, n, p=xs) ``` Now we can do the Bayesian update in the usual way, multiplying the priors and likelihoods, ``` posterior = uniform * likelihood ``` Computing the total probability of the data, ``` total = posterior.sum() total ``` And normalizing the posterior, ``` posterior /= total ``` Here's what it looks like. ``` posterior.plot(label='Uniform') plt.xlabel('Probability of heads (x)') plt.ylabel('Probability') plt.title('Posterior distribution, uniform prior') plt.legend() ``` **Exercise:** Based on what we know about coins in the real world, it doesn't seem like every value of $x$ is equally likely. I would expect values near 50% to be more likely and values near the extremes to be less likely. In Notebook 7, we used a triangle prior to represent this belief about the distribution of $x$. The following code makes a PMF that represents a triangle prior. 
``` ramp_up = np.arange(50) ramp_down = np.arange(50, -1, -1) a = np.append(ramp_up, ramp_down) triangle = pd.Series(a, index=xs) triangle /= triangle.sum() ``` Update this prior with the likelihoods we just computed and plot the results. ``` # Solution posterior2 = triangle * likelihood total2 = posterior2.sum() total2 # Solution posterior2 /= total2 # Solution posterior.plot(label='Uniform') posterior2.plot(label='Triangle') plt.xlabel('Probability of heads (x)') plt.ylabel('Probability') plt.title('Posterior distribution, uniform prior') plt.legend(); ``` ## Evidence Finally, let's get back to MacKay's question: do these data give evidence that the coin is biased rather than fair? I'll use a Bayes table to answer this question, so here's the function that makes one: ``` def make_bayes_table(hypos, prior, likelihood): """Make a Bayes table. hypos: sequence of hypotheses prior: prior probabilities likelihood: sequence of likelihoods returns: DataFrame """ table = pd.DataFrame(index=hypos) table['prior'] = prior table['likelihood'] = likelihood table['unnorm'] = table['prior'] * table['likelihood'] prob_data = table['unnorm'].sum() table['posterior'] = table['unnorm'] / prob_data return table ``` Recall that data, $D$, is considered evidence in favor of a hypothesis, `H`, if the posterior probability is greater than the prior, that is, if $P(H|D) > P(H)$ For this example, I'll call the hypotheses `fair` and `biased`: ``` hypos = ['fair', 'biased'] ``` And just to get started, I'll assume that the prior probabilities are 50/50. ``` prior = [0.5, 0.5] ``` Now we have to compute the probability of the data under each hypothesis. If the coin is fair, the probability of heads is 50%, and we can compute the probability of the data (140 heads out of 250 spins) using the binomial distribution: ``` k = 140 n = 250 like_fair = binom.pmf(k, n, p=0.5) like_fair ``` So that's the probability of the data, given that the coin is fair. But if the coin is biased, what's the probability of the data? Well, that depends on what "biased" means. If we know ahead of time that "biased" means the probability of heads is 56%, we can use the binomial distribution again: ``` like_biased = binom.pmf(k, n, p=0.56) like_biased ``` Now we can put the likelihoods in the Bayes table: ``` likes = [like_fair, like_biased] make_bayes_table(hypos, prior, likes) ``` The posterior probability of `biased` is about 86%, so the data is evidence that the coin is biased, at least for this definition of "biased". But we used the data to define the hypothesis, which seems like cheating. To be fair, we should define "biased" before we see the data. ## Uniformly distributed bias Suppose "biased" means that the probability of heads is anything except 50%, and all other values are equally likely. We can represent that definition by making a uniform distribution and removing 50%. ``` biased_uniform = uniform.copy() biased_uniform[50] = 0 biased_uniform /= biased_uniform.sum() ``` Now, to compute the probability of the data under this hypothesis, we compute the probability of the data for each value of $x$. ``` xs = biased_uniform.index likelihood = binom.pmf(k, n, xs) ``` And then compute the total probability in the usual way: ``` like_uniform = np.sum(biased_uniform * likelihood) like_uniform ``` So that's the probability of the data under the "biased uniform" hypothesis. 
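Written out, the quantity `like_uniform` computed above is just the law of total probability applied to the biased-uniform prior (with $n=250$ and $k=140$):

$$P(D \mid \text{biased uniform}) = \sum_x P(x \mid \text{biased uniform}) \, \binom{n}{k} x^k (1-x)^{n-k}$$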
Now we make a Bayes table that compares the hypotheses `fair` and `biased uniform`: ``` hypos = ['fair', 'biased uniform'] likes = [like_fair, like_uniform] make_bayes_table(hypos, prior, likes) ``` Using this definition of `biased`, the posterior is less than the prior, so the data are evidence that the coin is *fair*. In this example, the data might support the fair hypothesis or the biased hypothesis, depending on the definition of "biased". **Exercise:** Suppose "biased" doesn't mean every value of $x$ is equally likely. Maybe values near 50% are more likely and values near the extremes are less likely. In the previous exercise we created a PMF that represents a triangle-shaped distribution. We can use it to represent an alternative definition of "biased": ``` biased_triangle = triangle.copy() biased_triangle[50] = 0 biased_triangle /= biased_triangle.sum() ``` Compute the total probability of the data under this definition of "biased" and use a Bayes table to compare it with the fair hypothesis. Is the data evidence that the coin is biased? ``` # Solution like_triangle = np.sum(biased_triangle * likelihood) like_triangle # Solution hypos = ['fair', 'biased triangle'] likes = [like_fair, like_triangle] make_bayes_table(hypos, prior, likes) # Solution # For this definition of "biased", # the data are slightly in favor of the fair hypothesis. ``` ## Bayes factor In the previous section, we used a Bayes table to see whether the data are in favor of the fair or biased hypothesis. I assumed that the prior probabilities were 50/50, but that was an arbitrary choice. And it was unnecessary, because we don't really need a Bayes table to say whether the data favor one hypothesis or another: we can just look at the likelihoods. Under the first definition of biased, `x=0.56`, the likelihood of the biased hypothesis is higher: ``` like_fair, like_biased ``` Under the biased uniform definition, the likelihood of the fair hypothesis is higher. ``` like_fair, like_uniform ``` The ratio of these likelihoods tells us which hypothesis the data support. If the ratio is less than 1, the data support the second hypothesis: ``` like_fair / like_biased ``` If the ratio is greater than 1, the data support the first hypothesis: ``` like_fair / like_uniform ``` This likelihood ratio is called a [Bayes factor](https://en.wikipedia.org/wiki/Bayes_factor); it provides a concise way to present the strength of a dataset as evidence for or against a hypothesis. ## Summary In this notebook I introduced the binomial disrtribution and used it to solve the Euro problem more efficiently. Then we used the results to (finally) answer the original version of the Euro problem, considering whether the data support the hypothesis that the coin is fair or biased. We found that the answer depends on how we define "biased". And we summarized the results using a Bayes factor, which quantifies the strength of the evidence. [In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/13_price.ipynb) we'll start on a new problem based on the television game show *The Price Is Right*. ## Exercises **Exercise:** In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, `x`. Based on previous tests, the distribution of `x` in the population of designs is roughly uniform between 10% and 40%. 
Now suppose the new ultra-secret Alien Blaster 9000 is being tested. In a press conference, a Defense League general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent." Is this data good or bad; that is, does it increase or decrease your estimate of `x` for the Alien Blaster 9000? Plot the prior and posterior distributions, and use the following function to compute the prior and posterior means. ``` def pmf_mean(pmf): """Compute the mean of a PMF. pmf: Series representing a PMF return: float """ return np.sum(pmf.index * pmf) # Solution xs = np.linspace(0.1, 0.4) prior = pd.Series(1, index=xs) prior /= prior.sum() # Solution likelihood = xs**2 + (1-xs)**2 # Solution posterior = prior * likelihood posterior /= posterior.sum() # Solution prior.plot(color='gray', label='prior') posterior.plot(label='posterior') plt.xlabel('Probability of success (x)') plt.ylabel('Probability') plt.ylim(0, 0.027) plt.title('Distribution of before and after testing') plt.legend(); # Solution pmf_mean(prior), pmf_mean(posterior) # With this prior, being "consistent" is more likely # to mean "consistently bad". ```
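As a closing cross-check of the "as extreme as" discussion earlier in this notebook, SciPy's survival function and CDF give the two tails directly (a small addition, not part of the original text):

```
from scipy.stats import binom

n, p = 250, 0.5

right_tail = binom.sf(139, n, p)   # P(k >= 140), about 3.3%
left_tail = binom.cdf(110, n, p)   # P(k <= 110), also about 3.3%

print('P(k >= 140) =', right_tail)
print('P(k <= 110) =', left_tail)
print('Two-sided   =', right_tail + left_tail)   # a bit under 7%, matching the quoted statistician
```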
# A/B testing, traffic shifting and autoscaling ### Introduction In this lab you will create an endpoint with multiple variants, splitting the traffic between them. Then after testing and reviewing the endpoint performance metrics, you will shift the traffic to one variant and configure it to autoscale. ### Table of Contents - [1. Create an endpoint with multiple variants](#c3w2-1.) - [1.1. Construct Docker Image URI](#c3w2-1.1.) - [Exercise 1](#c3w2-ex-1) - [1.2. Create Amazon SageMaker Models](#c3w2-1.2.) - [Exercise 2](#c3w2-ex-2) - [Exercise 3](#c3w2-ex-3) - [1.3. Set up Amazon SageMaker production variants](#c3w2-1.3.) - [Exercise 4](#c3w2-ex-4) - [Exercise 5](#c3w2-ex-5) - [1.4. Configure and create endpoint](#c3w2-1.4.) - [Exercise 6](#c3w2-ex-6) - [2. Test model](#c3w2-2.) - [2.1. Test the model on a few sample strings](#c3w2-2.1.) - [Exercise 7](#c3w2-ex-7) - [2.2. Generate traffic and review the endpoint performance metrics](#c3w2-2.2.) - [3. Shift the traffic to one variant and review the endpoint performance metrics](#c3w2-3.) - [Exercise 8](#c3w2-ex-8) - [4. Configure one variant to autoscale](#c3w2-4.) Let's install and import the required modules. ``` # please ignore warning messages during the installation !pip install --disable-pip-version-check -q sagemaker==2.35.0 !conda install -q -y pytorch==1.6.0 -c pytorch !pip install --disable-pip-version-check -q transformers==3.5.1 import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format='retina' import boto3 import sagemaker import pandas as pd import botocore config = botocore.config.Config(user_agent_extra='dlai-pds/c3/w2') # low-level service client of the boto3 session sm = boto3.client(service_name='sagemaker', config=config) sm_runtime = boto3.client('sagemaker-runtime', config=config) sess = sagemaker.Session(sagemaker_client=sm, sagemaker_runtime_client=sm_runtime) bucket = sess.default_bucket() role = sagemaker.get_execution_role() region = sess.boto_region_name cw = boto3.client(service_name='cloudwatch', config=config) autoscale = boto3.client(service_name="application-autoscaling", config=config) ``` <a name='c3w2-1.'></a> # 1. Create an endpoint with multiple variants Two models trained to analyze customer feedback and classify the messages into positive (1), neutral (0), and negative (-1) sentiments are saved in the following S3 bucket paths. These `tar.gz` files contain the model artifacts, which result from model training. ``` model_a_s3_uri = 's3://dlai-practical-data-science/models/ab/variant_a/model.tar.gz' model_b_s3_uri = 's3://dlai-practical-data-science/models/ab/variant_b/model.tar.gz' ``` Let's deploy an endpoint splitting the traffic between these two models 50/50 to perform A/B Testing. Instead of creating a PyTorch Model object and calling `model.deploy()` function, you will create an `Endpoint configuration` with multiple model variants. Here is the workflow you will follow to create an endpoint: <img src="images/endpoint-workflow.png" width="60%" align="center"> <a name='c3w2-1.1.'></a> ### 1.1. Construct Docker Image URI <img src="images/endpoint-workflow-1-image.png" width="60%" align="center"> You will need to create the models in Amazon SageMaker, which retrieves the URI for the pre-built SageMaker Docker image stored in Amazon Elastic Container Re gistry (ECR). Let's construct the ECR URI which you will pass into the `create_model` function later. Set the instance type. For the purposes of this lab, you will use a relatively small instance. 
Please refer to [this link](https://aws.amazon.com/sagemaker/pricing/) for additional instance types that may work for your use cases outside of this lab. ``` inference_instance_type = 'ml.m5.large' ``` <a name='c3w2-ex-1'></a> ### Exercise 1 Create an ECR URI using the `'PyTorch'` framework. Review other parameters of the image. ``` inference_image_uri = sagemaker.image_uris.retrieve( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes framework='pytorch', # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes version='1.6.0', instance_type=inference_instance_type, region=region, py_version='py3', image_scope='inference' ) print(inference_image_uri) ``` <a name='c3w2-1.2.'></a> ### 1.2. Create Amazon SageMaker Models <img src="images/endpoint-workflow-2-models.png" width="60%" align="center"> Amazon SageMaker Model includes information such as the S3 location of the model, the container image that can be used for inference with that model, the execution role, and the model name. Let's construct the model names. ``` import time from pprint import pprint timestamp = int(time.time()) model_name_a = '{}-{}'.format('a', timestamp) model_name_b = '{}-{}'.format('b', timestamp) ``` You will use the following function to check if the model already exists in Amazon SageMaker. ``` def check_model_existence(model_name): for model in sm.list_models()['Models']: if model_name == model['ModelName']: return True return False ``` <a name='c3w2-ex-2'></a> ### Exercise 2 Create an Amazon SageMaker Model based on the `model_a_s3_uri` data. **Instructions**: Use `sm.create_model` function, which requires the model name, Amazon SageMaker execution role and a primary container description (`PrimaryContainer` dictionary). The `PrimaryContainer` includes the S3 bucket location of the model artifacts (`ModelDataUrl` key) and ECR URI (`Image` key). ``` if not check_model_existence(model_name_a): model_a = sm.create_model( ModelName=model_name_a, ExecutionRoleArn=role, PrimaryContainer={ ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes 'ModelDataUrl': model_a_s3_uri, # Replace None 'Image': inference_image_uri # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes } ) pprint(model_a) else: print("Model {} already exists".format(model_name_a)) ``` <a name='c3w2-ex-3'></a> ### Exercise 3 Create an Amazon SageMaker Model based on the `model_b_s3_uri` data. **Instructions**: Use the example in the cell above. ``` if not check_model_existence(model_name_b): model_b = sm.create_model( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes ModelName=model_name_b, ExecutionRoleArn=role, ### END SOLUTION - DO NOT delete this comment for grading purposes PrimaryContainer={ 'ModelDataUrl': model_b_s3_uri, 'Image': inference_image_uri } ) pprint(model_b) else: print("Model {} already exists".format(model_name_b)) ``` <a name='c3w2-1.3.'></a> ### 1.3. Set up Amazon SageMaker production variants <img src="images/endpoint-workflow-3-variants.png" width="60%" align="center"> A production variant is a packaged SageMaker Model combined with the configuration related to how that model will be hosted. You have constructed the model in the section above. The hosting resources configuration includes information on how you want that model to be hosted: the number and type of instances, a pointer to the SageMaker package model, as well as a variant name and variant weight. 
A single SageMaker Endpoint can actually include multiple production variants. <a name='c3w2-ex-4'></a> ### Exercise 4 Create an Amazon SageMaker production variant for the SageMaker Model with the `model_name_a`. **Instructions**: Use the `production_variant` function passing the `model_name_a` and instance type defined above. ```python variantA = production_variant( model_name=..., # SageMaker Model name instance_type=..., # instance type initial_weight=50, # traffic distribution weight initial_instance_count=1, # instance count variant_name='VariantA', # production variant name ) ``` ``` from sagemaker.session import production_variant variantA = production_variant( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes model_name=model_name_a, # Replace None instance_type=inference_instance_type, # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes initial_weight=50, initial_instance_count=1, variant_name='VariantA', ) print(variantA) ``` <a name='c3w2-ex-5'></a> ### Exercise 5 Create an Amazon SageMaker production variant for the SageMaker Model with the `model_name_b`. **Instructions**: See the required arguments in the cell above. ``` variantB = production_variant( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes model_name=model_name_b, # Replace all None instance_type=inference_instance_type, # Replace all None initial_weight=50, # Replace all None ### END SOLUTION - DO NOT delete this comment for grading purposes initial_instance_count=1, variant_name='VariantB' ) print(variantB) ``` <a name='c3w2-1.4.'></a> ### 1.4. Configure and create the endpoint <img src="images/endpoint-workflow-4-configuration.png" width="60%" align="center"> You will use the following functions to check if the endpoint configuration and endpoint itself already exist in Amazon SageMaker. ``` def check_endpoint_config_existence(endpoint_config_name): for endpoint_config in sm.list_endpoint_configs()['EndpointConfigs']: if endpoint_config_name == endpoint_config['EndpointConfigName']: return True return False def check_endpoint_existence(endpoint_name): for endpoint in sm.list_endpoints()['Endpoints']: if endpoint_name == endpoint['EndpointName']: return True return False ``` Create the endpoint configuration by specifying the name and pointing to the two production variants that you just configured that tell SageMaker how you want to host those models. ``` endpoint_config_name = '{}-{}'.format('ab', timestamp) if not check_endpoint_config_existence(endpoint_config_name): endpoint_config = sm.create_endpoint_config( EndpointConfigName=endpoint_config_name, ProductionVariants=[variantA, variantB] ) pprint(endpoint_config) else: print("Endpoint configuration {} already exists".format(endpoint_config_name)) ``` <img src="images/endpoint-workflow-5-endpoint.png" width="60%" align="center"> Construct the endpoint name. ``` model_ab_endpoint_name = '{}-{}'.format('ab', timestamp) print('Endpoint name: {}'.format(model_ab_endpoint_name)) ``` <a name='c3w2-ex-6'></a> ### Exercise 6 Create an endpoint with the endpoint name and configuration defined above. 
``` if not check_endpoint_existence(model_ab_endpoint_name): endpoint_response = sm.create_endpoint( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes EndpointName=model_ab_endpoint_name, # Replace None EndpointConfigName=endpoint_config_name # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes ) print('Creating endpoint {}'.format(model_ab_endpoint_name)) pprint(endpoint_response) else: print("Endpoint {} already exists".format(model_ab_endpoint_name)) ``` Review the created endpoint configuration in the AWS console. **Instructions**: - open the link - notice that you are in the section Amazon SageMaker -> Endpoint configuration - check the name of the endpoint configuration, its Amazon Resource Name (ARN) and production variants - click on the production variants and check their container information: image and model data location ``` from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpointConfig/{}">REST Endpoint configuration</a></b>'.format( region, endpoint_config_name ) ) ) ``` Review the created endpoint in the AWS console. **Instructions**: - open the link - notice that you are in the section Amazon SageMaker -> Endpoints - check the name of the endpoint, its ARN and status - below you can review the monitoring metrics such as CPU, memory and disk utilization. Further down you can see the endpoint configuration settings with its production variants ``` from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name))) ``` Wait for the endpoint to deploy. ### _This cell will take approximately 5-10 minutes to run._ ``` %%time waiter = sm.get_waiter('endpoint_in_service') waiter.wait(EndpointName=model_ab_endpoint_name) ``` _Wait until the ^^ endpoint ^^ is deployed_ <a name='c3w2-2.'></a> # 2. Test model <a name='c3w2-2.1.'></a> ### 2.1. Test the model on a few sample strings Here, you will pass sample strings of text to the endpoint in order to see the sentiment. You are given one example of each, however, feel free to play around and change the strings yourself! <a name='c3w2-ex-7'></a> ### Exercise 7 Create an Amazon SageMaker Predictor based on the deployed endpoint. **Instructions**: Use the `Predictor` object with the following parameters. Please pass JSON serializer and deserializer objects here, calling them with the functions `JSONLinesSerializer()` and `JSONLinesDeserializer()`, respectively. More information about the serializers can be found [here](https://sagemaker.readthedocs.io/en/stable/api/inference/serializers.html). 
```python predictor = Predictor( endpoint_name=..., # endpoint name serializer=..., # a serializer object, used to encode data for an inference endpoint deserializer=..., # a deserializer object, used to decode data from an inference endpoint sagemaker_session=sess ) ``` ``` from sagemaker.predictor import Predictor from sagemaker.serializers import JSONLinesSerializer from sagemaker.deserializers import JSONLinesDeserializer inputs = [ {"features": ["I love this product!"]}, {"features": ["OK, but not great."]}, {"features": ["This is not the right product."]}, ] predictor = Predictor( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes endpoint_name=model_ab_endpoint_name, # Replace None serializer=JSONLinesSerializer(), # Replace None deserializer=JSONLinesDeserializer(), # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes sagemaker_session=sess ) predicted_classes = predictor.predict(inputs) for predicted_class in predicted_classes: print("Predicted class {} with probability {}".format(predicted_class['predicted_label'], predicted_class['probability'])) ``` <a name='c3w2-2.2.'></a> ### 2.2. Generate traffic and review the endpoint performance metrics Now you will generate traffic. To analyze the endpoint performance you will review some of the metrics that Amazon SageMaker emits in CloudWatch: CPU Utilization, Latency and Invocations. Full list of namespaces and metrics can be found [here](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html). CloudWatch `get_metric_statistics` documentation can be found [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html). But before that, let's create a function that will help to extract the results from CloudWatch and plot them. ``` def plot_endpoint_metrics_for_variants(endpoint_name, namespace_name, metric_name, variant_names, start_time, end_time): try: joint_variant_metrics = None for variant_name in variant_names: metrics = cw.get_metric_statistics( # extracts the results in a dictionary format Namespace=namespace_name, # the namespace of the metric, e.g. "AWS/SageMaker" MetricName=metric_name, # the name of the metric, e.g. 
"CPUUtilization" StartTime=start_time, # the time stamp that determines the first data point to return EndTime=end_time, # the time stamp that determines the last data point to return Period=60, # the granularity, in seconds, of the returned data points Statistics=["Sum"], # the metric statistics Dimensions=[ # dimensions, as CloudWatch treats each unique combination of dimensions as a separate metric {"Name": "EndpointName", "Value": endpoint_name}, {"Name": "VariantName", "Value": variant_name} ], ) if metrics["Datapoints"]: # access the results from the distionary using the key "Datapoints" df_metrics = pd.DataFrame(metrics["Datapoints"]) \ .sort_values("Timestamp") \ .set_index("Timestamp") \ .drop("Unit", axis=1) \ .rename(columns={"Sum": variant_name}) # rename the column with the metric results as a variant_name if joint_variant_metrics is None: joint_variant_metrics = df_metrics else: joint_variant_metrics = joint_variant_metrics.join(df_metrics, how="outer") joint_variant_metrics.plot(title=metric_name) except: pass ``` Establish wide enough time bounds to show all the charts using the same timeframe: ``` from datetime import datetime, timedelta start_time = datetime.now() - timedelta(minutes=30) end_time = datetime.now() + timedelta(minutes=30) print('Start Time: {}'.format(start_time)) print('End Time: {}'.format(end_time)) ``` Set the list of the the variant names to analyze. ``` variant_names = [variantA["VariantName"], variantB["VariantName"]] print(variant_names) ``` Run some predictions and view the metrics for each variant. ### _This cell will take approximately 1-2 minutes to run._ ``` %%time for i in range(0, 100): predicted_classes = predictor.predict(inputs) ``` _Μake sure the predictions ^^ above ^^ ran successfully_ Let’s query CloudWatch to get a few metrics that are split across variants. If you see `Metrics not yet available`, please be patient as metrics may take a few mins to appear in CloudWatch. ``` time.sleep(30) # Sleep to accomodate a slight delay in metrics gathering # CPUUtilization # The sum of each individual CPU core's utilization. # The CPU utilization of each core can range between 0 and 100. For example, if there are four CPUs, CPUUtilization can range from 0% to 400%. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="/aws/sagemaker/Endpoints", metric_name="CPUUtilization", variant_names=variant_names, start_time=start_time, end_time=end_time ) # Invocations # The number of requests sent to a model endpoint. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="Invocations", variant_names=variant_names, start_time=start_time, end_time=end_time ) # InvocationsPerInstance # The number of invocations sent to a model, normalized by InstanceCount in each production variant. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="InvocationsPerInstance", variant_names=variant_names, start_time=start_time, end_time=end_time ) # ModelLatency # The interval of time taken by a model to respond as viewed from SageMaker (in microseconds). plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="ModelLatency", variant_names=variant_names, start_time=start_time, end_time=end_time ) ``` <a name='c3w2-3.'></a> # 3. 
Shift the traffic to one variant and review the endpoint performance metrics Generally, the winning model would need to be chosen. The decision would be made based on the endpoint performance metrics and some other business related evaluations. Here you can assume that the winning model is in the Variant B and shift all traffic to it. Construct a list with the updated endpoint weights. ### _**No downtime** occurs during this traffic-shift activity._ ### _This may take a few minutes. Please be patient._ ``` updated_endpoint_config = [ { "VariantName": variantA["VariantName"], "DesiredWeight": 0, }, { "VariantName": variantB["VariantName"], "DesiredWeight": 100, }, ] ``` <a name='c3w2-ex-8'></a> ### Exercise 8 Update variant weights in the configuration of the existing endpoint. **Instructions**: Use the `sm.update_endpoint_weights_and_capacities` function, passing the endpoint name and list of updated weights for each of the variants that you defined above. ``` sm.update_endpoint_weights_and_capacities( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes EndpointName=model_ab_endpoint_name, # Replace None DesiredWeightsAndCapacities=updated_endpoint_config # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes ) ``` _Wait for the ^^ endpoint update ^^ to complete above_ This may take a few minutes. Please be patient. ### _There is **no downtime** while the update is applying._ While waiting for the update (or afterwards) you can review the endpoint in the AWS console. **Instructions**: - open the link - notice that you are in the section Amazon SageMaker -> Endpoints - check the name of the endpoint, its ARN and status (`Updating` or `InService`) - below you can see the endpoint runtime settings with the updated weights ``` from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name))) waiter = sm.get_waiter("endpoint_in_service") waiter.wait(EndpointName=model_ab_endpoint_name) ``` Run some more predictions and view the metrics for each variant. ### _This cell will take approximately 1-2 minutes to run._ ``` %%time for i in range(0, 100): predicted_classes = predictor.predict(inputs) ``` _Μake sure the predictions ^^ above ^^ ran successfully_ If you see `Metrics not yet available`, please be patient as metrics may take a few minutes to appear in CloudWatch. Compare the results with the plots above. ``` # CPUUtilization # The sum of each individual CPU core's utilization. # The CPU utilization of each core can range between 0 and 100. For example, if there are four CPUs, CPUUtilization can range from 0% to 400%. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="/aws/sagemaker/Endpoints", metric_name="CPUUtilization", variant_names=variant_names, start_time=start_time, end_time=end_time ) # Invocations # The number of requests sent to a model endpoint. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="Invocations", variant_names=variant_names, start_time=start_time, end_time=end_time ) # InvocationsPerInstance # The number of invocations sent to a model, normalized by InstanceCount in each production variant. 
plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="InvocationsPerInstance", variant_names=variant_names, start_time=start_time, end_time=end_time ) # ModelLatency # The interval of time taken by a model to respond as viewed from SageMaker (in microseconds). plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="ModelLatency", variant_names=variant_names, start_time=start_time, end_time=end_time ) ``` <a name='c3w2-4.'></a> # 4. Configure one variant to autoscale Let's configure Variant B to autoscale. You would not autoscale Variant A since no traffic is being passed to it at this time. First, you need to define a scalable target. It is an AWS resource and in this case you want to scale a `sagemaker` resource as indicated in the `ServiceNameSpace` parameter. Then the `ResourceId` is a SageMaker Endpoint. Because autoscaling is used by other AWS resources, you’ll see a few parameters that will remain static for scaling SageMaker Endpoints. Thus the `ScalableDimension` is a set value for SageMaker Endpoint scaling. You also need to specify a few key parameters that control the min and max behavior for your Machine Learning instances. The `MinCapacity` indicates the minimum number of instances you plan to scale in to. The `MaxCapacity` is the maximum number of instances you want to scale out to. So in this case you always want to have at least 1 instance running and a maximum of 2 during peak periods. ``` autoscale.register_scalable_target( ServiceNamespace="sagemaker", ResourceId="endpoint/" + model_ab_endpoint_name + "/variant/VariantB", ScalableDimension="sagemaker:variant:DesiredInstanceCount", MinCapacity=1, MaxCapacity=2, RoleARN=role, SuspendedState={ "DynamicScalingInSuspended": False, "DynamicScalingOutSuspended": False, "ScheduledScalingSuspended": False, }, ) waiter = sm.get_waiter("endpoint_in_service") waiter.wait(EndpointName=model_ab_endpoint_name) ``` Check that the parameters from the function above are in the description of the scalable target: ``` autoscale.describe_scalable_targets( ServiceNamespace="sagemaker", MaxResults=100, ) ``` Define and apply scaling policy using the `put_scaling_policy` function. The scaling policy provides additional information about the scaling behavior for your instance. `TargetTrackingScaling` refers to a specific autoscaling type supported by SageMaker, that uses a scaling metric and a target value as the indicator to scale. In the scaling policy configuration, you have the predefined metric `PredefinedMetricSpecification` which is the number of invocations on your instance and the `TargetValue` which indicates the number of invocations per ML instance you want to allow before triggering your scaling policy. A scale out cooldown of 60 seconds means that after autoscaling successfully scales out it starts to calculate the cooldown time. The scaling policy won’t increase the desired capacity again until the cooldown period ends. The scale in cooldown setting of 300 seconds means that SageMaker will not attempt to start another cooldown policy within 300 seconds of when the last one completed. 
``` autoscale.put_scaling_policy( PolicyName="bert-reviews-autoscale-policy", ServiceNamespace="sagemaker", ResourceId="endpoint/" + model_ab_endpoint_name + "/variant/VariantB", ScalableDimension="sagemaker:variant:DesiredInstanceCount", PolicyType="TargetTrackingScaling", TargetTrackingScalingPolicyConfiguration={ "TargetValue": 2.0, # the number of invocations per ML instance you want to allow before triggering your scaling policy "PredefinedMetricSpecification": { "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance", # scaling metric }, "ScaleOutCooldown": 60, # wait time, in seconds, before beginning another scale out activity after last one completes "ScaleInCooldown": 300, # wait time, in seconds, before beginning another scale in activity after last one completes }, ) waiter = sm.get_waiter("endpoint_in_service") waiter.wait(EndpointName=model_ab_endpoint_name) ``` Generate traffic again and review the endpoint in the AWS console. ### _This cell will take approximately 1-2 minutes to run._ ``` %%time for i in range(0, 100): predicted_classes = predictor.predict(inputs) ``` Review the autoscaling: - open the link - notice that you are in the section Amazon SageMaker -> Endpoints - below you can see the endpoint runtime settings with the instance counts. You can run the predictions multiple times to observe the increase of the instance count to 2 ``` from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name))) ``` Upload the notebook into S3 bucket for grading purposes. **Note:** you may need to click on "Save" button before the upload. ``` !aws s3 cp ./C3_W2_Assignment.ipynb s3://$bucket/C3_W2_Assignment_Learner.ipynb ```
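(Optional) Instead of checking the instance counts in the console, you can also read them from the endpoint description. This is a minimal sketch, not part of the graded assignment; it only assumes the `sm` Boto3 SageMaker client and the `model_ab_endpoint_name` variable already defined in this notebook.

```
# Sketch: confirm the scale-out programmatically via the endpoint description.
response = sm.describe_endpoint(EndpointName=model_ab_endpoint_name)

for variant in response["ProductionVariants"]:
    print(
        variant["VariantName"],
        "| current instances:", variant["CurrentInstanceCount"],
        "| desired instances:", variant["DesiredInstanceCount"],
    )
```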
# Class Coding Lab: Introduction to Programming

The goals of this lab are to help you to understand:

1. the Jupyter and IDLE programming environments
2. basic Python syntax
3. variables and their use
4. how to sequence instructions together into a cohesive program
5. the input() function for input and the print() function for output

## Let's start with an example: Hello, world!

This program asks for your name as input, then says hello to you as output. Most often it's the first program you write when learning a new programming language. Click in the cell below and click the run cell button.

```
your_name = input("What is your name? ")
print('Hello there',your_name)
```

Believe it or not there's a lot going on in this simple two-line program, so let's break it down.

- The first line:
  - Asks you for input, prompting you `What is your name?`
  - It then stores your input in the variable `your_name`
- The second line:
  - prints out the following text: `Hello there`
  - then prints out the contents of the variable `your_name`

At this point you might have a few questions. What is a variable? Why do I need it? Why is this two lines? Etc... All will be revealed in time.

## Variables

Variables are names in our code which store values. I think of variables as cardboard boxes. Boxes hold things. Variables hold things. The name of the variable is on the outside of the box (that way you know which box it is), and the value of the variable represents the contents of the box.

### Variable Assignment

**Assignment** is an operation where we store data in our variable. It's like packing something up in the box. In this example we assign the value "USA" to the variable **country**

```
# Here's an example of variable assignment.
country = 'USA'
```

### Variable Access

What good is storing data if you cannot retrieve it? Lucky for us, retrieving the data in a variable is as simple as calling its name:

```
country # This should say 'USA'
```

At this point you might be thinking: Can I overwrite a variable? The answer, of course, is yes! Just re-assign it a different value:

```
country = 'Canada'
```

You can also access a variable multiple times. Each time it simply gives you its value:

```
country, country, country
```

### The Purpose Of Variables

Variables play a vital role in programming. Computer instructions have no memory of each other. That is, one line of code has no idea what is happening in the other lines of code. The only way we can "connect" what happens from one line to the next is through variables. For example, if we re-write the Hello, World program at the top of the page without variables, we get the following:

```
input("What is your name? ")
print('Hello there')
```

When you execute this program, notice there is no longer a connection between the input and the output. In fact, the input on line 1 doesn't matter because the output on line 2 doesn't know about it. It cannot because we never stored the results of the input into a variable!

### What's in a name? Um, EVERYTHING

Computer code serves two equally important purposes:

1. To solve a problem (obviously)
2. To communicate how you solved the problem to another person (hmmm... I didn't think of that!)

If our code does something useful, like land a rocket, predict the weather, or calculate month-end account balances, then the chances are 100% certain that *someone else will need to read and understand our code.* Therefore it's just as important that we develop code that is easily understood by both the computer and our colleagues.
This starts with the names we choose for our variables. Consider the following program:

```
y = input("Enter your city: ")
x = input("Enter your state: ")
print(x,y,'is a nice place to live')
```

What do `x` and `y` represent? Is there a semantic (design) error in this program? You might find it easy to figure out the answers to these questions, but consider this more human-friendly version:

```
state = input("Enter your city: ")
city = input("Enter your state: ")
print(city,state,'is a nice place to live')
```

Do the aptly-named variables make it easier to find the semantic errors in this second version?

### You Do It:

Finally, re-write this program so that it uses well-thought-out variables AND is semantically correct:

```
# TODO: Re-write the above program to work as it should, stating: City, State is a nice place to live
city = input("Enter your city: ")
state = input("Enter your state: ")
print(city + ",", state, "is a nice place to live")
```

### Now Try This:

Now try to write a program which asks for two separate inputs: your first name and your last name. The program should then output `Hello` with your first name and last name.

For example if you enter `Mike` for the first name and `Fudge` for the last name the program should output `Hello Mike Fudge`

**HINTS**

 - Use appropriate variable names. If you need to create a two-word variable name, use an underscore in place of the space between the words, e.g. `two_words`
 - You will need a separate set of inputs for each name.

```
# TODO: write your code here
first_name = input("What's your name? ")
last_name = input("What's your last name? ")
print ("Hello,",first_name,last_name)
```

### Variable Concatenation: Your First Operator

The `+` symbol is used to combine two variables containing text values together. Consider the following example:

```
prefix = "re"
suffix = "ment"
root = input("Enter a root word, like 'ship': ")
print( prefix + root + suffix)
```

### Now Try This

Write a program to prompt for three colors as input, then output those three colors as a list, informing me which one was the middle (2nd entered) color. For example if you were to enter `red` then `green` then `blue` the program would output: `Your colors were: red, green, and blue. The middle color was green.`

**HINTS**

 - you'll need three variables, one for each input
 - you should try to make the program output like my example. This includes commas and the word `and`.

```
# TODO: write your code here
first_color = input("Choose a color: ")
second_color = input("Choose another color: ")
third_color = input("Choose another color: ")
print("Your colors were", first_color + ",", second_color + ", and", third_color + ". The middle color was", second_color + ".")
```
# Before your start: - Read the README.md file - Comment as much as you can and use the resources (README.md file) - Happy learning! ``` #import numpy and pandas ``` # Challenge 1 - The `stats` Submodule This submodule contains statistical functions for conducting hypothesis tests, producing various distributions and other useful tools. Let's examine this submodule using the KickStarter dataset. Load the data using Ironhack's database (db: kickstarter, table: projects). ``` # Your code here: ``` Now print the `head` function to examine the dataset. ``` # Your code here: ``` Import the `mode` function from `scipy.stats` and find the mode of the `country` and `currency` column. ``` # Your code here: ``` The trimmed mean is a function that computes the mean of the data with observations removed. The most common way to compute a trimmed mean is by specifying a percentage and then removing elements from both ends. However, we can also specify a threshold on both ends. The goal of this function is to create a more robust method of computing the mean that is less influenced by outliers. SciPy contains a function called `tmean` for computing the trimmed mean. In the cell below, import the `tmean` function and then find the 75th percentile of the `goal` column. Compute the trimmed mean between 0 and the 75th percentile of the column. Read more about the `tmean` function [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.tmean.html#scipy.stats.tmean). ``` # Your code here: ``` #### SciPy contains various statistical tests. One of the tests is Fisher's exact test. This test is used for contingency tables. The test originates from the "Lady Tasting Tea" experiment. In 1935, Fisher published the results of the experiment in his book. The experiment was based on a claim by Muriel Bristol that she can taste whether tea or milk was first poured into the cup. Fisher devised this test to disprove her claim. The null hypothesis is that the treatments do not affect outcomes, while the alternative hypothesis is that the treatment does affect outcome. To read more about Fisher's exact test, see: * [Wikipedia's explanation](http://b.link/test61) * [A cool deep explanation](http://b.link/handbook47) * [An explanation with some important Fisher's considerations](http://b.link/significance76) Let's perform Fisher's exact test on our KickStarter data. We intend to test the hypothesis that the choice of currency has an impact on meeting the pledge goal. We'll start by creating two derived columns in our dataframe. The first will contain 1 if the amount of money in `usd_pledged_real` is greater than the amount of money in `usd_goal_real`. We can compute this by using the `np.where` function. If the amount in one column is greater than the other, enter a value of 1, otherwise enter a value of zero. Add this column to the dataframe and name it `goal_met`. ``` # Your code here: ``` Next, create a column that checks whether the currency of the project is in US Dollars. Create a column called `usd` using the `np.where` function where if the currency is US Dollars, assign a value of 1 to the row and 0 otherwise. ``` # Your code here: ``` Now create a contingency table using the `pd.crosstab` function in the cell below to compare the `goal_met` and `usd` columns. Import the `fisher_exact` function from `scipy.stats` and conduct the hypothesis test on the contingency table that you have generated above. 
You can read more about the `fisher_exact` function [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisher_exact.html#scipy.stats.fisher_exact). The output of the function should be the odds ratio and the p-value. The p-value will provide you with the outcome of the test.

```
# Your code here:
```

# Challenge 2 - The `interpolate` submodule

This submodule allows us to interpolate between two points and create a continuous distribution based on the observed data.

In the cell below, import the `interp1d` function and first take a sample of 10 rows from `kickstarter`.

```
# Your code here:
```

Next, create a linear interpolation of the backers as a function of `usd_pledged_real`. Create a function `f` that generates a linear interpolation of backers as predicted by the amount of real pledged dollars.

```
# Your code here:
```

Now create a new variable called `x_new`. This variable will contain all integers between the minimum number of backers in our sample and the maximum number of backers. The goal here is to take the dataset that contains few observations due to sampling and fill all observations with a value using the interpolation function.

Hint: one option is the `np.arange` function.

```
# Your code here:
```

Plot function f for all values of `x_new`. Run the code below.

```
# Run this code:

%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(x_new, f(x_new))
```

Next create a function that will generate a cubic interpolation function. Name the function `g`.

```
# Your code here:


# Run this code:

plt.plot(x_new, g(x_new))
```

# Bonus Challenge - The Binomial Distribution

The binomial distribution allows us to calculate the probability of k successes in n trials for a random variable with two possible outcomes (which we typically label success and failure).

The probability of success is typically denoted by p and the probability of failure is denoted by 1-p.

The `scipy.stats` submodule contains a `binom` function for computing the probabilities of a random variable with the binomial distribution. You may read more about the binomial distribution [here](http://b.link/binomial55).

* In the cell below, compute the probability that a die lands on 5 exactly 3 times in 8 tries.

```
# Your code here:
```

* Do a simulation for the last event: write a function that simulates 8 tries and returns 1 if the result is 5 exactly 3 times and 0 otherwise. Now launch your simulation.

```
# Your code here:
```

* Launch 10 simulations and represent the results in a bar plot. Then launch 1000 simulations and plot them. What do you see?

```
# Your code here:
```
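For orientation only (not the graded solution), here is one hedged sketch of how the `binom` API described above could be used for the dice question, together with a quick NumPy simulation as a sanity check:

```
# Illustrative sketch only: exact binomial probability vs. a quick simulation.
import numpy as np
from scipy.stats import binom

# P(exactly 3 fives in 8 rolls of a fair die); success probability is 1/6
print("Exact probability:", binom.pmf(3, 8, 1/6))

# One simulated experiment: roll 8 dice, return 1 if exactly three fives appear
def one_experiment(rng):
    rolls = rng.integers(1, 7, size=8)
    return int(np.sum(rolls == 5) == 3)

rng = np.random.default_rng(42)
print("Simulated estimate:", np.mean([one_experiment(rng) for _ in range(10_000)]))
```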
``` # Import libraries import numpy as np import pandas as pd import sklearn as sk import matplotlib import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties # for unicode fonts import psycopg2 import sys import datetime as dt import mp_utils as mp from sklearn.pipeline import Pipeline # used to impute mean for data and standardize for computational stability from sklearn.preprocessing import Imputer from sklearn.preprocessing import StandardScaler # logistic regression is our favourite model ever from sklearn.linear_model import LogisticRegression from sklearn.linear_model import LogisticRegressionCV # l2 regularized regression from sklearn.linear_model import LassoCV # used to calculate AUROC/accuracy from sklearn import metrics # used to create confusion matrix from sklearn.metrics import confusion_matrix # gradient boosting - must download package https://github.com/dmlc/xgboost import xgboost as xgb # default colours for prettier plots col = [[0.9047, 0.1918, 0.1988], [0.2941, 0.5447, 0.7494], [0.3718, 0.7176, 0.3612], [1.0000, 0.5482, 0.1000], [0.4550, 0.4946, 0.4722], [0.6859, 0.4035, 0.2412], [0.9718, 0.5553, 0.7741], [0.5313, 0.3359, 0.6523]]; # "Tableau 20" colors as RGB. tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148), (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)] # Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts. for i in range(len(tableau20)): r, g, b = tableau20[i] tableau20[i] = (r / 255., g / 255., b / 255.) marker = ['v','o','d','^','s','>','+'] ls = ['-','-','-','-','-','s','--','--'] # bigger font ! 
plt.rcParams.update({'font.size': 22}) %matplotlib inline from __future__ import print_function ``` # Plot data from example patient's time-series ``` df = pd.read_csv('/tmp/mp_data.csv') # load in this patient's deathtime from the actual experiment df_offset = pd.read_csv('/tmp/mp_death.csv') # get censoring information df_censor = pd.read_csv('/tmp/mp_censor.csv') ``` # Experiment A: First 24 hours ``` # define the patient iid = 200001 iid2 = 200019 T_WINDOW = 24 time_dict = {iid: 24, iid2: 24} df_pat = df.loc[df['icustay_id']==iid, :].set_index('hr') deathtime = df_offset.loc[df_offset['icustay_id']==iid, 'deathtime_hours'].values # Two subplots, the axes array is 1-d f, axarr = plt.subplots(2, sharex=True, figsize=[10,10]) pretty_labels = {'heartrate': 'Heart rate', 'meanbp': 'Mean blood pressure', 'resprate': 'Respiratory rate', 'spo2': 'Peripheral oxygen saturation', 'tempc': 'Temperature', 'bg_ph': 'pH', 'bg_bicarbonate': 'Serum bicarbonate', 'hemoglobin': 'Hemoglobin', 'potassium': 'Potassium', 'inr': 'International normalized ratio', 'bg_lactate': 'Lactate', 'wbc': 'White blood cell count'} #var_list = df.columns # first plot all the vitals in subfigure 1 var_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2'] i=0 t_scale = 1.0 # divide by this to get from hours to t_unit t_unit = 'Hours elapsed' for v in var_vitals: idx = ~df_pat[v].isnull() if np.sum(idx) > 0: axarr[0].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--', label=pretty_labels[v], marker=marker[np.mod(i,7)], markersize=8, color=tableau20[i], linewidth=2) i+=1 axarr[0].set_ylim([0,150]) y_lim = axarr[0].get_ylim() # add ICU discharge if dischtime is not np.nan: axarr[0].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3) # add a grey patch to represent the window endtime = time_dict[iid] rect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd') axarr[0].add_patch(rect) # #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16) axarr[0].set_ylabel('Vital signs for {}'.format(iid),fontsize=16) # next plot the vitals for the next patient in subfigure 2 df_pat = df.loc[df['icustay_id']==iid2, :].set_index('hr') deathtime = df_offset.loc[df_offset['icustay_id']==iid2, 'deathtime_hours'].values i=0 t_scale = 1.0 # divide by this to get from hours to t_unit t_unit = 'Hours elapsed since ICU admission' for v in var_vitals: idx = ~df_pat[v].isnull() if np.sum(idx) > 0: axarr[1].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--', label=pretty_labels[v], marker=marker[np.mod(i,7)], markersize=8, color=tableau20[i], linewidth=2) i+=1 axarr[1].set_ylim([0,150]) y_lim = axarr[1].get_ylim() # add ICU discharge if deathtime is not np.nan: axarr[1].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3) axarr[1].arrow(deathtime-5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k') axarr[1].text(deathtime-12, 112.5, 'Death', fontsize=16) # add DNR dnrtime = df_censor.loc[df_censor['icustay_id']==iid2, 'censortime_hours'].values if dnrtime.shape[0]>0: axarr[1].plot([dnrtime,dnrtime], y_lim, 'm:', linewidth=3) axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k') axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16) # add a patch to represent the window endtime = time_dict[iid2] rect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd') axarr[1].add_patch(rect) axarr[1].set_xlabel(t_unit,fontsize=16) axarr[1].set_ylabel('Vital signs for {}'.format(iid2),fontsize=16) 
axarr[1].legend(shadow=True, fancybox=True,loc='upper center', bbox_to_anchor=(0.5, 1.21),ncol=3) plt.show() ``` # Experiment B: Random time ``` # generate a random time dictionary T_WINDOW=4 df_tmp=df_offset.copy().merge(df_censor, how='left', left_on='icustay_id', right_on='icustay_id') time_dict = mp.generate_times(df_tmp, T=2, seed=111, censor=True) # define the patient iid = 200001 iid2 = 200019 df_pat = df.loc[df['icustay_id']==iid, :].set_index('hr') deathtime = df_offset.loc[df_offset['icustay_id']==iid, 'deathtime_hours'].values # Two subplots, the axes array is 1-d f, axarr = plt.subplots(2, sharex=True, figsize=[10,10]) pretty_labels = {'heartrate': 'Heart rate', 'meanbp': 'Mean blood pressure', 'resprate': 'Respiratory rate', 'spo2': 'Peripheral oxygen saturation', 'tempc': 'Temperature', 'bg_ph': 'pH', 'bg_bicarbonate': 'Serum bicarbonate', 'hemoglobin': 'Hemoglobin', 'potassium': 'Potassium', 'inr': 'International normalized ratio', 'bg_lactate': 'Lactate', 'wbc': 'White blood cell count'} #var_list = df.columns # first plot all the vitals in subfigure 1 var_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2'] i=0 t_scale = 1.0 # divide by this to get from hours to t_unit t_unit = 'Hours elapsed' for v in var_vitals: idx = ~df_pat[v].isnull() if np.sum(idx) > 0: axarr[0].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--', label=pretty_labels[v], marker=marker[np.mod(i,7)], color=tableau20[i], linewidth=2) i+=1 axarr[0].set_ylim([0,150]) y_lim = axarr[0].get_ylim() # add ICU discharge if dischtime is not np.nan: axarr[0].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3) # add a grey patch to represent the window endtime = time_dict[iid] rect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd') axarr[0].add_patch(rect) # #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16) axarr[0].set_ylabel('Vital signs for {}'.format(iid),fontsize=16) # next plot the vitals for the next patient in subfigure 2 df_pat = df.loc[df['icustay_id']==iid2, :].set_index('hr') deathtime = df_offset.loc[df_offset['icustay_id']==iid2, 'deathtime_hours'].values i=0 t_scale = 1.0 # divide by this to get from hours to t_unit t_unit = 'Hours elapsed since ICU admission' for v in var_vitals: idx = ~df_pat[v].isnull() if np.sum(idx) > 0: axarr[1].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--', label=pretty_labels[v], marker=marker[np.mod(i,7)], markersize=8, color=tableau20[i], linewidth=2) i+=1 axarr[1].set_ylim([0,150]) y_lim = axarr[1].get_ylim() # add ICU discharge if deathtime is not np.nan: axarr[1].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3) axarr[1].arrow(deathtime-5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k') axarr[1].text(deathtime-12, 112.5, 'Death', fontsize=16) # add DNR dnrtime = df_censor.loc[df_censor['icustay_id']==iid2, 'censortime_hours'].values if dnrtime.shape[0]>0: axarr[1].plot([dnrtime,dnrtime], y_lim, 'm:', linewidth=3) axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k') axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16) # add a patch to represent the window endtime = time_dict[iid2] rect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd') axarr[1].add_patch(rect) axarr[1].set_xlabel(t_unit,fontsize=16) axarr[1].set_ylabel('Vital signs for {}'.format(iid2),fontsize=16) #axarr[1].legend(shadow=True, fancybox=True,loc='upper center', bbox_to_anchor=(0.5, 1.1),ncol=3) plt.show() ``` 
# Both 24 hours and 4 hour window ``` # generate a random time dictionary T_WINDOW=4 df_tmp=df_offset.copy().merge(df_censor, how='left', left_on='icustay_id', right_on='icustay_id') time_dict = mp.generate_times(df_tmp, T=2, seed=111, censor=True) # define the patient iid = 200001 iid2 = 200019 df_pat = df.loc[df['icustay_id']==iid, :].set_index('hr') deathtime = df_offset.loc[df_offset['icustay_id']==iid, 'deathtime_hours'].values # Two subplots, the axes array is 1-d f, axarr = plt.subplots(2, sharex=True, figsize=[10,10]) pretty_labels = {'heartrate': 'Heart rate', 'meanbp': 'Mean blood pressure', 'resprate': 'Respiratory rate', 'spo2': 'Peripheral oxygen saturation', 'tempc': 'Temperature', 'bg_ph': 'pH', 'bg_bicarbonate': 'Serum bicarbonate', 'hemoglobin': 'Hemoglobin', 'potassium': 'Potassium', 'inr': 'International normalized ratio', 'bg_lactate': 'Lactate', 'wbc': 'White blood cell count'} #var_list = df.columns # first plot all the vitals in subfigure 1 var_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2'] i=0 t_scale = 1.0 # divide by this to get from hours to t_unit t_unit = 'Hours elapsed' for v in var_vitals: idx = ~df_pat[v].isnull() if np.sum(idx) > 0: axarr[0].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--', label=pretty_labels[v], marker=marker[np.mod(i,7)], color=tableau20[i], linewidth=2) i+=1 axarr[0].set_ylim([0,150]) y_lim = axarr[0].get_ylim() # add ICU discharge if dischtime is not np.nan: axarr[0].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3) # add a grey patch to represent the 4 hour window endtime = time_dict[iid] rect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd') axarr[0].add_patch(rect) # #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16) # add a grey patch to represent the 24 hour window rect = matplotlib.patches.Rectangle( (0, y_lim[0]), 24, y_lim[1], color='#bdbdbd') axarr[0].add_patch(rect) # #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16) axarr[0].set_ylabel('Vital signs for {}'.format(iid),fontsize=16) # next plot the vitals for the next patient in subfigure 2 df_pat = df.loc[df['icustay_id']==iid2, :].set_index('hr') deathtime = df_offset.loc[df_offset['icustay_id']==iid2, 'deathtime_hours'].values i=0 t_scale = 1.0 # divide by this to get from hours to t_unit t_unit = 'Hours elapsed since ICU admission' for v in var_vitals: idx = ~df_pat[v].isnull() if np.sum(idx) > 0: axarr[1].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--', label=pretty_labels[v], marker=marker[np.mod(i,7)], markersize=8, color=tableau20[i], linewidth=2) i+=1 axarr[1].set_ylim([0,150]) y_lim = axarr[1].get_ylim() # add ICU discharge if deathtime is not np.nan: axarr[1].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3) axarr[1].arrow(deathtime-5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k') axarr[1].text(deathtime-12, 112.5, 'Death', fontsize=16) # add DNR dnrtime = df_censor.loc[df_censor['icustay_id']==iid2, 'censortime_hours'].values if dnrtime.shape[0]>0: axarr[1].plot([dnrtime,dnrtime], y_lim, 'm:', linewidth=3) axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k') axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16) # add a patch to represent the 4 hour window endtime = time_dict[iid2] rect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd') axarr[1].add_patch(rect) axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k') 
axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16) # add a patch to represent the 24 hour window rect = matplotlib.patches.Rectangle( (0, y_lim[0]), 24, y_lim[1], color='#bdbdbd') axarr[1].add_patch(rect) axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k') axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16) axarr[1].set_xlabel(t_unit,fontsize=16) axarr[1].set_ylabel('Vital signs for {}'.format(iid2),fontsize=16) #axarr[1].legend(shadow=True, fancybox=True,loc='upper center', bbox_to_anchor=(0.5, 1.1),ncol=3) plt.show() ```
# Fairseq in Amazon SageMaker: Pre-trained English to French translation model

In this notebook, we will show you how to serve an English to French translation model using a pre-trained model provided by the [Fairseq toolkit](https://github.com/pytorch/fairseq).

## Permissions

Running this notebook requires permissions in addition to the regular SageMakerFullAccess permissions. This is because it creates new repositories in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy AmazonEC2ContainerRegistryFullAccess to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this, the new permissions will be available immediately.

## Download pre-trained model

Fairseq maintains their pre-trained models [here](https://github.com/pytorch/fairseq/blob/master/examples/translation/README.md). We will use the model that was pre-trained on the [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) dataset. As the models are archived in .bz2 format, we need to convert them to .tar.gz, as this is the format supported by Amazon SageMaker.

### Convert archive

```
%%sh
wget https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2
tar xvjf wmt14.v2.en-fr.fconv-py.tar.bz2 > /dev/null
cd wmt14.en-fr.fconv-py
mv model.pt checkpoint_best.pt
tar czvf wmt14.en-fr.fconv-py.tar.gz checkpoint_best.pt dict.en.txt dict.fr.txt bpecodes README.md > /dev/null
```

The pre-trained model has been downloaded and converted. The next step is to upload the data to Amazon S3 in order to make it available for running inference.

### Upload data to Amazon S3

```
import sagemaker

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_session.region_name
account = sagemaker_session.boto_session.client("sts").get_caller_identity().get("Account")

bucket = sagemaker_session.default_bucket()
prefix = "sagemaker/DEMO-pytorch-fairseq/pre-trained-models"

role = sagemaker.get_execution_role()

trained_model_location = sagemaker_session.upload_data(
    path="wmt14.en-fr.fconv-py/wmt14.en-fr.fconv-py.tar.gz", bucket=bucket, key_prefix=prefix
)
```

## Build Fairseq serving container

Next we need to register a Docker image in Amazon SageMaker that will contain the Fairseq code and that will be pulled at inference time to perform the predictions from the pre-trained model we downloaded.

```
%%sh
chmod +x create_container.sh
./create_container.sh pytorch-fairseq-serve
```

The Fairseq serving image has been pushed into Amazon ECR, the registry from which Amazon SageMaker will be able to pull that image and launch both training and prediction.

## Hosting the pre-trained model for inference

We first need to define a base JSONPredictor class that will help us with sending predictions to the model once it's hosted on the Amazon SageMaker endpoint.

```
from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer

class JSONPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(JSONPredictor, self).__init__(
            endpoint_name, sagemaker_session, json_serializer, json_deserializer
        )
```

We can now use the Model class to deploy the model artifacts (the pre-trained model) on a CPU instance. Let's use an `ml.m5.xlarge`.
``` from sagemaker import Model algorithm_name = "pytorch-fairseq-serve" image = "{}.dkr.ecr.{}.amazonaws.com/{}:latest".format(account, region, algorithm_name) model = Model( model_data=trained_model_location, role=role, image=image, predictor_cls=JSONPredictor, ) predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge") ``` Now it's your time to play. Input a sentence in English and get the translation in French by simply calling predict. ``` import html result = predictor.predict("I love translation") # Some characters are escaped HTML-style requiring to unescape them before printing print(html.unescape(result)) ``` Once you're done with getting predictions, remember to shut down your endpoint as you no longer need it. ## Delete endpoint ``` model.sagemaker_session.delete_endpoint(predictor.endpoint) ``` Voila! For more information, you can check out the [Fairseq toolkit homepage](https://github.com/pytorch/fairseq).
``` import codecs from itertools import * import numpy as np from sklearn import svm from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn import tree from sklearn import model_selection from sklearn.model_selection import train_test_split from sklearn.ensemble import GradientBoostingClassifier import xgboost as xgb from sklearn.ensemble import RandomForestClassifier import pylab as pl def load_data(filename): file = codecs.open(filename,'r','utf-8') data = [] label = [] for line in islice(file,0,None): line = line.strip().split(',') #print ("reading data....") data.append([float(i) for i in line[1:-1]]) label.append(line[-1]) x = np.array(data) y = np.array(label) #print (x) #print (y) return x,y def logistic_regression(x_train,y_train): print("logistic_regression...") clf1 = LogisticRegression() score1 = model_selection.cross_val_score(clf1,x_train,y_train,cv=10,scoring="accuracy") x = [int(i) for i in range(1,11)] y = score1 pl.ylabel(u'Accuracy') pl.xlabel(u'times') pl.plot(x,y,label='LogReg') pl.legend() #pl.savefig("picture/LogReg.png") print (np.mean(score1)) def svm_(x_train,y_train): print("svm...") clf2 = svm.LinearSVC(random_state=2016) score2 = model_selection.cross_val_score(clf2,x_train,y_train,cv=10,scoring='accuracy') #print score2 print ('The accuracy of linearSVM:') print (np.mean(score2)) x = [int(i) for i in range(1, 11)] y = score2 pl.ylabel(u'Accuracy') pl.xlabel(u'times') pl.plot(x, y,label='SVM') pl.legend() #pl.savefig("picture/SVM.png") def gradient_boosting(x_train,y_train): print("gradient_boosting...") clf5 = GradientBoostingClassifier() score5 = model_selection.cross_val_score(clf5,x_train,y_train,cv=10,scoring="accuracy") print ('The accuracy of GradientBoosting:') print (np.mean(score5)) x = [int(i) for i in range(1, 11)] y = score5 pl.ylabel(u'Accuracy') pl.xlabel(u'times') pl.plot(x, y,label='GBDT') pl.legend() #pl.savefig("picture/GBDT.png") def xgb_boost(x_train,y_train): print("xgboost....") clf = xgb.XGBClassifier() score = model_selection.cross_val_score(clf,x_train,y_train,cv=10,scoring="accuracy") print ('The accuracy of XGBoosting:') print (np.mean(score)) x = [int(i) for i in range(1, 11)] y = score pl.ylabel(u'Accuracy') pl.xlabel(u'times') pl.plot(x, y,label='xgboost') pl.legend() #pl.savefig("picture/XGBoost.png") def random_forest(x_train,y_train): print("random_forest...") clf = RandomForestClassifier(n_estimators=100) score = model_selection.cross_val_score(clf,x_train,y_train,cv=10,scoring="accuracy") print ('The accuracy of RandomForest:') print (np.mean(score)) x = [int(i) for i in range(1, 11)] y = score pl.ylabel(u'Accuracy') pl.xlabel(u'times') pl.plot(x, y,label='RandForest') pl.legend() #pl.savefig("picture/RandomForest.png") def train_acc(filename): x_train,y_train = load_data(filename) logistic_regression(x_train,y_train) svm_(x_train,y_train) gradient_boosting(x_train,y_train) xgb_boost(x_train,y_train) random_forest(x_train,y_train) train_acc("feature1227/feature_all_1227.csv") train_acc("features/feature_all_1223.csv") train_acc("features/feature_amino_acid_freq_2_gram.csv") train_acc("features/feature_all_1224.csv") train_acc("feature1224/feature_amino_acid_freq_2_gram&pssmDT.csv") train_acc("feature1224/feature_amino_acid_freq_2_gram&localDPP.csv") train_acc("feature1224/feature_amino_acid_freq_2_gram&pssmDT&localDPP.csv") train_acc("feature1224/feature_amino_acid_freq_2_gram&amino_acid.csv") train_acc("feature1225/feature_amino_acid_freq_top_10.csv") 
train_acc("feature1225/feature_all_1225_1.csv") train_acc("feature1225/feature_all_1225_2.csv") train_acc("feature1225/feature_ACC_1225.csv") train_acc("final1225/feature_all.csv") train_acc("predict1226_2/feature_all.csv") from sklearn.externals import joblib x,y = load_data("predict1226_2/feature_all.csv") rf = RandomForestClassifier(n_estimators=100) rf.fit(x,y) joblib.dump(rf,"predict1226_2/rf.model") #y_pred = rf.predict(x) #y_preprob = rf.predict_proba(x)[:,1] #print (y_pred) #print (y_preprob) ```
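For completeness, a minimal sketch of reloading the persisted model for inference. It assumes the `rf.model` file written above and the notebook's `load_data` helper; note that `sklearn.externals.joblib` is deprecated in newer scikit-learn releases, so the standalone `joblib` package is used here.

```
import joblib  # standalone package; replaces the deprecated sklearn.externals.joblib

# Reload the persisted random forest and score it on the same feature file
rf_loaded = joblib.load("predict1226_2/rf.model")
x, y = load_data("predict1226_2/feature_all.csv")

print(rf_loaded.predict(x)[:10])              # predicted class labels
print(rf_loaded.predict_proba(x)[:, 1][:10])  # probability of the second class
```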
# SSD Evaluation Tutorial This is a brief tutorial that explains how compute the average precisions for any trained SSD model using the `Evaluator` class. The `Evaluator` computes the average precisions according to the Pascal VOC pre-2010 or post-2010 detection evaluation algorithms. You can find details about these computation methods [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:ap). As an example we'll evaluate an SSD300 on the Pascal VOC 2007 `test` dataset, but note that the `Evaluator` works for any SSD model and any dataset that is compatible with the `DataGenerator`. If you would like to run the evaluation on a different model and/or dataset, the procedure is analogous to what is shown below, you just have to build the appropriate model and load the relevant dataset. Note: I that in case you would like to evaluate a model on MS COCO, I would recommend to follow the [MS COCO evaluation notebook](https://github.com/pierluigiferrari/ssd_keras/blob/master/ssd300_evaluation_COCO.ipynb) instead, because it can produce the results format required by the MS COCO evaluation server and uses the official MS COCO evaluation code, which computes the mAP slightly differently from the Pascal VOC method. Note: In case you want to evaluate any of the provided trained models, make sure that you build the respective model with the correct set of scaling factors to reproduce the official results. The models that were trained on MS COCO and fine-tuned on Pascal VOC require the MS COCO scaling factors, not the Pascal VOC scaling factors. ``` from keras import backend as K from keras.models import load_model from keras.optimizers import Adam from imageio import imread import numpy as np from matplotlib import pyplot as plt from models.keras_ssd300 import ssd_300 from keras_loss_function.keras_ssd_loss import SSDLoss from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes from keras_layers.keras_layer_DecodeDetections import DecodeDetections from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast from keras_layers.keras_layer_L2Normalization import L2Normalization from data_generator.object_detection_2d_data_generator import DataGenerator from eval_utils.average_precision_evaluator import Evaluator %matplotlib inline import os import os.path as p # Set a few configuration parameters. img_height = 300 img_width = 300 n_classes = 20 model_mode = 'training' ``` ## 1. Load a trained SSD Either load a trained model or build a model and load trained weights into it. Since the HDF5 files I'm providing contain only the weights for the various SSD versions, not the complete models, you'll have to go with the latter option when using this implementation for the first time. You can then of course save the model and next time load the full model directly, without having to build it. You can find the download links to all the trained model weights in the README. ### 1.1. Build the model and load trained weights into it ``` # 1: Build the Keras model K.clear_session() # Clear previous models from memory. 
model = ssd_300(image_size=(img_height, img_width, 3), n_classes=n_classes, mode=model_mode, l2_regularization=0.0005, scales=[0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05], # The scales for MS COCO [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] aspect_ratios_per_layer=[[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]], two_boxes_for_ar1=True, steps=[8, 16, 32, 64, 100, 300], offsets=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5], clip_boxes=False, variances=[0.1, 0.1, 0.2, 0.2], normalize_coords=True, subtract_mean=[123, 117, 104], swap_channels=[2, 1, 0], confidence_thresh=0.01, iou_threshold=0.45, top_k=200, nms_max_output_size=400) # 2: Load the trained weights into the model. weights_path = '/usr/local/data/msmith/uncertainty/ssd_keras/good_dropout_model/ssd300_dropout_PASCAL2012_train_+12_epoch-58_loss-3.8960_val_loss-5.0832.h5' model.load_weights(weights_path, by_name=True) # 3: Compile the model so that Keras won't complain the next time you load it. adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) model.compile(optimizer=adam, loss=ssd_loss.compute_loss) ``` Or ### 1.2. Load a trained model We set `model_mode` to 'inference' above, so the evaluator expects that you load a model that was built in 'inference' mode. If you're loading a model that was built in 'training' mode, change the `model_mode` parameter accordingly. ``` # TODO: Set the path to the `.h5` file of the model to be loaded. model_path = 'ssd300_dropout_pascal_07+12_epoch-114_loss-4.3685_val_loss-4.5034.h5' # We need to create an SSDLoss object in order to pass that to the model loader. ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) K.clear_session() # Clear previous models from memory. model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes, 'L2Normalization': L2Normalization, 'DecodeDetections': DecodeDetections, 'compute_loss': ssd_loss.compute_loss}) model.summary() ``` ## 2. Create a data generator for the evaluation dataset Instantiate a `DataGenerator` that will serve the evaluation dataset during the prediction phase. ``` ROOT_PATH = '/usr/local/data/msmith/APL/Datasets/PASCAL/' # The directories that contain the images. VOC_2007_images_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2007/JPEGImages/') VOC_2012_images_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2012/JPEGImages/') # The directories that contain the annotations. VOC_2007_annotations_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2007/Annotations/') VOC_2012_annotations_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2012/Annotations/') # The paths to the image sets. VOC_2007_train_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/train.txt') VOC_2012_train_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2012/ImageSets/Main/train.txt') VOC_2007_val_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/val.txt') VOC_2012_val_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2012/ImageSets/Main/val.txt') VOC_2007_trainval_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/trainval.txt') VOC_2012_trainval_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2012/ImageSets/Main/trainval.txt') VOC_2007_test_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/test.txt') dataset = DataGenerator(load_images_into_memory=True) # The XML parser needs to now what object class names to look for and in which order to map them to integers. 
classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] dataset.parse_xml(images_dirs=[VOC_2012_images_dir], image_set_filenames=[VOC_2012_val_image_set_filename], annotations_dirs=[VOC_2012_annotations_dir], classes=classes, include_classes='all', exclude_truncated=False, exclude_difficult=False, ret=False) ``` ## 3. Run the evaluation Now that we have instantiated a model and a data generator to serve the dataset, we can set up the evaluator and run the evaluation. The evaluator is quite flexible: It can compute the average precisions according to the Pascal VOC pre-2010 algorithm, which samples 11 equidistant points of the precision-recall curves, or according to the Pascal VOC post-2010 algorithm, which integrates numerically over the entire precision-recall curves instead of sampling a few individual points. You could also change the number of sampled recall points or the required IoU overlap for a prediction to be considered a true positive, among other things. Check out the `Evaluator`'s documentation for details on all the arguments. In its default settings, the evaluator's algorithm is identical to the official Pascal VOC pre-2010 Matlab detection evaluation algorithm, so you don't really need to tweak anything unless you want to. The evaluator roughly performs the following steps: It runs predictions over the entire given dataset, then it matches these predictions to the ground truth boxes, then it computes the precision-recall curves for each class, then it samples 11 equidistant points from these precision-recall curves to compute the average precision for each class, and finally it computes the mean average precision over all classes. ``` evaluator = Evaluator(model=model, n_classes=n_classes, data_generator=dataset, model_mode=model_mode) results = evaluator(img_height=img_height, img_width=img_width, batch_size=2, data_generator_mode='resize', round_confidences=False, matching_iou_threshold=0.5, border_pixels='include', sorting_algorithm='quicksort', average_precision_mode='sample', num_recall_points=11, ignore_neutral_boxes=True, return_precisions=True, return_recalls=True, return_average_precisions=True, verbose=True) mean_average_precision, average_precisions, precisions, recalls = results ``` ## 4. Visualize the results Let's take a look: ``` for i in range(1, len(average_precisions)): print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3))) print() print("{:<14}{:<6}{}".format('','mAP', round(mean_average_precision, 3))) m = max((n_classes + 1) // 2, 2) n = 2 fig, cells = plt.subplots(m, n, figsize=(n*8,m*8)) for i in range(m): for j in range(n): if n*i+j+1 > n_classes: break cells[i, j].plot(recalls[n*i+j+1], precisions[n*i+j+1], color='blue', linewidth=1.0) cells[i, j].set_xlabel('recall', fontsize=14) cells[i, j].set_ylabel('precision', fontsize=14) cells[i, j].grid(True) cells[i, j].set_xticks(np.linspace(0,1,11)) cells[i, j].set_yticks(np.linspace(0,1,11)) cells[i, j].set_title("{}, AP: {:.3f}".format(classes[n*i+j+1], average_precisions[n*i+j+1]), fontsize=16) ``` ## 5. Advanced use `Evaluator` objects maintain copies of all relevant intermediate results like predictions, precisions and recalls, etc., so in case you want to experiment with different parameters, e.g. 
different IoU overlaps, there is no need to compute the predictions all over again every time you make a change to a parameter. Instead, you can only update the computation from the point that is affected onwards. The evaluator's `__call__()` method is just a convenience wrapper that executes its other methods in the correct order. You could just call any of these other methods individually as shown below (but you have to make sure to call them in the correct order). Note that the example below uses the same evaluator object as above. Say you wanted to compute the Pascal VOC post-2010 'integrate' version of the average precisions instead of the pre-2010 version computed above. The evaluator object still has an internal copy of all the predictions, and since computing the predictions makes up the vast majority of the overall computation time and since the predictions aren't affected by changing the average precision computation mode, we skip computing the predictions again and instead only compute the steps that come after the prediction phase of the evaluation. We could even skip the matching part, since it isn't affected by changing the average precision mode either. In fact, we would only have to call `compute_average_precisions()` `compute_mean_average_precision()` again, but for the sake of illustration we'll re-do the other computations, too. ``` evaluator.get_num_gt_per_class(ignore_neutral_boxes=True, verbose=False, ret=False) evaluator.match_predictions(ignore_neutral_boxes=True, matching_iou_threshold=0.5, border_pixels='include', sorting_algorithm='quicksort', verbose=True, ret=False) precisions, recalls = evaluator.compute_precision_recall(verbose=True, ret=True) average_precisions = evaluator.compute_average_precisions(mode='integrate', num_recall_points=11, verbose=True, ret=True) mean_average_precision = evaluator.compute_mean_average_precision(ret=True) for i in range(1, len(average_precisions)): print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3))) print() print("{:<14}{:<6}{}".format('','mAP', round(mean_average_precision, 3))) ```
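For reference, the pre-2010 'sample' mode used earlier reduces to averaging the best achievable precision at 11 equally spaced recall levels. The following is an illustrative NumPy sketch of that computation with hypothetical precision/recall arrays for a single class; the `Evaluator` already performs this internally.

```
import numpy as np

def eleven_point_average_precision(recalls, precisions):
    """Pascal VOC pre-2010 AP: mean over the 11 recall levels r in {0.0, 0.1, ..., 1.0}
    of the maximum precision achieved at recall >= r."""
    recalls, precisions = np.asarray(recalls), np.asarray(precisions)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        ap += (precisions[mask].max() if mask.any() else 0.0) / 11.0
    return ap

# Hypothetical precision-recall points for one class
print(eleven_point_average_precision(recalls=[0.0, 0.2, 0.4, 0.6, 0.8],
                                     precisions=[1.0, 0.9, 0.8, 0.6, 0.4]))
```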
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>[![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb) # Tutorial 2: Differential Equations **Week 0, Day 4: Calculus** **By Neuromatch Academy** __Content creators:__ John S Butler, Arvind Kumar with help from Rebecca Brady __Content reviewers:__ Swapnil Kumar, Sirisha Sripada, Matthew McCann, Tessy Tom __Production editors:__ Matthew McCann, Ella Batty **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p> --- # Tutorial Objectives *Estimated timing of tutorial: 45 minutes* A great deal of neuroscience can be modelled using differential equations, from gating channels to single neurons to a network of neurons to blood flow, to behaviour. A simple way to think about differential equations is they are equations that describe how something changes. The most famous of these in neuroscience is the Nobel Prize winning Hodgkin Huxley equation, which describes a neuron by modelling the gating of each axon. But we will not start there; we will start a few steps back. Differential Equations are mathematical equations that describe how something like population or a neuron changes over time. The reason why differential equations are so useful is they can generalise a process such that one equation can be used to describe many different outcomes. The general form of a first order differential equation is: \begin{align*} \frac{d}{dt}y(t)&=f(t,y(t))\\ \end{align*} which can be read as "the change in a process $y$ over time $t$ is a function $f$ of time $t$ and itself $y$". This might initially seem like a paradox as you are using a process $y$ you want to know about to describe itself, a bit like the MC Escher drawing of two hands painting [each other](https://en.wikipedia.org/wiki/Drawing_Hands). But that is the beauty of mathematics - this can be solved some of time, and when it cannot be solved exactly we can use numerical methods to estimate the answer (as we will see in the next tutorial). In this tutorial, we will see how __differential equations are motivated by observations of physical responses.__ We will break down the population differential equation, then the integrate and fire model, which leads nicely into raster plots and frequency-current curves to rate models. **Steps:** - Get an intuitive understanding of a linear population differential equation (humans, not neurons) - Visualize the relationship between the change in population and the population - Breakdown the Leaky Integrate and Fire (LIF) differential equation - Code the exact solution of an LIF for a constant input - Visualize and listen to the response of the LIF for different inputs ``` # @title Video 1: Why do we care about differential equations? 
from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1v64y197bW", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="LhX-mUd8lPo", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` --- # Setup ``` # Imports import numpy as np import matplotlib.pyplot as plt # @title Figure Settings import IPython.display as ipd from matplotlib import gridspec import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' # use NMA plot style plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") my_layout = widgets.Layout() # @title Plotting Functions def plot_dPdt(alpha=.3): """ Plots change in population over time Args: alpha: Birth Rate Returns: A figure two panel figure left panel: change in population as a function of population right panel: membrane potential as a function of time """ with plt.xkcd(): time=np.arange(0, 10 ,0.01) fig = plt.figure(figsize=(12,4)) gs = gridspec.GridSpec(1, 2) ## dpdt as a fucntion of p plt.subplot(gs[0]) plt.plot(np.exp(alpha*time), alpha*np.exp(alpha*time)) plt.xlabel(r'Population $p(t)$ (millions)') plt.ylabel(r'$\frac{d}{dt}p(t)=\alpha p(t)$') ## p exact solution plt.subplot(gs[1]) plt.plot(time, np.exp(alpha*time)) plt.ylabel(r'Population $p(t)$ (millions)') plt.xlabel('time (years)') plt.show() def plot_V_no_input(V_reset=-75): """ Args: V_reset: Reset Potential Returns: A figure two panel figure left panel: change in membrane potential as a function of membrane potential right panel: membrane potential as a function of time """ E_L=-75 tau_m=10 t=np.arange(0,100,0.01) V= E_L+(V_reset-E_L)*np.exp(-(t)/tau_m) V_range=np.arange(-90,0,1) dVdt=-(V_range-E_L)/tau_m with plt.xkcd(): time=np.arange(0, 10, 0.01) fig = plt.figure(figsize=(12, 4)) gs = gridspec.GridSpec(1, 2) plt.subplot(gs[0]) plt.plot(V_range,dVdt) plt.hlines(0,min(V_range),max(V_range), colors='black', linestyles='dashed') plt.vlines(-75, min(dVdt), max(dVdt), colors='black', linestyles='dashed') plt.plot(V_reset,-(V_reset - E_L)/tau_m, 'o', label=r'$V_{reset}$') plt.text(-50, 1, 'Positive') plt.text(-50, -2, 'Negative') plt.text(E_L - 1, max(dVdt), r'$E_L$') plt.legend() plt.xlabel('Membrane Potential V (mV)') plt.ylabel(r'$\frac{dV}{dt}=\frac{-(V(t)-E_L)}{\tau_m}$') plt.subplot(gs[1]) plt.plot(t,V) plt.plot(t[0],V_reset,'o') plt.ylabel(r'Membrane Potential $V(t)$ (mV)') plt.xlabel('time (ms)') plt.ylim([-95, -60]) plt.show() ## LIF PLOT def plot_IF(t, V,I,Spike_time): """ Args: t : time V : membrane Voltage I : Input Spike_time : Spike_times Returns: figure with three panels top panel: Input as a function of time middle panel: membrane potential as a function of time bottom panel: Raster plot """ with plt.xkcd(): fig = plt.figure(figsize=(12, 4)) gs = gridspec.GridSpec(3, 1, height_ratios=[1, 4, 1]) # PLOT OF INPUT plt.subplot(gs[0]) plt.ylabel(r'$I_e(nA)$') 
plt.yticks(rotation=45) plt.hlines(I,min(t),max(t),'g') plt.ylim((2, 4)) plt.xlim((-50, 1000)) # PLOT OF ACTIVITY plt.subplot(gs[1]) plt.plot(t,V) plt.xlim((-50, 1000)) plt.ylabel(r'$V(t)$(mV)') # PLOT OF SPIKES plt.subplot(gs[2]) plt.ylabel(r'Spike') plt.yticks([]) plt.scatter(Spike_time, 1 * np.ones(len(Spike_time)), color="grey", marker=".") plt.xlim((-50, 1000)) plt.xlabel('time(ms)') plt.show() ## Plotting the differential Equation def plot_dVdt(I=0): """ Args: I : Input Current Returns: figure of change in membrane potential as a function of membrane potential """ with plt.xkcd(): E_L = -75 tau_m = 10 V = np.arange(-85, 0, 1) g_L = 10. fig = plt.figure(figsize=(6, 4)) plt.plot(V,(-(V-E_L) + I*10) / tau_m) plt.hlines(0, min(V), max(V), colors='black', linestyles='dashed') plt.xlabel('V (mV)') plt.ylabel(r'$\frac{dV}{dt}$') plt.show() # @title Helper Functions ## EXACT SOLUTION OF LIF def Exact_Integrate_and_Fire(I,t): """ Args: I : Input Current t : time Returns: Spike : Spike Count Spike_time : Spike time V_exact : Exact membrane potential """ Spike = 0 tau_m = 10 R = 10 t_isi = 0 V_reset = E_L = -75 V_exact = V_reset * np.ones(len(t)) V_th = -50 Spike_time = [] for i in range(0, len(t)): V_exact[i] = E_L + R*I + (V_reset - E_L - R*I) * np.exp(-(t[i]-t_isi)/tau_m) # Threshold Reset if V_exact[i] > V_th: V_exact[i-1] = 0 V_exact[i] = V_reset t_isi = t[i] Spike = Spike+1 Spike_time = np.append(Spike_time, t[i]) return Spike, Spike_time, V_exact ``` --- # Section 1: Population differential equation ``` # @title Video 2: Population differential equation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1pg41137CU", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="czgGyoUsRoQ", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` This video covers our first example of a differential equation: a differential equation which models the change in population. <details> <summary> <font color='blue'>Click here for text recap of video </font></summary> To get an intuitive feel of a differential equations, we will start with a population differential equation, which models the change in population [1], that is human population not neurons, we will get to neurons later. Mathematically it is written like: \begin{align*} \\ \frac{d}{dt}\,p(t) &= \alpha p(t),\\ \end{align*} where $p(t)$ is the population of the world and $\alpha$ is a parameter representing birth rate. Another way of thinking about the models is that the equation \begin{align*} \\ \frac{d}{dt}\,p(t) &= \alpha p(t),\\ \text{can be written as:}\\ \text{"Change in Population"} &= \text{ "Birth rate times Current population."} \end{align*} The equation is saying something reasonable maybe not the perfect model but a good start. </details> ### Think! 
1.1: Interpretating the behavior of a linear population equation Using the plot below of change of population $\frac{d}{dt} p(t) $ as a function of population $p(t)$ with birth-rate $\alpha=0.3$, discuss the following questions: 1. Why is the population differential equation known as a linear differential equation? 2. How does population size affect the rate of change of the population? ``` # @markdown Execute the code to plot the rate of change of population as a function of population p = np.arange(0, 100, 0.1) with plt.xkcd(): dpdt = 0.3*p fig = plt.figure(figsize=(6, 4)) plt.plot(p, dpdt) plt.xlabel(r'Population $p(t)$ (millions)') plt.ylabel(r'$\frac{d}{dt}p(t)=\alpha p(t)$') plt.show() # to_remove explanation """ 1. The plot of $\frac{dp}{dt}$ is a line, which is why the differential equation is known as a linear differential equation. 2. As the population increases, the change of population increases. A population of 20 has a change of 6 while a population of 100 has a change of 30. This makes sense - the larger the population the larger the change. """ ``` ## Section 1.1: Exact solution of the population equation ### Section 1.1.1: Initial condition The linear population differential equation is known as an initial value differential equation because we need an initial population value to solve it. Here we will set our initial population at time 0 to 1: \begin{align*} &p(0)=1.\\ \end{align*} Different initial conditions will lead to different answers, but they will not change the differential equation. This is one of the strengths of a differential equation. ### Section 1.1.2: Exact Solution To calculate the exact solution of a differential equation, we must integrate both sides. Instead of numerical integration (as you delved into in the last tutorial), we will first try to solve the differential equations using analytical integration. As with derivatives, we can find analytical integrals of simple equations by consulting [a list](https://en.wikipedia.org/wiki/Lists_of_integrals). We can then get integrals for more complex equations using some mathematical tricks - the harder the equation the more obscure the trick. The linear population equation \begin{align*} \frac{d}{dt}\,p(t) &= \alpha p(t),\\\\ p(0)=P_0,\\ \end{align*} has the exact solution: \begin{align*} p(t)&=P_0e^{\alpha t}.\\ \end{align*} The exact solution written in words is: \begin{align*} \text{"Population"}&=\text{"grows/declines exponentially as a function of time and birth rate"}.\\ \end{align*} Most differential equations do not have a known exact solution, so in the next tutorial on numerical methods we will show how the solution can be estimated. A small aside: a good deal of progress in mathematics was due to mathematicians writing taunting letters to each other saying they had a trick that could solve something better than everyone else. So do not worry too much about the tricks. #### Example Exact Solution of the Population Equation Let's consider the population differential equation with a birth rate $\alpha=0.3$: \begin{align*} \frac{d}{dt}\,p(t) = 0.3 p(t),\\ \text{with the initial condition}\\ p(0)=1.\\ \end{align*} It has an exact solution \begin{align*} \\ p(t)=e^{0.3 t}. 
\end{align*} ``` # @markdown Execute code to plot the exact solution t = np.arange(0, 10, 0.1) # Time from 0 to 10 years in 0.1 steps with plt.xkcd(): p = np.exp(0.3 * t) fig = plt.figure(figsize=(6, 4)) plt.plot(t, p) plt.ylabel('Population (millions)') plt.xlabel('time (years)') plt.show() ``` ## Section 1.2: Parameters of the differential equation *Estimated timing to here from start of tutorial: 12 min* One of the goals when designing a differential equation is to make it generalisable. Which means that the differential equation will give reasonable solutions for different countries with different birth rates $\alpha$. ### Interactive Demo 1.2: Interactive Parameter Change Play with the widget to see the relationship between $\alpha$ and the population differential equation as a function of population (left-hand side), and the population solution as a function of time (right-hand side). Pay close attention to the transition point from positive to negative. How do changing parameters of the population equation affect the outcome? 1. What happens when $\alpha < 0$? 2. What happens when $\alpha > 0$? 3. What happens when $\alpha = 0$? ``` # @markdown Make sure you execute this cell to enable the widget! my_layout.width = '450px' @widgets.interact( alpha=widgets.FloatSlider(.3, min=-1., max=1., step=.1, layout=my_layout) ) def Pop_widget(alpha): plot_dPdt(alpha=alpha) plt.show() # to_remove explanation """ 1. Negative values of alpha result in an exponential decrease to 0 a stable solution. 2. Positive Values of alpha in an exponential increases to infinity. 3. Alpha equal to 0 is a unique point known as an equilibrium point when the dp/dt=0 and there is no change in population. This is known as a stable point. """ ``` The population differential equation is an over-simplification and has some very obvious limitations: 1. Population growth is not exponential as there are limited number of resources so the population will level out at some point. 2. It does not include any external factors on the populations like weather, predators and preys. These kind of limitations can be addressed by extending the model. While it might not seem that the population equation has direct relevance to neuroscience, a similar equation is used to describe the accumulation of evidence for decision making. This is known as the Drift Diffusion Model and you will see in more detail in the Linear System day in Neuromatch (W2D2). Another differential equation that is similar to the population equation is the Leaky Integrate and Fire model which you may have seen in the python pre-course materials on W0D1 and W0D2. It will turn up later in Neuromatch as well. Below we will delve in the motivation of the differential equation. 
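Before we move from populations to neurons, here is one small optional check (an addition, not one of the original tutorial cells) that ties the exact solution back to the differential equation it solves: if you differentiate $p(t)=P_0e^{\alpha t}$ numerically, you should recover $\alpha p(t)$ up to finite-difference error, whether the population grows, stays constant, or decays. The variable names below are new and chosen just for this sketch.

```
# Optional sanity check (not an original tutorial cell): the exact solution
# p(t) = P0 * exp(alpha * t) should satisfy dp/dt = alpha * p(t).
# We estimate dp/dt with finite differences and compare it to alpha * p.

P0 = 1.0
t_check = np.arange(0, 10, 0.01)

for alpha in (-0.3, 0.0, 0.3):
  p = P0 * np.exp(alpha * t_check)
  dpdt_numerical = np.gradient(p, t_check)  # finite-difference estimate of dp/dt
  dpdt_equation = alpha * p                 # right-hand side of the differential equation
  max_error = np.max(np.abs(dpdt_numerical - dpdt_equation))
  print(f"alpha = {alpha:+.1f}: max |dp/dt - alpha*p| = {max_error:.2e}")
```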
--- # Section 2: The leaky integrate and fire model ``` # @title Video 3: The leaky integrate and fire model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1rb4y1C79n", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="ZfWO6MLCa1s", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` This video covers the Leaky Integrate and Fire model (a linear differential equation which describes the membrane potential of a single neuron). <details> <summary> <font color='blue'>Click here for text recap of full LIF equation from video </font></summary> The Leaky Integrate and Fire Model is a linear differential equation that describes the membrane potential ($V$) of a single neuron which was proposed by Louis Édouard Lapicque in 1907 [2]. The subthreshold membrane potential dynamics of a LIF neuron is described by \begin{align} \tau_m\frac{dV}{dt} = -(V-E_L) + R_mI\, \end{align} where $\tau_m$ is the time constant, $V$ is the membrane potential, $E_L$ is the resting potential, $R_m$ is membrane resistance, and $I$ is the external input current. </details> In the next few sections, we will break down the full LIF equation and then build it back up to get an intuitive feel of the different facets of the differential equation. ## Section 2.1: LIF without input *Estimated timing to here from start of tutorial: 18 min* As seen in the video, we will first model an LIF neuron without input, which results in the equation: \begin{align} \frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m}.\\ \end{align} where $\tau_m$ is the time constant, $V$ is the membrane potential, and $E_L$ is the resting potential. <details> <summary> <font color='blue'>Click here for further details (from video) </font></summary> Removing the input gives the equation \begin{align} \tau_m\frac{dV}{dt} &= -V+E_L,\\ \end{align} which can be written in words as: \begin{align} \begin{matrix}\text{"Time constant multiplied by the} \\ \text{change in membrane potential"}\end{matrix}&=\begin{matrix}\text{"Minus Current} \\ \text{membrane potential"} \end{matrix}+ \begin{matrix}\text{"resting potential"}\end{matrix}.\\ \end{align} The equation can be re-arranged to look even more like the population equation: \begin{align} \frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m}.\\ \end{align} </details> ### Think! 2.1: Effect on membrane potential $V$ on the LIF model The plot the below shows the change in membrane potential $\frac{dV}{dt}$ as a function of membrane potential $V$ with the parameters set as: * `E_L = -75` * `V_reset = -50` * `tau_m = 10.` 1. What is the effect on $\frac{dV}{dt}$ when $V>-75$ mV? 2. What is the effect on $\frac{dV}{dt}$ when $V<-75$ mV 3. What is the effect on $\frac{dV}{dt}$ when $V=-75$ mV? 
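If you want a concrete number before looking at the plot, the short sketch below (an addition, not one of the tutorial's own cells) simply evaluates $\frac{dV}{dt}=\frac{-(V-E_L)}{\tau_m}$ at a few membrane potentials with the same `E_L` and `tau_m`; the tutorial's plotting cell follows right after.

```
# Optional sketch (not an original tutorial cell): evaluate dV/dt = -(V - E_L) / tau_m
# at a few membrane potentials to get a feel for its sign on either side of E_L.

E_L = -75   # resting potential (mV)
tau_m = 10  # membrane time constant (ms)

for V_test in (-80, -75, -70):
  dVdt = -(V_test - E_L) / tau_m
  print(f"V = {V_test} mV  ->  dV/dt = {dVdt:+.1f} mV/ms")
```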
``` # @markdown Make sure you execute this cell to plot the relationship between dV/dt and V # Parameter definition E_L = -75 tau_m = 10 # Range of Values of V V = np.arange(-90, 0, 1) dV = -(V - E_L) / tau_m with plt.xkcd(): fig = plt.figure(figsize=(6, 4)) plt.plot(V, dV) plt.hlines(0, min(V), max(V), colors='black', linestyles='dashed') plt.vlines(-75, min(dV), max(dV), colors='black', linestyles='dashed') plt.text(-50, 1, 'Positive') plt.text(-50, -2, 'Negative') plt.text(E_L, max(dV) + 1, r'$E_L$') plt.xlabel(r'$V(t)$ (mV)') plt.ylabel(r'$\frac{dV}{dt}=\frac{-(V-E_L)}{\tau_m}$') plt.ylim(-8, 2) plt.show() # to_remove explanation """ 1. For $V>-75$ mV, the derivative is negative. 2. For $V<-75$ mV, the derivative is positive. 3. For $V=-75$ mV, the derivative is equal to $0$ is and a stable point when nothing changes. """ ``` ### Section 2.1.1: Exact Solution of the LIF model without input The LIF model has the exact solution: \begin{align*} V(t)=&\ E_L+(V_{reset}-E_L)e^{\frac{-t}{\tau_m}}\\ \end{align*} where $\tau_m$ is the time constant, $V$ is the membrane potential, $E_L$ is the resting potential, and $V_{reset}$ is the initial membrane potential. <details> <summary> <font color='blue'>Click here for further details (from video) </font></summary> Similar to the population equation, we need an initial membrane potential at time $0$ to solve the LIF model. With this equation \begin{align} \frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m}\,\\ V(0)&=V_{reset}, \end{align} where is $V_{reset}$ is called the reset potential. The LIF model has the exact solution: \begin{align*} V(t)=&\ E_L+(V_{reset}-E_L)e^{\frac{-t}{\tau_m}}\\ \text{ which can be written as: }\\ \begin{matrix}\text{"Current membrane} \\ \text{potential}"\end{matrix}=&\text{"Resting potential"}+\begin{matrix}\text{"Reset potential minus resting potential} \\ \text{times exponential with rate one over time constant."}\end{matrix}\\ \end{align*} </details> #### Interactive Demo 2.1.1: Initial Condition $V_{reset}$ This exercise is to get an intuitive feel of how the different initial conditions $V_{reset}$ impacts the differential equation of the LIF and the exact solution for the equation: \begin{align} \frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m}\,\\ \end{align} with the parameters set as: * `E_L = -75,` * `tau_m = 10.` The panel on the left-hand side plots the change in membrane potential $\frac{dV}{dt}$ as a function of membrane potential $V$ and right-hand side panel plots the exact solution $V$ as a function of time $t,$ the green dot in both panels is the reset potential $V_{reset}$. Pay close attention to when $V_{reset}=E_L=-75$mV. 1. How does the solution look with initial values of $V_{reset} < -75$? 2. How does the solution look with initial values of $V_{reset} > -75$? 3. How does the solution look with initial values of $V_{reset} = -75$? ``` #@markdown Make sure you execute this cell to enable the widget! my_layout.width = '450px' @widgets.interact( V_reset=widgets.FloatSlider(-77., min=-91., max=-61., step=2, layout=my_layout) ) def V_reset_widget(V_reset): plot_V_no_input(V_reset) # to_remove explanation """ 1. Initial Values of $V_{reset} < -75$ result in the solution increasing to -75mV because $\frac{dV}{dt} > 0$. 2. Initial Values of $V_{reset} > -75$ result in the solution decreasing to -75mV because $\frac{dV}{dt} < 0$. 3. Initial Values of $V_{reset} = -75$ result in a constant $V = -75$ mV because $\frac{dV}{dt} = 0$ (Stable point). 
""" ``` ## Section 2.2: LIF with input *Estimated timing to here from start of tutorial: 24 min* We will re-introduce the input $I$ and membrane resistance $R_m$ giving the original equation: \begin{align} \tau_m\frac{dV}{dt} = -(V-E_L) + \color{blue}{R_mI}\, \end{align} The input can be other neurons or sensory information. ### Interactive Demo 2.2: The Impact of Input The interactive plot below manipulates $I$ in the differential equation. - With increasing input, how does the $\frac{dV}{dt}$ change? How would this impact the solution? ``` # @markdown Make sure you execute this cell to enable the widget! my_layout.width = '450px' @widgets.interact( I=widgets.FloatSlider(3., min=0., max=20., step=2, layout=my_layout) ) def Pop_widget(I): plot_dVdt(I=I) plt.show() # to_remove explanation """ dV/dt becomes bigger and less of it is below 0. This means the solution will increase well beyond what is bioligically plausible """ ``` ### Section 2.2.1: LIF exact solution The LIF with a constant input has a known exact solution: \begin{align*} V(t)=&\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\frac{-t}{\tau_m}}\\ \text{which is written as:}\\ \begin{matrix}\text{"Current membrane} \\ \text{potential"}\end{matrix}=&\text{"Resting potential"}+\begin{matrix}\text{"Reset potential minus resting potential} \\ \text{times exponential with rate one over time constant." }\end{matrix}\\ \end{align*} The plot below shows the exact solution of the membrane potential with the parameters set as: * `V_reset = -75,` * `E_L = -75,` * `tau_m = 10,` * `R_m = 10,` * `I = 10.` Ask yourself, does the result make biological sense? If not, what would you change? We'll delve into this in the next section ``` # @markdown Make sure you execute this cell to see the exact solution dt = 0.5 t_rest = 0 t = np.arange(0, 1000, dt) tau_m = 10 R_m = 10 V_reset = E_L = -75 I = 10 V = E_L + R_m*I + (V_reset - E_L - R_m*I) * np.exp(-(t)/tau_m) with plt.xkcd(): fig = plt.figure(figsize=(6, 4)) plt.plot(t,V) plt.ylabel('V (mV)') plt.xlabel('time (ms)') plt.show() ``` ## Section 2.3: Maths is one thing, but neuroscience matters *Estimated timing to here from start of tutorial: 30 min* ``` # @title Video 4: Adding firing to the LIF from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1gX4y1P7pZ", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="rLQk-vXRaX0", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` This video first recaps the introduction of input to the leaky integrate and fire model and then delves into how we add spiking behavior (or firing) to the model. <details> <summary> <font color='blue'>Click here for text recap of video </font></summary> While the mathematics of the exact solution is exact, it is not biologically valid as a neuron spikes and definitely does not plateau at a very positive value. 
To model the firing of a spike, we must have a threshold voltage $V_{th}$ such that if the voltage $V(t)$ goes above it, the neuron spikes
$$V(t)>V_{th}.$$
We must record the time of the spike $t_{isi}$ and count the number of spikes
$$t_{isi}=t,$$
$$Spike=Spike+1.$$
Then reset the membrane voltage $V(t)$
$$V(t_{isi})=V_{reset}.$$
To take the spike into account, the exact solution becomes:
\begin{align*}
V(t)=&\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\frac{-(t-t_{isi})}{\tau_m}},&\qquad V(t)<V_{th} \\
V(t)=&V_{reset},&\qquad V(t)>V_{th}\\
Spike=&Spike+1,&\\
t_{isi}=&t,\\
\end{align*}
While this does make the neuron spike, it introduces a discontinuity, which is not as mathematically elegant as it could be, but it gets results, so that is good.
</details>

### Interactive Demo 2.3.1: Input on spikes

This exercise shows the relationship between the firing rate and the input for the exact solution `V` of the LIF:

$$
V(t)=\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\frac{-(t-t_{isi})}{\tau_m}},
$$

with the parameters set as:
* `V_reset = -75,`
* `E_L = -75,`
* `tau_m = 10,`
* `R_m = 10.`

Below is a figure with three panels:

* the top panel is the input, $I,$
* the middle panel is the membrane potential $V(t)$. To illustrate the spike, $V(t)$ is set to $0$ and then reset to $-75$ mV when there is a spike.
* the bottom panel is the raster plot, with each dot indicating a spike.

First, as electrophysiologists normally listen to spikes when conducting experiments, listen to the music of the firing rate for a single value of $I$. (Note: the audio doesn't work in some browsers, so don't worry if you can't hear anything.)

```
# @markdown Make sure you execute this cell to be able to hear the neuron

I = 3
t = np.arange(0, 1000, dt)
Spike, Spike_time, V = Exact_Integrate_and_Fire(I, t)

plot_IF(t, V, I, Spike_time)
ipd.Audio(V, rate=len(V))
```

Manipulate the input into the LIF to see the impact of input on the firing pattern (rate).

* What is the effect of $I$ on spiking?
* Is this biologically valid?

```
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    I=widgets.FloatSlider(3, min=2.0, max=4., step=.1, layout=my_layout)
)
def Pop_widget(I):
  Spike, Spike_time, V = Exact_Integrate_and_Fire(I, t)
  plot_IF(t, V, I, Spike_time)

# to_remove explanation
"""
1. As $I$ increases, the number of spikes increases.
2. No, as there is a limit to the number of spikes due to a refractory period,
which is not accounted for in this model.
"""
```

## Section 2.4: Firing Rate as a Function of Input

*Estimated timing to here from start of tutorial: 38 min*

The firing frequency of a neuron plotted as a function of current is called an input-output curve (F–I curve). It is also known as a transfer function, which you came across in the previous tutorial. This function is one of the starting points for the rate model, which extends from modelling single neurons to modelling the firing rate of a collection of neurons.

By fitting the F–I curve with a simple function, we can start to generalise the firing pattern of many neurons, which can be used to build rate models. This will be discussed later in Neuromatch.
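As a concrete, optional illustration of what "fitting the F–I curve" can look like, here is a short sketch (an addition, not one of the tutorial's own cells). It recomputes the same F–I data as the next cell so that it can run on its own, and then fits a threshold-linear function; that functional form, and the names `a` and `I_th`, are assumptions made for this sketch only.

```
# Optional sketch (not an original tutorial cell): fit a threshold-linear function
# r(I) = max(0, a * (I - I_th)) to the simulated F-I curve. The functional form is
# an assumption made for this sketch, not something prescribed by the tutorial.

I_range = np.arange(2.0, 4.0, 0.1)
Spike_rate = np.array([Exact_Integrate_and_Fire(I, t)[0] for I in I_range])

suprathreshold = Spike_rate > 0                 # keep only inputs that produce spikes
a, b = np.polyfit(I_range[suprathreshold], Spike_rate[suprathreshold], deg=1)
I_th = -b / a                                   # input where the fitted line crosses zero
fit = np.maximum(0, a * (I_range - I_th))

with plt.xkcd():
  plt.figure(figsize=(6, 4))
  plt.plot(I_range, Spike_rate, 'o', label='simulated F-I curve')
  plt.plot(I_range, fit, label=f'fit: a={a:.1f}, I_th={I_th:.2f}')
  plt.xlabel('Input Current (nA)')
  plt.ylabel('Spikes per Second (Hz)')
  plt.legend()
  plt.show()
```

A fit like this compresses the whole curve into a gain and a threshold current, which is the kind of compact summary a rate model builds on.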
```
# @markdown *Execute this cell to visualize the F-I curve*

I_range = np.arange(2.0, 4.0, 0.1)
Spike_rate = np.ones(len(I_range))

for i, I in enumerate(I_range):
  Spike_rate[i], _, _ = Exact_Integrate_and_Fire(I, t)

with plt.xkcd():
  fig = plt.figure(figsize=(6, 4))
  plt.plot(I_range, Spike_rate)
  plt.xlabel('Input Current (nA)')
  plt.ylabel('Spikes per Second (Hz)')
  plt.show()
```

The LIF model is a good differential equation to start with in computational neuroscience, as it has been used as a building block in many papers that simulate neuronal responses.

__Strengths of LIF model:__
+ Has an exact solution;
+ Easy to interpret;
+ Great for building networks of neurons.

__Weaknesses of the LIF model:__
- Spiking is a discontinuity;
- Abstraction from biology;
- Cannot generate different spiking patterns.

---
# Summary

*Estimated timing of tutorial: 45 min*

```
# @title Video 5: Summary
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id=id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="BV1jV411x7t9", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="VzwLAW5p4ao", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')

display(out)
```

In this tutorial, we have seen two differential equations: the population differential equation and the leaky integrate and fire model.

We learned about:
* The motivation for differential equations.
* An intuitive relationship between the solution and the form of the differential equation.
* How different parameters of the differential equation impact the solution.
* The strengths and limitations of the simple differential equations.

---
# Links to Neuromatch Days

Differential equations turn up in a number of different Neuromatch days:
* The LIF model is discussed in more detail in Model Types (Week 1 Day 1) and Real Neurons (Week 2 Day 3).
* The Drift Diffusion Model, which is a differential equation for decision making, is discussed in Linear Systems (Week 2 Day 2).
* Systems of differential equations are discussed in Linear Systems (Week 2 Day 2) and Dynamic Networks (Week 2 Day 4).

---
# References

1. Lotka, A. L. (1920). Analytical note on certain rhythmic relations in organic systems. Proceedings of the National Academy of Sciences, 6(7):410–415.
2. Brunel, N., & van Rossum, M. C. (2007). Lapicque's 1907 paper: from frogs to integrate-and-fire. Biological Cybernetics, 97(5-6):337-339. doi: 10.1007/s00422-007-0190-0. PMID: 17968583.

# Bibliography
1. Dayan, P., & Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Computational Neuroscience Series.
2. Strogatz, S. (2014). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (Studies in Nonlinearity), 2nd edition. Westview Press.

## Supplemental Popular Reading List

1. Lindsay, G. (2021). Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain. Bloomsbury Publishing.
2. 
Strogatz, S. (2004). Sync: The Emerging Science of Spontaneous Order. Penguin UK.

## Popular Podcast

1. Strogatz, S. (Host). (2020–present). The Joy of x [Audio podcast]. Quanta Magazine. https://www.quantamagazine.org/tag/the-joy-of-x/
github_jupyter
# K-means clustering demo ## 1. Different distance metrics ``` from math import sqrt def manhattan(v1,v2): res=0 dimensions=min(len(v1),len(v2)) for i in range(dimensions): res+=abs(v1[i]-v2[i]) return res def euclidean(v1,v2): res=0 dimensions=min(len(v1),len(v2)) for i in range(dimensions): res+=pow(abs(v1[i]-v2[i]),2) return sqrt(float(res)) def cosine(v1,v2): dotproduct=0 dimensions=min(len(v1),len(v2)) for i in range(dimensions): dotproduct+=v1[i]*v2[i] v1len=0 v2len=0 for i in range (dimensions): v1len+=v1[i]*v1[i] v2len+=v2[i]*v2[i] v1len=sqrt(v1len) v2len=sqrt(v2len) # we need distance here - # we convert cosine similarity into distance return 1.0-(float(dotproduct)/(v1len*v2len)) def pearson(v1,v2): # Simple sums sum1=sum(v1) sum2=sum(v2) # Sums of the squares sum1Sq=sum([pow(v,2) for v in v1]) sum2Sq=sum([pow(v,2) for v in v2]) # Sum of the products pSum=sum([v1[i]*v2[i] for i in range(min(len(v1),len(v2)))]) # Calculate r (Pearson score) numerator=pSum-(sum1*sum2/len(v1)) denominator=sqrt((sum1Sq-pow(sum1,2)/len(v1))*(sum2Sq-pow(sum2,2)/len(v1))) if denominator==0: return 1.0 # we need distance here - # we convert pearson correlation into distance return 1.0-numerator/denominator def tanimoto(v1,v2): c1,c2,shared=0,0,0 for i in range(len(v1)): if v1[i]!=0 or v2[i]!= 0: if v1[i]!=0: c1+=1 # in v1 if v2[i]!=0: c2+=1 # in v2 if v1[i]!=0 and v2[i]!=0: shared+=1 # in both # we need distance here - # we convert tanimoto similarity into distance return 1.0-(float(shared)/(c1+c2-shared)) ``` ## 2. K-means clustering algorithm ``` import random # k-means clustering def kcluster(rows,distance=euclidean,k=4): # Determine the minimum and maximum values for each point ranges=[(min([row[i] for row in rows]),max([row[i] for row in rows])) for i in range(len(rows[0]))] # Create k randomly placed centroids clusters=[[random.random()*(ranges[i][1]-ranges[i][0])+ranges[i][0] for i in range(len(rows[0]))] for j in range(k)] lastmatches=None bestmatches = None for t in range(100): print ('Iteration %d' % t) bestmatches=[[] for i in range(k)] # Find which centroid is the closest for each row for j in range(len(rows)): row=rows[j] bestmatch=0 for i in range(k): d=distance(clusters[i],row) if d<distance(clusters[bestmatch],row): bestmatch=i bestmatches[bestmatch].append(j) # If the results are the same as last time, this is complete if bestmatches==lastmatches: break lastmatches=bestmatches # Move the centroids to the average of the cluster members for i in range(k): avgs=[0.0]*len(rows[0]) if len(bestmatches[i])>0: for rowid in bestmatches[i]: for m in range(len(rows[rowid])): avgs[m]+=rows[rowid][m] for j in range(len(avgs)): avgs[j]/=len(bestmatches[i]) clusters[i]=avgs return bestmatches ``` ## 3. Toy demo: clustering papers by title ### 3.1. Data preparation The input is a list of Computer Science paper titles from file [titles.txt](titles.txt). ``` file_name = "titles.txt" f = open(file_name, "r", encoding="utf-8") i = 0 for line in f: print("document", i, ": ", line.strip()) i += 1 ``` To compare documents written in Natural Language, we need to decide how to decide which attributes of a document are important. The simplest possible model is called a **bag of words**: that is we consider each word in a document as a separate and independent dimension. First, we collect all different words occuring across all the document collection (called corpora in NLP). These will become our dimensions. We create a vector as big as the entire vocabulary in a given corpora. 
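As a tiny worked illustration (an addition, not part of the original demo), here is what that looks like for two made-up titles; the titles and variable names below are invented just for this example.

```
# Tiny bag-of-words illustration on two made-up titles (not part of the original demo)
toy_docs = ["clustering text documents", "clustering gene expression data"]

# the vocabulary across the toy corpus becomes the vector dimensions
vocabulary = sorted({word for doc in toy_docs for word in doc.split()})
print(vocabulary)   # ['clustering', 'data', 'documents', 'expression', 'gene', 'text']

# each document becomes a vector of word counts over that vocabulary
for doc in toy_docs:
    vector = [doc.split().count(word) for word in vocabulary]
    print(doc, '->', vector)
# clustering text documents -> [1, 0, 1, 0, 0, 1]
# clustering gene expression data -> [1, 1, 0, 1, 1, 0]
```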
Next we represent each document as a numeric vector: the number of occurrences of a given word becomes value in the corresponding vector dimension. Here are the functions for converting documents into bag of words: ``` import re # Returns dictionary of word counts for a text def get_word_counts(text, all_words): wc={} words = get_words(text) # Loop over all the entries for word in words: if (word not in stopwords) and (word in all_words): wc[word] = wc.get(word,0)+1 return wc # splits text into words def get_words(txt): # Split words by all non-alpha characters words=re.compile(r'[^A-Z^a-z]+').split(txt) # Convert to lowercase return [word.lower() for word in words if word!=''] # converts counts into a vector def get_word_vector(word_list, wc): v = [0]*len(word_list) for i in range(len(word_list)): if word_list[i] in wc: v[i] = wc[word_list[i]] return v # prints matrix def print_word_matrix(docs): for d in docs: print (d[0], d[1]) ``` Some words of the document should be ignored. These are words that are very commonly used in all documents no matter the topic of the document: ''the'', ''it'', ''and'' etc. These words are called **stop words**. Which words to consider as stop words is application-dependent. One of possible stop words collection is given in file ''stop_words.txt''. ``` stop_words_file = "stop_words.txt" f = open(stop_words_file, "r", encoding="utf-8") stopwords = [] for line in f: stopwords.append(line.strip()) f.close() print(stopwords[:20]) ``` We collect all unique words and for each document we will count how many times each word is present. ``` file_name = "titles.txt" f = open(file_name, "r", encoding="utf-8") documents = [] doc_id = 1 all_words = {} # transfer content of a file into a list of lines lines = [line for line in f] # create a dictionary of all words and their total counts for line in lines: doc_words = get_words(line) for w in doc_words : if w not in stopwords: all_words[w] = all_words.get(w,0)+1 unique_words = set() for w, count in all_words.items(): if all_words[w] > 1 : unique_words.add(w) # create a matrix of word presence in each document for line in lines: documents.append(["d"+str(doc_id), get_word_counts(line,unique_words)]) doc_id += 1 unique_words=list(unique_words) print("All unique words:",unique_words) print(documents) ``` Now we want to convert each document into a numeric vector: ``` out = open(file_name.split('.')[0] + "_vectors.txt", "w") # write a header which contains the words themselves for w in unique_words: out.write('\t' + w) out.write('\n') # print_word_matrix to file for i in range(len(documents)): vector = get_word_vector(unique_words, documents[i][1]) out.write(documents[i][0]) for x in vector: out.write('\t' + str(x)) out.write('\n') out.close() ``` Our data now looks like this matrix: ``` doc_vectors_file = "titles_vectors.txt" f = open(doc_vectors_file, "r", encoding="utf-8") s = f.read() print(s) # This function will read document vectors file and produce 2D data matrix, # plus the names of the rows and the names of the columns. 
def read_vector_file(file_name): f = open(file_name) lines=[line for line in f] # First line is the column headers colnames=lines[0].strip().split('\t')[:] # print(colnames) rownames=[] data=[] for line in lines[1:]: p=line.strip().split('\t') # First column in each row is the rowname if len(p)>1: rownames.append(p[0]) # The data for this row is the remainder of the row data.append([float(x) for x in p[1:]]) return rownames,colnames,data # This function will transpose the data matrix def rotatematrix(data): newdata=[] for i in range(len(data[0])): newrow=[data[j][i] for j in range(len(data))] newdata.append(newrow) return newdata ``` As the result of all this, we have the matrix where the rows are document vectors. Each vector dimension represents a unique word in the collection. The value in each dimension represents the count of this word in a particular document. ### 3.2. Clustering documents Performing k-means clustering. ``` doc_vectors_file = "titles_vectors.txt" docs,words,data=read_vector_file(doc_vectors_file) num_clusters=2 print('Searching for {} clusters:'.format(num_clusters)) clust=kcluster(data,distance=pearson,k=num_clusters) print() print ('Document clusters') print ('=================') for i in range(num_clusters): print ('cluster {}:'.format(i+1)) print ([docs[r] for r in clust[i]]) print() ``` Does this grouping make sense? ``` for d in documents: print(d) ``` ### 3.3. Clustering words by their occurrence in documents We may consider that the words are similar if they occur in the same document. We say that the words are connected - they belong to the same topic, they occur in a similar context. If we want to cluster words by their occurrences in the documents, all we need to do is to transpose the document matrix. ``` rdata=rotatematrix(data) num_clusters = 3 print ('Grouping words into {} clusters:'.format(num_clusters)) clust=kcluster(rdata,distance=cosine,k=num_clusters) print() print ('word clusters:') print("=============") for i in range(num_clusters): print("cluster {}".format(i+1)) print ([words[r] for r in clust[i]]) print() ``` Copyright &copy; 2022 Marina Barsky. All rights reserved.
github_jupyter
``` # -*- coding: utf-8 -*- """ EVCで変換する. 詳細 : https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580aabf534c4dbb8bc.pdf Converting by EVC. Check detail : https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580abf534c4dbb8bc.pdf """ from __future__ import division, print_function import os from shutil import rmtree import argparse import glob import pickle import time import numpy as np from numpy.linalg import norm from sklearn.decomposition import PCA from sklearn.mixture import GMM # sklearn 0.20.0から使えない from sklearn.preprocessing import StandardScaler import scipy.signal import scipy.sparse %matplotlib inline import matplotlib.pyplot as plt import IPython from IPython.display import Audio import soundfile as sf import wave import pyworld as pw import librosa.display from dtw import dtw import warnings warnings.filterwarnings('ignore') """ Parameters __Mixtured : GMM混合数 __versions : 実験セット __convert_source : 変換元話者のパス __convert_target : 変換先話者のパス """ # parameters __Mixtured = 40 __versions = 'pre-stored0.1.1' __convert_source = 'input/EJM10/V01/T01/TIMIT/000/*.wav' __convert_target = 'adaptation/EJM04/V01/T01/ATR503/A/*.wav' __measure_target = 'adaptation/EJM04/V01/T01/TIMIT/000/*.wav' # settings __same_path = './utterance/' + __versions + '/' __output_path = __same_path + 'output/EJM04/' # EJF01, EJF07, EJM04, EJM05 Mixtured = __Mixtured pre_stored_pickle = __same_path + __versions + '.pickle' pre_stored_source_list = __same_path + 'pre-source/**/V01/T01/**/*.wav' pre_stored_list = __same_path + "pre/**/V01/T01/**/*.wav" #pre_stored_target_list = "" (not yet) pre_stored_gmm_init_pickle = __same_path + __versions + '_init-gmm.pickle' pre_stored_sv_npy = __same_path + __versions + '_sv.npy' save_for_evgmm_covarXX = __output_path + __versions + '_covarXX.npy' save_for_evgmm_covarYX = __output_path + __versions + '_covarYX.npy' save_for_evgmm_fitted_source = __output_path + __versions + '_fitted_source.npy' save_for_evgmm_fitted_target = __output_path + __versions + '_fitted_target.npy' save_for_evgmm_weights = __output_path + __versions + '_weights.npy' save_for_evgmm_source_means = __output_path + __versions + '_source_means.npy' for_convert_source = __same_path + __convert_source for_convert_target = __same_path + __convert_target for_measure_target = __same_path + __measure_target converted_voice_npy = __output_path + 'sp_converted_' + __versions converted_voice_wav = __output_path + 'sp_converted_' + __versions mfcc_save_fig_png = __output_path + 'mfcc3dim_' + __versions f0_save_fig_png = __output_path + 'f0_converted' + __versions converted_voice_with_f0_wav = __output_path + 'sp_f0_converted' + __versions mcd_text = __output_path + __versions + '_MCD.txt' EPSILON = 1e-8 class MFCC: """ MFCC() : メル周波数ケプストラム係数(MFCC)を求めたり、MFCCからスペクトルに変換したりするクラス. 動的特徴量(delta)が実装途中. ref : http://aidiary.hatenablog.com/entry/20120225/1330179868 """ def __init__(self, frequency, nfft=1026, dimension=24, channels=24): """ 各種パラメータのセット nfft : FFTのサンプル点数 frequency : サンプリング周波数 dimension : MFCC次元数 channles : メルフィルタバンクのチャンネル数(dimensionに依存) fscale : 周波数スケール軸 filterbankl, fcenters : フィルタバンク行列, フィルタバンクの頂点(?) 
""" self.nfft = nfft self.frequency = frequency self.dimension = dimension self.channels = channels self.fscale = np.fft.fftfreq(self.nfft, d = 1.0 / self.frequency)[: int(self.nfft / 2)] self.filterbank, self.fcenters = self.melFilterBank() def hz2mel(self, f): """ 周波数からメル周波数に変換 """ return 1127.01048 * np.log(f / 700.0 + 1.0) def mel2hz(self, m): """ メル周波数から周波数に変換 """ return 700.0 * (np.exp(m / 1127.01048) - 1.0) def melFilterBank(self): """ メルフィルタバンクを生成する """ fmax = self.frequency / 2 melmax = self.hz2mel(fmax) nmax = int(self.nfft / 2) df = self.frequency / self.nfft dmel = melmax / (self.channels + 1) melcenters = np.arange(1, self.channels + 1) * dmel fcenters = self.mel2hz(melcenters) indexcenter = np.round(fcenters / df) indexstart = np.hstack(([0], indexcenter[0:self.channels - 1])) indexstop = np.hstack((indexcenter[1:self.channels], [nmax])) filterbank = np.zeros((self.channels, nmax)) for c in np.arange(0, self.channels): increment = 1.0 / (indexcenter[c] - indexstart[c]) # np,int_ は np.arangeが[0. 1. 2. ..]となるのをintにする for i in np.int_(np.arange(indexstart[c], indexcenter[c])): filterbank[c, i] = (i - indexstart[c]) * increment decrement = 1.0 / (indexstop[c] - indexcenter[c]) # np,int_ は np.arangeが[0. 1. 2. ..]となるのをintにする for i in np.int_(np.arange(indexcenter[c], indexstop[c])): filterbank[c, i] = 1.0 - ((i - indexcenter[c]) * decrement) return filterbank, fcenters def mfcc(self, spectrum): """ スペクトルからMFCCを求める. """ mspec = [] mspec = np.log10(np.dot(spectrum, self.filterbank.T)) mspec = np.array(mspec) return scipy.fftpack.realtransforms.dct(mspec, type=2, norm="ortho", axis=-1) def delta(self, mfcc): """ MFCCから動的特徴量を求める. 現在は,求める特徴量フレームtをt-1とt+1の平均としている. """ mfcc = np.concatenate([ [mfcc[0]], mfcc, [mfcc[-1]] ]) # 最初のフレームを最初に、最後のフレームを最後に付け足す delta = None for i in range(1, mfcc.shape[0] - 1): slope = (mfcc[i+1] - mfcc[i-1]) / 2 if delta is None: delta = slope else: delta = np.vstack([delta, slope]) return delta def imfcc(self, mfcc, spectrogram): """ MFCCからスペクトルを求める. """ im_sp = np.array([]) for i in range(mfcc.shape[0]): mfcc_s = np.hstack([mfcc[i], [0] * (self.channels - self.dimension)]) mspectrum = scipy.fftpack.idct(mfcc_s, norm='ortho') # splrep はスプライン補間のための補間関数を求める tck = scipy.interpolate.splrep(self.fcenters, np.power(10, mspectrum)) # splev は指定座標での補間値を求める im_spectrogram = scipy.interpolate.splev(self.fscale, tck) im_sp = np.concatenate((im_sp, im_spectrogram), axis=0) return im_sp.reshape(spectrogram.shape) def trim_zeros_frames(x, eps=1e-7): """ 無音区間を取り除く. """ T, D = x.shape s = np.sum(np.abs(x), axis=1) s[s < 1e-7] = 0. return x[s > eps] def analyse_by_world_with_harverst(x, fs): """ WORLD音声分析合成器で基本周波数F0,スペクトル包絡,非周期成分を求める. 基本周波数F0についてはharvest法により,より精度良く求める. """ # 4 Harvest with F0 refinement (using Stonemask) frame_period = 5 _f0_h, t_h = pw.harvest(x, fs, frame_period=frame_period) f0_h = pw.stonemask(x, _f0_h, t_h, fs) sp_h = pw.cheaptrick(x, f0_h, t_h, fs) ap_h = pw.d4c(x, f0_h, t_h, fs) return f0_h, sp_h, ap_h def wavread(file): """ wavファイルから音声トラックとサンプリング周波数を抽出する. """ wf = wave.open(file, "r") fs = wf.getframerate() x = wf.readframes(wf.getnframes()) x = np.frombuffer(x, dtype= "int16") / 32768.0 wf.close() return x, float(fs) def preEmphasis(signal, p=0.97): """ MFCC抽出のための高域強調フィルタ. 波形を通すことで,高域成分が強調される. """ return scipy.signal.lfilter([1.0, -p], 1, signal) def alignment(source, target, path): """ タイムアライメントを取る. target音声をsource音声の長さに合うように調整する. 
""" # ここでは814に合わせよう(targetに合わせる) # p_p = 0 if source.shape[0] > target.shape[0] else 1 #shapes = source.shape if source.shape[0] > target.shape[0] else target.shape shapes = source.shape align = np.array([]) for (i, p) in enumerate(path[0]): if i != 0: if j != p: temp = np.array(target[path[1][i]]) align = np.concatenate((align, temp), axis=0) else: temp = np.array(target[path[1][i]]) align = np.concatenate((align, temp), axis=0) j = p return align.reshape(shapes) covarXX = np.load(save_for_evgmm_covarXX) covarYX = np.load(save_for_evgmm_covarYX) fitted_source = np.load(save_for_evgmm_fitted_source) fitted_target = np.load(save_for_evgmm_fitted_target) weights = np.load(save_for_evgmm_weights) source_means = np.load(save_for_evgmm_source_means) """ 声質変換に用いる変換元音声と目標音声を読み込む. """ timer_start = time.time() source_mfcc_for_convert = [] source_sp_for_convert = [] source_f0_for_convert = [] source_ap_for_convert = [] fs_source = None for name in sorted(glob.iglob(for_convert_source, recursive=True)): print("source = ", name) x_source, fs_source = sf.read(name) f0_source, sp_source, ap_source = analyse_by_world_with_harverst(x_source, fs_source) mfcc_source = MFCC(fs_source) #mfcc_s_tmp = mfcc_s.mfcc(sp) #source_mfcc_for_convert = np.hstack([mfcc_s_tmp, mfcc_s.delta(mfcc_s_tmp)]) source_mfcc_for_convert.append(mfcc_source.mfcc(sp_source)) source_sp_for_convert.append(sp_source) source_f0_for_convert.append(f0_source) source_ap_for_convert.append(ap_source) target_mfcc_for_fit = [] target_f0_for_fit = [] target_ap_for_fit = [] for name in sorted(glob.iglob(for_convert_target, recursive=True)): print("target = ", name) x_target, fs_target = sf.read(name) f0_target, sp_target, ap_target = analyse_by_world_with_harverst(x_target, fs_target) mfcc_target = MFCC(fs_target) #mfcc_target_tmp = mfcc_target.mfcc(sp_target) #target_mfcc_for_fit = np.hstack([mfcc_t_tmp, mfcc_t.delta(mfcc_t_tmp)]) target_mfcc_for_fit.append(mfcc_target.mfcc(sp_target)) target_f0_for_fit.append(f0_target) target_ap_for_fit.append(ap_target) # 全部numpy.arrrayにしておく source_data_mfcc = np.array(source_mfcc_for_convert) source_data_sp = np.array(source_sp_for_convert) source_data_f0 = np.array(source_f0_for_convert) source_data_ap = np.array(source_ap_for_convert) target_mfcc = np.array(target_mfcc_for_fit) target_f0 = np.array(target_f0_for_fit) target_ap = np.array(target_ap_for_fit) print("Load Input and Target Voice time = ", time.time() - timer_start , "[sec]") def convert(source, covarXX, fitted_source, fitted_target, covarYX, weights, source_means): """ 声質変換を行う. """ Mixtured = 40 D = source.shape[0] E = np.zeros((Mixtured, D)) for m in range(Mixtured): xx = np.linalg.solve(covarXX[m], source - fitted_source[m]) E[m] = fitted_target[m] + np.dot(covarYX[m], xx) px = GMM(n_components = Mixtured, covariance_type = 'full') px.weights_ = weights px.means_ = source_means px.covars_ = covarXX posterior = px.predict_proba(np.atleast_2d(source)) return np.dot(posterior, E) def calc_std_mean(input_f0): """ F0変換のために標準偏差と平均を求める. 
""" tempF0 = input_f0[ np.where(input_f0 > 0)] fixed_logF0 = np.log(tempF0) #logF0 = np.ma.log(input_f0) # 0要素にlogをするとinfになるのでmaskする #fixed_logF0 = np.ma.fix_invalid(logF0).data # maskを取る return np.std(fixed_logF0), np.mean(fixed_logF0) # 標準偏差と平均を返す """ 距離を測るために,正しい目標音声を読み込む """ source_mfcc_for_measure_target = [] source_sp_for_measure_target = [] source_f0_for_measure_target = [] source_ap_for_measure_target = [] for name in sorted(glob.iglob(for_measure_target, recursive=True)): print("measure_target = ", name) x_measure_target, fs_measure_target = sf.read(name) f0_measure_target, sp_measure_target, ap_measure_target = analyse_by_world_with_harverst(x_measure_target, fs_measure_target) mfcc_measure_target = MFCC(fs_measure_target) #mfcc_s_tmp = mfcc_s.mfcc(sp) #source_mfcc_for_convert = np.hstack([mfcc_s_tmp, mfcc_s.delta(mfcc_s_tmp)]) source_mfcc_for_measure_target.append(mfcc_measure_target.mfcc(sp_measure_target)) source_sp_for_measure_target.append(sp_measure_target) source_f0_for_measure_target.append(f0_measure_target) source_ap_for_measure_target.append(ap_measure_target) measure_target_data_mfcc = np.array(source_mfcc_for_measure_target) measure_target_data_sp = np.array(source_sp_for_measure_target) measure_target_data_f0 = np.array(source_f0_for_measure_target) measure_target_data_ap = np.array(source_ap_for_measure_target) def calc_mcd(source, convert, target): """ 変換する前の音声と目標音声でDTWを行う. その後,変換後の音声と目標音声とのMCDを計測する. """ dist, cost, acc, path = dtw(source, target, dist=lambda x, y: norm(x-y, ord=1)) aligned = alignment(source, target, path) return 10.0 / np.log(10) * np.sqrt(2 * np.sum(np.square(aligned - convert))), aligned """ 変換を行う. """ timer_start = time.time() # 事前に目標話者の標準偏差と平均を求めておく temp_f = None for x in range(len(target_f0)): temp = target_f0[x].flatten() if temp_f is None: temp_f = temp else: temp_f = np.hstack((temp_f, temp)) target_std, target_mean = calc_std_mean(temp_f) # 変換 output_mfcc = [] filer = open(mcd_text, 'a') for i in range(len(source_data_mfcc)): print("voice no = ", i) # convert source_temp = source_data_mfcc[i] output_mfcc = np.array([convert(source_temp[frame], covarXX, fitted_source, fitted_target, covarYX, weights, source_means)[0] for frame in range(source_temp.shape[0])]) # syntehsis source_sp_temp = source_data_sp[i] source_f0_temp = source_data_f0[i] source_ap_temp = source_data_ap[i] output_imfcc = mfcc_source.imfcc(output_mfcc, source_sp_temp) y_source = pw.synthesize(source_f0_temp, output_imfcc, source_ap_temp, fs_source, 5) np.save(converted_voice_npy + "s{0}.npy".format(i), output_imfcc) sf.write(converted_voice_wav + "s{0}.wav".format(i), y_source, fs_source) # calc MCD measure_temp = measure_target_data_mfcc[i] mcd, aligned_measure = calc_mcd(source_temp, output_mfcc, measure_temp) filer.write("MCD No.{0} = {1} , shape = {2}\n".format(i, mcd, source_temp.shape)) # save figure spectram range_s = output_imfcc.shape[0] scale = [x for x in range(range_s)] MFCC_sample_s = [source_temp[x][0] for x in range(range_s)] MFCC_sample_c = [output_mfcc[x][0] for x in range(range_s)] MFCC_sample_t = [aligned_measure[x][0] for x in range(range_s)] plt.subplot(311) plt.plot(scale, MFCC_sample_s, label="source", linewidth = 1.0) plt.plot(scale, MFCC_sample_c, label="convert", linewidth = 1.0) plt.plot(scale, MFCC_sample_t, label="target", linewidth = 1.0, linestyle="dashed") plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=3, mode="expand", borderaxespad=0.) 
#plt.xlabel("Flame") #plt.ylabel("amplitude MFCC") MFCC_sample_s = [source_temp[x][1] for x in range(range_s)] MFCC_sample_c = [output_mfcc[x][1] for x in range(range_s)] MFCC_sample_t = [aligned_measure[x][1] for x in range(range_s)] plt.subplot(312) plt.plot(scale, MFCC_sample_s, label="source", linewidth = 1.0) plt.plot(scale, MFCC_sample_c, label="convert", linewidth = 1.0) plt.plot(scale, MFCC_sample_t, label="target", linewidth = 1.0, linestyle="dashed") plt.ylabel("amplitude MFCC") MFCC_sample_s = [source_temp[x][2] for x in range(range_s)] MFCC_sample_c = [output_mfcc[x][2] for x in range(range_s)] MFCC_sample_t = [aligned_measure[x][2] for x in range(range_s)] plt.subplot(313) plt.plot(scale, MFCC_sample_s, label="source", linewidth = 1.0) plt.plot(scale, MFCC_sample_c, label="convert", linewidth = 1.0) plt.plot(scale, MFCC_sample_t, label="target", linewidth = 1.0, linestyle="dashed") plt.xlabel("Flame") plt.savefig(mfcc_save_fig_png + "s{0}.png".format(i) , format='png', dpi=300) plt.close() # synthesis with conveted f0 source_std, source_mean = calc_std_mean(source_f0_temp) std_ratio = target_std / source_std log_conv_f0 = std_ratio * (source_f0_temp - source_mean) + target_mean conv_f0 = np.maximum(log_conv_f0, 0) np.save(converted_voice_npy + "f{0}.npy".format(i), conv_f0) y_conv = pw.synthesize(conv_f0, output_imfcc, source_ap_temp, fs_source, 5) sf.write(converted_voice_with_f0_wav + "sf{0}.wav".format(i) , y_conv, fs_source) # save figure f0 F0_s = [source_f0_temp[x] for x in range(range_s)] F0_c = [conv_f0[x] for x in range(range_s)] plt.plot(scale, F0_s, label="source", linewidth = 1.0) plt.plot(scale, F0_c, label="convert", linewidth = 1.0) plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=2, mode="expand", borderaxespad=0.) plt.xlabel("Frame") plt.ylabel("Amplitude") plt.savefig(f0_save_fig_png + "f{0}.png".format(i), format='png', dpi=300) plt.close() filer.close() print("Make Converted Spectram time = ", time.time() - timer_start , "[sec]") ```
github_jupyter
<CENTER> <header> <h1>Pandas Tutorial</h1> <h3>EuroScipy, Erlangen DE, August 24th, 2016</h3> <h2>Joris Van den Bossche</h2> <p></p> Source: <a href="https://github.com/jorisvandenbossche/pandas-tutorial">https://github.com/jorisvandenbossche/pandas-tutorial</a> </header> </CENTER> Two data files are not included in the repo, you can download them from: [`titles.csv`](https://drive.google.com/file/d/0B3G70MlBnCgKa0U4WFdWdGdVOFU/view?usp=sharing) and [`cast.csv`](https://drive.google.com/file/d/0B3G70MlBnCgKRzRmTWdQTUdjNnM/view?usp=sharing) and put them in the `/data` folder. ## Requirements to run this tutorial To follow this tutorial you need to have the following packages installed: - Python version 2.6-2.7 or 3.3-3.5 - `pandas` version 0.18.0 or later: http://pandas.pydata.org/ - `numpy` version 1.7 or later: http://www.numpy.org/ - `matplotlib` version 1.3 or later: http://matplotlib.org/ - `ipython` version 3.x with notebook support, or `ipython 4.x` combined with `jupyter`: http://ipython.org - `seaborn` (this is used for some plotting, but not necessary to follow the tutorial): http://stanford.edu/~mwaskom/software/seaborn/ ## Downloading the tutorial materials If you have git installed, you can get the material in this tutorial by cloning this repo: git clone https://github.com/jorisvandenbossche/pandas-tutorial.git As an alternative, you can download it as a zip file: https://github.com/jorisvandenbossche/pandas-tutorial/archive/master.zip. I will probably make some changes until the start of the tutorial, so best to download the latest version then (or do a `git pull` if you are using git). Two data files are not included in the repo, you can download them from: [`titles.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKajNMa1pfSzN6Q3M) and [`cast.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKal9UYTJSR2ZhSW8) and put them in the `/data` folder. ## Contents Beginners track: - [01 - Introduction - beginners.ipynb](01 - Introduction - beginners.ipynb) - [02 - Data structures](02 - Data structures.ipynb) - [03 - Indexing and selecting data](03 - Indexing and selecting data.ipynb) - [04 - Groupby operations](04 - Groupby operations.ipynb) Advanced track: - [03b - Some more advanced indexing](03b - Some more advanced indexing.ipynb) - [04b - Advanced groupby operations](04b - Advanced groupby operations.ipynb) - [05 - Time series data](05 - Time series data.ipynb) - [06 - Reshaping data](06 - Reshaping data.ipynb)
github_jupyter
# Using Named Entity Recognition (NER) **Named entities** are noun phrases that refer to specific locations, people, organizations, and so on. With **named entity recognition**, you can find the named entities in your texts and also determine what kind of named entity they are. Here’s the list of named entity types from the <a href = "https://www.nltk.org/book/ch07.html#sec-ner">NLTK book</a>: <table> <tr><th>NEtype</th> <th>Examples</th></tr> <tr><td>ORGANIZATION</td> <td>Georgia-Pacific Corp., WHO</td></tr> <tr><td>PERSON</td> <td>Eddy Bonte, President Obama</td></tr> <tr><td>LOCATION</td> <td>Murray River, Mount Everest</td></tr> <tr><td>DATE</td> <td>June, 2008-06-29</td></tr> <tr><td>TIME</td> <td>two fifty a m, 1:30 p.m.</td></tr> <tr><td>MONEY</td> <td>175 million Canadian dollars, GBP 10.40</td></tr> <tr><td>PERCENT</td> <td>twenty pct, 18.75 %</td></tr> <tr><td>FACILITY</td> <td>Washington Monument, Stonehenge</td></tr> <tr><td>GPE</td> <td>South East Asia, Midlothian</td></tr> <table> You can use nltk.ne_chunk() to recognize named entities. Let’s use lotr_pos_tags again to test it out: ``` import nltk from nltk.tokenize import word_tokenize lotr_quote = "It's a dangerous business, Frodo, going out your door." words_in_lotr_quote = word_tokenize(lotr_quote) print(words_in_lotr_quote) lotr_pos_tags = nltk.pos_tag(words_in_lotr_quote) print(lotr_pos_tags) tree = nltk.ne_chunk(lotr_pos_tags) ``` Now take a look at the visual representation: ``` tree.draw() ``` Here’s what you get: See how Frodo has been tagged as a PERSON? You also have the option to use the parameter binary=True if you just want to know what the named entities are but not what kind of named entity they are: ``` tree = nltk.ne_chunk(lotr_pos_tags, binary=True) tree.draw() ``` Now all you see is that Frodo is an NE: That’s how you can identify named entities! But you can take this one step further and extract named entities directly from your text. Create a string from which to extract named entities. You can use this quote from <a href = "https://en.wikipedia.org/wiki/The_War_of_the_Worlds" >The War of the Worlds</a>: ``` quote = """ Men like Schiaparelli watched the red planet—it is odd, by-the-bye, that for countless centuries Mars has been the star of war—but failed to interpret the fluctuating appearances of the markings they mapped so well. All that time the Martians must have been getting ready. During the opposition of 1894 a great light was seen on the illuminated part of the disk, first at the Lick Observatory, then by Perrotin of Nice, and then by other observers. English readers heard of it first in the issue of Nature dated August 2.""" ``` Now create a function to extract named entities: ``` def extract_ne(quote): words = word_tokenize(quote, language='english') tags = nltk.pos_tag(words) tree = nltk.ne_chunk(tags, binary=True) tree.draw() return set( " ".join(i[0] for i in t) for t in tree if hasattr(t, "label") and t.label() == "NE" ) ``` With this function, you gather all named entities, with no repeats. In order to do that, you tokenize by word, apply part of speech tags to those words, and then extract named entities based on those tags. Because you included binary=True, the named entities you’ll get won’t be labeled more specifically. You’ll just know that they’re named entities. 
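If you do want the entity types as well, a small variation on the function above (a sketch, not part of the original tutorial, and the helper name `extract_ne_with_labels` is new) is to drop `binary=True` and read each subtree's label:

```
# Sketch (not part of the original tutorial): keep the entity types by
# omitting binary=True and reading each chunk's label.
def extract_ne_with_labels(quote):
    words = word_tokenize(quote, language='english')
    tags = nltk.pos_tag(words)
    tree = nltk.ne_chunk(tags)  # no binary=True, so labels such as PERSON or GPE are kept
    return {
        (" ".join(token for token, pos in t), t.label())
        for t in tree
        if hasattr(t, "label")
    }
```

Calling `extract_ne_with_labels(quote)` then returns `(entity, type)` pairs instead of bare strings.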
Take a look at the information you extracted:

```
extract_ne(quote)
```

You missed the city of Nice, possibly because NLTK interpreted it as a regular English adjective, but you still got the following:

1. **An institution**: 'Lick Observatory'
2. **A planet**: 'Mars'
3. **A publication**: 'Nature'
4. **People**: 'Perrotin', 'Schiaparelli'
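To check the claim about Nice (an extra sketch, not in the original tutorial), you can inspect the part-of-speech tag NLTK assigned to that token:

```
# Sketch (not in the original tutorial): inspect the POS tag NLTK gave to "Nice"
tagged = nltk.pos_tag(word_tokenize(quote, language='english'))
print([pair for pair in tagged if pair[0] == "Nice"])
# If the tag is 'JJ' (adjective) rather than 'NNP' (proper noun), that explains
# why ne_chunk did not treat it as part of a named entity.
```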
github_jupyter
# Closed-Loop Evaluation In this notebook you are going to evaluate Urban Driver to control the SDV with a protocol named *closed-loop* evaluation. **Note: this notebook assumes you've already run the [training notebook](./train.ipynb) and stored your model successfully (or that you have stored a pre-trained one).** **Note: for a detailed explanation of what closed-loop evaluation (CLE) is, please refer to our [planning notebook](../planning/closed_loop_test.ipynb)** ### Imports ``` import matplotlib.pyplot as plt import numpy as np import torch from prettytable import PrettyTable from l5kit.configs import load_config_data from l5kit.data import LocalDataManager, ChunkedDataset from l5kit.dataset import EgoDatasetVectorized from l5kit.vectorization.vectorizer_builder import build_vectorizer from l5kit.simulation.dataset import SimulationConfig from l5kit.simulation.unroll import ClosedLoopSimulator from l5kit.cle.closed_loop_evaluator import ClosedLoopEvaluator, EvaluationPlan from l5kit.cle.metrics import (CollisionFrontMetric, CollisionRearMetric, CollisionSideMetric, DisplacementErrorL2Metric, DistanceToRefTrajectoryMetric) from l5kit.cle.validators import RangeValidator, ValidationCountingAggregator from l5kit.visualization.visualizer.zarr_utils import simulation_out_to_visualizer_scene from l5kit.visualization.visualizer.visualizer import visualize from bokeh.io import output_notebook, show from l5kit.data import MapAPI from collections import defaultdict import os ``` ## Prepare data path and load cfg By setting the `L5KIT_DATA_FOLDER` variable, we can point the script to the folder where the data lies. Then, we load our config file with relative paths and other configurations (rasteriser, training params ...). ``` # set env variable for data from l5kit.data import get_dataset_path os.environ["L5KIT_DATA_FOLDER"], project_path = get_dataset_path() dm = LocalDataManager(None) # get config cfg = load_config_data("./config.yaml") ``` ## Load the model ``` model_path = project_path + "/urban_driver_dummy_model.pt" device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = torch.load(model_path).to(device) model = model.eval() torch.set_grad_enabled(False) ``` ## Load the evaluation data Differently from training and open loop evaluation, this setting is intrinsically sequential. As such, we won't be using any of PyTorch's parallelisation functionalities. ``` # ===== INIT DATASET eval_cfg = cfg["val_data_loader"] eval_zarr = ChunkedDataset(dm.require(eval_cfg["key"])).open() vectorizer = build_vectorizer(cfg, dm) eval_dataset = EgoDatasetVectorized(cfg, eval_zarr, vectorizer) print(eval_dataset) ``` ## Define some simulation properties We define here some common simulation properties such as the length of the simulation and how many scene to simulate. **NOTE: these properties have a significant impact on the execution time. We suggest you to increase them only if your setup includes a GPU.** ``` num_scenes_to_unroll = 10 num_simulation_steps = 50 ``` # Closed-loop simulation We define a closed-loop simulation that drives the SDV for `num_simulation_steps` steps while using the log-replayed agents. Then, we unroll the selected scenes. The simulation output contains all the information related to the scene, including the annotated and simulated positions, states, and trajectories of the SDV and the agents. If you want to know more about what the simulation output contains, please refer to the source code of the class `SimulationOutput`. 
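Before running the unroll, it can help to see what you might do with a `SimulationOutput` afterwards. The sketch below is an illustration under stated assumptions, not l5kit API documentation: the helper only needs two `(num_frames, 2)` arrays of ego centroids, which you would extract yourself from the attributes described in the `SimulationOutput` source (the exact attribute names, such as `simulated_ego_states` or `recorded_ego_states`, are an assumption here; check the class in your l5kit version). The closed-loop evaluator used later computes this displacement properly via `DisplacementErrorL2Metric`; this is only to make the quantity concrete.

```
# Hedged sketch (not part of the original notebook): a plain-numpy average displacement
# error between a simulated ego trajectory and the log-replayed (annotated) one.
# How you obtain the two centroid arrays from SimulationOutput is up to you; the
# attribute names mentioned in the lead-in are an assumption, not confirmed API.

def average_displacement_error(simulated_xy: np.ndarray, recorded_xy: np.ndarray) -> float:
    """Mean L2 distance between two (num_frames, 2) arrays of ego centroids."""
    num_frames = min(len(simulated_xy), len(recorded_xy))
    diffs = simulated_xy[:num_frames] - recorded_xy[:num_frames]
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```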
``` # ==== DEFINE CLOSED-LOOP SIMULATION sim_cfg = SimulationConfig(use_ego_gt=False, use_agents_gt=True, disable_new_agents=True, distance_th_far=500, distance_th_close=50, num_simulation_steps=num_simulation_steps, start_frame_index=0, show_info=True) sim_loop = ClosedLoopSimulator(sim_cfg, eval_dataset, device, model_ego=model, model_agents=None) # ==== UNROLL scenes_to_unroll = list(range(0, len(eval_zarr.scenes), len(eval_zarr.scenes)//num_scenes_to_unroll)) sim_outs = sim_loop.unroll(scenes_to_unroll) ``` # Closed-loop metrics **Note: for a detailed explanation of CLE metrics, please refer again to our [planning notebook](../planning/closed_loop_test.ipynb)** ``` metrics = [DisplacementErrorL2Metric(), DistanceToRefTrajectoryMetric(), CollisionFrontMetric(), CollisionRearMetric(), CollisionSideMetric()] validators = [RangeValidator("displacement_error_l2", DisplacementErrorL2Metric, max_value=30), RangeValidator("distance_ref_trajectory", DistanceToRefTrajectoryMetric, max_value=4), RangeValidator("collision_front", CollisionFrontMetric, max_value=0), RangeValidator("collision_rear", CollisionRearMetric, max_value=0), RangeValidator("collision_side", CollisionSideMetric, max_value=0)] intervention_validators = ["displacement_error_l2", "distance_ref_trajectory", "collision_front", "collision_rear", "collision_side"] cle_evaluator = ClosedLoopEvaluator(EvaluationPlan(metrics=metrics, validators=validators, composite_metrics=[], intervention_validators=intervention_validators)) ``` # Quantitative evaluation We can now compute the metric evaluation, collect the results and aggregate them. ``` cle_evaluator.evaluate(sim_outs) validation_results = cle_evaluator.validation_results() agg = ValidationCountingAggregator().aggregate(validation_results) cle_evaluator.reset() ``` ## Reporting errors from the closed-loop We can now report the metrics and plot them. ``` fields = ["metric", "value"] table = PrettyTable(field_names=fields) values = [] names = [] for metric_name in agg: table.add_row([metric_name, agg[metric_name].item()]) values.append(agg[metric_name].item()) names.append(metric_name) print(table) plt.bar(np.arange(len(names)), values) plt.xticks(np.arange(len(names)), names, rotation=60, ha='right') plt.show() ``` # Qualitative evaluation ## Visualise the closed-loop We can visualise the scenes we have obtained previously. **The policy is now in full control of the SDV as this moves through the annotated scene.** ``` output_notebook() mapAPI = MapAPI.from_cfg(dm, cfg) for sim_out in sim_outs: # for each scene vis_in = simulation_out_to_visualizer_scene(sim_out, mapAPI) show(visualize(sim_out.scene_id, vis_in)) ```
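If you just want a quick look at what each element of `sim_outs` carries without opening the library source, plain Python introspection is enough. This is only a convenience sketch: the attribute names it prints are whatever the installed l5kit version defines, so nothing specific is assumed here.

```
# List the public attributes of one simulation output and their types.
sample_out = sim_outs[0]
for name in sorted(vars(sample_out)):
    if not name.startswith("_"):
        print(f"{name}: {type(getattr(sample_out, name)).__name__}")
```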
github_jupyter
<a href="https://colab.research.google.com/github/AmanPriyanshu/Reinforcement-Learning/blob/master/DQN_practice.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import torch import numpy as np from matplotlib import pyplot as plt torch.manual_seed(0) np.random.seed(0) class Environment: def __init__(self): self.constant_function_details = {'value':np.random.randint(10, 90)} self.uniform_function_details = {'min': 25, 'max': 75} self.gaussian_function_details = {'mean': 50, 'std': 25} self.quadratic_growth_details = {'m':0.0175, 'count':0} self.bandits = None self.generate_bandit_instance() def return_constant(self): return self.constant_function_details['value'] + np.random.random()*10 def return_uniform(self): return np.random.uniform(self.uniform_function_details['min'], self.uniform_function_details['max']) def return_gaussian(self): return np.random.normal(loc=self.gaussian_function_details['mean'], scale=self.gaussian_function_details['std']) def return_quadratic_growth(self): self.quadratic_growth_details['count'] += 1 return np.power((self.quadratic_growth_details['m'] * self.quadratic_growth_details['count']), 2) def generate_bandit_instance(self): self.bandits = np.array([self.return_constant, self.return_uniform, self.return_gaussian, self.return_quadratic_growth]) np.random.shuffle(self.bandits) def observe_all_bandits(self): vals = [] for func in self.bandits: vals.append(func()/100) return np.array(vals) env = Environment() values = [] for _ in range(1000): values.append(env.observe_all_bandits()) values = np.array(values) for index, function in enumerate(env.bandits): plt.plot(np.arange(values.shape[0]), values.T[index], label=function.__name__[len('return_'):]) plt.legend() plt.show() ``` ## Model: ``` def model_generator(): model = torch.nn.Sequential( torch.nn.Linear(2, 4), torch.nn.ReLU(), torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 4), torch.nn.Softmax(dim=1), ) return model class Agent: def __init__(self): self.transition = {'state': None, 'action': None, 'next_state': None, 'reward': None} self.replay_memory = self.ReplayMemory() self.policy_net = model_generator() self.target_net = model_generator() self.target_net.eval() self.target_net.load_state_dict(self.policy_net.state_dict()) self.epsilon = 1 self.epsilon_limit = 0.01 self.steps_taken = 0 self.gamma = 0. 
self.optimizer = torch.optim.Adam(self.policy_net.parameters()) self.batch_size = 5 def loss_calculator(self): samples = self.replay_memory.sample(self.batch_size) losses = [] for sample in samples: action = sample['action'] state = sample['state'] next_state = sample['next_state'] reward = sample['reward'] loss = self.policy_pass(state)[0][action] - (reward + self.gamma * torch.max(self.target_pass(next_state))) losses.append(loss) loss = torch.mean(torch.stack(losses)) if abs(loss.item()) < 1: loss = 0.5 * torch.pow(loss, 2) else: loss = torch.abs(loss) - 0.5 return loss def policy_update(self): loss = self.loss_calculator() self.optimizer.zero_grad() loss.backward() self.optimizer.step() def target_update(self): self.target_net.load_state_dict(self.policy_net.state_dict()) def target_pass(self, state): input_state = torch.tensor([[state['rank'], state['reward']]], dtype=torch.float) actions = self.target_net(input_state) return actions def policy_pass(self, state): input_state = torch.tensor([[state['rank'], state['reward']]], dtype=torch.float) actions = self.policy_net(input_state) return actions def take_action(self, state): if np.random.random() < self.epsilon: action = torch.randint(0, 4, (1,)) else: actions = self.policy_pass(state) action = torch.argmax(actions, 1) return action def take_transition(self, transition): self.steps_taken += 1 self.replay_memory.push(transition) if self.steps_taken%self.batch_size == 0 and self.steps_taken>20: self.policy_update() if self.steps_taken%25 == 0 and self.steps_taken>20: self.target_update() self.epsilon -= self.epsilon_limit/6 if self.epsilon<self.epsilon_limit: self.epsilon = self.epsilon_limit class ReplayMemory(object): def __init__(self, capacity=15): self.capacity = capacity self.memory = [None] * self.capacity self.position = 0 def push(self, transition): self.memory[self.position] = transition self.position = (self.position + 1) % self.capacity def sample(self, batch_size=5): return np.random.choice(np.array(self.memory), batch_size) def __len__(self): return len(self.memory) env = Environment() agent1 = Agent() rewards = [] state = {'rank':0, 'reward':0} for _ in range(1000): with torch.no_grad(): action = agent1.take_action(state) observation = env.observe_all_bandits() reward = observation[action] rank = np.argsort(observation)[action] next_state = {'rank': rank, 'reward':reward} transition = {'state': state, 'action': action, 'next_state': next_state, 'reward': reward} agent1.take_transition(transition) rewards.append(reward) plt.plot([i for i in range(len(rewards))], rewards, label='rewards') plt.legend() plt.title('Rewards Progression') plt.show() ```
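The raw reward curve above is quite noisy, so the learning trend can be hard to read. A simple moving average makes it clearer; the sketch below only assumes the `rewards` list built in the loop above, and the window size of 50 is an arbitrary choice.

```
# Smooth the reward curve with a moving average to expose the trend.
window = 50
rewards_arr = np.asarray(rewards, dtype=float).ravel()
smoothed = np.convolve(rewards_arr, np.ones(window) / window, mode='valid')
plt.plot(np.arange(len(rewards_arr)), rewards_arr, alpha=0.3, label='raw rewards')
plt.plot(np.arange(window - 1, len(rewards_arr)), smoothed, label='moving average')
plt.legend()
plt.title('Smoothed Rewards Progression')
plt.show()
```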
github_jupyter
``` %load_ext autoreload %autoreload 2 from quantumnetworks import MultiModeSystem, plot_full_evolution import numpy as np ``` # Trapezoidal Method ``` # params stored in txt sys = MultiModeSystem(params={"dir":"data/"}) x_0 = np.array([1,0,0,1]) ts = np.linspace(0, 10, 101) X = sys.trapezoidal(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) ax.legend() ``` # Forward Euler ``` # params stored in txt sys = MultiModeSystem(params={"dir":"data/"}) x_0 = np.array([1,0,0,1]) ts = np.linspace(0, 10, 10001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) ax.legend() u = sys.eval_u(0) sys.eval_Jf(x_0, u) sys.eval_Jf_numerical(x_0, u) # params directly provided omegas = [1,2] kappas = [0.001,0.005] gammas = [0.002,0.002] kerrs = [0.001, 0.001] couplings = [[0,1,0.002]] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas, "gammas":gammas, "kerrs": kerrs, "couplings":couplings}) x_0 = np.array([1,0,0,1]) ts = np.linspace(0, 10, 1001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) ax.legend() # single mode system omegas = [2*np.pi*1] kappas = [2*np.pi*0.001] gammas = [2*np.pi*0.002] kerrs = [2*np.pi*0.001] couplings = [] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas,"gammas":gammas,"kerrs":kerrs,"couplings":couplings}) x_0 = np.array([1,0]) ts = np.linspace(0, 10, 100001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$"]) ax.legend() # params directly provided omegas = [2*np.pi*1,2*np.pi*2,2*np.pi*1] kappas = [2*np.pi*0.001,2*np.pi*0.005,2*np.pi*0.001] gammas = [2*np.pi*0.002,2*np.pi*0.002,2*np.pi*0.002] kerrs = [2*np.pi*0.001, 2*np.pi*0.001, 2*np.pi*0.001] couplings = [[0,1,2*np.pi*0.002],[1,2,2*np.pi*0.002]] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas, "gammas":gammas, "kerrs":kerrs, "couplings":couplings}) print(sys.A) # x_0 = np.array([1,0,0,1]) # ts = np.linspace(0, 10, 1001) # X = sys.forward_euler(x_0, ts) # fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) # ax.legend() ``` # Linearization ``` omegas = [2*np.pi*1,2*np.pi*2,2*np.pi*1] kappas = [2*np.pi*0.001,2*np.pi*0.005,2*np.pi*0.001] gammas = [2*np.pi*0.002,2*np.pi*0.002,2*np.pi*0.002] kerrs = [2*np.pi*0.001, 2*np.pi*0.001, 2*np.pi*0.001] couplings = [[0,1,2*np.pi*0.002],[1,2,2*np.pi*0.002]] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas, "gammas":gammas, "kerrs":kerrs, "couplings":couplings}) x_0 = np.array([1,0, 0,1, 1,0]) ts = np.linspace(0, 1, 1001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$", "$q_b$","$p_b$", "$q_c$","$p_c$"]) ax.legend() X_linear = sys.forward_euler_linear(x_0, ts, x_0, 0) fig, ax = plot_full_evolution(X_linear, ts, labels=["$q_{a,linear}$","$p_{a,linear}$","$q_{b,linear}$","$p_{b,linear}$","$q_{c,linear}$","$p_{c,linear}$"]) Delta_X = (X-X_linear)/X plot_full_evolution(Delta_X[:,:50], ts[:50], labels=["$q_a - q_{a,linear}$","$p_a - p_{a,linear}$","$q_b - q_{b,linear}$","$p_b - p_{b,linear}$","$q_c - q_{c,linear}$","$p_c - p_{c,linear}$"]) ax.legend() ```
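Before moving on, a quick sanity check on the two integrators used above: they should agree closely when forward Euler is run on a much finer grid than the trapezoidal method. The sketch below reuses only the `MultiModeSystem` calls already shown and assumes the returned trajectories are laid out the way `plot_full_evolution` expects (state components along the first axis, time along the second).

```
# Compare the final states produced by the two integrators.
sys = MultiModeSystem(params={"dir":"data/"})
x_0 = np.array([1,0,0,1])

ts_coarse = np.linspace(0, 10, 101)     # trapezoidal handles a coarse grid
ts_fine = np.linspace(0, 10, 10001)     # forward Euler needs a finer one

X_trap = sys.trapezoidal(x_0, ts_coarse)
X_fe = sys.forward_euler(x_0, ts_fine)

print("max |difference| at t=10:", np.max(np.abs(X_trap[:, -1] - X_fe[:, -1])))
```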
github_jupyter
# Getting Started with Azure Machine Learning Azure Machine Learning (*Azure ML*) is a cloud-based service for creating and managing machine learning solutions. It's designed to help data scientists leverage their existing data processing and model development skills and frameworks, and help them scale their workloads to the cloud. The Azure ML SDK for Python provides classes you can use to work with Azure ML in your Azure subscription. ## Before You Start 1. Complete the steps in [Lab 1 - Getting Started with Azure Machine Learning](./labdocs/Lab01.md) to create an Azure Machine Learning workspace and a compute instance with the contents of this repo. 2. Open this notebook in the compute instance and run it there. ## Check the Azure ML SDK Version Let's start by importing the **azureml-core** package and checking the version of the SDK that is installed. Click the cell below and then use the **&#9658; Run** button on the toolbar to run it. ``` import azureml.core print("Ready to use Azure ML", azureml.core.VERSION) ``` ## Connect to Your Workspace All experiments and associated resources are managed within your Azure ML workspace. You can connect to an existing workspace, or create a new one using the Azure ML SDK. In most cases, you should store the workspace configuration in a JSON configuration file. This makes it easier to reconnect without needing to remember details like your Azure subscription ID. You can download the JSON configuration file from the blade for your workspace in the Azure portal, but if you're using a Compute Instance within your workspace, the configuration file has already been downloaded to the root folder. The code below uses the configuration file to connect to your workspace. The first time you run it in a notebook session, you'll be prompted to sign into Azure by clicking the https://microsoft.com/devicelogin link, entering an automatically generated code, and signing into Azure. After you have successfully signed in, you can close the browser tab that was opened, return to this notebook, and wait for the sign-in process to complete. ``` from azureml.core import Workspace ws = Workspace.from_config() print(ws.name, "loaded") ``` ## Run an Experiment One of the most fundamental tasks that data scientists need to perform is to create and run experiments that process and analyze data. In this exercise, you'll learn how to use an Azure ML *experiment* to run Python code and record values extracted from data. In this case, you'll use a simple dataset that contains details of patients that have been tested for diabetes. You'll run an experiment to explore the data, extracting statistics, visualizations, and data samples. Most of the code you'll use is fairly generic Python, such as you might run in any data exploration process. However, with the addition of a few lines, the code uses an Azure ML *experiment* to log details of the run.
``` from azureml.core import Experiment import pandas as pd import matplotlib.pyplot as plt %matplotlib inline # Create an Azure ML experiment in your workspace experiment = Experiment(workspace = ws, name = "diabetes-experiment") # Start logging data from the experiment run = experiment.start_logging() print("Starting experiment:", experiment.name) # load the data from a local file data = pd.read_csv('data/diabetes.csv') # Count the rows and log the result row_count = (len(data)) run.log('observations', row_count) print('Analyzing {} rows of data'.format(row_count)) # Plot and log the count of diabetic vs non-diabetic patients diabetic_counts = data['Diabetic'].value_counts() fig = plt.figure(figsize=(6,6)) ax = fig.gca() diabetic_counts.plot.bar(ax = ax) ax.set_title('Patients with Diabetes') ax.set_xlabel('Diagnosis') ax.set_ylabel('Patients') plt.show() run.log_image(name = 'label distribution', plot = fig) # log distinct pregnancy counts pregnancies = data.Pregnancies.unique() run.log_list('pregnancy categories', pregnancies) # Log summary statistics for numeric columns med_columns = ['PlasmaGlucose', 'DiastolicBloodPressure', 'TricepsThickness', 'SerumInsulin', 'BMI'] summary_stats = data[med_columns].describe().to_dict() for col in summary_stats: keys = list(summary_stats[col].keys()) values = list(summary_stats[col].values()) for index in range(len(keys)): run.log_row(col, stat = keys[index], value = values[index]) # Save a sample of the data and upload it to the experiment output data.sample(100).to_csv('sample.csv', index=False, header=True) run.upload_file(name = 'outputs/sample.csv', path_or_stream = './sample.csv') # Complete the run run.complete() ``` ## View Experiment Results After the experiment has been finished, you can use the **run** object to get information about the run and its outputs: ``` import json # Get run details details = run.get_details() print(details) # Get logged metrics metrics = run.get_metrics() print(json.dumps(metrics, indent=2)) # Get output files files = run.get_file_names() print(json.dumps(files, indent=2)) ``` In Jupyter Notebooks, you can use the **RunDetails** widget to get a better visualization of the run details, while the experiment is running or after it has finished. ``` from azureml.widgets import RunDetails RunDetails(run).show() ``` Note that the **RunDetails** widget includes a link to view the run in Azure Machine Learning studio. Click this to open a new browser tab with the run details (you can also just open [Azure Machine Learning studio](https://ml.azure.com) and find the run on the **Experiments** page). When viewing the run in Azure Machine Learning studio, note the following: - The **Details** tab contains the general properties of the experiment run. - The **Metrics** tab enables you to select logged metrics and view them as tables or charts. - The **Images** tab enables you to select and view any images or plots that were logged in the experiment (in this case, the *Label Distribution* plot) - The **Child Runs** tab lists any child runs (in this experiment there are none). - The **Outputs + Logs** tab shows the output or log files generated by the experiment. - The **Snapshot** tab contains all files in the folder where the experiment code was run (in this case, everything in the same folder as this notebook). - The **Explanations** tab is used to show model explanations generated by the experiment (in this case, there are none). 
- The **Fairness** tab is used to visualize predictive performance disparities that help you evaluate the fairness of machine learning models (in this case, there are none). ## Run an Experiment Script In the previous example, you ran an experiment inline in this notebook. A more flexible solution is to create a separate script for the experiment, and store it in a folder along with any other files it needs, and then use Azure ML to run the experiment based on the script in the folder. First, let's create a folder for the experiment files, and copy the data into it: ``` import os, shutil # Create a folder for the experiment files folder_name = 'diabetes-experiment-files' experiment_folder = './' + folder_name os.makedirs(folder_name, exist_ok=True) # Copy the data file into the experiment folder shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv")) ``` Now we'll create a Python script containing the code for our experiment, and save it in the experiment folder. > **Note**: running the following cell just *creates* the script file - it doesn't run it! ``` %%writefile $folder_name/diabetes_experiment.py from azureml.core import Run import pandas as pd import os # Get the experiment run context run = Run.get_context() # load the diabetes dataset data = pd.read_csv('diabetes.csv') # Count the rows and log the result row_count = (len(data)) run.log('observations', row_count) print('Analyzing {} rows of data'.format(row_count)) # Count and log the label counts diabetic_counts = data['Diabetic'].value_counts() print(diabetic_counts) for k, v in diabetic_counts.items(): run.log('Label:' + str(k), v) # Save a sample of the data in the outputs folder (which gets uploaded automatically) os.makedirs('outputs', exist_ok=True) data.sample(100).to_csv("outputs/sample.csv", index=False, header=True) # Complete the run run.complete() ``` This code is a simplified version of the inline code used before. However, note the following: - It uses the `Run.get_context()` method to retrieve the experiment run context when the script is run. - It loads the diabetes data from the folder where the script is located. - It creates a folder named **outputs** and writes the sample file to it - this folder is automatically uploaded to the experiment run Now you're almost ready to run the experiment. There are just a few configuration issues you need to deal with: 1. Create a *Run Configuration* that defines the Python code execution environment for the script - in this case, it will automatically create a Conda environment with some default Python packages installed. 2. Create a *Script Configuration* that identifies the Python script file to be run in the experiment, and the environment in which to run it. The following cell sets up these configuration objects, and then submits the experiment. > **Note**: This will take a little longer to run the first time, as the conda environment must be created. 
``` import os import sys from azureml.core import Experiment, RunConfiguration, ScriptRunConfig from azureml.widgets import RunDetails # create a new RunConfig object experiment_run_config = RunConfiguration() # Create a script config src = ScriptRunConfig(source_directory=experiment_folder, script='diabetes_experiment.py', run_config=experiment_run_config) # submit the experiment experiment = Experiment(workspace = ws, name = 'diabetes-experiment') run = experiment.submit(config=src) RunDetails(run).show() run.wait_for_completion() ``` As before, you can use the widget or the link to the experiment in [Azure Machine Learning studio](https://ml.azure.com) to view the outputs generated by the experiment, and you can also write code to retrieve the metrics and files it generated: ``` # Get logged metrics metrics = run.get_metrics() for key in metrics.keys(): print(key, metrics.get(key)) print('\n') for file in run.get_file_names(): print(file) ``` ## View Experiment Run History Now that you've run experiments multiple times, you can view the history in [Azure Machine Learning studio](https://ml.azure.com) and explore each logged run. Or you can retrieve an experiment by name from the workspace and iterate through its runs using the SDK: ``` from azureml.core import Experiment, Run diabetes_experiment = ws.experiments['diabetes-experiment'] for logged_run in diabetes_experiment.get_runs(): print('Run ID:', logged_run.id) metrics = logged_run.get_metrics() for key in metrics.keys(): print('-', key, metrics.get(key)) ``` ## Use MLflow MLflow is an open source platform for managing machine learning processes. It's commonly (but not exclusively) used in Databricks environments to coordinate experiments and track metrics. In Azure Machine Learning experiments, you can use MLflow to track metrics instead of the native log functionality if you desire. ### Use MLflow with an Inline Experiment To use MLflow to track metrics for an inline experiment, you must set the MLflow *tracking URI* to the workspace where the experiment is being run. This enables you to use **mlflow** tracking methods to log data to the experiment run. ``` from azureml.core import Experiment import pandas as pd import mlflow # Set the MLflow tracking URI to the workspace mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri()) # Create an Azure ML experiment in your workspace experiment = Experiment(workspace=ws, name='diabetes-mlflow-experiment') mlflow.set_experiment(experiment.name) # start the MLflow experiment with mlflow.start_run(): print("Starting experiment:", experiment.name) # Load data data = pd.read_csv('data/diabetes.csv') # Count the rows and log the result row_count = (len(data)) print('observations:', row_count) mlflow.log_metric('observations', row_count) # Get a link to the experiment in Azure ML studio experiment_url = experiment.get_portal_url() print('See details at', experiment_url) ``` After running the code above, you can use the link that is displayed to view the experiment in Azure Machine Learning studio. Then select the latest run of the experiment and view its **Metrics** tab to see the logged metric. ### Use MLflow in an Experiment Script You can also use MLflow to track metrics in an experiment script. Run the following two cells to create a folder and a script for an experiment that uses MLflow.
``` import os, shutil # Create a folder for the experiment files folder_name = 'mlflow-experiment-files' experiment_folder = './' + folder_name os.makedirs(folder_name, exist_ok=True) # Copy the data file into the experiment folder shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv")) %%writefile $folder_name/mlflow_diabetes.py from azureml.core import Run import pandas as pd import mlflow # start the MLflow experiment with mlflow.start_run(): # Load data data = pd.read_csv('diabetes.csv') # Count the rows and log the result row_count = (len(data)) print('observations:', row_count) mlflow.log_metric('observations', row_count) ``` When you use MLflow tracking in an Azure ML experiment script, the MLflow tracking URI is set automatically when you start the experiment run. However, the environment in which the script is to be run must include the required **mlflow** packages. ``` from azureml.core import Experiment, RunConfiguration, ScriptRunConfig from azureml.core.conda_dependencies import CondaDependencies from azureml.widgets import RunDetails # create a new RunConfig object experiment_run_config = RunConfiguration() # Ensure the required packages are installed packages = CondaDependencies.create(pip_packages=['mlflow', 'azureml-mlflow']) experiment_run_config.environment.python.conda_dependencies=packages # Create a script config src = ScriptRunConfig(source_directory=experiment_folder, script='mlflow_diabetes.py', run_config=experiment_run_config) # submit the experiment experiment = Experiment(workspace = ws, name = 'diabetes-mlflow-experiment') run = experiment.submit(config=src) RunDetails(run).show() run.wait_for_completion() ``` As usual, you can get the logged metrics from the experiment run when it's finished. ``` # Get logged metrics metrics = run.get_metrics() for key in metrics.keys(): print(key, metrics.get(key)) ``` Now you've seen how to use the Azure ML SDK to view the resources in your workspace and run experiments. ### Learn More - For more details about the SDK, see the [Azure ML SDK documentation](https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py). - To find out more about running experiments, see [Start, monitor, and cancel training runs in Python](https://docs.microsoft.com/azure/machine-learning/how-to-manage-runs) in the Azure ML documentation. - For details of how to log metrics in a run, see [Monitor Azure ML experiment runs and metrics](https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments). - For more information about integrating Azure ML experiments with MLflow, see [Track model metrics and deploy ML models with MLflow and Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow). ## Clean Up On the **File** menu, click **Close and Halt** to close this notebook. Then close all Jupyter tabs in your browser and **stop** your compute instance to minimize costs.
github_jupyter
# Introduction Implementation of the cTAKES BoW method with relation pairs (f.e. CUI-Relationship-CUI) (added to the BoW cTAKES orig. pairs (Polarity-CUI)), evaluated against the annotations from: > Gehrmann, Sebastian, et al. "Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives." PloS one 13.2 (2018): e0192360. ## Import Packages ``` # imported packages import multiprocessing import collections import itertools import re import os # xml and xmi from lxml import etree # arrays and dataframes import pandas import numpy from pandasql import sqldf # classifier from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.ensemble import GradientBoostingClassifier from sklearn.preprocessing import FunctionTransformer from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.multiclass import OneVsRestClassifier from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.svm import SVC # plotting import matplotlib matplotlib.use('Agg') # server try: get_ipython # jupyter notebook %matplotlib inline except: pass import matplotlib.pyplot as plt # import custom modules import context # set search path to one level up from src import evaluation # method for evaluation of classifiers ``` ## Define variables and parameters ``` # variables and parameters # filenames input_directory = '../data/interim/cTAKES_output' input_filename = '../data/raw/annotations.csv' results_filename = '../reports/ctakes_relationgram_bow_tfidf_results.csv' plot_filename_1 = '../reports/ctakes_relationgram_bow_tfidf_boxplot_1.png' plot_filename_2 = '../reports/ctakes_relationgram_bow_tfidf_boxplot_2.png' # number of splits and repeats for cross validation n_splits = 5 n_repeats = 10 # n_repeats = 1 # for testing # number of workers n_workers=multiprocessing.cpu_count() # n_workers = 1 # for testing # keep the conditions for which results are reported in the publication conditions = [ # 'cohort', 'Obesity', # 'Non.Adherence', # 'Developmental.Delay.Retardation', 'Advanced.Heart.Disease', 'Advanced.Lung.Disease', 'Schizophrenia.and.other.Psychiatric.Disorders', 'Alcohol.Abuse', 'Other.Substance.Abuse', 'Chronic.Pain.Fibromyalgia', 'Chronic.Neurological.Dystrophies', 'Advanced.Cancer', 'Depression', # 'Dementia', # 'Unsure', ] ``` ## Load and prepare data ### Load and parse xmi data ``` %load_ext ipycache %%cache --read 2.6-JS-ctakes-relationgram-bow-tfidf_cache.pkl X def ctakes_xmi_to_df(xmi_path): records = [] tree = etree.parse(xmi_path) root = tree.getroot() mentions = [] for mention in root.iterfind('*[@{http://www.omg.org/XMI}id][@typeID][@polarity]'): if 'ontologyConceptArr' in mention.attrib: for concept in mention.attrib['ontologyConceptArr'].split(" "): d = dict(mention.attrib) d['ontologyConceptArr'] = concept mentions.append(d) else: d = dict(mention.attrib) mentions.append(d) mentions_df = pandas.DataFrame(mentions) concepts = [] for concept in root.iterfind('*[@{http://www.omg.org/XMI}id][@cui][@tui]'): concepts.append(dict(concept.attrib)) concepts_df = pandas.DataFrame(concepts) events = [] for event in root.iterfind('*[@{http://www.omg.org/XMI}id][@properties]'): events.append(dict(event.attrib)) events_df = pandas.DataFrame(events) eventproperties = [] for eventpropertie in root.iterfind('*[@{http://www.omg.org/XMI}id][@docTimeRel]'): eventproperties.append(dict(eventpropertie.attrib)) eventproperties_df = pandas.DataFrame(eventproperties) 
merged_df = mentions_df.add_suffix('_1')\ .merge(right=concepts_df, left_on='ontologyConceptArr_1', right_on='{http://www.omg.org/XMI}id')\ .merge(right=events_df, left_on='event_1', right_on='{http://www.omg.org/XMI}id')\ .merge(right=eventproperties_df, left_on='properties', right_on='{http://www.omg.org/XMI}id') # # unique cui and tui per event IDEA: consider keeping all # merged_df = merged_df.drop_duplicates(subset=['event', 'cui', 'tui']) # merge polarity of the *mention and the cui merged_df = merged_df.dropna(subset=['cui']) # remove any NaN merged_df['polaritycui'] = merged_df['polarity_1'] + merged_df['cui'] # extract relations textrelations = [] for tr in root.iterfind('*[@{http://www.omg.org/XMI}id][@category][@arg1][@arg2]'): textrelations.append(dict(tr.attrib)) textrelations_df = pandas.DataFrame(textrelations) relationarguments = [] for relationargument in root.iterfind('*[@{http://www.omg.org/XMI}id][@argument][@role]'): relationarguments.append(dict(relationargument.attrib)) relationarguments_df = pandas.DataFrame(relationarguments) # transforms tdf = textrelations_df tdf['xmiid'] = tdf['{http://www.omg.org/XMI}id'] rdf = relationarguments_df rdf['xmiid'] = rdf['{http://www.omg.org/XMI}id'] mdf = mentions_df mdf['xmiid'] = mdf['{http://www.omg.org/XMI}id'] cdf = concepts_df cdf['xmiid'] = cdf['{http://www.omg.org/XMI}id'] subquery_1 = """ -- table with: -- (from *Relation): category -- (from RelationArgument): argument (as argument1 and argument2) (Foreign Key *Mentions.xmiid) -- (from *Mention): begin - end (as begin1 - end1 - begin2 - end2) SELECT r.category, m1.begin as begin1, m1.end as end1, m2.begin as begin2, m2.end as end2 FROM tdf r INNER JOIN rdf a1 ON r.arg1 = a1.xmiid INNER JOIN rdf a2 ON r.arg2 = a2.xmiid INNER JOIN mdf m1 ON a1.argument = m1.xmiid INNER JOIN mdf m2 ON a2.argument = m2.xmiid """ subquery_2 = """ -- table with: -- (from *Mentions): begin - end - polarity -- (from Concepts): cui SELECT m.begin, m.end, m.polarity, c.cui FROM mdf m INNER JOIN cdf c ON m.ontologyConceptArr = c.xmiid """ # run subqueries and save in new tables sq1 = sqldf(subquery_1, locals()) sq2 = sqldf(subquery_2, locals()) query = """ -- table with: -- (from Concept): cui1, cui2 -- (from *Mention): polarity1, polarity2 -- (from *Relation): category (what kind of relation) SELECT sq1.category, sq21.cui as cui1, sq22.cui as cui2, sq21.polarity as polarity1, sq22.polarity as polarity2 FROM sq1 sq1 INNER JOIN sq2 sq21 ON sq21.begin >= sq1.begin1 and sq21.end <= sq1.end1 INNER JOIN sq2 sq22 ON sq22.begin >= sq1.begin2 and sq22.end <= sq1.end2 """ res = sqldf(query, locals()) # remove duplicates res = res.drop_duplicates(subset=['cui1', 'cui2', 'category', 'polarity1', 'polarity2']) res['string'] = res['polarity1'] + res['cui1'] + res['category'] + res['polarity2'] + res['cui2'] # return as a string return ' '.join(list(res['string']) + list(merged_df['polaritycui'])) X = [] # key function for sorting the files according to the integer of the filename def key_fn(x): i = x.split(".")[0] if i != "": return int(i) return None for f in sorted(os.listdir(input_directory), key=key_fn): # for each file in the input directory if f.endswith(".xmi"): fpath = os.path.join(input_directory, f) # parse file and append as a dataframe to x_df try: X.append(ctakes_xmi_to_df(fpath)) except Exception as e: print e X.append('NaN') X = numpy.array(X) ``` ### Load annotations and classification data ``` # read and parse csv file data = pandas.read_csv(input_filename) # data = data[0:100] # for testing # X 
= X[0:100] # for testing data.head() # groups: the subject ids # used in order to ensure that # "patients’ notes stay within the set, so that all discharge notes in the # test set are from patients not previously seen by the model." Gehrmann17. groups_df = data.filter(items=['subject.id']) groups = groups_df.as_matrix() # y: the annotated classes y_df = data.filter(items=conditions) # filter the conditions y = y_df.as_matrix() print(X.shape, groups.shape, y.shape) ``` ## Define classifiers ``` # dictionary of classifiers (sklearn estimators) classifiers = collections.OrderedDict() def tokenizer(text): pattern = r'[\s]+' # match any sequence of whitespace characters repl = r' ' # replace with space temp_text = re.sub(pattern, repl, text) return temp_text.lower().split(' ') # lower-case and split on space prediction_models = [ ('logistic_regression', LogisticRegression(random_state=0)), ("random_forest", RandomForestClassifier(random_state=0)), ("naive_bayes", MultinomialNB()), ("svm_linear", SVC(kernel="linear", random_state=0, probability=True)), ("gradient_boosting", GradientBoostingClassifier(random_state=0)), ] # BoW representation_models = [('ctakes_relationgram_bow_tfidf', TfidfVectorizer(tokenizer=tokenizer))] # IDEA: Use Tfidf on normal BoW model aswell? # cross product of representation models and prediction models # save to classifiers as pipelines of rep. model into pred. model for rep_model, pred_model in itertools.product(representation_models, prediction_models): classifiers.update({ # add this classifier to classifiers dictionary '{rep_model}_{pred_model}'.format(rep_model=rep_model[0], pred_model=pred_model[0]): # classifier name Pipeline([rep_model, pred_model]), # concatenate representation model with prediction model in a pipeline }) ``` ## Run and evaluate ``` results = evaluation.run_evaluation(X=X, y=y, groups=groups, conditions=conditions, classifiers=classifiers, n_splits=n_splits, n_repeats=n_repeats, n_workers=n_workers) ``` ## Save and plot results ``` # save results results_df = pandas.DataFrame(results) results_df.to_csv(results_filename) results_df.head(100) ## load results for plotting # import pandas # results = pandas.read_csv('output/results.csv') # plot and save axs = results_df.groupby('name').boxplot(column='AUROC', by='condition', rot=90, figsize=(10,10)) for ax in axs: ax.set_ylim(0,1) plt.savefig(plot_filename_1) # plot and save axs = results_df.groupby('condition').boxplot(column='AUROC', by='name', rot=90, figsize=(10,10)) for ax in axs: ax.set_ylim(0,1) plt.savefig(plot_filename_2) ```
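Besides the box plots, a compact table of mean AUROC per classifier and condition can be easier to scan. This is only a convenience sketch; it assumes `results_df` contains the `name`, `condition` and `AUROC` columns already used for plotting above.

```
# Mean AUROC per classifier (rows) and condition (columns).
auroc_summary = results_df.groupby(['name', 'condition'])['AUROC'].mean().unstack('condition').round(3)
print(auroc_summary)
```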
github_jupyter
___ <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> ___ # Principal Component Analysis Let's discuss PCA! Since this isn't exactly a full machine learning algorithm, but instead an unsupervised learning algorithm, we will just have a lecture on this topic, but no full machine learning project (although we will walk through the cancer set with PCA). ## PCA Review Make sure to watch the video lecture and theory presentation for a full overview of PCA! Remember that PCA is just a transformation of your data and attempts to find out what features explain the most variance in your data. For example: <img src='PCA.png' /> ## Libraries ``` import matplotlib.pyplot as plt import pandas as pd import numpy as np import seaborn as sns %matplotlib inline ``` ## The Data Let's work with the cancer data set again since it had so many features. ``` from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() cancer.keys() print(cancer['DESCR']) df = pd.DataFrame(cancer['data'],columns=cancer['feature_names']) #(['DESCR', 'data', 'feature_names', 'target_names', 'target']) df.head() ``` ## PCA Visualization As we've noticed before it is difficult to visualize high dimensional data, we can use PCA to find the first two principal components, and visualize the data in this new, two-dimensional space, with a single scatter-plot. Before we do this though, we'll need to scale our data so that each feature has a single unit variance. ``` from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(df) scaled_data = scaler.transform(df) ``` PCA with Scikit Learn uses a very similar process to other preprocessing functions that come with SciKit Learn. We instantiate a PCA object, find the principal components using the fit method, then apply the rotation and dimensionality reduction by calling transform(). We can also specify how many components we want to keep when creating the PCA object. ``` from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(scaled_data) ``` Now we can transform this data to its first 2 principal components. ``` x_pca = pca.transform(scaled_data) scaled_data.shape x_pca.shape ``` Great! We've reduced 30 dimensions to just 2! Let's plot these two dimensions out! ``` plt.figure(figsize=(8,6)) plt.scatter(x_pca[:,0],x_pca[:,1],c=cancer['target'],cmap='plasma') plt.xlabel('First principal component') plt.ylabel('Second Principal Component') ``` Clearly by using these two components we can easily separate these two classes. ## Interpreting the components Unfortunately, with this great power of dimensionality reduction, comes the cost of being able to easily understand what these components represent. The components correspond to combinations of the original features, the components themselves are stored as an attribute of the fitted PCA object: ``` pca.components_ ``` In this numpy matrix array, each row represents a principal component, and each column relates back to the original features. we can visualize this relationship with a heatmap: ``` df_comp = pd.DataFrame(pca.components_,columns=cancer['feature_names']) plt.figure(figsize=(12,6)) sns.heatmap(df_comp,cmap='plasma',) ``` This heatmap and the color bar basically represent the correlation between the various feature and the principal component itself. ## Conclusion Hopefully this information is useful to you when dealing with high dimensional data! # Great Job!
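As a small postscript, it is worth checking how much of the total variance those two components actually capture. The fitted PCA object exposes this directly through its `explained_variance_ratio_` attribute:

```
# Variance captured by each of the two principal components, and their total.
print(pca.explained_variance_ratio_)
print('Total variance explained by 2 components: {:.1%}'.format(pca.explained_variance_ratio_.sum()))
```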
github_jupyter
### The **operator** Module ``` import operator dir(operator) ``` #### Arithmetic Operators A variety of arithmetic operators are implemented. ``` operator.add(1, 2) operator.mul(2, 3) operator.pow(2, 3) operator.mod(13, 2) operator.floordiv(13, 2) operator.truediv(3, 2) ``` These would have been very handy in our previous section: ``` from functools import reduce reduce(lambda x, y: x*y, [1, 2, 3, 4]) ``` Instead of defining a lambda, we could simply use **operator.mul**: ``` reduce(operator.mul, [1, 2, 3, 4]) ``` #### Comparison and Boolean Operators Comparison and Boolean operators are also implemented as functions: ``` operator.lt(10, 100) operator.le(10, 10) operator.is_('abc', 'def') ``` We can even get the truthyness of an object: ``` operator.truth([1,2]) operator.truth([]) operator.and_(True, False) operator.or_(True, False) ``` #### Element and Attribute Getters and Setters We generally select an item by index from a sequence by using **[n]**: ``` my_list = [1, 2, 3, 4] my_list[1] ``` We can do the same thing using: ``` operator.getitem(my_list, 1) ``` If the sequence is mutable, we can also set or remove items: ``` my_list = [1, 2, 3, 4] my_list[1] = 100 del my_list[3] print(my_list) my_list = [1, 2, 3, 4] operator.setitem(my_list, 1, 100) operator.delitem(my_list, 3) print(my_list) ``` We can also do the same thing using the **operator** module's **itemgetter** function. The difference is that this returns a callable: ``` f = operator.itemgetter(2) ``` Now, **f(my_list)** will return **my_list[2]** ``` f(my_list) x = 'python' f(x) ``` Furthermore, we can pass more than one index to **itemgetter**: ``` f = operator.itemgetter(2, 3) my_list = [1, 2, 3, 4] f(my_list) x = 'pytyhon' f(x) ``` Similarly, **operator.attrgetter** does the same thing, but with object attributes. ``` class MyClass: def __init__(self): self.a = 10 self.b = 20 self.c = 30 def test(self): print('test method running...') obj = MyClass() obj.a, obj.b, obj.c f = operator.attrgetter('a') f(obj) my_var = 'b' operator.attrgetter(my_var)(obj) my_var = 'c' operator.attrgetter(my_var)(obj) f = operator.attrgetter('a', 'b', 'c') f(obj) ``` Of course, attributes can also be methods. 
In this case, **attrgetter** will return the object's **test** method - a callable that can then be called using **()**: ``` f = operator.attrgetter('test') obj_test_method = f(obj) obj_test_method() ``` Just like lambdas, we do not need to assign them to a variable name in order to use them: ``` operator.attrgetter('a', 'b')(obj) operator.itemgetter(2, 3)('python') ``` Of course, we can achieve the same thing using functions or lambdas: ``` f = lambda x: (x.a, x.b, x.c) f(obj) f = lambda x: (x[2], x[3]) f([1, 2, 3, 4]) f('python') ``` ##### Use Case Example: Sorting Suppose we want to sort a list of complex numbers based on the real part of the numbers: ``` a = 2 + 5j a.real l = [10+1j, 8+2j, 5+3j] sorted(l, key=operator.attrgetter('real')) ``` Or if we want to sort a list of string based on the last character of the strings: ``` l = ['aaz', 'aad', 'aaa', 'aac'] sorted(l, key=operator.itemgetter(-1)) ``` Or maybe we want to sort a list of tuples based on the first item of each tuple: ``` l = [(2, 3, 4), (1, 2, 3), (4, ), (3, 4)] sorted(l, key=operator.itemgetter(0)) ``` #### Slicing ``` l = [1, 2, 3, 4] l[0:2] l[0:2] = ['a', 'b', 'c'] print(l) del l[3:5] print(l) ``` We can do the same thing this way: ``` l = [1, 2, 3, 4] operator.getitem(l, slice(0,2)) operator.setitem(l, slice(0,2), ['a', 'b', 'c']) print(l) operator.delitem(l, slice(3, 5)) print(l) ``` #### Calling another Callable ``` x = 'python' x.upper() operator.methodcaller('upper')('python') ``` Of course, since **upper** is just an attribute of the string object **x**, we could also have used: ``` operator.attrgetter('upper')(x)() ``` If the callable takes in more than one parameter, they can be specified as additional arguments in **methodcaller**: ``` class MyClass: def __init__(self): self.a = 10 self.b = 20 def do_something(self, c): print(self.a, self.b, c) obj = MyClass() obj.do_something(100) operator.methodcaller('do_something', 100)(obj) class MyClass: def __init__(self): self.a = 10 self.b = 20 def do_something(self, *, c): print(self.a, self.b, c) obj.do_something(c=100) operator.methodcaller('do_something', c=100)(obj) ``` More information on the **operator** module can be found here: https://docs.python.org/3/library/operator.html
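As one last example, **methodcaller** drops into a `key=` argument the same way **itemgetter** and **attrgetter** do. A case-insensitive sort, for instance, just calls **lower** on each element:

```
import operator

words = ['banana', 'Apple', 'cherry', 'Date']
sorted(words, key=operator.methodcaller('lower'))
```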
github_jupyter
## This is the basic load and clean stuff ``` # %load ~/dataviz/ExplorePy/clean-divvy-explore.py import pandas as pd import numpy as np import datetime as dt import pandas.api.types as pt import pytz as pytz from astral import LocationInfo from astral.sun import sun from astral.geocoder import add_locations, database, lookup from dateutil import parser as du_pr from pathlib import Path db = database() TZ=pytz.timezone('US/Central') chi_town = lookup('Chicago', db) print(chi_town) rev = "5" input_dir = '/mnt/d/DivvyDatasets' input_divvy_basename = "divvy_trip_history_201909-202108" input_divvy_base = input_dir + "/" + input_divvy_basename input_divvy_raw = input_divvy_base + ".csv" input_divvy_rev = input_dir + "/rev" + rev + "-" + input_divvy_basename + ".csv" input_chitemp = input_dir + "/" + "ChicagoTemperature.csv" # # returns true if the rev file is already present # def rev_file_exists(): path = Path(input_divvy_rev) return path.is_file() def update_dow_to_category(df): # # we need to get the dow properly set # cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] cats_type = pt.CategoricalDtype(categories=cats, ordered=True) df['day_of_week'] = df['day_of_week'].astype(cats_type) return df def update_start_cat_to_category(df): cats = ['AM_EARLY', 'AM_RUSH', 'AM_MID', 'LUNCH', 'PM_EARLY', 'PM_RUSH', 'PM_EVENING', 'PM_LATE'] cats_type = pt.CategoricalDtype(categories=cats, ordered=True) df['start_cat'] = df['start_cat'].astype(cats_type) return df # # loads and returns the rev file as a data frame. It handles # the need to specify some column types # # filename : the filename to load # def load_divvy_dataframe(filename): print("Loading " + filename) # so need to set the type on a couple of columns col_names = pd.read_csv(filename, nrows=0).columns types_dict = { 'ride_id': str, 'start_station_id': str, 'end_station_id': str, 'avg_temperature_celsius': float, 'avg_temperature_fahrenheit': float, 'duration': float, 'start_lat': float, 'start_lng': float, 'end_lat': float, 'end_lng': float, 'avg_rain_intensity_mm/hour': float, 'avg_wind_speed': float, 'max_wind_speed': float, 'total_solar_radiation': int, 'is_dark': bool } types_dict.update({col: str for col in col_names if col not in types_dict}) date_cols=['started_at','ended_at','date'] df = pd.read_csv(filename, dtype=types_dict, parse_dates=date_cols) if 'start_time' in df: print("Converting start_time") df['start_time'] = df['start_time'].apply(lambda x: dt.datetime.strptime(x, "%H:%M:%S")) return df def yrmo(year, month): return "{}-{}".format(year, month) def calc_duration_in_minutes(started_at, ended_at): diff = ended_at - started_at return diff.total_seconds() / 60 # # load the chicago temperature into a data frame # def load_temperature_dataframe(): print("Loading " + input_chitemp) df = pd.read_csv(input_chitemp) print("Converting date") df['date'] = df['date'].apply(lambda x: dt.datetime.strptime(x, "%Y-%m-%d")) return df def add_start_time(started_at): return started_at.time() def add_start_cat(started_at): start_time = started_at.time() time_new_day = dt.time(00,00) time_am_rush_start = dt.time(7,00) time_am_rush_end = dt.time(9,00) time_lunch_start = dt.time(11,30) time_lunch_end = dt.time(13,00) time_pm_rush_start = dt.time(15,30) time_pm_rush_end = dt.time(19,00) time_evening_end = dt.time(23,00) if start_time >= time_new_day and start_time < time_am_rush_start: return 'AM_EARLY' if start_time >= time_am_rush_start and start_time < time_am_rush_end: return 'AM_RUSH' if start_time >= 
time_am_rush_end and start_time < time_lunch_start: return 'AM_MID' if start_time >= time_lunch_start and start_time < time_lunch_end: return 'LUNCH' # slight change on Chi rush from 15:00 to 15:30 if start_time >= time_lunch_end and start_time < time_pm_rush_start: return 'PM_EARLY' if start_time >= time_pm_rush_start and start_time < time_pm_rush_end: return 'PM_RUSH' if start_time >= time_pm_rush_end and start_time < time_evening_end: return 'PM_EVENING' return 'PM_LATE' def add_is_dark(started_at): st = started_at.replace(tzinfo=TZ) chk = sun(chi_town.observer, date=st, tzinfo=chi_town.timezone) return st >= chk['dusk'] or st <= chk['dawn'] # # handles loading and processing the divvy raw data by # adding columns, removing bad data, etc. # def process_raw_divvy(filename): df_divvy = load_divvy_dataframe(filename) print("Creating additional columns") data = pd.Series(df_divvy.apply(lambda x: [ add_start_time(x['started_at']), add_is_dark(x['started_at']), yrmo(x['year'], x['month']), calc_duration_in_minutes(x['started_at'], x['ended_at']), add_start_cat(x['started_at']) ], axis = 1)) new_df = pd.DataFrame(data.tolist(), data.index, columns=['start_time','is_dark','yrmo','duration','start_cat']) df_divvy = df_divvy.merge(new_df, left_index=True, right_index=True) # # # # add a simplistic time element # # # print("Adding start_time") # df_divvy['start_time'] = df_divvy.apply(lambda row: add_start_time(row['started_at']), axis = 1) # print("Adding start_cat") # df_divvy['start_cat'] = df_divvy.apply(lambda row: add_start_cat(row['start_time']), axis = 1) # # # # is it dark # # # print("Adding is_dark") # df_divvy['is_dark'] = df_divvy.apply(lambda row: add_is_dark(row['started_at']), axis = 1) # # # # add a year-month column to the divvy dataframe # # this uses a function with the row; it is not # # the absolute fastest way # # # print("Adding year-month as yrmo") # df_divvy['yrmo'] = df_divvy.apply(lambda row: yrmo(row['year'], row['month']), # axis = 1) # # # # we also want a duration to be calculated # # # print("Adding duration") # df_divvy['duration'] = df_divvy.apply(lambda row: calc_duration_in_minutes(row['started_at'], # row['ended_at']), # axis = 1) # # add the temperature # df_chitemp = load_temperature_dataframe() print("Merging in temperature") df_divvy = pd.merge(df_divvy, df_chitemp, on="date") print(df_divvy.shape) print(df_divvy.head()) # print(df_divvy.loc[df_divvy['date'] == '2020-02-21']) # 2020-02-21 was missing in org. 
temp # print(df_divvy[['ride_id','member_casual','date','duration','yrmo','avg_temperature_fahrenheit','start_time','start_cat']]) # # clean the dataframe to remove invalid durations # which are really only (about) < 1 minute, or > 12 hours # print("Removing invalid durations") df_divvy = df_divvy[(df_divvy.duration >= 1.2) & (df_divvy.duration < 60 * 12)] # print(df_divvy.shape) df_divvy = update_dow_to_category(df_divvy) df_divvy = update_start_cat_to_category(df_divvy) # # drop some bogus columns # print("Dropping columns") df_divvy.drop(df_divvy.columns[[0,-1]], axis=1, inplace=True) return df_divvy # # writes the dataframe to the specified filename # def save_dataframe(df, filename): print("Saving dataframe to " + filename) df_out = df.copy() df_out['date'] = df_out['date'].map(lambda x: dt.datetime.strftime(x, '%Y-%m-%d')) df_out.to_csv(filename, index=False, date_format="%Y-%m-%d %H:%M:%S") # # load the divvy csv into a data frame # if rev_file_exists(): df_divvy = load_divvy_dataframe(input_divvy_rev) df_divvy = update_dow_to_category(df_divvy) df_divvy = update_start_cat_to_category(df_divvy) else: df_divvy = process_raw_divvy(input_divvy_raw) save_dataframe(df_divvy, input_divvy_rev) print(df_divvy) df_divvy.info() # # btw, can just pass the row and let the function figure it out # #def procone(row): # print(row['date']) # return 0 #df_divvy.apply(lambda row: procone(row), axis = 1) ``` ## Look at the average duration by rider type & day of week ### average duration by day of week for rider types ``` type(df_divvy['duration']) df_divvy.info() df_divvy.shape df_rider_by_dow = df_divvy.groupby(['member_casual','day_of_week']).agg(mean_time = ('duration', 'mean')).round(2) df_rider_by_dow df_rider_by_dow.sort_values(by=['member_casual','day_of_week']) ``` ### Now we want to plot ``` %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns ``` #### bar plot of Duration by Rider Type and Day of Week ``` df_rider_by_dow.unstack('member_casual').plot(kind='bar') df_rider_by_dow.reset_index(inplace=True) sns.set(rc={"figure.figsize":(16,8)}) sns.barplot(data=df_rider_by_dow, x="day_of_week", y="mean_time", hue="member_casual") ``` ## Look at the number of riders by type and day of week ### grouping ``` df_rider_by_dow = df_divvy.groupby(['member_casual','day_of_week']).agg(num_rides = ('ID', 'count')) df_rider_by_dow #df_rider_by_dow['day_of_week'] = df_rider_by_dow['day_of_week'].astype(cats_type) df_rider_by_dow.sort_values(by=['member_casual','day_of_week']) ``` ### plot of Number of Rids by Rider Type and Day of Week ``` df_rider_by_dow.unstack('member_casual').plot(kind='bar') df_rider_by_dow.reset_index(inplace=True) sns.set(rc={"figure.figsize":(16,8)}) sns.barplot(data=df_rider_by_dow, x="day_of_week", y="num_rides", hue="member_casual") df_member_by_yr_dow = df_divvy[df_divvy['member_casual'] == 'member'].groupby(['year','day_of_week']).agg(mean_time = ('duration', 'mean')).round(2) df_casual_by_yr_dow = df_divvy[df_divvy['member_casual'] == 'casual'].groupby(['year','day_of_week']).agg(mean_time = ('duration', 'mean')).round(2) df_member_by_yr_dow.unstack('year').plot(kind='bar', title='Member Rider mean time by year and day of week') df_casual_by_yr_dow.unstack('year').plot(kind='bar', title='Casual Rider mean time by year and day of week') df_rider_by_yrmo = df_divvy.groupby(['member_casual','yrmo']).agg(mean_time = ('duration', 'mean')).round(2) df_rider_by_yrmo.unstack('member_casual').plot(kind='bar', title='Rider mean time by yrmo') df_rider_count_by_yrmo 
= df_divvy.groupby(['member_casual','yrmo']).agg(count = ('ID', 'count')) df_rider_count_by_yrmo.unstack('member_casual').plot(kind='bar', title='Rider count by yrmo') df_rider_count_by_yrmo.unstack('member_casual').plot(kind='line', title='Rider count by yrmo') ``` ## Let's look at starting in the dark by rider ``` df_rider_count_by_is_dark = df_divvy.groupby(['member_casual','is_dark']).agg(count = ('ID', 'count')) df_rider_count_by_is_dark.unstack('member_casual').plot(kind='bar', title='Rider count by starting in the dark') df_rider_by_time = df_divvy.groupby(['member_casual','start_cat']).agg(count = ('ID', 'count')) df_rider_by_time.unstack('start_cat').plot(kind='bar') weekdays = ['Monday','Tuesday','Wednesday','Thursday','Friday'] weekends = ['Saturday','Sunday'] weekday_riders = df_divvy[df_divvy.day_of_week.isin(weekdays)] weekend_riders = df_divvy[df_divvy.day_of_week.isin(weekends)] weekday_riders.shape weekend_riders.shape df_rider_by_time_weekday = weekday_riders.groupby(['member_casual','start_cat']).agg(count = ('ID', 'count')) df_rider_by_time_weekday.unstack('start_cat').plot(kind='bar', title="Weekday times") df_rider_by_time_weekday.to_csv(date_format="%Y-%m-%d %H:%M:%S") df_rider_by_time_weekend = weekend_riders.groupby(['member_casual','start_cat']).agg(count = ('ID', 'count')) df_rider_by_time_weekend.unstack('start_cat').plot(kind='bar', title="Weekend times") df_rider_by_time_weekend.to_csv() ``` ## Starting stations -- member ``` df_starting_member = df_divvy[df_divvy['member_casual']=='member'].groupby(['start_station_name']).agg(count=('ID','count')) df_starting_member = df_starting_member.sort_values(by='count', ascending=False) df_starting_member_top = df_starting_member.iloc[0:19] df_starting_member_top.plot(kind='bar', title="Starting Stations - Member") df_starting_member_weekday = weekday_riders[weekday_riders.member_casual=='member'].groupby(['start_station_name']).agg(count=('ID','count')) df_starting_member_weekday = df_starting_member_weekday.sort_values(by='count', ascending=False) df_starting_member_weekday_top = df_starting_member_weekday.iloc[0:19] df_starting_member_weekday_top.plot(kind='bar', title="Starting Stations Weekday - Member") from io import StringIO output = StringIO() df_starting_member_weekday_top.to_csv(output) print(output.getvalue()) df_starting_member_weekend = weekend_riders[weekend_riders.member_casual=='member'].groupby(['start_station_name']).agg(count=('ID','count')) df_starting_member_weekend = df_starting_member_weekend.sort_values(by='count', ascending=False) df_starting_member_weekend_top = df_starting_member_weekend.iloc[0:19] df_starting_member_weekend_top.plot(kind='bar', title="Starting Stations Weekend - Member") output = StringIO() df_starting_member_weekend_top.to_csv(output) print(output.getvalue()) ``` ## Starting Stations - casual ``` df_starting_casual = df_divvy[df_divvy['member_casual']=='casual'].groupby(['start_station_name']).agg(count=('ID','count')) df_starting_casual = df_starting_casual.sort_values(by='count', ascending=False) df_starting_casual_top = df_starting_casual.iloc[0:19] df_starting_casual_top.head() df_starting_casual_weekday = weekday_riders[weekday_riders.member_casual=='casual'].groupby(['start_station_name']).agg(count=('ID','count')) df_starting_casual_weekday = df_starting_casual_weekday.sort_values(by='count', ascending=False) df_starting_casual_weekday_top = df_starting_casual_weekday.iloc[0:19] output = StringIO() df_starting_casual_weekday_top.to_csv(output) 
print(output.getvalue()) df_starting_casual_weekday_top.shape df_starting_casual_weekend = weekend_riders[weekend_riders.member_casual=='casual'].groupby(['start_station_name']).agg(count=('ID','count')) df_starting_casual_weekend = df_starting_casual_weekend.sort_values(by='count', ascending=False) df_starting_casual_weekend_top = df_starting_casual_weekend.iloc[0:19] output = StringIO() df_starting_casual_weekend_top.to_csv(output) print(output.getvalue()) ```
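The member/casual and weekday/weekend station breakdowns above repeat the same groupby → sort → top-N → plot pattern six times. As a minimal sketch (assuming the same `start_station_name`, `ID`, and `member_casual` columns used above), the pattern can be wrapped in one helper; note that `.iloc[0:19]` keeps 19 rows, so pass `n=19` or `n=20` depending on which was intended.

```
# Minimal helper for the repeated "top starting stations" pattern above.
# Assumes the columns used in this notebook: start_station_name, ID, member_casual.
def top_start_stations(df, rider_type, title, n=20):
    counts = (df[df.member_casual == rider_type]
              .groupby('start_station_name')
              .agg(count=('ID', 'count'))
              .sort_values(by='count', ascending=False)
              .head(n))
    counts.plot(kind='bar', title=title)
    return counts

# Example usage, mirroring the cells above:
# top_start_stations(df_divvy, 'member', 'Starting Stations - Member')
# top_start_stations(weekday_riders, 'casual', 'Starting Stations Weekday - Casual')
# top_start_stations(weekend_riders, 'casual', 'Starting Stations Weekend - Casual')
```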
## osumapper: create osu! map using Tensorflow and Colab ### -- For osu!mania game mode -- For mappers who don't know how this colaboratory thing works: - Press Ctrl+Enter in code blocks to run them one by one - It will ask you to upload .osu file and audio.mp3 after the third block of code - .osu file needs to have correct timing (you can use [statementreply](https://osu.ppy.sh/users/126198)'s TimingAnlyz tool) - After uploading them, wait for a few minutes until download pops Github: https://github.com/kotritrona/osumapper ### Step 1: Installation First of all, check the Notebook Settings under Edit tab.<br> Activate GPU to make the training faster. Then, clone the git repository and install dependencies. ``` %cd /content/ !git clone https://github.com/kotritrona/osumapper.git %cd osumapper/v7.0 !apt install -y ffmpeg !apt install -y nodejs !cp requirements_colab.txt requirements.txt !cp package_colab.json package.json !pip install -r requirements.txt !npm install ``` ### Step 2: Choose a pre-trained model Set the select_model variable to one of: - "default": default model (choose only after training it) - "lowkey": model trained with 4-key and 5-key maps (☆2.5-5.5) - "highkey": model trained with 6-key to 9-key maps (☆2.5-5.5) ``` from mania_setup_colab import * select_model = "highkey" model_params = load_pretrained_model(select_model); ``` ### Step 3: Upload map and music file<br> Map file = .osu file with correct timing (**Important:** Set to mania mode and the wished key count!)<br> Music file = the mp3 file in the osu folder ``` from google.colab import files print("Please upload the map file:") mapfile_upload = files.upload() for fn in mapfile_upload.keys(): uploaded_osu_name = fn print('Uploaded map file: "{name}" {length} bytes'.format(name=fn, length=len(mapfile_upload[fn]))) print("Please upload the music file:") music_upload = files.upload() for fn in music_upload.keys(): print('Uploaded music file: "{name}" {length} bytes'.format(name=fn, length=len(music_upload[fn]))) ``` ### Step 4: Read the map and convert to python readable format ``` from act_newmap_prep import * step4_read_new_map(uploaded_osu_name); ``` ### Step 5: Use model to calculate map rhythm Parameters: "note_density" determines how many notes will be placed on the timeline, ranges from 0 to 1.<br> "hold_favor" determines how the model favors holds against circles, ranges from -1 to 1.<br> "divisor_favor" determines how the model favors notes to be on X divisors starting from a beat (white, blue, red, blue), ranges from -1 to 1 each.<br> "hold_max_ticks" determines the max amount of time a hold can hold off, ranges from 1 to +∞.<br> "hold_min_return" determines the final granularity of the pattern dataset, ranges from 1 to +∞.<br> "rotate_mode" determines how the patterns from the dataset gets rotated. modes (0,1,2,3,4) - 0 = no rotation - 1 = random - 2 = mirror - 3 = circulate - 4 = circulate + mirror ``` from mania_act_rhythm_calc import * model = step5_load_model(model_file=model_params["rhythm_model"]); npz = step5_load_npz(); params = model_params["rhythm_param"] # Or set the parameters here... # params = step5_set_params(note_density=0.6, hold_favor=0.2, divisor_favor=[0] * divisor, hold_max_ticks=8, hold_min_return=1, rotate_mode=4); predictions = step5_predict_notes(model, npz, params); notes_each_key = step5_build_pattern(predictions, params, pattern_dataset=model_params["pattern_dataset"]); ``` Do a little modding to the map. 
Parameters: - key_fix: remove continuous notes on single key modes (0,1,2,3) 0=inactive 1=remove late note 2=remove early note 3=divert ``` modding_params = model_params["modding"] # modding_params = { # "key_fix" : 3 # } notes_each_key = mania_modding(notes_each_key, modding_params); notes, key_count = merge_objects_each_key(notes_each_key) ``` Finally, save the data into an .osu file! ``` from google.colab import files from mania_act_final import * saved_osu_name = step8_save_osu_mania_file(notes, key_count); files.download(saved_osu_name) # clean up if you want to make another map! # colab_clean_up(uploaded_osu_name) ``` That's it! Now you can try out the AI-created map in osu!. For bug reports and feedbacks either report it on github or use discord: <br> [https://discord.com/invite/npmSy7K](https://discord.com/invite/npmSy7K) <img src="https://i.imgur.com/Ko2wogO.jpg" />
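If you want to regenerate a map for a new upload without stepping through the cells one by one, the calls above can be chained in a single cell. This is only a sketch that reuses the functions already imported in this notebook, in the same order as the steps above; it assumes Steps 1–3 (installation, model selection, uploads) have already been run.

```
# Sketch: run steps 4-8 in one go, reusing only functions already imported above.
def generate_map(osu_name, model_params):
    step4_read_new_map(osu_name)
    model = step5_load_model(model_file=model_params["rhythm_model"])
    npz = step5_load_npz()
    params = model_params["rhythm_param"]
    predictions = step5_predict_notes(model, npz, params)
    notes_each_key = step5_build_pattern(predictions, params,
                                         pattern_dataset=model_params["pattern_dataset"])
    notes_each_key = mania_modding(notes_each_key, model_params["modding"])
    notes, key_count = merge_objects_each_key(notes_each_key)
    return step8_save_osu_mania_file(notes, key_count)

# saved_osu_name = generate_map(uploaded_osu_name, model_params)
# files.download(saved_osu_name)
```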
``` #setup import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt import plotly import seaborn as sns import plotly.express as px import plotly.graph_objects as go import warnings warnings.filterwarnings('ignore') %matplotlib inline print("Setup Complete") import pandas as pd import io from google.colab import files uploaded = files.upload() #loading the data country_vaccinations = pd.read_csv(io.BytesIO(uploaded['country_vaccinations.csv'])) print(country_vaccinations) # Print the first 5 rows of the data country_vaccinations.head() #data preprocessing and cleaning country_vaccinations.info() country_vaccinations.columns country_vaccinations.describe() #Detect missing values country_vaccinations.isnull().sum() #Data spiltting country_vaccinations.fillna(value=0, inplace=True) date = country_vaccinations.date.str.split('-', expand=True) date country_vaccinations['year'] = date[0] country_vaccinations['month'] = date[1] country_vaccinations['day'] = date[2] country_vaccinations.year = pd.to_numeric(country_vaccinations.year) country_vaccinations.month = pd.to_numeric(country_vaccinations.month) country_vaccinations.day = pd.to_numeric(country_vaccinations.day) country_vaccinations.date = pd.to_datetime(country_vaccinations.date) country_vaccinations.head() country_vaccinations.info() #visualization import seaborn as sns import matplotlib import matplotlib.pyplot as plt %matplotlib inline sns.set_style('darkgrid') matplotlib.rcParams['font.size'] = 16 matplotlib.rcParams['figure.figsize'] = (10, 6) #explore mean, min, max country_vaccinations.mean() country_vaccinations.min() country_vaccinations.max() #explore country column country_vaccinations.country.value_counts() country_vaccinations.country country_vaccinations.people_fully_vaccinated.max() country_vaccinations.date.min() country_vaccinations.date.max() #visualization plt.figure(figsize=(18,10)) sns.lineplot(x=country_vaccinations.date, y=country_vaccinations.daily_vaccinations) plt.title('The Number of daily vaccinations dynamic') plt.show() #explore the vaccination rate countries = country_vaccinations.groupby('country')['total_vaccinations'].max().sort_values(ascending= False)[:5].index top_countries = pd.DataFrame(columns= country_vaccinations.columns) for country in countries: top_countries = top_countries.append(country_vaccinations.loc[country_vaccinations['country'] == country]) plt.figure(figsize=(20,8)) sns.lineplot(top_countries['date'], top_countries['daily_vaccinations_per_million'], hue= top_countries['country'], ci= False) plt.title('Vaccination procedure go on rapidly') fully_vaccinated = country_vaccinations.groupby("country")["people_fully_vaccinated"].max().sort_values(ascending= False).head(25) fully_vaccinated.reset_index() plt.figure(figsize=(15,12)) ax = sns.barplot(x=fully_vaccinated, y=fully_vaccinated.index) plt.xlabel("Fully Vaccinated") plt.ylabel("Country"); plt.title('Which country has most number of fully vaccinated people?'); for patch in ax.patches: width = patch.get_width() height = patch.get_height() x = patch.get_x() y = patch.get_y() plt.text(width + x, height + y, '{:.1f} '.format(width)) daily_vaccinations_per_million = country_vaccinations.groupby("country")["daily_vaccinations_per_million"].max().sort_values(ascending= False).head(15) daily_vaccinations_per_million.reset_index() plt.figure(figsize=(12,8)) ax = sns.barplot(x=daily_vaccinations_per_million, y=daily_vaccinations_per_million.index ) 
plt.xlabel("daily vaccinations per million") plt.ylabel("Country") plt.title("Daily COVID-19 vaccine doses administered per million people"); for patch in ax.patches: width = patch.get_width() height = patch.get_height() x = patch.get_x() y = patch.get_y() plt.text(width + x, height + y, '{:.1f} '.format(width)) #number of people daily vaccinated in India india_df = country_vaccinations[country_vaccinations['country'] == 'India'] india_df india_df.info() india_df.daily_vaccinations_raw.sum() plt.figure(figsize=(19,9)) sns.lineplot(x=india_df.date, y=india_df.daily_vaccinations_raw) plt.xlabel("Date") plt.ylabel("Daily_Vaccination") plt.title('How many people daily vaccinated in India?') #people fully vaccinated in India fully_vaccinated_india = india_df.people_fully_vaccinated.max()/1000000 print("Total fully vaccinated people in India: {0:.2f}M".format(fully_vaccinated_india)) #country which fully vaccinated most of the people population_country=country_vaccinations.groupby('country')['total_vaccinations_per_hundred'].max().sort_values(ascending=False).head(15) population_country.reset_index() plt.figure(figsize= (15, 8)) ax = sns.barplot(x=population_country, y=population_country.index) plt.title('Total Vaccinations / Population') plt.xlabel('Total Vaccinations') plt.ylabel('Country') for patch in ax.patches: width = patch.get_width() height = patch.get_height() x = patch.get_x() y = patch.get_y() plt.text(width + x, height + y, '{:1f} %'.format(width)) ```
``` from pandas.io.json import json_normalize from pymongo import MongoClient from sklearn import linear_model from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error import numpy as np import pprint course_cluster_uri = "mongodb://agg-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin" course_client = MongoClient(course_cluster_uri) titanic = course_client['coursera-agg']['titanic'] unique_gender_stage = { "$group": { "_id": "$gender", "count": {"$sum": 1} } } possible_gender_values = titanic.aggregate([ { "$match": { "age": {"$type": "number"}, "point_of_embarkation": {"$ne": ""} } }, unique_gender_stage ]) pprint.pprint(list(possible_gender_values)) unique_point_of_embarkation_stage = { "$group": { "_id": "$point_of_embarkation", "count": {"$sum": 1} } } possible_point_of_embarkation_values = titanic.aggregate([ { "$match": { "age": {"$type": "number"}, "point_of_embarkation": {"$ne": ""} } }, unique_point_of_embarkation_stage ]) pprint.pprint(list(possible_point_of_embarkation_values)) # convert "gender" and "point_of_embarkation" to integer, just like one-hot encoding gender_and_point_of_embarkation_conversion_stage = { "$project": { "passenger_id": 1, "survived": 1, "class": 1, "name": 1, "age": 1, "siblings_spouse": 1, "parents_children": 1, "ticket_number": 1, "fare_paid": 1, "cabin": 1, "gender": { "$switch": { "branches": [ {"case": {"$eq": ["$gender", "female"]}, "then": 0}, {"case": {"$eq": ["$gender", "male"]}, "then": 1} ], "default": "?" } }, "point_of_embarkation": { "$switch": { "branches": [ {"case": {"$eq": ["$point_of_embarkation", "Q"]}, "then": 0}, {"case": {"$eq": ["$point_of_embarkation", "C"]}, "then": 1}, {"case": {"$eq": ["$point_of_embarkation", "S"]}, "then": 2} ], "default": "?" } } } } cursor = titanic.aggregate([ { "$match": { "age": {"$type": "number"}, "point_of_embarkation": {"$ne": ""} } }, gender_and_point_of_embarkation_conversion_stage, { "$project": { "_id": 0, "ticket_number": 0, "name": 0, "passenger_id": 0, "cabin": 0 } } ]) # Exhaust our cursor into a list titanic_data = list(cursor) titanic_data[:2] # pandas.io.json.json_normalize() will convert a list of json data into a pandas data frame df = json_normalize(titanic_data) df.head() df_x = df.drop(['survived'], axis=1) df_x.head() df_y = df['survived'] # careful, this is a pitfall! df_y.shape # the dimension is not correct! ``` __Pitfall__: if you get a dimension like `(134,)`, be careful! For linear regression and some models, this works just fine, but for some other models such as CNN/RNN, this dimension will result in sth unexpected and very hard to debug. As a good habit, you should always check your one-dimensional array and make sure that the 2nd shape parameter is not missing. ``` df_y.head() df_y = df.filter(items=['survived']) # to get the right shape, use filter() df_y.shape df_y.head() reg = linear_model.LinearRegression() x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size=0.2, random_state=0) reg.fit(x_train, y_train) reg.predict(x_test) mean_squared_error(y_test, reg.predict(x_test)) # age: 25, # class: 1, # fare_paid: 45, # gender: 1 ('male') # parents_children: 0, # point_of_embarkation: 1 ('C') # siblings_spouse: 1 fake_passenger = [[25, 1, 45, 1, 0, 1, 1]] reg.predict(fake_passenger) ```
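The shape pitfall flagged above is easy to reproduce outside this dataset. Below is a tiny standalone example (with a hypothetical toy frame) showing why single brackets give a 1-D Series while `filter()` or double brackets keep a 2-D DataFrame. As a side note, since `survived` is a binary target, `linear_model.LogisticRegression` would also be a natural choice alongside the linear regression used here.

```
# Standalone illustration of the shape pitfall discussed above (toy data).
import pandas as pd

demo = pd.DataFrame({"survived": [0, 1, 1], "age": [22, 38, 26]})

print(demo["survived"].shape)                  # (3,)  -> 1-D Series, the risky shape
print(demo.filter(items=["survived"]).shape)   # (3, 1) -> 2-D, as used above
print(demo[["survived"]].shape)                # (3, 1) -> equivalent double-bracket form
```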
# Evaluate a privacy policy Today, virtually every organization with which you interact will collect or use some data about you. Most typically, the collection and use of these data will be disclosed according to an organization's privacy policy. We encounter these privacy policies all the time, when we create an account on a website, open a new credit card, or even sign up for a grocery store loyalty program. Yet despite (or perhaps because of) their ubiquity, most people have never read a privacy policy from start to finish. Moreover, even if we took the time to read privacy policies, many of us would struggle to fully understand them due to their frequent use of complex, legalistic, and opaque language. These considerations raise many potential ethical questions regarding whether organizations are sufficiently transparent about the increasingly vast sums of data they collect about their users, customers, employees, and other stakeholders. The purpose of this notebook is to help you gain a better understanding of the landscape of contemporary privacy policies, using a data-driven approach. We'll leverage a [new dataset](https://github.com/ansgarw/privacy) that provides the full text of privacy policies for hundreds of publicly-traded companies, which we'll analyze using some techniques from natural language processing. By the time you make your way through this notebook, you should have a better understanding of the diverse form and content of modern privacy policies, their linguistic characteristics, and a few neat tricks for analyzing large textual data with Python. Without further ado, let's get started! # Roadmap * Preliminaries (packages + data wrangling) * Topic models * Keywords in context * Named entities * Readability * Embeddings * Exercises # Preliminaries Let's start out by loading some packages. We'll be using pandas to help with data wrangling and holding the data in an easy-to-work-with data frame format. The json package is part of the Python Standard Library and will help us with reading the raw data. Matplotlib is for plotting; umap is for clustering policies and is not completely necessary. Finally, we'll use several natural language processing packages, spacy, textacy, and gensim, for the actual text analysis. ``` # run the following commands to install the needed packages """ pip install pandas pip install spacy python -m spacy download en_core_web_lg pip install textacy pip install gensim pip install umap pip install matplotlib """ # load some packages import pandas as pd import json import spacy import textacy import gensim import matplotlib.pyplot as plt import umap import umap.plot from bokeh.plotting import show, output_notebook import tqdm tqdm.tqdm.pandas() # for umap warnings from matplotlib.axes._axes import _log as matplotlib_axes_logger matplotlib_axes_logger.setLevel("ERROR") # load spacy nlp model nlp = spacy.load("en_core_web_lg", disable=["parser"]) nlp.max_length = 2000000 ``` Now, let's go ahead and load the data. ``` # load the data with open("data/policies.json") as f: policies_df = pd.DataFrame({k:" ".join(v) for k,v in json.load(f).items()}.items(), columns=["url","policy_text"]) # check out the results policies_df.head() ``` Looks pretty reasonable. We have one column for the URL and one for the full text of the privacy policy. Note that the original data come in a json format, and there, each URL is associated with a set of paragraphs that constitute each privacy policy.
In the code above, when we load the data, we concatenate these paragraphs to a single text string, which will be easier for us to work with in what follows. Our next step will be to process the documents with spacy. We'll add a column to our data frame with the processed documents (that way we still have the raw text handy). This might take a minute. If it takes too long on your machine, you can just look at a random sample of policies. Just uncomment out the code below. ``` #policies_df = policies_df.sample(frac=0.20) # set frac to some fraction that will run in a reasonable time on your machine policies_df["policy_text_processed"] = policies_df.policy_text.progress_apply(nlp) ``` With that simple line of code, spacy has done a bunch of hard work for us, including things like tokenization, part-of-speech tagging, entity parsing, and other stuff that go well beyond our needs today. Let's take a quick look. ``` policies_df.head() ``` Okay, at this point, we've loaded all the packages we need, and we've done some of the basic wrangling necessary to get the data into shape. We'll need to do a little more data wrangling to prepare for a few of the analyses in store below, but we've already done enough to let us get started. So without further ado, let's take our first peek at the data. # Topic models We'll start out by trying to get a better sense for __what__ is discussed in corporate privacy policies. To do so, we'll make use of an approach in natural language processing known as topic models. Given our focus, we're not going to go into any of the methodological details of how these models work, but in essence, what they're going to do is search for a set of latent topics in our corpus of documents (here, privacy policies). You can think of topics as clusters of related words on a particular subject (e.g., if we saw the words "homework", "teacher", "student", "lesson" we might infer that the topic was school); documents can contain discussions of multiple topics. To start out, we'll do some more processing on the privacy policies to make them more useable for our topic modeling library (called gensim). ``` # define a processing function process_gensim = lambda tokens: [token.lemma_.lower() for token in tokens if not(token.is_punct or token.is_stop or token.is_space or token.is_digit)] # apply the function policies_df["policy_text_gensim"] = policies_df.policy_text_processed.apply(process_gensim) # create a gensim dictionary gensim_dict = gensim.corpora.dictionary.Dictionary(policies_df["policy_text_gensim"]) # create a gensim corpus gensim_corpus = [gensim_dict.doc2bow(policy_text) for policy_text in policies_df["policy_text_gensim"]] # fit the topic model lda_model = gensim.models.LdaModel(gensim_corpus, id2word=gensim_dict, num_topics=10) # show the results lda_model.show_topics(num_topics=-1, num_words=8) ``` As a bonus, we can also check the coherence, essentially a model fit (generally, these measures look at similarity among high scoring words in topics). If you're so inclined, you can re-run the topic model above with different hyperparameters to see if you can get a better fit; I didn't spend a whole lot of time tuning. ``` # get coherence coherence_model_lda = gensim.models.CoherenceModel(model=lda_model, texts=policies_df["policy_text_gensim"], dictionary=gensim_dict, coherence="c_v") coherence_model_lda.get_coherence() ``` Take a look at the topics identified by the models above. Can you assign human-interpretable labels to them? 
What can you learn about the different topics of discussion in privacy policies? # Key words in context Topic models are nice, but they're a bit abstract. They give us an overview about interesting clusters of words, but they don't tell us much about how particular words or used or the details of the topics. For that, we can actually learn a lot just by picking out particular words of interest and pulling out their context from the document, known as a "keyword in context" approach. As an illustration, the code below pulls out uses of the word "third party" in the policies of 20 random firms. There's no random seed set, so if you run the code again you'll get a different set of result. In the comment on the first line, I've given you a few additional words you may want to check. ``` KEYWORD = "right" # "third party" # privacy, right, duty, selling, disclose, trust, inform NUM_FIRMS = 20 with pd.option_context("display.max_colwidth", 100, "display.min_rows", NUM_FIRMS, "display.max_rows", NUM_FIRMS): display( pd.DataFrame(policies_df.sample(n=NUM_FIRMS).apply(lambda row: list(textacy.text_utils.KWIC(row["policy_text"], keyword=KEYWORD, window_width=35, print_only=False)), axis=1).explode()).head(NUM_FIRMS) ) ``` Run the code for some different words, not just the ones in my list, but also those that interest you. Can you learn anything about corporate mindsets on privacy? What kind of rights are discussed? # Named entities Another way we can gain some insight into the content of privacy policies is by seeing who exactly they discuss. Once again, spacy gives us an easy (if sometimes rough) way to do this. Specifically, when we process a document using spacy, it will automatically extract several different categories of named entities (e.g., person, organization, place, you can find the full list [here](https://spacy.io/api/annotation)). In the code, we'll pull out all the organization and person entities. ``` # extract named entities from the privacy policies pull_entities = lambda policy_text: list(set([entity.text.lower() for entity in policy_text.ents if entity.label_ in ("ORG", "PERSON")])) policies_df["named_entities"] = policies_df.policy_text_processed.apply(pull_entities) ``` Let's take a quick peek at our data frame and see what the results look like. ``` # look at the entities with pd.option_context("display.max_colwidth", 100, "display.min_rows", 50, "display.max_rows", 50): display(policies_df[["url","named_entities"]].head(50)) ``` Now let's add a bit more structure. We'll run a little code to help us identify the most frequently discussed organizations and people in the corporate privacy policies. ``` # pull the most frequent entities entities = policies_df["named_entities"].explode("named_entities") NUM_WANTED = 50 with pd.option_context("display.min_rows", 50, "display.max_rows", 50): display(entities.groupby(entities).size().sort_values(ascending=False).head(50)) ``` What do you make of the most frequent entities? Are you surprised? Do they fit with what you expected? Can we make any inferences about the kind of data sharing companies might be enaging in by looking at these entities? # Readability Next, we'll evaluate the privacy policies according to their readability. There are many different measures of readability, but the basic idea is to evaluate a text according to various metrics (e.g., words per sentence, number of syllables per word) that correlate with, well, how easy it is to read. 
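To make the words-per-sentence and syllables-per-word idea concrete, here is a small sketch of the standard Flesch-Kincaid grade-level formula with a very crude vowel-group syllable count; the textacy implementation used below is more careful, so treat this only as an illustration.

```
# Illustration only: Flesch-Kincaid grade level from raw counts, with a crude
# vowel-group syllable estimate. textacy (used below) handles this properly.
import re

def rough_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_sent, n_words = max(1, len(sentences)), max(1, len(words))
    syllables = sum(rough_syllables(w) for w in words)
    return 0.39 * n_words / n_sent + 11.8 * syllables / n_words - 15.59

flesch_kincaid_grade("We collect data. We may share it with partners for business purposes.")
```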
The textacy package makes it easy to quickly evaluate a bunch of different metrics of readability. Let's compute them and then do some exploration. ``` # compute a bunch of text statistics (including readability) policies_df["TextStats"] = policies_df.policy_text_processed.apply(textacy.text_stats.TextStats) ``` You can now access the various statistics for individual documents as follows (e.g., for the document at index 0). ``` policies_df.iloc[0]["TextStats"].flesch_kincaid_grade_level ``` This tells us that the Flesch-Kinkaid grade level for the policy is just under 12th grade. We're probably not terribly interested in the readability of any given policy. We can do a little wrangling with pandas to extract various metrics for all policies and add them to the data frame. Below, I'll pull out the Flesch-Kincaid grade level and the Gunning-Fog index (both are grade-level measures). ``` # pull out a few readability metrics policies_df["flesch_kincaid_grade_level"] = policies_df.TextStats.apply(lambda ts: ts.flesch_kincaid_grade_level) policies_df["gunning_fog_index"] = policies_df.TextStats.apply(lambda ts: ts.gunning_fog_index) # let's also clean up some extreme values policies_df.loc[(policies_df.flesch_kincaid_grade_level < 0) | (policies_df.flesch_kincaid_grade_level > 20), "flesch_kincaid_grade_level"] = None policies_df.loc[(policies_df.gunning_fog_index < 0) | (policies_df.gunning_fog_index > 20), "gunning_fog_index"] = None ``` I would encourage you to adapt the code above to pull out some other readability-related features that seem interesting. You can find the full list available in our `TextStats` object [here](https://textacy.readthedocs.io/en/stable/api_reference/misc.html), in the textacy documentation. Let's plot the values we just extracted. ``` # plot with matplotlib fig, axes = plt.subplots(1, 2) policies_df["flesch_kincaid_grade_level"].hist(ax=axes[0]) policies_df["gunning_fog_index"].hist(ax=axes[1]) plt.tight_layout() ``` These results are pretty striking, especially when you consider them alongside statistics on the literacy rate in the United States. According to [surveys](https://www.oecd.org/skills/piaac/Country%20note%20-%20United%20States.pdf) by the OECD, about half of adults in the United States can read at an 8th grade level or lower. # Embeddings Yet another way that we can gain some intuition on privacy policies is by seeing how similar or different particular policies are from one another. For example, we might not be all that surprised if we saw that Google's privacy policy was quite similar to Facebook's. We might raise an eyebrow if we saw that Nike and Facebook also had very similar privacy policies. What kind of data are they collecting on us when we buy our sneakers? One way we can compare the similarity among documents (here, privacy policies) is by embedding them in some high dimensional vector space, and the using linear algebra to find the distance between vectors. Classically, we would do this by representing documents as vectors of words, where entries represent word frequencies, and perhaps weighting those frequencies (e.g., using TF-IDF). Here, we'll use a slightly more sophisticated approach. When we process the privacy policies using spacy, we get a vector representation of each document, which is based on the word embeddings for its constituent terms. 
Again, given the focus of this class, we're not going to go into the methodological details of word embeddings, but you can think of them as a vectorization that aims to capture semantic relationships. Below, we'll pull the document embeddings from spacy. We'll then do some dimension reduction using a cool algorithm from topological data analysis known as [Uniform Manifold Approximation and Projection](https://arxiv.org/abs/1802.03426) (UMAP), and visualize the results using an interactive plot. ``` # pull the document embeddings from spacy and format for clustering embeddings_df = policies_df[["url", "policy_text_processed"]] embeddings_df = embeddings_df.set_index("url") embeddings_df["policy_text_processed"] = embeddings_df["policy_text_processed"].apply(lambda text: text.vector) embeddings_df = embeddings_df.policy_text_processed.apply(pd.Series) # non-interactive plot mapper = umap.UMAP().fit(embeddings_df.to_numpy()) umap.plot.points(mapper) # interactive plot output_notebook() hover_df = embeddings_df.reset_index() hover_df["index"] = hover_df.index p = umap.plot.interactive(mapper, labels=hover_df["index"], hover_data=hover_df[["index","url"]], point_size=2) umap.plot.show(p) ``` Explore the plots a bit. Can you observe any patterns in the results? Did you expect more or less variation? What do you make of the different clusters? # Exercises * Going back to the keyword-in-context exercise, consider several additional keywords that may give you insight into how different companies are thinking about privacy. How often, for instance, do you see the word "rights" used? How often in conjunction with the word privacy? Do you find evidence of considerations for fairness? * We've seen that the reading level for most privacy policies is quite high, but it's often a little difficult to interpret what, for example, a document written at a grade 14 reading level looks like. To gain some intuition, compute readability scores for some of your own writing (e.g., a prior course paper) and/or for some page on Wikipedia (you can use python, or do a quick Google search for an online readability calculator). How does the writing level of those compare to the privacy policies? * There is a general presumption that many companies use fairly standardized (or boilerplate) privacy policies that are aimed primarily at avoiding legal liability, and that do not describe their particular data practices in detail. Do we see support for these views in the data? Do the privacy policies seem more or less variable than you expected? What are the implications for customers and other stakeholders? * Spend some time exploring the data using any of the techniques above, or your own favorite analytical approach or tools. What additional insights can we learn about privacy policies?
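As a starting point for the boilerplate question above, one rough check is how close the policy vectors from the embeddings section sit to one another. The sketch below uses the `embeddings_df` built above; the 0.95 cutoff is an arbitrary illustration, not a standard threshold.

```
# Rough boilerplate check: pairwise cosine similarity of the document vectors
# in embeddings_df (built in the embeddings section above).
import numpy as np

vectors = embeddings_df.to_numpy()
unit = vectors / np.clip(np.linalg.norm(vectors, axis=1, keepdims=True), 1e-9, None)
similarity = unit @ unit.T

# Drop the diagonal (each policy compared with itself) before summarizing.
off_diag = similarity[~np.eye(similarity.shape[0], dtype=bool)]
print("mean pairwise cosine similarity:", round(float(off_diag.mean()), 3))
print("share of pairs above 0.95:", round(float((off_diag > 0.95).mean()), 3))
```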
<a href="https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/tutorials/0_processor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ##### Copyright 2020 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); ``` # Copyright 2020 Google LLC. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== ``` # DDSP Processor Demo This notebook provides an introduction to the signal `Processor()` object. The main object type in the DDSP library, it is the base class used for Synthesizers and Effects, which share the methods: * `get_controls()`: inputs -> controls. * `get_signal()`: controls -> signal. * `__call__()`: inputs -> signal. (i.e. `get_signal(**get_controls())`) Where: * `inputs` is a variable number of tensor arguments (depending on processor). Often the outputs of a neural network. * `controls` is a dictionary of tensors scaled and constrained specifically for the processor * `signal` is an output tensor (usually audio or control signal for another processor) Let's see why this is a helpful approach by looking at the specific example of the `Additive()` synthesizer processor. ``` #@title Install and import dependencies %tensorflow_version 2.x !pip install -qU ddsp # Ignore a bunch of deprecation warnings import warnings warnings.filterwarnings("ignore") import ddsp import ddsp.training from ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE import matplotlib.pyplot as plt import numpy as np import tensorflow as tf sample_rate = DEFAULT_SAMPLE_RATE # 16000 ``` # Example: additive synthesizer The additive synthesizer models a sound as a linear combination of harmonic sinusoids. Amplitude envelopes are generated with 50% overlapping hann windows. The final audio is cropped to n_samples. ## `__init__()` All member variables are initialized in the constructor, which makes it easy to change them as hyperparameters using the [gin](https://github.com/google/gin-config) dependency injection library. All processors also have a `name` that is used by `ProcessorGroup()`. ``` n_frames = 1000 hop_size = 64 n_samples = n_frames * hop_size # Create a synthesizer object. additive_synth = ddsp.synths.Additive(n_samples=n_samples, sample_rate=sample_rate, name='additive_synth') ``` ## `get_controls()` The outputs of a neural network are often not properly scaled and constrained. The `get_controls` method gives a dictionary of valid control parameters based on neural network outputs. **3 inputs (amps, hd, f0)** * `amplitude`: Amplitude envelope of the synthesizer output. * `harmonic_distribution`: Normalized amplitudes of each harmonic. * `fundamental_frequency`: Frequency in Hz of base oscillator ``` # Generate some arbitrary inputs. # Amplitude [batch, n_frames, 1]. # Make amplitude linearly decay over time. 
amps = np.linspace(1.0, -3.0, n_frames) amps = amps[np.newaxis, :, np.newaxis] # Harmonic Distribution [batch, n_frames, n_harmonics]. # Make harmonics decrease linearly with frequency. n_harmonics = 30 harmonic_distribution = (np.linspace(-2.0, 2.0, n_frames)[:, np.newaxis] + np.linspace(3.0, -3.0, n_harmonics)[np.newaxis, :]) harmonic_distribution = harmonic_distribution[np.newaxis, :, :] # Fundamental frequency in Hz [batch, n_frames, 1]. f0_hz = 440.0 * np.ones([1, n_frames, 1], dtype=np.float32) # Plot it! time = np.linspace(0, n_samples / sample_rate, n_frames) plt.figure(figsize=(18, 4)) plt.subplot(131) plt.plot(time, amps[0, :, 0]) plt.xticks([0, 1, 2, 3, 4]) plt.title('Amplitude') plt.subplot(132) plt.plot(time, harmonic_distribution[0, :, :]) plt.xticks([0, 1, 2, 3, 4]) plt.title('Harmonic Distribution') plt.subplot(133) plt.plot(time, f0_hz[0, :, 0]) plt.xticks([0, 1, 2, 3, 4]) _ = plt.title('Fundamental Frequency') ``` Consider the plots above as outputs of a neural network. These outputs violate the synthesizer's expectations: * Amplitude is not >= 0 (avoids phase shifts) * Harmonic distribution is not normalized (factorizes timbre and amplitude) * Fundamental frequency * n_harmonics > nyquist frequency (440 * 20 > 8000), which will lead to [aliasing](https://en.wikipedia.org/wiki/Aliasing). ``` controls = additive_synth.get_controls(amps, harmonic_distribution, f0_hz) print(controls.keys()) # Now let's see what they look like... time = np.linspace(0, n_samples / sample_rate, n_frames) plt.figure(figsize=(18, 4)) plt.subplot(131) plt.plot(time, controls['amplitudes'][0, :, 0]) plt.xticks([0, 1, 2, 3, 4]) plt.title('Amplitude') plt.subplot(132) plt.plot(time, controls['harmonic_distribution'][0, :, :]) plt.xticks([0, 1, 2, 3, 4]) plt.title('Harmonic Distribution') plt.subplot(133) plt.plot(time, controls['f0_hz'][0, :, 0]) plt.xticks([0, 1, 2, 3, 4]) _ = plt.title('Fundamental Frequency') ``` Notice that * Amplitudes are now all positive * The harmonic distribution sums to 1.0 * All harmonics that are above the Nyquist frequency now have an amplitude of 0. The amplitudes and harmonic distribution are scaled by an "exponentiated sigmoid" function (`ddsp.core.exp_sigmoid`). There is nothing particularly special about this function (other functions can be specified as `scale_fn=` during construction), but it has several nice properties: * Output scales logarithmically with input (as does human perception of loudness). * Centered at 0, with max and min in reasonable range for normalized neural network outputs. * Max value of 2.0 to prevent signal getting too loud. * Threshold value of 1e-7 for numerical stability during training. ``` x = tf.linspace(-10.0, 10.0, 1000) y = ddsp.core.exp_sigmoid(x) plt.figure(figsize=(18, 4)) plt.subplot(121) plt.plot(x, y) plt.subplot(122) _ = plt.semilogy(x, y) ``` ## `get_signal()` Synthesizes audio from controls. ``` audio = additive_synth.get_signal(**controls) play(audio) specplot(audio) ``` ## `__call__()` Synthesizes audio directly from the raw inputs. `get_controls()` is called internally to turn them into valid control parameters. ``` audio = additive_synth(amps, harmonic_distribution, f0_hz) play(audio) specplot(audio) ``` # Example: Just for fun... Let's run another example where we tweak some of the controls... ``` ## Some weird control envelopes... # Amplitude [batch, n_frames, 1]. 
amps = np.ones([n_frames]) * -5.0 amps[:50] += np.linspace(0, 7.0, 50) amps[50:200] += 7.0 amps[200:900] += (7.0 - np.linspace(0.0, 7.0, 700)) amps *= np.abs(np.cos(np.linspace(0, 2*np.pi * 10.0, n_frames))) amps = amps[np.newaxis, :, np.newaxis] # Harmonic Distribution [batch, n_frames, n_harmonics]. n_harmonics = 20 harmonic_distribution = np.ones([n_frames, 1]) * np.linspace(1.0, -1.0, n_harmonics)[np.newaxis, :] for i in range(n_harmonics): harmonic_distribution[:, i] = 1.0 - np.linspace(i * 0.09, 2.0, 1000) harmonic_distribution[:, i] *= 5.0 * np.abs(np.cos(np.linspace(0, 2*np.pi * 0.1 * i, n_frames))) if i % 2 != 0: harmonic_distribution[:, i] = -3 harmonic_distribution = harmonic_distribution[np.newaxis, :, :] # Fundamental frequency in Hz [batch, n_frames, 1]. f0_hz = np.ones([n_frames]) * 200.0 f0_hz[:100] *= np.linspace(2, 1, 100)**2 f0_hz[200:1000] += 20 * np.sin(np.linspace(0, 8.0, 800) * 2 * np.pi * np.linspace(0, 1.0, 800)) * np.linspace(0, 1.0, 800) f0_hz = f0_hz[np.newaxis, :, np.newaxis] # Get valid controls controls = additive_synth.get_controls(amps, harmonic_distribution, f0_hz) # Plot! time = np.linspace(0, n_samples / sample_rate, n_frames) plt.figure(figsize=(18, 4)) plt.subplot(131) plt.plot(time, controls['amplitudes'][0, :, 0]) plt.xticks([0, 1, 2, 3, 4]) plt.title('Amplitude') plt.subplot(132) plt.plot(time, controls['harmonic_distribution'][0, :, :]) plt.xticks([0, 1, 2, 3, 4]) plt.title('Harmonic Distribution') plt.subplot(133) plt.plot(time, controls['f0_hz'][0, :, 0]) plt.xticks([0, 1, 2, 3, 4]) _ = plt.title('Fundamental Frequency') audio = additive_synth.get_signal(**controls) play(audio) specplot(audio) ```
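For reference, the exponentiated-sigmoid scaling described earlier can be sketched directly from its listed properties (max value 2.0, threshold 1e-7, roughly logarithmic scaling). The functional form below is an assumption made for illustration; see `ddsp.core.exp_sigmoid` for the actual implementation.

```
# Assumed sketch of an exponentiated sigmoid with the properties listed above.
# Not the library implementation -- see ddsp.core.exp_sigmoid for that.
import numpy as np

def exp_sigmoid_sketch(x, exponent=10.0, max_value=2.0, threshold=1e-7):
    sig = 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=np.float64)))
    return max_value * sig ** np.log(exponent) + threshold

print(exp_sigmoid_sketch(np.array([-10.0, 0.0, 10.0])))  # ~1e-7, ~0.4, ~2.0
```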
``` import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt sns.set_context('talk') sns.palplot(sns.color_palette("gray", 100)) new_gray=sns.color_palette("gray",4) new_gray=[(0, 0, 0), (0.85, 0.85, 0.85)] ``` ## Brazil ``` plot_bra2 = pd.read_csv('sensi_withhold_bra.csv') eff_new = pd.DataFrame( np.array([np.repeat(list(plot_bra2['intervention']),320), plot_bra2[plot_bra2.columns[2:]].values.reshape(1,-1)[0]]).T, columns=['intervention','x']) eff_new['x'] = eff_new['x'].astype(float) eff_new['color'] =([1]*319+[0.1])*10 fig1,ax = plt.subplots(figsize=(10,6)) ax.spines['left'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) plt.axvline(x=0,ls="-",linewidth=1,c="black") sns.scatterplot(data = eff_new, x='x', y='intervention', hue='color', s=200, palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax) plt.xlabel('Brazil',c="black",fontsize=24,fontname='Helvetica') plt.ylabel('') #plt.xlim(-1.5,1) fig1.savefig("sensi_withhold_bra",bbox_inches='tight',dpi=300) ``` ## Japan ``` plot_jp2 = pd.read_csv('sensi_withhold_jp.csv') plot_jp2 eff_new = pd.DataFrame( np.array([np.repeat(list(plot_jp2['intervention']),46), plot_jp2[plot_jp2.columns[2:]].values.reshape(1,-1)[0]]).T, columns=['intervention','x']) eff_new['x'] = eff_new['x'].astype(float) eff_new['color'] =([1]*45+[0.1])*10 fig1,ax = plt.subplots(figsize=(10,6)) ax.spines['left'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) plt.axvline(x=0,ls="-",linewidth=1,c="black") sns.scatterplot(data = eff_new, x='x', y='intervention', hue='color', s=200, palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax) plt.xlabel('Japan',c="black",fontsize=24,fontname='Helvetica') plt.ylabel('') fig1.savefig("sensi_withhold_jp",bbox_inches='tight',dpi=300) ``` ## UK ``` plot_uk2 = pd.read_csv('sensi_withhold_uk.csv') eff_new4 = pd.DataFrame( np.array([np.repeat(list(plot_uk2['intervention']),235), plot_uk2[plot_uk2.columns[2:]].values.reshape(1,-1)[0]]).T, columns=['intervention','x']) eff_new4['x'] = eff_new4['x'].astype(float) eff_new4['color'] =([1]*234+[0.1])*5 fig4,ax = plt.subplots(figsize=(10,4)) ax.spines['left'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) plt.axvline(x=0,ls="-",linewidth=1,c="black") sns.scatterplot(data = eff_new4, x='x', y='intervention', hue='color', s=200, palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax) plt.xlabel('United Kingdom',c="black",fontsize=24,fontname='Helvetica') plt.ylabel('') #plt.ylim(-0.6,2.5) fig4.savefig("sensi_withhold_uk",bbox_inches='tight',dpi=300) ``` ## US ``` plot_us2 = pd.read_csv('sensi_withhold_us.csv') eff_new = pd.DataFrame( np.array([np.repeat(list(plot_us2['intervention']),310), plot_us2[plot_us2.columns[2:]].values.reshape(1,-1)[0]]).T, columns=['intervention','x']) eff_new['x'] = eff_new['x'].astype(float) eff_new['color'] =([1]*309+[0.1])*9 fig4,ax = plt.subplots(figsize=(10,5.5)) ax.spines['left'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) plt.axvline(x=0,ls="-",linewidth=1,c="black") sns.scatterplot(data = eff_new, x='x', y='intervention', hue='color', s=200, palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax) plt.xlabel('United States',c="black",fontsize=24,fontname='Helvetica') plt.ylabel('') fig4.savefig("sensi_withhold_us",bbox_inches='tight',dpi=300) ```
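The four country blocks above differ only in the CSV file, the number of bootstrap columns, the figure size, and the axis label. Below is a sketch of a single helper that covers all four, assuming the same CSV layout (an `intervention` column plus the sample columns from index 2 onward).

```
# One helper for the four country blocks above. Assumes the same CSV layout:
# an 'intervention' column plus sample columns starting at column index 2.
def plot_withhold(csv_path, xlabel, out_name, figsize=(10, 6)):
    plot_df = pd.read_csv(csv_path)
    n_samples = plot_df.shape[1] - 2
    eff = pd.DataFrame(
        np.array([np.repeat(list(plot_df['intervention']), n_samples),
                  plot_df[plot_df.columns[2:]].values.reshape(1, -1)[0]]).T,
        columns=['intervention', 'x'])
    eff['x'] = eff['x'].astype(float)
    eff['color'] = ([1] * (n_samples - 1) + [0.1]) * plot_df.shape[0]
    fig, ax = plt.subplots(figsize=figsize)
    for side in ('left', 'right', 'top'):
        ax.spines[side].set_visible(False)
    plt.axvline(x=0, ls="-", linewidth=1, c="black")
    sns.scatterplot(data=eff, x='x', y='intervention', hue='color', s=200,
                    palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax)
    plt.xlabel(xlabel, c="black", fontsize=24, fontname='Helvetica')
    plt.ylabel('')
    fig.savefig(out_name, bbox_inches='tight', dpi=300)

# plot_withhold('sensi_withhold_bra.csv', 'Brazil', 'sensi_withhold_bra')
# plot_withhold('sensi_withhold_uk.csv', 'United Kingdom', 'sensi_withhold_uk', figsize=(10, 4))
```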
### Tutorial: Parameterized Hypercomplex Multiplication (PHM) Layer #### Author: Eleonora Grassucci Original paper: Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters. Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Cheung Hui, Jie Fu. [ArXiv link](https://arxiv.org/pdf/2102.08597.pdf). ``` # Imports import numpy as np import math import time import torch import torch.nn as nn from torch.autograd import Variable import torch.nn.functional as F import torch.utils.data as Data from torch.nn import init # Check Pytorch version: torch.kron is available from 1.8.0 torch.__version__ # Define the PHM class class PHM(nn.Module): ''' Simple PHM Module, the only parameter is A, since S is passed from the trainset. ''' def __init__(self, n, kernel_size, **kwargs): super().__init__(**kwargs) self.n = n A = torch.empty((n-1, n, n)) self.A = nn.Parameter(A) self.kernel_size = kernel_size def forward(self, X, S): H = torch.zeros((self.n*self.kernel_size, self.n*self.kernel_size)) # Sum of Kronecker products for i in range(n-1): H = H + torch.kron(self.A[i], S[i]) return torch.matmul(X, H.T) ``` ### Learn the Hamilton product between two pure quaternions A pure quaternion is a quaternion with scalar part equal to 0. ``` # Setup the training set x = torch.FloatTensor([0, 1, 2, 3]).view(4, 1) # Scalar part equal to 0 W = torch.FloatTensor([[0,-1,-1,-1], [1,0,-1,1], [1,1,0,-1], [1,-1,1,0]]) # Scalar parts equal to 0 y = torch.matmul(W, x) num_examples = 1000 batch_size = 1 X = torch.zeros((num_examples, 16)) S = torch.zeros((num_examples, 16)) Y = torch.zeros((num_examples, 16)) for i in range(num_examples): x = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float) s = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float) s1, s2, s3, s4 = torch.FloatTensor([0]*4), s[0:4], s[4:8], s[8:12] s1 = s1.view(2,2) s2 = s2.view(2,2) s3 = s3.view(2,2) s4 = s4.view(2,2) s_1 = torch.cat([s1,-s2,-s3,-s4]) s_2 = torch.cat([s2,s1,-s4,s3]) s_3 = torch.cat([s3,s4,s1,-s2]) s_4 = torch.cat([s4,-s3,s2,s1]) W = torch.cat([s_1,s_2, s_3, s_4], dim=1) x = torch.cat([torch.FloatTensor([0]*4), x]) s = torch.cat([torch.FloatTensor([0]*4), s]) x_mult = x.view(2, 8) y = torch.matmul(x_mult, W.T) y = y.view(16, ) X[i, :] = x S[i, :] = s Y[i, :] = y X = torch.FloatTensor(X).view(num_examples, 16, 1) S = torch.FloatTensor(S).view(num_examples, 16, 1) Y = torch.FloatTensor(Y).view(num_examples, 16, 1) data = torch.cat([X, S, Y], dim=2) train_iter = torch.utils.data.DataLoader(data, batch_size=batch_size) ### Setup the test set num_examples = 1 batch_size = 1 X = torch.zeros((num_examples, 16)) S = torch.zeros((num_examples, 16)) Y = torch.zeros((num_examples, 16)) for i in range(num_examples): x = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float) s = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float) s1, s2, s3, s4 = torch.FloatTensor([0]*4), s[0:4], s[4:8], s[8:12] s1 = s1.view(2,2) s2 = s2.view(2,2) s3 = s3.view(2,2) s4 = s4.view(2,2) s_1 = torch.cat([s1,-s2,-s3,-s4]) s_2 = torch.cat([s2,s1,-s4,s3]) s_3 = torch.cat([s3,s4,s1,-s2]) s_4 = torch.cat([s4,-s3,s2,s1]) W = torch.cat([s_1,s_2, s_3, s_4], dim=1) x = torch.cat([torch.FloatTensor([0]*4), x]) s = torch.cat([torch.FloatTensor([0]*4), s]) x_mult = x.view(2, 8) y = torch.matmul(x_mult, W.T) y = y.view(16, ) X[i, :] = x S[i, :] = s Y[i, :] = y X = torch.FloatTensor(X).view(num_examples, 16, 1) S = torch.FloatTensor(S).view(num_examples, 16, 1) Y = 
torch.FloatTensor(Y).view(num_examples, 16, 1) data = torch.cat([X, S, Y], dim=2) test_iter = torch.utils.data.DataLoader(data, batch_size=batch_size) # Define training function def train(net, lr, phm=True): # Squared loss loss = nn.MSELoss() optimizer = torch.optim.Adam(net.parameters(), lr=lr) for epoch in range(5): for data in train_iter: optimizer.zero_grad() X = data[:, :, 0] S = data[:, 4:, 1] Y = data[:, :, 2] if phm: out = net(X.view(2, 8), S.view(3, 2, 2)) else: out = net(X) l = loss(out, Y.view(2, 8)) l.backward() optimizer.step() print(f'epoch {epoch + 1}, loss {float(l.sum() / batch_size):.6f}') # Initialize model parameters def weights_init_uniform(m): m.A.data.uniform_(-0.07, 0.07) # Create layer instance n = 4 phm_layer = PHM(n, kernel_size=2) phm_layer.apply(weights_init_uniform) # Train the model train(phm_layer, 0.005) # Check parameters of the layer require grad for name, param in phm_layer.named_parameters(): if param.requires_grad: print(name, param.data) # Take a look at the convolution performed on the test set for data in test_iter: X = data[:, :, 0] S = data[:, 4:, 1] Y = data[:, :, 2] y_phm = phm_layer(X.view(2, 8), S.view(3, 2, 2)) print('Hamilton product result from test set:\n', Y.view(2, 8)) print('Performing Hamilton product learned by PHM:\n', y_phm) # Check the PHC layer have learnt the proper algebra for the marix A W = torch.FloatTensor([[0,-1,-1,-1], [1,0,-1,1], [1,1,0,-1], [1,-1,1,0]]) print('Ground-truth Hamilton product matrix:\n', W) print() print('Learned A in PHM:\n', phm_layer.A) print() print('Learned A sum in PHM:\n', sum(phm_layer.A).T) ```
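The parameter saving behind the PHM layer comes from building the weight matrix as a sum of Kronecker products, H = Σᵢ Aᵢ ⊗ Sᵢ. A tiny standalone example makes the construction and the resulting parameter count explicit; the shapes below are chosen only for illustration. The general construction uses n terms, whereas the class above sums n − 1 of them, presumably because only the three imaginary blocks of the pure quaternions are passed in as S.

```
# Standalone illustration of the sum-of-Kronecker-products construction used by
# the PHM layer above: H = sum_i kron(A_i, S_i). Shapes chosen for illustration.
import torch

n, k = 4, 64                      # algebra dimension and block size
A = torch.randn(n, n, n)          # n small n-by-n matrices (the learned part in PHM)
S = torch.randn(n, k, k)          # n k-by-k blocks

H = torch.zeros(n * k, n * k)
for i in range(n):
    H = H + torch.kron(A[i], S[i])

print(H.shape)                    # (256, 256)
print(n * n * n + n * k * k, "numbers instead of", (n * k) ** 2)  # roughly a 1/n reduction
```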
``` from sklearn import linear_model import numpy as np from collections import namedtuple tokenized_row = namedtuple('tokenized_row', 'sent_count sentences word_count words') from sklearn.feature_extraction.text import CountVectorizer import pickle import csv def train_sgd(train_targets, train_regressors): sgd = linear_model.SGDClassifier() sgd.fit(train_regressors, train_targets) return sgd def error_rate(train_targets, train_regressors, test_targets, test_regressors): sgd = train_sgd(train_targets, train_regressors) test_predictions = sgd.predict(test_regressors) rounded_predictions = np.rint(test_predictions) false_pos = 0 false_neg = 0 correct = 0 for i in range(len(rounded_predictions)): if rounded_predictions[i] == 1 and test_targets[i] == 0: false_pos += 1 if rounded_predictions[i] == 0 and test_targets[i] == 1: false_neg += 1 if rounded_predictions[i] == test_targets[i]: correct += 1 errors = false_pos + false_neg corrects = len(rounded_predictions) - errors assert(correct == corrects) error_rate = float(errors) / len(test_predictions) return (error_rate, false_pos, false_neg) filenames = ['combined_train_test.p', 'r_train_so_test.p', 'so_train_r_test.p', 'so_alone.p', 'reddit_alone.p'] def baseline(filename): with open(filename, 'rb') as pfile: train, test = pickle.load(pfile) train_targets = train['answer_good'].values.reshape(-1, 1) train_regressors = train['AnswerCount'].values.reshape(-1, 1) test_targets = test['answer_good'].values.reshape(-1, 1) test_regressors = test['AnswerCount'].values.reshape(-1, 1) return error_rate(train_targets, train_regressors, test_targets, test_regressors) with open('baseline_results.csv', 'w+', newline="") as csvfile: fieldnames = ['Test Name', 'Success Rate', 'false +', 'false -'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() for name in filenames: errors, false_pos, false_neg = baseline(name) success_rate = 1 - errors writer.writerow({'Test Name': name, 'Success Rate': success_rate, 'false +': false_pos, 'false -': false_neg}) def length_only(filename): with open(filename, 'rb') as pfile: train, test = pickle.load(pfile) # Get length from the dict! Word count and sentence count directory_name = filename.split('.p')[0] with open(directory_name + "/tokenized_dict.p", 'rb') as pfile: train_token_dict, test_token_dict = pickle.load(pfile) train_length = len(train.index.values) train_regressors = np.empty([train_length, 4]) test_length = len(test.index.values) test_regressors = np.empty([test_length, 4]) for i in range(train_length): index = train.index.values[i] row = train_token_dict[index] train_regressors[i] = [row[0].word_count, row[0].sent_count, row[1].word_count, row[1].sent_count] for i in range(test_length): index = test.index.values[i] row = test_token_dict[index] test_regressors[i] = [row[0].word_count, row[0].sent_count, row[1].word_count, row[1].sent_count] test_targets = test['answer_good'].values.reshape(-1, 1) train_targets = train['answer_good'].values.reshape(-1, 1) return error_rate(train_targets, train_regressors, test_targets, test_regressors) with open('length_only_results.csv', 'w+', newline="") as csvfile: fieldnames = ['Test Name', 'Success Rate', 'false +', 'false -'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() for name in filenames: errors, false_pos, false_neg = length_only(name) success_rate = 1 - errors writer.writerow({'Test Name': name, 'Success Rate': success_rate, 'false +': false_pos, 'false -': false_neg}) ```
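The hand-rolled false-positive/false-negative counting in `error_rate` can be cross-checked with scikit-learn's metrics. The sketch below could be dropped into `error_rate` next to the manual loop, using the same `sgd`, `test_regressors`, and `test_targets`.

```
# Cross-check of the manual error counting above with sklearn's metrics.
# Intended to sit inside error_rate(), next to the existing loop.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

predictions = sgd.predict(test_regressors)
y_true = np.ravel(test_targets)
tn, fp, fn, tp = confusion_matrix(y_true, predictions, labels=[0, 1]).ravel()
print("error rate:", 1.0 - accuracy_score(y_true, predictions))
print("false positives:", fp, "false negatives:", fn)
```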
# Explore feature-to-feature relationship in Boston ``` import pandas as pd import seaborn as sns from sklearn import datasets import discover import matplotlib.pyplot as plt # watermark is optional - it shows the versions of installed libraries # so it is useful to confirm your library versions when you submit bug reports to projects # install watermark using # %install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py %load_ext watermark # show a watermark for this environment %watermark -d -m -v -p numpy,matplotlib,sklearn -g example_dataset = datasets.load_boston() df_boston = pd.DataFrame(example_dataset.data, columns=example_dataset.feature_names) df_boston['target'] = example_dataset.target df = df_boston cols = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'target'] classifier_overrides = set() df = df_boston df.head() ``` # Discover non-linear relationships _Github note_ colours for `style` don't show up in Github, you'll have to grab a local copy of the Notebook. * NOX predicts RAD, INDUS, TAX and DIS * RAD predicts DIS poorly, NOX better, TAX better * CRIM predicts RAD but RAD poorly predicts CRIM ``` %time df_results = discover.discover(df[cols].sample(frac=1), classifier_overrides) fig, ax = plt.subplots(figsize=(12, 8)) sns.heatmap(df_results.pivot(index='target', columns='feature', values='score').fillna(1), annot=True, center=0, ax=ax, vmin=-0.1, vmax=1, cmap="viridis"); # we can also output a DataFrame using style (note - doesn't render on github with colours, look at a local Notebook!) df_results.pivot(index='target', columns='feature', values='score').fillna(1) \ .style.background_gradient(cmap="viridis", low=0.7, axis=1) \ .set_precision(2) ``` # We can drill in to some of the discovered relationships ``` print(example_dataset.DESCR) # NOX (pollution) predicts AGE of properties - lower pollution means more houses built after 1940 than before df.plot(kind="scatter", x="NOX", y="AGE", alpha=0.1); # NOX (pollution) predicts DIStance, lower pollution means larger distance to places of work df.plot(kind="scatter", x="NOX", y="DIS", alpha=0.1); # More lower-status people means lower house prices ax = df.plot(kind="scatter", x="LSTAT", y="target", alpha=0.1); # closer to employment centres means higher proportion of owner-occupied residences built prior to 1940 (i.e. more older houses) ax = df.plot(kind="scatter", x="DIS", y="AGE", alpha=0.1); ``` # Try correlations Correlations can give us a direction and information about linear and rank-based relationships which we won't get from RF. ## Pearson (linear) ``` df_results = discover.discover(df[cols], classifier_overrides, method='pearson') df_results.pivot(index='target', columns='feature', values='score').fillna(1) \ .style.background_gradient(cmap="viridis", axis=1) \ .set_precision(2) ``` ## Spearman (rank-based) ``` df_results = discover.discover(df[cols], classifier_overrides, method='spearman') df_results.pivot(index='target', columns='feature', values='score').fillna(1) \ .style.background_gradient(cmap="viridis", axis=1) \ .set_precision(2) ax = df.plot(kind="scatter", x="CRIM", y="LSTAT", alpha=0.1); ax = df.plot(kind="scatter", x="CRIM", y="NOX", alpha=0.1); ``` ## Mutual Information Mutual information represents the amount of information that each column predicts about the others. 
``` df_results = discover.discover(df[cols], classifier_overrides, method='mutual_information') df_results.pivot(index='target', columns='feature', values='score').fillna(1) \ .style.background_gradient(cmap="viridis", axis=1) \ .set_precision(2) ax = df.plot(kind="scatter", x="TAX", y="INDUS", alpha=0.1) ax = df.plot(kind="scatter", x="TAX", y="NOX", alpha=0.1) ```
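The `discover` scores above can be sanity-checked feature by feature with scikit-learn's own mutual information estimator. Here is a sketch scoring every other column against `NOX` on the same `df` and `cols`:

```
# Sanity check of one target column with sklearn's mutual information estimator,
# on the same Boston df and cols used above.
from sklearn.feature_selection import mutual_info_regression

target_col = "NOX"
features = df[cols].drop(columns=[target_col])
mi = mutual_info_regression(features, df[target_col], random_state=0)
pd.Series(mi, index=features.columns).sort_values(ascending=False)
```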